NPS Survey Questions That Actually Drive Insights
The standard NPS question only gets you a number. The follow-up question, timing, segmentation, and feedback loop are what turn that number into retention insights. Data-backed guidance from Bain & Company, CustomerGauge, Pendo, and Groove, with practical templates for bootstrapped SaaS founders.
Two-thirds of Fortune 1000 companies now use NPS. Most of them waste it. They send the survey, log the score, and let the written feedback collect dust in a spreadsheet. Sopact’s research confirms the pattern: up to 60% of NPS surveys return no explanatory text when open-ended comments are optional. Gartner puts it more bluntly: 95% of companies collect customer feedback, only 10% use it to improve their product, and only 5% tell customers what they did about it.
The NPS question itself takes thirty seconds to deploy. Getting useful, actionable insight from it takes deliberate choices about what you ask after the score, when you ask it, who you ask, and what you do with the answers. This post covers each of those decisions with data from Bain & Company, CustomerGauge, Pendo, Groove, and Gainsight, plus practical templates you can deploy today.
Key Takeaways
- The standard NPS question works because “likelihood to recommend” outperformed every alternative survey question as a predictor of repurchase and word-of-mouth in Reichheld’s original research across 400 companies and 28 industries. Don’t modify the core question. Customise what comes after it.
- Follow-up questions are where the value lives. A two-question format (NPS score plus one open-ended follow-up) achieves 83.34% completion versus 41.94% for surveys with 15+ questions. Customise the follow-up by segment: promoters, passives, and detractors need different questions.
- In-app surveys achieve 21.71% response rates versus roughly 12-15% for email. Companies switching from email to in-app collection see 2x to 10x increases in response volume. Use both channels, but lead with in-app for active users.
- Closing the feedback loop within 48 hours produces a 6-point NPS lift and generates 3x more promoters at the next survey. Yet only 26% of B2B companies close the loop with all their customers. This is the single highest-ROI NPS action.
- Segmenting NPS by user role reveals hidden problems. Gainsight found executive buyers score a median NPS of 46 versus end users at 36. If you only survey decision-makers, you’re measuring satisfaction with the purchase decision, not the product experience.
- When detractors cite billing, payment, or account access issues, that’s not a product problem. It’s a dunning problem. With 20-40% of SaaS churn being involuntary, routing billing-related NPS feedback to your payment recovery process fixes detractor scores faster than any roadmap change.
The standard NPS question
Fred Reichheld introduced the Net Promoter Score in his December 2003 Harvard Business Review article, based on two years of research at Bain & Company linking survey responses to actual customer behaviour. The exact standard wording is: “How likely is it that you would recommend [company/product/service] to a friend or colleague?” Respondents answer on a 0-10 scale, where 0 means “not at all likely” and 10 means “extremely likely.”
The “recommend” framing was chosen deliberately. Reichheld’s team tested alternatives including satisfaction questions, loyalty questions, and repurchase intent questions. Willingness to recommend outperformed every one of them because customers put their own reputation on the line when making a referral. That personal stake makes the response more predictive than questions about abstract satisfaction.
The 0-10 scale is modelled on the Juster Purchase Probability Scale (1966), which demonstrated that an 11-point scale explains roughly twice as much variance in purchase behaviour as traditional 5-point scales. Zero was chosen over one because the number one can represent first place, introducing ambiguity. On the 11-point scale, two scores (9 and 10) denote promoters, giving respondents room to express a range from “great” to “perfect.” On a 5-point scale, only one score represents a promoter, inflating the passive category and compressing the data.
The three scoring categories were determined through clustering analysis, not theory. Promoters (9-10) correlated with active referral behaviour and high repurchase rates. Passives (7-8) were satisfied but switchable, referring at less than half the rate of promoters. Detractors (0-6) accounted for roughly 80% of negative word-of-mouth. The Nielsen Norman Group notes that the apparently harsh cutoffs account for response generosity bias: raters naturally give fairly high scores, so a 6 signals real dissatisfaction even though it’s above the mathematical midpoint.
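The arithmetic behind the score follows directly from those three categories. As a minimal sketch (function name and rounding are my own choices, not part of the NPS specification):

```python
# Minimal NPS calculation from raw 0-10 responses, using the standard
# cutoffs described above: promoters 9-10, passives 7-8, detractors 0-6.
def nps(scores: list[int]) -> float:
    """Return the Net Promoter Score: % promoters minus % detractors (-100..100)."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores), 1)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses -> 30.0
print(nps([10] * 5 + [7] * 3 + [3] * 2))
```

Note that passives don't appear in the numerator but do count in the denominator, which is why a large passive segment drags the score toward zero.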
Acceptable wording variations
Delighted (where Reichheld serves as an advisor) confirms that context-appropriate substitutions are fine: “friend or family member” for B2C, “friend or co-worker” for B2B, or “someone like you” for niche products. However, no published A/B testing data exists comparing the score impact of these variations. The critical best practice is consistency: compare NPS scores only when the question, scale, context, and channel remain constant. Qualtrics XM Institute’s 2021 study of 17,509 consumers across 18 countries found stark cultural variance: India and Mexico produced NPS scores of 60+ for liked companies while Japan produced -47. The question works, but context shapes the answer.
Relational NPS versus transactional NPS
Relational NPS (rNPS) measures the customer’s overall, long-term sentiment toward your brand. It’s sent at regular intervals, typically quarterly or semi-annually, independent of any specific interaction. The question is simply the standard NPS prompt.
Transactional NPS (tNPS) measures satisfaction after a specific touchpoint. It’s triggered immediately after a support call, purchase, onboarding completion, or feature interaction, with contextual framing: “Based on your recent [experience], how likely are you to recommend [Company]?”
Bain & Company explicitly recommends using both. A digital banking app case study illustrates why: tNPS of 85 for payments (users were happy with the core function) versus rNPS of 64 (revealing negative experiences elsewhere in the product). For SaaS companies, the typical deployment is quarterly relational NPS via email or in-app as a baseline health check, plus transactional NPS triggered at high-churn-risk touchpoints: post-onboarding, after support interactions, after major feature releases, and before renewal.
Follow-up questions that reveal real insight
The NPS score tells you what. The follow-up question tells you why. Without it, NPS is a number on a dashboard that nobody can act on. Andrew Chen has argued that the most actionable part of any NPS survey is the categorisation of open-ended verbatim comments from promoters and detractors, not the score itself.
The single most universally recommended follow-up across Userpilot, Delighted, SurveyMonkey, Qualtrics, Retently, and CustomerGauge is: “What is the primary reason for your score?” This open-ended prompt avoids leading respondents while capturing the specific driver behind their rating. Expert consensus converges on a two-question format: one NPS rating question plus one open-ended follow-up. Survicate’s analysis of 267,564 responses confirms the logic: surveys with 1-3 questions achieve 83.34% completion versus 65.15% for 4-8 questions and just 41.94% for 15+ questions.
Tailoring follow-ups by respondent segment
The most effective SaaS NPS programmes customise the follow-up based on the score received. This isn’t optional sophistication. It’s what turns a generic data collection exercise into targeted intelligence.
For promoters (9-10), ask what they love to identify competitive strengths and fuel advocacy. Effective questions include “What do you enjoy most about [Product]?”, “What’s the one thing we do well that you’d want us to keep doing?”, and “Would you be open to sharing your experience on G2/Capterra?” HubSpot reaches out to promoters with opportunities to join referral programmes or participate as case studies. Slack’s Bill Macaitis (former CMO, previously of Zendesk and Salesforce) focused not on signups or conversions but on the stage where customers start recommending the platform.
For passives (7-8), the key question is: “What would it take to move you just one point higher?” This framing, specifically recommended by Delighted, is concrete and bounded. It produces actionable feedback rather than vague complaints. Survicate notes that passives are often the largest NPS segment yet the most neglected. Other effective variants: “What’s the one thing that would make you love our product?” and “Is there something a competitor does that you wish we did?”
For detractors (0-6), the priority is understanding the root cause and signalling that you care. CustomerGauge recommends branching logic: show a three-option quick-tag list plus an optional comment box. Effective questions include “What’s the primary reason for your score?”, “If you could change one thing, what would it be?”, and “Would you be open to a quick call so we can better understand your experience?”
The wording detail that doubled response rates
Groove HQ documented an instructive finding in their survey optimisation. Their closed-ended exit survey (a dropdown with pre-filled answers like “too expensive” and “didn’t get value”) achieved a 1.3% completion rate with essentially useless data spread evenly across options. Switching to an open-ended email asking “why did you cancel?” produced a 10.2% response rate, roughly a 785% improvement. But the real insight was subtler: further optimising the wording from “why did you cancel?” to “what made you cancel?” pushed the response rate to approximately 19%. The “why” framing felt accusatory. “What made you” felt neutral and produced more candid answers.
This matters beyond exit surveys. The principle applies to NPS follow-ups too. “Why did you give this score?” puts the respondent on the defensive. “What’s the primary reason for your score?” keeps it neutral. Small wording changes produce measurably different response behaviour.
Template: a practical two-question NPS survey for SaaS
Screen 1: “How likely are you to recommend [Product] to a friend or colleague?” [0-10 scale]
Screen 2 (conditional):
If score is 9-10: “Glad to hear it. What do you enjoy most about [Product]?”
If score is 7-8: “Thanks for the feedback. What’s the one thing that would make [Product] even better for you?”
If score is 0-6: “We appreciate your honesty. What could we do differently to improve your experience?”
Screen 3 (optional, low-friction): “Would you be willing to elaborate in a brief follow-up conversation? [Yes/No]”
HubSpot’s Service Hub supports this exact segment-specific follow-up customisation natively, allowing different questions and thank-you messages for each NPS group. Most dedicated NPS tools (Retently, SatisMeter, Userpilot) offer the same branching logic.
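If you're wiring this branching up yourself rather than using a tool, the conditional logic is trivial. A sketch of the template above, with the product name parameterised (the wording mirrors the template; adapt it to your own voice):

```python
# Segment-specific follow-up selection for the two-question template above.
def follow_up_question(score: int, product: str = "YourProduct") -> str:
    if not 0 <= score <= 10:
        raise ValueError("NPS score must be between 0 and 10")
    if score >= 9:  # promoter
        return f"Glad to hear it. What do you enjoy most about {product}?"
    if score >= 7:  # passive
        return (f"Thanks for the feedback. What's the one thing that would "
                f"make {product} even better for you?")
    # detractor (0-6)
    return ("We appreciate your honesty. What could we do differently "
            "to improve your experience?")
```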
When to send NPS surveys
First survey timing
The consensus across Retently, Delighted, Userpilot, and Refiner is to send the first NPS survey 7-30 days after signup, anchored to the customer’s first value moment. For monthly billing SaaS, 30 days after conversion to paid is the standard trigger. For free trial models, 3-5 days after the trial ends captures sentiment while the experience is fresh. The principle: don’t ask for feedback before the customer has experienced enough of the product to form a meaningful opinion, but don’t wait so long that early impressions have faded.
Ongoing cadence
Quarterly is the gold standard for relational NPS in B2B SaaS. CustomerGauge’s research found that companies surveying quarterly see a 51% improvement in retention versus 44% for annual-only surveys. At minimum, survey every six months. If you’re just starting, begin with biannual surveys; once you have processes to close the loop, move to quarterly.
For transactional NPS, timing is event-driven: 1-2 days after a support ticket is resolved while the experience is fresh, 3-5 days after a major feature launch, and a few months before renewal to proactively identify at-risk accounts. Sean Mancillas, Head of Delighted’s Customer Concierge Team, confirms that sending a post-support survey 24 hours after an interaction versus 7 days produces much better results.
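The timing rules above reduce to a simple event-to-delay mapping. A sketch, where the event names and the exact day counts within the recommended ranges are illustrative assumptions:

```python
from datetime import date, timedelta

# Event-driven survey scheduling per the guidance above.
# Delays are picked from within the recommended ranges; adjust to taste.
SURVEY_DELAYS = {
    "signup": timedelta(days=30),           # first relational NPS, post first value
    "trial_ended": timedelta(days=4),       # 3-5 days after the trial ends
    "support_resolved": timedelta(days=1),  # 1-2 days after ticket resolution
    "feature_launch": timedelta(days=4),    # 3-5 days after a major release
    "renewal": timedelta(days=-90),         # roughly 3 months before renewal
}

def survey_send_date(event: str, event_date: date) -> date:
    """Return the date to trigger a survey for a given lifecycle event."""
    return event_date + SURVEY_DELAYS[event]
```

The negative delta for renewals reflects that the "event date" is the renewal itself and the survey must land well before it.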
Day and time optimisation
Best send days are Tuesday through Thursday, with 9-11 AM in the recipient’s local time zone producing the highest response rates according to SurveyMonkey and Gainsight research. Monday mornings also perform well for short B2B surveys. Avoid Fridays, weekends, and holidays. However, CustomerGauge’s extensive analysis found that weekday differences are often just 1-2 percentage points. Contextual timing (triggering based on customer interactions) matters more than day-of-week fine-tuning.
Avoiding survey fatigue
Survey fatigue is real and measurable. SurveySensum reports that 52% of respondents drop out of surveys taking more than 3 minutes. One case study showed monthly 30-question engagement surveys saw response rates collapse from 65% to below 25% within six months; switching to shorter quarterly pulse surveys with closed-loop follow-up rebounded rates to 58% in a single quarter.
The guardrails: survey each individual contact no more than once per quarter. Space invitations at least 60 days apart. Coordinate across departments to prevent marketing, product, and CS teams from surveying the same contacts simultaneously. Survey no more than 10% of your user base at any given time if using a rolling sample.
A follow-up reminder email increases response rates by approximately 15% with only 0.5% unsubscribe rates. The optimal reminder cadence: initial invitation, 48-hour follow-up, and one final reminder at 5-7 days.
In-app versus email: a response rate gulf
The channel gap is substantial. Refiner’s 2025 report, analysing 1,382 in-app surveys with 50.3 million views, found an overall response rate of 27.52% for in-app surveys, with NPS specifically achieving 21.71%. Mobile in-app surveys performed even better at 36.14%. Email NPS surveys typically achieve 12-15%, though well-structured programmes can push toward 25%.
Pendo’s research shows companies moving from email to in-app collection see jumps of 2x to 10x in response volume. A Delighted customer case study found that switching from email NPS to in-product pop-ups generated 6x the responses in 10 days that they had collected in the previous six months.
The hybrid approach works best: serve in-app surveys to active users for contextual, high-response-rate feedback, and use email to reach churned or inactive users, executive buyers who don’t log in daily, and stakeholders outside the product. One critical caveat from Pendo: changing modal type (banner versus lightbox) can cause a 78% change in NPS score, so methodology consistency across channels is necessary for benchmarking.
Triggered (event-based) surveys consistently outperform scheduled (time-based) surveys. Specific.app’s research shows triggered surveys regularly reach 25-45% engagement, more than doubling static programmes.
How to segment NPS responses
An overall NPS of +40 sounds healthy. Until you discover your enterprise segment scores -15 and your free tier inflates the average. Without segmentation, NPS is a blunt instrument that masks the very insights you need to act on.
The buyer-user gap
The most striking segmentation finding comes from the Gainsight Customer Success Index 2022, which revealed that median NPS for executive buyers was 46 versus just 36 for end users, a 10-point gap. Decision-makers evaluate software at a strategic level (does it solve the business problem?), while end users encounter daily UI friction, bugs, and workflow inefficiencies. If you only survey decision-makers, you get an inflated view of satisfaction while the users who determine actual retention suffer silently.
Refiner’s best practice: for transactional NPS about product features, target end users. For questions about pricing, renewals, or sales performance, target decision-makers. This dual approach surfaces signal from both constituencies.
Segmentation dimensions that reveal hidden problems
By plan tier: Free and freemium users tend to give higher NPS (average +42 for freemium SaaS versus +29 for subscription SaaS, per NPSpack). A score of 7 from a power user on your enterprise plan carries entirely different meaning than a 7 from a free-tier trial user.
By customer tenure: Feedback from early users shapes onboarding, while scores from long-term users speak to ongoing value delivery. Pendo’s white paper on NPS methodology suggests that features mastered by long-term promoters can be worked into onboarding content to accelerate time-to-value for new users.
By company size: HubSpot’s customer experience team segmented NPS by customer tier and discovered that their mid-market customers were much less satisfied than either enterprise or small business users, completely invisible in the aggregate score. This led to targeted improvements in their mid-market offering.
By usage level: Pendo emphasises correlating NPS with product usage data. Features used heavily by promoters that passives and detractors don’t use likely provide real value. This analysis directly informs roadmap prioritisation: increase adoption of high-value features among passives and detractors rather than building new ones.
By acquisition channel: Segmenting by organic, paid, referral, or partner acquisition reveals which marketing sources produce the happiest, most loyal customers. Invaluable for CAC optimisation.
Sample size considerations
For statistical validity, MeasuringU calculates that detecting a 10-point NPS difference between segments at 95% confidence requires approximately 236 responses per group. For smaller B2B companies with limited survey volumes, Retently advises focusing on qualitative verbatim analysis rather than statistical rigour. Start collecting and acting immediately, even with small samples. A pattern of three detractors citing the same onboarding friction point is actionable regardless of statistical significance.
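To sanity-check your own sample sizes, one common approximation (an assumption on my part, not taken from the sources above) treats each response as -1 (detractor), 0 (passive), or +1 (promoter) and applies the normal-approximation standard error of the mean:

```python
import math

def nps_margin_of_error(promoters: int, passives: int, detractors: int,
                        z: float = 1.96) -> float:
    """Approximate +/- margin, in NPS points, at ~95% confidence (z=1.96)."""
    n = promoters + passives + detractors
    mean = (promoters - detractors) / n
    # For values in {-1, 0, +1}: E[x^2] is the share of non-passives.
    variance = (promoters + detractors) / n - mean ** 2
    return round(100 * z * math.sqrt(variance / n), 1)
```

With 50 promoters and 50 detractors, for example, the margin is nearly +/-20 NPS points, which is why small-sample scores should be read as direction, not precision.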
What to collect alongside every NPS response
The metadata that makes segmentation possible: plan tier, user role, company size, customer tenure, usage level, specific features used, MRR/ARR, lifecycle stage, acquisition channel, assigned CSM, and support ticket history. Tools like ChurnZero, Gainsight, and Retently allow automatic enrichment of NPS responses with account metadata from CRMs and product analytics platforms.
Improving response rates
Average NPS response rates cluster around 10-30% for email and 25-40% for in-app surveys, with a sobering statistic in the background: only 1 in 26 unhappy customers actually complains directly to a business. The rest simply leave. Of the 96% who don’t complain, 91% will churn silently, and 13% will share their negative experience with 15 or more people. Every response you fail to collect amplifies this blind spot.
Keep it radically short
CustomerGauge recommends capping relationship NPS surveys at 2-6 questions. SurveyMonkey’s analysis of 100,000 surveys found that surveys starting with a multiple-choice question (like the NPS scale) achieve 89% completion versus 83% when starting with an open-ended question. Lead with the score, follow with the open-ended prompt.
Personalise the sender
Personalised survey emails increase response rates by up to 48% according to SurveySparrow research, and personalised invites specifically boost rates by 7.8% while reducing drop-off by 2.6%. Send from a real person’s name and email (CEO, CSM, or support lead) rather than no-reply@company.com. Slack’s Bill Macaitis personally sent NPS surveys saying the feedback was important and that he read every comment. However, Intercom’s A/B test found that generic senders produced more candid feedback because customers felt less inhibited when not responding to someone personally. Test both approaches.
Embed the first question in the email
Rather than linking to an external survey, embedding the NPS scale directly in the email body sharply increases click-through. Use a CTA button instead of a text link for a 28% increase in click-through rate. Adding a progress bar boosts responses by 12%. Avoid the word “survey” in subject lines: Pointerpro data shows that leaving it out increases response rates by 10%.
Optimise for mobile
Mobile app surveys average 36.14% response rate versus 26.48% for web apps according to Refiner’s 2025 data. With 23.63% of all emails opened within the first hour, mobile-friendly design captures impulse responses. Mobile-optimised surveys see a 10-15% higher completion rate compared to non-optimised versions.
Use incentives cautiously
A $10 incentive increases survey return likelihood by 30% and boosts returned surveys by 18%. Well-structured incentive programmes can push completion from 30% to 50%. But incentives risk biasing responses and attracting low-effort clicks. For B2B SaaS, the most effective incentive is sharing useful, relevant findings from the research itself. If offering tangible incentives, reward depth (completing follow-up questions) rather than just clicking a score.
Pre-notify and remind
Retently found that pre-notifying users 2-7 days before a survey increases response rates by 4-29%. A follow-up reminder at 3-7 days boosts rates by up to 14%, with only 0.5% unsubscribe rates. The optimal sequence: pre-notification email, survey invitation, 48-hour reminder, and a final reminder at 5-7 days.
Turning NPS feedback into action
Here’s where most companies fail. Just 26% of B2B companies close the loop with all their customers according to CustomerGauge. This is where the real NPS ROI lives. Not in the collection, but in the response.
The inner loop: individual follow-up
The inner loop, as defined by Bain & Company, promotes individual learning. Frontline employees hear customer feedback directly, in the customer’s own words, and follow up with anyone whose feedback merits action. CustomerGauge’s data is unambiguous: companies that close the loop within 48 hours experience a 6-point NPS increase. Companies that implement systematic loop closure have 3x the number of promoters in their next survey compared to those without a closed-loop process. Conversely, companies that don’t close the loop increase churn by a minimum of 2.1% every year.
The practical workflow: detractor responses (0-6) generate immediate tickets routed to the appropriate team with a 48-hour SLA. High-value account detractors escalate to management. Passives queue for customer success follow-up within one week. Promoters route to marketing for testimonials and referrals. CustomerGauge adds a compounding benefit: customers are 21% more likely to answer the next survey if you closed the loop on their previous feedback.
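That workflow is mechanical enough to automate. A sketch, where the team names and the high-value MRR threshold are illustrative assumptions:

```python
# Routing rules mirroring the workflow above: detractors to support (or
# management for high-value accounts) on a 48-hour SLA, passives to CS
# within a week, promoters to marketing for advocacy asks.
def route_response(score: int, mrr: float,
                   high_value_mrr: float = 500.0) -> dict:
    if score <= 6:  # detractor
        team = "management" if mrr >= high_value_mrr else "support"
        return {"team": team, "sla_hours": 48}
    if score <= 8:  # passive
        return {"team": "customer_success", "sla_hours": 168}
    # promoter
    return {"team": "marketing", "sla_hours": None}
```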
The economic impact scales fast. CustomerGauge calculates that a $500M company reducing churn by 2.3% over five years through multi-level loop closure adds $234 million to the bottom line. For a bootstrapped SaaS founder, the maths work at any scale. If your MRR is $10,000 and closing the loop prevents 2.3% of your customers from churning each year, that’s $2,760 in preserved ARR from a process that costs nothing but time.
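The arithmetic from that example, made explicit:

```python
# Preserved ARR from preventing a slice of annual churn via closed-loop
# follow-up, as in the $10,000 MRR example above.
def preserved_arr(mrr: float, churn_prevented: float = 0.023) -> float:
    arr = mrr * 12
    return round(arr * churn_prevented, 2)

print(preserved_arr(10_000))  # -> 2760.0, matching the figure in the text
```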
The outer loop: systemic improvement
The outer loop, per Bain, supports improvements that go beyond the individual or team. New policies, processes, technology, pricing, or product features that cut across functions and require significant investment. Four data sources feed it: patterns from the inner loop, operational data analysis, competitive intelligence, and cross-industry best practices. Companies that share best practices from their outer loop experience over 2.2% higher retention rates according to CustomerGauge.
The process: tag verbatim responses by theme, assign themes to responsible departments, validate top themes with 5-10 customer calls, bring insights into a prioritisation framework, create action plans, and track NPS score changes per theme over time to measure impact. Atlassian’s product teams are required to address top detractor themes in their quarterly roadmaps, resulting in a 15-point NPS increase across their product suite over two years.
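The first step, tagging verbatims by theme, can start as simply as keyword matching. A naive sketch (the theme names and keyword lists are illustrative; mature programmes typically move to manual review or NLP classification):

```python
# Naive keyword-based tagger for the theme-tagging step above.
THEMES = {
    "billing": ["billing", "invoice", "payment", "charged", "card"],
    "onboarding": ["onboarding", "setup", "getting started"],
    "performance": ["slow", "lag", "crash", "downtime"],
    "pricing": ["expensive", "price", "pricing", "cost"],
}

def tag_verbatim(text: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    lowered = text.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in lowered for kw in keywords)]
```

Even this crude version is enough to produce the per-theme counts that feed a prioritisation framework, and it's trivially auditable, which matters when a theme drives a roadmap decision.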
Routing feedback to the right team
Not all detractor feedback is a product problem. This is the distinction that matters most for SaaS founders, and it’s where NPS feedback intersects directly with payment recovery.
When detractors cite billing confusion, payment failures, or account access issues in their verbatim comments, they’re flagging involuntary churn, a dunning problem rather than a product deficiency. ProfitWell research shows that 20-40% of total SaaS churn is involuntary from failed payments. The average SaaS business loses approximately 9% of recurring revenue annually to failed payments according to Stripe.
These are customers who want to stay. When their payment fails and service gets interrupted without clear communication, the experience cascades from confusion to frustration to lost trust, and from there to a lower NPS and negative word-of-mouth. A customer who was a promoter yesterday becomes a detractor today because their card expired and nobody handled it well.
The practical implication: route billing-related NPS verbatims directly to your dunning and payments team. Every “billing was confusing” or “my account got locked” comment is recoverable revenue, not a product deficiency. Best-in-class companies achieve 70-85% payment recovery rates through smart dunning management. Subscriptions recovered through smart retry logic continue for an average of 7 more months according to Stripe data. For founders already running NPS surveys, tagging and routing billing-related detractor feedback to a dedicated payment recovery workflow is one of the fastest ways to convert detractors back into promoters.
Beyond NPS: the Sean Ellis test as a complement
The Sean Ellis product-market fit question, “How would you feel if you could no longer use [product]?” with three options (very disappointed, somewhat disappointed, not disappointed), is not an NPS alternative but a powerful complement for early-stage SaaS. Ellis benchmarked approximately 100 startups and found that companies where 40%+ of users say “very disappointed” almost always had strong traction, while those below 40% almost always struggled.
Superhuman CEO Rahul Vohra detailed his implementation in First Round Review: starting at just 22% “very disappointed” (no product-market fit), his team used a four-question survey to identify their high-expectation customer segment, understand what those supporters loved, and prioritise the roadmap by cost-to-impact ratio. The PMF score climbed from 22% to 33%, then nearly doubled again to 58% within three quarters. The key difference from NPS: the Ellis test measures product necessity (how dependent users are), while NPS measures advocacy likelihood. Use the Ellis test as a leading indicator pre-PMF, then layer in NPS for continuous monitoring post-PMF. Both can run in parallel, and at around 40 respondents you get directionally useful results from either.
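The scoring itself is a single proportion. A sketch against the 40% benchmark described above (function names and answer labels are my own):

```python
# Sean Ellis PMF scoring: share of respondents answering "very disappointed"
# to "How would you feel if you could no longer use [product]?"
def pmf_score(answers: list[str]) -> float:
    """answers contains 'very', 'somewhat', or 'not' (disappointed)."""
    if not answers:
        raise ValueError("no responses")
    return round(100 * answers.count("very") / len(answers), 1)

def has_pmf_signal(answers: list[str], benchmark: float = 40.0) -> bool:
    """True when the score meets Ellis's ~40% traction benchmark."""
    return pmf_score(answers) >= benchmark
```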
Wrapping Up
The NPS question is deceptively simple: one question, one scale, one score. But the gap between companies that collect a score and companies that use it to drive retention is enormous. Gartner found that 95% of companies collect feedback and only 5% tell customers what they did about it. That gap is the opportunity.
For SaaS founders starting from scratch, the hierarchy is clear. Step one: deploy the standard NPS question with a single open-ended follow-up, customised by segment. Step two: time it right (quarterly relational, event-triggered transactional) and lead with in-app to maximise response rates. Step three: segment responses by plan tier, user role, tenure, and usage to find the hidden problems your aggregate score masks. Step four: close the loop within 48 hours with every detractor, route feedback to the right team, and feed themes into your product roadmap.
And step five: recognise that when detractors cite billing or payment issues, the fix isn’t better product. It’s better dunning. If you want to calculate your current NPS, our NPS calculator lets you input survey responses and see where you land against SaaS benchmarks.
Sources
NPS Origins and Methodology
- Fred Reichheld: The One Number You Need to Grow (Harvard Business Review, 2003): Original NPS research across 400 companies and 28 industries; “recommend” question outperforming all alternatives as a growth predictor.
- Bain & Company: Measuring Your Net Promoter Score: Standard question wording, scoring methodology, and classification tiers.
- Bain & Company: The Inner Loop: Framework for individual customer follow-up and frontline learning.
- Bain & Company: The Net Promoter System’s Outer Loop: Systemic improvement framework; cross-functional action plans driven by aggregated feedback patterns.
Survey Design and Question Wording
- Delighted: NPS Survey Question Examples and Templates: Acceptable wording variations; “friend or family member” versus “friend or colleague”; segment-specific follow-up recommendations.
- Qualtrics: NPS Survey Questions: Follow-up question frameworks; cultural variance data from 17,509 consumers across 18 countries.
- Groove HQ: How We Grew Our Exit Survey Responses by 785%: Open-ended achieving 10.2% versus 1.3% for closed-ended; “what made you” versus “why did you” wording optimisation.
- Survicate: NPS Analysis: Survey completion rates by question count (83.34% for 1-3 questions versus 41.94% for 15+).
Timing, Cadence, and Response Rates
- CustomerGauge: When to Send Your NPS Surveys: Quarterly surveying producing 51% retention improvement; weekday variance data; 48-hour closed-loop window.
- Gainsight: Best Time to Send NPS Survey: Tuesday-Thursday optimal send days; 90-day individual guardrail; pre-notification timing.
- Refiner: In-App Survey Response Rates (2025): 27.52% overall in-app response rate; NPS achieving 21.71%; mobile at 36.14%; 78% score change from modal type changes.
- Pendo: In-App NPS Surveys: 2x-10x response rate improvement from email to in-app; feature-usage correlation with NPS segments.
- SurveyMonkey: Best Practices for NPS Response Rates: 89% completion for multiple-choice-first surveys; 100,000-survey analysis.
Segmentation and Analysis
- Gainsight: NPS Scores Reveal Disparity in Value Delivered to Users and Buyers: Executive buyer NPS of 46 versus end user NPS of 36; 10-point gap analysis.
- NPSpack: SaaS NPS Benchmarks 2025: Freemium (+42) versus subscription (+29) model gap; company size breakdowns; named company scores.
- Pendo: Making Sense of NPS: Usage-NPS correlation methodology; feature adoption analysis for promoters versus detractors.
- MeasuringU: NPS Replication: Sample size requirements (236 per group for 10-point difference at 95% confidence).
Closing the Loop and Revenue Impact
- CustomerGauge: Close the Loop: 48-hour response producing 6-point NPS lift; 3x more promoters; 2.3% annual churn reduction; $234M impact calculation; 21% higher next-survey response rate.
- CustomerGauge: NPS Impact on Revenue: 26% of B2B companies closing the loop; revenue correlation findings.
Involuntary Churn and Payment Recovery
- Stripe: Combating Subscription Churn: 9% of recurring revenue lost to failed payments; recovered subscriptions continuing 7+ months.
- Recurly: Subscriber Retention Benchmarks: 15% average credit card failure rate; 7.2% monthly involuntary churn risk.
Product-Market Fit and Complementary Surveys
- Superhuman / First Round Review: How Superhuman Built an Engine to Find Product-Market Fit: Four-question survey methodology; PMF score from 22% to 58%; Sean Ellis 40% benchmark across ~100 startups.
Response Rate Optimisation
- Pointerpro: Average Survey Response Rate Benchmarks: “Survey” in subject lines reducing rates; progress bar +12%; pre-notification +4-29%.
- Retently: Customer Feedback Loop: Pre-notification timing; reminder cadence; 0.5% unsubscribe rate from reminders.
- Chameleon: NPS Survey Best Practices: Modal type impact on scores; in-app survey design patterns.