What to Do With Your Survey Results

Written by Jon
Updated today

Collecting survey responses is the easy part. The harder part is knowing what to do next: how to interpret the data, what to act on, and how to close the loop with respondents. This article walks through what to do after the responses start coming in.

Start with the numbers, then read the words

For NPS, CSAT, and CES surveys, start with the score to get a directional read. Is sentiment positive, neutral, or negative? Is it moving in the right direction compared to last period? The number gives you a quick signal.

Then read the open-ended responses. The score tells you how users feel; the text tells you why. A low NPS score with no open-ended responses gives you nothing to act on. A low score with written responses gives you a specific problem to investigate.
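As a quick reference, NPS is the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch, with made-up sample ratings:

```python
def nps_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), as a whole number."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Illustrative sample: 5 promoters, 3 detractors, 2 passives out of 10
responses = [10, 9, 9, 8, 7, 6, 3, 10, 9, 5]
print(nps_score(responses))  # 20
```

The absolute number matters less than its direction: compare it against last period's score before reading anything into it.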

💡 Don't chase the score. NPS and CSAT are indicators, not goals. A team that optimizes for a higher NPS number without fixing the underlying issues will find the score bounces back down. Focus on the responses, not the metric.

Segment before you conclude

An average score across all users often hides more than it reveals. Before drawing conclusions, segment your results:

  • New users vs long-term users: new users often score lower as they're still learning the product

  • Free vs paid users: different expectations and usage patterns

  • By feature area: if you're running feature-specific surveys, compare across areas

  • By account or company: a single unhappy enterprise customer can drag down your overall CSAT

If you've set up User Identification, you can filter survey results by any attribute you pass, such as plan, account, role, or signup date. This is where the investment in User Identification pays off most visibly.
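The segmentation step above amounts to grouping scores by an attribute before averaging. A minimal sketch, where the response records and attribute names are made-up examples of what User Identification might pass:

```python
from collections import defaultdict
from statistics import mean

def segment_average(responses, attribute):
    """Average score per value of a user attribute (e.g. plan, role)."""
    groups = defaultdict(list)
    for r in responses:
        groups[r[attribute]].append(r["score"])
    return {value: mean(scores) for value, scores in groups.items()}

# Illustrative sample data, not real survey output
responses = [
    {"score": 9, "plan": "paid", "tenure": "long-term"},
    {"score": 4, "plan": "free", "tenure": "new"},
    {"score": 8, "plan": "paid", "tenure": "long-term"},
    {"score": 5, "plan": "free", "tenure": "new"},
]

print(segment_average(responses, "plan"))  # {'paid': 8.5, 'free': 4.5}
```

An overall average of 6.5 here would hide the real story: paid users are happy and free users are not.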

Identify what to act on vs what to monitor

Not every piece of negative feedback requires immediate action. Prioritize by:

| Signal | What it means | Action |
| --- | --- | --- |
| Same theme appearing across multiple respondents | Systemic issue, not an edge case | Prioritize for sprint planning |
| Low score from a high-value account | Churn risk | Follow up personally, same day |
| Score drop after a specific release | Release introduced a problem | Cross-reference with bug reports, investigate immediately |
| One-off negative response with no pattern | May be an edge case or personal preference | Log it, watch for recurrence, don't act immediately |
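These triage rules can be encoded directly. A minimal sketch, where the field names, thresholds, and account list are illustrative assumptions rather than product settings:

```python
def triage(response, theme_counts, high_value_accounts):
    """Map one survey response to a next action using the rules above."""
    # Low score from a high-value account: churn risk, act today
    if response["account"] in high_value_accounts and response["score"] <= 6:
        return "follow up personally, same day"
    # Theme shared by several respondents: systemic issue
    if theme_counts.get(response.get("theme"), 0) >= 3:
        return "prioritize for sprint planning"
    # Otherwise: log it and watch for recurrence
    return "log and watch for recurrence"

# Illustrative sample data
theme_counts = {"slow search": 4, "confusing billing": 1}
r = {"account": "Acme", "score": 3, "theme": "confusing billing"}
print(triage(r, theme_counts, {"Acme"}))  # follow up personally, same day
```

The exact threshold for "multiple respondents" (three here) is a judgment call; what matters is checking the high-urgency rules first.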

Close the loop with respondents

If a user left written feedback, especially negative feedback, follow up. This doesn't have to be a lengthy response. A brief reply acknowledging what they said and letting them know it's been logged goes a long way toward rebuilding trust and encouraging continued feedback.

Feed results into sprint planning

Survey results should inform your next sprint, not just sit in a dashboard. Pull the top themes from open-ended responses into your planning meeting. If three respondents mentioned the same friction point independently, that's stronger evidence for prioritization than a single feature request from a vocal user.
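Pulling top themes is just a frequency count once each open-ended response has been tagged with a theme (manually or via a classifier). A minimal sketch with made-up tags:

```python
from collections import Counter

# Illustrative theme tags applied to open-ended responses
tagged = ["slow search", "slow search", "export missing",
          "slow search", "export missing", "onboarding unclear"]

# The most-repeated themes are the strongest candidates for the sprint
top_themes = Counter(tagged).most_common(2)
print(top_themes)  # [('slow search', 3), ('export missing', 2)]
```

Anything mentioned independently three or more times is worth bringing to the planning meeting by name.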
