
RICE Prioritization Framework

Learn how to prioritize features objectively using Reach, Impact, Confidence, and Effort. Master one of the most widely used quantitative prioritization frameworks, adopted by product teams at Intercom, Atlassian, and other leading tech companies.

The RICE Score Formula

RICE Score = (Reach × Impact × Confidence) / Effort

Higher scores = higher priority. Compare scores to make objective decisions.
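
In code, the calculation is a one-liner. Here is a minimal sketch in Python (the rice_score function name and its validation are illustrative, not part of the framework); note that Confidence enters as a fraction, so 80% becomes 0.8:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score.

    reach      -- users affected per time period (e.g., per quarter)
    impact     -- 0.25 (minimal) to 3 (massive)
    confidence -- a fraction: 1.0, 0.8, or 0.5
    effort     -- person-months; must be positive
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Worked example (matches the FAQ below): (10,000 × 2 × 0.8) / 3 ≈ 5,333
print(round(rice_score(10_000, 2, 0.8, 3)))  # -> 5333
```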

Why Use RICE?

Every PM faces the same challenge: too many good ideas, not enough resources. Stakeholders push their priorities, engineers have opinions, and without a framework, the loudest voice wins. RICE provides an objective way to compare opportunities and make defensible decisions.

Developed by Intercom, RICE forces you to think through four critical dimensions of any initiative. The resulting score gives you a single number to compare features—removing bias and politics from prioritization. When someone asks "why did we prioritize X over Y?" you have a clear, data-backed answer.

RICE works best for comparing medium-term initiatives (features, improvements, experiments) where you have some data to inform estimates. It's not meant for critical bugs, strategic bets, or urgent opportunities—those require different decision frameworks.

The Four Components of RICE


Reach

How many users will this impact in a given time period?

How to measure:

Number of users per quarter

Examples:

Homepage redesign: 100,000 users/quarter

All visitors see the homepage

Power user feature: 5,000 users/quarter

Only 5% of users would use this

Onboarding improvement: 25,000 users/quarter

New signups per quarter

Tips:

  • Use product analytics for accuracy
  • Be consistent across all features
  • Specify the time period (usually quarterly)
  • For new features, estimate conservatively

Impact

How much will this affect each user who encounters it?

How to measure:

Scale: 3 = Massive, 2 = High, 1 = Medium, 0.5 = Low, 0.25 = Minimal

Examples:

Core workflow 2x faster: 3 (Massive)

Dramatically changes daily work

New integration: 2 (High)

Significant value for users who need it

UI polish: 0.5 (Low)

Nice improvement, not life-changing

Tips:

  • Most features are 0.5-1 impact—be honest
  • Consider impact on your north star metric
  • Ask: "Would users notice if we removed this?"
  • Reserve 3 for truly transformative features

Confidence

How confident are you in your Reach and Impact estimates?

How to measure:

Percentage: 100%, 80%, or 50%

Examples:

A/B test data exists: 100%

We have direct evidence

User research + analytics: 80%

Good data, some assumptions

Gut feeling: 50%

Limited data, significant uncertainty

Tips:

  • Be honest—overconfidence skews results
  • Low confidence = opportunity for research
  • Document what would increase confidence
  • Consider running experiments first

Effort

How much work is required to ship this?

How to measure:

Person-months of work

Examples:

Copy change: 0.1 person-months

2-3 days total work

New feature: 2 person-months

Design + eng + QA over 6 weeks

Platform rebuild: 12 person-months

Major multi-quarter initiative

Tips:

  • Include all roles: design, eng, QA, etc.
  • Get estimates from the team, don't guess
  • Round to standard increments (0.5, 1, 2, 3...)
  • Higher effort isn't bad—just factor it in

RICE Scoring Examples

Here's how five different features compare using RICE scoring. Notice how the framework surfaces quick wins and deprioritizes low-confidence ideas.

| Feature | Reach | Impact | Confidence | Effort | RICE Score | Notes |
|---|---|---|---|---|---|---|
| Mobile app push notifications | 40,000 | 1 | 80% | 1 | 32,000 | Good reach, medium impact, low effort—quick win |
| Checkout page redesign | 50,000 | 2 | 80% | 3 | 26,667 | High reach (all purchasers), high impact on conversion, good confidence from user research |
| Dark mode | 80,000 | 0.5 | 80% | 2 | 16,000 | High reach but low impact (nice-to-have), medium effort |
| Smart recommendations | 60,000 | 2 | 50% | 4 | 15,000 | High potential but low confidence—consider running an experiment first |
| API v2 migration | 5,000 | 1 | 100% | 6 | 833 | Low reach (only developers), clear scope, high effort |

Key Insights

  • Push notifications wins due to low effort (quick win)
  • Checkout redesign is high priority despite significant effort
  • Smart recommendations drops due to low confidence—run an experiment first
  • API migration scores lowest—limited reach and high effort both drag it down
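
Because the formula is so simple, the whole table can be reproduced and ranked in a few lines. A sketch, assuming the figures above live in a plain Python dict (the features structure and rice helper are illustrative):

```python
# Figures from the table above: (reach/quarter, impact, confidence, effort in person-months).
features = {
    "Mobile app push notifications": (40_000, 1,   0.80, 1),
    "Checkout page redesign":        (50_000, 2,   0.80, 3),
    "Dark mode":                     (80_000, 0.5, 0.80, 2),
    "Smart recommendations":         (60_000, 2,   0.50, 4),
    "API v2 migration":              (5_000,  1,   1.00, 6),
}

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

# Sort from highest to lowest score and print a ranked backlog.
for name, args in sorted(features.items(), key=lambda kv: rice(*kv[1]), reverse=True):
    print(f"{rice(*args):>10,.0f}  {name}")
```

The printed ranking matches the table, with the quick win at the top.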

How to Run a RICE Scoring Session

1. List your candidates

Gather all features, improvements, and ideas you're considering. Include items from stakeholder requests, user feedback, and team ideas. Don't filter yet—get everything on the table.

2. Estimate Reach first

Use product analytics to estimate users affected per quarter. For new features, look at similar features or user research. Be specific and document your assumptions.

3. Score Impact honestly

Ask: "How much will this move our north star metric?" Most features are 0.5-1. Reserve 2-3 for truly significant changes. Be conservative— overestimating impact is the most common RICE mistake.

4. Assess Confidence

Be honest about uncertainty. Low confidence isn't bad—it signals where you need more data. Consider running experiments on low-confidence, high-potential ideas before committing resources.

5. Get Effort from the team

Don't guess effort—ask engineering, design, and QA. Use person-months and include all work required. It's okay to be approximate; the goal is relative comparison, not perfect estimates.

6. Calculate, sort, and discuss

Calculate RICE scores and rank features. Use scores as a starting point for discussion, not the final answer. Consider dependencies, strategic fit, and other factors that RICE doesn't capture.

Common RICE Mistakes

Mistakes to Avoid

  • Inflating Impact scores to get features prioritized
  • Using 100% Confidence without strong data
  • Guessing Effort instead of asking the team
  • Treating RICE scores as absolute truth
  • Comparing features with different time horizons

Best Practices

  • Document assumptions for each estimate
  • Use consistent measurement periods (quarterly)
  • Recalibrate regularly based on actual results
  • Use scores for discussion, not dictation
  • Consider running experiments for low-confidence items

When to Use (and Not Use) RICE

Use RICE For:

  • Comparing multiple feature ideas
  • Quarterly/sprint planning
  • Justifying prioritization decisions
  • Identifying quick wins
  • Deprioritizing pet projects objectively

Don't Use RICE For:

  • Critical bugs (just fix them)
  • Strategic bets (use different criteria)
  • Time-sensitive opportunities
  • Technical debt (impact is hidden)
  • Compliance/legal requirements

Frequently Asked Questions

What is the RICE prioritization framework?

RICE is a prioritization framework developed by Intercom that scores features based on four factors: Reach (how many users will be affected), Impact (how much will it affect them), Confidence (how sure are you about estimates), and Effort (how much work is required). The RICE score = (Reach × Impact × Confidence) / Effort. Higher scores indicate higher priority.

How do you calculate a RICE score?

RICE Score = (Reach × Impact × Confidence) / Effort. For example: Reach = 10,000 users/quarter, Impact = 2 (high), Confidence = 80%, Effort = 3 person-months. Score = (10,000 × 2 × 0.8) / 3 = 5,333. Compare scores across features to prioritize objectively.

What are good Impact scores in RICE?

Impact is scored on a 0.25-3 scale: 3 = Massive impact (dramatically changes behavior), 2 = High impact (significant improvement), 1 = Medium impact (noticeable improvement), 0.5 = Low impact (minor improvement), 0.25 = Minimal impact (barely noticeable). Be conservative—most features are 0.5-1 impact.

How do you estimate Reach in RICE?

Reach is the number of users affected within a time period (usually per quarter). Use product analytics to estimate: total users × percentage who use the affected feature. For new features, estimate based on similar features or user research. Be specific and consistent across all features being compared.

What Confidence percentage should I use?

Confidence reflects certainty in your estimates: 100% = High confidence (solid data, clear requirements), 80% = Medium confidence (some data, reasonable assumptions), 50% = Low confidence (limited data, significant assumptions). Use 50% for gut feelings, 80% for educated estimates with some data, 100% only when you have strong evidence.

How do you measure Effort in RICE?

Effort is measured in person-months (or person-weeks for smaller teams). Include design, engineering, QA, and any other work required. Get estimates from your team—don't guess. The goal is relative comparison, so consistency matters more than precision. Round to 0.5, 1, 2, 3, 5, etc.

When should I NOT use RICE?

RICE works best for comparing medium-term feature investments. Don't use it for: critical bug fixes (just fix them), strategic initiatives (use different criteria), time-sensitive opportunities (speed matters more), or technical debt (has hidden impact). RICE is a tool, not a replacement for judgment.

How does RICE compare to other prioritization frameworks?

RICE is more quantitative than MoSCoW or Kano, making it better for comparing many options. Unlike the simpler ICE framework (Impact, Confidence, Ease), RICE adds Reach as a separate factor, so broad and niche features are scored on equal footing. WSJF (SAFe) is similar but uses different factors. Choose RICE when you want rigorous comparison across many features with clear metrics.

Ready to Prioritize Like a Pro?

RICE is just one tool in your PM toolkit. Explore our other frameworks and templates to level up your product management skills.