
RICE Score
Overview

How do you choose which products or features to ship and which should remain in your backlog? RICE Scores are one way of making a call.

RICE Scores are used to prioritise products and/or product features based on your confidence in their predicted reach and impact compared to the effort required to create them.

THE FORMULA.

The formula to determine a RICE Score multiplies reach, impact, and confidence, then divides the result by effort. In other words: (Reach x Impact x Confidence) / Effort = RICE Score.

To break down each element: 

  • Reach: the projected audience of a new product/feature, typically within a given period.
  • Impact: how much an individual user will value the product/feature; this might consider conversion rates, user attraction, or retention. It’s often measured on a scale of:
    • 3 = massive
    • 2 = high
    • 1 = medium
    • 0.5 = low
    • 0.25 = minimal
  • Confidence: the level of confidence in the above two estimates, typically expressed as a percentage.
  • Effort: the work required to deliver the product/feature, typically expressed in person-months.
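The formula above can be sketched as a small function. This is an illustrative helper, not code from Intercom's original post; the parameter names simply mirror the four RICE elements:

```python
def rice_score(reach, impact, confidence, effort):
    """Compute a RICE Score.

    reach:      projected audience (e.g. users per quarter)
    impact:     scale value (3, 2, 1, 0.5, or 0.25)
    confidence: fraction between 0 and 1 (e.g. 0.8 for 80%)
    effort:     estimated person-months of work
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort

# Example: 500 users reached, high impact (2), 80% confidence, 2 person-months
print(rice_score(500, 2, 0.8, 2))  # → 400.0
```

Note that confidence is entered as a fraction rather than a percentage so the multiplication works directly.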

ALTERNATIVE PRIORITISATION METHODS.

Alternatives to the RICE Score include the Kano Model and the Impact Effort Matrix, though RICE is particularly useful for facilitating team discussions about difficult-to-compare alternatives.

Actionable Takeaways
  • Identify the product or feature under consideration. 

This might be part of a backlog or from a brainstormed or even user-generated list of possible features. Ideally, continue the following steps with a group to help interrupt individual biases. 

  • Check the initiative against the product vision. 

While not part of the RICE Score itself, it is important to base prioritisation decisions on your broader product vision.

  • Estimate reach. 

Consider the audience for the new product/feature, using past initiatives and their uptake as a reference point.

  • Estimate impact. 

Identify a metric to measure impact by considering the desired user response if the initiative is successful. This might relate to conversion or purchase, retention, recommendation, or upgrading. Ideally, compare against other initiatives before making an estimation.

  • Establish confidence. 

Ask the group to rate their confidence in the above two estimates as a percentage — if there are few comparison points from past experience, the rating should be lower.

  • Estimate effort. 

Predict how long the product/feature will take to create. For example, a week of planning, a week of design, and two weeks of development equates to one person-month.

  • Compare and use RICE Scores. 

The RICE Scores for a variety of options can then drive a broader conversation and define priorities.
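The comparison step above might look something like this: scoring a handful of candidate features and ranking them. The feature names and numbers here are purely illustrative assumptions, not examples from the original source:

```python
def rice_score(reach, impact, confidence, effort):
    """(Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical backlog items: (name, reach, impact, confidence, effort)
candidates = [
    ("In-app onboarding tour", 2000, 1,   0.8, 2),
    ("Dark mode",              5000, 0.5, 0.5, 1),
    ("CSV export",             800,  2,   1.0, 0.5),
]

# Rank highest-scoring first to seed the prioritisation discussion
ranked = sorted(candidates, key=lambda c: rice_score(*c[1:]), reverse=True)

for name, *params in ranked:
    print(f"{name}: {rice_score(*params):.0f}")
# CSV export: 3200
# Dark mode: 1250
# In-app onboarding tour: 800
```

The ranking is a conversation starter rather than a verdict: a low-effort, high-confidence item like the hypothetical CSV export can outrank a flashier feature, which is exactly the kind of trade-off RICE is meant to surface.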

Limitations

The elements behind RICE are still subjective and potentially influenced by biases, though the ‘confidence’ factor is a positive inclusion in this respect. 

The ‘effort’ factor does not account for the differing cost of people’s hours: for example, there is no distinction between one week of senior developer time, which might cost twice as much, and one week of junior designer time.

In Practice

Intercom. 

The RICE Score was first developed by Intercom as a way to set product priorities, and their original post included a spreadsheet example for establishing RICE Scores.

Build your latticework
This model will help you to:

RICE Scores are typically used by product managers to help prioritise products and product features, similar to the Impact Effort Matrix or the Kano Model.

Use the following examples of connected and complementary models to weave the RICE Score into your broader latticework of mental models. Alternatively, discover your own connections by exploring the category list above.

Connected models: 

  • Impact Effort Matrix and Kano Model: alternative prioritisation methods.
  • Pareto principle: in establishing the ‘20’ features to deliver the ‘80’ value. 

Complementary models: 

  • Golden circle: to ensure that new products and features are aligned with a deeper direction. 
  • Lock-in effect: as a potential consideration in relation to impact.
  • Personas: as a potential tool to explore and establish reach and impact. 
  • Zawinski’s law: a warning to prioritise and avoid product bloat. 
  • Agile methodology: an iterative approach typically requiring fast and ongoing prioritisation. 
  • Minimum viable product: as an approach to cut out unnecessary features in the first instance.
Origins & Resources

The RICE Scoring model was developed by Intercom and is outlined in detail in their original blog post.
