Prioritisation is a constraint problem.
Not an ideas problem.
Most teams have more good ideas than delivery capacity. The hard part is selecting what to do next without turning the roadmap into negotiation theatre.
Frameworks help because they force the conversation into explicit trade-offs:
value vs effort
confidence vs ambition
customer satisfaction vs operational reality
short-term wins vs long-term bets
Used well, they reduce politics.
Used badly, they add spreadsheets.
RICE: Reach, Impact, Confidence, Effort
RICE is a scoring model. It is designed to create comparability across options.
Inputs
Reach: how many users you will affect in a time window
Impact: how much it moves a key metric or changes user experience
Confidence: how sure you are about reach and impact, usually expressed as a percentage
Effort: how much work it takes (person-days, weeks, story points)
Formula
RICE score = (Reach × Impact × Confidence) / Effort
When to use it
Use RICE when you have enough data to make sensible estimates, and when you need to compare features with different scope.
The trap
RICE can create fake certainty.
If Reach is guessed, Impact is vibes, and Confidence is a finger in the air, the score still looks “scientific”. It is not. It is arithmetic on assumptions.
Practical example
Two options:
Improve onboarding flow
Build a new analytics dashboard
Onboarding might have higher Reach, clearer Impact, and lower Effort. It often wins, even if the dashboard feels more exciting.
That is the point.
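The formula is easy to run as a quick script. A minimal sketch of the two-option comparison above, with hypothetical numbers for reach (users per quarter), impact (the common 0.25–3 scale), confidence (0–1), and effort (person-weeks):

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    reach: int         # users affected in the time window (hypothetical)
    impact: float      # 0.25 minimal, 0.5 low, 1 medium, 2 high, 3 massive
    confidence: float  # 0.0 to 1.0
    effort: float      # person-weeks

    @property
    def rice(self) -> float:
        # RICE score = (Reach × Impact × Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Illustrative numbers only - the point is the comparison, not the values.
options = [
    Option("Improve onboarding flow", reach=8000, impact=1.0, confidence=0.8, effort=3),
    Option("New analytics dashboard", reach=1500, impact=2.0, confidence=0.5, effort=8),
]

for o in sorted(options, key=lambda o: o.rice, reverse=True):
    print(f"{o.name}: {o.rice:.1f}")
```

With these assumed inputs, onboarding wins by an order of magnitude. Change the assumptions and the ranking can flip, which is exactly why the inputs deserve more scrutiny than the arithmetic.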
MoSCoW: Must, Should, Could, Won’t
MoSCoW is not a scoring system. It is a categorisation system.
Buckets
Must: required to ship, meet baseline needs, or satisfy constraints
Should: important, but can slip without killing the release
Could: nice-to-have if time allows
Won’t: explicitly out of scope for now
When to use it
Use MoSCoW when you need stakeholder alignment and scope control, especially for time-boxed work like MVPs and releases.
The trap
Everything becomes a “Must”.
If you allow that, MoSCoW stops working. A good MoSCoW session ends with uncomfortable “Won’ts”.
Practical example
For a new release:
Must: authentication, core flow works end-to-end
Should: notifications, basic settings
Could: dark mode, “nice” animations
Won’t: major redesign, new platform support
MoSCoW is less about ranking and more about protecting delivery.
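Because MoSCoW is just buckets, the release above fits in a small dict, and you can cheaply check for the "everything is a Must" trap. A sketch with the hypothetical scope from the example (the 50% warning threshold is an arbitrary assumption):

```python
from collections import Counter

# Hypothetical release scope; values are the four MoSCoW buckets.
scope = {
    "authentication": "Must",
    "core flow end-to-end": "Must",
    "notifications": "Should",
    "basic settings": "Should",
    "dark mode": "Could",
    "major redesign": "Won't",
}

counts = Counter(scope.values())
must_ratio = counts["Must"] / len(scope)

# Rough sanity check: if most items are Musts, nothing was prioritised.
if must_ratio > 0.5:
    print("Warning: too many Musts - revisit the Shoulds and Coulds")

for bucket in ("Must", "Should", "Could", "Won't"):
    items = [item for item, b in scope.items() if b == bucket]
    print(f"{bucket}: {', '.join(items) or '-'}")
```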
Kano: satisfaction, not utility
Kano is a customer satisfaction lens.
It separates features into three types:
Basic needs: expected. Absence causes frustration. Presence does not delight.
Performance needs: more is better. Satisfaction increases with quality.
Delighters: unexpected. Not required, but they create emotional lift.
When to use it
Use Kano when retention and satisfaction matter, and when you want to balance fundamentals with differentiation.
How to do it properly
Run a Kano survey. Don’t guess. Ask users how they feel if a feature exists vs if it does not.
Practical example
Messaging app:
Basic: sending and receiving messages
Performance: search speed, reliability, delivery rate
Delighter: smart replies, polished micro-interactions
Kano stops teams over-investing in delighters while basics are shaky.
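The survey pairs above are usually scored against the standard Kano evaluation table: each respondent answers on a five-point scale ("like", "expect", "neutral", "live with", "dislike") for both the functional question (feature exists) and the dysfunctional one (it does not). A minimal sketch of that mapping, with hypothetical answers for the messaging-app example:

```python
# Standard Kano evaluation pairs: (functional, dysfunctional) -> category.
PAIRS = {
    ("like", "dislike"): "performance",   # more is better
    ("like", "live_with"): "delighter",
    ("like", "neutral"): "delighter",
    ("like", "expect"): "delighter",
    ("expect", "dislike"): "basic",       # must-be
    ("neutral", "dislike"): "basic",
    ("live_with", "dislike"): "basic",
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    if functional == dysfunctional:
        # Contradictory extremes are suspect; matching middles mean apathy.
        return "questionable" if functional in ("like", "dislike") else "indifferent"
    if functional == "dislike":
        return "reverse"  # users actively do not want this
    return PAIRS.get((functional, dysfunctional), "indifferent")

# Hypothetical single-respondent answers:
print(classify("like", "neutral"))    # smart replies
print(classify("expect", "dislike"))  # sending and receiving messages
```

In practice you aggregate across many respondents and take the modal category per feature; a single answer pair is noise.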
Value vs complexity matrix
This is the simplest framework to run in a workshop.
Plot items on two axes:
Value (to users or business)
Complexity (effort, risk, dependencies)
Then you get four categories:
High value, low complexity: quick wins
High value, high complexity: strategic bets
Low value, low complexity: fill-ins
Low value, high complexity: time sinks
When to use it
Use it to align quickly, visualise trade-offs, and spot the “why are we doing this?” work.
The trap
Complexity is not just dev effort.
Include:
technical risk
cross-team dependencies
compliance overhead
operational support cost
If you ignore those, “quick wins” become expensive surprises.
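The four quadrants reduce to two threshold checks. A sketch assuming a 1–10 scale for both axes (the scale, threshold, and item names are all hypothetical), where the complexity input is meant to already fold in risk, dependencies, compliance, and support cost:

```python
def quadrant(value: int, complexity: int, threshold: int = 5) -> str:
    """Classify an item on the value vs complexity matrix (1-10 scale assumed)."""
    high_value = value > threshold
    high_complexity = complexity > threshold
    if high_value and not high_complexity:
        return "quick win"
    if high_value and high_complexity:
        return "strategic bet"
    if not high_value and not high_complexity:
        return "fill-in"
    return "time sink"

# Hypothetical workshop scores: (value, complexity) per item.
items = {
    "fix empty-state copy": (7, 2),
    "multi-region failover": (8, 9),
    "internal admin shortcut": (3, 2),
    "migrate legacy exporter": (2, 8),
}
for name, (v, c) in items.items():
    print(f"{name}: {quadrant(v, c)}")
```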
Dealing with conflict
Frameworks do not remove disagreement. They make disagreement visible.
When things get spicy:
Anchor on outcomes: what metric or customer result are we optimising for?
Separate facts from assumptions: label the guesses
Write down trade-offs: what are we sacrificing by choosing this?
Be transparent: publish the reasoning, not just the decision
A framework is only useful if people trust the process.
Choosing the right framework
Pick based on the problem you are trying to solve:
Need comparability and you have data: RICE
Need scope discipline and stakeholder clarity: MoSCoW
Need a satisfaction lens to balance fundamentals vs delight: Kano
Need alignment fast and a workshop-friendly view: Value vs complexity
You can combine them.
A common pattern:
Value vs complexity to shape the field
RICE to rank the shortlist
MoSCoW to protect the release
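That three-step pattern can be sketched end to end. Everything here is illustrative: the candidate names, the scores, the time-sink filter, and the toy rule that only the top-ranked item becomes a Must.

```python
candidates = [
    # (name, value 1-10, complexity 1-10, reach, impact, confidence, effort)
    ("onboarding fixes", 8, 3, 8000, 1.0, 0.8, 3),
    ("analytics dashboard", 7, 8, 1500, 2.0, 0.5, 8),
    ("legacy exporter rewrite", 2, 9, 300, 0.5, 0.5, 10),
]

# 1. Value vs complexity: drop the obvious time sinks (low value, high complexity).
shortlist = [c for c in candidates if not (c[1] <= 5 and c[2] > 5)]

# 2. RICE: rank whatever survives.
shortlist.sort(key=lambda c: (c[3] * c[4] * c[5]) / c[6], reverse=True)

# 3. MoSCoW: protect the release - only the top item is a Must in this toy cut-off.
for i, (name, *_rest) in enumerate(shortlist):
    print(f"{'Must' if i == 0 else 'Should'}: {name}")
```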
Final thoughts
Prioritisation is not about being “right”.
It is about being explicit.
Explicit about goals.
Explicit about trade-offs.
Explicit about why this, now.
One implication for builders: if you want calmer teams and cleaner roadmaps, invest in prioritisation as a shared system, not a PM-only ritual.