AI is not “smart” in the way people assume.
It is responsive. It reflects the structure of your input.
Vague in. Vague out.
Clear in. Useful out.
Prompting is not a party trick.
It is specification.
For Product Managers, that matters because our work is already specification-heavy. We turn messy inputs into structured decisions. Prompts are just another interface for that.
What prompt engineering actually is
Prompt engineering is the practice of writing instructions in a way that produces:
accurate outputs
relevant outputs
consistent outputs
outputs in the format you need
It is like briefing a designer.
“Make it look good” produces generic work.
“Design a mobile-first finance flow using our brand tokens, with accessible typography and clear error states” produces usable work.
The model is the same. The brief changes.
What prompting helps with in Product Management
Good prompts can accelerate common PM workflows:
summarising and extracting insights from feedback
generating options for prioritisation
drafting user stories and acceptance criteria
planning sprints and shaping roadmaps
scanning competitors and synthesising patterns
This is augmentation, not replacement.
Framework choice still matters.
Data quality still matters.
Judgment still matters.
The golden rules for effective prompts
Treat this as your prompt checklist.
1) Be specific about the output
State what you want and how you want it delivered.
format: bullets, table, JSON, checklist
length: short, medium, long
tone: direct, neutral, friendly
scope: include and exclude
Specificity reduces drift.
2) Use role prompting to set context
Tell the model what stance to take.
“You are a UX designer…”
“You are a data analyst…”
“You are a security reviewer…”
This is not magic.
It reduces ambiguity.
3) Provide examples when quality matters
If you have a preferred style, show it.
A single example can anchor structure and tone faster than a long explanation.
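One-shot prompting is literally pasting that example into the prompt. A minimal sketch (the example story and feature name are invented):

```python
# One-shot prompt: a single worked example anchors format and tone
# more reliably than a paragraph describing the format.
EXAMPLE = (
    "As a returning user, I want to resume my last session, "
    "so that I don't lose my place."
)

prompt = f"""Write 3 user stories for bulk CSV import.
Match the format and tone of this example exactly:

{EXAMPLE}
"""
```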
4) Add constraints and structure
Constraints increase reliability.
Ask for:
numbered steps
clear sections
assumptions listed up front
a decision matrix
a table with fixed columns
More text is not always better.
Better structure is better.
5) Ask for reasoning without encouraging rambling
For complex work, request a short explanation of why, plus assumptions.
Use prompts like:
“List assumptions, then provide the answer.”
“Explain the key trade-offs in 5 bullets.”
“Show calculations and inputs, then the result.”
This improves auditability without turning the output into an essay.
6) Iterate like you would iterate a product
Prompting is a design loop.
Run it. Review it. Tighten it. Save the version that works.
A simple Notion prompt library is enough.
Prompt templates you can borrow
Copy, paste, edit. Keep the structure.
Brainstorming features
You are a [insert profession] for a [insert business] designed for [insert users].
Generate 10 feature ideas that could give us a competitive advantage.
For each feature, provide:
- A short description
- The strategic value (why it matters)
- A rough dev complexity (low/med/high)
- The potential impact on user engagement (low/med/high)

Prioritising features with RICE
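RICE itself is simple arithmetic: score = (Reach × Impact × Confidence) / Effort. It is worth being able to sanity-check the model's ranking yourself. A minimal sketch with illustrative numbers (the features and values are made up):

```python
# RICE = (Reach * Impact * Confidence) / Effort
# Reach: users affected per quarter; Impact: 0.25-3 scale;
# Confidence: 0-1; Effort: person-months. All values are illustrative.
features = {
    "Biometric login": {"reach": 8000, "impact": 2.0, "confidence": 0.8, "effort": 3},
    "Dark mode":       {"reach": 5000, "impact": 0.5, "confidence": 0.9, "effort": 1},
    "CSV export":      {"reach": 1200, "impact": 1.0, "confidence": 0.7, "effort": 2},
}

def rice(f):
    return (f["reach"] * f["impact"] * f["confidence"]) / f["effort"]

ranked = sorted(features, key=lambda name: rice(features[name]), reverse=True)
for name in ranked:
    print(f"{name}: {rice(features[name]):.0f}")
```

If the model's ranking disagrees with your own arithmetic, that usually means its assumed inputs differ from yours, which is exactly why the prompt asks it to state assumptions.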
Use the RICE framework to prioritise the following features for a [insert product type]:
- Feature 1 (e.g. Biometric login)
- Feature 2
- Feature 3
- Feature 4
- Feature 5
For each, show:
- Reach
- Impact
- Confidence
- Effort
Then rank the features from highest to lowest RICE score.
Assumptions:
- If data is missing, state assumptions explicitly.
- Keep explanations brief and structured.

Writing user stories
Write 5 user stories for a [insert feature type] in a [insert product type].
Use this format:
- As a [user type], I want to [action], so that [benefit].
Also include:
- 2–3 acceptance criteria per story
- Notes on edge cases or exceptions
- SEO and data requirements (events, properties)

Prepping for user interviews
Create 15 open-ended questions for user interviews about our [insert your product].
Goals:
- Uncover user pain points and unmet needs
- Use neutral, non-leading language
- Start broad, then get more specific
Group the questions into:
- General habits and context
- Common workflows or tasks
- Communication and collaboration needs
- Pain points and friction

Estimating effort
Estimate the effort required to implement [insert feature] in our [insert product or platform].
Break the work down into:
- Key components (frontend, backend, APIs, infrastructure)
- Dependencies or third-party services
- Testing, QA, and deployment steps
Then provide a rough estimate per part using [insert estimation method].
Constraints:
- If unknowns exist, list them.
- Call out risk hotspots.

Building a roadmap
Create a 12-month product roadmap for our [insert product type or industry].
Focus on these initiatives:
- [Initiative 1]
- [Initiative 2]
- [Initiative 3]
- [Initiative 4]
- [Initiative 5]
Break the roadmap down by quarter.
For each quarter include:
- Key themes
- Prioritised initiatives
- Milestones or success indicators

Analysing competitors
Build a comparison framework for analysing competitors in the [insert category].
Include criteria:
- UI and UX quality
- Core and differentiating features
- Pricing and monetisation
- Target audiences or personas
- Market positioning and brand tone
Then apply it to 3 competitors:
- [Competitor A]
- [Competitor B]
- [Competitor C]
Summarise findings in a comparison table.

Getting more advanced without overengineering it
Once you’re comfortable, you’ll notice two common failure modes:
the model forgets context across sessions
outputs vary too much from run to run
There are three ways to reduce this.
1) Fine-tuning
Fine-tuning trains a model on your examples so it mirrors your product language, formats, and standards.
It can reduce prompt length and improve consistency.
It also introduces risk.
Do not fine-tune on sensitive business or customer data unless you have governance and approval.
2) Prompt tuning and reusable prompt “wrappers”
If fine-tuning is heavy, create stable wrappers.
A wrapper is a saved prompt that defines:
role
output structure
constraints
evaluation criteria
Then you only inject the variable context each time.
This is often enough.
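A wrapper can be as plain as a template string plus a function. A sketch, assuming Python; the role, sections, and constraints are examples, not a standard:

```python
# A reusable prompt "wrapper": the stable parts (role, structure,
# constraints) are fixed once; only the variable context is injected.
WRAPPER = """You are {role}.

Task:
{task}

Context:
{context}

Output rules:
- Use the sections: Assumptions, Answer, Open questions.
- Keep each section under {max_bullets} bullets.
- If information is missing, say so instead of guessing.
"""

def build_prompt(task, context, role="a senior product manager", max_bullets=5):
    """Inject only the variable parts; everything else stays versioned."""
    return WRAPPER.format(role=role, task=task, context=context, max_bullets=max_bullets)

prompt = build_prompt(
    task="Draft acceptance criteria for passwordless login.",
    context="Mobile app, EU users, existing email OTP flow.",
)
```

Version the wrapper string like any other artifact; when outputs drift, you diff the wrapper, not your memory.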
3) Dynamic context injection
Instead of pasting context manually, you pipe it in.
From Jira.
From your CRM.
From a feedback store.
From your docs.
This creates a repeatable workflow.
Be careful with sensitive data.
Treat prompts as part of your security surface.
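Mechanically, injection is just string assembly over whatever store you already have. A sketch; `fetch_recent_feedback` is a hypothetical stand-in for your real Jira, CRM, or feedback-store client:

```python
# Dynamic context injection: pull context from a store at prompt time
# instead of pasting it by hand. fetch_recent_feedback is hypothetical;
# swap in your real API client.
def fetch_recent_feedback(limit=3):
    # Stand-in for an API call; returns (source, text) pairs.
    return [
        ("app-store", "Login keeps timing out on slow connections."),
        ("support", "Users ask for a way to export their data."),
        ("sales", "Two prospects blocked on SSO support."),
    ][:limit]

def inject_context(template, limit=3):
    items = fetch_recent_feedback(limit)
    context = "\n".join(f"- [{src}] {text}" for src, text in items)
    return template.replace("{context}", context)

TEMPLATE = "Summarise the top pain points in this feedback:\n{context}"
print(inject_context(TEMPLATE))
```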
Building an internal “PM sidekick” in a sane way
If you want an assistant that behaves consistently, do this in order:
Pick 2–3 use cases (spec drafts, feedback synthesis, PRD review)
Gather good examples of inputs and outputs
Standardise formats (what good looks like)
Choose a model that fits cost and risk
Test with real work, not toy prompts
Add guardrails (what it must not do)
Iterate and version the workflow
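Guardrails can start as simple pre- and post-checks around the model call. A sketch; the blocked patterns and required sections are placeholders for your own policy:

```python
# Minimal guardrails: refuse inputs containing sensitive data, and
# reject outputs that break the agreed format. Patterns are placeholders.
import re

BLOCKED_INPUT = [r"\b\d{16}\b", r"(?i)password\s*:"]  # e.g. card numbers, credentials
REQUIRED_SECTIONS = ["Assumptions", "Answer"]          # your agreed output format

def check_input(text):
    """Return True if the input is safe to send to the model."""
    return not any(re.search(p, text) for p in BLOCKED_INPUT)

def check_output(text):
    """Return True if the output follows the agreed structure."""
    return all(section in text for section in REQUIRED_SECTIONS)
```

Checks like these catch the boring failures cheaply, so human review can focus on judgment rather than format.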
Treat it like a product.
Because it is.
Wrapping up
Prompting is not about sounding clever.
It is about controlling outputs.
Clear inputs create reliable outputs.
Reliable outputs create faster decisions.
Faster decisions create more learning per week.
That is the point.
One implication for builders:
Stop thinking of prompts as messages. Start thinking of them as interfaces.