How to write PRDs using AI
And what is going to happen to PMs in the future
In the last year, the role of a PM has evolved more and faster than in the prior decade, all because of AI. As a follow-up to my post about how to get started solving problems with AI, I wanted to outline how to leverage AI to do one of the core functions[1] of any good[2] PM’s job: writing Product Requirement Documents (the PRD). In my last six months at Paxos, I saw the number of Engineers I could support as a PM roughly 10x[3], all thanks to leveraging AI.
What is a PRD?
Generally it is a document that outlines the key things cross-functional stakeholders, namely engineering, need to know to collaborate on building a product with a Product Manager. Different types of products and different audiences need different components within a PRD. It probably needs components such as a summary of the product’s purpose, the problem it solves, goals and objectives around the business value it adds and how to know what success looks like, target audience(s), scope of a release (particularly what is out of scope), etc.
A solid PRD also outlines the actual requirements written in a way they can be understood with no ambiguity (user stories with acceptance criteria). This is the meat of a good PRD and even more important in a world of AI augmented development. There is a whole lot more that goes into a PRD but for purposes of this post that should be sufficient.
Who is this post for?
This is a guide to leveraging AI, not writing a PRD itself. This post assumes you are at least adjacent to the role of a PM and know enough to be dangerous. This post is also written from the point of view that you are writing requirements for human engineers and not writing a PRD for an AI agent developer to go build[4].
Which tools are best?
The model landscape continues to evolve quickly. As of this writing, I’m finding Claude’s Sonnet 4.5 with Extended Thinking[5] disabled to be the best. Much of this is just personal preference though. Claude’s Projects feature coupled with the quality of the model makes it my go-to. There are also a ton of specifically tailored tools[6] (ChatPRD[7] is the biggest and most well known), but I don’t care for any of them. I don’t write enough PRDs anymore for it to be worth doing some sort of market survey of specific PRD writing + AI tools.
Basic workflow example
Here is the basic workflow I’ve found to be most effective when using Claude to write PRDs:
Set up a new Project
Populate the context window by adding Instructions and Files
Prompt with “Write a PRD based on this outline”
Iterate with feedback, the same way you would give a junior PM feedback on a PRD they have written
After a ton of iterations[8], move it into a Google doc[9], clean it up[10] and start to run the internal feedback process you like best.
Setting up a new project is pretty straightforward. I like to organize projects at the shared context level. Whenever I find myself copying instructions from another project or adding the same files for context, that’s a realization that those things should be in the same project.
Adding instructions is the second most important thing. Some basic ones I’ve found effective:
I am a Product Manager at [company name] working on [domain/feature].
The company does x, y, z, which is usually something about the focus of your product (consumer, enterprise, b2b2c, API-first, etc.)
Our immediate competitors are x,y,z
Use the attached documents as specific business context. Assume any information from documents with more recent dates supersedes information from older documents.
My primary business goal is [business’s north star metric]
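If you set up many projects from this same template, it can help to keep it as a tiny script so every project starts from an identical skeleton. This is just an illustrative sketch, not part of Claude or any tool; the `build_instructions` helper and all example values below are made up:

```python
def build_instructions(company, domain, focus, competitors, north_star):
    """Assemble project-level instructions from the template above.

    Every parameter is a free-form string (or list of strings) that you
    fill in per project; paste the result into the Instructions field.
    """
    return "\n".join([
        f"I am a Product Manager at {company} working on {domain}.",
        f"The company does {focus}.",
        f"Our immediate competitors are {', '.join(competitors)}.",
        "Use the attached documents as specific business context. "
        "Assume any information from documents with more recent dates "
        "supersedes information from older documents.",
        f"My primary business goal is {north_star}.",
    ])

# Hypothetical example values:
print(build_instructions(
    "Acme Corp",
    "the brokerage trading platform",
    "API-first b2b2c financial infrastructure",
    ["CompetitorA", "CompetitorB"],
    "monthly traded volume",
))
```

The point is less the code than the discipline: the instructions stay consistent across projects, and the only things that change are the blanks.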
Adding files is the most important thing and worth investing a ton of time in getting right. Attaching key documents about the product area you are working on is a great way to make sure the context window has what you have in your brain. Remember though, garbage in, garbage out. Feeding it halfway complete or out of date docs just causes you problems further down the line. I would go as far as spending time building docs that are only used for setting the context window. I never showed these to another human but it was a good use of my time given the increase in quality of output.
Types of files I recommend adding:
Current overall product strategy doc for the area you work in
Past product strategy docs so it understands how the product has evolved
Public API docs to show how the product actually works
Prior finalized PRDs that have been shipped or will be shipped soon
Customer detail breakdown (size, scale, target markets, segmentation, etc.)
Outputs of regular meetings or ceremonies you run[11].
A characteristic of docs that makes them useful for adding to the context window is lots and lots of appendix tables of data. AI interprets data tables much better than prose. Putting dates on things is also an effective tactic so the AI can tell which information is more up to date when conflicts arise.
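As a rough illustration of the dating tactic, here is a sketch of prefixing each doc with its date and sorting newest-first before pasting it into the context window, so the model can tell which information supersedes which. The `format_context` helper and the tuple shape are hypothetical, not any tool's real API:

```python
from datetime import date

def format_context(docs):
    """Sort docs newest-first and prefix each with its ISO date.

    `docs` is a list of (date, title, text) tuples -- a made-up shape
    chosen for this sketch, not a real integration.
    """
    ordered = sorted(docs, key=lambda d: d[0], reverse=True)
    return "\n\n".join(
        f"[{d.isoformat()}] {title}\n{text}" for d, title, text in ordered
    )

docs = [
    (date(2023, 5, 1), "Product strategy v1", "We target retail users."),
    (date(2024, 9, 15), "Product strategy v2", "We now target enterprises."),
]
# The newer strategy doc comes first, clearly dated, so conflicting
# statements resolve in its favor per the project instructions.
print(format_context(docs))
```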
The prompt itself matters a lot less than everything above. A general guideline for prompting is to give it guardrails. If you start with a really broad prompt like “write a PRD for a brokerage trading platform” you are going to get some worthless slop back. Instead, use a lot of prompts as part of an iterative process and keep each one fairly self-contained and targeted.
An example of a series of prompts I might use:
“Write a PRD about [feature area], start with just an outline of the areas of the document I want to detail, but don’t fill them out yet. For the initial outline use [bullet points of an outline of areas to cover[12]]”
“Fill out the overview section” (and then I’d do a few follow up prompts to tweak it as I see fit)
“Fill out the rest of the sections with details”
I’d then go section by section and give it prompts the same way I’d give a junior PM feedback. Prompts like “remove x, y, z” or “add more detail to this part” or “explain this part more clearly.”
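The iteration above can be sketched as a sequence of prompts generated from your outline: outline first, then an overview pass, then the remaining sections one at a time. Purely illustrative; the `prd_prompt_sequence` helper is made up, and in practice you'd paste these into Claude by hand:

```python
def prd_prompt_sequence(feature, outline):
    """Yield the prompt sequence described above for a given feature
    and section outline. Assumes outline[0] is the overview section.
    """
    bullets = "\n".join(f"- {s}" for s in outline)
    yield (f"Write a PRD about {feature}, start with just an outline of "
           f"the areas of the document I want to detail, but don't fill "
           f"them out yet. For the initial outline use:\n{bullets}")
    yield "Fill out the overview section"
    # Remaining sections get their own targeted prompt each.
    for section in outline[1:]:
        yield f"Fill out the {section} section with details"

prompts = list(prd_prompt_sequence(
    "instant settlement",  # hypothetical feature area
    ["Overview", "Goals", "User stories", "Out of scope"],
))
```

Each prompt stays self-contained and targeted, which is the guardrail point from above.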
Like all things AI, the way to figure it out is jump in and just try it to see what works for you.
Advanced workflow examples
The value in the PRD process to me is less about the literal writing down of the requirements (although this is certainly important), and more about the process of using the structured approach writing creates to solicit feedback to improve the requirements. There is no full replacement for an objective set of human eyes poking at your idea, but it turns out AI can get you pretty far.
When I think a PRD is fleshed out enough[13] to show to a human, I like to run it back through a series of AI prompts. Exporting it to a .pdf and dropping it back into a new Claude prompt (within the same project) is usually much easier than trying to get all the integrated file flows working, and I like this approach because it gives me easier version control.
Example prompts of how to use AI to review your PRD:
“Take a look at this PRD and give me ten examples on how to improve it”
“Take a look at this PRD and give me some ideas on how to make it more concise”
“Take a look at this PRD and give me alternative ideas on how to implement the feature with the overall goal staying the same”
“Take a look at this PRD, now compare it to [competitor api docs url] and give me a summary table breakdown of how both approaches differ”
“I’m reviewing this PRD with [stakeholder position] soon. Give me a list of their most likely questions/objections and possible rebuttals”
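If you run this review battery on every PRD, it can be captured as a small parameterized list so nothing gets skipped. A hypothetical sketch (the `review_prompts` helper and its arguments are made up; the prompt text mirrors the examples above):

```python
def review_prompts(stakeholder, competitor_docs_url=None):
    """Return the PRD-review prompts listed above, parameterized by the
    stakeholder you are prepping for and an optional competitor docs URL.
    """
    prompts = [
        "Take a look at this PRD and give me ten examples on how to "
        "improve it",
        "Take a look at this PRD and give me some ideas on how to make "
        "it more concise",
        "Take a look at this PRD and give me alternative ideas on how to "
        "implement the feature with the overall goal staying the same",
        f"I'm reviewing this PRD with {stakeholder} soon. Give me a list "
        f"of their most likely questions/objections and possible rebuttals",
    ]
    if competitor_docs_url:
        prompts.append(
            f"Take a look at this PRD, now compare it to "
            f"{competitor_docs_url} and give me a summary table breakdown "
            f"of how both approaches differ"
        )
    return prompts

# Hypothetical usage: one review pass per prompt, PDF attached each time.
for p in review_prompts("the Head of Engineering"):
    print(p)
```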
You can then take all of this feedback as you continue to improve on the core doc itself.
What does all of this mean for the future of Product Managers?
AI certainly supercharges the abilities of a good Product Manager, but at some point does it remove the need for them altogether? In the context of a wider panic about what AI might do to jobs, it is interesting to think about the PM. Someone still needs to be doing this type of work. Even in a software development setup today where there aren’t people with the actual PM title, someone is still figuring out strategy, writing requirements, building a roadmap, driving alignment among stakeholders[14], etc. In small teams, those are the responsibilities of someone doing a lot of other things. In big teams, to scale, you focus on specialization, so this turns into dedicated PM roles.
My prediction is that in the future there are far fewer dedicated PMs. More of the traditional PM responsibilities will fold into other roles and the PM-to-Eng ratio will significantly increase. If the tech industry average is 1 PM for every 8 Engineers today, I bet in the next few years we see it push toward something more like 1:25 or even 1:40. Team size overall is decreasing as people can do more with AI augmentation. In a world where breaking into PM is already really hard, I’m not sure how it ever happens in the future.
Maybe a bit of a grim prediction? Let me know what you think of the future of PM or how these tips help you write your own PRDs better and faster.

[1] I acknowledge PM is different in different places, but to me, core to the role is defining what the product is, how it works and who it is for. It is impossible to do that effectively without writing something down. The thing you write down is the requirements.
[2] What is a good PM? Subject for another time. A bad PM is almost certainly one who doesn’t, or isn’t capable of, writing things down, particularly for the engineers.
[3] To be fair, this is also likely due to my very deep understanding of the subject matter from so many years of working on it. A PM who doesn’t understand the industry or problem space deeply but jumps into using AI risks being a negative force multiplier given the ability to generate so much of the wrong work so quickly.
[4] Maybe at some point I’ll write up some thoughts on the differences, but briefly: when writing for an AI agent to build something, I’ve found including a lot more detail in the PRD to be more effective, to avoid the AI filling in the gaps with weird details. I’ve also found it effective to just let it build based on your PRD and use the result to surface the problems. For instance, if the AI built a random feature or did some things you really didn’t like, just go back and add detail about those things into the PRD and start building over. Building purely with AI is so fast and cheap that it is easy to just throw out work; at worst you only lost a few minutes of time.
[5] The Extended Thinking feature is only useful to me when a query is so complex it sort of times out. In other cases I view it as a “be unnecessarily verbose in an annoying way” button.
[6] In general I am suspicious of all of these AI tools that are really just a thin layer on top of a quality LLM. The amount of value these tools add is pretty nebulous to me. And if the tool itself turns out to be a great business, what’s stopping the provider of the LLM from just natively launching a feature that does the thing your tool does? If I were a PM at a major AI company, I’d just go look at API usage from tools like these built on top of our LLM to figure out where user demand was and then build replacements within the core AI application.
[7] What an excellent name, right? This is one of those products that I’m annoyed I didn’t come up with purely because of the perfect name. I haven’t used it for a while, but when I did, I found it to be a very meh product experience. I suspect it benefits highly from such a good name.
[8] Sometimes I’d run 30-40 iterations only to finally get a better sense of how I wanted to approach the problem, and at that point I’d start over with a more detailed prompt.
[9] At my new gig we use Notion. I realize it is often the hot/trendy tool people like using, but I find Google Docs to be way better. Nothing beats the suggested-edits feature for async feedback, and otherwise all the features are basically the same.
[10] This is a really important step. You aren’t done when you copy/paste your Claude output into a doc. You are just getting started. You need to heavily edit the doc itself still. My mental model is the Claude part is a way to take the first 80% of writing a PRD from days to hours, not a way to go from 0 to 1 in minutes. If you think you are done at this stage and just throw it over the wall to engineering or other cross-functional stakeholders, you are likely going to lose trust for shoveling AI slop.
[11] This is the one where you need to be really careful about the garbage in / garbage out thinking. If you are dumping random AI transcripts in for context, you’re probably going to have a bad time. Using another Paxos example, every Monday I would run a Business Review discussion with the Executive Team for the business I was responsible for running. This discussion was facilitated by a pre-read sent the Friday prior. The doc used for the pre-read (which included takeaways from the live discussion) was great for adding to the AI context window because it was a high quality, polished deliverable.
[12] See the “What is a PRD?” section for examples of what your outline might include. It really depends on what you are trying to build. You only need to be 80% correct for the first pass; I almost always ended up adding and removing sections through the iteration process.
[13] Figuring out the level of “enough” that is just the right amount of effort to output is part of being a good PM and more art than science. AI is really bad at this.
[14] The day AI can actually get people to all agree on something from a prompt is the day human jobs are truly cooked.

A couple of startups I’ve seen in this space lately that you might find interesting: https://briefhq.ai/ & https://specstory.com/