How to Get Started Solving Problems with AI
Or at least what works for me.
While Paxos is a remote company, every year we get the entire company together in person for a giant offsite1. This year, as part of a push to get everyone in the company using more AI in our work, we invited AI researcher Ethan Mollick2 to present. He gave a roughly 20-minute intro of himself and his research, then did two straight hours of live demos showing us how to do things. He would take a question from the audience, start his answer with “well, let’s try it live,” and then really answer the question in depth as he ran the demo.
It was incredibly eye-opening. For many of us, it was the nudge3 we needed to integrate AI more deeply into our day-to-day work. For me, it drove home the power of AI to remove the tedious and boring parts of programming and get me back to having fun creating things.
Below are some best practices I recalled from his presentation4, as well as my own insights from heavy usage over the past few months.
There is no manual, you must learn by doing
Trying to take some of your day-to-day work and make it better / easier / faster with AI is probably going to be slower at first than just doing it yourself. That’s ok. Some parts just won’t work. That’s also ok. The only way you are going to figure it out is by trying stuff. It’s going to be worth it.
The models are advancing and changing so quickly that nothing is documented, or what is documented goes stale fast. There are tons of prompt libraries5 out there, but you aren’t going to figure things out in a few minutes of copying and pasting.
The models are constantly changing, try different ones
Every model is a bit different. They are good at different things, none of them has real documentation, and which one is best is constantly changing (see lesson 1) and depends on the task. Literally overnight, Google Gemini went from being a joke to the best6 in a bunch of areas from a single update.
You’ll learn tricks that are more effective for you and for a given model. Some of those tricks won’t work on other models, while others will translate great. Again, this is all fine, just try things (lesson 1).
Set the context window with a job function
The context window is the working memory for a prompt. It is everything the model7 knows beyond the material it was trained on. Without some direction, the AI is just going to assume things, which is not always great.
One sentence that almost always helps is to set your role, something like: I am a [job function] at a [type of company] trying to do [specific outcome].
Another trick is setting the context window with a good overview or strategy document, the same way you would explain a task to a fellow human (see next lesson). Your prompt then turns into: “based on doc x,y,z, go and do this thing for me.”
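As a concrete sketch, here is one way to assemble the role sentence and context documents into a single prompt. The function and format are my own, purely illustrative; adapt the wording to whatever model and client you use.

```python
def build_prompt(job_function, company_type, outcome, context_docs, task):
    """Assemble a prompt that sets the role and context before the ask.

    Everything here is illustrative, not a real API: context_docs is a
    dict mapping a document name to its text.
    """
    role = f"I am a {job_function} at a {company_type} trying to {outcome}."
    # Paste the overview/strategy docs in so the model isn't guessing.
    context = "\n\n".join(
        f"--- Document: {name} ---\n{text}"
        for name, text in context_docs.items()
    )
    return f"{role}\n\n{context}\n\nBased on the documents above, {task}"

prompt = build_prompt(
    "backend engineer",
    "fintech company",
    "automate our settlement reports",
    {"strategy.md": "We settle trades daily at 4pm ET..."},
    "draft a step-by-step plan for the automation.",
)
```

The point is the ordering: role first, then context, then the specific ask, so the model is not left to assume any of the three.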
Think of the AI as a super smart intern
The mental model I use is that AI is a hyper intelligent intern who can do a week’s worth of work in seconds. Talk to it like a human being. The upside is that you probably have a lot of experience talking to humans and telling them to do things.
The downside is that it acts a lot like a human being, often in some pretty annoying ways. You’ll have to check the work because it will cut corners and be confidently wrong at times. It will sometimes lack common sense (see the context lesson above). The confidently wrong part is particularly annoying though because…
Be clear in your directions, avoid ambiguity and break down steps whenever possible
AI is going to fill in gaps on its own. Very often this is totally fine; sometimes it is even fun and interesting. For low-complexity, low-difficulty tasks you don’t even notice. For important, high-complexity, high-difficulty tasks it becomes abundantly obvious when you review the results. A lot of the time it’s annoying and adds extra work for you, but overall you are still getting to a better result faster, so it’s ok.
You will almost always get a better result if you break the work into multiple prompts. Think the steps through and feed them in one at a time. The bigger the step, the more likely you are to have issues, because ambiguity can seep into the ask and the AI goes off in some random direction. Make incremental steps.
I think of breaking out the work the same as if I’d do it myself. Build a foundation and then tackle iterations.
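The one-step-at-a-time approach can be sketched as a loop that keeps the running conversation and feeds each step as its own prompt. Here `call_model` is a stand-in for whatever LLM client you actually use, and the steps are made-up examples:

```python
def call_model(messages):
    """Placeholder: a real implementation would call your LLM API here."""
    return f"(model response to: {messages[-1]['content']})"

# Think the steps through up front, foundation first, then iterations.
steps = [
    "Write a function that parses the CSV export into records.",
    "Add validation for missing or malformed rows.",
    "Write unit tests covering the edge cases above.",
]

messages = [{"role": "system", "content": "You are a careful senior engineer."}]
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = call_model(messages)  # review each result before moving on
    messages.append({"role": "assistant", "content": reply})
```

Because the history accumulates, each small prompt builds on the reviewed output of the last one instead of asking for everything in one ambiguous shot.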
Think of yourself as an editor
The hype around “one shotting” prompts and getting good results is overblown; that’s not how it goes down in practice.
To me, this is the same as building a great product or quality writing.
Quality isn’t the result of some overnight change. Quality is the accumulation of thousands of small iterations.
A mental model I use (for now8) is to view AI as a tool for the blocking-and-tackling work: getting the easy first 80% of something down on paper, or a working MVP.
You can iterate rapidly through additional prompts, but for a good outcome you’ll still need to edit heavily.
1. Past locations have been Miami and Austin. This year was New York.
2. Go to his UPenn profile (https://mgmt.wharton.upenn.edu/profile/emollick/) and highlight all the text on the page.
3. For everyone else it was the announcement that once a week for a month we were doing day-long hackathons where the winners received 1 PAXG (worth $3,573 as of today).
4. He also has a ton of good content on YouTube, but nothing as good as the live presentation we saw (https://www.youtube.com/results?search_query=ethan+mollick).
5. Ethan Mollick’s prompt library is one of the better general-purpose ones (https://www.moreusefulthings.com/prompts).
6. Don’t spend any time at all trying to determine which model is “best” for something.
7. LLMs are trained on vast amounts of information (https://en.wikipedia.org/wiki/Large_language_model); generally (to date), the more information you can train your LLM on, the “better” it is at doing things.
8. Even over the past few months this bar has moved considerably, which is very exciting.

