One of the best ways to know what kind of AI Agent to build for your customers and your specific industry is by looking at what’s already working. What does success look like for the leading companies?
For example, I’m using OpenAI’s Codex (their programming AI Agent). My first experiment was asking it to help me remember and explain a Python script I wrote a while ago.
“Explain what gen-know-what-you-own.py does.”
It “thought” for 19 seconds and then broke its answer down into three logical sections. That’s a perfectly reasonable, well-structured way to respond to my fuzzy, high-level request, and it makes sense when explaining how a codebase is laid out.
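If you want to reproduce a similar “explain my code” interaction outside of Codex, a minimal sketch might look like the following. It assumes the official openai Python package, an OPENAI_API_KEY in the environment, and a placeholder model name; it isn’t how Codex itself works, just the simplest way I can imagine to ask a model the same question.

```python
# Minimal sketch: ask a model to explain an existing script.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and "gpt-4o" is only a placeholder model name.
from openai import OpenAI

client = OpenAI()

# Read the script we want explained.
with open("gen-know-what-you-own.py", encoding="utf-8") as f:
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a senior engineer onboarding a teammate."},
        {"role": "user", "content": f"Explain what this script does:\n\n{source}"},
    ],
)

print(response.choices[0].message.content)
```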
Traditionally, this happens all the time on programming teams. Senior programmers invest a lot of time onboarding junior programmers by walking them through the existing codebase.
Now imagine how a similar AI Agent might help in your domain.
Think about where an AI Agent like that could accelerate your teammates and customers. Reducing initial friction, cognitive load, manual chores, and wasted time can be a massive win!
You know you want to build an AI Agent, but you don’t know exactly what to build, or worse, how to get started. It’s a problem I routinely hear when talking to execs at companies.
The best way to get started isn’t with the tech. Start with the people. Look and listen to find their problems. What’s their moment of friction? Where do people pause, get stuck, or burn time trying to understand something new, complex, or intimidating?
A highly practical example is OpenAI’s Codex programming agent. I can ask it to “explain what gen-know-what-you-own.py does.” It’ll “think” for a brief time, then respond with a clean, structured explanation.
But that pattern isn’t unique to programming. Explaining complex things clearly is a universal need. Imagine an AI Agent designed to do something similar to what Codex does for programmers, but in other domains.
Example: Bank Mortgage
Example: Credit Card Statement
Example: Recipe
Example: Song
The valuable pattern is the same in every one of these examples. This is where AI Agents shine: not by replacing human expertise, but by making complex things easier to understand.
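To make that concrete, here is a hypothetical sketch of the same pattern parameterized by domain: one explain() helper where the only thing that changes is the system prompt. The domain names, prompts, and model name are illustrative assumptions, not a real product design.

```python
# Hypothetical sketch: one "explainer" skeleton, swapped across domains by system prompt.
# Assumes the `openai` package and OPENAI_API_KEY; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

# Illustrative domain prompts; a real product would need domain experts to review these.
DOMAIN_PROMPTS = {
    "mortgage": "You are a patient mortgage specialist. Explain this statement in plain language.",
    "credit_card": "You are a friendly banker. Explain this statement and flag anything unusual.",
    "recipe": "You are an experienced cook. Explain each step of this recipe and why it matters.",
    "song": "You are a music teacher. Explain this song's structure, lyrics, and themes simply.",
}

def explain(domain: str, document_text: str) -> str:
    """Ask the model for a plain-language explanation, tailored to the chosen domain."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DOMAIN_PROMPTS[domain]},
            {"role": "user", "content": f"Explain this:\n\n{document_text}"},
        ],
    )
    return response.choices[0].message.content
```

Notice that the skeleton doesn’t change from bank statement to recipe to song; only the expertise baked into the prompt does, which is exactly why the Codex lesson travels so well.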
“What can AI do?” is a perfectly fine way to start thinking about product design opportunities. Ultimately, though, we want to answer the bigger question: “Where is the economic value of applied AI?”
So far, we’ve imagined how AI Agents can explain complex things clearly. By looking at the popular Codex agent that helps programmers, we found lessons that apply far beyond programming, and we pictured what those moments could look like across different industries.
At some point, we need to shift from thinking about design to actually building. You need to do the thing.
Application architects, the way I see it, have a choice between two significant directions:
No-Code Builders
Full-Code Tech Stacks
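For the full-code direction, the core of such an agent can start very small. Here is a minimal, hypothetical sketch of a command-line loop that keeps conversation history so users can ask follow-up questions; the model name and prompt are assumptions, and a real build would add tools, retrieval, and guardrails on top.

```python
# Minimal full-code sketch: a command-line explainer that remembers the conversation.
# Assumes the `openai` package and OPENAI_API_KEY; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You explain complex documents in plain language."}
]

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"Agent: {reply}")
```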
The next step is to choose one or two focused use cases where we can actually bring this idea to life. Think of something small enough to ship quickly, but meaningful enough to teach us something real.
A strong pilot candidate has:
The point is, I’m not worried about designing the perfect AI Agent from the start. Discover the right starting place and build the most helpful thing in the simplest way possible.
UXers and Product Managers: over the next few weeks, surface three strong candidate use cases to evaluate as potential AI Agent pilots. Try to plan and build quickly.
Treat everything as an experiment. Test-and-learn with real users. Watch carefully, and listen intently. Take feedback and use it to directly inform your development roadmap.
As you build, remember: the goal isn’t just building capabilities; it’s creating a smooth, trustworthy user experience.