What Opportunity Looks Like After AI
Say AI three times in the mirror and the genie appears.
Building has never been easier. AI has redistributed who can do what and how long it takes to ship. That’s true today. But it hasn’t removed the process of building or the harder parts that come with delivering real value.
AI Tasks
- Organize this folder full of 200 documents - check.
- Build me a React app that looks like Reddit - check.
- Set up OpenClaw and send me reports daily - check.
- Generate a research report on X with 50 cited sources - check.
- Create 10 agents with 10 skill files and execute a plan in parallel - check.
And so on.
There’s a long line of checks now, and more possibilities than I could cover in one post. It’s impressive. Also a little overwhelming.
Personally, I’ve tried Claude Code, Codex, Cursor, Replit, and OpenClaw on both small and large apps. I’ve talked to coworkers, friends, and family using these tools daily, and I’ve used them outside of software too, just trying to plan life and get things done around the house.
You get to “wow” very quickly. Then you get to “what now?”
That’s the interesting part.
When the thing that used to block you becomes possible in days instead of weeks or months, the bottleneck moves. It doesn’t disappear. It moves back to you.
I think it’s possible to hit your own limit before the agent hits its. There’s nothing necessarily wrong with that, because in most situations we need time to digest, comprehend, and think about what’s next, not just produce input and output every second.
Execution is just one piece of building something. Vision, judgment, taste, context, and knowing what actually matters are just as important.
Capability changed. Responsibility didn’t.
Faster execution doesn’t remove the need to think, learn, or adapt. It raises the price of not thinking, and thinking is not a skill you want to lose.
If you’re just one-and-done on an idea, maybe that’s fine. --yolo or --dangerously-skip-permissions it is. Maybe AI gets you 80% of the way there, and that’s enough. But if you want to grow something, improve something, measure something, maintain something, or build a business around something, the hard part was never only the code.
Software can be “done.” A feature can ship. A library can work and be complete. But a product, a team, a business, or a system worth maintaining is never really finished. They evolve and hopefully grow. Which means someone still has to decide what happens next, how it gets done, and when.
Execution, Growth, and Opportunity
If AI compresses execution, then the early parts of the development loop become even more important:
- What problem are we solving?
- Who is it for?
- What are we measuring?
- What does success actually look like?
- What is the technical plan?
- How are we releasing it?
- What tradeoffs are we making?
If you think code is the only lever to optimize, you’re starting in the wrong place.
That’s why I don’t think AI replaces the fundamentals. I think it forces us back to them, not just a subset of them, but the whole set.
Growth Still Has to Belong to You
One thing I keep coming back to is growth and comprehension.
Mid last year, METR reported that experienced developers using AI were actually slower in their own codebases, even though they expected to be faster. Typing time shifted to validation, verification, and comprehension, and at mature organizations, stability and reliability remain key regardless of the tools in use. An article from Anthropic earlier this year found that AI-assisted developers scored lower on a follow-up mastery quiz when learning a new library.
Did you actually get better at owning a problem? At documenting bugs clearly? At spotting weak logic? At reviewing AI-generated code and seeing where it doesn’t fit? At making a technical plan that survives implementation?
Maybe asking these types of questions becomes more important.
There’s already enough evidence and personal experience to suggest that if people use AI only to burn through tasks, some skill decay is going to happen. Especially in engineering, that should concern people. You don’t want to become someone who can only ship by copying outputs you no longer fully understand.
AI is leverage. Anyone using it as a replacement has a different goal, and that probably signals a culture-fit misalignment.
Where Opportunity Actually Moves
A healthy dose of skepticism should sharpen judgment, not turn into decision paralysis or an excuse not to engage. The opportunity isn’t just shipping more code. Once execution gets easier, other things start to matter more than raw output:
Vision. Judgment. Taste. Standards. Context. Ownership.
- Maybe the product area isn’t tested well.
- Maybe the API performance is bad.
- Maybe nobody fully owns the conventions.
- Maybe the team is shipping code faster, but learning and sharing less.
- Maybe the output is technically correct, but strategically useless.
- Maybe the backlog is full of bugs that should be addressed, but you never had the time.
- Maybe there’s a workflow that mattered, but never got prioritized.
AI can’t own the consequences of a choice. It shouldn’t decide what’s next or why it’s worth doing. That comes from experience, insight, hypotheses, research, and talking with customers.
That’s something that can get lost in conversations about where AI fits into all of this. AI is going to force everyone to ask: What happens when execution is no longer the hard part? That question is exciting. It’s also uncomfortable.
Opportunity expands into areas people have always wanted to improve but never had the time, space, or leverage to get to. The real hope is that this pushes us back toward value, whether you’re building a feature, an API, a product, or a business.
The Human Job Isn’t Gone
As an engineering manager, I think about this a lot.
Teams weren’t created because execution was slow. They were created because building something real requires different kinds of thinking, ownership, and judgment, often in productive tension with one another. An engineer pushing back on scope because of technical implications. A product manager adding scope because it is ultimately right for the customer. A designer questioning whether the UX actually makes sense. That friction is the feature. It’s collaborative. You can mimic parts of it with agentic reasoning, but it’s not the same.
Development probably should get faster. Teams should remove debt. People should use better tools. There’s a real difference between what becomes a software factory and what remains a product organization. I believe the latter is more human-centered, regardless of what part of the problem you’re working on.
- Writing matters more.
- Asking the right questions matters more.
- Planning matters more.
- Architecture matters more.
- The ability to explain tradeoffs matters more.
AI can be a net positive or a net negative, depending on how it’s used.
Engineers are already flagging the bad version of this: PRs that ignore conventions, code that nobody fully owns, code that sort of works but doesn’t really fit, unmaintainable systems, and even more tech debt. Just telling a team “that’s bad” won’t fix anything either. Someone still has to set the standard, whether it’s code written by a human, AI-generated code, or even work reviewed by an agent.
There are a few blind spots here too. One is using AI as an excuse to stop growing. Another is using AI’s flaws as an excuse to do nothing; there is already visible upside that shouldn’t be diminished. Another is becoming so focused on output that you forget the story, purpose, and direction.
AI can remove friction. It can expand capability and depth. It can collapse the distance between idea and execution in a way that still feels wild. It cannot replace the responsibility of understanding what happens next or the responsibility of taking ownership of the outcome. It cannot hold a perspective you don’t have.
That part is still on us.