In my past life, I led product managers for a living. Today, I’m an airline pilot by day and a part-time product manager by night (literally).
Over the past several months, alongside my day job in the cockpit, I’ve been working with my former company, Applied Frameworks, to design and build a new roadmapping suite for product managers. It’s a project that feels both intellectually challenging and deeply personal.
For years, I’ve facilitated collaborative roadmapping sessions with customers across industries and around the globe. When those sessions are structured well, something remarkable happens. Engineers stop arguing about features. Executives stop speaking in abstractions. Product leaders stop defending scope. The conversation shifts toward value — economic tradeoffs, timing, architectural intent, and strategic alignment.
Those patterns are powerful. When combined with disciplined customer research, they create alignment that survives beyond the workshop. They turn roadmaps from static artifacts into decision frameworks.
We wanted to build a tool that supports that kind of thinking.
The challenge was straightforward: we had limited resources. No dedicated product team. No extended runway. This was a part-time effort layered on top of my new full-time career. What we did have was conviction about the problem and urgency to move.
That’s when AI stopped being an interesting experiment and became an integral collaborator.
I began using AI as a strategic thought partner.
At first, the benefit was clarity. Ideas that had lived in my head for years — swimlane patterns, economic framing loops, collaboration dynamics between product and engineering — could be articulated, challenged, reorganized, and refined far more quickly than before. What might once have required several days of drafting and iteration could now be meaningfully advanced in a few hours.
It wasn’t about outsourcing thinking. In many ways, it required more discipline. The better I articulated context, constraints, and intent, the stronger the output became. Loose prompting produced shallow results. Precise framing produced strong insights.
That dynamic reinforced something I already knew: clarity of thought drives clarity of outcome. AI simply compressed the feedback loop.
Then the role of AI evolved.
At the suggestion of my colleague Clint — a developer by trade and a deeply thoughtful product manager — I began using Claude, particularly Opus 4.6, not only for conceptual work but also for development. The conversation shifted from shaping doctrine and positioning to building working software. Through structured dialogue and coding assistance, I could describe system behavior and watch prototypes emerge in real time. In fact, as I write this, Claude is simultaneously debugging an issue in my prototype.
As someone with limited coding experience, this was transformative. The distance between concept and artifact narrowed dramatically. I could refine research synthesis in the morning, adjust positioning midday, and explore architectural implications by evening. Decks that once required collaboration cycles were generated in minutes. Competitive matrices formed almost instantly. Early prototypes materialized from structured reasoning.
My role shifted in the process. I moved from being the primary producer of artifacts to acting more as a director of intent and refinement. I provide context, tradeoffs, constraints, and lived experience. The system generates. I adjust. It regenerates.
The efficiency gain is real.
But the more interesting impact isn’t about speed.
Everything I’m describing works because I’ve already spent years doing the slow version.
I’ve facilitated sessions where alignment fractured under executive scrutiny. I’ve defended roadmap tradeoffs to skeptical engineering leaders. I’ve seen strong strategies fail because the economic framing was weak. I’ve made mistakes in front of customers and internal teams and learned from them.
I built the decks manually. I synthesized research the long way. I sat with ambiguity long enough to begin recognizing its patterns.
That repetition built judgment.
And judgment is what AI amplifies.
Even prompting reveals this clearly. The quality of the response is directly proportional to the quality of the framing. When I provide rich context, articulate the economic tradeoffs at stake, and clarify constraints, the output becomes meaningfully useful. When the framing is thin, the result deteriorates quickly.
The ability to frame a problem well is itself a product of experience.
That is where I begin to see tension.
Product management has historically been an apprenticeship discipline.
Junior PMs built the decks. They conducted synthesis. They drafted briefs. They prepared for roadmap sessions. They sat in the room and watched how senior leaders navigated tradeoffs and ambiguity. Over time, through repetition and exposure, they developed intuition about where risk hides and where alignment breaks.
The so-called “legwork” was never just about producing artifacts. It was training.
AI now removes much of that repetition. It can generate briefs, analyses, and prototypes in minutes. It can structure narratives and synthesize research at a level that appears sophisticated.
Senior product leaders, understandably, are leveraging that acceleration.
But here is the structural question: if AI removes the repetition that once built intuition, how do junior product managers develop judgment?
At its current stage, AI is most powerful in the hands of someone who already knows what good looks like. It amplifies discernment. It does not create it.
A junior PM can now generate polished artifacts that look strategic and well-reasoned. But without deep exposure to real tradeoffs, how do they recognize when a prioritization model hides a fragile assumption? How do they sense when a roadmap story will collapse under executive pressure? How do they distinguish between coherence and conviction?
We may be entering a phase where output quality increases while experiential learning decreases.
That should give product leaders pause.
Because product management is not primarily about document production. It is about decision-making under uncertainty.
If we compress the artifact-building layer without redesigning the learning layer, we risk creating professionals who can generate, but struggle to evaluate.
I am genuinely impressed by what AI has enabled in this project. It has accelerated research synthesis, sharpened competitive positioning, clarified doctrine, and allowed me to influence early system design in ways that previously would have required significantly more coordination and time.
It has made me more effective.
But this is not merely a productivity story.
It is a capability pipeline story.
If we, as product leaders, embrace AI purely as a leverage tool without rethinking how experience is developed, we may unintentionally erode the apprenticeship model that shaped us.
The question is not whether AI makes senior product leaders more effective. It clearly does.
The deeper question is whether we will intentionally redesign how emerging product managers develop judgment in an AI-augmented world — or whether we will discover, over time, that we optimized for output while quietly weakening discernment.
That is the conversation I believe product leaders need to begin having now.
What do you think?