Antonina Liske:

Your point about outcome-based management hits even harder when you've lived through what AI can actually do.

Exactly one year ago, my team took on a project we would never have even considered without AI tools. None of us knew the programming languages it required: we're primarily a Python shop, but this project needed JavaScript on the frontend and Node.js on the backend. When the request came in, we used Claude to quickly build an MVP during the estimation phase. Based on that, we decided to take the risk, even though we had serious doubts about whether we'd be able to maintain it or make changes later. We took the bet that our skills, and Claude itself, would continue to evolve enough to keep the system alive.

It was a 3D visualization service for designing playgrounds. I watched, in real time, as an incredible product emerged from basically nothing. Today, we still maintain it, and here's the thing: we maintain it with Claude. What started as a one-time experiment became our new normal. The codebase we once feared we couldn't touch is now something we navigate daily, with AI as a permanent pair programmer.

But that experiment revealed something bigger. Now that we have AI-enabled people on the team, I'm realizing the real challenge isn't just estimation; it's org design. How do you organize people whose abilities have become much more rounded, and who can now cover a much wider range of responsibilities? The old structures, built around narrow specializations and sequential handoffs, no longer make sense. I'm starting to see that we might need to redesign a few processes from the ground up, with fewer handovers and fewer steps in the value flow, because when one person (plus AI) can do what used to take three, the bottleneck shifts from "who has the skills" to "how we structure the work itself."

And that brings me to your point about outcomes vs. hours. It feels less like a scoping issue and more like a symptom of an org structure that hasn't caught up to what AI-enabled people can actually do. We're trying to fit these new capabilities into old containers.

You're absolutely right that the shift to outcomes is necessary. But I'd add: it also demands a shift in how we think about roles, handoffs, and team structure. Otherwise, we end up with AI-enabled people operating inside a machine that was built for a different era.

Thanks for sparking this conversation. It's exactly what founders should be wrestling with right now.
