A composer doesn’t play every instrument. They hold the structure. They understand what the strings can carry and where the brass will overwhelm everything else. They make decisions about what the piece is trying to accomplish — not by executing the notes, but by shaping the intent behind them.
That framing sat with me for a while before I understood why it felt so familiar.
Because that’s what the work has always been. Before AI, before vibe coding became a discourse, before anyone started debating whether non-technical people could now build — the actual job was translation. Business problem to system structure. Ambiguous question to buildable requirement. Stakeholder intuition to something an engineer could act on. You weren’t playing the instrument. You were writing the score.
The vibe coding debate missed this entirely. And it’s worth understanding why — because the framing matters for what comes next.
What the debate actually gets wrong
The discourse frames vibe coding as a technical access question. Can non-engineers build now? Does that make them dangerous, or unstoppable? It’s a real question. But for people who have spent careers doing translation work, it lands strangely.
The bottleneck was never “can I write this code.”
The bottleneck was: do I know what this should actually do, and why? Can I articulate the requirements clearly enough that the output survives contact with the business problem it was supposed to solve? Can I see the failure modes before they happen?
Those are not technical questions. They never were. They’re compositional ones.
AI gave the composer access to the piano. It didn’t make them a composer. The capacity to generate output — faster, with less friction — doesn’t resolve the upstream question of whether you’re generating the right output. It amplifies whatever judgment you brought in with you.
That’s the distinction the debate keeps missing. Everyone is debating the instrument. Almost nobody is talking about the score.
The musician’s edge is real — and it’s compressing
Give this its due, because dismissing it is the wrong move.
Deep technical fluency produces something real. Engineers who have spent years inside production systems develop a kind of structural intuition that’s hard to replicate from the outside. They know where things fail at scale. They recognize the patterns that look fine in design and break immediately in practice. They’ve felt the weight of technical debt in their hands.
That’s not nothing. A composer who has never played an instrument can still write music — but there are things they won’t hear until someone plays it back to them. The musician catches those things. The advantage is genuine.
But the edge is compressing.
AI is closing the distance between knowing what to build and being able to build it. The fluency advantage that took years to develop — reading code, writing scripts, navigating infrastructure — is becoming less decisive. Not gone. Not irrelevant. But no longer the moat it was. Someone with strong compositional judgment and access to good AI tooling can now cover ground that would have taken years to traverse manually.
The direction of travel is clear even if the destination isn’t settled. The question is what fills the space where raw technical fluency used to be the differentiator.
The conductor — what it actually requires
There’s a frame forming here that I think is directionally right, even if the practice of it is still being figured out.
The conductor.
Not a composer who learned to play every instrument. Not a musician who got promoted. The conductor holds the full picture — score, orchestra, performance — and leads with judgment rather than execution. Doesn’t play every part, but has played enough to know what’s possible and what isn’t. Can hear when something is off before the audience does.
What does conductor-level judgment actually require for data and product leaders?
Some of it is structural intuition about technical systems — how they degrade under real conditions, where the constraints actually live versus where stakeholders assume they live, what “done” means in a system that has downstream dependencies and upstream ambiguities. You develop this through proximity to production, not proximity to documentation.
Some of it is organizational. The conductor’s job isn’t just to read the score — it’s to hold the performance together when the orchestra changes around them. Different players, different tempos, different interpretations of the same notes. In enterprise data work, that means holding coherence across teams with different definitions of quality, different risk tolerances, different relationships to the underlying systems. AI doesn’t simplify this. It accelerates it. The organizational coordination layer becomes more consequential, not less.
And some of it is judgment about when to direct and when to let the orchestra lead. This is the part that’s hardest to teach and easiest to underestimate. Conductors who micromanage every measure produce technically correct performances that feel mechanical. Conductors who give too much latitude get beautiful individual moments that never cohere into a piece. The balance requires knowing the score well enough to know which deviations are expressive and which are errors.
I’m naming this as a direction, not a solved problem. What conductor-level judgment looks like inside a specific enterprise context — in practice, not in theory — is still being worked out by the people doing it. That’s honest. It’s also why it’s worth thinking about now.
What this means for enterprise data and product work specifically
Enterprise data work has a version of this that’s distinct from general product or engineering discourse, and it’s worth naming directly.
The distance between a business question and a working data product isn’t primarily technical. It’s organizational. You’re coordinating across functions with different definitions of done. You’re working inside systems where the data means something different to the team that produces it and the team that consumes it. You’re navigating risk tolerances that exist for reasons that aren’t always legible from the outside.
AI compresses parts of the execution layer. It does not eliminate the compositional challenge. What it does is raise the stakes for getting the score right.
This is the thing that gets lost in the speed conversation. Moving faster with worse judgment isn’t progress. It’s shipping the wrong thing at higher velocity. The execution layer becoming cheaper and faster doesn’t change what it’s executing. It just makes the cost of a bad score more visible, more quickly.
What I’ve watched happen — in environments where AI tools get introduced without this kind of clarity — is that the speed benefit materializes immediately and the judgment deficit surfaces later. The output comes faster. The quality of what was decided to build, and whether it addressed the actual problem, is still determined by whoever did that work upstream.
The composer question, in other words, doesn’t go away. It moves closer to the front.
The challenge
Here’s what I keep coming back to.
Most people I know who are good at this work have been doing compositional thinking their whole careers. They just didn’t call it that. It was buried inside a job description that emphasized execution, or hidden inside a role that required technical credibility to be taken seriously.
AI is making that layer more legible. Which is clarifying. But it's also raising the bar for what the role requires.
The question now is whether you’re developing the conductor’s ear — the judgment to lead a faster, more capable orchestra — or whether you’re using new tools to move faster in the same direction you were already going.
Those are different things. One of them is a capability that compounds. The other is just acceleration.
The people who figure this out won’t be the fastest adopters. They’ll be the ones who got clearer about what the role actually requires, and built toward that — deliberately, not just reactively.
That’s the question worth sitting with.