The Oldest Management Theory Explains AI Better Than Anyone in Tech Will
There's a concept from the early 1900s called scientific management, Frederick Taylor's big idea. Break complex work into structured, assignable units. Figure out what a worker needs to know, give them that context, define success, and get out of the way.
That's it. That's the whole skill behind using AI well. Not prompt engineering, not understanding transformer architectures. Management.
The Intern With Amnesia
Here's how I think about it. Imagine the most talented intern you've ever seen. They can do anything you throw at them, instantly, at an absurdly high level. Code, analysis, writing, design, whatever. But they have complete amnesia. Every single morning they show up with zero memory of where they work, what the company does, or what happened yesterday.
That's what an AI agent is. Infinitely capable, zero context.
So the bottleneck is never talent. It's always the setup. Did you tell it what it needs to know? Did you scope the work clearly? Did you define what good looks like? Most people fail at this in both directions. They ask for something too small because they don't trust it, and then they don't even give it enough context to do that small thing well.
The people getting real leverage out of AI treat it like onboarding a brilliant new hire every single time. What does this person need to know to succeed? That's a management question, not a technical one.
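To make that concrete, here's a minimal sketch of what the onboarding move looks like as code. Everything in it is hypothetical (the fields, the function names, the example task); the point is the shape: context, scope, and a definition of done, assembled fresh every single session.

```python
# A sketch of "onboarding" the amnesiac intern before every task.
# The structure is the point, not the specific fields.

def build_briefing(context: str, task: str, success_criteria: list[str]) -> str:
    """Assemble everything a worker with zero memory needs to succeed today."""
    criteria = "\n".join(f"- {c}" for c in success_criteria)
    return (
        f"Background you need (you remember nothing):\n{context}\n\n"
        f"Your task, scoped to one deliverable:\n{task}\n\n"
        f"Definition of done:\n{criteria}"
    )

briefing = build_briefing(
    context="We sell B2B invoicing software. Tone: plain, direct.",
    task="Draft the release notes for the late-fee feature.",
    success_criteria=[
        "Under 200 words",
        "Leads with the customer benefit, not the feature name",
        "No unexplained jargon",
    ],
)
print(briefing)  # this string is the management work; the model just executes
```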
The Learning Problem
Here's where it gets interesting. A real intern learns. You mentor someone in January, and by June they don't need hand-holding anymore. They build judgment. They start anticipating what you want.
AI doesn't do that. The amnesia is permanent. Every session starts from scratch.
So who holds the learning? You do. The system you build around the agent does. You run it, review the output, figure out where it went wrong (almost always a context gap), and then you update the inputs for next time. Over weeks and months, the agent's environment gets sharper. The outputs get more reliable. It looks like the agent is getting better at its job, but the improvement actually lives in the system design, not in the worker.
You're doing the learning and encoding it into infrastructure. It's a feedback loop, but the experience accumulates in the architecture instead of in someone's head.
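Here's a sketch of that loop, assuming a hypothetical context file and a human review step (run_task and review_output are stand-ins, not any real API). The agent forgets everything between sessions; the file doesn't.

```python
# A sketch of the loop where learning accumulates in the system,
# not the worker. CONTEXT_FILE, run_task, and review_output are
# hypothetical stand-ins for whatever your actual process is.

from pathlib import Path

CONTEXT_FILE = Path("agent_context.md")  # the "institutional memory"

def run_task(briefing: str) -> str:
    """Placeholder for whatever executes the agent."""
    return f"[agent output for a briefing of {len(briefing)} chars]"

def review_output(output: str) -> str | None:
    """A human reviews the result and names the context gap, if any."""
    # e.g. return "Always quote prices in EUR, not USD."
    return None  # None means the output was fine

context = CONTEXT_FILE.read_text() if CONTEXT_FILE.exists() else ""
output = run_task(context + "\n\nTask: summarize this week's tickets.")

gap = review_output(output)
if gap:
    # The agent never learns; its environment does. The next session
    # starts from a sharper context file than this one did.
    CONTEXT_FILE.write_text(context + f"\n- {gap}")
```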
Where This Goes
Look at which skills are gaining value and which are losing it. Anything that's primarily execution (writing boilerplate, summarizing documents, generating reports) is getting automated faster every month. If that's the core of your job, something that doesn't sleep and costs pennies an hour is coming for it.
But the ability to look at a problem, figure out how to break it apart, decide what context matters, and build systems that let a capable worker succeed despite starting from zero? That gets more valuable every time the models improve. A better model doesn't make the person orchestrating it less necessary. It gives them more leverage.
I don't know where this ends up. Maybe companies shrink to skeleton crews, maybe it just reshuffles what everyone does. The endpoint is genuinely unclear. But the direction isn't. The skill that survives is the one that's been around since 1911: figuring out how to organize work so that workers can do it. The only difference is that the worker is now intelligence in a context window instead of a person on a factory floor.
The theory hasn't changed. The worker has.