What AI Cannot Replace

By Sophie Makonnen


Artificial intelligence is now part of daily professional work. Since 2022, generative AI tools have made drafting text, organizing ideas, and summarizing information fast and routine. Conversations focus on learning to use these tools well, prompting them effectively, and considering their impact on the workplace.

For those leading a team, however, the questions may lie elsewhere.

The central question for team leaders is not simply how quickly AI can deliver results, but how accurately we can judge the quality and relevance of what AI produces. How does that judgment affect the outcomes for which we are responsible as leaders? The answer lies in understanding the tool's capabilities and limitations. Generative AI cannot, without sufficient context, determine what truly matters in a given situation, what deserves further attention, or what can be set aside. That responsibility remains with the person doing the work, and, as a leader, with ensuring your team recognizes this distinction.

The good news: AI can do a lot. The caveat: the thinking is still on us.

The AI wall

A recent study by researchers from Harvard Business School and Stanford University, conducted inside a large global company, tested whether generative AI could help people outside a specialized role perform that role's work at the same level as those who do it every day.

The findings were clear on one point: AI is a strong equalizer when the work involves framing ideas, outlining a plan, or structuring a concept. Across all skill levels and backgrounds, participants with AI access produced comparable results at this stage.

But when the work shifted to actual execution (writing, shaping an argument, making judgment calls about what to include and what to cut), a gap emerged. People with relevant domain knowledge used AI to accelerate their work. People without it used AI confidently but produced weaker results, often without realizing it. They were editing out the very elements that made the work effective, because they didn't know what those elements were for.

The researchers named this the "GenAI wall": the point at which AI can no longer compensate for the distance between someone's background and the real demands of the work. Conceptual tasks are more easily bridged. Execution tasks, the ones that require situated judgment built through experience, are not.

Before coming across this term in the research, I had been describing something similar to myself as an AI illusion. AI can create the impression that the work is solid simply because it appears structured and polished. But it is often generic and hollow, dressed up in buzzwords and clean formatting. You keep reading, waiting for the idea to arrive. It doesn't. Reading the research was validating: I was not the only one questioning the quality and relevance of what was being generated.

Coming back to my own world, what does it mean for leadership when teams can produce more, faster, and sometimes with more polish than ever before? When volume increases but depth does not, what is the leader's role?

What this means for leadership

For leaders, the key lesson from this research is not primarily about technology. It is about judgment.

Over the past four years, we have seen that while generative AI can help structure thinking, outline ideas, and accelerate early work, it cannot provide contextual judgment. You can supply background and context, but the judgment about what matters comes only from experience, domain knowledge, and an understanding of consequences, not from the tool itself.

In practice, leadership often involves deciding what deserves attention and what does not. It requires distinguishing between ideas that sound convincing and those that will actually work in a specific context. AI can generate options quickly, but when it comes to determining which options make sense in the realities of an organisation, a team, or a project, there is still no substitute for the judgment that comes from being inside that context.

The "AI wall" highlights this boundary. Tools can help people begin the work, but they cannot substitute for the understanding that develops through years of practice and exposure to complex situations. And in complex, politically layered environments, that understanding is rarely visible in a document. It lives in the people who have navigated those situations firsthand.

In this sense, mastering prompts is not the real skill. The real skill is understanding your context deeply enough to give AI precise guidance, as you would to a smart but inexperienced colleague. The quality of AI output depends on the clarity of your questions, the constraints you set, and the discernment you exercise in reviewing results.

That also means knowing when not to delegate to the tool at all.  Some decisions, some conversations, and some assessments of a situation require your full presence and your accumulated understanding.  No prompt captures that.

AI can accelerate work. Leadership still requires discernment.

And that discernment, shaped by experience and reflection, remains something no tool can replace. At least, not yet.

Leading when AI is part of the picture

The idea of the "AI wall" highlights a boundary that many experienced professionals recognize.  

The difference is not access to the technology.  It is the ability to recognize what strengthens an argument, what weakens it, and what may be missing altogether.  That kind of judgment develops over time through experience with the subject, the context, and the consequences of decisions.  It also develops through failure, through having made the call that didn't land, and through knowing why.

From a leadership perspective, this has a few practical implications.

  • AI generates ideas quickly, but leaders shouldn't treat the first output as final.  The value comes from questioning, refining, and testing what emerges.  Ask yourself: Does this reflect my knowledge of this situation?  If not, revisit it.

  • AI operates on patterns, while leaders operate within context.  Elements such as organisational culture, stakeholder dynamics, political realities, and long-term consequences require the situational awareness that leaders contribute.  The more specific context you provide, the more useful the output. Generalities produce generalities.

  • The quality of AI output commonly reflects the quality of the input. Leaders who think clearly about the problem they are trying to solve will obtain more useful support from AI.  In that sense, prompting is less about technical skill than about clarity of thought.

    One practical way to develop that clarity with AI is to build your questions progressively. Start broad, then let each response inform the next, more specific question.  Each step sharpens the thinking.  The tool follows your lead, not the other way around.

These skills are not new, but with AI, they become even more necessary.

The leader's role has not changed

AI has changed how quickly some work is produced. It can add significant efficiencies. But it has not changed the fundamental leadership skill of knowing what deserves your full attention and what does not: the lens of experience and judgment.

Leaders still need to balance what is urgent with what is truly important and ensure that distinction guides how they and their team invest their time and attention.  In fact, AI amplifies the risk of losing that distinction.

Leading with AI is not about speed. It is not about better prompts, familiarity with the latest tools, or knowing which platform produces the cleanest output. It is still about applying experience, context, and judgment to determine what the work actually requires and what it will take to get it right. This has always been what successful leadership demands. AI has simply made it more visible.
