AI for Leaders: The Questions That Matter (And They Are Not Prompts)

By Sophie Makonnen

French version

Recently, I wrote about what AI can’t replace in leadership, and why your judgment and experience are vital career assets as AI becomes a central work topic.

The stakes are real. The US CEO of PwC recently stated that those who resist AI have no place at the firm. Anyone who believes they have the opportunity to opt out, he said, is "not going to be here that long." And yet, in the same breath, he acknowledged that tracking usage alone is not helpful. In other words, it is not about using AI. It is about using it well: ensuring that AI creates real added value in your work, beyond what performance indicators can measure. A recent study on AI in high-stakes simulations put it plainly: "Human leaders operate under the weight of historical memory and ethical caution. AI models are solely focused on achieving a goal." That distinction matters whether you are navigating a geopolitical crisis or preparing a business case.

A recent joint study by KPMG and the University of Texas at Austin, based on the analysis of over 1.4 million AI interactions from 2,500 employees, found that only around 5% used AI in ways that could be described as truly sophisticated.  And perhaps surprisingly, it was more senior employees, not junior ones, who tended to fall into that category. Comfort with the tool is not the same as sophistication in using it.

What set those users apart? The study identified four consistent behaviors.  They treated AI as a reasoning partner rather than an answer machine.  They pushed back on outputs, refined them, and iterated rather than accepting the first result.  They delegated complex, multi-step tasks with clear objectives and constraints.  And they used AI across a wide range of work, intentionally switching between platforms depending on the task.  In other words, they did not take AI-generated content at face value. They kept pushing.

That is precisely what I was pointing to in What AI Cannot Replace, not backed by research (I am not KPMG 😊) but based on my own experience as a user. And that realisation also came with some exasperation. As John Winsor writes in Has AI Ended Thought Leadership?: AI has created a faux-expert crisis. Polished content, hollow thinking. Being able to generate a polished report does not make you an expert in a field you know nothing about. That is the trap: sounding proficient in subjects you know little about. For those who know their field, however, AI is a powerful tool to support and deepen the application of that knowledge.

So what does sophisticated use look like in practice? Whether you are just finding your footing with AI or already using it in more advanced ways, the six categories of questions that follow, drawing on research published in Harvard Business Review, can help deepen that engagement, so that AI becomes a catalyst for your thinking, not a replacement for it.  Prompting techniques alone are not enough.

1. Investigative: What do we know?

This is the category where you use AI to map the territory before forming an opinion. What does the research say? What are the known positions on this? What has already been tried?

Suggested questions:

  • What are the main arguments for and against this approach?

  • What does the evidence say about this issue?

  • What am I potentially missing in how I have framed this problem?

Example: A small business owner needs to draft a privacy policy for her website. Before calling a lawyer, she asks AI to summarize the relevant regulations, what a privacy policy typically needs to cover for a business of her size, and what others in her industry have put in place. AI maps the territory quickly. She now has a clear picture of what is expected and a strong draft to work from. When she does bring in a lawyer to review it, she walks in informed, with specific questions rather than a blank page. The conversation is shorter, sharper, and considerably more efficient.

2. Alternative paths: How else could we get there?

This is where you use AI to challenge your assumptions and explore roads you would not have considered on your own. The goal is not to find more options within the existing frame, but to question the frame itself.

Questions to ask:

  • What is a completely different way to approach this problem?

  • What assumptions am I making that I have not questioned?

  • What would an unconventional solution look like here?

Example: A program manager learns that a significant budget cut will affect a project mid-implementation. A traditional use of AI here would be to ask it to identify where costs can be reduced with the least impact, a legitimate and useful starting point. But she pushes further. She asks AI to generate unconventional responses to mid-project funding cuts, drawing on what other organisations have done in similar situations. AI surfaces a co-delivery model with a partner organisation she had not considered, a phased pilot approach that could unlock additional funding based on early results, and a scope redesign that preserves core outcomes at lower cost. None of these came from her original thinking. She does not adopt any of them directly, but two of them open up directions worth exploring. The exercise does not give her the answer. It breaks open the question.

3. Action: What does it take to make this real?

Once you have a direction, the next question is whether you can actually get there. This is where you can use AI to think through what execution requires: the steps, the people, the sequencing, and the obstacles that could get in the way before they become surprises.

Questions to ask:

  • What would actually need to happen to implement this?

  • Who needs to be involved, and in what order?

  • What are the most likely obstacles, and how have others addressed them?

Example: A team leader loses two key team members within a short period. She has already decided on a restructuring approach. Now the question is how to make it happen. She asks AI to help her map the concrete steps: what needs to be communicated first and to whom, what the realistic timeline looks like, where the highest risk of disruption sits, and how other leaders have managed similar transitions without losing momentum. AI helps her build a sequenced action plan. But she knows her team, the sensitivities, the unspoken dynamics, and the right moment to have certain conversations. No action plan captures that. She goes into execution with a clearer map and her own judgment intact.

4. Reflective: What did we learn, and how can we improve?

This is where you step back and draw meaning from what AI has produced. It is not enough to have information. The question is what it actually means for your specific situation.

Questions to ask:

  • What does this tell me that I did not already know?

  • How does this change or confirm how I was thinking about this?

  • What is the most important takeaway here for my specific context?

Example: A program coordinator has submitted a proposal twice and received vague feedback both times. She asks AI to analyze the proposal and identify where the argument may be weak, where the logic does not fully hold, or where leadership might have legitimate concerns. AI surfaces three angles she had not considered, including a gap in how the budget rationale was presented. She does not accept everything AI flags, but it changes how she reads her own work. And that is enough to go back in differently.

5. Subjective: What is unsaid?

This is the category where you surface what AI cannot see: the human dynamics, the political realities, the unspoken tensions that shape whether something will actually work. AI has no visibility into any of this. You do.

Suggested questions:

  • Who is likely to resist this, and why?

  • What is not being said out loud that could derail this?

  • Whose interests are affected here that are not visible in the document?

Example: A team leader is asked to implement a new reporting process imposed by senior leadership. On paper it is straightforward. In practice she knows her team is already stretched, that one team member has been vocal about feeling micromanaged, and that a key stakeholder outside her team will resist the extra administrative burden. She asks AI to help her think through a communication and implementation plan. AI produces something clean and logical. But none of those dynamics appear in it, because she did not put them in. The value of this category is the reminder that she needs to. The questions AI cannot ask are often the ones that determine whether the plan actually lands.

6. Evaluative: How good is this?

This is where you step back and assess the quality and relevance of what AI has produced before you do anything with it. This is the category that connects most directly to the AI wall concept.

Suggested questions:

  • Does this reflect the reality of my specific context, or is it generic?

  • What is missing, oversimplified, or potentially wrong here?

  • Would someone with deep knowledge of this subject find this credible?

Example: A leader is preparing a report based on publicly available statistics. She uses AI to help pull together the data, identify trends, and structure the analysis. Once a first version is ready, she does not stop there. She submits it to a different AI platform and asks it to challenge the analysis, identify weak arguments, flag unsupported conclusions, and point to anything that does not hold up. She also shares it with a trusted colleague who knows the subject. The goal is not to find the perfect output. It is to pressure test it before it reaches an audience that will scrutinize it. Evaluating what AI produces is not a final step. It is a habit.

 

AI is evolving fast. How we work with it needs to evolve just as fast. And for anyone serious about their career and their work, that evolution is not only about the technology. It is about critical thinking and curiosity.

I will admit something. There are moments when using AI makes me feel like I am cheating. That the work is not truly mine. That I am somehow less legitimate for using it. Then I think about a recent project where I used AI extensively to pull together data from multiple sources, to analyze and organize it so I could present the big picture and identify trends, and to draft sections of the document. Work that would have taken me several days on my own. What AI could not do was decide what mattered, question what did not add up, push the reasoning further, or ensure the final product reflected the reality I knew from experience.

AI did not replace my thinking. It freed it. When the retrieval, the formatting, and the organizing are handled, your energy goes somewhere else entirely. To the ideas themselves. To the train of thought. To stopping, reading back, questioning, refining. That is where the real work happens. And that is still entirely yours.

