
How I Actually Use AI as a Senior Executive

There’s a McKinsey study that’s been circulating in boardrooms for years — the one that distills what separates the best CEOs from the rest into six mindsets. Set the direction. Align people. Mobilize teams. Engage the board. Connect with stakeholders. Enhance personal effectiveness.

It’s solid work. I’ve referenced it in coaching conversations, in board discussions, in my own thinking about what the job of a senior executive actually is.

But something has been nagging at me over the past year.

I’ve been using AI — Claude, specifically, though the tool matters less than the practice — as a genuine thinking partner. Not for drafting emails or summarizing documents, though it does that too. For the hard stuff. Red-teaming a strategy before I present it. Pressure-testing assumptions I didn’t know I was making. Preparing for conversations where the stakes are high enough that I can’t afford to discover my blind spots in the room.

And what I’ve found is that AI doesn’t just make the six mindsets more efficient. It changes what some of them mean. That’s a more uncomfortable realization than it sounds, because it raises a question I haven’t fully answered: if AI can do a meaningful portion of the cognitive work that used to define executive value, what exactly is the executive for?

I don’t have a clean framework to offer. But I have three shifts I’ve experienced firsthand — and they’ve changed how I think about the job.

The First Shift: Strategy as Stress-Test, Not Storytelling

When I was CTO, and later CDO, at major Vietnamese banks, strategic planning followed a familiar rhythm. Build the thesis over weeks. Pressure-test it with a small inner circle — people who largely shared my assumptions. Refine the narrative. Present to the board. Defend it under questioning.

The problem with this process isn’t that it’s slow, though it is. The problem is that your inner circle has the same blind spots you do. They’ve been marinating in the same context, the same market data, the same organizational politics. The red team isn’t really red.

AI changed this for me in a way I didn’t expect.

Last year I was developing a strategy for a new market segment — a significant bet that would require board approval and meaningful capital allocation. I had the thesis. I believed in it. And before I shared it with anyone, I did something that felt strange at the time: I built a few agents and asked Claude to destroy my thesis.

Not “give me feedback.” Destroy it. I gave it the regulatory framework, the competitive landscape, the financial projections, and a specific instruction: act as the most skeptical board member in the room. Find the capital adequacy holes. Identify the assumptions I’m treating as facts. Tell me what I’m not seeing.

What came back wasn’t perfect. Some of the objections were generic. But three of them were genuinely sharp — angles I hadn’t considered, not because I’m not thorough, but because I was too close to the thesis to see its structural weaknesses. One of them — a regulatory risk I had unconsciously categorized as low-probability — turned out to be the exact question the board chair raised three weeks later.

I walked into that meeting having already rebuilt the argument to address it.

This is what I mean by strategy as stress-test rather than storytelling. The traditional executive process is: build the best case, present it persuasively, handle objections on the fly. The AI-augmented process is: build the best case, then systematically try to break it from every angle before anyone else sees it, then present the version that has already survived attack.

The difference isn’t speed. It’s that you arrive at conviction through adversarial pressure, not through the echo chamber of your own team.

The Second Shift: Managing by Signal, Not by Standup

I wrote recently about instrumenting my engineering team’s pipeline with LinearB — connecting Git and Jira data to surface DORA metrics, cycle time breakdowns, planning accuracy. The numbers were humbling. Our planning accuracy was 25%. Our pickup time — the dead silence between a developer opening a pull request and someone starting to review it — was eating nearly a third of our total cycle time. I had no idea until the data showed me.
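The pickup-time arithmetic itself is mechanical once pull-request events are timestamped. A minimal sketch of the calculation — the PR records and the resulting percentage below are invented for illustration, not my actual LinearB export:

```python
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    """Hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Hypothetical PR records: when each PR was opened, first reviewed, and merged.
prs = [
    {"opened": "2024-03-01T09:00:00", "first_review": "2024-03-02T15:00:00", "merged": "2024-03-03T11:00:00"},
    {"opened": "2024-03-04T10:00:00", "first_review": "2024-03-04T18:00:00", "merged": "2024-03-06T09:00:00"},
]

# Pickup time: the dead silence between opening a PR and the first review.
pickup = sum(hours_between(p["opened"], p["first_review"]) for p in prs)
# Open-to-merge time for the same PRs (one slice of total cycle time).
cycle = sum(hours_between(p["opened"], p["merged"]) for p in prs)
print(f"Pickup time is {pickup / cycle:.0%} of open-to-merge time")
```

The point is not the code — any analytics tool does this — but that the quantity is simple enough that no one has an excuse for not knowing it.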

But the deeper lesson wasn’t about engineering metrics. It was about how executives manage.

Most senior leaders manage through two channels: scheduled meetings (standups, 1:1s, QBRs) and escalations (something breaks, someone complains, a number goes red on a dashboard). Both are lagging indicators. By the time something surfaces in a meeting or an escalation, the underlying problem is weeks or months old.

AI-augmented management adds a third channel: continuous signal detection. Not more dashboards — those already exist and most executives ignore them. What’s different is the ability to ask natural-language questions of your operational data and get answers that would have previously required an analyst and a week of work.

Before a 1:1 with a department head, I can ask: show me the delta between what this team committed to in Q1 and what they actually delivered. Highlight any language shifts in their retrospectives that suggest a project is slipping before they’ve formally flagged it. Compare their velocity trend against the two quarters prior.

I’m not using this to catch people out. I’m using it to walk into a 30-minute meeting and spend the entire time on coaching, strategy, and unblocking — instead of spending the first 20 minutes on status updates that I could have absorbed asynchronously.

The shift is subtle but significant: from managing by meeting to managing by signal. The meetings still happen. But they’re about different things now. Harder things. More useful things.

And there’s a Goodhart’s Law dimension here that I’ve become very conscious of. When you can see metrics in real time, the temptation is to manage to the metrics — to make the numbers move. But the numbers are a system, not a scoreboard. Improving one in isolation often degrades another. The AI doesn’t just surface the data; it helps me see the interactions between metrics that would be invisible in a spreadsheet. That’s the difference between having data and having understanding.

The Third Shift: The Decision-Triage Problem

This one is the most personal, and the most unresolved.

A senior executive’s scarcest resource isn’t time. It’s decision quality. You make hundreds of small decisions a day. Most of them don’t matter. A few of them will echo for years. And the cognitive challenge is that the important ones don’t arrive wearing labels. They look like routine emails, offhand comments in meetings, approval requests buried in a queue of twenty others.

I’ve started using AI to triage — not to make decisions, but to sort the incoming stream into categories. What’s routine and can be handled by a direct report with a standard framework? What needs more data before I can decide intelligently? What is genuinely a decision that only I can make, because it involves judgment, relationships, or risk tolerance that can’t be delegated?

This sounds simple. It’s not. It requires being honest about which decisions actually need you and which ones you’ve been holding onto because letting go feels like losing control. AI is surprisingly good at this — not because it understands your organization’s politics, but because when you force yourself to articulate the criteria for “CEO-only decisions” clearly enough for a model to apply them, you discover how few of your daily decisions actually meet that bar.
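What "clear enough for a model to apply" means in practice is worth making concrete. A toy sketch with entirely hypothetical criteria and requests — in a real setup these rules would be handed to a model as a rubric, but writing them as plain predicates is the same honesty exercise:

```python
# Toy decision triage over explicit, hypothetical criteria.
ROUTINE_CAP = 50_000  # invented spend threshold a direct report can approve

def triage(request: dict) -> str:
    """Sort a decision request into one of three queues."""
    if request.get("missing_data"):
        return "needs-data"        # can't decide intelligently yet
    if (request.get("irreversible") or request.get("sets_precedent")
            or request.get("spend", 0) > ROUTINE_CAP):
        return "executive-only"    # judgment, relationships, risk tolerance
    return "delegate"              # standard framework, direct report decides

requests = [
    {"title": "Renew SaaS contract", "spend": 12_000},
    {"title": "Exit market segment", "irreversible": True},
    {"title": "New vendor, no benchmark yet", "missing_data": True},
]
for r in requests:
    print(f"{r['title']}: {triage(r)}")
```

Run this over a week of your own queue and the discovery is usually the same one I describe above: very few requests ever reach the executive-only branch.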

The uncomfortable implication is that most of what senior executives spend their time on isn’t executive work. It’s operational work that has drifted upward because the organization doesn’t have clear decision rights, or because the executive doesn’t trust the layer below, or simply because of habit. AI doesn’t solve the trust problem or the organizational design problem. But it makes the pattern visible in a way that’s hard to ignore.

I’m still working on this one. I don’t have it figured out. But the direction feels right: spend less time deciding, more time on the decisions that actually matter, and use AI to make the distinction sharper than human intuition alone can manage.

What This Doesn’t Resolve

I want to be honest about the limits of what I’ve described.

AI as a thinking partner is powerful. It’s also seductive in a way that should concern any executive who takes the job seriously. The risk isn’t that AI gives you bad advice — it sometimes does, but you learn to catch that. The risk is that it makes you feel more certain than you should be. You’ve stress-tested the strategy. You’ve analyzed the team data. You’ve triaged the decisions. Everything feels rigorous and evidence-based. But the model doesn’t know what it doesn’t know, and neither do you — and the sense of thoroughness can mask the gaps that remain.

The other risk is subtler. If AI can red-team your strategy, analyze your team’s performance, triage your decisions, and prepare your board materials — what is the executive’s unique contribution? I think the answer is judgment: the integration of context, relationships, values, and risk tolerance that no model possesses. The ability to read a room. The willingness to make a call when the data is insufficient. The emotional labor of leading people through uncertainty.

But I’m aware that “judgment” is what every profession claims as its irreducible human core, right until the moment it isn’t.

I don’t have a resolution for this tension. I suspect no one does yet. What I do know is that the executives who refuse to engage with AI as a thinking tool will be outperformed by those who do — and that the ones who engage with it most thoughtfully will be the ones who stay honest about what it can’t replace.

The McKinsey framework identifies six mindsets of excellent CEOs. I’ve come to believe there’s an unofficial seventh: the willingness to let a machine challenge your thinking, and the self-awareness to know when to listen and when to override.

That seventh mindset doesn’t have a name yet. But the executives who develop it will be the ones who define what leadership looks like in the next decade.