Every time I open LinkedIn, I feel like I'm watching one of those fitness commercials where everyone has a six-pack and apparently got there without breaking a sweat. Someone built an AI agent before breakfast. Someone else automated their entire department in a weekend. Another person is posting their fourteenth "here's what I learned" thread this month, each one more polished than the last.
And I keep thinking the same thing: when are these people doing the actual work?
Because here's what I know from experience: the actual work doesn't look like that. It's not clean, it's not fast, and most of it will never make a good LinkedIn post. But the more I see the highlight reel, the more I feel like someone needs to say what's actually going on.
So let me try.
What It Actually Looks Like
I've been in enterprise tech for over a decade and now run an AI consultancy with my co-founder. You'd think that means I have this figured out. I don't.
I've spent entire evenings trying to connect AI tools to each other, following tutorials that promise a fifteen-minute setup. Step four doesn't work on your machine. Step seven requires a dependency nobody mentioned. Step ten assumes you already know something the guide never explained. You eventually get it working, but "fifteen minutes" turns into three hours and you're not even sure you did it right.
I've built things I was genuinely excited about. Workflows that solved a problem I was convinced I had. And then I didn't use them. Not because they were broken, but because once I had them, I realized the problem I was actually trying to solve was different from what I assumed. That's a lesson no tutorial will ever teach you, and it only comes from doing the work.
I've had AI generate outputs that sounded confident, well-structured, completely articulate, and were just wrong. Not approximately wrong. Factually, dangerously wrong. The kind of wrong that would have gone out into the world with my name on it if I hadn't taken the time to verify.
None of this makes it into anyone's LinkedIn post. But this is what the actual work looks like. It's trying, failing, adjusting, and slowly getting better. Not a breakthrough a day, but a long series of small frustrations that, over time, start compounding into something real.
The Gap Between the Signal and the Reality
When we talk to leaders about AI, we notice something interesting. Almost everyone has a position on it. Very few have experience with it.
Some are openly sceptical. Someone close to me refuses to use any AI tool because she believes it makes you stop thinking for yourself. And honestly, that's not a stupid concern. But there's a difference between healthy scepticism and standing still, and in a space that moves this fast, standing still has consequences.
A much larger group is genuinely interested but hasn't started yet. They ask good questions, they follow the space, they understand why it matters. But there's always a restructuring, a quarterly review, something more urgent. AI stays on the "I really should" list, and that list keeps getting longer. Every month that passes, the gap between intention and action gets a little wider.
And then there's the group that worries me the most. The people who talk about AI with complete confidence but have never actually built anything with it. They know every framework, reference every trend, and sound incredibly knowledgeable in any meeting room. But they've never opened a terminal. Never had something break and had to figure out why. I've been surrounded by these people for years, and the gap between what they signal and what they can actually do is becoming harder and harder to hide.
The people who are genuinely good at this? They're usually the most honest about what they don't know. That honesty is exactly why they're ahead, even though most of them are convinced they're behind.
The Mindset Nobody Talks About
When organizations come to us with their AI ambitions, the first conversation almost always starts the same way. They've seen what's possible, they have big ideas, they want to automate an entire process or build an agent that handles everything.
And then we start looking at the reality together. There's no structured data. The systems don't talk to each other. The process they want to automate has twelve undocumented exceptions. That doesn't mean you shouldn't start, but it means that starting looks nothing like what anyone is posting about.
The people we've seen genuinely succeed with AI share something that has nothing to do with technical skill or budget. They were willing to be bad at it for a while. They tried things that didn't work and asked "what did I do wrong?" instead of concluding "this tool is broken." They treated AI as a skill to develop, not a button to press. And they checked the output before they trusted it, every single time.
That willingness to sit with discomfort, to be a beginner when everyone around you expects expertise, to do the unglamorous work of verifying and adjusting and trying again: that's the actual differentiator. Not the tool. Not the model. Not the prompt template someone shared in a carousel post.
And here's what makes this compound. Once you push through that initial discomfort and something actually works, something you built, something you couldn't do before, you want more of it. You get curious about what else is possible. You try harder things. Your intuition for what works and what doesn't gets sharper. Each iteration builds on the last one, and at some point you realize you've crossed a line you can't even pinpoint. You don't remember when it stopped being hard. It just did.
But that flywheel only starts spinning if you do the work. And the work is messy.
And here's the part that should create real urgency. I notice it in my own work every week: the tools are getting better at working with me, not just the other way around. They remember what I've built, how I think, what I've tried before. Every session picks up where the last one left off. It's like learning a language by living in the country. The first few months are painful, everything is slow, you sound like a child trying to order coffee. But at some point you stop translating in your head and start thinking in it. And once that happens, someone who just downloaded Duolingo can't catch up to you by studying harder for a week. You didn't just learn the vocabulary. You built an intuition that only comes from being immersed, from making mistakes in real conversations, from the accumulated experience of thousands of small interactions that shaped how you think.
That's what's happening with AI right now. And the window to start building that fluency is open, but it won't stay open forever.

What Actually Worries Me
It's not the leaders who haven't figured this out yet. It's the people underneath them.
If you're leading a team and you haven't developed real fluency with these tools, your ability to guide that team is shrinking with every passing month. Not because you're bad at your job, but because the job is changing in a direction you haven't engaged with yet.
Your team members are going to figure this out on their own. They'll experiment, they'll build capability, and at some point they'll wonder why they're still following old processes when they already know a better way exists. The gap between what a leader understands and what their team can do is going to be one of the defining dynamics in organizations over the coming years.
And here's what I think most conversations about AI and leadership get wrong. They frame it as a tech skills gap. It's not. It's an identity problem.
For decades, being a leader meant being the person in the room with the best answers. That worked when answers were hard to come by. But AI just made answers abundant. And when answers are everywhere, the leader who still builds their credibility on having them is standing on ground that's disappearing beneath their feet.
The leaders I respect most right now are the ones who stopped performing expertise and started modelling curiosity. They ask better questions than their teams, not because they know more, but because they've accepted that knowing isn't the job anymore. The job is creating the space where their people can experiment, fail, and build capability faster than the market demands it.
The ones who can't make that shift aren't bad leaders. But they're teaching their teams, without realizing it, that looking confident matters more than getting good. And that lesson compounds just as fast as the skills gap does.

The Honest Version
I wrote this because I think the honest conversation is more useful than another perfectly polished post about someone's AI morning routine.
If you haven't started yet, start. It will be ugly. You'll build things you don't end up using. You'll trust an output you shouldn't have. You'll spend three hours on something a tutorial said would take fifteen minutes. And that's exactly how it's supposed to go.
If you've been performing fluency instead of building it, you already know. The longer you wait, the harder the correction gets.
And if you're already in it, hands dirty, making mistakes, figuring it out as you go, keep going. The messiness isn't a sign that you're doing it wrong. It's a sign that you're actually doing it.
Because the real flex isn't posting about AI. It's shutting up and building something with it.
In the spirit of honesty: this article was written with Claude Opus 4.6, a voice-to-text tool running the Parakeet V3 model (I think better when I talk than when I type), and a personal context system I've built and iterated on over months of working with these tools. It went through multiple rounds of me pushing back, rewriting, and reshaping before it felt like mine. It wasn't magic. It was reps.
