Built With People, Not Around Them

I lead AI at an employee-owned company with 165 owners across 13 offices.

That changes how you think about adoption.

When every person in the building has a stake in the outcome, you can’t treat AI like a cost-cutting exercise. You can’t quietly automate workflows and hope nobody notices. You can’t lead with efficiency and skip trust.

The math is different when the people affected by your decisions are also the people who own the company.

It means every decision I make about AI has to survive a simple question: does this make our people better at what they do, or does it make them feel disposable?

If it’s the second one, it doesn’t ship.

The loudest conversation in AI right now is about capability. What it can do. How fast. How cheap.

That’s not the hard part anymore.

The hard part is people.

What happens to the work they do. Whether their experience still matters. Whether the person building AI systems at their company sees them as part of the plan or in the way of it.

Most companies skip that part.

They roll out tools. They send a training link.

They call it adoption.

It’s not.

That’s deployment.

And deployment without trust doesn’t hold.

I believe you start with listening.

Not tools. Not platforms. Not a rollout plan. You sit with the people who do the work and ask what’s actually hard about their day. Where they’re losing time. What frustrates them. What they wish they had more space for.

The answers are almost never what you expect. The bottleneck is rarely where leadership thinks it is. It’s usually somewhere quieter. Somewhere nobody thought to look because nobody thought to ask.

If you skip that step, everything you build afterward is guesswork dressed up as strategy.

I believe governance comes before adoption.

Clear boundaries. Approved tools. Explicit guidance on where AI operates and where humans stay in control. Not because compliance demands it but because trust demands it.

If you’re going to ask people to work alongside something they don’t fully understand yet, you owe them rules they can see and hold you to.

Not buried in a policy doc nobody reads.

Visible. Referenced. Enforced.

Governance isn’t what slows AI down.

It’s what lets it move without breaking trust.

And trust in this space is hard to get back once you lose it.

I believe in meeting people where they are.

Not everyone processes change the same way. Some people want to experiment. Some people want to watch first. Some people need to sit with it before they’re ready to move.

The goal isn’t to wait.

It’s to make sure nobody gets left behind on the way forward.

That means building programs that support different learning styles. Creating spaces where it’s safe to be slow. Where asking a basic question doesn’t feel like admitting you’re behind.

If your adoption plan only works for the people who were already excited, you’re not really adopting.

You’re just rewarding the early movers and leaving everyone else to figure it out.

I believe the best input comes from the middle.

Not the executive suite. Not the vendor pitch. The people closest to the work are the ones who see what AI can actually help with and where it falls apart.

They know which tasks eat their day. They know which processes are broken. They know when a tool is solving a problem nobody actually has.

If you don’t build a way to hear from them consistently, you’re building blind.

I believe in killing things that don’t work.

If an initiative isn’t right for the people or the workflow I pull it. I don’t force adoption because it fits a narrative. I don’t keep a tool alive because it looked good in a demo.

Most AI leaders won’t talk about the things they stopped. But pulling something back is how people know you’re actually paying attention. That it’s not just running on autopilot.

Knowing when to stop is harder than knowing when to start.

And it matters more.

I believe silence is the real enemy.

Not resistance. Not skepticism.

Silence.

When people don’t know what’s happening, they fill the gap with fear.

And fear is where you lose them.

It doesn’t come from not understanding AI. It comes from not knowing what the people in charge of AI are doing with it. From hearing nothing and assuming the worst.

The answer isn’t a memo.

It’s showing up.

Regularly.

Saying here’s what’s happening. Here’s what’s coming.

Here’s where you fit in it.

And meaning it.

Not one town hall. Not one Teams message.

Ongoing presence. The kind that makes people feel informed instead of blindsided.

AI is good at speed.

Processing. Pattern recognition at scale.

It’s not good at knowing what matters.

That takes judgment. Relationships. Experience.

The things that don’t compress.

My job isn’t to replace those things.

It’s to clear the path so people can spend more of their time on them. To automate the parts of the day that drain expertise so the expertise has room to breathe.

If you lead AI at any company, the question isn’t what this technology can do.

The question is: are you building it with the people or around them?

I know where I stand on that.
