Building an AI-Ready Culture

By Micah Gaudet
Training & Education

We spend a lot of energy getting procurement right. Drafting adaptive contracts. Piloting responsibly. Measuring efficiency. But none of that matters if the culture isn't ready.

You can buy the best tool on the market. You can follow every "responsible AI" guideline. But if your organizational environment is brittle — defensive, siloed, allergic to feedback — the technology will either be rejected outright or absorbed in ways that reinforce dysfunction.

So the real question isn't: What tool should we buy? It's: What kind of culture are we asking this tool to live in?

Culture Is the Environment

In living systems, culture isn't window dressing. It's the climate. It shapes what grows, what adapts, and what breaks down.

If an organization is open, trusting, and iterative, AI can become a catalyst for learning. If it is fearful, rigid, or closed off, AI will amplify those qualities instead. A defensive culture will use AI to surveil. A siloed culture will harden silos. A culture that values dialogue will use AI to expand dialogue.

I love how Cory Smith puts it in his Medium article: "AI isn't a tech project. It's a transformation of how people work, make decisions, and create value. And yet, in countless boardrooms and budget meetings, it's still approached like a software upgrade."

In short: AI doesn't transform organizations. People do. And people act within cultural conditions that either metabolize or resist change.

Even when investment is widespread, successful deployment remains elusive. An OECD.AI working paper argues that achieving outcomes isn't just about the tool; it's about cultivating the cultural conditions for its use. Governments must build an environment of trust, adaptability, and capacity, not just deploy technology.

Signals of Cultural Readiness

So how do you know if... scratch that... how do you get your organization ready for AI?

You see it first in diversity. Not necessarily demographic variety, but the presence of multiple kinds of intelligence at the table: technical skill, historical perspective, frontline experience, and relational knowledge. When decisions are shaped by that blend of voices, the organization has adaptive range.

You see it in porosity. Healthy systems allow information to move across boundaries. A frontline worker can raise a concern without fear of reprisal. A resident can flag an issue without needing political capital. Feedback is not a quarterly ritual; it's a living pattern.

You see it in iteration. Instead of trying to control every variable before acting, the culture treats experimentation as wisdom. Pilots are not proof-of-concept theater; they are learning periods. The question is not, "Did it work?" but, "What did we learn?"

And beneath it all, you see trust. Not blind faith in technology, but mutual trust that change will not be weaponized. Trust that mistakes will be met with learning rather than blame. Trust that adaptation will be supported, not punished.

When these qualities are present, AI is less likely to destabilize. It can become a catalyst, a partner in reflection and renewal. When they are absent, even the best tool risks calcifying dysfunction.

AI as a Disturbance Event

Consider this from the UK National Audit Office: "Another feature of successful innovators is their ability to learn quickly what works and what doesn't, so that failed experiments can be stopped promptly and the resources redirected to more promising ideas. Being open about this can be challenging for government, with its ingrained worry that any failed project represents poor value for money."

Ecologists describe disturbance events — storms, migrations, droughts — that disrupt ecosystems. Some trigger collapse. Others spark renewal. What matters is not the disturbance itself, but the resilience of the environment it enters.

AI is a disturbance event. It rearranges decision-making, authority, and attention. It amplifies existing cultural patterns. If your culture is brittle, AI will fracture it. If your culture is adaptive, AI can surface tensions and open space for learning.

This means we need to treat adoption less like a technical upgrade and more like an ecological introduction. Not: "Does the tool work?" But: "What does it change in us?"

Humans as the Loop

If you've been around AI discussions, you've probably heard the phrase "human in the loop." In practice, it usually means placing a person at some checkpoint in the process — to review, approve, or override the machine's output. At its best, this is meant to provide oversight. But too often, it reduces people to gatekeepers, standing at the edges of a system they did not shape.

That framing devalues human contribution. It suggests that people exist to monitor technology, rather than to shape the outcomes, relationships, and purposes that technology should serve. Oversight becomes a checkbox: approve, escalate, sign off.

But public institutions don't need humans in the loop. They need humans as the loop. People are not interchangeable parts. They are the nervous system that detects what data cannot. They are the memory that carries institutional history. They are the judgment that balances context, values, and care. And crucially, they are the ones who know when it is the right time, place, and context to take a risk.

The World Bank observed that in public organizations, risk aversion and weak incentives often stifle innovation. Machines cannot resolve that dilemma. Algorithms can calculate probabilities, but they cannot sense cultural readiness or bear responsibility for a decision that stretches beyond precedent.

Remember: the airplane didn't build itself. Progress came because humans were willing to take a risk, in a particular time, place, and cultural moment, with all the judgment that required. AI can inform, but only people can decide when risk is worth it.

If AI dulls these human capacities, it doesn't just make mistakes. It erodes the very conditions that keep civic systems alive. Governance in the age of AI must protect judgment, discretion, memory, and relationship as essential functions — not inefficiencies to be optimized away.

Closing Reflection

Where does your culture already make room for experimentation, dissent, and trust?

When AI enters, will it serve as a partner for learning or a shortcut for control?

How will you know if a tool truly belongs in your civic ecosystem?

This concludes my three-part series drawn from Fragile Systems: An Ecological Approach to AI. If you'd rather listen than read, the ideas continue in my new podcast, AI Insights with Micah Gaudet, available on all major platforms.