
AI isn't delivering ROI? Maybe you're solving the wrong problem

It seems many law firms are finding it hard to demonstrate returns on their technology investments. Forrester predicts that 25% of planned AI spend will be deferred to 2027 as decision-makers struggle to tie value to growth. However, the issue isn't the tech. It's the culture, as this blog post explains.

Little-known fact.

I studied German all the way through secondary school and for two years at university. During that time, I became pretty proficient, to the point where I was reading Brecht's Trommeln in der Nacht in its original language.

Why, then, when I went to Berlin last summer, did I nearly freeze ordering a coffee?

It had been so long since I'd actually used it (coughs, 30 years). The vocabulary was still in there somewhere, but the confidence had gone.

I've been thinking about that lately, because I'm watching the same thing happen with AI in law firms.

The tools are there. The capability's there. But the technology has been bought, not embedded, and it seems to me many firms aren't using it to its full potential for a number of reasons, including a fear of getting it wrong.

In fact, a familiar paradox is playing out in law firm boardrooms across the country. Despite the ever-increasing capability of the technology, many firms aren't seeing meaningful returns.

If your firm isn't generating significant ROI from AI adoption, the reason is simpler than you might think.

And it has little to do with the tools. You have a people problem.

The technology works (subject to caveats). So what's the problem?

Culture = friction

Technology is the easy part. The hard part is the human work of driving change.

LexisNexis data reveal that only 17% of legal professionals say AI is embedded in their strategy, with "slow-moving corporate cultures" cited as a primary barrier. Furthermore, the number of professionals identifying "keeping pace with new technology" as their top challenge has risen to 49%.

Three powerful (and entirely understandable) human factors drive this —

  • Fear of replacement: Employees worry AI is here to take their jobs.

  • Fear of error: The "black box" problem and the risk of hallucinations make lawyers wary of putting their names to AI-assisted work.

  • Incentive misalignment: When your business model rewards hours billed, a tool that dramatically reduces those hours feels like a direct threat to revenue.

Without addressing these human elements, the technology gathers dust and the investment is wasted.

The numbers don't lie: strategy is the differentiator

The difference between success and stagnation comes down to clear strategy.

According to Thomson Reuters’ 2025 Future of Professionals report, organisations with visible AI strategies are twice as likely to experience revenue growth and 3.5 times more likely to experience critical AI benefits than those without.

I wrote about the need for a strategic approach in this piece recently.

Reframing things

To fix this, we need to change how we explain the value of AI, internally and externally.

Clients aren't just paying for time; they are buying outcomes, risk mitigation, and the transfer of responsibility from their organisation to the firm.

We can illustrate this by comparing AI adoption to moving from a map to a GPS. The GPS gets you there faster and highlights hazards you perhaps wouldn't have seen (efficiency and risk mitigation). But the passenger isn't paying for the GPS, they're paying for the driver's ability to navigate complex traffic safely and take responsibility for the arrival. The tool changes the method, but the accountability remains with the driver.

The elephant in the room: job security

We must be honest, though, about the potential impact on jobs.

Pretending AI won't affect roles destroys trust faster than the change itself.

The deeper threat isn't just about employment numbers; it is about employability. If AI automates the routine drafting and research that traditionally defined the junior years, we lose the "foundational experiences" that train new lawyers. We risk breaking the apprenticeship model.

However, this isn't necessarily a zero-sum game. Economic history suggests that as technology makes tasks cheaper and faster, the demand for services often expands to cover work that was previously too expensive to justify. We are likely facing a messy transition period, followed by an expansion of what legal work actually covers.

The honest message isn't "AI won't take your job." It is that the definition of the job is changing. We are moving from a world of "tech-enabled lawyers" to "lawyer-enabled tech." The professionals who master the ability to manage and verify AI agents will see their value rise, while those who rely solely on execution will potentially see the opposite.

An internal comms approach

Generic advice won't cut it.

To turn resistance into adoption, you need a specific, tactical internal communications plan.

1. Equip middle managers

Partners set the direction, but middle managers are where information flow often breaks down.

Managers are your critical bottleneck. Give them specific talking points. If they can't explain why this matters and what it means for their specific team, the message dies before it reaches the people who need it most.

2. Make the most of "employee influencers"

Top-down directives often fail in resistant cultures; peer validation works better.

Identify your advocates: the fee earners who are already getting results. Make sure those voices are heard. Ensure they can easily demonstrate the wins to their peers.

3. Tailor the messaging

One message doesn't suit everyone. Map out what's important to different groups and tailor things accordingly. For example —

  • Operational excellence (efficiency/cost): Target this message to finance and operations.

  • Relationship (trust/personalisation): Target this to partners. AI frees up time for the high-touch strategic work that strengthens client relationships.

  • Quality (tailored approach): Target this to practice/team heads, focusing on deeper insights and specialist knowledge.

4. Combat "shadow IT"

People aren't just afraid of losing jobs; they are afraid of "doing it wrong."

I think, if we're honest, we all know instances where colleagues have used unsanctioned tools (perhaps not necessarily AI) at work.

However, simply blocking consumer apps isn't the answer. Governance shouldn't be a blocker, but a source of psychological safety.

The starting point is a clear, easy-to-understand AI policy that ensures all colleagues feel significantly more confident using AI and that everything is grounded in trusted, sanctioned sources.

You must provide clear guardrails and specific tools so colleagues can innovate, test, and develop their skills with confidence, rather than looking over their shoulders.

5. Make training practical with "prompt packs"

There is no point having expensive Copilot licences if your people don't know how to use them.

Generic tech demos don't change behaviour. Operational teams need to see how the tool handles their actual workflows.

To reduce the "blank page syndrome," provide role-specific "prompt packs" and workflows. Don't just tell them to "use AI." Give the marketing team templates for competitive research and content strategy. Give HR the specific prompts for summarising employee feedback. Have an LLM act as a cheesed-off client and use it for scenario planning and training for junior team members.

This turns abstract anxiety into manageable skill-building.
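If it helps to picture what one entry in a "prompt pack" might look like, here is a minimal sketch of the cheesed-off-client role-play, assuming the OpenAI Python SDK. The model name, persona wording and scenario are purely illustrative, not a recommendation of any particular vendor or setup.

# A sketch of the "cheesed-off client" role-play: a reusable persona prompt
# plus a small helper that returns the simulated client's next message.
from openai import OpenAI

llm = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The "prompt pack" element: a persona the training team can adapt per scenario.
PERSONA = (
    "You are a commercial client who is unhappy that a contract review has "
    "overrun by two weeks. Stay polite but firm, push back on vague answers, "
    "and ask what the firm will do to put things right."
)

def client_reply(conversation: list[dict]) -> str:
    """Return the simulated client's next message in the role-play."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system", "content": PERSONA}] + conversation,
    )
    return response.choices[0].message.content

# Example turn: a junior fee earner practises delivering bad news.
history = [{"role": "user", "content": "I'm sorry, the review will take another week."}]
print(client_reply(history))

The point isn't the code itself; it's that a tested persona and a couple of worked examples give people something concrete to practise against, rather than a blank prompt box.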

6. Stop broadcasting, start engaging

And... saving the best for last.

Most internal communication strategies fail because they are designed as a monologue, not a dialogue. Sending a firm-wide email is "broadcasting"; it tells people what is happening, but it doesn't help them understand how to adapt.

You need to shift from a "send" mentality to a "feedback" mentality.

This starts with your middle managers. They shouldn't just be forwarding emails from leadership; they need to be equipped to hold "town hall" style discussions at a micro level. Give them the talking points to invite questions, debate fears, and report feedback. If you aren't listening to what folks around you are saying, you aren't managing the change.

The bottom line

The secret behind AI-forward firms is teams that are confident and clear about what they're doing with AI and how to make the most of it.

Your focus is everything at this point.

By investing in communication and safety, you empower your people to bridge the gap between technology and adoption, ensuring your firm doesn't just survive the transition, but helps lead it.

Speak to me today about tailored, compliant AI training for your firm, or have a read through some of my other commentary on what AI means for the legal sector.

PS. And when you're ordering coffee in German, remember it's "einen Latte", not "eine Latte" for reasons I won't explain here. You can thank me later.

Get strategic support that spots opportunities and AI training that frees up time for the work that matters most.
