Why Is AI in Accounting NOT Widely Embraced?

What the Data Actually Says

By Daniel Lawrence, Founder, Bots For That



On a recent webinar, I opened by laying bare five uncomfortable truths about AI adoption in the accounting profession. They were the kind of statements any accountant could agree or disagree with depending on their own experience.

However, a study published this month, surveying over a thousand accounting and audit professionals across six countries, confirms each of them. I'll spare you the source, because the point here is the data, not who paid for the survey.

The truths are these.


1. Most firms haven't actually deployed AI.

In every country surveyed, the majority of firms are still pre-production, meaning they either have no AI in use, are exploring concepts, or are running pilots. In the UK, over 50%. In the Netherlands, over 65%. Even in the most “advanced” market in the study, the United States, more than half of firms have not moved past the pilot stage. The data reveals that the “AI era” is, for most of the profession, still the AI pilot.

2. The firms that haven't adopted aren't planning to either.

Among UK firms not yet using AI, the most common answer to “when do you plan to adopt?” is “no plans within 24 months.” More than half gave that answer. The runway is not a runway.

3. The barriers are practical, not philosophical.

Cost, talent, and regulatory uncertainty together account for over 75% of cited barriers in every country. Ethical concerns and cultural resistance are minor by comparison. The dominant industry narrative, that firms hesitate because they’re nervous about AI ethics, does not survive the data. Firms don’t seem to be wrestling with their conscience. They’re wrestling with their P&L, their hiring, and their regulator.

4. Most firms aren't prepared to put AI into their people's hands.

Asked how prepared their firm is to reskill staff for effective AI use, over 70% of UK firms answered somewhere between “not prepared” and “moderately prepared.” Across the six countries surveyed, the figure is roughly three-quarters everywhere. You cannot embed AI into firm strategy without people who can operate it.

5. Even firms using AI don't fully trust the outputs.

Between 50% and 75% of respondents in every country said auditors should always validate AI outputs relied upon for professional conclusions. In the UK, over 60%. In the US, 73%. That is the profession saying, in its own voice, that it does not yet trust AI enough to leave it unchecked. That is not the posture of an embedded technology.

So if my experience and the emerging data are saying the same thing, why is this the case? And more importantly, what should firms do about it?



Let me offer five practical explanations, and a strategy for each.

1. The profession has been sold tools, not outcomes.

Vendors lead with features. Our AI does X. Firms get pattern-matched into asking “what does this tool do?” rather than “what business outcome am I trying to achieve?”

The strategy: Begin with the end in mind. Define the outcomes first (half the time on monthly close, onboarding completed in a single afternoon, senior reviewers freed from preparation work), then work backwards to the workflow change, then to the tool. The right tool is whichever one delivers that outcome. Not the one with the longest feature list in the brochure.

2. Cost is being measured against software, not against the work it replaces.

When firms evaluate AI as another line item in the software stack, the price feels high. When they evaluate it against the value it should deliver (staff hours, error rates, unbilled rework), the numbers change entirely.

The strategy: Run the comparison against actual labour cost on a defined task. If a tool removes two hours from every onboarding, multiply that by the number of clients onboarded per year. Compare it to the senior time it gives back. The decision becomes obvious, and so does the case to your partners.
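The comparison above can be sketched as a few lines of arithmetic. The figures below are purely hypothetical and for illustration only; they are not drawn from the study. Swap in your own onboarding volume, hourly cost, and licence fee.

```python
# Illustrative ROI sketch with hypothetical numbers (not from the study):
# compare an AI tool's annual cost against the labour it displaces.

hours_saved_per_onboarding = 2     # assumed time removed per client onboarding
clients_onboarded_per_year = 150   # assumed annual onboarding volume
senior_hourly_cost = 75            # assumed fully loaded cost per senior hour, in GBP
tool_annual_cost = 6_000           # assumed annual licence cost, in GBP

labour_value_recovered = (hours_saved_per_onboarding
                          * clients_onboarded_per_year
                          * senior_hourly_cost)
net_benefit = labour_value_recovered - tool_annual_cost

print(f"Labour value recovered: £{labour_value_recovered:,}")  # £22,500
print(f"Net annual benefit: £{net_benefit:,}")                 # £16,500
```

On these assumed numbers the tool returns nearly four times its cost in senior time alone, which is the shape of the case to put in front of your partners.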

3. Firms are waiting for talent they don't actually need.

“Lack of technical talent” is the second-biggest cited barrier, and the conversation in the profession is about hiring data scientists and prompt engineers. But this conflates two different things: needing to build AI, which firms don’t, and needing staff who can use AI competently, which they do. I’ve watched “democratised automation” fail for the past decade, and whilst the dream of build-it-yourself is a desirable one, in reality people just want to do what they’re trained for: accounting.

The strategy: Stop trying to hire AI engineers. Two paths, both valid. First, choose AI-first tools designed for accountants, tools where the AI is built into the workflow, not bolted on top. Second, choose AI tools that can be personalised and specialised for your practice. Your team doesn’t need to understand the model; they need to understand the work and where the tool can and cannot help.

4. Regulatory uncertainty has become an excuse, not a constraint.

There’s no question frameworks are still emerging: almost 80% of UK respondents in the study want a globally harmonised AI framework urgently. But waiting for one is a choice, not a requirement. The current regulatory environment is workable. It just isn’t tidy.

The strategy: Build inside the perimeter that exists today. HMRC’s MTD framework, ICAEW guidance, and established GDPR principles all provide a working boundary now. Firms that operate inside that boundary will be ready to expand when the framework hardens. Firms waiting for clarity will start from zero when it arrives.

5. Reskilling is being treated as a training problem, not a workflow problem.

Three-quarters of firms aren’t prepared to reskill staff because they’re thinking about AI training the way they think about CPD: courses, certifications, classroom hours. AI doesn’t work that way. People learn it by using it inside their actual work.

The strategy: Don’t run AI training programs. Embed AI tools into the workflows your team already uses, with the prompts and support they need to begin. Competence builds through repetition, not lectures. The training is the doing.

A final topical point and where MTD fits in

The single most striking number in the study is this: almost 80% of UK respondents see a globally harmonised AI framework for audit and assurance as very or extremely urgent. The highest figure of any country surveyed.

That is not a profession resisting AI on principle. It is a profession waiting for the conditions that make adoption rational: clear rules, structured data, validated tools, trained people.

It is also why Making Tax Digital matters more than its compliance label suggests. MTD doesn’t just digitise a tax workflow. It creates the structured, validated, machine-readable data environment that AI in tax actually requires to function. The compliance milestone and the AI milestone are the same milestone, looked at from two angles.

Firms that treat MTD as an end will spend the next two years parked. Firms that treat it as the foundation of a working data infrastructure are the ones that will appear on the right-hand side of next year’s chart.

The data is uncomfortable. But it’s also clarifying. The AI era hasn’t started yet for most of the profession, and the conditions for it to start are already arriving. The question is whether your firm is treating them as an obligation, or as the opportunity they actually are.



Daniel Lawrence is the Founder of Bots For That and creator of the Beanieverse platform, a suite of AI-native tools for the accounting and bookkeeping sector. With over a decade of experience deploying enterprise automation and AI in financial services and other highly regulated industries, he writes about AI transformation in accounting from the outside in.

© 2026 Bots For That. Part of the Making Accounting AI thought leadership series.