Waiting for Perfect: How the Accounting Profession Is Missing the Window
By Daniel Lawrence, Founder & CEO, Bots For That
There is a pattern I have seen play out in every industry I have ever worked in.
It happens in financial services. It happens in betting and gaming. It happened in logistics, in manufacturing, in legal services. And it is happening right now, in almost exactly the same way, in accounting.
It goes like this.
A new technology arrives that is clearly significant. The early evidence is compelling. The strategic case is obvious. Senior people in the organisation acknowledge it, discuss it, and agree that something needs to be done.
And then they wait.
They wait for the technology to mature. They wait for a competitor to go first and prove it works. They wait for their existing vendors to catch up. They wait for the regulatory picture to clarify. They wait for the perfect moment, the right time, the right tool, the right conditions, to make their move.
The perfect moment doesn’t come. It never does. And by the time the organisation stops waiting, the window has narrowed, the early advantage has gone, and the firms that moved are now the reference points everyone else is chasing.
This is not a prediction about accounting. It is a description of what is already happening.
The Intention Gap and Why It's Predictable
Karbon’s State of AI in Accounting report found something that should be pinned to the wall of every practice: 82% of accountants say they are intrigued or excited by AI. Only 25% are actively investing in AI training for their teams.
Karbon called this “the AI paradox in accounting.” I’d call it something simpler: the gap between knowing and doing.
But here’s the thing: this gap isn’t random, and it isn’t a failure of intelligence or ambition. It is entirely predictable. Everett Rogers mapped it in 1962, and Geoffrey Moore sharpened the picture in Crossing the Chasm thirty years later. In any market, technology adoption follows the same distribution: a small group of innovators moves first, followed by early adopters who see the strategic potential, followed by the early and late majority who wait for proof, followed by laggards who move only when they have no other choice.
The accounting profession, like most professional services sectors, sits predominantly in the early-to-late majority. That is not an insult; it is a structural reality. The regulatory environment, the professional liability framework, the billing model, and the culture of precision all create a strong gravitational pull toward the proven and the familiar. The technology chasm, the gap between early adoption and mainstream use, is wider here than in most sectors, and crossing it requires more than a compelling product. It requires a compelling case that the risk of moving is lower than the risk of staying still.
I believe we are at that inflection point now. The evidence base is no longer thin. The tools are no longer experimental. The early movers have results, not just theories. And the risk calculation, for any firm still waiting, is shifting decisively.
But here is the question that genuinely deserves an answer: does this have to be the way it goes? Is the late-majority pattern simply the inevitable fate of the profession, something to be managed and accepted? Or is there something that can actually change it?
I don’t think we have to accept it. But changing it requires a different kind of honesty about why it happens, and a different kind of response to it.
The Digital Champion
Anyone working in an accounting firm reading this will recognise at least one of the following scenarios.
The firm appoints a “digital champion”, usually a motivated senior manager or junior partner with a genuine interest in technology. They attend the webinars. They read the reports. They bring ideas back to the leadership team, maybe even present their findings at the next board meeting. And then those ideas sit in a document somewhere, awaiting sign-off that never quite comes, because there’s always something more pressing, always a busy season, always a reason to revisit it next quarter.
Or the firm forms a working group. Four people, a shared inbox, a monthly meeting. They evaluate three tools over the course of six months. They produce a recommendation document. The recommendation is received warmly, approved in principle, and then quietly deferred while the firm focuses on a client deadline or a staff recruitment issue.
Or, and this one deserves its own paragraph because it is so prevalent, the firm launches an initiative. It gets a name. Sometimes an acronym. Something that signals ambition: a Digital Futures Taskforce, an AI Innovation Committee, a Transformation Lab Program. The messaging is bold. The intent is genuine. There is a launch meeting, probably a slide deck, possibly a press release.
Six months in, the committee has met four times. The discussions have been lively but inconclusive. Every concrete proposal has been refined, qualified, and sent back for further consideration. The governance process, designed to protect the firm from making bad decisions, has, by its very nature, prevented the firm from making any decisions. And in the meantime, the firms that never bothered with a committee and simply started doing something have moved.
The uncomfortable truth about these initiatives is that they can be worse than doing nothing at all. Democratised exploration, individuals and small teams experimenting freely with AI tools in their daily work, generates real learning, real data, and real behaviour change. The taskforce, by contrast, generates process. It creates the feeling of momentum while producing very little of the substance. And because it exists, it provides cover for inaction: “We’re not ignoring this, we have a working group”.
Finally, and most commonly of all, the firm does adopt something. They turn on the AI features in their existing platform. They subscribe to a second tool that their software vendor recommends. They communicate this internally as “our AI strategy” and proceed with the comfortable belief that the box has been ticked.
None of these is transformation. All of them are theatre.
I do not say this to be unkind; the intention behind each of these patterns is genuine. The problem is that good intentions, in a market moving at the current pace, are not a competitive strategy.
The Regulatory Comfort Blanket
The accounting profession has a superpower that has served it well for a very long time: a deep, well-founded scepticism of anything unproven.
Accountants are trained to be precise. They work in a regulated environment where errors carry real consequences, for clients, for firms, for professional reputations. The instinct to move carefully, to verify before committing, to wait for certainty before acting, is not a character flaw. It is a professional discipline that has defined the quality of the work for generations.
But that discipline, applied to the question of AI adoption, has become a comfort blanket.
“We need to be sure the outputs are accurate before we rely on them”. Fair, but this is a governance question, not a reason to delay starting.
“We’re waiting for clearer regulatory guidance”. Understandable, but HMRC and the professional bodies have not prohibited AI use, and the firms that wait for formal endorsement will be the last to move.
“We don’t want to risk getting something wrong in front of a client”. Reasonable, but the manual processes you’re protecting are themselves a source of error. Human error in high-volume, time-pressured compliance work is not theoretical. It happens daily, in every firm, in every busy season.
The instinct for caution is correct. The conclusion it leads to, inaction, is not.
The answer to the accuracy question is governance, not avoidance. Define the parameters. Set the thresholds. Require human review at the right checkpoints. Build the audit trail. These are engineering and process questions, and they are answerable. Firms that are treating them as reasons not to start are using the right question to arrive at the wrong answer.
The Safe Option That Isn't Safe
When accounting firms do move, when the waiting finally gives way to action, there is a strong and understandable pull toward the familiar.
Add the AI features in the existing platform. Choose the tool with the biggest brand name. Pick the option that the professional body has tacitly endorsed, or that three other firms in the peer group are also using. Choose, above all, what feels safe.
I understand this logic completely. After a decade of deploying AI in regulated industries, I have watched this pattern repeat itself in boardrooms across all sectors. The safe choice, the one that minimises the risk of being wrong, is always the most crowded choice. And the most crowded choice is rarely the most effective one.
Here is what the “safe” AI options in accounting typically look like right now: a chat interface bolted onto an existing platform. An AI feature that categorises transactions using rules that were already in the software, repackaged with a new label. A monthly summary report generated by a large language model from data that was always there.
These are not useless. But they are not transformation. They are incremental improvements to the status quo, chosen because they feel like the lowest-risk path forward. And the hidden cost of that choice, the opportunity cost of not building genuine AI capability, of not developing the data infrastructure, of not training the team, of not changing the workflow, compounds quietly, month by month, until the firm looks up and finds itself years behind.
The firms that will define this profession over the next decade do not choose the safe option. They choose the right option. In most industries I have worked in, there is a significant difference between those two things.
A Different Way: The Ring-Fenced Laboratory
So if the taskforce model fails, the comfort-blanket instinct stalls, and the safe option disappoints, what actually works?
In my experience, the approach that consistently produces real results is one that most accounting firms haven’t seriously considered: a properly structured, ring-fenced innovation laboratory.
Not a committee. Not a pilot that quietly dies when the partner who championed it gets busy. A dedicated environment, separate from live client work, protected from the day-to-day operational pressures that kill every good idea, with a genuine mandate to explore, test, and transform.
But the word “laboratory” needs to be earned, because the concept is frequently misused. A lab with no purpose is a playroom. A lab with no accountability is a holiday. The model I’m describing has specific structural requirements that separate it from the initiatives described earlier:
Clear purpose and scope. The lab is not exploring “AI in general”. It is tackling a defined problem: automate the quarterly MTD workflow for a cohort of twenty clients, or reduce the time from data receipt to draft accounts by 40%. Vague mandates produce vague outputs.
Measurable success criteria, set in advance. What does success look like, specifically, after ninety days? Not “we learned a lot”; that is not a success criterion. Time saved, error rate, staff hours redirected, client satisfaction score. If you cannot define success before you start, you cannot know when you’ve achieved it.
A defined path from prototype to production. This is where most labs fail. They produce something interesting in a protected environment and then find that moving it into live operations is harder than anticipated, with different data, different volume, different risk profile, different stakeholders. The route from prototype to live client work must be designed from the beginning, not retrofitted at the end.
A mandate that includes the authority to change things. A lab that can test but cannot implement is a research exercise. The people running it must have the authority, or at least a clear path to obtain it, to redesign the workflow, change the tool, retrain the team, and alter the process. Without that, the lab produces recommendations that join the queue behind everything else.
A technology-agnostic brief. This is perhaps the most important point of all: the lab’s brief should not begin with AI. It should begin with the problem. What is slow, expensive, error-prone, or unnecessarily manual? What is preventing the firm from delivering better value to clients? Start there, define the problem precisely, and then, only then, ask what the right solution is.
Sometimes that solution will be AI. Sometimes it will be better process design, or clearer data standards, or a different workflow structure, or a better tool that is more robust and scalable. The firms that begin by asking “how do we use AI?” rather than “what problem are we solving?” frequently end up with AI solutions to problems that didn’t need them, and miss the problems that AI could genuinely transform.
Begin with the end in mind. Work backwards from the outcome. Let the solution follow the problem, not the other way around.
The Cost of Waiting
Let me make the cost of inaction concrete, because abstract urgency is easy to dismiss.
If your firm has ten fee-earners spending an average of 30% of their time on tasks that could be automated with current technology, like data entry, reconciliation, transaction categorisation, chasing missing information, preparing routine compliance outputs, that is three full-time equivalents of capacity sitting inside work that does not require human judgment.
At an average fully-loaded cost of £45,000 per person, that is £135,000 per year in salary cost attached to automatable work. Not counted: the opportunity cost of what those three FTEs could be doing instead, such as value-adding client conversations, building new client relationships, driving strategic projects that themselves drive real firm value.
Now consider what happens over three years of inaction. The cost compounds. The gap widens. The competitor who acted in year one has by year three a team that is faster, more profitable, and better positioned to offer the advisory services that clients are increasingly asking for.
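The arithmetic above is simple enough to sketch in a few lines. This is a back-of-envelope model using the article’s illustrative figures only; the headcount, automatable share, and fully-loaded cost are assumptions any firm should replace with its own numbers.

```python
# Back-of-envelope cost-of-inaction model.
# All figures are the article's illustrative assumptions, not benchmarks.

fee_earners = 10
automatable_share = 0.30      # share of time spent on automatable tasks
fully_loaded_cost = 45_000    # GBP per fee-earner per year

# Capacity currently locked inside automatable work, in full-time equivalents
automatable_ftes = fee_earners * automatable_share

# Direct annual salary cost attached to that work
annual_cost = automatable_ftes * fully_loaded_cost

print(f"Automatable capacity: {automatable_ftes:.1f} FTEs")      # 3.0 FTEs
print(f"Annual cost of automatable work: £{annual_cost:,.0f}")   # £135,000

# Three years of inaction: direct cost alone, before any opportunity cost
print(f"Three-year direct cost: £{3 * annual_cost:,.0f}")        # £405,000
```

The model deliberately excludes the opportunity cost of redirected capacity, which is the larger number; the direct salary figure is simply the floor.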
This is not a technology argument. It is a business argument. And the numbers are not small.
A Simple Question
I want to end with something practical rather than rhetorical.
If your firm has identified AI as a strategic priority, and most have, at least in principle, then somewhere in your organisation there is an answer to this question: what specifically are we doing about it, with what tools, on what timeline, owned by whom?
If that answer is clear, specific, and has a named owner, you are ahead of most of your market.
If the answer is “we’re keeping an eye on it”, or “we’re evaluating options”, or “we discussed it at the last partners’ meeting and agreed to revisit”, that is not a strategy. At best, it is aspiration dressed up as action; at worst, it is organisational procrastination.
The late-majority pattern does not have to be your firm’s fate. The technology maturity curve describes what typically happens. It does not prescribe what must happen. The firms that are already on the other side of this, that moved deliberately, built something real, and are now reaping the compounding advantage, made a choice that the majority of their peers did not.
That choice is still available to you. But the window that makes it meaningful is narrowing. And in my experience, the firms that keep finding reasons to wait are rarely the ones that eventually move decisively.
Don’t be one of those firms.
Daniel Lawrence is the Founder of Bots For That and creator of the beanieverse, its AI-native automated accounting platform. He writes and speaks on AI transformation in accounting from the perspective of someone who spent over a decade deploying enterprise automation and AI in regulated industries before returning his attention to the profession where his career began.
© 2026 Bots For That. Part of the Making Accounting AI thought leadership series.