Making Accounting AI: The questions you asked, answered in full.
Reflections from the DAS webinar with Dean Shepherd, 29 April 2026
By Daniel Lawrence, Founder & CEO, Bots For That
Last week I joined Dean Shepherd at the Digital Accountancy Show webinar series for an hour we’d called Making Accounting AI: What’s Real, What’s Hype, and What Happens Next. The premise was straightforward: have the honest conversation the profession has been avoiding about AI, cut through the noise, and give accounting professionals a clearer view of where this actually stands.
We covered the AI maturity spectrum, why most “AI features” in accounting software aren’t actually AI, what agentic AI really means, why manual is no longer the safe option, and a four-phase roadmap from compliance automation to a genuinely advisory practice model.
But the part of the session I enjoyed most was the questions and the chat. The engagement was sharp, the questions were the right ones, and the conversation we had together was better for it. There were a few I didn’t get to in full, and a few that deserved more than the live answer allowed. So here they all are, in one place, with the answers I’d want every attendee, and every accounting professional thinking about this, to have to hand.
1. What AI implementation methodology do you recommend, and do you use a sandbox before going to production?
Asked by Henry Jensen, and reinforced by his chat comment that AI implementation needs solid SOPs first, and by Karl Howe’s example of an agent that deleted a SaaS company’s database and backups.
Yes, absolutely, and the sandbox isn’t optional, it’s the foundation. The methodology I’d recommend has three phases that map closely to how I’ve deployed enterprise automation in regulated environments for over a decade.
The first is foundation. Ideally, get your processes documented and stable before you automate anything. Henry’s comment in the chat hit this exactly, if your SOPs are shaky, your AI implementation will be shaky. AI doesn’t fix bad process, it accelerates it, in both directions. However, in the real world, solidifying processes and automating them often happens in parallel as part of a single exercise. This can be faster and more effective, so don’t think you have to spend months getting all your processes right first. AI can be an incredible enabler for this step too.
The second is sandbox and supervised prototype. Build in an isolated environment, run it in parallel with the manual process for long enough to see the long tail of edge cases, not just the happy path that’s easy to handle. Karl’s example in the chat is exactly why this matters. A 90% working agent isn’t 90% useful; it’s potentially worse than no agent, because the failures are silent and the consequences are real. This is where your agent’s specialisation, personalisation and context come in.
The third is staged production rollout with measurable guardrails. Clear success criteria. Human-in-the-loop on judgement calls. Full audit logging. An explicit rollback plan for when something goes wrong. Not if. When.
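To make the guardrail idea concrete, here's a minimal sketch of a human-in-the-loop gate, written in Python purely for illustration, it isn't a pattern from the session and every name and threshold in it is hypothetical. The shape is the point: the agent's proposed actions are auto-approved only below a materiality threshold and above a confidence floor, everything else escalates to a human, and every decision, either way, lands in the audit log.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class Proposal:
    """An action the agent wants to take, e.g. posting a journal entry."""
    description: str
    amount: float       # materiality of the action, in currency units
    confidence: float   # model's self-reported confidence, 0..1

@dataclass
class Guardrail:
    """Human-in-the-loop gate: auto-approve only small, high-confidence actions.

    Thresholds here are illustrative; a real deployment would set them
    per process, per client, and revisit them as trust is earned.
    """
    max_auto_amount: float = 500.0
    min_confidence: float = 0.95
    audit_log: list = field(default_factory=list)

    def review(self, proposal: Proposal) -> str:
        auto_ok = (proposal.amount <= self.max_auto_amount
                   and proposal.confidence >= self.min_confidence)
        decision = "auto-approved" if auto_ok else "escalated-to-human"
        # Full audit trail: every decision is logged, approved or not,
        # which is what makes the rollback conversation possible later.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "proposal": asdict(proposal),
            "decision": decision,
        })
        return decision

gate = Guardrail()
print(gate.review(Proposal("Categorise £42 software subscription", 42.0, 0.99)))
print(gate.review(Proposal("Write off doubtful debt", 12_000.0, 0.97)))
```

The first proposal clears both checks and is auto-approved; the second exceeds the materiality threshold and escalates, however confident the model is. That asymmetry, confidence alone never being enough, is the design choice worth copying.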
The mistake firms make is collapsing these three into one, going from demo to production overnight, then reverse-engineering the governance after the first incident. It costs more, both financially and reputationally, than doing it properly the first time. It can also be harder to untrain and retrain your agent from wrong behaviour than to get it right the first time around.
2. Why do you think people are so impatient with AI and don't give the implementation time to bed in?
Asked by Deborah Spooner, and reinforced by Fozia Muddassir’s chat comment that trust needs to be built up like with a new junior, and Karl Howe’s “push of a button” client story.
Three reasons, and they compound.
The first is the demo problem. AI demos are extraordinary. They’re built to look effortless, and they show the tool at its best on a curated example. People watch a demo and assume that’s what Tuesday morning will look like. It isn’t. Tuesday morning is the long tail of edge cases, and that’s where the time investment goes.
The second is the language we use. We talk about “deploying” AI like it’s installing a printer, push of a button, as Karl’s client put it. The reality is closer to onboarding a new team member, which is exactly the framing Fozia used in the chat. You wouldn’t expect a new junior to be productive on day one. You’d expect to invest three to six months in their development before they’re contributing at full capacity. AI deserves the same patience, and for the same reasons, it needs context, it needs feedback, it needs trust to be earned through observed performance.
The third is that the cost of getting it wrong has historically been hidden. With manual processes, errors are absorbed into rework and nobody calls them errors. With AI, the errors are visible, which feels riskier even when it isn’t. The temptation is to abandon the tool rather than to give it the time and feedback it needs to improve.
The firms that succeed are the ones who treat the first six months as investment, not evaluation.
3. Don't you think AI helps us be more discerning, encouraging us to think how rather than what, and opening up the grey areas?
Asked by Deborah Spooner.
Yes, and this is one of the most genuinely optimistic angles on AI in the profession, and I don’t think it gets enough airtime.
Used well, AI is a thinking partner. It surfaces options you hadn’t considered, challenges assumptions, and lets you explore scenarios faster than you could on your own. That changes the nature of professional work in a useful way: less time assembling the answer, more time interrogating it. Less time on what the numbers say, more time on how to think about what they mean.
The grey-area point is the one I’d push hardest on, because that’s where professional judgement actually lives. Compliance has a right answer. Advisory work doesn’t. Most of the value an accountant brings to a business owner sits in the ambiguity, the should we, the what if, the what would happen if we changed this. AI is genuinely good at exploring those spaces, presenting tradeoffs, and stress-testing decisions. It doesn’t replace the judgement, but it expands the surface area you can apply judgement to.
The caveat: this only works if the accountant is genuinely engaged with the AI’s output, not deferring to it. The risk isn’t that AI starts thinking for us, it’s that we stop thinking with it and start outsourcing the thinking entirely. Used as a partner, it sharpens you. Used as an oracle, it dulls you. The discipline is choosing the first mode every time.
4. Should the onus be on software vendors to do the main AI integration, with users just treating these as feature enhancements, rather than us having to track every AI development?
Asked by Fozia Muddassir.
Honest answer: partly yes, partly no, and the distinction matters a lot.
Yes, in the sense that no firm should have to become an AI research department to do their job well. The vendors you trust with your core stack should be doing the heavy lifting on integration, security, compliance, and keeping pace with the underlying models. That’s exactly what you’re paying them for.
But, and this is the bit I’d push back on, not in the sense of waiting for the incumbent ledger and practice management vendors to bring AI to you. Their commercial incentive isn’t to disrupt their own products with genuine AI capability; it’s to add AI-flavoured features that protect the existing model. The firms waiting for their software vendor to “catch up” are usually waiting in the wrong queue. AI models deployed by your vendor are also designed to work at the generic, whole-of-market level, rather than at your firm’s level, where personalisation and specialisation are driven by your staff and your clients.
The healthier middle position is this: rely on a small number of trusted vendors who are AI-native, not AI-retrofitted, and let them do the integration and currency work for you. You shouldn’t need to track every model release; you only need to track which two or three vendors are genuinely on top of it, and trust them to keep you current. That’s a smaller, more manageable problem.
I’ve seen some genuinely astounding differences when comparing the outcomes of different models on the same scenarios, the kind of results that would probably have most accountants running for the hills and as far away from AI as possible. But that comparison is an essential step for any vendor trying to develop the best solutions for the profession.
Where firms get this wrong is when they try to keep up with everything themselves, every product launch, every announcement, every new tool. That’s exhausting, expensive, and unnecessary. Pick the vendors whose business model is built on staying current and setting clear expectations for users. Let them carry that load.
5. Will firms ever fully embrace agentic AI, given the human need for control and certainty, and the connection with clients?
Asked by Deborah Spooner, and reinforced by Fozia’s comment that agentic AI is next-level and requires foundation and trust, and Karl Howe’s “augmented not agentic” framing.
My honest answer is: in some areas yes, in others no, and the distinction is healthy.
Karl’s framing in the chat is the right one, for a lot of accounting work, the goal isn’t full agency, it’s augmentation. A human-in-the-loop model where AI does the heavy lifting and the accountant retains judgement is not a stepping stone to full autonomy, it’s the destination, for any work where professional responsibility, client trust, or regulatory accountability sits with a named human.
Where full agency makes sense is in the high-volume, low-judgement work, bookkeeping reconciliations, transaction categorisation, basic compliance flows, etc, where the cost of an error is contained and the audit trail is robust. Where it doesn’t make sense, and may never, is in client-facing advisory, in judgement calls on materiality, in any conversation where the human relationship is the value being delivered. Deborah’s point about the human connection is exactly right, and it’s not nostalgic, it’s commercially correct. Clients pay for trust, and trust is hard to delegate.
So the right model isn’t “will we go fully agentic” but “where in the practice does agency belong, and where doesn’t it?” The firms that answer that question deliberately and curate their AI strategy will end up with the right balance. The firms that try to go all-or-nothing in either direction will get it wrong.
Don’t feel that it’s all or nothing with AI. Your clients will expect you to use AI, and they’ll start to perceive the value of your services differently too. So show them that you’re using AI with clear, well-defined, well-intended purpose, incorporating your firm’s collective years of experience and expertise to provide a hyper-personalised, specialised, irreplaceable AI-enabled service.
6. Can AI free up enough capacity in compliance work that accountants can spend more time understanding clients and helping them grow?
Asked by Deborah Spooner.
Yes, and this is the bit of the AI story that should make every firm owner optimistic, not anxious.
The profession has been talking about the shift from compliance to advisory (I personally prefer the term business insight support) for years. The barrier has always been capacity. Compliance work pays the bills and consumes the hours, and the advisory practice that everyone wants to build never quite gets the room it needs. AI changes that arithmetic. The compliance work doesn’t go away, but it stops being the bottleneck. The hours that used to be spent assembling, reconciling and producing become hours available for interpreting, advising and engaging.
That’s a profession-shaping shift, and it’s good news. UK SMEs need better advice, not less. They’re navigating a harder economy, more complex tax obligations, and a faster business environment than at any point in living memory. They’re not being well-served by the current model, where their accountant has time to file the return but not to talk about the strategy. AI lets the same accountant do both, for more clients, to a higher standard.
The opportunity isn’t to do less work, it’s to do more valuable work, with the same team, for clients who genuinely need it. That’s the case for AI in this profession, and it’s the one I find most worth making.
7. Will Claude Cowork create a more connected firm, and bring about the resurgence of the humble spreadsheet?
Asked by Mike Mason.
Yes, but with a real if, on both counts.
Tools like Cowork have the potential to create a more connected firm, used correctly and inside your wider tech stack. By themselves, they won’t. And I say that having watched ten years of democratised automation go almost nowhere in firm after firm. Every generation of citizen-developer tooling, RPA platforms, low-code, no-code, the early agent builders, has arrived with the same promise. Anyone can build, the firm becomes connected, the work transforms. And for ten years, in firm after firm, that hasn’t happened. Pockets of brilliance, yes. Firm-wide transformation, very rarely.
The reason isn’t the tools, the tools genuinely work. It’s that connection at the firm level isn’t a tooling outcome, it’s a design outcome. It needs someone to decide what should be connected to what, what data flows where, who’s accountable for what, and how it all fits with the rest of the stack. Without that, fifty staff building fifty agents gives you fifty automations, not a connected firm.
And there’s a hidden cost that very few firms see coming until they’re inside it. I regularly speak to firms who’ve built and deployed their own automations, only to find that the technical debt mounts up, often accounting for more than 50% of the total time spent on internal automation. Maintenance, retraining, fixing things when the underlying systems change, untangling work that the original builder didn’t document. It’s a very high hidden cost, and it almost never appears in the business case. I’m not saying don’t do it. I’m saying understand what you’re getting into, and what it actually takes to maintain automations beyond the day they’re deployed.
So Cowork has the potential. It’s a better tool than what came before. But it’ll go the same way as the previous waves unless firms approach it differently, and approach it differently means treating AI as part of a designed stack, not as a self-service buffet.
There’s a security and governance dimension worth flagging too. Citizen-developer tools are getting more capable on the action side faster than they’re getting better on the guardrail side, and in a regulated profession that holds client financial data, the gap between those two matters. Data exposure, unauthorised action, and lack of auditability are the three risks that bite first. None of them are reasons not to use the tools, they’re reasons to use them with proper governance around them. The discipline isn’t to ban citizen-developed AI; it’s to know which side of the line each use case sits on, and put governance around the line.
On the spreadsheet, honestly, spreadsheets are unlikely to go anywhere, and I don’t think that’s a pity. They’re great for accountants. They’re flexible, structured, universally readable, and AI-native tooling is making them more powerful again. So they’ll persist, possibly thrive.
But they’re not great for business owners. A spreadsheet is a fluent language for someone trained in finance, and an opaque one for someone running a business. And they’re certainly not how the next generation coming through wants to interact with their numbers, that generation expects insight delivered in context, in the apps they already use, in the moment they need it. Not in a grid they have to interpret.
So the spreadsheet’s resurgence will be real, but it’ll be inside the firm, not at the client end. The firms paying attention to that gap, between how they work with numbers and how their clients want to work with numbers, are the ones who’ll build the right service model around what’s coming.
8. Where do you see the industry in three years' time?
Asked by Graham Burton.
Three years is a useful horizon for this question, because it’s long enough for real change to land but short enough to be accountable for. Here’s what I’d expect to see by 2029.
A meaningful split in the profession, more visible than it is today. On one side, the firms that have built AI into their stack deliberately, embedded it across compliance and advisory, and used the freed-up capacity to deepen client relationships and expand their book without expanding headcount in proportion. On the other side, the firms that experimented, didn’t commit, and find themselves competing on price against firms with structurally better economics. That split is starting now. In three years it’ll be a chasm.
Compliance work itself will look fundamentally different. Bookkeeping, reconciliation, basic returns and a meaningful proportion of preparation work will be running through agentic flows, with the accountant in the loop on exceptions and judgement calls, not on every transaction. MTD will have done its work as the data foundation that made all of this viable. The firms that grumbled loudest about MTD will be the ones it’s hurt most.
The advisory layer, what I’d rather call business insight support, will have grown materially as a share of the profession’s revenue. Not because firms suddenly found new advisory clients, but because AI made the existing client base more accessible to higher-value engagement. For the accountant who couldn’t previously justify a quarterly strategy conversation with a £500k-turnover client because the economics didn’t work, that conversation becomes routine.
The vendor landscape will have consolidated. The retrofitted-AI products will mostly be gone, or merged into something more coherent. A handful of AI-native specialists will have established themselves alongside the incumbents, and the incumbents that survive will be the ones who genuinely rebuilt rather than rebadged.
And, I’d hope, the conversation itself will have matured. Less hype, less anxiety, more matter-of-fact integration. AI will have stopped being AI and started being how things are done, in the same way the cloud stopped being a topic and started being the assumption.
What I’m less sure about is the regulatory side. HMRC, ICAEW, ACCA, and the wider regulatory environment will have to catch up to a world where significant professional work is AI-mediated. Three years isn’t long for regulators, and I’d expect that lag to create some uncomfortable moments along the way.
9. Going forward, what will be the biggest change in accountancy?
Asked by Graham Burton.
The biggest change won’t be technological. It’ll be about what an accountant is for.
For a long time, the answer to that question has been mostly about preparation and compliance, assembling the numbers, filing the returns, keeping the firm and its clients on the right side of the rules. The technology layer underneath has changed several times, but the role hasn’t, fundamentally. AI changes the role itself.
Within five years, an accountant’s value to a client will sit much more clearly in interpretation, advice, and engagement than it will in production. The production work doesn’t disappear, but it stops being where the time goes and stops being where the value is created. The accountant becomes a senior thinker rather than a senior assembler. That’s a different job, with a different skillset, requiring a different kind of training and a different commercial model.
For most firms, that’s a more interesting profession, not a smaller one. The work that’s getting harder to justify is the work that nobody really enjoyed doing anyway. The work that’s getting more valuable is the work that drew people into accounting in the first place, understanding businesses, helping owners think clearly, being the trusted voice in the room when a decision matters.
But the change isn’t optional, and it isn’t gradual. The firms that lean into the new role will redefine the value of accountancy upwards. The firms that try to preserve the old role will find clients increasingly questioning what they’re paying for, because the production work is becoming a commodity faster than they realise.
So the biggest change isn’t a technology, a product, or a regulation. It’s an identity shift, from accountant-as-producer to accountant-as-advisor, and the profession has been talking about it for twenty years without being forced to actually do it. AI is the forcing function.
10. Would you advise your children leaving school to look at a career in accountancy?
Asked by Graham Burton.
Honestly, yes, with more confidence than I’d have given the answer five years ago.
The reflexive worry is that AI will hollow out the profession before today’s school-leavers reach the senior end of it. I don’t think that’s right. What AI hollows out is the assembly work, the parts of accountancy that were already moving offshore, already being outsourced, already being squeezed on margin. What it leaves intact, and arguably amplifies, is everything that requires judgement, relationship, and commercial understanding. That’s the senior end of the profession, and that’s where careers are actually built.
A young person entering accountancy today will have two structural advantages over the generation ahead of them. First, they’ll never have known a world where AI wasn’t part of the work, so they’ll skip the painful transition the current senior end of the profession is going through. Their fluency will be assumed, not learned. Second, the cycle time on building real expertise gets shorter when AI handles the assembly work. The hours a senior trainee used to spend on production are hours they’ll spend on interpretation, on client interaction, on the judgement-building that used to take a decade and now takes meaningfully less.
The skills that will matter most for that generation are probably different from the ones their parents trained on. Strong commercial instincts. Genuine curiosity about how businesses actually work. The ability to communicate clearly and engage with non-financial people. Comfort with ambiguity. Critical thinking about AI outputs rather than deference to them. The technical foundations still matter, you can’t interpret what you don’t understand, but the centre of gravity moves.
So yes, I’d advise it. With one caveat: pick a firm that’s serious about where the profession is going, not one that’s nostalgic about where it’s been. The firms that are leaning into this shift will give a young accountant a far better career than the firms still hoping it’ll all blow over. The choice of firm matters more now than it ever did.
If pressed for the shortest version: accountancy in 2026 is in the early stages of becoming a more interesting profession than it’s been for a long time. That’s a good time to join it.
A thank-you, and what's next
To everyone who joined live, asked a question, contributed in the chat, or just listened in, thank you. The session was sharper for your engagement, and the conversation in this profession is genuinely better when it’s this honest.
A particular thank-you to Henry Jensen, Deborah Spooner, Fozia Muddassir, Karl Howe, Mike Mason and Graham Burton, whose questions and contributions shaped the answers above. And to Dean Shepherd, who hosted the conversation with the directness it needed.
If you’d like to go any deeper into this:
- The white paper that anchored the session, Making Accounting AI, is available here.
- The webinar recording is available here.
- And if you’d like to talk through what any of this means for your firm specifically, twenty minutes with me is here, no pitch, no obligation, just a proper conversation.
For every demo booked and trial started, we plant a tree. One small reason more.
If there’s one thing I’d want every accounting professional to take from the hour we spent together, it’s this: an AI toolset and an AI strategy are not the same thing. Don’t let your firm confuse them.
Daniel Lawrence is the Founder of Bots For That and creator of the Beanieverse platform, a suite of AI-native tools for the accounting and bookkeeping sector. With over a decade of experience deploying enterprise automation and AI in financial services and other highly regulated industries, he writes about AI transformation in accounting from the outside in.
© 2026 Bots For That. Part of the Making Accounting AI thought leadership series.