Can Accountants Trust AI?

Introduction: The Age of Bots and AI in Accounting

As AI continues to reshape industries, accounting is no exception. From automating repetitive tasks to providing data-driven insights, AI is becoming an indispensable tool for many firms. However, with great power comes great responsibility. Trust is a critical factor when relying on technology, especially in a profession where ethical decision-making and accuracy are paramount.

In a recent live broadcast, industry experts Tim Baker, CEO of Kloo, and Sean Smith, Accountant Evangelist at Sage, addressed the pressing question: Can we trust AI to make ethical decisions in accounting? This article will explore how accountants can ensure that AI remains trustworthy by focusing on practical steps related to data quality, bias mitigation, transparency, human oversight, and regulatory compliance.


Section 1: Understanding AI and Trust

AI in accounting is more than just an efficiency tool; it’s becoming an integral part of decision-making processes. But can it be trusted? It is, after all, only as reliable as the data it’s trained on. To address the trust dilemma, accountants must adopt practical strategies that enhance both reliability and ethical performance.

Section 2: Ensuring High-Quality, Unbiased Data

Data Quality Control

The decision-making ability of AI hinges on the quality of the data it receives: it can only produce accurate results if it is fed clean, high-quality data.

Conduct Regular Data Audits

Accountants may want to implement periodic audits of the datasets being fed to AI. This may involve scanning for anomalies, missing data, or inaccuracies. Simple data validation checks (e.g. range limits, completeness or reasonableness tests) could be automated to improve data integrity.

Clean and Standardise Data

Ideally, before data enters the AI pipeline, accountants should check whether any cleansing is needed. Bank transactions and feeds are a good example. Cleansing may involve standardising formats (e.g. dates, currencies), eliminating duplicates, or filling in missing information. Luckily, there are tools to help with this, and better still, routines and logic can even be built into the AI pipeline to streamline the process entirely.
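A minimal sketch of what such a cleansing step could look like for a bank feed. The date formats, currency symbol handling and field names are made up for illustration:

```python
from datetime import datetime
from decimal import Decimal

# Illustrative date formats a raw bank feed might mix together.
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%d %b %Y")

def standardise_row(row: dict) -> dict:
    """Return a cleaned copy of one bank-feed row (ISO date, Decimal amount)."""
    cleaned = dict(row)
    for fmt in DATE_FORMATS:
        try:
            cleaned["date"] = datetime.strptime(row["date"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    # Strip currency symbols and thousands separators before parsing
    cleaned["amount"] = Decimal(str(row["amount"]).replace("£", "").replace(",", ""))
    return cleaned

def deduplicate(rows: list[dict]) -> list[dict]:
    """Drop exact duplicates by (date, amount, description)."""
    seen, unique = set(), []
    for row in rows:
        key = (row["date"], row["amount"], row.get("description", ""))
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique
```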

Diversify Training Data

Bias often stems from narrow training sets. To avoid this, firms should diversify the data they use, ensuring that it reflects a range of industries, geographies, and client types. By including diverse scenarios, the AI system can better generalise data across different cases, helping to reduce biased outputs.

Bias Mitigation

Even with high-quality data, bias can creep into AI models. Accountants should take whatever proactive measures they can to ensure the AI they use doesn't perpetuate unfair or inaccurate outcomes. In practice, many are finding this very difficult, especially when the AI tool or its data is public and outside their control.

Bias Detection and Testing

There are tools available, such as IBM's AI Fairness 360 or Google's What-If Tool, that allow you to detect potential biases in AI models. These tools can test whether AI outcomes differ based on certain inputs, helping identify and eliminate unintended biases. They are generally "low-code", open-source and available for anyone to use, but you may need a little technical assistance to get the most out of them.
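The core idea these tools formalise can be illustrated simply: compare how often the AI flags records from different groups. The sketch below is not AI Fairness 360 or the What-If Tool, just a toy disparity check with a hypothetical `flagged` field:

```python
def flag_rate_disparity(records: list[dict], group_field: str, flagged_field: str = "flagged") -> dict:
    """Compare the rate at which an AI flags records across groups.

    Returns per-group flag rates and the largest gap between any two
    groups. A large gap suggests the model may be treating some groups
    differently and warrants closer inspection.
    """
    totals: dict = {}
    flagged: dict = {}
    for rec in records:
        group = rec[group_field]
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(bool(rec[flagged_field]))
    rates = {g: flagged[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "max_gap": gap}
```

Dedicated fairness toolkits go much further (statistical significance, multiple bias metrics, mitigation algorithms), but a crude check like this is a reasonable first screen.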

Implement Human-in-the-Loop Feedback

Integrating human oversight at key decision points may reduce the risk of bias. For example, if an AI system flags a set of transactions for further review, a human accountant should assess whether these recommendations are fair and accurate. This feedback loop should improve the system over time.
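One way such a feedback loop might be structured, sketched here as a hypothetical review queue where human verdicts become labelled examples for future retraining:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Collects AI-flagged transactions and the human verdicts on them.

    Verdicts become labelled examples that can feed into the next
    model retraining run. Illustrative only, not a real product API.
    """
    pending: list = field(default_factory=list)
    labelled: list = field(default_factory=list)

    def flag(self, txn: dict, reason: str) -> None:
        """Record an AI-flagged transaction awaiting human review."""
        self.pending.append({"txn": txn, "ai_reason": reason})

    def review(self, index: int, reviewer: str, is_valid_flag: bool) -> None:
        """A human accountant confirms or rejects a flag."""
        item = self.pending.pop(index)
        item.update({"reviewer": reviewer, "label": is_valid_flag})
        self.labelled.append(item)

    def false_positive_rate(self) -> float:
        """Share of reviewed flags the human judged incorrect."""
        if not self.labelled:
            return 0.0
        return sum(1 for i in self.labelled if not i["label"]) / len(self.labelled)
```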

Section 3: Maintaining Data Transparency

Model Explainability

Transparency is crucial when using AI in decision-making. Accountants must be able to understand how an AI system reaches its conclusions, especially when communicating results to clients or regulators.

1. Explainable AI (XAI) Frameworks
Tools that can provide insights into AI decision-making processes are becoming increasingly available. Explainable AI (or XAI for short) allows you to see the rationale behind a decision, making it easier to identify potential errors or biases. This is particularly important in audit scenarios, where transparency is non-negotiable. Some examples of open-source tools are LIME, SHAP and ELI5.

2. Automated Reporting of AI Decisions
Accountants can also consider systems that will generate clear, transparent reports of how AI models process data. These reports should detail the data sources, transformations, and logic used to produce outcomes. Having a clear chain of logic can make it easier to verify results and explain them to clients.

3. Maintain Thorough Documentation
From training data sources to model performance, every aspect of the AI’s functioning can be thoroughly documented. This not only helps with transparency but also facilitates smoother audits and external reviews, ensuring that the AI’s integrity can be independently verified.
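To make the XAI idea concrete, here is a toy sensitivity check: replace each input with a baseline value and see how much the model's score moves. Tools like SHAP and LIME do this far more rigorously; the `score_fn`, `record` and `baseline` names below are illustrative, not any library's real API:

```python
def feature_sensitivity(score_fn, record: dict, baseline: dict) -> dict:
    """A minimal explainability sketch.

    For each feature, swap in a baseline value and measure how much
    the model's score changes. The core idea — attributing the output
    to input features — is what proper XAI tools formalise.
    """
    original = score_fn(record)
    contributions = {}
    for feature in record:
        perturbed = dict(record)
        perturbed[feature] = baseline.get(feature, 0)
        contributions[feature] = original - score_fn(perturbed)
    return contributions
```

Run against a toy risk score, this reveals which inputs drove a flag, the kind of rationale an accountant can relay to a client or regulator.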

Open AI Reviews

Transparency doesn't stop at internal measures, though; external perspectives can build further trust.

1. Third-Party Audits and Peer Reviews
This will likely be an unpopular suggestion, but firms could consider having their AI models reviewed by third-party auditors. It doesn’t have to be done by the competition, but these external reviews can provide independent validation of the AI’s fairness and accuracy, offering additional reassurance to clients and stakeholders.

2. Educating Clients
Clients often feel uneasy about AI because they don’t understand how it works. Firms can alleviate these concerns by providing plain-language explanations and offering transparent reports. Engaging clients in the AI process makes them more comfortable with its use in their accounting.

Section 4: Human Oversight and AI Decision Validation

Human Review Process

While AI can analyse and flag transactions faster than any human could, it’s essential that accountants remain involved in the decision-making loop.

1. Implement Human Review Checkpoints
Firms should establish review points where humans must validate AI-driven decisions. For example, while AI can flag unusual patterns in an audit, it is still up to human accountants to determine the validity of those flags.

2. Set Up Error Escalation Procedures
If an AI makes an unusual or incorrect recommendation, there should be a clear protocol for addressing these errors. Escalating problematic decisions is one thing, but it’s important that missteps are assessed and corrected quickly, to prevent the problem perpetuating.

Continuous Learning and Adaptation

AI models may always need regular tuning and updating to remain effective, and human input plays a crucial role in this process.

1. Establish Feedback Loops
By regularly feeding back human corrections into an AI model, you can improve the system’s accuracy over time. AI learns from both its successes and mistakes, so encouraging consistent human feedback is key to making the model more reliable.

2. Regular AI Performance Audits
Periodically evaluating an AI’s performance against actual outcomes ensures that the system is still delivering accurate results. Metrics like accuracy, precision, and recall can help firms measure and refine the AI’s capabilities.
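These metrics are straightforward to compute once the AI's decisions can be compared against human-confirmed outcomes. A minimal sketch for flag/no-flag decisions:

```python
def performance_metrics(predicted: list[bool], actual: list[bool]) -> dict:
    """Accuracy, precision and recall for an AI's flag/no-flag decisions,
    measured against the outcomes humans later confirmed."""
    tp = sum(p and a for p, a in zip(predicted, actual))          # correct flags
    fp = sum(p and not a for p, a in zip(predicted, actual))      # false alarms
    fn = sum(not p and a for p, a in zip(predicted, actual))      # missed issues
    tn = sum(not p and not a for p, a in zip(predicted, actual))  # correct passes
    return {
        "accuracy": (tp + tn) / len(actual),
        # Of the flags raised, how many were right?
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        # Of the true issues, how many were caught?
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }
```

Tracking these numbers over successive audits shows whether the model is drifting and whether retraining is paying off.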

Section 5: Adapting to Evolving Regulatory Requirements

Regulatory Compliance Frameworks

The regulatory landscape around AI in particular is rapidly evolving. Accounting firms must stay ahead of the curve to ensure any AI systems remain compliant.

1. Assign a Compliance Lead
Designate a member of the team to monitor regulatory updates relevant to AI and accounting. They should stay informed of changes in GDPR, industry-specific standards, and any emerging legislation that impacts AI use in accounting.

2. Use AI Compliance Tools
Leverage automated compliance tools that monitor AI outputs in real time and flag potential violations of privacy or regulatory standards. These tools help accountants stay compliant with regulations like GDPR or country-specific financial laws.
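As a simplified illustration of how such a monitor might work, the sketch below scans AI output for a few illustrative PII patterns before it leaves the firm. Real compliance tooling covers far more than this:

```python
import re

# Illustrative patterns only; real compliance tooling is far broader.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_nino": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # UK National Insurance number
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the names of PII patterns found in a piece of AI output,
    so the output can be blocked or redacted before it is released."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
```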

Data Retention and Privacy Controls

Accountants must ensure that AI systems protect sensitive client data and adhere to privacy regulations.

1. Obtain Client Data Permissions
Always ensure that client data is used with explicit permission. Create processes that allow clients to easily withdraw consent for AI use or review how their data is being utilised.

2. Anonymise Client Data
When feasible, anonymise client data before feeding it into AI models. This protects privacy and ensures compliance with data protection laws while allowing the AI to still analyse relevant trends and insights.
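A simple sketch of this using a keyed hash. Strictly speaking this is pseudonymisation rather than full anonymisation, since anyone holding the key can recompute a client's token, so the key itself must be protected. The field names are hypothetical:

```python
import hmac
import hashlib

def pseudonymise(client_id: str, secret_key: bytes) -> str:
    """Replace a client identifier with a keyed hash (HMAC-SHA256).

    The same client always maps to the same token, so trends can
    still be analysed per client, but identities cannot be recovered
    without the key.
    """
    return hmac.new(secret_key, client_id.encode(), hashlib.sha256).hexdigest()

def anonymise_records(records: list[dict], secret_key: bytes) -> list[dict]:
    """Strip direct identifiers and replace the client id with a token."""
    cleaned = []
    for rec in records:
        rec = dict(rec)
        rec.pop("client_name", None)  # drop direct identifiers entirely
        rec["client_id"] = pseudonymise(rec["client_id"], secret_key)
        cleaned.append(rec)
    return cleaned
```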

Section 6: Building Trust with Clients through Transparency and Collaboration

Transparent AI Adoption

Building trust with clients starts with transparency. Firms need to make AI accessible and understandable for their clients.

1. AI Onboarding for Clients
Offer clients educational sessions about how AI is used in their services. Provide case studies that illustrate AI’s benefits, such as fraud detection or efficiency gains in audit processes. This can help to reassure clients that the AI is working in their favour.

2. Client-Friendly Reporting
Develop AI-driven reports that are designed to be user-friendly. Avoid jargon and present data insights in a clear, visual format that clients can easily understand. This builds trust by ensuring clients understand the AI’s outputs.

Co-Creation and Customisation

Involving clients in the AI process can deepen their trust in your firm’s approach.

1. Offer Customisation Options
Let clients have a say in how AI is used for their accounts. For example, offer flexibility in choosing the level of human oversight, or let clients customise their AI-generated reports.

2. Collect and Integrate Client Feedback
Continuously gather feedback from clients on their experiences with AI-driven accounting solutions. Show clients that their input is valued by adapting the AI tools to better meet their specific needs.

Conclusion: The Future of AI in Accounting – Trust, Transparency, and Ethical Responsibility

AI has the potential to add to the technological revolution of accounting, but trust is key to its successful adoption. By taking practical steps to ensure data integrity, mitigate bias, maintain transparency, and keep humans in the loop, accountants can leverage AI without compromising their ethics or their clients' trust.

However, while much of the discourse around AI focuses on trust and the fear of biased or erroneous outputs, we must not overlook the human element. After all, the “natural stupidity” of people – the very real risk of human error, fatigue, and bias – poses its own dangers, and it always has. AI, when used correctly though, can act as a powerful counterbalance to these human risks, ensuring more accurate, data-driven outcomes.

As regulations evolve, staying ahead of compliance issues will be critical. Firms that adopt these practices will not only gain a competitive edge but also solidify their reputation as trustworthy, ethical leaders in the accounting sector—combining the best of AI and human intelligence while minimising both “natural stupidity” and machine missteps.