IRBA Guidance on the Use of AI in Audits – What You Need to Know
This article will count 0.25 units (15 minutes) of unverifiable CPD. Remember to log these units under your membership profile.
The Independent Regulatory Board for Auditors (IRBA) has issued a Staff Audit Practice Alert addressing the integration of generative artificial intelligence (GenAI) into the audit process. The guidance aims to assist auditors in managing the risks and responsibilities associated with using GenAI tools during audits of financial statements. The Alert is limited in scope to audit engagements only; it does not apply to review, other assurance, or related services engagements. However, the risks and solutions identified in the Alert also apply to the work of accountants and reviewers.
Key Themes of the Alert
GenAI offers substantial benefits, including enhanced efficiency, improved data analysis, and support in identifying risks. However, IRBA cautions that its use introduces new challenges, including ethical concerns, regulatory compliance risks, and threats to audit quality if not properly managed.
🔥NOTE: Regardless of your role in the profession, all accountants should be aware of the potential risks and limitations that AI introduces across financial reporting, compliance, and assurance.
Managing Risks in the Practice (ISQM 1)
The Alert places an expectation on audit firms to implement robust governance policies for the use of GenAI, including:
Approval of AI tools used in audit engagements
Minimum training and qualification requirements for users
Defined responsibilities for engagement partners
Oversight frameworks to ensure accountability
Even when third-party AI tools are used, firms remain fully responsible for ensuring compliance with quality management standards and legal obligations (e.g. POPIA and the Cybercrimes Act).
Engagement-Level Responsibilities
Audit teams must:
Maintain control and oversight of GenAI tools used
Avoid over-reliance on AI outputs (automation bias)
Ensure that professional judgement and scepticism are not compromised
Consider documenting the use of GenAI to support audit file review and transparency
When Clients Use AI in Financial Reporting
Many preparers are beginning to use AI-enabled platforms for efficiency and accuracy. This introduces new risks for the auditor, such as:
Lack of audit trails or documentation
Inadequate governance over AI use
Explainability issues in financial disclosures
Auditors are required to assess these risks under ISA 315 and, where necessary, report breaches of law in line with ISA 250.
Key Risk Factors When Using GenAI in Assurance Work
IRBA highlights several factors that heighten risk when integrating GenAI into the audit process:
Automation Bias: Over-reliance on AI outputs without critical evaluation
Explainability Gaps: Inability to explain how AI arrived at conclusions
Data Privacy Concerns: Use of client data in insecure or unapproved platforms
Lack of Documentation: Poor audit trail when AI is used by clients
Non-Compliance: Risks of breaching privacy, IP, or cybersecurity laws
Unvetted Tools: Use of open-source AI without legal agreements
Inadequate Oversight: Absence of firm-level governance or staff training
The Bottom Line
GenAI is poised to reshape all assurance services, not only audits. Its use must be aligned with ethical, legal, and professional standards. IRBA's guidance highlights the need for well-defined governance, ongoing oversight, and sound professional judgement, principles that apply across the entire accounting profession.