
Auditing AI: assurance for governance and organization

AI applications are rapidly making their way into organizations and embedding themselves in critical processes. This changes not only how decisions are made, but also what risks organizations face. For boards and executives, the central question is more urgent than ever: how confident are you that AI is functioning reliably, transparently and compliantly, so that you can maintain a grip on quality, integrity and oversight? An AI audit offers the answer.

AI risks affecting the entire organization

An audit of AI systems goes beyond testing IT controls. It touches directly on governance, strategy and reputation. Key issues include:

  • AI governance and responsibilities: who owns AI policy, data quality and AI monitoring?
  • Data bias and ethics: how do you prevent AI from discriminating or producing wrong outcomes? In other words, how ethical is the AI application in practice?
  • Transparency and explainability: can the organization explain how algorithms arrive at decisions?
  • Compliance and regulation: does the deployment of AI meet the requirements of the AI Act, the GDPR and industry-specific guidelines (e.g., from financial regulators or healthcare authorities)?
  • Continuity: how dependent is the organization on suppliers or model updates, and what happens in case of disruptions?

    The five core components of the European AI Act are: risk-based approach, obligations, transparency, oversight and sanctions.

With the advent of the AI Act, AI compliance is no longer an optional topic but a legal obligation. Depending on the risk profile of an AI application (from low risk to high risk), specific requirements apply to documentation, explainability, risk management and algorithm oversight. For boards and management, this means that postponing an AI audit is no longer a realistic option.

Three approaches in practice

More and more organizations are taking the first steps in AI auditing. In practice, we see three common approaches, each with its own level of maturity and impact.

Starting with governance and policy

Organizations that still have limited AI deployment often start with an examination of policies and responsibilities. This exposes fragmentation and reveals where governance is lacking. It forms the basis for clear frameworks and oversight. Formal audits usually follow only once AI adoption is further developed.

Conducting a baseline measurement

A systematic baseline measurement, based on frameworks such as ISO, NIST or the EU ALTAI guidelines, helps organizations sharply define their starting position. The result is a realistic picture of the current situation and a clear reference point for management and supervisors. Moreover, it outlines a growth path to structural assurance, in which AI Act compliance also plays a central role.

Mature AI assurance

Organizations that are further along in their development perform full-fledged audits that lead to assurance on AI applications, with a broad scope ranging from governance to the operation of algorithms. Multidisciplinary teams combine expertise from IT, data science, legal and compliance. The result is not only assurance on the controlled deployment of AI, but also strategic insights that add value to governance and the business.

What this means for organizations

AI auditing is not a standard process. It requires customization to fit the chosen AI systems and the maturity of the organization. Governance and clear responsibilities are always the starting point here, even before algorithms themselves become the subject of audit.

Effective auditing also requires a risk-based approach. Not every application has the same impact. By focusing on material and high-risk AI applications, boards and supervisors gain insights that are directly relevant. Multidisciplinary cooperation between business, Risk, Compliance and Internal Audit is indispensable in this regard to achieve a complete and reliable opinion.

Finally, laws and regulations play a central role. New frameworks such as the AI Act and existing standards such as the GDPR must be explicitly integrated into audit criteria and frameworks. This requires continuous investment in knowledge, not only among auditors, but throughout the organization. Only then can AI be applied responsibly and with confidence.

In summary, effective AI auditing requires:

  • A solid foundation in governance
  • A risk-based approach with a focus on material applications
  • Multidisciplinary collaboration
  • Integration of laws and regulations
  • Structural investment in knowledge

How ARC People can support you with AI audits

AI auditing is still relatively new territory. ARC People provides professionals at the intersection of Internal Audit, Risk and Compliance who help organizations test and use AI responsibly and effectively.

We conduct governance assessments and baseline measurements, develop audit programs focused on the AI Act and the GDPR, and assess data quality, algorithm development and AI monitoring. We also assess compliance with industry-specific guidelines and European regulations, and strengthen organizations with specialized capacity where needed.

Want to know more or discuss your next step?

Contact ARC People for an exploratory discussion. We are happy to help you think through the next step in gaining assurance on AI.

More information on this topic

Are you interested in learning more about this topic? If so, please contact me or one of my colleagues. We are ready to answer your questions and help you further.

Our expert team, with years of experience, is ready to support you with personalized advice tailored to your specific situation. We strive to respond to your inquiries as quickly as possible.

Roy van Buuren

Senior Manager of IT Audit & Risk

06-42095266

Marc van Heese RO RE CIA

Partner

06-52073162