Professional Insights

AI auditing strengthens internal controls, compliance, and trust

Feb 11, 2026 · 4 min read · AICPA & CIMA Insights Blog

By Tamarra L. Brown, CPA, CGMA, MBA

Artificial intelligence (AI) is rapidly establishing itself as a business staple. As AI models continue to proliferate, how can professionals assure businesses and consumers that these models can be relied upon?

At the AICPA’s fall 2025 Government Performance and Accountability Committee (GPAC) meeting, Kavin Anburaj, internal audit director at Meta, shared her expertise on why auditing AI models matters, the different types of AI model audits, and the constraints that can affect them.

“AI systems are becoming one of the most important areas for any internal auditor or professional to understand, so that we build trustworthy and reliable AI models and put them out there in the world for users and consumers to use,” Kavin said.

Done well, the results of AI model audits lead to stronger internal controls and compliance.

Why AI model audits matter

Kavin cited three factors making AI models a critical audit focus area.

  • The scale of AI adoption across industries is the first factor. More and more, decision-makers are relying on data generated through AI to set the focus and strategy of the business.

  • The second factor is regulatory trends. “Countries across the globe are defining their perceived key AI risks and the policy priorities needed to mitigate them … and for us to stay up to date with those regulations, we need to be able to understand how we can translate those regulatory trends into AI principles that we can audit against, so that our AI models can be built with trust and reliability,” said Kavin.

  • A third and critical factor is the reputational risk AI models pose for companies and their auditors. Kavin noted, “From an organizational perspective, users are also starting to understand and want to know how these AI models are impacting their data. … One mistake, or one biased AI model, could cause big headlines; it could cause organizational mistrust. AI professionals and risk professionals need to know how to make sure that AI controls are embedded.”

Techniques for auditing AI models

“No one audit is going to address every single type of risk that an AI model presents, so it's naturally up to us to pick and choose what that particular scope of the audit is, and what we want to gain out of it, and then choose these principles as the scope of the audit,” Kavin explained as she outlined three techniques: governance audits, model audits, and functionality audits.

  • Governance audits confirm the right rules are in place.

    Governance audits are foundational, focusing on AI model development and the AI model framework. This audit includes reviewing the existence and implementation of the right policies, roles, and responsibilities associated with the AI model/process — all of which should be very clearly documented (i.e., Have the right approvals, monitoring, and oversight techniques been implemented?).

  • Model audits confirm the AI model works as it should.

    Kavin calls model audits the “crux” of the AI audit. Model audits focus on checking for incorrect or corrupt input data being used by the model. All AI models are built from data, and that data ultimately determines whether the AI model functions the way it's supposed to (i.e., Is the model built correctly, and is it working as intended?).

  • Functionality audits confirm that the outputs of the model are as expected.

    Performance validation, stress testing, benchmarking, and access management are all part of the functionality audit. Functionality audits check the output of the AI model and its real-world implications, seeking answers to questions such as: Does the model's output match what the AI model was built to do? Is it producing any unexpected or unplanned outcomes? If so, should limitations be placed on the model?

Potential audit constraints

Kavin also shared four constraints that can affect an AI model audit.

  1. The first constraint is the lack of standardized frameworks across industries. Kavin noted that although IT audit frameworks have matured, AI is just beginning to develop best practices. “Each organization is having to figure out what is going to matter to their organization, what is the risk appetite of the organization. … They’re coming up with their own standard frameworks,” Kavin said.

  2. The second constraint in conducting an AI model audit is the speed of evolving technology. Internal and external auditors are required to continuously learn because new risk angles arise daily.

  3. The third constraint is third-party dependencies. Many AI models are provided as third-party services. As a result, organizations need clear vendor management principles and contractual obligations built in, so they have the capacity and capability to audit these vendor services when needed.

  4. The fourth and all-important constraint is how to mitigate organizational resistance as the practice of AI model auditing develops. Kavin explained that, at a high level, audit professionals need to consider how to thoughtfully navigate these constraints and help internal audit or risk professionals and engineering teams build better, more effective, and more dependable models that end users can trust.

“As we all know, doing AI audits is a new muscle that a lot of us are having to flex, and understanding what the business' perspective is and being able to build trust during this phase is really important for us to continue to do work in this arena,” Kavin said.

About the experts

Kavin Anburaj, M.S., is an internal audit director at Meta; her current work focuses primarily on privacy compliance and youth safety-related work. As a technology advisory professional, she has more than 15 years’ project management experience, focusing on risk management and process improvement services for banking and technology institutions. She shared her expert insights as a guest speaker at GPAC’s fall meeting.

Tamarra L. Brown, CPA, CGMA, MBA, is a member of the AICPA’s GPAC and Director, Administrative Services, for the Alameda County Public Health Department, where she leads all aspects of financial and administrative functions for the department. For the past three years, Tamarra has served on the planning committee for the AICPA and CIMA Women’s Global Leadership Summit. Tamarra is also active in the California State Society of CPAs, serving as a committee member of the Elevate: Women’s Leadership Forum and the Government Accounting and Auditing Committee, as a Trustee of the CalCPA Education Foundation, and as a Council Member At-Large.

About the AICPA’s Government Performance and Accountability Committee

Comprising 12 volunteer committee members and working in collaboration with staff at the Association of International Certified Professional Accountants, the Government Performance and Accountability Committee advises government officials, regulators, and stakeholders, advocating for issues that matter to the accounting and finance profession. If you’d like to learn more about GPAC, please email Lori Sexton.
