
AI Audit Objects, Credentialing, and the Race-to-the-Bottom: Three AI Auditing Challenges and A Path Forward

Anton Grabolle / Better Images of AI / AI Architecture / CC-BY 4.0

A major issue that remains to be figured out for AI governance is how potential third-party audits will be performed and overseen. This piece, written by Benjamin Faveri, Graeme Auld, and Stefan Renckens, reviews three current challenges to AI audits and presents thoughts on ways forward.

by BENJAMIN FAVERI, GRAEME AULD, STEFAN RENCKENS /

The contours of artificial intelligence (AI) risk regulation are becoming clearer. A consistent theme across many countries' regulations is some role for AI audits, with the EU's AI Act, Canada's proposed Artificial Intelligence and Data Act, and the United States' Executive Order on the "Safe, Secure, and Trustworthy Development and Use of AI" all including language to this effect. Private and non-profit auditing firms are also offering, or preparing to offer, varied AI audit services to meet this growing demand for AI system and organization audits.

However, efforts to meet AI audit service demands, and by extension any use of audits by public regulators, face three important challenges. First, it remains unclear what the audit object(s) will be: the exact thing that gets audited. Second, despite efforts to build training and credentialing for AI auditors, the supply of capable AI auditors lags demand. And third, unless markets have clear regulations around auditing, AI audits could suffer from a race to the bottom in audit quality. We detail these challenges and present a few ways forward for policymakers, civil society, and industry players as the AI audit field matures.

Published by Tech Policy Press.

This article is based on work done for a Social Sciences and Humanities Research Council (SSHRC) Connection Grant, Informing Canadian Regulation of High-Risk Artificial Intelligence.