Yesterday, I was on a Zoom with a group of consultants organised by CertBetter. They host a monthly session to talk all things ISO consulting and industry updates.
The hot topic this week was ISO 42001:2023 - the new AI Management System standard.
I’ve seen it mentioned online, and of course, we’ve just launched our ISO 42001 course here at Auditor Training Online. But sitting there listening to the conversation, I realised I hadn’t really stopped to think about it from my own perspective as an auditor.
Apparently, after all these years of drilling “maintain vs. retain,” we can forget it. It’s all about to change.
But terminology, whatever ISO decides to call things, doesn’t change the core reality: clear, consistent documentation matters. Procedures, records, “things we maintain,” “things we retain” - however you label them, good documentation is non-negotiable.
So I did what I always do - I got curious and started asking myself questions.
How does this new standard cross over with what I already audit in ISO 9001, 14001 and 45001?
What’s actually new, and what’s just familiar concepts applied in a new context?
And maybe the bigger question - do auditors like me need to start paying more attention to this?
ISO 42001 is built on the same foundation - trust, accountability, and evidence - the same things we already look for in Quality, Environment, and OH&S systems. It made me wonder whether we’re already touching parts of AI management without realising it, and whether those same principles apply across all standards. With that in mind, let’s look at where AI might already be crossing over into the systems we audit.
Those questions sent me down a bit of a rabbit hole. The more I looked at ISO 42001, the more I realised it isn’t a separate, tech-heavy standard. It follows the same Harmonised Structure (clauses 4 to 10) that ISO 9001, 14001 and 45001 already use, so the management system skeleton is one we’re already familiar with.
Where it becomes relevant is in how AI starts influencing the very processes we audit for Quality, Environment, and OH&S. In many cases, it introduces new risks, new decision points, and new expectations for evidence.
Here’s where those crossovers start to appear.
We may not be auditing AI itself, but we’re already seeing its fingerprints across the systems we audit every day.
1. Ask where AI is already in play.
During audits, look for AI-enabled tools or automation influencing decisions. This could be in quality checks, environmental monitoring, HR, scheduling, or safety reporting.
2. Evaluate how AI affects risk and evidence.
Consider whether AI-generated outputs influence product quality, safety outcomes, or compliance data. If they do, check that risks, validation, and human oversight are built into the process.
3. Check leadership awareness and accountability.
Ask leaders if they know where AI is used and who is responsible for its decisions and outputs. Lack of clarity here often signals weak governance.
4. Review training and competence.
Confirm that staff using AI tools understand how to interpret results and verify accuracy, not just rely on the technology.
5. Include AI considerations in audit planning.
Add questions about AI use and governance to your audit checklists. You’re not auditing AI itself; you’re auditing how it’s managed within the system.
This article is just the beginning. Join us for the extended discussion on the podcast, available on Spotify and YouTube.