Lead the Standard

AI in Your Management System

Written by Jackie Stapleton | Oct 6, 2025

Risky Business or Smart Strategy?

A few weeks ago, we added a new feature to our courses at Auditor Training Online: AI-generated audio and video clips created with NotebookLM. They’re a handy way for learners to review content in a different format. But we were careful to add a clear statement above them: this is a support tool, not a replacement for the full course content or assessment.

Not long after, I had a different kind of AI moment. A student contacted us, unhappy with the feedback on his workbook. He was convinced the marking had been done by AI and wanted to speak to a human. The irony? I had personally marked it, and every single word was mine. We sorted it out in a Zoom call, but it left me thinking: our clients and students already carry assumptions, doubts, and even mistrust around AI.

That’s the reality for any business today. Whether you’re using AI or not, people will question it. And if you are using it, they’ll want to know how, where, and with what safeguards. Which brings us to the bigger question: how do you integrate AI into your management system, not just as a tool, but in a way that manages the risks, sets the right boundaries, and maintains trust?

If you’d like to explore this further, Deloitte has a great article on how ISO/IEC 42001 is shaping AI governance and risk management. It highlights exactly why assessing risks, setting clear boundaries, and building trust are essential when using AI in business systems. You can read it here: ISO 42001 Standard for AI Governance and Risk Management.

The AIM Pyramid: Using AI with Confidence in Your Management System

AI is finding its way into almost every workplace tool, from automated reporting dashboards to chatbots handling customer queries. That means it’s also creeping into management systems, whether you’ve planned for it or not. The AIM Pyramid is a simple way to make sure you’re not just using AI for the sake of it, but doing it in a way that manages the risks, sets boundaries, and builds trust.

1. Assess the Risks

Before you let AI into your system, you need to ask: what could go wrong?

  • Could it produce inaccurate or biased data?
  • Could it leak confidential information if connected to the wrong source?
  • Could people misinterpret its role?

Example: imagine you use AI to generate draft internal audit checklists. Helpful, but what if it pulls in irrelevant clauses or skips a key requirement? Without assessing that risk, you’re opening a gap in your audit process.
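
To make that risk concrete, here’s a minimal sketch in Python of the kind of check a human could run before trusting an AI-drafted checklist: compare the draft against the clauses your audit scope actually requires and flag anything missing or out of scope. The clause numbers and questions are made up for illustration.

# A minimal sketch: check that an AI-drafted audit checklist covers every
# required clause. Clause numbers and questions are hypothetical examples.

REQUIRED_CLAUSES = {"4.1", "6.1", "8.1", "9.2"}  # clauses your audit scope requires

ai_draft_checklist = [
    {"clause": "4.1", "question": "Have internal and external issues been determined?"},
    {"clause": "6.1", "question": "Are risks and opportunities addressed in planning?"},
    {"clause": "7.5", "question": "Is documented information controlled?"},
]

covered = {item["clause"] for item in ai_draft_checklist}
missing = sorted(REQUIRED_CLAUSES - covered)
out_of_scope = sorted(covered - REQUIRED_CLAUSES)

if missing:
    print(f"Draft skips required clauses: {missing}")
if out_of_scope:
    print(f"Draft pulls in out-of-scope clauses: {out_of_scope}")

The point isn’t the tooling; it’s that the gap check is explicit and owned by a person, not left to the AI.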

This is your foundation. If you don’t understand the risks, the rest of the system can’t stand on solid ground.

2. Integrate with Boundaries

Once you know the risks, decide where AI fits, and where it doesn’t. AI can support, but it can’t replace accountability.

Example: at ATOL, NotebookLM is being integrated into courses for quick audio/video reviews. It’s useful, but we’ve set a clear boundary: it supplements learning, it doesn’t replace assessments or human feedback. In the same way, you might use AI in your QMS to summarise customer complaints, but you still need a human to decide which issues escalate to corrective actions.
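
Here’s a sketch of what that boundary can look like when it’s built into a workflow. The record fields and function names are illustrative, not from any particular QMS tool: the AI summary is always treated as a draft, and nothing escalates to a corrective action until a named person signs off.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ComplaintRecord:
    complaint_id: str
    ai_summary: str                    # AI-generated, always treated as a draft
    escalate: bool = False             # only a human may set this
    approved_by: Optional[str] = None  # the person who made the decision

def raise_corrective_action(record: ComplaintRecord) -> None:
    # Boundary rule: no corrective action without a human decision on file.
    if not (record.escalate and record.approved_by):
        raise PermissionError(
            f"Complaint {record.complaint_id}: escalation requires human sign-off."
        )
    print(f"Corrective action raised for {record.complaint_id} "
          f"(approved by {record.approved_by}).")

record = ComplaintRecord("C-1042", ai_summary="Repeated late deliveries to site.")
record.escalate = True                 # a human decision, not the AI's
record.approved_by = "Quality Manager"
raise_corrective_action(record)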

Boundaries are what stop AI from drifting into decisions it’s not fit to make.

3. Monitor for Trust

Integration isn’t the end of the story. You need to review, adapt, and keep stakeholders in the loop. Trust is built through transparency.

Example: a student recently questioned whether his workbook had been marked by AI. It hadn’t; I had written every comment myself. But that pushback showed me something important: people want assurance that AI isn’t being used where it shouldn’t be. That’s monitoring in practice, checking both outputs and perceptions.

Monitoring connects back to risk assessment, because new risks will emerge over time. It also reinforces boundaries, because trust comes from consistency.

Next Steps for You

1. Map where AI already exists in your system
  • Look at tools you’re already using (e.g., reporting dashboards, learning platforms, chatbots).
  • Ask: Are we relying on AI without even realising it?
2. Identify potential risks
  • Accuracy errors, bias, confidentiality leaks, or even perception issues.
  • Start a simple risk register for AI use (see the starter sketch after this list).
3. Set boundaries for AI
  • Define what AI can do (support, summarise, automate) and what humans must own (decisions, approvals, communication).
  • Document those rules as part of your management system.
4. Build trust through transparency
  • Communicate clearly to staff, clients, or auditors how you are (and aren’t) using AI.
  • Review and monitor outputs regularly, just as you would with any other process.
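
If you want somewhere to start with that risk register, here’s one minimal shape for it, sketched in Python as a CSV export. Every column and entry is an example, not a template prescribed by ISO/IEC 42001 or any other standard; adapt it to your own system. Notice that it also records the boundary and the human owner, which ties steps 2 and 3 together.

import csv

# A starter AI risk register. Every row and column here is an example,
# not a template prescribed by ISO/IEC 42001 or any other standard.
register = [
    {
        "ai_use": "Draft internal audit checklists",
        "risk": "Skips a required clause or pulls in irrelevant ones",
        "boundary": "Human reviews every draft against the audit scope",
        "owner": "Quality Manager",
        "next_review": "2026-01",
    },
    {
        "ai_use": "Summarise customer complaints",
        "risk": "Misses severity; confidentiality of complaint data",
        "boundary": "Only a human escalates to corrective action",
        "owner": "Customer Service Lead",
        "next_review": "2026-01",
    },
]

with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=register[0].keys())
    writer.writeheader()
    writer.writerows(register)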

For more on this topic, listen to the podcast...