Here's a reminder that, as of today, there's no AI tool that doesn't hallucinate, and that includes my own assistants (ISMS Copilot). The situation is moving in the right direction and improving quickly, but it's still a thing.

I'm the first user of my own tool, and I also use ChatGPT's pro mode and Claude when needed. I don't mind verifying what matters. When people say "it misses its purpose" if you need to verify, I get it, but I think that view lacks nuance.

There are many use cases that don't require verification at all (problem-solving, for example, because it's more about reasoning than about factual references such as "control 5.11 is Return of assets").

Also, I'm a big advocate of putting the tool in the hands of people who know their domain. Why? Because it makes verification easier and corrections quick. I keep my list of ISO 27001 Annex A controls at hand when I work with the standard.

If you ask an AI "what are the security controls related to cloud" and it tells you 5.24 instead of 5.23, you can spot the slip quickly because you're supposed to know the controls.
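To make that kind of check concrete, here's a minimal sketch of what I mean by verifying against a reference you trust: a local lookup of control numbers and titles. The ANNEX_A_CONTROLS mapping and check_citation function are hypothetical names for illustration, and the titles shown are only a small subset typed from memory; verify them against your own copy of ISO/IEC 27001:2022.

```python
# Minimal sketch (assumptions noted above): compare an AI-cited Annex A control
# number and title against a local reference list you maintain yourself.

ANNEX_A_CONTROLS = {
    "5.11": "Return of assets",
    "5.23": "Information security for use of cloud services",
    "5.24": "Information security incident management planning and preparation",
}

def check_citation(control_id: str, claimed_title: str) -> str:
    """Check an assistant's cited control ID and title against the reference list."""
    expected = ANNEX_A_CONTROLS.get(control_id)
    if expected is None:
        return f"Control {control_id} is not in the local list, verify manually."
    if claimed_title.strip().lower() == expected.lower():
        return f"Control {control_id} checks out: '{expected}'."
    return (f"Mismatch for {control_id}: assistant said '{claimed_title}', "
            f"reference says '{expected}'.")

# Example: the assistant cites 5.24 for cloud security instead of 5.23.
print(check_citation("5.24", "Information security for use of cloud services"))
```

Nothing fancy, and that's the point: when you already know the material, a quick lookup like this turns a hallucinated reference into a thirty-second fix.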

If the assistant is well trained, you would still get a correct title, description, and implementation guidance, or you could ask it how to implement that control in your specific context. So I think it still serves its purpose despite the occasional need for verification.

Bottom line:

We need to normalize communicating about hallucinations and verifying when needed, instead of pretending AI is error-free and telling users to trust it blindly.

ISMS Copilot took that path months ago, and we'll keep following it.