Minimizing Hallucinations in ISMS Copilot

Understanding Hallucinations

ISMS Copilot, like all AI assistants powered by large language models, can sometimes generate responses that appear plausible but are factually incorrect, inconsistent with provided information, or otherwise unreliable. This phenomenon is known as "hallucination."

As of today, no AI system is entirely free of hallucinations, although some perform better than others, particularly as the underlying models improve.

While we continuously work with the best available models and do what we can to minimize hallucinations, we have also put together guidance to help you get the most accurate and trustworthy information possible.

PS: Remember that, when in doubt, the best practice remains to verify factual information against additional external sources.

Built-in Features and Strategies

ISMS Copilot incorporates several techniques to mitigate hallucinations:

Explicit Uncertainty Handling

ISMS Copilot is designed to recognize when it lacks sufficient information to provide a confident answer. You can (and should!) encourage this by explicitly instructing it to acknowledge uncertainty. For example, include a phrase like:

“If you’re not sure about something, say so instead of guessing.”

This simple technique significantly reduces the likelihood of the assistant fabricating information.
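
If you assemble your prompts outside the chat (for example from a reusable template before pasting them in), a small helper can make sure the uncertainty instruction is never left out. The following is a minimal Python sketch; the function name, wording, and workflow are illustrative assumptions, not part of ISMS Copilot itself.

```python
# Illustrative sketch only: builds a prompt that tells the assistant to
# acknowledge uncertainty. The wording and function name are examples,
# not an ISMS Copilot API.

UNCERTAINTY_INSTRUCTION = (
    "If you are not sure about any part of your answer, say so explicitly "
    "instead of guessing, and tell me what information you would need to be sure."
)


def with_uncertainty_guard(question: str) -> str:
    """Append the uncertainty instruction to a question before pasting it into the chat."""
    return f"{question}\n\n{UNCERTAINTY_INSTRUCTION}"


if __name__ == "__main__":
    # Example: an ISO 27001 question wrapped with the instruction.
    print(with_uncertainty_guard(
        "Which ISO 27001:2022 Annex A controls address supplier relationships?"
    ))
```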

Post-Answer Checks

When the copilot provides an answer, ask:

“sounds good but can you do a sanity check on articles/recital references please? just looking for hallucinations, good if not.”

This example targets a regulation; adjust the request depending on the framework or document you’re working with.

Asking the assistant to re-check its answers is powerful because it will review its training knowledge once again. Is it a bulletproof method? No. You’re better off verifying the references yourself if you need 100% reliability, but it turns out to be effective in most situations.
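
If you run these checks often, you can keep the follow-up prompts in a small template collection so the wording stays consistent. This is a minimal Python sketch under that assumption; the categories and template texts simply paraphrase the example above and can be adapted to whatever you are working with.

```python
# Illustrative sketch only: ready-made sanity-check follow-ups you can paste
# after the assistant's answer. The categories and wording are examples.

SANITY_CHECKS = {
    "regulation": (
        "Sounds good, but can you do a sanity check on the article/recital "
        "references please? Just looking for hallucinations, good if not."
    ),
    "standard": (
        "Sounds good, but can you do a sanity check on the clause and Annex A "
        "control numbers you cited? Just looking for hallucinations, good if not."
    ),
}


def sanity_check_prompt(kind: str = "regulation") -> str:
    """Return the follow-up message to send after the assistant's answer."""
    return SANITY_CHECKS[kind]


if __name__ == "__main__":
    print(sanity_check_prompt("standard"))
```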

Factual Grounding with Direct Quotes

When working with documents, especially longer ones, ISMS Copilot can be instructed to extract relevant quotes before performing analysis or generating summaries. This "grounding" process ensures that its responses are directly tied to the source material, reducing the risk of misinterpretations or invented facts.

How to use it: when asking ISMS Copilot to analyze a document, include instructions like:

“First, extract the exact quotes from the document that are relevant to my question, then base your analysis only on those quotes.”
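
If you prepare document-analysis requests programmatically, the grounding instruction can be prepended in the same way. This is a minimal Python sketch; the instruction text and function name are illustrative assumptions and should be adapted to your document and task.

```python
# Illustrative sketch only: combines a grounding instruction with an analysis
# task so the assistant quotes the source before analyzing it.

GROUNDING_INSTRUCTION = (
    "First, extract the exact quotes from the document that are relevant to "
    "my request, then base your analysis only on those quotes. If no relevant "
    "quote exists, say so."
)


def grounded_request(task: str) -> str:
    """Prepend the grounding instruction to a document-analysis task."""
    return f"{GROUNDING_INSTRUCTION}\n\nTask: {task}"


if __name__ == "__main__":
    print(grounded_request(
        "Summarize the access control requirements in the attached policy."
    ))
```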