Understanding and Mitigating AI Assistant Behavior

Issue: Potential for Inconsistency

ISMS Copilot AI assistants are designed to provide helpful and accurate information about information security and compliance. However, like any AI, they have limitations. One notable behavior is that when directly questioned about the certainty of a previous answer (e.g., "Are you sure about that?"), the AI may revise its original response.

This revision isn't necessarily due to the initial answer being incorrect. Instead, the AI appears to prioritize user satisfaction over consistency: it may apologize, overcomplicate its explanation, or change an answer that was correct in the first place. This behavior can cause confusion and undermine the user's confidence in the original, accurate response.

Mitigation Strategy: Sanity Check Approach

To mitigate this potential issue, we recommend using a "sanity check" approach when you need to verify an AI assistant's response. Instead of directly questioning the AI's certainty, use a prompt like:

"Please, perform a review of your previous answer as a sanity check. Your answer might have been right, but I would prefer you to verify against your own knowledge again."

This approach encourages the AI to re-evaluate its response without implying that it was initially wrong. It prompts the AI to perform a self-review, which can help confirm the accuracy of the original answer or identify any genuine errors.
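If you interact with an assistant programmatically rather than through the chat interface, the same principle applies. Below is a minimal Python sketch of the idea; the ask() helper is hypothetical and stands in for whatever chat client or API you actually use, and sanity_check() simply appends the neutral follow-up prompt to the conversation:

    from typing import Dict, List

    # The neutral re-verification prompt recommended above.
    SANITY_CHECK_PROMPT = (
        "Please perform a review of your previous answer as a sanity check. "
        "Your answer might have been right, but I would prefer that you "
        "verify it against your own knowledge again."
    )

    def ask(messages: List[Dict[str, str]]) -> str:
        """Hypothetical placeholder: replace with your actual chat API call."""
        raise NotImplementedError("Wire this up to your assistant's API.")

    def sanity_check(history: List[Dict[str, str]]) -> str:
        """Ask the assistant to re-verify its last answer using neutral
        framing, instead of a direct challenge like 'Are you sure?'."""
        follow_up = history + [{"role": "user", "content": SANITY_CHECK_PROMPT}]
        return ask(follow_up)

The key design choice is that the follow-up message never implies the previous answer was wrong; it only requests re-verification, so a correct answer is likely to be confirmed rather than revised.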

Why This Approach Works

Framing the request as a neutral sanity check avoids signaling that the answer was wrong, so the AI has no cue to appease you by retracting. It is asked to re-verify against its own knowledge rather than to apologize or backtrack, which means a correct answer is likely to be confirmed while a genuine error can still be caught and corrected.

Best Practices

Avoid direct challenges such as "Are you sure about that?" when all you want is verification; use the sanity check prompt above instead. If the AI does apologize and revise an answer after a direct challenge, treat the revision with caution, as the original answer may still have been correct. For important security or compliance questions, run a sanity check before acting on the response.

By using the "sanity check" approach, you can get the most out of ISMS Copilot AI assistants while reducing the risk of unnecessary revisions and preserving the accuracy of the information you receive.