Transparency

Our AI assistants differ in underlying model, hosting location, data retention, and processing purpose. We disclose these details so you can make an informed choice:

| Assistant | Underlying Model | Hosting Location | Data Retention | Data Processing Purpose |
|---|---|---|---|---|
| ISMS Copilot Free | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | Multi-framework compliance guidance (limited by usage quota) |
| ISMS Copilot X | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | Multi-framework compliance guidance |
| ISMS Copilot Pro | Claude 4 Opus / DeepSeek-R1 | US | Indefinite* | Multi-framework compliance guidance |
| InfoSec Management Copilot | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | Specialized information security management guidance |
| Policy Assistant | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | Security policy creation and management |
| SOC 2 Copilot | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | SOC 2 compliance guidance |
| AI Governance Copilot | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | AI governance and risk management guidance |
| Privacy Management Copilot | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | Privacy management and compliance guidance |
| Quality Management Copilot | Claude 4 / Grok 3 / Gemini 2.5 Pro | US | Indefinite* | Quality management system guidance |
| ISMS Copilot EU | GPT-4.1 | EU (France) | 90 days | Multi-framework compliance with EU data residency |
| GDPR Copilot | GPT-4.1 | EU (France) | 90 days | GDPR compliance guidance |
| DORA Copilot | GPT-4.1 | EU (France) | 90 days | Digital Operational Resilience Act compliance |
| NIS2 Copilot | GPT-4.1 | EU (France) | 90 days | Network and Information Security Directive compliance |
| EU AI Act Copilot | GPT-4.1 | EU (France) | 90 days | EU AI Act compliance guidance |
| Cyber Resilience Act Copilot | GPT-4.1 | EU (France) | 90 days | EU Cyber Resilience Act compliance |
| Temporary Chats | GPT-4.1 | EU (France) | 30 days | Multi-framework compliance guidance with enhanced privacy |

*See the original page for further detail on retention.
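
To illustrate how these disclosures could be consumed programmatically, here is a minimal sketch that encodes a few rows of the table as machine-readable data. The `AssistantDisclosure` type, its field names, and the `eu_resident_options` helper are hypothetical and not part of our product; the row values come from the table above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AssistantDisclosure:
    """One row of the transparency table above (hypothetical structure)."""
    name: str
    models: tuple[str, ...]
    hosting: str                    # "US" or "EU (France)"
    retention_days: Optional[int]   # None means indefinite retention
    purpose: str

# A few rows from the table, encoded as data.
REGISTRY = [
    AssistantDisclosure("ISMS Copilot Pro", ("Claude 4 Opus", "DeepSeek-R1"),
                        "US", None, "Multi-framework compliance guidance"),
    AssistantDisclosure("GDPR Copilot", ("GPT-4.1",),
                        "EU (France)", 90, "GDPR compliance guidance"),
    AssistantDisclosure("Temporary Chats", ("GPT-4.1",),
                        "EU (France)", 30,
                        "Multi-framework compliance guidance with enhanced privacy"),
]

def eu_resident_options(registry: list[AssistantDisclosure]) -> list[str]:
    """Names of assistants whose data stays in the EU with a bounded retention period."""
    return [a.name for a in registry
            if a.hosting.startswith("EU") and a.retention_days is not None]

print(eu_resident_options(REGISTRY))  # ['GDPR Copilot', 'Temporary Chats']
```

Encoding the table this way would let a client filter assistants by residency or retention guarantees before sending any data.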

Explainability

Explainability means showing how and why each AI assistant arrives at its recommendations, without overwhelming you with technical jargon. We surface four elements (sketched as structured data after the list):

  1. Reasoning Summaries
  2. Source Attribution
  3. Confidence Levels
  4. Limitations
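
The sketch below shows one way these four elements could be represented as a structured payload. The `Explanation` type, its fields, and the sample values are illustrative assumptions, not the product's actual output format.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """Hypothetical container for the four explainability elements above."""
    reasoning_summary: str                             # plain-language outline of how the answer was reached
    sources: list[str] = field(default_factory=list)   # e.g. clause or framework references
    confidence: str = "medium"                         # coarse level: "low" | "medium" | "high"
    limitations: list[str] = field(default_factory=list)

answer = Explanation(
    reasoning_summary="Mapped your question to ISO/IEC 27001 access-control requirements.",
    sources=["ISO/IEC 27001:2022, A.5.15"],
    confidence="high",
    limitations=["Not legal advice; validate against your certification body's expectations."],
)
print(answer.confidence, "-", answer.reasoning_summary)
```

Keeping confidence as a coarse label rather than a precise numeric score matches the goal of being clear without technical jargon.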

Interpretability

Interpretability focuses on understanding how the model processes information internally. Our assistants are built on Large Language Models (LLMs), which operate as complex “black boxes.” While we cannot expose every layer of the neural network, we do:

  1. Provide a Simple Reasoning Summary
  2. Acknowledge Black-Box Nature
  3. Encourage Validation

Feedback & Contact