At ISMS Copilot, we are committed to legal compliance and the secure deployment of AI systems. To enhance transparency, we have updated our licensing agreement and prepared this FAQ.
1. When does the new agreement come into effect?
The new agreement takes effect on 1 September 2024.
2. Which AI chatbots are covered by this agreement?
All AI chatbots accessible within the ISMS Copilot platform are covered, including ISO 27001 Copilot, Risk Assessment Assistant, Policy Assistant, and ISMS Copilot X.
3. Are custom versions integrated into other platforms covered?
No. Custom versions that customers integrate into their own platforms are not covered by this agreement. In those integrated scenarios, we do not use conversations to improve the model.
4. Do you train AI models on conversation data?
In principle, no. Our chatbots do not automatically learn from user conversations. If we identify an error in the AI's guidance, an admin may manually select and anonymize a relevant Q&A pair to improve the assistant's conceptual understanding. This process excludes personal and confidential information and focuses only on valuable, general insights (e.g., correcting a control reference in an information security framework or regulation).
5. Why exclude personal and confidential data from training?
We value user privacy and security. Since our goal is to enhance the AI’s accuracy on information security frameworks and regulations, using personal or company-specific information is unnecessary and undesired. By relying solely on anonymized, conceptual corrections, we ensure no identifiable or sensitive content influences our training process.
6. How is user interaction data chosen for improvement?