| Version | Latest changes |
|---|---|
| v1.0 | Publication |
ISMS Copilot recognizes the transformative potential of Artificial Intelligence (AI) in information security management and compliance. As a provider of AI-powered compliance assistance, we are committed to deploying AI technologies responsibly and ethically, with a particular focus on transparency and user empowerment.
The purpose of this policy is to establish our public commitment to ethical AI governance in compliance advisory services.
This policy serves to:

- demonstrate our alignment with international standards such as ISO/IEC 42001 and regulatory requirements such as the EU AI Act;
- ensure that our AI system supports organizations in their compliance journey without replacing human judgment.
This policy applies to ISMS Copilot's AI system, which provides guidance and assistance in information security management and compliance. Our scope encompasses the core AI-powered compliance assistant, its interactions with users, public API integrations, and all documentation and guidance generated by our system.
The policy is relevant to ISMS Copilot's development and operations teams, our users and their organizations, integration partners, and third-party service providers supporting our AI operations.
As the provider of a limited-risk AI system operating in the compliance advisory space, we place particular emphasis on transparency in AI-generated compliance guidance, a clear delineation between AI assistance and human decision-making, protection of user data and privacy, ethical considerations in compliance advisory services, and continuous monitoring of AI performance.
Understanding key terms is vital for the consistent application of this policy. The following definitions apply:
Artificial Intelligence (AI): In the context of ISMS Copilot, AI refers to our system's capability to understand and provide guidance on information security management and compliance frameworks, while remaining within the boundaries of advisory support.