<aside> 💡
Founder note: Let’s set this straight. We don't train on your data. Why? Because we think it's a bad idea.
The performance of our assistants comes from a clean knowledge base that our AI models consult to give you a great answer. Training on user conversations could make this worse: more data risks degrading performance, not improving it.
We also don't want the headache that comes with potentially training AI on sensitive information (if it were shared). As a "calm" company, we decided our AIs aren't trained on any conversation. Read the related post.
</aside>
At ISMS Copilot, we often get asked why we don't automatically learn from user conversations to improve our AI assistants. The answer might surprise you: it's a deliberate choice that puts quality and trust first. I wrote a post about this here.
Here's the thing about building AI for security compliance: it's really easy to get it wrong. Not just a little wrong. Completely wrong. While it might seem intuitive that more data equals better performance, the reality is more nuanced, especially in the compliance domain.
Instead of automatic learning, we improve our assistants by deliberately curating the clean knowledge base they draw on. This careful approach means our improvements are intentional and always put quality and trust first.