Below is a consolidated table of key AI terms. Where a term is defined by an official source (ISO/IEC 42001, ISO/IEC 22989, the EU AI Act, etc.), that source is cited.

Official Definitions of Key AI Terms

| Term | Official Definition (with Source) |
| --- | --- |
| AI (Artificial Intelligence) | Definition: “Research and development of mechanisms and applications of AI systems.” Source: ISO/IEC 22989:2022, Clause 3.1.3. Note: In ISO terminology, “AI” often refers to the broader field of study devoted to developing AI systems. |
| AI Management System (AIMS) | Definition: Management system that provides a framework for managing the unique issues and risks arising from the use of AI in an organization. Source: Based on ISO/IEC 42001:2023 (context). Note: ISO/IEC 42001 specifies requirements for establishing, implementing, maintaining, and continually improving such an AI management system. |
| AI System (AI Systems) | Definition: “Engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.” Source: ISO/IEC 22989:2022, Clause 3.1.4 |
| AI Product | Definition: Not explicitly defined in ISO/IEC 22989, ISO/IEC 42001, or the EU AI Act. Generally used to mean a product that incorporates an AI system. ISO/IEC 42001:2023 references “AI product or service providers,” implying products embedding AI functionality. |
| AI Performance Metrics | Definition: Not explicitly defined in ISO/EU sources. Generally refers to quantitative measures used to evaluate an AI system’s performance, e.g. accuracy, precision, recall (see the illustrative sketch below the table). The EU AI Act requires disclosure of certain metrics for high-risk AI systems but does not formally define “performance metrics.” |
| AI Risk | Definition: Not explicitly defined as a separate term in ISO/EU sources. In general, “risk” in the AI context follows the standard definition of “effect of uncertainty on objectives” (ISO Guide 73 / ISO 31000). The EU AI Act employs a risk-based approach but does not define “AI risk” per se. |
| Audit | Definition: “Systematic and independent process for obtaining evidence and evaluating it objectively to determine the extent to which the audit criteria are fulfilled.” Source: ISO/IEC 42001:2023 |
| Bias | Definition: “Systematic difference in treatment of certain objects, people or groups in comparison to others.” Source: ISO/IEC 22989:2022. Note: ISO/IEC 22989 distinguishes between “bias” in general and specifically unfair or discriminatory bias under the heading “unfairness”; a simple measurement sketch appears below the table. |
| Change Advisory Board (CAB) | Definition: Not defined in ISO/IEC 22989, ISO/IEC 42001, or the EU AI Act. Commonly, a CAB is a group of stakeholders that evaluates and approves proposed changes (used in IT service management and other domains). |
| Change Request (CR) | Definition: Not defined in ISO/EU AI standards. Generally a formal proposal for a change to be made to a system, product, or process. |
| Compliance | Definition: Fulfillment of requirements or obligations. In AI, this means adhering to applicable laws, regulations, standards, and internal policies. Reference: ISO 37301 (compliance management) defines “compliance” as meeting all compliance obligations. |
| Concern | Definition: Not explicitly defined in ISO/EU sources. Generally means an issue, worry, or question raised about an AI system (e.g. an ethical or operational concern) that may warrant attention or investigation. |
| Data Management | Definition: Not defined in ISO/IEC 22989 or ISO/IEC 42001. Commonly refers to all practices for handling data—including collection, storage, processing, and governance—that an AI system uses or produces. |
| Documented Information | Definition: “Information required to be controlled and maintained by an organization, and the medium on which it is contained.” Source: ISO/IEC 42001:2023, Clause 3.10 |
| Ethical AI | Definition: Not defined in ISO/EU sources. Generally refers to AI that adheres to recognized ethical principles such as fairness, transparency, accountability, and respect for human rights. |
| Evaluation | Definition: Not explicitly defined in ISO/IEC 22989 or ISO/IEC 42001. Often used to mean the process of assessing or appraising an AI system’s performance, impacts, or outcomes against defined criteria. |
| High-Risk AI System | Definition: An AI system classified as “high-risk” under the EU AI Act (Article 6). This includes systems intended to be used as safety components of regulated products, as well as systems used in the specific use cases listed in Annex III of the Act, which pose significant risks to health, safety, or fundamental rights. |
| Impact Assessment | Definition: “Formal, documented process by which the impacts on individuals, groups, or societies are identified, evaluated, and addressed by an organization developing, providing or using AI.” Source: ISO/IEC 42001:2023, Clause 3.24 (“AI system impact assessment”) |
| Incident | Definition: Not explicitly defined in ISO/IEC 22989 or ISO/IEC 42001. Typically, an event that interrupts normal operation or causes harm. The EU AI Act references “serious incidents” leading to death, serious harm to health, property damage, or breach of fundamental rights. |
| KPI (Key Performance Indicator) | Definition: Not defined in ISO/IEC 22989 or ISO/IEC 42001. Commonly a quantifiable measure used to evaluate success in meeting objectives or performance goals within a management system. |
| Legal Entity | Definition: “Legal person created and recognized as such under law, which has legal personality and capacity to act, exercise rights, and be subject to obligations.” Reference: Based on EU Regulation 2021/697, Article 2(1). |
| Lifecycle AI (AI Lifecycle) | Definition: No separate definition in ISO/EU sources. Generally refers to all stages of an AI system’s existence, from initial concept and development through deployment, operation, maintenance, and decommissioning. ISO/IEC 22989:2022 (Clause 6) provides a high-level life cycle model for AI systems. |
| Machine Learning (ML) | Definition: “Process of optimizing a model’s parameters by analyzing data so that the model’s behavior reflects the data.” Source: ISO/IEC 22989:2022. A minimal illustration of parameter optimization appears below the table. |
| Management System | Definition: “Set of interrelated or interacting elements of an organization to establish policies and objectives, as well as processes to achieve those objectives.” Source: ISO/IEC 42001:2023, Clause 3.4 |
| Nonconformity | Definition: “Non-fulfilment of a requirement.” Source: ISO/IEC 42001:2023, Clause 3.16 |
| Organization | Definition: “Person or group of people that has its own functions with responsibilities, authorities, and relationships to achieve its objectives.” Source: ISO/IEC 42001:2023, Clause 3.1 |
| Physical Site | Definition: Not defined in ISO/EU AI standards. Generally means a physical location (facility, data center, office) where AI operations are carried out, as opposed to virtual or cloud environments. |
| Process | Definition: “Set of interrelated or interacting activities that uses or transforms inputs to deliver a result.” Source: ISO/IEC 42001:2023, Clause 3.8 |
| Quality | Definition: “Degree to which a set of inherent characteristics of an object fulfills requirements.” Source: ISO 9000:2015 |
| Residual Risk | Definition: “Risk remaining after risk treatment.” Source: ISO Guide 73:2009 / ISO 31000. A worked numeric sketch appears below the table. |
| Responsible AI | Definition: Not defined in ISO/EU sources. Commonly refers to AI developed and used in a responsible manner, adhering to ethical standards, regulatory requirements, and best practices to ensure trustworthiness. |
| Responsible Use (of AI) | Definition: Not defined in ISO/EU sources. Typically means using AI ethically, lawfully, and in alignment with societal values and organizational principles. |
| Scope (of AIMS or project) | Definition: The boundaries and applicability of the management system within the organization. In ISO management systems, the scope indicates which parts of the organization, products/services, and activities are included. Reference: ISO/IEC 42001:2023 context |
| SMART Objectives | Definition: Not defined in ISO/EU standards. The acronym stands for Specific, Measurable, Achievable, Relevant, and Time-bound, a common framework for setting clear, trackable goals. |
| Stakeholder | Definition: “Individual or group that has an interest in any decision or activity of an organization.” Source: ISO 26000:2010, Clause 2.20. In AI, this can include data subjects, system users, regulators, suppliers, etc. |
| Supplier | Definition: “Organization or person that provides a product or service.” Source: ISO 9000. In an AI context, a supplier might provide data, components, or software for AI systems. |
| Threat | Definition: “Potential cause of an unwanted incident, which may result in harm to a system or organization.” Source: ISO/IEC 27000:2018 |
| Transparency | Definition: “Property of an AI system that appropriate information about the system is made available to relevant stakeholders.” Source: ISO/IEC 22989:2022 (on AI transparency) |
| Vulnerability | Definition: “Weakness of an asset or control that can be exploited by one or more threats.” Source: ISO/IEC 27000:2018 |
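
The sketches below are informal, non-normative illustrations referenced in the table. First, for the “AI Performance Metrics” entry: a minimal sketch that computes accuracy, precision, and recall from invented binary labels and predictions. The function name and data are assumptions for illustration, not taken from any standard.

```python
# Illustrative sketch only: common classification metrics for a binary task.
# The labels and predictions are invented example data, not from any standard.

def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),                # correct predictions / all predictions
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,  # true positives / predicted positives
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,     # true positives / actual positives
    }

print(classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
# accuracy 0.6, precision ~0.67, recall ~0.67 on this made-up data
```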
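For the “Bias” entry, one crude way to make a “systematic difference in treatment” concrete is to compare outcome rates across groups. The sketch below is hypothetical; the groups, data, and rate-gap measure are assumptions for illustration and are not defined in ISO/IEC 22989.

```python
# Hypothetical sketch: comparing positive-outcome rates between two groups
# as a rough indicator of systematic difference in treatment.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0]   # invented outcomes for group A (1 = favourable)
group_b = [0, 1, 0, 0, 1, 0]   # invented outcomes for group B

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Positive-outcome rate gap: {gap:.2f}")   # a large gap may warrant investigation
```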
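For the “Machine Learning (ML)” entry, the sketch below shows the core idea of optimizing a model’s parameters from data: a one-parameter linear model fitted to invented data with plain gradient descent. It is a teaching sketch, not a reference implementation of any standardized method.

```python
# Minimal sketch of optimizing a model parameter so its behaviour reflects the data:
# fit y ≈ w * x to invented (x, y) pairs by gradient descent on mean squared error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # invented training data
w = 0.0                                                   # the single model parameter
learning_rate = 0.01

for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)  # d(MSE)/dw
    w -= learning_rate * grad

print(f"Learned parameter w = {w:.2f}")  # close to 2, reflecting the underlying trend
```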
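For the “Residual Risk” entry, the numeric sketch below assumes a simple likelihood-times-impact scoring model and an estimated control effectiveness; ISO Guide 73 and ISO 31000 do not prescribe any particular scoring formula, so all figures here are illustrative assumptions.

```python
# Hypothetical scoring sketch: residual risk after a control reduces likelihood.
# The figures and the likelihood-times-impact model are assumptions for illustration.

likelihood = 0.4                 # assumed probability of the unwanted event per year
impact = 100_000                 # assumed cost if the event occurs
control_effectiveness = 0.75     # assumed reduction in likelihood from the risk treatment

inherent_risk = likelihood * impact
residual_risk = likelihood * (1 - control_effectiveness) * impact

print(f"Inherent risk score: {inherent_risk:,.0f}")   # 40,000
print(f"Residual risk score: {residual_risk:,.0f}")   # 10,000 remaining after treatment
```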