ISO/IEC 23894:2023 AI Risk Management

ISO/IEC 23894:2023 outlines various risk sources associated with AI systems, emphasizing the complexity of environments, lack of transparency, levels of automation, machine learning challenges, hardware issues, system life cycle problems, and technology readiness. It highlights that understanding the operational context and ensuring data quality are crucial for risk management. Additionally, the document stresses the importance of transparency and explainability in fostering trust and accountability in AI systems.


ISO/IEC 23894:2023

Risk sources

B.2 Complexity of environment


The complexity of the environment[9] of an AI system determines the range of potential situations the system is intended to support in its operational context.
Certain AI technologies, such as ML, are particularly suited to handling complex environments and are therefore often used in systems that operate in complex environments, such as automated driving. A major challenge, however, is to identify during design and development all relevant situations that the system is expected to handle, and to ensure that the training and test data cover all of these situations (a minimal coverage check is sketched at the end of this subclause).
Hence, complex environments can result in additional risks relative to simple environments. Special
consideration should be given to determining the degree to which the AI system environment is
understood:
— Complete understanding of the environment, which is only possible for simple, predictable or controlled environments, means that the AI system can be prepared for all possible states of the environment it can encounter, and allows for better risk control.
— Where understanding is only partial, due to high complexity or uncertainty of the environment such that the AI system cannot forecast all possible states (for instance, autonomous driving), it cannot be assumed that all relevant situations have been considered. The resulting uncertainty is a source of risk and should be taken into account when designing such systems.
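To make the gap between intended and covered situations concrete, the following is a minimal sketch of an automated coverage check over training data. It is not part of the standard: the situation tags, the dataset schema and the minimum-count threshold are illustrative assumptions.

```python
from collections import Counter

# Hypothetical enumeration of situations the system must handle; in practice
# this would come from analysis of the operational context.
REQUIRED_SITUATIONS = {"daylight", "night", "rain", "fog", "snow"}

def coverage_report(samples, min_per_situation=100):
    """Flag situations that are missing or under-represented in the data.

    samples: iterable of dicts with a "situations" tag list (assumed schema).
    """
    counts = Counter(tag for sample in samples for tag in sample["situations"])
    missing = sorted(REQUIRED_SITUATIONS - counts.keys())
    sparse = {s: counts[s] for s in REQUIRED_SITUATIONS
              if 0 < counts[s] < min_per_situation}
    return {"missing": missing, "under_represented": sparse}

# A dataset with no fog or snow samples is flagged, signalling residual
# uncertainty about those environment states.
samples = [{"situations": ["daylight", "rain"]}, {"situations": ["night"]}]
print(coverage_report(samples))
```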
B.3 Lack of transparency and explainability
Transparency is about communicating appropriate activities and decisions of an organization (e.g.
policies, processes) and appropriate information about an AI system (e.g. capabilities, performance,
limitations, design choices, algorithms, training and test data, verification and validation processes and
results) to relevant stakeholders. This can enable stakeholders to assess development, operation and use
of AI systems against their expectations. The kind and level of information that is appropriate strongly
depends on the stakeholders, use case, system type and legislative requirements. If organizations are
unable to provide the appropriate information to the relevant stakeholders, it can negatively affect
trustworthiness and accountability of the organization and AI system.
Explainability is the property of an AI system that the important factors influencing a decision can be
expressed in a way that humans can understand. An ML model can have behaviour that is difficult to
understand by inspection of the model or the algorithm used to train it, especially in the case of deep
learning. If such important factors cannot be expressed, validation of the AI system and the trust of humans in the system are negatively affected, as it is not clear why the system has made a decision and whether it can make the correct decision in all cases. This uncertainty can result in many risks and strongly affect general objectives such as trustworthiness and accountability, as well as specific objectives such as safety, security, fairness and robustness. Explainability is therefore relevant not only for stakeholders as part of AI system transparency, but also for the organization itself, for its own validation and verification of the AI system.
Excessive transparency and explainability can also lead to risks in relation to privacy, security, confidentiality requirements and intellectual property.
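As one way to surface the important factors influencing a decision of an otherwise opaque ML model, the sketch below uses permutation importance from scikit-learn. The standard does not prescribe any particular technique; this is only one illustrative option, and the dataset and model choices are arbitrary examples.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard example dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permute each feature and measure the drop in test score: features whose
# permutation hurts performance most are the important decision factors.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```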
B.4 Level of automation
AI systems can operate with different levels of automation, ranging from no automation, where an operator fully controls the system, to full automation. AI systems are often automated systems. Depending on the specific use case, the automated decisions of such systems can affect various areas of concern such as safety, fairness or security.
For a level of automation where an external agent must be ready when necessary, the handover from
the system to the agent can be a risk source (e.g. time constraints, attention of the agent).
For further information on levels of automation, see ISO/IEC 22989:2022, 5.2.
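The handover risk described above can be made explicit in the system design by treating takeover as a time-bounded step with a defined fallback. The following is a minimal sketch under assumed interfaces: the time budget and both callbacks (operator_confirms, minimal_risk_action) are hypothetical, not taken from the standard.

```python
import time

HANDOVER_BUDGET_S = 10.0  # assumed value; would come from hazard analysis

def request_handover(operator_confirms, minimal_risk_action):
    """Hand control to a human agent, falling back if the agent is not ready.

    operator_confirms: callable polled for takeover confirmation (assumed).
    minimal_risk_action: callable bringing the system to a safe state (assumed).
    """
    deadline = time.monotonic() + HANDOVER_BUDGET_S
    while time.monotonic() < deadline:
        if operator_confirms():
            return "operator_in_control"
        time.sleep(0.1)  # poll the agent at 10 Hz
    # The agent was not ready within the budget: the handover itself becomes
    # the risk source, so fall back instead of assuming control transferred.
    minimal_risk_action()
    return "minimal_risk_state"
```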
B.5 Risk sources related to machine learning
Many advances in AI are related to ML and subfields thereof such as deep learning. The behaviour of ML
systems is critically dependent not just on the algorithms in use but also on the data on which the ML
models are trained. Therefore, possible effects on AI characteristics include:
— Data quality: The quality of training and test data directly affects the functionality of the system.
Inadequate data quality can affect various objectives such as fairness, safety and robustness.
— For AI systems utilizing ML, the processes used to collect data are a source of risks that are especially
hard to diagnose and detect. For example:
— Data can become unrepresentative of the domain of application, leading to risks to business objectives (a minimal drift check is sketched after this list).
— Data sourcing and storage can incur significant ethical and legal risks. Failing to secure the
data collection process can lead to risks from adversarial attacks, data poisoning or other
manipulation.
— Continuous learning AI systems are intended to improve on the basis of evolving production data; at the same time, this can exacerbate risk, as such systems can change their behaviour during use in a way that was not expected when they were brought into use.
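A common way to notice that data has become unrepresentative is a statistical drift test between training and production distributions. The sketch below uses a per-feature two-sample Kolmogorov-Smirnov test; the significance threshold and the synthetic data are illustrative assumptions, and a real system would also need to handle categorical features and multiple-testing effects.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train, production, alpha=0.01):
    """Return indices of columns whose production distribution differs
    significantly from the training distribution (numeric features only)."""
    flagged = []
    for col in range(train.shape[1]):
        _, p_value = ks_2samp(train[:, col], production[:, col])
        if p_value < alpha:
            flagged.append(col)
    return flagged

# Synthetic example: simulate drift in one feature and detect it.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))
production = train.copy()
production[:, 2] += 0.5
print(drifted_features(train, production))  # expected: [2]
```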
B.6 System hardware issues
Risk sources related to hardware issues include, but are not limited to:
— Hardware errors based on defective components. Examples are short circuits or interruptions of
single or multiple memory cells, defective bus lines, drifting oscillators, stuck-at faults or parasitic
oscillations at the inputs or outputs of integrated circuits.
— Soft errors such as unwanted temporary state changes of memory cells or logic components, mostly
caused by high energy radiation.
— Transferring trained ML models between different systems can be constrained due to differing
hardware capabilities of the systems in terms of processing power, memory and the availability of
dedicated AI hardware accelerators.
— When an AI system requires remote processing and storage, network errors, bandwidth restrictions and increased latency, due to the limited and shared nature of network resources, can become risk sources (a minimal integrity check for transferred model files is sketched after this list).
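One narrow mitigation for silent corruption of stored or transferred model files, whether from defective memory, soft errors or network transfer faults, is to verify a cryptographic hash recorded at release time before loading the model. A minimal sketch, in which the file name and expected hash are hypothetical:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, expected_sha256):
    """Refuse to load a model file whose digest does not match the one
    recorded at release time."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"model file corrupted: {actual} != {expected_sha256}")

# verify_model("model.onnx", "ab12...")  # file name and hash are hypothetical
```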

B.7 System life cycle issues


Inappropriate or insufficient methods and processes, as well as inappropriate usage of an AI system along its life cycle, can lead to risks. Examples of such risks are:
— Design and development: A flawed design process can fail to anticipate the contexts in which the AI
system is used, causing it to fail unexpectedly when used in these contexts.
— Verification and validation: An inadequate verification and validation process for releasing updated versions of the AI system can lead to accidental regressions or unintended deterioration in quality, reliability or safety (a minimal regression gate is sketched after this list).
— Deployment: An inadequate deployment configuration can lead to resource problems related to
memory, compute, network, storage, redundancy or load balancing.
— Maintenance, update and revision: An AI system no longer supported or maintained by the developer
but still in use can present long-term risks or liability to the developing organization.
— Reuse: A functioning AI system can be used in a context for which it was not originally designed,
causing problems due to differing requirements between the designed and actual use. For example,
a system designed for identifying faces in photos shared on a social network can be used to attempt
to identify faces of criminal suspects in surveillance footage, an application that requires a much
higher degree of precision than the original use case.
— Decommissioning: Organizations that terminate the use of a certain AI system or a component
based on AI technologies can lose information or decision expertise that have been provided by the
decommissioned system. Moreover, if another system is used to replace the decommissioned one,
the way an organization processes information and makes decisions can change.
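For the verification and validation item above, a simple safeguard is to gate each release on a metric comparison against the current baseline. The sketch below is illustrative only: the metric names, values and tolerance are hypothetical, and a real gate would also account for statistical noise in the evaluation.

```python
REGRESSION_TOLERANCE = 0.005  # assumed: maximum allowed drop per metric

def release_gate(baseline_metrics, candidate_metrics):
    """Return the list of regressed metrics; release only if it is empty."""
    regressions = []
    for name, baseline in baseline_metrics.items():
        candidate = candidate_metrics.get(name, float("-inf"))
        if baseline - candidate > REGRESSION_TOLERANCE:
            regressions.append((name, baseline, candidate))
    return regressions

# Hypothetical metrics: accuracy improved, but recall on a critical class
# regressed, so the update is blocked.
baseline = {"accuracy": 0.943, "recall_critical_class": 0.981}
candidate = {"accuracy": 0.951, "recall_critical_class": 0.962}
print(release_gate(baseline, candidate))
```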
B.8 Technology readiness
Technology readiness indicates how mature a given technology is in a given application context. Less
mature technologies used in the development and application of AI systems can impose risks that
are unknown to the organization or are hard to assess. For mature technologies, a larger variety of experience data can be available, making risks easier to identify and assess. However, there is also a risk of complacency and technical debt when technologies are mature.
