ISO/IEC 23894:2023 AI Risk Management
Hardware constraints such as processing power, memory capacity, and the availability of AI hardware accelerators can limit the transfer and performance of machine learning models across systems. These constraints may impair a model's functionality when system capabilities differ. Reliance on remote processing adds further exposure: network errors, bandwidth limitations, and increased latency can all degrade the model's performance.
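The memory-capacity constraint above can be made concrete with a simple pre-transfer check. This is a minimal sketch under illustrative assumptions: the function name, the headroom factor, and the byte figures are hypothetical, not from the standard.

```python
# Sketch: check whether a model's memory footprint fits a target system
# before transferring it. Headroom factor is an illustrative assumption.

def fits_on_target(model_bytes: int, target_free_bytes: int,
                   headroom: float = 0.2) -> bool:
    """Require the model plus a safety headroom to fit in free memory."""
    return model_bytes * (1 + headroom) <= target_free_bytes

GIB = 2**30
# A 2 GiB model fits on a device with 4 GiB free, but not with 2 GiB free,
# because the 20% headroom pushes the requirement to 2.4 GiB.
fits_large = fits_on_target(2 * GIB, 4 * GIB)   # True
fits_small = fits_on_target(2 * GIB, 2 * GIB)   # False
```

An analogous check could gate on processing power or accelerator availability; the point is that transferability risks can be screened before deployment rather than discovered in operation.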
Inadequate decommissioning can result in the loss of valuable information or decision-making expertise embedded in the system. If poorly managed, transitioning to alternate systems may alter an organization's decision processes, potentially leading to inefficiencies or errors. Proper planning and execution of decommissioning help preserve critical knowledge and maintain operational continuity.
Using less mature AI technologies can introduce unknown or hard-to-assess risks because there is limited experience data to draw on. Organizations may struggle to identify or mitigate these risks effectively, potentially leading to unexpected issues during implementation. While mature technologies offer a larger base of experience for risk assessment, less mature ones carry uncertainties that require careful consideration and management before deployment in critical applications.
AI systems operating in complex environments face challenges in identifying all relevant situations that might arise during operation, and those situations must then be covered in training and testing data. When the environment is not completely understood, uncertainty increases and so does risk: the system may be unprepared for possible states of the environment, especially in scenarios such as autonomous driving, where the environment is unpredictable.
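One lightweight way to act on this is a coverage gap analysis: compare the set of anticipated operational situations against those represented in the training and test data. A minimal sketch follows; the scenario labels are hypothetical examples, not a taxonomy from the standard.

```python
# Sketch: flag anticipated operational situations that have no coverage
# in training/testing data. Scenario labels are illustrative.

def uncovered_situations(anticipated: set[str], covered: set[str]) -> set[str]:
    """Return anticipated environment states absent from the data."""
    return anticipated - covered

anticipated = {"rain", "night", "fog", "glare"}
covered = {"rain", "night"}
gaps = uncovered_situations(anticipated, covered)  # {"fog", "glare"}
```

The harder problem, of course, is enumerating `anticipated` in the first place; the check only surfaces gaps relative to what has already been identified.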
Data quality directly influences the functionality of AI systems, especially those based on machine learning. Poor data quality can compromise objectives such as fairness, safety, and robustness. Inadequate or unrepresentative data can lead to decisions that do not fit the intended domain, creating business risks, while improper data sourcing and storage can raise ethical and legal issues. Furthermore, poorly secured data collection processes are vulnerable to adversarial attacks and data manipulation.
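Basic data quality indicators such as missing-value rate and label imbalance can be computed before training. The sketch below uses only the standard library; the record layout, field names, and what counts as "missing" are illustrative assumptions.

```python
# Minimal sketch of pre-training data quality checks (stdlib only).
# Record layout and the label key are illustrative assumptions.
from collections import Counter

def quality_report(records: list[dict], label_key: str = "label") -> dict:
    """Report missing-value rate and majority-label share for tabular records."""
    total_fields = sum(len(r) for r in records)
    missing = sum(1 for r in records for v in r.values() if v is None)
    labels = Counter(r.get(label_key) for r in records)
    majority_share = max(labels.values()) / len(records)
    return {
        "missing_rate": missing / total_fields,
        "majority_label_share": majority_share,
    }

records = [
    {"x": 1.0, "label": "a"},
    {"x": None, "label": "a"},
    {"x": 2.0, "label": "a"},
    {"x": 3.0, "label": "b"},
]
report = quality_report(records)
# missing_rate = 1/8 = 0.125; majority_label_share = 3/4 = 0.75
```

A high majority-label share is one signal of unrepresentative data of the kind the paragraph warns about; in practice such checks would sit alongside sourcing and security controls, not replace them.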
Continuous learning AI systems aim to improve by leveraging evolving data. However, this can introduce risks such as unexpected behavioral changes after deployment that differ from initial expectations. Such changes may not align with the original system objectives and can undermine reliability, potentially leading to incorrect or unsafe decisions if they go unanticipated across the system's operational context.
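Post-deployment behavioral change can be monitored with a simple drift check that compares a live prediction statistic against a reference window. This is a deliberately crude sketch; the tolerance value and the use of a mean shift (rather than a proper statistical test) are illustrative assumptions.

```python
# Sketch: flag behavioral drift by comparing the mean of live prediction
# scores against a pre-deployment reference window. The tolerance is an
# illustrative assumption, not a recommended value.
from statistics import mean

def drifted(reference: list[float], live: list[float],
            tolerance: float = 0.1) -> bool:
    """Flag drift when live mean predictions shift beyond the tolerance."""
    return abs(mean(live) - mean(reference)) > tolerance

reference = [0.50, 0.52, 0.48, 0.51]
stable = [0.49, 0.53, 0.50, 0.50]    # within tolerance -> no drift
shifted = [0.70, 0.72, 0.68, 0.71]   # mean up ~0.2 -> drift flagged
```

A flagged drift would then feed the organization's risk process, e.g. triggering re-validation before the updated behavior is accepted.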
Different automation levels in AI systems present varying risks. Systems with partial automation require external agents to be ready for handover, posing risks related to time constraints and attentiveness. Fully automated systems may raise safety, fairness, or security issues due to the absence of human oversight. Identifying appropriate automation levels and ensuring readiness for interaction with human controllers is critical to mitigating these risks.
Insufficient understanding of an AI system's environment can lead to design inadequacies where not all environmental states are accounted for, resulting in higher uncertainty and risks. Proper risk management requires anticipation of relevant situations the system may encounter. Failing to do so can lead to unpreparedness and suboptimal risk control, making the system vulnerable to unforeseen circumstances and compromising its safety and robustness.
Transparency and explainability are crucial for stakeholders to assess AI systems against their expectations, affecting trustworthiness and accountability. They allow stakeholders to understand the capabilities, performance, and limitations of these systems. However, excessive transparency and explainability can create privacy, security, confidentiality, and intellectual property risks, so this balance must be maintained while keeping stakeholders informed.
Risks can arise at every stage of the AI system life cycle, including unanticipated design flaws, inadequate verification, and flawed deployment configurations. During design and development, failure to consider all contexts of use can lead to unexpected operational failures. Weak verification and validation can allow accidental regressions. Insufficient deployment configuration may cause resource-related issues. Long-term risks arise when systems go unsupported during maintenance or are reused in inappropriate contexts.
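The regression risk mentioned above is often controlled by replaying a fixed evaluation suite against each new model version. A minimal sketch, assuming models can be treated as callables and using an illustrative agreement threshold:

```python
# Sketch: guard against accidental regressions between model versions by
# replaying a fixed evaluation suite. Models are stand-in callables and
# the 95% agreement threshold is an illustrative assumption.

def regression_check(old_model, new_model, suite: list,
                     min_agreement: float = 0.95) -> bool:
    """Pass only if the new version agrees with the old on most suite cases."""
    agree = sum(1 for x in suite if old_model(x) == new_model(x))
    return agree / len(suite) >= min_agreement

old = lambda x: x >= 0      # baseline decision rule (illustrative)
new = lambda x: x >= 0.5    # candidate revision that shifts the boundary
suite = [-1.0, -0.5, 0.0, 0.25, 0.75, 1.0]
passed = regression_check(old, new, suite)  # False: only 4/6 cases agree
```

In a real verification and validation process the suite would be version-controlled alongside the model, so that checks remain reproducible throughout maintenance and any later reuse.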