
14.12.23

Contextualizing AI Risks: A Deep Dive into System Classification and Stakeholder Assessment

Technology is not value-neutral, and it never has been. Designers' choices often carry ethical risks: they play a crucial role in determining how a technology will be used, for what purposes, and which users will have privileged access to it. These decisions are closely tied to underlying values and ethical considerations.

In this post, we shed light on how to identify ethical risks based on the technical and socio-economic features of an AI system. The white papers “OECD Framework for the Classification of AI Systems” by the Organisation for Economic Co-operation and Development (OECD) and “Data Justice in Practice: A Guide for Policymakers” by the Global Partnership on Artificial Intelligence (GPAI) will serve as central references.



System Classification


Classifying an AI system is the first step of an Ethical Impact Assessment (EIA). This is necessary to identify the sensitive areas in which AI risks may arise. Moreover, understanding the purpose and the technical features of the system also allows it to be framed within the risk-based classification of the EU AI Act. Depending on the level of risk, different regulatory requirements apply.

To perform a classification, the OECD recommends analyzing five dimensions: People & Planet; Economic Context; Data & Input; AI Model; Tasks & Output. In the following, we will briefly illustrate how defining the system’s features in each of these dimensions contributes to risk identification.
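
As a concrete starting point, the sketch below shows how such a classification could be captured as a structured record, with one field per OECD dimension. It is a minimal sketch in Python; the field names and example values are our own illustrative choices, not terminology prescribed by the framework.

```python
from dataclasses import dataclass, field

# Hypothetical record for an OECD-style system classification.
# Field names and example values are illustrative choices, not
# terminology prescribed by the OECD framework.
@dataclass
class AISystemClassification:
    people_and_planet: dict = field(default_factory=dict)
    economic_context: dict = field(default_factory=dict)
    data_and_input: dict = field(default_factory=dict)
    ai_model: dict = field(default_factory=dict)
    tasks_and_output: dict = field(default_factory=dict)

# Example: a first pass at classifying the credit scoring system
# discussed throughout this post.
credit_scoring = AISystemClassification(
    people_and_planet={"impacted_groups": ["loan applicants"],
                       "rights_at_stake": ["non-discrimination"]},
    economic_context={"sector": "finance", "criticality": "high"},
    data_and_input={"provenance": "customer records", "personal_data": True},
    ai_model={"type": "gradient boosting", "interpretable": False},
    tasks_and_output={"task": "credit risk scoring", "human_oversight": True},
)
```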

People & Planet

According to the OECD definition, this dimension “identifies individuals and groups that interact with or are affected by an applied AI system.” It also evaluates the impact of an AI system on “human rights, the environment, well-being, society and the world of work.” Investigating this dimension is important to understand the consequences of a specific AI application for human rights and democratic values such as safety and security, physical and psychological integrity, freedom of expression and association, equality and non-discrimination, and so on. For example, if we consider an AI-driven credit scoring system, concerns will include the potential to create unequal access to credit and the subsequent effect on the financial well-being of individuals. Such concerns should be addressed through regular audits ensuring that decisions are unbiased and credit is fairly allocated.
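
As a rough illustration of what one such audit check could look like, the sketch below compares approval rates across groups (a demographic parity check). The data layout, group labels, and tolerance threshold are illustrative assumptions, not OECD requirements.

```python
from collections import defaultdict

def approval_rate_gap(decisions, max_gap=0.1):
    """decisions: list of (group, approved) pairs; max_gap is an
    illustrative tolerance, not a regulatory threshold."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Toy data: group_a is approved twice as often as group_b.
rates, gap, within_tolerance = approval_rate_gap([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates, round(gap, 3), within_tolerance)  # gap ≈ 0.333 -> flagged
```

In practice, a single gap metric is only a starting point; real audits would combine several fairness metrics with qualitative review.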

Economic Context

This encompasses, among other things, the economic sector in which the system is deployed (e.g. insurance, automotive, healthcare), its business function, impact, and scale. The OECD highlights that the policy implications of deploying AI systems vary significantly from one sector to another. Depending on the sector, the application of the same task in different functional areas has a different impact on economic and social benefits, jobs and skills, education, safety, and so on. Moreover, in critical sectors such as energy, transport, water, health, digital infrastructure, and finance, critical functions are accompanied by heightened risk considerations. In our example, the impact of the credit scoring system on the banking sector and on credit assessment roles and procedures must be evaluated. In response to potential challenges, possible measures include strategies to manage job displacement, such as retraining programs, as well as ensuring proper oversight and adherence to AI and credit assessment regulations. In addition, proactive loss mitigation plans may be developed to manage potential financial consequences resulting from system errors.
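
As a small illustration of the sector consideration, the sketch below flags deployments in the critical sectors listed above for heightened review. The review labels are our own illustrative choices.

```python
# Critical sectors as listed by the OECD; the review labels below
# are illustrative, not part of the framework.
CRITICAL_SECTORS = {"energy", "transport", "water", "health",
                    "digital infrastructure", "finance"}

def risk_review_level(sector: str) -> str:
    return "heightened" if sector.lower() in CRITICAL_SECTORS else "standard"

print(risk_review_level("finance"))  # heightened -> applies to credit scoring
print(risk_review_level("retail"))   # standard
```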

Data & Input

Following the OECD classification, characteristics of this dimension include “the provenance of data and inputs, machine and/or human collection method, data structure and format, and data properties.” Models require good data, yet collecting extensive and detailed data often conflicts with individual privacy. Moreover, data collection methodologies can be associated with risks concerning the working conditions of digital laborers, consumer protection, the system’s biases, transparency and explainability, resource consumption, safety, and robustness. Considering the credit scoring use case, concerns arise from potential biases present in the data due to socio-economic factors, and from the overall robustness of the credit scoring model given the quality of the data used. Measures to tackle these issues include ensuring that data collection adheres to regulatory standards and customer agreements, as well as implementing bias monitoring and mitigation techniques.
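
One way such bias monitoring could begin is with a simple representation check that compares group shares in the training data against a reference population, as sketched below. The reference shares and tolerance are illustrative assumptions.

```python
def representation_check(samples, reference_shares, tolerance=0.05):
    """samples: list of group labels; reference_shares: expected
    population proportions. Tolerance is an illustrative assumption."""
    counts = {}
    for g in samples:
        counts[g] = counts.get(g, 0) + 1
    n = len(samples)
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / n
        report[group] = {"observed": round(observed, 3),
                         "expected": expected,
                         "flagged": abs(observed - expected) > tolerance}
    return report

# Toy data: group "a" is overrepresented relative to the reference.
print(representation_check(samples=["a"] * 80 + ["b"] * 20,
                           reference_shares={"a": 0.6, "b": 0.4}))
```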

AI Model

This is a representation of the environment of an AI system. According to the OECD, it encompasses processes, objects, ideas, people and/or interactions taking place in the system’s deployment and application context. For example, expert knowledge and performance measures used for training and optimization, or the objectives set while monitoring generated output, are elements of an AI model. Here too, the system’s explainability, robustness, fairness, and resilience to cyberattacks can vary depending on the AI model’s configuration and features. For instance, a model trained without a fully centralized dataset (e.g. via federated learning) addresses data protection concerns better than one trained on a centralized dataset; random elements in a model can affect the reproducibility of results; deep neural networks with low interpretability can impair explainability; and so on. In our credit scoring example, some of the main concerns include opacity deriving from model complexity, as well as security vulnerabilities leading to data breaches. To address these issues, more interpretable models should be preferred as long as they do not undermine system robustness and data protection, and periodic cybersecurity assessments should be performed.
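
The preference for interpretable models can be operationalized as a simple selection rule: accept the interpretable model unless it costs more than a chosen performance margin. The sketch below illustrates this; the margin and accuracy figures are illustrative assumptions, not values from the OECD framework.

```python
def select_model(interpretable_score, black_box_score, max_drop=0.02):
    """Prefer the interpretable model unless it loses more than
    max_drop in performance; the margin is an illustrative choice."""
    if black_box_score - interpretable_score <= max_drop:
        return "interpretable"  # close enough: favour explainability
    return "black_box"          # gap too large: revisit rather than auto-accept

print(select_model(interpretable_score=0.87, black_box_score=0.88))  # interpretable
print(select_model(interpretable_score=0.80, black_box_score=0.88))  # black_box
```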

Tasks & Output

The last dimension proposed by the OECD refers to the specific tasks the system performs and considers its concrete outputs and resulting actions. Examples of tasks include image recognition, forecasting future values, detection of events such as fraud or human error, content recommendation, sentiment analysis, and so on. Depending on the task and the area of application, the OECD remarks that specific actions might be required to mitigate ethical risks. For instance, keeping a human in the loop may be important for accountability in forecasting, while higher transparency and disclosure of the fact that the user is interacting with an AI system might be required in interaction support tasks (e.g. chatbots). In our example, one of the core concerns is ensuring the accuracy and reliability of the credit scores generated by the AI system. Issues could also arise from a lack of explanation for the scores, leading to potential miscommunication. Measures to address these concerns include regular performance audits, as well as implementing a review procedure that explains how a credit score was calculated and allows individuals to challenge it.
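
For a simple additive scoring model, such an explanation can be generated directly from the per-feature contributions, as in the sketch below. The weights and features are illustrative and do not represent a real credit scoring model.

```python
# Illustrative weights for a toy additive scoring model; not a real
# credit scoring model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.35, "payment_history": 0.25}

def explain_score(applicant):
    """Return the score and per-feature contributions, most influential first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain_score(
    {"income": 0.7, "debt_ratio": 0.5, "payment_history": 0.9})
print(f"score={score:.2f}")
for feature, contribution in ranked:
    print(f"  {feature}: {contribution:+.3f}")
```

A ranked breakdown like this gives reviewers and applicants a concrete basis for challenging a score, which is exactly what the review procedure above calls for.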

Stakeholder Analysis


To complete the analysis of possible risk scenarios, it is important to assess in detail which stakeholders the AI system affects and how. Following the insights of the GPAI white paper “Data Justice in Practice”, we distinguish four phases of a stakeholder assessment.

Stakeholder Identification

Stakeholder identification involves mapping all individuals and social groups that may impact or be impacted by the AI application. As part of this process, it should be determined whether any of these stakeholders possess sensitive or protected characteristics that may increase their vulnerability to abuse, adverse impact, or discrimination. Furthermore, the procedure entails evaluating whether the AI system raises significant concerns for specific groups of stakeholders due to vulnerabilities caused by the system’s use.
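
A minimal stakeholder register for this step might look like the sketch below. The attributes are our own illustrative choices, not fields prescribed by the GPAI guide.

```python
from dataclasses import dataclass

# Hypothetical stakeholder register; the attributes are illustrative,
# not fields prescribed by the GPAI guide.
@dataclass
class Stakeholder:
    name: str
    relation: str  # e.g. "impacts the system" or "impacted by the system"
    protected_characteristics: bool
    vulnerable_to_system_use: bool

stakeholders = [
    Stakeholder("loan applicants", "impacted by the system", True, True),
    Stakeholder("credit officers", "impacts the system", False, False),
    Stakeholder("regulators", "impacts the system", False, False),
]

# Flag groups needing heightened attention in the next assessment phases.
priority = [s.name for s in stakeholders
            if s.protected_characteristics or s.vulnerable_to_system_use]
print(priority)  # ['loan applicants']
```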

Scoping Potential Stakeholder Impacts

This process entails an evaluation of how the AI application may affect a set of core ethical priorities, including respect for human dignity, protection of human freedom and autonomy, prevention of harm, fairness and non-discrimination, data protection, and respect for private and family life. The goal is to foresee positive and negative outcomes, including any harm that could arise if the AI system malfunctions or produces unintended consequences.
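
Such a scoping exercise could be recorded as a simple checklist over the listed priorities, as sketched below for the credit scoring example; the entries are illustrative.

```python
# Checklist over the core ethical priorities named above; the example
# entries for the credit scoring case are illustrative.
ETHICAL_PRIORITIES = [
    "respect for human dignity",
    "protection of human freedom and autonomy",
    "prevention of harm",
    "fairness and non-discrimination",
    "data protection",
    "respect for private and family life",
]

impact_scoping = {p: {"positive": "", "negative": ""} for p in ETHICAL_PRIORITIES}
impact_scoping["fairness and non-discrimination"]["negative"] = (
    "scores may encode historical lending bias against some groups")
impact_scoping["prevention of harm"]["positive"] = (
    "consistent scoring may reduce arbitrary loan rejections")
```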

Analyzing Stakeholder Salience

This involves evaluating which stakeholder groups are most likely to be positively or negatively impacted by the AI system, bearing in mind their specific needs. This includes understanding the dynamics that could influence how benefits and risks are distributed among stakeholders. Particular attention is paid to those stakeholders whose limited influence might impede their ability to share in the AI system's benefits or protect themselves from its potential risks. The objective is to ensure a fair and equitable consideration of all parties involved or affected by the AI system.
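
One simple heuristic for this step is to rate each group’s expected impact and its ability to influence the system, then surface highly impacted groups with little influence, as sketched below. The scales and scores are illustrative, not a method prescribed by the GPAI guide.

```python
# Illustrative impact/influence ratings on a 1-5 scale; not a method
# prescribed by the GPAI guide.
groups = {
    "loan applicants with thin credit files": (5, 1),  # (impact, influence)
    "average retail customers": (3, 2),
    "bank compliance team": (2, 5),
}

def low_influence_high_impact(groups, impact_min=4, influence_max=2):
    """Surface groups that are highly impacted but have little influence."""
    return [name for name, (impact, influence) in groups.items()
            if impact >= impact_min and influence <= influence_max]

print(low_influence_high_impact(groups))
# ['loan applicants with thin credit files']
```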

Determining Stakeholder Engagement Methods

This last step considers possible ways of collecting meaningful stakeholder feedback on the AI application and including the perspective of impacted parties in the implementation of the AI solution. It involves recognizing and accommodating the needs of diverse stakeholders, particularly addressing participation barriers for vulnerable groups. Engagement methods range from highly interactive and co-creative approaches, such as collaborative activities where stakeholders help shape discussions and decision-making, to more informative and consultative ones, such as online surveys and interviews, newsletters, and forums. Choosing the right engagement methods is critical to ensure that they are accessible, considerate of different levels of understanding, and effective in facilitating meaningful participation.
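
A selection rule along these lines could look like the sketch below, mapping higher salience and identified participation barriers to more interactive formats. The mapping is an illustrative heuristic, not a rule from the GPAI guide.

```python
def engagement_method(salience: int, barriers_to_participation: bool) -> str:
    """salience: 1 (low) to 5 (high); the mapping is an illustrative
    heuristic, not a rule from the GPAI guide."""
    if salience >= 4:
        # Highly affected groups get co-creative formats, with access
        # support if participation barriers were identified.
        return ("facilitated co-design workshops with accessibility support"
                if barriers_to_participation else "co-design workshops")
    if salience >= 2:
        return "interviews and online surveys"
    return "newsletters and public forums"

print(engagement_method(salience=5, barriers_to_participation=True))
```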

Summing up, our key takeaway is that the ethical risks of AI systems are closely linked to their specific functions and the individuals and groups they impact. By carefully classifying these systems and understanding the stakeholders involved, we pave the way for more informed risk assessments.

As we move to our next topic, we'll tackle the specific areas where we can audit these risks, making sure AI works responsibly and benefits everyone. Stay tuned for an insightful look at how we can keep our AI systems in check and ethically sound.
