AI and Equality

The AI Life Cycle

Based on our Human Rights-based approach, we explore questions at each phase of the AI life cycle, focusing on fairness and debiasing AI.

AI has often harmed or exploited vulnerable communities, even when well-intentioned. We believe that engaging in critical analysis throughout and after AI development will enable AI creators to consider best practices from the outset, including improved engagement with affected communities. A Human Rights-based approach is likely to result in more sustainable solutions that have a positive impact on the communities they are designed to serve.

We anchor this approach in reflective questions, which we call Essential Questions, posed along the AI life cycle as it moves from the identification of a societal problem to the technical solution that addresses it.

In the AI & Equality Human Rights Toolbox we distinguish the following six stages of the life cycle:

The AI Life Cycle stages

A short overview of the six stages follows, together with some of the Essential Questions that AI creators should reflect on at each stage.

Stage 1.
Defining objectives

The first essential step is to reflect on the objective and purpose for building a system. Often, the vision of what AI should look like, and whom it should support, reflects only the needs of the people in power rather than the needs of the communities the system will serve and affect. To remedy this, early engagement and participatory development with the affected communities is essential.

  • How can affected communities be included in the design process?

Stage 1 (continued).
Team composition

Numerous people are involved in the creation and operation of an AI system – and not just the people writing code! Your objective fundamentally informs the team composition required to build the system: the skills and expertise needed, but also the diversity of its members' backgrounds, perspectives, and experiences with the environment for which your system is developed.

  • Is the team diverse regarding lived experiences (culture and demographics)?
Stage 2.
System requirements

At the second stage, the system’s objective is formalized into a list of requirements. The list of system requirements should be developed in dialogue with various other roles (e.g. the development team) to ensure the system is feasible. This dialogue with the various communities and roles often relies on no-code artifacts such as sketches, wireframes, or other low-fidelity prototypes to communicate ideas.

The process of defining the system’s requirements should be iterative and fluid; it is very likely that the list of requirements will change as more details about the context and the needs of impacted communities become apparent.

  • Inclusivity considerations: Have the requirements been confirmed by a wide range of stakeholders, including affected communities?

  • Explainability considerations: Can you explain the algorithm’s basic mechanisms without using technical terms?

  • Accountability considerations: What is the accountability structure?

    • What level of human oversight should be aimed for?

    • What expertise/training will the human in the loop require?

Stage 3.
Data discovery

In this step, developers reflect on whether the data is representative of the use case and context. The dataset can be (and often is) the first way in which bias enters the system. To avoid this, it is essential to ensure a good socio-cultural fit between the dataset and the intended application context, e.g. regarding demographics, culture, or environmental factors.

Domain experts should be consulted to ensure that the dataset captures the application context correctly. Their feedback should be followed, both on new or further data collection and on mathematical methods to balance the dataset (a minimal representativeness check is sketched after the questions below).

  • Who collected the data and for which purpose?
  • What data pre-processing steps are required to create a “fair” model in this context?
  • What historical/present bias in the data might compromise human rights?
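
As a concrete illustration, below is a minimal sketch of such a representativeness check. The column name, groups, and reference shares are hypothetical placeholders; in a real project they would come from census data, domain experts, and the affected communities themselves.

```python
# Minimal sketch: does the dataset match the demographics of the
# deployment context? Column name, groups, and reference shares
# are hypothetical placeholders.
import pandas as pd

# Reference distribution for the application context, e.g. from
# census data or domain experts (hypothetical numbers).
REFERENCE = {"gender": {"female": 0.51, "male": 0.49}}

def representation_gaps(df, reference, tol=0.05):
    """List (column, group, expected, observed) where the observed
    share deviates from the reference by more than `tol`."""
    gaps = []
    for column, expected in reference.items():
        observed = df[column].value_counts(normalize=True)
        for group, expected_share in expected.items():
            observed_share = float(observed.get(group, 0.0))
            if abs(observed_share - expected_share) > tol:
                gaps.append((column, group, expected_share, observed_share))
    return gaps

# Toy dataset that over-represents one group.
df = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
for column, group, want, got in representation_gaps(df, REFERENCE):
    print(f"{column}={group}: expected ~{want:.0%}, observed {got:.0%}")
```
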
Stage 4.
Selecting and developing a model

Before the model can be developed, it is important to consider which type of AI model best satisfies the system requirements. The most complicated deep-learning algorithm is not always the most suitable for a given context. Instead, it is about choosing the most suitable model for the required scope while managing trade-offs.

The model development itself should be seen as an iterative process. It is important at this stage to look back at data collection, to ensure there is a good fit between the data, what you intend to model, and the model you have chosen. The iteration continues in the testing stage, where the model is tested against the objective and improved until it meets the success metrics.

  • Model ethical considerations:

    • What type of model is most useful for this objective?

    • Can this type of model fulfill fairness requirements in a way that is suitable for the context?

  • Explainability and transparency

    • Can the model alert the user if it is confronted with an instance outside the scope of its training data? (See the sketch after this list.)
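
As one illustration of the last question, here is a minimal sketch of an out-of-scope alert built around scikit-learn's IsolationForest. The detector choice, toy features, and routing logic are illustrative assumptions, not a method prescribed by the toolbox.

```python
# Minimal sketch: flag inputs that look out-of-scope relative to the
# training data, so the prediction can be routed to human review.
# IsolationForest and the toy features are illustrative choices.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))  # stand-in for training features

detector = IsolationForest(random_state=0).fit(X_train)

def predict_with_alert(model_predict, x):
    """Return (prediction, in_scope); in_scope is False when the
    detector judges x unlike anything seen during training."""
    in_scope = detector.predict(x.reshape(1, -1))[0] == 1
    return model_predict(x), in_scope

# An extreme instance should trigger the alert.
_, in_scope = predict_with_alert(lambda x: 0, np.full(4, 8.0))
print("in scope:", in_scope)  # expected: False
```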

Stage 5.
Testing and interpreting outcomes

After the model has been developed, we have to test whether it fulfills the system requirements defined in stage 2. For some requirements this can be done via technical tests; others require stakeholder feedback, e.g. whether the intended level of explainability was achieved for the end user.

It is important to place affected communities at the core of the assessment. At this stage, their involvement aims at assessing whether the requirements defined in stage 2 are fulfilled or respected appropriately. Since the affected parties were involved throughout the project, this stage can be understood as their final ‘sign-off’ on the needs and risk mitigations identified and addressed throughout the development process.

Insights gained from the testing stage should inform a ‘manual’ handed to the future operators that describes both the contexts in which the system is expected to operate well and the situations in which it is expected to perform poorly.

  • What measures of model performance are being tested, and why were those measures chosen? (A disaggregated-evaluation sketch follows these questions.)
  • How much human oversight is required, and by a supervisor with what level of expertise?
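
One common technical test is disaggregated evaluation: computing the same performance metric per group rather than only in aggregate, so that gaps affecting specific communities become visible. A minimal sketch, using toy labels, predictions, and group memberships:

```python
# Minimal sketch: the same metric, disaggregated by group.
# Labels, predictions, and group membership are toy values.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

overall = accuracy_score(y_true, y_pred)
print(f"overall accuracy: {overall:.2f}")
for g in np.unique(group):
    mask = group == g
    acc = accuracy_score(y_true[mask], y_pred[mask])
    print(f"group {g}: accuracy {acc:.2f} (gap {acc - overall:+.2f})")
```
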
Stage 6.
Deployment and Audit

Deployment

The deployment step serves as a final sanity check: have all harms, discriminatory impacts, and consequences been considered, communicated, and accounted for? We recommend conducting a thorough Human Rights Impact Assessment to ensure that the system is assessed for negative Human Rights impacts in its final form.

It’s crucial that operators and strongly affected communities are able to raise issues they experience in relation to the system. The decision as to whether the system is ready to be deployed is a powerful one. We recommend gathering the opinions of affected communities on the system’s readiness – after all, they have to bear the consequences of a faulty operation.

Post-deployment

After deployment, the system should be audited regularly, with opportunities for affected communities to provide feedback. The deployed system might expose previously unknown challenges or problems that have to be detected.

Lastly, ensure that the system complies with relevant regulations and guidelines, such as GDPR, CCPA, the new EU AI Act, or industry-specific standards. 

  • Deployment:

    • Who decides the model is ready to be deployed?

    • Have you conducted a Human Rights Impact Assessment now that the full model operation is known?

  • Post-deployment:

    • Are there processes that allow operators to alert suspected system inaccuracies?

    • What alerts you if the objective loses its purpose? How would you know when it is time to retire the system? (A minimal drift-monitoring sketch follows.)
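
As a minimal sketch of the kind of automated alert the last question asks about, the snippet below compares a production feature's distribution against its training distribution with a two-sample Kolmogorov–Smirnov test. The feature values, window size, and significance level are illustrative assumptions; a real post-deployment audit would also track per-group performance and community feedback.

```python
# Minimal sketch: alert when a production feature drifts away from
# the training distribution. Feature values, window size, and ALPHA
# are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, size=2000)  # reference distribution
live_feature = rng.normal(0.8, 1.0, size=500)    # recent production window

result = ks_2samp(train_feature, live_feature)
ALPHA = 0.01  # illustrative significance level
if result.pvalue < ALPHA:
    print(f"drift alert (KS={result.statistic:.3f}) - schedule an audit")
else:
    print("no significant drift in this window")
```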

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.