AI Fairness in the EU AI Act

Fairness and non-discrimination are two topics of critical importance for the ethical use of artificial intelligence. Time and again, AI has been portrayed negatively in the news because careless use of the technology resulted in unfair and discriminatory treatment of citizens. As a result, more and more people are demanding that AI be regulated so that companies use it ethically.

Corporations across the globe are paying close attention to regulatory developments concerning AI. The upcoming European Artificial Intelligence Act (EU AI Act) is the most comprehensive effort to regulate the technology and prevent harmful consequences for individuals’ lives and for society in general. It aims to create an environment of trust and safety for the use of artificial intelligence.

The EU AI Act adopts a risk-based approach, categorizing AI applications into risk buckets according to their potential impact on societal outcomes. AI systems making major decisions that affect citizens’ lives, such as those evaluating credit applications, screening job candidates, or monitoring critical infrastructure, will face greater scrutiny.

Given these goals, it is surprising at first glance that the Act makes no explicit mention of the concept of fairness. Does that mean fairness is not an important principle for ethical and trustworthy AI?

Why Doesn’t the EU AI Act Refer to Fairness?

We need to dig deeper to understand why the term “fairness” was deliberately omitted in the drafting of the EU AI Act.

It is straightforward to appreciate the potential impact of high-risk AI systems on, for example, credit or job applications. Discrimination against underprivileged and protected groups can easily result from training AI systems on real-world data, which can encode historical biases and be statistically unbalanced.

One of the requirements for high-risk AI systems is stated in the EU AI Act as follows:

Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. […] In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.

EU AI Act Preamble

An important reason why the term “fairness” is not explicitly mentioned could be, in our opinion, that any notion of fairness would need to be harmonized with the existing non-discrimination frameworks of divergent national legal regimes. Each of the 27 EU member states has its own concepts of non-discrimination and fairness, based on its values and history, which means that no universally accepted “right definition” can be imposed at the EU level. As such, the EU AI Act will surely be complemented by legislation at the national level that aims to establish practical guidelines for safeguarding fairness in AI systems.

A Closer Look at EU AI Policy

A good place to start understanding EU AI policy is to examine one of the four documents that served as inspiration for the drafting of the EU AI Act: the Ethics Guidelines for Trustworthy AI, published by the High-Level Expert Group on AI in 2019. According to the document, the seven pillars of trustworthy AI are:

  • Human agency and oversight 
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Accountability

In this document, fairness is mentioned explicitly and tied directly to the concept of ethical and trustworthy AI.

As discussed above, providing practical guidelines on safeguarding the fairness of AI systems will be the task of complementary national legislation. So, while the EU AI Act itself does not prescribe a notion of “fairness,” national non-discrimination law already does. Companies must ensure compliance with these regulations if they want to keep using the technology to generate economic value. Noncompliance will bring fines, cause reputational damage, and generally slow AI adoption. These are all reasons why companies need to carefully assess the fairness of their critical AI applications and proactively work to minimize biases and discriminatory outputs.

Data-Centric AI and AI Fairness

AI models are not inherently unfair or discriminatory: they are simply a reflection of the data used to train them and, as such, can preserve and amplify existing real-world biases. This is why the Data-Centric approach to AI can be very powerful in building fairer AI models.
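
To make this concrete, below is a minimal, self-contained sketch using toy data and hypothetical column names (`group`, `approved`). It shows how a skew already present in the records surfaces directly in a standard fairness metric, in this case the demographic parity difference between two groups:

```python
import pandas as pd

# Toy historical lending data: approval rates differ between groups A and B,
# reflecting a (hypothetical) real-world bias encoded in the records.
data = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30    # group A: 70% approved
              + [1] * 40 + [0] * 60,   # group B: 40% approved
})

# Demographic parity difference: gap in positive-outcome rates between groups.
rates = data.groupby("group")["approved"].mean()
parity_gap = rates["A"] - rates["B"]

print(rates)                                                # A: 0.70, B: 0.40
print(f"Demographic parity difference: {parity_gap:.2f}")   # 0.30
```

A model trained to reproduce these labels would inherit roughly the same gap, which is the sense in which a model is a reflection of its data.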

Modulos Data-Centric AI allows companies to take a proactive approach and stay ahead of the curve in addressing upcoming challenges related to the ethical use of AI. By pinpointing the exact samples in a dataset that could be responsible for biased, unfair, or even discriminatory outcomes, Data-Centric AI creates the basis for monitoring and improving the fairness of an AI system in a fast, efficient, and transparent manner.
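
The text above does not spell out how such samples are identified, so the following is only a generic sketch of the underlying idea, not the Modulos implementation: score each sample by how much the fairness metric improves when that sample is left out (a brute-force leave-one-out estimate). The helper names are hypothetical, and `data` refers to the toy dataset from the previous snippet.

```python
import pandas as pd

def parity_gap(df: pd.DataFrame) -> float:
    """Absolute gap in positive-outcome rates between groups A and B."""
    rates = df.groupby("group")["approved"].mean()
    return abs(rates["A"] - rates["B"])

def rank_samples_by_influence(df: pd.DataFrame) -> pd.Series:
    """Score each sample by how much removing it shrinks the parity gap.

    Positive scores mark samples whose removal makes the dataset fairer.
    Brute-force leave-one-out: fine for small data, too slow at scale.
    """
    base = parity_gap(df)
    scores = {idx: base - parity_gap(df.drop(index=idx)) for idx in df.index}
    return pd.Series(scores).sort_values(ascending=False)

# Usage with the toy data from the previous snippet:
# top_offenders = rank_samples_by_influence(data).head(10)
```

Real systems would use faster influence approximations, but the principle is the same: make each sample’s contribution to an unfair outcome measurable.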

Users can follow the evolution of different fairness metrics at every iteration of their datasets. By following the system’s guidance, they can remove sources of bias in a systematic, goal-oriented manner, as sketched below. The methodology is very flexible and can be adapted to a variety of legal frameworks in which different concepts of fairness may apply.
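
A minimal sketch of what such iteration tracking could look like, reusing the hypothetical helpers and toy data from the snippets above: recompute the metric after every curation step so that the effect of each change is recorded.

```python
# Track the fairness metric across successive dataset versions.
history = []
current = data.copy()

for iteration in range(3):  # e.g., three curation rounds
    history.append((iteration, parity_gap(current)))
    # Drop the single most influential biased sample found in this round
    # (an illustrative curation step; a real workflow would review it first).
    worst = rank_samples_by_influence(current).index[0]
    current = current.drop(index=worst)

for it, gap in history:
    print(f"iteration {it}: parity gap = {gap:.3f}")
```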

Data-Centric AI is not only a very effective response to the complex topic of AI fairness but also a comprehensive tool for ethical AI in line with the ethical guidelines set forth by the EU:

  • It promotes a “human in the loop” approach across the whole ML lifecycle.
  • It requires accountability for data from subject matter experts (e.g., business leaders and data scientists).
  • It tracks every action performed on the data and its impact on model performance, supporting potential audits.
  • It enables implementations that achieve accurate, fair, and robust results.

Learn more on the topic by watching our webinar on Fairness in Credit Risk or reading our blog post that sheds light on different fairness metrics by examining the practical use case of commercial lending.
