Proxy Discrimination in Artificial Intelligence: What We Know and What We Should Be Concerned About

Costanza Nardocci is a Lawyer and a Professor of Constitutional Law at the Department of Italian and Supranational Public Law, University of Milan (Italy). Her research revolves around Constitutional Law with a focus on human rights, anti-discrimination law, minority rights, multiculturalism, and artificial intelligence. As a lawyer, she deals with human rights claims before the Italian Constitutional Court and the European Court of Human Rights.

The links between artificial intelligence and discrimination are well known. In recent years, especially since the emergence of powerful new tools such as ChatGPT, the discriminatory biases created or reproduced by AI tools have been extensively documented in the news[1]. However, the technical drivers of these issues remain largely unexplored and under-researched from a legal perspective. Costanza Nardocci, Professor of Constitutional Law at the University of Milan, is particularly interested in the concept of proxies. This blog post presents the main takeaways from her recent article Proxy Discrimination in Artificial Intelligence.

First of all, what is a proxy in AI? And how can it be considered a type of discrimination?

A proxy is an element, such as an individual quality defining human beings, that an AI system uses to make distinctions between individuals and/or social groups. The proxy can act as, and therefore be compared to, a traditional factor of discrimination when, as is often the case, it directly or indirectly correlates with a protected characteristic, such as gender, age, race or ethnicity, leading to biased decisions generated by the AI system. Discrimination occurs every time the proxy perpetuates biases by disproportionately affecting certain individuals and groups, even without grounding the distinction on classical factors of discrimination, but rather on their correlation with a presumably non-discriminatory trait or element. For example, zip codes used as proxies for socioeconomic status in an algorithm may indirectly discriminate against certain racial or ethnic groups due to historical residential segregation. In short, proxy discrimination here is very similar to discrimination by association, a form of discrimination that relies on elements that are predictive of one or more traditional factors of discrimination.
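
To make the zip-code example concrete, here is a minimal sketch, using entirely hypothetical data and column names, of how one might check whether a candidate proxy tracks a protected characteristic and whether outcomes already diverge along that line:

```python
import pandas as pd

# Hypothetical applicant records; in practice this would be the data the AI system sees.
df = pd.DataFrame({
    "zip_code":  ["10001", "10001", "10001", "10451", "10451", "10451"],
    "ethnicity": ["A",     "A",     "B",     "B",     "B",     "A"],
    "approved":  [1,        1,       1,       0,       1,       0],
})

# Does the candidate proxy track the protected characteristic?
# A cross-tabulation shows how strongly each zip code is dominated by one group.
print(pd.crosstab(df["zip_code"], df["ethnicity"], normalize="index"))

# Do outcomes differ across groups, even though the system never used ethnicity directly?
print(df.groupby("ethnicity")["approved"].mean())
```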

The problem is that the proxy can be “anything.” This has severe implications:

  • It cannot be identified or defined ex-ante.
  • It can be challenging to identify which elements are acting as proxies in an AI system.
  • It is difficult to distinguish between human discrimination and proxy discrimination, because the element on which the distinction is based is often unknown or not yet easy to detect.
  • It can be unpredictable, in that there might be an indefinite number of correlations between the proxy, other proxies, and the traditional protected characteristics.
  • In addition, multiple proxies may operate simultaneously and overlap, as in the case of intersectional discrimination. We can imagine at least two different scenarios: 1) two or more proxies intersect and can be used by the AI to predict a person’s group affiliation, ultimately leading to discrimination; 2) multiple proxies interact, resulting in the creation of a group that is vulnerable to discriminatory treatment. In the worst-case scenario, the group is unaware that it is being discriminated against. This has severe consequences from a constitutional and human rights law perspective, as it is a clear violation of the principles of self-determination and self-identification as a member of a protected group.

Do you have examples of proxy discrimination in the healthcare sector?

In the context of health insurance systems, especially in the U.S., scores are frequently utilized to determine eligibility for insurance coverage. This scoring mechanism may inadvertently favour individuals with better insurance or higher socioeconomic status. Consequently, an AI-driven system for hospital resource allocation could perpetuate unequal access to healthcare, with socioeconomic conditions and insurance status serving as proxies.

Another instance involves diagnostic disparities, where proxies such as race, ethnicity, or gender play a role. For example, using symptoms that manifest mainly in males as the basis for diagnosis may lead to discrimination against women, whose condition might not be correctly recognized. Additionally, AI algorithms in diagnostic tools may exhibit variations in accuracy across different racial or ethnic groups, resulting in disparities in the identification and treatment of certain medical conditions.
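
As a rough illustration of how such diagnostic disparities can be surfaced, the following sketch (again with hypothetical data and labels) compares a model's accuracy and sensitivity across demographic groups:

```python
import pandas as pd

# Hypothetical diagnostic results: ground truth versus the model's prediction.
results = pd.DataFrame({
    "group":         ["women", "women", "women", "men", "men", "men"],
    "has_condition": [1, 1, 0, 1, 1, 0],
    "predicted":     [0, 1, 0, 1, 1, 0],
})

# Overall accuracy per group: a persistent gap suggests the model's symptom profile
# fits one group better than the other.
results["correct"] = results["has_condition"] == results["predicted"]
print(results.groupby("group")["correct"].mean())

# Sensitivity (true-positive rate) per group: missed diagnoses concentrated in one
# group are precisely the kind of disparity described above.
positives = results[results["has_condition"] == 1]
print(positives.groupby("group")["predicted"].mean())
```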

How could proxy discrimination be identified?

The proxy distances itself from traditional or classical factors of discrimination (e.g., human qualities dividing human beings along sex, gender, racial, or ethnic lines, just to mention some of them). Because of this, it is increasingly difficult to trace and detect AI-based proxy discrimination.

A key concept that should be considered, and used as a powerful tool to identify and tackle proxy discrimination, is correlation. One way to verify whether we are dealing with a form of proxy discrimination is to ask whether there are one or more possible associations and/or correlations between the element the “machine” uses to make a distinction (the proxy) and one or more traditional factors of discrimination. The more evident the correlation between the two, the easier it will be to unveil a form of proxy discrimination.
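
One way this correlation test might be operationalized, sketched below with hypothetical data and a hypothetical "proxy_score" feature, is to compare outcome rates across protected groups and to measure how strongly the suspected proxy predicts the protected characteristic itself:

```python
import pandas as pd

df = pd.DataFrame({
    "proxy_score": [0.9, 0.8, 0.7, 0.3, 0.4, 0.2],   # e.g., a zip-code-derived score
    "protected":   ["A", "A", "A", "B", "B", "B"],
    "selected":    [1, 1, 1, 0, 1, 0],
})

# Outcome (selection) rate by protected group; a large gap signals disparate impact.
rates = df.groupby("protected")["selected"].mean()
print(rates)
print("impact ratio:", rates.min() / rates.max())

# Correlation between the proxy and (an encoding of) the protected characteristic:
# the stronger it is, the easier it becomes to argue the proxy stands in for that ground.
df["protected_code"] = (df["protected"] == "A").astype(int)
print(df["proxy_score"].corr(df["protected_code"]))
```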

In order to actually trace a correlation, there is another aspect worth considering: it is always useful to refer to the concepts of direct and indirect discrimination. Indeed, the way in which a proxy is linked to and associated with one or more factors of discrimination can be the result of either a direct or an indirect correlation. This is also why it is now possible to distinguish between direct and indirect proxy discrimination, depending on the nature (direct or indirect) of the relationship between the element used as a proxy and one or more alleged grounds of discrimination.

It should also be highlighted that the new principles envisioned, especially by the European Union in the AI Act, require AI systems to be transparent in terms of their design, implementation and deployment. If transparency is fully applied, it will aid the identification of the proxy and of its correlations with suspected grounds of discrimination, increasing the chances of detecting ex-ante the likelihood of discriminatory outcomes resulting from the use of AI technologies.

Finally, as an example of successful scrutiny of proxy discrimination, it is worth recalling the recent judgment of the European Court of Human Rights in Glukhin v. Russia. For the very first time, it demonstrates that AI-based discrimination resulting from the use of a machine-learning system, such as facial recognition, can actually be sanctioned.

Compared to other forms of AI discrimination, does proxy discrimination make a difference from a legal perspective? What does it mean in the European legal context?

If examined from the angle of the machine's interposition between human behaviour and the resulting decision, proxy discrimination certainly makes a difference compared to purely human-driven discrimination.

The already mentioned challenges in unveiling the agency of the proxy, or proxies, behind the functioning of AI systems, especially machine-learning technologies, greatly amplify the existing difficulties in tackling human discrimination. From a European Union law perspective, this means there is an urgent need to rethink the classical categories of anti-discrimination law, namely direct and indirect discrimination, so that they take adequate account of the undeniable novelty of the “new” artificial intelligence.

In short, the less one knows about what the proxy is and about its correlations with other proxies and with traditional factors of discrimination, the less likely it will be to 1) qualify a difference in treatment as unreasonable, that is, as a discrimination; and 2) sanction the detected discrimination by applying the traditional mechanisms of anti-discrimination law.

Why do you say that “victims of proxy discrimination in AI are unaware victims and, likewise, unaware members of new minorities created by the proxy”?

The victims of proxy discrimination in AI are often unaware of their status and of their membership in newly formed minorities, as they lack knowledge about the underlying proxies and about how the AI differentiates. A notable example comes from Washington, DC, where zip codes were used to target city centre renovations: the city administration's use of an app resulted in discrimination against people living in certain boroughs, who were unaware of being targeted.

Another example involves a recognition system based on hat-wearing. When individuals deviate from the norm, such as wearing no hat, they become potential targets for discrimination. This departure from neutrality raises concerns about violations of individual and collective human rights. From a human rights perspective, proxy discrimination in AI poses risks to the principle of the right to self-identification, as AI may associate individuals with groups without their knowledge or consent.

In your article you say that, so far, human rights have not been at the core of the debate on AI discrimination. How do you explain this?

It can be attributed to varying legal cultures, particularly between Europe and the United States. The U.S. initially hesitated to regulate AI extensively, while the European approach has been increasingly inclined towards integrating human rights considerations. In Europe, the prioritization of the right to privacy within the AI legal framework is a relatively recent development. The historical context of the EU reveals that fundamental rights gained prominence in the 2000s. Economic considerations initially took precedence, for example through the General Data Protection Regulation (GDPR).

In fact, the European AI Act did not initially address discrimination in its first draft. The Council of Europe, on the other hand, is more concerned with human rights and discrimination, although explicit mention of these issues in legal texts is still lacking. The third version of the AI Act, presented in July 2023, provides that non-member states may join and participate in the negotiations. This mechanism is akin to the one used for the Cybercrime Convention, and could greatly benefit the reach of the prospective Convention on AI under discussion within the Council of Europe[2].

Eventually, the latest draft of the AI Act, presented on December 14th, more strongly emphasizes its commitment to tackling discrimination resulting from AI, and establishes an explicit link with the EU anti-discrimination Directives.

However, despite these efforts, the problem remains the same: no political institution, at either the domestic or the supranational level, is properly acknowledging the specific characteristics of AI-derived discrimination, which increases the risk of gaps and loopholes in both enacted and prospective regulation.

Unfortunately, at the international human rights law level, while many countries have joined the negotiations of the CoE’s first treaty on AI (e.g., the United States, Canada, Japan), the United Nations has not yet adopted a similar approach. The UN relies more on guidelines and rarely discusses the feasibility of a treaty entirely dedicated to AI and its implications for human rights and fundamental freedoms. Finally, from a comparative perspective, it is worth mentioning the efforts of the United States to focus on the human rights implications of AI, especially after two executive orders issued by the Biden administration and the work of the Equal Employment Opportunity Commission (EEOC) to address AI-derived discrimination in the workplace, following the earlier Blueprint for an AI Bill of Rights. Only time will tell to what extent these efforts will align with the European ones.

You call for states and regulators to “implement preventive and repressive policies required to intercept, first, and sanction, later, discriminations provoked by proxies used by AI systems.” What would be your recommendations for policy-makers to address this matter?

Encourage a closer collaboration between programmers and users, fostering a multidisciplinary understanding. This will enable users to understand how AI systems work and to make informed decisions about system liability.

Transparency, in line with the EU legal perspective, also plays a pivotal role. Implementing a mandatory disclaimer and providing a comprehensive guide to the system’s functionality, associated risks, and limitations can greatly empower users. Such a transparent approach can act as a form of user consent, allowing individuals and social groups to make informed decisions about the use of AI systems.

The user’s right to know is also very important here and, not surprisingly, EU law increasingly insists on its recognition among the so-called new rights associated with AI systems. Liability, in this context, is intertwined with the right to information. Recognizing that technology is never neutral, policy-makers should guarantee the user’s right to understand how each AI system he or she is subjected to works, and should hold those systems accountable when discriminatory practices are identified.

Finally, what would be your recommendations to scholars in the field?

From a broader perspective, I strongly believe that more attention should be paid to the differences between human-driven discrimination and AI (or proxy) discrimination. In short, a key question to be addressed in the near future is therefore whether and how existing anti-discrimination laws can be applied to AI-derived discrimination.

[1] https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/

[2] On this, see the work that has been done so far by the CAHAI and the CAI, the two committees established by the CoE.

