SciELO - Scientific Electronic Library Online



Revista de Estudios Sociales

Print version ISSN 0123-885X

Abstract

CODDOU MC MANUS, Alberto; GERMAN ORTIZ, Mariana and TABARES SOTO, Reinel. Avoiding the Formalism Trap: A Critical Evaluation and Selection of Statistical Fairness Metrics in Public Algorithms. rev.estud.soc. [online]. 2025, n.93, pp. 85-106. Epub Aug 27, 2025. ISSN 0123-885X. https://doi.org/10.7440/res93.2025.05.

This article examines the statistical fairness metrics used to evaluate the performance of artificial intelligence (AI) models and proposes criteria for selecting them based on context and legal implications. It focuses in particular on how these metrics can help safeguard the right to equality and non-discrimination in algorithmic systems implemented by the state. Its core contribution is an analytical framework for choosing fairness metrics according to the purpose of the automated system, the nature of the project, and the rights at stake. For example, in the criminal justice system, where individual liberty is at risk, the emphasis is on minimizing false positives. In contrast, for algorithms designed to protect victims of gender-based violence, the priority is to reduce false negatives. In areas like public procurement, group fairness is assessed using metrics such as disparate impact or demographic parity. In sectors like tax enforcement or medical diagnostics, the focus is on predictive accuracy and efficiency. Taking an interdisciplinary approach, the article puts forward a sociotechnical perspective that brings together technical and legal insights. It highlights the need to avoid the “formalism trap,” in which fairness is reduced to abstract metrics without accounting for the broader social and political context. Finally, it argues that selecting appropriate metrics not only helps identify and mitigate algorithmic bias but also contributes to building AI systems that are fairer, more transparent, and aligned with fundamental principles of equality and non-discrimination.
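The metrics the abstract mentions can be made concrete with a short sketch. The code below is illustrative only and is not from the article: it computes false positive and false negative rates for a binary classifier, plus the demographic parity difference and disparate impact ratio between two groups of a binary protected attribute. All function names and the toy data are hypothetical.

```python
# Illustrative sketch (assumed, not from the article): the statistical
# fairness metrics named in the abstract, for binary labels/predictions
# and a binary protected attribute.

def rates(y_true, y_pred):
    """False positive rate and false negative rate over the whole sample."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

def group_metrics(y_pred, group):
    """Demographic parity difference and disparate impact ratio
    between two groups (group membership coded 0 and 1)."""
    selection = {}
    for g in (0, 1):
        idx = [i for i, gi in enumerate(group) if gi == g]
        # Selection rate: share of the group receiving a positive prediction
        selection[g] = sum(y_pred[i] for i in idx) / len(idx)
    parity_diff = abs(selection[0] - selection[1])
    impact_ratio = min(selection.values()) / max(selection.values())
    return parity_diff, impact_ratio

# Toy data: true outcomes, model predictions, and group membership
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

fpr, fnr = rates(y_true, y_pred)
parity_diff, impact_ratio = group_metrics(y_pred, group)
print(fpr, fnr)                    # error rates over the sample
print(parity_diff, impact_ratio)   # group-fairness gaps
```

Which of these numbers matters depends on the context the article analyzes: a criminal-justice tool would prioritize a low `fpr`, a victim-protection tool a low `fnr`, and a procurement screen a `parity_diff` near zero or an `impact_ratio` near one.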

Keywords: algorithmic bias; algorithmic discrimination; algorithmic justice; artificial intelligence; public algorithms; statistical fairness.

Abstract in Spanish and Portuguese · full text in Spanish · Spanish (PDF)