There is an elephant in the room: towards a critique on the use of fairness in biometrics
AI Ethics , Open Access available at https://doi.org/10.1007/s43681-022-00249-2
Ana Valdivia, Júlia Corbera Serrajòrdia & Aneta Swianiewicz (2022)
The proliferation of biometric systems in our societies is shaping public debates around their political, social and ethical implications. Yet, whilst concerns about the racialised use of this technology have been on the rise, the field of biometrics remains unperturbed by these debates. Despite the lack of critical analysis, algorithmic fairness has recently been adopted by the biometrics community. Different studies have been published to understand and mitigate demographic bias in biometric systems, without analysing the political consequences. In this paper, we offer a critical reading of recent debates about biometric fairness and show their detachment from political debates. Building on previous fairness demonstrations, we prove that biometrics will always be biased. Yet, we claim that algorithmic fairness cannot distribute justice in scenarios which are broken or whose intended purpose is to discriminate. By focusing on demographic biases rather than examining how these systems reproduce historical and political injustices, fairness has overshadowed the elephant in the room of biometrics.
Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders
Big Data & Society, 9(2), Open Access available at https://doi.org/10.1177/20539517221124586
Ana Valdivia, Claudia Aradau, Tobias Blanke, & Sarah Perret (2022)
In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had previously been involved in the development of databases for border and migration management. While there has been a growing number of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.
Algorithmic Reason: The New Government of Self and Other
Claudia Aradau and Tobias Blanke (2022)
Open Access - PDF of e-book available here
Are algorithms ruling the world today? Is artificial intelligence making life-and-death decisions? Are social media companies able to manipulate elections? As we are confronted with public and academic anxieties about unprecedented changes, this book offers a different analytical prism through which these transformations can be explored. Claudia Aradau and Tobias Blanke develop conceptual and methodological tools to understand how algorithmic operations shape the government of self and other. They explore the emergence of algorithmic reason through rationalities, materializations, and interventions, and trace how algorithmic rationalities of decomposition, recomposition, and partitioning are materialized in the construction of dangerous others, the power of platforms, and the production of economic value. The book provides a global transdisciplinary perspective on algorithmic operations, drawing on qualitative and digital methods to investigate controversies ranging from mass surveillance and the Cambridge Analytica scandal in the UK to predictive policing in the US, and from the use of facial recognition in China and drone targeting in Pakistan to the regulation of hate speech in Germany.
Asylum, Borders, and the Politics of Violence: From Suspicion to Cruelty
Global Studies Quarterly 2(2), Open Access, available at
Claudia Aradau and Lucrezia Canzutti (2022)
Critical scholarship in international relations, border, and migration studies has analyzed the cultures of suspicion that underpin border practices and have increasingly reshaped the politics of asylum globally. This scholarship has highlighted either the generalization of suspicion through the securitization of asylum or racialized and gendered continuities of colonial violence. We propose to understand the entanglements of continuity and discontinuity in the politics of asylum through “technologies of cruelty.” To conceptualize technologies of cruelty, we draw on Etienne Balibar's “topographies of cruelty” and Rita Laura Segato's “pedagogies of cruelty.” Empirically, the article argues that technologies of cruelty minimize, erase, undo, splinter, and devalue asylum seekers’ claims for protection in ways that objectify and dehumanize them. Methodologically, the argument is developed through an analysis of a corpus of asylum appeal decisions in the United Kingdom. Asylum appeals are a particularly important archive for the diagnosis of cruelty internationally, as they are both inscriptions of dominant knowledge and contestations over knowledge and claims to protection. They also allow us to trace how the politics of asylum is situated within global topographies of cruelty, shaped by technologies of cruelty and the deactivation of empathy.
Here you can find some of the publications from the project, as well as related research we have carried out over the past years.
The politics of (non-)knowledge at Europe's borders: Errors, fakes, and subjectivity
Review of International Studies: 1-20
Claudia Aradau and Sarah Perret (2022)
From statistical calculations to psychological knowledge, from profiling to scenario planning, and from biometric data to predictive algorithms, International Relations scholars have shed light on the multiple forms of knowledge deployed in the governing of populations and their political effects. Recent scholarship in critical border and security studies has drawn attention to ‘the other side of knowledge’ and has developed a vibrant conversation with the emergent interdisciplinary field of ignorance studies. This article proposes to advance these conversations on governing through non-knowledge by nuancing the analysis of power/(non-)knowledge/subjectivity relations. Firstly, we expand the analysis of non-knowledge by attending to the problematisation of errors and fakes in controversies at Europe's borders. Errors have emerged in relation to border actors’ practices and technologies, while migrant practices, documentation, and narratives are deemed to be potentially ‘fake’, ‘fraudulent’, or ‘false’. Secondly, we explore how different subjectivities are produced through regimes of error/truth and fake/authenticity. We argue that there are important epistemic differences between ‘fake’ and ‘error’, that they are entangled with different techniques of power and produce highly differentiated subjectivities. Finally, we attend to how these subjectivities are enacted within racialised hierarchies and ask whether non-knowledge can be mobilised to challenge these hierarchies.
Algorithmic Surveillance and the Political Life of Error
Journal for the History of Knowledge 2(1), Open Access, available at:
Claudia Aradau and Tobias Blanke (2021)
Concerns with errors, mistakes, and inaccuracies have shaped political debates about what technologies do, where and how certain technologies can be used, and for which purposes. However, error has received scant attention in the emerging field of ignorance studies. In this article, we analyze how errors have been mobilized in scientific and public controversies over surveillance technologies. In juxtaposing nineteenth-century debates about the errors of biometric technologies for policing and surveillance to current criticisms of facial recognition systems, we trace a transformation of error and its political life. We argue that the modern preoccupation with error and the intellectual habits inculcated to eliminate or tame it have been transformed with machine learning. Machine learning algorithms do not eliminate or tame error, but they optimize it. Therefore, despite reports by digital rights activists, civil liberties organizations, and academics highlighting algorithmic bias and error, facial recognition systems have continued to be rolled out. Drawing on a landmark legal case around facial recognition in the UK, we show how optimizing error also remakes the conditions for a critique of surveillance.
Experimentality, surplus data and the politics of debilitation in borderzones
Claudia Aradau (2020)
The use of digital devices and the collection of digital data have become pervasive in borderzones. Whether deployed by state or non-state actors, digital devices are rolled out despite intense criticism and controversy. Many scholars and public actors have criticised these experiments with digital technologies in borderzones. Yet, what exactly does it mean to conduct experiments in borderzones? In this article, I discuss different meanings of experiments and experimentation and propose to understand border governance through the prism of experimentality. Experimentality was initially formulated in the anthropological literature on the globalisation of clinical trials and, more recently, revisited in feminist science and technology studies. Drawing on this work, I argue that experimentality has become a rationality of governing in borderzones, which renders social relations continuously decomposable and recomposable by inserting mundane (digital) devices into the world. The introduction of various digital devices in Greece since 2015, starting with Skype for the pre-registration of asylum seekers, helps shed light on a particular form of governing through experiments without protocol. This form of experimentality has specific political effects for migrants’ lives. Firstly, experimentality builds upon and intensifies neoliberalism by rearranging rather than redressing precarity. In so doing, experimentality through digital devices produces debilitation rather than better connectivity or access to asylum. Secondly, migrants become not only subjects of surveillance, but subjects of extraction of ‘surplus data’ which entangles their lives into the circuits of digital platforms.
Designs of borders: Security, critique, and the machines
European Journal of International Security, 6(3): 278-300
Médéric Martin-Mazé & Sarah Perret (2021)
Over the past 15 years, the European Commission has poured millions of euros into Research and Development in border security. This article looks at the devices that are funded under this scheme. To this end, it applies Multiple Correspondence Analysis to a database of 41 projects funded under the 7th Framework Programme. This method of data visualisation unearths the deep patterns of opposition that run across the sociotechnical universe where European borders are designed and created. We identify three rationalities of power at play: territorial surveillance aimed at detecting rare events in remote areas, policing of dense human flows by sorting the benign from the dangerous, and finally global dataveillance of cargo on the move. Instead of trends towards either the hardening of borders or their virtualisation, we therefore find multiple rationalities of power simultaneously redefining the modalities of control at EU borders. A second finding shows where precisely critical actors are located in this sociotechnical universe and indicates that the structure of European R&D in border security keeps irregularised migrants off its radar. This finding calls for more caution about the possibility of effectively putting critique to work within the context of EU R&D.