
Critical Borders: Radical (Re)visions of AI

18-21 October 2021

"Exploring a variety of critical approaches to AI that interrogate how AI at the border, and the bordering processes of AI, differentially affect and produce disabled, queer, gendered, and racialised subjects."

Image credit: Vicki Smith, Peaceful Mind, 2017, Oil on Canvas, 36 X 48 in. Bau-Xi Gallery, Toronto, Canada


Ana Valdivia will attend the Critical Borders conference, organised by the Leverhulme Centre for the Future of Intelligence and the University of Cambridge Centre for Gender Studies. She will speak on the panel 'AI, Nationalism, and the Borders of the Nation State: EU Borders' with a talk entitled 'Algorithmic oppression at the EU borders: how biometric systems are jeopardising fundamental migrants’ rights?'.


Abstract:

The process of transforming our everyday lives into quantifiable data is also transforming borders. Biometric data such as fingerprints, iris scans, voice recordings, or facial images are extracted from asylum seekers and stored in information management systems to implement border controls as well as asylum and migration policies. These systems are enhanced with algorithmic optimisation systems, also referred to as artificial intelligence (AI) or automated decision-making approaches, which are claimed to enable more efficient allocation of human and financial resources. Yet little is known about the specification of the algorithms used and how they might impact undocumented migrants’ rights: Which machine learning classifier is used? What is the precision of the model? And what are the rates of false positives and false negatives?
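
To make these questions concrete, here is a minimal, illustrative sketch of how precision and the false positive and false negative rates of a binary biometric matcher could be computed from its raw decisions. The function, its inputs, and the metrics chosen are assumptions for illustration only, not a description of any deployed EU system.

```python
import numpy as np

def matcher_metrics(y_true, y_pred):
    """Precision, false positive rate, and false negative rate for a binary matcher.

    y_true: 1 if the pair really is the same person, 0 otherwise.
    y_pred: 1 if the system declares a match, 0 otherwise.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))  # wrongly matched: the harmful case
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))

    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")  # false positive rate
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")  # false negative rate
    return precision, fpr, fnr
```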


In the specific case of biometric systems developed at the EU borders, deep learning-based models are designed to match migrants and refugees across interoperable databases. The storage of biometric data in these systems allows police and border guards to check whether a person holds a visa issued by a Schengen state, determine which state is responsible for processing an asylum application, identify and return migrants, or find criminal convictions. Therefore, a false positive, i.e., a person being wrongly matched, could have serious consequences such as deportation or asylum refusal. Furthermore, demographic biases embedded in the data used to train these algorithms can also affect fundamental rights.
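
As a hypothetical illustration of how such demographic disparities could be surfaced, the sketch below computes a false match rate per demographic group from labelled matcher decisions. The record format, group labels, and function name are assumptions made for the example, not taken from any real border database.

```python
from collections import defaultdict

def false_match_rate_by_group(records):
    """False match rate per demographic group for hypothetical matcher output.

    records: iterable of (group, is_same_person, declared_match) tuples,
    where is_same_person and declared_match are booleans.
    """
    false_matches = defaultdict(int)   # non-matching pairs wrongly declared a match
    negatives = defaultdict(int)       # all genuinely non-matching pairs

    for group, is_same_person, declared_match in records:
        if not is_same_person:
            negatives[group] += 1
            if declared_match:
                false_matches[group] += 1

    # A large gap between groups' rates would indicate a demographic bias.
    return {g: false_matches[g] / negatives[g] for g in negatives if negatives[g]}
```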


Through a systematic analysis of EU contracts in the migration industry, I will unveil which private actors are developing these systems and how much public money is being invested. Then, I will critically examine the technical specifications of the biometric systems developed (accuracy, false positive rate, biases). Finally, I will shed light on the technological and socio-political assumptions underlying algorithmic oppression, thus raising concerns about how migrants’ rights are jeopardised at the EU’s datafied border.

