After the COVID-19 pandemic halted many asylum procedures across Europe, new technologies are reviving these systems. From lie detection tools deployed at the border to systems for validating documents and transcribing screening interviews, a wide range of technologies is being incorporated into asylum procedures. This article explores how these tools have reshaped the way asylum procedures are conducted. It shows how asylum seekers are transformed into compelled yet hindered techno-users: they are required to comply with a series of techno-bureaucratic steps and to keep up with unpredictable, minute changes in criteria and deadlines. This obstructs their capacity to navigate these systems and to pursue their right to protection.
It also illustrates how these technologies are embedded in refugee governance: they sustain the ‘circuits of financial-humanitarianism’ that operate through a whirlwind of dispersed technological requirements. These requirements increase asylum seekers’ socio-legal precarity by hindering them from accessing the channels of protection. The article further argues that analyses of securitization and victimization should be coupled with insight into the disciplinary mechanisms of these technologies, through which migrants are turned into data-generating subjects who are disciplined by their reliance on technology.
Drawing on Foucault’s notions of power/knowledge and local knowledge, the article argues that these technologies have an inherent obstructiveness. They have a double effect: while they help expedite the asylum procedure, they also make it difficult for refugees to navigate these systems. Refugees are placed in a ‘knowledge deficit’ that makes them vulnerable to illegitimate decisions made by non-governmental actors, and to ill-informed and unreliable narratives about their conditions. Moreover, these technologies pose new risks of ‘machine errors’ that may result in inaccurate or discriminatory outcomes.