
    Facial Recognition for Preventive Purposes: the Human Rights Implications of Detecting Emotions in Public Spaces

    Police departments are increasingly relying on surveillance technologies to tackle public security issues in smart cities. Automated facial recognition is deployed in public spaces for real-time identification of suspects and wanted individuals. In some cases, law enforcement goes even further by also exploiting emotion recognition technologies. In preventive operations, indeed, emotion facial recognition (EFR) is used to infer individuals’ inner affective states from traits such as facial muscle movements. In this way, law enforcement aims to obtain useful insights into unknown persons acting suspiciously in public or strategic venues (e.g. train stations, airports). While the employment of such tools may still seem relegated to dystopian scenarios, it is already a reality in some parts of the world. Hence, there emerges a need to explore their compatibility with the European human rights framework. The chapter undertakes this task and examines whether and how EFR can be considered compliant with the rights to privacy and data protection, the freedom of thought and the presumption of innocence.

    Surveillance Risks in IoT Applied to Smart Cities

    With the advent of the IoT, proximity sensor devices are being installed in many places in smart cities. Without any regulation or social policy, they could in the future give rise to a super-surveillance network managed by multi-agent systems. Such networks may be able to reduce accidents, risks, damage and errors. However, they also pose a high risk of surveillance and data breaches, including hacking attacks and malware intrusion. This research project investigates the implications of IoT-driven surveillance in smart cities from privacy, data protection and ethical perspectives. The identification of the critical issues related to the extensive deployment of such sensing devices in urban areas will constitute a starting point for the development of a new regulatory framework for sensor-based surveillance in European smart cities. This new regulatory system shall aim to provide citizens with effective tools to exercise their rights to privacy and data protection when facing IoT-driven surveillance. Indeed, setting a clear set of rules governing big urban data processing is crucial to ensuring a fair, democratic, human-centric development of smart cities in Europe.

    Safety and Privacy in Immersive Extended Reality: An Analysis and Policy Recommendations

    Extended reality (XR) technologies have experienced cycles of development—“summers” and “winters”—for decades, but their overall trajectory is one of increasing uptake. In recent years, immersive extended reality (IXR) applications, a kind of XR that encompasses immersive virtual reality (VR) and augmented reality (AR) environments, have become especially prevalent. The European Union (EU) is exploring the regulation of this type of technology, and this article seeks to support that endeavor. It outlines safety and privacy harms associated with IXR, analyzes to what extent the existing EU framework for digital governance—including the General Data Protection Regulation, Product Safety Legislation, ePrivacy Directive, Digital Markets Act, Digital Services Act, and AI Act—addresses these harms, and offers some recommendations to EU legislators on how to fill regulatory gaps and improve current approaches to the governance of IXR.

    The Italian Implementation of the EU Directives on Procedural Safeguards for Accused Persons in Criminal Proceedings

    This essay was developed in the course of a 30-month research project funded by the European Commission, CrossJustice (https://site.unibo.it/cross-justice/en), conducted under the supervision of the University of Bologna. The project aimed to assess the level of implementation of the six directives on defendants’ rights adopted since 2009 under the Stockholm Programme. The research critically examined the rights of the accused recognized and protected by Directive 2010/64/EU of 20 October 2010 on the right to interpretation and translation; Directive 2012/13/EU of 22 May 2012 on the right to information; Directive 2013/48/EU on the right of access to a lawyer and to have a third party informed; Directive 2016/343/EU of 9 March 2016 on the presumption of innocence and the right to be present at trial; Directive 2016/800/EU on procedural safeguards for accused minors; and Directive 2016/1919/EU of 26 October 2016 on legal aid. The researchers involved combined two different methodologies, examining the issue both from a traditional perspective, carried out by scholars specializing in EU law and national criminal procedure, and through a novel computational analysis. As part of the latter approach, the research developed a semi-automated artificial intelligence platform to better highlight the gaps in the normative texts and to improve the comparative analysis across legal systems (https://www.crossjustice.eu/en/index.html#crossjustice-platform). This contribution focuses, adopting a traditional method, on how the Italian legislator has transposed and implemented the above directives, both with reference to the statutory provisions and in the judicial interpretation of the highest courts. Indeed, while the EU acquis establishes common minimum standards on criminal procedural rights, the need to promote their effective and consistent application remains particularly pressing due to the strong fragmentation of national legislation and the related case law. In general terms, the picture that emerges shows certain strengths of the Italian system, with particular reference to the right to counsel, the right to information and disclosure (and, less uniformly, to the exclusionary rules applicable where violations of defence guarantees must be remedied). There are, however, some critical issues, often linked to practice (for example, the training that defence counsel of vulnerable defendants should receive, legal aid, and the quality and effectiveness of the right to interpretation and the translation of documents). This analysis of the Italian system, together with those developed for the other 10 EU Member States involved in the project (Bulgaria, Croatia, France, Germany, the Netherlands, Poland, Portugal, Romania, Spain, Sweden) and with the results of the AI-based semantic analysis of the normative texts, has made it possible to carry out research that is innovative in both its methods and its content, which, in addition to the CrossJustice platform, has also recently been published in a volume edited by Brill (Giuseppe Contissa, Giulia Lasagni, Michele Caianiello, Giovanni Sartor (eds.), Effective Protection of the Rights of the Accused in the EU Directives. A Computable Approach to Criminal Procedure Law, 2022).

    Facial recognition in police hands: Assessing the ‘Clearview case’ from a European perspective

    Since 2019, over 600 law enforcement agencies across the United States have started using a ground-breaking facial recognition app designed by Clearview AI, a tech start-up which now plans to market its technology in Europe. While the Clearview app is an expression of the wider phenomenon of the repurposing of privately held data in the law enforcement context, its use in criminal proceedings is likely to encroach on individuals’ rights in unprecedented ways. Indeed, the Clearview app goes far beyond traditional facial recognition tools: while these have historically been limited to matching government-stored images, Clearview combines its technology with a database of over 3 billion images published on the Internet. Against this background, this article will review the use of this new investigative tool in light of the EU legal framework on privacy and data protection. The proposed assessment will proceed as follows. Firstly, it will briefly assess the lawfulness of Clearview AI’s data scraping practices under the General Data Protection Regulation (GDPR). Secondly, it will discuss the transfer of scraped data from Clearview AI to EU law enforcement agencies under the regime of the Police Directive 2016/680/EU (the Directive). Finally, it will analyse the compliance of the Clearview app with Article 10 of the Directive, which lays down the criteria for lawful processing of biometric data. More specifically, this last analysis will focus on the strict necessity test, as defined by the Charter of Fundamental Rights of the European Union (the Charter) and the European Convention on Human Rights (ECHR). Following this assessment, it will be argued that the use of Clearview’s app in criminal proceedings is highly problematic in light of the EU legislative framework for both privacy and data protection.

    Dati esterni alle comunicazioni e processo penale: questioni ancora aperte in tema di data retention

    Data retention remains one of the areas requiring the most delicate balancing between fundamental rights and the use of technological means for law enforcement purposes. Against the background of the by now well-known rulings of the Court of Justice on the matter, this contribution comments on judgment no. 36380 of 2019, in which the Italian Court of Cassation reaffirmed the compatibility of Article 132 of the Italian Privacy Code with Articles 7, 8 and 52 of the Charter of Fundamental Rights of the European Union. After examining the most problematic aspects of the current regime under Article 132 of the Privacy Code, the contribution reflects on the need to rethink the well-known distinction between external communications data and the content of communications. In conclusion, the analysis turns to the role of the principle of proportionality in the complex field of data retention.

    Surveillance risks in IoT applied to smart cities

    Nowadays, cities deal with unprecedented pollution and overpopulation problems, and Internet of Things (IoT) technologies are supporting them in facing these issues and becoming increasingly smart. IoT sensors embedded in public infrastructure can provide granular data on the urban environment, and help public authorities to make their cities more sustainable and efficient. Nonetheless, this pervasive data collection also raises significant surveillance risks, jeopardizing privacy and data protection rights. Against this backdrop, this thesis addresses how IoT surveillance technologies can be implemented in a legally compliant and ethically acceptable fashion in smart cities. An interdisciplinary approach is embraced to investigate this question, combining doctrinal legal research (on privacy, data protection, criminal procedure) with insights from philosophy, governance, and urban studies. The fundamental normative argument of this work is that surveillance constitutes a necessary feature of modern information societies. Nonetheless, as the complexity of surveillance phenomena increases, there emerges a need to develop more finely attuned proportionality assessments to ensure a legitimate implementation of monitoring technologies. This research tackles this gap from different perspectives, analyzing the EU data protection legislation and the United States and European case law on privacy expectations and surveillance. Specifically, a coherent multi-factor test assessing privacy expectations in public IoT environments and a surveillance taxonomy are proposed to inform proportionality assessments of surveillance initiatives in smart cities. These insights are also applied to four use cases: facial recognition technologies, drones, environmental policing, and smart nudging. Lastly, the investigation examines competing data governance models in the digital domain and the smart city, reviewing the EU's upcoming data governance framework. It is argued that, despite the stated policy goals, the balance of interests may often favor corporate strategies in data sharing, to the detriment of common-good uses of data in the urban context.

    The normative challenges of AI surveillance in the analysis of encrypted IoT-generated data for law enforcement purposes

    This paper explores the normative challenges of digital security technologies, i.e. end-to-end (E2E) encryption and metadata analysis, in particular in the context of law enforcement activities. Internet of Things (IoT) devices embedded in smart environments (e.g., smart cities) increasingly rely on E2E encryption in order to safeguard the confidentiality of information and uphold individuals’ fundamental rights, such as privacy and data protection. In November 2020, the Council of the EU published a resolution titled “Encryption – Security through encryption and security despite encryption”. The resolution seeks to ensure the ability of security and criminal justice authorities to access data in a lawful and targeted manner. Nonetheless, in the context of pre-emptive surveillance and criminal investigations, E2E encryption renders the analysis of the content of communications extremely challenging or practically impossible, even when access to the data would be lawful. Here, two different layers of complexity emerge. They concern: (i) whether a balance between the values protected by E2E encryption and the aims of law enforcement can be attained; and (ii) whether state-of-the-art AI models can preserve the advantages of E2E encryption while allowing valuable information to be inferred from communication traffic, with the aim of detecting possible threats or illicit content. Against this backdrop, we firstly examine whether AI algorithms, such as machine learning and deep learning, might be part of the solution, especially when it comes to data-driven and statistical methods for classifying encrypted communication traffic so as to infer sensitive information about individuals. Secondly, we consider the possible uses of AI tools in the analysis of IoT-generated data in smart city scenarios, focusing on metadata analysis. We explore whether AI-based classification of encrypted traffic can circumscribe the scope of law enforcement monitoring operations, in compliance with the European surveillance case law. Finally, as far as our research focus is concerned, we discuss how the use of AI has the potential to smooth traditional trade-offs between security and fundamental rights, allowing for encrypted traffic analysis without breaking encryption.
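
    To make the kind of analysis the abstract describes more concrete, the sketch below illustrates, under stated assumptions, what metadata-based classification of encrypted traffic can look like: a generic classifier is trained on flow-level features (packet-size and inter-arrival-time statistics) that remain observable even under E2E encryption. This is not the paper's method; the two traffic classes, the feature set, and all parameters are illustrative assumptions, and the data is purely synthetic.

    # Illustrative sketch only (synthetic data, assumed features); not the
    # method of the paper. Shows how a classifier can label encrypted flows
    # from metadata alone, without decrypting any payload.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def synthetic_flows(n, mean_pkt, mean_iat, label):
        """Build toy flows described only by metadata that stays visible
        despite E2E encryption: packet sizes and inter-arrival times."""
        pkt = rng.normal(mean_pkt, 40, size=(n, 20)).clip(60, 1500)
        iat = rng.exponential(mean_iat, size=(n, 20))
        # Summarize each flow by simple per-flow statistics.
        feats = np.column_stack([pkt.mean(1), pkt.std(1), iat.mean(1), iat.std(1)])
        return feats, np.full(n, label)

    # Two hypothetical traffic classes, e.g. web browsing vs. video call.
    X0, y0 = synthetic_flows(500, mean_pkt=600, mean_iat=0.20, label=0)
    X1, y1 = synthetic_flows(500, mean_pkt=1100, mean_iat=0.02, label=1)
    X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))

    That such a pipeline operates on nothing but sizes and timings is precisely what makes metadata analysis both attractive for law enforcement and sensitive from a fundamental rights perspective, as the abstract above argues.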

    Minimalistic Transposition and Substantial Protection in the Implementation of the EU Defence Rights Directives

    The chapter aims to assess the quality of the transposition of the EU Defence Rights Directives, as well as the degree of substantial protection offered by national case law. Critical and positive aspects are examined in order to provide a general overview of the domestic state of play and to foster the protection of procedural safeguards and fundamental guarantees.