
    Anonymous subject identification and privacy information management in video surveillance

    The widespread deployment of surveillance cameras has raised serious privacy concerns, and many privacy-enhancing schemes have recently been proposed to automatically redact images of selected individuals in surveillance video for protection. Of equal importance are the privacy and efficiency of the techniques that, first, identify those individuals for privacy protection and, second, provide access to the original surveillance video content for security analysis. In this paper, we propose an anonymous subject identification and privacy data management system for privacy-aware video surveillance. The anonymous subject identification system uses iris patterns to identify individuals for privacy protection. Anonymity of the iris-matching process is guaranteed through a garbled-circuit (GC)-based iris matching protocol. A novel GC complexity reduction scheme is proposed that simplifies the iris masking process in the protocol. A user-centric privacy information management system is also proposed that allows subjects to anonymously access their privacy information via their iris patterns. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the subject identification phase, and the privacy information retrieval protocol allows the video records to be anonymously retrieved through a GC-based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of our framework.
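    The quantity such a GC-based protocol evaluates obliviously is the standard masked fractional Hamming distance between two binary iris codes, where occluded bits (eyelids, eyelashes) are excluded via per-code masks. A minimal plaintext sketch of that reference computation, with hypothetical toy bit vectors (real iris codes run to roughly 2048 bits), might look like:

    ```python
    def masked_hamming_distance(code_a, mask_a, code_b, mask_b):
        """Fractional Hamming distance between two binary iris codes.

        Bits where either mask is 0 (occlusion) are ignored. A garbled-circuit
        protocol would evaluate this same arithmetic without revealing the
        codes or masks to the other party.
        """
        valid = [ma & mb for ma, mb in zip(mask_a, mask_b)]
        disagreements = [(a ^ b) & v for a, b, v in zip(code_a, code_b, valid)]
        n_valid = sum(valid)
        if n_valid == 0:
            return 1.0  # no usable bits: treat as a non-match
        return sum(disagreements) / n_valid

    # Hypothetical 8-bit toy codes for illustration only.
    a   = [1, 0, 1, 1, 0, 0, 1, 0]
    b   = [1, 1, 1, 0, 0, 0, 1, 1]
    m_a = [1, 1, 1, 1, 1, 1, 0, 0]
    m_b = [1, 1, 1, 1, 0, 1, 1, 1]
    score = masked_hamming_distance(a, m_a, b, m_b)
    # 5 mutually valid bits, 2 disagreements -> score of 0.4
    ```

    In practice a threshold on this score decides match versus non-match; the paper's complexity reduction targets the masking step, which dominates the circuit size.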

    Efficient Anonymous Biometric Matching in Privacy-Aware Environments

    Video surveillance is an important tool in security and environmental monitoring; however, the widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have recently been proposed to automatically redact images of selected individuals in surveillance video for protection. To identify these individuals for protection, the most reliable approach is to use biometric signals, as they are immutable and highly discriminative. If misused, however, these same characteristics can seriously defeat the goal of privacy protection. In this dissertation, an Anonymous Biometric Access Control (ABAC) procedure is proposed based on biometric signals for privacy-aware video surveillance. The ABAC procedure uses Secure Multi-party Computation (SMC)-based protocols to verify the membership of an incoming individual without knowing his/her true identity. To make SMC-based protocols scalable to large biometric databases, I introduce the k-Anonymous Quantization (kAQ) framework to provide an effective and secure tradeoff between privacy and complexity. kAQ limits the system's knowledge of the incoming individual to k maximally dissimilar candidates in the database, where k is a design parameter that controls the complexity-privacy tradeoff. The relationship between biometric similarity and privacy is experimentally validated using a twin iris database. The effectiveness of the entire system is demonstrated on a public iris biometric database. To provide the protected subjects with full access to their privacy information in the video surveillance system, I develop a novel privacy information management system that allows subjects to access their information via the same biometric signals used for ABAC. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the ABAC procedure, and the privacy information retrieval protocol allows the video records to be anonymously retrieved through a GC-based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of my framework.
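    The intuition behind kAQ is that revealing a candidate group to the server leaks little if its k members look nothing alike. A toy plaintext sketch of that idea, using hypothetical 8-bit templates and a simple greedy farthest-point rule (an illustration of the principle, not the dissertation's exact quantization construction), might be:

    ```python
    def hamming(a, b):
        """Hamming distance between two equal-length bit strings."""
        return sum(x != y for x, y in zip(a, b))

    def k_dissimilar_group(database, seed_index, k):
        """Greedily grow a group of k mutually dissimilar templates.

        Starting from a seed, repeatedly add the template whose minimum
        distance to the current group is largest (farthest-point rule).
        Narrowing a probe to such a group reveals little about which
        member it actually matches.
        """
        group = [seed_index]
        while len(group) < k:
            best, best_dist = None, -1
            for i, template in enumerate(database):
                if i in group:
                    continue
                d = min(hamming(template, database[j]) for j in group)
                if d > best_dist:
                    best, best_dist = i, d
            group.append(best)
        return group

    # Hypothetical toy database of 8-bit templates.
    db = ["00000000", "00000011", "11110000", "11111111", "00111100"]
    group = k_dissimilar_group(db, seed_index=0, k=3)
    ```

    Here the matcher's work drops from scanning the whole database to scanning one group, while the server only ever learns a set of k dissimilar candidates, which is the privacy guarantee the parameter k controls.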

    Privacy & law enforcement


    Visual Privacy Protection Methods: A Survey

    Recent advances in computer vision technologies have made possible the development of intelligent monitoring systems for video surveillance and ambient-assisted living. By using this technology, these systems are able to automatically interpret visual data from the environment and perform tasks that would have been unthinkable years ago. These achievements represent a radical improvement, but they also pose a new threat to individuals' privacy. The new capabilities of such systems give them the ability to collect and index a huge amount of private information about each individual. Next-generation systems have to solve this issue in order to gain users' acceptance. Therefore, there is a need for mechanisms or tools to protect and preserve people's privacy. This paper seeks to clarify how privacy can be protected in imagery data; as its main contribution, it provides a comprehensive classification of the protection methods for visual privacy as well as an up-to-date review of them. A survey of the existing privacy-aware intelligent monitoring systems and a discussion of important aspects of visual privacy are also provided.

    This work has been partially supported by the Spanish Ministry of Science and Innovation under project “Sistema de visión para la monitorización de la actividad de la vida diaria en el hogar” (TIN2010-20510-C04-02) and by the European Commission under project “caring4U - A study on people activity in private spaces: towards a multisensor network that meets privacy requirements” (PIEF-GA-2010-274649). José Ramón Padilla López and Alexandros Andre Chaaraoui acknowledge financial support by the Conselleria d'Educació, Formació i Ocupació of the Generalitat Valenciana (fellowships ACIF/2012/064 and ACIF/2011/160, respectively).
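    Among the redaction filters such surveys classify, pixelation is one of the simplest: average the sensitive region over coarse blocks so the high-frequency detail that identification relies on is destroyed. A minimal sketch on a plain 2D list of grayscale values (a hypothetical helper for illustration; production systems operate on detected face/body regions of real frames) might be:

    ```python
    def pixelate(image, block):
        """Redact a grayscale image (2D list of ints) by block-averaging.

        Every block x block tile is replaced by its mean value, removing
        the fine detail a recognizer would need.
        """
        h, w = len(image), len(image[0])
        out = [row[:] for row in image]
        for y0 in range(0, h, block):
            for x0 in range(0, w, block):
                tile = [image[y][x]
                        for y in range(y0, min(y0 + block, h))
                        for x in range(x0, min(x0 + block, w))]
                mean = sum(tile) // len(tile)
                for y in range(y0, min(y0 + block, h)):
                    for x in range(x0, min(x0 + block, w)):
                        out[y][x] = mean
        return out

    # Hypothetical 4x4 "face" region; block=4 flattens it to one tile.
    face = [[0, 0, 255, 255],
            [0, 0, 255, 255],
            [255, 255, 0, 0],
            [255, 255, 0, 0]]
    redacted = pixelate(face, block=4)
    ```

    The block size plays the same privacy-versus-utility role the survey discusses: larger blocks give stronger protection but leave less of the scene intelligible.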

    From public data to private information: The case of the supermarket

    The background to this paper is that, in our world of massively increasing personal digital data, any control over the data about me seems illusory; informational privacy seems a lost cause. On the other hand, the production of this digital data seems a necessary component of our present life in the industrialized world. A framework for resolving this apparent dilemma is provided by the distinction between (meaningless) data and (meaningful) information. I argue that computational data processing is necessary for many present-day processes and not a breach of privacy, while the collection and processing of private information is often not necessary and is a breach of privacy. The problem and the sketch of its solution are illustrated in a case study: supermarket customer cards.

    Analysis of the right to be forgotten under the GDPR in the age of surveillance capitalism

    The definition of personal data is evolving in the modern age. With the emergence of new technology, new commercial practices, and the increase in the value of data, companies are looking for ways to extract as much value as possible from the data of their users and gain an edge on their competition. These practices raise various legal concerns, such as the right to be forgotten under the GDPR, how well it can be ensured, and whether it can be ensured at all. Because of competition, companies may engage in data collection practices that may not be legal in order to benefit and increase their market dominance. Overall, the right to be forgotten is not adequately ensured under the GDPR with respect to copied information, due to a lack of clear enforcement terms and definitions. Profiling is well regulated and defined; in practice, however, most companies do not admit that their work revolves around profiling or benefits from an ecosystem built on profiling, which means that profiling remains a significant issue. Harmful data extraction is regulated, and a case has been brought before Germany's competition authority regarding abuse of market position by a dominant social network. This case can bring attention to harmful data extraction and improve the quality of its regulation, although it is currently not defined under the GDPR. Overall, the GDPR suffers from a lack of definitions and enforcement terms, which could be remedied by closer collaboration between computer scientists and legislators.

    Facing Real-Time Identification in Mobile Apps & Wearable Computers

    The use of face recognition technology in mobile apps and wearable computers challenges individuals' ability to remain anonymous in public places. These apps can also link individuals' offline activities to their online profiles, generating a digital paper trail of their every move. The ability to go off the radar allows for quiet reflection and daring experimentation, processes that are essential to a productive and democratic society. Given what we stand to lose, we ought to be cautious with groundbreaking technological progress. That does not mean we have to move any slower, but we should think about the potential consequences of the steps we take. This article maps out the recently launched face recognition apps and some emerging regulatory responses to offer initial policy considerations. With respect to current apps, developers should consider how the relevant individuals could be put on notice, given that the apps will use information not only about their users but also about the persons being identified. They should also consider how the apps could minimize data collection and retention and keep the data secure. Today's face recognition apps mostly use photos from social networks; they therefore call for regulatory responses that consider the context in which users originally shared the photos. Most importantly, the article highlights that the Federal Trade Commission's first policy response to consumer applications that use face recognition did not follow the well-established principle of technology neutrality. The article argues that any regulation of real-time identification should be technology neutral and narrowly address harmful uses of computer vision without hampering the development of useful applications.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve, and how we can control them. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6), and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.