
    Visual Content Privacy Protection: A Survey

    Vision is people's most important sense and one of the main channels of cognition. As a result, people tend to capture and share their life experiences as visual content, which greatly facilitates the transfer of information. At the same time, it increases the risk of privacy violations: an image or video can reveal many kinds of privacy-sensitive information. Researchers have worked continuously to develop targeted privacy protection solutions, and several surveys summarize them from particular perspectives. However, these surveys are either problem-driven, scenario-specific, or technology-specific, which makes it difficult for them to summarize existing solutions at a macroscopic level. This survey proposes a framework that encompasses the various concerns and solutions for visual privacy and allows privacy concerns to be understood at a comprehensive, macro level. The framework builds on the observation that every privacy concern has a corresponding adversary, and it divides privacy protection into three categories: protection against a computer vision (CV) adversary, against a human vision (HV) adversary, and against a combined CV & HV adversary. For each category, we analyze the characteristics of the main approaches to privacy protection and then systematically review representative solutions. Open challenges and future directions for visual privacy protection are also discussed.
    Comment: 24 pages, 13 figures

    Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond

    Artificial Intelligence Generated Content (AIGC) is one of the latest achievements in AI development. The content generated by related applications, such as text, images, and audio, has sparked heated discussion, and various derived AIGC applications are gradually entering all walks of life, bringing a profound impact to people's daily lives. However, the rapid development of such generative tools has also raised concerns about privacy and security, and even copyright, in AIGC. We note that advanced technologies such as blockchain and privacy computing can be combined with AIGC tools, but no work has yet investigated their relevance and prospects in a systematic and detailed way. It is therefore necessary to explore how these technologies can be used to protect the privacy and security of data in AIGC. In this paper, we first systematically review the concept, classification, and underlying technologies of AIGC. Then, we discuss the privacy and security challenges faced by AIGC from multiple perspectives and purposefully list the countermeasures that currently exist. We hope our survey will help researchers and industry build a more secure and robust AIGC system.
    Comment: 43 pages, 10 figures

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating which technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.

    State of the art in privacy preservation in video data

    Active and Assisted Living (AAL) technologies and services are a possible solution to the crucial health and social care challenges resulting from demographic change and current economic conditions. AAL systems aim to improve quality of life and support independent and healthy living of older and frail people. AAL monitoring systems are composed of networks of sensors (worn by the users or embedded in their environment), processing elements, and actuators that analyse the environment and its occupants to extract knowledge and to detect events, such as anomalous behaviours, launch alarms to tele-care centres, or support activities of daily living, among others. Innovation in AAL can therefore address healthcare and social demands while generating economic opportunities.
    Recently, there have been far-reaching advancements in the development of video-based devices with improved processing capabilities, heightened quality, wireless data transfer, and increased interoperability with Internet of Things (IoT) devices. Computer vision makes it possible to monitor an environment and report on visual information, which is commonly the most straightforward and human-like way of describing an event, a person, an object, interactions, and actions. Cameras can therefore offer more intelligent solutions for AAL, but they may be considered intrusive by some end users. The General Data Protection Regulation (GDPR) establishes the obligation for technologies to meet the principles of data protection by design and by default. More specifically, Article 25 of the GDPR requires that organizations "implement appropriate technical and organizational measures [...] which are designed to implement data protection principles [...] in an effective manner and to integrate the necessary safeguards into [data] processing." Thus, AAL solutions must adopt privacy-by-design methodologies in order to protect the fundamental rights of those being monitored.
    Different methods have been proposed in recent years to preserve visual privacy for identity protection. However, in many AAL applications, where mostly only one person is present (e.g. an older person living alone), user identification might not be an issue; concerns relate more to the disclosure of appearance (e.g. whether the person is dressed or naked) and behaviour, which we call bodily privacy. Visual obfuscation techniques, such as image filters, facial de-identification, body abstraction, and gait anonymization, can be employed to protect privacy, and can be agreed upon with the users to ensure they feel comfortable.
    Moreover, it is difficult to ensure a high level of security and privacy during the transmission of video data. If data is transmitted over several network domains using different transmission technologies and protocols, and finally processed at a remote location and stored on a server in a data center, it becomes demanding to implement and guarantee the highest level of protection over the entire transmission and storage system and for the whole lifetime of the data. The development of video technologies, increases in data rates and processing speeds, the wide use of the Internet and cloud computing, and highly efficient video compression methods have made video encryption even more challenging. Consequently, efficient and robust encryption of multimedia data, together with efficient compression methods, is an important prerequisite for secure and efficient video transmission and storage.
    This publication is based upon work from COST Action GoodBrother - Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (CA19121), supported by COST (European Cooperation in Science and Technology). COST is a funding agency for research and innovation networks. Its Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers, boosting their research, careers, and innovation. www.cost.eu
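    The obfuscation techniques mentioned above (image filters, facial de-identification, body abstraction, gait anonymization) share a common on-device step: locate the sensitive region in each frame and degrade or replace it before the frame is stored or transmitted. The sketch below illustrates that step with simple face detection and pixelation using OpenCV; it is an illustrative example under assumed tooling, not one of the specific methods surveyed, and the input filename is hypothetical.

    # Minimal sketch: detect faces in a frame and pixelate them before the frame
    # leaves the device. Assumes OpenCV (cv2) is installed; the Haar cascade file
    # ships with OpenCV itself.
    import cv2

    def pixelate_faces(frame, blocks=12):
        """Return a copy of `frame` with every detected face region pixelated."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        out = frame.copy()
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            face = out[y:y + h, x:x + w]
            # Downscale, then upscale with nearest-neighbour interpolation to pixelate.
            small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
            out[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                               interpolation=cv2.INTER_NEAREST)
        return out

    if __name__ == "__main__":
        img = cv2.imread("frame.jpg")  # hypothetical input frame
        cv2.imwrite("frame_private.jpg", pixelate_faces(img))

    Stronger protections such as facial de-identification or full-body abstraction would replace this filter with learned models, but the pipeline shape stays the same: detect, transform, then encrypt for transmission.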

    The Force Awakens: Artificial Intelligence for Consumer Law

    Recent years have been tainted by market practices that continuously expose us, as consumers, to new risks and threats. We have become accustomed, and sometimes even resigned, to businesses monitoring our activities, examining our data, and even meddling with our choices. Artificial Intelligence (AI) is often depicted as a weapon in the hands of businesses and blamed for allowing this to happen. In this paper, we envision a paradigm shift in which AI technologies are brought to the side of consumers and their organizations, with the aim of building an efficient and effective counter-power. AI-powered tools can support massive-scale automated analysis of textual and audiovisual data, as well as code, for the benefit of consumers and their organizations. This in turn can lead to better oversight of business activities, help consumers exercise their rights, and enable civil society to mitigate information overload. We discuss the societal, political, and technological challenges that stand in the way of that vision.

    GAN-Based Differential Private Image Privacy Protection Framework for the Internet of Multimedia Things.

    With the development of the Internet of Multimedia Things (IoMT), an increasing amount of image data is collected by multimedia devices such as smartphones, cameras, and drones. This massive volume of images is widely used across IoMT applications, which presents substantial challenges for privacy preservation. In this paper, we propose a new image privacy protection framework to protect the sensitive personal information contained in images collected by IoMT devices. We use deep neural network techniques to identify the privacy-sensitive content in images, and then protect it with synthetic content generated by generative adversarial networks (GANs) with differential privacy (DP). Our experimental results show that the proposed framework can effectively protect users' privacy while maintaining image utility.
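    The abstract describes the framework only at a high level, so the following is a minimal, hypothetical sketch of the pipeline it implies: a detector flags privacy-sensitive regions, and a generator trained under differential privacy synthesizes replacement content. Both detect_sensitive_regions and dp_generator are placeholder stand-ins rather than the authors' components; in the paper the detector is a deep neural network and the generator is a GAN whose DP guarantee comes from its training procedure, which is not shown here.

    # Hypothetical sketch of the detect-then-replace pipeline; not the authors' code.
    import numpy as np

    def detect_sensitive_regions(image):
        """Placeholder detector: return (x, y, w, h) boxes of sensitive content.
        A real system would use a trained detection network; here we simply
        flag a fixed central patch."""
        h, w = image.shape[:2]
        return [(w // 4, h // 4, w // 2, h // 2)]

    def dp_generator(shape, rng):
        """Placeholder for a DP-trained GAN generator returning a synthetic patch.
        A real generator maps a latent vector to realistic content; the privacy
        guarantee comes from training it with differentially private optimisation."""
        return rng.integers(0, 256, size=shape, dtype=np.uint8)

    def protect_image(image, rng=None):
        """Replace every detected sensitive region with synthetic content."""
        rng = rng or np.random.default_rng(0)
        out = image.copy()
        for (x, y, w, h) in detect_sensitive_regions(out):
            out[y:y + h, x:x + w] = dp_generator(out[y:y + h, x:x + w].shape, rng)
        return out

    if __name__ == "__main__":
        dummy = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for an IoMT image
        print(protect_image(dummy).shape)

    Because only synthetic content from a DP-trained generator replaces the sensitive regions, the released image can retain utility for downstream IoMT tasks while limiting what it reveals about the individuals pictured.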

    Facial re-enactment, speech synthesis and the rise of the Deepfake

    Emergent technologies in the fields of audio speech synthesis and video facial manipulation have the potential to drastically impact our societal patterns of multimedia consumption. At a time when social media and internet culture are plagued by misinformation, propaganda and “fake news”, their latent misuse represents a possible looming threat to fragile systems of information sharing and social democratic discourse. It has thus become increasingly recognised in both academic and mainstream journalism that the ramifications of these tools must be examined to determine what they are and how their widespread availability can be managed. This research project seeks to examine four emerging software programs – Face2Face, FakeApp, Adobe VoCo and Lyrebird – that are designed to facilitate the synthesis of speech and the manipulation of facial features in videos. I will explore their positive industry applications and the potentially negative consequences of their release into the public domain. Consideration will be directed to how such consequences and risks can be ameliorated through detection, regulation and education. A final analysis of these three competing threads will then attempt to address whether the practical and commercial applications of these technologies are outweighed by the inherent unethical or illegal uses they engender and, if so, what we can do in response.