9 research outputs found
HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection
With the popularity of smart devices and the development of computer vision
technology, concerns about face privacy protection are growing. The face
de-identification technique is a practical way to address the identity
protection problem. Existing facial de-identification methods exhibit several
problems, including degraded realism of anonymized results in the presence of
occlusions and an inability to preserve identity-irrelevant details. We present
a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to
address these issues. This approach can
disentangle identities and attributes while preserving image-specific details
such as background, facial features (e.g., wrinkles), and lighting, even in
occluded scenes. To disentangle the latent codes in the GAN inversion space, we
introduce an Identity Disentanglement Module (IDM). This module selects the
latent codes that are closely related to the identity. It further separates the
latent codes into identity-related codes and attribute-related codes, enabling
the network to preserve attributes while only modifying the identity. To ensure
the preservation of image details and enhance the network's robustness to
occlusions, we propose an Attribute Retention Module (ARM). This module
adaptively preserves identity-irrelevant details and facial occlusions and
blends them into the generated results in a modulated manner. Extensive
experiments show that our method has higher quality, better detail fidelity,
and stronger occlusion robustness than other face de-identification methods.
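The disentanglement idea behind the IDM can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' implementation: HFORD's actual module is learned, whereas here the identity-related layer indices and the per-layer latent lists are made-up stand-ins for codes in a GAN inversion space.

```python
# Toy illustration of identity/attribute latent disentanglement
# (hypothetical sketch; HFORD's actual IDM is a learned module).

def deidentify_latent(latent, surrogate, identity_layers):
    """Replace only the identity-related layers of a per-layer latent code.

    latent, surrogate: lists of per-layer style vectors (lists of floats).
    identity_layers: indices of layers deemed identity-related.
    """
    assert len(latent) == len(surrogate)
    return [
        surrogate[i] if i in identity_layers else layer
        for i, layer in enumerate(latent)
    ]

# 4-layer toy latent: layers 0-1 carry identity, 2-3 carry attributes.
original  = [[1.0], [2.0], [3.0], [4.0]]
surrogate = [[9.0], [8.0], [7.0], [6.0]]
mixed = deidentify_latent(original, surrogate, identity_layers={0, 1})
print(mixed)  # → [[9.0], [8.0], [3.0], [4.0]]
```

Keeping the attribute-related layers untouched is what lets background, wrinkles, and lighting survive while only the identity changes.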
A review on visual privacy preservation techniques for active and assisted living
This paper reviews the state of the art in visual privacy protection techniques, with particular attention paid to techniques applicable to the field of Active and Assisted Living (AAL). A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced. Perceptual obfuscation methods, a category in this taxonomy, are highlighted: these visual privacy preservation techniques are particularly relevant in video-based AAL monitoring scenarios. Obfuscation against machine learning models is also explored. A high-level classification scheme of privacy by design, as defined by experts in privacy and data protection law, is connected to the proposed taxonomy of visual privacy preservation techniques. Finally, we note open questions that exist in the field and introduce the reader to some exciting avenues for future research in the area of visual privacy.

Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 861091. The authors would also like to acknowledge the contribution of COST Action CA19121 - GoodBrother, Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (https://goodbrother.eu/), supported by COST (European Cooperation in Science and Technology) (https://www.cost.eu/).
State of the art in privacy preservation in video data
Active and Assisted Living (AAL) technologies and services are a possible solution to address the crucial challenges regarding health and social care resulting from demographic changes and current economic conditions. AAL systems aim to improve quality of life and support independent and healthy living of older and frail people. AAL monitoring systems are composed of networks of sensors (worn by the users or embedded in their environment), processing elements and actuators that analyse the environment and its occupants to extract knowledge and to detect events, such as anomalous behaviours, launch alarms to tele-care centres, or support activities of daily living, among others. Therefore, innovation in AAL can address healthcare and social demands while generating economic opportunities.
Recently, there have been far-reaching advancements in the development of video-based devices with improved processing capabilities, heightened quality, wireless data transfer, and increased interoperability with Internet of Things (IoT) devices. Computer vision gives the possibility to monitor an environment and report on visual information, which is commonly the most straightforward and human-like way of describing an event, a person, an object, interactions and actions. Therefore, cameras can offer more intelligent solutions for AAL, but they may be considered intrusive by some end users.
The General Data Protection Regulation (GDPR) establishes the obligation for technologies to meet the principles of data protection by design and by default. More specifically, Article 25 of the GDPR requires that organizations must “implement appropriate technical and organizational measures [...] which are designed to implement data protection principles [...] in an effective manner and to integrate the necessary safeguards into [data] processing.” Thus, AAL solutions must consider privacy-by-design methodologies in order to protect the fundamental rights of those being monitored.
Different methods have been proposed in recent years to preserve visual privacy for identity protection. However, in many AAL applications, where mostly only one person would be present (e.g. an older person living alone), user identification might not be an issue; concerns relate more to the disclosure of appearance (e.g. whether the person is dressed or naked) and behaviour, what we call bodily privacy. Visual obfuscation techniques, such as image filters, facial de-identification, body abstraction, and gait anonymization, can be employed to protect privacy; when agreed upon by the users, they help ensure the users feel comfortable being monitored.
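The simplest of the obfuscation filters mentioned above is pixelation, which can be sketched in a few lines. This is a minimal toy on a grayscale image stored as a nested list; real systems operate on video frames with image libraries, and the block size controls the privacy/utility trade-off.

```python
# Minimal pixelation filter (a common perceptual obfuscation method).
# Toy sketch on a grayscale image stored as a nested list of intensities.

def pixelate(img, block):
    """Replace each block x block tile with its mean intensity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = [img[j][i]
                    for j in range(y, min(y + block, h))
                    for i in range(x, min(x + block, w))]
            mean = sum(tile) / len(tile)
            for j in range(y, min(y + block, h)):
                for i in range(x, min(x + block, w)):
                    out[j][i] = mean
    return out

img = [[0, 100], [200, 100]]
print(pixelate(img, 2))  # → [[100.0, 100.0], [100.0, 100.0]]
```

A larger block size obscures more detail (stronger privacy) at the cost of utility for downstream recognition tasks.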
Moreover, it is difficult to ensure a high level of security and privacy during the transmission of video data. If data is transmitted over several network domains using different transmission technologies and protocols, and finally processed at a remote location and stored on a server in a data center, it becomes difficult to implement and guarantee the highest level of protection over the entire transmission and storage system and for the whole lifetime of the data. The development of video technologies, the increase in data rates and processing speeds, the wide use of the Internet and cloud computing, as well as highly efficient video compression methods, have made video encryption even more challenging. Consequently, efficient and robust encryption of multimedia data, together with efficient compression methods, is an important prerequisite for achieving secure and efficient video transmission and storage.

This publication is based upon work from COST Action GoodBrother - Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (CA19121), supported by COST (European Cooperation in Science and Technology).
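The interplay of compression and encryption noted above has a fixed ordering: compress first, then encrypt, because ciphertext is statistically incompressible. The sketch below illustrates only that ordering with stdlib tools and a toy XOR keystream; the keystream construction is for illustration and is NOT secure (real deployments use authenticated ciphers such as AES-GCM).

```python
import hashlib
import zlib

def keystream(key, n):
    """Toy keystream from iterated SHA-256 (illustration only, NOT secure)."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def protect(frame_bytes, key):
    """Compress first, then encrypt. Encrypting before compressing would
    waste bandwidth, since ciphertext does not compress."""
    packed = zlib.compress(frame_bytes)
    ks = keystream(key, len(packed))
    return bytes(a ^ b for a, b in zip(packed, ks))

def recover(cipher, key):
    ks = keystream(key, len(cipher))
    return zlib.decompress(bytes(a ^ b for a, b in zip(cipher, ks)))

frame = b"video frame " * 100
cipher = protect(frame, b"shared-key")
assert recover(cipher, b"shared-key") == frame
print(len(frame), "->", len(cipher))  # redundant payload shrinks before encryption
```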
StyleGAN as a Utility-Preserving Face De-identification Method
Face de-identification methods have been proposed to preserve users' privacy
by obscuring their faces. These methods, however, can degrade the quality of
photos, and they usually do not preserve the utility of faces, i.e., their age,
gender, pose, and facial expression. Recently, GANs, such as StyleGAN, have
been proposed, which generate realistic, high-quality imaginary faces. In this
paper, we investigate the use of StyleGAN in generating de-identified faces
through style mixing. We examined this de-identification method for preserving
utility and privacy by implementing several face detection, verification, and
identification attacks and conducting a user study. The results from our
extensive experiments, human evaluation, and comparison with two
state-of-the-art methods, i.e., CIAGAN and DeepPrivacy, show that StyleGAN
performs on par with or better than these methods, preserving users' privacy and
images' utility. In particular, the results of the machine learning-based
experiments show that StyleGAN0-4 preserves utility better than CIAGAN and
DeepPrivacy while preserving privacy at the same level. StyleGAN0-3 preserves
utility at the same level while providing more privacy. In this paper, for the
first time, we also performed a carefully designed user study to examine both
privacy and utility-preserving properties of StyleGAN0-3, 0-4, and 0-5, as well
as CIAGAN and DeepPrivacy from the human observers' perspectives. Our
statistical tests showed that participants tend to verify and identify
StyleGAN0-5 images more easily than DeepPrivacy images. All the methods but
StyleGAN0-5 had significantly lower identification rates than CIAGAN. Regarding
utility, as expected, StyleGAN0-5 performed significantly better in preserving
some attributes. Among all methods, on average, participants believe gender has
been preserved the most while naturalness has been preserved the least.
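The StyleGAN0-k labels above presumably denote style mixing in which layers 0 through k of the real photo's per-layer latent are replaced by those of a generated, imaginary face (an assumption based on the naming, not confirmed by the abstract). Under that assumption, the mixing rule reduces to a one-line sketch; the 8-layer toy latents below are made up for illustration.

```python
# Hypothetical sketch of style mixing as a de-identification rule.
# The per-layer codes are toy stand-ins for StyleGAN's w latents.

def style_mix(source, donor, k):
    """Replace style layers 0..k of `source` with `donor` layers."""
    return donor[:k + 1] + source[k + 1:]

source = list("SSSSSSSS")   # 8 toy style layers from the real photo
donor  = list("DDDDDDDD")   # layers from a generated, imaginary face
print("".join(style_mix(source, donor, 4)))  # → DDDDDSSS ("StyleGAN0-4")
```

A larger k replaces more of the source, which matches the reported trade-off: StyleGAN0-3 leans toward privacy, StyleGAN0-5 toward recognizability.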
Emotion detection with privacy preservation using adversarial learning
The continuous monitoring of one's emotional state can provide valuable insights about their psychological well-being and can be used as a foundation for diagnosis and treatment applications. Yet, due to privacy concerns, technologies that continuously monitor signals that reflect emotions, such as images, are met with strong skepticism. This thesis aims to design a privacy-preserving image generation algorithm that anonymizes the input image while maintaining emotion-related information. To do so, we identify landmarks in human faces and quantify the amount of emotion- and identity-based information carried by each of the landmarks. We then propose a modification of a conditional generative adversarial network that can transform facial images in such a way that the identity-based information is discarded while the emotion-based information is retained. We then evaluate the degree of emotion and identity content in the transformed images by performing emotion and identity classification on these images. The proposed system is trained and evaluated on two publicly available datasets, namely the Yale Face Database and the Japanese Female Facial Expression dataset, and the generated images achieve moderate to high emotion classification accuracy and low identity classification accuracy.
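The objective implied by this setup (keep emotion readable, make identity unreadable) is typically expressed as a weighted combination of adversarial terms. The sketch below is a hypothetical toy formulation, not the thesis' exact loss: it simply penalizes emotion-classification loss on the transformed image while rewarding identity-classifier failure.

```python
# Hypothetical sketch of a privacy-preserving generator objective
# (toy formulation, not the thesis' exact loss).

def generator_loss(emotion_loss, identity_accuracy, lam=1.0):
    """Minimize emotion-classification loss on the transformed image
    while penalizing identity-classifier success.

    emotion_loss: classification loss of an emotion classifier (lower = better).
    identity_accuracy: accuracy of an identity classifier in [0, 1].
    lam: weight of the privacy term (assumed hyperparameter).
    """
    return emotion_loss + lam * identity_accuracy

# A transform that keeps emotion readable (low loss) and fools the
# identity classifier (low accuracy) scores best:
good = generator_loss(emotion_loss=0.2, identity_accuracy=0.1)
bad  = generator_loss(emotion_loss=0.2, identity_accuracy=0.9)
assert good < bad
```

In practice both classifiers are trained jointly with the generator, so the identity term acts as an adversary the generator learns to defeat.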
State of the art of audio- and video based solutions for AAL
Working Group 3. Audio- and Video-based AAL Applications

It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users.
The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.
The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.