Distributionally Robust Semi-Supervised Learning for People-Centric Sensing
Semi-supervised learning is crucial for alleviating labelling burdens in
people-centric sensing. However, human-generated data inherently suffer from
distribution shift in semi-supervised learning due to the diverse biological
conditions and behavior patterns of humans. To address this problem, we propose
a generic distributionally robust model for semi-supervised learning on
distributionally shifted data. Considering both the discrepancy and the
consistency between the labeled data and the unlabeled data, we learn the
latent features that reduce person-specific discrepancy and preserve
task-specific consistency. We evaluate our model in a variety of people-centric
recognition tasks on real-world datasets, including intention recognition,
activity recognition, muscular movement recognition and gesture recognition.
The experiment results demonstrate that the proposed model outperforms the
state-of-the-art methods.
Comment: 8 pages, accepted by AAAI201
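The abstract describes a loss that balances person-specific discrepancy reduction against task-specific consistency. The paper does not specify the discrepancy measure here; the sketch below is an illustrative stand-in (not the authors' implementation) that uses an RBF-kernel maximum mean discrepancy (MMD) between labeled and unlabeled feature batches as the discrepancy term, and supervised cross-entropy as the consistency term:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared MMD between two feature batches under an RBF kernel.
    A small value means the two feature distributions are close."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def cross_entropy(logits, labels):
    """Mean cross-entropy of integer labels under softmax(logits)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def robust_ssl_loss(feat_l, logits_l, labels, feat_u, lam=0.5):
    """Hypothetical combined objective: task consistency (supervised
    loss on labeled data) plus lam * person-specific discrepancy
    (MMD between labeled and unlabeled latent features)."""
    return cross_entropy(logits_l, labels) + lam * rbf_mmd2(feat_l, feat_u)
```

The `lam` trade-off weight and the MMD choice are assumptions for illustration; the same structure admits any differentiable discrepancy measure between the labeled and unlabeled feature distributions.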
Neural network security and optimization for single-person authentication using electroencephalogram data
Includes bibliographical references. 2022 Fall. Security is an important focus for devices that use biometric data, and as such, security around authentication needs to be considered. This is true for brain-computer interfaces (BCIs), which often use electroencephalogram (EEG) data as inputs and neural network classification to determine their function. EEG data can also serve as a form of biometric authentication, which would contribute to the security of these devices. Neural networks have also used a method known as ablation to improve their efficiency. In light of this, the goal of this research is to determine whether neural network ablation can also be used to improve security by restricting a network's learning capability to authenticating only a given target, thereby preventing adversaries from training new data to be authenticated. Data on the change in entropy of the networks' weight values after training were also collected to identify patterns in weight distribution. Results from a set of ablated networks were compared to those from a set of baseline (non-ablated) networks for five targets chosen randomly from a data set of 12 people. The ablated networks maintained accuracy through the ablation process but did not perform as well as the baseline networks. The change in performance between single-target authentication and target-plus-invader authentication was also examined, but no significant differences were found. Furthermore, the change in entropy differed both between baseline and ablated networks and between single-target and target-plus-invader authentication for all networks. Ablation was determined to have potential for security applications that warrants further exploration, and weight distribution was found to correlate with the complexity of a network's input.
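The thesis does not reproduce its ablation procedure here; a minimal sketch, assuming magnitude-based weight ablation and a histogram-based Shannon entropy of the weight values (both common choices, not necessarily the author's), looks like:

```python
import numpy as np

def ablate_smallest(W, frac=0.3):
    """Zero out the given fraction of weights with smallest magnitude,
    a simple stand-in for network ablation (reduces capacity)."""
    flat = np.abs(W).ravel()
    k = int(len(flat) * frac)
    if k == 0:
        return W.copy()
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    out = W.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def weight_entropy(W, bins=32):
    """Shannon entropy (bits) of the weight-value histogram, the kind
    of statistic the study tracks to characterize weight distribution."""
    hist, _ = np.histogram(W.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Comparing `weight_entropy(W)` before and after `ablate_smallest(W)` gives one concrete way to quantify the "change in entropy of weight values" the abstract refers to; the bin count and ablation fraction are illustrative parameters.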
A Framework for Preserving Privacy and Cybersecurity in Brain-Computer Interfacing Applications
Brain-Computer Interfaces (BCIs) comprise a rapidly evolving field of
technology with the potential of far-reaching impact in domains ranging from
medical over industrial to artistic, gaming, and military. Today, these
emerging BCI applications are typically still at early technology readiness
levels, but because BCIs create novel, technical communication channels for the
human brain, they have raised privacy and security concerns. To mitigate such
risks, a large body of countermeasures has been proposed in the literature, but
a general framework is lacking which would describe how privacy and security of
BCI applications can be protected by design, i.e., already as an integral part
of the early BCI design process, in a systematic manner, and allowing suitable
depth of analysis for different contexts such as commercial BCI product
development vs. academic research and lab prototypes. Here we propose the
adoption of recent systems-engineering methodologies for privacy threat
modeling, risk assessment, and privacy engineering to the BCI field. These
methodologies address privacy and security concerns in a more systematic and
holistic way than previous approaches, and provide reusable patterns on how to
move from principles to actions. We apply these methodologies to BCI and data
flows and derive a generic, extensible, and actionable framework for
brain-privacy-preserving cybersecurity in BCI applications. This framework is
designed for flexible application to the wide range of current and future BCI
applications. We also propose a range of novel privacy-by-design features for
BCIs, with an emphasis on features promoting BCI transparency as a prerequisite
for informational self-determination of BCI users, as well as design features
for ensuring BCI user autonomy. We anticipate that our framework will
contribute to the development of privacy-respecting, trustworthy BCI
technologies
Privacy-Protecting Techniques for Behavioral Data: A Survey
Our behavior (the way we talk, walk, or think) is unique and can be used as a biometric trait. It also correlates with sensitive attributes such as emotions. Hence, techniques to protect individuals' privacy against unwanted inferences are required. To consolidate knowledge in this area, we systematically reviewed applicable anonymization techniques. We taxonomize and compare existing solutions regarding privacy goals, conceptual operation, advantages, and limitations. Our analysis shows that some behavioral traits (e.g., voice) have received much attention, while others (e.g., eye gaze, brainwaves) are mostly neglected. We also find that the evaluation methodology of behavioral anonymization techniques can be further improved.