97 research outputs found
Securing Cyber-Physical Social Interactions on Wrist-worn Devices
Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this article, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although they belong to two different users, the two hands involved in the shaking event are rigidly connected and therefore exhibit very similar motion patterns. We propose a novel key generation system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it involves no extra bespoke hardware, nor requires the users to perform pre-defined gestures. We implement the proposed key generation system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after only around 1 s of handshaking (with success rate >99%), and is resilient to different types of attacks, including mimicking impersonation attacks, passive impersonation attacks, and eavesdropping attacks. Specifically, for real-time mimicking impersonation attacks, the Equal Error Rate (EER) in our experiments is only 1.6% on average. We also show that the proposed key generation system can be extremely lightweight and is able to run in situ on resource-constrained smartwatches without incurring excessive resource consumption.
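The core observation above — both wrists see nearly the same shaking motion, so each device can independently derive the same bit string — can be sketched in a few lines. The moving-average thresholding below is an illustrative quantizer, not the paper's actual scheme, and it omits the reconciliation and privacy-amplification steps a real key-agreement protocol would need:

```python
import numpy as np

def quantize_motion(signal, window=8):
    """Quantize a motion signal into bits: a sample maps to 1 if it
    exceeds the moving average of the preceding window, else 0."""
    bits = []
    for i in range(window, len(signal)):
        avg = np.mean(signal[i - window:i])
        bits.append(1 if signal[i] > avg else 0)
    return bits

# Two noisy observations of the same handshake motion
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 12 * np.pi, 200))   # shared shaking motion
dev_a = base + rng.normal(0, 0.01, 200)          # watch on hand A
dev_b = base + rng.normal(0, 0.01, 200)          # watch on hand B

bits_a = quantize_motion(dev_a)
bits_b = quantize_motion(dev_b)
agreement = np.mean([a == b for a, b in zip(bits_a, bits_b)])
```

In this toy run the two bit strings agree on the vast majority of positions; a real protocol would then reconcile the few mismatches (e.g., with fuzzy extractors) before deriving a 128-bit key.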
Shake-n-Shack: Enabling Secure Data Exchange Between Smart Wearables via Handshakes
Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this paper, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although they belong to two different users, the two hands involved in the shaking event are rigidly connected and therefore exhibit very similar motion patterns. We propose a novel Shake-n-Shack system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it involves no extra bespoke hardware, nor requires the users to perform pre-defined gestures. We implement the proposed Shake-n-Shack system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after only around 1 s of handshaking (with success rate >99%), and is resilient to real-time mimicking attacks: in our experiments, the Equal Error Rate (EER) is only 1.6% on average. We also show that the proposed Shake-n-Shack system can be extremely lightweight and is able to run in situ on resource-constrained smartwatches without incurring excessive resource consumption.
Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties
Vertical federated learning (VFL) enables a service provider (i.e., an active party) who owns labeled features to collaborate with passive parties who possess auxiliary features to improve model performance. Existing VFL approaches, however, have two major vulnerabilities when passive parties unexpectedly quit in the deployment phase of VFL: severe performance degradation and intellectual property (IP) leakage of the active party's labels. In this paper, we propose Party-wise Dropout to improve the VFL model's robustness against the unexpected exit of passive parties, and a defense method called DIMIP to protect the active party's IP in the deployment phase. We evaluate our proposed methods on multiple datasets against different inference attacks. The results show that Party-wise Dropout effectively maintains model performance after a passive party quits, and DIMIP successfully disguises label information from the passive party's feature extractor, thereby mitigating IP leakage.
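Party-wise Dropout, as described, trains the fusion model to tolerate a missing passive party. A minimal sketch of the idea, where the zeroing scheme and all names are assumptions for illustration: with some probability, the entire embedding contributed by a passive party is dropped during training, so the model does not over-rely on features that may disappear at deployment time.

```python
import numpy as np

rng = np.random.default_rng(42)

def fuse(active_emb, passive_emb, drop_prob=0.3, training=True):
    """Fuse embeddings from the active party and one passive party.

    During training, the passive party's whole embedding is dropped
    (zeroed) with probability drop_prob, so the downstream model learns
    to predict from the active party's features alone when needed."""
    if training and rng.random() < drop_prob:
        passive_emb = np.zeros_like(passive_emb)
    return np.concatenate([active_emb, passive_emb])

active = np.ones(4)
passive = np.full(4, 2.0)

# Over many training steps, roughly drop_prob of the fusions see a
# zeroed passive side.
drops = sum(np.all(fuse(active, passive)[4:] == 0) for _ in range(10_000))
rate = drops / 10_000
```

At deployment, a quitting party corresponds to the zeroed case the model has already seen many times during training.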
Snoopy: Sniffing Your Smartwatch Passwords via Deep Sequence Learning
Demand for smartwatches has taken off in recent years: with new models that can run independently of smartphones and provide more useful features, they are becoming first-class mobile platforms. One can access online banking or even make payments on a smartwatch without a paired phone. This makes smartwatches more attractive, and more vulnerable to malicious attacks, which to date have been largely overlooked. In this paper, we demonstrate Snoopy, a password extraction and inference system which is able to accurately infer passwords entered on Android/Apple watches within 20 attempts, just by eavesdropping on motion sensors. Snoopy uses a uniform framework to extract the segments of motion data during which passwords are entered, and uses novel deep neural networks to infer the actual passwords. We evaluate the proposed Snoopy system in the real world with data from 362 participants and show that our system offers a ~3-fold improvement in the accuracy of inferring passwords compared to the state of the art, without consuming excessive energy or computational resources. We also show that Snoopy is very resilient to user and device heterogeneity: it can be trained on crowd-sourced motion data (e.g., via Amazon Mechanical Turk) and then used to attack passwords from a new user, even if they are wearing a different model of watch. This paper shows that, in the wrong hands, Snoopy can potentially cause serious leaks of sensitive information. By raising awareness, we invite the community and manufacturers to revisit the risks of continuous motion sensing on smart wearable devices.
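Snoopy's first stage extracts the motion segments that correspond to password entry. A crude, hypothetical stand-in for that extractor is an energy threshold over fixed windows of the motion signal (the real system uses a learned, uniform framework rather than this heuristic):

```python
import numpy as np

rng = np.random.default_rng(7)

def extract_segments(motion, win=50, thresh=0.5):
    """Flag fixed-size windows whose motion energy (variance) exceeds a
    threshold as candidate password-entry segments -- a crude stand-in
    for a learned segment extractor."""
    energies = [np.var(motion[i:i + win])
                for i in range(0, len(motion) - win + 1, win)]
    return [i for i, e in enumerate(energies) if e > thresh]

# Quiet wrist, a burst of typing motion, then quiet again.
motion = np.concatenate([
    np.zeros(100),          # watch at rest
    rng.normal(0, 1, 100),  # wrist micro-movements while tapping
    np.zeros(100),          # at rest again
])
segments = extract_segments(motion)
```

The flagged windows would then be handed to the inference network; everything else is discarded, which is what keeps the attack cheap enough to run continuously.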
Rethinking Normalization Methods in Federated Learning
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data. In this work, we explicitly uncover the external covariate shift problem in FL, which is caused by the independent local training processes on different devices. We demonstrate that external covariate shift will lead to the obliteration of some devices' contributions to the global model. Further, we show that normalization layers are indispensable in FL, since their inherent properties can alleviate the problem of obliterating some devices' contributions. However, recent works have shown that batch normalization, one of the standard components in many deep neural networks, incurs an accuracy drop of the global model in FL. The essential reason for the failure of batch normalization in FL is poorly studied. We unveil that external covariate shift is the key reason why batch normalization is ineffective in FL. We also show that layer normalization is a better choice in FL, as it can mitigate the external covariate shift and improve the performance of the global model. We conduct experiments on CIFAR10 under non-IID settings. The results demonstrate that models with layer normalization converge fastest and achieve the best or comparable accuracy for three different model architectures.
Comment: Submitted to the DistributedML'22 workshop.
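The contrast the abstract draws can be seen directly from the two normalization formulas: batch normalization's statistics depend on the local batch, and hence on the client's data distribution, while layer normalization's statistics are computed per sample. A small numpy sketch with synthetic data (illustrative only, not the paper's experimental setup):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize across the batch dimension -- statistics depend on
    which client's data the batch came from."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def layer_norm(x, eps=1e-5):
    """Normalize each sample across its features -- statistics are
    per-sample, so the client's data distribution never enters."""
    return (x - x.mean(axis=1, keepdims=True)) / np.sqrt(
        x.var(axis=1, keepdims=True) + eps)

rng = np.random.default_rng(0)
sample = rng.normal(0, 1, (1, 8))

# The same sample placed inside two clients' batches, one of which has a
# shifted (non-IID) local distribution.
client_a = np.vstack([sample, rng.normal(0, 1, (31, 8))])
client_b = np.vstack([sample, rng.normal(5, 1, (31, 8))])  # shifted client

bn_gap = np.abs(batch_norm(client_a)[0] - batch_norm(client_b)[0]).max()
ln_gap = np.abs(layer_norm(client_a)[0] - layer_norm(client_b)[0]).max()
```

The same sample gets very different batch-normalized values inside the shifted client's batch, while its layer-normalized values are identical on both clients — which is the external covariate shift argument in miniature.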
Elementium [cost-effective braille embosser]
Sense Solutions presents Elementium, a low-cost Braille embosser. Owning a Braille embosser is a great luxury for any visually impaired person: with this technology, they can conveniently tell objects apart by printing out their names and marking them. The need for Braille embossers is especially urgent for people who are learning to read Braille, as they require a lot of practice. Currently a Braille embosser can cost upwards of 5000; this is too expensive, and people are in need of an affordable solution. With a practical and inexpensive product like Elementium, more people can learn to use Braille materials and improve their ways of life.
SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models
Mixture-of-Experts (MoE) has emerged as a favorable architecture in the era of large models due to its inherent advantage, i.e., enlarging model capacity without incurring notable computational overhead. Yet, realizing these benefits often results in ineffective GPU memory utilization, as large portions of the model parameters remain dormant during inference. Moreover, the memory demands of large models consistently outpace the memory capacity of contemporary GPUs. Addressing this, we introduce SiDA (Sparsity-inspired Data-Aware), an efficient inference approach tailored for large MoE models. SiDA judiciously exploits both the system's main memory, which is now abundant and readily scalable, and GPU memory, by capitalizing on the inherent sparsity of expert activation in MoE models. By adopting a data-aware perspective, SiDA achieves enhanced model efficiency with a negligible performance drop. Specifically, SiDA attains a remarkable speedup in MoE inference, with up to a 3.93x throughput increase, up to 75% latency reduction, and up to 80% GPU memory saving, with a performance drop as small as 1%. This work paves the way for scalable and efficient deployment of large MoE models, even in memory-constrained systems.
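SiDA's central trick — keeping only the experts likely to activate in GPU memory and parking the dormant ones in abundant main memory — can be illustrated with a toy cache. The class below is a simplification: it uses a plain LRU eviction policy and Python strings in place of SiDA's data-aware activation predictor and real expert weights.

```python
from collections import OrderedDict

class ExpertCache:
    """Keep only recently used experts in (simulated) GPU memory; park
    the rest in host RAM. Illustrative only: a data-aware predictor is
    replaced here by a simple LRU policy."""

    def __init__(self, capacity):
        self.capacity = capacity   # number of experts that fit on the GPU
        self.gpu = OrderedDict()   # expert_id -> weights "on GPU"
        self.host = {}             # expert_id -> weights in main memory

    def register(self, expert_id, weights):
        self.host[expert_id] = weights

    def fetch(self, expert_id):
        if expert_id in self.gpu:              # hit: already resident
            self.gpu.move_to_end(expert_id)
            return self.gpu[expert_id]
        if len(self.gpu) >= self.capacity:     # evict the coldest expert
            self.gpu.popitem(last=False)
        self.gpu[expert_id] = self.host[expert_id]
        return self.gpu[expert_id]

cache = ExpertCache(capacity=2)
for eid in range(4):
    cache.register(eid, f"weights-{eid}")

cache.fetch(0); cache.fetch(1); cache.fetch(0); cache.fetch(2)
# expert 1 (least recently used) was evicted; experts 0 and 2 are resident
```

The GPU memory saving comes from `capacity` being far smaller than the total expert count; the speedup in the real system comes from predicting, per input, which experts to prefetch before they are needed.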
A fused biometrics information graph convolutional neural network for effective classification of patellofemoral pain syndrome
Patellofemoral pain syndrome (PFPS) is a common, yet poorly understood, knee pathology. Early, accurate diagnosis can help avoid deterioration of the disease. However, existing intelligent auxiliary diagnosis methods for PFPS have mainly focused on the biosignals of individuals but neglected the common biometrics of patients. In this paper, we propose a PFPS classification method based on a fused biometrics information Graph Convolutional Neural Network (FBI-GCN), which focuses on both the biosignal information of individuals and the common characteristics of patients. The method first constructs a graph which uses each subject as a node and fuses the biometrics information (demographics and gait biosignals) of different subjects as edges. Then, the graph and node information [biosignal information, including joint kinematics and surface electromyography (sEMG)] are used as the inputs to the GCN for diagnosis and classification of PFPS. The method is tested on a public dataset which contains walking and running data from 26 PFPS patients and 15 pain-free controls. The results suggest that our method can distinguish PFPS from pain-free controls with higher accuracy (mean accuracy = 0.8531 ± 0.047) than other methods that use only the biosignal information of individuals as input (mean accuracy = 0.813 ± 0.048). After optimal selection of input variables, the highest classification accuracy (mean accuracy = 0.9245 ± 0.034) can be obtained, and a high accuracy can still be obtained with a 40% reduction in test variables (mean accuracy = 0.8802 ± 0.035). Accordingly, the method effectively reflects the association between subjects, provides a simple and effective aid for physicians to diagnose PFPS, and gives new ideas for studying and validating risk factors related to PFPS.
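The subject graph described above — nodes are subjects, edges encode shared biometrics — can be sketched by thresholding the distance between standardized demographic vectors. The features, threshold, and similarity rule here are illustrative assumptions, not the paper's exact edge construction:

```python
import numpy as np

def build_subject_graph(demographics, threshold=1.0):
    """Connect two subjects whenever their standardized demographic
    vectors lie within a distance threshold; the resulting adjacency
    matrix plays the role of the edges fed to a GCN."""
    z = (demographics - demographics.mean(axis=0)) / demographics.std(axis=0)
    n = len(z)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(z[i] - z[j]) < threshold:
                adj[i, j] = adj[j, i] = 1.0
    return adj

# Four hypothetical subjects described by (age, weight): two similar pairs.
demographics = np.array([[20.0, 60.0], [21.0, 61.0],
                         [40.0, 90.0], [41.0, 91.0]])
adj = build_subject_graph(demographics)
```

Each subject's node would then carry its own gait biosignals (kinematics, sEMG), while the adjacency matrix lets the GCN propagate information between demographically similar subjects.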
Event-stream representation for human gaits identification using deep neural networks
Dynamic vision sensors (event cameras) have recently been introduced to solve a number of different vision tasks, such as object recognition, activity recognition, and tracking. Compared with traditional RGB sensors, event cameras have many unique advantages, such as ultra-low resource consumption, high temporal resolution, and a much larger dynamic range. However, these cameras only produce noisy and asynchronous events of intensity changes, i.e., event-streams rather than frames, to which conventional computer vision algorithms can't be directly applied. We hold the opinion that the key challenge in improving the performance of event cameras on vision tasks is finding appropriate representations of the event-streams, so that cutting-edge learning approaches can be applied to fully uncover the spatio-temporal information they contain. In this paper, we focus on the event-based human gait identification task and investigate possible representations of the event-streams when deep neural networks are applied as the classifier. We propose new event-based gait recognition approaches based on two different representations of the event-stream, i.e., graph and image-like representations, and use a Graph-based Convolutional Network (GCN) and Convolutional Neural Networks (CNNs), respectively, to recognize gait from the event-streams.
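Of the two representations studied, the image-like one is the simplest to illustrate: asynchronous (x, y, t, polarity) events are accumulated into a 2D count map that a standard CNN can consume. This toy version ignores timestamps and polarity, which a real representation would likely encode in extra channels:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate an asynchronous event-stream into an image-like 2D
    count map; each event is an (x, y, t, polarity) tuple."""
    frame = np.zeros((height, width))
    for x, y, _t, _p in events:
        frame[y, x] += 1
    return frame

# A toy stream: three events at one pixel, one at another.
events = [(2, 1, 0.01, 1), (2, 1, 0.02, -1), (2, 1, 0.05, 1), (0, 0, 0.03, 1)]
frame = events_to_frame(events, height=4, width=4)
```

The graph representation instead treats (sub-sampled) events as nodes connected by spatio-temporal proximity, which is what makes a GCN applicable.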