
    Detecting head movement using gyroscope data collected via in-ear wearables

    Head movement is considered an effective, natural, and simple way to indicate pointing towards an object. Head movement detection technology has significant potential in diverse fields of application, and studies in this field support that claim. Applications include user interaction with computers, external control of devices, power wheelchair operation, detecting driver drowsiness, video surveillance systems, and many more. Because the applications are so diverse, the methods for detecting head movement are also wide-ranging. Approaches based on acoustics, video, computer vision, and inertial sensor data have been introduced by researchers over the years. To generate inertial sensor data, various types of wearables are available, for example wrist bands, smart watches, and head-mounted devices. For this thesis, eSense, a representative earable device with a built-in inertial sensor that generates gyroscope data, is employed. The eSense device is a True Wireless Stereo (TWS) earbud augmented with key equipment such as a 6-axis inertial motion unit, a microphone, and dual-mode Bluetooth (Bluetooth Classic and Bluetooth Low Energy). Features are extracted from the gyroscope data collected via the eSense device. Subsequently, four machine learning models, Random Forest (RF), Support Vector Machine (SVM), Naïve Bayes, and Perceptron, are applied to detect head movement. The performance of these models is evaluated with four evaluation metrics: Accuracy, Precision, Recall, and F1 score. The results show that the machine learning models applied in this thesis are able to detect head movement. Comparing the models, Random Forest performs best, detecting head movement with approximately 77% accuracy. The accuracies of the other three models, Support Vector Machine, Naïve Bayes, and Perceptron, are close to each other, at about 42%, 40%, and 39%, respectively. The Precision, Recall, and F1 scores further confirm that these models can distinguish head directions such as left, right, or straight.
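
    As a rough illustration of the pipeline described above, the following sketch extracts simple summary-statistic features from fixed-length gyroscope windows and compares the four classifiers with the four metrics. The window segmentation, feature set, and label names are assumptions made for illustration, not the thesis's actual configuration.

        # Sketch only: feature extraction and model comparison for head movement detection.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.naive_bayes import GaussianNB
        from sklearn.linear_model import Perceptron
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, precision_recall_fscore_support

        def extract_features(window):
            """Summary statistics per gyroscope axis for one (N x 3) window."""
            return np.concatenate([window.mean(axis=0), window.std(axis=0),
                                   window.min(axis=0), window.max(axis=0)])

        def evaluate_models(gyro_windows, labels):
            """gyro_windows: list of (N x 3) arrays; labels: e.g. 'left' / 'right' / 'straight'."""
            X = np.vstack([extract_features(w) for w in gyro_windows])
            y = np.asarray(labels)
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
            models = {"RF": RandomForestClassifier(), "SVM": SVC(),
                      "NB": GaussianNB(), "Perceptron": Perceptron()}
            for name, model in models.items():
                y_pred = model.fit(X_tr, y_tr).predict(X_te)
                acc = accuracy_score(y_te, y_pred)
                p, r, f1, _ = precision_recall_fscore_support(
                    y_te, y_pred, average="macro", zero_division=0)
                print(f"{name}: acc={acc:.2f} P={p:.2f} R={r:.2f} F1={f1:.2f}")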

    A Two-Level Approach to Characterizing Human Activities from Wearable Sensor Data

    The rapid emergence of new technologies in recent decades has opened up a world of opportunities for a better understanding of human mobility and behavior. It is now possible to recognize human movements, physical activity, and the environments in which they take place, and to do so with high precision thanks to miniature sensors integrated into our everyday devices. In this paper, we explore different methodologies for recognizing and characterizing physical activities performed by people wearing new smart devices. Whether it is smartglasses, smartwatches, or smartphones, we show that each of these specialized wearables has a role to play in interpreting and monitoring moments in a user's life. In particular, we propose an approach that splits the concept of physical activity into two sub-categories that we call micro- and macro-activities. Micro- and macro-activities are assumed to have a functional relationship with each other and should therefore help to better understand activities on a larger scale. Then, for each of these levels, we show different methods of collecting, interpreting, and evaluating data from different sensor sources. Based on a sensing system we have developed using smart devices, we build two data sets before analyzing how to recognize such activities. Finally, we show different interactions and combinations between these scales and demonstrate that they have the potential to lead to new classes of applications, involving authentication or user profiling.
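
    One plausible reading of the micro/macro split is sketched below: short windows are first labelled with micro-activities, and the normalized histogram of those labels over a longer span is then used to predict the macro-activity. The window sizes, classifiers, and integer-coded labels are illustrative assumptions, not the paper's published setup.

        # Sketch only: two-level activity recognition via micro-activity histograms.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        micro_clf = RandomForestClassifier()   # short-window micro-activity model
        macro_clf = RandomForestClassifier()   # long-window macro-activity model

        def micro_histogram(micro_labels, n_micro_classes):
            """Normalized frequency of each (integer-coded) micro-activity in one macro window."""
            counts = np.bincount(micro_labels, minlength=n_micro_classes)
            return counts / max(counts.sum(), 1)

        def fit(micro_X, micro_y, macro_windows, macro_y, n_micro_classes):
            micro_clf.fit(micro_X, micro_y)
            macro_X = np.vstack([micro_histogram(micro_clf.predict(w), n_micro_classes)
                                 for w in macro_windows])
            macro_clf.fit(macro_X, macro_y)

        def predict_macro(macro_window, n_micro_classes):
            hist = micro_histogram(micro_clf.predict(macro_window), n_micro_classes)
            return macro_clf.predict(hist.reshape(1, -1))[0]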

    Enhancing Usability, Security, and Performance in Mobile Computing

    We have witnessed the prevalence of smart devices in every aspect of human life. However, the ever-growing number of smart devices presents significant challenges in terms of usability, security, and performance. First, we need to design new interfaces to improve device usability, which has been neglected during the rapid shift from hand-held mobile devices to wearables. Second, we need to protect smart devices holding abundant private data against unauthorized users. Last, new applications with compute-intensive tasks demand the integration of emerging mobile backend infrastructure. This dissertation focuses on addressing these challenges. First, we present GlassGesture, a system that improves the usability of Google Glass through a head-gesture user interface with gesture recognition and authentication. We accelerate recognition by employing a novel similarity search scheme, and improve authentication performance by applying new head-movement features in an ensemble learning method. As a result, GlassGesture achieves 96% gesture recognition accuracy. Furthermore, GlassGesture accepts authorized users in nearly 92% of trials and rejects attackers in nearly 99% of trials. Next, we investigate authentication between a smartphone and a paired smartwatch. We design and implement WearLock, a system that uses one's smartwatch to unlock one's smartphone via acoustic tones. We build an acoustic modem with sub-channel selection and adaptive modulation, which generates modulated acoustic signals to maximize the unlocking success rate against ambient noise. We leverage the motion similarities of the devices to eliminate unnecessary unlocking, and we offload heavy computation tasks from the smartwatch to the smartphone to shorten response time and save energy. The acoustic modem achieves a low bit error rate (BER) of 8%. Compared to traditional manual personal identification number (PIN) entry, WearLock not only automates the unlocking but also speeds it up by at least 18%. Last, we consider low-latency video analytics on mobile devices, leveraging emerging mobile backend infrastructure. We design and implement LAVEA, a system that offloads computation from mobile clients to edge nodes to accomplish computation-intensive tasks closer to users in a timely manner. We formulate an optimization problem for offloading task selection and prioritize offloading requests received at the edge node to minimize the response time. We design and compare various task placement schemes for inter-edge collaboration to further improve the overall response time. Our results show that the client-edge configuration achieves a speedup ranging from 1.3x to 4x over running solely on the client and 1.2x to 1.7x over the client-cloud configuration.
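
    The client-edge offloading idea mentioned above can be illustrated with a simplified latency model: a task is shipped to the edge node only when the estimated transfer-plus-edge time beats local execution. The dissertation formulates this as an optimization problem; the cost model and constants below are illustrative assumptions only.

        # Sketch only: simplified offload-or-run-locally decision for a video analytics task.
        from dataclasses import dataclass

        @dataclass
        class Task:
            input_bytes: int   # size of the frame or segment to ship to the edge
            cycles: float      # estimated compute demand of the task

        def local_latency(task, client_cps):
            return task.cycles / client_cps

        def edge_latency(task, uplink_bps, edge_cps, queue_delay_s):
            transfer = task.input_bytes * 8 / uplink_bps
            return transfer + queue_delay_s + task.cycles / edge_cps

        def should_offload(task, client_cps=1e9, uplink_bps=20e6,
                           edge_cps=8e9, queue_delay_s=0.02):
            return (edge_latency(task, uplink_bps, edge_cps, queue_delay_s)
                    < local_latency(task, client_cps))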

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart-glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Hand Input Gesture Recognition Using ML-based Pose Estimation and Optical Flow Processing

    Current techniques for detecting hand pose have high computational cost and suffer from accuracy and speed limitations, which make them less than satisfactory for some high-precision and time-sensitive applications. This disclosure describes the combined use of machine-learning-based pose estimation and optical flow processing for hand gesture recognition, e.g., in AR and VR applications. Machine learning (ML) based pose estimation is used to identify the hand pose and the areas of interest that are then processed further using optical flow. Optical flow processing enables higher precision in detecting small movements as well as good low-light performance. ML-based pose estimation reduces the solution space by reducing the size of the image that is subjected to optical flow processing, which in turn reduces the computational load and time required for optical flow processing.
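
    A minimal sketch of this two-stage idea follows: an ML pose estimator first localizes the hand, and dense optical flow is then computed only inside that region of interest. The detect_hand_bbox function is a hypothetical placeholder for whatever pose-estimation model is used; the OpenCV Farneback call is the only real API assumed here.

        # Sketch only: ML pose estimation to crop the hand, then optical flow on the crop.
        import cv2

        def detect_hand_bbox(frame):
            """Hypothetical placeholder: return (x, y, w, h) of the hand from an ML pose estimator."""
            raise NotImplementedError

        def hand_motion(prev_frame, next_frame):
            x, y, w, h = detect_hand_bbox(prev_frame)
            prev_roi = cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
            next_roi = cv2.cvtColor(next_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
            # Dense Farneback flow on the small ROI keeps the computation cheap.
            flow = cv2.calcOpticalFlowFarneback(prev_roi, next_roi, None,
                                                pyr_scale=0.5, levels=3, winsize=15,
                                                iterations=3, poly_n=5, poly_sigma=1.2,
                                                flags=0)
            dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
            return dx, dy   # average motion vector of the hand region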

    SCLAiR : Supervised Contrastive Learning for User and Device Independent Airwriting Recognition

    Airwriting recognition is the problem of identifying letters written in free space with finger movement. It is essentially a specialized case of gesture recognition in which the vocabulary of gestures corresponds to the letters of a particular language. With the wide adoption of smart wearables in the general population, airwriting recognition using motion sensors from a smart band can serve as a medium of user input for Human-Computer Interaction applications. There has been limited work on recognizing in-air trajectories using motion sensors, and the performance of these techniques when the device used to record the signals is changed has not been explored hitherto. Motivated by this, a new paradigm for device- and user-independent airwriting recognition based on supervised contrastive learning is proposed. A two-stage classification strategy is employed: the first stage trains an encoder network with a supervised contrastive loss, and in the subsequent stage a classification head is trained with the encoder weights kept frozen. The efficacy of the proposed method is demonstrated through experiments on a publicly available dataset and on a dataset recorded in our lab using a different device. Experiments have been performed in both supervised and unsupervised settings and compared against several state-of-the-art domain adaptation techniques. Data and the code for our implementation will be made available at https://github.com/ayushayt/SCLAiR.
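
    The two-stage strategy can be sketched as below: an encoder trained with a supervised contrastive loss, followed by a linear head trained on top of the frozen encoder. The encoder architecture, input shape, temperature, and optimizers are assumptions rather than the authors' released configuration, which is available at the repository above.

        # Sketch only: supervised contrastive pre-training, then a frozen-encoder classifier.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def supcon_loss(features, labels, temperature=0.1):
            """Supervised contrastive loss over a batch of embeddings (B x D)."""
            z = F.normalize(features, dim=1)
            sim = z @ z.T / temperature                        # pairwise similarities
            self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
            sim = sim.masked_fill(self_mask, -1e9)             # exclude self-comparisons
            log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
            pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
            pos_per_anchor = pos_mask.sum(1).clamp(min=1)
            return -((log_prob * pos_mask).sum(1) / pos_per_anchor).mean()

        # Assumed input: 6-channel IMU windows of 128 samples, flattened; 26 letter classes.
        encoder = nn.Sequential(nn.Flatten(), nn.Linear(6 * 128, 256),
                                nn.ReLU(), nn.Linear(256, 128))
        head = nn.Linear(128, 26)

        def train_stage1(batches, epochs=10):                  # batches yield (x, y) tensors
            opt = torch.optim.Adam(encoder.parameters())
            for _ in range(epochs):
                for x, y in batches:
                    loss = supcon_loss(encoder(x), y)
                    opt.zero_grad(); loss.backward(); opt.step()

        def train_stage2(batches, epochs=10):
            for p in encoder.parameters():                     # freeze the encoder
                p.requires_grad_(False)
            opt = torch.optim.Adam(head.parameters())
            for _ in range(epochs):
                for x, y in batches:
                    loss = F.cross_entropy(head(encoder(x)), y)
                    opt.zero_grad(); loss.backward(); opt.step()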