33 research outputs found

    Posture Recognition Using the Interdistances Between Wearable Devices

    Recognition of a user's postures and activities is particularly important, as it allows applications to customize their operation according to the current situation. The vast majority of available solutions are based on wearable devices equipped with accelerometers and gyroscopes. In this article, a different approach is explored: the posture of the user is inferred from the interdistances between the set of devices worn by the user. Interdistances are first measured using ultra-wideband transceivers operating in two-way ranging mode and then provided as input to a classifier that estimates the current posture. An experimental evaluation shows that the proposed method is effective (up to ∼98.2% accuracy), especially when using a personalized model. The method could be used to enhance the accuracy of activity recognition systems based on inertial sensors.
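    The two steps named in the abstract, two-way ranging and classification over pairwise interdistances, can be sketched roughly as follows (a minimal illustration, not the article's implementation; function names and the feature layout are assumptions):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_round, t_reply):
    """Estimate a device interdistance from two-way ranging timestamps.

    t_round: time from sending the request to receiving the reply (s)
    t_reply: processing delay at the responding device (s)
    """
    tof = (t_round - t_reply) / 2.0  # one-way time of flight
    return C * tof

def interdistance_vector(positions):
    """Pairwise distances between N device positions (N x 3 array);
    this vector would be the classifier's input describing a posture."""
    n = len(positions)
    return np.array([np.linalg.norm(positions[i] - positions[j])
                     for i in range(n) for j in range(i + 1, n)])
```

The interdistance vector is invariant to the wearer's absolute position and orientation, which is what makes it a plausible posture descriptor.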

    Unstructured Handwashing Recognition using Smartwatch to Reduce Contact Transmission of Pathogens

    Current guidelines from the World Health Organization indicate that the SARS-CoV-2 coronavirus, which causes the novel coronavirus disease (COVID-19), is transmitted through respiratory droplets or by contact. Contact transmission occurs when contaminated hands touch the mucous membrane of the mouth, nose, or eyes, so hand hygiene is extremely important to prevent the spread of SARS-CoV-2 as well as of other pathogens. The vast proliferation of wearable devices such as smartwatches, containing acceleration, rotation, and magnetic field sensors, together with modern artificial intelligence techniques such as machine learning and, more recently, deep learning, allows the development of accurate applications for the recognition and classification of human activities such as walking, climbing stairs, running, clapping, sitting, and sleeping. In this work, we evaluate the feasibility of a machine learning based system which, starting from inertial signals collected from wearable devices such as current smartwatches, recognizes when a subject is washing or rubbing their hands. Preliminary results, obtained over two different datasets, show a classification accuracy of about 95% for deep learning techniques and about 94% for standard machine learning techniques.
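    The front end of such a system typically windows the inertial stream and extracts per-window features for a classifier. A minimal sketch (window length, overlap, and the mean/std features are illustrative assumptions, not the paper's design):

```python
import numpy as np

def window_features(acc, win=128, step=64):
    """Split a (T, 3) accelerometer stream into overlapping windows and
    extract simple statistical features (mean and std per axis).

    Returns an (n_windows, 6) array; each row would be fed to a standard
    classifier, or the raw windows to a deep network.
    """
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)
```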

    Continuous human motion recognition with a dynamic range-Doppler trajectory method based on FMCW radar

    Radar-based human motion recognition is crucial for many applications, such as surveillance, search and rescue operations, smart homes, and assisted living. Continuous human motion recognition in real-life environments is necessary for practical deployment, i.e., classification of a sequence of activities transitioning one into another rather than of individual activities. In this paper, a novel dynamic range-Doppler trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system is proposed to recognize continuous human motions under various conditions emulating real-life environments. This method can separate continuous motions and process them as single events. First, range-Doppler frames consisting of a series of range-Doppler maps are obtained from the backscattered signals. Next, the DRDT is extracted from these frames to monitor human motions in the time, range, and Doppler domains in real time. Then, a peak search method is applied to locate and separate each human motion on the DRDT map. Finally, range, Doppler, radar cross section (RCS), and dispersion features are extracted and combined in a multidomain fusion approach as inputs to a machine learning classifier. This achieves accurate and robust recognition even under varying conditions of distance, view angle, direction, and individual diversity. Extensive experiments have been conducted to show its feasibility and superiority, obtaining an average accuracy of 91.9% on continuous classification.
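    The first stage, forming a range-Doppler map from backscattered FMCW signals, is conventionally a two-dimensional FFT over the deramped beat-signal matrix: a fast-time FFT per chirp gives range, and a slow-time FFT across chirps gives Doppler. A generic sketch (not the paper's exact processing chain, which adds the DRDT extraction on top):

```python
import numpy as np

def range_doppler_map(beat_signal):
    """Compute a range-Doppler magnitude map from FMCW beat signals.

    beat_signal: complex array of shape (n_chirps, n_samples).
    Fast-time FFT (axis 1) resolves range; slow-time FFT (axis 0)
    resolves Doppler, shifted so zero Doppler sits in the center.
    """
    range_fft = np.fft.fft(beat_signal, axis=1)                   # range bins
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)   # Doppler bins
    return np.abs(rd)
```

A sequence of such maps over time forms the range-Doppler frames from which the DRDT is extracted.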

    Two-Dimensional Principal Component Analysis and Its Extensions


    An Effective Approach for Human Activity Classification Using Feature Fusion and Machine Learning Methods

    Recent advances in image processing and machine learning methods have greatly enhanced the ability to classify objects in images and videos across different applications. Classification of human activities is one of the emerging research areas in the field of computer vision. It can be used in several applications, including medical informatics, surveillance, human-computer interaction, and task monitoring. In the medical and healthcare field, the classification of patients' activities is important for providing doctors and physicians with the information required for diagnosis and for monitoring medication reactions. Several approaches for recognizing human activity from videos and images have been proposed using machine learning (ML) and soft computing algorithms. However, advanced computer vision methods are still considered a promising direction for developing human activity classification from a sequence of video frames. This paper proposes an effective automated approach using feature fusion and ML methods. It consists of five steps: preprocessing, feature extraction, feature selection, feature fusion, and classification. Two publicly available benchmark datasets are utilized to train, validate, and test the ML classifiers of the developed approach. The experimental results of this research work show that the accuracies achieved are 99.5% and 99.9% on the first and second datasets, respectively. Compared with many existing related approaches, the proposed approach attained high performance in terms of the sensitivity, accuracy, precision, and specificity evaluation metrics.
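    In pipelines like the five-step one described, feature fusion commonly means concatenating the selected per-modality feature vectors into one input for the classifier. A hedged sketch (the function and the boolean selection masks are illustrative, not taken from the paper):

```python
import numpy as np

def fuse_features(feature_sets, selected=None):
    """Concatenate per-modality feature vectors into one fused vector.

    feature_sets: list of 1-D arrays (e.g. shape, texture, motion features)
    selected: optional list of boolean masks, one per modality, applying
              the feature-selection step before fusion
    """
    if selected is not None:
        feature_sets = [f[m] for f, m in zip(feature_sets, selected)]
    return np.concatenate(feature_sets)
```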

    Monitoring environmental supporting conditions of a raised bog using remote sensing techniques

    Conventional methods of monitoring wetlands and detecting changes over time can be time-consuming and costly. The inaccessibility and remoteness of many wetlands is also a limiting factor. Hence, there is growing recognition of remote sensing techniques as a viable and cost-effective alternative to field-based ecosystem monitoring. Wetlands encompass a diverse array of habitats, for example fens, bogs, marshes, and swamps. In this study, we concentrate on a natural wetland, Clara Bog, Co. Offaly, a raised bog situated in the Irish midlands. The aim of the study is to identify and monitor the environmental conditions of the bog using remote sensing techniques. Environmental conditions in this study refer to the vegetation composition of the bog and whether it is in an intact (peat-forming) or degraded state; they can be described using vegetation, the presence of water (soil moisture), and topography. Vegetation indices (VIs) derived from satellite data have been widely used to assess variations in vegetation properties. This study uses mid-resolution data from Sentinel-2 MSI and Landsat 8 OLI for VI analysis. An initial study to delineate the boundary of the bog using a combination of edge detection and segmentation techniques, namely entropy filtering, Canny edge detection, and graph-cut segmentation, is performed. Once the bog boundary is defined, spectra of the delineated area are studied. VIs such as NDVI, ARVI, SAVI, and NDWI, derived from Sentinel-2 MSI and Landsat 8 OLI, are analysed. A digital elevation model (DEM) was also used for better classification. All of these characteristics (features) serve as a basis for classifying the bog into broad vegetation communities (termed ecotopes) that indicate the quality of raised bog habitat. This analysis is validated using field-derived ecotopes. The results show that, by using spectral information and vegetation index clustering, an additional linkage can be established between spectral RS signatures and wetland ecotopes. Hence, the benefit of the study is in understanding ecosystem (bog) environmental conditions and in defining appropriate metrics by which changes in those conditions can be monitored.
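    Two of the indices named in the abstract have standard band-ratio definitions; a generic sketch using the usual formulas (the small epsilon guards against division by zero and is an implementation choice, not part of the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-12):
    """Normalized Difference Water Index (McFeeters) from green and NIR."""
    return (green - nir) / (green + nir + eps)
```

Both accept scalars or full reflectance arrays (e.g. Sentinel-2 band rasters) thanks to NumPy broadcasting; values near +1 indicate dense vegetation for NDVI and open water for NDWI.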

    Sensing Your Touch: Strengthen User Authentication via Touch Dynamic Biometrics

    © 2019 IEEE. Mobile devices are increasingly used to store private and sensitive data, and this has led to an increased demand for more secure and usable authentication services. Currently, mobile device authentication services mainly use a knowledge-based method, e.g. a PIN-based authentication method, and, in some cases, a fingerprint-based authentication method is also supported. The knowledge-based method is vulnerable to impersonation attacks, while the fingerprint-based method can sometimes be unreliable. To make the authentication service more secure and reliable for mobile device users, this paper describes our efforts in investigating the benefits of integrating a touch dynamics authentication method into a PIN-based authentication method. It describes the design, implementation, and evaluation of this method. Experimental results show that this approach can significantly reduce the success rate of impersonation attempts; in the case of a 4-digit PIN, the success rate is reduced from 100% (if only the PIN is used) to 9.9% (if both the PIN and touch dynamics are used).
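    The integration investigated here, requiring both a correct PIN and a touch-dynamics match against an enrolled template, can be sketched generically (the feature set, distance metric, and threshold are illustrative assumptions, not the paper's design):

```python
import numpy as np

def authenticate(pin, entered_pin, touch_features, template, threshold=2.5):
    """Two-factor check: the PIN must match AND the touch-dynamics feature
    vector (e.g. key-press durations, inter-key intervals, pressure) must
    be close enough to the template recorded at enrollment."""
    if entered_pin != pin:
        return False
    dist = np.linalg.norm(np.asarray(touch_features, dtype=float)
                          - np.asarray(template, dtype=float))
    return bool(dist <= threshold)
```

An impersonator who learns the PIN still fails unless their touch rhythm also falls within the threshold, which is the effect behind the drop from 100% to 9.9% reported above.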