522 research outputs found

    Fitness Activity Recognition on Smartphones Using Doppler Measurements

    Quantified Self has seen increased interest in recent years, with devices such as smartwatches, smartphones, and other wearables that allow users to monitor their fitness level. These are often combined with mobile apps that use gamification to motivate users to perform fitness activities or to increase the amount of exercise. Thus far, most applications rely on accelerometers or gyroscopes integrated into the devices, which have to be worn on the body to track activities. In this work, we investigated the use of a speaker and a microphone integrated into a smartphone to track exercises performed close to it. We combined active sonar and Doppler signal analysis in the ultrasound spectrum, which is not perceivable by humans. We measured the body-weight exercises bicycles, toe touches, and squats, as these consist of challenging radial movements towards the measuring device. We tested several classification methods, ranging from support vector machines to convolutional neural networks, and achieved an accuracy of 88% for bicycles, 97% for toe touches, and 91% for squats on our test set.
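    The sensing principle behind this abstract can be sketched briefly: emit an inaudible tone and estimate the Doppler shift of its reflection with an FFT. This is an illustrative sketch, not the authors' code; the carrier frequency, frame length, and search band are assumptions.

```python
import numpy as np

FS = 48_000        # sampling rate (Hz), typical for phone audio
F_TX = 20_000      # inaudible carrier emitted by the speaker (Hz)
N = 4_800          # frame length: 0.1 s, i.e. 10 Hz frequency resolution

def doppler_offset(frame, fs=FS, f_tx=F_TX):
    """Return the dominant frequency offset (Hz) of the echo in one
    frame relative to the transmitted carrier."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    band = (freqs > f_tx - 500) & (freqs < f_tx + 500)  # echoes near carrier
    return freqs[band][np.argmax(spectrum[band])] - f_tx

# synthetic echo: a limb moving toward the phone at ~0.4 m/s shifts the
# carrier by roughly 2*v/c * f_tx ≈ 50 Hz
t = np.arange(N) / FS
echo = np.sin(2 * np.pi * (F_TX + 50) * t)
print(round(doppler_offset(echo)))   # → 50
```

    A sequence of such per-frame offsets, positive when the limb approaches and negative when it retreats, is the kind of feature an SVM or CNN can then classify into exercise types.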

    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Improving lifestyle, encouraging healthy behaviours, and reducing chronic disease are urgently required, yet tracking and monitoring critical conditions of elderly people and patients remains a great challenge. Healthcare services for these people are crucial in order to achieve a high level of safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and patients. The main aim of this review is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
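    The recognition chain this review describes (sensing → segmentation → feature extraction → classification) can be sketched end to end. The window sizes, time-domain features, nearest-centroid classifier, and synthetic signals below are illustrative assumptions, not methods taken from the review.

```python
import numpy as np

def sliding_windows(signal, win=128, step=64):
    """Segmentation: split a 1-D accelerometer stream into overlapping windows."""
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def extract_features(window):
    """Feature extraction: simple time-domain statistics per window."""
    return np.array([window.mean(), window.std(),
                     np.abs(np.diff(window)).mean()])

# toy data: "still" (low variance) vs "walking" (oscillating) accelerometer traces
rng = np.random.default_rng(0)
still = rng.normal(1.0, 0.01, 1024)
walk = 1.0 + np.sin(np.linspace(0, 60 * np.pi, 1024)) + rng.normal(0, 0.05, 1024)

# classification: nearest centroid in the feature space
feats = {lab: np.mean([extract_features(w) for w in sliding_windows(sig)], axis=0)
         for lab, sig in [("still", still), ("walking", walk)]}

def classify(window):
    f = extract_features(window)
    return min(feats, key=lambda lab: np.linalg.norm(f - feats[lab]))

print(classify(walk[:128]))   # → walking
```

    Real systems replace each stage with heavier machinery (frequency-domain features, deep classifiers), but the chain structure stays the same.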

    Intelligent ultrasound hand gesture recognition system

    With the booming development of technology, hand gesture recognition has become a hotspot in Human-Computer Interaction (HCI) systems. Ultrasound hand gesture recognition is an innovative method that has attracted ample interest due to its strong real-time performance, low cost, large field of view, and illumination independence. Well-investigated HCI applications include external digital pens, game controllers on smart mobile devices, and web browser control on laptops. This thesis probes gesture recognition systems on multiple platforms to study the behavior of system performance with various gesture features. Focused on this topic, the contributions of this thesis can be summarized from the perspectives of smartphone acoustic field and hand model simulation, real-time gesture recognition on smart devices with a speed categorization algorithm, fast-reaction gesture recognition based on temporal neural networks, and an angle-of-arrival-based gesture recognition system. Firstly, a novel pressure-acoustic simulation model is developed to examine its potential for use in acoustic gesture recognition. The simulation model establishes a new system for acoustic verification, which uses simulations mimicking real-world sound elements to replicate a sound pressure environment as authentically as possible. This system is fine-tuned through sensitivity tests within the simulation and validated with real-world measurements. Following this, the study constructs novel simulations for acoustic applications, informed by the verified acoustic field distribution, to assess their effectiveness in specific devices. Furthermore, a simulation focused on understanding the effects of the placement of sound devices and hand-reflected sound waves is designed. Moreover, a feasibility test on phase control modification is conducted, revealing the practical applications and boundaries of this model.
Mobility and system accuracy are two significant factors that determine gesture recognition performance. As smartphones have high-quality acoustic devices suitable for gesture recognition, novel algorithms were developed to distinguish gestures using smartphone built-in speakers and microphones, yielding a portable gesture recognition system with high accuracy. The proposed system adopts the Short-Time Fourier Transform (STFT) and machine learning to capture hand movement and determines gestures with a pretrained neural network. To differentiate gesture speeds, a specific neural network was designed and set as part of the classification algorithm. The final system achieves 96% accuracy among nine gestures and three speed levels. The proposed algorithms were evaluated against comparable approaches, and their accuracy outperformed state-of-the-art systems. Furthermore, a fast-reaction gesture recognition system based on temporal neural networks was designed. Traditional ultrasound gesture recognition adopts convolutional neural networks that have flaws in terms of response time and discontinuous operation. In addition, overlap intervals in network processing cause cross-frame failures that greatly reduce system performance. To mitigate these problems, a novel fast-reaction gesture recognition system that slices signals into short time intervals was designed. The proposed system adopts a novel convolutional recurrent neural network (CRNN) that calculates gesture features over a short time and combines features over time. The results showed the reaction time was significantly reduced from 1 s to 0.2 s, and accuracy improved to 100% for six gestures. Lastly, an acoustic sensor array was built to investigate the angle information of performed gestures. The direction of a gesture is a significant feature for gesture classification, which enables the same gesture in different directions to represent different actions.
Previous studies mainly focused on types of gestures and analysis approaches (e.g., the Doppler effect and channel impulse response), while the direction of gestures was not extensively studied. An acoustic gesture recognition system based on both speed information and gesture direction was developed. The system achieved 94.9% accuracy among ten different gestures from two directions. The proposed system was evaluated against several neural network structures, and the results confirmed that incorporating additional angle information improved the system's performance. In summary, the work presented in this thesis validates the feasibility of recognizing hand gestures using remote ultrasonic sensing across multiple platforms. The acoustic simulation explores the smartphone acoustic field distribution and response results in the context of hand gesture recognition applications. The smartphone gesture recognition system demonstrates the accuracy of recognition through ultrasound signals and analyses classification speed. The fast-reaction system proposes a more optimized solution to the cross-frame issue using temporal neural networks, reducing the response latency to 0.2 s. The speed- and angle-based system provides an additional feature for gesture recognition. The established work will accelerate the development of intelligent hand gesture recognition, enrich the available gesture features, and contribute to further research in various gestures and application scenarios.
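    As an illustration of the front end such systems share (a generic sketch, not the thesis implementation; the frame length, hop size, and synthetic chirp are assumptions), the microphone stream is sliced into short overlapping frames whose spectra form the spectrogram that the CNN or CRNN then consumes:

```python
import numpy as np

FS = 48_000  # sampling rate (Hz)

def stft(signal, n_fft=512, hop=128):
    """Short-Time Fourier Transform: magnitude spectrogram (frames x bins)."""
    win = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * win
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# a hand moving toward the device raises the reflected pitch over time
t = np.arange(FS // 5) / FS                            # 0.2 s of audio
chirp = np.sin(2 * np.pi * (20_000 + 200 * t / t[-1]) * t)
spec = stft(chirp)
print(spec.shape)        # frames x (n_fft // 2 + 1) bins → (72, 257)
```

    Slicing into 0.2 s (or shorter) segments like this is what lets a temporal network emit a decision per slice instead of waiting for a whole gesture, which is the cross-frame problem the thesis addresses.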

    Device-Free Localization for Human Activity Monitoring

    Over the past few decades, human activity monitoring has attracted considerable research attention due to the growing demand for human-centric applications in healthcare and assisted living. For instance, human activity monitoring can be adopted in smart building systems to improve building management as well as quality of life, especially for elderly people facing health deterioration due to aging, without neglecting important aspects such as safety and energy consumption. Existing human monitoring technology requires additional sensors, such as GPS, PIR sensors, and video cameras, which incur cost and have several drawbacks. Various other technologies exist for human activity monitoring in a smartly controlled environment, either device-assisted or device-free. Radio frequency (RF)-based device-free indoor localization, known as device-free localization (DFL), has attracted substantial research effort in recent years due to its simplicity, low cost, and compatibility with existing hardware equipped with an RF interface. This chapter introduces the potential of RF signals, commonly adopted for wireless communications, as sensing tools for a DFL system in human activity monitoring. DFL is based on the concept of radio irregularity, whereby human presence in a wireless communication field may interfere with and change the wireless characteristics.
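    The sensing principle can be sketched in a few lines: a body near a radio link perturbs the received signal strength, so windows of elevated RSSI variance indicate presence. The window size, threshold, and synthetic link trace below are assumptions for illustration, not parameters from the chapter.

```python
import numpy as np

def detect_presence(rssi, win=20, threshold=2.0):
    """Device-free detection: flag windows whose RSSI standard deviation
    exceeds the empty-room baseline (a moving body perturbs the channel)."""
    rssi = np.asarray(rssi, dtype=float)
    return [rssi[i:i + win].std() > threshold
            for i in range(0, len(rssi) - win + 1, win)]

rng = np.random.default_rng(1)
empty = rng.normal(-60, 0.5, 40)        # stable link, no one present
crossing = rng.normal(-60, 0.5, 40) + rng.choice([-8, 0, 8], 40)  # person moves through

print(detect_presence(np.concatenate([empty, crossing])))
# → [False, False, True, True]
```

    Localization (rather than mere detection) then combines such per-link statistics across many transmitter-receiver pairs covering the monitored area.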

    Radio Based Device Activity Recognition

    Recognizing human activities in daily living enables the development and wide use of human-centric applications, such as health monitoring and assisted living. Traditional activity recognition methods typically rely on physical sensors (camera, accelerometer, gyroscope, etc.) to continuously collect sensor readings, and utilize pattern recognition algorithms to identify a user's activities at an aggregator. Although traditional activity recognition methods have been demonstrated to be effective in previous work, they raise concerns such as privacy, energy consumption, and deployment cost. In recent years, a new activity recognition approach, which takes advantage of body attenuation and/or channel fading of wireless radio, has been proposed. Compared with traditional activity recognition methods, radio-based methods utilize wireless transceivers in the environment as infrastructure and exploit radio communication characteristics to achieve high recognition accuracy, reduce energy cost, and preserve the user's privacy. In this paper, we divide radio-based methods into four categories: ZigBee radio-based activity recognition, WiFi radio-based activity recognition, RFID radio-based activity recognition, and other radio-based activity recognition. Existing work in each category is introduced and reviewed thoroughly. We then compare some representative methods to show their advantages and disadvantages. Finally, we point out future research directions for this new research topic.

    Latest research trends in gait analysis using wearable sensors and machine learning: a systematic review

    Gait is the locomotion attained through the movement of limbs, and gait analysis examines its patterns (normal/abnormal) depending on the gait cycle. It contributes to the development of various applications in the medical, security, sports, and fitness domains to improve overall outcomes. Among many available technologies, two emerging technologies play a central role in modern-day gait analysis: A) wearable sensors, which provide a convenient, efficient, and inexpensive way to collect data, and B) Machine Learning Methods (MLMs), which enable high-accuracy gait feature extraction for analysis. Given their prominent roles, this paper presents a review of the latest trends in gait analysis using wearable sensors and Machine Learning (ML). It surveys recent papers along with their publication details and key parameters such as sampling rates, MLMs, wearable sensors, number of sensors, and sensor locations. Furthermore, the paper provides recommendations for selecting an MLM, a wearable sensor, and its location for a specific application. Finally, it suggests some future directions for gait analysis and its applications.
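    As a toy example of the wearable-sensor feature extraction this review surveys (the sampling rate, peak threshold, and synthetic signal are assumptions, not values from any surveyed paper), stride times can be estimated from acceleration peaks marking successive gait cycles:

```python
import numpy as np

def gait_cycle_times(accel, fs=100, min_gap=0.4):
    """Estimate gait cycle durations by detecting acceleration peaks
    (e.g. heel strikes) at least `min_gap` seconds apart."""
    gap = int(min_gap * fs)
    peaks = [i for i in range(1, len(accel) - 1)
             if accel[i] > accel[i - 1] and accel[i] >= accel[i + 1]
             and accel[i] > accel.mean() + accel.std()]
    kept = []                      # enforce a refractory gap between peaks
    for p in peaks:
        if not kept or p - kept[-1] >= gap:
            kept.append(p)
    return np.diff(kept) / fs      # seconds per gait cycle

# synthetic 1 Hz walking signal sampled at 100 Hz
t = np.arange(0, 5, 0.01)
accel = np.sin(2 * np.pi * 1.0 * t)
print(gait_cycle_times(accel))    # → [1. 1. 1. 1.]
```

    Cycle durations and their variability are typical inputs to the MLMs the review compares for normal/abnormal gait classification.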

    An Investigation of Indoor Positioning Systems and their Applications

    Activities of Daily Living (ADL) are important indicators of both cognitive and physical well-being in healthy and ill humans. There is a range of methods to recognise ADLs, each with its own limitations. The focus of this research was on sensing location-driven activities, in which ADLs are derived from location sensed using Radio Frequency (RF, e.g., WiFi or BLE), Magnetic Field (MF), and light (e.g., Lidar) measurements in three different environments. This research discovered that different environments can have different constraints and requirements. It investigated how to improve the positioning accuracy and hence the ADL recognition accuracy. Several challenges need to be addressed in order to do this. First, RF location fingerprinting is affected by the heterogeneity of smartphones and their orientation with respect to transmitters, increasing the location determination error. To solve this, novel Received Signal Strength Indication (RSSI) ranking-based location fingerprinting methods that use the Kendall Tau Correlation Coefficient (KTCC) and Convolutional Neural Networks (CNN) are proposed to correlate a signal position to pre-defined Reference Points (RPs), or fingerprints, more accurately. The accuracy increased by up to 25.8% compared to the Euclidean Distance (ED)-based Weighted K-Nearest Neighbours (WKNN) algorithm. Second, the use of MF measurements as fingerprints can overcome some additional RF fingerprinting challenges, as MF measurements are far more invariant to the static and dynamic physical objects that affect RF transmissions. Hence, a novel fast path-matching algorithm for an MF sensor combined with an Inertial Measurement Unit (IMU) to determine direction was researched and developed. It achieves an average of 1.72 m positioning accuracy when the user walks as few as five steps. Third, a device-free or off-body novel location-driven ADL method based upon 2D Lidar was investigated.
An innovative method for recognising daily activities using a Seq2Seq model to analyse location data from a low-cost rotating 2D Lidar is proposed. It provides an accuracy of 88% when recognising 17 targeted ADLs. The methods proposed in this thesis have been validated in real environments.
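    The ranking idea behind the KTCC approach can be sketched briefly: because different phones report systematically offset RSSI values, matching the *ordering* of access points is more robust than matching raw values. The fingerprint database, access-point counts, and tie handling below are invented for illustration, not data or code from the thesis.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation of two equal-length RSSI vectors
    (ties counted as discordant, for brevity)."""
    pairs = list(combinations(range(len(a)), 2))
    s = sum(1 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1 for i, j in pairs)
    return s / len(pairs)

# hypothetical fingerprint database: reference point -> RSSI per access point
fingerprints = {
    "RP1": [-40, -55, -70, -80],
    "RP2": [-75, -50, -45, -85],
}

def locate(observed):
    """Pick the RP whose access-point ordering best agrees with the
    observation; rank matching tolerates device-dependent RSSI offsets."""
    return max(fingerprints, key=lambda rp: kendall_tau(observed, fingerprints[rp]))

# a different phone reports uniformly weaker RSSI, but the ranking survives
print(locate([-50, -63, -77, -90]))   # → RP1
```

    A raw-value matcher like ED-based WKNN would see every reading shifted by the device offset; the rank correlation is unchanged by any monotonic per-device distortion.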

    Smartphone as a Personal, Pervasive Health Informatics Services Platform: Literature Review

    Objectives: The article provides an overview of current trends in personal sensor, signal, and imaging informatics that are based on emerging mobile computing and communications technologies enclosed in a smartphone, enabling the provision of personal, pervasive health informatics services. Methods: The article reviews examples of these trends from the PubMed and Google Scholar literature search engines; the review by no means claims to be complete, as the field is evolving and some recent advances may not be documented yet. Results: There are critical technological advances in the surveyed smartphone technologies, employed in the provision and improvement of diagnosis, acute and chronic treatment, and rehabilitation health services, as well as in the education and training of healthcare practitioners. However, the most notable emerging trend is the routine application of these technologies in the prevention/wellness sector, helping users in self-care to stay healthy. Conclusions: Smartphone-based personal health informatics services exist, but still have a long way to go to become an everyday, personalized healthcare-provisioning tool in the medical field and in clinical practice. A key challenge to their widespread adoption is a lack of user acceptance, stemming from the variable credibility and reliability of applications and solutions, as they: a) lack an evidence-based approach; b) have low levels of medical professional involvement in their design and content; c) are provided in an unreliable way, negatively influencing their usability; and, in some cases, d) are industry-driven, hence exhibiting bias in the information provided, for example towards particular types of treatment or intervention procedures.