
    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users’ acceptance and compliance, compared with other sensor technologies, such as video-cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
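
    The fusion idea highlighted in this review lends itself to a short illustration. Below is a minimal sketch of score-level (late) fusion, assuming two hypothetical per-window fall probabilities, one from a radar-based detector and one from an RGB-D-based detector; the weights and threshold are illustrative assumptions, not values from the reviewed papers.

```python
# Minimal sketch of score-level (late) fusion of two fall detectors.
# p_radar and p_rgbd are hypothetical per-window fall probabilities;
# the weights and threshold are illustrative assumptions.

def fuse_scores(p_radar: float, p_rgbd: float,
                w_radar: float = 0.5, w_rgbd: float = 0.5) -> float:
    """Weighted average of the two detectors' fall probabilities."""
    return w_radar * p_radar + w_rgbd * p_rgbd

def detect_fall(p_radar: float, p_rgbd: float, threshold: float = 0.7) -> bool:
    """Declare a fall when the fused score clears the threshold."""
    return fuse_scores(p_radar, p_rgbd) >= threshold

# Example: the radar detector is confident, the RGB-D detector less so.
print(detect_fall(0.9, 0.6))  # True with the default weights and threshold
```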

    Data Processing for Device-Free Fine-Grained Occupancy Sensing Using Infrared Sensors

    Fine-grained occupancy information plays an essential role in various emerging applications in smart homes, such as personalized thermal comfort control and human behavior analysis. Existing occupancy sensors, such as passive infrared (PIR) sensors, generally provide limited, coarse information such as motion. However, the detection of fine-grained occupancy information, such as stationary presence, posture, identification, and activity tracking, can be enabled with the advance of sensor technologies. Among these, infrared sensing is a low-cost, device-free, and privacy-preserving choice that detects the fluctuation (PIR sensors) or the thermal profiles (thermopile array sensors) of objects' infrared radiation. This work focuses on developing data processing models towards fine-grained occupancy sensing using the synchronized low-energy electronically chopped PIR (SLEEPIR) sensor or thermopile array sensors. The main contributions of this dissertation include: (1) creating and validating a mathematical model of the SLEEPIR sensor output for stationary occupancy detection; (2) developing the SLEEPIR detection algorithm using statistical features and long short-term memory (LSTM) deep learning; (3) building a machine learning framework for posture detection and activity tracking using thermopile array sensors; and (4) creating convolutional neural network (CNN) models for facing-direction detection and identification using thermopile array sensors.
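
    The LSTM-based detection step in contribution (2) can be sketched concretely. The following is a minimal illustration, assuming PyTorch, of a recurrent classifier over windows of SLEEPIR samples; the window length, feature dimension, and hidden size are assumptions for illustration, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch of an LSTM classifier for stationary-occupancy detection
# from windows of chopped-PIR (SLEEPIR) samples. All sizes are
# illustrative assumptions.

class OccupancyLSTM(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # vacant vs. occupied

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)           # h: (num_layers, batch, hidden)
        return self.head(h[-1])            # logits: (batch, 2)

model = OccupancyLSTM()
windows = torch.randn(8, 120, 1)           # 8 windows of 120 PIR samples each
print(model(windows).shape)                # torch.Size([8, 2])
```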

    MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things

    The Internet of Things (IoT), the network integrating billions of smart physical devices embedded with sensors, software, and communication technologies for the purpose of connecting and exchanging data with other devices and systems, is a critical and rapidly expanding component of our modern world. The IoT ecosystem provides a rich source of real-world modalities, such as motion, thermal, geolocation, imaging, depth, video, and audio, for prediction tasks involving the pose, gaze, activities, and gestures of humans, as well as the touch, contact, pose, and 3D structure of physical objects. Machine learning presents a rich opportunity to automatically process IoT data at scale, enabling efficient inference for impact in understanding human wellbeing, controlling physical devices, and interconnecting smart cities. To develop machine learning technologies for IoT, this paper proposes MultiIoT, the most expansive IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 tasks. MultiIoT introduces unique challenges involving (1) learning from many sensory modalities, (2) fine-grained interactions across long temporal ranges, and (3) extreme heterogeneity due to unique structure and noise topologies in real-world sensors. We also release a set of strong modeling baselines, spanning modality- and task-specific methods to multisensory and multitask models, to encourage future research in multisensory representation learning for IoT.
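
    To make that baseline setup concrete, here is a minimal sketch, assuming PyTorch, of the kind of multisensory, multitask model the benchmark calls for: one encoder per modality, concatenated features, and one head per task. The modalities, tasks, and dimensions are illustrative assumptions, not MultiIoT's released baselines.

```python
import torch
import torch.nn as nn

# Hedged sketch of a multisensory multitask baseline: per-modality
# encoders, feature concatenation, per-task heads. All names and sizes
# are illustrative assumptions.

class MultisensoryModel(nn.Module):
    def __init__(self, modality_dims: dict, task_classes: dict, d: int = 64):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: nn.Linear(dim, d) for m, dim in modality_dims.items()})
        fused = d * len(modality_dims)
        self.heads = nn.ModuleDict(
            {t: nn.Linear(fused, c) for t, c in task_classes.items()})

    def forward(self, inputs: dict) -> dict:
        feats = [torch.relu(self.encoders[m](x)) for m, x in inputs.items()]
        fused = torch.cat(feats, dim=-1)
        return {t: head(fused) for t, head in self.heads.items()}

model = MultisensoryModel({"imu": 6, "audio": 128},
                          {"activity": 10, "gesture": 5})
out = model({"imu": torch.randn(4, 6), "audio": torch.randn(4, 128)})
print({t: o.shape for t, o in out.items()})
```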

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving and driver assistance, by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications, and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for the embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms; therefore, monitoring is not limited to confined areas, and extends to wherever the subject may travel, including indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera- and accelerometer-based fall detection on a Samsung Galaxy Sℱ 4. We fuse these two sensor modalities to obtain a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative entropy, a member of the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera. This algorithm provides a more accurate step count compared with using only accelerometer data from smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings by using the Google Project Tangoℱ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured by vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
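
    The autonomous-thresholding idea can be illustrated with a short sketch: treat fall detection as a divergence between the distribution of recent accelerometer magnitudes and a baseline distribution, using the Kullback-Leibler divergence as one member of the Ali-Silvey family. The binning and the synthetic signals below are assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np
from scipy.stats import entropy

# Hedged sketch of relative-entropy-based fall scoring on accelerometer
# magnitudes. Histogram bins and synthetic data are illustrative
# assumptions.

def magnitude_hist(x: np.ndarray, bins: np.ndarray) -> np.ndarray:
    h, _ = np.histogram(x, bins=bins, density=True)
    return h + 1e-9                       # avoid zero bins in the divergence

def fall_score(baseline: np.ndarray, current: np.ndarray,
               bins: np.ndarray = np.linspace(0.0, 4.0, 21)) -> float:
    """KL divergence (an Ali-Silvey distance) between magnitude histograms."""
    return entropy(magnitude_hist(current, bins),
                   magnitude_hist(baseline, bins))

rng = np.random.default_rng(0)
walking = np.abs(rng.normal(1.0, 0.1, 256))   # ~1 g during normal gait
impact = np.abs(rng.normal(2.5, 0.8, 256))    # large excursions around a fall
print(fall_score(walking, impact) > fall_score(walking, walking))  # True
```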

    Latest research trends in gait analysis using wearable sensors and machine learning: a systematic review

    Gait is the locomotion attained through the movement of limbs, and gait analysis examines the patterns (normal/abnormal) depending on the gait cycle. It contributes to the development of various applications in the medical, security, sports, and fitness domains to improve the overall outcome. Among many available technologies, two emerging technologies that play a central role in modern-day gait analysis are: A) wearable sensors, which provide a convenient, efficient, and inexpensive way to collect data, and B) Machine Learning Methods (MLMs), which enable high-accuracy gait feature extraction for analysis. Given their prominent roles, this paper presents a review of the latest trends in gait analysis using wearable sensors and Machine Learning (ML). It explores recent papers along with their publication details and key parameters such as sampling rates, MLMs, wearable sensors, number of sensors, and their locations. Furthermore, the paper provides recommendations for selecting an MLM, a wearable sensor, and its location for a specific application. Finally, it suggests some future directions for gait analysis and its applications.
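
    As a concrete instance of the pipeline the reviewed papers share, here is a minimal sketch, assuming scikit-learn and synthetic data: segment the accelerometer stream into windows, extract simple statistical features, and fit an off-the-shelf classifier. The window length, feature set, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch of a wearable-sensor gait-classification pipeline:
# windowed statistical features plus a standard classifier. All
# parameters and the synthetic data are illustrative assumptions.

def window_features(window: np.ndarray) -> np.ndarray:
    """Mean, std, min, and max per axis of a (samples, 3) window."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

rng = np.random.default_rng(0)
win = 200                                        # e.g. 2 s windows at 100 Hz
normal = rng.normal(0.0, 0.1, (50, win, 3))      # synthetic "normal" gait
abnormal = rng.normal(0.0, 0.4, (50, win, 3))    # synthetic "abnormal" gait
X = np.array([window_features(w) for w in np.concatenate([normal, abnormal])])
y = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))                           # training accuracy (synthetic)
```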

    Wearable Sensor Gait Analysis for Fall Detection Using Deep Learning Methods

    World Health Organization (WHO) data show that around 684,000 people die from falls yearly, making falls the second leading cause of unintentional injury death after road traffic accidents [1]. Early detection of falls, followed by pneumatic protection, is one of the most effective means of ensuring the safety of the elderly. In light of the recent widespread adoption of wearable sensors, it has become increasingly critical to develop fall detection models that can effectively process large and sequential sensor signal data. Several researchers have recently developed fall detection algorithms based on wearable sensor data. However, real-time fall detection remains challenging because of the wide range of gait variations in older adults. Choosing the appropriate sensor and placing it in the most suitable location are essential components of a robust real-time fall detection system. This dissertation implements various detection models to analyze and mitigate injuries due to falls in the senior community. It presents different methods for detecting falls in real time using deep learning networks. Several sliding window segmentation techniques are developed and compared in the first study. As a next step, various methods are implemented and applied to prevent sampling imbalances caused by the real-world collection of fall data. A study is also conducted to determine whether accelerometers and gyroscopes can distinguish between falls and near-falls. According to the literature survey, machine learning algorithms produce varying degrees of accuracy when applied to various datasets. The algorithm’s performance depends on several factors, including the type and location of the sensors, the fall pattern, the dataset’s characteristics, and the methods used for preprocessing and sliding window segmentation. Another challenge associated with fall detection is the need for centralized datasets for comparing the results of different algorithms. This dissertation compares the performance of various fall detection methods using deep learning algorithms across multiple datasets. Furthermore, deep learning has been explored in a second application: an ECG-based virtual pathology stethoscope detection system. A novel real-time virtual pathology stethoscope (VPS) detection method has been developed. Several deep learning methods are evaluated for classifying the location of the stethoscope by taking advantage of subtle differences in the ECG signals. This study would significantly extend the simulation capabilities of standardized patients by allowing medical students and trainees to perform realistic cardiac auscultation and hear pathological heart sounds in a clinical simulation environment.
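
    The sliding-window segmentation compared in the first study is simple to state in code. Below is a minimal sketch, assuming fixed-length windows with a configurable overlap fraction; the window size and overlap are illustrative assumptions, not the dissertation's tuned values.

```python
import numpy as np

# Hedged sketch of fixed-length sliding-window segmentation with
# configurable overlap. Parameter values are illustrative assumptions.

def sliding_windows(signal: np.ndarray, size: int, overlap: float):
    """Yield fixed-size windows; overlap is a fraction in [0, 1)."""
    step = max(1, int(size * (1.0 - overlap)))
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

stream = np.arange(1000)                 # stand-in for a continuous IMU stream
windows = list(sliding_windows(stream, size=200, overlap=0.5))
print(len(windows), windows[0][0], windows[1][0])   # 9 0 100
```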

    Shared Control for Wheelchair Interfaces

    Independent mobility is fundamental to the quality of life of people with impairment. Most people with severe mobility impairments, whether congenital, e.g., from cerebral palsy, or acquired, e.g., from spinal cord injury, are prescribed a wheelchair. A small yet significant number of people are unable to use a typical powered wheelchair controlled with a joystick. Instead, some of these people require alternative interfaces, such as a head-array or Sip/Puff switch, to drive their powered wheelchairs. However, these alternative interfaces do not work for everyone and often cause frustration, fatigue and collisions. This thesis develops a novel technique to help improve the usability of some of these alternative interfaces, in particular the head-array and Sip/Puff switch. Control is shared between a powered wheelchair user, using an alternative interface, and a powered wheelchair fitted with sensors. This shared control then produces a resulting motion that is close to what the user desires while also being safe. A path planning algorithm on the wheelchair is implemented using techniques from mobile robotics. Afterwards, the output of the path planning algorithm and the user’s command are both modelled as random variables. These random variables are then blended in a joint probability distribution, where the final velocity sent to the wheelchair is the one that maximises the joint probability distribution. The performance of the probabilistic approach to blending the user’s inputs with the output of a path planner is benchmarked against the most common form of shared control, called linear blending. The benchmarking consists of several experiments with end users, both in a simulated world and in the real world. The thesis concludes that probabilistic shared control provides safer motion compared with traditional shared control for difficult tasks and hard-to-use interfaces.
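
    The probabilistic blending at the heart of the thesis has a clean closed form when, as a simplification, both commands are modelled as independent Gaussians over velocity: the velocity that maximises the joint density is the precision-weighted mean. The sketch below illustrates this alongside the linear-blending baseline; the variances are illustrative assumptions about each source's reliability, not values from the thesis.

```python
import numpy as np

# Hedged sketch of probabilistic shared control under a Gaussian
# simplification: the commanded velocity maximises the product of the
# user's and planner's densities. Variances are illustrative assumptions.

def probabilistic_blend(mu_user, var_user, mu_plan, var_plan):
    """Maximiser of the joint Gaussian density: the precision-weighted mean."""
    w_user, w_plan = 1.0 / var_user, 1.0 / var_plan
    return (w_user * mu_user + w_plan * mu_plan) / (w_user + w_plan)

def linear_blend(mu_user, mu_plan, alpha=0.5):
    """The traditional baseline: a fixed-weight average of the commands."""
    return alpha * mu_user + (1.0 - alpha) * mu_plan

user = np.array([0.6, 0.0])   # noisy head-array command (linear, angular)
plan = np.array([0.4, 0.2])   # path planner's collision-free velocity
print(probabilistic_blend(user, 0.3, plan, 0.05))  # pulled towards the planner
print(linear_blend(user, plan))
```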

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL solutions that ensure ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.