    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users’ acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.

    Multiple Person Localization Based on Their Vital Sign Detection Using UWB Sensor

    In recent years, great efforts have been made to develop methods for the through-obstacle detection of human vital signs such as breathing or heartbeat. For that purpose, ultra-wideband (UWB) radars operating in the frequency band DC-5 GHz can be used as a proper tool. The basic principle of respiratory motion detection consists in identifying radar signal components that possess significant power in the frequency band 0.2–0.7 Hz (the frequency band of the human respiratory rate) and correspond to a constant bistatic range between the target and the radar. A variety of methods have been developed to tackle the task of detecting respiratory motion. However, the problem of person localization based on the detection of his or her respiratory motion has not been studied deeply. In order to fill this gap, an approach for multiple person localization based on the detection of their respiratory motion is introduced in this chapter.
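    The band-power criterion described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (the function name, the slow-time sampling rate fs_slow, and the power-ratio threshold are assumptions for illustration, not the chapter's actual algorithm): it flags range bins whose slow-time spectrum concentrates power in the 0.2–0.7 Hz respiratory band.

```python
import numpy as np

def detect_respiration_bins(slow_time, fs_slow, band=(0.2, 0.7), ratio_threshold=0.5):
    """Flag range bins whose slow-time spectrum concentrates power in the
    human respiratory band (0.2-0.7 Hz).

    slow_time: 2-D array, shape (num_range_bins, num_slow_time_samples).
    fs_slow:   slow-time sampling rate in Hz (hypothetical parameter).
    Returns the indices of candidate range bins.
    """
    # Remove the static (zero-frequency) component in each range bin.
    x = slow_time - slow_time.mean(axis=1, keepdims=True)

    # Power spectrum along slow time for every range bin.
    spectrum = np.abs(np.fft.rfft(x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fs_slow)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power = spectrum[:, in_band].sum(axis=1)
    total_power = spectrum.sum(axis=1) + 1e-12

    # A bin is a respiration candidate if most of its power lies in the band.
    return np.flatnonzero(band_power / total_power > ratio_threshold)
```

    The flagged bins correspond to constant bistatic ranges at which breathing targets sit; combining such range estimates from several radar positions is what then enables the localization of multiple persons.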

    Wi-Fi For Indoor Device Free Passive Localization (DfPL): An Overview

    The world is moving towards an interconnected and intercommunicable network of animate and inanimate objects with the emergence of the Internet of Things (IoT) concept, which is expected to have 50 billion connected devices by 2020. Wireless-communication-enabled devices play a major role in the realization of IoT. In Malaysia, home and business Internet Service Providers (ISPs) bundle Wi-Fi modems working in the 2.4 GHz Industrial, Scientific and Medical (ISM) radio band with their internet services. This makes Wi-Fi the most eligible protocol to serve as a local as well as an internet data link for IoT devices. Besides serving as a data link, human presence and location information in a multipath-rich indoor environment can be harvested by monitoring and processing the changes in the Wi-Fi Radio Frequency (RF) signals. This paper comprehensively discusses the initiation and evolution of Wi-Fi based Indoor Device-free Passive Localization (DfPL) since the concept was first introduced by Youssef et al. in 2007. Alongside the overview, future directions of DfPL in line with the ongoing evolution of Wi-Fi based IoT devices are briefly discussed.
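    As a toy illustration of the DfPL principle (and not a method from the surveyed literature), a person moving through a multipath-rich indoor environment typically increases the fluctuation of the received signal strength on a Wi-Fi link. The sketch below assumes a stream of RSSI samples, an empty-room calibration segment, and hypothetical window and threshold parameters.

```python
import numpy as np

def detect_presence(rssi, window=50, k=3.0, baseline_samples=500):
    """Device-free presence detection from a stream of RSSI samples (dBm).

    The first `baseline_samples` values are assumed to be recorded with the
    room empty; presence is declared whenever the sliding-window variance
    exceeds k times the empty-room variance. All parameters are illustrative.
    """
    rssi = np.asarray(rssi, dtype=float)
    baseline_var = np.var(rssi[:baseline_samples])
    flags = np.zeros(len(rssi), dtype=bool)
    for i in range(window, len(rssi)):
        flags[i] = np.var(rssi[i - window:i]) > k * baseline_var
    return flags
```

    Practical DfPL systems go well beyond such a variance test, but the underlying idea of sensing presence from changes in the RF signals is the same.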

    Multimodal radar sensing for ambient assisted living

    Data acquired from health and behavioural monitoring of daily life activities can be exploited to provide real-time medical and nursing services at affordable cost and with higher efficiency. A variety of sensing technologies for this purpose have been developed and presented in the literature, for instance, wearable IMUs (Inertial Measurement Units) to measure the acceleration and angular speed of the person, cameras to record images or video sequences, PIR (pyroelectric infrared) sensors to detect the presence of a person based on the pyroelectric effect, and radar to estimate the distance and radial velocity of the person. Each sensing technology has pros and cons and may not be optimal for all tasks. It is possible to leverage the strengths of all these sensors through information fusion in a multimodal fashion. The fusion can take place at three different levels, namely, i) the signal level, where commensurate data are combined, ii) the feature level, where feature vectors of different sensors are concatenated, and iii) the decision level, where the confidence levels or prediction labels of classifiers are used to generate a new output. For each level there are different fusion algorithms, and the key challenge lies mainly in choosing the best existing fusion algorithm and developing novel fusion algorithms that are more suitable for the current application. The fundamental contribution of this thesis is therefore exploring possible information fusion between radar, primarily FMCW (Frequency Modulated Continuous Wave) radar, and wearable IMUs, between distributed radar sensors, and between UWB (Ultra-Wideband) impulse radar and a pressure sensor array. The objective is to sense and classify daily activity patterns, gait styles and micro-gestures, as well as to produce early warnings of high-risk events such as falls. Initially, only “snapshot” activities (a single activity within a short X-s measurement) have been collected and analysed to verify the accuracy improvement due to information fusion. Then continuous activities (activities that are performed one after another with random durations and transitions) have been collected to simulate real-world scenarios. To overcome the drawbacks of the conventional sliding-window approach on continuous data, a Bi-LSTM (Bidirectional Long Short-Term Memory) network is proposed to identify the transitions between daily activities. Meanwhile, a hybrid fusion framework is presented to exploit the power of soft and hard fusion. Moreover, a trilateration-based signal-level fusion method has been successfully applied to the range information of three UWB impulse radars, and the results show performance comparable to using micro-Doppler signatures at a much lower computational cost. For classifying “snapshot” activities, fusion between radar and the wearable sensor shows approximately 12% accuracy improvement compared to using radar only, whereas for classifying continuous activities and gaits, the proposed hybrid fusion and trilateration-based signal-level fusion improve accuracy by roughly 6.8% (from 89% to 95.8%) and 7.3% (from 85.4% to 92.7%), respectively.
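    The trilateration-based signal-level fusion mentioned above combines the range estimates of three spatially distributed radars into a target position. The snippet below is a minimal least-squares sketch under stated assumptions: the anchor coordinates and ranges are placeholders, and the linearisation (subtracting the first range equation from the others) is a standard textbook formulation rather than the exact method used in the thesis.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D trilateration.

    anchors: (N, 2) array of known radar positions, N >= 3.
    ranges:  (N,) array of measured target ranges to each radar.
    Subtracting the first range equation from the others linearises the
    problem into A p = b, solved here with a least-squares fit.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical example: three radars on the boundary of a 3 m x 3 m room.
print(trilaterate([(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)], [2.0, 2.2, 1.9]))
```

    Because only per-radar range profiles are needed, such a scheme avoids the cost of computing micro-Doppler signatures, which is consistent with the reduced computational load reported above.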

    Amplitude Modeling of Specular Multipath Components for Robust Indoor Localization

    Ultra-Wide Bandwidth (UWB) and mm-wave radio systems can resolve specular multipath components (SMCs) from estimated channel impulse response measurements. A geometric model can describe the delays, angles-of-arrival, and angles-of-departure of these SMCs, allowing for a prediction of these channel features. For the modeling of the amplitudes of the SMCs, a data-driven approach has been proposed recently, using Gaussian Process Regression (GPR) to map and predict the SMC amplitudes. In this paper, the applicability of the proposed multipath-resolved, GPR-based channel model is analyzed by studying features of the propagation channel from a set of channel measurements. The features analyzed include the energy capture of the modeled SMCs, the number of resolvable SMCs, and the ranging information that could be extracted from the SMCs. The second contribution of the paper concerns the potential applicability of the channel model for a multipath-resolved, single-anchor positioning system. The predicted channel knowledge is used to evaluate the measurement likelihood function at candidate positions throughout the environment. It is shown that the environmental awareness created by the multipath-resolved, GPR-based channel model yields higher robustness against position estimation outliers.
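    As a rough illustration of the data-driven amplitude model, the sketch below fits a Gaussian Process regressor that maps agent position to the amplitude of one SMC and then predicts the amplitude (with uncertainty) at a candidate position, as needed when evaluating a measurement likelihood. The training data are synthetic, and the choice of features (2-D position) and kernel are assumptions for illustration, not the parametrisation used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic training set: agent positions (x, y) and the measured amplitude
# (in dB) of one specular multipath component at each position.
rng = np.random.default_rng(0)
train_pos = rng.uniform(0.0, 5.0, size=(40, 2))
train_amp_db = (-60.0
                - 2.0 * np.linalg.norm(train_pos - [2.5, 2.5], axis=1)
                + rng.normal(0.0, 0.5, size=40))

# GPR with an RBF kernel plus a white-noise term for measurement noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(train_pos, train_amp_db)

# Predicted SMC amplitude and its standard deviation at a candidate position.
mean_db, std_db = gpr.predict([[1.0, 4.0]], return_std=True)
print(mean_db[0], std_db[0])
```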

    Neural Networks for Indoor Human Activity Reconstructions

    Low-cost, ubiquitous, tagless, and privacy-aware indoor monitoring is essential to many existing or future applications, such as the assisted living of elderly persons. We explore how well different types of neural networks in basic configurations can extract location and movement information from noisy experimental data (with both high-pitch and slow drift noise) obtained from capacitive sensors operating in loading mode at ranges much longer than the diagonal of their plates. Through design space exploration, we optimize and analyze the location and trajectory tracking inference performance of multilayer perceptron (MLP), autoregressive feedforward, 1D convolutional (1D-CNN), and Long Short-Term Memory (LSTM) neural networks on experimental data collected using four capacitive sensors with 16 cm x 16 cm plates deployed on the boundaries of a 3 m x 3 m open space in our laboratory. We obtain the minimum error using a 1D-CNN [0.251 m distance Root Mean Square Error (RMSE) and 0.307 m Average Distance Error (ADE)] and the smoothest trajectory inference using an LSTM, albeit with higher localization errors (0.281 m RMSE and 0.326 m ADE). 1D convolutional and window-based neural networks have the best inference accuracy and smoother trajectory reconstruction, while LSTMs seem to best infer the person's movement dynamics.
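    As a sketch of the kind of 1D-CNN regressor compared above, the PyTorch snippet below maps a window of samples from the four capacitive channels to an (x, y) position. The layer sizes, window length, and training step are illustrative assumptions, not the architecture tuned in the paper.

```python
import torch
import torch.nn as nn

class Conv1DLocator(nn.Module):
    """Maps a window of capacitive-sensor samples (4 channels) to (x, y)."""
    def __init__(self, channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # average over the time axis
        )
        self.head = nn.Linear(32, 2)        # regress (x, y) in metres

    def forward(self, x):                   # x: (batch, channels, window)
        return self.head(self.features(x).squeeze(-1))

# Hypothetical training step with random tensors standing in for real data.
model = Conv1DLocator()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
windows = torch.randn(8, 4, 64)             # batch of sensor windows
targets = torch.rand(8, 2) * 3.0            # positions in a 3 m x 3 m area
loss = nn.functional.mse_loss(model(windows), targets)
loss.backward()
optimiser.step()
```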

    Wi-Fi based people tracking in challenging environments

    People tracking is a key building block in many applications such as abnormal activity detection, gesture recognition, and the monitoring of elderly persons. Video-based systems have many limitations that make them ineffective in many situations. Wi-Fi provides an easily accessible source of opportunity for people tracking that does not share the limitations of video-based systems. The proposed system detects, localises, and tracks people based on the available Wi-Fi signals reflected from their bodies. Wi-Fi based systems still need to address some challenges in order to operate in challenging environments, including the detection of weak signals, the detection of abrupt people motion, and the presence of multipath propagation. In this thesis, these three main challenges are addressed. Firstly, a weak-signal detection method is proposed that uses the changes in the signals reflected from static objects to improve the detection probability of weak signals reflected from the person’s body. Then, a deep-learning-based Wi-Fi localisation technique is proposed that significantly improves the runtime and the accuracy in comparison with existing techniques. After that, a quantum-mechanics-inspired tracking method is proposed to address the abrupt motion problem. The proposed method exploits interesting phenomena from the quantum world, where the person is allowed to exist at multiple positions simultaneously. The results show a significant reduction in both the tracking error and the tracking delay.

    Edge Artificial Intelligence for Real-Time Target Monitoring

    Location-based services are a key enabling technology for the exponentially growing cellular communications sector. The need for location-aware services has increased along with the number of wireless and mobile devices. Estimation problems, and particularly parameter estimation, have drawn a lot of interest because of their relevance and engineers' ongoing need for higher performance. As applications expanded, a lot of interest was generated in the accurate assessment of temporal and spatial properties. In this thesis, two different approaches to subject monitoring are thoroughly addressed. This kind of activity is crucial for military applications, medical tracking, monitoring industrial workers, and providing location-based services to the ever-growing mobile user community. In-depth consideration is given to the viability of applying Angle of Arrival (AoA) and Received Signal Strength Indication (RSSI) localization algorithms in real-world situations. We present and discuss two prospective systems, with specific assessments and tests carried out in diverse contexts (e.g., indoor, outdoor, in water...). The findings demonstrated the localization capability, but because of the low-cost antenna employed, this method is only practical up to a distance of roughly 150 meters. Consequently, depending on the use-case, this method may or may not be advantageous. An estimation algorithm that enhances the performance of the AoA technique was implemented on an edge device. Another approach was also considered. Radar sensors have proven to be durable in inclement weather and bad lighting conditions. Among the several types of radar technologies, Frequency Modulated Continuous Wave (FMCW) radars are the most frequently employed for these kinds of applications, because they are low-cost and can simultaneously provide range and Doppler data. In comparison to pulse and Ultra Wide Band (UWB) radar sensors, they also need a lower sampling rate and a lower peak-to-average ratio. The system employs a cutting-edge surveillance method based on widely available FMCW radar technology. The data processing approach is built on an ad hoc chain of blocks that transforms the data, extracts features, and makes a classification decision: clutter and leakage are cancelled using a frame subtraction technique, deep learning (DL) algorithms are applied to Range-Doppler (RD) maps, and a peak-to-cluster assignment step is added before tracking the targets. In conclusion, the FMCW radar and the DL technique applied to the RD maps performed well together for indoor use-cases. The aforementioned tests used an edge device and Infineon Technologies' Position2Go FMCW radar tool-set.
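    The core of the radar chain described above (frame subtraction for clutter and leakage removal, followed by Range-Doppler maps that feed the DL classifier) can be sketched numerically. The snippet below assumes a cube of complex FMCW beat-signal samples of shape (num_frames, num_chirps, num_samples); the mean-frame subtraction and the array layout are illustrative assumptions rather than the exact Position2Go pipeline.

```python
import numpy as np

def range_doppler_maps(frames, remove_static=True):
    """Compute a Range-Doppler magnitude map for each frame of FMCW beat signals.

    frames: complex array of shape (num_frames, num_chirps, num_samples).
    Static clutter and antenna leakage are suppressed by subtracting the
    mean frame (a simple frame-subtraction scheme).
    """
    frames = np.asarray(frames)
    if remove_static:
        frames = frames - frames.mean(axis=0, keepdims=True)

    # Range FFT over fast time (samples), then Doppler FFT over slow time (chirps).
    range_fft = np.fft.fft(frames, axis=2)
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)
    return np.abs(rd)

# Each magnitude map can then be normalised and fed to the DL classifier,
# with detected peaks assigned to clusters before tracking.
```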

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.