38 research outputs found

    Activity recognition based on thermopile imaging array sensors

    With an aging population, caring for elderly people is receiving increasing attention. In this paper, a low-resolution thermopile array sensor is used to develop an activity recognition system for elderly people. The sensor comprises a 32 × 32 thermopile array with a corresponding 33° × 33° field of view. The sensor outputs sequential images in which each pixel contains a temperature value. From these thermopile images, the activity recognition system first determines whether the target is within the tracking area; if so, the target's location is detected and three kinds of activities are identified. Keywords: Activity Recognition, Raspberry Pi, Thermopile, Image Processing
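As a rough illustration of the pipeline described above, presence detection and localisation on a low-resolution thermopile frame can be sketched as follows. The background model, the 2 °C threshold, and the centroid rule are illustrative assumptions, not the paper's actual method:

```python
def locate_target(frame, background, delta=2.0):
    """Return the (row, col) centroid of pixels warmer than the background
    by more than `delta` degrees, or None if no target is in view.

    `frame` and `background` are equal-sized 2-D lists of temperatures
    (e.g. 32x32 for the sensor described in the abstract)."""
    hot = [(r, c)
           for r, row in enumerate(frame)
           for c, t in enumerate(row)
           if t - background[r][c] > delta]
    if not hot:
        return None  # target is outside the tracking area
    n = len(hot)
    return (sum(r for r, _ in hot) / n, sum(c for _, c in hot) / n)
```

A tracker could then feed the sequence of centroids to an activity classifier; that step is not sketched here.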

    Risk of falling in a Timed Up and Go test using a UWB radar and an instrumented insole

    Previous studies have reported that fall analysis in the elderly is possible using wearable sensors. However, these devices cannot be worn continuously, as they need to be removed and recharged from time to time owing to their energy consumption, data transfer, attachment to the body, etc. This study introduces a radar sensor, an unobtrusive technology, for risk-of-falling analysis and combines its performance with that of an instrumented insole. We evaluated our methods on datasets acquired during a Timed Up and Go (TUG) test in which stride length (SL) was computed by the insole using three approaches. Only the SL from the third approach did not differ statistically significantly (p = 0.2083 > 0.05) from that provided by the radar, revealing the importance of sensor location on the human body. When reducing the number of force-sensing resistors (FSRs), the risk scores obtained using an insole containing three FSRs and the y-axis of acceleration were not significantly different (p > 0.05) from those obtained by combining a single radar and two FSRs. We conclude that contactless TUG testing is feasible and that, by supplementing the radar with the instrumented insole, more precise information can be made available to professionals for accurate decision-making.
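As an illustration of how an instrumented insole might derive gait timing, the sketch below detects heel strikes as rising threshold crossings of a single FSR channel and averages the intervals between them. The threshold value and signal model are assumptions; the abstract does not describe the paper's three stride-length approaches:

```python
def heel_strikes(fsr, threshold=0.5):
    """Indices where the FSR signal rises above `threshold` (heel contact)."""
    return [i for i in range(1, len(fsr))
            if fsr[i] >= threshold and fsr[i - 1] < threshold]

def mean_stride_time(fsr, fs, threshold=0.5):
    """Mean time between successive heel strikes of the same foot, in seconds.

    `fsr` is a sampled force signal; `fs` is the sampling rate in Hz.
    Returns None if fewer than two strikes are detected."""
    strikes = heel_strikes(fsr, threshold)
    if len(strikes) < 2:
        return None
    gaps = [b - a for a, b in zip(strikes, strikes[1:])]
    return sum(gaps) / len(gaps) / fs
```

Stride length would then follow from stride time combined with an estimated gait speed, which is where a radar-derived velocity could be fused in.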

    Human Activity Recognition and Fall Detection Using Unobtrusive Technologies

    As the population ages, health issues such as injurious falls demand more attention. Wearable devices can be used to detect falls; however, despite their commercial success, most are obtrusive, and patients generally dislike or forget to wear them. In this thesis, a monitoring system consisting of two 24×32 thermal array sensors and a millimetre-wave (mmWave) radar sensor was developed to unobtrusively detect locations and recognise human activities such as sitting, standing, walking, lying, and falling. Data were collected by observing healthy young volunteers simulating ten different scenarios. Because the optimal installation position of the sensors was initially unknown, the sensors were mounted on a side wall, in a corner, and on the ceiling of the experimental room to allow performance comparison between these placements. Every thermal frame was converted into an image, and features were either extracted manually or extracted automatically with convolutional neural networks (CNNs). Applying a CNN model to the infrared stereo dataset to recognise five activities (falling plus lying on the floor, lying in bed, sitting on a chair, sitting in bed, standing plus walking), the overall average accuracy and F1-score were 97.6% and 0.935, respectively. The scores for distinguishing falling plus lying on the floor from the remaining activities were 97.9% and 0.945, respectively. With the radar, the generated point clouds were converted into an occupancy grid, and features were again extracted either automatically with a CNN model or manually. Applying several classifiers to the manually extracted features to distinguish falling plus lying on the floor from the remaining activities, the Random Forest (RF) classifier achieved the best results in the overhead position (an accuracy of 92.2%, a recall of 0.881, a precision of 0.805, and an F1-score of 0.841).
Additionally, the CNN model achieved its best results in the overhead position (an accuracy of 92.3%, a recall of 0.891, a precision of 0.801, and an F1-score of 0.844), slightly outperforming the RF method. Data fusion was performed at the feature level, combining the infrared and radar modalities; however, the benefit was not significant. The proposed system was cost-, processing-time-, and space-efficient, and with further development could be used as a real-time fall detection system in aged-care facilities or in the homes of older people.
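The conversion of radar point clouds into an occupancy grid mentioned above can be sketched in a minimal form. The grid extents, resolution, and simple count-based occupancy are illustrative assumptions; the thesis may bin additional dimensions such as height or Doppler velocity:

```python
def occupancy_grid(points, x_range, y_range, shape):
    """Bin (x, y) point-cloud coordinates into a 2-D occupancy count grid.

    `points` is an iterable of (x, y) pairs in metres; `x_range` and
    `y_range` are (min, max) extents; `shape` is (rows, cols).
    Points outside the extents are ignored."""
    rows, cols = shape
    (x0, x1), (y0, y1) = x_range, y_range
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        if x0 <= x < x1 and y0 <= y < y1:
            r = int((y - y0) / (y1 - y0) * rows)
            c = int((x - x0) / (x1 - x0) * cols)
            grid[r][c] += 1
    return grid
```

The resulting grid is image-like, which is what makes CNN-based feature extraction applicable to it.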

    Thermal Cameras and Applications: A Survey


    Distributed Computing and Monitoring Technologies for Older Patients

    This book summarizes various approaches for automatically detecting health threats to older patients living alone at home. The text begins by briefly describing those who would most benefit from healthcare supervision. The book then summarizes possible scenarios for monitoring an older patient at home, deriving the common functional requirements for monitoring technology. Next, the work identifies the state of the art of technological monitoring approaches that are practically applicable to geriatric patients. A survey is presented covering interdisciplinary fields such as smart homes, telemonitoring, ambient intelligence, ambient assisted living, gerontechnology, and aging-in-place technology. The book discusses relevant experimental studies, highlighting the application of sensor fusion, signal processing, and machine learning techniques. Finally, the text discusses future challenges, offering a number of suggestions for further research directions.

    Smart streetlights: a feasibility study

    The world's cities are growing. Population growth and urbanisation mean that more people are living in cities than ever before, a trend set to continue. This urbanisation poses problems for the future. A growing population brings more strain on local resources, increased traffic and congestion, and environmental decline, including more pollution, loss of green spaces, and the formation of urban heat islands. Thankfully, many of these stressors can be alleviated with better management and procedures, particularly in the context of road infrastructure. For example, with better traffic data, signalling can be smoothed to reduce congestion, parking can be made easier, and streetlights can be dimmed in real time to match actual road usage. However, obtaining this information on a citywide scale is prohibitively expensive because of the high labour and material costs of installing sensor hardware. This study investigated the viability of a streetlight-integrated sensor system for affordably obtaining traffic and environmental information. The investigation was conducted in two stages: 1) development of a hardware prototype, and 2) evaluation of an evolved prototype system. In Stage 1, the prototype sensor system was developed over three design iterations. In iteration 1, the prototype was deployed live in an urban setting to select and evaluate sensors for environmental monitoring; in iterations 2 and 3, deployments on roads with live and controlled traffic were used to develop and test sensors for remote traffic detection. In the final iteration, which involved over 600 vehicle, 600 pedestrian, and 400 cyclist controlled passes, the developed system, comprising passive-infrared motion detectors, lidar, and thermal sensors, could detect and count these traffic classes from a streetlight-integrated configuration with 99%, 84%, and 70% accuracy, respectively.
With the finalised sensor system design, Stage 1 showed that traffic and environmental sensing from a streetlight-integrated configuration was feasible and effective using on-board processing with commercially available, inexpensive components. In Stage 2, financial and social assessments of the developed sensor system were conducted to evaluate its viability and value to a community. An evaluation tool for simulating streetlight installations was created to measure the effects of implementing the smart streetlight system. The evaluation showed that the on-demand, traffic-adaptive dimming enabled by the system reduced the electrical and maintenance costs of lighting installations. As a result, a 'smart' LED streetlight system was shown to outperform conventional always-on streetlight configurations in financial terms within a period of 5 to 12 years, depending on the installation's local traffic characteristics. A survey on the public acceptance of smart streetlight systems was also conducted to assess the factors influencing support for their applications. In particular, the Australia-wide survey investigated applications around road traffic improvement, streetlight dimming, and walkability, and quantified participants' support through willingness-to-pay assessments for each application. Community support for smart road applications was generally positive, especially in areas with a high dependence on personal road transport and among participants adversely affected by spill light in their homes. Overall, the findings of this study indicate that our cities, and roads in particular, can and should be made smarter. The technology already exists and is becoming affordable enough for communities of all sizes to implement smart streetlight systems for the betterment of city services, resource management, and civilian health and wellbeing.
The sooner these technologies are embraced, the sooner they can be adapted to the specific needs of the community and environment for a more sustainable and innovative future.
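A minimal sketch of the kind of cost calculation such an evaluation tool might perform, assuming a simple two-level dimming model. The power figures, dimming ratio, and fixed night length in the example are hypothetical, not values from the study:

```python
def annual_energy_kwh(power_w, hours_per_night, dim_fraction, dim_power_ratio):
    """Annual energy (kWh) for one streetlight that runs at full power,
    but dims to `dim_power_ratio` of full power for `dim_fraction` of
    each night when no traffic is detected."""
    full_hours = hours_per_night * (1 - dim_fraction)
    dimmed_hours = hours_per_night * dim_fraction
    return power_w * (full_hours + dimmed_hours * dim_power_ratio) * 365 / 1000
```

Comparing the always-on case (`dim_fraction=0`) against an adaptive profile, and multiplying the difference by the local tariff, gives the annual saving from which a payback period can be estimated.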

    Advanced Occupancy Measurement Using Sensor Fusion

    With roughly half of the energy used in buildings attributed to Heating, Ventilation, and Air Conditioning (HVAC) systems, there is clearly great potential for energy saving through improved building operations. Accurate knowledge of localised, real-time occupancy numbers enables compelling control applications for HVAC systems. However, existing technologies for building occupancy measurement are limited, and a precise, reliable occupant count is difficult to obtain. For example, passive infrared (PIR) sensors commonly used for occupancy sensing in lighting control cannot differentiate between occupants grouped together; video sensing is often limited by privacy concerns; and atmospheric gas sensors (such as CO2 sensors) may be affected by electromagnetic interference (EMI) and may not show clear links between occupancy and sensor values. Past studies have indicated the need for a heterogeneous multi-sensory fusion approach to occupancy detection to address the shortcomings of existing systems. The aim of this research is to develop an advanced instrumentation strategy to monitor occupancy levels in non-domestic buildings while facilitating lower energy use and maintaining an acceptable indoor climate. Accordingly, a novel multi-sensor approach for occupancy detection in open-plan office spaces is proposed. The approach combines information from various low-cost, non-intrusive indoor environmental sensors, with the aim of merging the advantages of the individual sensors while minimising their weaknesses, and offers the potential to capture explicit information on occupancy levels. The proposed occupancy monitoring strategy has two main components: hardware system implementation and data processing. The hardware implementation included a custom-made sound sensor and refinement of CO2 sensors for EMI mitigation.
Two test beds were designed and implemented to support the research, including proof-of-concept and experimental studies. Data processing was carried out in several stages, with the ultimate goal of detecting occupancy levels. First, features of interest were extracted from all collected sensory data; second, a symmetrical uncertainty analysis was applied to determine the predictive strength of individual sensor features; third, a candidate feature subset was determined using a genetic-algorithm-based search; finally, a back-propagation neural network model was adopted to fuse the candidate multi-sensory features and estimate occupancy levels. Several test cases were implemented to demonstrate and evaluate the effectiveness and feasibility of the proposed approach. The results show the potential of the proposed heterogeneous multi-sensor fusion approach as an advanced strategy for developing reliable occupancy detection systems in open-plan office buildings, capable of facilitating improved control of building services. In summary, the proposed approach can potentially: (1) detect occupancy levels with an accuracy reaching 84.59% during occupied instances; (2) maintain an average occupancy detection accuracy of 61.01% in the event of sensor failure or drop-off (such as CO2 sensor drop-off); (3) monitor occupancy levels using only sound and motion sensors in a naturally ventilated space; and (4) facilitate potential daily energy savings reaching 53% if implemented for occupancy-driven ventilation control.
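The symmetrical uncertainty analysis used above to rank sensor features can be sketched for discrete-valued features as follows. (Continuous sensor readings would first need to be discretised; that step is omitted here.) SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)) lies in [0, 1], with 1 meaning the feature fully determines the occupancy label and 0 meaning independence:

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(xs, ys):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), a symmetric, normalised
    measure of how predictive feature X is of label Y (and vice versa)."""
    hx, hy = entropy(xs), entropy(ys)
    hxy = entropy(list(zip(xs, ys)))      # joint entropy H(X, Y)
    mi = hx + hy - hxy                    # mutual information I(X; Y)
    denom = hx + hy
    return 2 * mi / denom if denom else 0.0
```

Ranking each sensor feature by its SU against the occupancy label gives the per-feature predictive strength that the genetic search then builds its candidate subsets from.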