177 research outputs found
Medical data processing and analysis for remote health and activities monitoring
Recent developments in sensor technology, wearable computing, the Internet of Things (IoT), and wireless communication have given rise to research in ubiquitous healthcare and remote monitoring of human health and activities. Health monitoring systems involve processing and analysis of data retrieved from smartphones, smart watches, smart bracelets, and various other sensors and wearable devices. Such systems enable continuous monitoring of patients' physiological and health conditions by sensing and transmitting measurements such as heart rate, electrocardiogram, body temperature, respiratory rate, chest sounds, or blood pressure. Pervasive healthcare, as a relevant application domain in this context, aims at revolutionizing the delivery of medical services through a medical assistive environment and facilitates the independent living of patients. In this chapter, we discuss (1) data collection, fusion, ownership, and privacy issues; (2) models, technologies, and solutions for medical data processing and analysis; (3) big medical data analytics for remote health monitoring; (4) research challenges and opportunities in medical data analytics; and (5) examples of case studies and practical solutions.
InContexto: Multisensor Architecture to Obtain People Context from Smartphones
The way users interact with smartphones is changing thanks to the improvements made in their embedded sensors. Increasingly, these devices are being employed as tools to observe individuals' habits. Smartphones provide a rich set of embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera. This paper describes a distributed architecture, called inContexto, to recognize user context information using mobile phones. Moreover, it aims to infer physical actions performed by users, such as walking, running, and standing still. Sensor data were collected by an Android application running on an HTC Magic handset, and the system was tested, achieving about 97% accuracy when classifying the different actions (still, walking, and running). This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), and DPS2008-07029-C02-02.
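The inContexto pipeline itself is not reproduced here, but window-based activity recognition from accelerometer data can be sketched as follows. This is a toy illustration only: the features (mean and standard deviation of the acceleration magnitude) and the thresholds are invented for the example, not taken from the paper.

```python
import numpy as np

def extract_features(window):
    """Mean and standard deviation of the acceleration magnitude
    over a fixed-length window of (x, y, z) samples in units of g."""
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.mean(), mag.std()])

def classify(features, still_thresh=0.05, walk_thresh=0.5):
    """Toy rule-based classifier: near-zero variance -> still,
    moderate -> walking, high -> running. Thresholds are
    illustrative, not values from the inContexto system."""
    _, std = features
    if std < still_thresh:
        return "still"
    if std < walk_thresh:
        return "walking"
    return "running"
```

In a learned system the rule-based step would be replaced by a classifier trained on labeled windows, but the window-then-features structure is the same.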
Sensor Technologies for Intelligent Transportation Systems
Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information and communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas, including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructure such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control, and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment.
Driver Assistance Technologies
Driver assistance technology is emerging as a new driving technology, popularly known as ADAS. It comprises features such as adaptive cruise control, automatic emergency braking, blind-spot monitoring, lane-change assistance, and forward collision warning. ADAS is an important platform that integrates these multiple applications, using data from multifunction sensors, cameras, radars, and lidars, and sending commands to multiple actuators such as the engine, brakes, and steering. ADAS technology can detect some objects, perform basic classification, alert the driver to hazardous road conditions, and, in some cases, slow or stop the vehicle. The architecture of the electronic control units (ECUs) responsible for executing advanced driver assistance systems (ADAS) in the vehicle is evolving to match the demands placed on it during driving. Automotive system architectures integrate multiple applications into ADAS ECUs that serve multiple sensors. The hardware architecture of ADAS and autonomous driving includes automotive Ethernet, TSN, Ethernet switches and gateways, and domain controllers, while the software architecture includes AUTOSAR Classic and Adaptive, ROS 2.0, and QNX. This chapter explains the functioning of driver assistance technology with the help of its architecture and various types of sensors.
Multimodal Multisensor attention modelling
Introduction: Sustaining attention is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to tracking student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time Multimodal Multisensor data, labeled by objective performance outcomes, to track the attention of students.
Method: The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal Multisensor data were collected while they participated in a Continuous Performance Test (CPT). Eye-gaze, electroencephalogram, body pose, and interaction data were used to create a model of student attention through objective labeling from the Continuous Performance Test outcomes. To achieve this, a new type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including High-Level handpicked Compound Features (HLCF). Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated.
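With only four participants and 59 sessions, leave-one-out cross-validation is the natural evaluation scheme: each sample is held out once while the model trains on the rest. A minimal sketch of the procedure follows; the 1-nearest-neighbour classifier here is a stand-in for illustration, not one of the models compared in the study.

```python
import numpy as np

def loocv_accuracy(X, y, classify):
    """Leave-one-out cross-validation: hold out each sample in
    turn, fit on the remaining samples, and score the held-out
    prediction. Returns the fraction of correct predictions."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i          # all samples except i
        pred = classify(X[mask], y[mask], X[i])
        correct += (pred == y[i])
    return correct / n

def nearest_neighbour(X_train, y_train, x):
    """Illustrative 1-NN classifier: label of the closest
    training sample in Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]
```

Any of the classifiers mentioned in the abstract (random forest, AdaBoost, SVM, etc.) can be plugged in as the `classify` callable.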
Research questions:
RQ1: Can we create a model of attention for PMLD/CP students using the CPT?
RQ2: What are the main correlations found in the CPT outcomes and the Multimodal Multisensor data?
Results: Overall, the random forest classification approach achieved the best results: 84.8% classification accuracy for attention and 65.4% for inattention. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. Incorporating person-specific data improved the classification outcome compared with remaining participant-neutral. We found that using High-Level handpicked Compound Features (HLCF) can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for classifying attention and inattention was shown to be eye-gaze. We have shown that we can accurately predict the level of attention of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability or human observation, and not reliant on a single mode of sensor input. In total, 2475 separate correlation tests were carried out over 55 data points using Pearson's correlation coefficient. Data points from the SDT and CPT outcome measures, the Multimodal Multisensor features, and participant characteristics were assessed longitudinally for cross-correlation significance. A strong positive correlation was found between participants' ability to maintain sustained and selective attention in the CPT and their academic progress in school (d′), P < .01. Participants who showed more inhibition in tests had progressed further in their academic assessments, P < .01. The Seek-X type CPT also showed specific physiological characteristics, including body-movement range and eye-gaze, that were significant in P scales such as "Reading" and "Listening", P < .05. We found that participant bias was overall liberal, B″D < 0. Participants showed no significant bias change during the sessions, and we found no significant correlation between bias (B″D) and sensitivity (d′).
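The sensitivity (d′) and bias (B″D) measures above come from Signal Detection Theory. A minimal sketch of how they are computed from hit and false-alarm rates follows; the function names are ours, and the B″D formula used is Donaldson's nonparametric bias index, assumed here to be the one intended.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(F), where z is the inverse of the
    standard normal CDF. Rates must lie strictly in (0, 1)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def b2d_bias(hit_rate, fa_rate):
    """Donaldson's nonparametric bias B''D; negative values indicate
    a liberal response criterion, as reported for the participants."""
    h, f = hit_rate, fa_rate
    num = (1 - h) * (1 - f) - h * f
    den = (1 - h) * (1 - f) + h * f
    return num / den
```

For a symmetric observer (e.g. H = 0.84, F = 0.16) the bias is zero while d′ is about 2, which matches the usual textbook interpretation of the two measures being independent.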
Conclusion: An approach to labeling Multimodal Multisensor data to train machine-learning algorithms to track the attention of students with profound and multiple disabilities has been presented. We posit that this approach can overcome the variation in observer inter-rater reliability when using standardized scales to track the emotional expression of students with such profound disabilities. The accuracy of our approach increases with multiple modes of sensor input, and our method is robust to sensor occlusion and fallout. Multiple sources of sensor input are provided to accommodate a wide variety of users and their needs. Our model can reliably track the attention of students with profound disabilities, regardless of the sensors available. A system incorporating this model can help teachers design personalized interventions for a very heterogeneous group of students, where teachers cannot possibly attend to each student's individual needs. This approach could be used to identify those with the greatest learning challenges, so as to guarantee that all students are supported to reach their full potential.
Keywords: Affective computing in education, affect detection, attention, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, Signal Detection Theory, selective attention, sustained attention, student engagement
On driver behavior recognition for increased safety: A roadmap
Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking into account drivers' states, e.g., whether a driver is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver's state, and uses it to implement innovative Human-Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.
Sensor fusion in smart camera networks for ambient intelligence
This short report introduces the topics of PhD research that was conducted in 2008-2013 and defended in July 2013. The PhD thesis covers sensor fusion theory, gathers it into a framework with design rules for fusion-friendly design of vision networks, and elaborates on the rules through fusion experiments performed with four distinct applications of Ambient Intelligence.
Review of data fusion methods for real-time and multi-sensor traffic flow analysis
Recent developments in intelligent transportation systems (ITS) require the input of various kinds of data in real time and from multiple sources, which imposes additional research and application challenges. Ongoing studies on Data Fusion (DF) have produced significant improvements in ITS and manifested an enormous impact on its growth. This paper reviews the implementation of DF methods in ITS to facilitate traffic flow analysis (TFA) and solutions that entail the prediction of various traffic variables such as driving behavior, travel time, speed, density, incidents, and traffic flow. It attempts to identify and discuss real-time and multi-sensor data sources that are used for various traffic domains, including road/highway management, traffic state estimation, and traffic controller optimization. Moreover, it attempts to associate the abstractions of data-level fusion, feature-level fusion, and decision-level fusion with DF methods to better understand the role of DF in TFA and ITS. Consequently, the main objective of this paper is to review DF methods used for real-time and multi-sensor (heterogeneous) TFA studies. The review outcomes are (i) a guideline for constructing DF methods that involves preprocessing, filtering, decision, and evaluation as core steps, (ii) a description of the recent DF algorithms or methods that adopt real-time and multi-sensor source data and the impact of these data sources on the improvement of TFA, (iii) an examination of the testing and evaluation methodologies and the popular datasets, and (iv) an identification of several research gaps, some current challenges, and new research trends.
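Of the three fusion abstractions the review distinguishes, decision-level fusion is the simplest to illustrate: each sensor's classifier votes on a label and the votes are combined. The sketch below shows a weighted majority vote; the labels and weights are invented for the example and do not come from any specific method in the review.

```python
from collections import Counter

def decision_level_fusion(decisions, weights=None):
    """Combine per-sensor classifier decisions by a (weighted)
    majority vote -- the simplest decision-level DF scheme.

    decisions: list of labels, one per sensor/classifier.
    weights:   optional per-sensor reliability weights; defaults
               to an unweighted vote."""
    if weights is None:
        weights = [1.0] * len(decisions)
    tally = Counter()
    for label, w in zip(decisions, weights):
        tally[label] += w
    return tally.most_common(1)[0][0]
```

Data-level fusion would instead combine the raw measurements before classification, and feature-level fusion would concatenate per-sensor feature vectors; the trade-off is between information retained and robustness to individual sensor failure.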