Combining Multiple Sensors for Event Detection of Older People
We herein present a hierarchical model-based framework for event detection using multiple sensors. Event models combine a priori knowledge of the scene (3D geometric and semantic information, such as contextual zones and equipment) with moving objects (e.g., a Person) detected by a video monitoring system. The event models follow a generic ontology based on natural language, which allows domain experts to adapt them easily. The framework's novelty lies in combining multiple sensors at the decision (event) level and handling their conflicts using a probabilistic approach. The conflict handling consists of computing the reliability of each sensor before fusion, using an alternative combination rule for Dempster-Shafer Theory. The framework is evaluated on multisensor recordings of instrumental activities of daily living (e.g., watching TV, writing a check, preparing tea, organizing the weekly intake of prescribed medication) by participants in a clinical trial for an Alzheimer's disease study. Two fusion cases are presented: the combination of events (or activities) from heterogeneous sensors (an ambient RGB camera and a wearable inertial sensor) in a deterministic fashion, and the combination of conflicting events from video cameras with partially overlapping fields of view (an RGB and an RGB-D camera, Kinect). Results show that the framework improves the event detection rate in both cases.
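The discounting-then-fusion step the abstract describes can be sketched in a few lines. The paper uses an alternative combination rule; for illustration only, this sketch uses classical Dempster's rule together with standard Shafer discounting by a reliability factor, and the sensor names, reliability values, and mass assignments are invented:

```python
from itertools import product

THETA = frozenset({"event", "no_event"})  # frame of discernment

def discount(m, alpha):
    """Shafer discounting: scale masses by sensor reliability alpha and
    move the remainder to total ignorance (the full frame)."""
    out = {A: alpha * v for A, v in m.items()}
    out[THETA] = out.get(THETA, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Classical Dempster's rule: conjunctive combination followed by
    normalization over the conflicting (empty-intersection) mass."""
    combined, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    k = 1.0 - conflict
    return {A: v / k for A, v in combined.items()}

# Two sensors reporting belief about a "watching TV" event (invented values)
cam = {frozenset({"event"}): 0.9, THETA: 0.1}     # RGB camera, trusted more
imu = {frozenset({"no_event"}): 0.6, THETA: 0.4}  # inertial sensor, noisier

fused = dempster(discount(cam, 0.9), discount(imu, 0.5))
print(fused)
```

Because the camera is discounted less than the inertial sensor, the fused mass favors the event hypothesis even though the two sensors disagree.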
Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are
clearly transforming the way we live and perceive technology. Today's
smartphones benefit from almost ubiquitous Internet connectivity and come
equipped with a plethora of inexpensive yet powerful embedded sensors, such as
accelerometer, gyroscope, microphone, and camera. This unique combination has
enabled revolutionary applications based on the mobile crowdsensing paradigm,
such as real-time road traffic monitoring, air and noise pollution sensing,
crime control, and wildlife monitoring, to name a few. Unlike prior sensing
paradigms, humans are now the primary actors of the sensing process, as they
are essential to retrieving reliable and up-to-date information
about the event being monitored. As humans may behave unreliably or
maliciously, assessing and guaranteeing Quality of Information (QoI) becomes
more important than ever. In this paper, we provide a new framework for
defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the
current state-of-the-art on the topic. We also outline novel research
challenges, along with possible directions of future work. To appear in ACM Transactions on Sensor Networks (TOSN).
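The contributor-reliability assessment at the heart of QoI can be sketched with a minimal iterative truth-discovery loop, in which contributors who agree with the current consensus gain weight. This is an illustrative scheme, not the specific framework the survey proposes, and the report data and contributor names are invented:

```python
def truth_discovery(reports, iters=10):
    """Jointly estimate event truth and contributor reliability.
    reports: {contributor: {event_id: bool observation}}."""
    weights = {u: 1.0 for u in reports}
    events = {e for obs in reports.values() for e in obs}
    truth = {}
    for _ in range(iters):
        # Reliability-weighted vote for each monitored event.
        for e in events:
            score = sum(w for u, w in weights.items()
                        if reports[u].get(e) is True)
            total = sum(w for u, w in weights.items() if e in reports[u])
            truth[e] = (score / total > 0.5) if total else False
        # Reliability = fraction of a contributor's reports matching consensus.
        for u, obs in reports.items():
            if obs:
                agree = sum(truth[e] == v for e, v in obs.items())
                weights[u] = agree / len(obs)
    return truth, weights

reports = {
    "alice":   {"jam_5th_ave": True,  "noise_park": True},
    "bob":     {"jam_5th_ave": True,  "noise_park": True},
    "mallory": {"jam_5th_ave": False, "noise_park": False},  # unreliable
}
truth, weights = truth_discovery(reports)
print(truth, weights)
```

After a few iterations the contributor who consistently contradicts the consensus is down-weighted, so their reports no longer dilute the inferred event state.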
Modeling engagement with multimodal multisensor data: the continuous performance test as an objective tool to track flow
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of machine learning approaches were evaluated. Overall, the random forest classifier achieved the best results: 93.3% accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that high-level handpicked features improve classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for classifying engagement and distraction was shown to be eye gaze.
We have shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that is not subject to inter-rater reliability, does not depend on human observation, and does not rely on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
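The evaluation protocol described above (a random forest over nine features, scored with leave-one-out cross-validation) can be sketched with scikit-learn. The feature matrix below is a synthetic stand-in, since the study's gaze/EEG/pose data are not available, and the hyperparameters are illustrative, not the authors' settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 40 session windows x 9 features (e.g. gaze dispersion,
# EEG band power, pose movement); label 1 = engaged, 0 = disengaged.
X = rng.normal(size=(40, 9))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy engagement signal

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Leave-one-out: train on 39 windows, test on the held-out one, 40 times.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

Leave-one-out is a natural choice here because with so few participants and sessions, every labeled window is needed for training.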
Experience with Using the Sensewear BMS Sensor System in the Context of a Health and Wellbeing Application
An assessment of a sensor designed for monitoring energy expenditure, activity, and sleep was conducted in the context of a research project that is developing a weight management application. The overall goal of this project is to effect sustainable behavioural change with respect to diet and exercise in order to improve health and wellbeing. This paper reports the results of a pre-trial in which three volunteers wore the sensor for a total of 11 days. The aim was to gain experience with the sensor and determine whether it would be suitable for incorporation into the ICT system developed by the project, to be trialled later on a larger population. In this paper we focus mainly on activity monitoring and user experience. Data and results, including visualizations and reports, are presented and discussed. User experience proved positive in most respects. Exercise levels and sleep patterns correspond to user logs of exercise sessions and sleep. Issues raised relate to accuracy, one source of possible interference, and the desirability of enhancing the system with real-time data transmission and analysis to enable real-time feedback. It is argued that automatic activity classification is needed to properly analyse and interpret physical activity data captured by accelerometry.
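The automatic activity classification the paper argues for can be sketched, in its simplest form, as thresholding the variability of the accelerometer magnitude over a window. The threshold values and sample data below are invented for illustration, not calibrated against the SenseWear device:

```python
import math
from statistics import pstdev

def classify_activity(window, still_thresh=0.05, active_thresh=0.5):
    """Classify a window of 3-axis accelerometer samples (in g) by the
    standard deviation of the acceleration magnitude.
    Thresholds are illustrative, not calibrated values."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    sd = pstdev(mags)
    if sd < still_thresh:
        return "sedentary"
    if sd < active_thresh:
        return "light activity"
    return "vigorous activity"

# Invented sample windows: at rest the magnitude stays near 1 g;
# movement makes it fluctuate.
resting = [(0.0, 0.0, 1.0)] * 50
moving = [(0.3 * (i % 5), 0.0, 1.0) for i in range(50)]
print(classify_activity(resting))
print(classify_activity(moving))
```

Real systems replace the hand-set thresholds with a trained classifier over richer windowed features (spectral energy, axis correlations), but the windowing-and-featurizing structure is the same.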
What Characterizes Safety of Ambient Assisted Living Technologies?
Ambient assisted living (AAL) technologies aim to increase an individual's safety at home by recognizing, at an early stage, risks or events that might otherwise harm the individual. A clear definition of safety in the context of AAL is still missing, and its facets have yet to be shaped. The objective of this paper is to characterize the facets of AAL-related safety, to identify opportunities and challenges of AAL regarding safety, and to identify open research issues in this context. Papers reporting aspects of AAL-related safety were selected in a literature search. Of 395 citations retrieved, 28 studies were included in the current review. Two main facets of safety were identified: user safety and system safety. System safety concerns an AAL system's reliability, correctness, and data quality. User safety reflects the impact on an individual's physical and mental health. Privacy, data safety and security issues, sensor quality and integration of sensor data, as well as technical failures of sensors and systems, are reported challenges. To conclude, there is a research gap regarding methods and metrics for measuring user and system safety in the context of AAL technologies.
Alternative Computer Assisted Communicative Task-based Language Testing: New Communicational and Interactive Online Skills
Computer-assisted language learning knowledge tests should no longer be designed to measure individual competence through traditional skills such as reading, comprehension, and writing; instead, they should diagnose interactive and communication skills in foreign languages. In recent years of online education, it has become necessary to review the concept of interactive competence in digital environments in a way that complements its traditional use. It is important to promote a new typology of alternative tasks and items in tests, in which examinees can demonstrate real interactive performance in communication and interaction in a digital scenario. This should be done through tools that facilitate oral negotiation, the management and understanding of information extracted from online repositories, the search for suitable online digital material, and the use of new modes of audio-visual communication. Although some of these tasks have previously been used in a complementary way in the design of language tests, they have not been applied coherently as an assessment tool. A first approach was made by Miguel Alvarez, Garcia Laborda & Magal-Royo (2021) in the development of oral negotiation skills through the use of interactive tools. The current online assessment models analyzed by Garcia Laborda & Alvarez Fernandez (2021) indicate the need to seek new ways of assessing foreign languages through the design of tests suited to the current digital and interactive world.
Magal-Royo, T.; García Laborda, J.; Mora Cantallops, M.; Sánchez Alonso, S. (2021). Alternative Computer Assisted Communicative Task-based Language Testing: New Communicational and Interactive Online Skills. International Journal of Emerging Technologies in Learning (Online), 16(19):251-259. https://doi.org/10.3991/ijet.v16i19.26035