Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
Current "Internet of Things" concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
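To make the semantic layer concrete, here is a minimal, hypothetical sketch of how a driver's traffic report could be expressed with the W3C/OGC SOSA vocabulary (the semantic extension referenced above). All URIs (`urn:car:123`, `urn:prop:trafficJam`, etc.) are invented for illustration; the paper's actual MMI and SWE message formats are not reproduced here.

```python
from datetime import datetime, timezone

SOSA = "http://www.w3.org/ns/sosa/"
XSD = "http://www.w3.org/2001/XMLSchema#"

def observation_turtle(obs_id, sensor, prop, feature, result, when):
    """Serialize a human-generated observation as a Turtle snippet using
    W3C/OGC SOSA terms. All identifiers passed in are hypothetical."""
    return (
        f"@prefix sosa: <{SOSA}> .\n"
        f"@prefix xsd: <{XSD}> .\n"
        f"<urn:obs:{obs_id}> a sosa:Observation ;\n"
        f"    sosa:madeBySensor <{sensor}> ;\n"
        f"    sosa:observedProperty <{prop}> ;\n"
        f"    sosa:hasFeatureOfInterest <{feature}> ;\n"
        f"    sosa:hasSimpleResult \"{result}\" ;\n"
        f"    sosa:resultTime \"{when.isoformat()}\"^^xsd:dateTime .\n"
    )

# Example: a driver reports heavy congestion on a (hypothetical) road segment.
snippet = observation_turtle(
    "42", "urn:car:123", "urn:prop:trafficJam", "urn:road:A6",
    "heavy congestion", datetime(2024, 1, 1, tzinfo=timezone.utc))
```

In the paper's architecture such an observation would be produced through the MMI dialog with the driver and published through SWE services; the snippet above only shows the vocabulary level.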
Behavior analysis for aging-in-place using similarity heatmaps
The demand for healthcare services from a growing population of older adults faces a shortage of skilled caregivers and constantly rising healthcare costs. In addition, the strong preference of the elderly to live independently has been driving much research on "ambient-assisted living" (AAL) systems to support aging-in-place. In this paper, we propose to employ a low-resolution image sensor network for behavior analysis of a home occupant. A network of 10 low-resolution cameras (30x30 pixels) is installed in the service flat of an elderly person, from which the user's mobility tracks are extracted using a maximum likelihood tracker. We propose a novel measure to find similar patterns of behavior between each pair of days from the user's detected positions, based on heatmaps and Earth mover's distance (EMD). Then, we use an exemplar-based approach to identify sleeping, eating, and sitting activities, as well as walking patterns, of the elderly user over two weeks of real-life recordings. The proposed system achieves an overall accuracy of about 94%.
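As an illustration of the day-to-day similarity measure, the sketch below builds a normalized position heatmap per day and compares two heatmaps. For simplicity it uses 1D Wasserstein distances on the x and y marginals as a lightweight stand-in for the paper's full 2D Earth mover's distance; the grid size and room extent are invented parameters, not the paper's.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def day_heatmap(positions, bins=8, extent=(0.0, 10.0, 0.0, 10.0)):
    """Accumulate one day's (x, y) positions into a normalized 2D heatmap."""
    x0, x1, y0, y1 = extent
    h, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                             bins=bins, range=[[x0, x1], [y0, y1]])
    return h / h.sum()

def heatmap_distance(h1, h2):
    """Compare two heatmaps via 1D Wasserstein distances on the x and y
    marginals (a simplified stand-in for the paper's full 2D EMD)."""
    centers = np.arange(h1.shape[0]) + 0.5
    dx = wasserstein_distance(centers, centers,
                              h1.sum(axis=1), h2.sum(axis=1))
    dy = wasserstein_distance(centers, centers,
                              h1.sum(axis=0), h2.sum(axis=0))
    return dx + dy
```

Two days spent in the same part of the flat yield a small distance; a day spent in a different room yields a larger one, which is the signal the exemplar-based activity matching builds on.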
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.
DeePLT: Personalized Lighting Facilitates by Trajectory Prediction of Recognized Residents in the Smart Home
In recent years, the intelligence of various parts of the home has become one
of the essential features of any modern home. One of these parts is the
intelligence lighting system that personalizes the light for each person. This
paper proposes an intelligent system based on machine learning that
personalizes lighting in the instant future location of a recognized user,
inferred by trajectory prediction. Our proposed system consists of the
following modules: (I) human detection to detect and localize the person in
each given video frame, (II) face recognition to identify the detected person,
(III) human tracking to track the person in the sequence of video frames and
(IV) trajectory prediction to forecast the future location of the user in the
environment using Inverse Reinforcement Learning. The proposed method provides
a unique profile for each person, including specifications, face images, and
custom lighting settings. This profile is used in the lighting adjustment
process. Unlike other methods that consider constant lighting for every person,
our system can apply each person's desired lighting in terms of color and
light intensity without direct user intervention. Therefore, the lighting is
adjusted with higher speed and better efficiency. In addition, the predicted
trajectory path makes the proposed system apply the desired lighting, creating
more pleasant and comfortable conditions for the home residents. In the
experimental results, the system applied the desired lighting in an average
time of 1.4 seconds from the moment of entry, as well as a performance of
22.1 mAP in human detection, 95.12% accuracy in face recognition, 93.3% MDP in
human tracking, and 10.80 MinADE20, 18.55 MinFDE20, 15.8 MinADE5 and 30.50
MinFDE5 in trajectory prediction.
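The MinADE_K/MinFDE_K figures above are standard trajectory-prediction metrics. Assuming the usual definitions (ADE = mean per-step L2 error, FDE = final-step L2 error, each minimized over K candidate trajectories), they can be computed as in the sketch below; the array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def min_ade_fde(preds, gt):
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    Returns (minADE_K, minFDE_K): the average displacement error and the
    final displacement error, each minimized over the K candidates."""
    errs = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) per-step L2
    ade = errs.mean(axis=1)                           # (K,) average error
    fde = errs[:, -1]                                 # (K,) final-step error
    return ade.min(), fde.min()
```

For example, if one of the K candidates matches the ground truth exactly, both metrics are zero regardless of how poor the other candidates are, which is why multi-sample predictors report min-over-K values.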
An Unsupervised Approach for Automatic Activity Recognition based on Hidden Markov Model Regression
Using supervised machine learning approaches to recognize human activities
from on-body wearable accelerometers generally requires a large amount of
labelled data. When ground truth information is not available, too expensive,
time consuming or difficult to collect, one has to rely on unsupervised
approaches. This paper presents a new unsupervised approach for human activity
recognition from raw acceleration data measured using inertial wearable
sensors. The proposed method is based upon joint segmentation of
multidimensional time series using a Hidden Markov Model (HMM) in a multiple
regression context. The model is learned in an unsupervised framework using the
Expectation-Maximization (EM) algorithm where no activity labels are needed.
The proposed method takes into account the sequential nature of the data and
is therefore well suited to temporal acceleration signals, allowing activities
to be detected accurately. It performs both segmentation and classification of
human activities. Experimental results demonstrate the efficiency of the
proposed approach with respect to standard supervised and unsupervised
classification approaches.
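The paper learns the HMM parameters with EM; as an illustration of the segmentation step only, the sketch below runs Viterbi decoding over a two-state Gaussian-emission HMM with hand-set parameters on a synthetic 1D acceleration signal. All parameter values are invented for the example.

```python
import numpy as np

def viterbi(obs, means, var, trans, init):
    """Most likely state sequence for a 1D Gaussian-emission HMM.
    obs: (T,) signal; means: (K,) per-state means; var: shared variance;
    trans: (K, K) transition matrix; init: (K,) initial distribution."""
    T, K = len(obs), len(means)
    # Per-state Gaussian log-likelihood of each observation.
    ll = (-0.5 * (obs[:, None] - means[None, :]) ** 2 / var
          - 0.5 * np.log(2 * np.pi * var))
    delta = np.log(init) + ll[0]
    psi = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)  # (from, to)
        psi[t] = scores.argmax(axis=0)           # best predecessor per state
        delta = scores.max(axis=0) + ll[t]
    # Backtrack the best path.
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path
```

On a signal that switches from a low-magnitude regime (e.g. sitting) to a high-magnitude one (e.g. walking), the decoded state sequence recovers the change point, which is the segmentation the paper obtains jointly with EM-learned parameters.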