14,920 research outputs found
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications that recognize user-related social and cognitive activities in any situation and at any location. Context awareness provides the capability of being conscious of the physical environments or situations around mobile device users, allowing network services to respond proactively and intelligently. The key idea behind context-aware applications is to encourage users to collect, analyze and share local sensory knowledge for large-scale community use by creating a smart network. Such a network is capable of making autonomous logical decisions to actuate environmental objects and to assist individuals. However, many open challenges remain, most of which arise because the middleware services provided on mobile devices have limited resources in terms of power, memory and bandwidth. It is therefore critically important to study how these drawbacks can be characterized and resolved, and at the same time to better understand the opportunities for the research community to contribute to context awareness. To this end, this paper surveys the literature over the period 1991-2014, from the emerging concepts to applications of context awareness on mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.
Transportation mode recognition fusing wearable motion, sound and vision sensors
We present the first work that investigates the potential of improving the performance of transportation mode recognition through fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method with the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing the eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the actual performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
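The fixed-rule fusion schemes named in this abstract (Sum, Product, Majority Voting, Borda Count) can be sketched as follows. This is an illustrative sketch only: the class probabilities are invented, and the function names are my own rather than anything from the SHL work.

```python
import numpy as np

# The eight SHL transportation activities named in the abstract.
CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]

def fuse_sum(probs):
    """Sum rule: add the per-classifier posteriors, pick the argmax."""
    return int(np.argmax(np.sum(probs, axis=0)))

def fuse_product(probs):
    """Product rule: multiply posteriors element-wise, pick the argmax."""
    return int(np.argmax(np.prod(probs, axis=0)))

def fuse_majority(probs):
    """Majority voting: each classifier votes its argmax; ties fall back to the sum rule."""
    votes = np.argmax(probs, axis=1)
    counts = np.bincount(votes, minlength=probs.shape[1])
    if np.sum(counts == counts.max()) > 1:  # tie between classes
        return fuse_sum(probs)
    return int(np.argmax(counts))

def fuse_borda(probs):
    """Borda count: each classifier ranks the classes; a class earns
    0 points from the classifier ranking it worst, n-1 from the one ranking it best."""
    n = probs.shape[1]
    points = np.zeros(n)
    for p in probs:
        order = np.argsort(p)          # ascending: worst class first
        points[order] += np.arange(n)  # worst gets 0 points, best gets n-1
    return int(np.argmax(points))

# Hypothetical softmax outputs from the motion, sound and vision classifiers.
probs = np.array([
    [0.10, 0.60, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05],  # motion
    [0.05, 0.50, 0.10, 0.05, 0.10, 0.10, 0.05, 0.05],  # sound
    [0.20, 0.30, 0.05, 0.05, 0.20, 0.10, 0.05, 0.05],  # vision
])
print(CLASSES[fuse_sum(probs)])       # Walk
print(CLASSES[fuse_product(probs)])   # Walk
print(CLASSES[fuse_majority(probs)])  # Walk
print(CLASSES[fuse_borda(probs)])     # Walk
```

The adaptive fuser described in the abstract would instead treat the three posterior vectors as a 24-dimensional feature and train a second-stage classifier on them.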
Review of computer vision in intelligent environment design
This paper discusses and compares the use of vision-based and non-vision-based technologies in developing intelligent environments. By reviewing related projects that use vision-based techniques in intelligent environment design, the achieved functions, technical issues and drawbacks of those projects are discussed and summarized, and potential solutions for future improvement are proposed, which leads to the prospective direction of my PhD research.
DeePLT: Personalized Lighting Facilitated by Trajectory Prediction of Recognized Residents in the Smart Home
In recent years, making various parts of the home intelligent has become one of the essential features of any modern home. One of these parts is the intelligent lighting system that personalizes the light for each person. This paper proposes an intelligent system based on machine learning that personalizes lighting in the immediate future location of a recognized user, inferred by trajectory prediction. The proposed system consists of the following modules: (I) human detection, to detect and localize the person in each given video frame; (II) face recognition, to identify the detected person; (III) human tracking, to track the person across the sequence of video frames; and (IV) trajectory prediction, to forecast the user's future location in the environment using Inverse Reinforcement Learning. The proposed method provides a unique profile for each person, including specifications, face images, and custom lighting settings; this profile is used in the lighting adjustment process. Unlike other methods that assume constant lighting for every person, our system can apply each person's desired lighting, in terms of color and light intensity, without direct user intervention, so the lighting is adjusted faster and more efficiently. In addition, the predicted trajectory path lets the proposed system apply the desired lighting in advance, creating more pleasant and comfortable conditions for the home residents. In the experimental results, the system applied the desired lighting in an average time of 1.4 seconds from the moment of entry, and achieved 22.1 mAP in human detection, 95.12% accuracy in face recognition, 93.3% MDP in human tracking, and 10.80 MinADE20, 18.55 MinFDE20, 15.8 MinADE5 and 30.50 MinFDE5 in trajectory prediction.
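The per-resident profile and lighting-adjustment step described above can be sketched roughly as below. All names, fields, and values here are hypothetical illustrations, not details taken from the DeePLT paper.

```python
from dataclasses import dataclass

@dataclass
class ResidentProfile:
    """Hypothetical per-resident profile, mirroring the idea of storing
    custom lighting settings alongside identity information."""
    name: str
    color_rgb: tuple   # preferred light color as (R, G, B)
    intensity: float   # preferred brightness in [0, 1]

def adjust_lighting(profile, predicted_zone, zones):
    """Apply the resident's preferred lighting to the zone the trajectory
    predictor expects them to enter next (illustrative only)."""
    zones[predicted_zone] = {"color": profile.color_rgb,
                             "intensity": profile.intensity}
    return zones

# Example: the predictor forecasts the recognized resident entering the kitchen.
alice = ResidentProfile("Alice", (255, 214, 170), 0.6)
zones = {"kitchen": None, "hall": None}
zones = adjust_lighting(alice, "kitchen", zones)
print(zones["kitchen"])  # {'color': (255, 214, 170), 'intensity': 0.6}
```

The point of applying the setting to the *predicted* zone, rather than waiting for an occupancy sensor, is what lets such a system light the room before the resident arrives.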
- …