Learning from life-logging data by hybrid HMM: a case study on active states prediction
In this paper, we propose a hybrid classifier-hidden Markov model (HMM) as a supervised learning approach to recognize daily active states from sequential life-logging data collected from wearable sensors. To cope with noise and incompleteness, we generate synthetic training data from a real dataset and, in conjunction with the HMM, propose a multiobjective genetic programming (MOGP) classifier, comparing it against support vector machines (SVMs) with various kernels. We demonstrate that the system works effectively with either algorithm to recognize personal active states with respect to medical references. We also show that MOGP generally yields better results than SVM without requiring an ad hoc kernel.
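The hybrid design described above, per-frame classifier posteriors smoothed by an HMM over the state sequence, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the classifier outputs are assumed to be per-frame state probabilities used as emission scores, and Viterbi decoding recovers the most likely sequence of active states.

```python
import numpy as np

def viterbi_smooth(frame_probs, trans, prior):
    """Decode the most likely state sequence given per-frame classifier
    posteriors (used as emission scores) and an HMM transition model.
    frame_probs: (T, S) classifier outputs; trans: (S, S); prior: (S,)."""
    T, S = frame_probs.shape
    log_p = np.log(frame_probs + 1e-12)
    log_A = np.log(trans + 1e-12)
    delta = np.log(prior + 1e-12) + log_p[0]   # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_A        # (S, S): from-state x to-state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_p[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):              # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With a "sticky" transition matrix, a single noisy frame whose classifier output mildly favors the wrong state is overridden by the temporal context, which is the point of pairing a frame-wise classifier with an HMM.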
Context-dependent fusion with application to landmine detection
Traditional machine learning and pattern recognition systems use a feature descriptor to describe the sensor data and a particular classifier (also called an expert or learner) to determine the true class of a given pattern. However, for complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, the combination of multiple classifiers plays an increasing role in solving these complex pattern recognition problems and has proven to be a viable alternative to using a single classifier. In this thesis we introduce a new Context-Dependent Fusion (CDF) approach. We use this method to fuse multiple algorithms that use different types of features and different classification methods on multiple sensor data. The proposed approach is motivated by the observation that no single algorithm can consistently outperform all others; in fact, the relative performance of different algorithms can vary significantly depending on several factors, such as the extracted features and the characteristics of the target class. The CDF method is a local approach that adapts the fusion method to different regions of the feature space. The goal is to take advantage of the strengths of a few algorithms in different regions of the feature space without being affected by the weaknesses of the other algorithms, while also avoiding the loss of potentially valuable information provided by a few weak classifiers by considering their output as well. The proposed fusion has three main interacting components. The first component, called Context Extraction, partitions the composite feature space into groups of similar signatures, or contexts. The second component assigns an aggregation weight to each detector's decision in each context based on its relative performance within that context. The third component combines the multiple decisions, using the learned weights, to make a final decision. For the Context Extraction component, a novel algorithm that performs clustering and feature discrimination is used to cluster the composite feature space and identify the relevant features for each cluster. For the fusion component, six different methods were proposed and investigated. The proposed approaches were applied to the problem of landmine detection. Detection and removal of landmines is a serious problem affecting civilians and soldiers worldwide. Several landmine detection algorithms have been proposed. Extensive testing of these methods has shown that the relative performance of different detectors can vary significantly depending on the mine type, geographical site, soil and weather conditions, burial depth, etc. Therefore, multi-algorithm and multi-sensor fusion is a critical component in landmine detection. Results on large and diverse real data collections show that the proposed method can identify meaningful and coherent clusters and that different expert algorithms can be identified for the different contexts. Our experiments have also indicated that the context-dependent fusion outperforms all individual detectors and several global fusion methods.
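The three-component scheme, contexts, per-context weights, and weighted combination, can be illustrated with a minimal sketch. This is not the thesis code (which uses a clustering-and-feature-discrimination algorithm and six fusion variants); here the context assignments are assumed given, and each detector's aggregation weight in a context is simply its accuracy there, normalized.

```python
import numpy as np

def context_weights(contexts, detector_scores, labels, n_contexts):
    """For each context, weight each detector by its relative accuracy there.
    contexts: (N,) context index per sample; detector_scores: (N, D) in [0, 1];
    labels: (N,) binary ground truth."""
    n_det = detector_scores.shape[1]
    W = np.zeros((n_contexts, n_det))
    for c in range(n_contexts):
        mask = contexts == c
        # accuracy of each thresholded detector within this context
        preds = detector_scores[mask] >= 0.5
        acc = (preds == labels[mask, None]).mean(axis=0)
        W[c] = acc / max(acc.sum(), 1e-12)   # normalize to aggregation weights
    return W

def fuse(contexts, detector_scores, W):
    """Combine detector confidences using each sample's context weights."""
    return (W[contexts] * detector_scores).sum(axis=1)
```

The local character of CDF shows up directly: a detector that dominates in one region of the feature space receives a high weight only in that context, so its weaknesses elsewhere do not contaminate the fused decision.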
Optimizing city-scale traffic through modeling observations of vehicle movements
The capability of traffic-information systems to sense the movement of millions of users and offer trip plans through mobile phones has enabled a new way of optimizing city traffic dynamics, turning transportation big data into insights and actions in a closed loop and evaluating this approach in the real world. Existing research has applied dynamic Bayesian networks and deep neural networks to make traffic predictions from floating car data, utilized dynamic programming and simulation approaches to identify how people normally travel with dynamic traffic assignment for policy research, and introduced Markov decision processes and reinforcement learning to optimally control traffic signals. However, none of these works utilized floating car data to suggest departure times and route choices in order to optimize city traffic dynamics. In this paper, we present a study showing that floating car data can lead to lower average trip time, higher on-time arrival ratio, and higher Charypar-Nagel score compared with how people normally travel. The study is based on optimizing a partially observable discrete-time decision process and is evaluated in one synthesized scenario, one partly synthesized scenario, and three real-world scenarios. This study points to the potential of a "living lab" approach where we learn, predict, and optimize behaviors in the real world.
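The core suggestion problem, picking a departure time and route given predicted time-dependent travel times, can be sketched far more simply than the paper's partially observable decision process. The sketch below is a hypothetical deterministic stand-in: a time-dependent Dijkstra search over candidate departure times, where `travel_time(u, v, t)` plays the role of the predictions learned from floating car data.

```python
import heapq

def best_departure_and_route(graph, travel_time, origin, dest, departures):
    """For each candidate departure time, compute the earliest arrival at dest
    via time-dependent Dijkstra; return the departure minimizing trip time.
    graph: {node: [neighbor, ...]}; travel_time(u, v, t) -> minutes."""
    def earliest_arrival(t0):
        dist = {origin: t0}
        pq = [(t0, origin)]
        while pq:
            t, u = heapq.heappop(pq)
            if u == dest:
                return t
            if t > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v in graph.get(u, []):
                ta = t + travel_time(u, v, t)  # cost depends on entry time
                if ta < dist.get(v, float("inf")):
                    dist[v] = ta
                    heapq.heappush(pq, (ta, v))
        return float("inf")
    best = min(departures, key=lambda t0: earliest_arrival(t0) - t0)
    return best, earliest_arrival(best) - best
```

Even this toy version exhibits the effect the study measures: shifting departure out of a congested window lowers trip time, which is what the closed loop of prediction and suggestion exploits at city scale.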
Automatic Recognition of Concurrent and Coupled Human Motion Sequences
We developed methods and algorithms for all parts of a motion recognition system, i.e., feature extraction, motion segmentation and labeling, motion primitive and context modeling, as well as decoding. We collected several datasets to compare our proposed methods with the state of the art in human motion recognition. The main contributions of this thesis are a structured functional motion decomposition and a flexible, scalable motion recognition system suitable for a humanoid robot.
Dynamic Switching State Systems for Visual Tracking
This work addresses the problem of how to capture the dynamics of maneuvering objects for visual tracking. Towards this end, the perspective of recursive Bayesian filters and the perspective of deep learning approaches for state estimation are considered and their functional viewpoints are brought together.
- …