
    Predicting human behavior in smart environments: theory and application to gaze prediction

    Predicting human behavior is desirable in many application scenarios in smart environments, yet existing models of eye movements do not take contextual factors into account. This thesis addresses that gap with a systematic machine-learning approach in which user profiles of eye-movement behavior are learned from data. In addition, a theoretical innovation is presented that goes beyond pure data analysis: the thesis proposes modeling eye movements as a Markov Decision Process and uses the Inverse Reinforcement Learning paradigm to infer the user's eye-movement behavior.
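    As a rough, self-contained illustration of the idea described above (not the thesis's actual model), the sketch below treats gaze regions as MDP states with contextual features and recovers a linear reward by matching feature expectations under a softmax policy, in the spirit of maximum-entropy inverse reinforcement learning; all names, sizes, and data are synthetic assumptions.

```python
import numpy as np

# Illustrative sketch only: gaze regions are treated as states of a small MDP,
# each with a contextual feature vector, and a linear reward R(s) = w . phi(s)
# is recovered from observed fixation sequences by matching feature
# expectations under a softmax policy (a maximum-entropy-IRL-style update).
# Sizes, features, and demonstration data are synthetic placeholders.
rng = np.random.default_rng(0)
n_regions, n_features = 6, 4
phi = rng.normal(size=(n_regions, n_features))                    # context features per gaze region
demos = [rng.integers(0, n_regions, size=20) for _ in range(5)]   # observed fixation sequences

# Empirical feature expectation of the demonstrated gaze behaviour.
empirical = np.mean([phi[s] for traj in demos for s in traj], axis=0)

w, lr = np.zeros(n_features), 0.1
for _ in range(200):
    r = phi @ w                          # reward of each region under current weights
    p = np.exp(r - r.max())
    p /= p.sum()                         # softmax "policy" over gaze regions
    w += lr * (empirical - p @ phi)      # gradient: empirical minus expected features

r = phi @ w
p = np.exp(r - r.max()) / np.exp(r - r.max()).sum()
print("recovered reward weights:", np.round(w, 3))
print("predicted fixation distribution:", np.round(p, 3))
```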

    Pairwise Decomposition of Image Sequences for Active Multi-View Recognition

    A multi-view image sequence provides a much richer capacity for object recognition than a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to the next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
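    A minimal sketch of the pairwise decomposition follows, assuming a stub pair classifier and a hypothetical pair-weighting function in place of the trained CNNs; it only illustrates how per-pair predictions from an arbitrary-length sequence could be fused into a single object label.

```python
import itertools
import numpy as np

# Toy illustration of the pairwise decomposition: an image sequence is split
# into view pairs, each pair is scored by a pair classifier (a CNN in the
# paper; a random stub here), and the per-pair predictions are fused with a
# weighting of each pair's contribution. `pair_classifier`, `pair_weight`,
# and the weighting scheme are placeholders, not the authors' actual models.
rng = np.random.default_rng(1)
n_classes = 10

def pair_classifier(view_a, view_b):
    """Stand-in for the CNN mapping an image pair to class logits."""
    return rng.normal(size=n_classes)

def pair_weight(i, j):
    """Stand-in for the learned contribution weight of views (i, j),
    e.g. depending on the baseline between the two viewpoints."""
    return 1.0 / (1.0 + abs(j - i))

def classify_sequence(views):
    logits, total = np.zeros(n_classes), 0.0
    for (i, a), (j, b) in itertools.combinations(enumerate(views), 2):
        w = pair_weight(i, j)
        logits += w * pair_classifier(a, b)
        total += w
    return int(np.argmax(logits / total))

views = [rng.normal(size=(64, 64)) for _ in range(5)]   # dummy greyscale or depth views
print("predicted class:", classify_sequence(views))
```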

    Active Object Classification from 3D Range Data with Mobile Robots

    This thesis addresses the problem of how to improve the acquisition of 3D range data with a mobile robot for the task of object classification. Establishing the identities of objects in unknown environments is fundamental for robotic systems and helps enable many abilities such as grasping, manipulation, or semantic mapping. Objects are recognised from data obtained by sensor observations; however, these data are highly dependent on viewpoint, and variation in the position and orientation of the sensor relative to an object can result in large variation in perception quality. Additionally, cluttered environments present a further challenge because key data may be missing. These issues are not always solved by traditional passive systems, where data are collected from a fixed navigation process and then fed into a perception pipeline. This thesis considers an active approach to data collection by deciding where it is most appropriate to make observations for the perception task. The core contributions of this thesis are a non-myopic planning strategy to collect data efficiently under resource constraints, and supporting viewpoint prediction and evaluation methods for object classification. Our approach to planning uses Monte Carlo methods coupled with a classifier based on non-parametric Bayesian regression. We present a novel anytime and non-myopic planning algorithm, Monte Carlo active perception, that extends Monte Carlo tree search to partially observable environments and the active perception problem. This is combined with a particle-based estimation process and a learned observation likelihood model that uses Gaussian process regression. To support planning, we present 3D point cloud prediction algorithms and utility functions that measure the quality of viewpoints by their discriminatory ability and effectiveness under occlusion. The utility of viewpoints is quantified by information-theoretic metrics, such as mutual information, and an alternative utility function that exploits learned data is developed for special cases. The algorithms in this thesis are demonstrated in a variety of scenarios. We extensively test our online planning and classification methods in simulation as well as with indoor and outdoor datasets. Furthermore, we perform hardware experiments with different mobile platforms equipped with different types of sensors. Most significantly, our hardware experiments with an outdoor robot are, to our knowledge, the first demonstrations of online active perception in a real outdoor environment. Active perception has broad significance in many applications. This thesis emphasises the advantages of an active approach to object classification and presents its assimilation with a wide range of robotic systems, sensors, and perception algorithms. By demonstrating performance enhancements and diversity, our hope is that the concept of considering perception and planning in an integrated manner will be of benefit in improving current systems that rely on passive data collection.
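    As a small illustration of the information-theoretic utilities mentioned above, the sketch below scores candidate viewpoints by the mutual information between the class belief and the expected observation; the discrete observation model here is a synthetic placeholder, whereas the thesis learns its observation likelihoods with Gaussian process regression.

```python
import numpy as np

# Illustrative viewpoint-utility computation: each candidate viewpoint v is
# scored by the mutual information I(C; Z | v) between the class belief and
# the observation expected there. The observation model p_obs[v, c, z] is
# synthetic; it stands in for a learned observation likelihood.
rng = np.random.default_rng(2)
n_views, n_classes, n_obs = 4, 3, 5

belief = np.array([0.5, 0.3, 0.2])                                 # current class belief
p_obs = rng.dirichlet(np.ones(n_obs), size=(n_views, n_classes))   # p(z | v, c)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(v):
    # I(C; Z | v) = H(C) - sum_z p(z | v) * H(C | z, v)
    p_z = belief @ p_obs[v]                          # predictive observation distribution
    conditional = 0.0
    for z in range(n_obs):
        posterior = belief * p_obs[v, :, z] / p_z[z] # Bayes update for outcome z
        conditional += p_z[z] * entropy(posterior)
    return entropy(belief) - conditional

scores = [mutual_information(v) for v in range(n_views)]
print("viewpoint utilities:", np.round(scores, 4))
print("next best view:", int(np.argmax(scores)))
```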

    Efficient model learning for dialog management

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007, by Finale Doshi. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 118-122).
    Partially Observable Markov Decision Processes (POMDPs) have succeeded in many planning domains because they can optimally trade off between actions that increase an agent's knowledge about its environment and actions that increase its reward. However, POMDPs are defined with a large number of parameters which are difficult to specify from domain knowledge, and gathering enough data to specify the parameters a priori may be expensive. This work develops several efficient algorithms for learning the POMDP parameters online and demonstrates them on a dialog manager for a robotic wheelchair. In particular, we show how a combination of specialized queries ("meta-actions") enables us to create a robust dialog manager that avoids the pitfalls of other POMDP-learning approaches. The dialog manager's ability to reason about its uncertainty -- and take advantage of low-risk opportunities to reduce that uncertainty -- leads to more robust policy learning.
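    The following toy sketch is in the spirit of the belief tracking behind such a dialog manager, not the thesis's algorithm: it maintains a belief over user goals, performs a Bayes update for noisy utterances, and asks a clarifying question (loosely analogous to a meta-action) when its uncertainty is high; the goals, noise rate, and entropy threshold are illustrative assumptions.

```python
import numpy as np

# Toy dialog belief tracker: maintain a belief over the user's intended goal,
# update it with noisy speech observations, and ask a clarifying question
# when the belief is too uncertain to act on. Goals, the confusion model,
# and the 0.6-nat threshold are placeholders, not values from the thesis.
goals = ["kitchen", "elevator", "office"]
belief = np.ones(len(goals)) / len(goals)

def update(belief, heard, p_correct=0.8):
    """Bayes update assuming the recogniser hears the true goal with prob p_correct."""
    likelihood = np.full(len(goals), (1 - p_correct) / (len(goals) - 1))
    likelihood[goals.index(heard)] = p_correct
    posterior = belief * likelihood
    return posterior / posterior.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

for heard in ["kitchen", "office", "kitchen"]:
    belief = update(belief, heard)
    if entropy(belief) > 0.6:
        print(f"heard '{heard}' -> still unsure {np.round(belief, 2)}, ask user to confirm")
    else:
        print(f"heard '{heard}' -> act on goal '{goals[int(np.argmax(belief))]}'")
```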

    Active Vision-Based Guidance with a Mobile Device for People with Visual Impairments

    The aim of this research is to determine whether an active-vision system with a human in the loop can be implemented to guide a user with visual impairments in finding a target object. Active vision techniques have successfully been applied to various electro-mechanical object search and exploration systems to boost their effectiveness at a given task. However, despite the potential of intelligent visual sensor arrays to enhance a user’s vision capabilities and alleviate some of the impacts that visual deficiencies have on their day-to-day lives, active vision techniques with a human in the loop remain an open research topic. In this thesis, an active guidance system is presented, which uses visual input from an object detector and an initial understanding of a typical room layout to generate navigation cues that assist a user with visual impairments in finding a target object. A complete guidance system prototype is implemented, along with a new audio-based interface and a state-of-the-art object detector, on a mobile device and evaluated with a set of users in real environments. The results show that the active guidance approach performs well compared to unguided solutions. This research highlights the potential benefits of the proposed active guidance controller and audio interface, which could enhance current vision-based guidance systems and travel aids for people with visual impairments.
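    Purely as an illustration of how detector output might be turned into simple navigation cues (the thesis's actual interface, cue vocabulary, and thresholds are not reproduced here), the sketch below maps a target's bounding box to a left/right/forward audio-style instruction:

```python
# Toy guidance-cue generator: given a detected bounding box for the target
# object, produce a coarse spoken-style instruction based on where the box
# sits in the image and how much of the frame it fills. The box format,
# tolerances, and cue wording are assumptions made for this example.
def guidance_cue(box, image_width, image_height, center_tol=0.15):
    """box = (x_min, y_min, x_max, y_max) of the detected target, in pixels."""
    cx = (box[0] + box[2]) / 2 / image_width                        # horizontal centre, 0..1
    area = (box[2] - box[0]) * (box[3] - box[1]) / (image_width * image_height)
    if cx < 0.5 - center_tol:
        return "turn left"
    if cx > 0.5 + center_tol:
        return "turn right"
    return "move forward" if area < 0.2 else "target ahead, reach out"

print(guidance_cue((900, 200, 1100, 400), 1280, 720))   # target right of centre -> "turn right"
```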