
    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
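    A minimal sketch of the Markov-localization-style belief update that the selection procedure above is inspired by; the place set, transition model, and matching scores are illustrative assumptions, not the authors' implementation:

```python
# A minimal sketch of a Markov-localization-style predict/correct cycle over
# a discrete set of places. All numbers below are illustrative assumptions.
import numpy as np

def update_belief(belief, transition, appearance_scores):
    """One predict/correct cycle over a discrete set of places.

    belief            -- prior probability per place, shape (N,)
    transition        -- odometry-derived motion model, shape (N, N);
                         transition[i, j] = P(now at j | was at i)
    appearance_scores -- image-matching likelihood per place, shape (N,)
    """
    predicted = belief @ transition            # motion (odometry) update
    corrected = predicted * appearance_scores  # visual appearance update
    return corrected / corrected.sum()         # renormalize to a distribution

# Example: three places; odometry says the robot likely moved 0 -> 1, and the
# current view matches the stored appearance of place 1 best.
belief = np.array([0.8, 0.15, 0.05])
transition = np.array([[0.2, 0.7, 0.1],
                       [0.1, 0.2, 0.7],
                       [0.7, 0.1, 0.2]])
scores = np.array([0.1, 0.8, 0.1])
print(update_belief(belief, transition, scores))
```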

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
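    A minimal sketch of a Motion History Image update and the orientation-histogram pooling that the first approach above builds on; the threshold, decay, and binning values are illustrative assumptions, and the paper's extended MHI variant is not reproduced here:

```python
# A minimal sketch of a basic Motion History Image (MHI) update and an
# orientation-histogram descriptor. Parameter values are assumptions.
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=255, threshold=30, decay=16):
    """Update an MHI from two consecutive grayscale frames (uint8 arrays)."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    # Moving pixels are stamped with tau; stationary ones decay toward zero.
    mhi = np.where(motion, tau, np.maximum(mhi.astype(np.int16) - decay, 0))
    return mhi.astype(np.uint8)

def orientation_histogram(mhi, bins=8):
    """Pool a motion orientation histogram from the MHI gradient field."""
    gy, gx = np.gradient(mhi.astype(np.float32))
    angles = np.arctan2(gy, gx)    # per-pixel motion orientation
    weights = np.hypot(gx, gy)     # weight each pixel by gradient magnitude
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=weights)
    return hist / (hist.sum() + 1e-8)
```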

    Collaborative Appearance-Based Place Recognition and Improving Place Recognition Using Detection of Dynamic Objects

    This dissertation makes contributions to the problem of Long-Term Appearance-Based Place Recognition. We present a framework for place recognition in a collaborative scheme and a method to reduce the impact of dynamic objects on place representations. We demonstrate our findings using a state-of-the-art place recognition approach. We begin in Part I by describing the general problem of place recognition and its importance in applications where accurate localization is crucial. We discuss feature detection and description and also explain the functioning of several place recognition frameworks. In Part II, we present a novel framework for collaboration between agents from a pure appearance-based place recognition perspective. Using this framework, multiple agents can efficiently share partial or complete knowledge about places and benefit from their teamwork. This collaborative framework allows agents with limited storage and memory capacity to become useful in environment exploration tasks (for instance, by enabling remote recognition); includes procedures to manage an agent’s memory load and distributes knowledge of places across agents; allows the reuse of knowledge from one agent to another; and increases the tolerance for failure of individual agents. Part II also defines metrics which allow us to measure the performance of a system that uses the collaborative framework. Finally, in Part III, we present an innovative method to improve the recognition of places in environments densely populated by dynamic objects. We demonstrate that we can improve the recognition performance in these environments by incorporating high-level information from dynamic objects. Tests conducted using a synthetic dataset show the benefits of our approach. The proposed method allows the system to significantly improve the recognition performance in the photo-realistic dataset while reducing storage requirements, resulting in up to 23.7 percent less storage space than the state-of-the-art approach that we have extended; smaller representations also reduced the time required to match places. In Part III, we also formulate the concept of a valid place representation and determine the quality of the observation based on dynamic objects present in the agent’s view. Of course, recognition systems that are sensitive to dynamic objects incur additional computational costs to recognize those objects. We show that this additional cost is outweighed by the benefits of incorporating dynamic object detection into the place recognition pipeline. Our findings can be used in many applications, including applications for navigation, e.g. assisting visually impaired individuals with navigating indoors, or autonomous vehicles.
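    A minimal sketch of the core idea in Part III, assuming local features come tagged with image coordinates and dynamic objects arrive as detector bounding boxes; both formats are assumptions for illustration, not the dissertation's data structures:

```python
# A minimal sketch of discarding features that fall on detected dynamic
# objects before building a place representation. Formats are assumptions.
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)
Feature = Tuple[float, float, bytes]      # (x, y, descriptor)

def in_box(x: float, y: float, box: Box) -> bool:
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_static_features(features: List[Feature],
                           dynamic_boxes: List[Box]) -> List[Feature]:
    """Keep only features that do not lie on any detected dynamic object,
    yielding a smaller, more stable place representation."""
    return [f for f in features
            if not any(in_box(f[0], f[1], b) for b in dynamic_boxes)]
```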

    Two-Stream Convolutional Networks for Dynamic Texture Synthesis

    This thesis introduces a two-stream model for dynamic texture synthesis. The model is based on pre-trained convolutional networks (ConvNets) that target two independent tasks: (i) object recognition, and (ii) optical flow regression. Given an input dynamic texture, statistics of filter responses from the object recognition and optical flow ConvNets encapsulate the per-frame appearance and dynamics of the input texture, respectively. To synthesize a dynamic texture, a randomly initialized input sequence is optimized to match the feature statistics from each stream of an example texture. In addition, the synthesis approach is applied to combine the texture appearance from one texture with the dynamics of another to generate entirely novel dynamic textures. Overall, the proposed approach generates high quality samples that match both the framewise appearance and temporal evolution of the input texture. Finally, a quantitative evaluation of the proposed dynamic texture synthesis approach is performed via a large-scale user study.
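    A minimal sketch of matching feature statistics through Gram matrices, the mechanism both streams rely on; the single-stream setup below is a simplifying assumption, not the thesis code:

```python
# A minimal sketch of Gram-matrix feature-statistic matching. In the full
# two-stream model, one such loss comes from an object-recognition ConvNet
# (appearance) and one from an optical-flow ConvNet (dynamics), and a
# randomly initialized sequence is optimized against their sum.
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of ConvNet activations: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = features.shape
    f = features.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(synth_feats: torch.Tensor,
                 target_feats: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius distance between synthesized and target statistics."""
    return torch.sum((gram_matrix(synth_feats) - gram_matrix(target_feats)) ** 2)
```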

    Fusing dynamic deep learned features and handcrafted features for facial expression recognition

    The automated recognition of facial expressions has been actively researched due to its wide-ranging applications. The recent advances in deep learning have improved the performance of facial expression recognition (FER) methods. In this paper, we propose a framework that combines discriminative features learned using convolutional neural networks and handcrafted features that include shape- and appearance-based features to further improve the robustness and accuracy of FER. In addition, texture information is extracted from facial patches to enhance the discriminative power of the extracted features. By encoding shape, appearance, and deep dynamic information, the proposed framework provides high performance and outperforms state-of-the-art FER methods on the CK+ dataset.
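    A minimal sketch of the fusion step described above, with stand-in feature vectors; the actual networks, handcrafted descriptors, and dimensionalities in the paper differ:

```python
# A minimal sketch of late feature fusion: deep and handcrafted feature
# vectors are normalized and concatenated before classification. The
# dimensions below are stand-ins, not the paper's configuration.
import numpy as np

def fuse_features(deep_feats: np.ndarray,
                  handcrafted_feats: np.ndarray) -> np.ndarray:
    """Concatenate L2-normalized deep and handcrafted feature vectors."""
    def l2norm(v: np.ndarray) -> np.ndarray:
        return v / (np.linalg.norm(v) + 1e-8)
    return np.concatenate([l2norm(deep_feats), l2norm(handcrafted_feats)])

# Example: a 512-D CNN embedding fused with a 59-D LBP-style histogram,
# ready for any off-the-shelf classifier (e.g. an SVM).
fused = fuse_features(np.random.rand(512), np.random.rand(59))
print(fused.shape)  # (571,)
```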

    Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity

    A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared in the context of target-emotion images, which were recognised as well as (or even better than) videos, and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static-based stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
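    A minimal sketch of the kind of trial-level analysis described above, regressing recognition accuracy on the three featural parameters; the data below is synthetic and purely illustrative, not the study's data:

```python
# A minimal sketch: logistic regression of trial-level recognition accuracy
# on prototypicality, ambiguity, and complexity. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = rng.random((n, 3))  # columns: prototypicality, ambiguity, complexity
# Simulate trials where prototypicality helps and ambiguity hurts recognition.
logit = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = correct

model = LogisticRegression().fit(X, y)
print(dict(zip(["prototypicality", "ambiguity", "complexity"],
               model.coef_[0])))
```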