12 research outputs found

    Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. The exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Building on this, we improve the ELDA tracking algorithm using deep convolutional neural network (CNN) features and an adaptive model update. Deep CNN features have been used successfully in various computer vision tasks, but extracting them on all of the candidate windows is time-consuming. To address this problem, a two-step CNN feature extraction method is proposed that computes the convolutional layers and the fully-connected layers separately. Owing to the strong discriminative ability of CNN features and the exemplar-based model, we update both the object and background models to improve their adaptivity and to handle the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), i.e., those that are highly discriminative and uncorrelated with the other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes; it is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
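
    A minimal, hypothetical sketch of the two-step feature extraction idea described above (not the authors' implementation): the convolutional stage is run once over the whole search region, and only small per-window feature crops are passed through the fully-connected stage. The VGG-16 backbone, the ROI-pooling call, and the 7x7 crop size are illustrative assumptions.

        # Hypothetical sketch of two-step CNN feature extraction (PyTorch/torchvision).
        # Assumptions: VGG-16 backbone, ROI pooling, 7x7 crops -- not taken from the paper.
        import torch
        import torchvision
        from torchvision.ops import roi_pool

        backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1")
        conv_stage = backbone.features.eval()      # convolutional layers
        fc_stage = backbone.classifier[:4].eval()  # first fully-connected layers

        @torch.no_grad()
        def candidate_features(search_region, candidate_boxes):
            """search_region: 1x3xHxW float tensor; candidate_boxes: Nx4 float tensor (x1, y1, x2, y2) in pixels."""
            # Step 1: run the convolutional layers once over the shared search region.
            feat_map = conv_stage(search_region)                   # 1 x 512 x H' x W'
            stride = search_region.shape[-1] / feat_map.shape[-1]  # spatial down-sampling factor
            # Step 2: crop per-candidate features from the shared map and apply the
            # fully-connected layers only to these small crops.
            rois = torch.cat([torch.zeros(len(candidate_boxes), 1), candidate_boxes], dim=1)
            crops = roi_pool(feat_map, rois, output_size=(7, 7), spatial_scale=1.0 / stride)
            return fc_stage(crops.flatten(1))                      # N x 4096 feature matrix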

    Autonomous Robots in Dynamic Indoor Environments: Localization and Person-Following

    Autonomous social robots must address many tasks, such as localization, mapping, navigation, person following, and place recognition. In this thesis we focus on two key components required for the navigation of autonomous robots: person-following behaviour and localization in dynamic human environments. We propose three novel approaches to address these components: two for person following and one for indoor localization. A convolutional neural network based approach and an AdaBoost-based approach are developed for person following, and we demonstrate the results by showing the tracking accuracy over time for this behaviour. For the localization task, we propose a novel approach that can act as a wrapper for traditional visual-odometry-based approaches to improve localization accuracy in dynamic human environments. We evaluate this approach by showing how performance varies with an increasing number of dynamic agents in the scene. This thesis provides qualitative and quantitative evaluations for each of the proposed approaches and shows that they perform better than current approaches.

    Robust Eye Tracking Based on Adaptive Fusion of Multiple Cameras

    Eye and gaze movements play an essential role in identifying individuals' emotional states, cognitive activities, interests, and attention, among other behavioral traits. Moreover, they are natural, fast, and implicitly reflect the targets of interest, which makes them a highly valuable input modality in human-computer interfaces. Therefore, tracking gaze movements, in other words eye tracking, is of great interest to a large number of disciplines, including human behaviour research, neuroscience, medicine, and human-computer interaction. Tracking gaze movements accurately is a challenging task, especially under unconstrained conditions. Over the last two decades, significant advances have been made in improving gaze estimation accuracy. However, these improvements have been achieved mostly under controlled settings, and several concerns have arisen, such as the complexity, inflexibility, and cost of the setups, increased user effort, and high sensitivity to varying real-world conditions. Despite various attempts and promising enhancements, existing eye tracking systems are still inadequate to overcome most of these concerns, which prevents them from being widely used. In this thesis, we revisit these concerns and introduce a novel multi-camera eye tracking framework. The proposed framework achieves high estimation accuracy while requiring minimal user effort and a non-intrusive, flexible setup. In addition, it provides improved robustness to large head movements, illumination changes, use of eye wear, and eye type variations across users. We develop a novel real-time gaze estimation framework based on adaptive fusion of multiple single-camera systems, in which the gaze estimation relies on projective geometry. To ease the user calibration procedure, we investigate several methods to model the subject-specific estimation bias and propose a novel approach based on weighted regularized least squares regression. The proposed method provides better calibration modeling than state-of-the-art methods, particularly when using low-resolution and limited calibration data. Being able to operate with low-resolution data also enables the use of a large field-of-view setup, so that large head movements are allowed. To address the aforementioned robustness concerns, we propose to leverage multiple eye appearances simultaneously acquired from various views. In comparison with the conventional single-view approach, the main benefit of our approach is that it detects gaze features more reliably under challenging conditions, especially when they are obstructed due to large head poses or movements, or eyeglass effects. We further propose an adaptive fusion mechanism to effectively combine the gaze outputs obtained from the multi-view appearances. To this end, our mechanism first determines the estimation reliability of each gaze output and then performs a reliability-based weighted fusion to compute the overall point of regard. In addition, to address illumination and eye type robustness, the setup is built upon active illumination, and robust feature detection methods are developed. The proposed framework and methods are validated through extensive simulations and user experiments featuring 20 subjects. The results demonstrate that our framework provides not only a significant improvement in gaze estimation accuracy but also notable robustness to real-world conditions, making it suitable for a large spectrum of applications.
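
    The reliability-based weighted fusion mentioned above can be sketched as follows. This is a hypothetical illustration in which the per-camera reliability scores are assumed to be given (e.g., derived from feature-detection quality); the exact weighting scheme used in the thesis may differ.

        # Hypothetical sketch of reliability-weighted fusion of per-camera gaze estimates.
        # Reliability scores are assumed to be supplied by each single-camera system.
        import numpy as np

        def fuse_gaze(points_of_regard, reliabilities, min_reliability=0.1):
            """points_of_regard: list of (x, y) screen estimates, one per camera.
            reliabilities: non-negative scores, higher means more trustworthy."""
            pors = np.asarray(points_of_regard, dtype=float)
            w = np.asarray(reliabilities, dtype=float)
            w[w < min_reliability] = 0.0   # discard estimates deemed unreliable
            if w.sum() == 0.0:
                return None                # no camera produced a usable estimate
            w /= w.sum()                   # normalise weights
            return tuple(w @ pors)         # weighted-average point of regard

        # Example: three cameras; the second view is degraded by eyeglass reflections.
        print(fuse_gaze([(410, 300), (640, 520), (430, 310)], [0.9, 0.05, 0.7]))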

    Proceedings of the Eighth Italian Conference on Computational Linguistics CliC-it 2021

    The eighth edition of the Italian Conference on Computational Linguistics (CLiC-it 2021) was held at Università degli Studi di Milano-Bicocca from 26th to 28th January 2022. After the 2020 edition, which was held in fully virtual mode due to the health emergency related to Covid-19, CLiC-it 2021 represented the first opportunity for the Italian computational linguistics research community to meet in person after more than one year of full or partial lockdown.

    24th Nordic Conference on Computational Linguistics (NoDaLiDa)

    CURATION AND MANAGEMENT OF CULTURAL HERITAGE THROUGH LIBRARIES

    Libraries, museums and archives hold valuable collections in a variety of media, presenting a vast body of knowledge rooted in the history of human civilisation. These collections form a repository of the wisdom of great works by thinkers of the past and the present. The holdings of these institutions are a priceless heritage of mankind, as they preserve documents, ideas, and oral and written records. Valuing this cultural heritage and caring for it as a treasure bequeathed to us by our ancestors is a major responsibility of libraries. Past records constitute a natural resource and are indispensable to the present generation as well as to the generations to come. Libraries preserve the documentary heritage resources for which they are primarily responsible, and any loss of such materials is irreplaceable. Therefore, preserving this intellectual and cultural heritage is not only an academic commitment but also a moral responsibility of the librarians and information scientists in charge of these repositories. The high quality of the papers and the discussion reflects the thinking and experience of experts in their particular fields. The contributed papers also relate to the methodology used in libraries in Asia to provide access to manuscripts and cultural heritage. The volume discusses best practices in knowledge preservation and how to collaborate to preserve culture, and it also deals with manuscript and archive issues in the digital era. The approach of this book is concise yet comprehensive, covering all major aspects of preservation and conservation through libraries. Its readership is not limited to library and information science professionals; it also extends to those involved in conservation, preservation, restoration, and related disciplines. The book will be useful for librarians, archivists and conservators. We thank Sunan Kalijaga University and the Special Libraries Association, Asian Chapter, for their trust and constant support, all the contributors for their submissions, and the members of the Local and International Committees for their reviewing efforts in making this publication possible.