
    End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks

    In this work we present a novel end-to-end framework for tracking and classifying a robot's surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identities, in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment, which we in turn exploit through the principle of inductive knowledge transfer to perform semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore, we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.
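
    As a rough illustration of the two-stage training idea in this abstract (unsupervised next-step occupancy prediction, followed by fine-tuning a small semantic head on a few labels), the following PyTorch sketch uses a toy GRU over flattened occupancy grids; all shapes and names are hypothetical assumptions and this is not the authors' architecture.

        import torch
        import torch.nn as nn

        class TrackingRNN(nn.Module):
            def __init__(self, grid_cells=64 * 64, hidden=512, n_classes=3):
                super().__init__()
                self.rnn = nn.GRU(grid_cells, hidden, batch_first=True)
                self.occupancy_head = nn.Linear(hidden, grid_cells)  # stage 1: unsupervised tracking
                self.semantic_head = nn.Linear(hidden, n_classes)    # stage 2: semantic classification

            def forward(self, grids):                 # grids: (batch, time, grid_cells)
                h, _ = self.rnn(grids)
                return self.occupancy_head(h), self.semantic_head(h)

        model = TrackingRNN()
        grids = torch.rand(8, 20, 64 * 64)            # toy stream of laser occupancy grids

        # Stage 1: learn to predict the next (possibly occluded) grid, no labels needed.
        occ_pred, _ = model(grids[:, :-1])
        loss_track = nn.functional.binary_cross_entropy_with_logits(occ_pred, grids[:, 1:])

        # Stage 2: reuse the learned representation; only a small labelled set is
        # needed to train the semantic head (inductive transfer).
        labels = torch.randint(0, 3, (8, 19))
        _, cls_logits = model(grids[:, :-1])
        loss_cls = nn.functional.cross_entropy(cls_logits.reshape(-1, 3), labels.reshape(-1))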

    Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters

    With the rise of end-to-end learning through deep learning, person detectors and re-identification (ReID) models have recently become very strong. Multi-camera multi-target (MCMT) tracking has not fully gone through this transformation yet. We intend to take another step in this direction by presenting a theoretically principled way of integrating ReID with tracking formulated as an optimal Bayes filter. This conveniently side-steps the need for data association and opens up a direct path from full images to the core of the tracker. While the results are still sub-par, we believe that this new, tight integration opens many interesting research opportunities and leads the way towards full end-to-end tracking from raw pixels.
    Comment: The first two authors contributed equally. This is initial work in a new direction, not a benchmark-beating method. v2 only adds acknowledgements and fixes a typo in an e-mail.
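
    To make the Bayes-filter framing concrete, here is a toy numpy sketch in which the measurement update reweights a positional belief by a ReID appearance similarity instead of performing explicit data association; the 1-D world, the function names and the faked similarity are illustrative assumptions, not the paper's formulation.

        import numpy as np

        def predict(belief, motion_kernel):
            """Diffuse the positional belief with a simple motion model."""
            return np.convolve(belief, motion_kernel, mode="same")

        def update(belief, reid_similarity):
            """Reweight each cell by the ReID similarity of its detection, then renormalise."""
            posterior = belief * reid_similarity
            return posterior / posterior.sum()

        n_cells = 50
        belief = np.full(n_cells, 1.0 / n_cells)       # uniform prior over positions
        motion_kernel = np.array([0.25, 0.5, 0.25])    # stay put or drift one cell

        for t in range(10):
            belief = predict(belief, motion_kernel)
            # In a real system this would be the similarity between the target's ReID
            # embedding and each cell's detection embedding; here it is faked.
            reid_similarity = np.exp(-0.5 * ((np.arange(n_cells) - (20 + t)) / 2.0) ** 2)
            belief = update(belief, reid_similarity)

        print("most likely position:", belief.argmax())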

    Generative Temporal Models with Spatial Memory for Partially Observed Environments

    In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or by serving as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge of the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.
    Comment: ICML 2018
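
    The following toy sketch illustrates the three ingredients named above (a low-dimensional pose updated from known agent motion, a VAE compressing visual observations, and a non-parametric memory keyed by pose); all shapes, names and the nearest-neighbour read are assumptions for illustration and do not reproduce the published model.

        import torch
        import torch.nn as nn

        class TinyVAE(nn.Module):
            def __init__(self, obs_dim=3 * 32 * 32, z_dim=32):
                super().__init__()
                self.enc = nn.Linear(obs_dim, 2 * z_dim)   # mean and log-variance
                self.dec = nn.Linear(z_dim, obs_dim)

            def encode(self, x):
                mu, logvar = self.enc(x).chunk(2, dim=-1)
                return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

        class SpatialMemory:
            """Store (pose, latent) pairs; retrieve the latent of the nearest pose."""
            def __init__(self):
                self.poses, self.latents = [], []

            def write(self, pose, z):
                self.poses.append(pose)
                self.latents.append(z)

            def read(self, pose):
                dists = torch.stack([(p - pose).norm() for p in self.poses])
                return self.latents[dists.argmin().item()]

        vae, memory = TinyVAE(), SpatialMemory()
        pose = torch.zeros(2)                              # agent position (x, y)

        for t in range(100):
            action = torch.randn(2) * 0.1                  # known ego-motion
            pose = pose + action                           # state-space update
            obs = torch.rand(3 * 32 * 32)                  # current visual frame
            memory.write(pose.clone(), vae.encode(obs))

        # Predicting far ahead: roll the pose forward and decode whatever latent
        # the memory holds for the nearest stored location.
        future_pose = pose + torch.tensor([0.5, 0.0])
        predicted_frame = vae.dec(memory.read(future_pose))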

    Spectral analysis for long-term robotic mapping

    This paper presents a new approach to mobile robot mapping in long-term scenarios. So far, the environment models used in mobile robotics have been tailored to capture static scenes and have dealt with environment changes by means of ‘memory decay’. While these models keep up with slowly changing environments, their utilization in dynamic, real-world environments is difficult. The representation proposed in this paper models the environment’s spatio-temporal dynamics by its frequency spectrum. The spectral representation of the time domain makes it possible to identify, analyse and remember regularly occurring environment processes in a computationally efficient way. Knowledge of the periodicity of the different environment processes constitutes the model’s predictive capability, which is especially useful for long-term mobile robotics scenarios. In the experiments presented, the proposed approach is applied to data collected by a mobile robot patrolling an indoor environment over a period of one week. Three scenarios are investigated, including intruder detection and 4D mapping. The results indicate that the proposed method can represent arbitrary timescales with constant (and low) memory requirements, achieving compression rates of up to 10^6. Moreover, the representation allows the environment’s future state to be predicted with ∼90% precision.
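
    A small numpy sketch of the core spectral idea: model one map cell's occupancy over time by its strongest frequency components and use them to predict a future state. The component count, the toy weekly occupancy signal and the function names are illustrative assumptions rather than the paper's implementation.

        import numpy as np

        def fit_spectral_model(occupancy, n_components=2):
            """Keep the mean plus the n strongest non-zero frequency components."""
            spectrum = np.fft.rfft(occupancy) / len(occupancy)
            order = np.argsort(np.abs(spectrum[1:]))[::-1][:n_components] + 1
            return spectrum[0].real, [(k, spectrum[k]) for k in order], len(occupancy)

        def predict(model, t):
            """Reconstruct the occupancy probability at (possibly future) time t."""
            mean, components, n = model
            value = mean
            for k, coeff in components:
                value += 2 * np.abs(coeff) * np.cos(2 * np.pi * k * t / n + np.angle(coeff))
            return np.clip(value, 0.0, 1.0)

        # One week of hourly observations of a single cell: occupied during working
        # hours on weekdays, free otherwise.
        hours = np.arange(7 * 24)
        occupancy = ((hours % 24 >= 9) & (hours % 24 < 17) & (hours // 24 < 5)).astype(float)

        model = fit_spectral_model(occupancy, n_components=3)
        print("predicted occupancy next Monday 10am:", predict(model, 7 * 24 + 10))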

    Atypical eye contact in autism: Models, mechanisms and development

    An atypical pattern of eye contact behaviour is one of the most significant symptoms of Autism Spectrum Disorder (ASD). Recent empirical advances have revealed the developmental, cognitive and neural basis of atypical eye contact behaviour in ASD. We review different models and advance a new ‘fast-track modulator model’. Specifically, we propose that atypical eye contact processing in ASD originates in a lack of influence from a subcortical face and eye contact detection route, which is hypothesized to modulate eye contact processing and guide its emergent specialization during development.