
    Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Tracking

    With efficient appearance learning models, the Discriminative Correlation Filter (DCF) has proven very successful in recent video object tracking benchmarks and competitions. However, the existing DCF paradigm suffers from two major issues: the spatial boundary effect and temporal filter degradation. To mitigate these challenges, we propose a new DCF-based tracking method. The key innovations of the proposed method are adaptive spatial feature selection and temporal consistency constraints, with which the new tracker enables joint spatial-temporal filter learning in a lower-dimensional discriminative manifold. More specifically, we apply structured spatial sparsity constraints to multi-channel filters. Consequently, the process of learning spatial filters can be approximated by lasso regularisation. To encourage temporal consistency, the filter model is restricted to lie around its historical value and updated locally to preserve the global structure in the manifold. Finally, a unified optimisation framework is proposed to jointly select temporal-consistency-preserving spatial features and learn discriminative filters with the augmented Lagrangian method. Qualitative and quantitative evaluations have been conducted on a number of well-known benchmark datasets, including OTB2013, OTB50, OTB100, Temple-Colour, UAV123 and VOT2018. The experimental results demonstrate the superiority of the proposed method over state-of-the-art approaches.
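The joint learning described in this abstract can be illustrated with a proximal-gradient solver for a lasso-regularised objective plus a temporal consistency term tying the filter to its previous value. This is a minimal single-channel sketch, not the paper's full augmented-Lagrangian method; `learn_filter` and its parameters are hypothetical names:

```python
import numpy as np

def soft_threshold(v, t):
    # elementwise soft-thresholding: the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def learn_filter(X, y, w_prev, lam1=0.01, lam2=0.01, lr=0.005, iters=500):
    """Proximal gradient descent on
        min_w ||Xw - y||^2 + lam2 * ||w - w_prev||^2 + lam1 * ||w||_1,
    combining a lasso term (structured spatial sparsity) with a
    temporal consistency term that keeps w near its historical value."""
    w = w_prev.copy()
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ w - y) + 2.0 * lam2 * (w - w_prev)
        w = soft_threshold(w - lr * grad, lr * lam1)
    return w
```

The l1 penalty drives irrelevant spatial features to exactly zero, while the quadratic term keeps each update local around the previous filter, which is the intuition behind the temporal consistency constraint.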

    Object Tracking in Varying Lighting Conditions for Fog-based Intelligent Surveillance of Public Spaces

    With the rapid development of computer vision and artificial intelligence, cities are becoming more and more intelligent. As intelligent surveillance has recently been applied across all kinds of smart city services, object tracking has attracted increasing attention. However, two serious problems have blocked the adoption of visual tracking in real applications: poor performance under intense illumination variation, and slow speed. This paper addresses both problems by proposing a correlation filter based tracker, with a fog computing platform deployed to accelerate the tracking approach. The tracker is constructed from multiple positions' detections and alternate templates (MPAT). The detection position is repositioned according to the target speed estimated by an optical flow method, and the alternate template is stored with a template update mechanism, all computed at the edge. Experimental results on large-scale public benchmark datasets show the effectiveness of the proposed method in comparison with state-of-the-art methods.
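The two MPAT ingredients described above, repositioning the detection window from an estimated target velocity and keeping an alternate template to roll back bad updates, can be sketched as follows. All names (`predict_search_center`, `AlternateTemplateStore`) and the confidence-based rollback rule are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def predict_search_center(prev_pos, flow_velocity, dt=1.0):
    # Reposition the detection window ahead of the target using a
    # velocity estimate (e.g. from an optical-flow method).
    return (prev_pos[0] + flow_velocity[0] * dt,
            prev_pos[1] + flow_velocity[1] * dt)

class AlternateTemplateStore:
    """Keeps the running template plus an alternate copy. The alternate
    is refreshed only on high-confidence detections, so a template
    corrupted by a bad frame can be rolled back (illustrative sketch)."""

    def __init__(self, template, conf_threshold=0.5):
        self.current = template.astype(float)
        self.alternate = self.current.copy()
        self.conf_threshold = conf_threshold

    def update(self, new_template, confidence, rate=0.1):
        # standard linear-interpolation template update
        self.current = (1.0 - rate) * self.current + rate * new_template
        if confidence >= self.conf_threshold:
            self.alternate = self.current.copy()
        else:
            # low confidence: discard the drifted template and restore
            # the last trusted alternate
            self.current = self.alternate.copy()
```

A tracker loop would call `predict_search_center` before running the correlation filter at the predicted location, then feed the detection confidence into `update`.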

    A Discriminative Correlation Filter with Segmentation and Context for Robust Tracking

    Visual object tracking is an area of computer vision that has seen a great increase in popularity due to the wide availability of video data. There are many different tracking tasks, such as multiple object tracking, long-term tracking, and specialised trackers expected to perform well in a very specific domain. In this work, we focus on online generic short-term single-object tracking, which can be considered the base visual tracking task and can be adapted to any of the previously mentioned tasks. We propose a new tracker based on correlation filtering, augmented with context information and a predicted object segmentation mask. The benchmark results fall far behind the current state of the art; however, the proposed method consistently outperforms baseline trackers, which shows the method's potential for future improvement.
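Augmenting a correlation filter with context information is commonly done by regressing surrounding context patches to a zero response while the target patch is regressed to a Gaussian peak. The closed-form single-channel Fourier-domain sketch below follows that general idea; the function names and regularisation weights are assumptions, not this work's exact formulation:

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    # Desired response: a Gaussian peak, circularly shifted so the
    # maximum sits at the origin (standard DCF convention).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def context_aware_filter(target, contexts, lam1=1e-3, lam2=0.5):
    """Per-frequency closed-form correlation filter: the target patch
    should produce a Gaussian peak, while each context patch is pushed
    toward a zero response by adding its energy to the denominator."""
    F = np.fft.fft2(target)
    Y = np.fft.fft2(gaussian_response(target.shape))
    num = np.conj(F) * Y
    den = np.conj(F) * F + lam1
    for c in contexts:
        C = np.fft.fft2(c)
        den = den + lam2 * np.conj(C) * C
    return num / den
```

Detection then correlates a search patch with the filter in the Fourier domain; the context terms suppress responses on background regions that resemble the distractor patches.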

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of each change. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have great potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to exploit the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
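The event stream described above is just a list of (time, x, y, polarity) tuples. A common simple way to bridge event data and frame-based algorithms is to accumulate events over a time window into a signed brightness-change image; the sketch below shows that representation (the function name `events_to_frame` is an illustrative choice):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a stream of (t, x, y, polarity) events into a 2-D
    frame of signed event counts: +1 per positive (brightness-increase)
    event, -1 per negative event at each pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame
```

More elaborate representations (time surfaces, voxel grids, learned embeddings) follow the same pattern of mapping the asynchronous stream onto a dense grid.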