Tracking by Prediction: A Deep Generative Model for Multi-Person Localisation and Tracking
Current multi-person localisation and tracking systems rely heavily on appearance models for target re-identification, and almost no approaches employ a complete deep learning solution for both objectives. We present a novel, complete deep learning framework for multi-person localisation and tracking. In this context we first introduce a lightweight sequential Generative Adversarial Network architecture for person localisation, which overcomes issues related to occlusions and noisy detections typically found in a multi-person environment. In the proposed tracking framework we build upon recent advances in pedestrian trajectory prediction and propose a novel data association scheme based on predicted trajectories. This removes the need for computationally expensive person re-identification systems based on appearance features, and generates human-like trajectories with minimal fragmentation. The proposed method is evaluated on multiple public benchmarks, including both static and dynamic cameras, and achieves outstanding performance, especially among other recently proposed deep neural network based approaches.
Comment: To appear in IEEE Winter Conference on Applications of Computer Vision (WACV), 201
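The trajectory-based data association described in this abstract can be sketched, under assumptions, as optimal matching between each track's predicted position and the new detections. The paper's exact scheme is not specified here; the Euclidean cost, the `gate` threshold and the `associate` helper below are illustrative choices, using the Hungarian algorithm in place of appearance-based re-identification:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted, detections, gate=50.0):
    """Match predicted track positions to detections by distance.

    predicted:  (T, 2) array of predicted (x, y) positions, one per track.
    detections: (D, 2) array of detected (x, y) positions in the new frame.
    Returns (track_idx, det_idx) pairs whose matched cost is within `gate`.
    """
    # Pairwise Euclidean distance matrix between predictions and detections.
    cost = np.linalg.norm(predicted[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    # Gating discards matches that are implausibly far from the prediction.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# Two tracks whose predicted positions lie near two detections.
preds = np.array([[10.0, 10.0], [100.0, 100.0]])
dets = np.array([[102.0, 98.0], [11.0, 9.0]])
print(associate(preds, dets))  # [(0, 1), (1, 0)]
```

Because association relies only on predicted geometry, no appearance features need to be extracted or compared per frame, which is the computational saving the abstract claims.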
Out there and in here: design for blended scientific inquiry learning
One of the benefits of mobile technologies is to combine "the digital" (e.g., data, information, photos) with "field" experiences in novel ways that are contextualized by people's current located activities. However, cost, mobility disabilities and time often exclude students from engaging in such peripatetic experiences. The Out There and In Here project is exploring a combination of mobile and tabletop technologies in support of collaborative learning. A system is being developed for synchronous collaboration between geology students in the field and peers at an indoor location. The overarching goal of this research is to develop technologies that support people working together in a manner suited to their locations. There are two OTIH project research threads. The first deals with disabled learner access issues: these complex issues are being reviewed in subsequent evaluations and publications. This paper deals with issues of technology-supported learning design for remote and co-located science learners. Several stakeholder evaluations and two field trials have addressed two research questions:
1. What will enhance the learning experience for those in the field and laboratory?
2. How can learning trajectories and appropriate technologies be designed to support equitable co-located and remote learning collaboration?
This paper focuses on describing the iterative, linked development of technologies and scientific inquiry pedagogy. Two stages within the research project are presented. The first stage details several pilot studies over three years with 21 student participants in synchronous collaborations using traditional technology and pedagogical models. Findings revealed that this was an engaging and useful experience, although issues of equity in collaboration needed further research. The second stage of the project has been to evaluate data from over 25 stakeholders (academics, learning and technology designers) to develop pervasive ambient technological solutions supporting the orchestration of mixed levels of pedagogy (i.e. from abstract synthesis to specific investigation). Middleware between tabletop "surface" technologies and mobile devices is being designed with Microsoft and OOKL (a mobile software company) to support these developments. Initial findings reveal issues around equity, ownership and professional identity.
A generic framework for video understanding applied to group behavior recognition
This paper presents an approach to detect and track groups of people in video-surveillance applications, and to automatically recognize their behavior. This method keeps track of individuals moving together by maintaining spatial and temporal group coherence. First, people are individually detected and tracked. Second, their trajectories are analyzed over a temporal window and clustered using the Mean-Shift algorithm. A coherence value describes how well a set of people can be described as a group. Furthermore, we propose a formal event description language. The group event recognition approach is successfully validated on 4 camera views from 3 datasets: an airport, a subway, a shopping center corridor and an entrance hall.
Comment: (20/03/2012
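The trajectory-clustering step this abstract describes can be sketched as follows, under assumptions: each person's trajectory over the temporal window is summarized by a feature vector (here a hypothetical mean position plus mean velocity) and clustered with Mean-Shift, so that people moving together fall into the same mode. The feature choice and the `bandwidth` value are illustrative, not the paper's:

```python
import numpy as np
from sklearn.cluster import MeanShift

# Each trajectory summarized over a temporal window as (x, y, vx, vy):
# mean position and mean velocity. People walking together share both.
features = np.array([
    [10.0, 10.0,  1.0, 0.0],   # person A
    [11.0, 10.5,  1.1, 0.1],   # person B, moving alongside A
    [50.0, 40.0, -1.0, 0.0],   # person C, walking alone elsewhere
])

# Mean-Shift groups points within one kernel bandwidth into a mode;
# each resulting label corresponds to a candidate group.
labels = MeanShift(bandwidth=5.0).fit_predict(features)
print(labels)  # A and B share a label; C receives a different one
```

A per-cluster coherence value, as in the abstract, could then be computed from how tightly the member trajectories sit around the cluster mode, e.g. the mean distance to the cluster centre.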