
    A generic framework for video understanding applied to group behavior recognition

    This paper presents an approach to detect and track groups of people in video-surveillance applications and to automatically recognize their behavior. The method keeps track of individuals moving together by maintaining spatial and temporal group coherence. First, people are individually detected and tracked. Second, their trajectories are analyzed over a temporal window and clustered using the Mean-Shift algorithm. A coherence value describes how well a set of people can be described as a group. Furthermore, we propose a formal event description language. The group event recognition approach is successfully validated on 4 camera views from 3 datasets: an airport, a subway, a shopping center corridor and an entrance hall.
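
    As an illustration of the clustering step described above, here is a minimal sketch of grouping tracked people by clustering per-trajectory features with Mean-Shift and scoring each cluster with a simple coherence value. The feature choice (mean position and mean velocity over the window) and the coherence formula are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np
from sklearn.cluster import MeanShift

def group_people(trajectories, bandwidth=2.0):
    """Cluster per-person trajectories observed over a temporal window.

    `trajectories` maps person_id to an array of shape (T, 2) of (x, y)
    positions.  Each person is summarised by mean position and mean
    velocity; people whose summaries fall in the same Mean-Shift mode are
    treated as one group.  Feature choice and coherence measure are
    illustrative, not the paper's exact definitions.
    """
    ids = list(trajectories)
    feats = []
    for pid in ids:
        traj = np.asarray(trajectories[pid], dtype=float)
        velocity = np.diff(traj, axis=0).mean(axis=0) if len(traj) > 1 else np.zeros(2)
        feats.append(np.concatenate([traj.mean(axis=0), velocity]))
    feats = np.vstack(feats)

    labels = MeanShift(bandwidth=bandwidth).fit_predict(feats)

    groups = {}
    for pid, label in zip(ids, labels):
        groups.setdefault(label, []).append(pid)

    # Coherence: tighter spatial spread within a cluster -> higher coherence.
    coherence = {}
    for label, members in groups.items():
        pts = feats[[ids.index(m) for m in members], :2]
        coherence[label] = 1.0 / (1.0 + pts.std(axis=0).mean())
    return groups, coherence
```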

    Space time pixels

    This paper reports the design of a networked system whose aim is to provide an intermediate virtual space that establishes a connection and supports interaction between multiple participants in two distant physical spaces. The project explores the potential of digital space to generate new social relationships between people whose current (spatial or social) position makes it difficult to establish such connections, and to examine whether digital space can sustain low-level connections of this kind over time by balancing the two contradictory needs of communication and anonymity. The generated intermediate digital space is a dynamic, reactive environment in which time and space information from two physical places is superimposed to create a complex common ground where interaction can take place. The system provides awareness of activity in a distant space through an abstract, mutable virtual environment that can be perceived in several different ways, varying from a simple dynamic background image, to a common public space at the junction of two private spaces, to a fully opened window onto the other space, according to the participants' will. The thesis is that the creation of an intermediary environment that operates as an activity abstraction filter between several users, and selectively communicates information, can give significance to the ambient data that people unconsciously transmit to others when co-existing. It can therefore generate a new layer of connections and original interactivity patterns, in contrast to a direct real-time video and sound system, which, although functionally more feasible, preserves the existing social constraints that limit interaction to predefined patterns.
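
    The superimposition and abstraction-filter idea can be sketched in a few lines: activity maps from the two spaces are merged, and a tunable level of abstraction controls how much detail is passed through. The block-averaging filter, the `abstraction` parameter and the function name below are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def superimpose(activity_a, activity_b, abstraction=0.8):
    """Blend activity maps from two distant spaces into one shared view.

    activity_a and activity_b are 2-D arrays of the same shape (e.g.
    per-pixel motion energy from each space).  abstraction in [0, 1]
    controls how much detail survives: 0.0 returns the raw superimposition,
    higher values progressively coarsen it so only presence and rough
    location remain.  Block averaging is an illustrative stand-in for the
    system's abstraction mechanism.
    """
    combined = np.maximum(activity_a, activity_b)
    block = max(1, int(round(abstraction * 16)))
    if block == 1:
        return combined
    h, w = combined.shape
    # Crop to a multiple of the block size, then average within each block.
    cropped = combined[: h // block * block, : w // block * block]
    return cropped.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
```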

    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. The proposed framework for achieving person recognition successfully combines different biometric modalities, as borne out in two case studies.
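
    A common way to combine biometric modalities is score-level fusion: normalise each modality's match scores, take a weighted sum, and pick the best-matching identity. The sketch below assumes this fusion rule and hypothetical modality names such as "face" and "voice"; the paper's actual combination scheme may differ.

```python
import numpy as np

def fuse_scores(modality_scores, weights=None):
    """Weighted-sum fusion of per-modality match scores.

    `modality_scores` maps a modality name (e.g. "face", "voice") to an
    array of match scores, one per enrolled person.  Scores are min-max
    normalised per modality before combining, so that a noisy modality
    cannot dominate the decision.
    """
    names = list(modality_scores)
    if weights is None:
        weights = {n: 1.0 / len(names) for n in names}

    fused = None
    for name in names:
        s = np.asarray(modality_scores[name], dtype=float)
        rng = s.max() - s.min()
        s = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        fused = weights[name] * s if fused is None else fused + weights[name] * s

    # Return the index of the best-matching enrolled person plus the fused scores.
    return int(np.argmax(fused)), fused
```

    With equal weights, a person who is matched strongly by either modality can still be recognised when the other modality degrades, which is the practical motivation for fusing modalities in the first place.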

    Skeleton-aided Articulated Motion Generation

    This work makes a first attempt to generate an articulated human motion sequence from a single image. On the one hand, we utilize paired inputs, human skeleton information as a motion embedding and a single human image as an appearance reference, to generate novel motion frames based on the conditional GAN infrastructure. On the other hand, a triplet loss is employed to pursue appearance smoothness between consecutive frames. As the proposed framework is capable of jointly exploiting the image appearance space and the articulated/kinematic motion space, it generates realistic articulated motion sequences, in contrast to most previous video generation methods, which yield blurred motion effects. We test our model on two human action datasets, KTH and Human3.6M, and the proposed framework generates very promising results on both.
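
    The appearance-smoothness constraint can be illustrated with a standard triplet loss: embeddings of consecutive generated frames are pulled together while an embedding from an unrelated frame is pushed away. The margin, distance metric and function name below are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def appearance_triplet_loss(anchor_feat, positive_feat, negative_feat, margin=0.5):
    """Triplet loss encouraging appearance smoothness across frames.

    `anchor_feat` and `positive_feat` are appearance embeddings of two
    consecutive generated frames (should stay close); `negative_feat` comes
    from an unrelated frame (should stay far).  All inputs are tensors of
    shape (batch, feature_dim).
    """
    d_pos = F.pairwise_distance(anchor_feat, positive_feat)
    d_neg = F.pairwise_distance(anchor_feat, negative_feat)
    # Hinge: penalise only when the positive pair is not closer by at least `margin`.
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```

    Equivalently, torch.nn.TripletMarginLoss could be used directly; the explicit form above just makes the hinge structure visible.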