2 research outputs found

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.

    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-Machine Interaction

    A bio-inspired knowledge representation method for anomaly detection in cognitive video surveillance systems

    Human behaviour analysis has important applications in the field of anomaly management, such as Intelligent Video Surveillance (IVS). As the number of individuals in a scene increases, however, new macroscopic complex behaviours emerge from the underlying interaction network among multiple agents. This phenomenon has lately been investigated by modelling such interactions through Social Forces. In most recent Intelligent Video Surveillance systems, mechanisms to support human decisions are integrated into cognitive artificial processes. These algorithms mainly address the problem of modelling behaviours to allow for inference and prediction over the environment. A bio-inspired structure is here proposed, which is able to encode and synthesize signals, not only for the description of single entities' behaviours, but also for modelling cause-effect relationships between user actions and changes in environment configurations (i.e. the crowd). Such models are stored within a memory during a learning phase, in which the system performs an effective knowledge transfer from a human operator towards an automatic system called the Cognitive Surveillance Node (CSN), which is part of a complex cognitive JDL-based and bio-inspired architecture. After this knowledge-transfer phase, the learned representations can be used, at different levels, either to support human decisions by detecting anomalous interaction models and thus compensating for human shortcomings, or, in an automatic decision scenario, to identify anomalous patterns and choose the best strategy to preserve the stability of the entire system. Results are presented in which crowd behaviour is modelled by means of Social Forces and can interact with a human operator within a visual 3D simulator. The way anomalies are detected and consequently handled is demonstrated on synthetic data and on a real video sequence, in both the user-support and automatic modes.
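    The abstract mentions modelling crowd behaviour by means of Social Forces but does not give the formulation. As a rough, hedged illustration only, the sketch below implements a generic social-force crowd simulation in the spirit of Helbing and Molnar: a goal-driving term relaxes each agent toward its desired velocity, and exponential pairwise repulsion keeps agents apart. The function names (social_forces, step) and all parameter values are assumptions for this sketch, not the authors' CSN implementation, which additionally learns representations of the resulting interactions for anomaly detection.

```python
import numpy as np

# Illustrative Social Force crowd model (in the spirit of Helbing & Molnar).
# Function names and parameter values are assumptions for this sketch,
# not the CSN architecture described in the paper.

def social_forces(pos, vel, goals, desired_speed=1.3, tau=0.5,
                  A=2.0, B=0.3, radius=0.4):
    """Net force on each agent: goal-driving term plus pairwise repulsion.

    pos, vel, goals: (N, 2) arrays of positions, velocities and goal points.
    """
    n = len(pos)
    # Driving force: relax each agent's velocity toward its desired velocity.
    direction = goals - pos
    direction = direction / (np.linalg.norm(direction, axis=1, keepdims=True) + 1e-9)
    force = (desired_speed * direction - vel) / tau

    # Repulsive interaction forces, exponential in the gap between agents.
    for i in range(n):
        diff = pos[i] - pos                      # vectors from others to agent i
        dist = np.linalg.norm(diff, axis=1)
        dist[i] = np.inf                         # ignore self-interaction
        normal = diff / dist[:, None]
        magnitude = A * np.exp((2 * radius - dist) / B)
        force[i] += (magnitude[:, None] * normal).sum(axis=0)
    return force

def step(pos, vel, goals, dt=0.1):
    """Advance the crowd one time step with explicit Euler integration."""
    vel = vel + dt * social_forces(pos, vel, goals)
    return pos + dt * vel, vel

# Example: ten agents scattered in a small area, all walking toward one exit.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 5, size=(10, 2))
    vel = np.zeros((10, 2))
    goals = np.tile([10.0, 2.5], (10, 1))
    for _ in range(100):
        pos, vel = step(pos, vel, goals)
```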