    Discovery of Shared Semantic Spaces for Multiscene Video Query and Summarization.

    The growing rate of public space CCTV installations has generated a need for automated methods for exploiting video surveillance data, including scene understanding, query, behaviour annotation and summarization. For this reason, extensive research has been performed on surveillance scene understanding and analysis. However, most studies have considered single scenes, or groups of adjacent scenes. The semantic similarity between different but related scenes (e.g., many different traffic scenes of similar layout) is not generally exploited to improve automated surveillance tasks and reduce manual effort. Exploiting commonality, and sharing supervised annotations, between different scenes is however challenging: some scenes are totally unrelated, so any information sharing between them would be detrimental, while others may share only a subset of common activities, so information sharing is useful only if it is selective. Moreover, semantically similar activities which should be modelled together and shared across scenes may have quite different pixel-level appearance in each scene. To address these issues we develop a new framework for distributed multiple-scene global understanding that clusters surveillance scenes by their ability to explain each other's behaviours, and further discovers which subset of activities is shared versus scene-specific within each cluster. We show how to use this structured representation of multiple scenes to improve common surveillance tasks, including scene activity understanding, cross-scene query-by-example, behaviour classification with reduced supervised labelling requirements, and video summarization. In each case we demonstrate how our multi-scene model improves on a collection of standard single-scene models and a flat model of all scenes.
    Comment: Multi-Scene Traffic Behaviour Analysis. Accepted at IEEE Transactions on Circuits and Systems for Video Technology.
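
    As a minimal sketch of the clustering idea above (not the paper's actual algorithm), one could measure how well each scene's activity model explains the other scenes' data and feed the resulting affinity into spectral clustering. Here `scene_models` and `scene_data` are hypothetical placeholders, and each model is assumed to expose a scikit-learn-style `score()` returning a mean log-likelihood.

```python
# Sketch: cluster scenes by mutual explanatory power (illustrative only).
import numpy as np
from sklearn.cluster import SpectralClustering

def cross_explanation_affinity(scene_models, scene_data):
    """A[i, j] = how well scene i's model explains scene j's data."""
    n = len(scene_models)
    A = np.zeros((n, n))
    for i, model in enumerate(scene_models):
        for j, data in enumerate(scene_data):
            A[i, j] = model.score(data)  # assumed: mean log-likelihood
    affinity = (A + A.T) / 2.0           # symmetrise
    return affinity - affinity.min()     # shift to non-negative for clustering

def cluster_scenes(scene_models, scene_data, n_clusters=3):
    affinity = cross_explanation_affinity(scene_models, scene_data)
    return SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
```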

    Localized Temporal Profile of Surveillance Video

    Surveillance videos are recorded pervasively, yet their retrieval still relies on human operators. As an intermediate representation, this work develops a new temporal profile of video that conveys accurate temporal information while keeping certain spatial characteristics of targets of interest for recognition. The profile is obtained at critical positions where the major target flow appears: we set a sampling line crossing the motion direction and profile passing targets in the temporal domain. To add a degree of spatial information to the temporal profile, we blend multiple profiles from a set of lines so that the target's motion direction and position are reflected in the result. Unlike mosaicing/montage methods for video synopsis in the spatial domain, our temporal profile has no limit on time length, and the created profile significantly reduces the data size, enabling concise indexing and fast search of video.
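
    A minimal sketch of the single-line case, using OpenCV and NumPy under illustrative assumptions (the video path and the column index `x_line` are made up, and the paper's multi-line blending is omitted): sample one vertical pixel line per frame and stack the samples over time, so the horizontal axis of the profile image is time.

```python
# Sketch: build a temporal profile from one sampling line (illustrative only).
import cv2
import numpy as np

def temporal_profile(video_path, x_line):
    cap = cv2.VideoCapture(video_path)
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # One vertical pixel line per frame; targets crossing it get profiled.
        columns.append(frame[:, x_line].copy())
    cap.release()
    # Stack columns left-to-right: width = number of frames, height = frame height.
    return np.stack(columns, axis=1)

profile = temporal_profile("surveillance.mp4", x_line=320)  # assumed inputs
cv2.imwrite("profile.png", profile)
```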

    Object detection and tracking in video image

    In recent years, capturing images of high quality and good size has become easy, because capture devices have rapidly improved while becoming less costly. A video is a collection of sequential images with a constant time interval, so it can provide more information about an object as the scene changes over time. Manually inspecting videos is therefore impractical, and an automated system is needed to process them. This thesis makes one such attempt to track objects in videos. Many algorithms and technologies have been developed to automate monitoring of objects in a video file. Object detection and tracking is one of the challenging tasks in computer vision. There are three basic steps in video analysis: detection of objects of interest among moving objects, tracking of those objects in consecutive frames, and analysis of the object tracks to understand their behavior. Simple object detection compares a static background frame with the current video frame at the pixel level; existing methods in this domain first try to detect the object of interest in the video frames. One of the main difficulties in object tracking, among many others, is choosing suitable features and models for recognizing and tracking the object of interest in a video. Common choices of features for categorizing visual objects are intensity, shape, color and feature points. In this thesis we study mean shift tracking based on the color PDF, optical flow tracking based on intensity and motion, and SIFT tracking based on scale-invariant local feature points; a sketch of the first appears below. Preliminary results from experiments have shown that the adopted methods are able to track targets under translation, rotation, partial occlusion and deformation.
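
    The following is a minimal sketch of the first of these trackers, color-PDF mean shift, using OpenCV's histogram back-projection and `cv2.meanShift`; the video path and the initial target window are illustrative assumptions rather than values from the thesis.

```python
# Sketch: color-histogram mean-shift tracking with OpenCV (illustrative only).
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")          # assumed video path
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80                # assumed initial target window

# Build the target's hue histogram (its color PDF) from the first frame.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the histogram: each pixel scores its likelihood under the PDF.
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(backproj, window, term)
    wx, wy, ww, wh = window
    cv2.rectangle(frame, (wx, wy), (wx + ww, wy + wh), (0, 255, 0), 2)
    cv2.imshow("mean-shift tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```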

    Indoor Risks Assessment Using Video Captioning

    The progress of automatic scene analysis techniques for homes and the development of ambient assisted living systems are vital to help people, such as the elderly or visually impaired individuals, who require special care in their daily lives. In this bachelor’s thesis we will develop a study of the most promising techniques within the Video Captioning and scene analysis scope, and we will propose a Deep Learning pipeline aimed at performing Risks Assessment on input videos using the knowledge acquired during the study. This can potentially be applied to create systems that help the aforementioned people. Moreover, we will propose different evaluation architectures to test each of the stages involved in the Risks Assessment pipeline, in order to observe its effectiveness and limitations. In this work we will introduce SwinBERT, a powerful and recent Video Captioning model, complemented with YOLOv7, a model aimed at the Object Recognition task, for the analysis of home scenes. Moreover, we will use various lexical transformations and linguistic models to maximize the semantic similarity between the generated descriptions and the detected objects, aligning them with the annotations provided by the datasets used. This approach will allow us to achieve more accurate matches from a human perspective. In the experiments we will highlight the use of the large-scale Charades dataset, which was created to provide a vast dataset for visual analysis while preserving the naturalness and spontaneity of household and daily activities.
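
    A minimal sketch of the caption-to-object matching stage described above, assuming sentence embeddings (here via the sentence-transformers library) as the linguistic model; the caption and detection lists are placeholders standing in for SwinBERT and YOLOv7 outputs, and the thesis's actual lexical transformations may differ.

```python
# Sketch: score semantic similarity between a caption and detected objects
# (illustrative placeholders for SwinBERT / YOLOv7 outputs).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

caption = "a person is cutting vegetables with a knife in the kitchen"
detected = ["knife", "person", "cutting board", "sofa"]

cap_emb = model.encode(caption, convert_to_tensor=True)
obj_embs = model.encode(detected, convert_to_tensor=True)
scores = util.cos_sim(cap_emb, obj_embs)[0]

# Objects semantically consistent with the caption (e.g., a knife near a
# person) could feed a downstream risk-assessment rule.
for label, score in zip(detected, scores):
    print(f"{label}: {score.item():.3f}")
```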