5 research outputs found

    Content-Based Video Retrieval (CBVR) system for CCTV surveillance videos

    The multi-dimensional nature of image and video data makes its processing and interpretation a complex task that normally requires considerable processing power. Moreover, understanding the meaning of video content and storing it in a quickly searchable and readable form requires image processing methods which, if run on the video stream for every query, would not be cost effective and in some cases would be impossible due to time restrictions. Hence, to speed up search, it is desirable to store video together with its extracted metadata. The storage model itself is one of the challenges in this context: with current CCTV technology, a petabyte-scale data management system is estimated to be required, and this estimate is expected to grow rapidly as advances in video recording devices lead to higher-resolution sensors and larger frame sizes. At the same time, the increasing demand for object tracking in video streams has motivated research on Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR). In this paper, we present the design and implementation of a framework and a data model for CCTV surveillance video on an RDBMS, providing the functions of a surveillance monitoring system together with a tagging structure for event detection. Based on recent results, we believe this is a promising direction for surveillance video search compared to existing solutions.
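    As a rough illustration of the kind of RDBMS-backed data model the paper describes, the sketch below (Python with SQLite) stores video files, time segments, and event tags in separate indexed tables so that a search hits metadata rather than the video stream. The schema and all table and column names are assumptions for illustration, not the paper's actual design.

```python
import sqlite3

# Minimal sketch of a surveillance-video metadata store: raw video
# files are referenced alongside extracted segments and event tags,
# so queries run against indexed metadata, not the video itself.
conn = sqlite3.connect("surveillance.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS video (
    video_id   INTEGER PRIMARY KEY,
    camera_id  TEXT NOT NULL,
    file_path  TEXT NOT NULL,
    started_at TEXT NOT NULL          -- ISO-8601 timestamp
);
CREATE TABLE IF NOT EXISTS segment (
    segment_id INTEGER PRIMARY KEY,
    video_id   INTEGER REFERENCES video(video_id),
    start_s    REAL NOT NULL,         -- offset into the file, seconds
    end_s      REAL NOT NULL
);
CREATE TABLE IF NOT EXISTS event_tag (
    tag_id     INTEGER PRIMARY KEY,
    segment_id INTEGER REFERENCES segment(segment_id),
    label      TEXT NOT NULL,         -- e.g. 'person', 'vehicle'
    confidence REAL
);
CREATE INDEX IF NOT EXISTS idx_tag_label ON event_tag(label);
""")

# An event query then reduces to an indexed join over the metadata.
rows = conn.execute("""
    SELECT v.file_path, s.start_s, s.end_s
    FROM event_tag t
    JOIN segment s ON s.segment_id = t.segment_id
    JOIN video   v ON v.video_id   = s.video_id
    WHERE t.label = ? AND t.confidence > ?
""", ("vehicle", 0.8)).fetchall()
```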

    Content-based retrieval adapted to video surveillance

    Video surveillance systems are ubiquitous in busy public places, and their presence in private settings is growing steadily. While an airport or a train station can afford to employ a surveillance team to monitor video streams in real time, it is unlikely that a private individual would make such an expenditure for a home surveillance system. Moreover, the use of surveillance video for forensic analysis often requires an a posteriori analysis of the observed events. The recording history often amounts to several days, or even weeks, of video. If the moment at which an event of interest occurred is unknown, a video search tool is essential. The goal of such a tool is to identify the video segments whose content matches an approximate description of the event (or object) being sought. This thesis presents a data structure for indexing the content of long surveillance videos, together with a content-based search algorithm built on that structure. Given the description of an object in terms of attributes such as its size, colour, and direction of motion, the system identifies in real time the video segments containing objects matching that description. We have demonstrated empirically that our system works in several use cases, such as counting moving objects, trajectory recognition, abandoned-object detection, and parked-vehicle detection. This thesis also includes a section on image quality assessment. The method presented determines qualitatively the type and amount of distortion applied to an image by an acquisition system. This technique can be used to estimate the parameters of the acquisition system in order to correct the images, or to assist in the development of new acquisition systems.
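    Purely as an illustration of attribute-based indexing of the kind this thesis describes, the sketch below quantises an object's size, colour, and motion direction into coarse buckets and intersects the resulting posting lists at query time. The bucket scheme and all names are assumptions, not the thesis's actual data structure.

```python
from collections import defaultdict

# Hypothetical sketch: index moving objects by coarse attribute buckets
# so an approximate description returns candidate segments in real time,
# without rescanning the video.
class AttributeIndex:
    def __init__(self):
        self.index = defaultdict(set)  # (attribute, bucket) -> segment ids

    @staticmethod
    def _buckets(obj):
        # Quantise continuous attributes into coarse, searchable bins.
        yield ("size", int(obj["area"] // 500))                  # pixel-area bins
        yield ("color", obj["dominant_color"])                   # e.g. 'red'
        yield ("direction", round(obj["heading_deg"] / 45) % 8)  # 8 compass bins

    def add(self, segment_id, obj):
        for key in self._buckets(obj):
            self.index[key].add(segment_id)

    def query(self, description):
        # Intersect the posting lists of every attribute the user supplied.
        sets = [self.index[key] for key in self._buckets(description)]
        return set.intersection(*sets) if sets else set()

idx = AttributeIndex()
idx.add(1, {"area": 1200, "dominant_color": "red", "heading_deg": 90})
idx.add(2, {"area": 4000, "dominant_color": "blue", "heading_deg": 270})
print(idx.query({"area": 1300, "dominant_color": "red", "heading_deg": 80}))  # {1}
```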

    Exploratory search through large video corpora

    Activity retrieval is a growing field in electrical engineering that specializes in the search and retrieval of relevant activities and events in video corpora. With the affordability and popularity of cameras for government, personal, and retail use, the quantity of available video data is rapidly outscaling our ability to reason over it. To empower users to navigate and interact with the contents of these video corpora, we propose a framework for exploratory search that emphasizes activity structure and search-space reduction over complex feature representations. Exploratory search is a user-driven process wherein a person provides a system with a query describing the activity, event, or object of interest. Typically, this description takes the implicit form of one or more exemplar videos, but it can also involve an explicit description. The system returns candidate matches, followed by query refinement and iteration. System performance is judged by the run time of the system and the precision/recall curve of the query matches returned. Scaling is one of the primary challenges in video search. From vast web-video archives like YouTube (1 billion videos and counting) to the 30 million active surveillance cameras shooting an estimated 4 billion hours of footage every week in the United States, trying to find a set of matches can be like looking for a needle in a haystack. Our goal is to create an efficient archival representation of video corpora that can be computed in real time as video streams in, and that then enables a user to quickly obtain a set of matching results. First, we design a system for rapidly identifying simple queries in large-scale video corpora. Instead of focusing on feature design, our system focuses on the spatiotemporal relationships between those features as a means of disambiguating an activity of interest from background. We define a semantic feature vocabulary of concepts that are both readily extracted from video and easily understood by an operator. As data streams in, features are hashed to an inverted index and retrieved in constant time once the system is presented with a user's query. We take a zero-shot approach to exploratory search: the user manually assembles vocabulary elements such as color, speed, size, and type into a graph. Given that information, we perform an initial downsampling of the archived data and design a novel dynamic programming approach, based on genome sequencing, to search for similar patterns. Experimental results indicate that this approach outperforms other methods for detecting activities in surveillance video datasets. Second, we address the problem of representing complex activities that take place over long spans of space and time. Subgraph and graph matching methods have seen limited use in exploratory search because both problems are provably NP-hard. In this work, we render these problems computationally tractable by identifying the maximally discriminative spanning tree (MDST) and using dynamic programming to optimally reduce the archive data, based on a custom algorithm for tree matching in attributed relational graphs. We demonstrate the efficacy of this approach on popular surveillance video datasets in several modalities. Finally, we design an approach for successive search-space reduction in subgraph matching problems. Given a query graph and archival data, our algorithm iteratively selects spanning trees from the query graph that optimize the expected search-space reduction at each step until the archive converges. We use this approach to reason efficiently over video surveillance datasets, simulated data, and large graphs of protein data.
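    The abstract's genome-sequencing-inspired dynamic program is not spelled out here, so the sketch below substitutes a standard Smith-Waterman-style local alignment over sequences of discrete feature symbols to illustrate the general idea of matching an activity pattern against an archived stream. The scoring parameters and the symbol encoding are assumptions.

```python
# Illustrative stand-in: Smith-Waterman-style local alignment over
# sequences of discrete feature symbols (e.g. quantised color/speed/size
# codes per time step). The paper's actual dynamic program may differ.
def local_align(query, archive, match=2, mismatch=-1, gap=-1):
    n, m = len(query), len(archive)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best, best_j = 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if query[i - 1] == archive[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_j = H[i][j], j
    return best, best_j  # best local score and its end position in the archive

# Symbols could encode per-step vocabulary elements (color, speed, ...).
score, end = local_align("RSFS", "XXRSFSYY")
print(score, end)  # 8 6: the query aligns perfectly, ending at archive index 6
```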

    Context-based multimedia semantics modelling and representation

    The evolution of the World Wide Web, increases in processing power, and greater network bandwidth have contributed to the proliferation of digital multimedia data. Since multimedia data has become a critical resource in many organisations, there is an increasing need for efficient access to data in order to share it, extract knowledge, and ultimately use that knowledge to inform business decisions. Existing methods for multimedia semantic understanding are limited to computable low-level features, which raises the question of how to identify and represent the high-level semantic knowledge in multimedia resources. In order to bridge the semantic gap between multimedia low-level features and high-level human perception, this thesis seeks to identify the possible contextual dimensions in multimedia resources to help in semantic understanding and organisation. This thesis investigates the use of contextual knowledge to organise and represent the semantics of multimedia data, aimed at efficient and effective multimedia content-based semantic retrieval. A mixed-methods research approach incorporating both Design Science Research and Formal Methods for investigation and evaluation was adopted. A critical review of current approaches to multimedia semantic retrieval was undertaken and various shortcomings identified. The objectives for a solution were defined, which led to the design, development, and formalisation of a context-based model for multimedia semantic understanding and organisation. The model relies on the identification of different contextual dimensions in multimedia resources to aggregate meaning and facilitate semantic representation, knowledge sharing, and reuse. A prototype system for multimedia annotation, CONMAN, was built to demonstrate aspects of the model and validate the research hypothesis, H₁. Towards providing richer and clearer semantic representation of multimedia content, the original contributions of this thesis to Information Science include: (a) a novel framework and formalised model for organising and representing the semantics of heterogeneous visual data; and (b) a novel S-Space model that is aimed at visual-information semantic organisation and discovery and forms the foundations for automatic video semantic understanding.
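    The context-based model itself is only formalised in the thesis; as a loose illustration of annotating a multimedia resource along several contextual dimensions and matching on them, the sketch below is hypothetical throughout, including its dimension names.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: annotate a multimedia resource along several
# contextual dimensions and retrieve resources whose contexts overlap.
# The dimension names are illustrative, not the thesis's own model.
@dataclass
class MediaResource:
    uri: str
    context: dict = field(default_factory=dict)  # dimension -> set of values

def add_context(resource, dimension, *values):
    resource.context.setdefault(dimension, set()).update(values)

def matches(resource, **constraints):
    # A resource matches if every requested dimension shares a value.
    return all(resource.context.get(dim, set()) & set(vals)
               for dim, vals in constraints.items())

clip = MediaResource("http://example.org/clip42.mp4")
add_context(clip, "spatial", "airport", "terminal-2")
add_context(clip, "temporal", "2016-05-01")
add_context(clip, "event", "arrival")

print(matches(clip, spatial={"airport"}, event={"arrival"}))  # True
```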