
    Dynamic video surveillance systems guided by domain ontologies

    This paper is a postprint of a paper submitted to and accepted for publication in the 3rd International Conference on Imaging for Crime Detection and Prevention (ICDP 2009), and is subject to Institution of Engineering and Technology Copyright. The copy of record is available at the IET Digital Library and IEEE Xplore.

    In this paper we describe how knowledge about a specific domain and the available visual analysis tools can be used to create dynamic visual analysis systems for video surveillance. First, the knowledge is described in terms of the application domain (the types of objects, events, etc. that can appear in that domain) and the system capabilities (algorithms, detection procedures, etc.) using an existing ontology. Second, the ontology is integrated into a framework that creates a visual analysis system for each domain by inspecting the relations between the entities defined in the domain and system knowledge. Additionally, analysis tools can be added or removed on-line when necessary. Experiments applying the framework show that the proposed approach for creating dynamic visual analysis systems is suitable for analyzing different video surveillance domains without degrading overall performance in terms of computational time or detection accuracy.

    This work was partially supported by the Spanish Administration agency CDTI (CENIT-VISION 2007-1007), by the Spanish Government (TEC2007-65400 SemanticVideo), by the Comunidad de Madrid (S-050/TIC-0223 - ProMultiDis), by the Cátedra Infoglobal-UAM for "Nuevas Tecnologías de video aplicadas a la seguridad", by the Consejería de Educación of the Comunidad de Madrid, and by the European Social Fund.
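The abstract above describes assembling an analysis system by inspecting relations between domain entities and tool capabilities declared in an ontology. A minimal sketch of that matching step, with all class, tool, and entity names invented for illustration (the paper's actual ontology and framework are far richer):

```python
# Illustrative sketch (not the authors' code): selecting analysis tools
# whose declared capabilities cover the entities of a surveillance domain.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    detects: set  # entity types this analysis tool can detect

@dataclass
class Domain:
    name: str
    entities: set  # object/event types expected in this domain

def assemble_pipeline(domain, tools):
    """Select tools that each contribute at least one uncovered entity."""
    selected, covered = [], set()
    for tool in tools:
        useful = tool.detects & domain.entities
        if useful - covered:          # tool adds at least one new entity
            selected.append(tool)
            covered |= useful
    return selected, domain.entities - covered  # chosen tools + coverage gaps

tools = [Tool("person_detector", {"person"}),
         Tool("vehicle_detector", {"car", "truck"}),
         Tool("abandoned_object", {"left_luggage"})]
parking = Domain("parking_lot", {"person", "car"})
selected, gaps = assemble_pipeline(parking, tools)
print([t.name for t in selected], gaps)
```

Because selection is data-driven, re-running `assemble_pipeline` with an updated tool list corresponds loosely to the on-line addition and removal of analysis tools mentioned in the abstract.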

    A concept–relationship acquisition and inference approach for hierarchical taxonomy construction from tags

    Author names used in this publication: W. M. Wang, C. F. Cheung, and Adela S. M. Lau. 2009-2010 > Academic research: refereed > Publication in refereed journal. Accepted manuscript.

    Switching Partners: Dancing with the Ontological Engineers

    Ontologies are today being applied in almost every field to support the alignment and retrieval of data of distributed provenance. Here we focus on new ontological work on dance and on related cultural phenomena belonging to what UNESCO calls the "intangible heritage." Currently, data and information about dance, including video data, are stored in an uncontrolled variety of ad hoc ways. This not only prevents retrieval, comparison, and analysis of the data, but may also impinge on our ability to preserve the data that already exist. Here we explore recent technological developments that are designed to counteract such problems by allowing information to be retrieved across disciplinary, cultural, linguistic, and technological boundaries. Software applications such as the ones envisaged here will enable speedier recovery of data and facilitate its analysis in ways that will assist both the archiving of and research on dance.

    Recognition and Understanding of Meetings Overview of the European AMI and AMIDA Projects

    The AMI and AMIDA projects are concerned with the recognition and interpretation of multiparty (face-to-face and remote) meetings. Within these projects we have developed the following: (1) an infrastructure for recording meetings using multiple microphones and cameras; (2) a one-hundred-hour, manually annotated meeting corpus; (3) a number of techniques for indexing and summarizing meeting videos using automatic speech recognition and computer vision; and (4) an extensible framework for browsing and searching meeting videos. We give an overview of the various techniques developed in AMI (mainly involving face-to-face meetings), their integration into our meeting browser framework, and future plans for AMIDA (Augmented Multiparty Interaction with Distant Access), the follow-up project to AMI. Technical and business information related to these two projects can be found at www.amiproject.org, on the Scientific and Business portals respectively.

    A semantic concept for the mapping of low-level analysis data to high-level scene descriptions

    Together with the growing need for security, an increasing amount of surveillance content is being created. To enable fast and reliable searching over the recordings of the hundreds or thousands of surveillance sensors installed in a single facility, indexing this content in advance is indispensable. To this end, the concept of Smart Indexing & Retrieval (SIR) enables cost-efficient searches through the generation of high-level metadata. Since it is becoming ever more difficult to generate these data manually at acceptable cost in time and money, this metadata must be produced automatically on the basis of low-level analysis data.

    While previous approaches are strongly domain-dependent, this work presents a generic concept for mapping the results of low-level analysis data to semantic scene descriptions. The constituent elements of this approach and the concepts underlying them are introduced, and an introduction to their application is given. The main contributions of the presented approach are its generality and the early stage at which the step from the low-level to the high-level representation is taken. This reasoning in the metadata domain is carried out in small time windows, while reasoning about more complex scenes is performed in the semantic domain. Using this approach, even an unsupervised self-assessment of the analysis results becomes possible.
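The windowed reasoning step described in the abstract can be sketched as follows: low-level detections are grouped into short time windows, and each window is then mapped to a high-level scene label. The window size, object classes, threshold, and labels here are all assumptions for illustration, not the thesis' actual concept:

```python
# Hedged sketch: aggregating low-level detections in small time windows
# and mapping each window to a high-level scene description.
from collections import defaultdict

def summarize(detections, window=5):
    """detections: (timestamp, object_class) pairs from low-level analysis.
    Returns a high-level label per time window (hypothetical labels)."""
    bins = defaultdict(list)
    for t, cls in detections:
        bins[t // window].append(cls)   # assign detection to its window
    labels = {}
    for w, objs in sorted(bins.items()):
        n_people = objs.count("person")
        labels[w] = "crowded" if n_people >= 3 else "quiet"
    return labels

dets = [(0, "person"), (1, "person"), (2, "person"), (7, "person")]
print(summarize(dets))  # {0: 'crowded', 1: 'quiet'}
```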

    Ontology And Taxonomy Collaborated Framework For Meeting Classification

    A framework for classification of meeting videos is proposed in this paper. We define our framework as a four-level concept hierarchy comprising movements, events, behavior, and genre, based on the meeting ontology and taxonomy. Ontology is the formal specification of domain concepts and their relationships. Taxonomy is the general categorization based on class/subclass relationships. This concept hierarchy is mapped to an implementation of Finite State Machines (FSM) and a Rule-Based System (RBS) to classify the meetings. Events are detected by the FSMs based on the movements (head and hand tracks). Classification of the meetings is performed by the RBS based on the events and behaviors of the people present in the meetings. Our framework is novel and scalable, capable of adding new meeting types with no re-training. We conducted experiments on various meeting sequences and classified meetings into voting, argument, presentation, and object passing. This framework has applications in automated video surveillance, video segmentation and retrieval (multimedia), human computer interaction, and augmented reality.
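The movement→event step described above can be illustrated with a tiny finite state machine that turns a stream of movement labels into a detected event. The states and movement labels below are invented for illustration; the paper's actual FSMs over head and hand tracks are richer:

```python
# Hedged sketch of an FSM detecting an event from movement labels,
# in the spirit of the framework above (not the authors' code).
def detect_hand_raise(movements):
    """Return True if the track contains hand_up followed by hand_hold."""
    state = "idle"
    for m in movements:
        if state == "idle" and m == "hand_up":
            state = "raised"
        elif state == "raised" and m == "hand_hold":
            return True    # event detected: a vote-like hand raise
        elif m == "hand_down":
            state = "idle"  # retraction returns the FSM to a stable state
    return False

print(detect_hand_raise(["hand_up", "hand_hold"]))   # True
print(detect_hand_raise(["hand_up", "hand_down"]))   # False
```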

    Ontology and Taxonomy Collaborated Framework for Meeting Classification

    A framework for classification of meeting videos is proposed in this paper. Our goal is to use this framework to analyze human motion data and perform automatic meeting classification. We use a rule-based system and a state machine to analyze the videos, and we use three levels of a context hierarchy, namely movements (and their attributes), events (actions), and behavior, to identify the activities and classify the meeting type based on the meeting ontology. We also define a meeting ontology that is determined by the knowledge base of various meeting sequences. This ontology validates and refines the taxonomy based on the hierarchy of events and behaviors, and regroups similar meetings into one category, refining the classes. Here, ontology refers to determining the class of a meeting video based on relationships, while taxonomy is the categorization of meetings according to certain criteria. The rule-based system is the primary framework manager, which recognizes behaviors based on the events detected by the state machine. It also periodically rolls back the state machine from an erroneous state space to a stable state. The state machine detects the events using a sliding temporal window of human movements. Our approach is appropriate for classifying meetings in complex sequences involving various actions and partial occlusion of tracked objects. Our framework is unique and scalable, with the capability to add new meeting types with little or no modification to the current framework. Using our framework, we are able to correctly classify various meeting sequences such as voting, argument, presentation, and object passing in our experiments. This framework is applicable to automated video surveillance, video segmentation and retrieval (multimedia), human computer interaction, and augmented reality.
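The event→behavior step, where the rule-based system assigns a meeting type from detected events, can be sketched as a simple rule table. The rules and event names below are hypothetical placeholders, not the authors' actual rule set:

```python
# Illustrative rule-based classification over detected events, in the
# spirit of the RBS described above. Adding a meeting type is just
# appending a rule -- no retraining, matching the scalability claim.
RULES = [
    ("voting",         {"hand_raise"}),
    ("presentation",   {"stand_up", "point_at_screen"}),
    ("object_passing", {"hand_extend", "object_transfer"}),
]

def classify_meeting(events):
    """Return the first meeting type whose required events all occurred."""
    observed = set(events)
    for label, required in RULES:
        if required <= observed:
            return label
    return "unknown"

print(classify_meeting(["hand_raise", "sit_down"]))  # voting
```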

    Unusual event detection in real-world surveillance applications

    Given the near-ubiquity of CCTV, there is significant ongoing research effort to apply image and video analysis methods, together with machine learning techniques, towards autonomous analysis of such data sources. However, traditional approaches to scene understanding remain dependent on training based on human annotations that need to be provided for every camera sensor. In this thesis, we propose an unusual event detection and classification approach that is applicable to real-world visual monitoring applications. The goal is to infer the usual behaviours in the scene and to judge the normality of the scene on the basis of the learned model. The first requirement for the system is that it should not demand annotated data for training. Annotation of the data is a laborious task, and it is not feasible in practice to annotate video data for each camera as an initial stage of event detection. Furthermore, even obtaining training examples for the unusual event class is challenging, due to the rarity of such events in video data. Another requirement for the system is online generation of results. In surveillance applications, it is essential to generate real-time results to allow a swift response by a security operator to prevent harmful consequences of unusual and antisocial events. The online learning capability also means that the model can be continuously updated to accommodate natural changes in the environment. The third requirement for the system is the ability to run the process indefinitely. These requirements are necessary for real-world surveillance applications, and approaches that conform to them need to be investigated. This thesis investigates unusual event detection methods that conform to these real-world requirements, through theoretical and experimental study of machine learning and computer vision algorithms.
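The three requirements above (no annotations, online results, indefinite operation) can be illustrated with a minimal running-Gaussian normality model: each observation both updates the model (Welford's online update) and is scored against the behaviour learned so far. The scalar feature and z-score threshold are illustrative assumptions, not the thesis' actual model:

```python
# Hedged sketch of online, unannotated normality modelling: flag an
# observation as unusual when it lies far from the running estimate
# of usual behaviour, then fold it into the model.
import math

class OnlineNormalityModel:
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold  # z-score beyond which we flag "unusual"

    def update(self, x):
        """Ingest one observation; return True if it looks unusual so far."""
        unusual = False
        if self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                unusual = True
        # Welford's online update: constant memory, runs indefinitely
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return unusual

model = OnlineNormalityModel()
flags = [model.update(x) for x in [1.0, 1.1, 0.9, 1.0, 1.05, 9.0]]
print(flags)  # only the last, far-off observation is flagged
```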

    Irish Machine Vision and Image Processing Conference Proceedings 2017
