131 research outputs found

    Ontology-Driven Semantic Enrichment Framework for Open Data Value Creation

    Get PDF
    The reviewed semantic enrichment frameworks lack mechanisms to assess the degree of semantic value added to flat-text resources in terms of knowledge and semantic capabilities. This complicates both driving the semantic value creation process toward a specific enrichment output and evaluating that output. To address this gap, we propose a semantic value creation solution that converts flat-text resources into knowledge resources: the ontology-driven semantic enrichment (ODSE) framework, which includes a mechanism for semantic valuation. The framework was developed by adopting the design science research methodology for information systems and leverages linked data principles for knowledge creation. It was demonstrated to determine the semantic capabilities enabled by syntactic additions, as well as the knowledge enabled by semantic additions to flat-text resources, along with their potential impact on knowledge creation, mining, and resource-usability effectiveness. The ODSE framework is reusable in semantic value creation implementations that transform flat text into semantic formats.
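
    The abstract above does not include the framework itself; as a rough illustration of the kind of transformation it describes, the following Python sketch turns a flat-text record into linked-data triples with rdflib. The vocabulary, property names, and record are invented for illustration and are not part of the ODSE framework.

```python
# A minimal sketch (not the ODSE framework itself) of turning a flat-text
# record into linked-data triples with rdflib. All names are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/odse/")  # hypothetical vocabulary

flat_record = {"id": "dataset-42",
               "title": "City air quality readings",
               "theme": "environment"}

g = Graph()
g.bind("ex", EX)

resource = EX[flat_record["id"]]
g.add((resource, RDF.type, EX.OpenDataResource))
g.add((resource, RDFS.label, Literal(flat_record["title"])))
# Linking the free-text theme to a concept URI is where semantic value is added:
g.add((resource, EX.hasTheme, EX[flat_record["theme"]]))

print(g.serialize(format="turtle"))
```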

    ViSOR: VIdeo Surveillance On-line Repository for annotation retrieval

    Full text link
    The aim of the ViSOR project [1] is to gather and make freely available a repository of surveillance and other video footage for the research community working on pattern recognition and multimedia retrieval.

    Dynamic video surveillance systems guided by domain ontologies

    Full text link
    This paper is a postprint of a paper submitted to and accepted for publication in the 3rd International Conference on Imaging for Crime Detection and Prevention (ICDP 2009), and is subject to Institution of Engineering and Technology Copyright; the copy of record is available at the IET Digital Library and IEEE Xplore. In this paper we describe how the knowledge related to a specific domain and the available visual analysis tools can be used to create dynamic visual analysis systems for video surveillance. Firstly, the knowledge is described in terms of the application domain (the types of objects, events, and so on that can appear in that domain) and the system capabilities (algorithms, detection procedures, and so on) using an existing ontology. Secondly, the ontology is integrated into a framework that creates a visual analysis system for each domain by inspecting the relations between the entities defined in the domain and system knowledge. Additionally, when necessary, analysis tools can be added or removed on-line. Experiments with the framework show that the proposed approach for creating dynamic visual analysis systems is suitable for analyzing different video surveillance domains without decreasing overall performance in terms of computational time or detection accuracy. This work was partially supported by the Spanish Administration agency CDTI (CENIT-VISION 2007-1007), by the Spanish Government (TEC2007-65400 SemanticVideo), by the Comunidad de Madrid (S-050/TIC-0223 - ProMultiDis), by the Cátedra Infoglobal-UAM for “Nuevas Tecnologías de video aplicadas a la seguridad”, by the Consejería de Educación of the Comunidad de Madrid, and by the European Social Fund.
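
    As a hedged illustration of the idea of inspecting relations between domain knowledge and system capabilities (not the paper's actual ontology or framework), the sketch below declares a few hypothetical detectors and domains in Turtle and uses a SPARQL query to select the analysis tools applicable to one domain.

```python
# Illustrative sketch: matching domain entities to available analysis tools
# by querying declared "detects" relations. All names are invented.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/vs/")  # hypothetical namespace
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/vs/> .
ex:PersonDetector a ex:AnalysisTool ; ex:detects ex:Person .
ex:CarDetector    a ex:AnalysisTool ; ex:detects ex:Car .
ex:ParkingDomain  a ex:Domain ; ex:containsObject ex:Car , ex:Person .
ex:LobbyDomain    a ex:Domain ; ex:containsObject ex:Person .
""", format="turtle")

# Select the tools whose detectable object types appear in a given domain.
q = """
SELECT DISTINCT ?tool WHERE {
  ex:LobbyDomain ex:containsObject ?obj .
  ?tool ex:detects ?obj .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.tool)   # -> only ex:PersonDetector for the lobby domain
```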

    Event Detection and Modelling for Security Application

    Get PDF
    PhD thesis. This thesis focuses on the design and implementation of a novel security-domain surveillance system framework that incorporates multimodal information sources to assist the task of event detection from video and social media sources. The comprehensive framework consists of four modules: Data Source, Content Extraction, Parsing, and Semantic Knowledge. A security domain ontology conceptual model is proposed for event representation, tailored to the elementary aspects of event description. The adaptation of the DOLCE foundational ontology promotes flexibility for heterogeneous ontologies to interoperate. The proposed mapping method, based on an eXtensible Stylesheet Language Transformation (XSLT) stylesheet approach, allows ontology enrichment and instance population to be executed efficiently. The dataset for visual semantic analysis uses video footage of the 2011 London Riots obtained from Scotland Yard. The concepts person, face, police, car, fire, running, kicking, and throwing are chosen for analysis. The visual semantic analysis results demonstrate successful detection of persons, actions, and events in the video footage of riot events. For social semantic analysis, a collection of tweets from Twitter channels that were actively reporting during the 2011 London Riots was compiled to create a Twitter corpus. The annotated data are mapped into the ontology based on six concepts: token, location, organization, sentence, verb, and noun. Several keywords related to the event, as presented in the visual and social media sources, are chosen to examine the correlation between both sources and to draw supplementary information about the event. The chosen keywords describe the actions running, throwing, and kicking; the activities attack, smash, and loot; the event fire; and the locations Hackney and Croydon. An experiment on concept-noun relations was also conducted. The ontology-based visual and social media analysis yields promising results in analysing long surveillance videos and a lengthy text corpus of social media user-generated content. By adopting an ontology-based approach, the proposed security-domain surveillance system framework enables a large amount of visual and social media data to be analysed systematically and automatically, and promotes a better method for event detection and understanding.
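
    The thesis's actual stylesheet and ontology are not reproduced in the abstract; the sketch below only illustrates the general XSLT mapping idea, transforming a hypothetical XML annotation file into RDF/XML-style instances with lxml. All element names and namespaces are invented.

```python
# Rough sketch of the XSLT-style mapping idea: turning an XML annotation
# document into RDF/XML instances with lxml. Names are illustrative only.
from lxml import etree

annotation_xml = etree.XML("""
<annotations>
  <detection frame="1021" concept="police"/>
  <detection frame="1187" concept="fire"/>
</annotations>
""")

xslt = etree.XML("""
<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
     xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:ex="http://example.org/security#">
  <xsl:template match="/annotations">
    <rdf:RDF>
      <xsl:for-each select="detection">
        <ex:Observation>
          <ex:concept><xsl:value-of select="@concept"/></ex:concept>
          <ex:frame><xsl:value-of select="@frame"/></ex:frame>
        </ex:Observation>
      </xsl:for-each>
    </rdf:RDF>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(str(transform(annotation_xml)))  # RDF/XML instances for each detection
```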

    Body Posture Recognition as a Discovery Problem: A Semantic-Based Framework

    Full text link
    The automatic detection of human activities requires large computational resources to increase recognition performance and sophisticated capturing devices to produce accurate results. Nevertheless, innovative analysis methods applied to data extracted by off-the-shelf detection peripherals can often return acceptable outcomes. In this paper a framework is proposed for automated posture recognition that exploits depth data provided by a commercial tracking device. The detection problem is handled as semantic-based resource discovery. A simple yet general data model and a corresponding ontology create the terminological substratum needed for automatic posture annotation via standard Semantic Web languages. A logic-based matchmaking then allows retrieved annotations to be compared with standard posture descriptions stored as individuals in a purpose-built Knowledge Base. Finally, non-standard inferences and a similarity-based ranking support the discovery of the best matching posture. This framework has been implemented in a prototypical tool, and preliminary experimental tests have been carried out on a reference dataset.
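
    The paper's matchmaking relies on description-logic inferences; as a much simplified stand-in for the ranking step only, the sketch below treats postures as flat attribute sets and ranks invented reference postures by the number of mismatched attributes.

```python
# Simplified stand-in for similarity-based posture ranking: the best match is
# the reference posture with the fewest mismatched attributes. All attribute
# names and values are invented for illustration.
REFERENCE_POSTURES = {
    "standing":  {"torso": "vertical", "legs": "straight", "arms": "down"},
    "sitting":   {"torso": "vertical", "legs": "bent",     "arms": "down"},
    "crouching": {"torso": "forward",  "legs": "bent",     "arms": "down"},
}

def rank_postures(observed: dict) -> list[tuple[str, int]]:
    """Rank reference postures by the number of mismatching attributes."""
    scores = []
    for name, ref in REFERENCE_POSTURES.items():
        mismatches = sum(1 for k, v in ref.items() if observed.get(k) != v)
        scores.append((name, mismatches))
    return sorted(scores, key=lambda s: s[1])

observed = {"torso": "vertical", "legs": "bent", "arms": "down"}
print(rank_postures(observed))   # "sitting" ranks first (0 mismatches)
```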

    Detecting Riots with Uncertain Information on the Semantic Web

    Get PDF
    PhD thesis. The ubiquitous nature of CCTV surveillance cameras means that substantial amounts of data are being generated. In the case of an investigation, this data must be manually browsed and analysed in search of information relevant to the case. As an example, it took more than 450 detectives to examine the hundreds of thousands of hours of video in the investigation of the 2011 London Riots: one of the largest investigations London's MET police has ever seen. Anything that can help the security forces save resources in investigations such as this is valuable. Consequently, automatic analysis of surveillance scenes is a growing research area. One of the research fronts tackling this issue is the semantic understanding of the scene, in which the output of computer vision algorithms is fed into semantic frameworks that combine the information from different sources and try to reach a better understanding of the scene. However, representing and reasoning with imprecise and uncertain information remains an outstanding issue in current implementations. The Dempster-Shafer (DS) Theory of Evidence has been proposed as a way to deal with imprecise and uncertain information, and in this thesis we use it for the main contributions. In our first contribution, we propose the use of DS theory and its Transferable Belief Model (TBM) realisation as a way to combine Bayesian priors, using the subjectivist view of Bayes' Theorem, in which probabilities are beliefs. We first compute the a priori probabilities of all pairs of events in the model. Then a global potential is created for each event using the TBM. This global potential encodes all the prior knowledge for that particular concept, with the benefit that when the potential is included in a knowledge base because it has been learned, all the knowledge it entails comes with it. We also propose a semantic web reasoner based on the TBM. This reasoner consists of an ontology to model any domain knowledge using the TBM constructs of Potentials, Focal Elements, and Configurations, together with implementations of the TBM operations in a semantic web framework. The goal is that after the model has been created, the TBM operations can be applied and the knowledge combined and queried. These operations are computationally complex, so we also propose parallel heuristics for the TBM operations, which allow us to apply this paradigm to problems of thousands of records. The final contribution is the use of the TBM semantic framework, together with the method for combining prior knowledge, to detect riots in CCTV footage from the 2011 London Riots. We use around a million and a half manually annotated frames with six different concepts related to the riot detection task, train the system, and infer the presence of riots in the test dataset. Tests show that the system yields a high recall but a low precision, meaning that there are many false positives. We also show that the framework scales well as more compute power becomes available.
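
    The core TBM operation mentioned here, the conjunctive combination of mass functions, is small enough to sketch. The example below is a minimal, unnormalised implementation over frozensets, with invented beliefs from a video detector and a social media source; it is not the thesis's ontology-backed reasoner or its parallel heuristics.

```python
# Minimal sketch of the TBM's unnormalised conjunctive combination of two
# mass functions, with focal elements represented as frozensets.
from collections import defaultdict

def conjunctive_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions; mass may land on the empty set (conflict)."""
    out = defaultdict(float)
    for a, wa in m1.items():
        for b, wb in m2.items():
            out[a & b] += wa * wb
    return dict(out)

# Two hypothetical sources giving beliefs about whether a scene contains a riot.
RIOT, CALM = frozenset({"riot"}), frozenset({"calm"})
EITHER = RIOT | CALM                       # the whole frame of discernment
m_video  = {RIOT: 0.6, EITHER: 0.4}        # vision detector: leans "riot"
m_social = {RIOT: 0.3, CALM: 0.2, EITHER: 0.5}

combined = conjunctive_combine(m_video, m_social)
for focal, mass in combined.items():
    print(set(focal) or "empty set (conflict)", round(mass, 3))
```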

    Semantic multimedia modelling & interpretation for annotation

    Get PDF
    The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, together with the accelerated development of low-cost storage devices, has boosted the multimedia data production rate drastically. Witnessing such ubiquity of digital images and videos, the research community has turned its attention to their meaningful utilization and management. Stored in monumental multimedia corpora, digital data need to be retrieved and organized in an intelligent way, leaning on the rich semantics involved. The utilization of these image and video collections demands proficient image and video annotation and retrieval techniques. Recently, the multimedia research community has been progressively shifting its emphasis to the personalization of these media. The main impediment in image and video analysis is the semantic gap: the discrepancy between a user's high-level interpretation of an image or video and its low-level computational interpretation. Content-based image and video annotation systems are remarkably susceptible to the semantic gap due to their reliance on low-level visual features for delineating semantically rich image and video content. Since visual similarity is not semantic similarity, there is a demand to break through this dilemma in an alternative way. The semantic gap can be narrowed by incorporating high-level and user-generated information into the annotation. High-level descriptions of images and videos are better able to capture the semantic meaning of multimedia content, but it is not always possible to collect this information. It is commonly agreed that the problem of high-level semantic annotation of multimedia is still far from being solved. This dissertation puts forward approaches for intelligent multimedia semantic extraction for high-level annotation, intending to bridge the gap between visual features and semantics. It proposes a framework for annotation enhancement and refinement for object/concept-annotated image and video datasets. The overall theme is to first purify the datasets of noisy keywords and then expand the concepts lexically and commonsensically, filling the vocabulary and lexical gaps to achieve high-level semantics for the corpus. The dissertation also explores a novel approach for high-level semantic (HLS) propagation through image corpora. The HLS propagation takes advantage of the semantic intensity (SI), the concept dominancy factor of an image, and of annotation-based semantic similarity between images: an image is a combination of various concepts, some of which are more dominant than others, and the semantic similarity of two images is based on the SI and the semantic similarity of the concepts in that pair of images. Moreover, the HLS propagation exploits clustering techniques to group similar images, so that a single effort by a human expert to assign a high-level semantic to a representative image can be propagated to the other images in its cluster. The investigation has been carried out on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approaches achieve a noticeable improvement towards bridging the semantic gap and reveal that the proposed system outperforms traditional systems.
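
    As a toy illustration of the propagation idea only (not the dissertation's method or data), the sketch below groups images by their dominant concept, lets a hypothetical expert label one representative per group, and propagates that high-level label to the rest of the group.

```python
# Toy sketch of HLS propagation: group images by their annotation profiles,
# label one representative per group, and propagate the label. The per-concept
# weights below stand in for "semantic intensity" values and are invented.
from collections import defaultdict

images = {
    "img1": {"car": 0.7, "road": 0.3},
    "img2": {"car": 0.6, "person": 0.4},
    "img3": {"tree": 0.8, "grass": 0.2},
    "img4": {"tree": 0.5, "person": 0.4},
}

def dominant_concept(weights: dict) -> str:
    return max(weights, key=weights.get)

# 1. Cluster (here: trivially, by dominant concept).
clusters = defaultdict(list)
for name, weights in images.items():
    clusters[dominant_concept(weights)].append(name)

# 2. A human expert labels one representative image per cluster ...
expert_labels = {"img1": "traffic scene", "img3": "park scene"}

# 3. ... and that label propagates to every other image in the same cluster.
propagated = {}
for members in clusters.values():
    label = next((expert_labels[m] for m in members if m in expert_labels), None)
    for m in members:
        propagated[m] = label

print(propagated)   # img2 inherits "traffic scene", img4 inherits "park scene"
```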

    Novel Methods for Forensic Multimedia Data Analysis: Part I

    Get PDF
    The increased usage of digital media in daily life has resulted in a demand for novel multimedia data analysis techniques that can help put these data to forensic use. Processing such data for police investigation and as evidence in a court of law, so that data interpretation is reliable, trustworthy, and efficient in terms of human time and other resources, will greatly help to speed up investigations and make them more effective. If such data are to be used as evidence in a court of law, techniques that can confirm origin and integrity are necessary. In this chapter we propose a new concept for multimedia processing techniques applicable to varied multimedia sources. We describe the background and motivation for our work, explain the overall system architecture, and present the data to be used. After a review of the state of the art for the types of multimedia data we consider in this work, we describe the methods and techniques we are developing that go beyond the state of the art. The work is continued in Part II of this topic.

    Semantic multimedia modelling & interpretation for search & retrieval

    Get PDF
    The revolution in multimedia-equipped devices has culminated in a proliferation of image and video data. Owing to this omnipresence, these data have become part of our daily life. This data production rate, however, comes with the predicament that it surpasses our capacity to make use of the data; perhaps one of the most prevalent problems of this digital era is information overload. Until now, progress in image and video retrieval research has achieved only restrained success, owing to its interpretation of images and videos in terms of primitive features. Humans generally access multimedia assets in terms of semantic concepts, and the retrieval of digital images and videos is impeded by the semantic gap: the discrepancy between a user's high-level interpretation of an image and the information that can be extracted from the image's physical properties. Content-based image and video retrieval systems are particularly susceptible to the semantic gap due to their dependence on low-level visual features for describing image and video content. The semantic gap can be narrowed by including high-level features, since high-level descriptions of images and videos are better able to capture the semantic meaning of image and video content. It is generally understood that the problem of image and video retrieval is still far from being solved. This thesis proposes an approach for intelligent multimedia semantic extraction for search and retrieval, intending to bridge the gap between visual features and semantics. It proposes a Semantic Query Interpreter (SQI) for images and videos, which selects the pertinent terms from the user query and analyses them lexically and semantically; the proposed SQI reduces the semantic as well as the vocabulary gap between users and the machine. The thesis also explores a novel ranking strategy for image search and retrieval. SemRank is a novel system that incorporates Semantic Intensity (SI) when exploring the semantic relevancy between the user query and the available data. Semantic Intensity captures the concept dominancy factor of an image: an image is a combination of various concepts, and some of them are more dominant than the others. SemRank ranks the retrieved images on the basis of Semantic Intensity. The investigations are made on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approach is successful in bridging the semantic gap and reveal that the proposed system outperforms traditional image retrieval systems.
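
    The ranking idea can be sketched in a few lines: each image carries per-concept weights standing in for Semantic Intensity, and retrieved images are ordered by the summed weight of the query concepts. The weights and image names below are invented; this is not the thesis's SemRank implementation.

```python
# Hedged sketch of SI-based ranking: order images by how much per-concept
# weight ("semantic intensity") they assign to the query concepts.
def sem_rank(query_concepts: set, images: dict) -> list[tuple[str, float]]:
    """Order images by the summed semantic intensity of the query concepts."""
    scored = []
    for name, intensities in images.items():
        score = sum(intensities.get(c, 0.0) for c in query_concepts)
        if score > 0:
            scored.append((name, score))
    return sorted(scored, key=lambda s: s[1], reverse=True)

images = {
    "beach1": {"sea": 0.6, "sand": 0.3, "person": 0.1},
    "city3":  {"car": 0.5, "person": 0.4, "road": 0.1},
    "beach2": {"sea": 0.2, "person": 0.7, "boat": 0.1},
}

print(sem_rank({"sea", "person"}, images))
# beach2 (0.9) ranks above beach1 (0.7) and city3 (0.4)
```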

    Design of a Controlled Language for Critical Infrastructures Protection

    Get PDF
    We describe a project for the construction of a controlled language for critical infrastructure protection (CIP). The project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications, and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an analogous work carried out during the sixties in the field of nuclear science, known as the Euratom Thesaurus.
    JRC.G.6 - Security technology assessment
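
    As an illustrative aside (not the project's thesaurus), a controlled vocabulary of this kind could be encoded as a SKOS concept scheme; the sketch below builds a few invented CIP terms with broader/related links using rdflib.

```python
# Illustrative sketch only: a tiny controlled vocabulary for critical
# infrastructure protection encoded as SKOS with rdflib. Terms and hierarchy
# are invented, not taken from the project's thesaurus.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

CIP = Namespace("http://example.org/cip#")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("cip", CIP)

g.add((CIP.scheme, RDF.type, SKOS.ConceptScheme))
for term in ("Incident", "PowerGrid", "Blackout"):
    g.add((CIP[term], RDF.type, SKOS.Concept))
    g.add((CIP[term], SKOS.prefLabel, Literal(term)))
    g.add((CIP[term], SKOS.inScheme, CIP.scheme))

# "Blackout" filed under the broader concept "Incident", as a thesaurus would do.
g.add((CIP.Blackout, SKOS.broader, CIP.Incident))
g.add((CIP.Blackout, SKOS.related, CIP.PowerGrid))

print(g.serialize(format="turtle"))
```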