
    VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval.

    This paper presents the VERGE interactive search engine, which is capable of browsing and searching video content. The system integrates content-based analysis and retrieval modules including video shot segmentation, concept detection, clustering, visual similarity search, and object-based search.

    COST292 experimental framework for TRECVID 2008

    In this paper, we give an overview of the four tasks submitted to TRECVID 2008 by COST292. The high-level feature extraction framework comprises four systems. The first system transforms a set of low-level descriptors into the semantic space using Latent Semantic Analysis and utilises neural networks for feature detection. The second system uses a multi-modal classifier based on SVMs and several descriptors. The third system uses three image classifiers based on ant colony optimisation, particle swarm optimisation and a multi-objective learning algorithm. The fourth system uses a Gaussian model for singing detection and a person detection algorithm. The search task is based on an interactive retrieval application combining retrieval functionalities in various modalities with a user interface supporting automatic and interactive search over all queries submitted. The rushes task submission is based on a spectral clustering approach for removing similar scenes based on the eigenvalues of a frame similarity matrix, and a redundancy removal strategy which depends on semantic feature extraction such as camera motion and faces. Finally, the submission to the copy detection task is conducted by two different systems. The first system consists of a video module and an audio module. The second system is based on mid-level features that are related to the temporal structure of videos.
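The rushes-task idea above (spectral clustering over a frame similarity matrix to group near-duplicate scenes) can be sketched roughly as follows. This is a generic illustration, not the authors' exact pipeline: the similarity values, the deterministic farthest-point initialisation, and the tiny k-means loop are all assumptions made to keep the example self-contained.

```python
import numpy as np

def spectral_cluster_frames(sim, k):
    """Cluster frames from a precomputed symmetric similarity matrix `sim`
    using a basic spectral embedding followed by a small k-means loop."""
    n = len(sim)
    # Normalised graph Laplacian: L = I - D^{-1/2} S D^{-1/2}
    d = sim.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    lap = np.eye(n) - d_inv_sqrt[:, None] * sim * d_inv_sqrt[None, :]
    # Eigenvectors of the k smallest eigenvalues form the spectral embedding.
    _, vecs = np.linalg.eigh(lap)
    embed = vecs[:, :k]
    # Deterministic farthest-point initialisation of k centers.
    centers = [embed[0]]
    for _ in range(1, k):
        dists = np.min([((embed - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(embed[int(np.argmax(dists))])
    centers = np.array(centers)
    # Plain k-means iterations on the embedded points.
    for _ in range(50):
        labels = np.argmin(((embed[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([embed[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels
```

Frames that fall in the same cluster are candidates for redundancy removal, keeping one representative shot per cluster.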

    The COST292 experimental framework for TRECVID 2007

    In this paper, we give an overview of the four tasks submitted to TRECVID 2007 by COST292. In the shot boundary (SB) detection task, four SB detectors have been developed and their results are merged using two merging algorithms. The framework developed for the high-level feature extraction task comprises four systems. The first system transforms a set of low-level descriptors into the semantic space using Latent Semantic Analysis and utilises neural networks for feature detection. The second system uses a Bayesian classifier trained with a “bag of subregions”. The third system uses a multi-modal classifier based on SVMs and several descriptors. The fourth system uses two image classifiers based on ant colony optimisation and particle swarm optimisation respectively. The system submitted to the search task is an interactive retrieval application combining retrieval functionalities in various modalities with a user interface supporting automatic and interactive search over all queries submitted. Finally, the rushes task submission is based on a video summarisation and browsing system comprising two different interest curve algorithms and three features.
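The abstract does not specify the two merging algorithms used to combine the four SB detectors' outputs; a common generic approach is a tolerance-window voting scheme, sketched below purely as an illustration (the `min_votes` and `tol` parameters are assumptions, not values from the paper).

```python
def merge_boundaries(detections, min_votes=2, tol=2):
    """Merge per-detector shot-boundary frame indices by voting.

    detections: list of sorted lists of frame indices, one per detector.
    A candidate boundary is accepted if at least `min_votes` detectors
    report a boundary within +/- `tol` frames of it; accepted boundaries
    closer than `tol` frames to an already-accepted one are dropped.
    """
    candidates = sorted({f for det in detections for f in det})
    merged = []
    for c in candidates:
        votes = sum(any(abs(c - f) <= tol for f in det) for det in detections)
        if votes >= min_votes and (not merged or c - merged[-1] > tol):
            merged.append(c)
    return merged
```

For example, three detectors reporting boundaries near frames 10 and 49 would yield those two merged boundaries, while a boundary seen by only one detector would be discarded.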

    Semantically enriching an open source sensor observation service implementation for accessing heterogeneous environmental data sources

    Many kinds of environmental data are nowadays publicly available, but spread across the web. This article discusses using the Sensor Observation Service (SOS) standard of the Open Geospatial Consortium (OGC) as a common interface for providing data from heterogeneous sources, which can then be integrated into a user-tailored environmental information system. To provide user-tailored, problem-specific information, the adjusted SOS is augmented with a semantic layer that maps the environmental information to ontology concepts. The necessary fusion of information from different domains and data types leads to several specific requirements for the SOS. Addressing these requirements, we have implemented an SOS which still conforms to the OGC SOS 1.0.0 standard specification. The developed SOS has been integrated into a publicly available demonstrator of our personalized environmental information system. Additionally, this article discusses future consequences for the SOS caused by the recently published SOS 2.0 specification.
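The semantic layer described above maps provider-specific observed properties to shared ontology concepts, so clients can query by concept rather than by source-specific identifier. A minimal sketch of that idea follows; all URIs, concept names, and the observation record shape are hypothetical examples, not the paper's actual vocabulary.

```python
# Hypothetical mapping from SOS observedProperty identifiers to
# ontology concepts; real deployments would load this from an ontology.
CONCEPT_MAP = {
    "urn:example:property:temperature": "env:AirTemperature",
    "urn:example:property:pm10": "env:ParticulateMatter10",
    "urn:example:property:o3": "env:Ozone",
}

def observations_for_concept(concept, observations):
    """Return observations whose observedProperty maps to `concept`.

    observations: list of dicts with at least an 'observedProperty' key,
    as might be parsed from SOS GetObservation responses.
    """
    props = {p for p, c in CONCEPT_MAP.items() if c == concept}
    return [o for o in observations if o["observedProperty"] in props]
```

With such a layer, a request for "env:AirTemperature" transparently gathers observations from every source whose native property identifier maps to that concept.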

    VERGE in VBS 2017

    Paper presented at the Video Browser Showdown (VBS'17), at the 23rd International Conference on MultiMedia Modeling (MMM'17), held on 4 January 2017 in Reykjavik, Iceland. This paper presents the VERGE interactive video retrieval engine, which is capable of browsing and searching video content. The system integrates several content-based analysis and retrieval modules, including concept detection, clustering, visual similarity search, object-based search, query analysis, and multimodal and temporal fusion. This work was supported by the EU's Horizon 2020 research and innovation programme under grant agreements H2020-687786 InVID, H2020-693092 MOVING, H2020-645012 KRISTINA and H2020-700024 TENSOR.

    Personalized environmental service configuration and delivery orchestration: The PESCaDO demonstrator

    Citizens are increasingly aware of the influence of environmental and meteorological conditions on the quality of their life. This results in an increasing demand for personalized environmental information, i.e., information tailored to citizens' specific context and background. In this demonstration, we present an environmental information system that addresses this demand in its full complexity, in the context of the PESCaDO EU project. Specifically, we show a system that supports the submission of user-generated queries related to environmental conditions. From the technical point of view, the system is tuned to discover reliable data on the web and to process these data in order to convert them into knowledge, which is stored in a dedicated repository. At run time, this information is transferred into an ontology-based knowledge base, from which information relevant to the specific user is deduced and communicated in the language of their preference.