
    VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval.

    This paper presents the VERGE interactive search engine, which is capable of browsing and searching video content. The system integrates content-based analysis and retrieval modules such as video shot segmentation, concept detection, and clustering, as well as visual similarity and object-based search.
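At its core, a visual-similarity module of the kind listed above reduces to nearest-neighbour search over shot descriptors. A minimal sketch of that idea, not VERGE's actual implementation; the function names, toy descriptors, and shot IDs are all hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def visual_search(query_vec, shot_index, top_k=3):
    """Rank indexed video shots by descriptor similarity to a query image."""
    scored = [(shot_id, cosine_similarity(query_vec, vec))
              for shot_id, vec in shot_index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy index of 3-dimensional shot descriptors (hypothetical data;
# real systems use descriptors with hundreds of dimensions).
index = {
    "shot_01": [0.9, 0.1, 0.0],
    "shot_02": [0.1, 0.8, 0.3],
    "shot_03": [0.85, 0.2, 0.1],
}
print(visual_search([1.0, 0.0, 0.0], index, top_k=2))
```

A production system would replace the linear scan with an approximate nearest-neighbour index, but the ranking logic is the same.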

    Deliverable 4.5: Context-aware Content Interpretation

    The current deliverable summarises the work conducted within task T4.5 of WP4, presenting our proposed approaches for contextualised content interpretation, aimed at gaining insightful contextualised views on content semantics. This is achieved through the adoption of appropriate context-aware semantic models developed within the project, and by enriching the semantic descriptions with background knowledge, thus deriving higher-level contextualised content interpretations that are closer to human perception and appraisal needs. More specifically, the main contributions of the deliverable are the following:
    - A theoretical framework using physics as a metaphor to develop different models of evolving semantic content.
    - A set of proof-of-concept models for semantic drifts due to field dynamics, introducing two methods to identify quantum-like (QL) patterns in evolving information-searching behaviour, and a QL model akin to particle-wave duality for semantic content classification.
    - Integration of two specific tools: Somoclu for drift detection and Ncpol2sdpa for entanglement detection.
    - An "energetic" hypothesis accounting for contextualised evolving semantic structures over time.
    - A proposed semantic interpretation framework, integrating (a) an ontological inference scheme based on Description Logics (DL), (b) a rule-based reasoning layer built on SPARQL Inferencing Notation (SPIN), and (c) an uncertainty management framework based on non-monotonic logics.
    - A novel scheme for contextualised reasoning on semantic drift, based on LRM dependencies and OWL's punning mechanism.
    - An implementation of SPIN rules for policy and ecosystem change management, adopting LRM preconditions and impacts. Specific use case scenarios demonstrate the context under development and the efficiency of the approach.
    - Respective open-source implementations and experimental results that validate all the above.
    All these contributions are tightly interlinked with the other PERICLES work packages: WP2 supplies the use cases and sample datasets for validating our proposed approaches, WP3 provides the models (LRM and Digital Ecosystem models) that form the basis for our semantic representations of content and context, and WP5 provides the practical application of the developed technologies to preservation processes, while the tools and algorithms presented in this deliverable can be deployed in combination with test scenarios that will be part of the WP6 test beds.
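The deliverable's drift detection is built on self-organising maps (Somoclu). A much-simplified stand-in for the underlying idea is to measure drift as the cosine distance between term-frequency snapshots of a corpus at different times; the function names and toy snapshots below are hypothetical, and this sketch is not the deliverable's actual method:

```python
import math
from collections import Counter

def term_distribution(doc):
    """Normalised term-frequency vector for one corpus snapshot."""
    counts = Counter(doc.lower().split())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def drift(dist_a, dist_b):
    """Cosine distance between two snapshots; higher means more drift."""
    terms = set(dist_a) | set(dist_b)
    dot = sum(dist_a.get(t, 0.0) * dist_b.get(t, 0.0) for t in terms)
    na = math.sqrt(sum(v * v for v in dist_a.values()))
    nb = math.sqrt(sum(v * v for v in dist_b.values()))
    return 1.0 - dot / (na * nb)

# Toy snapshots of how a concept's surrounding vocabulary changes over time.
snap_2010 = "tablet device stylus pen handwriting"
snap_2020 = "tablet device touchscreen app store"
print(round(drift(term_distribution(snap_2010),
                  term_distribution(snap_2020)), 3))
```

A SOM-based approach like Somoclu's instead tracks how cluster structure on the map deforms between snapshots, which captures richer topology than a single distance value.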

    Empowering persons with deafblindness: Designing an intelligent assistive wearable in the SUITCEYES project

    Deafblindness is a condition that limits communication capabilities primarily to the haptic channel. In the EU-funded project SUITCEYES, we design a system that allows haptic and thermal communication via soft interfaces and textiles. Based on user needs and informed by disability studies, we combine elements from smart textiles, sensors, semantic technologies, image processing, face and object recognition, machine learning, affective computing, and gamification. In this work, we present the underlying concepts and the overall design vision of the resulting assistive smart wearable.

    An Efficient Algorithm for Smoke and Flame Detection Using Color and Wavelet Analysis

    Fire detection is an important task in many applications. Smoke and flame are two essential indicators of fire in images. In this paper, we propose an algorithm to detect smoke and flame simultaneously in color dynamic video sequences obtained from a stationary camera in open space. Motion is a feature common to smoke and flame, and it is typically used first to extract candidate areas from the current frame. Adaptive background subtraction is applied at the motion-detection stage, and optical-flow-based movement estimation is used to identify chaotic motion. Moving blobs are then classified using spatial and temporal wavelet analysis, Weber contrast analysis, and color segmentation. We evaluated the algorithm for smoke detection on real video surveillance sequences from publicly available datasets and conducted a set of experiments. The results show that our algorithm achieves detection rates of 87% for smoke and 92% for flame.
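The adaptive background subtraction step in a pipeline like this is often realised as a running-average background model. A minimal 1-D sketch of that technique, not the authors' implementation; the frame data, threshold, and learning rate are hypothetical:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha)*bg + alpha*frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def moving_mask(bg, frame, threshold=30):
    """Flag pixels whose deviation from the background exceeds the threshold."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

# Toy 1-D "frames" of grey levels; the third pixel brightens suddenly,
# as a flame entering the scene would.
bg = [100.0, 100.0, 100.0]
frame = [102.0, 99.0, 180.0]
mask = moving_mask(bg, frame)      # only the third pixel is flagged as moving
bg = update_background(bg, frame)  # background slowly absorbs the new frame
```

Because the update is gradual (small `alpha`), slow illumination changes are absorbed into the background while fast-moving smoke and flame regions keep triggering the mask, which is what makes the subtraction "adaptive".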