VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval.
This paper presents the VERGE interactive search engine, which supports browsing and searching within video content. The system integrates content-based analysis and retrieval modules such as video shot segmentation, concept detection, and clustering, as well as visual similarity and object-based search.
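As a rough illustration of how a visual-similarity module of this kind can work, the sketch below indexes precomputed frame descriptors and retrieves the nearest neighbours of a query frame by cosine similarity. The descriptor source, array shapes, and function names are assumptions for illustration only, not VERGE's actual implementation.

```python
import numpy as np

def build_index(descriptors: np.ndarray) -> np.ndarray:
    """L2-normalise precomputed frame descriptors (n_frames x dim)."""
    norms = np.linalg.norm(descriptors, axis=1, keepdims=True)
    return descriptors / np.clip(norms, 1e-12, None)

def visual_similarity_search(index: np.ndarray, query: np.ndarray, top_k: int = 5):
    """Return indices of the top_k frames most similar to the query descriptor."""
    q = query / max(np.linalg.norm(query), 1e-12)
    scores = index @ q  # cosine similarity against all indexed frames
    return np.argsort(scores)[::-1][:top_k]

# Hypothetical usage with random descriptors standing in for real visual features.
rng = np.random.default_rng(0)
index = build_index(rng.normal(size=(1000, 512)).astype(np.float32))
print(visual_similarity_search(index, rng.normal(size=512).astype(np.float32)))
```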
Deliverable 4.5: Context-aware Content Interpretation
The current deliverable summarises the work conducted within task T4.5 of WP4, presenting our proposed approaches for contextualised content interpretation, aimed at gaining insightful contextualised views on content semantics. This is achieved through the adoption of appropriate context-aware semantic models developed within the project, and by enriching the semantic descriptions with background knowledge, thus deriving higher-level contextualised content interpretations that are closer to human perception and appraisal needs. More specifically, the main contributions of the deliverable are the following:

- A theoretical framework using physics as a metaphor to develop different models of evolving semantic content.
- A set of proof-of-concept models for semantic drifts due to field dynamics, introducing two methods to identify quantum-like (QL) patterns in evolving information-searching behaviour, and a QL model akin to particle-wave duality for semantic content classification.
- Integration of two specific tools: Somoclu for drift detection and Ncpol2sdpa for entanglement detection.
- An "energetic" hypothesis accounting for contextualised evolving semantic structures over time.
- A proposed semantic interpretation framework, integrating (a) an ontological inference scheme based on Description Logics (DL), (b) a rule-based reasoning layer built on SPARQL Inference Notation (SPIN), and (c) an uncertainty management framework based on non-monotonic logics.
- A novel scheme for contextualised reasoning on semantic drift, based on LRM dependencies and OWL's punning mechanism.
- An implementation of SPIN rules for policy and ecosystem change management, with the adoption of LRM preconditions and impacts. Specific use case scenarios demonstrate the context under development and the efficiency of the approach.
- Open-source implementations and experimental results that validate all of the above.

All these contributions are tightly interlinked with the other PERICLES work packages: WP2 supplies the use cases and sample datasets for validating our proposed approaches, WP3 provides the models (LRM and Digital Ecosystem models) that form the basis for our semantic representations of content and context, and WP5 provides the practical application of the developed technologies to preservation processes, while the tools and algorithms presented in this deliverable can be deployed in combination with test scenarios that will be part of the WP6 test beds.
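As a minimal sketch of the drift-detection direction, the snippet below trains a self-organizing map with the Somoclu package (named above) on term-embedding snapshots from two time periods and flags terms whose best-matching units move far on the map. The embeddings, map size, and distance threshold are illustrative assumptions, not the deliverable's actual configuration.

```python
import numpy as np
import somoclu  # SOM library named in the deliverable (pip install somoclu)

# Hypothetical term embeddings for the same vocabulary at two points in time.
rng = np.random.default_rng(42)
period_a = rng.normal(size=(200, 50)).astype(np.float32)
period_b = (period_a + rng.normal(scale=0.3, size=(200, 50))).astype(np.float32)

# Train one SOM on the pooled data so both snapshots share the same map space.
som = somoclu.Somoclu(n_columns=20, n_rows=20)
som.train(np.vstack([period_a, period_b]), epochs=10)

# Best-matching units: the first 200 rows belong to period A, the rest to period B.
bmus = som.bmus.astype(float)
bmus_a, bmus_b = bmus[:200], bmus[200:]

# Flag terms whose BMU moved more than an (illustrative) threshold of 5 map cells.
shift = np.linalg.norm(bmus_a - bmus_b, axis=1)
drifted_terms = np.where(shift > 5.0)[0]
print(f"{len(drifted_terms)} candidate drifted terms")
```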
Empowering persons with deafblindness: Designing an intelligent assistive wearable in the SUITCEYES project
Deafblindness is a condition that limits communication capabilities primarily to the haptic channel. In the EU-funded project SUITCEYES, we design a system that enables haptic and thermal communication via soft interfaces and textiles. Based on user needs and informed by disability studies, we combine elements from smart textiles, sensors, semantic technologies, image processing, face and object recognition, machine learning, affective computing, and gamification. In this work, we present the underlying concepts and the overall design vision of the resulting assistive smart wearable.
Mobile App Interventions for Parkinson’s Disease, Multiple Sclerosis and Stroke: A Systematic Literature Review
Central nervous system diseases (CNSDs) lead to significant disability worldwide. Mobile app interventions have recently shown the potential to facilitate monitoring and medical management of patients with CNSDs. In this direction, the characteristics of the mobile apps used in research studies and their level of clinical effectiveness need to be explored in order to advance the multidisciplinary research required in the field of mobile app interventions for CNSDs. A systematic review of mobile app interventions for three major CNSDs, i.e., Parkinson's disease (PD), multiple sclerosis (MS), and stroke, which impose a significant burden on people and health care systems around the globe, is presented. A literature search of the PubMed and Scopus bibliographic databases was performed. Identified studies were assessed in terms of quality and synthesised according to target disease, mobile app characteristics, study design, and outcomes. Overall, 21 studies were included in the review: 3 studies targeted PD (14%), 4 targeted MS (19%), and 14 targeted stroke (67%). Most studies presented weak-to-moderate methodological quality. Study samples were small, with 15 studies (71%) including fewer than 50 participants and only 4 studies (19%) reporting a study duration of 6 months or more. The majority of the mobile apps focused on exercise and physical rehabilitation. In total, 16 studies (76%) reported positive outcomes related to physical activity and motor function, cognition, quality of life, and education, whereas 5 studies (24%) clearly reported no difference compared to usual care. Mobile app interventions show promise for improving outcomes concerning physical activity, motor ability, cognition, quality of life, and education for patients with PD, MS, and stroke. However, rigorous studies are required to demonstrate robust evidence of their clinical effectiveness.
An Efficient Algorithm for Smoke and Flame Detection Using Color and Wavelet Analysis
Fire detection is an important task in many applications. Smoke and flame are two essential indicators of fire in images. In this paper, we propose an algorithm to detect smoke and flame simultaneously in color dynamic video sequences obtained from a stationary camera in open space. Motion is a feature common to smoke and flame and is typically used first to extract candidate areas from the current frame. Adaptive background subtraction is used at the motion detection stage. In addition, optical flow-based motion estimation is applied to identify chaotic motion. Moving blobs are then classified using spatial and temporal wavelet analysis, Weber contrast analysis, and color segmentation. The algorithm was evaluated on real video surveillance sequences from publicly available datasets. Experimental results show that it achieves detection rates of 87% for smoke and 92% for flame.
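As a hedged sketch of the candidate-extraction stage described above, the snippet below combines OpenCV's adaptive background subtraction with a simple flame-colour rule to produce candidate moving regions. The colour thresholds, area filter, and file name are illustrative placeholders rather than the paper's tuned parameters, and the wavelet, Weber-contrast, and optical-flow steps are omitted.

```python
import cv2
import numpy as np

def candidate_regions(video_path: str, min_area: int = 200):
    """Yield per-frame bounding boxes of moving, flame-coloured blobs (illustrative)."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        motion = bg.apply(frame)                                         # adaptive background subtraction
        motion = cv2.threshold(motion, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
        b, g, r = cv2.split(frame)
        # Simple flame-colour rule (R > G > B and R bright); thresholds are assumptions.
        colour = ((r > 150) & (r > g) & (g > b)).astype(np.uint8) * 255
        mask = cv2.bitwise_and(motion, colour)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        yield [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    cap.release()

# Example: count candidate regions in the first 100 frames of a hypothetical clip.
for i, boxes in enumerate(candidate_regions("surveillance.avi")):
    if i >= 100:
        break
    print(i, len(boxes))
```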