
    VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval.

    This paper presents the VERGE interactive search engine, which is capable of browsing and searching video content. The system integrates content-based analysis and retrieval modules such as video shot segmentation, concept detection, and clustering, as well as visual similarity and object-based search.

    A topic detection and visualisation system on social media posts

    Paper presented at: Internet Science, 4th International Conference, INSCI 2017, held 22-24 November 2017 in Thessaloniki, Greece. Large amounts of social media posts are produced on a daily basis, and monitoring all of them is a challenging task. To this end, we demonstrate a topic detection and visualisation tool for Twitter data, which filters Twitter posts by topic or keyword in two different languages: German and Turkish. The system is based on state-of-the-art news clustering methods, and the tool has been created to handle streams of recent news information in a fast and user-friendly way. The user interface and user-system interaction examples are presented in detail. This work was supported by the EC-funded projects H2020-645012 (KRISTINA) and H2020-700475 (beAWARE).
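    The abstract above does not specify which clustering method the tool uses, but the general idea of grouping posts by topical similarity can be sketched as follows. This is an illustrative stdlib-only example, not the authors' system: it builds simple TF-IDF vectors and assigns each post to the first cluster whose representative post is similar enough (the `threshold` value is an arbitrary assumption).

    ```python
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """Compute simple TF-IDF vectors for a list of tokenized documents."""
        df = Counter()
        for doc in docs:
            df.update(set(doc))
        n = len(docs)
        vecs = []
        for doc in docs:
            tf = Counter(doc)
            vecs.append({t: tf[t] * math.log((n + 1) / (df[t] + 1)) for t in tf})
        return vecs

    def cosine(a, b):
        """Cosine similarity between two sparse vectors stored as dicts."""
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def cluster(posts, threshold=0.2):
        """Greedy single-pass clustering: each post joins the first cluster
        whose representative (first member) is similar enough, else starts
        a new cluster. Returns lists of post indices."""
        docs = [p.lower().split() for p in posts]
        vecs = tfidf_vectors(docs)
        clusters = []  # list of (representative vector, member indices)
        for i, v in enumerate(vecs):
            for rep, members in clusters:
                if cosine(v, rep) >= threshold:
                    members.append(i)
                    break
            else:
                clusters.append((v, [i]))
        return [members for _, members in clusters]

    posts = [
        "flood warning issued for the river area",
        "river flood warning area evacuated",
        "new smartphone released today",
    ]
    print(cluster(posts))  # -> [[0, 1], [2]]
    ```

    A production system would use stronger text representations and streaming-capable clustering, but the grouping step has this same shape.
    
    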

    Multimodal Analysis of Image Search Intent: Intent Recognition in Image Search from User Behavior and Visual Content

    Users search for multimedia content with different underlying motivations or intentions. The study of user search intentions is an emerging topic in information retrieval, since understanding why a user is searching for content is crucial for satisfying the user's need. In this paper, we aimed to automatically recognize a user's intent for image search in the early stage of a search session. We designed seven different search scenarios under the intent conditions of finding items, re-finding items, and entertainment. We collected facial expressions, physiological responses, eye gaze, and implicit user interactions from 51 participants who performed seven different search tasks on a custom-built image retrieval platform. We analyzed the users' spontaneous and explicit reactions under different intent conditions. Finally, we trained machine learning models to predict users' search intentions from the visual content of the visited images, the user interactions, and the spontaneous responses. After fusing the visual and user interaction features, our system achieved an F1 score of 0.722 for classifying three classes in a user-independent cross-validation. We found that eye gaze and implicit user interactions, including mouse movements and keystrokes, are the most informative features. Given that the most promising results are obtained by modalities that can be captured unobtrusively and online, the results demonstrate the feasibility of deploying such methods for improving multimedia retrieval platforms.
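    The key evaluation detail in this abstract is "user-independent cross-validation": all samples from one participant are held out per fold, so the model is never tested on a user it trained on. The sketch below illustrates that protocol with early feature fusion (concatenation) and a nearest-centroid classifier as a stand-in; the paper does not state its actual models, so the classifier, feature shapes, and data are illustrative assumptions.

    ```python
    import math
    from collections import defaultdict

    def fuse(visual, interaction):
        """Early fusion: concatenate per-modality feature vectors."""
        return visual + interaction

    def nearest_centroid(train, test_x):
        """Predict the label whose training-set centroid is closest to test_x."""
        sums, counts = {}, defaultdict(int)
        for x, y in train:
            sums[y] = [a + b for a, b in zip(sums[y], x)] if y in sums else list(x)
            counts[y] += 1
        best, best_d = None, math.inf
        for y, s in sums.items():
            centroid = [v / counts[y] for v in s]
            d = math.dist(test_x, centroid)
            if d < best_d:
                best, best_d = y, d
        return best

    def leave_one_user_out(samples):
        """samples: list of (user_id, visual, interaction, intent_label).
        Each fold holds out every sample of one user (user-independent)."""
        users = sorted({u for u, *_ in samples})
        correct = total = 0
        for held in users:
            train = [(fuse(v, i), y) for u, v, i, y in samples if u != held]
            for u, v, i, y in samples:
                if u == held:
                    total += 1
                    correct += nearest_centroid(train, fuse(v, i)) == y
        return correct / total

    # Toy dataset: two users, two intent classes, 1-D features per modality.
    samples = [
        ("u1", [0.0], [0.0], "search"),
        ("u1", [1.0], [1.0], "entertainment"),
        ("u2", [0.1], [0.0], "search"),
        ("u2", [0.9], [1.0], "entertainment"),
    ]
    print(leave_one_user_out(samples))  # -> 1.0 on this toy data
    ```

    Grouping folds by user is what makes the reported F1 score a claim about generalization to unseen users rather than to unseen samples.
    
    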

    VERGE in VBS 2017

    Paper presented at the Video Browser Showdown (VBS'17), at the 23rd International Conference on MultiMedia Modeling (MMM'17), held on 4 January 2017 in Reykjavik, Iceland. This paper presents the VERGE interactive video retrieval engine, which is capable of browsing and searching video content. The system integrates several content-based analysis and retrieval modules, including concept detection, clustering, visual similarity search, object-based search, query analysis, and multimodal and temporal fusion. This work was supported by the EU’s Horizon 2020 research and innovation programme under grant agreements H2020-687786 InVID, H2020-693092 MOVING, H2020-645012 KRISTINA and H2020-700024 TENSOR.

    Harmonizing Data collection in an Ontology for a Risk Management Platform

    In every disaster, time is the enemy, and getting accurate and helpful real-time information to support decision making is critical. Data sources for Risk Management Platforms are heterogeneous. This includes data coming from several sources: sensors, social media, the general public, and first responders. All this data needs to be analyzed, aggregated, and fused, and the semantics of the data need to be understood. This paper discusses means for integrating and harmonizing data into an ICT platform for risk management and gives examples of semantic analysis.
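    The harmonization step described above amounts to mapping each source's native record shape onto one shared schema so records can be aggregated and fused. The abstract does not publish the actual ontology, so the schema fields, source formats, and values below are hypothetical, meant only to show the shape of such a mapping layer.

    ```python
    from datetime import datetime, timezone

    # Hypothetical common schema:
    # source, event_type, location, observed_at, severity

    def from_sensor(raw):
        """Map a raw sensor reading onto the common schema."""
        return {
            "source": "sensor",
            "event_type": raw["measurement"],          # e.g. "water_level"
            "location": raw["station_id"],
            "observed_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
            "severity": "high" if raw["value"] > raw["threshold"] else "normal",
        }

    def from_social_media(raw):
        """Map a raw social media post onto the common schema."""
        return {
            "source": "social_media",
            "event_type": raw["topic"],
            "location": raw.get("geotag", "unknown"),
            "observed_at": datetime.fromisoformat(raw["created_at"]),
            "severity": "unverified",  # crowd reports still need validation
        }

    def harmonize(sensor_msgs, posts):
        """Merge both streams into one timeline of common-schema records."""
        records = [from_sensor(m) for m in sensor_msgs]
        records += [from_social_media(p) for p in posts]
        return sorted(records, key=lambda r: r["observed_at"])

    sensors = [{"measurement": "water_level", "station_id": "S-12",
                "ts": 1500000000, "value": 4.2, "threshold": 3.0}]
    posts = [{"topic": "flooding", "geotag": "riverside",
              "created_at": "2017-07-14T02:35:00+00:00"}]
    for r in harmonize(sensors, posts):
        print(r["source"], r["event_type"], r["severity"])
    ```

    Once every source passes through such an adapter, downstream fusion and semantic analysis can operate on a single record shape regardless of origin.
    
    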