    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety and fire incident management are addressed. Furthermore, our data management tools match the higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object, and event positions in the BIM model can be generated, transforming it from a static source of information into a rich, dynamic data provider. Previous work has investigated linking BIM with low-cost point sensors for fireground understanding, but those approaches did not take into account the benefits of video analysis or recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which together make it possible to automatically create situational awareness, improve localization, and facilitate overall fireground understanding.
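    To make the label-matching idea concrete, below is a minimal sketch of how detections from a visual or thermal camera could be linked to semantic descriptors of BIM elements to coarsely localize the camera. The room layout, label sets, and scoring rule are hypothetical illustrations, not the paper's actual ontology or workflow.

```python
# Minimal sketch of the semantic-linking idea: match labels produced by a
# visual object detector against semantic descriptors attached to BIM
# elements, and use the overlap to coarsely localize the camera.
# All names and data below are hypothetical, for illustration only.

from collections import Counter

# Hypothetical BIM extract: room id -> semantic labels of contained elements.
BIM_ROOMS = {
    "room_101": {"door", "fire_extinguisher", "desk", "window"},
    "corridor_1": {"door", "exit_sign", "smoke_detector"},
    "server_room": {"rack", "smoke_detector", "fire_extinguisher"},
}

def localize_camera(detected_labels):
    """Score each BIM room by how many detected object labels it explains.

    detected_labels: labels emitted by a (visual/thermal) recognition
    network for the current frame, e.g. ["door", "exit_sign"].
    Returns the best-matching room and its score.
    """
    counts = Counter(detected_labels)

    def score(room_labels):
        return sum(n for label, n in counts.items() if label in room_labels)

    best_room = max(BIM_ROOMS, key=lambda r: score(BIM_ROOMS[r]))
    return best_room, score(BIM_ROOMS[best_room])

if __name__ == "__main__":
    room, hits = localize_camera(["door", "exit_sign", "smoke_detector"])
    print(f"camera most consistent with {room} ({hits} matching elements)")
```

    Matching on shared semantic labels is what lets the otherwise static BIM act as a lookup structure for dynamic scene observations.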

    More cat than cute? Interpretable Prediction of Adjective-Noun Pairs

    The increasing availability of affect-rich multimedia resources has bolstered interest in understanding sentiment and emotions in and from visual content. Adjective-noun pairs (ANPs) are a popular mid-level semantic construct for capturing affect via visually detectable concepts such as "cute dog" or "beautiful landscape". Current state-of-the-art methods approach ANP prediction by treating each of these compound concepts as an individual token, ignoring the underlying relationships within ANPs. This work aims at disentangling the contributions of the adjectives and nouns in the visual prediction of ANPs. Two specialised classifiers, one trained to detect adjectives and another to detect nouns, are fused to predict 553 different ANPs. The resulting ANP prediction model is more interpretable, as it allows us to study the contributions of the adjective and noun components. Source code and models are available at https://imatge-upc.github.io/affective-2017-musa2/. (Oral paper at the ACM Multimedia 2017 Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes, MUSA2.)
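    A minimal sketch of the factorised prediction described above: rather than one classifier over every ANP token, a distribution over adjectives is fused with a distribution over nouns. The product fusion rule and the tiny vocabularies are assumptions for illustration; the paper's exact fusion architecture may differ.

```python
# Sketch of adjective/noun factorisation for ANP prediction: fuse the
# outputs of two specialised classifiers instead of training one
# classifier per compound ANP token. Vocabularies and the product
# fusion rule are illustrative assumptions.

import numpy as np

ADJECTIVES = ["cute", "scary", "beautiful"]
NOUNS = ["dog", "landscape", "cat"]
# Only some adjective-noun combinations are valid ANP concepts.
ANPS = [("cute", "dog"), ("cute", "cat"), ("beautiful", "landscape")]

def predict_anps(p_adj, p_noun):
    """Score each ANP as p(adjective) * p(noun) from the two classifiers."""
    scores = {}
    for adj, noun in ANPS:
        scores[(adj, noun)] = p_adj[ADJECTIVES.index(adj)] * p_noun[NOUNS.index(noun)]
    return scores

if __name__ == "__main__":
    # Stand-ins for softmax outputs of the two specialised networks.
    p_adj = np.array([0.7, 0.1, 0.2])
    p_noun = np.array([0.6, 0.1, 0.3])
    for (adj, noun), s in sorted(predict_anps(p_adj, p_noun).items(),
                                 key=lambda kv: -kv[1]):
        print(f"{adj} {noun}: {s:.2f}")
```

    Because each ANP score factorises into an adjective term and a noun term, one can inspect which component drove a prediction, which is the interpretability benefit the abstract highlights.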

    Automated Global Feature Analyzer - A Driver for Tier-Scalable Reconnaissance

    For the purposes of space flight, reconnaissance field geologists have trained to become astronauts. However, the initial forays to Mars and other planetary bodies have been carried out by purely robotic craft. Therefore, training and equipping a robotic craft with the sensory and cognitive capabilities of a field geologist, to form a science craft, is a necessary prerequisite. Numerous steps are necessary for a science craft to be able to map, analyze, and characterize a geologic field site, as well as to effectively formulate working hypotheses. We report on the continued development of the integrated software system AGFA (Automated Global Feature Analyzer®), originated by Fink at Caltech and his collaborators in 2001. AGFA is an automatic, feature-driven target characterization system that operates in an imaged operational area, such as a geologic field site on a remote planetary surface. AGFA performs automated target identification and detection through segmentation, providing feature extraction, classification, and prioritization within mapped or imaged operational areas at different length scales and resolutions, depending on the vantage point (e.g., spaceborne, airborne, or ground). AGFA extracts features such as target size, color, albedo, vesicularity, and angularity. Based on the extracted features, AGFA summarizes the mapped operational area numerically and flags targets of "interest", i.e., targets that exhibit sufficient anomaly within the feature space. AGFA enables automated science analysis aboard robotic spacecraft and, embedded in tier-scalable reconnaissance mission architectures, is a driver of future intelligent and autonomous robotic planetary exploration.
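    As an illustration of the anomaly-flagging step, here is a minimal sketch in which targets whose extracted features deviate strongly from the rest of the mapped area are marked as targets of interest. The z-score criterion, threshold, and feature values are assumptions for illustration; AGFA's actual feature set and anomaly measure may differ.

```python
# Illustrative sketch of flagging "targets of interest": targets whose
# extracted features (e.g. size, albedo, angularity) deviate sufficiently
# from the rest of the mapped operational area count as anomalous.
# The z-score rule and the numbers below are assumptions, not AGFA's
# actual anomaly measure.

import numpy as np

def flag_anomalies(features, threshold=2.0):
    """Return indices of targets whose max per-feature |z-score| > threshold.

    features: (n_targets, n_features) array, e.g. columns for size,
    albedo, and angularity.
    """
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-9  # avoid division by zero
    z = np.abs((features - mu) / sigma)
    return np.where(z.max(axis=1) > threshold)[0]

if __name__ == "__main__":
    rocks = np.array([
        [1.0,  0.30, 0.50],   # typical target: size, albedo, angularity
        [1.1,  0.28, 0.40],
        [0.9,  0.32, 0.60],
        [1.0,  0.31, 0.50],
        [1.05, 0.29, 0.55],
        [0.95, 0.30, 0.45],
        [5.0,  0.90, 0.90],   # outlier: large, bright, angular
    ])
    print("targets of interest:", flag_anomalies(rocks))
```

    Flagging by deviation in feature space gives a simple, sensor-agnostic way to prioritize the few targets that stand out from the surrounding geology.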