    Ontology-based context representation and reasoning for object tracking and scene interpretation in video

    Computer vision research has traditionally focused on the development of quantitative techniques to calculate the properties and relations of the entities appearing in a video sequence. Most object tracking methods are based on statistical techniques, which often prove inadequate for processing complex scenarios. Recently, new techniques based on the exploitation of contextual information have been proposed to overcome the problems that these classical approaches do not solve. The present paper is a contribution in this direction: we propose a Computer Vision framework aimed at the construction of a symbolic model of the scene by integrating tracking data and contextual information. The scene model, represented with formal ontologies, supports the execution of reasoning procedures in order to: (i) obtain a high-level interpretation of the scenario; (ii) provide feedback to the low-level tracking procedure to improve its accuracy and performance. The paper describes the layered architecture of the framework and the structure of the knowledge model, which have been designed in compliance with the JDL model for Information Fusion. We also explain how deductive and abductive reasoning is performed within the model to accomplish scene interpretation and tracking improvement. To show the advantages of our approach, we develop an example of the use of the framework in a video-surveillance application. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
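
    As a rough illustration of the kind of symbolic scene model described above, the following Python sketch (using rdflib) encodes one track produced by a low-level tracker together with a contextual description of the monitored scene, and then runs a query that performs a simple deductive interpretation step. All class and property names (Track, RestrictedZone, insideZone, and so on) are hypothetical and are not taken from the paper.

```python
# Minimal sketch: tracking data plus context knowledge in one RDF graph,
# queried for a high-level interpretation. All vocabulary terms are invented
# for illustration; they do not reproduce the ontology described in the paper.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/scene#")
g = Graph()
g.bind("ex", EX)

# Perceptual knowledge: a track produced by the low-level tracker.
g.add((EX.track12, RDF.type, EX.Track))
g.add((EX.track12, EX.speed, Literal(1.4)))        # speed in m/s (assumed unit)
g.add((EX.track12, EX.insideZone, EX.loadingBay))  # spatial relation from tracking

# Contextual knowledge: a priori description of the monitored scene.
g.add((EX.loadingBay, RDF.type, EX.RestrictedZone))

# Deductive step: any track inside a restricted zone yields an alert.
results = g.query("""
    PREFIX ex: <http://example.org/scene#>
    SELECT ?track WHERE {
        ?track a ex:Track ;
               ex:insideZone ?zone .
        ?zone a ex:RestrictedZone .
    }""")
for (track,) in results:
    print(f"Alert: {track} is inside a restricted zone")
```

    In the framework described above this kind of conclusion would also be fed back to the tracker; the query only illustrates the deductive direction of the reasoning.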

    Ontological representation of context knowledge for visual data fusion

    8 pages, 4 figures. Contributed to: 12th International Conference on Information Fusion (FUSION '09), Seattle, Washington, US, July 6-9, 2009. Context knowledge is essential to achieve successful information fusion, especially at high JDL levels. Context can be used to interpret the perceived situation, which is required for accurate assessment. Both types of knowledge, contextual and perceptual, can be represented with formal languages such as ontologies, which support the creation of readable representations and reasoning over them. In this paper, we present an ontology-based model compliant with JDL to represent knowledge in cognitive visual data fusion systems. We illustrate the use of the model with a surveillance example. We show that such a model promotes system extensibility and facilitates the incorporation of humans in the fusion loop. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
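
    The following plain-Python sketch illustrates, under assumed names and a made-up rule, how contextual and perceptual facts can be kept as distinct bodies of knowledge and joined to produce a JDL level-2 style situation assessment; it is not the model presented in the paper.

```python
# Illustrative sketch only: contextual and perceptual facts kept in separate
# stores and joined for a JDL level-2 style situation assessment. The level
# labels and the intrusion rule are assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class Percept:          # JDL level 1: object assessment produced by the trackers
    obj_id: str
    zone: str

@dataclass
class ContextFact:      # a priori knowledge about the monitored scene
    zone: str
    zone_type: str

def assess_situations(percepts, context):
    """Return level-2 style statements by joining percepts with context."""
    restricted = {c.zone for c in context if c.zone_type == "restricted"}
    return [f"intrusion({p.obj_id}, {p.zone})"
            for p in percepts if p.zone in restricted]

percepts = [Percept("track7", "corridorA"), Percept("track9", "vaultRoom")]
context = [ContextFact("vaultRoom", "restricted"),
           ContextFact("corridorA", "public")]
print(assess_situations(percepts, context))   # ['intrusion(track9, vaultRoom)']
```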

    Communication in distributed tracking systems: an ontology-based approach to improve cooperation

    Current Computer Vision systems are expected to allow for the management of data acquired by physically distributed cameras. This is especially the case for modern surveillance systems, which require communication between components and a combination of their outputs in order to obtain a complete view of the scene. Information fusion techniques have been successfully applied in this area, but several problems remain unsolved. One of them is the increasing need for coordination and cooperation between independent and heterogeneous cameras. A solution to achieve an understanding between them is to use a common and well-defined message content vocabulary. In this research work, we present a formal ontology aimed at the symbolic representation of visual data, mainly detected tracks corresponding to real-world moving objects. Such an ontological representation provides support for spontaneous communication and component interoperability, increases system scalability and facilitates the development of high-level fusion procedures. The ontology is used by the agents of the Cooperative Surveillance Multi-Agent System, our multi-agent framework for multi-camera surveillance systems. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
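
    As a sketch of what a shared message content vocabulary buys in practice, the snippet below serializes a track report that one camera agent could publish for others to fuse. The field names and the vocabulary IRI are hypothetical; the ontology in the paper defines its own terms.

```python
# Sketch of a track message exchanged between camera agents, keyed to a shared
# vocabulary. Field names and the vocabulary IRI are invented for illustration;
# the point is only that both ends agree on term meanings via a common ontology.
import json
from dataclasses import dataclass, asdict

VOCAB = "http://example.org/surveillance-vocab#"   # assumed shared ontology IRI

@dataclass
class TrackMessage:
    sender: str          # camera agent identifier
    track_id: str        # locally unique track identifier
    obj_class: str       # term drawn from the shared vocabulary, e.g. "Person"
    position: tuple      # ground-plane coordinates in a common reference frame
    timestamp: float     # seconds since the epoch

msg = TrackMessage("camera3", "camera3/track42", "Person", (12.5, 4.1), 1660000000.0)
payload = json.dumps({"@vocab": VOCAB, **asdict(msg)})
print(payload)   # what one agent would publish for the others to consume
```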

    High-Level Information Fusion in Visual Sensor Networks

    Information fusion techniques combine data from multiple sensors, along with additional information and knowledge, to obtain better estimates of the observed scenario than could be achieved by the use of single sensors or information sources alone. According to the JDL fusion process model, high-level information fusion is concerned with the computation of a scene representation in terms of abstract entities such as activities and threats, as well as estimating the relationships among these entities. Recent experience confirms that context knowledge plays a key role in new-generation high-level fusion systems, especially in those involving complex scenarios that cause the failure of classical statistical techniques, as happens in visual sensor networks. In this chapter, we study the architectural and functional issues of applying context information to improve high-level fusion procedures, with a particular focus on visual data applications. The use of formal knowledge representations (e.g. ontologies) is a promising advance in this direction, but there are still some unresolved questions that must be more extensively researched. The UC3M Team gratefully acknowledges that this research activity is supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.

    Foods with Functional Properties and their Potential Uses in Human Health

    Vegetables and fruits have been a part of the human diet since ancient times; nevertheless, as countries develop, their populations' eating habits change and tend toward diets poor in vegetables and fruits, with well-known consequences. Several widely consumed plant foods within the reach of the population include artichoke, leek, hot chili pepper, coriander, kiwifruit, sweet orange, highbush blueberry, and maracuyá, to name a few. They have many beneficial properties, principally due to their content of phytochemicals with a high impact on human health beyond nutritional support. These phytochemicals are bioactive compounds such as vitamins, carotenoids, phenolic acids, and flavonoids, which contribute to antioxidant capacity and, as a whole, help prevent chronic noncommunicable diseases such as diabetes, high blood pressure, high blood cholesterol, and cardiovascular risk, among others. This relationship between plant foods for human consumption and their impact on human health is discussed in this chapter, highlighting coriander and kiwifruit for their wide range of benefits.

    Topological Properties in Ontology-based Applications

    Proceedings of: 11th International Conference on Intelligent Systems Design and Applications, Córdoba, Spain, 22-24 November 2011. Representation and reasoning with spatial properties are essential in several application domains where ontologies are being successfully applied, e.g., Information Fusion systems. This requires a full characterization of the semantics of relations such as adjacent, included, overlapping, etc. Nevertheless, ontologies are not expressive enough to directly support widely used spatial or topological theories, such as the Region Connection Calculus (RCC). In addition, these properties must be properly instantiated in the ontology, which may require expensive calculations. This paper presents a practical approach to represent and reason with topological properties in ontology-based systems, as well as some optimization techniques that have been applied in a video-based Information Fusion application. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
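
    To make the RCC instantiation problem concrete, the sketch below derives approximate RCC-8 relations between simple 2D regions with shapely predicates; the resulting facts could then be asserted as ontology properties. The predicate-to-RCC mapping is a simplification for illustration and is not the characterization or the optimization technique used in the paper.

```python
# Rough sketch: deriving RCC-8 style relations between simple 2D regions with
# shapely predicates, so they can afterwards be asserted as ontology facts.
# The mapping below is an approximation, not the paper's method.
from shapely.geometry import box

def rcc8(a, b):
    if a.equals(b):
        return "EQ"       # identical regions
    if a.disjoint(b):
        return "DC"       # disconnected
    if a.touches(b):
        return "EC"       # externally connected (boundaries touch only)
    if a.within(b):
        return "TPP" if a.boundary.intersects(b.boundary) else "NTPP"
    if a.contains(b):
        return "TPPi" if a.boundary.intersects(b.boundary) else "NTPPi"
    return "PO"           # partially overlapping

zone = box(0, 0, 10, 10)          # e.g. a region of interest in the scene
footprint = box(2, 2, 4, 4)       # e.g. a detected object's footprint
print(rcc8(footprint, zone))      # NTPP: strictly inside the zone
```

    A result such as NTPP (non-tangential proper part) would be stored as an object property between the two region individuals, which is the kind of instantiation whose cost the optimization techniques mentioned above aim to reduce.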

    Context-based scene recognition from visual data in smart homes: an Information Fusion approach

    Ambient Intelligence (AmI) aims at the development of computational systems that process data acquired by sensors embedded in the environment to support users in everyday tasks. Visual sensors, however, have scarcely been used in this kind of application, even though they provide very valuable information about scene objects: position, speed, color, texture, etc. In this paper, we propose a cognitive framework for the implementation of AmI applications based on visual sensor networks. The framework, inspired by the Information Fusion paradigm, combines a priori context knowledge, represented with ontologies, with real-time single-camera data to support logic-based, high-level local interpretation of the current situation. In addition, the system is able to automatically generate feedback recommendations to adjust data acquisition procedures. Information about recognized situations is eventually collected by a central node to obtain an overall description of the scene and consequently trigger AmI services. We show the extensible and adaptable nature of the approach with a prototype system in a smart home scenario. This research activity is supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
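
    The toy sketch below shows the local-interpretation and central-collection flow described above: each camera node interprets its own tracks, produces a feedback recommendation, and a central node merges the per-camera situations. The rules, thresholds, and recommendation actions are invented for illustration and are not the framework's actual procedures.

```python
# Toy sketch of local interpretation plus central collection. The rule, the
# speed threshold, and the frame-rate recommendations are assumptions made for
# illustration only; they do not reproduce the framework's procedures.
def interpret_locally(camera_id, tracks):
    """Return (situation, feedback) for one camera from its current tracks."""
    moving = [t for t in tracks if t["speed"] > 0.2]
    if not moving:
        return ("empty_room", {"camera": camera_id, "action": "lower_frame_rate"})
    situation = "activity_detected" if len(moving) == 1 else "group_activity"
    return (situation, {"camera": camera_id, "action": "increase_frame_rate"})

def central_node(local_reports):
    """Combine per-camera situations into an overall scene description."""
    return {camera: situation for camera, (situation, _) in local_reports.items()}

reports = {
    "kitchen_cam": interpret_locally("kitchen_cam", [{"speed": 0.8}]),
    "hall_cam": interpret_locally("hall_cam", []),
}
print(central_node(reports))
# {'kitchen_cam': 'activity_detected', 'hall_cam': 'empty_room'}
```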

    Ontological representation of time-of-flight camera data to support vision-based AmI

    Proceedings of: 4th International Workshop on Sensor Networks and Ambient Intelligence, 19-23 March 2012, Lugano (Switzerland). Recent advances in technologies for capturing video data have opened up a wide range of new application areas. Among them is the incorporation of Time-of-Flight (ToF) cameras into Ambient Intelligence (AmI) environments. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted for smart environments. This paper presents an extension of a previous system in the area of video-based AmI to incorporate ToF information and enhance scene interpretation. The framework is founded on an ontology-based model of the scene, which is extended to incorporate ToF data. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application. This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.

    Applying the Dynamic Region Connection Calculus to Exploit Geographic Knowledge in Maritime Surveillance

    Proceedings of: 15th International Conference on Information Fusion (FUSION 2012), Singapore, 9-12 July 2012. Concerns about the protection of the global transport network have raised the need for new security and surveillance systems. Ontology-based and fusion systems represent an attractive alternative for practical applications focused on fast and accurate responses. This paper presents an architecture based on a geometric model to efficiently predict and calculate the topological relationships between spatial objects. This model aims to reduce the number of calculations by relying on a spatial data structure. The goal is the detection of threatening behaviors near points of interest without a noticeable loss of efficiency. The architecture has been embedded in an ontology-based prototype compliant with the Joint Directors of Laboratories (JDL) model for Information Fusion. The prototype's capabilities are illustrated by applying international protection rules in maritime scenarios. This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
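
    The snippet below illustrates the general idea of cutting down topological calculations with a spatial data structure: a uniform grid index prefilters which vessels are close enough to a point of interest to deserve an exact check. It is a generic spatial-hash sketch, not the geometric model proposed in the paper, and all names and values are assumptions.

```python
# Generic spatial-hash sketch: prefilter candidate vessels near a point of
# interest (POI) with a uniform grid before any exact computation. This is an
# illustration of the idea, not the paper's geometric model.
from collections import defaultdict
from math import hypot

CELL = 5.0   # grid cell size, in the same (assumed) units as the coordinates

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def build_index(vessels):
    """Map each grid cell to the vessels whose position falls inside it."""
    index = defaultdict(list)
    for vid, (x, y) in vessels.items():
        index[cell_of(x, y)].append((vid, x, y))
    return index

def near_poi(index, px, py, radius):
    """Run the exact distance check only for vessels in cells around the POI."""
    cx, cy = cell_of(px, py)
    reach = int(radius // CELL) + 1
    hits = []
    for dx in range(-reach, reach + 1):
        for dy in range(-reach, reach + 1):
            for vid, x, y in index.get((cx + dx, cy + dy), []):
                if hypot(x - px, y - py) <= radius:
                    hits.append(vid)
    return hits

vessels = {"v1": (3.0, 4.0), "v2": (40.0, 40.0), "v3": (7.5, 2.0)}
print(near_poi(build_index(vessels), 5.0, 3.0, 4.0))   # ['v1', 'v3']
```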

    Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    This article belongs to the Special Issue Sensors and Wireless Sensor Networks for Novel Concepts of Things, Interfaces and Applications in Smart Spaces. Recent advances in technologies for capturing video data have opened up a wide range of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation prevents systems from taking advantage of the semantic quality of the information provided by the new sensors. This paper advocates the introduction of a part-based representational level in cognitive systems in order to accurately represent the knowledge delivered by these novel sensors. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for conducting live market research. This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
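
    A minimal sketch of the transitive part-based inference mentioned above: given direct partOf assertions over an assumed body-part hierarchy, the transitive closure lets an observation on a part be attributed to every whole that contains it. The hierarchy below is illustrative and is not the taxonomy proposed in the paper.

```python
# Minimal sketch of transitive part-whole inference over a body-part hierarchy.
# The hierarchy is an illustrative assumption, not the paper's taxonomy; the
# point is only the transitive closure of the partOf relation.
PART_OF = {                    # direct partOf assertions
    "hand": "arm",
    "arm": "upperBody",
    "head": "upperBody",
    "upperBody": "body",
    "leg": "lowerBody",
    "lowerBody": "body",
}

def all_wholes(part):
    """Return every whole the given part transitively belongs to."""
    wholes = []
    while part in PART_OF:
        part = PART_OF[part]
        wholes.append(part)
    return wholes

# If a gesture detector reports activity on a hand, transitive inference lets
# the scene model attribute it to the arm, the upper body, and the whole body.
print(all_wholes("hand"))      # ['arm', 'upperBody', 'body']
```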