
    Context-based Information Fusion: A survey and discussion

    This survey provides a comprehensive overview of recent and current research on context-based Information Fusion (IF) systems, tracing the roots of the original thinking behind the concept of "context". It shows how the concept's success in the distributed computing world eventually permeated the world of IF, discusses current strategies and techniques, and hints at possible future trends. IF processes can represent context at different levels (structural and physical constraints of the scenario, a priori known operational rules between entities and the environment, dynamic relationships modelled to interpret the system output, etc.). In addition to the survey, several novel context exploitation dynamics and architectural aspects peculiar to the fusion domain are presented and discussed.

    High-Level Information Fusion in Visual Sensor Networks

    Information fusion techniques combine data from multiple sensors, along with additional information and knowledge, to obtain better estimates of the observed scenario than could be achieved by single sensors or information sources alone. According to the JDL fusion process model, high-level information fusion is concerned with the computation of a scene representation in terms of abstract entities such as activities and threats, as well as with estimating the relationships among these entities. Recent experience confirms that context knowledge plays a key role in new-generation high-level fusion systems, especially in those involving complex scenarios that cause classical statistical techniques to fail, as happens in visual sensor networks. In this chapter, we study the architectural and functional issues of applying context information to improve high-level fusion procedures, with a particular focus on visual data applications. The use of formal knowledge representations (e.g. ontologies) is a promising advance in this direction, but some unresolved questions still require more extensive research. The UC3M Team gratefully acknowledges that this research activity is supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
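
    The idea of conditioning a high-level assessment on context knowledge can be sketched as follows; the restricted-zone map, track fields and thresholds are illustrative assumptions, not taken from the chapter:

        # Hedged sketch: context-aided high-level assessment of tracks.
        # The zone map, track format and "threat" rule are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class Track:
            track_id: int
            x: float          # estimated position (scene coordinates)
            y: float
            speed: float      # estimated speed (m/s)

        # Context knowledge: axis-aligned restricted zones (xmin, ymin, xmax, ymax).
        RESTRICTED_ZONES = [(0.0, 0.0, 10.0, 5.0)]

        def in_restricted_zone(track: Track) -> bool:
            return any(xmin <= track.x <= xmax and ymin <= track.y <= ymax
                       for (xmin, ymin, xmax, ymax) in RESTRICTED_ZONES)

        def assess(track: Track) -> str:
            """Combine perceptual estimates with context to label an abstract entity."""
            if in_restricted_zone(track):
                return "threat: intrusion into restricted zone"
            if track.speed > 8.0:          # illustrative threshold
                return "activity: running"
            return "activity: normal movement"

        if __name__ == "__main__":
            print(assess(Track(track_id=1, x=3.0, y=2.0, speed=1.2)))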

    Requirements of the SALTY project

    This document is the first external deliverable of the SALTY project (Self-Adaptive very Large disTributed sYstems), funded by the ANR under contract ANR-09-SEGI-012. It is the result of task 1.1 of Work Package (WP) 1: Requirements and Architecture. Its objective is to identify and collect requirements from the use cases that will be developed in WP 4 (Use cases and Validation). Based on the study and classification of the use cases, requirements against the envisaged framework are then determined and organized into features. These features aim to guide and control progress in all work packages of the project. As a start, the features are classified and briefly described, and the related scenarios in the defined use cases are pinpointed. In the following tasks and deliverables, these features will facilitate design by assigning priorities to them and defining success criteria at a finer grain as the project progresses. This report, as the first external document, has no dependencies on any other external documents and serves as a reference for future external documents. As it has been built from the use case studies synthesized in two internal documents of the project, extracts from those two documents are made available as appendices (cf. appendices B and C).

    Ontological representation of context knowledge for visual data fusion

    8 pages, 4 figures. Contributed to: 12th International Conference on Information Fusion, 2009 (FUSION '09, Seattle, Washington, US, Jul 6-9, 2009). Context knowledge is essential to achieve successful information fusion, especially at high JDL levels. Context can be used to interpret the perceived situation, which is required for accurate assessment. Both types of knowledge, contextual and perceptual, can be represented with formal languages such as ontologies, which support the creation of readable representations and reasoning over them. In this paper, we present an ontology-based model compliant with the JDL model to represent knowledge in cognitive visual data fusion systems. We illustrate the use of the model with a surveillance example, and we show that such a model promotes system extensibility and facilitates the incorporation of humans in the fusion loop. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
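
    A minimal sketch of representing contextual and perceptual knowledge with an ontology, assuming the rdflib library and an illustrative namespace and vocabulary rather than the ontology actually defined in the paper:

        # Hedged sketch of an ontology-style representation of contextual and
        # perceptual knowledge. Class and property names are illustrative, not the
        # ontology defined in the paper; requires rdflib (pip install rdflib).
        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        FUS = Namespace("http://example.org/fusion#")   # assumed namespace
        g = Graph()
        g.bind("fus", FUS)

        # Terminology: perceptual entities (tracks) and contextual entities (zones).
        g.add((FUS.Track, RDF.type, RDFS.Class))
        g.add((FUS.RestrictedZone, RDF.type, RDFS.Class))
        g.add((FUS.locatedIn, RDF.type, RDF.Property))

        # Assertions produced by the fusion process for one observed scene.
        g.add((FUS.track12, RDF.type, FUS.Track))
        g.add((FUS.track12, FUS.locatedIn, FUS.zoneA))
        g.add((FUS.zoneA, RDF.type, FUS.RestrictedZone))
        g.add((FUS.zoneA, RDFS.label, Literal("loading bay")))

        # A trivial "reasoning" step: flag tracks located in restricted zones.
        alerts = [t for t, _, z in g.triples((None, FUS.locatedIn, None))
                  if (z, RDF.type, FUS.RestrictedZone) in g]
        print("tracks to review:", alerts)
        print(g.serialize(format="turtle"))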

    Context Exploitation in Data Fusion

    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to utilize any available information that could help to constrain, improve or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest, whose knowledge may help in understanding the (estimated) situation and also in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been extensively exploited as a parameter in system and measurement models, which has led to the development of numerous approaches for linear or non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, utilized for the recursive enhancement of the state variables either in the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a certain mode of the filter. Common practice for multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. probability of detection, false alarms and clutter noise, which can be further enhanced by conditioning on context. We believe that interactions between the environment and the object could be classified into actions, activities and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model, we will be able to make predictions about the target's future actions based on its past observations. The probability of the target's future actions could be utilized in the fusion process to adjust the tracker's confidence in the measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, into the filter's measurement update step, we have been able to reduce the uncertainty of the tracking solution and improve the consistency of the track. The promising results demonstrate that the fusion of CI brings a significant performance improvement in comparison with regular tracking approaches.
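
    The context-as-likelihood idea in the measurement update can be sketched with a simple grid-based (point-mass) filter; the road-segment constraint and every number below are illustrative assumptions, not values from the work:

        # Hedged sketch: folding a context likelihood into the measurement update.
        # A grid-based Bayes filter over a 1-D position; the road-map context term
        # and all parameters are illustrative assumptions.
        import numpy as np

        x = np.linspace(0.0, 100.0, 1001)                  # 1-D position grid (m)
        prior = np.exp(-0.5 * ((x - 40.0) / 10.0) ** 2)    # predicted belief
        prior /= prior.sum()

        z, sigma_z = 55.0, 5.0                             # noisy position measurement
        meas_lik = np.exp(-0.5 * ((x - z) / sigma_z) ** 2)

        # Context likelihood: the target is known to move on a road segment [48, 62];
        # positions off the road are heavily down-weighted (soft constraint).
        context_lik = np.where((x >= 48.0) & (x <= 62.0), 1.0, 0.05)

        posterior = prior * meas_lik * context_lik         # context-aided update
        posterior /= posterior.sum()

        mean = float(np.sum(x * posterior))
        std = float(np.sqrt(np.sum((x - mean) ** 2 * posterior)))
        print(f"context-aided estimate: {mean:.1f} m (std {std:.1f} m)")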

    Communication in distributed tracking systems: an ontology-based approach to improve cooperation

    Current Computer Vision systems are expected to allow for the management of data acquired by physically distributed cameras. This is especially the case for modern surveillance systems, which require communication between components and a combination of their outputs in order to obtain a complete view of the scene. Information fusion techniques have been successfully applied in this area, but several problems remain unsolved. One of them is the increasing need for coordination and cooperation between independent and heterogeneous cameras. A solution to achieve an understanding between them is to use a common and well-defined message content vocabulary. In this research work, we present a formal ontology aimed at the symbolic representation of visual data, mainly detected tracks corresponding to real-world moving objects. Such an ontological representation provides support for spontaneous communication and component interoperability, increases system scalability and facilitates the development of high-level fusion procedures. The ontology is used by the agents of the Cooperative Surveillance Multi-Agent System, our multi-agent framework for multi-camera surveillance systems. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
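
    A minimal sketch of a shared track vocabulary for inter-camera messages, assuming illustrative field names and a JSON encoding (the paper defines this vocabulary as a formal ontology rather than a JSON schema):

        # Hedged sketch: a shared message vocabulary for exchanging tracks between
        # cameras. Field names and the JSON encoding are illustrative assumptions.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class TrackMessage:
            camera_id: str          # sender
            track_id: int           # local identifier of the moving object
            timestamp: float        # acquisition time (s, epoch)
            position: tuple         # (x, y) in a common ground-plane frame
            velocity: tuple         # (vx, vy)
            object_class: str       # e.g. "person", "vehicle", "unknown"

        def encode(msg: TrackMessage) -> str:
            """Serialize a track for transmission to other cameras/agents."""
            return json.dumps(asdict(msg))

        def decode(payload: str) -> TrackMessage:
            """Rebuild a track message on the receiving agent (tuples arrive as lists)."""
            return TrackMessage(**json.loads(payload))

        if __name__ == "__main__":
            msg = TrackMessage("cam-3", 17, 1.7e9, (12.4, 3.1), (0.8, -0.2), "person")
            print(decode(encode(msg)))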

    GWpilot: Enabling multi-level scheduling in distributed infrastructures with GridWay and pilot jobs

    Current systems based on pilot jobs are not exploiting all the scheduling advantages that the technique offers, or they lack compatibility or adaptability. To overcome the limitations and drawbacks of existing approaches, this study presents a different general-purpose pilot system, GWpilot. This system provides individual users or institutions with an easier-to-use, easy-to-install, scalable, extendable, flexible and adjustable framework to efficiently run legacy applications. The framework is based on the GridWay meta-scheduler and incorporates the powerful features of that system, such as standard interfaces, fair-share policies, ranking, migration, accounting and compatibility with diverse infrastructures. GWpilot goes beyond establishing simple network overlays to overcome the waiting times in remote queues or to improve the reliability of task production. It properly tackles the characterisation problem in current infrastructures, allowing users to arbitrarily incorporate customised monitoring of resources and their running applications into the system. This functionality allows the new framework to implement innovative scheduling algorithms that meet the computational needs of a wide range of calculations faster and more efficiently. The system can also be easily stacked under other software layers, such as self-schedulers. The advanced techniques included by default in the framework result in significant performance improvements, even when very short tasks are scheduled.
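
    The generic pilot-job technique that GWpilot builds on can be sketched as a pull loop; the scheduler endpoints and message format below are illustrative assumptions, not GWpilot's actual interfaces:

        # Hedged sketch of the generic pilot-job pattern: a pilot running on a
        # remote resource repeatedly pulls user tasks from a scheduler and reports
        # results back. Endpoints and payloads are illustrative assumptions.
        import json
        import subprocess
        import time
        import urllib.request
        from typing import Optional

        SCHEDULER = "http://scheduler.example.org:8080"    # assumed endpoint

        def fetch_task() -> Optional[dict]:
            """Ask the scheduler for the next task matched to this pilot."""
            try:
                with urllib.request.urlopen(f"{SCHEDULER}/next-task", timeout=10) as r:
                    return json.load(r)
            except Exception:
                return None                                # nothing to do / unreachable

        def report(task_id: str, returncode: int, output: str) -> None:
            body = json.dumps({"task": task_id, "rc": returncode, "out": output}).encode()
            req = urllib.request.Request(f"{SCHEDULER}/result", data=body, method="POST")
            urllib.request.urlopen(req, timeout=10)

        def pilot_loop() -> None:
            while True:
                task = fetch_task()
                if task is None:
                    time.sleep(30)                         # idle until work arrives
                    continue
                proc = subprocess.run(task["command"], shell=True,
                                      capture_output=True, text=True)
                report(task["id"], proc.returncode, proc.stdout)

        if __name__ == "__main__":
            pilot_loop()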

    Wireless Sensor Networks And Data Fusion For Structural Health Monitoring Of Aircraft

    This thesis discusses an architecture and design of a sensor web to be used for the structural health monitoring of an aircraft. Also presented are several prototypes of critical parts of the sensor web. The proposed sensor web will utilize sensor nodes situated throughout the structure. These nodes and one or more workstations will support agents that communicate and collaborate to monitor the health of the structure. An agent can be any internal or external autonomous entity that has direct access to affect a given system. For the purposes of this document, an agent is defined as an autonomous software resource that can make decisions for itself based on given tasks and abilities while also collaborating with others to find a feasible answer to a given problem regarding the structural health monitoring system. Once the agents have received relevant data from the nodes, they will utilize applications that perform data fusion techniques to classify events and further improve the functionality of the system for more accurate future classifications. Agents will also pass alerts up a self-configuring hierarchy of monitor agents and make them available for review by personnel. This thesis makes use of previous results from applying the Gaia methodology for the analysis and design of the multiagent system.
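
    A minimal sketch of the node-agent/monitor-agent pattern described above, with illustrative thresholds, labels and fusion rule (not the thesis design):

        # Hedged sketch: node agents fuse local sensor readings, classify an event,
        # and pass alerts up a hierarchy of monitor agents. Thresholds, class labels
        # and the fusion rule are illustrative assumptions.
        from dataclasses import dataclass, field
        from statistics import mean
        from typing import List, Optional

        @dataclass
        class Agent:
            name: str
            supervisor: Optional["Agent"] = None
            alerts: List[str] = field(default_factory=list)

            def escalate(self, alert: str) -> None:
                """Record the alert and pass it up the monitor hierarchy."""
                self.alerts.append(alert)
                if self.supervisor is not None:
                    self.supervisor.escalate(f"{self.name}: {alert}")

        @dataclass
        class NodeAgent(Agent):
            def classify(self, strain_readings: List[float]) -> str:
                # Naive data fusion: average redundant readings, threshold the result.
                level = mean(strain_readings)
                return "possible structural damage" if level > 0.8 else "nominal"

            def process(self, strain_readings: List[float]) -> None:
                event = self.classify(strain_readings)
                if event != "nominal":
                    self.escalate(event)

        if __name__ == "__main__":
            workstation = Agent("workstation-monitor")
            wing_node = NodeAgent("wing-node-4", supervisor=workstation)
            wing_node.process([0.85, 0.91, 0.88])     # redundant sensor readings
            print(workstation.alerts)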