
    Contextual and Human Factors in Information Fusion

    Proceedings of: NATO Advanced Research Workshop on Human Systems Integration to Enhance Maritime Domain Awareness for Port/Harbour Security Systems, Opatija (Croatia), December 8-12, 2008.

    Context and human factors may be essential to improving the measurement processes of each sensor, and the particular context of each sensor could be used to obtain a global definition of context in multisensor environments. Reality may be captured by the human sensory domain based on machine stimuli alone, generating feedback that the machine can then use at its different processing levels, adapting its algorithms and methods accordingly. Reciprocally, human perception of the environment could also be modelled by context in the machine. In the proposed model, both machine and human take sensory information from the environment and process it cooperatively until a decision or semantic synthesis is produced. In this work, we present a model for context representation and reasoning to be exploited by fusion systems. In the first place, the structure and representation of contextual information must be determined before it is exploited by a specific application. Under complex circumstances, the use of context information and human interaction can help to improve a tracking system's performance (for instance, video-based tracking systems may fail when dealing with object interaction, occlusions, crossings, etc.).

    ATC Trajectory Reconstruction for Automated Evaluation of Sensor and Tracker Performance

    Currently, most air traffic controller decisions are based on the information provided by ground support tools built on automation systems, which rely on a network of surveillance sensors and the associated tracker. To guarantee surveillance integrity, performance assessments of the different elements of the surveillance system are clearly necessary. Owing to the evolution of the surveillance processing chain in the recent past, its complexity has increased with the integration of new sensor types (e.g., automatic dependent surveillance-broadcast [ADS-B], Mode S radars, and wide area multilateration [WAM]), data link applications, and networking technologies. With these new sensors, there is a need for system-level performance evaluations as well as methods for assessing each component of the tracking chain.

    This work was funded by EUROCONTROL's TRES contract, by the Spanish Ministry of Economy and Competitiveness under grants CICYT TEC2008-06732/TEC and CICYT TEC2011-28626, and by the Government of Madrid under grant S2009/TIC-1485 (CONTEXTS).

    Multi-Target Tracking in Distributed Sensor Networks using Particle PHD Filters

    Multi-target tracking is an important problem in civilian and military applications. This paper investigates multi-target tracking in distributed sensor networks. Data association, which arises particularly in multi-object scenarios, can be tackled by various solutions. We consider sequential Monte Carlo implementations of the Probability Hypothesis Density (PHD) filter based on random finite sets. This approach circumvents the data association issue by jointly estimating all targets in the region of interest. To this end, we develop the Diffusion Particle PHD Filter (D-PPHDF) as well as a centralized version, called the Multi-Sensor Particle PHD Filter (MS-PPHDF). Their performance is evaluated in terms of the Optimal Subpattern Assignment (OSPA) metric, benchmarked against a distributed extension of the Posterior Cramér-Rao Lower Bound (PCRLB), and compared to the performance of an existing distributed PHD particle filter. Furthermore, the robustness of the proposed tracking algorithms against outliers and their performance under different amounts of clutter are investigated.
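    The OSPA metric used for evaluation above penalizes both localization error and cardinality mismatch between the estimated and true target sets. As a rough illustrative sketch (not the paper's implementation, and using toy 1-D states), a brute-force OSPA for small sets can be written as:

    ```python
    from itertools import permutations

    def ospa(X, Y, c=10.0, p=2):
        """Optimal Subpattern Assignment distance between two finite sets
        of 1-D states: best assignment cost (cut off at c) plus a penalty
        of c per unassigned element, normalized by the larger set size."""
        if len(X) > len(Y):
            X, Y = Y, X                      # ensure |X| = m <= |Y| = n
        m, n = len(X), len(Y)
        if n == 0:
            return 0.0
        # brute-force over assignments; fine for small sets only
        best = min(
            sum(min(abs(x - y), c) ** p for x, y in zip(X, perm))
            for perm in permutations(Y, m)
        )
        return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
    ```

    For example, a perfect estimate gives distance 0, while a missed target contributes the cutoff `c` to the average before the final root.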

    A Practical Approach to the Development of Ontology-Based Information Fusion Systems

    Proceedings of: NATO Advanced Study Institute (ASI) on Prediction and Recognition of Piracy Efforts Using Collaborative Human-Centric Information Systems, Salamanca, 19-30 September, 2011.

    Ontology-based representations are gaining momentum among the alternatives for implementing the knowledge model of high-level fusion applications. In this paper, we provide an introduction to the theoretical foundations of ontology-based knowledge representation and reasoning, with a particular focus on the issues that appear in maritime security, where heterogeneous regulations, information sources, users, and systems are involved. We also present some current approaches and existing technologies for high-level fusion based on ontological representations. Unfortunately, current tools for the practical implementation of ontology-based systems are not fully standardized, or even prepared to work together in medium-scale systems. Accordingly, we discuss different alternatives to face problems such as spatial and temporal knowledge representation or uncertainty management. To illustrate the conclusions drawn from this research, an ontology-based semantic tracking system is briefly presented. Results and latent capabilities of this framework are shown at the end of the paper, where we also envision future opportunities for this kind of application.

    This research activity is supported in part by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS 2008-07029-C02-02.

    Automatic detection, tracking and counting of birds in marine video content

    Robust automatic detection of moving objects in a marine context is a multi-faceted problem due to the complexity of the observed scene. The dynamic nature of the sea caused by waves, boat wakes, and weather conditions poses huge challenges for the development of a stable background model. Moreover, camera motion, reflections, lightning and illumination changes may contribute to false detections. Dynamic background subtraction (DBGS) is widely considered a solution to this issue in the scope of vessel detection for maritime traffic analysis. In this paper, the DBGS techniques suggested for ships are investigated and optimized for the monitoring and tracking of birds in marine video content. In addition to background subtraction, foreground candidates are filtered by a classifier based on their feature descriptors in order to remove non-bird objects. Different types of classifiers have been evaluated, and results on a ground-truth-labeled dataset of challenging video fragments show similar levels of precision and recall of about 95% for the best performing classifier. The remaining foreground items are counted, and birds are tracked along the video sequence using spatio-temporal motion prediction. This allows marine scientists to study the presence and behavior of birds.
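    The background-subtraction step described above maintains a background model and flags pixels that deviate from it. As a minimal sketch under simplifying assumptions (a running-average grayscale model with a fixed threshold, not the adaptive DBGS method the paper evaluates):

    ```python
    def update_background(bg, frame, alpha=0.05):
        """Exponential moving average background model: blend the new
        frame into the background with learning rate alpha."""
        return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
                for brow, frow in zip(bg, frame)]

    def foreground_mask(bg, frame, thresh=30):
        """Mark pixels whose deviation from the background exceeds the
        threshold as foreground candidates."""
        return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
                for brow, frow in zip(bg, frame)]
    ```

    In the paper's pipeline, each connected foreground region would then be passed to a classifier to discard non-bird detections before counting and tracking.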

    Robust sensor fusion in real maritime surveillance scenarios

    Proceedings of: 13th International Conference on Information Fusion (FUSION 2010), Edinburgh, Scotland, UK, July 26-29, 2010.

    This paper presents the design and evaluation of a sensor fusion system for maritime surveillance. The system must exploit the complementary AIS and radar sensing technologies to synthesize a reliable surveillance picture, using a highly efficient implementation to operate in dense scenarios. The paper highlights the realistic effects taken into account for robust data combination and system scalability.

    This work was supported in part by a national project with NUCLEO CC, and by research projects CICYT TEC2008-06732-C02-02/TEC, CICYT TIN2008-06742-C02-02/TSI, SINPROB, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
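    Combining AIS reports with radar tracks requires associating reports to the tracks they most plausibly belong to. As a rough, hypothetical illustration (toy 1-D positions and a simple gate, not the paper's association logic):

    ```python
    def gate_associate(radar_tracks, ais_reports, gate=5.0):
        """Nearest-neighbour gating: pair each AIS report with the closest
        radar track, but only if it falls inside the association gate."""
        pairs = []
        for ais in ais_reports:
            best = min(radar_tracks, key=lambda t: abs(t - ais))
            if abs(best - ais) <= gate:
                pairs.append((best, ais))
        return pairs
    ```

    Reports falling outside every gate are left unpaired, which is one simple way to keep spurious or delayed AIS messages from corrupting the fused picture in dense scenarios.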

    Joint Registration and Fusion of an Infra-Red Camera and Scanning Radar in a Maritime Context

    The number of nodes in sensor networks is continually increasing, and maintaining accurate track estimates inside their common surveillance region is a critical necessity. Modern sensor platforms are likely to carry a range of different sensor modalities, all providing data at differing rates and with varying degrees of uncertainty. These factors complicate the fusion problem, as multiple observation models are required, along with a dynamic prediction model. However, the problem is exacerbated when sensors are not registered correctly with respect to each other, i.e., when they are subject to a static or dynamic bias. In this case, measurements from different sensors may correspond to the same target but fail to correlate with each other in the same Frame of Reference (FoR), which decreases track accuracy. This paper presents a method to jointly estimate the state of multiple targets in a surveillance region and to correctly register a radar and an Infrared Search and Track (IRST) system onto the same FoR to perform sensor fusion. Previous work using this type of parent-offspring process has been successful when calibrating a pair of cameras, but it has never been attempted on a heterogeneous sensor network, nor in a maritime environment. This article presents results on both simulated scenarios and a segment of real data that show a significant increase in track quality in comparison to using incorrectly calibrated sensors or a single radar only.
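    The registration problem above amounts to estimating the bias between sensors before fusing their measurements. As a deliberately simplified sketch (a static, additive 1-D bias with known measurement pairing, far simpler than the joint estimation the paper describes):

    ```python
    def estimate_static_bias(radar_meas, irst_meas):
        """Estimate a constant additive offset between two sensors from
        paired measurements of the same targets. For a constant offset
        under zero-mean Gaussian noise, the least-squares estimate is
        simply the mean of the pairwise differences."""
        diffs = [r - i for r, i in zip(radar_meas, irst_meas)]
        return sum(diffs) / len(diffs)

    def debias(radar_meas, bias):
        """Bring radar measurements into the other sensor's frame of
        reference before fusion."""
        return [r - bias for r in radar_meas]
    ```

    In practice the bias may be dynamic and the pairing unknown, which is why joint state-and-registration estimation, as in the paper, is needed rather than this two-step correction.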

    PETS 2017: dataset and challenge

    This paper presents the datasets and challenges evaluated under PETS 2017. In this edition, PETS continues the evaluation theme of on-board surveillance systems for the protection of mobile critical assets, as set in PETS 2016. The datasets include (1) the ARENA Dataset, an RGB camera dataset, as used from PETS 2014 to PETS 2016, which addresses the protection of trucks; and (2) the IPATCH Dataset, a multi-sensor dataset, as used in PETS 2016, addressing the application of multi-sensor surveillance to protect a vessel at sea from piracy. The datasets allow for performance evaluation of tracking in low-density scenarios and detection of various surveillance events, ranging from innocuous abnormalities to dangerous and criminal situations. Training data for tracking algorithms is released with the dataset; tracking data is also available for authors addressing only the surveillance event detection challenges and not working on tracking.