The goal of sensor fusion is to combine observations of an environment from multiple sources into the best possible track picture. For simplicity, this is usually done by sending all measurements to a single node whose sole task is to fuse them into a coherent track picture. This paper introduces a new framework, the Distributed Fusion Architecture for Classification and Tracking (DFACT), that does not rely on a single “full-awareness node” to fuse observations, but instead turns every sensor into a fusion center. Moving a network from a centralized to a distributed architecture complicates sensor fusion, but provides tangible benefits: since each sensor is both a source and a sink of information, the loss of any individual component destroys only one of many possible information channels. In this paper, we discuss how to fuse both tracking and classification observations in a distributed network and compare its performance to that of a centralized framework. Each track has both a kinematic state and a belief state associated with it. Components use the information form of the Kalman filter for kinematic tracking, while target type is determined using a hierarchical belief structure spanning object classification, recognition, and identification. Each component in the network maintains its own independent picture of the environment, and information is exchanged between components via a queued messaging system. We compare the performance of centralized and distributed architectures, together with their respective communication and computation costs.
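One reason the information form of the Kalman filter suits distributed fusion is that each sensor's measurement contribution is additive in information space, so contributions can arrive in any order from any node and simply be summed. The sketch below illustrates this property under assumed toy values (the state dimension, measurement models `H`, noise covariances `R`, and measurements `z` are all illustrative, not from the paper):

```python
import numpy as np

def information_contribution(H, R, z):
    """Convert a measurement z with linear model (H, R) to information form."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ z, H.T @ Rinv @ H  # (information vector, information matrix)

# Prior over a 2-D state, expressed as an information vector/matrix pair.
P0 = np.eye(2) * 10.0           # loose prior covariance
x0 = np.zeros(2)                # prior mean
Y = np.linalg.inv(P0)           # prior information matrix
y = Y @ x0                      # prior information vector

# Two sensors observe the same state with different noise levels.
H = np.eye(2)
sensors = [(H, np.eye(2) * 0.5, np.array([1.1, 1.9])),
           (H, np.eye(2) * 1.0, np.array([0.9, 2.1]))]

for H_i, R_i, z_i in sensors:
    i_vec, I_mat = information_contribution(H_i, R_i, z_i)
    y += i_vec                  # fusion is plain addition,
    Y += I_mat                  # independent of arrival order

x_fused = np.linalg.solve(Y, y)  # recover the fused state estimate
```

Because the updates commute, each node in a distributed network can accumulate whatever contributions reach it and still converge on a consistent estimate; the covariance-form Kalman update, by contrast, is order-sensitive unless measurements are batched.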