
    Distributed estimation over a low-cost sensor network: a review of state-of-the-art

    The proliferation of low-cost, lightweight, and power-efficient sensors, together with advances in networked systems, enables the deployment of multiple sensors. Distributed estimation provides a scalable, fault-robust fusion framework with a peer-to-peer communication architecture. There is therefore a real need for a critical review of existing and, more importantly, recent advances in distributed estimation over low-cost sensor networks. This paper presents a comprehensive review of state-of-the-art solutions in this research area, exploring their characteristics, advantages, and challenging issues. Several open problems and future avenues of research are also highlighted.

    Distributed target tracking in wireless camera networks

    PhD thesis. Distributed target tracking (DTT) is desirable in wireless camera networks to achieve scalability and robustness to node or link failures. DTT estimates the target state via information exchange and fusion among cameras. This thesis proposes new DTT algorithms to handle five major challenges of DTT in wireless camera networks: non-linearity in the camera measurement model, temporary lack of measurements (benightedness) due to the limited field of view, redundant information in the network, limited connectivity of the network due to limited communication ranges, and asynchronous information caused by varying and unknown frame-processing delays.

    The algorithms consist of two phases: estimation and fusion. In the estimation phase, the cameras process their captured frames, detect the target, and estimate the target state (location and velocity) and its uncertainty using the Extended Information Filter (EIF), which handles the non-linearity. In the fusion phase, the cameras exchange their local target information with their communicative neighbours and fuse the information.

    The contributions of this thesis are as follows. The target states estimated by the EIFs undergo weighted fusion, with weights chosen based on the estimated uncertainty (error covariance) and the number of nodes with redundant information, so that the information of benighted nodes and the redundant information receive lower weights. At each time step, only the cameras that have a view of the target, and the cameras that might have a view of the target at the next time step, participate in the fusion; this reduces the energy consumption of the network. The algorithm selects these cameras dynamically by thresholding their shortest distances (in the communication graph) from the cameras having a view of the target. Before fusion, each camera predicts the target information of the other cameras to temporally align its own information with the (asynchronous) information received from them; the prediction uses the latest estimated velocity of the target. Experimental results show that the proposed algorithms achieve higher tracking accuracy than the state of the art under the five DTT challenges.
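
    The weighted-fusion and temporal-alignment steps described in this abstract can be illustrated with a short sketch. This is not the thesis's exact algorithm: the constant-velocity model, the four-dimensional state [px, py, vx, vy], the process-noise term, and all function names are illustrative assumptions; only the general ideas (covariance-weighted fusion in information form, and predicting a stale estimate forward with the latest velocity before fusing) come from the abstract.

    import numpy as np

    def predict_to_time(x, P, dt, q=0.1):
        """Propagate a constant-velocity state [px, py, vx, vy] forward by dt
        to temporally align an older (asynchronous) estimate before fusion.
        The constant-velocity model and process noise q are assumptions."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                  # position += velocity * dt
        Q = q * dt * np.eye(4)                  # simple process-noise inflation
        return F @ x, F @ P @ F.T + Q

    def fuse_weighted_information(estimates, weights):
        """Covariance-weighted fusion of local (x, P) estimates in information
        form; lower weights down-weight benighted or redundant nodes."""
        Y = np.zeros((4, 4))                    # fused information matrix
        y = np.zeros(4)                         # fused information vector
        for (x, P), w in zip(estimates, weights):
            Y_i = np.linalg.inv(P)              # local information matrix
            Y += w * Y_i
            y += w * (Y_i @ x)
        P_fused = np.linalg.inv(Y)
        return P_fused @ y, P_fused

    # Example: fuse two camera estimates, one of which is 0.2 s stale.
    x1, P1 = np.array([1.0, 2.0, 0.5, 0.0]), 0.1 * np.eye(4)
    x2, P2 = np.array([1.1, 2.1, 0.5, 0.0]), 0.3 * np.eye(4)
    x2, P2 = predict_to_time(x2, P2, dt=0.2)    # align the stale estimate
    x_f, P_f = fuse_weighted_information([(x1, P1), (x2, P2)], [0.7, 0.3])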
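
    The dynamic camera selection can likewise be sketched as a breadth-first search over the communication graph, limited by a hop threshold. The graph representation, the default threshold value, and the function name are assumptions for illustration; the abstract only states that cameras are selected by thresholding their shortest distance from the cameras that currently view the target.

    from collections import deque

    def select_active_cameras(adjacency, viewing, hop_threshold=1):
        """Return the cameras that participate in tracking: those currently
        viewing the target plus cameras within hop_threshold hops of them
        in the communication graph (candidates to see the target next)."""
        dist = {c: 0 for c in viewing}          # multi-source BFS from viewing cameras
        queue = deque(viewing)
        while queue:
            c = queue.popleft()
            if dist[c] == hop_threshold:
                continue                        # do not expand past the threshold
            for n in adjacency[c]:
                if n not in dist:
                    dist[n] = dist[c] + 1
                    queue.append(n)
        return set(dist)

    # Example: cameras 0 and 1 view the target; camera 2 is one hop from camera 1.
    adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(select_active_cameras(adjacency, {0, 1}))  # -> {0, 1, 2}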