4 research outputs found

    FireFly Mosaic: A Vision-Enabled Wireless Sensor Networking System

    Abstract — With the advent of CMOS cameras, it is now possible to make compact, cheap and low-power image sensors capable of on-board image processing. These embedded vision sensors provide a rich new sensing modality enabling new classes of wireless sensor networking applications. In order to build these applications, system designers need to overcome challenges associated with limited bandwidth, limited power, group coordination and fusing of multiple camera views with various other sensory inputs. Real-time properties must be upheld if multiple vision sensors are to process data, communicate with each other and make a group decision before the measured environmental feature changes. In this paper, we present FireFly Mosaic, a wireless sensor network image processing framework with operating system, networking and image processing primitives that assist in the development of distributed vision-sensing tasks. Each FireFly Mosaic wireless camera consists of a FireFly [1] node coupled with a CMUcam3 [2] embedded vision processor. The FireFly nodes run the Nano-RK [3] real-time operating system and communicate using the RT-Link [4] collision-free TDMA link protocol. Using FireFly Mosaic, we demonstrate an assisted living application capable of fusing multiple cameras with overlapping views to discover and monitor daily activities in a home. Using this application, we show how an integrated platform with support for time synchronization, a collision-free TDMA link layer, an underlying RTOS and an interface to an embedded vision sensor provides a stable framework for distributed real-time vision processing. To the best of our knowledge, this is the first wireless sensor networking system to integrate multiple coordinating cameras performing local processing.
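    The collision-free TDMA idea behind RT-Link can be sketched as follows. This is a hypothetical illustration of dedicated-slot scheduling (the function names, node labels, and frame size are invented for the example), not the actual RT-Link implementation:

```python
# Hypothetical sketch of collision-free TDMA slot assignment, in the
# spirit of RT-Link as described in the abstract (illustrative only).

def assign_slots(node_ids, frame_slots):
    """Give each node a dedicated transmit slot within a repeating frame.

    With at most one node per slot, transmissions never collide.
    """
    if len(node_ids) > frame_slots:
        raise ValueError("more nodes than slots in a frame")
    return {node: slot for slot, node in enumerate(node_ids)}

def may_transmit(node, current_slot, schedule, frame_slots):
    """A node transmits only in its own slot of the repeating frame."""
    return schedule[node] == current_slot % frame_slots

# Three camera nodes share an 8-slot frame; each owns exactly one slot.
schedule = assign_slots(["cam1", "cam2", "cam3"], frame_slots=8)
```

    Because slot ownership is exclusive within a frame, group coordination reduces to time synchronization: nodes that agree on the current slot number never transmit simultaneously.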

    Inference of Non-Overlapping Camera Network Topology using Statistical Approaches

    This work proposes an unsupervised learning model to infer the topological information of a camera network automatically. The algorithm works on both non-overlapping and overlapping camera fields of view (FOVs). The constructed model detects the entry/exit zones of moving objects across the camera FOVs using the Data-Spectroscopic method. The probabilistic relationships between each pair of entry/exit zones are learnt in order to localize the camera network nodes, and the certainty of these relationships is increased by computer-generating additional Monte Carlo observations of entry/exit points. Our method requires no prior assumptions, no dedicated processor for each camera, and no communication among the cameras. The aim is to determine the relationship between each pair of linked cameras using statistical approaches, which helps to track moving objects based on their present location. The output is a Markov chain model representing the weighted links between each pair of camera FOVs.
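    The final step, turning pairwise entry/exit observations into a Markov chain of weighted links, can be sketched as below. The zone labels and counts are illustrative assumptions; the abstract does not specify the estimator, so a simple empirical-frequency estimate is used here:

```python
# Hedged sketch: estimating transition probabilities between entry/exit
# zones from observed (exit, entry) pairs, yielding a Markov-chain model
# of camera-to-camera links. Zone names and data are illustrative.
from collections import Counter, defaultdict

def markov_links(observations):
    """observations: iterable of (from_zone, to_zone) pairs.

    Returns P[from][to] = empirical transition probability.
    """
    counts = defaultdict(Counter)
    for src, dst in observations:
        counts[src][dst] += 1
    return {src: {dst: n / sum(c.values()) for dst, n in c.items()}
            for src, c in counts.items()}

# Objects leaving zone A_exit reappeared at B_entry 3 times, C_entry once.
obs = [("A_exit", "B_entry")] * 3 + [("A_exit", "C_entry")]
P = markov_links(obs)
```

    The Monte Carlo step described in the abstract would simply feed additional synthetic `(from_zone, to_zone)` pairs into the same estimator to sharpen the probabilities.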

    Modeling and Optimizing the Coverage of Multi-Camera Systems

    This thesis approaches the problem of modeling a multi-camera system's performance from system and task parameters by describing the relationship in terms of coverage. This interface allows a substantial separation of the two concerns: the ability of the system to obtain data from the space of possible stimuli, according to task requirements, and the description of the set of stimuli required for the task. The conjecture is that for any particular system, it is in principle possible to develop such a model with ideal prediction of performance. Accordingly, a generalized structure and tool set is built around the core mathematical definitions of task-oriented coverage, without tying it to any particular model. A family of problems related to coverage in the context of multi-camera systems is identified and described. A comprehensive survey of the state of the art in approaching such problems concludes that by coupling the representation of coverage to narrow problem cases and applications, and by attempting to simplify the models to fit optimization techniques, both the generality and the fidelity of the models are reduced. It is noted that models exhibiting practical levels of fidelity are well beyond the point where only metaheuristic optimization techniques are applicable. Armed with these observations and a promising set of ideas from surveyed sources, a new high-fidelity model for multi-camera vision based on the general coverage framework is presented. This model is intended to be more general in scope than previous work, and despite the complexity introduced by the multiple criteria required for fidelity, it conforms to the framework and is thus tractable for certain optimization approaches. Furthermore, it is readily extended to different types of vision systems. This thesis substantiates all of these claims. The model's fidelity and generality is validated and compared to some of the more advanced models from the literature. Three of the aforementioned coverage problems are then approached in application cases using the model. In one case, a bistatic variant of the sensing modality is used, requiring a modification of the model; the compatibility of this modification, both conceptually and mathematically, illustrates the generality of the framework.
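    A minimal toy version of task-oriented coverage can be sketched as follows. This is a deliberately simplified 2-D model (position, heading, angular FOV, and range only); the thesis's high-fidelity model involves many more criteria, such as resolution and focus, and all parameter names here are assumptions for illustration:

```python
# Minimal sketch of task-oriented coverage for a multi-camera system:
# a point stimulus is "covered" if at least one camera sees it within
# range and within its angular field of view. Illustrative only.
import math

def sees(cam, point):
    """cam: (x, y, heading_rad, fov_rad, max_range)."""
    x, y, heading, fov, rng = cam
    dx, dy = point[0] - x, point[1] - y
    dist = math.hypot(dx, dy)
    if dist > rng or dist == 0:
        return dist == 0  # a point at the camera itself counts as seen
    bearing = math.atan2(dy, dx)
    # Wrap the bearing-heading difference into [-pi, pi].
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

def coverage(cams, task_points):
    """Fraction of task-relevant points covered by at least one camera."""
    covered = sum(any(sees(c, p) for c in cams) for p in task_points)
    return covered / len(task_points)
```

    Separating `sees` (the system's sensing model) from `coverage` (the task's stimulus set) mirrors the two concerns the thesis keeps apart: optimization can then vary camera placements while the task description stays fixed.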

    Topology Inference for a Vision-Based Sensor Network

    In this paper we describe a technique to infer the topology and connectivity information of a network of cameras based on observed motion in the environment. While the technique can use labels from reliable camera systems, the algorithm is powerful enough to function using ambiguous tracking data. The method requires no prior knowledge of the relative locations of the cameras and operates under very weak environmental assumptions. Our approach stochastically samples plausible agent trajectories based on a delay model that allows for transitions to and from sources and sinks in the environment. The technique demonstrates considerable robustness both to sensor error and non-trivial patterns of agent motion. The output of the method is a Markov model describing the behavior of agents in the system and the underlying traffic patterns. The concept is demonstrated with simulation data and verified with experiments conducted on a six-camera sensor network.
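    One ingredient of such a delay model can be sketched as follows: scoring a candidate link between two cameras by how well the observed departure-to-arrival delays fit an assumed transit-time distribution. The Gaussian form, the parameters, and the data are all assumptions for illustration; the paper's actual model is not specified in the abstract:

```python
# Hedged sketch of a transit-delay plausibility score for a candidate
# camera-to-camera link. A Gaussian transit-time model is assumed here
# purely for illustration.
import math

def delay_likelihood(delays, mean, std):
    """Mean Gaussian log-likelihood of observed transit delays."""
    def logpdf(d):
        return (-0.5 * ((d - mean) / std) ** 2
                - math.log(std * math.sqrt(2 * math.pi)))
    return sum(logpdf(d) for d in delays) / len(delays)

# Delays consistent with a ~5 s transit score higher than scattered ones,
# supporting the hypothesis that the two cameras are actually linked.
good = delay_likelihood([4.8, 5.1, 5.0, 4.9], mean=5.0, std=0.5)
bad = delay_likelihood([1.0, 9.0, 2.5, 8.0], mean=5.0, std=0.5)
```

    In a full system, scores like these would weight the stochastically sampled trajectories, and the accepted transitions would accumulate into the Markov model the paper describes.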