
    Dempster-Shafer based multi-view occupancy maps

    A method is presented for computing occupancy maps with a set of calibrated and synchronised cameras. In particular, Dempster-Shafer based fusion of the ground occupancies computed from each view is proposed. The method yields very accurate occupancy detection results, and in terms of concentration of the occupancy evidence around ground-truth person positions it outperforms the state-of-the-art probabilistic occupancy map method and fusion by summing.
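    The core fusion step can be sketched with Dempster's rule of combination. A minimal illustration, assuming a binary frame of discernment {occupied, free} per ground cell; the mass values and names are purely illustrative, not the paper's calibration.

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as dicts keyed
    by frozenset subsets of the frame of discernment."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    # Normalise by the non-conflicting mass.
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Camera 1 sees strong evidence for occupancy; camera 2 is less certain.
# Mass on the full set {occupied, free} represents ignorance.
m1 = {frozenset(["occupied"]): 0.7, frozenset(["occupied", "free"]): 0.3}
m2 = {frozenset(["occupied"]): 0.4, frozenset(["occupied", "free"]): 0.6}
fused = combine(m1, m2)
```

    Fusing the two views concentrates more mass on "occupied" than either camera committed alone, while retaining explicit ignorance mass, which is what distinguishes this scheme from plain summing.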

    Activity monitoring of people in buildings using distributed smart cameras

    Systems for monitoring the activity of people inside buildings (e.g., how many people are there, where are they, what are they doing) have numerous potential applications, including domotics (control of lighting, heating, etc.), elderly care (gathering statistics on daily life) and video teleconferencing. We discuss the key challenges and present the preliminary results of our ongoing research on the use of distributed smart cameras for activity monitoring of people in buildings. The emphasis of our research is on:
    - the use of smart cameras (embedded devices): video is processed locally (distributed algorithms), and only meta-data is sent over the network (minimal data exchange);
    - camera collaboration: cameras with overlapping views work together in a network in order to increase the overall system performance;
    - robustness: the system should work in real conditions (e.g., be robust to lighting changes).
    Our research setup consists of cameras connected to PCs (to simulate smart cameras), each connected to one central PC. The system builds in real time an occupancy map of a room (indicating the positions of the people in the room) by fusing the information from the different cameras in a Dempster-Shafer framework.

    PhD forum: Dempster-Shafer based camera contribution evaluation for task assignment in vision networks

    In a network of cameras, it is important that the right subset of cameras takes care of the right task. In this work, we describe a general framework to evaluate the contribution of subsets of cameras to a task. Each task is the observation of an event of interest and consists of assessing the validity of a set of hypotheses. All cameras gather evidence for those hypotheses. The evidence from different cameras is fused by using the Dempster-Shafer theory of evidence. After combining the evidence for a set of cameras, the remaining uncertainty about a set of hypotheses allows us to identify how well a certain camera subset is suited for a certain task. Taking into account these subset contribution values, we can determine in an efficient way the set of subset-task assignments that yields the best overall task performance.
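    The subset-scoring idea can be sketched as follows: fuse each subset's evidence with Dempster's rule and use the mass still left on total ignorance as the "remaining uncertainty". The camera names, hypotheses, and mass values below are hypothetical, and the score is one plausible uncertainty measure, not necessarily the paper's exact one.

```python
from itertools import combinations
from functools import reduce

THETA = frozenset(["person_present", "person_absent"])  # frame of discernment

def dempster(m1, m2):
    """Dempster's rule of combination for dict-based mass functions."""
    out, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in out.items()}

def remaining_uncertainty(masses):
    """Fuse a subset's evidence; return the mass still on total ignorance."""
    fused = reduce(dempster, masses)
    return fused.get(THETA, 0.0)

# Hypothetical per-camera evidence for one observation task.
evidence = {
    "cam1": {frozenset(["person_present"]): 0.5, THETA: 0.5},
    "cam2": {frozenset(["person_present"]): 0.3, THETA: 0.7},
    "cam3": {THETA: 1.0},  # cam3 cannot observe the event at all
}

# Score every two-camera subset; lower residual uncertainty = better suited.
scores = {
    subset: remaining_uncertainty([evidence[c] for c in subset])
    for subset in combinations(sorted(evidence), 2)
}
best = min(scores, key=scores.get)
```

    A task-assignment layer could then match subsets to tasks using these scores, e.g. by solving a small assignment problem over the subset contribution values.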

    Rate allocation algorithm for pixel-domain distributed video coding without feedback channel

    In some video coding applications, it is desirable to reduce the complexity of the video encoder at the expense of a more complex decoder. Distributed Video (DV) Coding is a new paradigm that aims to achieve this. To allocate a proper number of bits to each frame, most DV coding algorithms use a feedback channel (FBC). However, in some cases, an FBC does not exist. In this paper, we therefore propose a rate allocation (RA) algorithm for pixel-domain distributed video coders without an FBC. Our algorithm estimates at the encoder the number of bits for every frame without significantly increasing the encoder complexity. Experimental results show that our RA algorithm delivers satisfactory estimates of the adequate encoding rate, especially for sequences with little motion.
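    One way to picture encoder-side rate estimation without a feedback channel is to bound the per-frame budget by the empirical entropy of the quantised temporal residual, which stays cheap to compute. This is only a hedged sketch of the general idea, not the paper's algorithm; the quantisation step and test frames are made up.

```python
import numpy as np

def estimated_bits(prev_frame, cur_frame, qstep=4):
    """Encoder-side rate estimate: empirical entropy (bits/pixel) of the
    quantised temporal residual, scaled to a whole-frame bit budget."""
    residual = cur_frame.astype(np.int16) - prev_frame.astype(np.int16)
    q = np.round(residual / qstep).astype(np.int32)
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    bits_per_pixel = float(-np.sum(p * np.log2(p)))
    return bits_per_pixel * q.size

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# A low-motion frame: the previous frame plus small random noise.
noise = rng.integers(-10, 11, (64, 64))
cur = np.clip(prev.astype(np.int16) + noise, 0, 255).astype(np.uint8)
rate = estimated_bits(prev, cur)
```

    For sequences with little motion, the residual distribution is narrow, so the estimate stays far below the raw 8 bits/pixel, consistent with the abstract's observation that low-motion content is easiest to estimate.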

    Camera selection for tracking in distributed smart camera networks

    Tracking persons with multiple cameras with overlapping fields of view instead of with one camera leads to more robust decisions. However, operating multiple cameras instead of one requires more processing power and communication bandwidth, which are limited resources in practical networks. When the fields of view of different cameras overlap, not all cameras are equally needed for localizing a tracking target. When only a selected set of cameras do processing and transmit data to track the target, a substantial saving of resources is achieved. The recent introduction of smart cameras with on-board image processing and communication hardware makes such a distributed implementation of tracking feasible. We present a novel framework for selecting cameras to track people in a distributed smart camera network that is based on generalized information theory. By quantifying the contribution of one or more cameras to the tracking task, the limited network resources can be allocated appropriately, such that the best possible tracking performance is achieved. With the proposed method, we dynamically assign a subset of all available cameras to each target and track it in difficult circumstances of occlusions and limited fields of view with the same accuracy as when using all cameras.
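    An information-theoretic contribution measure can be illustrated on a discretised ground plane: fuse a camera subset's likelihoods into a posterior over target position and score the subset by the posterior's entropy; a camera's contribution is the entropy reduction it brings. The grid size and likelihoods below are hypothetical, and this is a simplified stand-in for the paper's generalized information-theoretic criterion.

```python
import numpy as np

def posterior_entropy(prior, likelihoods):
    """Entropy (bits) of the target-position posterior after fusing the
    likelihoods of a camera subset over a discretised ground plane."""
    post = prior.copy()
    for lik in likelihoods:
        post = post * lik
    post = post / post.sum()
    p = post[post > 0]
    return float(-np.sum(p * np.log2(p)))

grid = 10                       # candidate ground-plane cells (illustrative)
prior = np.full(grid, 1.0 / grid)

# Hypothetical per-camera likelihoods: cam_a localises the target sharply,
# cam_b only coarsely.
cam_a = np.array([0.01] * 4 + [0.90, 0.05] + [0.01] * 4)
cam_b = np.where((np.arange(grid) >= 3) & (np.arange(grid) < 7), 0.2, 0.1)

h_a = posterior_entropy(prior, [cam_a])
h_ab = posterior_entropy(prior, [cam_a, cam_b])
contribution_b = h_a - h_ab     # entropy reduction gained by adding cam_b
```

    Cameras whose entropy reduction is negligible can be left idle for that target, which is exactly where the bandwidth and processing savings come from.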

    Sub-optimal camera selection in practical vision networks through shape approximation

    Within a camera network, the contribution of a camera to the observations of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time. An automatic selection of a subset of cameras that significantly contributes to the desired observation of a scene can be of great value for the reduction of the amount of transmitted and stored image data. We propose a greedy algorithm for camera selection in practical vision networks where the selection decision has to be taken in real time. The selection criterion is based on the information from each camera sensor's observations of persons in a scene, and only low data rate information is required to be sent over wireless channels since the image frames are first locally processed by each sensor node before transmission. Experimental results show that the performance of the proposed greedy algorithm is close to the performance of the optimal selection algorithm. In addition, we propose communication protocols for such camera networks, and through experiments, we show the proposed protocols improve latency and observation frequency without deteriorating performance.
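    The greedy strategy itself is generic: at each step, add the camera that most improves the subset's score. A minimal sketch, with a toy coverage utility standing in for the shape-approximation criterion; all names and numbers are illustrative.

```python
def greedy_select(cameras, utility, k):
    """Greedily build a k-camera subset: at each step add the camera that
    most increases the subset utility. Needs only O(k*n) utility
    evaluations instead of exhaustively scoring every k-subset."""
    selected, remaining = [], list(cameras)
    for _ in range(k):
        best = max(remaining, key=lambda cam: utility(selected + [cam]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy utility: how many ground-plane cells a subset covers (a hypothetical
# stand-in for the frontal-observation / shape-approximation criterion).
coverage = {"cam1": {1, 2, 3}, "cam2": {3, 4}, "cam3": {4, 5, 6}, "cam4": {1}}

def covered_cells(subset):
    return len(set().union(*(coverage[c] for c in subset)))

picked = greedy_select(coverage, covered_cells, 2)
```

    In this toy instance the greedy choice happens to match the exhaustive optimum; the abstract's point is that in practice the gap stays small while the real-time cost drops dramatically.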

    Optimal camera selection in vision networks for shape approximation

    Within a camera network, the contribution of a camera to the observation of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time. An automatic selection of a subset of cameras that significantly contributes to the desired observation of a scene can be of great value for the reduction of the amount of transmitted or stored image data. In this work, we propose low data rate schemes to select from a vision network a subset of cameras that provides a good frontal observation of the persons in the scene and allows for the best approximation of their 3D shape. We also investigate to what degree low data rates trade off against the quality of the reconstructed 3D shapes.

    Efficient approximate foreground detection for low-resource devices

    A broad range of very powerful foreground detection methods exist because this is an essential step in many computer vision algorithms. However, because of memory and computational constraints, simple static background subtraction is very often the technique that is used in practice on a platform with limited resources such as a smart camera. In this paper, we propose to apply more powerful techniques on a reduced scan-line version of the captured image to construct an approximation of the actual foreground without overburdening the smart camera. We show that the performance of static background subtraction quickly drops outside of a controlled laboratory environment, and that this is not the case for the proposed method because of its ability to update its background model. Furthermore, we provide a comparison with foreground detection on a subsampled version of the captured image. We show that with the proposed foreground approximation, higher true positive rates can be achieved.
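    The scan-line idea can be sketched as follows: run an adaptive (running-average) background model, but only on a subset of image rows, so the per-frame cost drops roughly in proportion to the row stride. The model, thresholds, and test frame below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def scanline_foreground(frame, background, rows, alpha=0.05, thresh=25):
    """Approximate foreground detection on a reduced set of scan lines:
    only the listed rows are compared against a running-average background
    model, which is updated on background pixels so it can adapt to
    gradual lighting changes."""
    fg = np.zeros(frame.shape, dtype=bool)
    for r in rows:
        diff = np.abs(frame[r].astype(np.float64) - background[r])
        fg[r] = diff > thresh
        bg = ~fg[r]
        # Exponential running average keeps the model current, which is
        # what static background subtraction lacks.
        background[r, bg] = (1 - alpha) * background[r, bg] + alpha * frame[r, bg]
    return fg

h, w = 120, 160
background = np.full((h, w), 100.0)       # learned background (uniform grey)
frame = np.full((h, w), 100, dtype=np.uint8)
frame[40:80, 60:100] = 200                # bright blob standing in for a person
mask = scanline_foreground(frame, background, rows=range(0, h, 4))
```

    The resulting mask is sparse in the row direction; the full-resolution foreground can then be approximated by interpolating between detected scan lines.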