31 research outputs found

    An Epipolar Line from a Single Pixel

    Computing the epipolar geometry from feature points between cameras with very different viewpoints is often error-prone, as an object's appearance can vary greatly between images. For such cases, it has been shown that using motion extracted from video achieves much better results than using a static image. This paper extends these earlier works based on scene dynamics. We propose a new method to compute the epipolar geometry from a video stream by exploiting the following observation: for a pixel p in Image A, all pixels corresponding to p in Image B lie on the same epipolar line. Equivalently, the image of the line going through camera A's center and p is an epipolar line in B. Therefore, when cameras A and B are synchronized, the momentary images of two objects projecting to the same pixel p in camera A at times t1 and t2 lie on an epipolar line in camera B. Based on this observation we achieve fast and precise computation of epipolar lines, and calibrating cameras based on our method of finding epipolar lines is much faster and more robust than previous methods. Comment: WACV 201
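
    The observation lends itself to a simple line-fitting step: collect the positions, in camera B, of objects that at different times projected to the same pixel in camera A, and fit a 2D line through them. The sketch below reconstructs only that step (not the paper's full pipeline), with hypothetical coordinates.

```python
import numpy as np

def fit_epipolar_line(points_b):
    """Fit a line (a, b, c), with a*x + b*y + c = 0, through points observed
    in camera B that corresponded (at different times) to one fixed pixel p
    in camera A.  Per the observation above, these points should all lie on
    the epipolar line of p; an SVD least-squares fit recovers it."""
    pts = np.asarray(points_b, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # rows (x, y, 1)
    line = np.linalg.svd(homog)[2][-1]                 # null-space direction
    return line / np.linalg.norm(line[:2])             # normalize (a, b)

# Hypothetical detections in camera B at times t1..t4 of objects that all
# projected to the same pixel p in camera A.
obs_b = [(120.4, 80.2), (150.9, 95.1), (200.3, 119.8), (240.6, 140.2)]
print(fit_epipolar_line(obs_b))
```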

    Robust Object Detection with Real-Time Fusion of Multiview Foreground Silhouettes


    Multi-view video segmentation and tracking for video surveillance

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple-camera systems offer the clear advantages of covering wide areas and handling occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. The proposed technique consists of the following steps. We first run a single-view tracking algorithm on each camera view, and then apply a consistent object-labeling algorithm across all views. In the next step, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects across views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame) a set of descriptors is extracted to find the best match between the two views based on region-descriptor similarity. This method is able to deal with multiple objects, and track-management issues such as occlusion, appearance, and disappearance of objects are resolved using information from all views. The method can track both rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
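
    A minimal sketch of the cross-view mapping step described above: given matched ground-plane points between two overlapping views (here hand-picked and hypothetical; in the described pipeline they would come from the single-view trackers), a homography is estimated and used to project an object's position from one view into the other.

```python
import numpy as np
import cv2

# Hypothetical matched ground-plane points (e.g., object footprints seen in
# both overlapping views).  In the pipeline described above they would come
# from the single-view trackers rather than being hand-picked like this.
pts_view1 = np.float32([[100, 200], [400, 210], [390, 500], [120, 480]])
pts_view2 = np.float32([[ 80, 250], [420, 240], [430, 560], [ 90, 540]])

# Homography from view 1 to view 2 (with more, noisier matches,
# cv2.RANSAC could be passed as the method to reject outliers).
H, _ = cv2.findHomography(pts_view1, pts_view2)

# Map an object centroid detected in view 1 into view 2 to look for the
# corresponding object there, as in the labeling/verification step.
centroid_v1 = np.float32([[[250.0, 350.0]]])        # shape (1, 1, 2)
centroid_v2 = cv2.perspectiveTransform(centroid_v1, H)
print(centroid_v2.ravel())
```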

    The Bluetooth And GPS Tracking System: Comparison And Analysis Of Technique

    In recent years, tracking systems have become a popular way to locate misplaced objects, owing to the availability of related systems on the market. The common technologies used are RFID, Bluetooth, Wi-Fi, and GPS. Such a framework typically combines an electronic device (hardware) with a mobile application (software) that tracks the missing object. Many designs also include communication components, such as satellite transmitters, so that the device can report to a remote user, and Google Maps is used to view the device's location. This study focuses on two services for tracking lost objects: GPS and Bluetooth. The rise of Bluetooth Low Energy (known as BLE or Bluetooth 4.0) opens up nearly unlimited possibilities for Bluetooth tracking applications. GPS, on the other hand, is a service that communicates with satellites, provides a position anywhere on the globe, and gives the coordinates of the location tracker.

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching, and tracking are the major concerns in obtaining 3D information consisting of depth, direction, and velocity. To find depth, camera parameters and matched points are the two necessary inputs. Depth, direction, and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers or the real size of an object in the real world must be provided or known. Self-calibration can overcome this limitation of traditional calibration, but it does not by itself provide depth or matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are done under self-calibrated conditions. Three contributions are introduced to achieve these objectives. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, once the relationship matrices are available, another post-processing method, status-based matching, is introduced to improve the object-matching result; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Thirdly, tracking is performed based on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
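
    As a worked illustration of the depth step (camera parameters plus a matched point pair yield depth), the sketch below triangulates one hypothetical matched point with two assumed projection matrices; it is not the thesis's self-calibration procedure, only the standard triangulation it ultimately relies on.

```python
import numpy as np
import cv2

# Minimal sketch (not the thesis's algorithm): once the two cameras are
# calibrated, a matched point pair yields depth by triangulation.  The focal
# length, baseline, and pixel coordinates below are assumptions.
f, baseline = 800.0, 0.3                       # focal length (px), baseline (m)
P1 = np.hstack([np.diag([f, f, 1.0]), np.zeros((3, 1))])   # camera 1 at origin
P2 = P1.copy()
P2[0, 3] = -f * baseline                       # camera 2 shifted along x

# A hypothetical matched point in each image (principal point at the origin).
pt1 = np.float32([[320.0], [240.0]])
pt2 = np.float32([[300.0], [240.0]])

X = cv2.triangulatePoints(P1, P2, pt1, pt2)    # homogeneous 4x1 result
X = (X[:3] / X[3]).ravel()
print("3D point:", X, "depth Z:", X[2])        # Z = f * baseline / disparity = 12 m
```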

    Principal axis-based correspondence between multiple cameras for people tracking


    Tracking with constraints in a web of sensors

    Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (leaves 49-50). With the dramatic fall in the price of electronics over the past several years, large-scale networks of sensors are steadily becoming more feasible. The goal of this research project was to deploy a sensor network for collecting data on human trajectories, and to develop and test a method for taking walls into account, via a penalty function, within an ongoing trajectory-tracking research project. Two sensor networks were deployed, one with distance sensors and one with cameras, and the camera network was used to collect the data for the two papers. The trajectory-tracking model was modified to incorporate wall constraints, enabling the exploration of more realistic scenarios, with encouraging preliminary results. by Brian Kerry Dunagan. M.Eng. and S.B.
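
    The wall-constraint idea can be illustrated with a penalty term that charges a trajectory for every step that crosses a wall segment; the sketch below is a generic illustration of that idea, not the thesis's exact formulation, and all data in it are hypothetical.

```python
def _side(u, v, w):
    # Signed area: which side of the line u->v the point w lies on.
    return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])

def _crosses(p, q, a, b):
    """True if the step p->q strictly intersects the wall segment a->b."""
    return (_side(p, q, a) * _side(p, q, b) < 0) and \
           (_side(a, b, p) * _side(a, b, q) < 0)

def wall_penalty(trajectory, walls, weight=1e3):
    """Illustrative penalty term: add a large cost for every trajectory step
    that passes through a wall, so an optimizer prefers tracks that respect
    the building layout.  (Generic sketch, not the thesis's formulation.)"""
    cost = 0.0
    for p, q in zip(trajectory[:-1], trajectory[1:]):
        for a, b in walls:
            if _crosses(p, q, a, b):
                cost += weight
    return cost

# Hypothetical track and a single wall along x = 5 between y = 0 and y = 10;
# the last step crosses the wall, so the penalty is one unit of `weight`.
track = [(1, 1), (4, 2), (6, 3)]
walls = [((5, 0), (5, 10))]
print(wall_penalty(track, walls))   # -> 1000.0
```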

    Eyes in the Sky: Decentralized Control for the Deployment of Robotic Camera Networks

    This paper presents a decentralized control strategy for positioning and orienting multiple robotic cameras to collectively monitor an environment. The cameras may have various degrees of mobility, from six degrees of freedom down to one. The control strategy is proven to locally minimize a novel metric representing information loss over the environment. It can accommodate groups of cameras with heterogeneous degrees of mobility (e.g., some that only translate and some that only rotate), and is adaptive to robotic cameras being added to or removed from the group, and to changing environmental conditions. The robotic cameras share information for their controllers over a wireless network using a specially designed multihop networking algorithm. The control strategy is demonstrated in repeated experiments with three flying quadrotor robots indoors and with five flying quadrotor robots outdoors. Simulation results for more complex scenarios are also presented.
    Funding: United States. Army Research Office, Multidisciplinary University Research Initiative, Scalable (Grant W911NF-05-1-0219); United States. Office of Naval Research, Multidisciplinary University Research Initiative, Smarts (Grant N000140911051); National Science Foundation (U.S.) (Grant EFRI-0735953); Lincoln Laboratory; Boeing Company; United States. Dept. of the Air Force (Contract FA8721-05-C-0002).
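
    A toy sketch of the decentralized flavor of such a controller: each camera repeatedly moves toward the centroid of the region it currently observes best (a Lloyd-style coverage update). Squared distance stands in for the paper's information-loss metric and 2D positions stand in for full camera poses, so this is only an assumed simplification for illustration.

```python
import numpy as np

# Toy, centrally simulated version of a decentralized coverage rule: each
# camera moves toward the centroid of the points it currently observes best.
# Distance is only a stand-in for the paper's information-loss metric, and
# 2D positions stand in for full camera poses.
rng = np.random.default_rng(0)
env_pts = rng.uniform(0, 10, size=(500, 2))   # sampled environment points
cams = rng.uniform(0, 10, size=(5, 2))        # hypothetical camera positions

for _ in range(50):
    # Assign every point to its nearest camera (its current best observer).
    dists = np.linalg.norm(env_pts[:, None, :] - cams[None, :, :], axis=2)
    owner = dists.argmin(axis=1)
    for i in range(len(cams)):
        mine = env_pts[owner == i]
        if len(mine):
            # Local rule: each camera steps toward the centroid of its region.
            cams[i] += 0.5 * (mine.mean(axis=0) - cams[i])

print(np.round(cams, 2))
```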

    Consistent labeling of tracked objects in multiple cameras with overlapping fields of view

    In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. In such a scenario, it is essential to establish correspondence between tracks of the same object seen in different cameras in order to recover complete information about the object. We call this the problem of consistent labeling of objects seen in multiple cameras. We employ a novel approach of finding the limits of the field of view (FOV) of each camera as visible in the other cameras. We show that, if the FOV lines are known, it is possible to disambiguate between multiple possibilities for correspondence. We present a method to automatically recover these lines by observing motion in the environment. Furthermore, once these lines are initialized, the homography between the views can also be recovered. We present results on indoor and outdoor sequences containing persons and vehicles.
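
    A simplified illustration of the FOV-line idea: positions recorded in camera 2 at the moments when tracked objects enter camera 1's view are fit with a line (an estimate of camera 1's FOV boundary as seen in camera 2), and a side-of-line test then rules out impossible correspondences. The coordinates and the exact recovery procedure here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fit_line(pts):
    """Least-squares line (a, b, c), a*x + b*y + c = 0, through 2D points."""
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])
    line = np.linalg.svd(A)[2][-1]
    return line / np.linalg.norm(line[:2])

# Hypothetical positions, measured in camera 2, at the instants when tracked
# objects entered camera 1's field of view.  Fitting a line to them gives an
# estimate of camera 1's FOV boundary as seen in camera 2.
entry_events_cam2 = [(50, 300), (120, 310), (260, 330), (400, 350)]
fov_line = fit_line(entry_events_cam2)

def on_visible_side(pt, line, ref_sign):
    """An object seen in camera 2 can correspond to a track in camera 1 only
    if it lies on the visible side of camera 1's FOV line."""
    return np.sign(line @ np.append(np.asarray(pt, float), 1.0)) == ref_sign

# Reference sign from a point assumed to be inside camera 1's view.
ref = np.sign(fov_line @ np.array([200.0, 100.0, 1.0]))
print(on_visible_side((220, 150), fov_line, ref),    # -> True  (candidate match)
      on_visible_side((220, 500), fov_line, ref))    # -> False (ruled out)
```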