
    Control of a PTZ camera in a hybrid vision system

    In this paper, we propose a new approach to steering a PTZ camera toward an object detected by another, fixed camera equipped with a fisheye lens. This heterogeneous association of two cameras with different characteristics is called a hybrid stereo-vision system. The presented method exploits epipolar geometry to reduce the search range for the desired region of interest. Furthermore, we propose a target recognition method designed to cope with illumination problems, the distortion of the omnidirectional image, and the inherent dissimilarity in resolution and color response between the two cameras. Experimental results with synthetic and real images show the robustness of the proposed method.
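    A minimal geometric sketch of the steering step is given below, assuming a calibrated pair: a known rotation R and translation t from the fisheye frame to the PTZ frame and an equidistant fisheye projection. The function names, calibration values, and fixed target depth are illustrative assumptions, not the paper's method; the paper instead uses epipolar geometry to narrow the search, which corresponds to sweeping the depth parameter below over a range of plausible values.

```python
import numpy as np

def fisheye_pixel_to_ray(u, v, cx, cy, f):
    """Back-project a pixel from an equidistant fisheye image (r = f * theta)
    to a unit ray in the fisheye camera frame."""
    dx, dy = u - cx, v - cy
    theta = np.hypot(dx, dy) / f          # angle from the optical axis
    phi = np.arctan2(dy, dx)              # azimuth around the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def pan_tilt_for_target(ray_fisheye, R, t, depth):
    """Place a hypothetical 3D point at 'depth' along the fisheye ray, express it
    in the PTZ frame (X_ptz = R @ X_fish + t), and return pan/tilt in degrees."""
    X = R @ (depth * ray_fisheye) + t
    pan = np.arctan2(X[0], X[2])                      # rotation about the vertical axis
    tilt = np.arctan2(-X[1], np.hypot(X[0], X[2]))    # positive = up (image y points down)
    return np.degrees(pan), np.degrees(tilt)

# Placeholder calibration: identity rotation and a 0.5 m baseline along x.
R, t = np.eye(3), np.array([0.5, 0.0, 0.0])
ray = fisheye_pixel_to_ray(820, 410, cx=640, cy=480, f=320)
print(pan_tilt_for_target(ray, R, t, depth=5.0))  # sweeping 'depth' traces the epipolar search range
```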

    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of multiple-camera calibration in the presence of a homogeneous scene, without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs, each featuring a long-focal analysis camera and a short-focal registration camera. We are thus able to propose an accurate solution that does not require intrinsic variation models, as would be the case with zooming cameras. Moreover, the simultaneous availability of the two views in each rig allows the pose between rigs to be re-estimated as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
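    The pose re-estimation between rigs can be illustrated with standard two-view geometry on the wide-field registration views. The sketch below uses OpenCV's essential-matrix pipeline on already-matched keypoints; the intrinsic matrix K and the point arrays are placeholders, and the paper's hybrid focal method is more elaborate than this generic step.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate rotation R and unit-norm translation t of camera 2 relative to
    camera 1 from matched image points (Nx2 float arrays), via the essential matrix."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

# K would come from an offline intrinsic calibration of the short-focal
# registration camera; the values below are placeholders.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
```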

    RE@CT - Immersive Production and Delivery of Interactive 3D Content

    This paper describes the aims and concepts of the FP7 RE@CT project. Building upon the latest advances in 3D capture and free-viewpoint video, RE@CT aims to revolutionise the production of realistic characters and significantly reduce costs by developing an automated process to extract and represent animated characters from actor performance capture in a multiple-camera studio. The key innovation is the development of methods for the analysis and representation of 3D video that allow reuse for real-time interactive animation. This will enable efficient authoring of interactive characters with video-quality appearance and motion.

    Robust Background Subtraction for Moving Cameras and Their Applications in Ego-Vision Systems

    Background subtraction is the algorithmic process that segments out the region of interest, often known as the foreground, from the background. Extensive literature and numerous algorithms exist in this domain, but most research has focused on videos captured by static cameras. The proliferation of portable platforms equipped with cameras has resulted in a large amount of video data being generated from moving cameras, which motivates the need for foundational algorithms for foreground/background segmentation in videos from moving cameras. In this dissertation, I propose three new types of background subtraction algorithms for moving cameras, based on appearance, motion, and a combination of the two. Comprehensive evaluation of the proposed approaches on publicly available test sequences shows the superiority of our system over state-of-the-art algorithms. The first method is an appearance-based global modeling of foreground and background. Features are extracted by sliding a fixed-size window over the entire image without any spatial constraint, in order to accommodate arbitrary camera movements. A supervised learning method is then used to build the foreground and background models. This method is suitable for limited-scene scenarios such as Pan-Tilt-Zoom surveillance cameras. The second method relies on motion. It comprises an innovative background motion approximation mechanism followed by spatial regulation through a Mega-Pixel denoising process. This work does not need to maintain any costly appearance models and is therefore appropriate for resource-constrained ego-vision systems. The proposed segmentation, combined with skin cues, is validated by a novel application that authenticates hand-gestured signatures captured by wearable cameras. The third method combines both motion and appearance: foreground probabilities are jointly estimated from motion and appearance, and after the Mega-Pixel denoising process the probability estimates and the gradient image are combined by Graph-Cut to produce the segmentation mask. This method is universal, as it can handle all types of moving cameras.
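    As a rough illustration of background subtraction under camera motion (not the dissertation's appearance models, Mega-Pixel denoising, or Graph-Cut stage), the sketch below compensates the dominant camera motion with a homography estimated from sparse optical flow and thresholds the residual difference; the threshold and feature-tracking parameters are placeholder values.

```python
import cv2
import numpy as np

def moving_camera_foreground(prev_gray, curr_gray, diff_thresh=25):
    """Register the previous frame to the current one with a global homography
    (approximating camera motion) and threshold the residual difference."""
    # Track sparse corners to estimate the dominant (background) motion.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.ravel() == 1
    H, _ = cv2.findHomography(p0[ok], p1[ok], cv2.RANSAC, 3.0)

    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))

    # Pixels that do not follow the global motion are candidate foreground.
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return cv2.medianBlur(mask, 5)  # crude spatial regularization
```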

    Design And Analysis Of Scalable Video Streaming Systems

    Despite advances in multimedia streaming technology, many multimedia applications still face major challenges, including the provision of Quality-of-Service (QoS), system scalability, limited resources, and cost. In this dissertation, we develop and analyze a new set of metrics based on two particular video streaming systems: (1) a Video-on-Demand (VOD) system with video advertisements and (2) an Automated Video Surveillance (AVS) system. We address the main issues in the design of commercial VOD systems: scalability and support for video advertisements. We develop a scalable delivery framework for streaming media content with video advertisements. The delivery framework combines the benefits of stream merging and periodic broadcasting. In addition, we propose new scheduling policies that are well suited to the proposed delivery framework. We also propose a new prediction scheme for ad viewing times, called Assign Closest Ad Completion Time (ACA). Moreover, we propose an enhanced business model in which the revenue generated from advertisements is used to subsidize the price. Additionally, we investigate support for targeted advertisements, whereby clients receive ads that are well suited to their interests and needs. Furthermore, we provide clients with the ability to select from multiple price options, each with an associated expected number of viewed ads. We provide a detailed analysis of the proposed VOD system, considering a realistic workload and a wide range of design parameters. In the second system, Automated Video Surveillance (AVS), we consider the system design for optimizing subject recognition probabilities. We focus on the management and control of multiple Pan-Tilt-Zoom (PTZ) video cameras. In particular, we develop a camera management solution that provides the best tradeoff between subject recognition probability and time complexity. We consider both subject grouping and clustering mechanisms. For subject grouping, we propose the Grid Based Grouping (GBG) and Elevator Based Planning (EBP) algorithms. For the clustering approach, we propose the GBG with Clustering (GBGC) and EBP with Clustering (EBPC) algorithms. We characterize the impact of various factors on recognition probability, including resolution, pose, and zoom-distance noise. We provide a detailed analysis of the camera management solution, considering a realistic workload and system design parameters.
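    The grid-based grouping idea can be sketched with a toy model in which subjects on a ground plane are binned into square cells and each PTZ camera is greedily assigned the densest unassigned cell. The data structures and the assignment rule are illustrative assumptions and not the dissertation's GBG, EBP, or clustering algorithms.

```python
from collections import defaultdict

def grid_based_grouping(subjects, cell_size, num_cameras):
    """Bin subject ground-plane positions (x, y) into square grid cells and
    greedily assign the most populated cells to the available PTZ cameras."""
    cells = defaultdict(list)
    for sid, (x, y) in subjects.items():
        cell = (int(x // cell_size), int(y // cell_size))
        cells[cell].append(sid)

    # Rank cells by occupancy and give each camera one cell's worth of subjects.
    ranked = sorted(cells.items(), key=lambda kv: len(kv[1]), reverse=True)
    return {cam: group for cam, (_cell, group) in enumerate(ranked[:num_cameras])}

# Example: five subjects, 2 m grid cells, two PTZ cameras.
subjects = {"s1": (1.0, 1.2), "s2": (1.5, 0.8), "s3": (6.3, 7.1),
            "s4": (6.8, 7.4), "s5": (6.5, 6.9)}
print(grid_based_grouping(subjects, cell_size=2.0, num_cameras=2))
```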

    A Photogrammetry-Based Hybrid System for Dynamic Tracking and Measurement

    Noncontact measurements of lightweight, flexible aerospace structures present several challenges. Objects are usually mounted on a test stand because current noncontact measurement techniques require that the net motion of the object be zero. However, it is often desirable to take measurements of the object under operational conditions, and in the case of miniature aerial vehicles (MAVs) and deploying space structures, the test article will undergo significant translational motion. This thesis describes a hybrid noncontact measurement system that enables measurement of the structural kinematics of an object moving freely within a volume. By using a real-time videogrammetry system, a set of pan-tilt-zoom (PTZ) cameras is coordinated to track large-scale net motion and produce high-speed, high-quality images for photogrammetric surface reconstruction. The design of the system is presented in detail. A method of generating the calibration parameters for the PTZ cameras is presented, evaluated, and shown to produce good results. Results of camera synchronization tests and a tracking accuracy evaluation are presented as well. Finally, a demonstration of the hybrid system is presented in which all four PTZ cameras track an MAV in flight.
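    The tracking step that keeps a moving target centred in a PTZ view can be illustrated with a generic visual-servoing rule (not this thesis's calibration or coordination pipeline): convert the target's pixel offset from the image centre into incremental pan and tilt commands. The focal length, gain, and sign convention (positive tilt = up) below are assumptions.

```python
import math

def centering_command(target_px, image_size, focal_px, gain=0.8):
    """Map a tracked target's pixel offset from the image centre to incremental
    pan/tilt angles (degrees) that re-centre the target in the PTZ view."""
    (u, v), (w, h) = target_px, image_size
    err_x, err_y = u - w / 2.0, v - h / 2.0
    d_pan = gain * math.degrees(math.atan2(err_x, focal_px))    # target right of centre -> pan right
    d_tilt = -gain * math.degrees(math.atan2(err_y, focal_px))  # target below centre -> tilt down
    return d_pan, d_tilt

# Target detected at (850, 300) in a 1280x720 frame, focal length ~1100 px (assumed).
print(centering_command((850, 300), (1280, 720), focal_px=1100.0))
```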

    Multi-camera Control and Video Transmission Architecture for Distributed Systems

    Proceedings of: Workshop on User-Centric Technologies and Applications (CONTEXTS 2011). The increasing number of autonomous systems monitoring and controlling visual sensor networks makes homogeneous (device-independent), flexible (accessible from various places), and efficient (real-time) access to all their underlying video devices necessary. This paper describes an architecture for camera control and video transmission in a distributed system, such as one found in a cooperative multi-agent video surveillance scenario. The proposed system enables access to a limited-access resource (the video sensors) in an easy, transparent, and efficient way for both local and remote processes. It is particularly suitable for Pan-Tilt-Zoom (PTZ) cameras, for which remote control is essential. This work was supported in part by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM CONTEXTS S2009/TIC-1485, and DPS2008-07029-C02-02.
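    A rough sketch of the device-independent access idea is given below: an abstract PTZ camera interface that local drivers and remote proxies can both implement, so client processes need not know where the device lives. The class names, methods, and HTTP endpoint layout are hypothetical and are not the paper's actual API.

```python
import urllib.parse
import urllib.request
from abc import ABC, abstractmethod

class PTZCamera(ABC):
    """Device-independent PTZ camera interface shared by local drivers and remote proxies."""

    @abstractmethod
    def move(self, pan_deg: float, tilt_deg: float, zoom: float) -> None:
        """Command an absolute pan/tilt/zoom position."""

    @abstractmethod
    def grab_frame(self) -> bytes:
        """Return the latest encoded video frame."""

class RemotePTZCamera(PTZCamera):
    """Proxy that forwards commands to a (hypothetical) camera server over HTTP."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def move(self, pan_deg: float, tilt_deg: float, zoom: float) -> None:
        query = urllib.parse.urlencode({"pan": pan_deg, "tilt": tilt_deg, "zoom": zoom})
        urllib.request.urlopen(f"{self.base_url}/ptz?{query}", timeout=2.0)

    def grab_frame(self) -> bytes:
        with urllib.request.urlopen(f"{self.base_url}/frame", timeout=2.0) as resp:
            return resp.read()
```

    A local implementation would wrap the vendor driver behind the same two methods, which is what makes access transparent to both local and remote client processes.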