
    A distributed camera system for multi-resolution surveillance

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically in tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target; this message is effected by writing demands into another database table. We show results from a real implementation of the system, comprising one static camera overlooking the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance.
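    The database-as-message-bus design is compact enough to sketch. The Python fragment below, with sqlite3 standing in for the paper's SQL server, illustrates one plausible shape of the two tables and the supervisor's dispatch step; the table names, columns, and dispatch rule are illustrative assumptions, not the paper's schema.

        import sqlite3

        conn = sqlite3.connect("surveillance.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS tracks  (target_id INTEGER, x REAL, y REAL, ts REAL);
        CREATE TABLE IF NOT EXISTS demands (camera_id INTEGER, target_id INTEGER, x REAL, y REAL);
        """)

        def supervisor_step(region):
            """Dispatch the zoom camera to any tracked target inside a region of interest."""
            x0, y0, x1, y1 = region
            rows = conn.execute(
                "SELECT target_id, x, y FROM tracks "
                "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
                (x0, x1, y0, y1)).fetchall()
            for target_id, x, y in rows:
                # Writing a row into the demands table *is* the asynchronous message:
                # the PTZ camera process polls this table for new work.
                conn.execute("INSERT INTO demands VALUES (1, ?, ?, ?)", (target_id, x, y))
            conn.commit()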

    Proprioceptive perception of phase variability

    Previous work has established that judgments of the relative phase variability of 2 visually presented oscillators covary with mean relative phase. Ninety degrees is judged to be more variable than 0° or 180°, independently of the actual level of phase variability. Judged levels of variability also increase at 180°. This pattern of judgments matches the pattern of movement coordination results. Here, participants judged the phase variability of their own finger movements, which they generated by actively tracking a manipulandum moving at 0°, 90°, or 180°, and with 1 of 4 levels of phase variability. Judgments covaried as an inverted U-shaped function of mean relative phase. With an increase in frequency, 180° was judged more variable whereas 0° was not. Higher frequency also reduced discrimination of the levels of phase variability. This matching of the proprioceptive results to the visual results, and of both to movement results, supports the hypothesized role of online perception in the coupling of limb movements. Differences between the 2 cases are discussed as due primarily to the different sensitivities of the systems to the information.
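    To make the judged quantities concrete: relative phase is the difference between the instantaneous phases of the two oscillators, and its mean and variability are naturally summarized with circular statistics. The Python sketch below computes both from two signals via the Hilbert transform; it is a generic illustration of the measure, not part of the study's behavioral methodology.

        import numpy as np
        from scipy.signal import hilbert

        def relative_phase_stats(x, y):
            """Return mean relative phase (degrees) and circular variability in [0, 1]."""
            phi = np.angle(hilbert(x)) - np.angle(hilbert(y))  # instantaneous relative phase
            mean_vec = np.mean(np.exp(1j * phi))               # circular mean resultant vector
            mean_deg = np.degrees(np.angle(mean_vec)) % 360.0
            variability = 1.0 - np.abs(mean_vec)               # 0 = perfectly phase-locked
            return mean_deg, variability

        # Example: two 1 Hz oscillators at a nominal 90-degree offset with phase noise.
        t = np.linspace(0.0, 10.0, 2000)
        x = np.sin(2 * np.pi * t)
        y = np.sin(2 * np.pi * t + np.pi / 2 + 0.2 * np.random.randn(t.size))
        print(relative_phase_stats(x, y))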

    Semi-autonomous scheme for pushing micro-objects

    In many microassembly applications, it is often desirable to position and orient polygonal micro-objects lying on a planar surface. Pushing micro-objects using point contact provides more flexibility and less complexity than pick-and-place operations. Because surface forces in the micro-world are much more dominant than inertial forces, and because these forces are distributed unevenly, pushing through the center of mass of a micro-object will not yield a purely translational motion; in order to translate a micro-object, the line of pushing should pass through its center of friction. In this paper, a semi-autonomous scheme based on hybrid vision/force feedback is proposed to push micro-objects with human assistance, using a custom-built telemicromanipulation setup, to achieve pure translational motion. The pushing operation is divided into two concurrent processes: in one, the human operator, who acts as an impedance controller, alters the velocity of the pusher while in contact with the micro-object through scaled bilateral teleoperation with force feedback; in the other, the desired line of pushing for the micro-object is determined continuously using visual feedback procedures so that it always passes through the varying center of friction. Experimental results demonstrate nanonewton-range force sensing, scaled bilateral teleoperation with force feedback, and pushing of micro-objects.
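    The geometric core of the scheme can be stated briefly: the center of friction is the support-force-weighted centroid of the contact patch, and the pusher's line of action is chosen to pass through it. The sketch below illustrates just that computation; the contact points and force weights are hypothetical stand-ins for quantities the paper estimates continuously from visual feedback.

        import numpy as np

        def center_of_friction(points, support_forces):
            """Weighted centroid of contact points (Nx2) under normal forces (N,)."""
            w = np.asarray(support_forces, dtype=float)
            return (np.asarray(points, dtype=float) * w[:, None]).sum(axis=0) / w.sum()

        def line_of_pushing(contact_point, cof):
            """Unit direction from the pusher's contact point through the center of friction."""
            d = np.asarray(cof) - np.asarray(contact_point)
            return d / np.linalg.norm(d)

        corners = [[0.0, 0.0], [10.0, 0.0], [10.0, 6.0], [0.0, 6.0]]  # object outline (um)
        cof = center_of_friction(corners, [1.0, 2.0, 2.0, 1.0])       # uneven surface forces
        print(line_of_pushing([0.0, 3.0], cof))                       # push direction for pure translation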

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and to generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system, using SQL tables as virtual communication channels and Situation Graph Trees for knowledge representation, inference, and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
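    How the database closes the loop can be sketched in a few lines. In the hypothetical Python fragment below, situation labels produced by the inference stage are mapped to PTZ commands and written back through a shared SQL table; the labels, table layout, and rule set are assumptions, and the Situation Graph Tree inference itself is not reproduced.

        import sqlite3

        RULES = {                      # situation label -> camera action (illustrative)
            "entering_zone": "zoom_on_target",
            "loitering":     "zoom_on_target",
            "walking_past":  "keep_overview",
        }

        conn = sqlite3.connect("surveillance.db")
        conn.execute("CREATE TABLE IF NOT EXISTS ptz_commands (target_id INTEGER, action TEXT)")

        def control_step(situations):
            """situations: iterable of (target_id, label) pairs from the inference stage."""
            for target_id, label in situations:
                action = RULES.get(label, "keep_overview")
                # The table acts as the virtual communication channel to the PTZ process.
                conn.execute("INSERT INTO ptz_commands VALUES (?, ?)", (target_id, action))
            conn.commit()

        control_step([(7, "loitering")])   # dispatch the PTZ camera to target 7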

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' external parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves an overall sub-millimetre accuracy (less than 0.3 millimetres) while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
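    The rigid part of such a procedure is the classic AX = XB hand-eye problem. The paper derives its own closed-form solution; as a rough stand-in, the sketch below checks OpenCV's cv2.calibrateHandEye on synthetic, noise-free pose pairs generated from a known ground truth (all poses here are simulated; none of this is the paper's data or derivation).

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)

        def rand_pose():
            R, _ = cv2.Rodrigues(rng.normal(size=3))  # random rotation from a Rodrigues vector
            return R, rng.normal(size=(3, 1))

        def inv(R, t):
            return R.T, -R.T @ t

        Rx, tx = rand_pose()   # ground-truth X = T_cam2gripper (the unknown to recover)
        Rb, tb = rand_pose()   # T_target2base, fixed in the workspace

        R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
        for _ in range(10):                # ten simulated robot poses
            Rg, tg = rand_pose()           # T_gripper2base from forward kinematics
            Rgi, tgi = inv(Rg, tg)
            Rxi, txi = inv(Rx, tx)
            # T_target2cam = X^-1 * T_gripper2base^-1 * T_target2base
            Rc = Rxi @ Rgi @ Rb
            tc = Rxi @ (Rgi @ tb + tgi) + txi
            R_g2b.append(Rg); t_g2b.append(tg)
            R_t2c.append(Rc); t_t2c.append(tc)

        R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                            method=cv2.CALIB_HAND_EYE_TSAI)
        print(np.allclose(R_est, Rx, atol=1e-6), np.allclose(t_est, tx, atol=1e-6))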

    Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments

    One of the main open challenges in visual odometry (VO) is robustness to difficult illumination conditions and high dynamic range (HDR) environments. The main difficulties in these situations come both from the limitations of the sensors and from the inability to track interest points successfully, because of the bold assumptions made in VO, such as brightness constancy. We address this problem from a deep learning perspective: we first fine-tune a Deep Neural Network (DNN) to obtain enhanced representations of the sequences for VO. We then demonstrate how the insertion of Long Short-Term Memory (LSTM) layers yields temporally consistent sequences, as each estimate depends on previous states. However, very deep networks cannot be inserted into a real-time VO framework; we therefore also propose a Convolutional Neural Network (CNN) of reduced size capable of running faster. Finally, we validate the enhanced representations by evaluating the sequences produced by the two architectures in several state-of-the-art VO algorithms, such as ORB-SLAM and DSO.
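    The reduced-size enhancement network lends itself to a small sketch. The PyTorch model below is an illustrative guess at the shape of such a network, a shallow residual mapping from a poorly exposed frame to an enhanced one; the layer sizes are assumptions and do not reproduce the paper's trained architectures.

        import torch
        import torch.nn as nn

        class TinyEnhancer(nn.Module):
            def __init__(self, ch=16):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(ch, 1, 3, padding=1),
                )

            def forward(self, x):
                # Predict a residual correction so the network only has to learn the
                # exposure/contrast change, not reconstruct the whole image.
                return torch.clamp(x + self.net(x), 0.0, 1.0)

        frame = torch.rand(1, 1, 128, 128)   # one grayscale frame scaled to [0, 1]
        enhanced = TinyEnhancer()(frame)     # same resolution, enhanced for tracking
        print(enhanced.shape)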