A distributed camera system for multi-resolution surveillance
We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor.
Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database.
Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table.
We show results from a real implementation of the system, comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance
Proprioceptive perception of phase variability
Previous work has established that judgments of relative phase variability of 2 visually presented oscillators covary with mean relative phase. Ninety degrees is judged to be more variable than 0° or 180°, independently of the actual level of phase variability. Judged levels of variability also increase at 180°. This pattern of judgments matches the pattern of movement coordination results. Here, participants judged the phase variability of their own finger movements, which they generated by actively tracking a manipulandum moving at 0°, 90°, or 180°, and with 1 of 4 levels of phase variability. Judgments covaried as an inverted U-shaped function of mean relative phase. With an increase in frequency, 180° was judged more variable whereas 0° was not. Higher frequency also reduced discrimination of the levels of phase variability. This matching of the proprioceptive and visual results, and of both to movement results, supports the hypothesized role of online perception in the coupling of limb movements. Differences in the 2 cases are discussed as due primarily to the different sensitivities of the systems to the information
Semi-autonomous scheme for pushing micro-objects
In many microassembly applications, it is often desirable to position and orient polygonal micro-objects lying on a planar surface. Pushing micro-objects via point contact offers more flexibility and less complexity than pick-and-place operations. Because surface forces in the micro-world dominate inertial forces and are distributed unevenly, pushing through the center of mass of a micro-object will not yield pure translational motion. To translate a micro-object, the line of pushing should instead pass through its center of friction. In this paper, a semi-autonomous scheme based on hybrid vision/force feedback is proposed to push micro-objects with human assistance, using a custom-built telemicromanipulation setup, to achieve pure translational motion. The pushing operation is divided into two concurrent processes: in one, the human operator, acting as an impedance controller, alters the velocity of the pusher while it is in contact with the micro-object, through scaled bilateral teleoperation with force feedback; in the other, the desired line of pushing for the micro-object is continuously determined using visual feedback so that it always passes through the (varying) center of friction. Experimental results demonstrate nanonewton-range force sensing, scaled bilateral teleoperation with force feedback, and pushing of micro-objects
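The center-of-friction argument above admits a short worked example. The sketch below is illustrative, not the authors' controller: it models the micro-object as resting on a few support points with uneven friction magnitudes, takes the center of friction as the friction-weighted centroid of those points, and checks that a push whose line passes through that point produces zero net frictional torque, hence pure translation. The contact locations and force values are made up for the demo.

```python
import numpy as np

def center_of_friction(points, friction_forces):
    """points: (n, 2) contact locations; friction_forces: (n,) magnitudes."""
    w = np.asarray(friction_forces, dtype=float)
    p = np.asarray(points, dtype=float)
    return (w[:, None] * p).sum(axis=0) / w.sum()

points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # support points (um)
forces = np.array([1.0, 3.0, 4.0])                         # friction (nN)

cof = center_of_friction(points, forces)

# Push along +x through the center of friction: each support resists with
# f_i = (-f_i, 0), so the z-torque about the cof is sum_i f_i * r_{i,y},
# which vanishes exactly because cof is the friction-weighted centroid.
r = points - cof
tau = np.sum(forces * r[:, 1])
```

A push through the center of mass would use the unweighted centroid instead, leaving a residual torque whenever the friction distribution is uneven.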
Dynamic multifactor hubs interact transiently with sites of active transcription in Drosophila embryos.
The regulation of transcription requires the coordination of numerous activities on DNA, yet how transcription factors mediate these activities remains poorly understood. Here, we use lattice light-sheet microscopy to integrate single-molecule and high-speed 4D imaging in developing Drosophila embryos to study the nuclear organization and interactions of the key transcription factors Zelda and Bicoid. In contrast to previous studies suggesting stable, cooperative binding, we show that both factors interact with DNA with surprisingly high off-rates. We find that both factors form dynamic subnuclear hubs, and that Bicoid binding is enriched within Zelda hubs. Remarkably, these hubs are both short lived and interact only transiently with sites of active Bicoid-dependent transcription. Based on our observations, we hypothesize that, beyond simply forming bridges between DNA and the transcription machinery, transcription factors can organize other proteins into hubs that transiently drive multiple activities at their gene targets.
Cognitive visual tracking and camera control
Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real-time, high-level information from an observed scene, and generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, will serve to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference and high-level camera control. A set of experiments in a surveillance scenario show the effectiveness of our approach and its potential for real applications of cognitive vision
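The high-level control loop described above, inferring a situation from scene facts and mapping it to a camera command, can be sketched with a toy tree of situation predicates. This is only a structural illustration of Situation-Graph-Tree-style refinement, not the paper's knowledge base: the node names, predicates, and commands are invented, and traversal simply specialises to the deepest situation whose predicate holds.

```python
class SituationNode:
    def __init__(self, name, predicate, command, children=()):
        self.name, self.predicate = name, predicate
        self.command, self.children = command, list(children)

def infer_command(node, scene):
    # Return the command of the most specific matching situation, or None.
    if not node.predicate(scene):
        return None
    for child in node.children:
        cmd = infer_command(child, scene)
        if cmd is not None:   # a child refines this situation
            return cmd
    return node.command

# Toy tree: a person present -> keep observing; person near the exit ->
# specialise and zoom the PTZ camera on the face.
tree = SituationNode(
    "person_present", lambda s: s["person"], ("ptz", "observe"),
    children=[SituationNode(
        "near_exit", lambda s: s["dist_to_exit"] < 2.0,
        ("ptz", "zoom_face"))])

cmd = infer_command(tree, {"person": True, "dist_to_exit": 1.2})
```

In the paper's architecture the scene facts would arrive through the SQL tables that serve as virtual communication channels, and the chosen command would be written back the same way.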
On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots
This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' external parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves sub-millimetre accuracy (below 0.3 mm) while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology
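The extrinsic-update step has a simple structure: given the head's forward kinematics T_base_head(q) and a fixed hand-eye transform T_head_cam from the rigid calibration, the camera pose in the base frame is their composition. The sketch below is a hedged illustration, not the paper's model: the 2-DOF pan-tilt kinematics and all offsets are placeholder values.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def T_base_head(pan, tilt):
    # Pan about the base z-axis, tilt about the pan frame's y-axis, plus a
    # fixed offset up to the head plate (placeholder geometry).
    return homogeneous(rot_z(pan) @ rot_y(tilt), np.array([0.0, 0.0, 1.2]))

# Fixed hand-eye transform from the rigid calibration (placeholder values).
T_head_cam = homogeneous(np.eye(3), np.array([0.05, 0.0, 0.1]))

def camera_extrinsics(pan, tilt):
    # Updated dynamically as the head moves: compose kinematics with hand-eye.
    return T_base_head(pan, tilt) @ T_head_cam

T = camera_extrinsics(0.0, 0.0)
```

Because T_head_cam is rigid, only the cheap forward-kinematics term changes per frame, which is what makes online updating of the external parameters practical.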
Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments
One of the main open challenges in visual odometry (VO) is robustness to difficult illumination conditions and high dynamic range (HDR) environments. The main difficulties in these situations arise both from the limitations of the sensors and from the failure to track interest points caused by the bold assumptions in VO, such as brightness constancy. We address this problem from a deep learning perspective: we first fine-tune a Deep Neural Network (DNN) to obtain enhanced representations of the sequences for VO. We then demonstrate how inserting Long Short-Term Memory (LSTM) layers yields temporally consistent sequences, as each estimate depends on previous states. However, very deep networks cannot be inserted into a real-time VO framework; we therefore also propose a reduced-size Convolutional Neural Network (CNN) that runs faster. Finally, we validate the enhanced representations by evaluating the sequences produced by the two architectures in several state-of-the-art VO algorithms, such as ORB-SLAM and DSO
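The brightness-constancy failure mode motivating the enhancement network can be shown numerically. The sketch below is an illustration of the underlying problem, not the paper's method: an affine exposure change (an assumed model) breaks raw SSD patch matching, while a zero-mean/unit-variance representation, the kind of photometric invariance an enhancement network can learn, matches the re-exposed patch exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.uniform(50, 200, size=(8, 8))        # reference image patch
same_scene = 2.5 * patch - 100.0                 # same patch, new exposure
other = rng.uniform(50, 200, size=(8, 8))        # genuinely different patch

def ssd(a, b):
    # Sum-of-squared-differences matching cost (assumes brightness constancy).
    return float(np.sum((a - b) ** 2))

def normalize(p):
    # Zero-mean, unit-variance patch: invariant to affine intensity changes.
    return (p - p.mean()) / p.std()

# Raw intensities: the exposure change inflates the cost of the true match,
# so this comparison may go either way.
raw_match = ssd(patch, same_scene) < ssd(patch, other)

# Normalized representation: the true match has (numerically) zero cost.
norm_match = (ssd(normalize(patch), normalize(same_scene))
              < ssd(normalize(patch), normalize(other)))
```

A learned enhancement network generalises this idea beyond global affine changes, producing representations on which trackers built around brightness constancy keep working.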