Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Real time eye tracking using Kalman extended spatio-temporal context learning
Real-time eye tracking has numerous applications in human-computer interaction, such as controlling the mouse cursor in a computer system, and it is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real-time eye tracking system. Our proposed system extends spatio-temporal context (STC) learning with Kalman filtering. STC learning offers state-of-the-art accuracy in general object tracking, but its performance suffers under object occlusion. Adding the Kalman filter allows the proposed method to model the dynamics of eye motion and to track the eye robustly through occlusions. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time with eye movements.
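To make the occlusion-handling idea concrete, here is a minimal Python/OpenCV sketch of a constant-velocity Kalman filter wrapped around a tracker. The `stc_track` callback (standing in for the spatio-temporal context tracker), its confidence output, and the noise covariances are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track_eye(frame, stc_track, confidence_threshold=0.5):
    """One tracking step: predict with the Kalman filter, then correct
    with the tracker's measurement only when the tracker is confident
    (e.g. the eye is not occluded by a blink)."""
    predicted = kf.predict()                  # prior estimate of (x, y)
    position, confidence = stc_track(frame)  # hypothetical STC tracker call
    if confidence >= confidence_threshold:
        measurement = np.array([[position[0]], [position[1]]], np.float32)
        corrected = kf.correct(measurement)  # posterior estimate
        return corrected[0, 0], corrected[1, 0]
    # Occlusion: coast on the motion model instead of accepting a bad measurement.
    return predicted[0, 0], predicted[1, 0]
```

When the tracker's confidence drops, the filter's motion model carries the estimate through the occlusion, which is the robustness the abstract describes.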
Feature-based tracking of multiple people for intelligent video surveillance
Intelligent video surveillance is the process of performing surveillance tasks automatically with a computer vision system. It involves detecting and tracking people in a video sequence and understanding their behavior. This thesis addresses the problem of detecting and tracking multiple moving people against an unknown background. We propose a feature-based tracking framework built on feature extraction and feature matching, using color, size, blob bounding box, and motion information as features of people. For matching a feature vector against temporal templates, our tracking system uses the Pearson correlation coefficient. The occlusion problem is handled by histogram backprojection. Our tracking system is fast and free from assumptions about human body structure. We implemented the system in Visual C++ with OpenCV and tested it on real-world images and videos. Experimental results suggest that our tracking system achieves good accuracy and can process video at 10-15 fps. Thesis (M.Sc.), University of Windsor (Canada), 2006.
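As a rough sketch of the matching step, the snippet below scores each detected blob's feature vector against stored temporal templates with the Pearson correlation coefficient. The feature-vector layout, the dictionary shapes, and the acceptance threshold are assumptions for illustration, not the thesis code (which was written in Visual C++/OpenCV).

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation coefficient between two feature vectors."""
    u = (u - u.mean()) / (u.std() + 1e-9)
    v = (v - v.mean()) / (v.std() + 1e-9)
    return float(np.mean(u * v))

def match_people(detections, templates, threshold=0.7):
    """Assign each detected blob (id -> feature vector of color, size,
    bounding box, and motion cues) to the stored temporal template it
    correlates with best; detections below the threshold start new tracks."""
    assignments = {}
    for det_id, det_vec in detections.items():
        scores = {tid: pearson(det_vec, tvec) for tid, tvec in templates.items()}
        best = max(scores, key=scores.get) if scores else None
        if best is not None and scores[best] >= threshold:
            assignments[det_id] = best
        else:
            assignments[det_id] = None  # unmatched: candidate new track
    return assignments
```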
DroTrack: High-speed Drone-based Object Tracking Under Uncertainty
We present DroTrack, a high-speed visual single-object tracking framework for drone-captured video sequences. Most existing object tracking methods are designed to tackle well-known challenges such as occlusion and cluttered backgrounds. The complex motion of drones, i.e., multiple degrees of freedom in three-dimensional space, introduces high uncertainty, which leads to inaccurate location predictions and fuzziness in scale estimation. DroTrack addresses these issues by discovering the dependency between object representation and motion geometry. We implement an effective object segmentation based on Fuzzy C-Means (FCM), incorporating spatial information into the membership function to cluster the most discriminative segments, and then refine the segmentation using a pre-trained Convolutional Neural Network (CNN) model. DroTrack also leverages the geometrical angular motion to estimate a reliable object scale. We discuss the experimental results and performance evaluation using two datasets totalling 51,462 drone-captured frames. The combination of the FCM segmentation and the angular scaling increased DroTrack precision by up to … and decreased the centre location error by … pixels on average. DroTrack outperforms all the high-speed trackers and achieves results comparable to deep-learning trackers. DroTrack offers high frame rates, up to 1,000 frames per second (fps), with better location precision than a set of state-of-the-art real-time trackers.
Comment: 10 pages, 12 figures, FUZZ-IEEE 2020
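For reference, a bare-bones fuzzy C-means in NumPy is sketched below. DroTrack's exact spatially-informed membership function is not reproduced here; appending scaled (x, y) coordinates to each pixel's colour feature, shown in the commented usage, is one simple stand-in for incorporating spatial information into the clustering.

```python
import numpy as np

def fcm(features, n_clusters=2, m=2.0, n_iter=50, eps=1e-9):
    """Fuzzy C-Means: returns soft memberships u (n_pixels x n_clusters)
    and cluster centers. m > 1 controls the fuzziness of the partition."""
    n = features.shape[0]
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_clusters), size=n)  # membership rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ features) / (um.sum(axis=0)[:, None] + eps)
        # Distance of every pixel to every center, then the standard FCM
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2) + eps
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# Assumed feature construction: colour plus scaled pixel coordinates,
# so spatially close pixels tend to land in the same cluster.
# h, w = frame.shape[:2]
# ys, xs = np.mgrid[0:h, 0:w]
# features = np.column_stack([frame.reshape(-1, 3).astype(float),
#                             0.5 * xs.reshape(-1, 1), 0.5 * ys.reshape(-1, 1)])
```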
Data Fusion for Vision-Based Robotic Platform Navigation
Data fusion has become an active research topic in recent years. Growing computational performance has allowed the use of redundant sensors to measure a single phenomenon. While Bayesian fusion approaches are common in general applications, the computer vision community has largely set this approach aside: most object-following algorithms have moved towards pure machine learning fusion techniques, which tend to lack flexibility. Consequently, a more general data fusion scheme is needed. The motivation for this work is to propose methods that allow for the development of simple, cost-effective, yet robust visual following robots capable of tracking a general object with few restrictions on target characteristics. With that purpose in mind, this work proposes a hierarchical adaptive Bayesian fusion approach that outperforms individual trackers by using redundant measurements. The adaptive framework relies on each measurement's local statistics and a global softened majority vote. Several approaches for target-following robots have been proposed in recent years; however, many require several expensive sensors, and much of the image processing and other computation is often performed independently. In the proposed approach, objects are detected by several state-of-the-art vision-based tracking algorithms, whose measurements are then filtered and fused within a Bayesian framework to generate the robot control commands. Target scale variations and, on one of the platforms, a time-of-flight (ToF) depth camera are used to determine the relative distance between the target and the robotic platform. The algorithms execute in real time (approximately 30 fps). The proposed approaches were validated in a simulated application and on several robotic platforms: a stationary pan-tilt system, a small unmanned air vehicle, and a ground robot with a Jetson TK1 embedded computer. Experiments were conducted with different target objects to validate the system in scenarios including occlusions and varying illumination conditions, and to show how the data fusion improves the overall robustness of the system.
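A minimal sketch of the fusion idea follows, assuming each tracker reports a 2-D position and a local variance estimate. The Gaussian soft vote against the median consensus is one plausible reading of "softened majority voting", not the dissertation's exact formulation; `vote_sigma` and the inverse-variance weighting are assumptions.

```python
import numpy as np

def fuse(measurements, variances, vote_sigma=20.0):
    """Fuse redundant position measurements, one per tracker.
    Each tracker is weighted by the inverse of its local variance and by
    a softened majority vote: agreement with the median of all trackers,
    scored with a Gaussian kernel, so outliers lose influence smoothly."""
    z = np.asarray(measurements, dtype=float)   # shape (n_trackers, 2)
    var = np.asarray(variances, dtype=float)    # shape (n_trackers,)
    consensus = np.median(z, axis=0)            # robust consensus position
    votes = np.exp(-np.sum((z - consensus) ** 2, axis=1) / (2 * vote_sigma ** 2))
    w = votes / var                             # soft vote x inverse variance
    w /= w.sum()
    return (w[:, None] * z).sum(axis=0)         # fused (x, y) estimate
```

A tracker that drifts away from the consensus, or whose local statistics become noisy, is down-weighted smoothly rather than discarded outright, which is what lets the fused estimate outperform any individual tracker.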
Automated tracking and grasping of a moving object with a robotic hand-eye system
An attempt to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm with a gripper that can pick up a moving object is described. The interplay of hand-eye coordination in dynamic grasping tasks, such as grasping parts on a moving conveyor, assembling articulated parts, or grasping from a mobile robotic system, is explored. The goal is to build an integrated sensing and actuation system that can operate in dynamic as opposed to static environments. The system addresses three distinct problems in using robotic hand-eye coordination to grasp moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robot arm to track a moving object, and interception and grasping. The system operates at approximately human arm movement rates. Experimental results are presented in which a moving model train is tracked, stably grasped, and picked up by the system. The algorithms developed to relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks.
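The prediction step that lets the arm intercept the object rather than chase it can be illustrated with a least-squares constant-velocity fit. The function below is a hypothetical simplification of the vision-to-actuation pipeline: `positions` stands in for recent 3-D estimates from the vision system, and `arm_delay` for the arm's actuation latency.

```python
import numpy as np

def intercept_point(positions, timestamps, arm_delay):
    """Fit a constant-velocity model to recent 3-D object positions and
    predict where the object will be after the arm's actuation delay,
    so the gripper is sent to the interception point."""
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions, dtype=float)        # shape (n, 3)
    # Per-axis least-squares line fit: p(t) ~ p0 + v * t
    A = np.vstack([np.ones_like(t), t]).T
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)  # rows: [p0; v]
    p0, v = coef[0], coef[1]
    t_hit = t[-1] + arm_delay
    return p0 + v * t_hit                          # predicted 3-D intercept
```

Fitting over a short window of recent samples, rather than differencing the last two, damps vision noise, which matters when the whole loop must run at human arm movement rates.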