    An Efficient Multiple Object Vision Tracking System using Bipartite Graph Matching

    For application domains such as the 11 vs. 11 robot soccer league, crowd surveillance, and air traffic control, vision systems must identify and maintain information in real time about multiple objects as they move through an environment, using video images. In this paper, we reduce the multi-object tracking problem to bipartite graph matching and present efficient techniques that compute the optimal matching in real time. We demonstrate the robustness of our system on a task of tracking indistinguishable objects. One advantage of our tracking system is that it requires a much lower frame rate than standard tracking systems to reliably keep track of multiple objects.
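
    A minimal sketch of the detection-to-track assignment described above, assuming Euclidean distance between each track's last known position and each new detection as the edge cost (the paper's actual cost function and matching routine are not stated here), with the optimal bipartite matching computed by SciPy's Hungarian-algorithm solver:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def match_tracks_to_detections(tracks, detections):
            """Assign new detections to existing tracks by solving the
            bipartite matching that minimizes total Euclidean distance."""
            # Cost matrix: cost[i, j] = distance from track i to detection j
            cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
            rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm, O(n^3)
            return list(zip(rows.tolist(), cols.tolist()))

        # Two indistinguishable objects between consecutive frames
        tracks = np.array([[0.0, 0.0], [5.0, 5.0]])       # last known positions
        detections = np.array([[5.2, 4.9], [0.1, -0.2]])  # new frame, unordered
        print(match_tracks_to_detections(tracks, detections))  # [(0, 1), (1, 0)]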

    Object tracking with stereo vision

    A real-time active stereo vision system incorporating gaze control and task-directed vision is described. Emphasis is placed on object tracking and on determining object size and shape. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
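
    The depth-tracking technique above rests on stereo triangulation: for a rectified camera pair, depth is Z = f*B/d, where f is the focal length in pixels, B the baseline, and d the horizontal disparity. A minimal sketch with illustrative camera parameters (not taken from the paper):

        def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
            """Triangulate depth from horizontal disparity in a rectified
            stereo pair: Z = f * B / d."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive for a visible point")
            return focal_px * baseline_m / disparity_px

        # A point with 14 px disparity lies at 700 * 0.12 / 14 = 6.0 m
        print(depth_from_disparity(14.0))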

    Computer hardware and software for robotic control

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor-based, real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed-loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall system.
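
    A minimal sketch of the "intelligent peripheral" pattern described above, assuming a uniform command/status interface between the supervisory computer and its subsystems; all class and method names are hypothetical, as the KSC system's actual protocol is not given:

        from abc import ABC, abstractmethod

        class Peripheral(ABC):
            """A smart subsystem driven by the supervisory computer
            through a uniform command/status interface."""
            @abstractmethod
            def command(self, cmd: str) -> None: ...
            @abstractmethod
            def status(self) -> dict: ...

        class VisionTracker(Peripheral):
            def command(self, cmd: str) -> None:
                print(f"vision <- {cmd}")
            def status(self) -> dict:
                return {"target_pose": (1.0, 2.0, 0.5)}

        class ArmController(Peripheral):
            def command(self, cmd: str) -> None:
                print(f"arm <- {cmd}")
            def status(self) -> dict:
                return {"joints_ok": True}

        # Supervisory loop: read the vision peripheral, close the path-control loop
        peripherals = {"vision": VisionTracker(), "arm": ArmController()}
        pose = peripherals["vision"].status()["target_pose"]
        peripherals["arm"].command(f"move_to {pose}")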

    Summary report: A preliminary investigation into the use of fuzzy logic for the control of redundant manipulators

    The Rice University Department of Mechanical Engineering and Materials Sciences' Robotics Group designed and built an eight-degree-of-freedom redundant manipulator. Fuzzy logic was proposed as a control scheme for tasks not directly controlled by a human operator. In preliminary work, fuzzy logic control was implemented for a camera tracking system and a six-degree-of-freedom manipulator. Both preliminary systems use real-time vision data as input to fuzzy controllers. Related projects include the integration of tactile sensing and fuzzy control of a redundant snake-like arm that is under construction.
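
    A minimal sketch of a fuzzy controller of the kind described for the camera tracking system, assuming horizontal pixel error of the target from image center as the input and camera pan rate as the output; the membership functions, rule base, and gains are illustrative, not those used by the Rice group:

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_pan_rate(err_px):
            """Map horizontal pixel error to a pan rate (deg/s) with three
            rules and weighted-average (centroid) defuzzification."""
            # Rules: IF error is Left/Zero/Right THEN pan Negative/Zero/Positive
            mu = {
                "left":  tri(err_px, -320, -160, 0),
                "zero":  tri(err_px, -160, 0, 160),
                "right": tri(err_px, 0, 160, 320),
            }
            out = {"left": -10.0, "zero": 0.0, "right": 10.0}  # singleton outputs
            num = sum(mu[k] * out[k] for k in mu)
            den = sum(mu.values()) or 1.0
            return num / den

        print(fuzzy_pan_rate(80.0))  # target half-right of center -> 5.0 deg/s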

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and to generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference, and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
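
    A minimal sketch of using an SQL table as a virtual communication channel, as described above: one process appends tracker observations as rows, and the camera-control process polls for unseen rows and turns them into PTZ commands. The schema and the SQLite back end are assumptions; the paper's actual tables and DBMS are not specified:

        import sqlite3

        # In-memory stand-in for the shared database
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE detections
                      (id INTEGER PRIMARY KEY, cam INTEGER, x REAL, y REAL)""")

        # Producer side: a tracker writes an observation into the channel
        db.execute("INSERT INTO detections (cam, x, y) VALUES (?, ?, ?)",
                   (1, 0.4, 0.7))
        db.commit()

        # Consumer side: the camera controller polls for rows it has not seen
        last_seen = 0
        for rowid, cam, x, y in db.execute(
                "SELECT id, cam, x, y FROM detections WHERE id > ?", (last_seen,)):
            last_seen = rowid
            print(f"PTZ command for cam {cam}: center on ({x}, {y})")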

    Maximizing the Use of Computational Resources in Multi-Camera Feedback Control

    In vision-based feedback control systems, the time to obtain sensor information is usually non-negligible, and these systems thereby possess fundamentally different timing behavior compared to standard real-time control applications. For many image-based tracking algorithms, however, it is possible to trade off computational time against the accuracy of the produced position/orientation estimates. This paper presents a method for optimizing the use of computational resources in a multi-camera-based positioning system. A simplified equation for the covariance of the position estimation error is calculated, which depends on the set of cameras used and the number of edge detection points in each image. An efficient algorithm for selecting a suitable subset of the available cameras is presented, which attempts to minimize the estimation covariance given a desired, pre-specified maximum input-output latency of the feedback control loop. Simulations have been performed that capture the real-time properties of the vision-based tracking algorithm and the effects of the timing on the performance of the control system. The suggested strategy has been compared with heuristic algorithms, and it obtains large improvements in estimation accuracy and performance for objects both in free motion and under closed-loop position control.
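
    A minimal sketch of camera-subset selection under a latency budget, using a greedy heuristic and a scalar proxy for estimation covariance (variance falling as per-camera "information" accumulates); the paper's covariance equation and selection algorithm are simplified away here:

        def select_cameras(cameras, latency_budget):
            """Greedily add the camera with the best information-per-time
            ratio while total processing time stays within the desired
            input-output latency. Cost model is illustrative."""
            chosen, info, spent = [], 0.0, 0.0
            remaining = list(cameras)
            while remaining:
                feasible = [c for c in remaining
                            if spent + c["time"] <= latency_budget]
                if not feasible:
                    break
                best = max(feasible, key=lambda c: c["info"] / c["time"])
                chosen.append(best["name"])
                info += best["info"]
                spent += best["time"]
                remaining.remove(best)
            variance = 1.0 / info if info > 0 else float("inf")
            return chosen, variance, spent

        cams = [{"name": "A", "info": 4.0, "time": 8.0},
                {"name": "B", "info": 3.0, "time": 3.0},
                {"name": "C", "info": 1.0, "time": 2.0}]
        print(select_cameras(cams, latency_budget=12.0))  # picks B then A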

    Mouse Simulation Using Two Coloured Tapes

    In this paper, we present a novel approach for Human Computer Interaction (HCI) in which we control cursor movement using a real-time camera. Current methods involve changing mouse hardware, such as adding more buttons or repositioning the tracking ball. Instead, our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to perform mouse tasks (left- and right-clicking, double-clicking, and scrolling), and we show that it can do everything current mouse devices can. The software is developed in Java. Recognition and pose estimation in this system are user independent and robust, because we use coloured tapes on our fingers to perform actions. The software can be used as an intuitive input interface for applications that require multi-dimensional control, e.g., computer games.
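
    Although the paper's software is written in Java, the colour-segmentation step can be sketched compactly with OpenCV in Python; the red HSV thresholds below are illustrative, and the clicking logic is only hinted at in a comment:

        import cv2
        import numpy as np

        def tape_centroid(frame_bgr):
            """Segment a red tape marker by HSV thresholding and return the
            centroid of the largest blob, or None if nothing is found."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, np.array([0, 120, 80]),
                               np.array([10, 255, 255]))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

        # The centroid, scaled to screen coordinates, would drive the cursor;
        # a second tape colour could signal clicks when the two blobs touch.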

    DroneTrack: Cloud-Based Real-Time Object Tracking Using Unmanned Aerial Vehicles Over the Internet

    Low-cost drones represent an emerging technology that opens the horizon for new smart Internet-of-Things (IoT) applications. Recent research efforts in cloud robotics are pushing for the integration of low-cost robots and drones with the cloud and the IoT. However, the performance of real-time cloud robotics systems remains a fundamental challenge that demands further investigation. In this paper, we present DroneTrack, a real-time object tracking system in which a drone follows a moving object over the Internet. DroneTrack leverages Dronemap Planner (DP), a cloud-based system for the control, communication, and management of drones over the Internet. The main contributions of this paper are (1) the development and deployment of DroneTrack, a real-time object tracking application built on the DP cloud platform, and (2) a comprehensive experimental study of the real-time performance of the tracking application. We note that the tracking does not rely on computer vision techniques; rather, it is based on the exchange of GPS locations through the cloud. Three scenarios are used for conducting various experiments with real and simulated drones. The experimental study demonstrates the effectiveness of the DroneTrack system, and a tracking accuracy of 3.5 meters on average is achieved with slow-moving targets.
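
    A minimal sketch of the GPS-based following logic implied above: compute the great-circle distance between the drone's and the target's latest fixes and, if the target has moved beyond a standoff distance, issue its location as the next waypoint. The function names and standoff policy are assumptions; the actual Dronemap Planner API is not shown:

        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in meters between two GPS fixes."""
            r = 6371000.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * r * math.asin(math.sqrt(a))

        def follow_step(drone_fix, target_fix, standoff_m=5.0):
            """If the target has moved beyond the standoff distance, command
            the drone to the target's latest fix; otherwise hold position."""
            d = haversine_m(*drone_fix, *target_fix)
            if d > standoff_m:
                return ("GOTO", target_fix)  # would be sent to the cloud planner
            return ("HOLD", drone_fix)

        print(follow_step((24.7136, 46.6753), (24.7140, 46.6753)))  # ~44 m -> GOTO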

    Hand-eye coordination for grasping moving objects

    Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for grasping a moving object. This task requires proper coordination among at least three separate subsystems: dynamic vision sensing, real-time arm control, and grasp control. As with humans, our system first visually tracks the object's 3-D position. Because the object is in motion, tracking must be done in a dynamic manner to coordinate the motion of the robotic arm as it follows the object. The dynamic vision system feeds a real-time arm control algorithm that plans a trajectory. The arm control algorithm is implemented in two steps: 1) filtering and prediction, and 2) kinematic transformation computation. Once the trajectory of the object is tracked, the hand must intercept the object to actually grasp it. We present three different strategies for intercepting the object, along with results from the tracking algorithm.
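
    The filtering-and-prediction step can be illustrated with an alpha-beta filter that smooths noisy vision measurements and maintains the velocity estimate needed to extrapolate an intercept point; the paper does not name its filter, and the gains below are illustrative:

        def alpha_beta_step(x_est, v_est, z, dt, alpha=0.85, beta=0.005):
            """One step of an alpha-beta filter: predict the object's next
            position from the current estimate, then correct with the new
            vision measurement z."""
            x_pred = x_est + v_est * dt      # prediction
            r = z - x_pred                   # innovation (residual)
            x_new = x_pred + alpha * r       # corrected position
            v_new = v_est + (beta / dt) * r  # corrected velocity
            return x_new, v_new

        # Feed 30 Hz vision samples of an object moving at 1 m/s
        x, v, dt = 0.0, 0.0, 1 / 30
        for k in range(1, 6):
            x, v = alpha_beta_step(x, v, z=k * dt * 1.0, dt=dt)
        # Extrapolated intercept point a lookahead T = 0.5 s ahead
        print(x + v * 0.5)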