
    Reactive Motions In A Fully Autonomous CRS Catalyst 5 Robotic Arm Based On RGBD Data

    This study proposes a method to estimate velocity from motion blur in a single image frame along the x and y axes of the camera coordinate system and to intercept a moving object with a robotic arm. It is shown that velocity estimation from a single image frame improves the system's performance. The majority of previous studies in this area require at least two image frames to measure the target's velocity, and most employ specialized equipment able to generate high torques and accelerations. The setup consists of a 5-degree-of-freedom robotic arm and a Kinect camera. The RGBD (Red, Green, Blue and Depth) camera provides the RGB and depth information used to detect the position of the target. As the object moves within a single image frame, the image contains motion blur. To recognize and differentiate the object from the blurred area, the image intensity profiles are studied; accordingly, the method determines the blur parameters, namely the length of the object and the length of the partial blur, from changes in the intensity profile. Based on the motion blur, the velocities along the x and y camera coordinate axes are estimated. However, as the depth frame cannot record motion blur, the velocity along the z axis of the camera coordinate frame is initially unknown. The position and velocity vectors are transformed into the world coordinate frame and, subsequently, the prospective position of the object after a predefined time interval is predicted. To intercept, the end-effector of the robotic arm must reach this predicted position within the same time interval. For the end-effector to reach the predicted position within the predefined time interval, the robot's joint angles and accelerations are determined through inverse kinematics, and the robotic arm then starts its motion. Once the second depth frame is obtained, the object's velocity along the z axis can be calculated as well; the predicted position of the object is then recalculated and the motion of the manipulator is modified. The proposed method is compared with existing methods that need at least two image frames to estimate the velocity of the target. It is shown that under identical kinematic conditions, the functionality of the system is improved by … times for our setup. In addition, the experiment is repeated … times and the velocity data is recorded. According to the experimental results, there are two major limitations in our system and setup. The system cannot determine the velocity along the z axis of the camera coordinate system from the initial image frame; consequently, if the object travels faster along this axis, the system becomes more susceptible to failure. In addition, our manipulator is unspecialized equipment not designed to produce high torques and accelerations, which makes the task more challenging. The main cause of error in the experiments was the operator's throw: the object must pass through the working volume of the robot and must still be inside the working volume after the predefined time interval. It is possible for the operator to throw the object into the designated working volume but for it to leave earlier than the specified time interval.
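    As an illustration of the blur-to-velocity idea described above, here is a minimal Python sketch under a pinhole-camera assumption; the function names, focal length, and exposure time are hypothetical values for illustration, not numbers from the paper.

```python
import numpy as np

def velocity_from_blur(blur_len_px, depth_m, focal_px, exposure_s):
    """Estimate in-plane speed (m/s) from the partial-blur length in one frame.

    Pinhole model: a point at depth Z moving d metres during the exposure
    smears across roughly f * d / Z pixels, so d = blur_len_px * Z / f.
    """
    displacement_m = blur_len_px * depth_m / focal_px
    return displacement_m / exposure_s

def predict_position(p0, v, dt):
    """Constant-velocity prediction of the interception point after dt seconds."""
    return np.asarray(p0, dtype=float) + np.asarray(v, dtype=float) * dt

# Hypothetical numbers: 18 px of blur at 1.2 m depth, f = 525 px, 33 ms exposure.
vx = velocity_from_blur(18, 1.2, 525.0, 0.033)
print(f"estimated x-velocity: {vx:.2f} m/s")
```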

    Experiments in cooperative manipulation: A system perspective

    In addition to cooperative dynamic control, the system incorporates real-time vision feedback, a novel programming technique, and a graphical high-level user interface. By focusing on the vertical integration problem, not only are these subsystems examined, but also their interfaces and interactions. The control system implements a multi-level hierarchical structure; the techniques developed for operator input, strategic command, and cooperative dynamic control are presented. At the highest level, a mouse-based graphical user interface allows an operator to direct the activities of the system. Strategic command is provided by a table-driven finite state machine; this methodology provides a powerful yet flexible technique for managing the concurrent system interactions. The dynamic controller implements object impedance control, an extension of Neville Hogan's impedance control concept to cooperative arm manipulation of a single object. Experimental results are presented, showing the system locating and identifying a moving object, catching it, and performing a simple cooperative assembly. Results from dynamic control experiments are also presented, showing the controller's excellent dynamic trajectory tracking performance while also permitting control of environmental contact force.
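    For readers unfamiliar with impedance control, the following single-axis Python sketch shows the basic law Hogan's concept builds on; the gains and scalar formulation are illustrative only, not the paper's cooperative object-level controller.

```python
def impedance_force(x, xd, x_des, xd_des, xdd_des, M=1.0, B=20.0, K=100.0):
    """Single-axis impedance law: command a force so the object behaves like a
    virtual mass-spring-damper about the desired trajectory.
    M, B, K are illustrative virtual inertia, damping and stiffness."""
    return M * xdd_des + B * (xd_des - xd) + K * (x_des - x)

# Hypothetical state: object 5 cm behind the desired position, at rest.
f_cmd = impedance_force(x=0.0, xd=0.0, x_des=0.05, xd_des=0.0, xdd_des=0.0)
print(f"commanded force: {f_cmd:.1f} N")  # pure spring action here: 5.0 N
```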

    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have evolved a long way from huge vacuum-tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analyzing algorithms. Our work is one such improvement over previous algorithms developed for the purpose of object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently due to inherent assumptions that attribute values remain constant. Our approach assumes a variable environment in which the attribute values recorded for an object are prone to variability. The variations in the accuracy of object attribute values have been addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location. This ensures that attribute values with higher accuracy are accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) were also addressed by applying error corrections such as shadow elimination to the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with our multi-dimensional approach than with a single-attribute-based comparison.
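    A minimal sketch of weighted multi-attribute signature matching along the lines described above; the attribute names, distance metric, and weights are hypothetical, chosen only to show how per-sensor confidence weights can downweight unreliable attributes.

```python
import numpy as np

def signature_distance(sig_a, sig_b, weights):
    """Weighted distance between two multi-attribute object signatures.

    sig_a, sig_b: dicts mapping attribute name -> feature vector.
    weights: attribute name -> confidence weight set from local sensor
    conditions, so unreliable attributes contribute less to the match.
    """
    total, wsum = 0.0, 0.0
    for name, w in weights.items():
        d = np.linalg.norm(np.asarray(sig_a[name]) - np.asarray(sig_b[name]))
        total += w * d
        wsum += w
    return total / wsum

# Hypothetical signatures; colour is downweighted at a shadowed sensor site.
sig1 = {"colour": [0.20, 0.50, 0.30], "size": [140.0], "aspect": [0.45]}
sig2 = {"colour": [0.25, 0.45, 0.30], "size": [150.0], "aspect": [0.50]}
w    = {"colour": 0.3, "size": 1.0, "aspect": 1.0}
print(signature_distance(sig1, sig2, w))
```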

    Particle Filters for Colour-Based Face Tracking Under Varying Illumination

    Automatic human face tracking is the basis of robotic and active vision systems used for facial feature analysis, automatic surveillance, video conferencing, intelligent transportation, human-computer interaction and many other applications. Superior human face tracking will allow future safety surveillance systems that monitor drowsy drivers, or patients and elderly people at risk of seizure or sudden falls, and will perform with lower risk of failure in unexpected situations. This area has been actively researched in the current literature in an attempt to make automatic face trackers more stable in challenging real-world environments. To detect faces in video sequences, features like colour, texture, intensity, shape or motion are used. Among these features, colour has been the most popular because of its insensitivity to orientation and size changes and its fast processability. The challenge for colour-based face trackers, however, has been the instability of trackers when colours change due to drastic variation in environmental illumination. Probabilistic tracking and the employment of particle filters as powerful Bayesian stochastic estimators, on the other hand, are increasingly used in the visual tracking field thanks to their ability to handle multi-modal distributions in cluttered scenes. Traditional particle filters utilize the transition prior as the importance sampling function, but this can result in poor posterior sampling. The objective of this research is to investigate and propose a stable face tracker capable of dealing with challenges like rapid and random head motion, scale changes as people move closer to or further from the camera, motion of multiple people with close skin tones in the vicinity of the model person, presence of clutter and occlusion of the face. The main focus has been on investigating an efficient method to address the sensitivity of colour-based trackers to gradual or drastic illumination variations. The particle filter is used to overcome the instability of face trackers due to nonlinear and random head motions. To increase the traditional particle filter's sampling efficiency, an improved version of the particle filter is introduced that considers the latest measurements. This improved particle filter employs a new colour-based bottom-up approach that leads particles to generate an effective proposal distribution. The colour-based bottom-up approach is a classification technique for fast skin colour segmentation. This method is independent of the distribution shape and does not require excessive memory storage or exhaustive prior training. Finally, to make the colour-based face tracker adaptive to illumination changes, an original likelihood model is proposed based on spatial rank information that considers both the illumination-invariant colour ordering of a face's pixels in an image or video frame and the spatial interaction between them. The original contribution of this work lies in the unique mixture of existing and proposed components to improve colour-based recognition and tracking of faces in complex scenes, especially where drastic illumination changes occur. Experimental results of the final version of the proposed face tracker, which combines the methods developed, are provided in the last chapter of this manuscript.
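    The following is a minimal bootstrap particle filter step in Python illustrating the predict-weight-resample cycle the abstract refers to; the random-walk motion model and the skin-colour likelihood callable are placeholders, and the thesis's improvement (a colour-driven proposal using the latest measurement) is noted in the docstring but not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, colour_likelihood, motion_std=5.0):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles: (N, 2) array of candidate face positions (x, y).
    colour_likelihood: callable scoring a position against the skin-colour
    observation model of the current frame.  The transition prior used here
    is a plain random walk; the improved filter replaces it with a
    colour-driven proposal that incorporates the latest measurement.
    """
    # Predict: diffuse particles under a random-walk transition prior.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: score each particle with the colour observation model.
    weights = np.array([colour_likelihood(p) for p in particles]) + 1e-12
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```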

    Stereo Vision in Smart Camera Networks


    Modeling Off-the-Shelf Pan/Tilt Cameras for Active Vision Systems

    There are many existing multicamera systems that perform object identification and tracking. Some applications include, but are not limited to, security surveillance and smart rooms. Yet there is still much work to be done in improving such systems to achieve a high level of automation while obtaining reasonable performance. Thus far, the design and implementation of these systems has been done using heuristic methods, primarily due to the complexity of the problem. Most importantly, the performance of these systems is assessed by evaluating subjective quantities. The goal of this work is to take the first step in the structured analysis and design of multicamera systems, that is, to introduce a model of a single camera with associated image processing algorithms capable of tracking a target. A single camera model is developed such that it can easily be used as a building block for a multicamera system.
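    As a sketch of what a simple pan/tilt camera model can look like, the following Python function maps a target's pixel offset to pan/tilt increments under a pinhole assumption; it deliberately ignores the offset between the optical centre and the rotation axes, which the kind of model this work develops must account for. All names and numbers are hypothetical.

```python
import math

def pan_tilt_increment(u, v, cx, cy, f_px):
    """Pan/tilt increments (radians) that bring pixel (u, v) to the image
    centre (cx, cy) under a pinhole model with focal length f_px in pixels.
    Deliberately ignores the offset between the optical centre and the
    pan/tilt rotation axes, which a full camera model must calibrate."""
    d_pan = math.atan2(u - cx, f_px)
    d_tilt = math.atan2(v - cy, f_px)
    return d_pan, d_tilt

# Hypothetical target at pixel (400, 300) in a 640x480 image, f = 500 px.
print(pan_tilt_increment(400, 300, 320, 240, 500.0))
```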

    Identification and tracking of marine objects for collision risk estimation.

    With the advent of modern high-speed passenger ferries and the general increase in maritime traffic, both commercial and recreational, marine safety is becoming an increasingly important issue. From lightweight catamarans and fishing trawlers to container ships and cruise liners, one question remains the same: is anything in the way? This question is addressed in this thesis. Through the use of image processing techniques applied to video sequences of maritime scenes, the images are segmented into two regions, sea and object. This is achieved using statistical measures taken from the histogram data of the images. Each segmented object has a feature vector built containing information including its size and previous centroid positions. The feature vectors are used to track the identified objects across many frames. With information recorded about an object's previous motion, its future motion is predicted using a least squares method. Finally, a high-level rule-based algorithm is applied in order to estimate the collision risk posed by each object present in the image. The result is an image with the objects identified by a white box placed around them; the predicted motion is shown and the estimated collision risk posed by each object is displayed. The algorithms developed in this work have been evaluated using two previously unseen maritime image sequences. These show that the algorithms developed here can be used to estimate the collision risk posed by maritime objects.
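    A minimal Python sketch of the least-squares motion prediction step described above, assuming a constant-velocity (first-order) fit to the recorded centroid positions; the sample track values are invented for illustration.

```python
import numpy as np

def predict_centroid(times, xs, ys, t_future):
    """Fit first-order least-squares models to a tracked object's centroid
    history and extrapolate its position at t_future (constant course)."""
    fx = np.polyfit(times, xs, 1)  # x(t) = a*t + b
    fy = np.polyfit(times, ys, 1)
    return np.polyval(fx, t_future), np.polyval(fy, t_future)

# Invented centroid track over four frames, one second apart.
t = [0.0, 1.0, 2.0, 3.0]
x = [100, 108, 117, 125]
y = [50, 50, 51, 51]
print(predict_centroid(t, x, y, 5.0))
```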

    Occlusion handling in multiple people tracking

    Object tracking with occlusion handling is a challenging problem in automated video surveillance. Occlusion handling and tracking have traditionally been considered separate modules. We have proposed an automated video surveillance system which automatically detects occlusions and performs occlusion handling, while the tracker continues to track the resulting separated objects. A new approach based on sub-blobbing is presented for tracking objects accurately and steadily when the target encounters occlusion in video sequences. We have used a feature-based framework for tracking, which involves feature extraction and feature matching.
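    A hypothetical sketch of how sub-blob features might be matched back to existing tracks during an occlusion; the greedy nearest-neighbour rule and threshold are assumptions for illustration, not the paper's actual matching algorithm.

```python
import numpy as np

def match_subblobs(subblob_feats, track_feats, threshold=0.5):
    """Greedily assign each sub-blob of a merged (occluding) blob to the
    nearest tracked object by feature-vector distance; sub-blobs farther
    than the threshold from every track are left unassigned."""
    assignments = {}
    for i, f in enumerate(subblob_feats):
        dists = [np.linalg.norm(np.asarray(f) - np.asarray(t)) for t in track_feats]
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            assignments[i] = j  # sub-blob i continues track j
    return assignments
```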

    Object Manipulation using a Multirobot Cluster with Force Sensing

    This research explored object manipulation using multiple robots by developing a control system utilizing force sensing. Multirobot solutions provide the advantages of redundancy, greater coverage, fault tolerance, distributed sensing and actuation, and reconfigurability. In object manipulation, a variety of solutions have been explored with different robot types and numbers, control strategies, sensors, etc. This research integrated force sensing with a centralized position control method for two robots (cluster control) and built it into an object-level controller. This controller commands the robots to push the object based on the measured interaction forces between them while maintaining proper formation with respect to each other and the object. To test this controller, force sensor plates were attached to the front of the Pioneer 3-AT robots. The object is a long, thin, rectangular prism made of cardboard, filled with paper for weight. An Ultra Wideband system was used to track the positions and headings of the robots and object. Force sensing was integrated into the position cluster controller by decoupling robot commands derived from the position and force control loops. The result was a successful pair of experiments demonstrating controlled transportation of the object, validating the control architecture. The robots pushed the object to follow linear and circular trajectories. This research is an initial step toward a hybrid force/position control architecture with cluster control for object transportation by a multirobot system.
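    A minimal sketch of the decoupling idea: one loop regulates contact force along the push direction while another corrects formation error; the gains and structure here are illustrative, not the paper's tuned cluster controller.

```python
def push_command(f_measured, f_desired, formation_err, kf=0.002, kp=0.8):
    """Decoupled command for one pushing robot: a force loop regulates contact
    force along the push direction while a position loop corrects the
    cluster-formation error laterally.  Gains are illustrative."""
    v_push = kf * (f_desired - f_measured)  # force error -> forward speed
    v_lateral = kp * formation_err          # formation error -> lateral speed
    return v_push, v_lateral

# Hypothetical sample: pushing 2 N too lightly, 3 cm off formation.
print(push_command(f_measured=8.0, f_desired=10.0, formation_err=0.03))
```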

    Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review

    Visual servoing is highly critical for robotic technologies that rely on visual feedback. In this context, robot systems tend to be unresponsive because they rely on pre-programmed trajectories and paths, which cannot accommodate a change in the environment or the absence of an object. This review paper aims to provide comprehensive studies of the recent applications of visual servoing and deep neural networks (DNNs). Position-based visual servoing (PBVS) and MobileNet-SSD were the algorithms chosen for alignment control of the film handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework of feature extraction and description, visual servoing, and MobileNet-SSD. Likewise, the latest applications of visual servoing and DNNs are summarized, including a comparison of MobileNet-SSD with other sophisticated models. As previous studies have shown, visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing and the deep neural network, shaped by different parameters such as the type of visual servoing, the feature extraction and description, and the DNNs used to construct a robust state estimator. Therefore, visual servoing and MobileNet-SSD are parameterized concepts that require further optimization to achieve a specific purpose with distinct tools.
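    As a simple illustration of closing a servo loop on a detection, the sketch below maps a MobileNet-SSD-style bounding box to a proportional alignment command toward the image centre. This is an image-based simplification (the reviewed system uses PBVS, which additionally requires pose estimation), and the function, gain, and sample values are hypothetical.

```python
import numpy as np

def alignment_velocity(box, img_w, img_h, gain=0.5):
    """Map a detection box (x1, y1, x2, y2) to a proportional 2-D velocity
    command that drives the detected object toward the image centre."""
    x1, y1, x2, y2 = box
    centre = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    err = centre - np.array([img_w / 2.0, img_h / 2.0])
    return -gain * err  # zero pixel error means the object is aligned

# Hypothetical 640x480 frame with a detection offset right of centre.
print(alignment_velocity((400, 200, 500, 300), 640, 480))
```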