317 research outputs found

    Visual Servoing

    The goal of this book is to introduce current vision applications developed by leading researchers around the world, and to offer knowledge that can be applied widely to other fields. The book collects the main current studies in machine vision and makes a persuasive case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. Beginners will find it easy to understand developments in visual servoing; engineers, professors, and researchers can study the chapters and then adapt the methods to other applications.

    Modeling, simulation and control of microrobots for the microfactory.

    Future assembly technologies will involve higher levels of automation in order to satisfy increased microscale or nanoscale precision requirements. Traditionally, assembly using a top-down robotic approach has been well studied and applied in the microelectronics and MEMS industries, but less so in nanotechnology. With the boom of nanotechnology since the 1990s, newly designed products with new materials, coatings, and nanoparticles are gradually entering everyone’s lives, while the industry has grown into a billion-dollar volume worldwide. Traditionally, nanotechnology products are assembled using bottom-up methods, such as self-assembly, rather than top-down robotic assembly. This is due to the need to handle large quantities of components and the high cost of precision top-down manipulation. However, bottom-up manufacturing methods have certain limitations: components need predefined shapes and surface coatings, and the number of assembly components is limited to very few. For example, in the case of self-assembly of nano-cubes with an origami design, cost-efficient post-assembly manipulation of the cubes in large quantities is still challenging. In this thesis, we envision a new paradigm for nanoscale assembly, realized with the help of a wafer-scale microfactory containing large numbers of MEMS microrobots. These robots will work together to enhance the throughput of the factory, while their cost will be lower than that of conventional nanopositioners. To fulfill the microfactory vision, numerous challenges related to design, power, control, and nanoscale task completion by these microrobots must be overcome. In this work, we study two classes of microrobots for the microfactory: stationary microrobots and mobile microrobots. For the stationary microrobots in our microfactory application, we have designed and modeled two different types of microrobots, the AFAM (Articulated Four Axes Microrobot) and the SolarPede. The AFAM is a millimeter-size robotic arm with four degrees of freedom that works as a nanomanipulator for nanoparticles, while the SolarPede is a light-powered, centimeter-size robotic conveyor in the microfactory. For mobile microrobots, we have introduced the world’s first laser-driven micrometer-size locomotor for dry environments, called the ChevBot, to prove the concept of the motion mechanism. The ChevBot is fabricated using MEMS technology in the cleanroom, followed by a microassembly step. We showed that it can perform locomotion with pulsed laser energy on a dry surface. Based on the knowledge gained with the ChevBot, we refined its fabrication process to remove the assembly step and increase its reliability. We designed and fabricated a steerable microrobot, the SerpenBot, in order to achieve controllable behavior under the guidance of a laser beam. Through modeling and experimental study of the characteristics of this type of microrobot, we proposed and validated a new type of deep learning controller, the PID-Bayes neural network controller. The experiments showed that the SerpenBot can achieve closed-loop autonomous operation on a dry substrate.
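    The thesis does not reproduce the PID-Bayes controller here, so the following Python sketch only illustrates the general pattern the abstract describes: a classical PID loop whose output is corrected by a learned model that also reports its uncertainty. The class name, the additive correction, and the uncertainty down-weighting are all assumptions for illustration, not the thesis design.

```python
class PIDBayesController:
    """Hypothetical sketch: classical PID plus a learned residual term.

    The thesis pairs a PID loop with a Bayesian neural network; here the
    network is stood in for by any callable returning a mean correction
    and an uncertainty estimate.
    """

    def __init__(self, kp, ki, kd, residual_model, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.residual_model = residual_model  # measurement -> (mean, std)
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u_pid = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Learned correction, down-weighted when the model is uncertain.
        mean, std = self.residual_model(measurement)
        return u_pid + mean / (1.0 + std)

# Toy usage: a zero-mean, low-confidence model contributes almost nothing.
ctrl = PIDBayesController(2.0, 0.5, 0.1, lambda m: (0.0, 10.0))
print(ctrl.step(setpoint=1.0, measurement=0.2))
```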

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. When gathering information from the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometric and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows one to consider either methods exploiting the totality of the data (dense approaches) or a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, on the other hand, are extracted as geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is of great importance and critical at the same time. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometric information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and use in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary. This offers no possibility to actively adapt the input trajectories in order to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
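    The abstract does not give the observer equations; as a reference point, a minimal Python sketch of the linear Kalman recursion that such pose observers build on might look like the following. The actual observer would handle the nonlinear endoscope projection (e.g., with an extended Kalman filter), so this is a generic illustration, not the manuscript's method.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : measurement (e.g., projected image-space features)
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```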

    Designing visually servoed tracking to augment camera teleoperators

    Robots now have far more impact on human life than they did ten years ago; vacuum-cleaning robots are already well known. Making today’s robots work unassisted requires an appropriate visual servoing architecture. In the past, much effort was directed toward designing controllers that rely exclusively on image data; still, most robots are servoed kinematically using joint data. Visual servoing architectures have applications beyond robotics. Video cameras are often mounted on platforms that can move, such as rovers, booms, gantries, and aircraft, and people operate these platforms to capture desired views of a scene or a target. Operating such platforms while avoiding collisions with the environment and occlusions demands much skill. Visually servoing some degrees of freedom may reduce the operator’s burden and improve tracking. We call this concept human-in-the-loop visual servoing. Human-in-the-loop systems involve an operator who manipulates a device for desired tasks based on feedback from the device and the environment. For example, devices like rovers, gantries, and aircraft carry a video camera, and the task is to maneuver the vehicle and position the camera to obtain desired fields of view. To overcome joint limits, avoid collisions, and ensure occlusion-free views, these devices are typically equipped with redundant degrees of freedom. Tracking moving subjects with such systems is a challenging task that requires a well-skilled operator. In this approach, we use computer vision techniques to visually servo the camera. The net effect is that the operator focuses on safely manipulating the boom and dolly while computer control automatically servos the camera.
    Ph.D., Mechanical Engineering -- Drexel University, 200
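    As background for the visual servoing discussed above, here is a minimal Python sketch of the textbook image-based visual servoing law v = -lambda * L^+ (s - s*) for point features. The dissertation's actual controllers are not reproduced here; the function name, parameters, and fixed gain are illustrative assumptions.

```python
import numpy as np

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Image-based visual servoing: v = -lambda * pinv(L) @ (s - s*).

    features, desired : Nx2 arrays of current/desired normalized image points
    depths            : estimated depth Z of each feature point
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    features = np.asarray(features, dtype=float)
    desired = np.asarray(desired, dtype=float)
    L = []
    for (x, y), Z in zip(features, depths):
        # Interaction-matrix rows for a normalized image point.
        L.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        L.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    error = (features - desired).reshape(-1)
    return -lam * np.linalg.pinv(np.array(L)) @ error
```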

    Design and integration of vision based sensors for unmanned aerial vehicles navigation and guidance

    In this paper we present a novel Navigation and Guidance System (NGS) for Unmanned Aerial Vehicles (UAVs) based on Vision Based Navigation (VBN) and other avionics sensors. The main objective of our research is to design a low-cost and low-weight/volume NGS capable of providing the required level of performance in all flight phases of modern small- to medium-size UAVs, with a special focus on automated precision approach and landing, where VBN techniques can be fully exploited in a multisensor integrated architecture. Various existing techniques for VBN are compared, and the Appearance-based Navigation (ABN) approach is selected for implementation.
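    The paper does not detail its ABN implementation; a minimal sketch of the core appearance-matching step (localizing the live frame against a stored sequence of reference views, here with ORB features from OpenCV as an assumed choice) could look like the following:

```python
import cv2

def best_reference_index(frame, reference_images, orb=None):
    """Rank stored reference views by ORB feature matches to the live frame.

    Appearance-based navigation localizes by comparing the current view
    against a pre-recorded image sequence; the index of the best-matching
    reference approximates progress along the taught path.
    """
    orb = orb or cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, d_live = orb.detectAndCompute(frame, None)
    scores = []
    for ref in reference_images:
        _, d_ref = orb.detectAndCompute(ref, None)
        if d_live is None or d_ref is None:
            scores.append(0)
            continue
        scores.append(len(matcher.match(d_live, d_ref)))
    return int(max(range(len(scores)), key=scores.__getitem__))
```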

    Multi-Speaker Tracking with Audio-Visual Information for Robot Perception

    Robot perception plays a crucial role in human-robot interaction (HRI). The perception system provides the robot with information about its surroundings and enables it to give feedback. In a conversational scenario, a group of people may chat in front of the robot and move freely. In such situations, robots are expected to understand where the people are, who is speaking, and what they are talking about. This thesis concentrates on answering the first two questions, namely speaker tracking and diarization. We use different modalities of the robot’s perception system to achieve this goal. Like seeing and hearing for a human being, audio and visual information are the critical cues for a robot in a conversational scenario. The advances in computer vision and audio processing of the last decade have revolutionized robot perception abilities. This thesis makes the following contributions. We first develop a variational Bayesian framework for tracking multiple objects. The variational Bayesian framework gives closed-form, tractable solutions, which makes the tracking process efficient. The framework is first applied to visual multiple-person tracking, with birth and death processes built jointly into the framework to deal with the varying number of people in the scene. Furthermore, we exploit the complementarity of vision and robot motor information: on the one hand, the robot’s active motion can be integrated into the visual tracking system to stabilize the tracking; on the other hand, visual information can be used to perform motor servoing. Audio and visual information are then combined in the variational framework to estimate smooth trajectories of speaking people and to infer the acoustic status of a person: speaking or silent. In addition, to handle the case where visual information is absent, we apply the model to acoustic-only speaker localization and tracking, with online dereverberation techniques applied first and their output fed to the tracking system. Finally, a variant of the acoustic speaker tracking model based on the von Mises distribution is proposed, which is specifically adapted to directional data. All the proposed methods are validated on datasets appropriate to each application.
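    As a small illustration of why the von Mises distribution suits directional data such as a speaker's azimuth, here is its log-likelihood on the circle, where the concentration parameter kappa plays the role of an inverse variance. This is a generic sketch, not the thesis model:

```python
import numpy as np

def von_mises_loglik(obs_azimuth, mean_azimuth, kappa):
    """Log-likelihood of a direction-of-arrival under a von Mises density.

    The von Mises distribution is the circular analogue of a Gaussian:
    mean_azimuth is the predicted speaker direction and kappa the
    concentration (large kappa = sharply peaked observation model).
    """
    return (kappa * np.cos(obs_azimuth - mean_azimuth)
            - np.log(2 * np.pi * np.i0(kappa)))

# A measurement 10 degrees off a confident prediction scores lower
# than a dead-on one.
print(von_mises_loglik(np.deg2rad(40), np.deg2rad(30), kappa=8.0))
print(von_mises_loglik(np.deg2rad(30), np.deg2rad(30), kappa=8.0))
```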

    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    In recent years, numerous science and engineering applications of fractional calculus to the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems have been realized. The extra fractional-order derivative terms provide additional freedom for optimizing the performance of these systems. The review presented in this paper focuses on UAV and UGV control problems that have been addressed with fractional-order techniques over the last decade.
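    To make the "extra fractional-order derivative terms" concrete, here is a minimal Python sketch of the Grünwald-Letnikov approximation commonly used to discretize a fractional derivative of order alpha. The surveyed controllers are not reproduced here; this is only one standard discretization, with illustrative names.

```python
import numpy as np

def gl_fractional_derivative(signal, alpha, h):
    """Grunwald-Letnikov approximation of the fractional derivative D^alpha.

    signal : uniformly sampled values f(t0), f(t0+h), ..., f(t)
    alpha  : derivative order (non-integer orders give the extra tuning
             freedom exploited by fractional-order PI^lambda D^mu control)
    h      : sample period
    Returns the estimate of D^alpha f at the latest sample.
    """
    n = len(signal)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):  # recursive generalized binomial weights
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    # Newest sample pairs with w[0], oldest with w[n-1].
    return h ** (-alpha) * np.dot(w, np.asarray(signal)[::-1])

# Sanity check: as alpha -> 1 this approaches the ordinary first derivative.
t = np.arange(0.0, 1.0, 0.01)
print(gl_fractional_derivative(np.sin(t), alpha=0.99, h=0.01))
```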

    Human-in-the-loop camera control for a mechatronic broadcast boom

    IEEE/ASME Transactions on Mechatronics, 12(1): pp. 41-52.
    Platforms like gantries, booms, aircraft, and submersibles are often used in the broadcasting industry. To avoid collisions and occlusions, such mechatronic platforms often possess redundant degrees of freedom (DOFs). As a result, manually manipulating such platforms demands much skill. This paper describes the implementation of several controllers that, by using computer vision, attempt to reduce the number of manually manipulated DOFs. Experiments were performed to assess the performance of each controller. A model for such a system was developed and validated. To determine how visual servoing can improve tracking, a novice operator and an expert were asked to manually track a moving target with the assistance of visual servoing. The results of these tests were analyzed and compared.
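    The paper's controllers are not reproduced here; as a rough illustration of the partitioning idea (the operator keeps some DOFs while the vision loop servos the rest), a sketch might look like the following, with the function name and the simple masking scheme assumed purely for illustration:

```python
import numpy as np

def blended_command(operator_cmd, vision_cmd, servoed_mask):
    """Partition a platform's DOFs between the operator and the vision loop.

    operator_cmd : operator joystick rates for all DOFs
    vision_cmd   : tracking-controller rates (e.g., centering the target)
    servoed_mask : True where the vision loop owns the DOF
    """
    mask = np.asarray(servoed_mask, dtype=bool)
    return np.where(mask, np.asarray(vision_cmd), np.asarray(operator_cmd))

# Boom translation stays manual; camera pan/tilt are visually servoed.
print(blended_command([0.2, 0.0, 0.1, 0.0, 0.0],
                      [0.0, 0.0, 0.0, 0.05, -0.02],
                      [False, False, False, True, True]))
```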