
    Modeling And Control For Robotic Assistants: Single And Multi-Robot Manipulation

    As advances are made in robotic hardware, the complexity of tasks robots are capable of performing also increases. One goal of modern robotics is to introduce robotic platforms that require very little augmentation of their environments to be effective and robust. The challenge for the roboticist is therefore to develop algorithms and control strategies that leverage knowledge of the task while remaining adaptive, adjusting to perturbations in the environment and in the task assumptions. This work considers approaches to these challenges in the context of a wet-lab robotic assistant. The tasks considered are cooperative transport with limited communication between team members, and robot-assisted rapid experiment preparation, which requires pouring reagents from open containers and is useful to research and development scientists. For cooperative transport, robots must be able to plan collision-free trajectories and agree on a final destination so as to minimize internal forces on the carried load. All-robot teams are considered first, where the robots must reach consensus to minimize internal forces. The case of a human leader and robot follower is then considered, where the robot must use non-verbal information to estimate the human leader's intended pose for the carried load. For experiment preparation, the robot must pour precisely from open containers of known fluid in a single attempt. Two scenarios are examined: one in which the geometries and pouring behaviors of the pouring and receiving containers are known, and one in which the pouring container must be approximated. An analytical solution is presented for a given geometry in the first instance. In the second instance, a combination of online system identification and model priors is used to achieve the precision pour in a single attempt, with considerations for long-term robot deployment. The main contributions of this work are considerations and implementations for making robots capable of performing complex tasks, with an emphasis on combining model-based and data-driven approaches for best performance.
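
    The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the single-attempt precision-pour idea it describes: a model prior on the container's outflow behavior is refined by online system identification (here, a scalar recursive least-squares update on an assumed linear tilt-to-flow model), and the pour stops once the measured plus predicted outflow reaches the target. All names, the linear flow model, and the numbers are assumptions, not values from the thesis.

    import numpy as np

    # Hypothetical sketch: blend a model prior with online recursive least-squares
    # to estimate outflow rate as an (assumed linear) function of tilt angle.
    class PourEstimator:
        def __init__(self, prior_slope, prior_confidence=5.0):
            self.slope = prior_slope            # prior: ml/s of outflow per rad of tilt
            self.p = 1.0 / prior_confidence     # estimate covariance (scalar RLS)

        def update(self, tilt, observed_flow, noise_var=0.1):
            # one scalar recursive least-squares step on the observed flow rate
            gain = self.p * tilt / (noise_var + tilt * self.p * tilt)
            self.slope += gain * (observed_flow - self.slope * tilt)
            self.p *= (1.0 - gain * tilt)

    def pour(target_ml, estimator, dt=0.02, tilt_rate=0.1, stop_lag=0.5):
        poured, tilt = 0.0, 0.0
        while tilt < 1.5:
            predicted_flow = estimator.slope * tilt
            # stop early: account for fluid still leaving during the return motion
            if poured + predicted_flow * stop_lag >= target_ml:
                break
            tilt += tilt_rate * dt                                     # keep tilting
            true_flow = max(12.0 * tilt + 0.2 * np.random.randn(), 0.0)  # stand-in container
            estimator.update(tilt, true_flow)
            poured += true_flow * dt
        return poured, tilt

    estimator = PourEstimator(prior_slope=10.0)   # prior from a known container geometry
    print(pour(target_ml=50.0, estimator=estimator))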

    A vision-based optical character recognition system for real-time identification of tractors in a port container terminal

    Automation has been seen as a promising way to increase the productivity of modern sea port container terminals. The potential increase in throughput and work efficiency, and the reduction of labor cost, have led stakeholders to strive for the introduction of automation across terminal operations. A specific container handling process that is readily amenable to automation is the deployment and control of gantry cranes in the container yard, where operations such as truck identification, loading and unloading of containers, and job management are primarily performed manually in a typical terminal. To facilitate the overall automation of gantry crane operation, we devised an approach for the real-time identification of tractors through recognition of the corresponding number plates located on top of the tractor cabin. With this crucial piece of information, remote or automated yard operations can then be performed. A machine vision-based system is introduced whereby these number plates are read and identified in real time while the tractors are operating in the terminal. In this paper, we present the design and implementation of the system and highlight the major difficulties encountered, including the recognition of character information printed on the number plates under poor image integrity. Working solutions that address these problems are proposed and incorporated into the overall identification system.
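
    A generic character-segmentation front end of the kind such a plate-reading system might use is sketched below (illustrative only; the paper's actual pipeline, thresholds and classifier are not given in the abstract, and the file name and size heuristics here are assumptions): contrast enhancement, Otsu binarisation, and size-based filtering of candidate character blobs, which would then be passed to a trained character classifier.

    import cv2  # assumes OpenCV 4.x

    def candidate_characters(frame_bgr):
        # Hypothetical front end: enhance contrast, binarise, and keep blobs whose
        # size and aspect ratio plausibly correspond to plate characters.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)                      # counter poor image integrity
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        chars = []
        for contour in contours:
            x, y, w, h = cv2.boundingRect(contour)
            if 20 < h < 120 and 0.2 < w / h < 1.0:         # plausible character shape
                chars.append((x, binary[y:y + h, x:x + w]))
        chars.sort(key=lambda item: item[0])               # left-to-right reading order
        return [crop for _, crop in chars]                 # feed these to a classifier

    if __name__ == "__main__":
        frame = cv2.imread("tractor_frame.jpg")            # hypothetical sample image
        print(len(candidate_characters(frame)), "character candidates found")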

    Job shop scheduling with artificial immune systems

    Job shop scheduling is complex due to its dynamic environment. When the information about jobs and machines is pre-defined and no unexpected events occur, the job shop is static. In reality, however, the scheduling environment is dynamic because of constantly changing information and various uncertainties. This study addresses this complex job shop scheduling environment by applying artificial immune system (AIS) theory together with a switching strategy that changes from a sequencing approach to a dispatching approach based on the system status. AIS is a biologically inspired computational paradigm that simulates the mechanisms of the biological immune system. AIS therefore offers appealing features that distinguish it from other evolutionary intelligent algorithms, such as self-learning, long-lasting memory, cross-reactive response, discrimination of self from non-self, fault tolerance, and strong adaptability to the environment. These features are used in this study to solve the job shop scheduling problem. When the job shop environment is static, a sequencing approach based on the clonal selection theory and immune network theory of AIS is applied. This approach achieves strong performance in terms of computation time, especially for small problem instances. The long-lasting memory feature is shown to accelerate the convergence of the algorithm and reduce computation time. When unexpected events occasionally arrive at the job shop and disrupt the static environment, an extended deterministic dendritic cell algorithm (DCA) based on the DCA theory of AIS is proposed to arrange the rescheduling process and balance the efficiency and stability of the system. When disturbances occur continuously, such as continuous job arrivals, the sequencing approach is switched to a dispatching approach that uses priority dispatching rules (PDRs). The immune network theory of AIS is applied to build an idiotypic network model of PDRs that coordinates the application of the various dispatching rules. The experiments show that the proposed network model is strongly adaptive to the dynamic job shop scheduling environment.
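
    To make the clonal selection idea concrete, here is a toy sketch, not the study's implementation, that sequences jobs on a single machine to minimize total tardiness: antibodies are job permutations, affinity is negative tardiness, and the highest-affinity antibodies are cloned and hypermutated by swapping two positions. The instance data and parameters are invented for illustration.

    import random

    def total_tardiness(sequence, processing, due):
        # single-machine total tardiness of a job sequence
        time, tardy = 0, 0
        for job in sequence:
            time += processing[job]
            tardy += max(0, time - due[job])
        return tardy

    def clonal_selection(processing, due, pop_size=20, generations=100, clones=5):
        jobs = list(range(len(processing)))
        population = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda s: total_tardiness(s, processing, due))
            survivors = population[:pop_size // 2]           # highest-affinity antibodies
            offspring = []
            for antibody in survivors:
                for _ in range(clones):
                    clone = antibody[:]
                    i, j = random.sample(range(len(clone)), 2)
                    clone[i], clone[j] = clone[j], clone[i]  # hypermutation: swap two jobs
                    offspring.append(clone)
            population = sorted(survivors + offspring,
                                key=lambda s: total_tardiness(s, processing, due))[:pop_size]
        return population[0]

    processing = [4, 2, 7, 3, 5]     # invented processing times
    due_dates  = [6, 5, 18, 9, 12]   # invented due dates
    best = clonal_selection(processing, due_dates)
    print(best, total_tardiness(best, processing, due_dates))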

    Provably Safe Motion Planning for Autonomous Vehicles through Real-Time Verification

    This thesis introduces fail-safe motion planning as the first approach to guarantee the legal safety of autonomous vehicles in arbitrary traffic situations. The proposed safety layer verifies whether intended trajectories comply with legal safety and provides fail-safe trajectories when intended trajectories would result in safety-critical situations. The presented results indicate that the use of fail-safe motion planning can drastically reduce the number of traffic accidents.
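
    The abstract only states the idea, so the following is a deliberately simplified 1-D sketch of a verification layer of this kind: the intended longitudinal trajectory is checked against the worst-case braking motion of a lead vehicle, and if the required gap is ever violated, a pre-defined full-braking fail-safe trajectory is executed instead. The dynamics, numbers and safety gap are assumptions, not values from the thesis.

    def rollout(x0, v0, accel, dt, steps):
        # constant-acceleration rollout; speed is clamped at zero (no reversing)
        positions, x, v = [], x0, v0
        for _ in range(steps):
            v_next = max(0.0, v + accel * dt)
            x += 0.5 * (v + v_next) * dt
            v = v_next
            positions.append(x)
        return positions

    def is_safe(ego, lead_worst_case, gap=5.0):
        # legal-safety surrogate: always keep at least `gap` metres to the lead vehicle
        return all(xl - xe >= gap for xe, xl in zip(ego, lead_worst_case))

    dt, steps = 0.1, 30
    intended = rollout(0.0, 15.0, 1.0, dt, steps)       # planner's intended trajectory
    lead_wc  = rollout(40.0, 10.0, -8.0, dt, steps)     # lead vehicle brakes at full force

    if is_safe(intended, lead_wc):
        plan = intended
    else:
        plan = rollout(0.0, 15.0, -8.0, dt, steps)      # fail-safe: maximal braking
        assert is_safe(plan, lead_wc)
    print("executing", "intended" if plan is intended else "fail-safe", "trajectory")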

    3D Gaze Estimation from Remote RGB-D Sensors

    The development of systems able to retrieve and characterise the state of humans is important for many applications and fields of study. In particular, as a display of attention and interest, gaze is a fundamental cue for understanding people's activities, behaviors, intentions, state of mind and personality. Moreover, gaze plays a major role in the communication process, such as showing attention to the speaker, indicating who is addressed, or averting gaze to keep the floor. Therefore, many applications within the fields of human-human, human-robot and human-computer interaction could benefit from gaze sensing. However, despite significant advances over more than three decades of research, current gaze estimation technologies cannot address the conditions often required within these fields, such as remote sensing, unconstrained user movements and minimal user calibration. Furthermore, to reduce cost, it is preferable to rely on consumer sensors, but this usually leads to low-resolution and low-contrast images that current techniques can hardly cope with. In this thesis we investigate the problem of automatic gaze estimation under head pose variations, low-resolution sensing and different levels of user calibration, including the uncalibrated case. We propose to build a non-intrusive gaze estimation system based on remote consumer RGB-D sensors. In this context, we propose algorithmic solutions which overcome many of the limitations of previous systems. We thus address the main aspects of this problem: 3D head pose tracking, 3D gaze estimation, and gaze-based application modeling. First, we develop an accurate model-based 3D head pose tracking system which adapts to the participant without requiring explicit actions. Second, to achieve head pose invariant gaze estimation, we propose a method to correct the eye image appearance variations due to head pose. We then investigate two different methodologies to infer the 3D gaze direction. The first builds upon machine learning regression techniques; in this context, we propose strategies to improve their generalization, in particular to handle different people. The second methodology is a new paradigm we propose and call geometric generative gaze estimation. This novel approach combines the benefits of geometric eye modeling (normally restricted to high-resolution images due to the difficulty of feature extraction) with a stochastic segmentation process (adapted to low resolution) within a Bayesian model that allows the decoupling of user-specific geometry and session-specific appearance parameters, along with the introduction of priors, which are appropriate for adaptation relying on small amounts of data. The aforementioned gaze estimation methods are validated through extensive experiments on a comprehensive database which we collected and made publicly available. Finally, we study the problem of automatic gaze coding in natural dyadic and group human interactions. The system builds upon the thesis contributions to handle unconstrained head movements and the lack of user calibration. It further exploits the 3D tracking of participants and their gaze to conduct a 3D geometric analysis within a multi-camera setup. Experiments on real and natural interactions demonstrate that the system is highly accurate. Overall, the methods developed in this dissertation are suitable for many applications involving large diversity in terms of setup configuration, user calibration and mobility.
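
    As a minimal illustration of how a tracked 3D head pose and an eye-based gaze direction combine into a point of regard (the thesis methods are far richer; every name and number below is an assumption), the sketch rotates a gaze direction expressed in the head frame into the world frame and intersects the resulting ray with a target plane such as a screen.

    import numpy as np

    def gaze_point_on_plane(eye_center_w, gaze_dir_head, head_R, plane_point, plane_normal):
        # head_R: 3x3 rotation from head frame to world frame (from a head pose tracker)
        direction = head_R @ gaze_dir_head
        direction /= np.linalg.norm(direction)
        denom = direction @ plane_normal
        if abs(denom) < 1e-6:
            return None                       # gaze ray parallel to the plane
        t = ((plane_point - eye_center_w) @ plane_normal) / denom
        return eye_center_w + t * direction if t > 0 else None

    head_R = np.eye(3)                                # head looking straight ahead
    eye = np.array([0.0, 0.0, 0.0])                   # eyeball centre in world frame (m)
    gaze_in_head = np.array([0.1, -0.05, 1.0])        # e.g. from appearance regression
    screen_point = np.array([0.0, 0.0, 0.6])          # a point on the screen plane
    screen_normal = np.array([0.0, 0.0, -1.0])
    print(gaze_point_on_plane(eye, gaze_in_head, head_R, screen_point, screen_normal))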