2,461 research outputs found

    Synchronization controller for a 3-RRR parallel manipulator

    A 3-RRR parallel manipulator is a well-known closed-loop kinematic chain mechanism in which the end-effector, generally a moving platform, is connected to the base through several independent actuators. The performance of the robot is determined by that of its component actuators, which are driven independently by tracking controllers without sharing information with each other; the platform performance degrades if any actuator is not driven well. This paper therefore develops an advanced synchronization (SYNC) controller for position tracking of a 3-RRR parallel robot using three DC motor-driven actuators. The proposed control scheme consists of three sliding mode controllers (SMC) to drive the actuators and a supervisory controller, named the PID-neural network controller (PIDNNC), to compensate the synchronization errors due to system nonlinearities, uncertainties, and external disturbances. A Lyapunov stability condition is added to the PIDNNC training mechanism to ensure robust tracking performance of the manipulator. Numerical simulations under different working conditions demonstrate the effectiveness of the suggested control approach.
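As a rough illustration of the SMC component, here is a minimal single-joint sliding mode controller with a boundary-layer saturation; the gains, the unit-inertia joint model, and the step reference are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def smc_torque(q, qd, q_ref, qd_ref, lam=5.0, k=2.0, phi=0.05):
    """Sliding mode control law for one joint (illustrative gains).

    s = e_dot + lam * e defines the sliding surface; the control
    drives s to zero. A saturated switching term (boundary layer
    phi) replaces sign(s) to reduce chattering.
    """
    e = q_ref - q
    ed = qd_ref - qd
    s = ed + lam * e
    return k * np.clip(s / phi, -1.0, 1.0)

# Simulate a unit-inertia, frictionless joint (assumption) tracking
# a unit step reference with explicit Euler integration.
q, qd, dt = 0.0, 0.0, 1e-3
for _ in range(5000):
    u = smc_torque(q, qd, q_ref=1.0, qd_ref=0.0)
    qd += u * dt
    q += qd * dt
```

Inside the boundary layer this behaves like a stiff PD loop, which is why the chattering typical of a pure sign(s) law does not appear in the joint trajectory.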

    Dance Teaching by a Robot: Combining Cognitive and Physical Human-Robot Interaction for Supporting the Skill Learning Process

    This letter presents a physical human-robot interaction scenario in which a robot guides and performs the role of a teacher within a defined dance training framework. Combined cognitive and physical feedback of performance is proposed for assisting the skill learning process. Direct contact cooperation has been designed through an adaptive impedance-based controller that adjusts according to the partner's performance in the task. To measure performance, a scoring system has been designed using the concept of progressive teaching (PT); the system adjusts the difficulty based on the user's number of practices and performance history. Comparative experiments using the proposed method and a baseline constant controller have shown that PT yields better performance in the initial stage of skill learning. An analysis of the subjects' perception of comfort, peace of mind, and robot performance has shown a significant difference at the p < .01 level, favoring the PT algorithm. Comment: Presented at IEEE International Conference on Robotics and Automation ICRA-201
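The letter's scoring rule is not reproduced here, but a progressive-teaching difficulty adjustment of the kind described might be sketched as follows; the thresholds, window size, and level bounds are hypothetical:

```python
def adjust_difficulty(level, history, window=3, up=0.8, down=0.4):
    """Hypothetical progressive-teaching rule: raise the difficulty
    level when the user's recent mean score is high, lower it when
    it is low, otherwise keep it. Thresholds and window size are
    illustrative assumptions, not taken from the paper.
    """
    recent = history[-window:]          # performance history window
    if not recent:
        return level
    mean = sum(recent) / len(recent)
    if mean >= up:
        return min(level + 1, 10)       # cap at an assumed max level
    if mean <= down:
        return max(level - 1, 1)        # floor at an assumed min level
    return level
```

For example, `adjust_difficulty(3, [0.9, 0.85, 0.9])` steps the level up to 4, while a run of low scores steps it back down.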

    Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios

    Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motion or in scenes with high dynamic range. However, event cameras output little information when the amount of motion is limited, such as when the camera is nearly still. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed, well-lit scenarios), but they fail severely in the case of fast motion or difficult lighting, such as high-dynamic-range or low-light scenes. In this paper, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly coupled manner. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines and 85% over standard-frames-only visual-inertial systems, while remaining computationally tractable. Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes. Comment: 8 pages, 9 figures, 2 table

    Motion Generation and Planning System for a Virtual Reality Motion Simulator: Development, Integration, and Analysis

    In the past five years, the advent of virtual reality devices has significantly influenced research on immersion in virtual worlds. In addition to visual input, motion cues play a vital role in the sense of presence and the degree of engagement in a virtual environment. This thesis aims to develop a motion generation and planning system for the SP7 motion simulator. The SP7 is a parallel robotic manipulator in a 6RSS-R configuration. The motion generation system must be able to produce accurate motion data that matches the visual and audio signals. In this research, two different system workflows have been developed: the first creates custom visual, audio, and motion cues, while the second extracts the required motion data from an existing game or simulation. Motion data from the motion generation system are unbounded, while the motion simulator's movements are limited. The motion planning system, commonly known as the motion cueing algorithm, is used to create an effective illusion within the limited capabilities of the motion platform. Appropriate and effective motion cues require a proper understanding of human motion perception, in particular the functioning of the vestibular system. A classical motion cueing algorithm has been developed using models of the semicircular canals and otoliths, and its procedural implementation is described in this thesis. We have integrated all components to turn this robotic mechanism into a VR motion simulator. In general, the performance of a motion simulator is measured by the quality of the motion perceived on the platform by the user. Accordingly, a novel methodology for the systematic subjective evaluation of the SP7 with a panel of jurors was developed to assess the quality of motion perception. Based on the results of the evaluation, key issues related to the current configuration of the SP7 were identified.
Minor issues were rectified along the way, so they are not extensively reported in this thesis. Two major issues have been addressed extensively: the parameter tuning of the motion cueing algorithm and the motion compensation of the visual signal in virtual reality devices. The first was resolved by developing a tuning strategy with an abstraction-layer concept derived from the outcome of a novel technique for the objective assessment of the motion cueing algorithm. The second was traced to a calibration problem in the Vive lighthouse tracking system, so a thorough experimental study was performed to obtain an optimally calibrated environment. This was achieved by benchmarking the dynamic position tracking performance of the Vive lighthouse tracking system against an industrial serial robot used as a ground truth system. With the resolution of these issues, a general-purpose virtual reality motion simulator has been developed that is capable of creating custom visual, audio, and motion cues and of executing motion planning for a robotic manipulator under human motion perception constraints.
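The classical motion cueing idea can be illustrated by its core operation: washing out sustained accelerations with a high-pass filter, so that onset cues are reproduced while the platform drifts back into its limited workspace. This first-order sketch (filter order, time constant, and signals are assumptions) omits the tilt coordination and vestibular models used in the thesis:

```python
import numpy as np

def washout_highpass(acc, dt, tau=2.0):
    """First-order high-pass 'washout' of an acceleration signal:
    transient onset cues pass through, sustained acceleration
    washes out so the platform returns toward neutral.
    """
    y = np.zeros_like(acc)
    alpha = tau / (tau + dt)
    for k in range(1, len(acc)):
        y[k] = alpha * (y[k-1] + acc[k] - acc[k-1])
    return y

# Step in vehicle acceleration at t = 1 s: the cue spikes at onset,
# then decays with time constant tau instead of being held.
dt = 0.01
t = np.arange(0, 10, dt)
acc = np.where(t >= 1.0, 2.0, 0.0)
cue = washout_highpass(acc, dt)
```

Classical implementations use second- or third-order filters per axis and add tilt coordination, which substitutes a slow platform tilt for the washed-out sustained component.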

    An Effective Multi-Cue Positioning System for Agricultural Robotics

    Self-localization is a crucial capability for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are prone to failure in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper. Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
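The drift-mitigation idea behind pose graph optimization can be shown in a toy 1-D setting: biased odometry factors are fused with noisy GPS priors by weighted least squares. The values, noise levels, and unit weights below are illustrative, not the paper's setup:

```python
import numpy as np

# Toy 1-D pose graph: drifting odometry + noisy GPS, fused by
# weighted least squares (the linear core of pose-graph optimization).
true_x = np.arange(0.0, 10.0)              # ground-truth poses
odo = np.diff(true_x) + 0.3                # odometry with bias (drift)
gps = true_x + np.array([0.5, -0.4, 0.2, -0.3, 0.4,
                         -0.2, 0.3, -0.5, 0.1, -0.1])

n = len(true_x)
rows, rhs, w_odo, w_gps = [], [], 1.0, 1.0
for i in range(n - 1):                     # odometry (relative) factors
    r = np.zeros(n); r[i + 1], r[i] = 1, -1
    rows.append(w_odo * r); rhs.append(w_odo * odo[i])
for i in range(n):                         # GPS (absolute prior) factors
    r = np.zeros(n); r[i] = 1
    rows.append(w_gps * r); rhs.append(w_gps * gps[i])

x_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

# Pure dead reckoning accumulates the odometry bias without bound.
dead_reckoning = np.concatenate([[gps[0]], gps[0] + np.cumsum(odo)])
```

The fused estimate stays anchored by the absolute factors, while the dead-reckoning trajectory drifts by the accumulated bias; that bounded-versus-unbounded error behavior is exactly the benefit the paper exploits in 3D with additional DEM and MRF constraints.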

    Modeling and Control of the Cooperative Automated Fiber Placement System

    Automated Fiber Placement (AFP) machines have brought significant improvements to composite manufacturing. However, current AFP machines are designed to manufacture simple structures such as shallow shells or tubes, and cannot handle applications with more complex shapes. A cooperative AFP system is proposed to manufacture more complex composite components, which place higher demands on trajectory planning than the current AFP system can meet. The system consists of a 6 degree-of-freedom (DOF) serial robot holding the fiber placement head, a 6-DOF revolute-spherical-spherical (RSS) parallel robot on which a 1-DOF mandrel holder is installed, and an eye-to-hand photogrammetry sensor, i.e. a C-Track, to detect the poses of the end-effectors of both the parallel robot and the serial robot. Kinematic models of the parallel robot and the serial robot are built, and the analysis of constraints and singularities is conducted for the cooperative AFP system. The definitions of the tool frames for the serial robot and the parallel robot are illustrated, and some kinematic parameters of the parallel robot are calibrated using the photogrammetry sensor. Although the cooperative AFP system increases the flexibility of composite manufacturing by adding more DOFs, in some cases there may be no feasible path for laying up the fiber, because paths must remain free of collisions and singularities. To meet this challenge, an innovative semi-offline synchronized trajectory algorithm is proposed that incorporates on-line robot control to follow paths generated off-line, especially when those paths are infeasible for the multiple robots to realize directly. By correcting the robots' paths at the points where collisions or singularities occur, the fiber can be laid up continuously without interruption. The correction is calculated on-line from the pose tracking data of the parallel robot detected by the photogrammetry sensor.
Due to the flexibility of the 6-DOF parallel robot, optimized offsets with varying movements are generated according to the different singularities and constraints. Experimental results demonstrate successful avoidance of singularities and joint limits, and the designed cooperative AFP system can execute the movements needed to manufacture a composite structure with a Y-shape.

    Hardware Development of an Ultra-Wideband System for High Precision Localization Applications

    A precise localization system for indoor environments has been developed. The system is based on transmitting and receiving picosecond pulses and carrying out a complete narrow-pulse signal detection and processing scheme in the time domain. The challenges in developing such a system include: generating ultra-wideband (UWB) pulses, pulse dispersion due to antennas, modeling of complex propagation channels with severe multipath effects, the need for extremely high sampling rates for digital processing, synchronization between the tag and receivers' clocks, clock jitter, local oscillator (LO) phase noise, frequency offset between the tag and receivers' LOs, and antenna phase center variation. For such a high-precision system, with mm or even sub-mm accuracy, all these effects must be accounted for and minimized. In this work, we have successfully addressed many of the above challenges and developed a stand-alone system for positioning both static and dynamic targets with approximately 2 mm and 6 mm 3-D accuracy, respectively. These results exceed the state of the art for any commercially available UWB positioning system and are considered a major milestone in developing this technology. My contributions include the development of a picosecond pulse generator, an extremely wideband omni-directional antenna, a highly directive UWB receiving antenna with low phase center variation, an extremely high data rate sampler, and the establishment of a non-synchronized UWB system architecture. The developed low-cost sampler, for example, can sample narrow pulses at up to 1000 GS/s, while the developed antennas cover over 6 GHz of bandwidth with minimal pulse distortion. The stand-alone prototype system tracks a target using 4-6 base stations and a triangulation scheme to find its location in space.
Advanced signal processing algorithms based on first-peak and leading-edge detection have been developed and extensively evaluated to achieve high-accuracy 3-D localization. 1D, 2D, and 3D experiments have been carried out and validated against an optical reference system that provides better than 0.3 mm 3-D accuracy. Such a high-accuracy wireless localization system should have a great impact on the operating room of the future.
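At its geometric core, the triangulation scheme reduces to solving range equations from known base stations. A standard linearized least-squares trilateration (station layout and target position made up for illustration, noise-free ranges) looks like this:

```python
import numpy as np

# Least-squares trilateration from ranges to fixed base stations.
stations = np.array([[0.0, 0.0, 0.0],
                     [4.0, 0.0, 0.0],
                     [0.0, 4.0, 0.0],
                     [0.0, 0.0, 3.0]])
target = np.array([1.0, 2.0, 0.5])                 # unknown position
ranges = np.linalg.norm(stations - target, axis=1)  # measured ranges

# Subtracting the first range equation from the others removes the
# quadratic |x|^2 term and linearizes the problem:
#   2 (s_i - s_0)^T x = |s_i|^2 - |s_0|^2 - r_i^2 + r_0^2
A = 2.0 * (stations[1:] - stations[0])
b = (np.sum(stations[1:]**2, axis=1) - np.sum(stations[0]**2)
     - ranges[1:]**2 + ranges[0]**2)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy ranges, the same overdetermined system is solved in the least-squares sense, and range accuracy translates to position accuracy through the geometry (dilution of precision) of the station layout.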

    Uncertainty Modelling of High-precision Trajectories for Industrial Real-time Measurement Applications

    Within the field of large-volume metrology, kinematic tasks such as the movement of an industrial robot have been measured using laser trackers. Despite these kinematic applications, most research to date has focused on static measurements. A reliable uncertainty for kinematic measurements is crucial in order to assess the spatiotemporal path deviations of a robot. With this in mind, a real-time-capable approach was developed to determine the uncertainties of kinematic measurements.
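One generic way to obtain such uncertainties, sketched here with assumed noise levels rather than the thesis's actual method, is to propagate a laser tracker's range and angle noise to Cartesian coordinates by Monte Carlo (a real-time variant would typically use first-order analytic propagation instead):

```python
import numpy as np

# Monte Carlo propagation of spherical-measurement noise (range,
# azimuth, elevation) to a 3x3 Cartesian covariance. The nominal
# reading and the noise levels are illustrative assumptions.
rng = np.random.default_rng(0)
r, az, el = 5.0, 0.3, 0.2            # nominal spherical reading (m, rad)
sig_r, sig_ang = 1e-5, 5e-6          # 10 um range, 5 urad angle noise

n = 100_000
rs = r + rng.normal(0, sig_r, n)
azs = az + rng.normal(0, sig_ang, n)
els = el + rng.normal(0, sig_ang, n)
pts = np.stack([rs * np.cos(els) * np.cos(azs),   # spherical -> Cartesian
                rs * np.cos(els) * np.sin(azs),
                rs * np.sin(els)], axis=1)
cov = np.cov(pts.T)                  # 3x3 Cartesian covariance estimate
```

The angular terms dominate at long range (they scale with r), which is why the Cartesian uncertainty of a tracker measurement grows with distance even when the range noise stays constant.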

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? And is SLAM solved?
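The de-facto standard formulation referred to here is maximum a posteriori estimation over a factor graph: given variables $X$ (poses and map) and measurements $Z = \{z_k\}$ with models $z_k = h_k(X_k) + \varepsilon_k$, Gaussian noise reduces MAP inference to nonlinear least squares. Schematically (generic notation, not copied from the paper):

```latex
X^\star \;=\; \operatorname*{argmax}_{X} \; p(X \mid Z)
        \;=\; \operatorname*{argmin}_{X} \; \sum_{k} \big\| h_k(X_k) - z_k \big\|^2_{\Omega_k}
```

where $X_k$ is the subset of variables involved in measurement $k$ and $\Omega_k$ is the corresponding information matrix; modern SLAM back-ends solve this with iterative sparse solvers such as Gauss-Newton or Levenberg-Marquardt.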

    Postprocesamiento CAM-ROBOTICA orientado al prototipado y mecanizado en células robotizadas complejas (CAM-robotics postprocessing oriented to prototyping and machining in complex robotic workcells)

    The main interest of this thesis is the study and implementation of postprocessors to adapt the toolpath generated by a Computer Aided Manufacturing (CAM) system to a complex eight-joint robotic workcell devoted to the rapid prototyping of 3D CAD-defined products. The workcell consists of a 6R industrial manipulator mounted on a linear track and synchronized with a rotary table. Accomplishing this main objective requires several preliminary tasks, each with its own methodology, objective, and partial results that complement one another: - The architecture of the workcell is described in depth, at both the displacement and joint-rate levels, for both the direct and inverse resolutions. The conditioning of the Jacobian matrix is used as a kinetostatic performance index to evaluate the vicinity of singular postures, which are analysed from a geometric point of view. - Prior to any machining, the additional external joints require a calibration performed in situ, usually in an industrial environment. A novel Non-contact Planar Constraint Calibration method is developed to estimate the external joints' configuration parameters by means of a laser displacement sensor. - A first control scheme is implemented by means of a fuzzy inference engine at the displacement level, integrated within the postprocessor of the CAM software. - Several Redundancy Resolution Schemes (RRS) at the joint-rate level are compared for the configuration of the postprocessor, dealing not only with the additional joints (intrinsic redundancy) but also with the redundancy due to the symmetry of the milling tool (functional redundancy). - The use of these schemes is optimized by adjusting two performance criterion vectors, related to singularity avoidance and to the maintenance of a preferred reference posture, as secondary tasks performed during path tracking.
Two innovative fuzzy inference engines actively adjust the weight of each joint in these tasks. Andrés De La Esperanza, FJ. (2011). Postprocesamiento CAM-ROBOTICA orientado al prototipado y mecanizado en células robotizadas complejas [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10627
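The redundancy resolution schemes compared in the thesis build on the classical gradient-projection form, in which the null space of the Jacobian serves a secondary posture task. A generic textbook sketch follows (random Jacobian, unit gain, all values illustrative; this is not the thesis's fuzzy-weighted variant):

```python
import numpy as np

# Gradient-projection redundancy resolution for a redundant arm:
# track the 6-DOF task twist exactly with the pseudoinverse, and use
# the null space of J to pull the 8 joints toward a preferred posture.
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 8))          # 6 task DOF, 8 joints (stand-in)
x_dot = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])   # desired task twist
q = rng.standard_normal(8)               # current joint configuration
q_pref = np.zeros(8)                     # preferred reference posture

J_pinv = np.linalg.pinv(J)
k0 = 1.0                                 # secondary-task gain (assumed)
z = k0 * (q_pref - q)                    # secondary-task joint velocity
q_dot = J_pinv @ x_dot + (np.eye(8) - J_pinv @ J) @ z
```

The projector `I - J_pinv @ J` guarantees the secondary motion produces no task-space velocity, so the posture task can only reshape the arm, never disturb the toolpath; the thesis's scheme effectively modulates the weighting of `z` per joint with fuzzy inference.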