
    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive changes in unstructured environments and modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these controllers, both mono-sensor and multi-sensor designs that combine several sensors.
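
    As a concrete anchor for the first of these strategies, here is a minimal sketch of the classic image-based visual servoing law v = -lambda * L+ e; the feature layout and interaction matrix are placeholders, not taken from the survey.

        import numpy as np

        def ibvs_twist(features, desired, L, gain=0.5):
            # Classic image-based visual servoing: drive the feature error
            # e = s - s* to zero with the camera twist v = -gain * pinv(L) @ e.
            # L is the (2N x 6) interaction matrix relating feature velocities
            # to the camera twist (vx, vy, vz, wx, wy, wz).
            e = features - desired
            return -gain * np.linalg.pinv(L) @ e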

    Visual Registration and Navigation using Planar Features

    This paper addresses the problem of registering the hexapedal robot RHex relative to a known set of beacons by real-time visual servoing. A suitably constructed navigation function represents the task, in the sense that for a fully actuated machine in the horizontal plane, the gradient dynamics guarantee convergence to the visually cued goal without ever losing sight of the beacons that define it. Since the horizontal-plane behavior of RHex can be represented as a unicycle, feeding back the navigation function gradient avoids loss of beacons but does not yield an asymptotically stable goal. We address the new problems arising from the configuration of the beacons and present preliminary experimental results that illustrate the discrepancies between the idealized and physical robot actuation capabilities.
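
    The key control idea, projecting the navigation-function gradient onto the unicycle's two inputs, can be sketched as follows; the gradient itself is left as a user-supplied callable, since the paper's navigation function encodes beacon-visibility constraints not reproduced here.

        import numpy as np

        def unicycle_gradient_feedback(q, theta, grad_phi, k_v=1.0, k_w=2.0):
            # Unicycle kinematics: x' = v cos(theta), y' = v sin(theta), theta' = w.
            # Forward speed: descent rate of the potential along the current heading.
            g = grad_phi(q)
            heading = np.array([np.cos(theta), np.sin(theta)])
            v = -k_v * float(g @ heading)
            # Turn rate: steer toward the steepest-descent direction, with the
            # angle error wrapped to (-pi, pi].
            desired = np.arctan2(-g[1], -g[0])
            w = k_w * np.arctan2(np.sin(desired - theta), np.cos(desired - theta))
            return v, w

    Consistent with the abstract, this projection keeps the beacons in view but does not by itself make the goal asymptotically stable for the underactuated platform.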

    Design and modeling of a stair climber smart mobile robot (MSRox)

    Modeling, simulation and control of microrobots for the microfactory.

    Future assembly technologies will involve higher levels of automation in order to satisfy increased microscale and nanoscale precision requirements. Assembly using a top-down robotic approach has been well studied and applied in the microelectronics and MEMS industries, but far less so in nanotechnology. With the boom of nanotechnology since the 1990s, newly designed products with new materials, coatings, and nanoparticles are gradually entering everyday life, and the industry has grown into a billion-dollar volume worldwide. Nanotechnology products are traditionally assembled with bottom-up methods such as self-assembly rather than top-down robotic assembly, because components must be handled in large volumes and precise top-down manipulation is costly. However, bottom-up methods have their own limitations: components need predefined shapes and surface coatings, and the number of distinct assembly components is limited to very few. For example, in the self-assembly of nano-cubes with an origami design, cost-efficient post-assembly manipulation of cubes in large quantities remains challenging.

    In this thesis, we envision a new paradigm for nanoscale assembly, realized with the help of a wafer-scale microfactory containing large numbers of MEMS microrobots. These robots work together to increase the throughput of the factory at a lower cost than conventional nanopositioners. To fulfill the microfactory vision, numerous challenges related to design, power, control, and nanoscale task completion by these microrobots must be overcome.

    In this work, we study two classes of microrobots for the microfactory: stationary and mobile. For the stationary class, we have designed and modeled two robots: the AFAM (Articulated Four Axes Microrobot), a millimeter-size four-degree-of-freedom robotic arm that serves as a nanomanipulator for nanoparticles, and the SolarPede, a light-powered centimeter-size robotic conveyor. For the mobile class, we have introduced the world's first laser-driven micrometer-size locomotor for dry environments, the ChevBot, as a proof of concept of the motion mechanism. The ChevBot is fabricated with MEMS technology in the cleanroom, followed by a microassembly step, and we showed that it can locomote on a dry surface under pulsed laser energy. Building on the ChevBot, we refined its fabrication process to remove the assembly step and increase reliability, and we designed and fabricated a steerable microrobot, the SerpenBot, to achieve controllable behavior under the guidance of a laser beam. Through modeling and experimental study of this type of microrobot, we proposed and validated a new type of deep learning controller, the PID-Bayes neural network controller. Experiments showed that the SerpenBot can operate autonomously in closed loop on a dry substrate.
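
    The PID-Bayes neural network controller cannot be reproduced from the abstract alone, but its classical backbone, a discrete PID loop closed over an observed tracking error, can be sketched; the fixed gains below are placeholders standing in for the values the learned component would supply.

        class DiscretePID:
            # Plain discrete PID controller; kp, ki, kd are placeholder gains.
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = None

            def update(self, error):
                # Accumulate the integral term and difference successive
                # errors for the derivative term.
                self.integral += error * self.dt
                deriv = 0.0
                if self.prev_error is not None:
                    deriv = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * deriv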

    Model-Free Large-Scale Cloth Spreading With Mobile Manipulation: Initial Feasibility Study

    Cloth manipulation is common in domestic and service tasks, and most studies use fixed-base manipulators to handle objects that are small relative to the manipulator's workspace, such as towels, shirts, and rags. Manipulation of large-scale cloth, as in bed making and tablecloth spreading, poses the additional challenges of reachability and manipulation control. To address them, this paper presents a novel framework for spreading large-scale cloth with a single-arm mobile manipulator, which resolves the reachability issue, as an initial feasibility study. On the control side, rather than modeling the highly deformable cloth, a vision-based manipulation control scheme is applied, built on an online-updated Jacobian matrix mapping selected feature points to end-effector motion. Behavior Trees (BTs), chosen for their modularity, coordinate the manipulator and the mobile platform. Experiments validate both the model-free manipulation control for cloth spreading under different conditions and the overall framework, demonstrating that large-scale cloth spreading is feasible with a single-arm mobile manipulator and the model-free deformation controller.
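
    One standard way to maintain such an online-updated Jacobian without a cloth model is a damped Broyden rank-one correction, sketched below; the paper's exact update rule and feature selection are not specified in the abstract, so both are assumptions here.

        import numpy as np

        def broyden_update(J, dx, dy, alpha=0.3):
            # Rank-one update of the Jacobian estimate J (feature displacement
            # per unit end-effector motion): after commanding dx and observing
            # feature displacement dy, correct J by the damped prediction error.
            denom = float(dx @ dx)
            if denom < 1e-9:
                return J          # no motion, nothing to learn
            return J + alpha * np.outer(dy - J @ dx, dx) / denom

        def servo_step(J, features, target, gain=0.5):
            # Resolved-rate step toward the target feature configuration.
            return -gain * np.linalg.pinv(J) @ (features - target)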

    Automated Gait Adaptation for Legged Robots

    Gait parameter adaptation on a physical robot is an error-prone, tedious, and time-consuming process. In this paper we present a system for gait adaptation in our RHex series of hexapedal robots that renders this arduous process nearly autonomous. The robot adapts its gait parameters by recourse to a modified version of Nelder-Mead descent, managing its own self-experiments and measuring the outcome by visual servoing within a partially engineered environment. The resulting performance gains extend considerably beyond what we have managed with hand tuning. For example, the best hand-tuned alternating tripod gaits never exceeded 0.8 m/s nor achieved specific resistance below 2.0. In contrast, Nelder-Mead based tuning has yielded alternating tripod gaits at 2.7 m/s (well over 5 body lengths per second) and reduced specific resistance to 0.6, while requiring little human intervention at low and moderate speeds. Comparable gains have been achieved on the much larger ruggedized version of this machine.
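
    The tuning loop amounts to derivative-free optimization over a gait-parameter vector, each function evaluation being a physical trial scored by the visual servoing rig. A minimal sketch using SciPy's stock Nelder-Mead (the paper uses a modified variant) against a synthetic stand-in for the robot trial:

        import numpy as np
        from scipy.optimize import minimize

        def run_trial(params):
            # Stand-in for a physical self-experiment: a synthetic response
            # surface so the script runs end to end. On the robot, speed and
            # specific resistance would be measured by visual servoing.
            best = np.array([0.5, 0.2, 1.2, 0.25])
            speed = 2.7 * np.exp(-np.sum((params - best) ** 2))
            resistance = 0.6 + np.sum((params - best) ** 2)
            return speed, resistance

        def gait_cost(params):
            # One plausible scalarization: penalize resistance, reward speed.
            speed, resistance = run_trial(params)
            return resistance - speed

        x0 = np.array([0.4, 0.25, 1.0, 0.3])   # placeholder gait parameters
        result = minimize(gait_cost, x0, method="Nelder-Mead",
                          options={"xatol": 1e-2, "fatol": 1e-2, "maxiter": 50})
        print(result.x, result.fun)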

    Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    The use of human cognitive capabilities to help guide the autonomy of robotic platforms, in what is typically called "supervised autonomy," is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to human-in-the-loop robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows a user to specify high-level supervisory commands and intents, which the robotic system then interprets to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.
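
    At a toy level, the behavior/action decomposition reduces to executing an ordered chain of actions and aborting on the first failure; everything below, names included, is illustrative rather than the SURROGATE implementation.

        from typing import Callable, List

        Action = Callable[[], bool]   # an action reports success or failure

        def run_behavior(actions: List[Action]) -> bool:
            # Execute actions in sequence; all() short-circuits on the first
            # failure, so a failed action aborts the rest of the behavior.
            return all(action() for action in actions)

        # Hypothetical whole-body manipulation behavior:
        grasp = [lambda: True,    # approach object with base and torso
                 lambda: True,    # align end effector
                 lambda: True]    # close gripper
        assert run_behavior(grasp)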

    Analysis and Observations from the First Amazon Picking Challenge

    This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team's background, mechanism design, perception apparatus, and planning and control approach. We identify trends in these data, correlate them with each team's success in the competition, and discuss observations and lessons learned from the survey results and the authors' personal experiences during the challenge.

    Sensor-Based Legged Robot Homing Using Range-Only Target Localization

    This paper demonstrates a fully sensor-based reactive homing behavior on a physical quadrupedal robot, using only onboard sensors, in simple (convex obstacle-cluttered), unknown, GPS-denied environments. Its implementation is enabled by our empirical success in controlling the legged machine to approximate the (abstract) unicycle mechanics assumed by the navigation algorithm, and by our proposed method of range-only target localization using particle filters.
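
    The measurement side of range-only target localization fits a particle filter naturally, since a single range reading only constrains the target to a circle around the robot. A minimal sketch of one update step, with all noise parameters assumed:

        import numpy as np

        rng = np.random.default_rng(0)

        def pf_range_update(particles, robot_pos, measured_range, sigma=0.15):
            # Weight each target hypothesis by how well its distance to the
            # robot explains the measured range (Gaussian likelihood).
            dists = np.linalg.norm(particles - robot_pos, axis=1)
            w = np.exp(-0.5 * ((dists - measured_range) / sigma) ** 2)
            w = (w + 1e-12) / (w + 1e-12).sum()   # guard against all-zero weights
            # Resample in proportion to weight, then jitter to keep diversity.
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx] + rng.normal(0.0, 0.02, particles.shape)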
