
    Machine vision for space telerobotics and planetary rovers

    Machine vision allows a non-contact means of determining the three-dimensional shape of objects in the environment, enabling the control of contact forces when manipulation by a telerobot or traversal by a vehicle is desired. Telerobotic manipulation in Earth orbit requires a system that can recognize known objects in spite of harsh lighting conditions and highly specular or absorptive surfaces. Planetary surface traversal requires a system that can recognize the surface shape and properties of an unknown and arbitrary terrain. Research on these two rather disparate types of vision systems is described.
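The non-contact shape recovery described above typically rests on stereo triangulation: depth follows from the disparity between matched pixels in a rectified camera pair. A minimal sketch, with illustrative focal length, baseline, and disparity values (not taken from this work):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point with 20 px disparity, seen by a 700 px focal-length pair
# separated by a 0.1 m baseline, lies 3.5 m away.
z = depth_from_disparity(700.0, 0.1, 20.0)
```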

    Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion

    Fuzzy reactive piloting for continuous driving of long range autonomous planetary micro-rovers

    Abstract — A complete piloting control subsystem for a highly autonomous long-range rover will be defined in order to identify the key control functions needed to achieve continuous driving. This capability can maximize the range and number of interesting scientific sites visited during the limited lifetime of a planetary rover. To achieve continuous driving, a complete set of techniques has been employed: fuzzy-based control, real-time artificial intelligence reasoning, fast and robust rover position estimation based on odometry and angular rate sensing, efficient grid-based stereo vision elevation maps, and fast reaction and planning for obstacle detection and obstacle avoidance based on a simple IF-THEN expert system with fuzzy reasoning. To quickly design and implement these techniques, graphical programming has been used to build a fully autonomous piloting system using jus
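The IF-THEN fuzzy reasoning mentioned above can be illustrated in miniature: an obstacle distance is fuzzified into NEAR/FAR memberships, and the steering correction is the weighted blend of each rule's output. The membership breakpoints and rule outputs below are invented for illustration, not taken from the paper:

```python
def near(d: float) -> float:
    # Membership in "obstacle is NEAR": 1 at 0 m, falling to 0 beyond 2 m.
    return max(0.0, min(1.0, (2.0 - d) / 2.0))

def far(d: float) -> float:
    # Complement membership: fully FAR beyond 2 m.
    return 1.0 - near(d)

def steering_correction(distance_m: float) -> float:
    """IF near THEN turn hard (30 deg); IF far THEN go straight (0 deg).
    Defuzzify as the weighted average of the rule outputs."""
    w_near, w_far = near(distance_m), far(distance_m)
    return (w_near * 30.0 + w_far * 0.0) / (w_near + w_far)

steering_correction(1.0)  # obstacle at 1 m -> 15 degrees of correction
```

Because the two memberships are complementary, their weights always sum to one, so the defuzzification never divides by zero.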

    Instructing Hierarchical Tasks to Robots by Verbal Commands

    Natural language is an effective tool for communication, as information can be expressed in different ways and at different levels of complexity. Verbal commands, utilized for instructing robot tasks, can therefore replace traditional robot programming techniques, and provide a more expressive means to assign actions and enable collaboration. However, the challenge of utilizing speech for robot programming is how actions and targets can be grounded to physical entities in the world. In addition, to be time-efficient, a balance needs to be found between fine- and coarse-grained commands and natural language phrases. In this work we provide a framework for instructing tasks to robots by verbal commands. The framework includes functionalities for mapping single commands to actions and targets, as well as longer-term sequences of actions, thereby providing a hierarchical structure to the robot tasks. Experimental evaluation demonstrates the functionalities of the framework through human collaboration with a robot in different tasks, with different levels of complexity. The tools are provided open-source at https://petim44.github.io/voice-jogger/. Comment: 7 pages, accepted to the 16th IEEE/SICE International Symposium on System Integration
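The grounding problem the abstract raises — mapping spoken words to physical actions and targets — can be sketched with a toy vocabulary. The action synonyms and object names below are hypothetical examples, not the paper's actual grammar:

```python
# Canonicalize verbs via a synonym table; targets must match known objects.
ACTION_SYNONYMS = {"pick": "pick_up", "grab": "pick_up", "take": "pick_up",
                   "place": "place", "put": "place", "move": "move"}
KNOWN_OBJECTS = {"red block", "blue block", "bin"}

def ground_command(utterance: str):
    """Map a spoken phrase to a canonical action and a known physical target."""
    text = utterance.lower()
    words = text.split()
    action = next((ACTION_SYNONYMS[w] for w in words if w in ACTION_SYNONYMS), None)
    target = next((obj for obj in KNOWN_OBJECTS if obj in text), None)
    if action is None or target is None:
        return None  # ungroundable: the robot should ask the user to rephrase
    return action, target

ground_command("grab the red block")  # -> ("pick_up", "red block")
```

A real system would replace the substring match with perception-driven object recognition, but the structure — canonical actions plus a set of groundable entities — stays the same.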

    Developing A Robot to Improve The Accuracy of Ring Retrieval and Throwing at The ABU Robocon Indonesia Robot Competition

    This article outlines the creation and application of a technologically improved robot designed to amplify the precision and effectiveness of ring retrieval and projection tasks in the ABU Robocon Indonesia Robot Challenge. The ABU Robocon competition is an annual event that tasks teams with crafting robots capable of accomplishing specific assignments under a predetermined time limit. The ring retrieval and projection task, historically known for its precision requirements, has proven to be quite demanding. Our strategy entailed the incorporation of cutting-edge technologies into the robot's design, encompassing computer vision and machine learning algorithms, to augment its accuracy and performance. We equipped the robot with cameras and sensors for the detection and analysis of ring positions and orientations. Real-time decisions regarding the optimal approach for retrieving and accurately projecting the rings were made using machine learning models that had undergone training. The outcomes of our experiments reveal a marked enhancement in the robot's performance when compared to conventional methods. The tech-enhanced robot consistently exhibited a heightened success rate when performing ring retrieval and projection tasks. This development not only boosts the competitiveness of our robot in the ABU Robocon competition but also underscores the potential of advanced technologies in enhancing the performance of robotics systems when confronted with intricate tasks.
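The physics behind the projection side of the task can be sketched from ideal projectile motion: given a target range, the required launch speed at a fixed angle follows from the range equation, assuming no drag and equal launch and landing heights. The 45-degree angle and 4 m range are illustrative; a competition robot would calibrate these empirically:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(range_m: float, angle_deg: float) -> float:
    """Range equation R = v^2 * sin(2a) / g  =>  v = sqrt(g * R / sin(2a))."""
    return math.sqrt(G * range_m / math.sin(math.radians(2 * angle_deg)))

v = launch_speed(4.0, 45.0)  # ~6.26 m/s to reach a pole 4 m away
```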

    Human factors in space telepresence

    The problems of interfacing a human with a teleoperation system for work in space are discussed. Much of the information presented here is the result of experience gained by the M.I.T. Space Systems Laboratory during the past two years of work on the ARAMIS (Automation, Robotics, and Machine Intelligence Systems) project. Many factors impact the design of the man-machine interface for a teleoperator. The effects of each are described in turn. An annotated bibliography gives the key references that were used. No conclusions are presented as a best design, since much depends on the particular application desired, and the relevant technology is swiftly changing.

    Method and system for providing autonomous control of a platform

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy, ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).
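The camera hand-off described above amounts to re-expressing a target point from the navigation-camera frame in the hazard-camera frame via a known rigid transform between the two. A minimal planar sketch; the rotation and translation values are illustrative, not from the patent:

```python
import math

def transform_point(point, yaw_rad, translation):
    """Apply a planar rigid transform: rotate about z, then translate."""
    x, y, z = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    tx, ty, tz = translation
    return (c * x - s * y + tx, s * x + c * y + ty, z + tz)

# Target 5 m ahead in the navcam frame; the hazcam frame sits 0.5 m
# forward of the navcams with the same orientation.
target_hazcam = transform_point((5.0, 0.0, 0.0), 0.0, (-0.5, 0.0, 0.0))
# -> (4.5, 0.0, 0.0): the same target, now 4.5 m ahead of the hazcams
```

In practice the transform would come from the rover's calibrated camera models and be updated as the base moves, which is what lets a single visually-specified target survive the hand-off between camera sets.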