
    Recent Developments in Aerial Robotics: A Survey and Prototypes Overview

    In recent years, research and development in aerial robotics (i.e., unmanned aerial vehicles, UAVs) has been growing at an unprecedented speed, and there is a need to summarize the background, latest developments, and trends of UAV research. Along with a general overview of the definition, types, categories, and topics of UAVs, this work describes a systematic way to identify 1,318 high-quality UAV papers from more than thirty thousand that have appeared in the top journals and conferences. On top of that, we provide a bird's-eye view of UAV research since 2001 by summarizing various statistical information, such as the year, type, and topic distribution of the UAV papers. We make our survey list public and believe that it can not only help researchers identify, study, and compare their work, but is also useful for understanding research trends in the field. From our survey results, we find there are many types of UAV and, to the best of our knowledge, no literature has attempted to summarize all of them in one place. With our survey list, we explain the types covered by the survey and outline the recent progress of each. We believe this summary can enhance readers' understanding of UAVs and inspire researchers to propose new methods and new applications. Comment: 14 pages, 16 figures, typos corrected

    Visual end-effector tracking using a 3D model-aided particle filter for humanoid robot platforms

    This paper addresses recursive markerless estimation of a robot's end-effector pose using visual observations from its cameras. The problem is formulated in the Bayesian framework and addressed using Sequential Monte Carlo (SMC) filtering. We use a 3D rendering engine and Computer Aided Design (CAD) schematics of the robot to virtually create images from the robot's camera viewpoints. These images are then used to extract information and estimate the pose of the end-effector. To this end, we developed a particle filter that estimates the position and orientation of the robot's end-effector, using Histogram of Oriented Gradients (HOG) descriptors to capture robust characteristic features of shapes in both camera and rendered images. We implemented the algorithm on the iCub humanoid robot and employed it in a closed-loop reaching scenario. We demonstrate that the tracking is robust to clutter, compensates for errors in the robot kinematics, and enables servoing the arm in closed loop using vision.
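The filtering loop this abstract describes — predict particle poses with a motion model, weight each particle by comparing HOG features from the camera against HOG features of a CAD-rendered view at that pose, then resample — can be sketched as follows. This is a toy illustration, not the paper's implementation: the `render` argument stands in for the whole rendering-plus-HOG pipeline, and the noise and kernel parameters are invented for the example.

```python
import numpy as np

def hog_likelihood(observed, rendered, sigma=0.1):
    """Gaussian likelihood on the distance between the HOG feature vector
    extracted from the camera image and the one from the rendered view."""
    d = np.linalg.norm(observed - rendered)
    return np.exp(-0.5 * (d / sigma) ** 2)

def smc_step(particles, observed, render, rng, motion_noise=0.01):
    """One predict/update/resample cycle of a bootstrap particle filter
    over end-effector pose hypotheses (one per row of `particles`)."""
    # Predict: random-walk diffusion of each pose hypothesis.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: weight each particle by feature similarity to the observation.
    w = np.array([hog_likelihood(observed, render(p)) for p in particles])
    w /= w.sum()
    # Resample: multinomial resampling in proportion to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

With an identity `render` (i.e., features equal the pose itself), a few iterations collapse an initially uniform particle cloud onto the true pose; in the paper's setting the same loop runs with real HOG features and CAD renders.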

    Fuzzy Logic of Speed and Steering Control System for Three Dimensional Line Following of an Autonomous Vehicle

    ... This paper describes exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic (FL) [8] [9] approach for steering and speed control [37], an FL approach for ultrasound sensing, and an overall expert system for guidance. The advantages of a modular system are related to portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test bed has been constructed at the University of Cincinnati using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised through a multi-axis motion controller. The obstacle avoidance system is based on a microcontroller interfaced with ultrasonic transducers. This microcontroller independently handles all timing and distance calculations and sends distance information back to the fuzzy logic controller via the serial line. This design yields a portable, independent system in which high-speed computer communication is not necessary. Vision guidance has been accomplished with the use of CCD cameras judging the current position of the robot. [34] [35] [36] The vision system generates a reliable image to reduce erroneous commands in ground coordinates, to tackle the parameter uncertainties of the system, and to obtain good WMR (wheeled mobile robot) dynamic response. [1] Here we apply a 3D line-following methodology: it transforms from 3D to 2D and maps the image coordinates and vice versa, leading to improved accuracy of the WMR position. ... Comment: IEEE Publication format, International Journal of Computer Science and Information Security, IJCSIS, Vol. 7 No. 3, March 2010, USA. ISSN 1947 5500, http://sites.google.com/site/ijcsis
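The fuzzy-control idea behind the steering module — fuzzify the tracking error into overlapping linguistic sets, fire simple rules, and defuzzify to a crisp command — can be sketched as below. The membership ranges, rule table, and output scale are illustrative assumptions, not the controller described in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(lateral_error):
    """Map a lateral line-tracking error (metres, positive = right of the
    line) to a steering command in [-1, 1] via three fuzzy sets and a
    weighted-average (centroid-style) defuzzification."""
    # Fuzzification: degree to which the error is Left / Centred / Right.
    mu_left  = tri(lateral_error, -2.0, -1.0, 0.0)
    mu_mid   = tri(lateral_error, -1.0,  0.0, 1.0)
    mu_right = tri(lateral_error,  0.0,  1.0, 2.0)
    # Rule consequents: left error -> steer right, centred -> straight,
    # right error -> steer left.
    mus, outs = [mu_left, mu_mid, mu_right], [1.0, 0.0, -1.0]
    denom = sum(mus)
    return sum(m * o for m, o in zip(mus, outs)) / denom if denom else 0.0
```

The smooth blending between rules is what gives fuzzy steering its gentle response near the line: an error of half a metre yields half-strength steering rather than a bang-bang correction.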

    Quasi-Direct Drive for Low-Cost Compliant Robotic Manipulation

    Robots must cost less and be force-controlled to enable widespread, safe deployment in unconstrained human environments. We propose Quasi-Direct Drive actuation as a capable paradigm for low-cost, force-controlled robotic manipulation in human environments. Our prototype, Blue, is a human-scale 7-degree-of-freedom arm with a 2 kg payload that can cost less than $5000. We show that Blue has dynamic properties that meet or exceed the needs of human operators: the robot has a nominal position-control bandwidth of 7.5 Hz and repeatability within 4 mm. We demonstrate a Virtual Reality based interface that can be used as a method for telepresence and for collecting robot training demonstrations. Manufacturability, scaling, and potential use-cases for the Blue system are also addressed. Videos and additional information can be found online at berkeleyopenarms.github.io. Comment: This is our long version - 8 pages. Our 6 page version without a discussion of thermal limits was accepted to ICRA 2019. 11 figures

    Understanding Human Motion and Gestures for Underwater Human-Robot Collaboration

    In this paper, we present a number of robust methodologies for an underwater robot to visually detect, follow, and interact with a diver for collaborative task execution. We design and develop two autonomous diver-following algorithms, the first of which utilizes both spatial- and frequency-domain features pertaining to human swimming patterns in order to visually track a diver. The second algorithm uses a convolutional neural network-based model for robust tracking-by-detection. In addition, we propose a hand gesture-based human-robot communication framework that is syntactically simpler and computationally more efficient than the existing grammar-based frameworks. In the proposed interaction framework, deep visual detectors are used to provide accurate hand gesture recognition; subsequently, a finite-state machine performs robust and efficient gesture-to-instruction mapping. The distinguishing feature of this framework is that it can be easily adopted by divers for communicating with underwater robots without using artificial markers or requiring memorization of complex language rules. Furthermore, we validate the performance and effectiveness of the proposed methodologies through extensive field experiments in closed- and open-water environments. Finally, we perform a user interaction study to demonstrate the usability benefits of our proposed interaction framework compared to existing methods. Comment: arXiv admin note: text overlap with arXiv:1709.0877
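The gesture-to-instruction step can be pictured as a small finite-state machine: detector outputs drive state transitions, and instructions are emitted only after a confirmed sequence, so spurious detections are simply ignored. The states, gesture vocabulary, and instruction names below are hypothetical; the paper's actual mapping is not specified here.

```python
# Hypothetical transition table: (state, gesture) -> (next_state, instruction).
TRANSITIONS = {
    ("IDLE", "ok"): ("AWAIT_CMD", None),
    ("AWAIT_CMD", "left"): ("AWAIT_CONFIRM", "TURN_LEFT"),
    ("AWAIT_CMD", "right"): ("AWAIT_CONFIRM", "TURN_RIGHT"),
    ("AWAIT_CONFIRM", "ok"): ("IDLE", "EXECUTE"),
    ("AWAIT_CONFIRM", "stop"): ("IDLE", "CANCEL"),
}

def step(state, gesture):
    """Advance the FSM on one detected hand gesture; unknown gestures
    leave the state unchanged, which makes the mapping robust to
    detector noise."""
    return TRANSITIONS.get((state, gesture), (state, None))

def map_gestures(gestures):
    """Fold a detected gesture sequence into the emitted instructions."""
    state, out = "IDLE", []
    for g in gestures:
        state, instr = step(state, g)
        if instr:
            out.append(instr)
    return out
```

Because unrecognized detections fall through to a self-loop, a noisy sequence such as `["wave", "ok", "left", "blur", "ok"]` produces the same instruction stream as the clean one.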

    Adjustable impedance, force feedback and command language aids for telerobotics (parts 1-4 of an 8-part MIT progress report)

    Projects recently completed or in progress at the MIT Man-Machine Systems Laboratory are summarized. (1) A 2-part impedance network model of a single-degree-of-freedom remote manipulation system is presented, in which a human operator at the master port interacts with a task object at the slave port in a remote location. (2) The extension of the predictor concept to include force feedback and dynamic modeling of the manipulator and the environment is addressed. (3) A system was constructed to infer intent from the operator's commands and the teleoperation context, and to generalize this information to interpret future commands. (4) A command language system is being designed that is robust, easy to learn, and supports more natural man-machine communication. A general telerobot problem selected as an important command language context is finding a collision-free path for a robot.

    What Communication Modalities Do Users Prefer in Real Time HRI?

    This paper investigates users' preferred interaction modalities when playing an imitation game with KASPAR, a small child-sized humanoid robot. The study involved 16 adult participants teaching the robot to mime a nursery rhyme via one of three interaction modalities in a real-time Human-Robot Interaction (HRI) experiment: voice, guiding touch, and visual demonstration. The findings suggest that the users had no preference in terms of human effort for completing the task. However, there was a significant difference across input modalities in human enjoyment, and a marginal difference in the robot's perceived ability to imitate. Comment: 5th International Symposium on New Frontiers in Human-Robot Interaction 2016 (arXiv:1602.05456

    Visual servoing

    Master's thesis, Master of Engineering

    Towards a Framework for Embodying Any-Body through Sensory Translation and Proprioceptive Remapping: A Pilot Study

    We address the problem of physical avatar embodiment and investigate the most general factors that may allow a person to “wear” another body, different from her own. A general approach is required to exploit the fact that an avatar can have any kind of body. With this pilot study we introduce a conceptual framework for the design of non-anthropomorphic embodiment, to foster immersion and user engagement. The person is interfaced with the avatar, a robot, through a system that induces a divergent internal sensorimotor mapping while controlling the avatar, to create an immersive experience. Together with the conceptual framework, we present two implementations: a prototype tested in the lab and an interactive installation exhibited to the general public. These implementations consist of a wheeled robot and control and sensory feedback systems. The control system includes mechanisms that both detect and resist the user’s movement, increasing the sense of connection with the avatar; the feedback system is a virtual reality (VR) environment representing the avatar’s unique perception, combining sensor and control information to generate visual cues. Data gathered from users indicate that systems implemented following the proposed framework create a challenging and engaging experience, thus providing solid ground for further developments.

    Compare Contact Model-based Control and Contact Model-free Learning: A Survey of Robotic Peg-in-hole Assembly Strategies

    In this paper, we present an overview of robotic peg-in-hole assembly and analyze two main strategies: contact model-based and contact model-free strategies. More specifically, we first introduce the contact model-based control approaches, which comprise two steps: contact state recognition and compliant control. Additionally, we focus on a comprehensive analysis of the whole robotic assembly system. Second, without the contact state recognition process, we decompose the contact model-free learning algorithms into two main subfields: learning from demonstrations and learning from environments (mainly based on reinforcement learning). For each subfield, we survey the landmark studies and ongoing research to compare the different categories. We hope to strengthen the relation between these two research communities by revealing the underlying links. Finally, the remaining challenges and open questions in the field of robotic peg-in-hole assembly are discussed, and promising directions and potential future work are also considered.
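As a minimal illustration of the "learning from environments" branch, tabular Q-learning on a toy one-dimensional alignment task shows the basic loop: act, observe a reward on contact, update a value table, and read off a greedy insertion policy. The task, reward values, and hyperparameters below are invented for the sketch and are far simpler than any real peg-in-hole learning setup.

```python
import random

def train_peg_alignment(width=5, episodes=300, seed=0):
    """Tabular Q-learning on a toy 1-D peg-in-hole alignment task:
    states are lateral offsets -width..width, the two actions nudge the
    peg left or right, and insertion succeeds only at zero offset."""
    rng = random.Random(seed)
    actions = (-1, 1)
    q = {(s, a): 0.0 for s in range(-width, width + 1) for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = rng.choice([x for x in range(-width, width + 1) if x != 0])
        while s != 0:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda a_: q[(s, a_)])
            s2 = max(-width, min(width, s + a))
            r = 1.0 if s2 == 0 else -0.1  # reward only on insertion
            best_next = 0.0 if s2 == 0 else max(q[(s2, a_)] for a_ in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def greedy_policy(q, s):
    """Greedy action for offset s under the learned value table."""
    return max((-1, 1), key=lambda a: q[(s, a)])
```

After training, the greedy policy moves the peg toward zero offset from either side, which is the tabular analogue of a learned alignment strategy.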