2,027 research outputs found

    Modeling and control of robotic yo-yo with visual feedback

    A yo-yo is a toy made of two thick circular pieces of wood, plastic, etc., connected by a short axle, that can be made to run up and down a string tied to it. Humans can play with a yo-yo without difficulty. However, developing a robot system that can play with a yo-yo presents a significant challenge to controller design, because the dynamics of the yo-yo are difficult to model precisely. Moreover, since the dynamics are not continuous and there are integral-type constraints on the hand trajectory, conventional continuous-time feedback control theory does not work well. This paper presents a model and a control scheme for robotic yo-yo with visual feedback. Experiments on a PUMA 560 are carried out to evaluate the validity of the discrete-time formulation and the controller design.
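
    A minimal sketch of the discrete-time idea, under assumed dynamics: the vision system is supposed to return the spin speed omega at each bottom dead point, and a toy linear cycle-to-cycle model omega_{k+1} = a*omega_k + b*u_k is supposed to link it to the hand-pull amplitude u_k. None of the symbols, gains, or the model itself are taken from the paper.

    class CycleController:
        """One control decision per yo-yo cycle, taken at the bottom dead point."""

        def __init__(self, omega_ref, gain, u_nominal):
            self.omega_ref = omega_ref   # desired spin speed at the bottom [rad/s]
            self.gain = gain             # discrete (cycle-to-cycle) feedback gain
            self.u_nominal = u_nominal   # nominal hand-pull amplitude [m]

        def pull_amplitude(self, omega_measured):
            # Correct the nominal pull in proportion to the error seen by vision.
            return self.u_nominal + self.gain * (self.omega_ref - omega_measured)

    # Toy cycle-to-cycle model (assumed): omega_{k+1} = a*omega_k + b*u_k
    a, b = 0.8, 40.0
    ctrl = CycleController(omega_ref=60.0, gain=0.02, u_nominal=60.0 * (1 - a) / b)
    omega = 30.0
    for k in range(8):
        u = ctrl.pull_amplitude(omega)   # hand pull chosen from the vision measurement
        omega = a * omega + b * u        # spin speed at the next bottom dead point
        print(f"cycle {k}: pull {u:.3f} m, spin {omega:.1f} rad/s")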

    Evaluation of automated decision making methodologies and development of an integrated robotic system simulation: Study results

    The implementation of a generic computer simulation for manipulator systems (ROBSIM) is described. The program is written in FORTRAN and allows the user to: (1) interactively define a manipulator system consisting of multiple arms, load objects, targets, and an environment; (2) request graphic display or replay of manipulator motion; (3) investigate and simulate various control methods, including manual force/torque and active compliance control; and (4) perform kinematic analysis, requirements analysis, and response simulation of manipulator motion. Previous reports described the algorithms and procedures for using ROBSIM; this report supersedes them and describes the additional features that were added: (1) the ability to define motion profiles and compute loads on a common base to which manipulator arms are attached; (2) the capability to accept data describing manipulator geometry from a Computer Aided Design database in the Initial Graphics Exchange Specification (IGES) format; (3) a manipulator control algorithm derived from processing the TV image of known reference points on a target; and (4) a vocabulary of simple high-level task commands that can be used to define task scenarios.
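
    As an illustration of the last feature, the fragment below sketches how a small vocabulary of high-level task commands could be dispatched to lower-level simulation routines. The command names and the simulator interface are invented for the sketch; ROBSIM itself is written in FORTRAN and its actual command set is not reproduced here.

    from dataclasses import dataclass

    @dataclass
    class Command:
        name: str          # high-level verb, e.g. "MOVE_ARM"
        args: tuple = ()   # arguments passed to the underlying routine

    def run_scenario(commands, simulator):
        """Dispatch each high-level task command to a simulator routine."""
        dispatch = {
            "MOVE_ARM": simulator.move_arm,       # joint- or Cartesian-space motion
            "GRASP":    simulator.close_gripper,  # grasp a load object
            "RELEASE":  simulator.open_gripper,
            "REPLAY":   simulator.replay_motion,  # graphic replay of recorded motion
        }
        for cmd in commands:
            dispatch[cmd.name](*cmd.args)

    # A task scenario is then just an ordered list of commands:
    scenario = [Command("MOVE_ARM", ("approach_pose",)), Command("GRASP"),
                Command("MOVE_ARM", ("place_pose",)), Command("RELEASE")]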

    Weighted feature selection criteria for visual servoing of a telerobot

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real-time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies are devised for the automatic selection of the image features used to visually control the relative position between an eye-in-hand telerobot and a known object. A weighted criteria function with both image recognition and control components is used to select the combination of image features that provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
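
    The selection step can be pictured as scoring every candidate feature subset with a weighted sum of a recognition term and a control term, then keeping the best subset. The sketch below uses illustrative stand-ins: per-feature recognition scores, one image-Jacobian row per feature, and an inverse-condition-number proxy for controllability; the actual criteria and weights used in the paper are not reproduced.

    from itertools import combinations
    import numpy as np

    def controllability_score(jacobian_subset):
        # Better-conditioned image Jacobians give better control; use the inverse
        # condition number (1 = ideal, 0 = degenerate) as a simple proxy.
        s = np.linalg.svd(jacobian_subset, compute_uv=False)
        return s[-1] / s[0] if s[0] > 0 else 0.0

    def select_features(features, jacobian_rows, recognition_scores,
                        subset_size=3, w_recog=0.5, w_ctrl=0.5):
        best, best_score = None, -np.inf
        for subset in combinations(range(len(features)), subset_size):
            recog = np.mean([recognition_scores[i] for i in subset])
            ctrl = controllability_score(jacobian_rows[list(subset), :])
            score = w_recog * recog + w_ctrl * ctrl   # weighted criteria function
            if score > best_score:
                best, best_score = subset, score
        return [features[i] for i in best], best_score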

    Robotic Micromanipulation and Microassembly using Mono-view and Multi-scale visual servoing.

    This paper investigates sequential robotic micromanipulation and microassembly for building 3-D microsystems and devices. A mono-view, multiple-scale 2-D visual control scheme is implemented for that purpose. The imaging system is a photon video microscope equipped with an active zoom that enables work at multiple scales. It is modelled by a non-linear projective method in which the relation between the focal length and the zoom factor is explicitly established. A distributed robotic system (xy system, z system) with a two-finger gripping system is used in conjunction with the imaging system. The results of experiments demonstrate the relevance of the proposed approaches. The tasks were performed with an accuracy of 1.4 µm for the positioning error and 0.5° for the orientation error.
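
    A rough picture of such a scheme is an image-based visual servoing loop whose interaction matrix depends on the current focal length, itself a function of the zoom factor. The sketch below uses an assumed zoom-to-focal-length mapping and the generic IBVS law v = -lambda * pinv(L) * (s - s*), restricted to translational motion; it is not the authors' calibrated model or control law.

    import numpy as np

    def focal_length(zoom, f0=10e-3, alpha=0.35):
        # Assumed smooth, non-linear dependence of focal length on the zoom factor.
        return f0 * (1.0 + alpha * zoom) ** 2

    def interaction_matrix(x, y, Z, f):
        # Image Jacobian of one point feature, translational camera motion only,
        # written in terms of the focal length f and the feature depth Z.
        return np.array([[-f / Z, 0.0,    x / Z],
                         [0.0,   -f / Z,  y / Z]])

    def ibvs_step(points, desired, Z, zoom, gain=0.5):
        f = focal_length(zoom)
        L = np.vstack([interaction_matrix(x, y, Z, f) for x, y in points])
        error = (np.asarray(points) - np.asarray(desired)).ravel()
        return -gain * np.linalg.pinv(L) @ error   # commanded xyz camera velocity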

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to have deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware, rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool that is mechanically simple and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases, and obtained results significantly better than, or comparable to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system highly depends on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.
We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot in which the user's affect is recognized by a human (WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between human and empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
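
    To make the expression/valence-arousal setup concrete, the sketch below (PyTorch, which is an assumption; the dissertation's actual architectures are not reproduced) collapses the idea into a single backbone with two heads, whereas the dissertation trained two separate networks: a softmax classifier for the discrete expression categories and a regressor for continuous valence and arousal, trained with a combined cross-entropy and mean-squared-error loss.

    import torch
    import torch.nn as nn

    class FERNet(nn.Module):
        def __init__(self, n_expressions=8):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.expression_head = nn.Linear(128, n_expressions)  # discrete categories
            self.va_head = nn.Linear(128, 2)                      # valence, arousal

        def forward(self, images):
            feats = self.backbone(images)
            return self.expression_head(feats), self.va_head(feats)

    # Joint loss: cross-entropy on expression labels plus MSE on valence/arousal.
    model = FERNet()
    logits, va = model(torch.randn(4, 3, 96, 96))
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 8, (4,))) \
           + nn.functional.mse_loss(va, torch.empty(4, 2).uniform_(-1, 1))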

    High-speed robotic manipulation for rotation control using a visual encoder

    Degree type: Doctoral (course-based). Examination committee: (Chair) Professor Masatoshi Ishikawa, The University of Tokyo; Professor Hiroyuki Shinoda, The University of Tokyo; Professor Koji Ikuta, The University of Tokyo; Professor Masahiko Inami, The University of Tokyo; Lecturer Yoshihiro Watanabe, The University of Tokyo. The University of Tokyo (東京大学)