49 research outputs found

    Dynamic Compensation Based on High-Speed Visual Feedback in Relative Coordinates

    Degree type: Course-based doctorate, University of Tokyo

    Force Sensing and Control in Micromanipulation

    Ph.D. (Doctor of Philosophy)

    A Visual Velocity Impedance Controller

    Successful object insertion systems allow the object to translate and rotate to accommodate contact forces. Compliant controllers are used in robotics to provide this accommodation; the impedance controller is one of the most researched and well-known compliant controllers used for assembly. The velocity-filtered visual impedance controller is introduced as a compliant controller that improves upon the impedance controller by adding a filter on the velocity impedance and a gain derived from the stiffness. The velocity impedance controller was found to be stable over larger ranges of stiffness values than the position-based impedance controller, making it more accurate and stable with respect to external forces. It was also found to have a better compliant response when tested on various insertion geometries in various configurations, including a key insertion acting against gravity. Finally, a novel kinetic friction cone compliance model is introduced for the velocity impedance controller. The new compliance model was determined to provide a more reliable insertion than the standard insertion model by increasing the error tolerance before failure.
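The idea of a velocity-based impedance law, accommodating a commanded velocity to a measured contact force, can be illustrated with a minimal single-axis sketch. The first-order model and all names below are illustrative assumptions, not taken from the paper; the actual controller, with its velocity filter and stiffness-derived gain, is richer.

```python
# Minimal sketch of a velocity impedance law (assumed model:
# m*dv/dt + b*(v - v_cmd) = f_ext; parameters are illustrative).
def velocity_impedance_step(v_cmd, f_ext, v_prev, m=1.0, b=10.0, dt=0.01):
    """One discrete update: accommodate the commanded velocity to f_ext.

    v_cmd  -- desired tip velocity
    f_ext  -- measured external (contact) force
    v_prev -- velocity command issued at the previous step
    """
    dv = (f_ext - b * (v_prev - v_cmd)) / m  # acceleration from the impedance model
    return v_prev + dv * dt                  # integrate one time step
```

With `f_ext = 0` the command relaxes toward `v_cmd`; a contact force steers the velocity in the force's direction, which is what lets the part comply during insertion.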

    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed the nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

    Robot Manipulators

    Robot manipulators have developed more in the direction of industrial robots than of human workers. Recently, the applications of robot manipulators have been broadening, for example Da Vinci as a medical robot, ASIMO as a humanoid robot, and so on. There are many research topics within the field of robot manipulators, e.g. motion planning, cooperation with humans, and fusion with external sensors such as vision, haptics, and force. Moreover, these include both technical problems in industry and theoretical problems in academia. This book is a collection of papers presenting the latest research issues from around the world.

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interaction with the human operator.

    Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space 1994

    The Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space (i-SAIRAS 94), held October 18-20, 1994, in Pasadena, California, was jointly sponsored by NASA, ESA, and Japan's National Space Development Agency, and was hosted by the Jet Propulsion Laboratory (JPL) of the California Institute of Technology. i-SAIRAS 94 featured presentations covering a variety of technical and programmatic topics, ranging from underlying basic technology to specific applications of artificial intelligence and robotics to space missions. i-SAIRAS 94 featured a special workshop on planning and scheduling and provided scientists, engineers, and managers with the opportunity to exchange theoretical ideas, practical results, and program plans in such areas as space mission control, space vehicle processing, data analysis, autonomous spacecraft, space robots and rovers, satellite servicing, and intelligent instruments.

    From plain visualisation to vibration sensing: using a camera to control the flexibilities in the ITER remote handling equipment

    Thermonuclear fusion is expected to play a key role in the energy market during the second half of this century, reaching 20% of the electricity generation by 2100. For many years, fusion scientists and engineers have been developing the various technologies required to build nuclear power stations allowing a sustained fusion reaction. To the maximum possible extent, maintenance operations in fusion reactors are performed manually by qualified workers in full accordance with the "as low as reasonably achievable" (ALARA) principle. However, the option of hands-on maintenance becomes impractical, difficult or simply impossible in many circumstances, such as high biological dose rates. In this case, maintenance tasks will be performed with remote handling (RH) techniques. The International Thermonuclear Experimental Reactor (ITER), to be commissioned in southern France around 2025, will be the first fusion experiment producing more power from fusion than the energy necessary to heat the plasma. Its main objective is “to demonstrate the scientific and technological feasibility of fusion power for peaceful purposes”. However, ITER represents an unequalled challenge in terms of RH system design, since it will be much more demanding and complex than any other remote maintenance system previously designed. The introduction of man-in-the-loop capabilities in the robotic systems designed for ITER maintenance would provide useful assistance during inspection, e.g. by giving the operator the ability and flexibility to locate and examine unplanned targets, or during handling operations, e.g. by making peg-in-hole tasks easier. Unfortunately, most transmission technologies able to withstand the very specific and extreme environmental conditions existing inside a fusion reactor are based on gears, screws, cables and chains, which make the whole system very flexible and subject to vibrations.
This effect is further increased because the structural parts of the maintenance equipment are generally lightweight and slender, owing to the size of the reactor and its arduous accessibility. Several methodologies aiming at avoiding or limiting the effects of vibrations on RH system performance have been investigated over the past decade. These methods often rely on vibration sensors such as accelerometers. However, a review of the market shows that there is no commercial off-the-shelf (COTS) accelerometer that meets the very specific requirements for vibration sensing in the ITER in-vessel RH equipment (resilience to a high total integrated dose, high sensitivity). The customisation and qualification of existing products, or the investigation of new concepts, might be considered; however, these options would inevitably involve high development costs. While an extensive amount of work was published on the modelling and control of flexible manipulators in the 1980s and 1990s, the possibility of using vision devices to stabilise an oscillating robotic arm has only been considered very recently, and this promising solution has not been discussed at length. In parallel, recent developments in machine vision systems for nuclear environments have been very encouraging. Although they do not deal directly with vibration sensing, they open up new prospects for the use of radiation-tolerant cameras. This thesis aims to demonstrate that vibration control of remote maintenance equipment operating in harsh environments such as ITER can be achieved without any extra sensor besides the on-board rad-hardened cameras that will inevitably be used to provide real-time visual feedback to the operators. In other words, it is proposed to consider the radiation-tolerant vision devices as full sensors providing quantitative data that can be processed by the control scheme, and not merely as plain video feedback providing qualitative information.
The work conducted within the present thesis has confirmed that methods based on the tracking of visual features from an unknown environment are effective candidates for the real-time control of vibrations. Oscillations induced at the end effector are estimated by exploiting a simple physical model of the manipulator. Using a camera mounted in an eye-in-hand configuration, this model is adjusted using direct measurement of the tip oscillations with respect to the static environment. The primary contribution of this thesis is the implementation of a markerless tracker that determines the velocity of a tip-mounted camera in an untrimmed environment in order to stabilise an oscillating long-reach robotic arm. In particular, this method involves modifying an existing online interaction-matrix estimator to make it self-adjusting, and deriving a multimode dynamic model of a flexible rotating beam. An innovative vision-based method using sinusoidal regression to sense low-frequency oscillations is also proposed and tested. Finally, the problem of online estimation of the image capture delay for visual servoing applications with high dynamics is addressed, and an original approach based on the concept of cross-correlation is presented and experimentally validated.
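The sinusoidal-regression idea mentioned above, recovering a low-frequency oscillation from sampled tip displacements, can be sketched with ordinary linear least squares when the frequency is known. This is a simplified illustration; the function name and the single-frequency model are assumptions, not the thesis's actual estimator.

```python
import numpy as np

def fit_sinusoid(t, y, omega):
    """Least-squares fit of y ~ a*sin(omega*t) + b*cos(omega*t) + c.

    Linear in (a, b, c) for a fixed omega, so no iterative solver is needed.
    Returns (amplitude, phase, offset) of y ~ amp*sin(omega*t + phase) + offset.
    """
    A = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), c
```

Because the model is linear once the frequency is fixed, each new batch of camera measurements can be fitted cheaply, which is what makes this kind of estimator attractive for real-time vibration sensing.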

    Programming by Demonstration on Riemannian Manifolds

    This thesis presents a Riemannian approach to Programming by Demonstration (PbD). It generalizes an existing PbD method from Euclidean manifolds to Riemannian manifolds. In this abstract, we review the objectives, methods and contributions of the presented approach. OBJECTIVES PbD aims at providing a user-friendly method for skill transfer between human and robot. It enables a user to teach a robot new tasks using few demonstrations. In order to surpass simple record-and-replay, methods for PbD need to 'understand' what to imitate; they need to extract the functional goals of a task from the demonstration data. This is typically achieved through the application of statistical methods. The variety of data encountered in robotics is large. Typical manipulation tasks involve position, orientation, stiffness, force and torque data. These data are not solely Euclidean. Instead, they originate from a variety of manifolds, curved spaces that are only locally Euclidean. Elementary operations, such as summation, are not defined on manifolds. Consequently, standard statistical methods are not well suited to analyze demonstration data that originate from non-Euclidean manifolds. In order to effectively extract what-to-imitate, methods for PbD should take into account the underlying geometry of the demonstration manifold; they should be geometry-aware. Successful task execution does not solely depend on the control of individual task variables. By controlling variables individually, a task might fail when one is perturbed and the others do not respond. Task execution also relies on couplings among task variables. These couplings describe functional relations which are often called synergies. In order to understand what-to-imitate, PbD methods should be able to extract and encode synergies; they should be synergetic. In unstructured environments, it is unlikely that tasks are found in the same scenario twice.
    The circumstances under which a task is executed, the task context, are more likely to differ each time it is executed. Task context does not only vary during task execution; it also varies while learning and recognizing tasks. To be effective, a robot should be able to learn, recognize and synthesize skills in a variety of familiar and unfamiliar contexts; this can be achieved when its skill representation is context-adaptive. THE RIEMANNIAN APPROACH In this thesis, we present a skill representation that is geometry-aware, synergetic and context-adaptive. The presented method is probabilistic; it assumes that demonstrations are samples from an unknown probability distribution. This distribution is approximated using a Riemannian Gaussian Mixture Model (GMM). Instead of using the 'standard' Euclidean Gaussian, we rely on the Riemannian Gaussian, a distribution akin to the Gaussian but defined on a Riemannian manifold. A Riemannian manifold is a manifold, a curved space which is locally Euclidean, that provides a notion of distance. This notion is essential for statistical methods, as such methods rely on a distance measure. Examples of Riemannian manifolds in robotics are: the Euclidean space, which is used for spatial data, forces or torques; the spherical manifolds, which can be used for orientation data defined as unit quaternions; and Symmetric Positive Definite (SPD) manifolds, which can be used to represent stiffness and manipulability. The Riemannian Gaussian is intrinsically geometry-aware. Its definition is based on the geometry of the manifold, and therefore takes into account the manifold curvature. In robotics, the manifold structure is often known beforehand. In the case of PbD, it follows from the structure of the demonstration data. Like the Gaussian distribution, the Riemannian Gaussian is defined by a mean and covariance. The covariance describes the variance and correlation among the state variables.
    These can be interpreted as local functional couplings among state variables: synergies. This makes the Riemannian Gaussian synergetic. Furthermore, information encoded in multiple Riemannian Gaussians can be fused using the Riemannian product of Gaussians. This feature allows us to construct a probabilistic context-adaptive task representation. CONTRIBUTIONS In particular, this thesis presents a generalization of existing methods of PbD, namely GMM-GMR and TP-GMM. This generalization involves the definition of the Maximum Likelihood Estimate (MLE), Gaussian conditioning and the Gaussian product for the Riemannian Gaussian, and the definition of Expectation-Maximization (EM) and Gaussian Mixture Regression (GMR) for the Riemannian GMM. In this generalization, we contributed by proposing to use parallel transport for Gaussian conditioning. Furthermore, we presented a unified approach to solve the aforementioned operations using a Gauss-Newton algorithm. We demonstrated how synergies, encoded in a Riemannian Gaussian, can be transformed into synergetic control policies using standard methods for the Linear Quadratic Regulator (LQR). This is achieved by formulating the LQR problem in a (Euclidean) tangent space of the Riemannian manifold. Finally, we demonstrated how the context-adaptive Task-Parameterized Gaussian Mixture Model (TP-GMM) can be used for context inference, the ability to extract context from demonstration data of known tasks. Our approach is the first attempt at context inference in the light of TP-GMM. Although effective, we showed that it requires further improvements in terms of speed and reliability. The efficacy of the Riemannian approach is demonstrated in a variety of scenarios. In shared control, the Riemannian Gaussian is used to represent control intentions of a human operator and an assistive system. Doing so, the properties of the Gaussian can be employed to mix their control intentions.
    This yields shared-control systems that continuously re-evaluate and assign control authority based on input confidence. The context-adaptive TP-GMM is demonstrated in a Pick & Place task with changing pick and place locations, a box-taping task with changing box sizes, and a trajectory tracking task typically found in industry.
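The geometry-aware statistics described in this abstract rest on the manifold exponential and logarithmic maps, which replace straight-line averaging on curved spaces. As an illustration only (the function names are hypothetical and this is not the thesis's code), a Fréchet mean on the unit sphere S² can be computed by iterating those two maps:

```python
import numpy as np

def log_map(mu, x):
    """Sphere logarithmic map: tangent vector at mu pointing toward x."""
    d = np.clip(np.dot(mu, x), -1.0, 1.0)
    theta = np.arccos(d)                 # geodesic distance mu -> x
    if theta < 1e-12:
        return np.zeros_like(mu)
    v = x - d * mu                       # component of x orthogonal to mu
    return theta * v / np.linalg.norm(v)

def exp_map(mu, v):
    """Sphere exponential map: point reached from mu along tangent vector v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return mu
    return np.cos(n) * mu + np.sin(n) * v / n

def frechet_mean(points, iters=20):
    """Geometry-aware mean: average log-mapped points, step via the exp map."""
    mu = points[0]
    for _ in range(iters):
        g = np.mean([log_map(mu, p) for p in points], axis=0)
        mu = exp_map(mu, g)
    return mu
```

In Euclidean space this reduces to the arithmetic mean in one step; on the sphere it accounts for curvature, which is the basic property the Riemannian Gaussian and its EM estimation build upon.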

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Material Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. The SOAR differed from most other conferences in that it was concerned with Government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines presented by experts from NASA, the USAF, universities, and industry.