
    A robotics approach for interpreting the gaze-related modulation of the activity of premotor neurons during reaching

    This paper deals with the modeling of the activity of premotor neurons associated with the execution of a visually guided reaching movement in primates. We address this question from a robotics point of view, by considering a simplified kinematic model of the head, eye, and arm joints. Using the formalism of visual servoing, we show that the hand controller depends on the direction of the head and the eye as soon as the hand-target difference vector is expressed in eye-centered coordinates. Based on this result, we propose a new interpretation of previous electrophysiological recordings in monkeys showing the existence of a gaze-related modulation of the activity of premotor neurons during reaching. This approach sheds new light on a phenomenon which, so far, has not been clearly understood.
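    The gaze dependence derived in the paper can be illustrated numerically. The following minimal sketch (our illustration, not the paper's model) re-expresses a fixed hand-target difference vector in eye-centered coordinates and shows that its components vary with head and eye angles:

```python
# A minimal numerical sketch (not the paper's model): for a fixed hand and
# target in the world, the hand-target difference vector re-expressed in
# eye-centered coordinates changes with head and eye direction, so a
# controller (or neuron) encoding this vector is necessarily gaze-modulated.
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def error_in_eye_frame(hand_w, target_w, head_angle, eye_angle):
    # Gaze = head rotation composed with eye-in-head rotation (simplified
    # here to rotations about a single axis, as in a planar kinematic model).
    R_world_to_eye = (rot_z(head_angle) @ rot_z(eye_angle)).T
    return R_world_to_eye @ (target_w - hand_w)

hand = np.array([0.2, 0.0, 0.0])
target = np.array([0.5, 0.3, 0.0])
for head, eye in [(0.0, 0.0), (0.3, 0.0), (0.3, -0.2)]:
    # Same hand and target, different gaze: the eye-frame error changes.
    print(head, eye, error_in_eye_frame(hand, target, head, eye))
```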

    A Short Review Of Neural Network Techniques In Visual Servoing Of Robotic Manipulators.

    Robotics is one of the most challenging applications of soft computing techniques. It is characterized by direct interaction with the real world, sensory feedback, and a complex control system.

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
    Comment: 35 pages, 13 figures
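    The PPS graph lends itself to a compact data structure. A simplified sketch is given below; the names and representation are illustrative assumptions, not the paper's implementation:

```python
# A simplified sketch of the peripersonal-space (PPS) graph: nodes pair an arm
# configuration (joint angles) with the observed hand position, edges record
# safe movements, and a breadth-first search recovers a safe trajectory
# between poses. Illustrative only, not the authors' code.
from collections import deque

class PPSGraph:
    def __init__(self):
        self.nodes = {}   # node id -> (joint_angles, visual_hand_position)
        self.edges = {}   # node id -> set of neighbor ids (safe moves)

    def add_node(self, node_id, joint_angles, hand_position):
        self.nodes[node_id] = (tuple(joint_angles), tuple(hand_position))
        self.edges.setdefault(node_id, set())

    def add_safe_move(self, a, b):
        # Safe movements are treated as traversable in both directions.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def safe_trajectory(self, start, goal):
        """Breadth-first search for a safe path of arm poses."""
        frontier, parents = deque([start]), {start: None}
        while frontier:
            node = frontier.popleft()
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            for nxt in self.edges[node]:
                if nxt not in parents:
                    parents[nxt] = node
                    frontier.append(nxt)
        return None  # no safe trajectory known between these poses

g = PPSGraph()
for i, angles in enumerate([(0.0, 0.2), (0.1, 0.3), (0.2, 0.5)]):
    g.add_node(i, angles, hand_position=(i * 0.1, 0.4))
g.add_safe_move(0, 1)
g.add_safe_move(1, 2)
print(g.safe_trajectory(0, 2))  # [0, 1, 2]
```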

    Editorial for the Special Issue Recognition Robotics

    Perception of the environment is an essential skill for robotic applications that interact with their surroundings. Alongside perception often comes the ability to recognize objects, people, or dynamic situations. This skill is of paramount importance in many use cases, from industrial to social robotics.

    Generating Visual Scenes from Touch

    An emerging line of work has sought to generate plausible imagery from touch. Existing approaches, however, tackle only narrow aspects of the visuo-tactile synthesis problem, and lag significantly behind the quality of cross-modal synthesis methods in other domains. We draw on recent advances in latent diffusion to create a model for synthesizing images from tactile signals (and vice versa) and apply it to a number of visuo-tactile synthesis tasks. Using this model, we significantly outperform prior work on the tactile-driven stylization problem, i.e., manipulating an image to match a touch signal, and we are the first to successfully generate images from touch without additional sources of information about the scene. We also successfully use our model to address two novel synthesis problems: generating images that do not contain the touch sensor or the hand holding it, and estimating an image's shading from its reflectance and touch.
    Comment: ICCV 2023; Project site: https://fredfyyang.github.io/vision-from-touch
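    As a rough schematic of the conditioning idea (not the authors' architecture, which builds on latent diffusion at much larger scale), the sketch below encodes a tactile signal into an embedding that modulates a denoising network; the noise schedule and pretrained VAE are omitted for brevity, and all module names are illustrative placeholders:

```python
# Schematic sketch of touch-conditioned denoising: a tactile encoder produces
# an embedding that conditions a noise-prediction network via feature-wise
# modulation. Simplified (no noise schedule, toy denoiser); illustrative only.
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    def __init__(self, in_channels=3, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim))

    def forward(self, tactile_image):
        return self.net(tactile_image)

class ConditionedDenoiser(nn.Module):
    """Predicts the noise in a latent, conditioned on a tactile embedding."""
    def __init__(self, latent_dim=64, embed_dim=256):
        super().__init__()
        self.film = nn.Linear(embed_dim, 2 * latent_dim)  # scale and shift
        self.core = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                  nn.Linear(512, latent_dim))

    def forward(self, noisy_latent, tactile_embedding):
        scale, shift = self.film(tactile_embedding).chunk(2, dim=-1)
        return self.core(noisy_latent * (1 + scale) + shift)

# One training step of the usual denoising objective, conditioned on touch.
enc, eps_model = TactileEncoder(), ConditionedDenoiser()
tactile = torch.randn(8, 3, 64, 64)  # stand-in for tactile sensor images
latents = torch.randn(8, 64)         # stand-in for VAE-encoded image latents
noise = torch.randn_like(latents)
pred = eps_model(latents + noise, enc(tactile))
loss = nn.functional.mse_loss(pred, noise)
```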

    Design and integration of vision based sensors for unmanned aerial vehicles navigation and guidance

    In this paper we present a novel Navigation and Guidance System (NGS) for Unmanned Aerial Vehicles (UAVs) based on Vision Based Navigation (VBN) and other avionics sensors. The main objective of our research is to design a low-cost and low-weight/volume NGS capable of providing the required level of performance in all flight phases of modern small- to medium-size UAVs, with a special focus on automated precision approach and landing, where VBN techniques can be fully exploited in a multisensory integrated architecture. Various existing techniques for VBN are compared, and the Appearance-based Navigation (ABN) approach is selected for implementation.
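    The core of the ABN approach, localizing against a stored sequence of reference images, can be sketched as follows; the matching metric and data layout are illustrative assumptions rather than the paper's design:

```python
# A minimal sketch of appearance-based navigation (ABN): localize by matching
# the current camera frame against reference images with known positions,
# taking the best match's pose as the estimate. Illustrative assumptions only.
import numpy as np

def localize_by_appearance(frame, reference_images, reference_poses):
    """Return the pose of the stored image most similar to `frame`.

    frame: (H, W) grayscale image; reference_images: (N, H, W) array;
    reference_poses: length-N list of poses recorded along the route.
    """
    # Sum of squared differences as a simple appearance-similarity score.
    scores = np.sum((reference_images - frame[None]) ** 2, axis=(1, 2))
    best = int(np.argmin(scores))
    return reference_poses[best], scores[best]

# Toy usage: three stored keyframes along an approach path (x, y, altitude).
refs = np.random.rand(3, 48, 64)
poses = [(0.0, 0.0, 120.0), (50.0, 0.0, 80.0), (100.0, 0.0, 40.0)]
pose, score = localize_by_appearance(refs[1] + 0.01 * np.random.rand(48, 64),
                                     refs, poses)
print(pose)  # pose of the middle keyframe, the closest appearance match
```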

    Neural Network-Based Model for Classification of Faults During Operation of a Robotic Manipulator

    The importance of error detection is high, especially in modern manufacturing processes where assembly lines operate without direct supervision; stopping a faulty operation in time can prevent damage to the assembly line. A public dataset is used, containing 15 classes (2 types of faultless operation and 13 types of faults) with 463 force and torsion datapoints. Four different methods are compared: a Multilayer Perceptron (MLP), selected for its high classification performance; Support Vector Machines (SVM), commonly used when few datapoints are available; a Convolutional Neural Network (CNN), known for high classification performance with matrix inputs; and a Siamese Neural Network (SNN), a novel method that performs well on small datasets. Two classification tasks are performed: error detection and fault classification. Grid search is used for hyperparameter tuning, with the F1 score as the metric and 10-fold cross-validation. The authors propose a hybrid system consisting of an SNN for detection and a CNN for fault classification.
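    The evaluation protocol can be sketched for one of the four models. The snippet below is a hedged sketch, with X and y standing in for the dataset's force/torsion features and labels: a grid search over SVM hyperparameters with macro-F1 scoring and 10-fold cross-validation.

```python
# Sketch of the stated protocol (grid search, F1 metric, 10-fold CV) for the
# SVM branch only. X and y are random placeholders for the public dataset.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(463, 90)        # 463 samples; feature size is illustrative
y = np.random.randint(0, 15, 463)  # 15 classes: 2 faultless + 13 fault types

search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__kernel": ["rbf", "linear"]},
    scoring="f1_macro",   # macro-averaged F1 across all 15 classes
    cv=10,                # 10-fold cross-validation
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```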

    Implementing Selective Attention in Machines: The Case of Touch-Driven Saccades

    Recent paradigms in the fields of robotics and machine perception have emphasized the importance of selective attention mechanisms for perceiving and interacting with the environment. For a system engaged in operations requiring physical interaction with its surroundings, the ability to respond attentively to tactile events plays a major role. By performing somatosensory saccades, the nature of the cutaneous stimulation can be assessed and new motor actions can be planned. However, the study of touch-driven attention has been largely neglected by robotics researchers. In this paper, the development of visuo-cutaneous coordination for the production of somatosensory saccades is investigated, and a general architecture for integrating different kinds of attentive mechanisms is proposed. The system autonomously discovers the sensorimotor transformation that links tactile events to visual saccades, on the basis of multisensory consistencies and basic built-in motor reflexes. Results obtained with both simulations and robotic experiments are analyzed.
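    The learning problem, discovering the transformation from tactile events to saccade commands from multisensory consistency, can be sketched with a simple delta rule; the linear map and variable names below are illustrative assumptions, not the paper's architecture:

```python
# Minimal sketch: a linear map from tactile-event location (skin coordinates)
# to saccade motor command, refined by a delta rule using the post-saccade
# visual error of the touched site. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((2, 2))                 # saccade command = W @ skin_location
true_map = np.array([[1.5, 0.2],     # unknown sensor geometry the system
                     [-0.1, 1.2]])   # must discover through experience

lr = 0.1
for _ in range(500):
    skin_loc = rng.uniform(-1, 1, size=2)  # where the tactile event occurred
    saccade = W @ skin_loc                 # motor command from current map
    visual_error = true_map @ skin_loc - saccade  # residual retinal error
    W += lr * np.outer(visual_error, skin_loc)    # delta-rule correction

print(np.round(W, 2))  # converges toward the underlying tactile-visual map
```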