1,098 research outputs found

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    Full text link
    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement; no corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.

    National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499)
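
    The abstract's core computation, mapping a spatial direction vector into joint rotations for an arm with more degrees of freedom than the task space, can be illustrated with a short sketch. The following is not the DIRECT model itself (which learns this transform through motor babbling); it is a minimal stand-in using a Jacobian pseudoinverse on a hypothetical three-link planar arm, where the link lengths, gains, and target are illustrative assumptions.

```python
# Minimal sketch: direction-to-rotation mapping for a redundant planar arm.
# A Jacobian pseudoinverse stands in for DIRECT's learned transform.
import numpy as np

LINKS = np.array([0.30, 0.25, 0.20])  # link lengths (m), hypothetical

def end_effector(q):
    """Forward kinematics: joint angles -> 2-D end-effector position."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def jacobian(q):
    """2x3 Jacobian: more joints than task dimensions (redundancy)."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(LINKS[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(LINKS[i:] * np.cos(angles[i:]))
    return J

def reach(q, target, clamped=None, step=0.05, tol=1e-3, iters=500):
    """Move toward the target; a clamped joint (index) is excluded by
    zeroing its Jacobian column, loosely mimicking clamped-joint reaches."""
    q = q.astype(float)
    for _ in range(iters):
        direction = target - end_effector(q)   # spatial direction vector
        if np.linalg.norm(direction) < tol:
            break
        J = jacobian(q)
        if clamped is not None:
            J[:, clamped] = 0.0                # joint contributes nothing
        dq = np.linalg.pinv(J) @ direction     # motor direction vector
        q = q + step * dq
    return q

q0 = np.array([0.3, 0.4, 0.5])
target = np.array([0.35, 0.40])
print(end_effector(reach(q0, target)))             # free reach
print(end_effector(reach(q0, target, clamped=1)))  # elbow clamped
```

    In the clamped case the remaining joints reorganize to bring the end effector to the same target, which is the essence of the motor equivalence the abstract describes.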

    Change processes in relationships: a relational-historical research approach

    Get PDF
    Journal article. This work was supported by grants to Alan Fogel from the National Institutes of Health (R01 HD21036), the National Science Foundation (BNS9006756) and the National Institute of Mental Health (R01MH48680), and by a grant to Andrea Garvey from the National Science Foundation of Brazil (CNPq). We gratefully acknowledge the comments and suggestions of Yolanda van Beek, Antonella Brighi, George Butterworth, Ken Critchfield, Maria Luisa Genta, Shane Roller, Manuela Lavelli, Marc Lewis, Sarah Lucas, G. Christina Nelson-Goens, Marie-Germaine Pecheaux, Josette Ruel, Lisa Taylor, and Dankert Vedeler.

    An Analysis of Presence and User Experiences Over Time

    Get PDF
    This manuscript presents the results of a series of studies intended to shed light on how trends in user experience change over time when engaging with VR games.

    In my first study, I explored how user experiences compared when playing Minecraft on the desktop against playing Minecraft within an immersive virtual reality port. Fourteen players completed six 45-minute sessions: three played on the desktop and three in VR. The Gaming Experience Questionnaire, i-Group Presence Questionnaire, and Simulator Sickness Questionnaire were administered after each session, and players were interviewed at the end of the experiment. Survey data showed substantial increases in presence and positive emotions when playing Minecraft in VR, while multiple themes emerged in participant interviews: participants' heightened emotional experiences playing Minecraft in VR were closely linked to feelings of immersion and an improved sense of scale; participants overall enjoyed using motion controls, though they felt indirect input was better for some actions; and players generally disliked traveling via teleportation, as they found it disorienting and immersion-breaking.

    In my second study, I identified temporal shifts in user perceptions within the first two years after consumer VR devices became available. To consider what could be learned about the long-term use of consumer VR devices, I analyzed online forum discussions devoted specifically to VR, gathering posts made on the /r/Vive subreddit during the first two years after the HTC Vive's release. Over time, users moved from passive to active as their attitudes and expectations toward presence and simulator sickness matured. The significant trends found to influence this were game design implementation and locomotion techniques.

    In my third study, I again examined data from the /r/Vive subreddit posts to gain further insight into the scope of the "lingering effects" users reported experiencing after using VR and the progression of these effects over time. After identifying search terms designed to discover comments made about lingering effects, I found three significant categories of lingering effects (besides simulator sickness) during my qualitative analysis: perceptual effects, behavioral effects, and changes in dreams. The perceptual and behavioral categories were further divided into sub-themes, including disruption of body ownership and proprioception, loss of a sense of depth in the real world, visual after-effects, the need to verify the reality of the natural world through touch, hesitation when moving in the real world, and attempts to apply VR interaction metaphors to real-life interactions. After identifying these categories of effects, I mapped out how they progressed over time, coding the data according to four temporal concepts: 1) how long a user must spend in VR to trigger an effect, 2) how long after exiting VR an effect takes to appear, 3) the duration of any specific effect, and 4) the total duration over which all effects can continue to occur.

    In my fourth study, I examined how user experiences and trends regarding presence changed throughout a single gaming session. Participants were immersed in a virtual experience called 'The Secret Shop' and instructed to explore their surroundings with no guided direction. After the experience ended, users performed an After Action Review (AAR) while watching a recording of their session, followed by a semi-structured interview. Using the results of the AAR, I graphed each user's feelings of presence from second to second. These graphs showed presence both rising and falling, gradually and rapidly, throughout the course of each user's experience. Analysis of the graphs and the interviews showed that presence was significantly impacted by user expectations, affordance inconsistencies, and the intensity of engagement experienced throughout the session.

    In my final study, I loaned VR headsets to local novice users to track their perceptions of presence across four weeks. Users were free to explore any VR games and applications of interest to them off-site, to simulate typical VR consumer experiences. I analyzed how, over time, novice users gradually evolved in their understanding of presence and of what mattered most for creating and maintaining it, in the form of visual appeal, interaction techniques, and locomotion. I also found that the levels of engagement experienced across games were linked to whether users experienced lingering effects, how their perceptions of time spent within VR were altered, and whether they retained any interest in investing in future VR-related purchases.

    Micro-analysis of seriation skills

    Get PDF

    Peripersonal Space in the Humanoid Robot iCub

    Get PDF
    Developing behaviours for interaction with objects close to the body is a primary goal for any organism to survive in the world. Being able to develop such behaviours will be an essential feature in autonomous humanoid robots in order to improve their integration into human environments. Adaptable spatial abilities will make robots safer and improve their social skills, human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without location displacement. It presents three empirical studies based on peripersonal space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision in a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue within the peripersonal space of humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly reflecting errors in depth estimation from vergence at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot's peripersonal space. The last experiment modelled development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. The model was advantageous when compared to one that included no developmental stages. This work contributes to knowledge by extending research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in dynamic adaptation to body and sensing characteristics, vision-based action, morphology, and confidence levels in reaching assessment.

    CONACyT, Mexico (National Council of Science and Technology)
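
    The depth-from-vergence cue the first study relies on can be sketched from basic geometry. The following is a minimal illustration, assuming the two cameras fixate the target symmetrically with a known interocular baseline; the baseline and vergence angles are hypothetical values, not iCub parameters.

```python
# Minimal sketch of depth-from-vergence geometry: with both eyes converged
# on a target on the midline, each optical axis turns inward by vergence/2,
# so depth = (baseline / 2) / tan(vergence / 2).
import math

def depth_from_vergence(vergence_rad: float, baseline_m: float) -> float:
    """Distance to a symmetrically fixated target."""
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)

baseline = 0.068  # ~68 mm camera separation, illustrative
for vergence_deg in (2.0, 5.0, 10.0, 20.0):
    d = depth_from_vergence(math.radians(vergence_deg), baseline)
    print(f"vergence {vergence_deg:5.1f} deg -> depth {d:.3f} m")
```

    Because of the tangent, the estimate becomes increasingly sensitive to vergence noise as the angle shrinks, which is consistent with the abstract's observation of larger depth errors for targets farther from the body.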

    The development of hand-mouth coordination in early infancy

    Get PDF
    The aim of the thesis is to offer a comprehensive account of the developmental course of hand-mouth (HM) coordination from birth until a mature form of the coordination is attained. Questions relating both to the structure and function of the coordination were addressed. Three studies are reported. The method of observation was the same in each case: video records of two perpendicular views of the infant were obtained and a micro-analysis of movement structure was carried out. The main question addressed in Study 1 was whether spontaneous HM contacts in newborns are related to hunger. HM contacts were compared before and after feeding in a group of newborn babies. There was no change in the relative distribution of locations of contacts on the mouth and face before and after feeding, but anticipatory mouth opening prior to HM contacts only occurred before feeding. Study 2 sought to obtain detailed measures of transitions taking place between 1 and 5 months in the structure of HM coordination, and to investigate what factors could be responsible for the changes observed. A longitudinal design was employed in which babies were observed at monthly intervals. A small object was placed in the hands of infants to promote oral contacts. At 4 months of age, contacts began to be centred on the mouth (as opposed to other parts of the face) and the frequency of contacts was significantly higher when the object was present relative to the frequency of spontaneous contacts. Anticipatory mouth opening only occurred at 5 months of age, suggesting that this aspect of the coordination follows a U-shaped developmental trajectory. There was evidence that vision was playing a role in motivating HM contacts by 5 months of age. Consistent individual differences between babies were found in different aspects of HM coordination, raising the possibility that more than one developmental route is followed in the achievement of mature HM coordination. Study 3 investigated HM coordination cross-sectionally between the ages of 5 and 9 months. The possibility that the development of reaching was influencing the development of HM coordination was investigated. Two situations were compared, one where the infant had to reach for an object prior to transportation to the mouth and another where the object was placed in the hand of the infant. Although HM coordination and reaching and grasping were already integrated at 5 months, the two coordinations appear to develop independently of each other. The development of HM coordination was found to be marked by motivational and structural shifts and apparent regressions. The results are interpreted within a dynamic systems view of development.

    Programming by Demonstration for in-contact tasks using Dynamic Movement Primitives

    Get PDF
    Despite the rapid growth in the number of robots in the world, the number of service robots is still very low. The major reasons for this include robots' lack of world knowledge, sensitivity, safety and flexibility. This thesis experimentally addresses the last three of these issues (sensitivity, safety and flexibility) with reference to advanced, industrial-level robotic arms equipped with integrated torque sensors at each joint. The aims of this work are twofold. The first, at a more technical level, is the implementation of a real-time software infrastructure, based on Orocos and ROS, for a general, robust, flexible and modular robot control framework with a relatively high level of abstraction. The second aim is to utilize this software framework for Programming by Demonstration with a class of algorithms known as Dynamic Movement Primitives. Using kinesthetic teaching with one or multiple demonstrations, the robot performs simple sequential in-contact tasks (e.g. writing a previously demonstrated sequence of characters on a notepad). The system is able to imitate and generalize not only from demonstrated trajectories but also from their associated force profiles during the execution of in-contact tasks. The framework is further extended to successfully recover from perturbations during execution and to cope with dynamic environments.
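
    To make the named technique concrete, here is a minimal sketch of a one-dimensional discrete Dynamic Movement Primitive: it learns a forcing term from a single demonstrated trajectory and replays it toward a possibly new goal. The gains, basis-function count, and demonstration below are illustrative choices, not values or code from the thesis.

```python
# Minimal 1-D discrete DMP: spring-damper transformation system plus a
# learned, phase-driven forcing term (standard Ijspeert-style formulation).
import numpy as np

class DMP1D:
    def __init__(self, n_basis=30, alpha=25.0, beta=25.0 / 4, alpha_x=3.0):
        self.n, self.a, self.b, self.ax = n_basis, alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # centers
        self.h = self.n / self.c   # widths, narrower where phase moves fast
        self.w = np.zeros(n_basis)

    def _features(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return psi * x / (psi.sum() + 1e-10)

    def fit(self, y, dt):
        """Learn forcing-term weights from one demonstration y(t)."""
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        self.y0, self.g, self.tau = y[0], y[-1], dt * (len(y) - 1)
        x = np.exp(-self.ax * np.arange(len(y)) * dt / self.tau)  # phase
        # Invert the transformation system to get target forcing values.
        f = self.tau**2 * ydd - self.a * (self.b * (self.g - y) - self.tau * yd)
        A = np.stack([self._features(xi) for xi in x]) * (self.g - self.y0)
        self.w = np.linalg.lstsq(A, f, rcond=None)[0]

    def rollout(self, dt, goal=None):
        g = self.g if goal is None else goal
        y, yd, x, out = self.y0, 0.0, 1.0, []
        for _ in range(int(self.tau / dt)):
            f = self._features(x) @ self.w * (g - self.y0)
            ydd = (self.a * (self.b * (g - y) - self.tau * yd) + f) / self.tau**2
            yd += ydd * dt
            y += yd * dt
            x += -self.ax * x / self.tau * dt   # phase decays to zero
            out.append(y)
        return np.array(out)

# Learn a smooth demonstrated curve, then generalize to a new goal.
t = np.linspace(0, 1, 200)
demo = np.sin(np.pi * t / 2) ** 2          # hypothetical demonstration
dmp = DMP1D()
dmp.fit(demo, dt=t[1] - t[0])
replay = dmp.rollout(dt=t[1] - t[0], goal=1.5)
print(replay[-1])                           # should approach the new goal
```

    The same structure extends per-dimension to Cartesian pose and, as in the thesis, to recorded force profiles, since each learned signal is just another trajectory for the forcing term to reproduce.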

    Cognitive Reasoning for Compliant Robot Manipulation

    Get PDF
    Physically compliant contact is a major element for many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks, which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. Finally, the robot is able to compensate for possible disturbances as it plans additional recovery motions while effectively closing the cognitive control loop. Among other applications, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots.
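
    The compliant execution the abstract mentions can be illustrated with the textbook Cartesian impedance law. The sketch below is not Rollin' Justin's hierarchical whole-body controller; it is a minimal single-pose illustration in which joint torques realize a virtual spring-damper between the tool and a desired pose, with a hypothetical Jacobian and illustrative gains chosen soft along the surface normal, as suits wiping.

```python
# Minimal Cartesian impedance sketch: tau = J^T (K (x_des - x) - D xdot).
# Stiffness pulls the end effector toward the target, damping dissipates
# motion, and the Jacobian transpose maps the task-space wrench to torques.
import numpy as np

def impedance_torques(J, x, x_des, xdot, K, D):
    wrench = K @ (x_des - x) - D @ xdot
    return J.T @ wrench

# Illustrative 2-D task space, 3-joint arm at one fixed pose.
J = np.array([[0.4, 0.3, 0.1],
              [0.2, 0.5, 0.2]])        # hypothetical Jacobian
K = np.diag([800.0, 50.0])             # N/m: stiff tangential, soft normal
D = np.diag([60.0, 10.0])              # N*s/m damping
tau = impedance_torques(J,
                        x=np.array([0.50, 0.30]),
                        x_des=np.array([0.55, 0.28]),
                        xdot=np.array([0.02, -0.01]),
                        K=K, D=D)
print(tau)  # joint torques realizing the virtual spring-damper
```

    Task-dependent parameterization, as described in the dissertation, then amounts to choosing K and D per task: low normal stiffness yields gentle, contact-tolerant wiping, while higher stiffness gives precise free-space tracking.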