
    Different Motion Cues Are Used to Estimate Time-to-arrival for Frontoparallel and Looming Trajectories

    Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching for and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object, raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question, we conducted a series of psychophysical studies measuring observers’ performance on time-to-arrival estimation when the object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), looming (collision trajectories; time-to-collision, TTC), or both (passage courses; time-to-passage, TTP). We measured performance of time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism from that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and we suggest that a model which weights motion cues by their relative time-to-arrival provides a better account of performance.
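    As a rough illustration of the optical variables the abstract refers to, the Python sketch below computes first-order time-to-arrival estimates from angular size (looming) and from the visual gap angle (angular motion), plus a schematic cue-weighting step. The function names and the linear weighting rule are assumptions for illustration, not the authors' fitted model.

```python
def tau_from_looming(theta, theta_dot):
    """Classic first-order time-to-collision estimate from looming:
    angular size of the object divided by its rate of expansion."""
    return theta / theta_dot

def tau_from_gap_closure(gap_angle, gap_rate):
    """Time-to-arrival for a frontoparallel ("gap closure") trajectory:
    remaining visual gap divided by the angular closure rate."""
    return gap_angle / gap_rate

def weighted_time_to_arrival(tau_looming, tau_angular, w_looming=0.5):
    """Hypothetical cue combination in the spirit of the model discussed
    above: each motion cue contributes according to a weight rather than
    either cue being used alone. The weighting here is illustrative."""
    return w_looming * tau_looming + (1.0 - w_looming) * tau_angular
```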

    Seeing motion of controlled object improves grip timing in adults with autism spectrum condition: evidence for use of inverse dynamics in motor control

    Previous studies (Haswell et al. in Nat Neurosci 12:970–972, 2009; Marko et al. in Brain J Neurol 138:784–797, 2015) reported that people with autism rely less on vision when learning to reach in a force field. This suggested the possibility that they have difficulty extracting force information from visual motion signals, a process called inverse dynamics computation. Our recent study (Takamuku et al. in J Int Soc Autism Res 11:1062–1075, 2018) examined this inverse computation with two perceptual tasks and found similar performance in typical and autistic adults. However, this tested the computation only in the context of sensory perception, leaving open the possibility that the suspected deficit is specific to the motor domain. Here, to address this concern, we tested the use of inverse dynamics computation in the context of motor control by measuring changes in grip timing caused by seeing or not seeing a controlled object. The motion of the object was informative of its inertial force, and typical participants improved their grip timing based on the visual feedback. Our interest was in whether the autistic participants would show the same improvement. While some autistic participants showed atypical hand slowing when seeing the controlled object, we found no evidence of abnormalities in inverse dynamics computation in our grip-timing task or in a replication of the perceptual task. This suggests that the ability to perform inverse dynamics computation is preserved not only for sensory perception but also for motor control in adults with autism.
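    To make the term concrete, the sketch below shows what an inverse dynamics computation of this kind looks like for a hand-held point mass: the load is inferred from mass and observed acceleration, and anticipatory grip force is scaled to that predicted load. The constants and function names are illustrative assumptions; the study measured grip timing, not this model.

```python
G = 9.81  # gravitational acceleration (m/s^2)

def load_force(mass, acceleration):
    """Inverse dynamics for a vertically moved point mass: infer the load
    the object exerts on the grip from its mass and observed acceleration."""
    return mass * (acceleration + G)

def anticipatory_grip_force(mass, acceleration, friction_coeff=0.6, margin=1.3):
    """Feedforward grip force scaled to the predicted load with a safety
    margin. friction_coeff and margin are assumed values for illustration."""
    return margin * load_force(mass, acceleration) / friction_coeff
```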

    Improved Stratified Control for Hexapod Robots and Object Manipulation with Finger Relocation

    The paper deals with the motion design of legged robots and dexterous hands. We show the possibilities and limitations of the conventional stratified control approach through the relatively simple example of a hexapod robot and offer some proposals for a more robust motion planning solution. The precision of the algorithms was improved by step length modification, and their applicability was increased by time scaling. The developed software is based on symbolic computation. Our further goal is to provide a powerful basic concept for object manipulation with finger relocation in the context of stratified control, as an extension of earlier works. The concept focuses on finger gaiting manipulation (based on finger relocation) to handle certain attributes of the nonsmooth object.
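    Time scaling in this sense is commonly implemented by reparameterizing a planned trajectory so that actuator limits are respected. The sketch below is a generic illustration of that idea under those assumptions; it is not the authors' symbolic-computation implementation, and the names are hypothetical.

```python
import numpy as np

def time_scale_trajectory(t, q, v_max):
    """Uniformly slow down a planned joint trajectory so that no joint
    velocity exceeds v_max. t: (N,) sample times, q: (N, n_joints) joint
    positions. Returns the rescaled sample times (a generic illustration
    of time scaling, not the paper's stratified-control algorithm)."""
    dq = np.diff(q, axis=0)
    dt = np.diff(t)[:, None]
    v = np.abs(dq / dt)                   # joint velocities between samples
    s = max(1.0, float(v.max() / v_max))  # scale factor >= 1 slows the motion
    return t * s
```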

    Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Object tracking is one of the main fields within computer vision. Among the various approaches to object detection and tracking, background subtraction makes detection of the object easier. The proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard and user-defined video data sets. Based on the study and analysis of these two algorithms, a modified diamond search (MDS) algorithm is proposed that uses a small diamond-shaped search pattern in the initial step and a large diamond-shaped pattern in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond-shaped pattern, based on the point with the minimum cost function. The algorithm ends with the small-shape pattern in the last step. The proposed MDS algorithm finds smaller motion vectors with fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach, and finally the MDS motion estimation algorithm is used to track the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computation time per frame. The experimental results show that MDS performs better than DS and CDS in average search points and average computation time.
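    For reference, the sketch below shows the classic diamond search structure (a large diamond pattern repeated until the minimum-cost point lands at its centre, followed by a small-diamond refinement) with a sum-of-absolute-differences cost. The MDS variant described above reorders the patterns (small first, then large, then small again); that modification is not reproduced here, and all function names are illustrative.

```python
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]   # large diamond search pattern
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # small diamond search pattern

def sad(block, frame, y, x, size):
    """Sum of absolute differences between `block` and the candidate block
    at (y, x) in `frame`; inf if the candidate falls outside the frame."""
    if y < 0 or x < 0 or y + size > frame.shape[0] or x + size > frame.shape[1]:
        return np.inf
    cand = frame[y:y + size, x:x + size]
    return np.abs(block.astype(int) - cand.astype(int)).sum()

def diamond_search(block, ref_frame, y0, x0, size=16, max_steps=64):
    """Return the (dy, dx) motion vector of the block located at (y0, x0)
    in the current frame, searched for in `ref_frame`."""
    y, x = y0, x0
    for _ in range(max_steps):
        costs = [sad(block, ref_frame, y + dy, x + dx, size) for dy, dx in LDSP]
        best = int(np.argmin(costs))
        if best == 0:            # minimum at the centre: stop the coarse search
            break
        y, x = y + LDSP[best][0], x + LDSP[best][1]
    costs = [sad(block, ref_frame, y + dy, x + dx, size) for dy, dx in SDSP]
    best = int(np.argmin(costs))
    return y + SDSP[best][0] - y0, x + SDSP[best][1] - x0
```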

    Robust visual servoing in 3D reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3D applications.
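    A minimal sketch of the kind of closed-loop image-space correction described above, assuming a simple proportional law on the binocular image error; the gain, the function name, and the proportional rule are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def servo_step(ee_left, ee_right, target_left, target_right, gain=0.5):
    """One control step of a binocular image-based servo loop. Each argument
    is a 2-vector of image coordinates (pixels): the tracked end-effector
    position and the target position in the left and right views. Returns a
    stacked 4-vector of image-space velocity commands that shrink the error."""
    error = np.concatenate([target_left - ee_left,
                            target_right - ee_right])
    return gain * error
```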