
    A Laminar Cortical Model for 3D Perception of Slanted and Curved Surfaces and of 2D Images: Development, Attention, and Bistability

    A model of laminar visual cortical dynamics proposes how 3D boundary and surface representations of slanted and curved 3D objects and 2D images arise. The 3D boundary representations emerge from non-classical horizontal receptive field interactions combined with intracortical and intercortical feedback circuits. Such non-classical interactions contextually disambiguate classical receptive field responses to ambiguous visual cues using cells that are sensitive to angles and disparity gradients within cortical areas V1 and V2. These cells are all variants of bipole grouping cells. Model simulations show how horizontal connections can develop selectively to angles, how slanted surfaces can activate 3D boundary representations that are sensitive to angles and disparity gradients, how 3D filling-in occurs across slanted surfaces, how a 2D Necker cube image can be represented in 3D, and how bistable Necker cube percepts occur. The model also explains data about slant aftereffects and 3D neon color spreading. It shows how habituative transmitters that help to control development also help to trigger bistable 3D percepts and slant aftereffects, and how attention can influence which of these percepts is perceived by propagating along some object boundaries. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-98-1-0108); Defense Advanced Research Projects Agency and the Office of Naval Research (N0014-95-1-0409, N00014-01-1-0624, N00014-95-1-0657); National Science Foundation (IIS-97-20333)

    Object grasping and manipulation in capuchin monkeys (genera Cebus and Sapajus)

    The abilities to perform skilled hand movements and to manipulate objects dexterously are landmarks in the evolution of primates. The study of how primates use their hands to grasp and manipulate objects in accordance with their needs sheds light on how these species are physically and mentally equipped to deal with the problems they encounter in their daily life. We report data on capuchin monkeys, highly manipulative platyrrhine species that usually spend a great deal of time in active manipulation to search for food and to prepare it for ingestion. Our aim is to provide an overview of current knowledge on the ability of capuchins to grasp and manipulate objects, with a special focus on how these species express their cognitive potential through manual behaviour. Data on the ability of capuchins to move their hands and on the neural correlates sustaining their actions are reported, as are findings on the manipulative ability of capuchins to anticipate future actions and to relate objects to other objects and substrates. The manual behaviour of capuchins is considered in different domains, such as motor planning, extractive foraging and tool use, in both captive and natural settings. Anatomofunctional and behavioural similarities to and differences from other haplorrhine species regarding manual dexterity are also discussed.

    Learning to Place Unseen Objects Stably using a Large-scale Simulation

    Object placement is a fundamental task for robots, yet it remains challenging for partially observed objects. Existing methods for object placement have limitations, such as the requirement for a complete 3D model of the object or the inability to handle complex shapes and novel objects, that restrict the applicability of robots in the real world. Herein, we focus on addressing the Unseen Object Placement (UOP) problem, which we tackle with two contributions: (1) UOP-Sim, a large-scale simulation dataset covering various shapes and novel objects, and (2) UOP-Net, a point cloud segmentation-based approach that directly detects the most stable plane from partial point clouds. Our UOP approach enables robots to place objects stably even when the object's shape and properties are not fully known, thus providing a promising solution for object placement in various environments. We verify our approach through simulation and real-world robot experiments, demonstrating state-of-the-art performance for placing single-view and partial objects. Robot demos, code, and the dataset are available at https://gistailab.github.io/uop/
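The abstract does not give UOP-Net's internals, but the underlying geometric subtask, finding a well-supported plane in a partial point cloud, can be sketched with a generic RANSAC-style search. Everything below (function names, the toy cloud, the inlier tolerance) is illustrative and not taken from the paper:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns (unit normal, centroid)."""
    centroid = pts.mean(axis=0)
    # The singular vector with the smallest singular value of the centered
    # cloud is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

def most_supported_plane(cloud, n_iter=200, tol=0.01, seed=0):
    """RANSAC-style search: keep the candidate plane with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        normal, centroid = fit_plane(sample)
        dist = np.abs((cloud - centroid) @ normal)   # point-to-plane distance
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane(cloud[best_inliers])            # refit on all inliers

# Toy "partial observation": a noisy flat patch plus scattered outliers.
gen = np.random.default_rng(1)
patch = np.column_stack([gen.uniform(-1, 1, 300),
                         gen.uniform(-1, 1, 300),
                         gen.normal(0.0, 0.002, 300)])
outliers = gen.uniform(-1, 1, (30, 3))
normal, _ = most_supported_plane(np.vstack([patch, outliers]))
print(np.abs(normal[2]))  # close to 1: the recovered normal is the z-axis
```

With a real depth sensor the tolerance and iteration count would need tuning, and libraries such as Open3D ship an equivalent RANSAC plane segmentation; the learned UOP-Net approach additionally scores stability rather than just planar support.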

    Learning to grasp and extract affordances: the Integrated Learning of Grasps and Affordances (ILGA) model

    The activity of certain parietal neurons has been interpreted as encoding affordances (directly perceivable opportunities) for grasping. Separate computational models have been developed for infant grasp learning and affordance learning, but no single model has yet combined these processes in a neurobiologically plausible way. We present the Integrated Learning of Grasps and Affordances (ILGA) model that simultaneously learns grasp affordances from visual object features and motor parameters for planning grasps using trial-and-error reinforcement learning. As in the Infant Learning to Grasp Model, we model a stage of infant development prior to the onset of sophisticated visual processing of hand–object relations, but we assume that certain premotor neurons activate neural populations in primary motor cortex that synergistically control different combinations of fingers. The ILGA model is able to extract affordance representations from visual object features, learn motor parameters for generating stable grasps, and generalize its learned representations to novel objects.
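The ILGA model itself is a neurobiologically detailed network, but the trial-and-error reinforcement loop it relies on can be illustrated with a toy tabular sketch. The feature and parameter discretization, the success rule, and all names below are assumptions made for illustration, not details of the model:

```python
import numpy as np

# Toy stand-in for affordance learning: for each discretized object width
# (the "visual feature"), learn which grasp aperture (the "motor parameter")
# yields a stable grasp, by reinforcing successful trials.

rng = np.random.default_rng(0)
n_widths, n_apertures = 5, 5
value = np.zeros((n_widths, n_apertures))  # learned affordance strengths
alpha, epsilon = 0.2, 0.2                  # learning rate, exploration rate

def grasp_succeeds(width, aperture):
    # Hypothetical environment: a grasp is stable only when the aperture
    # matches the object width.
    return aperture == width

for trial in range(3000):
    width = int(rng.integers(n_widths))
    if rng.random() < epsilon:                 # explore a random aperture
        aperture = int(rng.integers(n_apertures))
    else:                                      # exploit the learned affordance
        aperture = int(np.argmax(value[width]))
    reward = 1.0 if grasp_succeeds(width, aperture) else 0.0
    value[width, aperture] += alpha * (reward - value[width, aperture])

# After training, the greedy policy pairs each width with its matching aperture.
print(np.argmax(value, axis=1))  # expected: [0 1 2 3 4]
```

The actual model replaces this lookup table with neural populations and learns continuous grasp parameters, but the reinforce-on-success structure of the learning signal is the same idea.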

    Slope-Driven Goal Location Behavior in Pigeons

    A basic tenet of principles of associative learning applicable to models of spatial learning is that a cue should be assigned greater weight if it is a better predictor of the goal location. Pigeons were trained to locate a goal in an acute corner of an isosceles trapezoid arena, presented on a slanted floor with 3 (Experiment 1) or 2 (Experiment 2) orientations. The goal could be consistently determined by the geometric shape of the arena; however, its position with respect to the slope gradient varied, such that slope position was not a good predictor of the goal. Pigeons learned to solve the task, and testing on a flat surface revealed successful encoding of the goal relative to the geometric shape of the arena. However, when tested in the arena placed in a novel orientation on the slope, pigeons surprisingly made systematic errors to the other acute, but geometrically incorrect, mirror-image corner. The results indicate that, for each arena orientation, pigeons encoded the goal location with respect to the slope. Then, in the novel orientation, they chose the corner that matched the goal's position on the slope plus a local cue (corner angle). Although geometry was 2 times (Experiment 2) or even 3 times (Experiment 1) as predictive as slope, it failed to control behavior during novel test trials. Instead, searching was driven by the less predictive slope cues. The reliance on slope and the unresponsiveness to geometry are explained by the greater salience of slope despite its lower predictive value.
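The salience-versus-validity account in this abstract echoes a standard prediction of the Rescorla-Wagner model: cues trained in compound split their associative strength in proportion to their saliences (learning rates), so a highly salient cue can dominate even if it is the worse predictor across contexts. The following sketch is a generic illustration of that point, not the paper's analysis; the parameter values are arbitrary:

```python
# Rescorla-Wagner compound conditioning: two cues share one prediction error,
# and each cue's strength grows at a rate set by its salience (alpha).

def rw_compound(alpha_geometry, alpha_slope, n_trials=500, lam=1.0):
    v_geom = v_slope = 0.0
    for _ in range(n_trials):
        error = lam - (v_geom + v_slope)   # shared prediction error
        v_geom += alpha_geometry * error
        v_slope += alpha_slope * error
    return v_geom, v_slope

# Give the (less predictive) slope cue 3x the salience of geometry.
v_geom, v_slope = rw_compound(alpha_geometry=0.05, alpha_slope=0.15)
print(round(v_geom, 2), round(v_slope, 2))  # -> 0.25 0.75
```

At asymptote the total strength reaches the outcome magnitude lam, and the split equals the salience ratio, so the slope cue ends with three times the associative strength despite identical training, mirroring how a salient but less valid cue can control search behaviour.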

    Advancing the Underactuated Grasping Capabilities of Single Actuator Prosthetic Hands

    The last decade has seen significant advancements in upper limb prosthetics, specifically in the myoelectric control and powered prosthetic hand fields, leading to more active and social lifestyles for the upper limb amputee community. Notwithstanding the improvements in complexity and control of myoelectric prosthetic hands, grasping still remains one of the greatest challenges in robotics. Upper-limb amputees continue to prefer more antiquated body-powered or powered hook terminal devices that are favored for their control simplicity, light weight, and low cost; however, these devices are nominally unsightly and lack grasp variety. The varying drawbacks of both complex myoelectric and simple body-powered devices have led to low adoption rates for all upper limb prostheses by amputees, which includes 35% pediatric and 23% adult rejection for complex devices and 45% pediatric and 26% adult rejection for body-powered devices [1]. My research focuses on advancing the grasping capabilities of prosthetic hands driven by simple control and a single motor, to combine the dexterous functionality of the more complex hands with the intuitive control of the more simplistic body-powered devices, with the goal of helping upper limb amputees return to more active and social lifestyles. Optimization of a prosthetic hand driven by a single actuator requires the optimization of many facets of the hand. This includes optimization of the finger kinematics, underactuated mechanisms, geometry, materials and performance when completing activities of daily living. In my dissertation, I will present chapters dedicated to improving these subsystems of single actuator prosthetic hands to better replicate human hand function from simple control. First, I will present a framework created to optimize precision grasping – which is nominally unstable in underactuated configurations – from a single actuator. I will then present several novel mechanisms that allow a single actuator to map to higher degree of freedom motion and multiple commonly used grasp types. I will then discuss how fingerpad geometry and materials can improve grasp acquisition and frictional properties within the hand while also providing a method of fabricating lightweight custom prostheses. Last, I will analyze the results of several human subject testing studies to evaluate the performance of the optimized hands on activities of daily living and compare them to other commercially available prostheses.

    Spatial orientation and navigation in microgravity

    Manuscript for Spatial Processing in Navigation, Imagery and Perception, F. Mast and L. Janeke, eds. This chapter summarizes the spatial disorientation problems and navigation difficulties described by astronauts and cosmonauts, and relates them to research findings on orientation and navigation in humans and animals. Spacecraft crew are uniquely free to float in any relative orientation with respect to the cabin, and experience no vestibular and haptic cues that directly indicate the direction of “down”. They frequently traverse areas with inconsistently aligned visual vertical cues. As a result, most experience “Visual Reorientation Illusions” (VRIs) where the spacecraft floors, walls and ceiling surfaces exchange subjective identities. The illusion apparently results from a sudden reorientation of the observer’s allocentric reference frame. Normally this frame realigns to local interior surfaces, but in some cases it can jump to the Earth beyond, as with “Inversion Illusions” and EVA height vertigo. These perceptual illusions make it difficult for crew to maintain a veridical perception of orientation and place within the spacecraft, make them more reliant upon landmark and route strategies for 3D navigation, and can trigger space motion sickness. This chapter distinguishes VRIs and Inversion Illusions, based on firsthand descriptions from Vostok, Apollo, Skylab, Mir, Shuttle and International Space Station crew. Theories on human “gravireceptor” and “idiotropic” biases, visual “frame” and “polarity” cues, top-down processing effects on object orientation perception, mental rotation and “direction vertigo” are discussed and related to animal experiments on limbic head direction and place cell responses. It is argued that the exchange in perceived surface identity characteristic of human VRIs is caused by a reorientation of the unseen allocentric navigation plane used by CNS mechanisms coding place and direction, as evidenced in the animal models.
Human VRI susceptibility continues even on long flights, perhaps because our orientation and navigation mechanisms evolved principally to support 2D navigation. NASA Cooperative Research Agreement NCC9-58 with the National Space Biomedical Research Institute