Biologically Inspired Dynamic Textures for Probing Motion Perception
Perception is often described as a predictive process based on an optimal
inference with respect to a generative model. We study here the principled
construction of a generative model specifically crafted to probe motion
perception. In that context, we first provide an axiomatic, biologically-driven
derivation of the model. This model synthesizes random dynamic textures which
are defined by stationary Gaussian distributions obtained by the random
aggregation of warped patterns. Importantly, we show that this model can
equivalently be described as a stochastic partial differential equation. Using
this characterization of motion in images, it allows us to recast motion-energy
models into a principled Bayesian inference framework. Finally, we apply these
textures in order to psychophysically probe speed perception in humans. In this
framework, while the likelihood is derived from the generative model, the prior
is estimated from the observed results and accounts for the perceptual bias in
a principled fashion.Comment: Twenty-ninth Annual Conference on Neural Information Processing
Systems (NIPS), Dec 2015, Montreal, Canad
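The likelihood-times-prior scheme described above can be illustrated with a minimal sketch. This is not the paper's actual model: it assumes a Gaussian likelihood centred on the measured speed and a zero-mean Gaussian prior favouring slow speeds, and all parameter values are hypothetical.

```python
# Minimal sketch (not the authors' implementation): Bayesian speed
# estimation with a Gaussian likelihood around the sensory measurement
# and a zero-mean "slow speed" Gaussian prior. The posterior mean shows
# how a prior produces a perceptual bias toward slower speeds.

def map_speed(observed, sigma_like, sigma_prior):
    """Posterior mean for a Gaussian likelihood times a zero-mean Gaussian prior."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * observed  # shrinks the estimate toward the slow-speed prior

true_speed = 8.0  # deg/s, hypothetical stimulus speed
low_noise = map_speed(true_speed, sigma_like=1.0, sigma_prior=4.0)
high_noise = map_speed(true_speed, sigma_like=4.0, sigma_prior=4.0)
print(low_noise, high_noise)  # higher sensory noise -> stronger slow bias
```

Increasing the likelihood noise (e.g. with lower-contrast stimuli) pulls the estimate further toward the prior mean, which is the qualitative signature of the bias the abstract refers to.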
A Biologically Inspired Controllable Stiffness Multimodal Whisker Follicle
This thesis takes a soft robotics approach to understanding the computational role of a soft whisker follicle with mechanisms to control the stiffness of the whisker. In particular, the thesis explores the role of the controllable-stiffness whisker follicle in selectively favouring the low-frequency geometric features of an object or its high-frequency texture features.

Tactile sensing is one of the most essential and complex sensory systems for most living beings. To acquire tactile information and explore the environment, animals use various biological mechanisms and transducing techniques. Whiskers, or vibrissae, are a form of mammalian hair found on almost all mammals other than Homo sapiens. For many mammals, and especially rodents, these whiskers are essential as a means of tactile sensing.

The mammalian whisker follicle contains multiple sensory receptors strategically organised to capture tactile stimuli of different frequencies via the vibrissal system. Nocturnal mammals such as rats depend heavily on whisker-based tactile perception to find their way through burrows and identify objects. Whiskers are diverse in both physical structure and nervous innervation. The robotics community has developed many whisker sensors inspired by this biological basis, taking diverse mechanical, electronic, and computational approaches to using whiskers to identify the geometry, mechanical properties, and texture of objects. Some work addresses a specific object-identification feature, while other work addresses multiple features such as texture and shape.
Therefore, it is vital to have a comprehensive discussion of the literature and to understand the merits of bio-inspired and purely engineered approaches to whisker-based tactile perception. The most important contribution of this thesis is the design and use of a novel soft whisker follicle comprising two frequency-dependent data-capturing modules, which yields deeper insights into the biological basis of tactile perception in the mammalian whisker follicle. These insights in turn provide new design guidelines for developing efficient robotic whiskers.
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from the retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
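The model itself is neural, but the underlying geometric principle is that under pure translation, optic-flow vectors radiate from the focus of expansion (FOE), which marks the heading direction. That principle can be sketched with a simple least-squares estimator; this is an illustrative sketch, not the model from the abstract, and the synthetic flow field and FOE location are invented for the example.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion from (x, y) points and (u, v) flows.
    Under pure translation, each flow vector is parallel to the line from
    the FOE to its point: (x - x0)*v - (y - y0)*u = 0, i.e.
    v*x0 - u*y0 = v*x - u*y, one linear equation per flow vector."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic translational flow expanding from a hypothetical FOE at (3, -2).
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 2))
flow = 0.1 * (pts - np.array([3.0, -2.0]))  # radial expansion pattern
print(estimate_foe(pts, flow))              # ~ [3., -2.]
```

With noisy flow or added rotation the least-squares solution degrades, which is loosely analogous to the rotation-rate sensitivity the abstract reports.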
Soft Morphological Computation
Soft robotics is a relatively new area of research in which progress in materials science has powered the next generation of robots, exhibiting biologically inspired properties such as soft/elastic tissues, compliance, resilience and more besides. One of the issues when employing soft robotics technologies is the soft nature of the interactions arising between the robot and its environment. These interactions are complex, and their dynamics are non-linear and hard to capture with known models. In this thesis we argue that complex soft interactions can actually be beneficial to the robot, giving rise to rich stimuli that can be used for the resolution of robot tasks. We further argue that the usefulness of these interactions depends on statistical regularities, or structure, that appear in the stimuli. To this end, robots should appropriately employ their morphology and their actions to influence the system-environment interactions such that structure can arise in the stimuli. In this thesis we show that learning processes can be used to perform such a task. Following this rationale, the thesis proposes and supports the theory of Soft Morphological Computation (SoMComp), by which a soft robot should appropriately condition, or 'affect', the soft interactions to improve the quality of the physical stimuli arising from them. SoMComp is composed of four main principles: Soft Proprioception, Soft Sensing, Soft Morphology and Soft Actuation. Each of these principles is explored in the context of haptic object recognition or object handling in soft robots. Finally, the thesis provides an overview of this research and its future directions.
AHDB CP17
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al, 2003 Vision Research 43 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation
Current interacting hand (IH) datasets are relatively simplistic in terms of
background and texture, their hand joints are annotated by a machine
annotator, which may result in inaccuracies, and the diversity of their pose
distributions is limited. However, the variability of background, pose
distribution, and texture can greatly influence a model's generalization ability.
Therefore, we present a large-scale synthetic dataset RenderIH for interacting
hands with accurate and diverse pose annotations. The dataset contains 1M
photo-realistic images with varied backgrounds, perspectives, and hand
textures. To generate natural and diverse interacting poses, we propose a new
pose optimization algorithm. Additionally, for better pose estimation accuracy,
we introduce a transformer-based pose estimation network, TransHand, to
leverage the correlation between interacting hands and verify the effectiveness
of RenderIH in improving results. Our dataset is model-agnostic and can improve
the accuracy of any hand pose estimation method relative to other real or
synthetic datasets. Experiments show that pretraining on our synthetic
data significantly decreases the error from 6.76 mm to 5.79 mm, and our
TransHand surpasses contemporary methods. Our dataset and code are available at
https://github.com/adwardlee/RenderIH.
Comment: Accepted by ICCV 2023
Visual scene recognition with biologically relevant generative models
This research focuses on developing visual object categorization methodologies based on machine learning techniques and biologically inspired generative models of visual scene recognition. Modelling the statistical variability in visual patterns, in the space of features extracted from them by an appropriate low-level signal processing technique, is an important matter of investigation for both humans and machines. To study this problem, we have examined in detail two recent probabilistic models of vision: a simple multivariate Gaussian model as suggested by Karklin & Lewicki (2009), and a restricted Boltzmann machine (RBM) as proposed by Hinton (2002). Both models have been widely used for visual object classification and scene analysis tasks. This research highlights that these models on their own are not powerful enough to perform the classification task, and suggests the Fisher kernel as a means of inducing discriminative power into these generative models. Our empirical results on standard benchmark data sets reveal that the classification performance of these generative models can be boosted to near state-of-the-art performance by deriving a Fisher kernel from compact generative models that compute the data labels in a fraction of the total computation time. We compare the proposed technique with other distance-based and kernel-based classifiers to show how computationally efficient the Fisher kernels are. To the best of our knowledge, the Fisher kernel has not previously been derived from an RBM, so the work presented in the thesis is novel in both its idea and its application to vision problems.
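As a rough illustration of how a Fisher kernel is derived from a generative model, the sketch below uses a diagonal Gaussian as the generative model and takes the gradient of its log-likelihood with respect to the parameters as the Fisher score; the kernel is the inner product of two samples' scores. This is the generic textbook construction, not the thesis's implementation, and the feature dimensions and data are hypothetical.

```python
import numpy as np

def fisher_score(x, mu, sigma):
    """Gradient of the diagonal-Gaussian log-likelihood w.r.t. (mu, sigma),
    summed over a sample's feature vectors (x has shape [n_features, dim])."""
    d_mu = np.sum((x - mu) / sigma**2, axis=0)
    d_sigma = np.sum((x - mu)**2 / sigma**3 - 1.0 / sigma, axis=0)
    return np.concatenate([d_mu, d_sigma])

def fisher_kernel(xa, xb, mu, sigma):
    """(Unnormalised) Fisher kernel: inner product of two samples' scores."""
    return float(fisher_score(xa, mu, sigma) @ fisher_score(xb, mu, sigma))

# Hypothetical setup: each "image" is a bag of 2-D feature vectors, and the
# generative model is a diagonal Gaussian fit to a pooled training set.
rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(500, 2))
mu, sigma = train.mean(axis=0), train.std(axis=0)
img_a = rng.normal(0.5, 1.0, size=(20, 2))
img_b = rng.normal(-0.5, 1.0, size=(20, 2))
print(fisher_kernel(img_a, img_b, mu, sigma))
```

In practice the score vectors would be normalised by the Fisher information before the inner product, and the resulting kernel fed to a discriminative classifier such as an SVM.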
Lessons for Robotics From the Control Architecture of the Octopus
Biological and artificial agents face many of the same computational and mechanical problems, so strategies evolved in the biological realm can serve as inspiration for robotic development. The octopus in particular represents an attractive model for biologically inspired robotic design, as has been recognized in the emerging field of soft robotics. Conventional global planning-based approaches to controlling the large number of degrees of freedom in an octopus arm would be computationally intractable. Instead, the octopus appears to exploit a distributed control architecture that enables effective and computationally efficient arm control. Here we describe the neuroanatomical organization of the octopus peripheral nervous system and discuss how this distributed neural network is specialized for effectively mediating decisions made by the central brain and the continuous actuation of limbs possessing an extremely large number of degrees of freedom. We propose top-down and bottom-up control strategies that we hypothesize the octopus employs in the control of its soft body. We suggest that these strategies can serve as useful elements in the design and development of soft-bodied robots.