Nemo: a computational tool for analyzing nematode locomotion
The nematode Caenorhabditis elegans responds to an impressive range of
chemical, mechanical and thermal stimuli and is extensively used to investigate
the molecular mechanisms that mediate chemosensation, mechanotransduction and
thermosensation. The main behavioral output of these responses is manifested as
alterations in animal locomotion. Monitoring and examination of such
alterations requires tools to capture and quantify features of nematode
movement. In this paper, we introduce Nemo (nematode movement), a
computationally efficient and robust two-dimensional object tracking algorithm
for automated detection and analysis of C. elegans locomotion. This algorithm
enables precise measurement and feature extraction of nematode movement
components. In addition, we develop a Graphical User Interface designed to
facilitate processing and interpretation of movement data. While in this study we
focus on the simple sinusoidal locomotion of C. elegans, our approach can be
readily adapted to handle more complex locomotory behaviour patterns by
including additional movement characteristics and parameters subject to
quantification. Our software tool offers the capacity to extract, analyze and
measure nematode locomotion features by processing simple video files. By
allowing precise and quantitative assessment of behavioral traits, this tool
will assist the genetic dissection and elucidation of the molecular mechanisms
underlying specific behavioral responses.
Comment: 12 pages, 2 figures. Accepted by BMC Neuroscience 2007, 8:8
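The abstract does not give Nemo's implementation details. As a minimal sketch of what centroid-based 2-D tracking and speed extraction from video frames could look like (all names are hypothetical and this is not the actual Nemo algorithm; frames are assumed to be pre-segmented grayscale arrays with the worm brighter than the background):

```python
import numpy as np

def track_centroids(frames, threshold=0.5):
    """Track the centroid of the segmented animal in each frame.

    `frames` is a list of 2-D grayscale arrays with values in [0, 1],
    a stand-in for decoded video frames. Returns an array of
    (row, col) centroids, one per frame (NaN where nothing is found).
    """
    centroids = []
    for frame in frames:
        mask = frame > threshold              # separate worm from background
        ys, xs = np.nonzero(mask)
        if ys.size == 0:                      # no object in this frame
            centroids.append((np.nan, np.nan))
        else:
            centroids.append((ys.mean(), xs.mean()))
    return np.array(centroids)

def instantaneous_speed(centroids, fps):
    """Per-frame speed (pixels/second) from consecutive centroids."""
    deltas = np.diff(centroids, axis=0)
    return np.hypot(deltas[:, 0], deltas[:, 1]) * fps
```

Higher-level features such as the frequency and amplitude of the sinusoidal body wave would require skeletonizing the worm's body outline, not just its centroid.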
Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain
In this paper we tackle the problem of visually predicting surface friction
for environments with diverse surfaces, and integrating this knowledge into
biped robot locomotion planning. The problem is essential for autonomous robot
locomotion since diverse surfaces with varying friction abound in the real
world, from wood to ceramic tiles, grass or ice, which may cause difficulties
or huge energy costs for robot locomotion if not considered. We propose to
estimate friction and its uncertainty from visual estimation of material
classes using convolutional neural networks, together with probability
distribution functions of friction associated with each material. We then
robustly integrate the friction predictions into a hierarchical (footstep and
full-body) planning method using chance constraints, and optimize the same
trajectory costs at both levels of the planning method for consistency. Our
solution achieves fully autonomous perception and locomotion on slippery
terrain, which considers not only friction and its uncertainty, but also
collision, stability and trajectory cost. We show promising friction prediction
results in real pictures of outdoor scenarios, and planning experiments on a
real robot facing surfaces with different friction.
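The core idea of combining CNN material probabilities with per-material friction distributions into a chance constraint can be sketched as follows. This is an illustrative assumption, not the paper's method: it models each material's friction as Gaussian (the paper uses its own per-material distributions) and numerically inverts the mixture CDF to find a friction bound that holds with the required confidence:

```python
import numpy as np
from math import erf, sqrt

def _norm_cdf(x, mean, std):
    """CDF of a Gaussian, via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))

def conservative_friction(class_probs, mat_means, mat_stds, delta=0.05):
    """Largest mu* such that P(friction >= mu*) >= 1 - delta.

    `class_probs` are CNN material-class probabilities; each material
    contributes a Gaussian friction distribution (an assumption made
    here for illustration). A footstep planner would then require the
    commanded contact forces to stay inside the cone defined by mu*.
    """
    grid = np.linspace(0.0, 2.0, 2001)        # candidate friction values
    cdf = np.array([sum(p * _norm_cdf(x, m, s)
                        for p, m, s in zip(class_probs, mat_means, mat_stds))
                    for x in grid])
    feasible = grid[cdf <= delta]             # values safe at level 1 - delta
    return float(feasible[-1]) if feasible.size else float(grid[0])
```

Note how even a small probability of a low-friction class (e.g. ice) pulls the bound sharply down, which is exactly the conservatism a chance constraint is meant to encode.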
Towards automated visual flexible endoscope navigation
Background:
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed toward wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
Methods:
A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included.
Results:
Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
Conclusions:
Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
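Lumen centralization, one of the two techniques the review identifies, rests on a simple observation: the lumen is usually the darkest region of the image because it is farthest from the scope's light source. A minimal sketch under that assumption (function names and parameters are hypothetical, and a real system would need the artifact handling discussed above):

```python
import numpy as np

def lumen_steering_error(image, dark_fraction=0.05):
    """Steering offset (d_row, d_col) from image centre toward the lumen.

    Assumes the lumen is the darkest image region. We take the
    centroid of the darkest `dark_fraction` of pixels and return its
    offset from the image centre; a controller would bend the
    endoscope tip to drive this offset to zero, keeping the lumen
    centred in the view.
    """
    flat = image.ravel()
    k = max(1, int(dark_fraction * flat.size))
    thresh = np.partition(flat, k - 1)[k - 1]     # k-th darkest intensity
    ys, xs = np.nonzero(image <= thresh)          # darkest-pixel region
    cy, cx = ys.mean(), xs.mean()
    h, w = image.shape
    return cy - (h - 1) / 2.0, cx - (w - 1) / 2.0
```

A global dark-pixel centroid like this is easily fooled by shadows or specular dropouts, which is one reason the review stresses robustness to bubbles and motion blur before clinical use.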
Gaze Behaviour during Space Perception and Spatial Decision Making
A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them in order to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take, as if following a guided route. Subsequently they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option featuring the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments, such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making.