    Mapping the Navigational Information Content of Insect Habitats

    For developing and validating models of insect navigation, it is essential to identify the visual input insects experience in their natural habitats. Here we report on the development of methods to reconstruct what insects see when making navigational decisions and critically assess the current limitations of such methods. We used a laser range finder as well as camera-based methods to capture the 3D structure and the appearance of outdoor environments. Both approaches produce coloured point clouds that allow views to be reconstructed at defined positions and orientations within the model. For instance, we filmed bees and wasps with a high-speed stereo camera system to estimate their 3D flight paths and gaze direction. The high-speed system is registered with a 3D model of the same environment, such that panoramic images can be rendered along the insects’ flight paths (see accompanying abstract “Benchmark 3D-models of natural navigation environments @ www.InsectVision.org” by Mair et al.). The laser range finder (see figure A) is equipped with a rotating camera that provides colour information for the measured 3D points. This system is robust and easy to use in the field, generating high-resolution data (about 50 × 10⁶ points) with a large field of view, up to a distance of 80 m, at typical acquisition times of about 8 minutes. However, a large number of scans at different locations have to be recorded and registered to account for occlusions. In comparison, data acquisition in camera-based reconstruction from multiple viewpoints is fast, but model generation is computationally more complex due to bundle adjustment and dense pair-wise stereo computation (see figures B and C for views rendered from a 3D model based on 6 image pairs). In addition, it is non-trivial and often time-consuming in the field to ensure the acquisition of sufficient information. We are currently developing the tools that will allow us to combine the results of laser-scanner and camera-based 3D reconstruction methods.
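
    To illustrate how views can be reconstructed from such data, the following is a minimal sketch (in Python/NumPy, not the authors' pipeline) of rendering an equirectangular panorama from a coloured point cloud at a given position and orientation; the array shapes, the projection, and the far-to-near drawing order are our assumptions.

    ```python
    import numpy as np

    def render_panorama(points, colours, position, R, width=720, height=360):
        """Render an equirectangular panorama from a coloured point cloud.

        points   -- (N, 3) world coordinates of the scanned points
        colours  -- (N, 3) uint8 RGB colour of each point
        position -- (3,) viewing position in world coordinates
        R        -- (3, 3) rotation matrix from world to viewer frame
        """
        # Express all points in the viewer's coordinate frame.
        p = (points - position) @ R.T
        r = np.linalg.norm(p, axis=1)
        keep = r > 1e-6                  # drop points at the viewpoint itself
        p, r, c = p[keep], r[keep], colours[keep]

        # Spherical angles: azimuth in [-pi, pi], elevation in [-pi/2, pi/2].
        azimuth = np.arctan2(p[:, 1], p[:, 0])
        elevation = np.arcsin(p[:, 2] / r)

        # Map angles to pixel coordinates of the panoramic image.
        u = ((azimuth + np.pi) / (2 * np.pi) * width).astype(int) % width
        v = ((np.pi / 2 - elevation) / np.pi * height).astype(int).clip(0, height - 1)

        # Draw far points first so that nearer points overwrite them.
        order = np.argsort(r)[::-1]
        image = np.zeros((height, width, 3), dtype=np.uint8)
        image[v[order], u[order]] = c[order]
        return image
    ```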

    The Behavioral Relevance of Landmark Texture for Honeybee Homing

    Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, their size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. Currently it is unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigate the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and that being able to identify landmarks by their texture improves the bees’ navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of the landmark, honeybees changed their flight behavior according to its texture.

    Confocal Microscopy of Bundled and Individual Single-Walled Carbon Nanotubes

    Methods for Raman and photoluminescence (PL) microscopy were further developed with regard to detection range, correlation of different spectroscopy modalities, and speed. This made it possible, for the first time, to study low-frequency vibrations of carbon nanotubes on untreated tubes and tube bundles. Tube bundles were manipulated with an atomic force microscope to facilitate spectroscopic assignments. In addition, nanotube transistors were investigated spectroscopically.

    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal’s behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, it may be that their vision is optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the information content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit from processing information from their two eyes independently.
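
    As a concrete illustration of such a view-matching strategy, the sketch below recovers an orientation by rotating the current panoramic view against a stored snapshot and picking the rotation with the smallest pixel difference (a rotational image difference function); the grayscale NumPy representation and the average-pooling used to mimic low resolution are our assumptions, not details from the study.

    ```python
    import numpy as np

    def lower_resolution(view, factor):
        """Average-pool a grayscale panorama to mimic a coarser optical array."""
        h, w = view.shape
        view = view[:h - h % factor, :w - w % factor]
        return view.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def best_heading(current, stored):
        """Return the column shift (azimuthal rotation) that minimises the
        RMS difference between the current and the stored panorama."""
        rms = [np.sqrt(np.mean((np.roll(current, s, axis=1) - stored) ** 2))
               for s in range(current.shape[1])]
        return int(np.argmin(rms)), rms
    ```

    Restricting the comparison to a sub-range of columns, or running best_heading separately on the left and right halves of the panorama, loosely corresponds to the limited fields of view and independent per-eye estimates examined here.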

    A model of ant route navigation driven by scene familiarity

    In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
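
    A minimal sketch of the core scan-and-step loop might look as follows; the render callback, scan width, number of sampled headings, and step length are hypothetical placeholders, and familiarity is computed here by exhaustive comparison with the training views (the first variant described above) rather than by the neural network.

    ```python
    import numpy as np

    def familiarity(view, training_views):
        """Familiarity = negative distance to the most similar training view."""
        return -min(np.mean((view - t) ** 2) for t in training_views)

    def scan_and_step(position, heading, training_views, render,
                      scan_width=np.radians(90), n_samples=19, step=0.1):
        """Visually scan around the current heading, then step in the
        direction whose view is most familiar.  `render(position, heading)`
        is an assumed callback returning the view seen from that pose."""
        candidates = heading + np.linspace(-scan_width / 2, scan_width / 2, n_samples)
        best = max(candidates,
                   key=lambda h: familiarity(render(position, h), training_views))
        return position + step * np.array([np.cos(best), np.sin(best)]), best
    ```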

    Molecularly Characterised Xenograft Tumour Mouse Models: Valuable Tools for Evaluation of New Therapeutic Strategies for Secondary Liver Cancers

    To develop and evaluate new therapeutic strategies for the treatment of human cancers, well-characterised preclinical model systems are a prerequisite. To this end, we have established xenotransplantation mouse models and corresponding cell cultures from surgically obtained secondary human liver tumours. Established xenograft tumours were patho- and immunohistologically characterised, and expression levels of cancer-relevant genes were quantified in paired original and xenograft tumours and the derived cell cultures using RT-PCR-based array technology. Most of the characteristic morphological and immunohistochemical features of the original tumours were shown to be maintained. No differences were found concerning the expression of genes involved in cell cycle regulation and oncogenesis. Interestingly, cytokine- and matrix-metalloproteinase-encoding genes appeared to be expressed differentially. Thus, the established models closely reflect the pathohistological and molecular characteristics of the selected human tumours and may therefore provide useful tools for preclinical analyses of new antitumour strategies in vivo.

    Software to convert terrestrial LiDAR scans of natural environments into photorealistic meshes

    The introduction of 3D scanning has strongly influenced environmental sciences. If the resulting point clouds can be transformed into polygon meshes, a vast range of visualisation and analysis tools can be applied. But extracting accurate meshes from large point clouds gathered in natural environments is not trivial, requiring a suite of customisable processing steps. We present Habitat3D, an open-source software tool to generate photorealistic meshes from registered point clouds of natural outdoor scenes. We demonstrate its capability by extracting meshes of different environments: 8,800 m² of grassland featuring several Eucalyptus trees (combining 9 scans and 41,989,885 data points); 1,018 m² of desert densely covered by vegetation (combining 56 scans and 192,223,621 data points); a well-structured garden; and a rough, volcanic surface. The resultant reconstructions preserve all spatial features with millimetre accuracy whilst reducing the memory load by up to 98.5%. This enables rapid visualisation of the environments using off-the-shelf game engines and graphics hardware.
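
    Habitat3D implements its own reconstruction pipeline; purely as an illustration of the general point-cloud-to-mesh workflow, a comparable result can be sketched with the open-source Open3D library (file names and parameter values below are placeholders):

    ```python
    import open3d as o3d

    # Load a registered, coloured point cloud (placeholder file name).
    pcd = o3d.io.read_point_cloud("scene.ply")
    pcd = pcd.voxel_down_sample(voxel_size=0.01)   # thin to ~1 cm point spacing
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

    # Poisson surface reconstruction yields a watertight, coloured triangle mesh.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

    # Decimate to reduce the memory load while preserving the overall shape.
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
    o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)
    ```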