
    Mapping the Navigational Information Content of Insect Habitats

    For developing and validating models of insect navigation, it is essential to identify the visual input insects experience in their natural habitats. Here we report on the development of methods to reconstruct what insects see when making navigational decisions and critically assess the current limitations of such methods. We used a laser-range finder as well as camera-based methods to capture the 3D structure and appearance of outdoor environments. Both approaches produce coloured point clouds that allow, within the model scale, the reconstruction of views at defined positions and orientations. For instance, we filmed bees and wasps with a high-speed stereo camera system to estimate their 3D flight paths and gaze direction. The high-speed system is registered with a 3D model of the same environment, such that panoramic images can be rendered along the insects’ flight paths (see accompanying abstract “Benchmark 3D-models of natural navigation environments @ www.InsectVision.org” by Mair et al.). The laser-range finder (see figure A) is equipped with a rotating camera that provides colour information for the measured 3D points. This system is robust and easy to use in the field, generating high-resolution data (about 50 × 10⁶ points) with a large field of view, up to a distance of 80 m, at typical acquisition times of about 8 minutes. However, a large number of scans at different locations have to be recorded and registered to account for occlusions. In comparison, data acquisition in camera-based reconstruction from multiple viewpoints is fast, but model generation is computationally more complex due to bundle adjustment and dense pair-wise stereo computation (see figures B and C for views rendered from a 3D model based on 6 image pairs). In addition, it is non-trivial and often time-consuming in the field to ensure the acquisition of sufficient information. We are currently developing the tools that will allow us to combine the results of laser-scanner and camera-based 3D reconstruction methods.
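
    As an illustration of the view-reconstruction step, the sketch below renders an equirectangular panorama from a coloured point cloud at a chosen viewpoint. It is a minimal numpy-only example, not the authors' rendering pipeline; `points` (N×3), `colours` (N×3) and the viewpoint `eye` are assumed inputs, and gaze rotation is omitted.

        # Minimal sketch (not the authors' pipeline): render an equirectangular
        # panorama from a coloured point cloud as seen from position `eye`.
        import numpy as np

        def render_panorama(points, colours, eye, width=720, height=360):
            rel = points - eye                                # points relative to the viewpoint
            r = np.linalg.norm(rel, axis=1)
            az = np.arctan2(rel[:, 1], rel[:, 0])             # azimuth in [-pi, pi]
            el = np.arcsin(rel[:, 2] / np.maximum(r, 1e-9))   # elevation in [-pi/2, pi/2]
            # Map spherical angles to equirectangular pixel coordinates.
            u = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
            v = ((np.pi / 2 - el) / np.pi * (height - 1)).astype(int)
            image = np.zeros((height, width, 3), dtype=colours.dtype)
            # Draw far points first so nearer points overwrite them
            # (a poor man's z-buffer; real pipelines splat points or render meshes).
            for i in np.argsort(r)[::-1]:
                image[v[i], u[i]] = colours[i]
            return image

    A gaze direction could be incorporated by rotating `rel` with the insect's head orientation matrix before the spherical projection.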

    The Behavioral Relevance of Landmark Texture for Honeybee Homing

    Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, their size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. Currently it is unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigate the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and that being able to identify landmarks by their texture improves the bees’ navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of a landmark, honeybees changed their flight behavior according to its texture.

    Software to convert terrestrial LiDAR scans of natural environments into photorealistic meshes

    The introduction of 3D scanning has strongly influenced environmental sciences. If the resulting point clouds can be transformed into polygon meshes, a vast range of visualisation and analysis tools can be applied. But extracting accurate meshes from large point clouds gathered in natural environments is not trivial, requiring a suite of customisable processing steps. We present Habitat3D, an open-source software tool to generate photorealistic meshes from registered point clouds of natural outdoor scenes. We demonstrate its capability by extracting meshes of different environments: 8,800 m² of grassland featuring several Eucalyptus trees (combining 9 scans and 41,989,885 data points); 1,018 m² of desert densely covered by vegetation (combining 56 scans and 192,223,621 data points); a well-structured garden; and a rough, volcanic surface. The resulting reconstructions preserve all spatial features with millimetre accuracy whilst reducing the memory load by up to 98.5%. This enables rapid visualisation of the environments using off-the-shelf game engines and graphics hardware.
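
    Habitat3D's internal processing is not spelled out in the abstract, but a comparable open-source point-cloud-to-mesh pipeline can be sketched with Open3D; the file names and parameter values below are illustrative assumptions, not the tool's defaults.

        # Illustrative pipeline (not Habitat3D itself): Poisson meshing with Open3D.
        import open3d as o3d

        # Load a registered, coloured point cloud (e.g. merged LiDAR scans).
        pcd = o3d.io.read_point_cloud("habitat_scans_merged.ply")
        pcd.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

        # Poisson surface reconstruction turns the points into a watertight mesh.
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=11)

        # Decimate the mesh to cut memory while preserving shape, then save it.
        simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
        o3d.io.write_triangle_mesh("habitat_mesh.ply", simplified)

    The decimation step is where the large memory reduction reported above would come from: the triangle budget, not the raw point count, determines what the game engine has to hold.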

    Interactive OAISYS: A photorealistic terrain simulation for robotics research

    Photorealistic simulation pipelines are crucial for the development of novel robotic methods and modern machine vision approaches. Simulations have been particularly popular for generating labeled synthetic data sets, which would otherwise require vast manual annotation effort when using real data. However, these simulators are usually not interactive, and the data generation process cannot be interrupted. They are therefore not suitable for evaluating active methods, such as active learning or perception-aware path planning, which make decisions based on the observed perception data. To address this problem, we propose a modified version of the simulator OAISYS, a photorealistic scene simulator for unstructured outdoor environments. We extended the simulator so that it can be used interactively, and implemented a developer-friendly RPC interface so that it can easily be integrated into any software environment. In this paper, we demonstrate the functionality of the extension on 3D scene reconstruction to show its future research potential, and provide an example implementation using the ROS middleware. The code is publicly available under https://github.com/DLR-RM/oaisy
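
    The paper's actual RPC interface is not reproduced in the abstract; the sketch below only illustrates the general pattern of exposing a simulator over RPC so that a client can interleave stepping and perception. The method names and the use of Python's built-in xmlrpc are assumptions, not the OAISYS API.

        # Hypothetical sketch of an interactive simulator exposed over RPC;
        # method names are illustrative, not OAISYS's actual interface.
        from xmlrpc.server import SimpleXMLRPCServer

        class SimulatorStub:
            """Stand-in for a photorealistic scene simulator."""
            def __init__(self):
                self.frame = 0

            def step(self, camera_pose):
                # Advance the simulation one frame for the requested camera pose,
                # so a client (e.g. a planner) can decide where to look next.
                self.frame += 1
                return self.frame

            def render_path(self):
                # Report where the most recent rendering was written (placeholder).
                return "/tmp/frame_%06d.png" % self.frame

        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_instance(SimulatorStub())
        server.serve_forever()

    A client, for instance a ROS node, would connect with xmlrpc.client.ServerProxy("http://localhost:8000") and alternate step and render_path calls, making decisions between frames rather than consuming a fixed pre-rendered data set.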

    Uncertainty Estimation for Planetary Robotic Terrain Segmentation

    Terrain segmentation information is a crucial input for current and future planetary robotic missions. Labeling training data for terrain segmentation is a difficult task and can often cause semantic ambiguity. As a result, a large portion of an image usually remains unlabeled, which makes it difficult to evaluate network performance on such regions. Worse is the problem of using such a network for inference, since the quality of predictions in those regions cannot be guaranteed with a standard semantic segmentation network. This can be very dangerous for real autonomous robotic missions, since the network could predict any of the classes in a particular region and the robot does not know how much of the prediction to trust. To overcome this issue, we investigate the benefits of uncertainty estimation for terrain segmentation. Knowing how certain the network is about its prediction is an important element of robust autonomous navigation. In this paper, we present neural networks that provide not only a terrain segmentation prediction but also an uncertainty estimate. We compare the different methods on the publicly released real-world Mars data from the MSL mission.
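
    The abstract does not name the specific methods compared; as one common example, Monte Carlo dropout produces a per-pixel uncertainty map from repeated stochastic forward passes. The sketch below is an illustrative PyTorch version, assuming `model` is a segmentation network that contains dropout layers.

        # One illustrative approach (MC dropout); the paper compares several methods.
        import torch

        def mc_dropout_uncertainty(model, image, n_samples=20):
            # Keep dropout active at inference time to sample multiple predictions
            # (in a real system only the dropout layers would be put in train mode).
            model.train()
            with torch.no_grad():
                probs = torch.stack([torch.softmax(model(image), dim=1)
                                     for _ in range(n_samples)])  # (S, B, C, H, W)
            mean_probs = probs.mean(dim=0)                        # (B, C, H, W)
            # Predictive entropy: high where the samples disagree or are diffuse.
            entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)
            return mean_probs.argmax(dim=1), entropy              # labels, (B, H, W)

    A rover could then treat pixels whose entropy exceeds a threshold as unknown terrain instead of trusting the hard class label.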

    Out of the box: how bees orient in an ambiguous environment

    Dittmar L, Stürzl W, Jetzschke S, Mertes M, Boeddeker N. Out of the box: how bees orient in an ambiguous environment. Animal Behaviour. 2014;89:13-21.
    How do bees employ multiple visual cues for homing? They could either combine the available cues using a view-based computational mechanism or pick one cue. We tested these strategies by training honeybees, Apis mellifera carnica, and bumblebees, Bombus terrestris, to locate food in one of the four corners of a box-shaped flight arena, providing multiple and also ambiguous cues. In tests, bees confused the diagonally opposite corners, which looked the same from the inside of the box owing to its rectangular shape and because these corners carried the same local colour cues. These 'rotational errors' indicate that the bees did not use compass information inferred from the geomagnetic field under our experimental conditions. When we then swapped cues between corners, bees preferred corners that had local cues similar to the trained corner, even when the geometric relations were incorrect. Apparently, they relied on views, a finding that we corroborated by computer simulations in which we assumed that bees try to match a memorized view of the goal location with the current view when they return to the box. However, when extra visual cues outside the box were provided, bees were able to resolve the ambiguity and locate the correct corner. We show that this performance cannot be explained by view matching from inside the box. Indeed, the bees adapted their behaviour and actively acquired information by leaving the arena and flying towards the cues outside the box. From there they re-entered the arena at the correct corner, now ignoring local cues that previously dominated their choices. All individuals of both species came up with this new behavioural strategy for solving the problem posed by the local ambiguity within the box. Thus both species seemed to solve the ambiguous task by using their route memory, which is always available during their natural foraging behaviour.
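
    The view-matching simulations referenced above can be illustrated with a rotational image difference function, a standard ingredient of snapshot-matching models; the sketch below is a generic version of that idea, not the authors' exact simulation.

        # Generic snapshot-matching sketch, not the authors' exact implementation.
        import numpy as np

        def rotational_image_difference(current, snapshot):
            # Compare the current panorama with the memorized snapshot at every
            # horizontal (yaw) shift; both are (H, W) grayscale panoramas.
            width = current.shape[1]
            diffs = np.empty(width)
            for shift in range(width):
                rotated = np.roll(current, shift, axis=1)
                diffs[shift] = np.sqrt(np.mean((rotated - snapshot) ** 2))
            return diffs  # the minimum marks the best-matching heading

    In a rectangular box, the panoramas at diagonally opposite corners are nearly identical, so both yield similarly low minima; this is why rotational errors are the expected outcome under pure view matching, and why the bees' use of cues outside the box requires a different explanation.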

    Testing for the MMX Rover Autonomous Navigation Experiment on Phobos

    The MMX rover will explore the surface of Phobos, Mars’ larger moon. It will use its stereo cameras to perceive the environment, enabling the use of vision-based autonomous navigation algorithms. The German Aerospace Center (DLR) is currently developing the corresponding autonomous navigation experiment that will allow the rover to explore the surface of Phobos efficiently, despite limited communication with Earth and long turn-around times for operations. This paper discusses our testing strategy for the autonomous navigation solution. We present our general testing strategy for the software, considering a development approach with agile aspects. We detail how we ensure successful integration with the rover system despite having limited access to the flight hardware. We furthermore discuss which environmental conditions on Phobos pose a potential risk for the navigation algorithms and how we test for these. Our testing is mostly data-set-based, and we describe our approaches for recording navigation data that is representative of both the rover system and the Phobos environment. Finally, we make the corresponding data set publicly available and provide an overview of its contents.

    Still no convincing evidence for cognitive map use by honeybees

    Cheeseman et al. (1) claim that an ability of honey bees to travel home through a landscape with conflicting information from a celestial compass proves the bees' use of a cognitive map. Their claim involves a curious assumption about the visual information that can be extracted from the terrain: that there is sufficient information for a bee to identify where it is, but insufficient to guide its path without resorting to a cognitive map. We contend that the authors’ claims are unfounded.

    The DLR Moon-Mars Test Site for Robotic Planetary Exploration

    Building robots for planetary exploration missions requires intensive testing throughout all phases of the design process. Especially during hardware and software development, as well as mission training, the process benefits from easy-to-access test sites that offer realistic conditions. For this purpose we have built the 1500 m² DLR Moon-Mars test site in Oberpfaffenhofen, Germany. The facility provides a large variety of geological formations and ground substrates on a compact terrain, as well as a rich set of power and network connections. As a unique feature of the outdoor test site, we prepared a dedicated link to the German Space Operations Center that enables telerobotic experiments from the ISS. Furthermore, we provide an optical tracking system for ground-truth measurement and control. We describe the design and construction process of the test site and present an overview of its features. Three experiments with our robots LRU1, LRU2 and the Scout rover, covering autonomous navigation and mapping, autonomous manipulation and sampling, and advanced mobility tests, demonstrate the use of the test site.

    Acute mountain sickness.

    Acute mountain sickness (AMS) is a clinical syndrome occurring in otherwise healthy normal individuals who ascend rapidly to high altitude. Symptoms develop over a period of a few hours or days. The usual symptoms include headache, anorexia, nausea, vomiting, lethargy, unsteadiness of gait, undue dyspnoea on moderate exertion and interrupted sleep. AMS is unrelated to physical fitness, sex or age, except that young children over two years of age are unduly susceptible. One of the striking features of AMS is the wide variation in individual susceptibility, which is to some extent consistent. Some subjects never experience symptoms at any altitude, while others have repeated attacks on ascending to quite modest altitudes. Rapid ascent to altitudes of 2500 to 3000 m will produce symptoms in some subjects, while after ascent over 2-3 days to 5000 m most subjects will be affected, some to a marked degree. In general, the more rapid the ascent, the higher the altitude reached and the greater the physical exertion involved, the more severe AMS will be. If the subjects stay at the altitude reached, there is a tendency for acclimatization to occur and symptoms to remit over 1-7 days.