
    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal’s behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, their vision may be optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the informational content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen in many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are treated as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit from processing information from their two eyes independently.
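The view-matching idea described above can be sketched as a rotational image difference function: the current panoramic view is rotated pixel by pixel, and the rotation that minimises its difference from a stored view gives a directional estimate. This is a minimal toy illustration, not the paper's implementation; the 1-D 8-pixel panoramas below are invented.

```python
# Minimal sketch of orientation by view matching: find the rotation of the
# current 1-D panorama that best matches a stored panorama (a rotational
# image difference function). Toy data; real views are 2-D and low-resolution.

def rotational_image_difference(stored, current):
    """Return (best_shift_in_pixels, list_of_differences) for two 1-D panoramas."""
    n = len(stored)
    assert len(current) == n
    diffs = []
    for shift in range(n):
        # Rotate the current view by 'shift' pixels (a panorama wraps around).
        rotated = current[shift:] + current[:shift]
        diffs.append(sum((a - b) ** 2 for a, b in zip(stored, rotated)))
    best = min(range(n), key=diffs.__getitem__)
    return best, diffs

# Toy 8-pixel panorama: the current view is the stored view rotated by 3 pixels.
stored = [0, 1, 2, 5, 9, 5, 2, 1]
current = stored[-3:] + stored[:-3]
shift, _ = rotational_image_difference(stored, current)
print(shift)  # → 3, the recovered heading offset
```

Lowering the resolution (fewer pixels) flattens the difference landscape, which is one way to picture the specificity/generalisation trade-off the abstract mentions.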

    Satellite-map position estimation for the Mars rover

    A method for locating the Mars rover using an elevation map generated from satellite data is described. In exploring its environment, the rover is assumed to generate a local rover-centered elevation map that can be used to extract information about the relative position and orientation of landmarks corresponding to local maxima. These landmarks are integrated into a stochastic map which is then matched with the satellite map to obtain an estimate of the robot's current location. The landmarks are not explicitly represented in the satellite map. The results of the matching algorithm correspond to a probabilistic assessment of whether or not the robot is located within a given region of the satellite map. By assigning a probabilistic interpretation to the information stored in the satellite map, the researchers are able to provide a precise characterization of the results computed by the matching algorithm.
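A rough caricature of the probabilistic map-matching step, not the paper's algorithm: a small rover-centred elevation patch is scored against every placement in a larger satellite elevation grid, and the scores are normalised into a probability over candidate positions. All elevation values here are made-up toy data.

```python
# Toy probabilistic matching of a local elevation patch against a satellite
# elevation map: each candidate offset gets a likelihood that decreases with
# squared elevation error, normalised into a probability distribution.
import math

def match_patch(satellite, patch):
    """Return {(row, col): probability} over all placements of patch."""
    H, W = len(satellite), len(satellite[0])
    h, w = len(patch), len(patch[0])
    scores = {}
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            err = sum((satellite[r + i][c + j] - patch[i][j]) ** 2
                      for i in range(h) for j in range(w))
            scores[(r, c)] = math.exp(-err)  # lower error -> higher weight
    total = sum(scores.values())
    return {pos: s / total for pos, s in scores.items()}

satellite = [
    [0, 0, 0, 0],
    [0, 3, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
patch = [[3, 1],
         [1, 0]]
probs = match_patch(satellite, patch)
best = max(probs, key=probs.get)
print(best)  # → (1, 1), the most probable placement of the patch
```

The output is a distribution rather than a single point, matching the abstract's framing of the result as a probabilistic assessment of which region the robot occupies.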

    Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation

    Autonomous harvesting and transportation is a long-term goal of the forest industry. One of the main challenges is the accurate localization of both vehicles and trees in a forest. Forests are unstructured environments where it is difficult to find a group of significant landmarks for current fast feature-based place recognition algorithms. This paper proposes a novel approach in which local observations are matched to a general tree map using the Delaunay triangulation as the representation format. Instead of point-cloud-based matching methods, we utilize a topology-based method. First, tree trunk positions are registered in a prior run made by a forest harvester. Second, the resulting map is Delaunay triangulated. Third, a local submap of the autonomous robot is registered, triangulated, and matched using triangular similarity maximization to estimate the position of the robot. We test our method on a dataset collected at a forestry site in Lieksa, Finland. A total of 2100 m of harvester path was recorded by an industrial harvester with a 3D laser scanner and a geolocation unit fixed to the frame. Our experiments show a 12 cm standard deviation in location accuracy, with real-time data processing for speeds not exceeding 0.5 m/s. This accuracy and speed limit are realistic for forest operations.
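A sketch of the triangle-matching step only, under the assumption that the Delaunay triangulation of the trunk positions has already been computed (e.g. with `scipy.spatial.Delaunay`). Triangles are compared by their sorted side-length signatures, which are invariant to translation and rotation, so a locally observed triangle can be matched to its congruent counterpart in the global map. The trunk coordinates below are invented.

```python
# Toy "triangular similarity" matching: compare tree-trunk triangles by
# sorted side lengths (translation- and rotation-invariant signatures) and
# return the global-map triangle most similar to the local observation.
import math

def side_signature(tri):
    """Sorted side lengths of a triangle given as three (x, y) points."""
    a, b, c = tri
    return sorted([math.dist(a, b), math.dist(b, c), math.dist(c, a)])

def best_match(map_triangles, local_triangle):
    """Index of the map triangle most similar to the local one."""
    sig = side_signature(local_triangle)
    def dissimilarity(tri):
        return sum((x - y) ** 2 for x, y in zip(side_signature(tri), sig))
    return min(range(len(map_triangles)),
               key=lambda i: dissimilarity(map_triangles[i]))

# Global map triangles (from the prior harvester run) and one local
# observation that is the second map triangle translated by (10, 10).
map_tris = [((0, 0), (4, 0), (0, 3)),
            ((1, 1), (6, 1), (1, 7)),
            ((2, 2), (3, 9), (8, 2))]
local = ((11, 11), (16, 11), (11, 17))
print(best_match(map_tris, local))  # → 1, the congruent map triangle
```

Once corresponding triangles are identified, the rigid transform between their vertices yields the robot's pose estimate in the map frame.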

    Visual navigation in ants

    Navigating efficiently in the outside world requires many cognitive abilities like extracting, memorising, and processing information. The remarkable navigational abilities of insects are an existence proof of how small brains can produce exquisitely efficient, robust behaviour in complex environments. During their foraging trips, insects, like ants or bees, are known to rely on both path integration and learnt visual cues to recapitulate a route or reach familiar places like the nest. The strategy of path integration is well understood, but much less is known about how insects acquire and use visual information. Field studies give good descriptions of visually guided routes, but our understanding of the underlying mechanisms comes mainly from simplified laboratory conditions using artificial, geometrically simple landmarks. My thesis proposes an integrative approach that combines (1) field and lab experiments on two visually guided ant species (Melophorus bagoti and Gigantiops destructor) and (2) an analysis of panoramic pictures recorded along the animal's route. The use of panoramic pictures allows an objective quantification of the visual information available to the animal. Results from both species, in the lab and the field, converged, showing that ants do not segregate their visual world into objects, such as landmarks or discrete features, as a human observer might assume. 
Instead, efficient navigation seems to arise from the use of cues spread across the ants' panoramic visual field, encompassing both proximal and distal objects together. Such relatively unprocessed panoramic views, even at low resolution, provide remarkably unambiguous spatial information in natural environments. Using such a simple but efficient panoramic visual input, rather than focusing on isolated landmarks, seems an appropriate strategy to cope with the complexity of natural scenes and the poor resolution of insects' eyes. Panoramic pictures can also serve as a basis for running analytical models of navigation. The predictions of these models can be directly compared with the actual behaviour of real ants, allowing the iterative tuning and testing of different hypotheses. This integrative approach led me to the conclusion that ants do not rely on a single navigational technique, but might switch between strategies according to whether they are on or off their familiar terrain. For example, ants can robustly recapitulate a familiar route by simply aligning their body so that the current view best matches their memory. However, this strategy becomes ineffective when they are displaced away from the familiar route. In such a case, ants appear to head instead towards the regions where the skyline appears lower than the height recorded in their memory, which generally leads them closer to a familiar location. How ants choose between strategies at a given time might be based simply on the degree of familiarity of the currently perceived panoramic scene. Finally, this thesis raises questions about the nature of ant memories. Past studies proposed that ants memorise a succession of discrete 2D 'snapshots' of their surroundings. 
In contrast, results obtained here show that knowledge from the end of a foraging route (15 m) strongly impacts behaviour at the beginning of the route, suggesting that the visual knowledge of a whole foraging route may be compacted into a single holistic memory. Accordingly, repetitive training on the exact same route clearly affects the ants' behaviour, suggesting that the memorised information is processed and not 'obtained at once'. While ants navigate along their familiar route, their visual system is continually stimulated by a slowly evolving scene, and learning a general pattern of stimulation, rather than storing independent but very similar snapshots, appears a reasonable hypothesis to explain navigation on a natural scale; such learning works remarkably well with neural networks. Nonetheless, the precise nature of ants' visual memories, and how elaborate they are, remain wide open questions. Overall, my thesis tackles the nature of ants' perception and memory, as well as how both are processed together to output an appropriate navigational response. These results are discussed in the light of comparative cognition. Both vertebrates and insects have solved the same problem of navigating efficiently in the world. In light of Darwin's theory of evolution, there is no a priori reason to think that there is a clear division between the cognitive mechanisms of different species. The gap between insect and vertebrate cognitive sciences may result more from different approaches than from real differences. Research on insect navigation has been approached with a bottom-up philosophy, one that examines how simple mechanisms can produce seemingly complex behaviour. Such parsimonious solutions, like the ones explored in the present thesis, can provide useful baseline hypotheses for navigation in other larger-brained animals, and thus contribute to a more truly comparative cognition.
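The off-route skyline strategy the thesis describes can be sketched very simply: the ant heads towards the azimuth where the currently perceived skyline falls furthest below the memorised one. The skyline heights (elevation per azimuthal bin) below are invented for illustration.

```python
# Toy sketch of skyline-based homing: pick the azimuthal bin where the
# current skyline is lowest relative to the skyline memorised at the goal.

def skyline_heading(memorised, current):
    """Azimuthal bin with the largest drop of current skyline below memory."""
    assert len(memorised) == len(current)
    drops = [m - c for m, c in zip(memorised, current)]
    return max(range(len(drops)), key=drops.__getitem__)

memorised = [10, 12, 15, 11, 9, 8, 10, 13]   # skyline height at the goal
current   = [12, 12, 14, 11, 3, 9, 10, 13]   # skyline after displacement
print(skyline_heading(memorised, current))   # → 4, the bin with the largest drop
```

The intuition is that objects forming the skyline look taller the closer you are to them, so a direction where the skyline sits lower than in memory usually points back towards the memorised viewpoint.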

    Visual navigation and path tracking using street geometry information for image alignment and servoing

    Single-camera-based navigation systems need information from other sensors or from the work environment to produce reliable and accurate position measurements. Providing such trustworthy, accurate, and available information in the environment is very important. This work highlights that the well-described streets of urban environments can be exploited by drones for navigation and path tracking, so the benefit of such structures is not limited to automated driving cars. While the drone position is continuously computed using visual odometry, scene matching is used to correct the position drift based on landmarks. The drone path is defined by several waypoints, and landmarks centered on those waypoints are carefully chosen at street intersections. The known geometry and dimensions of the streets are used to estimate the image scale and orientation, which are necessary for image alignment, to compensate for the visual odometry drift, and to pass closer to the landmark center through the visual servoing process. A probabilistic Hough transform is used to detect and extract the street borders. The system is realized in a simulation environment consisting of the Robot Operating System (ROS), the 3D dynamic simulator Gazebo, and the IRIS drone model. The results demonstrate the efficiency of the suggested system, with a position RMS error of 1.4 m.
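A simplified sketch of the drift-correction idea, not the paper's implementation: the known metric width of a street fixes the image scale (metres per pixel), so a landmark detected at a street intersection yields an absolute position that resets the accumulated visual-odometry drift. All numbers and function names below are invented for illustration.

```python
# Toy drift correction: a street of known metric width fixes the image scale,
# and an observed intersection landmark then gives an absolute drone position
# that replaces the drifting visual-odometry estimate.

def image_scale(street_width_m, street_width_px):
    """Metres per pixel, from a street of known width seen in the image."""
    return street_width_m / street_width_px

def correct_position(landmark_world, landmark_offset_px, scale):
    """Absolute position from the landmark's pixel offset in the image."""
    dx_px, dy_px = landmark_offset_px
    # Drone position = landmark position minus the landmark's offset in metres.
    return (landmark_world[0] - dx_px * scale,
            landmark_world[1] - dy_px * scale)

scale = image_scale(8.0, 400)           # an 8 m wide street spans 400 px
vo_estimate = (103.7, 48.9)             # drifted visual-odometry estimate
landmark = (100.0, 50.0)                # surveyed intersection center
corrected = correct_position(landmark, (-50, 100), scale)
print(corrected)  # corrected position near (101.0, 48.0), replacing vo_estimate
```

Between such corrections the drone keeps integrating visual odometry, so the error stays bounded by the spacing of the intersection landmarks rather than growing with the path length.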

    User manual and programmer reference manual for the ATS-6 navigation model AOIPS and McIDAS versions, part 2

    The development of a navigation system for a given satellite is reported. An algorithm for converting a satellite picture-element location to an earth location, and vice versa, was defined, as well as a procedure for measuring the set of constants needed by the algorithm. A user manual briefly describing the current version of the navigation model and how to use the computer programs developed for it is presented.
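The pixel-to-earth conversion and its inverse can be caricatured by an affine model with measured constants. The real ATS-6 navigation model is far more involved (it depends on orbit and attitude); this sketch, with invented constants, only illustrates the interface of a forward/inverse conversion pair.

```python
# Toy affine stand-in for a picture-element <-> earth-location conversion:
# measured constants map an image (line, sample) to latitude/longitude and
# back, so the two functions are exact inverses of each other.

def pixel_to_earth(line, sample, c):
    """Convert an image (line, sample) to (latitude, longitude)."""
    lat = c["lat0"] + c["dlat"] * line
    lon = c["lon0"] + c["dlon"] * sample
    return lat, lon

def earth_to_pixel(lat, lon, c):
    """Convert (latitude, longitude) back to an image (line, sample)."""
    line = (lat - c["lat0"]) / c["dlat"]
    sample = (lon - c["lon0"]) / c["dlon"]
    return line, sample

consts = {"lat0": 45.0, "dlat": -0.01, "lon0": -90.0, "dlon": 0.01}
lat, lon = pixel_to_earth(100, 200, consts)
line, sample = earth_to_pixel(lat, lon, consts)
print(lat, lon, line, sample)  # round trip recovers (100, 200)
```

The "procedure for measuring the set of constants" mentioned in the abstract corresponds here to fitting `consts` from reference points whose image and earth coordinates are both known.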