
    Map-enhanced visual taxiway extraction for autonomous taxiing of UAVs

    In this paper, a map-enhanced method is proposed for vision-based taxiway centreline extraction, which is a prerequisite of autonomous visual navigation systems for unmanned aerial vehicles. Compared with other sensors, cameras provide richer information. Consequently, vision-based navigation has been studied intensively over the past two decades, and computer vision techniques have been shown to be capable of dealing with a wide range of problems in applications. However, these techniques have significant drawbacks: their accuracy and robustness may not meet the required standard in some application scenarios. In this paper, a taxiway map is incorporated into the analysis as prior knowledge to improve vehicle localisation and vision-based centreline extraction. We develop a map-updating algorithm so that the traditional map can adapt to the dynamic environment via Bayesian learning. The developed method is illustrated using a simulation study.
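
    The abstract does not spell out the map-updating rule beyond naming Bayesian learning; as a hedged illustration only, the Python sketch below shows one generic way a per-cell centreline belief could be adapted from noisy visual detections with a Beta-Bernoulli update. The class name, prior values, and update rule are assumptions for illustration, not the paper's actual algorithm.

```python
# Hypothetical sketch: Bayesian (Beta-Bernoulli) update of a taxiway-map cell's
# belief that it lies on the centreline, given noisy visual detections.
# Model and parameters are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class CentrelineCell:
    alpha: float = 1.0  # pseudo-count of "centreline" evidence (prior)
    beta: float = 1.0   # pseudo-count of "not centreline" evidence (prior)

    def update(self, detected: bool, weight: float = 1.0) -> None:
        """Fold one visual observation into the cell's belief."""
        if detected:
            self.alpha += weight
        else:
            self.beta += weight

    @property
    def belief(self) -> float:
        """Posterior mean probability that the cell is on the centreline."""
        return self.alpha / (self.alpha + self.beta)

# Usage: start from the prior map, then adapt to what the camera actually sees.
cell = CentrelineCell(alpha=5.0, beta=1.0)   # prior map says "likely centreline"
for detection in [True, True, False, True]:  # noisy frame-by-frame detections
    cell.update(detection)
print(f"updated centreline belief: {cell.belief:.2f}")
```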

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    The abstract is provided in the attachment.

    Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    In the modern information age, the quantity and complexity of spatiotemporal data are increasing rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data produce information clusters and overload, as well as a high cognitive load for their users. To support future safety-critical situations and time-critical decision-making in dynamic environments, and to make managing, browsing, and searching spatiotemporal data easy and effective, we propose an asynchronous, scalable, and comprehensive method for organizing, displaying, and interacting with spatiotemporal data. The method allows operators to navigate through spatiotemporal information rather than through the environments being examined, while maintaining all necessary global and local situation awareness. To empirically demonstrate the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. We designed and conducted a user study and interaction-mode experiments. The Event-Lens system showed a consistent advantage across multiple moving-target marking-task performance measures, and participants' attentional control, spatial ability, and action video gaming experience affected their overall performance.
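
    The abstract does not describe how Event-Lens prioritizes asynchronous sensor events; purely as a hedged sketch, the Python snippet below shows one generic way detections from multiple feeds could be merged into a single prioritized stream for an operator. The Event and EventQueue names, the urgency-plus-recency scoring rule, and the field layout are illustrative assumptions, not the system's actual design.

```python
# Hypothetical sketch: merging asynchronous multi-sensor events into one
# prioritized stream for an operator display. Scoring rule and fields are
# illustrative assumptions, not Event-Lens's actual design.
import heapq
import itertools
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class Event:
    priority: float                        # lower value = surfaced to the operator first
    seq: int                               # tie-breaker keeping insertion order stable
    sensor_id: str = field(compare=False)
    timestamp: float = field(compare=False)
    description: str = field(compare=False)

class EventQueue:
    """Merges asynchronous events from many sensor feeds into one ordered stream."""

    def __init__(self) -> None:
        self._heap: List[Event] = []
        self._counter = itertools.count()

    def push(self, sensor_id: str, timestamp: float,
             urgency: float, description: str) -> None:
        # Illustrative priority rule: more urgent (and, weakly, more recent) events first.
        priority = -(urgency + 0.01 * timestamp)
        heapq.heappush(self._heap, Event(priority, next(self._counter),
                                         sensor_id, timestamp, description))

    def pop_next(self) -> Optional[Event]:
        return heapq.heappop(self._heap) if self._heap else None

# Usage: feeds report asynchronously; the operator is shown the top-priority event next.
queue = EventQueue()
queue.push("cam-3", timestamp=12.0, urgency=0.9, description="target entered zone A")
queue.push("radar-1", timestamp=11.5, urgency=0.4, description="routine track update")
nxt = queue.pop_next()
print(nxt.sensor_id, nxt.description)  # -> cam-3 target entered zone A
```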

    Brain dynamics during landmark-based learning in spatial navigation

    In the current study, I investigated both human behavior and brain dynamics during spatial navigation to gain a better understanding of human navigational strategies and of the brain signals that underlie spatial cognition. To this end, a custom-built virtual reality task and 64-channel scalp electroencephalography (EEG) were used. As a first step, we presented a novel, straightforward, yet powerful tool for evaluating individual differences during navigation: a virtual radial-arm maze inspired by animal experiments. The virtual maze is designed and furnished like an art gallery to provide a more realistic and engaging environment for the subjects' exploration. We investigated whether different sets of instructions (explicit or implicit) affect navigational performance, and we assessed the effect of the instructions on exploration strategies during both place learning and recall. We tested 42 subjects and evaluated their way-finding ability. Individual differences were assessed through analysis of the navigational paths, which permitted the isolation and definition of a few strategies: a more explicit strategy, adopted by subjects given explicit instructions, and an implicit strategy, adopted by subjects given implicit instructions. The second step aimed to explore brain dynamics and neurophysiological activity during spatial navigation; more specifically, how navigation-related brain regions are connected and how their interactions and electrical activity vary with the navigational task and the environment. This experiment was divided into a learning phase and a test phase. The same virtual maze (the art gallery) as in the behavioral part of the study was used, so that subjects performed landmark-based navigation. The main task was to find and memorize the positions of several goals within the environment during the learning phase and to retrieve the spatial information of the goals during the test phase. We recorded the EEG signals of 20 subjects during the experiment, and both scalp-level and source-level analyses were employed to determine how the brain represents the spatial location of landmarks and targets and, more precisely, how different brain regions contribute to spatial orientation and landmark-based learning during navigation.

    Spatial memories in place recognition

    The dissertation investigates the role of spatial memories in place recognition. The first study focuses on the interplay between spatial working memory and place encoding in long-term memory. The second study addresses the question of whether the retrieval of temporal context from episodic memory, which is crucial for the integration of multiple views in place recognition, elicits a characteristic event-related potential.