
    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability
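
    To make the memory mechanism concrete, the following is a minimal Python sketch of one plausible multi-store update rule: features that are repeatedly re-observed are promoted from a short-term store into the reference view, while reference features that go unobserved are eventually forgotten. The class names and thresholds are illustrative assumptions, not the authors' implementation.

        from dataclasses import dataclass, field

        @dataclass
        class Feature:
            rehearsals: int = 0   # times re-observed while in short-term memory
            misses: int = 0       # consecutive non-observations in long-term memory

        @dataclass
        class NodeMemory:
            stm: dict = field(default_factory=dict)  # short-term store (candidates)
            ltm: dict = field(default_factory=dict)  # long-term store (reference view)

            def update(self, observed_keys, promote_after=3, forget_after=5):
                # Re-observed reference features are rehearsed; stale ones decay.
                for key, f in list(self.ltm.items()):
                    if key in observed_keys:
                        f.misses = 0
                    else:
                        f.misses += 1
                        if f.misses >= forget_after:
                            del self.ltm[key]        # forget outdated appearance
                # New observations enter short-term memory; repeatedly seen
                # features are promoted into the reference view.
                for key in observed_keys:
                    if key in self.ltm:
                        continue
                    f = self.stm.setdefault(key, Feature())
                    f.rehearsals += 1
                    if f.rehearsals >= promote_after:
                        self.ltm[key] = self.stm.pop(key)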

    AutoNavi3AT software interface for autonomous navigation on urban roads using omnidirectional vision and a mobile robot

    The design of efficient autonomous navigation systems for mobile robots or autonomous vehicles is fundamental to performing their programmed tasks. Two kinds of sensors are typically used in urban road following: LIDAR and cameras. LIDAR sensors are highly accurate but expensive, and extra work is needed before humans can interpret point cloud scenes; visual content, by contrast, is more readily understood by human beings, which makes it better suited to human-robot interfaces. In this work, a computer vision-based urban road following software tool called AutoNavi3AT for mobile robots and autonomous vehicles is presented. The urban road following scheme proposed in AutoNavi3AT uses vanishing point estimation and tracking on panoramic images to control the mobile robot's heading on the urban road. To do this, Gabor filters, region growing, and particle filters were used. In addition, laser range data are employed for local obstacle avoidance. Quantitative results were obtained from two kinds of tests: one using datasets acquired at the Universidad del Valle campus, and field tests using a Pioneer 3AT mobile robot. As a result, average improvements in vanishing point estimation of 68.26% and 61.46% were achieved, which is useful for mobile robots and autonomous vehicles moving on urban roads.
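
    As a rough illustration of the vanishing point stage, here is a simplified Python sketch of Gabor-based orientation voting, assuming OpenCV and NumPy. The kernel parameters, thresholds, and one-directional ray voting are simplifications for illustration, not the AutoNavi3AT implementation (which additionally uses region growing and particle-filter tracking).

        import cv2
        import numpy as np

        def vanishing_point(gray, n_orient=8):
            """Estimate a road vanishing point from a 2D grayscale image."""
            h, w = gray.shape
            # 1. Gabor filter bank: per-pixel dominant texture orientation.
            thetas = np.arange(n_orient) * np.pi / n_orient
            responses = []
            for theta in thetas:
                kern = cv2.getGaborKernel((17, 17), 4.0, theta, 8.0, 0.5)
                responses.append(cv2.filter2D(gray.astype(np.float32), -1, kern))
            responses = np.stack(responses)                 # (n_orient, h, w)
            dominant = thetas[np.argmax(np.abs(responses), axis=0)]
            strength = np.max(np.abs(responses), axis=0)
            # 2. Strongly textured pixels vote along their orientation ray;
            #    rays from road edges converge near the vanishing point.
            acc = np.zeros((h, w), np.float32)
            ys, xs = np.nonzero(strength > np.percentile(strength, 90))
            for y, x in zip(ys, xs):
                for r in range(5, max(h, w), 4):            # march up the ray
                    vy = int(y - r * np.sin(dominant[y, x]))
                    vx = int(x + r * np.cos(dominant[y, x]))
                    if 0 <= vy < h and 0 <= vx < w:
                        acc[vy, vx] += strength[y, x]
            return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col)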

    Omnidirectional video stabilisation on a virtual camera using sensor fusion

    This paper presents a method for robustly stabilising omnidirectional video in the presence of significant rotations and translations by creating a virtual camera and using a combination of sensor fusion and scene tracking. Real-time rotational movements of the camera are measured by an Inertial Measurement Unit (IMU), which provides an initial estimate of the ego-motion of the camera platform. Image registration is then used to refine these estimates. The calculated ego-motion is then used to adjust an extract of the omnidirectional video, forming a virtual camera that stays focused on the scene. Experiments show the technique is effective under challenging ego-motions and overcomes deficiencies associated with unimodal approaches, making it robust and suitable for many surveillance applications.
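
    A complementary filter is one simple way to realise this kind of fusion; the sketch below is an assumed single-axis formulation with an illustrative gain, not the paper's exact method. It blends a fast but drifting dead-reckoned IMU prediction with a slower, drift-free image-registration measurement.

        def stabilise_step(angle_prev, gyro_rate, dt, registration_angle, k=0.1):
            """Fuse an IMU rate estimate with an image-registration measurement.

            angle_prev:          previous camera rotation estimate (rad)
            gyro_rate:           IMU angular rate (rad/s), fast but drifts
            registration_angle:  rotation recovered by image registration (rad),
                                 slow but drift-free
            k:                   blending gain toward the vision measurement
            """
            predicted = angle_prev + gyro_rate * dt   # dead-reckoned prediction
            return (1.0 - k) * predicted + k * registration_angle

    The fused angle is then used to counter-rotate the virtual camera extract so that the scene stays fixed despite platform ego-motion.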

    Multiple Views Tracking of Maritime Targets

    This paper explores techniques for multiple views target tracking in a maritime environment using a mobile surveillance platform. We utilise an omnidirectional camera to capture full spherical video and use an Inertial Measurement Unit (IMU) to estimate the platform's ego-motion. For each target, a part of the omnidirectional video is extracted, forming a corresponding set of virtual cameras. Each target is then tracked using a dynamic template matching method and particle filtering. The tracker's predictions are then used to continuously adjust the orientations of the virtual cameras, keeping a lock on the targets. We demonstrate the performance of the application in several real-world maritime settings.
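
    The per-target loop can be sketched as a particle filter over a virtual camera's pan/tilt, with template matching as the likelihood. In the Python sketch below, frame_patch_fn (which extracts a virtual view for a given pan/tilt) and all parameters are assumed placeholders rather than the paper's implementation.

        import numpy as np

        def track_step(particles, weights, template, frame_patch_fn, noise=0.02):
            # 1. Motion model: diffuse pan/tilt hypotheses for the virtual camera.
            particles = particles + np.random.normal(0.0, noise, particles.shape)
            # 2. Measurement: template-matching score as the particle likelihood.
            for i, (pan, tilt) in enumerate(particles):
                patch = frame_patch_fn(pan, tilt)        # extract a virtual view
                err = np.mean((patch - template) ** 2)   # SSD template match
                weights[i] = np.exp(-err / 0.1)
            weights = weights / weights.sum()
            # 3. The weighted mean re-aims the virtual camera at the target.
            pan_tilt = weights @ particles
            # 4. Systematic resampling keeps the particle set healthy.
            u = (np.arange(len(weights)) + np.random.rand()) / len(weights)
            idx = np.searchsorted(np.cumsum(weights), u)
            return particles[idx], np.full(len(weights), 1.0 / len(weights)), pan_tilt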

    An adaptive appearance-based map for long-term topological localization of mobile robots

    This work considers a mobile service robot which uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the internal representation may become out of date after some time. To solve this problem, the robot needs to continually adapt its internal representation to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor.
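
    Complementary to the memory-update sketch shown earlier, the appearance-based localization step itself can be pictured as nearest-node matching over view descriptors. A toy Python sketch, with the descriptor representation deliberately left abstract:

        import numpy as np

        def localise(current_view, node_views):
            """Return the id of the map node whose stored view descriptor best
            matches the current omnidirectional view (cosine similarity)."""
            def sim(a, b):
                return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            return max(node_views, key=lambda nid: sim(current_view, node_views[nid]))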

    Omnidirectional Vision-based Robot Localization on Soccer Field by Particle Filter

    An omnidirectional vision-based localization method based on the particle filter is proposed in this paper to estimate the location of a robot on a soccer field. Two kinds of sensor information are used so that the robot can estimate its location on the field and then decide on an appropriate strategy. One is the action sensor information obtained from the motors' feedback, and the other is the observation sensor information obtained from the image captured by an omnidirectional vision system. The action sensor information is used to predict the robot's location distribution, which is represented by particles. The omnidirectional image is used to observe the environment. The differences between the environment information at each particle's location and the environment information observed by the robot are used to calculate the belief value of each particle. The posture of the particle with the highest belief is then taken as the estimated posture of the robot. Experimental results are presented to illustrate the effectiveness of the proposed method.
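
    The described scheme is essentially Monte Carlo localization. A compact Python sketch follows, where expected_observation_fn (what a particle at a given posture should see) and the noise levels are assumed placeholders:

        import numpy as np

        def mcl_step(particles, odom_delta, observation, expected_observation_fn,
                     motion_noise=(0.05, 0.05, 0.02)):
            # particles: (N, 3) array of (x, y, heading) postures
            # 1. Prediction from the action (motor feedback) sensor.
            particles = particles + odom_delta \
                        + np.random.normal(0, motion_noise, particles.shape)
            # 2. Belief: compare what each particle *should* see with what the
            #    omnidirectional camera actually observed.
            beliefs = np.array([np.exp(-np.sum((expected_observation_fn(p)
                                                - observation) ** 2))
                                for p in particles])
            beliefs /= beliefs.sum()
            # 3. The posture of the highest-belief particle is the estimate.
            return particles, particles[np.argmax(beliefs)]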

    A vision system for mobile maritime surveillance platforms

    Mobile surveillance systems play an important role in minimising security and safety threats in high-risk or hazardous environments. Providing a mobile marine surveillance platform with situational awareness of its environment is important for mission success. An essential part of situational awareness is the ability to detect and subsequently track potential target objects. Typically, the exact type of target objects is unknown, hence detection is addressed as a problem of finding parts of an image that stand out in relation to their surrounding regions or are atypical to the domain. Contrary to existing saliency methods, this thesis proposes the use of a domain specific visual attention approach for detecting potential regions of interest in maritime imagery. For this, low-level features that are indicative of maritime targets are identified. These features are then evaluated with respect to their local, regional, and global significance. Together with a domain specific background segmentation technique, the features are combined in a Bayesian classifier to direct visual attention to potential target objects. The maritime environment introduces challenges to the camera system: gusts, wind, swell, or waves can cause the platform to move drastically and unpredictably. Pan-tilt-zoom cameras that are often utilised for surveillance tasks can adjust their orientation to provide a stable view onto the target. However, in rough maritime environments this requires high-speed and precise inputs. In contrast, omnidirectional cameras provide a full spherical view, which allows the acquisition and tracking of multiple targets at the same time. However, the target itself only occupies a small fraction of the overall view. This thesis proposes a novel, target-centric approach for image stabilisation. A virtual camera is extracted from the omnidirectional view for each target and is adjusted based on the measurements of an inertial measurement unit and an image feature tracker. The combination of these two techniques in a probabilistic framework allows for stabilisation of rotational and translational ego-motion. Furthermore, it has the specific advantage of being robust to loosely calibrated and synchronised hardware, since the fusion of tracking and stabilisation means that tracking uncertainty can be used to compensate for errors in calibration and synchronisation. This eliminates the need for tedious calibration phases and the adverse effects of assembly slippage over time. Finally, this thesis combines the visual attention and omnidirectional stabilisation frameworks and proposes a multi-view tracking system that is capable of detecting potential target objects in the maritime domain. Although the visual attention framework performed well on the benchmark datasets, the evaluation on real-world maritime imagery produced a high number of false positives. An investigation reveals that the problem is that benchmark datasets are unconsciously influenced by human shot selection, which greatly simplifies the problem of visual attention. Despite the number of false positives, the tracking approach itself remains robust even when a high number of false positives are tracked.
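
    One plausible reading of the described cue combination is a naive-Bayes update of per-pixel target odds. The Python sketch below assumes independent binary cues and illustrative likelihood ratios; it is not the thesis's exact classifier.

        import numpy as np

        def target_probability(feature_maps, likelihood_ratios, prior=0.01):
            """feature_maps: list of HxW boolean maps (cue fired / not fired).
            likelihood_ratios: per cue, P(cue | target) / P(cue | background)."""
            odds = np.full(feature_maps[0].shape, prior / (1.0 - prior))
            for fmap, lr in zip(feature_maps, likelihood_ratios):
                odds = odds * np.where(fmap, lr, 1.0)   # independent-cue update
            return odds / (1.0 + odds)                  # posterior P(target | cues)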