Simulation of a navigation system based on computer vision and correlation algorithms
Navigation based on computer vision is a low-cost method for calculating a drone's position. This paper presents an implementation and simulation of a navigation system based on extreme correlation. The simulation was done using ROS (Robot Operating System), Gazebo (a 3D dynamic simulator), a drone model and a 3D urban flight environment. Correlation between the captured image and a georeferenced image was used to calculate the drone's position. Tests showed a position RMS error of less than 3 m and a 40 ms execution time.
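The core step described above — locating a captured camera image inside a georeferenced map by correlation — can be sketched with plain normalized cross-correlation. This is a minimal illustrative sketch, not the paper's actual algorithm; the function name and the brute-force NumPy search are assumptions.

```python
import numpy as np

def ncc_locate(georef, captured):
    """Locate the `captured` patch inside the `georef` map via
    normalized cross-correlation; returns (row, col) of the best match.
    Illustrative brute-force search, not an optimized implementation."""
    gh, gw = georef.shape
    ch, cw = captured.shape
    c = captured - captured.mean()
    cn = np.linalg.norm(c)
    best, best_pos = -np.inf, (0, 0)
    for r in range(gh - ch + 1):
        for col in range(gw - cw + 1):
            w = georef[r:r + ch, col:col + cw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * cn
            if denom == 0:
                continue
            score = float((wz * c).sum() / denom)
            if score > best:
                best, best_pos = score, (r, col)
    return best_pos

# toy example: cut a patch out of a random "map" and recover its offset
rng = np.random.default_rng(0)
mapimg = rng.random((40, 40))
patch = mapimg[12:20, 25:33].copy()
print(ncc_locate(mapimg, patch))  # → (12, 25)
```

In a real system the recovered (row, col) offset would be converted to world coordinates using the georeferencing of the map image.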
USE OF ASSISTED PHOTOGRAMMETRY FOR INDOOR AND OUTDOOR NAVIGATION PURPOSES
Nowadays, devices and applications that require navigation solutions are continuously growing. Consider, for instance, the increasing demand for mapping information or the development of applications based on users' locations. In some cases an approximate solution (e.g. at room level) may be sufficient, but in many cases a better solution is required.
The navigation problem has long been solved using Global Navigation Satellite Systems (GNSS). However, GNSS can be useless in obstructed areas, such as urban canyons or inside buildings. An interesting low-cost alternative is photogrammetry, assisted with additional information to scale the photogrammetric problem and to recover a solution even in situations that are critical for image-based methods (e.g. poorly textured surfaces). In this paper, the use of assisted photogrammetry has been tested in both outdoor and indoor scenarios. The outdoor navigation problem has been addressed by developing a positioning system that uses Ground Control Points extracted from urban maps as constraints and tie points automatically extracted from the images acquired during the survey. The proposed approach has been tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m.
For indoor navigation, a solution has been designed to integrate the data delivered by the Microsoft Kinect, by identifying interesting features in the RGB images and re-projecting them onto the point clouds generated from the delivered depth maps. These points have then been used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with an error of a few centimeters.
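Estimating the rigid motion between two point clouds with matched points, as described above, is commonly done with an SVD-based (Kabsch) solution. The following is an illustrative sketch under that assumption; the abstract does not specify which estimator the authors used.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (rows of P and Q are matched 3D points), via SVD of the
    centered cross-covariance matrix."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# toy check: rotate a cloud 30 degrees about z, shift it, recover the motion
rng = np.random.default_rng(1)
P = rng.random((50, 3))
a = np.pi / 6
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([0.1, -0.2, 0.3])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [0.1, -0.2, 0.3]))  # → True True
```

Chaining the per-frame (R, t) estimates yields the recovered trajectory.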
Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification
In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology.
This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied to the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate-gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
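The "surrogate GPS" idea above — fusing a landmark-derived position measurement into a navigation filter — reduces, in its simplest linear form, to a standard Kalman measurement update. The sketch below assumes a toy 1D constant-velocity state; the thesis's actual filter design is not specified here.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse measurement z
    (e.g. a landmark-derived position fix) into state x with
    covariance P, given measurement model H and noise covariance R."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)               # corrected state
    P = (np.eye(len(x)) - K @ H) @ P      # corrected covariance
    return x, P

# toy state [position, velocity]; a landmark sighting measures position only
x = np.array([0.0, 1.0])
P = np.diag([4.0, 1.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
x, P = kf_update(x, P, np.array([2.0]), H, R)
print(x)
```

Because the prior position is uncertain (variance 4) relative to the measurement (variance 0.25), the update pulls the position estimate most of the way toward the measured value of 2.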
Dynamic Body VSLAM with Semantic Constraints
Image-based reconstruction of urban environments is a challenging problem that deals with the optimization of a large number of variables and has several sources of error, such as the presence of dynamic objects. Since most large-scale approaches assume static scenes, dynamic objects are relegated to the noise-modeling section of such systems. This is an approach of convenience, since the RANSAC-based framework used to compute most multiview geometric quantities for static scenes naturally confines dynamic objects to the class of outlier measurements. However, reconstructing dynamic objects along with the static environment gives us a complete picture of an urban environment. Such understanding can then be used for important robotic tasks like path planning for autonomous navigation, obstacle tracking and avoidance, and other areas. In this paper, we propose a system for robust SLAM that works in both static and dynamic environments. To overcome the challenge of dynamic objects in the scene, we propose a new model that incorporates semantic constraints into the reconstruction algorithm. While some of these constraints are based on multi-layered dense CRFs trained over appearance as well as motion cues, other proposed constraints can be expressed as additional terms in the bundle adjustment optimization process that iteratively refines 3D structure and camera/object motion trajectories. We show results on the challenging KITTI urban dataset for accuracy of motion segmentation and for reconstruction of the trajectory and shape of moving objects relative to ground truth. We are able to show a significant average relative error reduction for moving-object trajectory reconstruction relative to state-of-the-art methods like VISO 2, as well as standard bundle adjustment algorithms.
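The core quantity that bundle adjustment iteratively minimizes, as mentioned above, is the reprojection error of 3D points against their image observations. A minimal sketch of that residual (the semantic-constraint terms proposed in the paper are not shown; all names here are illustrative):

```python
import numpy as np

def reprojection_residuals(points3d, R, t, K, observations):
    """Residuals minimized in bundle adjustment: project each 3D world
    point through camera pose (R, t) and intrinsics K, then subtract
    the observed pixel coordinates."""
    cam = (R @ points3d.T).T + t          # world -> camera frame
    proj = (K @ cam.T).T                  # apply intrinsics
    px = proj[:, :2] / proj[:, 2:3]       # perspective division
    return (px - observations).ravel()

# toy scene: identity pose, simple intrinsics, noise-free observations
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.2, 4.0]])
obs = np.array([[320.0, 240.0], [382.5, 215.0]])
r = reprojection_residuals(pts, np.eye(3), np.zeros(3), K, obs)
print(np.allclose(r, 0))  # → True
```

A nonlinear least-squares solver would adjust the pose and 3D points to drive these residuals (plus any additional constraint terms) toward zero.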
Towards Rapid Generation and Visualisation of Large 3D Urban Landscapes for Mobile Device Navigation
In this paper a procedural 3D modelling solution for mobile devices is presented, based on scripting algorithms that allow both automatic and semi-automatic creation of photorealistic virtual urban content. The combination of aerial images, GIS data, 2D ground maps and terrestrial photographs as input data, coupled with a user-friendly customized interface, permits the automatic and interactive generation of large-scale, accurate, georeferenced and fully textured 3D virtual city content that can be specially optimized for use with mobile devices and with navigational tasks in mind. Furthermore, a user-centred mobile virtual reality (VR) visualisation and interaction tool operating on PDAs (Personal Digital Assistants) for pedestrian navigation is also discussed. Via this engine, the import and display of various navigational file formats (2D and 3D) is supported, including a comprehensive, user-friendly front-end graphical user interface providing immersive virtual 3D navigation.
Featureless visual processing for SLAM in changing outdoor environments
Vision-based SLAM is mostly a solved problem provided clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit over rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
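The featureless, low-resolution scene matching described above can be illustrated with whole-image comparison of intensity-normalized thumbnails, where normalization gives some robustness to the lighting changes the paper targets. This is a simplified sketch in the spirit of RatSLAM's template matching; the function name, threshold and normalization scheme are assumptions, not the paper's exact method.

```python
import numpy as np

def best_scene_match(templates, view, threshold=0.1):
    """Featureless scene recall: compare a low-resolution, intensity-
    normalized view against stored templates using mean absolute
    difference; return the index of the best match, or -1 (new scene)."""
    v = (view - view.mean()) / (view.std() + 1e-9)
    best_i, best_d = -1, np.inf
    for i, t in enumerate(templates):
        tn = (t - t.mean()) / (t.std() + 1e-9)
        d = np.abs(v - tn).mean()
        if d < best_d:
            best_i, best_d = i, d
    return best_i if best_d < threshold else -1

# toy example: the same scene seen under dimmer, offset lighting still matches
rng = np.random.default_rng(2)
scenes = [rng.random((8, 16)) for _ in range(5)]
darker = 0.5 * scenes[3] + 0.1   # linear lighting change of scene 3
print(best_scene_match(scenes, darker))  # → 3
```

Because the normalization removes linear brightness and contrast changes, the dimmed view still matches its stored template exactly.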
A contribution to vision-based autonomous helicopter flight in urban environments
A navigation strategy that exploits optic flow and inertial information to continuously avoid collisions with both lateral and frontal obstacles has been used to control a simulated helicopter flying autonomously in a textured urban environment. Experimental results demonstrate that the corresponding controller generates cautious behavior, whereby the helicopter tends to stay in the middle of narrow corridors, while its forward velocity is automatically reduced when the obstacle density increases. When confronted with a frontal obstacle, the controller is also able to generate a tight U-turn that ensures the UAV's survival. The paper provides comparisons with related work and discusses the applicability of the approach to real platforms.
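The corridor-centering and slow-down behaviors described above follow from a classic optic-flow balance strategy: nearer obstacles produce larger flow, so the vehicle turns away from the side with more flow and reduces speed as overall flow grows. A minimal sketch under those assumptions (the controller names, gains and sign convention are illustrative, not the paper's implementation):

```python
import numpy as np

def flow_balance_control(flow, k_turn=1.0, v_max=2.0):
    """Balance-strategy control from a dense optic-flow field of shape
    (H, W, 2). Positive yaw_rate means turn left; flow is larger on the
    side with nearer obstacles, so we turn toward the smaller-flow side.
    Forward speed shrinks as mean flow (obstacle proximity) grows."""
    mag = np.linalg.norm(flow, axis=2)
    half = flow.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    yaw_rate = k_turn * (right - left) / (left + right + 1e-9)
    forward_v = v_max / (1.0 + mag.mean())
    return yaw_rate, forward_v

# toy field: strong flow on the left half -> turn right (negative yaw), slow down
flow = np.zeros((10, 20, 2))
flow[:, :10, 0] = 2.0
yaw, v = flow_balance_control(flow)
print(yaw < 0, v < 2.0)  # → True True
```

With equal flow on both sides the yaw command vanishes, which is exactly the corridor-centering behavior the abstract reports.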