
    2D Forward Looking Sonar Simulation with Ground Echo Modeling

    Imaging sonar produces clear images in underwater environments, independent of water turbidity and lighting conditions. Next-generation 2D forward looking sonars are compact and able to generate high-resolution images that facilitate underwater robotics research. Because experiments in underwater environments are difficult and expensive, considerable work has focused on sonar image simulation. However, sonar artifacts such as multi-path reflection, which cannot be ignored in water-tank environments, have not been sufficiently discussed. In this paper, we focus on the influence of echoes from the flat ground and propose a method to simulate the ground echo effect physically in acoustic images. We model multi-bounce situations within the single-bounce framework for computational efficiency. We compare a real image captured in the water tank with the synthetic images to validate the proposed method. Comment: Final version of UR202
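
    As a minimal sketch of the image-source idea behind this kind of single-bounce treatment of ground echoes (the function and geometry below are illustrative assumptions, not the paper's implementation), a return that bounces off a flat ground plane can be folded into a straight path toward the target mirrored about that plane, which predicts the range at which the ghost echo appears:

```python
import numpy as np

def ghost_echo_range(sensor_pos, target_pos, ground_z=0.0):
    """Image-source approximation (illustrative, not the paper's exact model):
    the sensor -> target -> ground -> sensor bounce is folded into a straight
    leg toward the target mirrored about the plane z = ground_z. Returns the
    apparent one-way range at which the ghost echo shows up in the image."""
    sensor_pos = np.asarray(sensor_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    mirrored = target.copy()
    mirrored[2] = 2.0 * ground_z - mirrored[2]          # reflect target across the ground
    direct = np.linalg.norm(target - sensor_pos)        # sensor -> target leg
    bounced = np.linalg.norm(mirrored - sensor_pos)     # target -> ground -> sensor leg
    return 0.5 * (direct + bounced)                     # range = half the round-trip path

# Example: a target 3 m ahead and 0.5 m above a flat tank floor at z = 0
print(ghost_echo_range(sensor_pos=[0.0, 0.0, 1.0], target_pos=[3.0, 0.0, 0.5]))
```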

    Underwater simulation and mapping using imaging sonar through ray theory and Hilbert maps

    Mapping, sometimes as part of a SLAM system, is an active research topic with remarkable solutions using laser scanners, but most underwater mapping focuses on 2D maps, treating the environment as a floor plan, or on 2.5D maps of the seafloor. The difficulty of underwater mapping originates in its sensor, i.e. the sonar. In contrast to lasers (LIDARs), sonars are imprecise, high-noise sensors. Besides their noise, imaging sonars have a wide sound beam and therefore perform a volumetric measurement. The first part of this dissertation develops an underwater simulator for high-frequency single-beam imaging sonars capable of replicating multipath, directional gain, and typical noise effects in arbitrary environments. The simulation relies on a ray-theory-based method, and explanations of how this theory follows from first principles under the short-wavelength assumption are provided. In the second part of this dissertation, the simulator is combined with a continuous mapping algorithm based on Hilbert maps. Hilbert maps arise as a machine learning technique over Hilbert spaces, using feature maps, applied to the mapping context. The embedding of a sonar response in such a map is a contribution of this work. A qualitative comparison between the simulator ground truth and the reconstructed map reveals Hilbert maps as a promising technique for mapping with noisy sensors and also indicates some hard-to-distinguish characteristics of the surroundings, e.g. corners and non-smooth features
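
    As a rough illustration of the Hilbert-map side of this pipeline (the feature dimension, kernel bandwidth, and toy training data below are assumptions, not taken from the dissertation), occupancy can be learned with a logistic model over random Fourier features and then queried continuously anywhere in the map:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Random Fourier features approximating an RBF kernel (Hilbert-map style sketch)
D, gamma = 500, 10.0                                  # feature count, kernel bandwidth
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(D, 2))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(points):
    """Feature map phi(x) such that phi(x).phi(x') ~ exp(-gamma ||x - x'||^2)."""
    return np.sqrt(2.0 / D) * np.cos(points @ W.T + b)

# Toy labels: 1 = sonar hit (occupied), 0 = free space sampled along the ray
hits = rng.normal([5.0, 5.0], 0.1, size=(50, 2))
free = rng.uniform(0.0, 4.5, size=(200, 1)) * np.ones((1, 2)) / np.sqrt(2.0)
X = np.vstack([hits, free])
y = np.hstack([np.ones(len(hits)), np.zeros(len(free))])

clf = LogisticRegression(max_iter=1000).fit(features(X), y)

# Query the continuous map: probability of occupancy at arbitrary locations
query = np.array([[5.0, 5.0], [2.0, 2.0]])
print(clf.predict_proba(features(query))[:, 1])       # high near the hit, low in free space
```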

    A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The goal of this approach is to extract features of the environment in a short time while the vehicle is moving. In particular, the approach presented in this paper focuses on detecting walls and corners, which are very common environment features. To prove the viability of the devised approach, a 3D simulated environment has been built, and a Neuro-Fuzzy strategy has been used to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, experimental tests have been conducted using a real vehicle equipped with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case
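
    The toy fuzzy-rule sketch below conveys the flavor of such wall/corner discrimination; the membership functions and rules are hand-written assumptions for illustration, whereas the paper trains a Neuro-Fuzzy system on its simulated sonar model rather than hand-coding rules:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def classify_feature(range_spread, echo_count):
    """Illustrative rules: a flat wall tends to return similar ranges on adjacent
    sensors and a single echo, while a corner yields a larger spread and
    multiple echoes (assumed behaviour, not the paper's learned rule base)."""
    spread_low  = tri(range_spread, -0.01, 0.00, 0.05)   # metres
    spread_high = tri(range_spread,  0.03, 0.15, 0.40)
    echoes_one  = tri(echo_count, 0.0, 1.0, 2.0)
    echoes_many = tri(echo_count, 1.0, 3.0, 6.0)
    return {
        "wall":   min(spread_low,  echoes_one),    # rule 1: low spread AND one echo
        "corner": min(spread_high, echoes_many),   # rule 2: high spread AND many echoes
    }

print(classify_feature(range_spread=0.02, echo_count=1))   # mostly "wall"
print(classify_feature(range_spread=0.12, echo_count=3))   # mostly "corner"
```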

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    The application of multi-sensor fusion technology has drawn a lot of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods exploit the complementary properties of targets observed by multiple sensors, achieving a detailed environment description and accurate detection of targets of interest based on information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. The articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers include fundamental theoretical analyses as well as demonstrations of their application to real-world problems
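
    One of the simplest building blocks behind many such fusion methods is inverse-variance weighting of independent estimates of the same quantity; the sketch below is a generic illustration of that idea, not a method drawn from any particular article in the collection:

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent estimates of one quantity.
    Returns the fused estimate and its (smaller) variance."""
    m = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)      # weight = inverse variance
    fused = np.sum(w * m) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Example: two sensors (say radar and lidar) estimate the range to the same target
print(fuse(measurements=[10.3, 10.1], variances=[0.25, 0.04]))
```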

    The fusion and integration of virtual sensors

    There are numerous sensors from which to choose when designing a mobile robot: ultrasonic, infrared, radar, or laser range finders, video, collision detectors, or beacon-based systems such as the Global Positioning System. In order to meet the need for reliability, accuracy, and fault tolerance, mobile robot designers often place multiple sensors on the same platform, or combine sensor data from multiple platforms. The combination of the data from multiple sensors to improve reliability, accuracy, and fault tolerance is termed Sensor Fusion. The types of robotic sensors are as varied as the properties of the environment that need to be sensed. To reduce the complexity of system software, roboticists have found it highly desirable to adopt a common interface between each type of sensor and the system responsible for fusing the information. The process of abstracting the essential properties of a sensor is called Sensor Virtualization. Sensor virtualization to date has focused on abstracting the properties shared by sensors of the same type. The approach taken by T. Henderson is simply to expose to the fusion system only the data from the sensor, along with a textual label describing the sensor. We extend Henderson's work in the following manner. First, we encapsulate both the fusion algorithm and the interface layer in the virtual sensor. This allows us to build multi-tiered virtual sensor hierarchies. Secondly, we show how common fusion algorithms can be encapsulated in the virtual sensor, facilitating the integration and replacement of both physical and virtual sensors. Finally, we provide a physical proof of concept using monostatic sonars, vector sonars, and a laser range-finder
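
    A minimal sketch of the encapsulation idea described above (the class names and the median fusion rule are assumptions for illustration, not the thesis's implementation): a fused virtual sensor exposes the same interface as a physical one, so hierarchies can be built by nesting virtual sensors inside other virtual sensors.

```python
from abc import ABC, abstractmethod
from statistics import median

class VirtualSensor(ABC):
    """Common interface: every sensor, physical or fused, exposes read()."""
    @abstractmethod
    def read(self) -> float: ...

class PhysicalRangeSensor(VirtualSensor):
    """Wraps a real device driver (hypothetical get_range() call)."""
    def __init__(self, device):
        self.device = device
    def read(self) -> float:
        return self.device.get_range()

class MedianFusionSensor(VirtualSensor):
    """Encapsulates both the fusion algorithm and the interface layer, so it can
    itself serve as a child of another virtual sensor (multi-tier hierarchy)."""
    def __init__(self, children):
        self.children = children            # any mix of physical and virtual sensors
    def read(self) -> float:
        return median(child.read() for child in self.children)
```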

    ACOUSTIC METHODS FOR MAPPING AND CHARACTERIZING SUBMERGED AQUATIC VEGETATION USING A MULTIBEAM ECHOSOUNDER

    Submerged aquatic vegetation (SAV) is an important component of many temperate coastal ecosystems worldwide. SAV monitoring programs using optical remote sensing are limited by water clarity and attenuation with depth. Here, underwater acoustics is used to analyze the water volume above the bottom to detect, map, and characterize SAV. In particular, this dissertation developed and applied new methods for analyzing the full time series of acoustic intensity data (e.g., water column data) collected by a multibeam echosounder. This dissertation is composed of three separate but related studies. In the first study, novel methods for detecting and measuring the canopy height of eelgrass beds are developed and used to map eelgrass in a range of different environments throughout the Great Bay Estuary, New Hampshire, and Cape Cod Bay, Massachusetts. The results validated these methods: the boundaries of eelgrass beds in the acoustic and aerial datasets agreed better in shallow water than at the deeper edges, where the acoustics were able to detect eelgrass more easily and at lower densities. In the second study, the methods developed for measuring canopy height in the first study are used to delineate between kelp-dominated and non-kelp-dominated habitat at several shallow rocky subtidal sites on the Maine and New Hampshire coast. The kelp detection abilities of these methods are first tested and confirmed at a pilot site with detailed diver quadrat macroalgae data, and these methods are then used to successfully extrapolate kelp- and non-kelp-dominated percent coverages derived from video photomosaic data. The third study examines the variability of the acoustic signature and acoustically derived canopy height under different tidal currents. Submerged aquatic canopies are known to bend to accommodate the drag they generate in response to hydrodynamic forcing, so the canopy height measured acoustically will not be a perfect representation of canopy height as defined by common seagrass monitoring protocols, which is usually measured as the length of a seagrass blade. Additionally, the bending of the canopy affects how the blades of seagrass are distributed within the footprint of the sonar, changing the acoustic signature of the seagrass canopy. For this study, a multibeam echosounder, a current profiler, and an HD video camera were deployed on a stationary frame in a single eelgrass bed over two tidal cycles. Acoustic canopy heights varied by as much as 30 cm over the experiment, and although acoustic canopy height was correlated with current magnitude, the relationship did not follow the predictive flexible vegetation reconfiguration model of Luhar and Nepf (2011). Results indicate that there are significant differences in the shape of the return from a deflected (i.e., bent-over) canopy and an upright canopy, and that these differences in shape have implications for the accuracy of bottom detection using the maximum amplitude of a beam time series. These three studies clearly show the potential of multibeam water column backscatter data for mapping coastal submerged aquatic vegetation while also testing the natural variability of acoustic canopy height measurements in the field
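
    A simplified sketch of how a canopy height might be read from a single beam's water-column time series (the threshold and toy data below are assumptions, not the dissertation's calibrated processing): take the strongest sample as the bottom, take the shallowest sample above a backscatter threshold as the canopy top, and difference the two ranges.

```python
import numpy as np

def canopy_height(intensity_db, ranges_m, threshold_db=-55.0):
    """Illustrative water-column processing: bottom = strongest sample in the
    beam time series, canopy top = first sample exceeding the threshold before
    the bottom; their range difference is the acoustic canopy height."""
    bottom_idx = int(np.argmax(intensity_db))
    above = np.nonzero(intensity_db[:bottom_idx] > threshold_db)[0]
    if above.size == 0:
        return 0.0                               # no canopy detected in this beam
    return ranges_m[bottom_idx] - ranges_m[above[0]]

# Toy beam: 5 m water depth, vegetation scattering from ~4.3 m down to the bottom
ranges_m = np.linspace(0.0, 5.0, 500)
intensity_db = np.full_like(ranges_m, -80.0)               # background level (dB)
intensity_db[(ranges_m > 4.3) & (ranges_m < 5.0)] = -50.0  # canopy scattering
intensity_db[-1] = -20.0                                   # strong bottom return
print(canopy_height(intensity_db, ranges_m))               # roughly 0.7 m
```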

    Flat surface reconstruction using sonar

    A technique is given for the recovery of planar surfaces using two beam-spread sonar readings. If a single planar surface gave rise to the two readings, then the method recovers the surface quite accurately. Simulation and experiment demonstrate the effectiveness of the technique and recommend its use in practice
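
    In a 2D cross-section, one way to read this technique is as finding a common tangent line to the two range circles centred on the sensor positions; the sketch below follows that reading and is an illustrative assumption rather than the paper's exact formulation.

```python
import numpy as np

def plane_from_two_readings(p1, r1, p2, r2):
    """Each reading constrains the reflecting surface to be tangent to a circle
    of radius r_i centred at sensor position p_i, so the surface (a line in this
    2D cross-section) is a common tangent n.x = d with both sensors on the same
    side: n.p1 - d = r1 and n.p2 - d = r2, with ||n|| = 1."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    v = p1 - p2
    L = np.linalg.norm(v)
    if abs(r1 - r2) > L:
        raise ValueError("no common tangent: readings inconsistent with one plane")
    base = np.arctan2(v[1], v[0])
    offset = np.arccos((r1 - r2) / L)
    solutions = []
    for theta in (base + offset, base - offset):     # the two candidate tangents
        n = np.array([np.cos(theta), np.sin(theta)])
        d = n @ p1 - r1                              # from n.p1 - d = r1
        solutions.append((n, d))
    return solutions

# Example: two sensors 1 m apart, each reading 2 m to a wall parallel to their baseline
for n, d in plane_from_two_readings([0.0, 0.0], 2.0, [1.0, 0.0], 2.0):
    print("normal:", n.round(3), "offset d:", round(d, 3))
```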

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks