Active Classification: Theory and Application to Underwater Inspection
We discuss the problem in which an autonomous vehicle must classify an object
based on multiple views. We focus on the active classification setting, where
the vehicle controls which views to select to best perform the classification.
The problem is formulated as an extension to Bayesian active learning, and we
show connections to recent theoretical guarantees in this area. We formally
analyze the benefit of acting adaptively as new information becomes available.
The analysis leads to a probabilistic algorithm for determining the best views
to observe based on information theoretic costs. We validate our approach in
two ways, both related to underwater inspection: 3D polyhedra recognition in
synthetic depth maps and ship hull inspection with imaging sonar. These tasks
encompass both the planning and recognition aspects of the active
classification problem. The results demonstrate that actively planning for
informative views can reduce the number of necessary views by up to 80% when
compared to passive methods.
Comment: 16 pages
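As an illustration of the information-theoretic view selection this abstract describes, here is a minimal greedy sketch; the two-class observation model and all probabilities are hypothetical, not taken from the paper:

```python
import math

# Hypothetical observation model: p_obs[c][v] is the probability that
# view v returns a "positive" reading when the true class is c.
# All numbers are illustrative, not from the paper.
CLASSES = ["cube", "pyramid"]
VIEWS = [0, 1, 2]
p_obs = {"cube": [0.9, 0.5, 0.5], "pyramid": [0.5, 0.5, 0.9]}

def entropy(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def update(prior, view, outcome):
    # Bayesian update of the class posterior given one observation.
    post = {}
    for c in CLASSES:
        like = p_obs[c][view] if outcome else 1 - p_obs[c][view]
        post[c] = prior[c] * like
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

def expected_entropy(prior, view):
    # Marginal probability of a positive outcome under the prior.
    p_pos = sum(prior[c] * p_obs[c][view] for c in CLASSES)
    h = 0.0
    for outcome, p_o in ((True, p_pos), (False, 1 - p_pos)):
        if p_o > 0:
            h += p_o * entropy(update(prior, view, outcome))
    return h

def best_view(prior):
    # Greedy information-gain criterion: minimise expected posterior entropy.
    return min(VIEWS, key=lambda v: expected_entropy(prior, v))

prior = {"cube": 0.5, "pyramid": 0.5}
print(best_view(prior))  # 0 (views 0 and 2 tie; min returns the first)
```

Acting adaptively, in the sense the abstract analyzes, means re-running the selection after every observation rather than fixing the view sequence in advance.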
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrate that the
proposed methods are fast and detect thin obstacles robustly and
accurately under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
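For the stereo variant, the depth of a matched edge pixel follows from plain triangulation; below is a toy sketch with illustrative calibration values (the focal length, baseline, and pixel coordinates are all assumed, and the paper's actual pipeline is edge-based visual odometry, not single-point stereo):

```python
import numpy as np

# Toy sketch: recover the 3D position of an edge pixel from a rectified
# stereo pair. Focal length f, baseline B, principal point, and pixel
# coordinates are illustrative values, not real calibration data.
f = 700.0   # focal length in pixels (assumed)
B = 0.12    # stereo baseline in metres (assumed)

def edge_point_3d(u_left, u_right, v, cx=320.0, cy=240.0):
    """Back-project a matched edge pixel using its disparity."""
    d = u_left - u_right   # disparity in pixels
    Z = f * B / d          # depth from similar triangles
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# A thin wire produces only a small disparity even at modest range:
p = edge_point_3d(u_left=340.0, u_right=320.0, v=200.0)
print(p[2])  # depth: 4.2 metres
```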
Size constancy in bat biosonar?
Perception and encoding of object size is an important feature of sensory systems. In the visual system, object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed "size constancy". It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the 'sonar aperture', i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar using virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that differed only in sonar aperture. Upon successful discrimination, test trials using virtual objects that differed in both aperture and distance were randomly interspersed. We tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width (aperture and distance) were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats.
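The geometry underlying the size-constancy question can be stated compactly: combining perceived distance with the aperture an object subtends yields its absolute width. A small sketch with illustrative numbers:

```python
import math

# The relation a size-constant observer would have to compute:
# an object's absolute width follows from its distance and the
# (sonar or visual) aperture it subtends. Values are illustrative.
def absolute_width(distance_m, aperture_deg):
    return 2.0 * distance_m * math.tan(math.radians(aperture_deg) / 2.0)

# The same ~20 cm object subtends a wider aperture when it is closer:
w_near = absolute_width(0.5, 22.6)  # roughly 0.20 m
w_far = absolute_width(1.0, 11.4)   # roughly 0.20 m
```

The behavioural question in the abstract is whether the bats perform this re-scaling spontaneously; the data suggest they do not.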
An Empirical Evaluation of Deep Learning on Highway Driving
Numerous groups have applied a variety of deep learning techniques to
computer vision problems in highway perception scenarios. In this paper, we
present a number of empirical evaluations of recent deep learning advances.
Computer vision, combined with deep learning, has the potential to bring about
a relatively inexpensive, robust solution to autonomous driving. To prepare
deep learning for industry uptake and practical applications, neural networks
will require large data sets that represent all possible driving environments
and scenarios. We collect a large data set of highway data and apply deep
learning and computer vision algorithms to problems such as car and lane
detection. We show how existing convolutional neural networks (CNNs) can be
used to perform lane and vehicle detection while running at frame rates
required for a real-time system. Our results lend credence to the hypothesis
that deep learning holds promise for autonomous driving.
Comment: Added a video for lane detection
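As a toy illustration of why convolutional filters suit lane detection, the sketch below applies a hand-set vertical-line kernel to a synthetic image; a trained CNN learns such filters rather than having them specified, and nothing here reflects the paper's actual networks:

```python
import numpy as np

# Synthetic 8x16 road image with one bright lane marking at column 5.
img = np.zeros((8, 16))
img[:, 5] = 1.0

# Hand-set 1-D second-derivative kernel; it peaks on thin bright lines.
kernel = np.array([-1.0, 2.0, -1.0])

# Convolve each row (valid mode) and average the responses over rows.
response = np.mean(
    [np.convolve(row, kernel, mode="valid") for row in img], axis=0
)
col = int(np.argmax(response)) + 1  # +1 offsets the valid-mode crop
print(col)  # 5
```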
Monocular Vision as a Range Sensor
One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions. Building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar, or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, so the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in any direction in the robot's field of view, referred to as the Radial Obstacle Profile (ROP). The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
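The rotation-recovery step can be sketched directly: with nearest-obstacle ranges binned by bearing, a rotation of the robot circularly shifts the profile, and the shift can be found by correlation. The profiles below are synthetic, with an illustrative bin count:

```python
import numpy as np

# Sketch of the Radial Obstacle Profile (ROP) idea: a 1-D array of
# nearest-obstacle ranges indexed by bearing. A rotation between two
# frames appears as a circular shift of the profile.
def rotation_between(rop_a, rop_b):
    """Estimate rotation (in bearing bins) as the circular shift that
    best aligns rop_b with rop_a."""
    n = len(rop_a)
    scores = [np.dot(rop_a, np.roll(rop_b, -s)) for s in range(n)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
rop1 = rng.uniform(0.5, 3.0, size=36)  # ranges over 36 bearings (10° bins)
rop2 = np.roll(rop1, 4)                # robot rotated by 4 bins = 40°
print(rotation_between(rop1, rop2))    # 4
```

The circular correlation peaks at the true shift because a profile correlates most strongly with an exact copy of itself (Cauchy-Schwarz); real images would add noise and parallax on top of this ideal case.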
Efficient Approach for OS-CFAR 2D Technique Using Distributive Histograms and Breakdown Point Optimal Concept applied to Acoustic Images
In this work, a new approach to improving the algorithmic efficiency of the Order Statistic-Constant False Alarm Rate (OS-CFAR) technique applied in two dimensions (2D) is presented. OS-CFAR is widely used in radar technology for detecting moving objects, as well as in sonar technology for segmentation and multi-target detection on the seafloor. OS-CFAR rank-orders the samples obtained from a sliding window around a test cell to select a representative sample, which is used to calculate an adaptive detection threshold that maintains a given false alarm probability. The test cell is then evaluated against the calculated threshold to determine the presence or absence of a target. Rank ordering makes the OS-CFAR technique more robust in multi-target situations and less sensitive than other methods to speckle noise, but it requires higher computational effort; this is the bottleneck of the technique. Consequently, the contribution of this work is to improve OS-CFAR 2D with distributive histograms and the breakdown point optimal concept, mainly from the standpoint of efficient computation. In this way, the on-line computation of OS-CFAR 2D was improved by speeding up the sample-sorting step involved in computing the order statistic. A theoretical analysis of the algorithm is presented to demonstrate the improvement, and the resulting efficient OS-CFAR 2D was also validated experimentally on acoustic images.
Fil: Villar, Sebastian Aldo. Centro de Investigaciones en Física e Ingeniería del Centro de la Provincia de Buenos Aires (Consejo Nacional de Investigaciones Científicas y Técnicas; Comisión de Investigaciones Científicas de la Provincia de Buenos Aires); Grupo INTELYMEC, Departamento de Electromecánica, Facultad de Ingeniería Olavarría, Universidad Nacional del Centro de la Provincia de Buenos Aires; Argentina.
Fil: Menna, Bruno Victorio. Centro de Investigaciones en Física e Ingeniería del Centro de la Provincia de Buenos Aires; Grupo INTELYMEC, Universidad Nacional del Centro de la Provincia de Buenos Aires; Argentina.
Fil: Torcida, Sebastián. Departamento de Matemática, Facultad de Ciencias Exactas, Universidad Nacional del Centro de la Provincia de Buenos Aires; Argentina.
Fil: Acosta, Gerardo Gabriel. Grupo INTELYMEC, Universidad Nacional del Centro de la Provincia de Buenos Aires; Centro de Investigaciones en Física e Ingeniería del Centro de la Provincia de Buenos Aires; Argentina.
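The core OS-CFAR step, and the histogram trick the paper generalises to 2D, can be sketched in one dimension; the window size, rank, scaling factor, and quantisation below are all illustrative choices, not the paper's parameters:

```python
import numpy as np

# Minimal 1-D sketch of OS-CFAR with a sliding histogram: the k-th
# order statistic of the reference window is read from running counts
# in O(#bins) instead of re-sorting every window. The paper's 2-D
# distributive-histogram scheme is considerably more elaborate.
def os_cfar_1d(x, window=8, k=6, alpha=2.5, bins=256):
    q = np.clip(x.astype(int), 0, bins - 1)  # quantised samples
    hist = np.zeros(bins, dtype=int)
    detections = []
    for i, v in enumerate(q):
        if i >= window:
            # k-th smallest reference sample, via cumulative counts.
            kth = int(np.searchsorted(np.cumsum(hist), k))
            # Adaptive threshold: scaled order statistic.
            if x[i] > alpha * kth:
                detections.append(i)
            hist[q[i - window]] -= 1  # slide: drop the oldest sample
        hist[v] += 1                  # slide: admit the current sample
    return detections

signal = np.full(40, 10.0)  # flat background clutter
signal[25] = 60.0           # one bright target cell
print(os_cfar_1d(signal))   # [25]
```

Because the threshold comes from a rank (here the 6th of 8 reference samples) rather than a mean, a second strong target inside the window does not inflate it, which is the multi-target robustness the abstract mentions.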
Detecting fish aggregations from reef habitats mapped with high resolution side scan sonar imagery
As part of a multibeam and side scan sonar (SSS) benthic survey of the Marine Conservation District (MCD) south of St. Thomas, USVI, and of the seasonal closed areas in St. Croix (Lang Bank (LB) for red hind, Epinephelus guttatus, and the Mutton Snapper (MS), Lutjanus analis, area), we extracted signals from water column targets that represent individual and aggregated fish over the various benthic habitats encountered in the SSS imagery. The survey covered a total of 18 km² throughout the federal jurisdiction fishery management areas. The complementary set of 28 habitat classification digital maps covered a total of 5,462.3 ha; MCDW (West) accounted for 45% of that area, MCDE (East) 26%, LB 17%, and MS the remaining 13%. With the exception of MS, corals and gorgonians on consolidated habitats were significantly more abundant than submerged aquatic vegetation (SAV) on unconsolidated sediments or bare unconsolidated sediments. Continuous coral habitat was the most abundant consolidated habitat for both MCDW and MCDE (41% and 43%, respectively). Consolidated habitats in LB and MS predominantly consisted of gorgonian plain habitat, at 95% and 83% respectively. Coral limestone habitat was more abundant than coral patch habitat; it was found near the shelf break in MS, MCDW, and MCDE. Coral limestone and coral patch habitats covered LB only minimally. The high spatial resolution (0.15 m) of the acquired imagery allowed the detection of differing fish aggregation (FA) types. The largest FA densities were located at MCDW and MCDE over coral communities that occupy up to 70% of the bottom cover. Counts of unidentified swimming objects (USOs), likely representing individual fish, were similar among locations and occurred primarily over sand and shelf edge areas. Fish aggregation school sizes were significantly smaller at MS than at the other three locations (MCDW, MCDE, and LB). This study shows the advantages of using SSS to determine fish distributions and densities.
Data handling methods and target detection results for multibeam and sidescan data collected as part of the search for SwissAir Flight 111
The crash of SwissAir Flight 111 off Nova Scotia in September 1998 triggered one of the largest seabed search surveys in Canadian history. The primary search tools used were sidescan sonars (both conventional and focussed types) and multibeam sonars. The processed search data needed to be distributed on a daily basis to other elements of the fleet for precise positioning of divers and other optical seabed search instruments (including laser linescan and ROV video). As a result of the glacial history of the region, many natural targets similar in gross nature to aircraft debris were present, including widespread linear bedrock outcrop patterns together with near-ubiquitous glacial erratic boulders. Because of the severely broken-up nature of the remaining aircraft debris, sidescan imaging alone was often insufficient to unambiguously identify targets. The complementary attributes of higher-resolution but poorly located sidescan imagery and slightly lower-resolution but excellently navigated multibeam sonar proved to be one of the critical factors in the success of the search. It proved necessary to rely heavily on the regional context of the seabed (provided by the multibeam sonar bathymetry and backscatter imagery) to separate natural geomorphic targets from anomalous anthropogenic debris. To confidently prove or disprove a potential target, the interpreter required simultaneous access to the full-resolution sidescan data in the geographic context of the multibeam framework. Specific software tools had to be adapted or developed shipboard to provide this capability. Whilst developed specifically for this application, these survey tools can provide improved processing speed and confidence as part of more general mine hunting, hydrographic, engineering or scientific surveys.