
    Terrain analysis using radar shape-from-shading

    This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the location of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and for the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant; the extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
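
    The conical-envelope constraint can be pictured with a short numerical sketch. The Python fragment below clamps an initial unit normal into a cone about the illuminant direction, taking the cone's central half-angle from a Lambertian-corrected reflectance value. The function name, the clamping rule, and the tolerance parameter delta are illustrative assumptions, not the paper's exact procedure.

        import numpy as np

        def constrain_normal(n_init, s, reflectance, delta):
            # Under a Lambertian model cos(theta) = reflectance, so the
            # cone's central half-angle is theta = arccos(reflectance);
            # delta is the tolerance allowed by the radar signal variance.
            theta = np.arccos(np.clip(reflectance, -1.0, 1.0))
            angle = np.arccos(np.clip(np.dot(n_init, s), -1.0, 1.0))
            target = np.clip(angle, theta - delta, theta + delta)
            if np.isclose(target, angle):
                return n_init  # already inside the envelope
            # Rotate n_init, in the plane it spans with s, onto the boundary.
            perp = n_init - np.dot(n_init, s) * s
            norm = np.linalg.norm(perp)
            if norm < 1e-12:  # n_init parallel to s: pick any perpendicular
                perp = np.cross(s, [1.0, 0.0, 0.0])
                if np.linalg.norm(perp) < 1e-12:
                    perp = np.cross(s, [0.0, 1.0, 0.0])
                norm = np.linalg.norm(perp)
            perp = perp / norm
            return np.cos(target) * s + np.sin(target) * perp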

    Seen and unseen tidal caustics in the Andromeda galaxy

    Indirect detection of high-energy particles from dark matter interactions is a promising avenue for learning more about dark matter, but is hampered by the frequent coincidence of high-energy astrophysical sources of such particles with putative high-density regions of dark matter. We calculate the boost factor and gamma-ray flux from dark matter associated with two shell-like caustics of luminous tidal debris recently discovered around the Andromeda galaxy, under the assumption that dark matter is its own supersymmetric antiparticle. These shell features could be a good candidate for indirect detection of dark matter via gamma rays because they are located far from the primary confusion sources at the galaxy's center, and because the shapes of the shells indicate that most of the mass has piled up near apocenter. Using a numerical estimator specifically calibrated to estimate densities in N-body representations with sharp features, and a previously determined N-body model of the shells, we find that the largest boost factors do occur in the shells but are only a few percent. We also find that the gamma-ray flux is an order of magnitude too low to be detected with Fermi for likely dark matter parameters, and about 2 orders of magnitude less than the signal that would have come from the dwarf galaxy that produces the shells in the N-body model. We further show that the radial density profiles and relative radial spacing of the shells, in either dark or luminous matter, are relatively insensitive to the details of the potential of the host galaxy but depend in a predictable way on the velocity dispersion of the progenitor galaxy. Comment: ApJ accepted.
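
    The boost-factor calculation has a simple schematic form. As a minimal sketch, assuming equal-footing N-body particles with per-particle local density estimates, the annihilation integral of density squared can be approximated by a mass-weighted sum of those densities and compared with the smooth-halo expectation; the array names and the estimator below are illustrative assumptions, not the calibrated estimator used in the paper.

        import numpy as np

        def boost_factor(masses, densities, rho_smooth):
            # For particles with masses m_i and local density estimates
            # rho_i, the integral of rho^2 over the region is approximated
            # by sum_i m_i * rho_i. A smooth halo of density rho_smooth
            # containing the same total mass M occupies V = M / rho_smooth,
            # so its integral of rho^2 is rho_smooth^2 * V = rho_smooth * M.
            integral_rho2 = np.sum(masses * densities)
            smooth_rho2 = rho_smooth * np.sum(masses)
            return integral_rho2 / smooth_rho2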

    Near-Surface Interface Detection for Coal Mining Applications Using Bispectral Features and GPR

    The use of ground-penetrating radar (GPR) for detecting the presence of near-surface interfaces is a scenario of special interest to the underground coal mining industry. The problem is difficult to solve in practice because the radar echo from the near-surface interface is often dominated by unwanted components such as antenna crosstalk and ringing, ground-bounce effects, clutter, and severe attenuation. These nuisance components are also highly sensitive to subtle variations in ground conditions, rendering standard signal pre-processing techniques such as background subtraction largely ineffective in the unsupervised case. As a solution to this detection problem, we develop a novel pattern-recognition algorithm that uses a neural network to classify features derived from the bispectrum of 1D early-time radar data. The binary classifier decides between two key cases, namely whether an interface is within, for example, 5 cm of the surface or not. This go/no-go detection capability is highly valuable for underground coal mining operations, such as longwall mining, where leaving a remnant coal section is essential for geological stability. The classifier was trained and tested using real GPR data with ground-truth measurements. The real data were acquired from a testbed with coal-clay, coal-shale and shale-clay interfaces, which represents a test mine site. We show that, unlike traditional second-order correlation-based methods such as matched filtering, which can fail even in known conditions, the new method reliably allows GPR interface detection in the near-surface region. In this work we do not address depth estimation; rather, we confine ourselves to detecting an interface within a particular depth range.
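
    The bispectral features themselves are straightforward to compute. As a minimal sketch, assuming a single 1D early-time trace rather than the segment-averaged estimate normally used in practice, the fragment below evaluates B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2)) on a small frequency grid and flattens the magnitudes into a feature vector; the function name and grid size are illustrative.

        import numpy as np

        def bispectrum_features(trace, n_freq=16):
            # Single-record bispectrum estimate of a 1D radar trace.
            # Requires len(trace) >= 2 * n_freq so f1 + f2 stays in range.
            X = np.fft.fft(trace)
            feats = np.empty((n_freq, n_freq))
            for i in range(n_freq):
                for j in range(n_freq):
                    # B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2))
                    feats[i, j] = np.abs(X[i] * X[j] * np.conj(X[i + j]))
            return feats.ravel()

    A vector of this kind would then be fed to a binary classifier; the paper uses a neural network for that stage.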

    A Method to Distinguish Quiescent and Dusty Star-forming Galaxies with Machine Learning

    Large photometric surveys provide a rich source of observations of quiescent galaxies, including a surprisingly large population at z > 1. However, identifying large but clean samples of quiescent galaxies has proven difficult because of their near-degeneracy with interlopers such as dusty, star-forming galaxies. We describe a new technique for selecting quiescent galaxies based upon t-distributed stochastic neighbor embedding (t-SNE), an unsupervised machine-learning algorithm for dimensionality reduction. This t-SNE selection improves both on UVJ, removing interlopers that would otherwise pass color selection, and on photometric template fitting, with the advantage growing toward high redshift. Due to the similarity between the colors of high- and low-redshift quiescent galaxies, under our assumptions, t-SNE outperforms template fitting in 63% of trials at redshifts where a large training sample already exists. It may also be able to select quiescent galaxies more efficiently at redshifts above those covered by the training sample.
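
    A selection of this kind is easy to prototype. The sketch below is a rough approximation of the idea rather than the paper's pipeline: it embeds a matrix of broadband photometric colours with scikit-learn's t-SNE and then propagates the labels of a known subsample across the embedded plane with a nearest-neighbour vote. The file names, label convention, and the k-NN labelling step are all assumptions.

        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.neighbors import KNeighborsClassifier

        # colors: hypothetical (N, d) array of broadband colours;
        # labels: 1 = quiescent, 0 = star-forming, -1 = unknown.
        colors = np.load("colors.npy")
        labels = np.load("labels.npy")

        # Embed the full sample at once (t-SNE has no out-of-sample
        # transform); similar galaxies land in the same neighbourhood.
        xy = TSNE(n_components=2, perplexity=30,
                  random_state=0).fit_transform(colors)

        # Unknown galaxies inherit the class that dominates their
        # neighbourhood in the embedded plane.
        known = labels >= 0
        knn = KNeighborsClassifier(n_neighbors=15).fit(xy[known], labels[known])
        quiescent_mask = knn.predict(xy) == 1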

    In-Field Estimation of Orange Number and Size by 3D Laser Scanning

    The estimation of the fruit load of an orchard prior to harvest is useful for planning harvest logistics and trading decisions. Manual fruit counting and determination of the field's harvesting capacity are expensive and time-consuming, so automatic fruit counting and geometric characterization using 3D LiDAR models are an attractive alternative. Field research was conducted in the province of Cordoba (southern Spain) on 24 'Salustiana' variety orange trees, Citrus sinensis (L.) Osbeck, of which 12 were pruned and 12 unpruned. The harvest size and the number of fruits on each tree were recorded, as were the unit weight and diameter of the fruits (N = 160). The orange trees were also modelled with a 3D LiDAR scanner with colour capture for subsequent segmentation and fruit detection using a K-means algorithm. For the pruned trees, a significant regression was obtained between the real and modelled fruit numbers (R2 = 0.63, p = 0.01); for the unpruned trees it was not (p = 0.18), owing to leaf occlusion. The mean diameter given by the algorithm (72.15 ± 22.62 mm) did not differ significantly (p = 0.35) from the one measured on the fruits (72.68 ± 5.728 mm). Even though 3D LiDAR scanning is time-consuming, the harvest size estimation obtained in this research is very accurate.
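
    The colour-based segmentation step can be sketched in a few lines. Assuming a coloured point cloud for a single tree stored as an (N, 6) array of x, y, z, r, g, b values, the fragment below uses K-means on the colour channels to split fruit from canopy, in the spirit of the paper, and then groups the fruit points spatially with DBSCAN, a substitute clustering step added here for illustration, to count fruits and estimate diameters. All array and file names are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans, DBSCAN

        points = np.load("tree_scan.npy")  # hypothetical (N, 6) x,y,z,r,g,b

        # K-means with k = 2 on RGB separates orange fruit from foliage.
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points[:, 3:6])
        centres = km.cluster_centers_
        # The cluster whose mean colour is redder than green is the fruit.
        fruit_label = int(centres[1, 0] - centres[1, 1]
                          > centres[0, 0] - centres[0, 1])
        fruit_pts = points[km.labels_ == fruit_label, :3]

        # Group fruit points into individual oranges; eps in scan units (m).
        db = DBSCAN(eps=0.03, min_samples=10).fit(fruit_pts)
        n_fruits = db.labels_.max() + 1
        # Approximate each fruit's diameter as twice its cluster's max radius.
        diameters = [
            2.0 * np.linalg.norm(
                fruit_pts[db.labels_ == k]
                - fruit_pts[db.labels_ == k].mean(axis=0), axis=1).max()
            for k in range(n_fruits)
        ]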

    Learning continuous models for estimating intrinsic component images

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 137-144). The goal of computer vision is to use an image to recover the characteristics of a scene, such as its shape or illumination. This is difficult because an image is the mixture of multiple characteristics. For example, an edge in an image could be caused by either an edge on a surface or a change in the surface's color. Distinguishing the effects of different scene characteristics is an important step towards high-level analysis of an image. This thesis describes how to use machine learning to build a system that recovers different characteristics of a scene from a single gray-scale image of the scene. The goal of the system is to use the observed image to recover images, referred to as Intrinsic Component Images, that represent the scene's characteristics. The development of the system focuses on estimating two important characteristics of a scene, its shading and reflectance, from a single image. From the observed image, the system estimates a shading image, which captures the interaction of the illumination and shape of the scene pictured, and an albedo image, which represents how the surfaces in the image reflect light. Measured both qualitatively and quantitatively, this system produces state-of-the-art estimates of shading and albedo images. This system is also flexible enough to be used for the separate problem of removing noise from an image. Building this system requires algorithms for continuous regression and for learning the parameters of a Conditionally Gaussian Markov Random Field. Unlike previous work, this system is trained using real-world surfaces with ground-truth shading and albedo images. The learning algorithms are designed to accommodate the large amount of data in this training set. By Marshall Friend Tappen.
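
    For intuition about the shading/albedo split, a classical Retinex-style baseline can be written in a few lines: classify image gradients by magnitude, assigning large gradients to reflectance (albedo) edges and small ones to smooth shading, then reintegrate each gradient field. The sketch below is that classical baseline under an assumed threshold, not Tappen's learned MRF system.

        import numpy as np

        def retinex_decompose(log_img, thresh=0.1):
            # Gradients of the log-intensity image (padded to keep shape).
            gx = np.diff(log_img, axis=1, append=log_img[:, -1:])
            gy = np.diff(log_img, axis=0, append=log_img[-1:, :])
            # Large gradients -> reflectance edges; the rest -> shading.
            refl_gx = np.where(np.abs(gx) > thresh, gx, 0.0)
            refl_gy = np.where(np.abs(gy) > thresh, gy, 0.0)
            shad_gx, shad_gy = gx - refl_gx, gy - refl_gy
            # Each gradient field would then be reintegrated (e.g. by
            # solving a Poisson equation) to recover the albedo and
            # shading images themselves.
            return (refl_gx, refl_gy), (shad_gx, shad_gy)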