
    Comparative performance analysis of texture characterization models in DIRSIG

    The analysis and quantitative measurement of image texture is a complex and intriguing problem that has recently received considerable attention from the diverse fields of computer graphics, human vision, biomedical imaging, computer science, and remote sensing. In particular, textural feature quantification and extraction are crucial tasks for each of these disciplines, and numerous techniques have accordingly been developed to segment or classify images based on texture, as well as to synthesize textures. However, validation and performance analysis of these texture characterization models have been largely qualitative in nature, based on visual inspection of synthetic textures to judge their degree of similarity to the original sample texture imagery. In this work, four fundamentally different texture modeling algorithms have been implemented as necessary into the Digital Imaging and Remote Sensing Synthetic Image Generation (DIRSIG) model. Two of the models tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed validation and comparative performance analysis of each model was then carried out on several texturally significant regions of two counterpart real and synthetic DIRSIG images which differ in spatial and spectral resolution. The quantitative assessment of each model utilized a set of four performance metrics derived from spatial Gray Level Co-occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, mean filter (MF) spatial metrics, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric, which permits the simultaneous measurement of spatial and spectral texture. 
These performance measures in combination attempt to determine which texture characterization model best captures the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the existing texture models achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
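The GLCM-based metrics referenced above can be illustrated with a small sketch. The following is a minimal, self-contained gray-level co-occurrence matrix plus three classic Haralick-style statistics (contrast, energy, homogeneity); it is purely illustrative, and the function names and the exact metric set are assumptions, not the metrics used in the study.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray Level Co-occurrence Matrix for a single pixel offset (dx, dy)."""
    # Quantize the image into `levels` gray levels.
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    m = np.zeros((levels, levels))
    # Count co-occurring gray-level pairs at the given offset.
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to a joint probability

def glcm_metrics(p):
    """Contrast, energy, and homogeneity derived from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast":    np.sum(p * (i - j) ** 2),
        "energy":      np.sum(p ** 2),
        "homogeneity": np.sum(p / (1 + np.abs(i - j))),
    }

real = np.random.default_rng(0).integers(0, 256, (64, 64))
print(glcm_metrics(glcm(real)))
```

Comparing such statistics between a real region and its synthetic counterpart is one way to make the visual-inspection comparison quantitative.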

    A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds

    This paper proposes a segmentation-free, automatic and efficient procedure to detect general geometric quadric forms in point clouds, where clutter and occlusions are inevitable. Our everyday world is dominated by man-made objects which are designed using 3D primitives (such as planes, cones, spheres, cylinders, etc.). These objects are also omnipresent in industrial environments. This gives rise to the possibility of abstracting 3D scenes through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding. In contrast to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal and dual spaces and requiring as few as 4 oriented points. Around this fit, we design a novel, local null-space voting strategy to reduce the 4-point case to 3. Voting is coupled with the well-known RANSAC framework and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method. Comment: Accepted for publication at CVPR 2018
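A closed-form quadric fit of the general kind described here can be sketched, under simplifying assumptions, as a plain algebraic least-squares fit: stack the quadric monomials of each point into a design matrix and take the null-space direction via SVD. This omits the paper's primal/dual formulation, oriented points, and null-space voting; all names below are illustrative.

```python
import numpy as np

def fit_quadric(pts):
    """Algebraic least-squares quadric fit: minimize ||D q|| subject to ||q|| = 1.
    The quadric is the zero set of q . [x^2, y^2, z^2, xy, xz, yz, x, y, z, 1]."""
    x, y, z = pts.T
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    # The best-fit coefficient vector is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def quadric_residual(q, pts):
    """Absolute algebraic residual of each point against quadric q."""
    x, y, z = pts.T
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    return np.abs(D @ q)

# Points on the unit sphere x^2 + y^2 + z^2 - 1 = 0.
rng = np.random.default_rng(1)
p = rng.normal(size=(200, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
q = fit_quadric(p)
print(quadric_residual(q, p).max())  # near zero for noise-free sphere samples
```

In a RANSAC-style loop, one would fit on a minimal sample and score the remaining points with `quadric_residual` to count inliers.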

    Scattering statistics of rock outcrops: Model-data comparisons and Bayesian inference using mixture distributions

    The probability density function of the acoustic field amplitude scattered by the seafloor was measured in a rocky environment off the coast of Norway using a synthetic aperture sonar system, and is reported here in terms of the probability of false alarm. Interpretation of the measurements focused on finding an appropriate class of statistical models (single versus two-component mixture models), and on appropriate models within these two classes. It was found that two-component mixture models performed better than single-component models. The two mixture models that performed best (and had a basis in the physics of scattering) were a mixture of two K distributions, and a mixture of a Rayleigh and a generalized Pareto distribution. Bayes' theorem was used to estimate the probability density function of the mixture model parameters. It was found that the K-K mixture exhibits significant correlation between its parameters. The mixture of the Rayleigh and generalized Pareto distributions also exhibited significant parameter correlation, and additionally contained multiple modes. We conclude that the mixture of two K distributions is the most applicable to this dataset. Comment: 15 pages, 7 figures, Accepted to the Journal of the Acoustical Society of America
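The Rayleigh plus generalized Pareto mixture considered here is straightforward to sketch: the probability of false alarm is the weighted survival function of the two components. This assumes standard SciPy parameterizations, and the parameter values below are purely illustrative, not the paper's Bayesian estimates.

```python
import numpy as np
from scipy import stats

def mixture_pfa(a, w, sigma, xi, scale):
    """Probability of false alarm, P[amplitude > a], for a two-component
    mixture of a Rayleigh and a generalized Pareto distribution.
    w is the Rayleigh mixing weight; sigma the Rayleigh scale;
    xi and scale the GPD shape and scale (illustrative values only)."""
    return (w * stats.rayleigh.sf(a, scale=sigma)
            + (1 - w) * stats.genpareto.sf(a, xi, scale=scale))

a = np.linspace(0, 5, 6)
print(mixture_pfa(a, w=0.9, sigma=1.0, xi=0.2, scale=0.5))
```

A heavy-tailed second component like the GPD lets the mixture track the elevated false-alarm rates that rocky seafloors produce at high amplitude thresholds.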

    Performance Analysis of Improved Methodology for Incorporation of Spatial/Spectral Variability in Synthetic Hyperspectral Imagery

    Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of ground-truthed images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland: it does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric, which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature, based on visual inspection of synthetic textures to judge the degree of similarity to the original sample texture imagery. 
The quantitative measures used in this study will in combination attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
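A statistical Z-Score selection model of the general kind tested here can be sketched as follows: normalize a broadband texture image to z-scores and, for each pixel, select the real image spectrum whose z-score in a reference band is closest. This is a hedged toy illustration; the array shapes, the reference-band rule, and all names are assumptions, not the DIRSIG implementation.

```python
import numpy as np

def zscore_select(texture, library, ref_band=0):
    """Map a broadband texture image (H, W) to spectra drawn from an
    (N, bands) library of real image spectra by matching z-scores in
    a reference band. All names and shapes are illustrative."""
    t = (texture - texture.mean()) / texture.std()
    ref = library[:, ref_band]
    z = (ref - ref.mean()) / ref.std()
    # Nearest-z-score library index for every pixel.
    idx = np.abs(t[..., None] - z[None, None, :]).argmin(axis=-1)
    return library[idx]  # (H, W, bands) synthetic hyperspectral cube

rng = np.random.default_rng(2)
tex = rng.normal(size=(32, 32))
lib = rng.normal(size=(100, 5))
cube = zscore_select(tex, lib)
print(cube.shape)  # (32, 32, 5)
```

Because every output pixel is a genuine library spectrum, the synthetic cube inherits real spectral variability while its spatial pattern follows the driving texture image.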

    Exploitation of infrared polarimetric imagery for passive remote sensing applications

    Polarimetric infrared imagery has emerged over the past few decades as a candidate technology for detecting manmade objects, taking advantage of the fact that smooth materials emit strongly polarized electromagnetic waves, which can be remotely sensed by a specialized camera using a rotating polarizer in front of the focal plane array to generate the Stokes parameters S0, S1, and S2, along with the derived degree of linear polarization (DoLP). Current research in this area has shown the ability to use variations in these parameters to detect smooth manmade structures in low-contrast scenarios. This dissertation proposes and evaluates novel anomaly detection methods for long-wave infrared polarimetric imagery exploitation suited for surveillance applications requiring automatic target detection capability. The targets considered are manmade structures in natural clutter backgrounds under unknown illumination and atmospheric effects. A method based on mathematical morphology is proposed with the intent to enhance the polarimetric Stokes features of manmade structures found in the scene while minimizing its effects on natural clutter. The method suggests that morphology-based algorithms are capable of enhancing the contrast between manmade objects and natural clutter backgrounds, thus improving the probability of correct detection of manmade objects in the scene. The second method departs from common practice in the polarimetric research community (i.e., using the Stokes vector parameters as input to algorithms) by instead using the raw polarization component imagery (e.g., 0°, 45°, 90°, and 135°) and employing multivariate mathematical statistics to distinguish the two classes of objects. This dissertation shows that algorithms based on this new direction significantly outperform the prior art (algorithms based on Stokes parameters and their variants). 
To support this claim, this dissertation offers an exhaustive data analysis and quantitative comparative study, among the various competing algorithms, using long-wave infrared polarimetric imagery collected outdoors over several days, under varying weather conditions, geometries of illumination, and diurnal cycles.
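The reduction from the four raw polarizer-angle images to the linear Stokes parameters and DoLP mentioned at the start of this abstract is standard and can be written compactly. The formulas below are the conventional four-measurement reduction; variable names are illustrative.

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and DoLP from four polarizer-angle
    intensity images (the standard four-measurement reduction)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical
    s2 = i45 - i135                     # +45 deg vs. -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp

# Unpolarized pixel gives DoLP = 0; fully horizontally polarized gives DoLP = 1.
print(stokes(1.0, 1.0, 1.0, 1.0)[3], stokes(1.0, 0.5, 0.0, 0.5)[3])
```

The dissertation's second method skips this reduction entirely and feeds the raw `i0, i45, i90, i135` components to multivariate statistical detectors, which is where it reports its performance gains.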