Applying machine learning methods for characterization of hexagonal prisms from their 2D scattering patterns – an investigation using modelled scattering data
This document is the Accepted Manuscript version of the following article: Emmanuel Oluwatobi Salawu, Evelyn Hesse, Chris Stopford, Neil Davey, and Yi Sun, 'Applying machine learning methods for characterization of hexagonal prisms from their 2D scattering patterns – an investigation using modelled scattering data', Journal of Quantitative Spectroscopy and Radiative Transfer, Vol. 201, pp. 115-127, first published online 5 July 2017. Under embargo. Embargo end date: 5 July 2019. The Version of Record is available online at doi: https://doi.org/10.1016/j.jqsrt.2017.07.001. © 2017 Elsevier Ltd. All rights reserved.

Better understanding and characterization of cloud particles, whose properties and distributions affect climate and weather, are essential for understanding the present climate and climate change. Since imaging cloud probes have limited optical resolution, especially for small particles (diameter < 25 μm), instruments such as the Small Ice Detector (SID) probes, which capture high-resolution spatial light scattering patterns from individual particles down to 1 μm in size, have been developed. In this work, we propose a method using Machine Learning techniques to estimate simulated particles' orientation-averaged projected sizes (PAD) and aspect ratios from their 2D scattering patterns. The two-dimensional light scattering patterns (2DLSP) of hexagonal prisms are computed using the Ray Tracing with Diffraction on Facets (RTDF) model. The 2DLSP cover the same angular range as the SID probes. We generated 2DLSP for 162 hexagonal prisms at 133 orientations each. In a first step, the 2DLSP were transformed into rotation-invariant Zernike moments (ZMs), which are particularly suitable for analyses of pattern symmetry. Then we used the ZMs, summed intensities, and root mean square contrast as inputs to the Machine Learning methods.
We created one random forest classifier for predicting prism orientation, 133 orientation-specific (OS) support vector classification models for predicting the prism aspect ratios, 133 OS support vector regression (SVR) models for estimating prism sizes, and another 133 OS SVR models for estimating the size PADs. We achieved a high accuracy of 0.99 in predicting prism aspect ratios, and a low normalized mean square error of 0.004 in estimating the particles' sizes and size PADs.

Peer reviewed
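The key preprocessing step above, summarising each scattering pattern by rotation-invariant Zernike moment magnitudes before feeding a learner, can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the grid size, the synthetic test pattern, and the single (n, m) = (3, 1) moment are arbitrary choices made for the example.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square grayscale image mapped onto the unit disk.

    The magnitude |A_nm| is invariant under image rotation, which is why
    Zernike moments suit symmetry analysis of 2D scattering patterns.
    """
    N = img.shape[0]
    c = (np.arange(N) + 0.5) * 2.0 / N - 1.0     # pixel centres in [-1, 1]
    x, y = np.meshgrid(c, c)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_nm(rho), standard factorial formula
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + abs(m)) // 2 - k)
                    * factorial((n - abs(m)) // 2 - k)))
        R += coeff * rho ** (n - 2 * k)
    dA = (2.0 / N) ** 2                          # area element per pixel
    basis = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[inside] * basis[inside]) * dA

# A synthetic off-centre blob: deliberately not rotationally symmetric.
N = 128
c = (np.arange(N) + 0.5) * 2.0 / N - 1.0
x, y = np.meshgrid(c, c)
img = np.exp(-((x - 0.3) ** 2 + (y - 0.2) ** 2) / 0.05)

a31 = abs(zernike_moment(img, 3, 1))             # feature for the learner
a31_rot = abs(zernike_moment(np.rot90(img), 3, 1))  # same pattern, rotated 90°
```

Because only the magnitudes are kept, `a31` and `a31_rot` agree although the underlying patterns differ by a rotation; a vector of such magnitudes (plus summed intensity and RMS contrast) is the kind of feature vector the abstract describes feeding to the classifiers and regressors.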
Shape descriptors and mapping methods for full-field comparison of experimental to simulation data
Validation of computational solid mechanics simulations requires full-field comparison
methodologies between numerical and experimental results. The continuous Zernike and
Chebyshev moment descriptors are applied to decompose data obtained from numerical
simulations and experimental measurements, in order to reduce the large amount of
'raw' data to a fairly modest number of features and facilitate their comparison. As Zernike
moments are defined over a unit disk space, a geometric transformation (mapping) of
rectangular to circular domain is necessary, before Zernike decomposition is applied to
non-circular geometry. Four different mapping techniques are examined and their
decomposition/reconstruction efficiency is assessed. A deep mathematical investigation
into the reasons for the differing performance of the four methods has been performed,
comprising the effects of image-mapping distortion and numerical integration accuracy.
Special attention is given to the Schwarz–Christoffel conformal mapping, which in most cases proves highly efficient for image description when combined with Zernike moment descriptors. For rectangular structures, it is demonstrated that, although Zernike moments are defined on a circular domain, they can be even more effective than Chebyshev moments, which are defined on rectangular domains, provided that appropriate mapping techniques are applied.
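To make the rectangular-to-circular mapping step concrete, here is one simple analytic square-to-disk mapping (the elliptical grid mapping). It is a sketch for illustration only and stands in for the four techniques the paper studies, which include the Schwarz–Christoffel conformal map evaluated numerically.

```python
import numpy as np

def square_to_disk(x, y):
    """Elliptical grid mapping of the square [-1, 1]^2 onto the unit disk.

    A closed-form alternative to Schwarz-Christoffel conformal mapping
    (which requires numerical evaluation). By construction
    u^2 + v^2 = 1 - (1 - x^2)(1 - y^2) <= 1, so every point lands on
    the disk and the square's boundary maps onto the circle.
    """
    u = x * np.sqrt(1.0 - 0.5 * y**2)
    v = y * np.sqrt(1.0 - 0.5 * x**2)
    return u, v

# Map a rectangular measurement grid onto the disk; Zernike decomposition
# of the full-field data would then be applied in (u, v) coordinates.
xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
u, v = square_to_disk(xs, ys)
r = np.hypot(u, v)
```

The choice of mapping matters precisely because of the two effects the text names: how much the mapping distorts the image, and how accurately the Zernike integrals can then be evaluated numerically on the mapped grid.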
Laser Tomography Adaptive Optics (LTAO): A performance study
We present an analytical derivation of the on-axis performance of Adaptive
Optics systems using a given number of guide stars of arbitrary altitude,
distributed at arbitrary angular positions in the sky. The expressions of the
residual error are given for cases of both continuous and discrete turbulent
atmospheric profiles. Assuming Shack-Hartmann wavefront sensing with circular
apertures, we demonstrate that the error is formally described by integrals of
products of three Bessel functions. We compare the performance of Adaptive
Optics correction when using natural, Sodium or Rayleigh laser guide stars. For
small-diameter-class telescopes (~5 m), we show that a small number of Rayleigh
beacons can provide performance similar to that of a single Sodium laser, at a
lower overall cost of the instrument. For larger apertures, Rayleigh
stars may be a less suitable alternative, because the severe cone
effect drastically degrades the quality of the correction.

Comment: accepted for publication in JOS
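The residual-error integrals of products of three Bessel functions mentioned above are evaluated numerically in practice. The sketch below shows the general shape of such a computation with scipy; the orders, scale factors, and exponential damping here are made up for illustration, and the actual kernels and weights are those derived in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def integrand(x, a=1.0, b=1.3, c=0.7):
    """Illustrative triple-Bessel integrand: J1(ax) * J1(bx) * J0(cx),
    damped by exp(-x) so the improper integral converges quickly.
    (Hypothetical kernel; the paper's expressions fix the actual form.)"""
    return jv(1, a * x) * jv(1, b * x) * jv(0, c * x) * np.exp(-x)

# Adaptive quadrature over the semi-infinite domain.
value, abserr = quad(integrand, 0.0, np.inf, limit=200)
```

Oscillatory Bessel products are the numerically delicate part of such error budgets; adaptive quadrature with a generous subdivision limit (here `limit=200`) is one common way to keep the estimated error small.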
A parallel implementation of 3D Zernike moment analysis
Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant under translations, rotations, or scale changes. The concepts behind them extend to higher-dimensional spaces, making them also suitable for describing volumetric data. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moment analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to handle the high volume of input data so that it does not become a bottleneck for the system.
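As a rough illustration of the data-parallel core of such an implementation: 3D Zernike moments can be assembled as linear combinations of geometric voxel moments m_pqr = Σ f(x, y, z)·xᵖyᵠzʳ, and it is this per-voxel accumulation that a CUDA kernel parallelises. The numpy sketch below (the function name, grid size, and random volume are illustrative, not from the paper) vectorises that accumulation and sums in double precision, echoing the numerical-accuracy concern the abstract raises.

```python
import numpy as np

def geometric_moments(vol, order):
    """All geometric moments m_pqr with p + q + r <= order of a cubic
    voxel grid mapped to [-1, 1]^3. Each moment is an independent
    reduction over the volume -- the embarrassingly parallel pattern
    a GPU kernel exploits; here numpy vectorisation stands in for it."""
    N = vol.shape[0]
    c = (np.arange(N) + 0.5) * 2.0 / N - 1.0   # voxel centres in [-1, 1]
    x = c[:, None, None]
    y = c[None, :, None]
    z = c[None, None, :]
    moments = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            for r in range(order + 1 - p - q):
                # Accumulate in float64: the precision demands the paper
                # highlights make low-precision accumulation risky.
                moments[(p, q, r)] = np.sum(vol * x**p * y**q * z**r,
                                            dtype=np.float64)
    return moments

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))     # stand-in for a real voxel dataset
m = geometric_moments(vol, 2)
```

On a GPU the same structure maps naturally to one reduction per moment (or a fused multi-moment kernel), with the input volume streamed through device memory so that data transfer, the other bottleneck named above, is amortised across all moments.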