Spatial Regression With Multiplicative Errors, and Its Application With Lidar Measurements
Spatially referenced observations subject to multiplicative errors often
arise in geodetic applications, particularly in surface estimation with light
detection and ranging (LiDAR) measurements. However, spatial regression
involving multiplicative errors remains relatively unexplored in such
applications. In this regard, we present a penalized modified least squares
estimator to handle the complexities of a multiplicative error structure while
identifying significant variables in spatially dependent observations for
surface estimation. The proposed estimator can also be applied to classical
additive error spatial regression. By establishing asymptotic properties of the
proposed estimator under increasing domain asymptotics with stochastic sampling
design, we provide a rigorous foundation for its effectiveness. A comprehensive
simulation study confirms the superior performance of our proposed estimator in
accurately estimating and selecting parameters, outperforming existing
approaches. To demonstrate its real-world applicability, we employ our proposed
method, along with other alternative techniques, to estimate a rotational
landslide surface using LiDAR measurements. The results highlight the efficacy
and potential of our approach in tackling complex spatial regression problems
involving multiplicative errors.
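The core difficulty the abstract describes can be made concrete with a toy model. Below is a minimal numpy sketch, on made-up data, of one standard baseline for a multiplicative error structure: log-transforming it into an additive one and running ordinary least squares. This is not the authors' penalized modified least squares estimator, only the simpler idea it improves upon.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, size=(n, 2))
beta_true = np.array([2.0, -1.0])

# Multiplicative error model: y = exp(X @ beta) * eps, with E[log eps] = 0.
eps = np.exp(rng.normal(0.0, 0.1, size=n))
y = np.exp(x @ beta_true) * eps

# Taking logs turns the multiplicative error into an additive one,
# so ordinary least squares on log(y) recovers beta.
beta_hat, *_ = np.linalg.lstsq(x, np.log(y), rcond=None)
print(beta_hat)  # close to [2.0, -1.0]
```

The paper's contribution is precisely to go beyond this baseline: a penalty for variable selection and a treatment of spatial dependence, neither of which the sketch attempts.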
Prediction of Tropical Pacific Rain Rates with Over-parameterized Neural Networks
The prediction of tropical rain rates from atmospheric profiles poses
significant challenges, mainly due to the heavy-tailed distribution exhibited
by tropical rainfall. This study introduces over-parameterized neural networks
not only to forecast tropical rain rates, but also to explain their
heavy-tailed distribution. The prediction is separately conducted for three
rain types (stratiform, deep convective, and shallow convective) observed by
the Global Precipitation Measurement satellite radar over the West and East
Pacific regions. Atmospheric profiles of humidity, temperature, and zonal and
meridional winds from the MERRA-2 reanalysis are considered as features.
Although over-parameterized neural networks are well-known for their "double
descent phenomenon," little has been explored about their applicability to
climate data and capability of capturing the tail behavior of data. In our
results, over-parameterized neural networks accurately predict the rain rate
distributions and outperform other machine learning methods. Spatial maps show
that over-parameterized neural networks also successfully describe spatial
patterns of each rain type across the tropical Pacific. In addition, we assess
the feature importance for each over-parameterized neural network to provide
insight into the key factors driving the predictions, with low-level humidity
and temperature variables being the overall most important. These findings
highlight the capability of over-parameterized neural networks in predicting
the distribution of the rain rate and explaining extreme values.
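To make the "over-parameterized" setting concrete, here is a minimal numpy sketch (synthetic data and an assumed random-features setup, not the paper's networks) of a model with far more parameters than samples, fitted with the minimum-norm interpolating solution. Such models can drive training error to essentially zero, which is the regime the abstract studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, width = 20, 5, 2000            # width >> n: over-parameterized regime
X = rng.normal(size=(n, d))
y = np.exp(rng.normal(size=n))       # log-normal targets mimic heavy-tailed rain rates

# Random ReLU features; only the output weights are fitted (a random-features
# model), using the minimum-norm interpolating solution via the pseudoinverse.
W1 = rng.normal(size=(d, width)) / np.sqrt(d)
H = np.maximum(X @ W1, 0.0)
w2 = np.linalg.pinv(H) @ y           # minimum-norm least squares

train_mse = np.mean((H @ w2 - y) ** 2)
print(train_mse)  # essentially zero: the model interpolates the training data
```

The interesting question the paper addresses, how such interpolating models behave on held-out heavy-tailed data, is exactly what this toy does not show.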
A Tool For Automatic Estimation Of The Stage Height For Ungauged River Sites
Recently, River Information Systems that integrate a variety of riverine information have been widely developed, driven by advances in information technology. The present study develops a software tool called HydroConnector that dynamically integrates river-based numerical modeling and post-processing with in situ data, based on a data-searching technique that uses a hydro web service built on top of an ODM-based database following the CUAHSI standard. This fundamentally differs from conventional direct database access for acquiring a dataset over a given period. The hydro web service and ODM-based database were built from existing real-time stream gauging data, and they are dynamically connected with a hydraulic performance graph (HPG) model that estimates the stage height at an ungauged site. The newly developed HydroConnector is intuitive to use thanks to its user-friendly GUI; it streamlines the modeling process by automatically connecting remotely located data to a specific numerical model, without laborious data pre- and post-processing. The HPG model consists of a pre-established diagram based on outputs simulated by one-dimensional river models, such as HEC-RAS, run over the range of possible flow conditions, and it estimates the stage height at an ungauged site from the given downstream stage height and upstream flow discharge. HydroConnector incorporates both the web service and the HPG model, which enables dynamic data pre-processing tailored to the numerical model, and it automatically operates the HPG model to provide the targeted ungauged stage height. Acknowledgement: This research was supported by a grant (11-TI-C06) from the Advanced Water Management Research Program, funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
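The HPG lookup step described above reduces, at its core, to interpolation in a pre-computed table indexed by upstream discharge and downstream stage. The grid and values below are invented for illustration; in the real workflow they would come from HEC-RAS runs over the range of flow conditions.

```python
import numpy as np

# Hypothetical pre-computed HPG table: rows indexed by upstream discharge Q,
# columns by downstream stage h_d; entries are the stage at the ungauged site.
Q_grid = np.array([50.0, 100.0, 200.0, 400.0])   # discharge (assumed m^3/s)
hd_grid = np.array([1.0, 2.0, 3.0, 4.0])         # downstream stage (assumed m)
# Toy surface: stage rises with both discharge and downstream stage.
stage_table = 0.5 + 0.002 * Q_grid[:, None] + 0.8 * hd_grid[None, :]

def hpg_lookup(Q, h_d):
    """Bilinear interpolation in the hydraulic performance graph."""
    i = np.clip(np.searchsorted(Q_grid, Q) - 1, 0, len(Q_grid) - 2)
    j = np.clip(np.searchsorted(hd_grid, h_d) - 1, 0, len(hd_grid) - 2)
    tq = (Q - Q_grid[i]) / (Q_grid[i + 1] - Q_grid[i])
    th = (h_d - hd_grid[j]) / (hd_grid[j + 1] - hd_grid[j])
    return ((1 - tq) * (1 - th) * stage_table[i, j]
            + tq * (1 - th) * stage_table[i + 1, j]
            + (1 - tq) * th * stage_table[i, j + 1]
            + tq * th * stage_table[i + 1, j + 1])

print(hpg_lookup(150.0, 2.5))  # stage estimate between the four grid points
```

HydroConnector's contribution sits around this lookup: it fetches the boundary data through the hydro web service and feeds the result back, which the sketch does not attempt.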
Robust estimation of bacterial cell count from optical density
Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to an actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements. In our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
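The recommended microsphere calibration reduces, in essence, to fitting one proportionality constant between OD and particle count across a dilution series. A minimal numpy sketch with invented numbers (not study data):

```python
import numpy as np

# Two-fold serial dilution of microspheres with known particle counts gives
# paired (count, OD) measurements; within the instrument's linear range OD is
# proportional to count, so a single fitted slope converts OD to cell count.
counts = 3.0e8 / 2.0 ** np.arange(8)      # particles/mL across the series (assumed)
rng = np.random.default_rng(2)
od = 1.2e-9 * counts * rng.normal(1.0, 0.02, size=8)  # simulated OD readings

slope = np.sum(od * counts) / np.sum(od ** 2)  # least squares: count ~ slope * OD
est_counts = slope * od
fold_error = np.max(np.maximum(est_counts / counts, counts / est_counts))
print(fold_error)  # residuals well within a 1.2-fold band
```

Deviations of the dilution series from this fitted line are also what make the protocol "easily assessed for quality control": a point that falls off the line flags the edge of the instrument's linear range.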
Architecture-based and target-oriented algorithm optimization of high-order methods via complete-search tensor contraction
Sophisticated solution algorithms, along with complex data structures, are known as the main barriers that hinder high-order methods from being actively embraced by industry and academia. At the same time, modern computing machines offer a wide variety of opportunities to enhance the performance of solution algorithms through highly tuned computational kernels. To address this issue, we present an architecture-based and target-oriented algorithm optimization for high-order methods, called complete-search tensor contraction (CsTC). The key idea of CsTC is to convert the tensor operations of a high-order method into an optimization problem, which leads to finding an optimized way to execute tensor contraction (TC). After introducing the general framework of CsTC, we apply it to the discontinuous Galerkin (DG) discretization. An approach based on general matrix multiplication (GEMM) is adopted because of its flexibility in handling the intermediate order of TC and the reusability of state-of-the-art GEMM primitives. By optimizing data structures as well as TC operations, CsTC provides an optimized solution algorithm that performs significantly better than the original non-optimized high-order method. The entire optimization process completes automatically in a few minutes as a pre-processing step. The proposed CsTC optimization fully reflects the mesh and solution parameters adopted as well as the computing architecture used; thus, it is completely target-oriented and architecture-based. Various solution parameters and computing architectures are used and compared. All the results indicate that the optimization is essential to extract the best performance from a given computing architecture, and that the performance enhancement becomes substantial as the DG approximation order increases and as a more recent processor is employed. Finally, a 3-D viscous flow problem governed by the compressible Navier-Stokes equations is solved.
The optimized algorithm yields more than a 10x speedup compared to the algorithm with a nested-loop approach when DG-P3 and DG-P5 approximations are used. (C) 2021 Elsevier B.V. All rights reserved.
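Stripped to its essence, the complete-search idea is to enumerate equivalent execution strategies for a tensor contraction, verify that they agree, benchmark each on the target machine, and keep the fastest. A toy numpy sketch with invented tensor shapes (not the paper's DG kernels or its GEMM search space):

```python
import time
import numpy as np

# One contraction, C[e,v] = sum_{q,d} A[e,q,d] * B[d,q,v], admits several
# equivalent execution strategies; time every candidate and keep the fastest.
rng = np.random.default_rng(3)
A = rng.normal(size=(256, 20, 3))
B = rng.normal(size=(3, 20, 35))

candidates = {
    "einsum":   lambda: np.einsum("eqd,dqv->ev", A, B),
    "one-gemm": lambda: A.reshape(256, 60) @ B.transpose(1, 0, 2).reshape(60, 35),
    "d-loop":   lambda: sum(A[:, :, d] @ B[d] for d in range(3)),
}

timings = {}
reference = candidates["einsum"]()
for name, fn in candidates.items():
    assert np.allclose(fn(), reference)   # every strategy must agree
    t0 = time.perf_counter()
    for _ in range(200):
        fn()
    timings[name] = time.perf_counter() - t0
best = min(timings, key=timings.get)
print(best, timings)
```

Because the winner depends on array shapes, memory layout, and the BLAS in use, the ranking differs from machine to machine, which is exactly why the paper argues the search must be architecture-based and target-oriented.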
Development of an Image Registration Technique for Fluvial Hyperspectral Imagery Using an Optical Flow Algorithm
Fluvial remote sensing has been used to monitor diverse riverine properties, such as river bathymetry and the visual detection of suspended sediment, algal blooms, and bed materials, more efficiently than laborious and expensive in-situ measurements. Red–green–blue (RGB) optical sensors have been widely used in traditional fluvial remote sensing. However, owing to their three confined bands, they rely on visual inspection for qualitative assessment and are limited in performing quantitative and accurate monitoring. Recent advances in hyperspectral imaging in the fluvial domain have enabled hyperspectral images with more than 150 spectral bands. Thus, various riverine properties can be quantitatively characterized using sensors on low-altitude unmanned aerial vehicles (UAVs) with a high spatial resolution. Many efforts are ongoing to take full advantage of hyperspectral band information in fluvial research. Although geo-referenced hyperspectral images can be acquired from satellites and manned aircraft, few attempts have been made using UAVs. This is mainly because synthesizing line-scanned images on top of image registration is more difficult for UAVs, owing to the large, motion-sensitive images produced by the dense spatial resolution. Therefore, in this study, we propose a practical technique for achieving high spatial accuracy in UAV-based fluvial hyperspectral imaging through efficient image registration using an optical flow algorithm. Template matching algorithms are the most common image registration technique in RGB-based remote sensing; however, they require many calculations and can be error-prone depending on the user, as decisions regarding various parameters are required. Furthermore, the spatial accuracy of this technique needs to be verified, as it has not been widely applied to hyperspectral imagery.
The proposed technique resulted in an average reduction of spatial errors of 91.9% compared to the case where no image registration technique was applied, and of 78.7% compared to template matching.
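As a rough illustration of automatic band-to-band alignment, the sketch below estimates a global integer-pixel shift by phase correlation on synthetic data. This is a simplified stand-in, not the paper's optical flow algorithm, but it shows the same goal: registration without user-chosen templates or parameters.

```python
import numpy as np

# Synthetic "reference band" and a circularly shifted copy to re-align.
rng = np.random.default_rng(4)
ref = rng.normal(size=(64, 64))
shift = (3, -5)                                   # known shift to recover
moved = np.roll(ref, shift, axis=(0, 1))

# Phase correlation: the normalized cross-power spectrum of the two images
# inverse-transforms to a sharp peak at the displacement.
cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
cross /= np.abs(cross) + 1e-12
peak = np.unravel_index(np.argmax(np.fft.ifft2(cross).real), ref.shape)
# Peak positions beyond half the image size correspond to negative shifts.
dy, dx = [p - s if p > s // 2 else p for p, s in zip(peak, ref.shape)]
print(dy, dx)  # (-3, 5): the displacement that maps `moved` back onto `ref`
```

Real line-scanned hyperspectral frames need per-line, sub-pixel, non-rigid corrections, which is where the optical flow approach of the paper comes in.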
Direct reconstruction method for discontinuous Galerkin methods on higher-order mixed-curved meshes I. Volume integration
This work deals with the development of the direct reconstruction method (DRM) and its application to the volume integration of the discontinuous Galerkin (DG) method on multi-dimensional high-order mixed-curved meshes. Conventional quadrature-based DG methods incur enormous computational cost on high-order curved elements due to their non-linear shape functions. To overcome this issue, the flux function is directly reconstructed in the physical domain using nodal polynomials on a target space in a quadrature-free manner. Regarding the target space and the distribution of the nodal points, DRM has two variations: the brute force points (BFP) and shape function points (SFP) methods. In both methods, one nodal point corresponds to one nodal basis function of the target space. The DRM-BFP method uses a set of points that empirically minimizes the condition number of the generalized Vandermonde matrix. In the DRM-SFP method, the conventional nodal points are used to span an enlarged target space of the flux function. It requires a larger number of reconstruction points than DRM-BFP but offers easy extendability to higher-degree polynomial spaces and a better de-aliasing effect. A robust way to compute orthonormal polynomials is provided to achieve lower round-off errors. The proposed methods are validated on the 2-D/3-D Navier-Stokes equations on high-order mixed-curved meshes. The numerical results confirm that DRM volume integration greatly reduces the computational cost and memory overhead of conventional quadrature-based DG methods on high-order curved meshes, while maintaining the optimal order-of-accuracy as well as resolving the flow physics accurately.
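The point-selection criterion in DRM-BFP, minimizing the condition number of the generalized Vandermonde matrix, can be illustrated in 1-D: node placement alone changes the conditioning of a nodal reconstruction by orders of magnitude. A toy comparison on an assumed degree-15 monomial basis (not the paper's multi-dimensional target spaces):

```python
import numpy as np

def vandermonde_cond(nodes):
    """Condition number of the Vandermonde matrix V[i, j] = nodes[i] ** j."""
    V = np.vander(nodes, increasing=True)
    return np.linalg.cond(V)

p = 15
equi = np.linspace(-1.0, 1.0, p + 1)                             # equispaced nodes
cheb = np.cos(np.pi * (2 * np.arange(p + 1) + 1) / (2 * (p + 1)))  # Chebyshev nodes

print(vandermonde_cond(equi), vandermonde_cond(cheb))
# Chebyshev nodes give a far better-conditioned reconstruction matrix.
```

The same trade-off drives DRM-BFP's empirical search for point sets: a poorly conditioned Vandermonde matrix amplifies round-off error in the reconstructed flux.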
Direct reconstruction method for discontinuous Galerkin methods on higher-order mixed-curved meshes III. Code optimization via tensor contraction
The present study deals with the code optimization of the direct reconstruction method (DRM) and its implementation using the complete-search tensor contraction (CsTC) framework, to extract the best performance of high-order methods on modern computing architectures. DRM was originally proposed to overcome the severe computational cost of the physical domain-based discontinuous Galerkin (DG) method on mixed-curved meshes. In this work, the performance of DRM is further enhanced through code optimization via the CsTC technique. The kernels required for tensor operations in the DRM solution algorithm are analyzed and optimized by completely searching all candidate GEMM (general matrix multiplication) subroutines. The computational performance is thoroughly examined by simulating a turbulent flow over a circular cylinder at Re_D = 3900 with DG-P3 and -P5 approximations. Compared to a quadrature-based approach with full integration, the optimized DRM significantly reduces the memory requirements and the number of floating-point operations needed to compute the DG residual on a linear mesh as well as on high-order curved meshes. On a P3-mesh, the optimized DRM provides 13.74x and 23.03x speed-ups in DG-P3 and -P5, respectively, while the amount of memory required is reduced to 1/16.6 and 1/19.9. On a linear mesh, it still yields 1.25x and 1.12x speed-ups in DG-P3 and -P5, respectively, and the memory requirement is reduced to 1/1.27 and 1/1.15. In particular, it is observed that the optimized DRM on a P3-mesh performs better than the optimized quadrature-based method on a P1-mesh. (C) 2020 Elsevier Ltd. All rights reserved.
Analysis of the bubble size distribution in a breaking wave using a VOF method and an identification algorithm
A numerical simulation of a three-dimensional breaking wave is conducted using a VOF (volume-of-fluid)
method to investigate the wave breaking dynamics and the bubble size distribution. The wave is initialized using
the third-order Stokes wave solution. The initial wave slope and velocity fields generate turbulent wave breaking.
Various interfacial phenomena are observed, including jet formation, jet impact on the free surface, spray
ejection, air-pocket entrainment, and breakup. Bubbles of various sizes are formed from the turbulent breakup of the
air pocket during active breaking time. To obtain the bubble size distribution, an identification algorithm is
proposed to accurately count independent bubbles. The proposed algorithm successfully identified independent
bubble structures. A joining algorithm is also introduced to consider bubble structures spanning multiple blocks
for parallel computations. The obtained bubble size distribution averaged during active breaking time is
proportional to r^-10/3 for radii larger than the Hinze scale and shows good agreement with previous experimental
and simulation results as well as the theoretical model.
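A minimal 2-D version of the bubble-identification idea is connected-component labeling by flood fill, so that each independent bubble receives its own id and can be counted and sized; the paper's joining step for bubbles spanning parallel blocks is omitted here. The binary field below is an invented example.

```python
import numpy as np
from collections import deque

def label_bubbles(gas):
    """Label 4-connected regions of a binary 'gas' field via BFS flood fill."""
    labels = np.zeros(gas.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(gas)):
        if labels[i, j]:
            continue                      # already part of a labeled bubble
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < gas.shape[0] and 0 <= nx < gas.shape[1]
                        and gas[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

field = np.array([[1, 1, 0, 0],
                  [0, 1, 0, 1],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=bool)
labels, n_bubbles = label_bubbles(field)
print(n_bubbles)  # → 3 independent "bubbles"
```

In the 3-D VOF setting the same idea runs over voxels flagged as gas, and bubble radii follow from the labeled volumes, giving the size distribution discussed above.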
High-order multi-dimensional limiting strategy with subcell resolution I. Two-dimensional mixed meshes
The present paper deals with a new improvement of the hierarchical multi-dimensional limiting process for resolving the subcell distribution of high-order methods on two-dimensional mixed meshes. In previous studies, the multi-dimensional limiting process (MLP) was hierarchically extended to the discontinuous Galerkin (DG) method and the flux reconstruction/correction procedure via reconstruction (FR/CPR) method on simplex meshes. It was reported that the hierarchical MLP (hMLP) shows several remarkable characteristics, such as preservation of the formal order-of-accuracy in smooth regions and sharp capturing of discontinuities in an efficient and accurate manner. At the same time, it also surfaced that such characteristics are valid only on simplex meshes, and that numerical Gibbs-Wilbraham oscillations are concealed in the subcell distribution in the form of high-order polynomial modes. Subcell Gibbs-Wilbraham oscillations become potentially unstable near discontinuities and adversely affect numerical solutions in terms of cell-averaged solutions as well as subcell distributions. To overcome these two issues, the behavior of the hMLP on mixed meshes is mathematically examined, and the simplex-decomposed P1-projected MLP condition and smooth extrema detector are derived. Secondly, a troubled-boundary detector is designed by analyzing the behavior of computed solutions across boundary edges. Finally, hMLP_BD is proposed by combining the simplex-decomposed P1-projected MLP condition and smooth extrema detector with the troubled-boundary detector. Through extensive numerical tests, it is confirmed that the hMLP_BD scheme successfully eliminates subcell oscillations and provides reliable subcell distributions on two-dimensional triangular grids as well as mixed grids, while preserving the expected order-of-accuracy in smooth regions. (C) 2018 Elsevier Inc.
All rights reserved.
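The flavor of an MLP-style troubled-cell check can be illustrated in 1-D: a cell is flagged when its reconstructed interface values leave the bounds set by neighboring cell averages. This is an assumed simplification for intuition only, not the paper's hMLP_BD conditions on mixed meshes.

```python
def mlp_troubled(avgs, left_face, right_face):
    """avgs: cell averages of (left, this, right) cells;
    left_face/right_face: the cell's reconstructed interface values.
    Returns True when the reconstruction violates the MLP-style bounds."""
    lo, hi = min(avgs), max(avgs)
    return not (lo <= left_face <= hi and lo <= right_face <= hi)

# Smooth data: interface values stay inside the neighborhood bounds.
print(mlp_troubled((1.0, 2.0, 3.0), 1.6, 2.4))   # → False (keep high order)
# Near a discontinuity the reconstruction overshoots and is flagged.
print(mlp_troubled((1.0, 2.0, 10.0), 0.4, 4.1))  # → True (apply limiting)
```

The paper's additional machinery, the smooth extrema detector and the troubled-boundary detector, exists precisely to avoid flagging smooth extrema that a naive bound check like this one would limit unnecessarily.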