Probabilistic Mixture Model-Based Spectral Unmixing
Identifying pure components in mixtures is a common yet challenging problem.
The associated unmixing process requires the pure components, also known as
endmembers, to be sufficiently spectrally distinct. Even with this requirement
met, extracting the endmembers from a single mixture is impossible; an ensemble
of mixtures with sufficient diversity is needed. Several spectral unmixing
approaches have been proposed, many of which are connected to hyperspectral
imaging. However, most of them assume highly diverse collections of mixtures
and extremely low-loss spectroscopic measurements. Additionally, non-Bayesian
frameworks do not incorporate the uncertainty inherent in unmixing. We propose
a probabilistic inference approach that explicitly incorporates noise and
uncertainty, enabling us to unmix endmembers in collections of mixtures with
limited diversity. We use a Bayesian mixture model to jointly extract endmember
spectra and mixing parameters while explicitly modeling observation noise and
the resulting inference uncertainties. We obtain approximate distributions over
endmember coordinates for each set of observed spectra while remaining robust
to inference biases from the lack of pure observations and presence of
non-isotropic Gaussian noise. Access to reliable uncertainties on the unmixing
solutions would enable robust downstream analyses as well as informed decision making.
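The unmixing problem the abstract contrasts against, extracting endmember spectra and mixing weights from an ensemble of noisy mixtures, can be illustrated with a plain non-negative matrix factorization baseline. The sketch below is not the authors' Bayesian model; every spectrum, weight, and parameter is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Wavelength grid and two synthetic endmember spectra (Gaussian peaks);
# all numbers here are invented for illustration.
wl = np.linspace(0.0, 1.0, 200)
E_true = np.stack([np.exp(-((wl - 0.3) / 0.05) ** 2),
                   np.exp(-((wl - 0.7) / 0.08) ** 2)])      # (2, 200)

# 50 mixtures with random non-negative weights plus mild Gaussian noise.
A_true = rng.dirichlet([1.0, 1.0], size=50)                 # (50, 2)
Y = np.clip(A_true @ E_true + 0.01 * rng.standard_normal((50, 200)),
            1e-9, None)

# Multiplicative-update NMF: find A, E >= 0 such that Y ~= A @ E.
A = rng.random((50, 2))
E = rng.random((2, 200))
for _ in range(500):
    A *= (Y @ E.T) / (A @ E @ E.T + 1e-9)
    E *= (A.T @ Y) / (A.T @ A @ E + 1e-9)

E_hat = E / E.max(axis=1, keepdims=True)    # normalize for comparison
```

This point-estimate baseline returns no uncertainty on `E_hat`; the abstract's Bayesian mixture model instead yields approximate posterior distributions over the endmembers.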
The Art and Science in Modeling the Pressure-Velocity Interactions
The objective of this investigation is to develop a single-point model for the global effects of pressure in turbulence, while striking a judicious balance between mathematical rigor and empiricism. To this end, we perform a linear stability analysis of planar quadratic flows to isolate and identify the action of pressure therein. This leads to the identification of the statistically most likely behavior engendered by modal ensembles. We then develop a framework to augment the classical realizability constraints, ensuring that not only is the statistical state physically permissible, but that the stochastic process is realizable as well. These process realizability conditions are applied a posteriori, to evaluate the dynamics predicted by established models, and a priori, to develop illustrative models that maximize realizability adherence. This serves to identify the range of possible dynamics of the system. A set of studied compromises is then introduced into the scope and framework of the classical modeling procedure to develop a modeling framework that ensures a high degree of fidelity along with adherence to process realizability. An illustrative model using this paradigm is constructed and its predictions are compared against numerical and experimental data, while being contrasted against established closures. The robustness of the linear analysis is tested via stochastic modeling using a Langevin equation based model. Finally, to extend this paradigm to all homogeneous flows, we carry out a linear stability analysis of general three-dimensional homogeneous flows.
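The classical (state) realizability constraints the abstract builds on can be checked mechanically: a Reynolds stress tensor must be symmetric positive semidefinite, from which nonnegative normal stresses and the Cauchy-Schwarz bounds on the shear stresses follow. A minimal sketch (the example tensors are invented; the abstract's stronger process-realizability conditions on the dynamics are not shown here):

```python
import numpy as np

def statistically_realizable(R, tol=1e-12):
    """State realizability: the Reynolds stress tensor must be a
    symmetric positive semidefinite matrix."""
    R = np.asarray(R, dtype=float)
    return bool(np.allclose(R, R.T) and np.linalg.eigvalsh(R).min() >= -tol)

# A physically permissible state and an impermissible one (values invented).
R_ok = np.array([[2.0, 0.5, 0.0],
                 [0.5, 1.0, 0.0],
                 [0.0, 0.0, 0.5]])
R_bad = np.array([[1.0, 2.0, 0.0],   # shear stress violates Cauchy-Schwarz
                  [2.0, 1.0, 0.0],
                  [0.0, 0.0, 0.5]])
```

Process realizability goes further: it constrains not just each instantaneous state but the stochastic evolution connecting states, which is what the framework above augments.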
A Dynamical Systems Approach Towards Modeling the Rapid Pressure Strain Correlation
In this study, the behavior of pressure in the Rapid Distortion Limit, along with its
concomitant modeling, are addressed. In the first part of the work, the role of pressure in
the initiation, propagation and suppression of flow instabilities for quadratic flows is
analyzed. The paradigm of analysis considers the Reynolds stress transport equations to
govern the evolution of a dynamical system, in a state space composed of the Reynolds
stress tensor components. This dynamical system is scrutinized via the identification of
invariant sets and bifurcation analysis. The changing role of pressure in quadratic
flows, viz. hyperbolic, shear and elliptic, is established mathematically and the
underlying physics is explained. Along the maxim of "understanding before prediction", this allows for a deeper insight into the behavior of pressure, thus aiding in its modeling.
The second part of this work deals with Rapid Pressure Strain Correlation modeling in
earnest. Based on the comprehension developed in the preceding section, the classical
pressure strain correlation modeling approaches are revisited. Their shortcomings, along
with their successes, are articulated and explained, mathematically and from the
viewpoint of the governing physics. Some of the salient issues addressed include, but are not limited to, the requisite nature of the model, viz. a linear or a nonlinear structure,
the success of the extant models for hyperbolic flows, their inability to capture elliptic
flows and the use of RDT simulations to validate models. Through this analysis, the
present schism between mathematical and physical guidelines and the engineering
approach is substantiated. Subsequently, a model is developed that adheres to the classical
modeling framework and shows excellent agreement with the RDT simulations. The
performance of this model is compared to that of other nominations prevalent in
engineering simulations. The work concludes with a summary, pertinent observations
and recommendations for future research in the germane field.
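The invariant-set and bifurcation-analysis paradigm described above can be illustrated on a drastically simplified one-dimensional system. The sketch below uses a saddle-node normal form, not the actual Reynolds stress transport equations: fixed points are the invariant sets, and the sign of the Jacobian at each fixed point classifies its stability as a parameter is varied.

```python
import numpy as np

# dx/dt = r + x**2: a saddle-node normal form standing in for the much
# higher-dimensional Reynolds stress state-space dynamics (illustrative only).
def fixed_points(r):
    """Invariant sets of dx/dt = r + x**2, with stability from df/dx = 2x."""
    if r > 0:
        return []                          # no fixed points above the bifurcation
    roots = [-np.sqrt(-r), np.sqrt(-r)]
    return [(x, "stable" if 2 * x < 0 else "unstable") for x in roots]

# Below the bifurcation at r = 0, a stable/unstable pair of fixed points
# exists; above it, the pair has annihilated and all trajectories escape.
pair = fixed_points(-1.0)
none = fixed_points(1.0)
```

In the thesis the analogous analysis is carried out in the far larger state space of Reynolds stress components, where the bifurcations track the changing role of pressure across hyperbolic, shear, and elliptic flows.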
Machine Learning Based Alignment For LCLS-II-HE Optics
The hard X-ray instruments at the Linac Coherent Light Source are in the
design phase for upgrades that will take full advantage of the high repetition
rates that will become available with LCLS-II-HE. The current X-ray Correlation
Spectroscopy instrument will be converted to the Dynamic X-ray Scattering
instrument, and will feature a meV-scale high-resolution monochromator at its
front end with unprecedented coherent flux. With the new capability come many
engineering and design challenges, not least of which is the sensitivity to
long-term drift of the optics. With this in mind, we have estimated the system
tolerance to angular drift and vibration for all the relevant optics (10
components) in terms of how the central energy out of the monochromator will be
affected to inform the mechanical design. Additionally, we have started
planning for methods to correct for such drifts using available (both invasive
and non-invasive) X-ray beam diagnostics. In simulations, we have demonstrated
the ability of trained Machine Learning models to correct misalignments to
maintain the desired central energy and optical axis within the necessary
tolerances. Additionally, we demonstrate the use of Bayesian Optimization both to
minimize the impact of thermal deformations of the crystals and to perform beam
alignment from scratch. The initial results are very promising, and efforts to
further extend this work are ongoing.
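The alignment-from-scratch use of Bayesian Optimization can be sketched with a toy one-dimensional objective. Everything below is invented for illustration (the intensity model, the Gaussian-process length scale, the grid of candidate angles); the real system optimizes multiple coupled optic motors against X-ray diagnostics:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical objective: normalized transmitted intensity versus one crystal
# pitch angle, peaked at an unknown offset (all numbers invented).
def intensity(theta):
    return np.exp(-((theta - 0.37) / 0.1) ** 2)

def rbf(a, b, ls=0.15):
    a, b = np.asarray(a), np.asarray(b)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
phi = lambda z: np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)

X = list(rng.uniform(0.0, 1.0, 3))      # initial random probe angles
y = [float(intensity(x)) for x in X]
grid = np.linspace(0.0, 1.0, 201)       # candidate angles

for _ in range(15):
    Xa, ya = np.array(X), np.array(y)
    # Gaussian-process posterior mean and variance on the candidate grid.
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, ya)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    sd = np.sqrt(np.clip(var, 1e-12, None))
    # Expected-improvement acquisition: probe the most promising angle.
    z = (mu - ya.max()) / sd
    ei = (mu - ya.max()) * Phi(z) + sd * phi(z)
    x_next = float(grid[int(np.argmax(ei))])
    X.append(x_next)
    y.append(float(intensity(x_next)))

theta_best = X[int(np.argmax(y))]
```

The attraction of this approach for alignment is sample efficiency: each probe of `intensity` stands in for a slow, possibly invasive beam measurement, so the optimizer must converge in tens of evaluations rather than thousands.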
Eigenvector perturbation methodology for uncertainty quantification of turbulence models
Reynolds-averaged Navier-Stokes (RANS) models are the primary numerical recourse to investigate complex engineering turbulent flows in industrial applications. However, to establish RANS models as reliable design tools, it is essential to provide estimates for the uncertainty in their predictions. In the recent past, an uncertainty estimation framework relying on eigenvalue and eigenvector perturbations to the modeled Reynolds stress tensor has been widely applied with satisfactory results. However, the methodology for the eigenvector perturbations is not well established. Evaluations using only eigenvalue perturbations do not provide comprehensive estimates of model form uncertainty, especially in flows with streamline curvature, recirculation, or flow separation. In this article, we outline a methodology for the eigenvector perturbations using a predictor-corrector approach, which uses the incipient eigenvalue perturbations along with the Reynolds stress transport equations to determine the eigenvector perturbations. This approach was applied to benchmark cases of complex turbulent flows. The uncertainty intervals estimated using the proposed framework exhibited substantial improvement over eigenvalue-only perturbations and are able to account for a significant proportion of the discrepancy between RANS predictions and high-fidelity data.
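The eigenvalue half of the perturbation framework is straightforward to sketch: decompose the anisotropy tensor, shift its eigenvalues toward a limiting state, and recompose the stress. The sketch below keeps the eigenvectors and turbulent kinetic energy fixed, which is exactly the part the article's predictor-corrector eigenvector perturbation goes beyond; all numbers are invented:

```python
import numpy as np

# Illustrative modeled Reynolds stress tensor (symmetric positive
# semidefinite; the values are invented for this sketch).
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.5, 0.0],
              [0.0, 0.0, 0.3]])
k = 0.5 * np.trace(R)                       # turbulent kinetic energy

# Anisotropy tensor b = R/(2k) - I/3 and its eigendecomposition.
b = R / (2.0 * k) - np.eye(3) / 3.0
lam, V = np.linalg.eigh(b)                  # ascending eigenvalues

# Eigenvalue perturbation: shift toward the one-component limiting state
# diag(-1/3, -1/3, 2/3) by a relative amount delta, keeping V fixed.
lam_1c = np.array([-1.0 / 3.0, -1.0 / 3.0, 2.0 / 3.0])
delta = 0.5
lam_p = (1.0 - delta) * lam + delta * lam_1c

# Recompose the perturbed stress; the trace (hence k) is preserved because
# both lam and lam_1c sum to zero.
b_p = V @ np.diag(lam_p) @ V.T
R_p = 2.0 * k * (b_p + np.eye(3) / 3.0)
```

Because the perturbed eigenvalues stay inside the realizable range, `R_p` remains a physically admissible stress; sweeping `delta` and the choice of limiting state generates the uncertainty intervals that the eigenvector perturbations then widen.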
Testing the data framework for an AI algorithm in preparation for high data rate X-ray facilities
The advent of next-generation X-ray free electron lasers will be capable of
delivering X-rays at a repetition rate approaching 1 MHz continuously. This
will require the development of data systems to handle experiments at these
types of facilities, especially for high-throughput applications, such as
femtosecond X-ray crystallography and X-ray photon fluctuation spectroscopy.
Here, we demonstrate a framework which captures single shot X-ray data at the
LCLS and implements a machine-learning algorithm to automatically extract the
contrast parameter from the collected data. We measure the time required to
return the results and assess the feasibility of using this framework at high
data volume. We use this experiment to determine the feasibility of solutions
for 'live' data analysis at the MHz repetition rate.
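For orientation, the contrast parameter the framework's ML algorithm extracts can also be estimated classically from photon statistics: in the standard low-count speckle model the counts are negative-binomial, with variance mu + mu^2/M and contrast beta = 1/M. A method-of-moments sketch on synthetic counts (all parameters invented; this is not the paper's ML algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-pixel photon counts drawn as a gamma-mixed Poisson, which
# yields negative-binomial statistics: var = mu + mu**2 / M, beta = 1 / M.
M_true, mu = 4.0, 0.5                       # beta_true = 1 / M_true = 0.25
g = rng.gamma(M_true, mu / M_true, size=200_000)
counts = rng.poisson(g)

# Method-of-moments contrast estimate from the count mean and variance.
mean, var = counts.mean(), counts.var()
beta_hat = (var - mean) / mean ** 2
```

The appeal of an ML extractor over this estimator is speed and robustness at per-shot count rates too low for stable moment estimates, which is what makes 'live' MHz-rate analysis plausible.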
Deep Neural Network Uncertainty Quantification for LArTPC Reconstruction
We evaluate uncertainty quantification (UQ) methods for deep learning applied
to liquid argon time projection chamber (LArTPC) physics analysis tasks. As
deep learning applications enter widespread usage among physics data analysis,
neural networks with reliable estimates of prediction uncertainty and robust
performance against overconfidence and out-of-distribution (OOD) samples are
critical for their full deployment in analyzing experimental data. While
numerous UQ methods have been tested on simple datasets, performance
evaluations for more complex tasks and datasets are scarce. We assess the
application of selected deep learning UQ methods on the task of particle
classification using the PiLArNet [1] Monte Carlo 3D LArTPC point cloud
dataset. We observe that UQ methods not only allow for better rejection of
prediction mistakes and OOD detection, but also generally achieve higher
overall accuracy across different task settings. We assess the precision of
uncertainty quantification using different evaluation metrics, such as
distributional separation of prediction entropy across correctly and
incorrectly identified samples, receiver operating characteristic curves
(ROCs), and expected calibration error from observed empirical accuracy. We
conclude that ensembling methods can obtain well calibrated classification
probabilities and generally perform better than other existing methods in deep
learning UQ literature.
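Expected calibration error, one of the metrics named above, can be computed directly from held-out predictions: bin samples by confidence and take the weighted average gap between empirical accuracy and mean confidence in each bin. A minimal sketch on an invented four-sample toy batch:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average of |empirical accuracy - mean confidence|
    over equal-width confidence bins (lo, hi]."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Invented four-sample toy batch of two-class softmax outputs.
probs = np.array([[0.90, 0.10],
                  [0.80, 0.20],
                  [0.60, 0.40],
                  [0.55, 0.45]])
labels = np.array([0, 1, 0, 1])
ece = expected_calibration_error(probs, labels)
```

For an ensemble, `probs` would be the member-averaged softmax outputs, which is the averaging step that tends to produce the well-calibrated probabilities the abstract reports.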