18 research outputs found
Discontinuous Galerkin methods for resolving non-linear and dispersive near-shore waves
Near-shore hydrodynamics has been an important research area dealing with coastal processes. The near-shore coastal region is the region between the shoreline and a notional offshore limit, usually defined as the depth beyond which the bottom no longer influences the waves. This spatially limited but highly energetic zone is where water waves shoal, break, and transmit energy to the shoreline, and it is governed by highly dispersive and non-linear effects. An accurate understanding of these phenomena is extremely useful, especially in emergency situations during hurricanes and storms. While the shallow water assumption is valid only in regions where the characteristic wavelength exceeds the typical depth by orders of magnitude, Boussinesq-type equations have been used to model near-shore wave motion beyond that regime. Unfortunately, these equations form complex systems of coupled non-linear and dispersive differential equations, which have made the development of numerical approximations extremely challenging. In this dissertation, a local discontinuous Galerkin method for the Boussinesq-type Green-Naghdi equations is presented and validated against experimental results. The Green-Naghdi equations currently have many variants; we develop a numerical method in one horizontal dimension for a variant based on rotational characteristics in the velocity field. A stability criterion is established for the linearized Green-Naghdi equations, and a careful proof of the linear stability of the numerical method is carried out. Verification is done against a linearized standing-wave problem over flat bathymetry, and h and p (denoted by K in this thesis) error rates are plotted. The numerical method is validated against experimental data from dispersive and non-linear test cases.
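For context, one common flat-bathymetry form of the one-dimensional Green-Naghdi (Serre) system is sketched below. This generic variant is an illustrative assumption: the abstract does not reproduce the rotational-velocity formulation actually used in the thesis.

```latex
% One common flat-bottom form of the 1D Green-Naghdi (Serre) equations;
% h(x,t) is the water depth, u(x,t) the depth-averaged velocity, g gravity.
% Illustrative only: not necessarily the rotational variant of the thesis.
\begin{align}
  h_t + (h u)_x &= 0, \\
  u_t + u\, u_x + g\, h_x &=
    \frac{1}{3h}\,\partial_x\!\left[ h^3 \left( u_{xt} + u\, u_{xx} - u_x^2 \right) \right].
\end{align}
```

The right-hand side of the momentum equation carries the dispersive correction that distinguishes the system from the plain shallow water equations.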
On the Existence of Steady-State Solutions to the Equations Governing Fluid Flow in Networks
The steady-state solution of fluid flow in pipeline infrastructure networks
driven by junction/node potentials is a crucial ingredient in various decision
support tools for system design and operation. While the non-linear system is
known to have a unique solution (when one exists), the absence of a definite
result on existence of solutions hobbles the development of computational
algorithms, for it is not possible to distinguish between algorithm failure and
non-existence of a solution. In this letter, we show that a unique solution
exists for such non-linear systems if the term "solution" is interpreted in terms
of potentials and flows rather than pressures and flows. The existence result
for flow of natural gas in networks also applies to other fluid flow networks
such as water distribution networks or networks that transport carbon dioxide
in carbon capture and sequestration. Most importantly, by giving a complete
answer to the question of existence of solutions, our result enables correct
diagnosis of algorithmic failure, problem stiffness and non-convergence in
computational algorithms.
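To make the potential formulation concrete, here is a minimal Python sketch (not the authors' algorithm): with squared-pressure potentials pi_i = p_i^2, a Weymouth-type pipe law relates the potential drop to the flow, and steady state reduces to nonlinear nodal balance equations. The three-node network, resistances, and injections below are hypothetical.

```python
# Minimal sketch: steady-state flow in a 3-node pipeline network using
# squared-pressure potentials pi = p^2 (hypothetical data, not the
# letter's method). Pipe flow follows a Weymouth-type law:
#   q_ij = sign(pi_i - pi_j) * sqrt(|pi_i - pi_j| / beta_ij)
import numpy as np
from scipy.optimize import fsolve

edges = [(0, 1, 1.0), (1, 2, 2.0)]        # (from, to, resistance beta)
injections = np.array([0.5, 0.0, -0.5])   # net supply at each node
pi_ref = 1.0                              # fix the potential at node 0 (slack)

def pipe_flow(pi_i, pi_j, beta):
    d = pi_i - pi_j
    return np.sign(d) * np.sqrt(abs(d) / beta)

def residual(pi_free):
    pi = np.concatenate(([pi_ref], pi_free))  # potentials at all nodes
    balance = injections.copy()
    for i, j, beta in edges:
        q = pipe_flow(pi[i], pi[j], beta)
        balance[i] -= q   # flow leaving node i
        balance[j] += q   # flow entering node j
    return balance[1:]    # enforce balance at the non-slack nodes

pi_free = fsolve(residual, np.array([0.9, 0.8]))
print("node potentials:", np.concatenate(([pi_ref], pi_free)))
```

Because a solution in potentials and flows always exists for such a system, a solver failure on these balance equations can be attributed to the algorithm or to problem stiffness rather than to non-existence, which is the diagnostic value the abstract highlights.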
Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation
Given two probability densities on related data spaces, we seek a map pushing
one density to the other while satisfying application-dependent constraints.
For maps to have utility in a broad application space (including domain
translation, domain adaptation, and generative modeling), the map must be
available to apply on out-of-sample data points and should correspond to a
probabilistic model over the two spaces. Unfortunately, existing approaches,
which are primarily based on optimal transport, do not address these needs. In
this paper, we introduce a novel pushforward map learning algorithm that
utilizes normalizing flows to parameterize the map. We first re-formulate the
classical optimal transport problem to be map-focused and propose a learning
algorithm to select from all possible maps under the constraint that the map
minimizes a probability distance and application-specific regularizers; thus,
our method can be seen as solving a modified optimal transport problem. Once
the map is learned, it can be used to map samples from a source domain to a
target domain. In addition, because the map is parameterized as a composition
of normalizing flows, it models the empirical distributions over the two data
spaces and allows both sampling and likelihood evaluation for both data sets.
We compare our method (parOT) to related optimal transport approaches in the
context of domain adaptation and domain translation on benchmark data sets.
Finally, to illustrate the impact of our work on applied problems, we apply
parOT to a real scientific application: spectral calibration for
high-dimensional measurements from two vastly different environments.
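As a toy illustration of the pushforward idea (generic, not the parOT implementation), the sketch below fits a one-layer invertible affine "flow" so that it pushes source samples onto a target distribution while penalizing transport cost. The moment-matching loss and all names are illustrative assumptions standing in for the paper's probability distance and regularizers.

```python
# Toy sketch of learning a pushforward map with an invertible affine
# "flow" T(x) = exp(s) * x + b. The loss matches pushed-source and
# target samples and adds a quadratic transport-cost regularizer,
# echoing a map-focused optimal-transport objective. Illustrative only.
import torch

torch.manual_seed(0)
source = torch.randn(512, 1)               # source samples ~ N(0, 1)
target = 2.0 * torch.randn(512, 1) + 3.0   # target samples ~ N(3, 4)

s = torch.zeros(1, requires_grad=True)     # log-scale of the flow
b = torch.zeros(1, requires_grad=True)     # shift of the flow
opt = torch.optim.Adam([s, b], lr=0.05)

for step in range(500):
    pushed = torch.exp(s) * source + b     # pushforward T_# source
    # Simple moment-matching distance (a stand-in for the paper's
    # probability distance) plus a transport regularizer.
    dist = (pushed.mean() - target.mean()) ** 2 \
         + (pushed.std() - target.std()) ** 2
    reg = 1e-3 * ((pushed - source) ** 2).mean()
    loss = dist + reg
    opt.zero_grad(); loss.backward(); opt.step()

print(f"scale={torch.exp(s).item():.2f}, shift={b.item():.2f}")
```

Because the map is invertible by construction, T^{-1}(y) = (y - b) / exp(s) maps target samples back to the source domain, and the change-of-variables formula gives likelihoods in both directions, mirroring the sampling and likelihood-evaluation properties described above.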
BB-ML: Basic Block Performance Prediction using Machine Learning Techniques
Recent years have seen the adoption of Machine Learning (ML) techniques to
predict the performance of large-scale applications, mostly at a coarse level.
In contrast, we propose to use ML techniques for performance prediction at a
much finer granularity, namely at the Basic Block (BB) level: single-entry,
single-exit code blocks that compilers use to break a large code base into
manageable pieces for analysis. We extrapolate the basic block
execution counts of GPU applications and use them for predicting the
performance for large input sizes from the counts of smaller input sizes. We
train a Poisson Neural Network (PNN) model using random input values as well as
the lowest input values of the application to learn the relationship between
inputs and basic block counts. Experimental results show that the model can
accurately predict the basic block execution counts of 16 GPU benchmarks. We
achieve an accuracy of 93.5% in extrapolating the basic block counts for large
input sets when trained on smaller input sets and an accuracy of 97.7% in
predicting basic block counts on random instances. In a case study, we apply
the ML model to CUDA GPU benchmarks for performance prediction across a
spectrum of applications. We use a variety of metrics for evaluation, including
global memory requests and the active cycles of tensor cores, ALU, and FMA
units. Results demonstrate the model's capability of predicting the performance
of large datasets with average error rates of 0.85% and 0.17% for global and
shared memory requests, respectively. Additionally, to address the utilization
of the main functional units in Ampere architecture GPUs, we calculate the
active cycles for tensor cores, ALU, FMA, and FP64 units and achieve average
errors of 2.3% and 10.66% for the ALU and FMA units, respectively, while the
maximum observed error across all tested applications and units reaches 18.5%.
Comment: Accepted at the 29th IEEE International Conference on Parallel and
Distributed Systems (ICPADS 2023)
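A minimal sketch of the Poisson-regression idea is given below; the synthetic data, feature choice, and two-layer network are illustrative assumptions, not the paper's PNN architecture.

```python
# Minimal sketch: predict a basic block's execution count from the
# application input size with a Poisson regression network. A log link
# keeps the predicted rate positive; torch.nn.PoissonNLLLoss with
# log_input=True consumes the log-rate directly. Synthetic data and the
# two-layer MLP are illustrative, not the paper's PNN.
import torch

torch.manual_seed(0)
n = torch.linspace(1, 64, 200).unsqueeze(1)        # training input sizes
counts = torch.poisson(5.0 * n.squeeze() ** 1.2)   # synthetic BB counts

model = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)  # outputs the log-rate log(lambda)
loss_fn = torch.nn.PoissonNLLLoss(log_input=True)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

x = torch.log(n)  # train on log input size for smoother extrapolation
for step in range(2000):
    log_rate = model(x).squeeze()
    loss = loss_fn(log_rate, counts)
    opt.zero_grad(); loss.backward(); opt.step()

# Extrapolate to an input size larger than any seen in training.
with torch.no_grad():
    pred = torch.exp(model(torch.log(torch.tensor([[128.0]]))))
print(f"predicted basic block count at n=128: {pred.item():.0f}")
```

The same pattern, counts from small inputs in, counts for large inputs out, is what lets basic-block-level predictions feed the coarser performance metrics (memory requests, active cycles) evaluated in the case study.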
Hepatitis following famotidine: a case report
H2 receptor antagonists can rarely cause idiosyncratic drug reactions leading to acute hepatitis. Famotidine, however, is considered a relatively safe drug with regard to hepatotoxicity. We report the case of a 47-year-old male with a history of hepatitis C who developed acute hepatitis on the third day of hospitalization, with a dramatic rise in his liver enzymes from normal values at the time of admission. The acute rise in liver enzymes made us consider an adverse drug reaction, and famotidine was discontinued. His liver enzymes subsequently returned to normal within seven days. Thus, physicians should consider famotidine-induced hepatitis as a possible etiology of acute liver dysfunction.