18 research outputs found

    On the Existence of Steady-State Solutions to the Equations Governing Fluid Flow in Networks

    The steady-state solution of fluid flow in pipeline infrastructure networks driven by junction/node potentials is a crucial ingredient in various decision support tools for system design and operation. While the non-linear system is known to have a unique solution (when one exists), the absence of a definite result on the existence of solutions hobbles the development of computational algorithms, for it is not possible to distinguish between algorithm failure and non-existence of a solution. In this letter we show that a unique solution exists for such non-linear systems if the term "solution" is interpreted in terms of potentials and flows rather than pressures and flows. The existence result for the flow of natural gas in networks also applies to other fluid flow networks, such as water distribution networks or networks that transport carbon dioxide for carbon capture and sequestration. Most importantly, by giving a complete answer to the question of existence of solutions, our result enables correct diagnosis of algorithmic failure, problem stiffness, and non-convergence in computational algorithms.
    Comment: 5 pages, 2 figures
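    As a concrete illustration of the potential formulation described above, the sketch below solves a toy three-node gas network for its steady state. It is a minimal sketch, not the letter's algorithm: it assumes a Weymouth-type edge relation pi_i - pi_j = beta_ij * f_ij * |f_ij| with potential pi = p^2 (squared pressure), and the node labels, beta coefficients, and withdrawals are illustrative. The solver works entirely in potentials; a negative potential at a node would mean no physical pressure solution exists even though the potential/flow solution does, which is exactly the distinction the abstract draws.

    ```python
    # Minimal sketch (not the letter's algorithm): steady-state network flow
    # in the potential formulation.  Assumes a Weymouth-type edge relation
    #     pi_i - pi_j = beta_ij * f_ij * |f_ij|,   pi = p**2,
    # so edge flow is f_ij = sign(pi_i - pi_j) * sqrt(|pi_i - pi_j| / beta_ij).
    # Node labels, beta values, and withdrawals below are illustrative only.
    import numpy as np
    from scipy.optimize import fsolve

    edges = [(0, 1, 0.5), (1, 2, 0.8), (0, 2, 1.0)]   # (tail, head, beta)
    pi_slack = 100.0                 # fixed potential at node 0 (slack node)
    withdrawal = {1: 2.0, 2: 3.0}    # demanded outflow at non-slack nodes

    def edge_flow(pi_i, pi_j, beta):
        d = pi_i - pi_j
        return np.sign(d) * np.sqrt(abs(d) / beta)

    def residual(pi_unknown):
        pi = {0: pi_slack, 1: pi_unknown[0], 2: pi_unknown[1]}
        net = {1: 0.0, 2: 0.0}
        for i, j, beta in edges:
            f = edge_flow(pi[i], pi[j], beta)
            if i in net: net[i] -= f     # flow leaving the tail node
            if j in net: net[j] += f     # flow entering the head node
        # mass balance at each non-slack node: net inflow - withdrawal = 0
        return [net[1] - withdrawal[1], net[2] - withdrawal[2]]

    pi_sol = fsolve(residual, x0=[90.0, 80.0])
    print("node potentials:", pi_sol)   # pressures follow as sqrt(pi) when pi >= 0
    ```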

    Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation

    Given two probability densities on related data spaces, we seek a map pushing one density to the other while satisfying application-dependent constraints. For maps to have utility in a broad application space (including domain translation, domain adaptation, and generative modeling), the map must be available to apply on out-of-sample data points and should correspond to a probabilistic model over the two spaces. Unfortunately, existing approaches, which are primarily based on optimal transport, do not address these needs. In this paper, we introduce a novel pushforward map learning algorithm that utilizes normalizing flows to parameterize the map. We first reformulate the classical optimal transport problem to be map-focused and propose a learning algorithm to select from all possible maps under the constraint that the map minimizes a probability distance and application-specific regularizers; thus, our method can be seen as solving a modified optimal transport problem. Once the map is learned, it can be used to map samples from a source domain to a target domain. In addition, because the map is parameterized as a composition of normalizing flows, it models the empirical distributions over the two data spaces and allows both sampling and likelihood evaluation for both data sets. We compare our method (parOT) to related optimal transport approaches in the context of domain adaptation and domain translation on benchmark data sets. Finally, to illustrate the impact of our work on applied problems, we apply parOT to a real scientific application: spectral calibration for high-dimensional measurements from two vastly different environments.
    Comment: 19 pages, 7 figures
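    The sketch below illustrates, in broad strokes, the kind of map-focused objective the abstract describes: an invertible flow T trained so that T pushes source samples onto the target distribution under a sample-based probability distance, plus a transport-cost regularizer. It is a hypothetical stand-in, not the authors' parOT implementation: the RealNVP-style coupling layers, the Gaussian-kernel MMD used as the probability distance, and the lambda_reg weight are all assumptions made for illustration.

    ```python
    # Minimal sketch (not the authors' parOT): learn an invertible pushforward
    # map T from affine coupling layers, trained to minimize a Gaussian-kernel
    # MMD between T(source) and target plus a transport-cost regularizer.
    # Layer sizes, kernel bandwidth, and lambda_reg are illustrative.
    import torch
    import torch.nn as nn

    class Coupling(nn.Module):
        """Affine coupling layer: transforms half the coordinates
        conditioned on the other half, so the map stays invertible."""
        def __init__(self, dim, hidden=64, flip=False):
            super().__init__()
            self.flip = flip
            half = dim // 2
            self.net = nn.Sequential(
                nn.Linear(dim - half, hidden), nn.Tanh(),
                nn.Linear(hidden, 2 * half))
        def forward(self, x):
            x1, x2 = x.chunk(2, dim=-1)
            if self.flip: x1, x2 = x2, x1
            s, t = self.net(x1).chunk(2, dim=-1)
            y2 = x2 * torch.exp(torch.tanh(s)) + t   # bounded scale for stability
            out = (y2, x1) if self.flip else (x1, y2)
            return torch.cat(out, dim=-1)

    def mmd(x, y, bw=1.0):
        """Biased Gaussian-kernel MMD^2 between two sample sets."""
        def k(a, b):
            return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bw ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    dim, lambda_reg = 2, 0.1
    T = nn.Sequential(Coupling(dim), Coupling(dim, flip=True),
                      Coupling(dim), Coupling(dim, flip=True))
    opt = torch.optim.Adam(T.parameters(), lr=1e-3)

    # Toy source/target samples standing in for the two data spaces.
    source = torch.randn(512, dim)
    target = torch.randn(512, dim) * 0.5 + torch.tensor([2.0, -1.0])

    for step in range(2000):
        Tx = T(source)
        loss = mmd(Tx, target) + lambda_reg * (source - Tx).pow(2).sum(-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    ```

    Because each coupling layer is invertible, the composed map can be run in both directions and, paired with a base density, supports likelihood evaluation, echoing the sampling and likelihood properties the abstract highlights.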

    BB-ML: Basic Block Performance Prediction using Machine Learning Techniques

    Recent years have seen the adoption of Machine Learning (ML) techniques to predict the performance of large-scale applications, mostly at a coarse level. In contrast, we propose to use ML techniques for performance prediction at a much finer granularity, namely at the Basic Block (BB) level; basic blocks are single-entry, single-exit code blocks that compilers use to break a large program into manageable pieces for analysis. We extrapolate the basic block execution counts of GPU applications and use them to predict performance at large input sizes from counts collected at smaller input sizes. We train a Poisson Neural Network (PNN) model using random input values as well as the lowest input values of the application to learn the relationship between inputs and basic block counts. Experimental results show that the model can accurately predict the basic block execution counts of 16 GPU benchmarks. We achieve an accuracy of 93.5% in extrapolating the basic block counts for large input sets when trained on smaller input sets, and an accuracy of 97.7% in predicting basic block counts on random instances. In a case study, we apply the ML model to CUDA GPU benchmarks for performance prediction across a spectrum of applications. We use a variety of metrics for evaluation, including global memory requests and the active cycles of tensor cores, ALU, and FMA units. Results demonstrate the model's capability to predict the performance of large datasets with average error rates of 0.85% and 0.17% for global and shared memory requests, respectively. Additionally, to address the utilization of the main functional units in Ampere architecture GPUs, we calculate the active cycles for tensor core, ALU, FMA, and FP64 units, achieving average errors of 2.3% and 10.66% for the ALU and FMA units, while the maximum observed error across all tested applications and units reaches 18.5%.
    Comment: Accepted at the 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023)
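    The sketch below shows one plausible reading of this training setup: a small network trained with a Poisson negative-log-likelihood loss to map an input size to per-basic-block execution counts, fitted on small inputs and then queried at a larger one. It is a minimal sketch, not the paper's BB-ML model; the architecture, the synthetic counts (which grow linearly or quadratically with input size n), and the log-scale input feature are illustrative assumptions.

    ```python
    # Minimal sketch (not the paper's BB-ML model): a small network trained
    # with a Poisson NLL loss to map an application's input size to
    # per-basic-block execution counts.  The synthetic data below stands in
    # for profiled GPU basic-block counts.
    import torch
    import torch.nn as nn

    n_blocks = 3
    model = nn.Sequential(
        nn.Linear(1, 32), nn.ReLU(),
        nn.Linear(32, n_blocks))                  # outputs log-rates per block
    loss_fn = nn.PoissonNLLLoss(log_input=True)   # expects log(lambda)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Train on small input sizes only, then extrapolate to a larger one.
    n = torch.arange(1.0, 65.0).unsqueeze(1)           # small input sizes
    rates = torch.cat([2 * n, 0.5 * n ** 2, n + 10], dim=1)
    counts = torch.poisson(rates)                      # synthetic BB counts

    x = torch.log(n)   # log-scale input suits polynomial count growth
    for step in range(3000):
        loss = loss_fn(model(x), counts)
        opt.zero_grad(); loss.backward(); opt.step()

    big = torch.tensor([[256.0]])
    pred = model(torch.log(big)).exp()   # predicted counts at a larger input
    print("predicted counts at n=256:", pred.detach().numpy())
    ```

    Feeding the model log(n) lets polynomial count growth appear as a roughly linear trend in log space, which is what makes extrapolation beyond the training input sizes plausible.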

    Hepatitis following famotidine: a case report

    H2 receptor antagonists can rarely cause idiosyncratic drug reactions leading to acute hepatitis. Famotidine, however, is considered a relatively safe drug with regard to hepatotoxicity. We report a case of a 47-year-old male with a history of hepatitis C who developed acute hepatitis on the third day of hospitalization, with a dramatic rise in his liver enzymes from normal values at the time of admission. The acute rise in liver enzymes led us to consider an adverse drug reaction, and famotidine was discontinued. His liver enzymes subsequently returned to normal within seven days. Thus, physicians should consider famotidine-induced hepatitis as a possible etiology of acute liver dysfunction.