Robust preconditioners for a new stabilized discretization of the poroelastic equations
In this paper, we present block preconditioners for a stabilized
discretization of the poroelastic equations developed in [45]. The
discretization is proved to be well-posed with respect to the physical and
discretization parameters, and thus provides a framework to develop
preconditioners that are robust with respect to such parameters as well. We
construct both norm-equivalent (diagonal) and field-of-value-equivalent
(triangular) preconditioners for both the stabilized discretization and a
perturbation of the stabilized discretization that leads to a smaller overall
problem after static condensation. Numerical tests for both two- and
three-dimensional problems confirm the robustness of the block preconditioners
with respect to the physical and discretization parameters.
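As a rough illustration of the norm-equivalent (block-diagonal) idea, the sketch below assembles a diagonal preconditioner for a generic 2x2 saddle-point system and uses it inside GMRES. Everything here is a hypothetical stand-in: the matrices, the Schur-complement approximation, and the block sizes are invented for illustration and are not the paper's stabilized poroelastic discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical 2x2 block system [[A, B^T], [B, -C]] standing in for a
# displacement/pressure discretization; C plays the role of a stabilization term.
n = 50
A0 = sp.diags([2.0] * n) + sp.random(n, n, density=0.05, random_state=0)
A = (A0 + A0.T) + n * sp.eye(n)           # symmetric and diagonally dominant
B = sp.random(n, n, density=0.05, random_state=1)
C = sp.eye(n) * 1e-2                      # stabilization-like block

K = sp.bmat([[A, B.T], [B, -C]]).tocsc()  # full saddle-point operator

# Block-diagonal ("norm-equivalent") preconditioner diag(A, S), where S
# approximates the negative Schur complement C + B A^{-1} B^T; here we use
# the cheap approximation S ~ C + B diag(A)^{-1} B^T.
Ainv_diag = sp.diags(1.0 / A.diagonal())
S = (C + B @ Ainv_diag @ B.T).tocsc()
A_solve = spla.factorized(A.tocsc())
S_solve = spla.factorized(S)

def apply_prec(r):
    # Apply diag(A, S)^{-1} blockwise to a residual vector.
    return np.concatenate([A_solve(r[:n]), S_solve(r[n:])])

M = spla.LinearOperator(K.shape, matvec=apply_prec)

b = np.ones(2 * n)
x, info = spla.gmres(K, b, M=M, maxiter=200)
print(info, np.linalg.norm(K @ x - b))
```

The triangular (field-of-value-equivalent) variant would additionally apply the off-diagonal coupling block during the preconditioner solve; the diagonal version above is the simplest member of the family.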
Statistical Thermodynamic Isotherm-Based Model for Activity Coefficients in Complex Aqueous Solutions with Atmospheric Aerosol Applications
University of Minnesota M.S.M.E. thesis. May 2015. Major: Mechanical Engineering. Advisor: Cari Dutcher. 1 computer file (PDF); xvii, 126 pages.
Aqueous aerosol particles are nearly ubiquitous in the atmosphere, and yet there remain large uncertainties in their formation processes and ambient properties. The uncertainty is due in part to the complex nature of the individual particle microenvironment, which can involve a myriad of chemical components and multiple phases. The calculation of gas-liquid-solid equilibrium partitioning of the water, electrolyte, and soluble organic components is critical to accurate determination of atmospheric chemistry properties and processes such as new particle formation and activation to cloud condensation nuclei. Previously, a transformative model for capturing thermodynamic properties of multicomponent aqueous solutions over the entire concentration range (Dutcher et al., J. Phys. Chem. 2011, 2012, 2013) was developed using statistical mechanics and multilayer adsorption isotherms. That model needed only a few adsorption energy values to represent the solution thermodynamics of each solute. In the current work, we posit that the adsorption energies are due to dipole-dipole electrostatic forces in solute-solvent and solvent-solvent interactions. This hypothesis was tested in aqueous solutions on (a) thirty-seven 1:1 electrolytes, over a range of cation sizes from H+ to tetrabutylammonium, for common anions including Cl-, Br-, I-, NO3-, OH-, and ClO4-, and (b) twenty water-soluble organic molecules including alcohols and polyols. For both electrolytes and organic solutions, the energies of adsorption can be calculated with the dipole moments of the solvent, the molecular sizes of the solvent and solute, and the solvent-solvent and solvent-solute intermolecular bond lengths. Many of these physical properties are available in the literature, with the exception of the solute-solvent intermolecular bond lengths.
For those, predictive correlations developed here enable estimation of solute and solvent solution activities for which there are little or no activity data. The model was successfully validated using thirty-seven 1:1 electrolytes and twenty non-dissociating organic solutions (Ohm et al., J. Phys. Chem. 2015). However, careful attention is needed for weakly dissociating semi-volatile organic acids. Dicarboxylic acids such as malonic and glutaric acid are treated here as a mixture of non-dissociated organic species (HA) and dissociated organic species (H+ + A-). It was found that the apparent dissociation was greater than that predicted by known dissociation constants alone, emphasizing the effect of dissociation on activity coefficient predictions. To avoid additional parameterization from the mixture approach, an expression was used to relate the Debye-Hückel hard-core collision diameter to the adjustable solute-solvent intermolecular distance.
Non-invasive multigrid for semi-structured grids
Multigrid solvers for hierarchical hybrid grids (HHG) have been proposed to
promote the efficient utilization of high performance computer architectures.
These HHG meshes are constructed by uniformly refining a relatively coarse
fully unstructured mesh. While HHG meshes provide some flexibility for
unstructured applications, most multigrid calculations can be accomplished
using efficient structured grid ideas and kernels. This paper focuses on
generalizing the HHG idea so that it is applicable to a broader community of
computational scientists, and so that it is easier for existing applications to
leverage structured multigrid components. Specifically, we adapt the structured
multigrid methodology to significantly more complex semi-structured meshes.
Further, we illustrate how mature applications might adopt a semi-structured
solver in a relatively non-invasive fashion. To do this, we propose a formal
mathematical framework for describing the semi-structured solver. This
formalism allows us to precisely define the associated multigrid method and to
show its relationship to a more traditional multigrid solver. Additionally, the
mathematical framework clarifies the associated software design and
implementation. Numerical experiments highlight the relationship of the new
solver with classical multigrid. We also demonstrate the generality and
potential performance gains associated with this type of semi-structured
multigrid.
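The HHG-style construction described above (uniform refinement of a coarse, fully unstructured mesh) can be sketched in a few lines. This is a toy illustration only, with a hypothetical two-triangle coarse mesh; it shows how the semi-structured hierarchy is generated, not the multigrid solver itself.

```python
# One level of uniform refinement: split every triangle into four children
# by inserting edge midpoints. Shared edges get a single shared midpoint,
# so refined neighbors remain conforming.

def refine(vertices, triangles):
    """vertices: list of (x, y); triangles: list of (i, j, k) index triples."""
    vertices = list(vertices)
    midpoint = {}  # edge (sorted index pair) -> new vertex index

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint:
            xa, ya = vertices[a]
            xb, yb = vertices[b]
            vertices.append(((xa + xb) / 2, (ya + yb) / 2))
            midpoint[key] = len(vertices) - 1
        return midpoint[key]

    fine = []
    for i, j, k in triangles:
        a, b, c = mid(i, j), mid(j, k), mid(k, i)
        # Three corner triangles plus the center triangle.
        fine += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
    return vertices, fine

# Hypothetical coarse mesh: two triangles covering the unit square.
verts = [(0, 0), (1, 0), (1, 1), (0, 1)]
tris = [(0, 1, 2), (0, 2, 3)]
for _ in range(3):  # three refinement levels
    verts, tris = refine(verts, tris)
print(len(verts), len(tris))  # triangle count grows by a factor of 4 per level
```

Within each refined coarse element the fine mesh is logically structured, which is what allows structured-grid multigrid kernels to be reused on an otherwise unstructured domain.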
Graph Neural Networks and Applied Linear Algebra
Sparse matrix computations are ubiquitous in scientific computing. With the
recent interest in scientific machine learning, it is natural to ask how sparse
matrix computations can leverage neural networks (NN). Unfortunately,
multi-layer perceptron (MLP) neural networks are typically not natural for
either graph or sparse matrix computations. The issue lies with the fact that
MLPs require fixed-sized inputs while scientific applications generally
generate sparse matrices with arbitrary dimensions and a wide range of nonzero
patterns (or matrix graph vertex interconnections). While convolutional NNs
could possibly address matrix graphs where all vertices have the same number of
nearest neighbors, a more general approach is needed for arbitrary sparse
matrices, e.g., arising from discretized partial differential equations on
unstructured meshes. Graph neural networks (GNNs) are one approach suitable to
sparse matrices. GNNs define aggregation functions (e.g., summations) that
operate on variable size input data to produce data of a fixed output size so
that MLPs can be applied. The goal of this paper is to provide an introduction
to GNNs for a numerical linear algebra audience. Concrete examples are provided
to illustrate how many common linear algebra tasks can be accomplished using
GNNs. We focus on iterative methods that employ computational kernels such as
matrix-vector products, interpolation, relaxation methods, and
strength-of-connection measures. Our GNN examples include cases where
parameters are determined a priori as well as cases where parameters must be
learned. The intent with this article is to help computational scientists
understand how GNNs can be used to adapt machine learning concepts to
computational tasks associated with sparse matrices. It is hoped that this
understanding will stimulate data-driven extensions of classical sparse linear
algebra tasks.
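The central point (an aggregation over a variable-size neighborhood feeding a fixed-size per-vertex update) can be made concrete with a classical kernel: weighted Jacobi relaxation viewed as one round of message passing on the matrix graph. This minimal sketch is an illustration in the spirit of the article, with hand-set weights rather than learned MLP parameters; the problem setup is hypothetical.

```python
import numpy as np
import scipy.sparse as sp

# Weighted Jacobi, x <- x + omega * D^{-1} (b - A x), as message passing:
# vertex i sums messages a_ij * x_j over its (arbitrary-length) neighbor
# list, then applies the same fixed-size local update at every vertex.

def jacobi_as_gnn(A, b, x, omega=2.0 / 3.0):
    A = A.tocsr()
    diag = A.diagonal()
    x_new = np.empty_like(x)
    for i in range(A.shape[0]):
        # Aggregation: variable-size neighborhood of vertex i (CSR row i).
        start, end = A.indptr[i], A.indptr[i + 1]
        cols, vals = A.indices[start:end], A.data[start:end]
        agg = np.dot(vals, x[cols])          # sum_j a_ij x_j = (A x)_i
        # Fixed-size vertex update (the slot an MLP would occupy in a GNN).
        x_new[i] = x[i] + omega * (b[i] - agg) / diag[i]
    return x_new

# Hypothetical test problem: 1-D Poisson matrix, homogeneous right-hand side.
n = 32
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.zeros(n)
x = np.random.default_rng(0).standard_normal(n)
for _ in range(50):
    x = jacobi_as_gnn(A, b, x)
print(np.linalg.norm(x))  # relaxation damps the error toward the solution 0
```

A learned variant would replace the hand-set update `x[i] + omega * (...)` with a small MLP applied to the aggregated value, which is possible precisely because the aggregation step reduces each variable-size neighborhood to a fixed-size quantity.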
Organic component vapor pressures and hygroscopicities of aqueous aerosol measured by optical tweezers
Measurements of the hygroscopic response of aerosol and the particle-to-gas partitioning of semivolatile organic compounds are crucial for providing more accurate descriptions of the compositional and size distributions of atmospheric aerosol. Concurrent measurements of particle size and composition (inferred from refractive index) are reported here using optical tweezers to isolate and probe individual aerosol droplets over extended timeframes. The measurements are shown to allow accurate retrievals of component vapor pressures and hygroscopic response through examining correlated variations in size and composition for binary droplets containing water and a single organic component. Measurements are reported for a homologous series of dicarboxylic acids, maleic acid, citric acid, glycerol, or 1,2,6-hexanetriol. An assessment of the inherent uncertainties in such measurements when measuring only particle size is provided to confirm the value of such a correlational approach. We also show that the method of molar refraction provides an accurate characterization of the compositional dependence of the refractive index of the solutions. In this method, the density of the pure liquid solute is the largest uncertainty and must be either known or inferred from subsaturated measurements with an error of <±2.5% to discriminate between different thermodynamic treatments.
Diffusion and reactivity in ultraviscous aerosol and the correlation with particle viscosity
Direct comparison of diffusion coefficients and viscosities of ternary component single aerosol particles levitated using optical tweezers.
The regulatory network of the White Collar complex during early mushroom development in Schizophyllum commune
Blue light is an important signal for fungal development. In the mushroom-forming basidiomycete Schizophyllum commune, blue light is detected by the White Collar complex, which consists of WC-1 and WC-2. Most of our knowledge on this complex is derived from the ascomycete Neurospora crassa, where both WC-1 and WC-2 contain GATA zinc-finger transcription factor domains. In basidiomycetes, WC-1 is truncated and does not contain a transcription factor domain, but both WC-1 and WC-2 are still important for development. We show that dimerization of WC-1 and WC-2 happens independently of light in S. commune, but that induction by light is required for promoter binding by the White Collar complex. Furthermore, the White Collar complex is a promoter of transcription, but binding of the complex alone is not always sufficient to initiate transcription. For its function, the White Collar complex associates directly with the promoters of structural genes involved in mushroom development, such as hydrophobin genes, but also promotes the expression of other transcription factors that play a role in mushroom development.
Influence of Particle Viscosity on Mass Transfer and Heterogeneous Ozonolysis Kinetics in Aqueous-Sucrose-Maleic Acid Aerosol
The ozonolysis kinetics of viscous aerosol particles containing maleic acid are studied. Kinetic fits are constrained by measured particle viscosities.
The On-Site Analysis of the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) observatory will be one of the largest
ground-based very high-energy gamma-ray observatories. The On-Site Analysis
will be the first CTA scientific analysis of data acquired from the array of
telescopes, in both northern and southern sites. The On-Site Analysis will have
two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and
the Level-B one. The RTA performs data quality monitoring and must be able to
issue automated alerts on variable and transient astrophysical sources within
30 seconds from the last acquired Cherenkov event that contributes to the
alert, with a sensitivity no more than a factor of 3 worse than that of the
final pipeline. The Level-B Analysis has a better sensitivity (no more than a
factor of 2 worse than the final one), and its results should be
available within 10 hours of the acquisition of the data; for this reason,
this analysis could be performed at the end of an observation or the next morning.
The latency (in particular for the RTA) and the sensitivity requirements are
challenging because of the large data rate, a few GByte/s. The remote
connection to the CTA candidate site with a rather limited network bandwidth
makes the issue of the exported data size extremely critical and prevents any
kind of processing in real-time of the data outside the site of the telescopes.
For these reasons the analysis will be performed on-site with infrastructures
co-located with the telescopes, with limited electrical power availability and
with a reduced possibility of human intervention. This means, for example, that
the on-site hardware infrastructure should have low-power consumption. A
substantial effort towards the optimization of high-throughput computing
service is envisioned to provide hardware and software solutions with
high-throughput, low-power consumption at low cost.
Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589