6,369 research outputs found
Molecular Mechanisms by which Tetrahydrofuran Affects CO₂ Hydrate Growth: Implications for Carbon Storage
Gas hydrates have attracted significant fundamental and applied interest due to their important role in various technological and environmental processes. More recently, gas hydrates have shown potential applications for greenhouse gas capture and storage. To facilitate the latter application, introducing chemical additives into clathrate hydrates could help to enhance hydrate formation/growth rates, provided the gas storage capacity is not reduced. Employing equilibrium molecular dynamics, we study the impact of tetrahydrofuran (THF) on the kinetics of carbon dioxide (CO₂) hydrate growth/dissociation and on the CO₂ storage capacity of hydrates. Our simulations reproduce experimental data for CO₂ and CO₂+THF hydrates at selected operating conditions. The simulated results confirm that THF in stoichiometric concentration does reduce CO₂ storage capacity. This is due not only to the shortage of CO₂ trapping in sII hydrate 5^{12} cages, but also to the favored occupancy of hydrate cages by THF owing to preferential THF−water hydrogen bonds. An analysis of the dynamical properties of CO₂ and THF at the hydrate-liquid interface reveals that THF can expedite CO₂ diffusion, shifting the conditions conducive to CO₂ hydrate growth and stability to lower pressures and higher temperatures compared with systems without THF. These simulation results augment experimental observations from the literature, as they provide needed insights into the molecular mechanisms that can be adjusted to achieve optimal CO₂ storage in hydrates.
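The diffusion claim above rests on standard molecular dynamics analysis of mean squared displacement (MSD). The sketch below is illustrative only, not the authors' code: it estimates a self-diffusion coefficient from an unwrapped trajectory via the Einstein relation, and the array shapes, frame spacing, and fitting window are assumptions.

```python
import numpy as np

def msd(positions):
    """Mean squared displacement from unwrapped coordinates.

    positions: array of shape (n_frames, n_molecules, 3), e.g. CO2 centres of mass.
    Returns MSD(t) averaged over molecules, referenced to the first frame.
    """
    disp = positions - positions[0]                 # displacement from t = 0
    return (disp ** 2).sum(axis=-1).mean(axis=-1)

def diffusion_coefficient(msd_curve, dt_ps, fit_slice=slice(100, 1000)):
    """Einstein relation in 3D: MSD(t) ~ 6 D t at long times.

    dt_ps: time between frames in picoseconds (assumed).
    fit_slice: frames over which MSD is linear (assumed; inspect before fitting).
    """
    t = np.arange(len(msd_curve)) * dt_ps
    slope, _ = np.polyfit(t[fit_slice], msd_curve[fit_slice], 1)
    return slope / 6.0                              # D in (length unit)^2 per ps

# Hypothetical usage: traj_co2 holds (n_frames, n_CO2, 3) unwrapped coordinates in nm.
# D_co2 = diffusion_coefficient(msd(traj_co2), dt_ps=2.0)   # nm^2/ps
```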
Screen test for cadmium and nickel plates as developed and used within the Aerospace Corporation
A recently developed procedure is described for quantifying the loading uniformity of nickel and cadmium plates and for screening finished electrodes prior to cell assembly. The technique uses the initial solubility rates of the active material in a standard chemical deloading solution under fixed conditions. The method can provide a reproducible indication of plate loading uniformity in situations where high surface loading limits the free flow of deloading solution into the internal porosity of the sinter plate. A preliminary study indicates that 'good' cell performance is associated with higher deloading rates.
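As a rough illustration of the screening idea (not the Aerospace Corporation procedure itself), the initial deloading rate can be taken as the slope of the early, approximately linear part of a dissolved-mass-versus-time curve; the five-minute window and the threshold rule below are assumptions.

```python
import numpy as np

def initial_deloading_rate(time_min, mass_dissolved_mg):
    """Initial dissolution rate (mg/min) from the early, nominally linear part of a
    deloading curve; the 5-minute window is an assumption and should be checked
    against the actual data."""
    early = time_min <= time_min[0] + 5.0
    slope, _ = np.polyfit(time_min[early], mass_dissolved_mg[early], 1)
    return slope

# Hypothetical screening rule: plates whose initial rate falls below a
# lot-specific threshold would be flagged for non-uniform loading.
# flagged = initial_deloading_rate(t, m) < RATE_THRESHOLD_MG_PER_MIN
```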
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions. Comment: 232 pages.
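For readers unfamiliar with the tensor train (TT) format, the following minimal NumPy sketch illustrates the TT-SVD idea of sequential truncated SVDs. It is an illustration of the concept reviewed in the monograph, not the authors' implementation, and the uniform rank cap is a user-chosen assumption.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into TT cores of shape (r_prev, n_k, r_k)
    by sequential truncated SVDs (a plain illustration of the TT-SVD idea)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = np.asarray(tensor)
    for n_k in dims[:-1]:
        mat = mat.reshape(r_prev * n_k, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_k = min(max_rank, len(s))
        cores.append(u[:, :r_k].reshape(r_prev, n_k, r_k))
        mat = s[:r_k, None] * vt[:r_k]          # carry the remainder forward
        r_prev = r_k
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Usage sketch: approximate a random 8 x 8 x 8 x 8 tensor with TT ranks <= 4.
# cores = tt_svd(np.random.rand(8, 8, 8, 8), max_rank=4)
```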
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets has highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
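As a companion to the keywords above, the sketch below illustrates a truncated higher-order SVD (HOSVD), the basic orthogonal Tucker decomposition, in plain NumPy. It is illustrative only; the chosen multilinear ranks and tensor sizes are assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated HOSVD (Tucker form): per-mode factors from the leading left
    singular vectors of each unfolding, then the core by multilinear projection."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Usage sketch: Tucker-(2, 3, 2) approximation of a 10 x 12 x 8 tensor; the
# approximation is recovered by multiplying the core back with the factor matrices.
# core, factors = hosvd(np.random.rand(10, 12, 8), ranks=(2, 3, 2))
```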
Pediatric Automatic Sleep Staging: A comparative study of state-of-the-art deep learning methods.
Despite the tremendous progress recently made towards automatic sleep staging in adults, it is currently unknown whether the most advanced algorithms generalize to the pediatric population, which displays distinctive characteristics in overnight polysomnography (PSG). To answer this question, we conduct a large-scale comparative study of state-of-the-art deep learning methods for pediatric automatic sleep staging. Six different deep neural networks with diverging features are adopted to evaluate a sample of more than 1,200 children across a wide spectrum of obstructive sleep apnea (OSA) severity. Our experimental results show that the individual performance of automated pediatric sleep stagers when evaluated on new subjects is equivalent to the expert-level performance reported for adults. Combining the six stagers into ensemble models further boosts the staging accuracy, reaching an overall accuracy of 88.8%, a Cohen's kappa of 0.852, and a macro F1-score of 85.8%. At the same time, the ensemble models lead to reduced predictive uncertainty. The results also show that the studied algorithms and their ensembles are robust to concept drift when the training and test data were recorded seven months apart and after clinical intervention. However, we show that the improvements in staging performance are not necessarily clinically significant, although the ensemble models lead to more favorable clinical measures than the six standalone models. Detailed analyses further demonstrate "almost perfect" agreement of the automatic stagers with one another and their similar patterns of staging errors, suggesting little room for improvement.
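The abstract does not spell out the ensembling rule; one common choice, sketched below under that assumption, is to average the six stagers' per-epoch class probabilities and score the result with Cohen's kappa and macro F1. The variable names (y_true, probs_model_*) are hypothetical.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def ensemble_predict(prob_list):
    """Combine per-model class probabilities by unweighted averaging.

    prob_list: list of arrays of shape (n_epochs, n_stages), one per stager.
    Returns the predicted stage label for each scoring epoch.
    """
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Hypothetical evaluation, assuming y_true holds expert-scored stage labels and
# probs_model_1 ... probs_model_6 are the six stagers' softmax outputs.
# y_pred = ensemble_predict([probs_model_1, probs_model_2, probs_model_3,
#                            probs_model_4, probs_model_5, probs_model_6])
# print(cohen_kappa_score(y_true, y_pred))
# print(f1_score(y_true, y_pred, average="macro"))
```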
Exchange Bias Effect in Au-Fe3O4 Nanocomposites
We report an exchange bias (EB) effect in the Au-Fe3O4 composite nanoparticle
system, where one or more Fe3O4 nanoparticles are attached to an Au seed
particle forming dimer and cluster morphologies, with the clusters showing much
stronger EB in comparison with the dimers. The EB effect develops due to the
presence of stress in the Au-Fe3O4 interface which leads to the generation of
highly disordered, anisotropic surface spins in the Fe3O4 particle. The EB
effect is lost with the removal of the interfacial stress. Our atomistic
Monte-Carlo studies are in excellent agreement with the experimental results.
These results show a new path towards tuning EB in nanostructures, namely by
controllably creating interfacial stress, and open up the possibility of tuning
the anisotropic properties of biocompatible nanoparticles via a controllable
exchange coupling mechanism. Comment: 28 pages, 6 figures, submitted to Nanotechnology.
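As a generic illustration of the Metropolis Monte-Carlo machinery behind such spin simulations (the paper's atomistic model is far richer), a minimal single-spin-flip sweep for an Ising-like chain with exchange J and field H might look as follows; all parameters, and the 1D geometry, are assumptions made only for brevity.

```python
import numpy as np

def metropolis_sweep(spins, J, H, T, rng):
    """One Metropolis sweep over a periodic Ising-like chain with exchange J and
    field H (energy E = -J * sum_nn s_i s_j - H * sum_i s_i); illustrative only."""
    n = len(spins)
    for i in rng.permutation(n):
        nn = spins[(i - 1) % n] + spins[(i + 1) % n]   # nearest-neighbour sum
        dE = 2.0 * spins[i] * (J * nn + H)             # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1
    return spins

# Sketch of a field loop (magnetization vs. H); an exchange-biased system would
# show a hysteresis loop whose centre is shifted away from H = 0.
# rng = np.random.default_rng(0)
# spins = np.ones(200)
# for H in np.concatenate([np.linspace(1, -1, 50), np.linspace(-1, 1, 50)]):
#     for _ in range(100):
#         metropolis_sweep(spins, J=1.0, H=H, T=0.5, rng=rng)
#     print(H, spins.mean())
```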