Analysis of a Lennard-Jones fcc structure melting to the corresponding frozen liquid: differences between the bulk and the surface
We simulated a Lennard-Jones frozen liquid with a free surface using classical molecular dynamics. The structure factor curves of this sample were calculated at different depths below the free surface, with periodic boundary conditions applied on the other boundaries of the sample. The resulting structure factor curves show a horizontal shift of their first peak that depends on how deep in the sample they are computed. We analyze the resulting curves in the light of spatial correlation functions during melting and once the liquid is frozen. We conclude that near the free surface the sample is less dense than in the bulk, and that the spatial correlation at the frozen liquid surface does not differ very much from that of the bulk. This result is intrinsic to the melting of the Lennard-Jones liquid and does not depend on any other parameter.
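To make the depth-resolved calculation concrete, here is a minimal sketch (not the authors' code) of how a structure factor could be evaluated for the atoms in a slab at a chosen depth below the free surface of an MD snapshot. It assumes the surface normal lies along z; the slab width, q values, and function names are illustrative.

```python
import numpy as np

def slab_structure_factor(positions, depth, slab_width, q_values, n_dirs=64, seed=0):
    """Isotropically averaged S(q) for atoms in a slab below the free surface.

    positions  : (N, 3) array of atomic coordinates, free surface normal to +z
    depth      : distance of the slab centre below the topmost atom
    slab_width : thickness of the slab used to select atoms
    q_values   : 1D array of |q| magnitudes at which to evaluate S(q)
    """
    rng = np.random.default_rng(seed)
    z_top = positions[:, 2].max()                       # locate the free surface
    mask = np.abs(positions[:, 2] - (z_top - depth)) < 0.5 * slab_width
    r = positions[mask]                                 # atoms inside the chosen slab
    n = len(r)

    s_q = np.empty(len(q_values))
    for i, q in enumerate(q_values):
        # S(q) = <|sum_j exp(i q.r_j)|^2> / N, averaged over random q directions
        dirs = rng.normal(size=(n_dirs, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        phases = np.exp(1j * q * (dirs @ r.T))          # (n_dirs, n) phase factors
        s_q[i] = np.mean(np.abs(phases.sum(axis=1)) ** 2) / n
    return s_q
```

Evaluating this for several depths and comparing the position of the first peak gives the kind of depth-dependent shift the abstract describes.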
Obtaining sparse distributions in 2D inverse problems
The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention for reconstructing system properties in sparse inverse problems, i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems: relaxation–relaxation (T1–T2) and diffusion–relaxation (D–T2) correlation experiments in NMR, which have found widespread application in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal-to-noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter- and intra-particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed in a bulk liquid phase; neither NNLS nor Tikhonov regularization is able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise. A.R. acknowledges the Gates Cambridge Trust for financial support. A.J.S. and L.F.G. would like to acknowledge support from EPSRC (EP/N009304/1).
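As a rough illustration of this kind of L1-regularized inversion (a sketch only, not the authors' algorithm), the snippet below builds a small 1D T2 problem with kernel entries exp(-t_i/T2_j), generates synthetic noisy data, and solves min ||Kf - s||^2 + lambda*sum(f) subject to f >= 0 with a projected proximal-gradient loop. The grids, noise level, regularization weight, and iteration count are all illustrative assumptions.

```python
import numpy as np

# Illustrative 1D T2 inversion: s_i = sum_j K_ij f_j + noise, with K_ij = exp(-t_i / T2_j)
t = np.linspace(1e-3, 1.0, 200)                 # echo times (s), assumed grid
T2 = np.logspace(-3, 0, 100)                    # candidate T2 values (s), assumed grid
K = np.exp(-t[:, None] / T2[None, :])

# Synthetic "true" sparse distribution: two discrete T2 components
f_true = np.zeros_like(T2)
f_true[30], f_true[60] = 1.0, 0.5
s = K @ f_true + 0.01 * np.random.default_rng(0).normal(size=t.size)

# Projected proximal-gradient (ISTA-style) loop for
#   min ||K f - s||^2 + lam * sum(f)   subject to   f >= 0
lam = 0.05                                          # regularization weight (assumed)
step = 1.0 / (2 * np.linalg.norm(K, 2) ** 2)        # step size from the Lipschitz constant
f = np.zeros_like(T2)
for _ in range(5000):
    grad = 2 * K.T @ (K @ f - s)                    # gradient of the data-fit term
    f = np.maximum(f - step * (grad + lam), 0)      # gradient step + L1 shrinkage + projection

# f now approximates the sparse T2 distribution; replacing K by a Kronecker
# product of two kernels gives the corresponding 2D (e.g. T1-T2 or D-T2) problem.
```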
Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments.
A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation (T1), 1D spin-spin relaxation (T2), and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks was studied. The distributions differed with regard to the variance of the peaks and were designed to cover distributions containing only discrete features, only smooth features, or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed by the different regularization methods with the true distributions; the metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
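For orientation, the three inversions compared above can all be written as penalised least-squares problems; the schematic forms below use standard notation and are only indicative (the exact MTGV functional in the paper, and its weights, may differ in detail).

```latex
\begin{aligned}
\hat{f}_{\mathrm{Tik}} &= \arg\min_{f \ge 0}\; \|Kf - s\|_2^2 + \alpha\,\|f\|_2^2 \\
\hat{f}_{\mathrm{L1}}  &= \arg\min_{f \ge 0}\; \|Kf - s\|_2^2 + \alpha\,\|f\|_1 \\
\hat{f}_{\mathrm{TGV}} &= \arg\min_{f \ge 0}\; \|Kf - s\|_2^2
   + \min_{w}\big( \alpha_1 \|\nabla f - w\|_1 + \alpha_0 \|\mathcal{E}(w)\|_1 \big)
\end{aligned}
```

Here K is the experiment kernel, s the measured signal, and E(w) the symmetrised derivative of the auxiliary field w. The quadratic penalty favours smooth distributions, the L1 penalty favours sparse (discrete) ones, and a TGV-type penalty is the kind of compromise between the two that the MTGV method aims at.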
Optimising magnetic resonance sampling patterns for parametric characterisation.
Sampling strategies are often central to experimental design. Choosing efficiently which data to acquire can improve the estimation of parameters and reduce the acquisition time. This work is focused on designing optimal sampling patterns for Nuclear Magnetic Resonance (NMR) applications, illustrated with respect to the best estimate of the parameters characterising a lognormal distribution. Lognormal distributions are commonly used as fitting models for distributions of spin-lattice relaxation time constants, spin-spin relaxation time constants and diffusion coefficients. A method for optimising the choice of points to be sampled is presented which is based on Cramér-Rao Lower Bound (CRLB) theory. The method's capabilities are demonstrated experimentally by applying it to the problem of estimating the emulsion droplet size distribution from a pulsed field gradient (PFG) NMR diffusion experiment. A difference of <5% is observed between the predictions of CRLB theory and the PFG NMR experimental results. It is shown that CRLB theory is stable down to signal-to-noise ratios of ∼10. A sensitivity analysis for the CRLB theory is also performed. The method of optimising sampling patterns is easily adapted to distributions other than lognormal and to other aspects of experimental design; case studies of optimising the sampling scheme for a fixed acquisition time and determining the potential reduction in acquisition time for a fixed parameter estimation accuracy are presented. The experimental acquisition time is typically reduced by a factor of 3 using the proposed method compared to the constant gradient increment approach that would usually be used.
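The CRLB-based approach can be sketched in a few lines: for a chosen parametric signal model and a candidate set of sampling points, build the Jacobian of the model with respect to the parameters, form the Fisher information matrix (assuming additive white Gaussian noise), and read the minimum achievable percentage errors off its inverse. The snippet below is only an illustration under assumed numbers; the lognormal-diffusion model, grids, and b values are placeholders, not the parametrisation used in the paper.

```python
import numpy as np

def signal(b, mu, sigma, n_grid=200):
    """PFG attenuation for a lognormal distribution of diffusion coefficients.
    ln D ~ N(mu, sigma^2); S(b) = E[exp(-b D)], evaluated on a log-spaced grid."""
    logD = np.linspace(mu - 5 * sigma, mu + 5 * sigma, n_grid)
    w = np.exp(-0.5 * ((logD - mu) / sigma) ** 2)
    w /= w.sum()                                      # discrete weights of the lognormal pdf
    return np.exp(-np.outer(b, np.exp(logD))) @ w     # S(b_i)

def crlb_percentage_errors(b, theta, noise_std, h=1e-5):
    """CRLB (as % of each parameter) for theta = (mu, sigma), given sampling
    points b and additive Gaussian noise of standard deviation noise_std."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((b.size, theta.size))
    for k in range(theta.size):                       # numerical Jacobian dS/dtheta_k
        tp, tm = theta.copy(), theta.copy()
        tp[k] += h
        tm[k] -= h
        J[:, k] = (signal(b, *tp) - signal(b, *tm)) / (2 * h)
    fisher = J.T @ J / noise_std ** 2                 # Fisher information, white Gaussian noise
    crlb_var = np.diag(np.linalg.inv(fisher))         # lower bound on each parameter's variance
    return 100 * np.sqrt(crlb_var) / np.abs(theta)

# Compare two candidate sampling patterns of the same size (values are illustrative):
theta0 = (np.log(1e-9), 0.3)                          # mean ln D and width of the lognormal
b_linear = np.linspace(1e7, 1e9, 16)                  # constant-increment b values (s/m^2)
b_log = np.logspace(7, 9, 16)                         # log-spaced alternative
for b in (b_linear, b_log):
    print(crlb_percentage_errors(b, theta0, noise_std=0.01))
```

Ranking candidate patterns by their worst-case percentage error is the same kind of comparison the abstract describes, here reduced to a toy model.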
“Come Hell and High Water”: The Role of Archivists, Historical Myths, and Activism in Communities Facing Repeated Extreme Flooding Events
While the names Harvey, Sandy, and Katrina ring loudly in the ears of many today, can we still learn valuable lessons in the archives from Diane, Camille, and Agnes? Climate change increasingly contributes not only to more frequent and more violent tropical cyclogenesis, but also to repeated extreme flooding events caused by unnamed weather systems, supercells, dam failures, and surges from rising oceans. These events have opened questions of survival for communities across the United States, and recent examples show that some communities indeed face pressure to abandon their long-standing ground and forgo rebuilding.
In a 2013 article titled “Come Hell and High Water,” activist and author Bill McKibben addressed the dual threat of flood and rising temperatures, and posed the questions: “What's an appropriate response? What even begins to match the magnitude of the trouble we face? What doesn't seem like spitting in the wind?”
How can archives respond? How can we help educate the public regarding myths around climate, weather, and historical efforts to rebuild after similar events in the past? How can archivists work with activists in the community to educate stakeholders, politicians, and taxpayers regarding the risks and rewards of rebuilding and of increased infrastructure investments, or to advocate for revised flood zoning, revamped insurance programs, and literal rainy day funds? Should archives help shape these community discussions? Where do digital archives fit into the picture? In this session, a group of panelists will provide thematic discussions addressing these questions, followed by a town-hall-style, participatory discussion. The points of view expressed, ideas for involvement, solutions and advocacy opportunities suggested, as well as stories from the trenches, will be recorded using Mentimeter and shared following the session.
Accelerating the estimation of 3D spatially resolved T2 distributions.
Obtaining quantitative, 3D spatially-resolved T2 distributions (T2 maps) from magnetic resonance data is of importance in both medical and porous media applications. Due to the long acquisition time, there is considerable interest in accelerating the experiments by applying undersampling schemes during the acquisition and developing reconstruction techniques for obtaining the 3D T2 maps from the undersampled data. A multi-echo spin echo pulse sequence is used in this work to acquire the undersampled data according to two different sampling patterns: a conventional coherent sampling pattern, where the same set of lines in k-space is sampled for all equally-spaced echoes in the echo train, and a proposed incoherent sampling pattern, where an independent set of k-space lines is sampled for each echo. The conventional reconstruction technique of total variation regularization is compared to the more recent techniques of nuclear norm regularization and Nuclear Total Generalized Variation (NTGV) regularization. It is shown that the best reconstructions are obtained when data acquired using the incoherent sampling scheme are processed using NTGV regularization. Using an incoherent sampling pattern and NTGV regularization as the reconstruction technique, quantitative results are obtained at sampling percentages as low as 3.1% of k-space, corresponding to a 32-fold decrease in the acquisition time compared to a fully sampled dataset.
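The difference between the two sampling patterns can be sketched as follows (a simplified illustration; the actual pulse sequence and k-space ordering in the paper are more involved): a coherent pattern reuses one randomly chosen set of phase-encode lines for every echo in the train, while an incoherent pattern draws an independent set of lines for each echo.

```python
import numpy as np

def sampling_masks(n_lines, n_echoes, fraction, incoherent, seed=0):
    """Boolean mask of shape (n_echoes, n_lines): True where a k-space line is acquired.

    Coherent   : the same randomly chosen lines are sampled for every echo in the train.
    Incoherent : an independent random set of lines is drawn for each echo.
    """
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(fraction * n_lines)))
    mask = np.zeros((n_echoes, n_lines), dtype=bool)
    if incoherent:
        for e in range(n_echoes):
            mask[e, rng.choice(n_lines, n_keep, replace=False)] = True
    else:
        lines = rng.choice(n_lines, n_keep, replace=False)
        mask[:, lines] = True
    return mask

# e.g. keeping ~3% of 256 phase-encode lines for a 32-echo train (illustrative sizes)
coherent_mask = sampling_masks(256, 32, 0.031, incoherent=False)
incoherent_mask = sampling_masks(256, 32, 0.031, incoherent=True)
```

The incoherent mask spreads the undersampling artefacts differently across echoes, which is what reconstruction methods such as NTGV regularization are able to exploit.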
Optimising sampling patterns for bi-exponentially decaying signals.
A recently reported method, based on Cramér-Rao Lower Bound theory, for optimising sampling patterns for a wide range of nuclear magnetic resonance (NMR) experiments is applied to the problem of optimising sampling patterns for bi-exponentially decaying signals. Sampling patterns are optimised by minimising the percentage error in estimating the most difficult to estimate parameter of the bi-exponential model, termed the objective function. The predictions of the method are demonstrated in application to pulsed field gradient NMR data recorded for the two-component diffusion of a binary mixture of methane/ethane in a zeolite. It is shown that the proposed method identifies an optimal sampling pattern, with the predicted objective function being within 10% of that calculated from the experimental dataset. The method is used to advise on the number of sampled points and the noise level needed to resolve two-component systems characterised by a range of ratios of populations and diffusion coefficients. It is subsequently illustrated how the method can be used to reduce the experimental acquisition time while still being able to resolve a given two-component system.
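For reference, the bi-exponential pulsed field gradient model underlying this optimisation has the standard form below (the exact parametrisation in the paper may differ); the objective function mentioned in the abstract is then the CRLB-derived percentage error of whichever of the parameters (p, D1, D2) is hardest to estimate for a given set of sampled b values.

```latex
\frac{S(b)}{S(0)} \;=\; p\,e^{-b D_1} + (1 - p)\,e^{-b D_2}, \qquad 0 < p < 1,
\qquad
\Phi(\{b_i\}) \;=\; \max_{\theta \in \{p,\,D_1,\,D_2\}}
\frac{100\,\sqrt{\big[F^{-1}(\{b_i\})\big]_{\theta\theta}}}{|\theta|},
```

where F is the Fisher information matrix evaluated at the sampled b values, so minimising the objective over candidate sampling patterns minimises the worst-case relative uncertainty of the fit.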