The use of XML and CML in Computational Chemistry and Physics Programs
Proceedings of the 2004 e-Science All Hands Meeting, 31st August - 3rd September, Nottingham, UK
This work addresses problems associated with data exchange and data representation in the computational
chemistry and physics communities. Recent computational developments, such as Condor and the Grid,
have paved the way for new kinds of simulations that demand more rigorous data handling. To this end,
the paper discusses the use of XML and the Chemical Markup Language (CML) in theoretical chemistry
and physics. Extensions to the core CML language, known as CMLComp, are also discussed. However,
the majority of atomic scale simulation software is written in Fortran. Fortran's lack of XML support
represents a potential barrier to the adoption of CML in these fields. This has prompted the authors to
develop XML and CML processing tools for Fortran, including native SAX and DOM implementations, as
well as libraries for generating well formed XML and CML. These libraries have been used to extend
existing simulation packages to work with the CML and CMLComp languages. Finally, we give a
practical example that highlights how these XML-aware applications can be effectively used as workflow
components in complex chemical and physical simulations.
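The Fortran libraries themselves are not reproduced here, but the kind of well-formed CML output they emit can be illustrated with a short sketch. The element and attribute names (`molecule`, `atomArray`, `atom`, `elementType`, `x3`/`y3`/`z3`) follow core CML conventions; the water geometry and the helper function are invented for illustration, and the CML namespace declaration is omitted for brevity.

```python
# Sketch of emitting a well-formed CML fragment, analogous to what an
# XML-writing library bolted onto a simulation code might produce.
# The molecule below is illustrative, not taken from the paper.
import xml.etree.ElementTree as ET

def molecule_to_cml(title, atoms):
    """Build a CML <molecule> element from (element, x, y, z) tuples."""
    mol = ET.Element("molecule", title=title)
    atom_array = ET.SubElement(mol, "atomArray")
    for i, (elem, x, y, z) in enumerate(atoms, start=1):
        ET.SubElement(atom_array, "atom", id=f"a{i}", elementType=elem,
                      x3=f"{x:.4f}", y3=f"{y:.4f}", z3=f"{z:.4f}")
    return mol

water = [("O", 0.0, 0.0, 0.0),
         ("H", 0.7572, 0.5865, 0.0),
         ("H", -0.7572, 0.5865, 0.0)]
xml_text = ET.tostring(molecule_to_cml("water", water), encoding="unicode")
print(xml_text)
```

A downstream workflow component can then parse this fragment back with any standard XML parser, which is the interoperability point the paper makes.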
The association of microbial activity with Fe, S and trace element distribution in sediment cores within a natural wetland polluted by acid mine drainage
© 2019 Elsevier Ltd. Natural recovery and remediation of acid mine drainage (AMD) reduces the generation of acidity and the transport of trace elements in the runoff. A natural wetland that receives and remediates AMD from an abandoned copper mine at Parys Mountain (Anglesey, UK) was investigated to better understand the remediation mechanisms. Water column concentrations of dissolved Fe and S species, trace metal(loid)s and acidity decreased markedly as the mine drainage stream passed through the wetland. The metal(loid)s were removed from the water column by deposition into the sediment. Fe typically accumulated to higher concentrations in the surface layers of sediment, while S and trace metal(loid)s were deposited at higher concentrations within deeper (20–50 cm) sediments. High-resolution X-ray fluorescence scans of sediment cores taken at three sites along the wetland indicate co-immobilization of Zn, Cu and S with sediment depth, as each element showed a similar core profile. To examine the role of bacteria in sediment elemental deposition, marker genes for Fe and S metabolism were quantified. Increased expression of marker genes for S and Fe oxidation was detected in the middle of the wetland, the same location where a significant decrease in SO₄²⁻ and Fe²⁺ was observed and where generation of particulate Fe occurs. This suggests that the distribution and speciation of Fe and S, which mediate the immobilization and deposition of trace elements within the natural wetland sediments, are driven in part by bacterial activity.
Simulations of idealised 3D atmospheric flows on terrestrial planets using LFRic-Atmosphere
We demonstrate that LFRic-Atmosphere, a model built using the Met Office's
GungHo dynamical core, is able to reproduce idealised large-scale atmospheric
circulation patterns specified by several widely-used benchmark recipes. This
is motivated by the rapid rate of exoplanet discovery and the ever-growing need
for numerical modelling and characterisation of their atmospheres. Here we
present LFRic-Atmosphere's results for the idealised tests imitating
circulation regimes commonly used in the exoplanet modelling community. The
benchmarks include three analytic forcing cases: the standard Held-Suarez test,
the Menou-Rauscher Earth-like test, and the Merlis-Schneider Tidally Locked
Earth test. Qualitatively, LFRic-Atmosphere agrees well with other numerical
models and shows excellent conservation properties in terms of total mass,
angular momentum and kinetic energy. We then use LFRic-Atmosphere with a more
realistic representation of physical processes (radiation, subgrid-scale
mixing, convection, clouds) by configuring it for the four TRAPPIST-1 Habitable
Atmosphere Intercomparison (THAI) scenarios. This is the first application of
LFRic-Atmosphere to a possible climate of a confirmed terrestrial exoplanet.
LFRic-Atmosphere reproduces the THAI scenarios within the spread of the
existing models across a range of key climatic variables. Our work shows that
LFRic-Atmosphere performs well in the seven benchmark tests for terrestrial
atmospheres, justifying its use in future exoplanet climate studies.
Comment: 34 pages, 9(12) figures; submitted to Geoscientific Model
Development; comments are welcome (see Discussion tab on the journal's
website: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-647)
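Of the benchmarks listed, the Held-Suarez test is fully specified by an analytic radiative-equilibrium temperature toward which the model atmosphere is relaxed. A minimal sketch of that standard formula follows; the parameter values are the conventional ones from Held & Suarez (1994), not values taken from this paper, and the function name is illustrative.

```python
import numpy as np

# Held-Suarez radiative-equilibrium temperature (Held & Suarez, 1994):
# T_eq = max(200 K, [315 - dT_y sin^2(lat) - dtheta_z ln(p/p0) cos^2(lat)] (p/p0)^kappa)
def t_eq(lat, p, p0=1.0e5, dT_y=60.0, dtheta_z=10.0, kappa=2.0 / 7.0):
    """Equilibrium temperature [K] at latitude lat [rad] and pressure p [Pa]."""
    sigma = p / p0
    t = (315.0
         - dT_y * np.sin(lat) ** 2
         - dtheta_z * np.log(sigma) * np.cos(lat) ** 2) * sigma ** kappa
    # The 200 K floor represents the isothermal stratosphere.
    return np.maximum(200.0, t)

# At the surface (p = p0) the profile reduces to 315 - 60 sin^2(lat),
# i.e. a 60 K equator-to-pole temperature contrast.
print(t_eq(0.0, 1.0e5), t_eq(np.pi / 2, 1.0e5))
```

A dynamical core passes the test when, forced by relaxation to this field plus the prescribed boundary-layer friction, it reproduces the expected statistically steady jets and overturning circulation.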
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
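Okapi BM25, one of the baseline methods named above, ranks a document by summing, over the query terms, a smoothed inverse document frequency weighted by a saturating term-frequency factor with length normalization. A minimal sketch follows; the parameters k1 and b are set to commonly used defaults, and the toy corpus is invented for illustration (the actual RELISH baselines operated on PubMed records).

```python
import math
from collections import Counter

# Minimal Okapi BM25 sketch. k1 controls term-frequency saturation,
# b controls document-length normalization; both at common defaults.
def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against query terms over a tokenized corpus."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)  # smoothed, non-negative IDF
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [["gene", "expression", "cancer"],
          ["protein", "folding"],
          ["cancer", "therapy", "gene"]]
scores = [bm25_score(["gene", "cancer"], d, corpus) for d in corpus]
# Documents containing the query terms outrank the unrelated one.
```

TF-IDF differs only in dropping the saturation and length-normalization machinery, which is one reason the two baselines behave similarly overall yet return partly distinct result sets.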
An experimental and theoretical investigation of SnO₂ as an HF sensor
EThOS - Electronic Theses Online Service, United Kingdom
JUMBO - An XML infrastructure for eScience
Proceedings of the 2004 e-Science All Hands Meeting, 31st August - 3rd September, Nottingham, UK
JUMBO is an OpenSource toolkit addressing the semantic and ontological impedances that are
major barriers to interoperability in computational chemistry and physics. Users build
XMLSchemas from generic XML components to support particular computational tasks, such as
high-throughput chemistry. JUMBO components provide a complete semantic description of
information to or from a code such as MOPAC or GAMESS. Codes are edited to use JUMBO
libraries as adapters to program-independent XML objects, or output is transduced using a generic
parser, JUMBOMarker. The JUMBO system is designed for flexible collaborative contributions.
Hasan, Environment from the molecular level: an eScience testbed project, AHM 2003
The testbed project aims to push the practical possibilities of atomistic simulations forward to the point where we can perform realistic calculations on important environmental processes. The project has three components: the science driving the project, the development of the simulation codes, and the setting up of a grid infrastructure for this work. This paper describes these areas of work and gives a status report on each.
Biogeochemistry and community ecology in a spring-fed urban river following a major earthquake
In February 2011 a Mw 6.3 earthquake in Christchurch, New Zealand, inundated urban waterways with sediment from liquefaction and triggered sewage spills. The impacts of, and recovery from, this natural disaster on stream biogeochemistry and biology were assessed over six months along a longitudinal impact gradient in an urban river. The impact of liquefaction was masked by earthquake-triggered sewage spills (∼20,000 m³ day⁻¹ entering the river for one month). Within 10 days of the earthquake, dissolved oxygen in the lowest reaches was <1 mg l⁻¹, in-stream denitrification accelerated (attenuating 40–80% of sewage nitrogen), microbial biofilm communities changed, and several benthic invertebrate taxa disappeared. Following sewage system repairs, the river recovered in a reverse cascade, and within six months there were no differences in water chemistry, nutrient cycling or benthic communities between severely and minimally impacted reaches. This study highlights the importance of assessing environmental impact following urban natural disasters.