
    Three-dimensional Deformable Pore Networks

    Porous structures in materials play a role in many areas of research and development; two examples are the extraction of water from aquifers and of oil through hydraulic fracturing. Current understanding of the small-scale fluid-fluid interactions within these porous materials is largely limited to two-dimensional data on the interface between the two fluids. This experiment aimed to create three-dimensional, transparent, deformable micro-models that are expected to allow us to obtain three-dimensional data sets of the capillary pressure–saturation–interfacial area per volume relationship. The micro-models were synthesized using a grain deposition technique. Grains were formed by polymerization of a 5% (v/v) solution of Irgacure 1173 initiator in poly(ethylene glycol) diacrylate when the solution was exposed to patterns of ultraviolet light (in the range of 435 nm to 485 nm). These grains are layered in a pre-made plain-channel micro-model to create a complex but transparent porous structure. Initial imaging using laser confocal microscopy shows that these micro-models can be used to study three-dimensional interactions between fluids in porous structures. Through the creation of these three-dimensional micro-models we now have a better way to experimentally model porous materials found in nature, which offers many topological possibilities for applications in rock, biology, oil, water, and even food science research.
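
    For context, the capillary pressure–saturation–interfacial area relationship named above is conventionally treated in the two-phase flow literature as a single constitutive surface linking three quantities (this framing is drawn from that literature, not a formula stated in the abstract):

        P_c = f(S_w, a_{nw})

    where P_c is the capillary pressure, S_w the wetting-phase saturation, and a_{nw} the fluid-fluid interfacial area per unit volume. A two-dimensional image of the interface cannot recover a_{nw}, which is why fully three-dimensional data sets from these micro-models are needed to constrain the surface.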

    The Mineralogical and Chemical Case for Habitability at Yellowknife Bay, Gale Crater, Mars

    Sediments of the Yellowknife Bay formation (Gale crater) include the Sheepbed member, a mudstone cut by light-toned veins. Two drill samples, John Klein and Cumberland, were collected and analyzed by the CheMin XRD/XRF instrument and the Sample Analysis at Mars (SAM) evolved gas and isotopic analysis suite of instruments. Drill cuttings were also analyzed by the Alpha Particle X-ray Spectrometer (APXS) for bulk composition. The CheMin XRD analysis shows that the mudstone contains basaltic minerals (Fe-forsterite, augite, pigeonite, plagioclase), as well as Fe-oxide/hydroxides, Fe-sulfides, amorphous materials, and trioctahedral phyllosilicates. SAM evolved gas analysis of higher-temperature OH matches the CheMin XRD estimate of ~20% clay minerals in the mudstone. The light-toned veins contain Ca-sulfates; anhydrite and bassanite are detected by XRD, but gypsum is also indicated by Mastcam spectral mapping. These sulfates appear to be almost entirely restricted to late-diagenetic veins. The sulfate content of the mudstone matrix itself is lower than that of other sediments analyzed on Mars. The presence of phyllosilicates indicates that the activity of water was high during their formation and/or transport and deposition (should they have been detrital). The lack of chlorite places limits on the maximum temperature of alteration (likely <100 °C). The presence of Ca-sulfates rather than Mg- or Fe-sulfates suggests that the pore water pH was near-neutral and of relatively low ionic strength (although X-ray amorphous Mg- and Fe-sulfates could be present and undetectable by CheMin). The presence of Fe and S in both reduced and oxidized states represents chemical disequilibria that could have been utilized by chemolithoautotrophic biota, if present. When compared to the nearby Rocknest sand shadow mineralogy or the normative mineralogy of Martian soil, both John Klein and Cumberland exhibit a near-absence of olivine and a surplus of magnetite (7-9% of the crystalline component). The magnetite is interpreted as an authigenic product formed when olivine was altered to phyllosilicate. Saponitization of olivine (a process analogous to serpentinization) could have produced H2 in situ. Indeed, early diagenetic hollow nodules ("minibowls") present in the Cumberland mudstone are interpreted by some as forming when gas bubbles accumulated in the unconsolidated mudstone. Lastly, all of these early diagenetic features appear to have been preserved with minimal alteration since their formation, as indicated by the ease of drilling (weak lithification, lack of cementing phases), the presence of 20-30% amorphous material, and the late-stage fracturing with emplacement of calcium sulfate in veins and in minibowl infills where they were intersected by veins. A rough estimate of the minimum duration of the lacustrine environment is provided by the minimum thickness of the Sheepbed member: given 1.5 meters, applying a mean sediment accumulation rate for lacustrine strata of 1 m/1000 yrs yields a duration of 1,500 years. If the aqueous environments represented by overlying strata, such as Gillespie Lake and Shaler, are considered, then this duration increases. The Sheepbed mudstone meets all the requirements of a habitable environment: aqueous deposition at clement conditions of P, T, pH, Eh, and ionic strength, plus the availability of sources of chemical energy.
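
    The 1,500-year figure quoted above follows directly from the stated thickness and accumulation rate; as a quick check of the arithmetic:

        t = \frac{h}{r} = \frac{1.5\ \text{m}}{1\ \text{m} / 1000\ \text{yr}} = 1500\ \text{yr}

    so the minimum duration is simply the minimum Sheepbed thickness divided by the assumed mean lacustrine sediment accumulation rate; including the overlying aqueous strata (Gillespie Lake, Shaler) lengthens it.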

    Activation of Methanogenesis in Arid Biological Soil Crusts Despite the Presence of Oxygen

    Methanogenesis is traditionally thought to occur only in highly reduced, anoxic environments. Wetland and rice field soils are well-known sources of atmospheric methane, while aerated soils are considered sinks. Although methanogens have been detected in low numbers in some aerated soils, and even in desert soils, it remains unclear whether they are active under natural oxic conditions, such as in biological soil crusts (BSCs) of arid regions. To answer this question we carried out a factorial experiment using microcosms under simulated natural conditions. The BSC on top of an arid soil was incubated under moist conditions in all possible combinations of flooding and drainage, light and dark, and air and nitrogen headspace. In the light, oxygen was produced by photosynthesis. Methane production was detected in all microcosms, but rates were much lower when oxygen was present. In addition, the δ13C of the methane differed between the oxic/oxygenic and anoxic microcosms. While under anoxic conditions methane was mainly produced from acetate, under oxic/oxygenic conditions it was almost entirely produced from H2/CO2. Only two genera of methanogens were identified in the BSC, Methanosarcina and Methanocella; their abundance and their activity in transcribing the mcrA gene (coding for methyl-CoM reductase) were higher under anoxic than under oxic/oxygenic conditions. Both methanogens also actively transcribed the oxygen-detoxifying gene catalase. Since methanotrophs were not detectable in the BSC, all the methane produced was released into the atmosphere. Our findings point to a previously unknown participation of desert soils in the global methane cycle.
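
    To make the factorial design above concrete (the factor names are taken from the abstract; the code itself is a minimal illustrative sketch, not the authors' analysis pipeline), the microcosm treatments are the full cross of three two-level factors, giving 2 x 2 x 2 = 8 combinations:

        from itertools import product

        # Two-level factors described in the abstract:
        # water regime, illumination, and headspace gas.
        water = ["flooded", "drained"]
        light = ["light", "dark"]
        headspace = ["air", "N2"]

        # Full factorial: 2 x 2 x 2 = 8 microcosm treatments.
        treatments = list(product(water, light, headspace))
        for i, (w, l, g) in enumerate(treatments, start=1):
            print(f"Microcosm {i}: {w}, {l}, {g} headspace")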

    Dissociation of tau pathology and neuronal hypometabolism within the ATN framework of Alzheimer’s disease

    Alzheimer’s disease (AD) is defined by amyloid (A) and tau (T) pathologies, with T correlating better with neurodegeneration (N) than A does. However, T and N have complex regional relationships, in part related to non-AD factors that influence N. With machine learning, we assessed heterogeneity in 18F-flortaucipir vs. 18F-fluorodeoxyglucose positron emission tomography as markers of T and neuronal hypometabolism (NM) in 289 symptomatic patients from the Alzheimer’s Disease Neuroimaging Initiative. We identified six T/NM clusters with differing limbic and cortical patterns. The canonical group was defined as the T/NM pattern with the lowest regression residuals. Groups resilient to T had less hypometabolism than expected relative to T and displayed better cognition than the canonical group. Groups susceptible to T had more hypometabolism than expected given T and exhibited worse cognitive decline, with imaging and clinical measures concordant with non-AD copathologies. Together, T/NM mismatch reveals distinct imaging signatures with pathobiological and prognostic implications for AD.
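
    The abstract does not specify the modelling pipeline; the following is a minimal sketch of the kind of tau-versus-metabolism mismatch analysis it describes, with the array names, the simulated inputs, the region-wise linear model, and the choice of k-means clustering all being illustrative assumptions. The idea is to regress regional FDG uptake on regional flortaucipir uptake and cluster subjects on the residual patterns, so that systematically positive residuals (less hypometabolism than tau predicts) read as "resilient" and negative residuals as "susceptible":

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        # Hypothetical inputs: rows = subjects, columns = brain regions.
        # tau_suvr : 18F-flortaucipir regional uptake (T)
        # fdg_suvr : 18F-FDG regional uptake (low values = hypometabolism, NM)
        rng = np.random.default_rng(0)
        tau_suvr = rng.normal(1.2, 0.2, size=(289, 80))
        fdg_suvr = 1.8 - 0.5 * tau_suvr + rng.normal(0.0, 0.1, size=(289, 80))

        # Region-wise regression of metabolism on tau; residuals capture mismatch.
        residuals = np.empty_like(fdg_suvr)
        for j in range(tau_suvr.shape[1]):
            model = LinearRegression().fit(tau_suvr[:, [j]], fdg_suvr[:, j])
            residuals[:, j] = fdg_suvr[:, j] - model.predict(tau_suvr[:, [j]])

        # Cluster subjects on their residual patterns (six clusters, as in the abstract).
        labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(residuals)

        # Positive mean residual: more metabolism than tau predicts ("resilient");
        # negative mean residual: less than predicted ("susceptible").
        for k in range(6):
            print(k, residuals[labels == k].mean())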

    Statistical and visual differentiation of subcellular imaging

    Background: Automated microscopy technologies have led to a rapid growth in imaging data on a scale comparable to that of the genomic revolution. High-throughput screens are now being performed to determine the localisation of all proteins in a proteome. Closer to the bench, large image sets of proteins in treated and untreated cells are being captured on a daily basis to determine function and interactions. Hence there is a need for new methodologies and protocols to test for difference in subcellular imaging, both to remove bias and to enable throughput. Here we introduce a novel method of statistical testing, and supporting software, to give a rigorous test for difference in imaging. We also outline the key questions and steps in establishing an analysis pipeline. Results: The methodology is tested on a high-throughput set of images of 10 subcellular localisations, and it is shown that the localisations may be distinguished to a statistically significant degree with as few as 12 images of each. Further, subtle changes in a protein's distribution between nocodazole-treated and control experiments are shown to be detectable. The effect of outlier images is also examined, and it is shown that while the significance of the test may be reduced by outliers, this may be compensated for by utilising more images. Finally, the test is compared to previous work and shown to be more sensitive in detecting difference. The methodology has been implemented within the iCluster system for visualising and clustering bio-image sets. Conclusion: The aim here is to establish a methodology and protocol for testing for difference in subcellular imaging, and to provide tools to do so. While iCluster is applicable to moderate (<1000) size image sets, the statistical test is simple to implement and will readily be adapted to high-throughput pipelines to provide more sensitive discrimination of difference.
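
    The statistical test itself is not detailed in this abstract; as a minimal sketch of the general approach (per-image feature vectors plus a permutation test on the between-set separation; the feature dimensionality and the distance statistic are assumptions for illustration, not the paper's exact method):

        import numpy as np

        def permutation_test(features_a, features_b, n_perm=10000, seed=0):
            """Permutation test on the distance between the mean feature
            vectors of two image sets; a small p-value indicates the sets differ."""
            rng = np.random.default_rng(seed)
            pooled = np.vstack([features_a, features_b])
            n_a = len(features_a)
            observed = np.linalg.norm(features_a.mean(0) - features_b.mean(0))
            count = 0
            for _ in range(n_perm):
                idx = rng.permutation(len(pooled))
                stat = np.linalg.norm(pooled[idx[:n_a]].mean(0) - pooled[idx[n_a:]].mean(0))
                count += stat >= observed
            return (count + 1) / (n_perm + 1)

        # Example: two sets of 12 images (the minimum cited above), each image
        # summarised by a hypothetical 64-dimensional feature vector.
        rng = np.random.default_rng(1)
        set_a = rng.normal(0.0, 1.0, size=(12, 64))
        set_b = rng.normal(0.3, 1.0, size=(12, 64))
        print(permutation_test(set_a, set_b))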

    The routinisation of management controls in software.

    Author's post-print version; final version published by Springer, available online at http://link.springer.com/. Our paper aims to explore management control as a complex and intertwining process over time, rather than through the (mainstream) fixation on rational, optimising tools for ensuring business success. We set out to contribute towards our understanding of why and how particular management controls evolve over time as they do. We discuss how the management control routines of one organisation emerged and were reproduced (through software), and moved towards becoming accepted and generally unquestioned across much of the industry. The creativity and championing of one particular person was found to be especially important in this unfolding change process. Our case study illuminates how management control (software) routines can be an important carrier of organisational knowledge, both as an engine for continuity and potentially as a catalyst for change. We capture this process by exploring the ‘life-story’ of a piece of software adopted in the corrugated container industry.

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. Comment: Major update of previous version. This is the reference document for the LBNE science program and current status; Chapters 1, 3, and 9 provide this comprehensive overview. 288 pages, 116 figures.
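
    For context on why the ~1,300 km baseline matters (this is the standard two-flavour vacuum oscillation approximation from the neutrino physics literature, not a formula taken from this document), the oscillation probability depends on the ratio of baseline to neutrino energy:

        P(\nu_\alpha \to \nu_\beta) \simeq \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)

    so the baseline L, together with the beam energy E, sets where the oscillation maximum falls; longer baselines also enhance matter effects, which is what gives sensitivity to the neutrino mass ordering.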

    Emerging Technologies for the Detection of Rabies Virus: Challenges and Hopes in the 21st Century

    The diagnosis of rabies is routinely based on clinical and epidemiological information, especially when exposures are reported in rabies-endemic countries. Conventional diagnostic assays can appear negative even when undertaken late in the disease and despite the clinical diagnosis, and are therefore, at times, unreliable. These tests are rarely optimal and are entirely dependent on the nature and quality of the sample supplied. Over the past three decades, the application of molecular biology has aided the development of tests that allow more rapid detection of rabies virus and enable viral strain identification from clinical specimens. Currently, there are a number of molecular tests that can be used to complement conventional tests in rabies diagnosis. Indeed, the challenges in the 21st century for the development of rabies diagnostics are not of a technical nature; these tests are available now. The challenges for diagnostic test developers are two-fold: firstly, to achieve internationally accepted validation of a test that will then lead to its acceptance by organisations globally; secondly, to deliver such tests to the areas of the world where they are needed most, mainly developing regions where financial and logistical barriers prevent their implementation. Although developing countries with a poor healthcare infrastructure recognise that molecular-based diagnostic assays will be unaffordable for routine use, the cost/benefit ratio should still be measured. Adoption of rapid and affordable rabies diagnostic tests for use in developing countries highlights the importance of sharing and transferring technology through laboratory twinning between developed and developing countries. Importantly for developing countries, the benefit of molecular methods as tools is the capability for a differential diagnosis of human diseases that present with similar clinical symptoms. Antemortem testing for human rabies is now possible using molecular techniques. These barriers are not insurmountable, and it is our expectation that if such tests are accepted and implemented where they are most needed, they will provide substantial improvements for rabies diagnosis and surveillance. The advent of molecular biology and new technological initiatives that combine advances in biology with other disciplines will support the development of techniques capable of high-throughput testing with a low turnaround time for rabies diagnosis.