
    Analysis of Vesicular Basalts and Lava Emplacement Processes for Application as a Paleobarometer/Paleoaltimeter

    We have developed a method for determining paleoelevations of highland areas on the basis of the vesicularity of lava flows. Vesicular lavas preserve a record of paleopressure at the time and place of emplacement because the difference in internal pressure in bubbles at the base and top of a lava flow depends on atmospheric pressure and lava flow thickness. At the top of the flow, the pressure is simply atmospheric pressure, while at the base, there is an additional contribution of hydrostatic lava overburden. Thus the modal size of the vesicle (bubble) population is larger at the top than at the bottom. This size difference leads directly to paleoatmospheric pressure because the thickness of the flow can easily be measured in the field, and the vesicle sizes can now be accurately measured in the lab. Because our recently developed technique measures paleoatmospheric pressure, it is not subject to uncertainties stemming from the use of climate-sensitive proxies, although like all measurements, it has its own sources of potential error. Because measurement of flow thickness presupposes no inflation or deflation of the flow after the size distribution at the top and bottom is "frozen in," it is essential to identify preserved flows in the field that show clear signs of simple emplacement and solidification. This can be determined by the bulk vesicularity and size distribution as a function of stratigraphic position within the flow. By examining the stratigraphic variability of vesicularity, we can thus reconstruct emplacement processes. It is critical to be able to accurately measure the size distribution in collected samples from the tops and bottoms of flows because our method is based on the modal size of the vesicle population. Previous studies have used laborious and inefficient methods that did not allow for practical analysis of a large number of samples. Our recently developed analytical techniques involving high-resolution x-ray computed tomography (HRXCT) allow us to analyze the large number of samples required for reliable interpretations. Based on our ability to measure vesicle size to within 1.7% (by volume), a factor analysis of the sensitivity of the technique to atmospheric pressure provides an elevation to within about ±400 m. If we assume sea level pressure and lapse rate have not changed significantly in Cenozoic time, then the difference between the paleoelevation "preserved" in the lavas and their present elevation reflects the amount of uplift or subsidence. Lava can be well dated, and therefore a suite of samples of various ages will constrain the timing of epeirogenic activity independent of climate, erosion rates, or any other environmental factors. We have tested our technique on basalts emplaced at known elevations at the base, flanks, and summit of Mauna Loa. The results of the analysis accurately reconstruct actual elevations, demonstrating the applicability of the technique. The tool we have developed can subsequently be applied to problematic areas such as the Colorado and Tibetan Plateaus to determine the history of uplift.
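    The pressure relation described in this abstract can be turned into a short worked example. The sketch below is illustrative only, not the authors' code: it assumes Boyle's law for the gas trapped in vesicles, a nominal basalt density, and a standard-atmosphere lapse-rate model for converting pressure to elevation. All constants and the sample vesicle volumes are assumed values.

```python
# Illustrative sketch: paleo-pressure and paleoelevation from modal vesicle
# sizes at the top and bottom of a lava flow (assumed constants throughout).

RHO_LAVA = 2600.0   # assumed basalt density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
P0 = 101325.0       # present-day sea-level pressure, Pa
T0 = 288.15         # standard sea-level temperature, K
LAPSE = 0.0065      # standard lapse rate, K/m
R_AIR = 287.05      # specific gas constant of dry air, J/(kg*K)


def paleo_pressure(v_top, v_bottom, thickness_m):
    """Atmospheric pressure (Pa) from the modal vesicle volume ratio.

    Boyle's law for the trapped gas: P_top * V_top = P_bottom * V_bottom,
    with P_top = P_atm and P_bottom = P_atm + rho * g * H, so
    P_atm = rho * g * H / (V_top / V_bottom - 1).
    """
    return RHO_LAVA * G * thickness_m / (v_top / v_bottom - 1.0)


def pressure_to_elevation(p_atm):
    """Elevation (m) by inverting the standard-atmosphere pressure profile."""
    return (T0 / LAPSE) * (1.0 - (p_atm / P0) ** (R_AIR * LAPSE / G))


if __name__ == "__main__":
    # Hypothetical sample: 3 m thick flow, modal vesicle volumes in mm^3.
    p = paleo_pressure(v_top=19.0, v_bottom=10.0, thickness_m=3.0)
    print(f"paleo-pressure ~{p / 1000:.1f} kPa, "
          f"paleoelevation ~{pressure_to_elevation(p):.0f} m")
```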

    Semiclassical Gravity in the Far Field Limit of Stars, Black Holes, and Wormholes

    Semiclassical gravity is investigated in a large class of asymptotically flat, static, spherically symmetric spacetimes including those containing static stars, black holes, and wormholes. Specifically, the stress-energy tensors of massless free spin 0 and spin 1/2 fields are computed to leading order in the asymptotic regions of these spacetimes. This is done for spin 0 fields in Schwarzschild spacetime using a WKB approximation. It is done numerically for the spin 1/2 field in Schwarzschild, extreme Reissner-Nordstrom, and various wormhole spacetimes. It is also done by finding analytic solutions to the leading-order mode equations in a large class of asymptotically flat, static, spherically symmetric spacetimes. Agreement is shown between these various computational methods. It is found that for all of the spacetimes considered, the energy density and pressure in the asymptotic region are proportional to 1/r^5 to leading order. Furthermore, for the spin 1/2 field and the conformally coupled scalar field, the stress-energy tensor depends only on the leading order geometry in the far field limit. This is also true for the minimally coupled scalar field for spacetimes containing either a static star or a black hole, but not for spacetimes containing a wormhole. Comment: 43 pages, 2 figures. Reference added, minor changes, PRD version.
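    Stated as an equation (schematic coefficients, not values from the paper), the leading-order far-field behavior reported above is:

```latex
% Schematic form of the reported far-field falloff; C_\rho and C_p are
% placeholder coefficients that depend on the field and the spacetime.
\rho(r) \;\sim\; \frac{C_\rho}{r^{5}}, \qquad
p(r) \;\sim\; \frac{C_p}{r^{5}}, \qquad r \to \infty .
```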

    Stress-Energy Tensor for the Massless Spin 1/2 Field in Static Black Hole Spacetimes

    The stress-energy tensor for the massless spin 1/2 field is numerically computed outside and on the event horizons of both charged and uncharged static non-rotating black holes, corresponding to the Schwarzschild, Reissner-Nordström, and extreme Reissner-Nordström solutions of Einstein's equations. The field is assumed to be in a thermal state at the black hole temperature. Comparison is made between the numerical results and previous analytic approximations for the stress-energy tensor in these spacetimes. For the Schwarzschild (charge zero) solution, it is shown that the stress-energy differs even in sign from the analytic approximation. For the Reissner-Nordström and extreme Reissner-Nordström solutions, divergences predicted by the analytic approximations are shown not to exist. Comment: 5 pages, 4 figures, additional discussion.

    Neurogenesis Deep Learning

    Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability to incorporate new information into an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms. Comment: 8 pages, 8 figures, accepted to the 2017 International Joint Conference on Neural Networks (IJCNN 2017).
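    As a rough illustration of the layer-growth idea, the minimal NumPy sketch below (assumed shapes and initialization, not the paper's actual neurogenesis algorithm) appends new output units to a trained dense layer while preserving the existing weights verbatim:

```python
import numpy as np


def grow_layer(W, b, n_new, rng=None, scale=0.01):
    """Return an expanded weight matrix and bias with `n_new` extra units.

    W : (n_in, n_out) trained weights, copied unchanged into the result.
    b : (n_out,) trained biases, also preserved.
    New columns get small random weights so they start near-neutral.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_in, n_out = W.shape
    W_new = np.empty((n_in, n_out + n_new))
    W_new[:, :n_out] = W                                   # keep old representation
    W_new[:, n_out:] = scale * rng.standard_normal((n_in, n_new))  # new neurons
    b_new = np.concatenate([b, np.zeros(n_new)])
    return W_new, b_new


# Example: a 784 -> 100 hidden layer grown to 784 -> 120.
W = 0.01 * np.random.default_rng(0).standard_normal((784, 100))
b = np.zeros(100)
W2, b2 = grow_layer(W, b, n_new=20)
assert np.array_equal(W2[:, :100], W) and W2.shape == (784, 120)
```

    In a full network the following layer's input dimension would need a matching expansion, and subsequent training on new data would adapt the fresh units while the preserved columns continue to encode previously learned representations.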

    T-infinity: The Dependency Inversion Principle for Rapid and Sustainable Multidisciplinary Software Development

    The CFD Vision 2030 Study recommends that "NASA should develop and maintain an integrated simulation and software development infrastructure to enable rapid CFD technology maturation.... [S]oftware standards and interfaces must be emphasized and supported whenever possible, and open source models for noncritical technology components should be adopted." The current paper presents an approach to an open source development architecture, named T-infinity, for accelerated research in CFD, leveraging the Dependency Inversion Principle to realize plugins that communicate through collections of functions without exposing internal data structures. Steady-state flow visualization, mesh adaptation, fluid-structure interaction, and overset domain capabilities are demonstrated through compositions of plugins via standardized abstract interfaces without the need for source code dependencies between disciplines. Plugins interact through abstract interfaces, thereby avoiding N² direct code-to-code data structure coupling, where N is the number of codes. This plugin architecture enhances sustainable development by controlling the interaction between components to limit software complexity growth. The use of T-infinity abstract interfaces enables multidisciplinary application developers to leverage legacy applications alongside newly developed capabilities. While the approach is described herein, a description of interface details is deferred until the interfaces are more thoroughly tested and can be closed to modification.
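    The sketch below illustrates the Dependency Inversion Principle behind such a plugin architecture. It is a hypothetical Python analogue, not T-infinity's actual interfaces, and every class and function name in it is invented for illustration:

```python
# Hypothetical illustration of dependency inversion in a plugin architecture:
# the driver depends only on abstract interfaces, never on another code's
# internal data structures, so codes compose without pairwise coupling.
from abc import ABC, abstractmethod


class MeshInterface(ABC):
    """Mesh access through functions only; internals stay hidden."""

    @abstractmethod
    def node_count(self) -> int: ...

    @abstractmethod
    def node_coordinates(self, node_id: int) -> tuple[float, float, float]: ...


class SolverPlugin(ABC):
    """Any flow solver implementing this interface can be swapped in."""

    @abstractmethod
    def solve(self, mesh: MeshInterface) -> dict[str, list[float]]: ...


class UniformFlowSolver(SolverPlugin):
    """Trivial stand-in solver used only to exercise the interfaces."""

    def solve(self, mesh: MeshInterface) -> dict[str, list[float]]:
        return {"pressure": [1.0] * mesh.node_count()}


class TwoNodeMesh(MeshInterface):
    """Trivial stand-in mesh used only to exercise the interfaces."""

    def node_count(self) -> int:
        return 2

    def node_coordinates(self, node_id: int) -> tuple[float, float, float]:
        return (float(node_id), 0.0, 0.0)


def run_analysis(mesh: MeshInterface, solver: SolverPlugin) -> None:
    # The driver never imports a concrete mesh or solver type.
    field = solver.solve(mesh)
    print(f"solved pressure at {len(field['pressure'])} nodes")


run_analysis(TwoNodeMesh(), UniformFlowSolver())
```

    Because the driver targets only the abstractions, adding a new discipline or mesh library means writing one plugin against the interface rather than adapters between every pair of codes.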

    R-PEP-27, a Potent Renin Inhibitor, Decreases Plasma Angiotensin II and Blood Pressure in Normal Volunteers

    The hemodynamic and humoral effects of the specific human renin inhibitor R-PEP-27 were studied in six normal human subjects on low and high sodium intake diets. An intravenous infusion of R-PEP-27 (0.5 to 16 μg/min/kg body wt) reduced blood pressure in a dose-dependent fashion; blood pressure at the end of the infusion fell from 128 ± 4/83 ± 4 to 119 ± 3/71 ± 3 mm Hg (mean ± SEM) (P < .01) during the low sodium intake diet. R-PEP-27 had no effect on blood pressure during the high sodium intake diet. R-PEP-27 significantly reduced plasma angiotensin II and aldosterone concentrations. The temporal response to R-PEP-27 suggests that it is a short-lived although highly potent competitive inhibitor of renin; this peptide is a valuable and specific physiologic probe of the renin-angiotensin system. Am J Hypertens 1994;7:295-30

    A damage model based on failure threshold weakening

    A variety of studies have modeled the physics of material deformation and damage as examples of generalized phase transitions, involving either critical phenomena or spinodal nucleation. Here we study a model for frictional sliding with long-range interactions and recurrent damage that is parameterized by a process of damage and partial healing during sliding. We introduce a failure threshold weakening parameter into the cellular-automaton slider-block model which allows blocks to fail at a reduced failure threshold for all subsequent failures during an event. We show that a critical point is reached beyond which the probability of a system-wide event scales with this weakening parameter. We provide a mapping to the percolation transition, and show that the values of the scaling exponents approach the values for mean-field percolation (spinodal nucleation) as the lattice size L is increased for fixed R. We also examine the effect of the weakening parameter on the frequency-magnitude scaling relationship and the ergodic behavior of the model.
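    A minimal sketch of such a model (assumed parameters and a simplified mean-field stress redistribution, not the authors' implementation) shows where the failure-threshold weakening parameter enters:

```python
# Simplified cellular-automaton slider-block model with failure-threshold
# weakening: once a block fails during an event, it re-fails within that
# event at the reduced threshold (1 - epsilon) * threshold. Stress released
# by failing blocks is shared equally among all blocks (long-range,
# mean-field interaction) with a dissipation factor alpha < 1.
import numpy as np


def simulate(n_blocks=1000, n_events=200, epsilon=0.1,
             residual=0.25, alpha=0.8, seed=0):
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 1.0, n_blocks)   # initial stress per block
    threshold = np.ones(n_blocks)              # unweakened failure threshold
    event_sizes = []
    for _ in range(n_events):
        # Load uniformly until the most heavily stressed block fails.
        trigger = int(np.argmin(threshold - stress))
        stress += (threshold - stress)[trigger]
        stress[trigger] = threshold[trigger]   # avoid floating-point shortfall
        eff_threshold = threshold.copy()       # per-event thresholds
        size = 0
        while True:
            failing = np.flatnonzero(stress >= eff_threshold)
            if failing.size == 0:
                break
            size += failing.size
            released = (stress[failing] - residual * eff_threshold[failing]).sum()
            stress[failing] = residual * eff_threshold[failing]
            stress += alpha * released / n_blocks
            # Failure-threshold weakening for subsequent failures this event.
            eff_threshold[failing] = (1.0 - epsilon) * threshold[failing]
        event_sizes.append(size)
    return np.array(event_sizes)


sizes = simulate()
print("largest event:", sizes.max(), "mean event size:", round(float(sizes.mean()), 2))
```

    Sweeping epsilon and tracking the fraction of system-wide events (sizes comparable to n_blocks) is the kind of experiment the abstract describes for locating the critical point.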

    The Process of Observing Oral Reading Scores

    Oral reading has a varied history of interpretation (12) and is presently under scrutiny in terms of characteristics rather than quantity (5). Despite the doubt that controversies generate, the identification and tabulation of oral reading errors dominate decisions made in practice using informal reading inventories. In practice, informal reading inventories depend on identification, scoring, and interpretation of oral reading errors. Controversies are usually ignored, perhaps in the hope that the expert judgment of reading specialists overcomes the difficulties. Beldin (1) explores the controversial history of informal inventories. From early studies to the present, doubt surrounds scoring criteria (7, 9, 10). This study examines the process of identification and scoring of oral reading errors by well-qualified reading specialists.