
    The latitudinal temperature gradient and its climate dependence as inferred from foraminiferal δ18O over the past 95 million years

    The latitudinal temperature gradient is a fundamental state parameter of the climate system tied to the dynamics of heat transport and radiative transfer. Thus, it is a primary target for temperature proxy reconstructions and global climate models. However, reconstructing the latitudinal temperature gradient in past climates remains challenging due to the scarcity of appropriate proxy records and large proxy–model disagreements. Here, we develop methods leveraging an extensive compilation of planktonic foraminifera δ18O to reconstruct a continuous record of the latitudinal sea-surface temperature (SST) gradient over the last 95 million years (My). We find that latitudinal SST gradients ranged from 26.5 to 15.3 °C over a mean global SST range of 15.3 to 32.5 °C, with the highest gradients during the coldest intervals of time. From this relationship, we calculate a polar amplification factor (PAF; the ratio of change in >60° S SST to change in global mean SST) of 1.44 ± 0.15. Our results are closer to model predictions than previous proxy-based estimates, primarily because δ18O-based high-latitude SST estimates more closely track benthic temperatures, yielding higher gradients. The consistent covariance of δ18O values in low- and high-latitude planktonic foraminifera and in benthic foraminifera, across numerous climate states, suggests a fundamental constraint on multiple aspects of the climate system, linking deep-sea temperatures, the latitudinal SST gradient, and global mean SSTs across large changes in atmospheric CO2, continental configuration, oceanic gateways, and the extent of continental ice sheets. This implies an important underlying, internally driven predictability of the climate system in vastly different background states.
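
    A back-of-the-envelope way to read the polar amplification factor is as the slope of high-latitude SST against global mean SST across climate states. The sketch below is a minimal illustration of that idea only; the variable names, the ordinary least-squares fit, and the made-up numbers are assumptions of this example, not the reconstruction method used in the paper.

        # Hypothetical illustration: estimate a polar amplification factor (PAF)
        # as the slope of >60 deg S SST against global mean SST across climate states.
        # Inputs and the least-squares fit are assumptions of this sketch.
        import numpy as np

        def polar_amplification_factor(sst_global_mean, sst_polar):
            """Slope of polar SST versus global mean SST (1-D arrays over time)."""
            g = np.asarray(sst_global_mean, dtype=float)
            p = np.asarray(sst_polar, dtype=float)
            slope, _intercept = np.polyfit(g, p, 1)  # ordinary least squares
            return slope

        # Made-up example: polar SST rising ~1.4 degC per 1 degC of global mean
        # warming gives a PAF of ~1.4.
        print(round(polar_amplification_factor([15.3, 20.0, 25.0, 30.0, 32.5],
                                               [2.0, 8.5, 15.5, 22.5, 26.0]), 2))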

    The enigma of Oligocene climate and global surface temperature evolution.

    Falling atmospheric CO2 levels led to cooling through the Eocene and the expansion of Antarctic ice sheets close to their modern size near the beginning of the Oligocene, a period of poorly documented climate. Here, we present a record of climate evolution across the entire Oligocene (33.9 to 23.0 Ma) based on TEX86 sea surface temperature (SST) estimates from southwestern Atlantic Deep Sea Drilling Project Site 516 (paleolatitude ∼36°S) and western equatorial Atlantic Ocean Drilling Project Site 929 (paleolatitude ∼0°), combined with a compilation of existing SST records and climate modeling. In this relatively low CO2 Oligocene world (∼300 to 700 ppm), warm climates similar to those of the late Eocene continued with only brief interruptions, while the Antarctic ice sheet waxed and waned. SSTs are spatially heterogeneous, but generally support late Oligocene warming coincident with declining atmospheric CO2. This Oligocene warmth, especially at high latitudes, belies a simple relationship between climate and atmospheric CO2 and/or ocean gateways, and is only partially explained by current climate models. Although the dominant climate drivers of this enigmatic Oligocene world remain unclear, our results help fill a gap in understanding past Cenozoic climates and the way long-term climate sensitivity responded to varying background climate states.
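
    For context, TEX86 is an index of archaeal membrane-lipid (GDGT) abundances that is converted to SST through an empirical calibration. The sketch below is only a reminder of how such estimates are formed, using the logarithmic TEX86^H calibration of Kim et al. (2010) as one common choice; the paper's actual calibration and data handling are not specified here, and the abundances in the example are invented.

        # Illustrative only: the TEX86 index and one common SST calibration (TEX86^H).
        # The study's actual calibration choice is not stated in the abstract above.
        import math

        def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
            """TEX86 = ([GDGT-2]+[GDGT-3]+[Cren']) / ([GDGT-1]+[GDGT-2]+[GDGT-3]+[Cren'])."""
            return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

        def sst_tex86h(index):
            """SST (deg C) from the logarithmic TEX86^H calibration (Kim et al., 2010)."""
            return 68.4 * math.log10(index) + 38.6

        # Invented relative abundances, to show the shape of the calculation only:
        t = tex86(gdgt1=1.0, gdgt2=0.55, gdgt3=0.25, cren_prime=0.20)
        print(round(t, 3), round(sst_tex86h(t), 1))  # -> 0.5, ~18 degC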

    2d Gauge Theories and Generalized Geometry

    We show that in the context of two-dimensional sigma models, minimal coupling of an ordinary rigid symmetry Lie algebra \mathfrak{g} leads naturally to the appearance of the "generalized tangent bundle" \mathbb{T}M \equiv TM \oplus T^*M by means of composite fields. Gauge transformations of the composite fields follow the Courant bracket, closing upon the choice of a Dirac structure D \subset \mathbb{T}M (or, more generally, the choice of a "small Dirac-Rinehart sheaf" \mathcal{D}), in which the fields as well as the symmetry parameters are to take values. In these new variables, the gauge theory takes the form of a (non-topological) Dirac sigma model, which is applicable in a more general context and proves to be universal in two space-time dimensions: a gauging of \mathfrak{g} of a standard sigma model with Wess-Zumino term exists iff there is a prolongation of the rigid symmetry to a Lie algebroid morphism from the action Lie algebroid M \times \mathfrak{g} \to M into D \to M (or the algebraic analogue of the morphism in the case of \mathcal{D}). The gauged sigma model results from a pullback by this morphism from the Dirac sigma model, which proves to be universal in two space-time dimensions in this sense. Comment: 22 pages, 2 figures; to appear in Journal of High Energy Physics
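
    As a reminder of the standard structures the abstract refers to (textbook definitions, not results of the paper), the generalized tangent bundle carries a canonical pairing and the Courant bracket, and a Dirac structure is a maximally isotropic, involutive subbundle:

        % Canonical pairing on \mathbb{T}M = TM \oplus T^*M:
        \langle X+\xi,\, Y+\eta \rangle = \tfrac{1}{2}\big( \iota_X \eta + \iota_Y \xi \big)
        % (Antisymmetric) Courant bracket on sections of \mathbb{T}M:
        [\![\, X+\xi,\, Y+\eta \,]\!] = [X,Y] + \mathcal{L}_X \eta - \mathcal{L}_Y \xi
            - \tfrac{1}{2}\, d\big( \iota_X \eta - \iota_Y \xi \big)
        % A Dirac structure D \subset \mathbb{T}M is maximally isotropic with respect
        % to the pairing and closed under the bracket.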

    Incentive or Habit Learning in Amphibians?

    Toads (Rhinella arenarum) received training with a novel incentive procedure involving access to solutions of different NaCl concentrations. In Experiment 1, instrumental behavior and weight-variation data confirmed that such solutions yield incentive values ranging from appetitive (deionized water, DW, leading to weight gain), to neutral (a slightly hypertonic 300 mM solution, leading to no net weight gain or loss), to aversive (a highly hypertonic 800 mM solution, leading to weight loss). In Experiment 2, a downshift from DW to a 300 mM solution or an upshift from a 300 mM solution to DW led to a gradual adjustment in instrumental behavior. In Experiment 3, extinction was similar after acquisition with access to only DW or to a random mixture of DW and 300 mM. In Experiment 4, a downshift from DW to 225, 212, or 200 mM solutions again led to gradual adjustments. These findings add to a growing body of comparative evidence suggesting that amphibians adjust to incentive shifts on the basis of habit formation and reorganization.

    Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison

    A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, together with some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD-learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior when one wants to apply them. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus to be able to relate it to the later occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here. Thus this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
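
    To make the contrast concrete, here is a minimal discrete-time sketch (not the paper's formulation) of the two rule families: a TD rule in which the δ-error drives learning (output control), and a differential Hebbian rule in which a presynaptic input correlated with the derivative of the output drives learning (input control). The variable names, the linear value function, and the simple exponential eligibility trace are assumptions of this illustration.

        # Hypothetical sketch of the two rule families; discrete-time approximations.
        import numpy as np

        def td_update(w, x, x_next, r, e, alpha=0.05, gamma=0.95, lam=0.9):
            """TD(lambda)-style update: learning is driven by the delta-error
            (output control: it converges when delta drops to zero on average)."""
            delta = r + gamma * (w @ x_next) - (w @ x)   # TD error
            e = gamma * lam * e + x                      # eligibility trace on the inputs
            return w + alpha * delta * e, e

        def differential_hebb_update(w, u, v, v_prev, mu=0.01, dt=1.0):
            """Differential Hebbian update: presynaptic input times the temporal
            derivative of the output (input control: learning stops when u -> 0)."""
            dv_dt = (v - v_prev) / dt
            return w + mu * u * dv_dt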

    Assessment of tensile strength of graphites by the Iosipescu coupon test

    Polycrystalline graphites are widely used in the metallurgical, nuclear, and aerospace industries. Graphites are particulate composites manufactured from a mixture of coke and pitch, and changes in the relative proportions of these materials modify their mechanical properties. Uniaxial tension tests must be avoided for the mechanical characterization of this kind of brittle material, owing to the difficulty of making the relatively long specimens and the premature damage caused during test set-up. In other types of tests, e.g. bending tests, the specimens are subjected to combined stress states (normal and transverse shear stresses). The Iosipescu shear test is performed on a beam with two opposite 90° notches machined at mid-length, by applying two force couples so that a pure and uniform shear stress state is generated in the cross section between the two notches. When the material is isotropic and brittle, failure can take place at 45° to the long axis of the beam; in this case the tensile normal stress, acting parallel to the lateral surfaces of the notches, controls the failure, and the result of the shear test is numerically equivalent to the tensile strength. This work evaluated a graphite of the type used in rocket nozzles by the Iosipescu test, and the resulting stress, ~11 MPa, was found to be equal to the tensile strength. Thus, the tensile strength can be evaluated with a single, simple experiment, avoiding complicated specimen machining and test set-up.
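
    As a rough illustration (not drawn from the paper's data), the nominal shear stress in an Iosipescu coupon is commonly taken as the applied load divided by the net cross-sectional area between the notch roots; for a brittle isotropic material failing at 45°, that value is read directly as the tensile strength. The load and dimensions below are invented for the example.

        # Hypothetical example: nominal shear stress in an Iosipescu coupon,
        # tau = P / (w * t), with w the ligament width between the notch roots
        # and t the specimen thickness. Numbers are invented, not the paper's data.
        def iosipescu_shear_stress(load_N, ligament_width_mm, thickness_mm):
            area_mm2 = ligament_width_mm * thickness_mm
            return load_N / area_mm2  # N/mm^2 == MPa

        # A 1.5 kN failure load on a 12 mm x 10 mm net section gives 12.5 MPa; for a
        # brittle isotropic graphite this would be read as the tensile strength.
        print(round(iosipescu_shear_stress(1500.0, 12.0, 10.0), 1))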

    Dental management considerations for the patient with an acquired coagulopathy. Part 1: Coagulopathies from systemic disease

    Current teaching suggests that many patients are at risk for prolonged bleeding during and following invasive dental procedures, due to an acquired coagulopathy from systemic disease and/or from medications. However, treatment standards for these patients often are the result of long-standing dogma with little or no scientific basis. The medical history is critical for the identification of patients potentially at risk for prolonged bleeding from dental treatment. Some time-honoured laboratory tests have little or no use in community dental practice. Loss of functioning hepatic, renal, or bone marrow tissue predisposes to acquired coagulopathies through different mechanisms, but the relationship to oral haemostasis is poorly understood. Given the lack of established, science-based standards, proper dental management requires an understanding of certain principles of pathophysiology for these medical conditions and a few standard laboratory tests. Making changes in anticoagulant drug regimens is often unwarranted and/or expensive, and can put patients at far greater risk for morbidity and mortality than the unlikely outcome of postoperative bleeding. It should be recognised that prolonged bleeding is a rare event following invasive dental procedures, and therefore the vast majority of patients with suspected acquired coagulopathies are best managed in the community practice setting.

    Sexual Size Dimorphism and Body Condition in the Australasian Gannet

    Funding: The research was financially supported by the Holsworth Wildlife Research Endowment. Acknowledgments: We thank the Victorian Marine Science Consortium, Sea All Dolphin Swim, Parks Victoria, and the Point Danger Management Committee for logistical support. We are grateful for the assistance of the many field volunteers involved in the study.