
    Field Measurements of Terrestrial and Martian Dust Devils

    Surface-based measurements of terrestrial and martian dust devils/convective vortices provided from mobile and stationary platforms are discussed. Imaging of terrestrial dust devils has quantified their rotational and vertical wind speeds, translation speeds, dimensions, dust load, and frequency of occurrence. Imaging of martian dust devils has provided translation speeds and constraints on dimensions, but only limited constraints on vertical motion within a vortex. The longer mission durations on Mars afforded by long-operating robotic landers and rovers have provided statistical quantification of vortex occurrence (time-of-sol and, recently, seasonal), a statistic that until recently was not a primary outcome of the more temporally limited terrestrial dust devil measurement campaigns. Terrestrial campaigns have measured a more extensive range of vortex parameters (pressure, wind, morphology, etc.) than have martian opportunities; electric field and direct dust-abundance measurements have not yet been obtained on Mars, and no martian robotic mission has yet provided contemporaneous high-frequency wind and pressure measurements. Comparison of measured terrestrial and martian dust devil characteristics suggests that martian dust devils are larger and possess faster maximum rotational wind speeds, that the absolute magnitude of the pressure deficit within a terrestrial dust devil is an order of magnitude greater than within a martian dust devil, and that the time-of-day variation in vortex frequency is similar. Recent terrestrial investigations have demonstrated the presence of diagnostic dust devil signals within seismic and infrasound measurements; an upcoming Mars robotic mission will obtain similar measurement types.
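The pressure-deficit comparison above is typically made by fitting a radial profile to barometric vortex encounters. A minimal sketch of the Lorentzian profile commonly fitted in dust devil studies follows; the central deficits and core radius below are illustrative assumptions, not values from the text:

```python
def vortex_pressure_deficit(r, dp0, r_core):
    """Lorentzian radial pressure profile often fitted to barometric
    dust devil encounters: dP(r) = dp0 / (1 + (r / r_core)**2).
    dp0 is the central pressure deficit, r_core the core radius."""
    return dp0 / (1.0 + (r / r_core) ** 2)

# Illustrative central deficits reflecting the order-of-magnitude
# contrast noted in the abstract (assumed values, in Pa):
dp_terrestrial = 30.0
dp_martian = 3.0

# The deficit falls to half its central value at one core radius:
half = vortex_pressure_deficit(3.0, dp_terrestrial, 3.0)  # = dp0 / 2
```

A fit of this profile to a pressure time series (converting encounter time to radial miss distance via the translation speed) is how single-station campaigns recover dp0 and r_core.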

    A framework for the probabilistic analysis of meteotsunamis

    This paper is not subject to U.S. copyright. The definitive version was published in Natural Hazards 74 (2014): 123-142, doi:10.1007/s11069-014-1294-1. A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling are performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregation of the results from the Monte Carlo scheme yields a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using multiple synthetic catalogs, resampled from the parent parameter distributions, yield mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
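The Monte Carlo aggregation described above (sample a synthetic catalog, compute an amplitude per event, scale exceedance counts by the Poisson event rate) can be sketched as follows. The parameter distributions and the amplitude proxy here are loud placeholders: the paper fits distributions to ASOS squall-line records and computes amplitudes with a numerical hydrodynamic model, neither of which is reproduced here.

```python
import random

random.seed(0)

EVENT_RATE = 5.0     # assumed Poisson rate of squall lines per year
N_EVENTS = 20000     # synthetic catalog size

def sample_event():
    """Draw one synthetic squall line: pressure jump (hPa), speed (m/s).
    Distributions are illustrative stand-ins for the fitted ASOS ones."""
    dp = random.lognormvariate(0.0, 0.5)
    speed = random.gauss(25.0, 5.0)
    return dp, speed

def wave_amplitude(dp, speed, c_long=30.0):
    """Toy proxy for the hydrodynamic model: Proudman-type resonance
    peaks when disturbance speed matches the long-wave speed c_long."""
    froude = speed / c_long
    resonance = 1.0 / max(abs(1.0 - froude ** 2), 0.05)  # capped near Fr = 1
    return 0.01 * dp * resonance  # metres, illustrative scaling

amplitudes = [wave_amplitude(*sample_event()) for _ in range(N_EVENTS)]

def exceedance_rate(threshold):
    """Annualized rate that max amplitude exceeds threshold: one point
    on the hazard curve."""
    frac = sum(a > threshold for a in amplitudes) / len(amplitudes)
    return EVENT_RATE * frac
```

Evaluating `exceedance_rate` over a range of thresholds traces out one hazard curve; repeating the whole procedure with resampled catalogs gives the mean and quantile curves the abstract mentions.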

    Turbulent flow as a cause for underestimating coronary flow reserve measured by Doppler guide wire

    BACKGROUND: Doppler-tipped coronary guide-wires (FW) are well-established tools in interventional cardiology to quantitatively analyze coronary blood flow. Doppler wires are used to measure the coronary flow velocity reserve (CFVR). The CFVR remains reduced in some patients despite anatomically successful coronary angioplasty. The aim of our study was to test the influence of changes in flow profile on the validity of intra-coronary Doppler flow velocity measurements in vitro. It is still unclear whether turbulent flow in coronary arteries is of importance for physiologic studies in vivo. METHODS: We perfused glass pipes of defined inner diameters (1.5 – 5.5 mm) with heparinized blood in a pulsatile flow model. Laminar and turbulent flow profiles were achieved by varying the flow velocity. The average peak velocity (APV) was recorded using 0.014-inch FW. Flow velocity measurements were also performed in 75 patients during coronary angiography. Coronary hyperemia was induced by intra-coronary injection of adenosine. The APV maximum was taken for further analysis. The mean luminal diameter of the coronary artery at the region of flow velocity measurement was calculated by quantitative angiography in two orthogonal planes. RESULTS: In vitro, the measured APV multiplied by the luminal area showed a significant correlation to the given perfusion volumes in all diameters under laminar flow conditions (r² > 0.85). Above a critical Reynolds number of 500 – indicating turbulent flow – the volume calculation derived from FW velocity measurement underestimated the actual rate of perfusion by up to 22.5 % (13 ± 4.6 %). In vivo, the hyperemic APV was measured irrespective of the inherent deviation towards lower velocities. In 15 of 75 patients (20%) the maximum APV exceeded the velocity of the critical Reynolds number determined by the in vitro experiments.
CONCLUSION: Doppler guide wires are a valid tool for exact measurement of coronary flow velocity below a critical Reynolds number of 500. Coronary flow velocities above that corresponding to the critical Reynolds number may result in an underestimation of the CFVR caused by turbulent flow. This underestimation of the flow velocity may reach up to 22.5 % compared to the actual volumetric flow. Cardiologists should consider this phenomenon in at least 20 % of patients when measuring CFVR for clinical decision making.
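The critical-velocity relation implied by the in vitro result can be made explicit with the standard pipe-flow Reynolds number Re = ρvD/μ. The whole-blood density and viscosity below are typical literature figures assumed for illustration, not values reported by the study:

```python
BLOOD_DENSITY = 1060.0    # kg/m^3, assumed typical whole-blood value
BLOOD_VISCOSITY = 0.0035  # Pa*s, assumed typical whole-blood value

def reynolds_number(velocity_m_s, diameter_m,
                    density=BLOOD_DENSITY, viscosity=BLOOD_VISCOSITY):
    """Pipe-flow Reynolds number: Re = rho * v * D / mu."""
    return density * velocity_m_s * diameter_m / viscosity

def critical_velocity(diameter_m, re_crit=500.0,
                      density=BLOOD_DENSITY, viscosity=BLOOD_VISCOSITY):
    """Velocity at which Re reaches re_crit for a given lumen diameter;
    above this, the study found FW-derived flow is underestimated."""
    return re_crit * viscosity / (density * diameter_m)

# For an assumed 3 mm coronary lumen, turbulence onset (Re = 500)
# falls near v_crit = 500 * 0.0035 / (1060 * 0.003) ≈ 0.55 m/s:
v_crit = critical_velocity(0.003)
```

The inverse dependence on diameter is the clinically relevant point: the same hyperemic APV corresponds to a higher Reynolds number in a larger vessel, so the Re = 500 threshold is crossed at lower velocities as lumen diameter increases.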

    The P2X1 receptor and platelet function

    Extracellular nucleotides are ubiquitous signalling molecules, acting via the P2 class of surface receptors. Platelets express three P2 receptor subtypes: the ADP-dependent P2Y1 and P2Y12 G-protein-coupled receptors and the ATP-gated P2X1 non-selective cation channel. Platelet P2X1 receptors can generate significant increases in intracellular Ca2+, leading to shape change, movement of secretory granules, and low levels of αIIbβ3 integrin activation. P2X1 can also synergise with several other receptors to amplify signalling and functional events in the platelet. In particular, activation of P2X1 receptors by ATP released from dense granules amplifies the aggregation responses to low levels of the major agonists, collagen and thrombin. In vivo studies using transgenic murine models show that P2X1 receptors amplify localised thrombosis following damage of small arteries and arterioles and also contribute to thromboembolism induced by intravenous co-injection of collagen and adrenaline. In vitro, under flow conditions, P2X1 receptors contribute more to aggregate formation on collagen-coated surfaces as the shear rate is increased, which may explain their greater contribution to localised thrombosis in arterioles compared to venules within in vivo models. Since shear increases substantially near sites of stenosis, anti-P2X1 therapy represents a potential means of reducing thrombotic events at atherosclerotic plaques.

    On the multiscale modeling of heart valve biomechanics in health and disease


    A Meaningful U.S. Cap-and-Trade System to Address Climate Change

    There is growing impetus for a domestic U.S. climate policy that can provide meaningful reductions in emissions of CO2 and other greenhouse gases. In this article, I propose and analyze a scientifically sound, economically rational, and politically feasible approach for the United States to reduce its contributions to the increase in atmospheric concentrations of greenhouse gases. The proposal features an upstream, economy-wide CO2 cap-and-trade system which implements a gradual trajectory of emissions reductions over time, and includes mechanisms to reduce cost uncertainty. I compare the proposed system with frequently discussed alternatives. In addition, I describe common objections to a cap-and-trade approach to the problem, and provide responses to these objections.