    Applicability of Boussinesq approximation in a turbulent fluid with constant properties

    The equations of motion describing buoyant fluids are often simplified using a set of approximations proposed by J. Boussinesq a century ago. In summary, they consist of assuming constant fluid properties, incompressibility, and conservation of calories during heat transport. Assuming the first requirement (constant fluid properties) is fulfilled, we derive a set of four criteria for assessing the validity of the two other requirements in turbulent Rayleigh-Bénard convection. The first criterion, $\alpha \Delta \ll 1$, simply results from the incompressibility condition in the thermal boundary layer ($\alpha$ and $\Delta$ are the thermal expansion coefficient and the temperature difference driving the flow). The three other criteria are proportional or quadratic in the density stratification or, equivalently, in the temperature difference $\Delta_h$ resulting from the adiabatic gradient across the cell. Numerical evaluations with air, water and cryogenic helium show that most laboratory experiments are free from such Boussinesq violations as long as the first criterion is fulfilled. In ultra-high Rayleigh number ($Ra > 10^{16}$) experiments in He, one of the stratification criteria, scaling with $\alpha \Delta_h$, could be violated. This criterion guarantees that pressure fluctuations have a negligible influence both on the density variation and on the heat transfer equation through compression/expansion cycles. Extrapolation to higher $Ra$ suggests that strong violations of the Boussinesq approximation could occur in atmospheric convection.
    Comment: Submitted to Phys. Fluids (Oct 2007)
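
    As a rough numerical illustration (not taken from the paper), the first criterion can be checked directly: the Python sketch below evaluates $\alpha \Delta$ for a few fluids. The property values are textbook approximations assumed for illustration, not the paper's data.

        # Sketch (assumed property values, not the paper's data): evaluate the
        # first Boussinesq criterion, alpha * Delta << 1, for a few fluids.

        fluids_alpha = {                     # thermal expansion coefficient [1/K]
            "air (300 K)": 3.3e-3,           # ideal gas: alpha ~ 1/T
            "water (293 K)": 2.1e-4,
            "helium gas (5 K)": 2.0e-1,      # order of magnitude, alpha ~ 1/T
        }
        delta = 1.0                          # driving temperature difference [K], assumed

        for name, alpha in fluids_alpha.items():
            crit = alpha * delta
            verdict = "criterion satisfied" if crit < 0.1 else "criterion at risk"
            print(f"{name:18s} alpha*Delta = {crit:.1e} -> {verdict}")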

    Interest rate setting and inflation targeting: evidence of a nonlinear Taylor rule for the United Kingdom

    We examine potential nonlinear behaviour in the conduct of monetary policy by the Bank of England. We find significant nonlinearity in this policy setting, and in particular that the standard Taylor rule only really begins to bite once expected inflation is significantly above its target. This suggests, for example, that while the stated objective of the Bank of England is to pursue a symmetric inflation target, in practice some degree of asymmetry has crept into interest-rate setting. We argue that, nevertheless, the very predictability of the policy rule, especially when set out in a highly plausible and intuitive nonlinear framework, is perhaps one reason why the United Kingdom has, since the early 1990s, enjoyed price stability combined with relatively strong growth.
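
    To make the nonlinearity concrete, the sketch below implements a generic asymmetric Taylor-type rule in which the inflation response strengthens only once expected inflation exceeds its target. This is an illustration with assumed coefficients, not the rule estimated in the paper.

        # Illustrative asymmetric Taylor-type rule (assumed coefficients, not
        # the paper's estimates): the inflation response "bites" only once
        # expected inflation is above target.

        def policy_rate(expected_inflation, output_gap,
                        r_star=2.0, pi_target=2.0,
                        phi_low=0.2, phi_high=1.5, phi_y=0.5):
            """Nominal policy rate (percent) under an asymmetric rule."""
            gap = expected_inflation - pi_target
            phi = phi_high if gap > 0 else phi_low   # asymmetry around the target
            return r_star + expected_inflation + phi * gap + phi_y * output_gap

        for pi_e in (1.0, 2.0, 3.0, 4.0):
            print(f"E[inflation] = {pi_e:.0f}% -> rate = {policy_rate(pi_e, 0.0):.2f}%")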

    Conference Summary: HI Science in the Next Decade

    The atomic hydrogen (HI) 21cm line measures the gas content within and around galaxies, traces the dark matter potential, and probes volumes and objects that other surveys do not. Over the next decade, 21cm line science will exploit new technologies, especially focal plane and aperture arrays, and will see the deployment of Epoch of Reionization/Dark Age detection experiments and Square Kilometer Array (SKA) precursor instruments. Several experiments designed to detect, and eventually to characterize, the reionization history of the intergalactic medium should deliver first results within two to three years. Although "precision cosmology" surveys of HI in galaxies at z ~ 1 to 3 require the full collecting area of the SKA, a coherent program of HI line science making use of the unique capabilities of both the existing facilities and the novel ones demonstrated by the SKA precursors will teach us how many gas-rich galaxies there really are and where they reside, and will yield fundamental insight into how galaxies accrete gas, form stars and interact with their environment.
    Comment: To appear in AIP Conference Proceedings, "The Evolution of Galaxies through the Neutral Hydrogen Window", Feb 1-3 2008, Arecibo, Puerto Rico, eds. R. Minchin & E. Momjian. 8 pages
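
    As background on why these surveys are technically demanding (an aside, not part of the summary itself): the 21cm line redshifts out of its 1420 MHz rest frequency as $\nu_{\rm obs} = \nu_{\rm rest}/(1+z)$, pushing the z ~ 1 to 3 surveys into low radio frequencies.

        # Background illustration (not from the conference summary): observed
        # frequency of the HI 21cm line, nu_obs = nu_rest / (1 + z).

        NU_REST_MHZ = 1420.405751  # HI hyperfine rest frequency [MHz]

        for z in (0.0, 1.0, 2.0, 3.0):
            print(f"z = {z:.0f}: 21cm line observed at {NU_REST_MHZ / (1 + z):6.1f} MHz")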

    On the targeting and redistributive efficiencies of alternative transfer instruments

    The paper shows how the so-called distributional characteristic of a policy instrument can be additively decomposed into two components: one that captures the targeting efficiency of the instrument, the other its redistributive efficiency. Using these measures, the paper provides an interpretation of the commonly used leakage and undercoverage rates (and other indices based on these concepts) within standard welfare theory. An empirical application of the decomposition approach to Mexican data is presented.
    Keywords: welfare economics, mathematical models, Mexico
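
    For reference, these are the standard definitions the paper reinterprets, shown here on invented toy data rather than the Mexican data used in the paper:

        # Standard undercoverage and leakage rates on invented toy data (this
        # is background for the abstract, not the paper's decomposition).

        households = [
            # (consumption, receives_transfer)
            (50, True), (80, False), (120, True), (200, False), (300, True),
        ]
        poverty_line = 100

        poor = [(c, r) for c, r in households if c < poverty_line]
        recipients = [(c, r) for c, r in households if r]

        # undercoverage: share of the poor who do not receive the transfer
        undercoverage = sum(1 for c, r in poor if not r) / len(poor)
        # leakage: share of recipients who are not poor
        leakage = sum(1 for c, r in recipients if c >= poverty_line) / len(recipients)

        print(f"undercoverage = {undercoverage:.2f}, leakage = {leakage:.2f}")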

    Root optimization of polynomials in the number field sieve

    The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first being polynomial selection. The quality of the polynomials chosen in polynomial selection can be modelled in terms of their size and root properties. In this paper, we describe some algorithms for selecting polynomials with very good root properties.
    Comment: 16 pages, 18 references
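
    To give a flavour of what "root properties" means (background, not the paper's algorithms): a polynomial with many roots modulo small primes yields sieve values divisible by those primes more often, and hence smoother values on average. The sketch below counts roots modulo small primes for a toy polynomial.

        # Background sketch (not the paper's algorithms): count the roots of a
        # polynomial modulo small primes; many small-prime roots is the "good
        # root properties" that polynomial selection aims for.

        def roots_mod_p(coeffs, p):
            """Number of roots mod p of a polynomial given low-degree-first."""
            def f(x):
                acc = 0
                for c in reversed(coeffs):   # Horner's rule, high degree first
                    acc = (acc * x + c) % p
                return acc
            return sum(1 for x in range(p) if f(x) == 0)

        toy = [0, -1, 0, 1]   # f(x) = x^3 - x, a toy example
        for p in (2, 3, 5, 7, 11):
            print(f"roots of f mod {p:2d}: {roots_mod_p(toy, p)}")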

    Are the welfare losses from imperfect targeting important?

    The authors evaluate the size of the welfare losses from using alternative "imperfect" welfare indicators as substitutes for the conventionally preferred consumption indicator. They find that whereas the undercoverage and leakage indices always suggest substantial losses, and the poverty indices suggest substantial losses for the worst performing indicators, their preferred welfare index based on standard welfare theory suggests much smaller welfare losses. They also find that one cannot reject the hypothesis that the welfare losses associated with using the better performing alternative indicators are zero. In the case of their preferred welfare index, this reflects the fact that most of the targeting errors, i.e., exclusion and inclusion errors, are highly concentrated around the poverty line, so that the differences in welfare weights between those receiving and not receiving the transfers are insufficient to make a difference to the overall welfare impact.
    Keywords: welfare economics, poverty, consumption (economic theory)
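
    The concentration argument can be illustrated numerically (an illustration with an assumed welfare-weight function, not the authors' computation): with iso-elastic weights $w(c) = (z/c)^{\epsilon}$, households just below and just above the poverty line z carry almost the same weight, so misclassifying them barely moves a welfare-weighted index.

        # Illustration (assumed weight function, not the authors' computation):
        # iso-elastic welfare weights w(c) = (z / c) ** eps change little across
        # the poverty line z, so targeting errors near the line matter little.

        z, eps = 100.0, 2.0   # poverty line and inequality aversion, assumed

        def weight(c):
            return (z / c) ** eps

        for c in (95.0, 105.0, 25.0, 400.0):
            print(f"consumption = {c:5.0f} -> welfare weight = {weight(c):.3f}")
        # near the line the weights are ~1.11 vs ~0.91 (a small gap);
        # far from it they are 16.0 vs ~0.06 (a huge gap)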