
    The Atacama Cosmology Telescope: Lensing of CMB Temperature and Polarization Derived from Cosmic Infrared Background Cross-Correlation

    We present a measurement of the gravitational lensing of the Cosmic Microwave Background (CMB) temperature and polarization fields obtained by cross-correlating the reconstructed convergence signal from the first season of Atacama Cosmology Telescope Polarimeter data at 146 GHz with Cosmic Infrared Background (CIB) fluctuations measured using the Planck satellite. Using an effective overlap area of 92.7 square degrees, we detect gravitational lensing of the CMB polarization by large-scale structure at a statistical significance of 4.5σ. Combining both CMB temperature and polarization data gives a lensing detection at 9.1σ significance. A B-mode polarization lensing signal is present with a significance of 3.2σ. We also present the first measurement of the CMB lensing-CIB correlation at small scales, corresponding to ℓ > 2000. Null tests and systematic checks show that our results are not significantly biased by astrophysical or instrumental systematic effects, including Galactic dust. Fitting our measurements to the best-fit lensing-CIB cross-power spectrum measured in Planck data, scaled by an amplitude A, gives A = 1.02^{+0.12}_{-0.08} (stat.) ± 0.06 (syst.), consistent with the Planck results.
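    As a rough illustration of the amplitude fit quoted above, the sketch below performs the standard single-parameter chi-square fit of measured band powers to a fixed template; the band powers, template shape, and covariance are placeholder values for the example, not ACT or Planck data.

```python
import numpy as np

def fit_amplitude(data, template, cov):
    """Best-fit amplitude A (and 1-sigma error) for data ~ A * template,
    from a standard chi-square minimization with covariance `cov`.
    Closed form: A = (t^T C^-1 d) / (t^T C^-1 t), sigma_A = (t^T C^-1 t)^(-1/2)."""
    cinv = np.linalg.inv(cov)
    denom = template @ cinv @ template
    a_hat = (template @ cinv @ data) / denom
    sigma_a = denom ** -0.5
    return a_hat, sigma_a

# Toy example with placeholder band powers (not real measurements):
ell_bins = np.arange(100, 2100, 200)
template = 1e-7 * (ell_bins / 1000.0) ** -1.5   # assumed template shape
cov = np.diag((0.1 * template) ** 2)            # assumed 10% diagonal errors
rng = np.random.default_rng(0)
data = template + rng.multivariate_normal(np.zeros(len(ell_bins)), cov)
print(fit_amplitude(data, template, cov))        # A near 1, sigma_A ~ 0.03
```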

    Determining the Hubble Constant without the Sound Horizon: A 3.6% Constraint on H_0 from Galaxy Surveys, CMB Lensing and Supernovae

    Many theoretical resolutions to the so-called "Hubble tension" rely on modifying the sound horizon at recombination, r_s, and thus the acoustic scale used as a standard ruler in the cosmic microwave background (CMB) and large-scale structure (LSS) datasets. As shown in a number of recent works, these observables can also be used to compute r_s-independent constraints on H_0 by making use of the horizon scale at matter-radiation equality, k_eq, which has a different sensitivity to high-redshift physics than r_s. In this work, we present the tightest k_eq-based constraints on the expansion rate from current data, finding H_0 = 64.8^{+2.2}_{-2.5} km s^{-1} Mpc^{-1} at 68% CL from a combination of BOSS galaxy power spectra, Planck CMB lensing, and the newly released Pantheon+ supernova constraints, as well as physical priors on the baryon density, neutrino mass, and spectral index. The BOSS and Planck measurements have different degeneracy directions, leading to the improved combined constraints, with a bound of H_0 = 67.1^{+2.5}_{-2.9} (63.6^{+2.9}_{-3.6}) from BOSS (Planck) alone. The results show some dependence on the neutrino mass bounds, with the constraint broadening to H_0 = 68.0^{+2.9}_{-3.2} if we instead impose a weak prior on ∑m_ν from terrestrial experiments rather than assuming ∑m_ν < 0.26 eV, or shifting to H_0 = 64.6 ± 2.4 if the neutrino mass is fixed to its minimal value. Even without any dependence on the sound horizon, our results are in ≈3σ tension with those obtained from the Cepheid-calibrated distance ladder, providing evidence against new physics models that vary H_0 by changing acoustic physics or the expansion history immediately prior to recombination. Comment: 11 pages, 3 figures, submitted to Phys. Rev.
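    For orientation, the background-cosmology relation behind the equality-scale method is sketched below; these are textbook relations, not equations quoted from the paper, whose analysis uses the full power-spectrum shape.

```latex
% Comoving wavenumber entering the horizon at matter-radiation equality:
\begin{align}
  a_{\rm eq} &= \frac{\Omega_r}{\Omega_m}, &
  H^2(a_{\rm eq}) &= 2\,H_0^2\,\Omega_m\,a_{\rm eq}^{-3},\\
  k_{\rm eq} &= a_{\rm eq}\,H(a_{\rm eq})
             = H_0\sqrt{\frac{2\,\Omega_m}{a_{\rm eq}}}
             \;\propto\; \Omega_m h^2 \quad [\mathrm{Mpc}^{-1}],
\end{align}
% since \Omega_r h^2 is fixed by the measured CMB temperature. Galaxy surveys
% measure k_{\rm eq} in h\,\mathrm{Mpc}^{-1}, i.e. they constrain \Omega_m h;
% combining with an \Omega_m determination (e.g. from supernovae or lensing)
% then yields H_0 with no reference to the sound horizon r_s.
```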

    Stability of Projection Methods for Incompressible Flows Using High Order Pressure-Velocity Pairs of Same Degree: Continuous and Discontinuous Galerkin Formulations

    This paper presents limits for the stability of projection-type schemes when using high-order pressure-velocity pairs of the same degree. Two high-order h/p variational methods encompassing continuous and discontinuous Galerkin formulations are used to explain previously observed lower limits on the time step for projection-type schemes to be stable [18], when h- or p-refinement strategies are considered. In addition, the analysis included in this work shows that these stability limits do not depend only on the time step but on the product of the latter and the kinematic viscosity, which is of particular importance in the study of high Reynolds number flows. We show that high-order methods prove advantageous in stabilising the simulations when small time steps and low kinematic viscosities are used. Drawing upon this analysis, we demonstrate how the effects of this instability can be reduced in the discontinuous scheme by introducing a stabilisation term into the global system. Finally, we show that these lower limits are compatible with Courant-Friedrichs-Lewy (CFL) type restrictions, given that a sufficiently high polynomial order is used.
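    The central observation above is that the instability threshold scales with the product of the time step and the kinematic viscosity, and that it must coexist with the usual CFL upper bound on the time step. A minimal sketch of that "admissible window" check follows: the CFL estimate dt <= C*h/(p^2*U) is a common heuristic for high-order elements, while the lower-bound constant `k_lower` and its dependence on h and p are placeholders, not the limits derived in the paper.

```python
def admissible_timestep_window(h, p, U, nu, c_cfl=0.5, k_lower=1e-4):
    """Return (dt_min, dt_max) for a projection-type scheme, assuming:
      * an advective CFL upper bound  dt <= c_cfl * h / (p**2 * U)  (common heuristic),
      * a lower limit expressed through the product dt * nu >= k_lower
        (placeholder form, standing in for the scheme-dependent threshold).
    An empty window (dt_min > dt_max) means no stable time step exists here."""
    dt_max = c_cfl * h / (p**2 * U)
    dt_min = k_lower / nu
    return dt_min, dt_max

# Example: lowering the viscosity raises dt_min and can close the window.
for nu in (1e-2, 1e-3):
    print(nu, admissible_timestep_window(h=0.1, p=6, U=1.0, nu=nu))
```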

    Making the user more efficient: Design for sustainable behaviour

    User behaviour is a significant determinant of a product’s environmental impact; while engineering advances permit increased efficiency of product operation, the user’s decisions and habits ultimately have a major effect on the energy or other resources used by the product. There is thus a need to change users’ behaviour. A range of design techniques developed in diverse contexts suggest opportunities for engineers, designers and other stakeholders working in the field of sustainable innovation to affect users’ behaviour at the point of interaction with the product or system, in effect ‘making the user more efficient’. Approaches to changing users’ behaviour from a number of fields are reviewed and discussed, including: strategic design of affordances and behaviour-shaping constraints to control or affect energy- or other resource-using interactions; the use of different kinds of feedback and persuasive technology techniques to encourage or guide users to reduce their environmental impact; and context-based systems which use feedback to adjust their behaviour to run at optimum efficiency and reduce the opportunity for user-affected inefficiency. Example implementations in the sustainable engineering and ecodesign field are suggested and discussed.

    Transient growth analysis of the flow past a circular cylinder

    We apply direct transient growth analysis in complex geometries to investigate its role in the primary and secondary bifurcation/transition process of the flow past a circular cylinder. The methodology is based on the singular value decomposition of the Navier-Stokes evolution operator linearized about a two-dimensional steady or periodic state, which leads to the optimal growth modes. Linearly stable and unstable steady flow at Re = 45 and 50 is considered first, where the analysis demonstrates that strong two-dimensional transient growth is observed, with energy amplifications of order 10^3 at U_∞τ/D ≈ 30. Transient growth at Re = 50 promotes the linear instability which ultimately saturates into the well-known von Kármán street. Subsequently we consider the transient growth upon the time-periodic base state corresponding to the von Kármán street at Re = 200 and 300. Depending upon the spanwise wavenumber, the flow at these Reynolds numbers is linearly unstable due to the so-called mode A and B instabilities. Once again energy amplifications of order 10^3 are observed over a time interval of τ/T = 2, where T is the time period of the base flow shedding. In all cases the maximum energy of the optimal initial conditions is located within a diameter of the cylinder, in contrast to the spatial distribution of the unstable eigenmodes, which extend far into the downstream wake. It is therefore reasonable to consider the analysis as presenting an accelerator to the existing modal mechanism. The rapid amplification of the optimal growth modes highlights their importance in the transition process for the flow past a circular cylinder, particularly when comparing with experimental results where these types of convective instability mechanisms are likely to be activated. The spatial localization, close to the cylinder, of the optimal initial condition may be significant when considering strategies to promote or control shedding.
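    A minimal numerical illustration of the optimal-growth machinery described above: for a toy non-normal system standing in for the discretized, linearized Navier-Stokes operator, the optimal energy amplification over a horizon τ is the square of the largest singular value of the propagator exp(Aτ). The matrix A and the identity energy norm below are assumptions made for the sketch, not quantities from the study.

```python
import numpy as np
from scipy.linalg import expm

# Toy stand-in for the linearized evolution operator: stable (decaying
# eigenvalues) but strongly non-normal, which is what permits transient growth.
A = np.array([[-0.01, 10.0],
              [ 0.0,  -0.02]])

def optimal_growth(A, tau):
    """Optimal energy amplification G(tau) = sigma_max(exp(A*tau))**2, assuming
    the energy norm is the plain 2-norm (true only for this toy; a real solver
    would carry mass/weight matrices into the SVD)."""
    M = expm(A * tau)            # state-transition (propagator) matrix
    u, s, vh = np.linalg.svd(M)
    G = s[0] ** 2                # largest singular value squared
    optimal_ic = vh[0]           # right singular vector = optimal initial condition
    return G, optimal_ic

for tau in (10.0, 50.0, 100.0):
    G, _ = optimal_growth(A, tau)
    print(f"tau = {tau:6.1f}   G(tau) = {G:10.2f}")
```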

    Spin-based quantum information processing with semiconductor quantum dots and cavity QED

    A quantum information processing scheme is proposed with semiconductor quantum dots located in a high-Q single-mode QED cavity. The spin degrees of freedom of one excess conduction electron of the quantum dots are employed as qubits. Excitonic states, which can be produced on ultrafast timescales by optical operations, are used as auxiliary states in the realization of quantum gates. We show how properly tailored ultrafast laser pulses and Pauli-blocking effects can be used to achieve universal encoded quantum computing. Comment: RevTeX, 2 figures

    Detecting and quantifying methane emissions from oil and gas production: algorithm development with ground-truth calibration based on Sentinel-2 satellite imagery

    Sentinel-2 satellite imagery has been shown to be capable of detecting and quantifying methane emissions from oil and gas production. However, current methods lack performance calibration with ground-truth testing. This study developed a multi-band–multi-pass–multi-comparison-date methane retrieval algorithm that enhances Sentinel-2 sensitivity to methane plumes. The method was calibrated using data from a large-scale controlled-release test in Ehrenberg, Arizona, in fall 2021, with three algorithm parameters tuned based on the true emission rates. The tuned parameters are the pixel-level concentration upper-bound threshold used during extreme-value removal, the number of comparison dates, and the pixel-level methane concentration percentage threshold used when determining the spatial extent of a plume. We found that a low value of the upper-bound threshold during extreme-value removal can result in false negatives. A high number of comparison dates helps enhance the algorithm's sensitivity to plumes on the target date, but values in excess of 12 d are neither necessary nor computationally efficient. A high percentage threshold when determining the spatial extent of a plume helps enhance the quantification accuracy, but it may harm the yes/no detection accuracy. We found that there is a trade-off between quantification accuracy and detection accuracy. In the scenario with the highest quantification accuracy, we achieved the lowest quantification error and had zero false-positive detections; however, the algorithm missed three true plumes, which reduced the yes/no detection accuracy. In contrast, all of the true plumes were detected in the highest detection accuracy scenario, but the emission rate quantification had higher errors. We illustrated a two-step method that updates the emission rate estimates in an interim step, which improves quantification accuracy while keeping high yes/no detection accuracy. We also validated the algorithm's ability to detect true positives and true negatives in two application studies.
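    The three tuned parameters map onto a simple masking pipeline. The sketch below is a schematic reading of those steps (clip extremes, subtract a multi-date background, threshold on a percentage of the peak enhancement); the function and parameter names and the default values are hypothetical, not the calibrated values from the Ehrenberg test or the authors' code.

```python
import numpy as np

def plume_mask(target, comparisons, upper_bound=2.0, pct_threshold=0.2):
    """Schematic of the thresholding steps described in the abstract.
    `target` is a 2-D retrieved methane-enhancement field for the target date,
    `comparisons` a stack of fields from plume-free comparison dates.
    Parameter names and defaults are illustrative placeholders."""
    # 1. Extreme-value removal: clip retrievals above an upper-bound threshold;
    #    too low a bound can erase real plumes (the false-negative risk noted above).
    clipped = np.clip(target, None, upper_bound)

    # 2. Background estimate from the comparison dates; more dates smooth the
    #    background, with diminishing returns beyond roughly 12 d of imagery.
    background = np.median(np.clip(comparisons, None, upper_bound), axis=0)
    enhancement = clipped - background

    # 3. Spatial extent: keep pixels above a percentage of the peak enhancement.
    #    A higher percentage tightens the plume (better quantification) but can
    #    miss weak plumes entirely (worse yes/no detection).
    mask = enhancement >= pct_threshold * enhancement.max()
    return enhancement, mask
```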