
    Constraining gravity at large scales with the 2MASS Photometric Redshift catalogue and Planck lensing

    We present a new measurement of structure growth at $z \simeq 0.08$ obtained by correlating the cosmic microwave background (CMB) lensing potential map from the \textit{Planck} satellite with the angular distribution of the 2MASS Photometric Redshift galaxies. After testing for, and finding no evidence of, systematic effects, we calculate the angular auto- and cross-power spectra. We combine these spectra to estimate the amplitude of structure growth using the bias-independent $D_G$ estimator introduced by Giannantonio et al. 2016. We find that the relative amplitude of $D_G$ with respect to the predictions based on \textit{Planck} cosmology is $A_D(z=0.08) = 1.00 \pm 0.21$, fully consistent with the expectations for the standard cosmological model. Considering statistical errors only, we forecast that a joint analysis between an LSST-like photometric galaxy sample and lensing maps from upcoming ground-based CMB surveys like the Simons Observatory and CMB-S4 can yield sub-percent constraints on the growth history and differentiate between different models of cosmic acceleration. Comment: 14 pages, 8 figures, 1 table, updated to match published version on Ap
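    To make the "bias-independent" property concrete, here is a minimal sketch of the idea behind estimators of the $D_G$ type (the exact normalisation used by Giannantonio et al. 2016 may differ): on linear scales the galaxy auto-spectrum scales with the unknown galaxy bias as $b^2$, while the CMB-lensing cross-spectrum scales as $b$, so the combination
    \[
    D_G \;\propto\; \frac{C_\ell^{\kappa g}}{\sqrt{C_\ell^{gg}}}
    \]
    cancels the bias and tracks only the amplitude of structure growth; $A_D$ is then this quantity measured relative to its value in the fiducial \textit{Planck} cosmology.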

    Critical Theory of Two-Dimensional Mott Transition: Integrability and Hilbert Space Mapping

    We reconsider the Mott transition in the context of a two-dimensional fermion model with density-density coupling. We exhibit a Hilbert space mapping between the original model and the Double Lattice Chern-Simons theory at the critical point by use of the representation theory of the q-oscillator and Weyl algebras. The transition is further characterized by the modification of the ground state. The explicit mapping provides a new tool to further probe and test the detailed physical properties of the fermionic lattice model considered here and to enhance our understanding of the Mott transition(s).

    Joint statistics of acceleration and vorticity in fully developed turbulence

    We report results from a high-resolution numerical study of fluid particles transported by a fully developed turbulent flow. Single-particle trajectories were followed for a time range spanning more than three decades, from less than a tenth of the Kolmogorov time-scale up to one large-eddy turnover time. We present results concerning acceleration statistics and the statistics of trapping by vortex filaments, conditioned on the local values of vorticity and enstrophy. We distinguish two different behaviors in the joint statistics of vorticity and centripetal acceleration versus vorticity and longitudinal acceleration. Comment: 8 pages, 6 figures
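    As an illustration of what such a conditional analysis can look like in practice (a rough sketch with hypothetical array names, not the authors' code), joint statistics of this kind can be approximated by binning particle accelerations on the locally sampled enstrophy:

```python
import numpy as np

def conditional_accel_stats(accel, enstrophy, n_bins=10):
    """Crude stand-in for the joint statistics discussed above: bin the samples
    by enstrophy quantiles and return the conditional mean and rms acceleration.
    `accel` and `enstrophy` are hypothetical 1D arrays sampled along trajectories."""
    edges = np.quantile(enstrophy, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, enstrophy) - 1, 0, n_bins - 1)
    mean = np.array([accel[idx == i].mean() for i in range(n_bins)])
    rms = np.array([np.sqrt((accel[idx == i] ** 2).mean()) for i in range(n_bins)])
    return edges, mean, rms
```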

    Fourier's Law for a Harmonic Crystal with Self-consistent Stochastic Reservoirs

    We consider a d-dimensional harmonic crystal in contact with a stochastic Langevin-type heat bath at each site. The temperatures of the "exterior" left and right heat baths are at specified values $T_L$ and $T_R$, respectively, while the temperatures of the "interior" baths are chosen self-consistently so that there is no average flux of energy between them and the system in the steady state. We prove that this requirement uniquely fixes the temperatures and that the self-consistent system has a unique steady state. For the infinite system this state is one of local thermal equilibrium. The corresponding heat current satisfies Fourier's law with a finite positive thermal conductivity, which can also be computed using the Green-Kubo formula. For the harmonic chain ($d=1$) the conductivity agrees with the expression obtained by Bolsterli, Rich and Visscher in 1970, who first studied this model. In the other limit, $d \gg 1$, the stationary infinite-volume heat conductivity behaves as $1/(l_d d)$, where $l_d$ is the coupling to the intermediate reservoirs. We also analyze the effect of having a non-uniform distribution of the heat bath couplings. These results are proven rigorously by controlling the behavior of the correlations in the thermodynamic limit. Comment: 33 pages
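    For reference (schematic textbook forms, with prefactors and normalisation depending on conventions rather than on this paper), the two relations invoked above read
    \[
    J = -\kappa\,\nabla T,
    \qquad
    \kappa \;\propto\; \frac{1}{k_B T^2}\int_0^\infty \langle J(t)\,J(0)\rangle\,dt,
    \]
    where $J$ is the heat current, $T$ the temperature, and the angle brackets denote an equilibrium average; the first is Fourier's law and the second the Green-Kubo formula for the thermal conductivity.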

    Divesting power

    We study alternative market power mitigation measures in a model where a dominant producer faces a competitive fringe with the same cost structure. We characterise the asset divestment by the dominant firm which achieves the greatest reduction in prices. This divestment entails the sale of marginal assets whose cost range encompasses the post-divestment price. A divestment of this type can be several times more effective in reducing prices than divestments of baseload (or low-cost) assets. We also establish that financial contracts (modeled as Virtual Power Plant schemes) are at best equivalent to baseload divestments in terms of consumer welfare.
    Keywords: Divestments; Virtual power plants; contracts; market power; electricity; antitrust remedies

    Imprints of gravitational lensing in the Planck CMB data at the location of WISExSCOS galaxies

    We detect weak gravitational lensing of the cosmic microwave background (CMB) at the location of the WISExSCOS (WxS) galaxies using the publicly available Planck lensing convergence map. By stacking the lensing convergence map at the positions of 12.4 million galaxies in the redshift range $0.1 \le z \le 0.345$, we find the average mass of the galaxies to be $M_{200{\rm crit}} = 6.25 \pm 0.6 \times 10^{12}\ M_{\odot}$. The null hypothesis of no lensing is rejected at a significance of $17\sigma$. We split the galaxy sample into three redshift slices, each containing $\sim$4.1 million objects, and obtain lensing masses in each slice of $4.18 \pm 0.8$, $6.93 \pm 0.9$, and $18.84 \pm 1.2 \times 10^{12}\ M_{\odot}$. Our results suggest a redshift evolution of the galaxy sample masses, but this apparent increase might be due to the preferential selection of intrinsically luminous sources at high redshifts. The recovered mass of the stacked sample is reduced by 28% when we remove the galaxies in the vicinity of galaxy clusters with mass $M_{200{\rm crit}} = 2 \times 10^{14}\ M_{\odot}$. We forecast that upcoming CMB surveys can achieve 5% galaxy mass constraints over sets of 12.4 million galaxies with $M_{200{\rm crit}} = 1 \times 10^{12}\ M_{\odot}$ at $z=1$. Comment: 7 pages, 2 figures, 2 tables; updates: correlations between z-bins included; accepted for publication in PR
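    As a rough illustration of the stacking step (a minimal sketch with hypothetical input names and aperture choice, not the authors' pipeline), the core operation of averaging the convergence map around each galaxy position can be written with healpy:

```python
import numpy as np
import healpy as hp

def stacked_convergence(kappa_map, ra_deg, dec_deg, aperture_arcmin=15.0):
    """Mean lensing convergence within a fixed aperture around each galaxy,
    averaged over the sample. `kappa_map` is a HEALPix convergence map and
    (ra_deg, dec_deg) are hypothetical galaxy coordinate arrays in degrees."""
    nside = hp.npix2nside(len(kappa_map))
    radius = np.radians(aperture_arcmin / 60.0)
    vecs = hp.ang2vec(ra_deg, dec_deg, lonlat=True)
    per_galaxy = []
    for vec in vecs:
        pix = hp.query_disc(nside, vec, radius)  # pixels inside the aperture
        per_galaxy.append(kappa_map[pix].mean())
    return np.mean(per_galaxy)
```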

    Dynamic Sampling from a Discrete Probability Distribution with a Known Distribution of Rates

    In this paper, we consider a number of efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known on the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes N is large. We consider three basic data structures: the Acceptance-Rejection method, the Complete Binary Tree and the Alias Method. These can be used as building blocks in a multi-level data structure, where at each of the levels one of the basic data structures can be used. Depending on assumptions on the distribution of the rates of outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rates follow a non-decreasing distribution, a log-uniform distribution or an inverse polynomial distribution, and show that for any distribution an expected time of sampling and update of $O\left(\log\log(r_{\max}/r_{\min})\right)$ is possible, where $r_{\max}$ is the maximum rate and $r_{\min}$ the minimum rate. We also present an experimental verification, highlighting the limits given by the constraints of a real-life setting.
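    To make the Acceptance-Rejection building block concrete, here is a minimal sketch of that standard technique under the stated assumption that the maximum rate is known (an illustration only, not the paper's exact multi-level structure): an outcome is proposed uniformly at random and accepted with probability proportional to its rate, so a single rate update costs O(1) and sampling takes expected O(1) time whenever the rates are not too far below the maximum.

```python
import random

class RejectionSampler:
    """Sample index i with probability rates[i] / sum(rates), assuming every
    rate lies in (0, r_max]. Hypothetical, simplified illustration."""

    def __init__(self, rates, r_max):
        self.rates = list(rates)  # current rate of each outcome
        self.r_max = r_max        # known upper bound on any rate

    def update(self, i, new_rate):
        # O(1): no auxiliary structure has to be rebuilt.
        self.rates[i] = new_rate

    def sample(self):
        # Propose uniformly, accept with probability rates[i] / r_max.
        # Expected number of proposals is r_max / mean(rates).
        n = len(self.rates)
        while True:
            i = random.randrange(n)
            if random.random() * self.r_max < self.rates[i]:
                return i
```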

    Compensating inaccurate annotations to train 3D facial landmark localisation models

    In this paper we investigate the impact of inconsistency in manual annotations when they are used to train automatic models for 3D facial landmark localisation. We start by showing that it is possible to objectively measure the consistency of annotations in a database, provided that it contains replicates (i.e. repeated scans from the same person). Applying such a measure to the widely used FRGC database, we find that the manual annotations currently available are suboptimal and can strongly impair the accuracy of automatic models learnt therefrom. To address this issue, we present a simple algorithm to automatically correct a set of annotations and show that it can help to significantly improve the accuracy of the models in terms of landmark localisation errors. This improvement is observed even when errors are measured with respect to the original (not corrected) annotations. However, we also show that if errors are computed against an alternative set of manual annotations with higher consistency, the accuracy of the models built using the corrections from the presented algorithm tends to converge to that achieved by building the models on the alternative, more consistent set.
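    As a concrete illustration of measuring consistency from replicates (a minimal sketch under the assumption that the repeated scans have already been rigidly aligned to a common frame; the paper's exact measure may differ), the spread of the manually clicked landmarks across repeated scans of the same person can be summarised per landmark:

```python
import numpy as np

def annotation_spread(annotations):
    """Per-landmark RMS distance to the mean annotation across replicates.
    `annotations` is a hypothetical array of shape (n_replicates, n_landmarks, 3)
    holding the manual 3D landmarks of repeated scans of one person; larger
    values indicate less consistent annotations."""
    mean_landmarks = annotations.mean(axis=0)                  # (n_landmarks, 3)
    diffs = annotations - mean_landmarks                       # (n_reps, n_landmarks, 3)
    return np.sqrt((diffs ** 2).sum(axis=-1).mean(axis=0))     # (n_landmarks,)
```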