
    Interpretation of radio continuum and molecular line observations of Sgr B2: free-free and synchrotron emission, and implications for cosmic rays

    Recent ammonia (1,1) inversion line data on the Galactic star-forming region Sgr B2 show that the column density is consistent with a radial Gaussian density profile with a standard deviation of 2.75 pc. Deriving a formula for the virial mass of spherical Gaussian clouds, we obtain a virial mass of 1.9 million solar masses for Sgr B2. For this matter distribution, a reasonable magnetic field and an impinging flux of cosmic rays of solar-neighbourhood intensity, we predict the expected synchrotron emission from the Sgr B2 giant molecular cloud due to secondary electrons and positrons resulting from cosmic-ray interactions, including the effects of losses due to pion-production collisions during diffusive propagation into the cloud complex. We assemble radio continuum data at frequencies between 330 MHz and 230 GHz. From the spectral energy distribution the emission appears to be thermal at all frequencies. Before using these data to constrain the predicted synchrotron flux, we first model the spectrum as free-free emission from the known ultracompact HII regions plus emission from an envelope or wind with a radial density gradient. This severely constrains the possible synchrotron emission by secondary electrons to quite low flux levels. The absence of a significant contribution by secondary electrons is almost certainly due to multi-GeV cosmic rays being unable to penetrate far into giant molecular clouds. This would also explain why 100 MeV--GeV gamma-rays (from neutral pion decay or bremsstrahlung by secondary electrons) were not observed from Sgr B2 by EGRET, while TeV gamma-rays, produced by higher-energy cosmic rays which more readily penetrate giant molecular clouds, were observed. Comment: 11 pages, 10 figures. New section on diffusion of primary and secondary cosmic-ray electrons into and within the Sgr B2 Giant Molecular Cloud added. Main corrections to proofs made in this version.
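
    As a back-of-the-envelope check of the virial-mass figure, the sketch below evaluates the generic virial relation M_vir = k sigma_v^2 R / G. The structure constant k is profile-dependent (k = 5 reproduces the textbook uniform-sphere case; the paper derives the analogous coefficient for a Gaussian profile), and the velocity dispersion used here is a hypothetical value chosen only for illustration, since the abstract quotes only the 2.75 pc radial scale and the 1.9 million solar mass result.

        # Minimal virial-mass sketch. The structure constant k and the
        # velocity dispersion sigma_v are assumptions; the paper derives
        # the exact coefficient for a radial Gaussian density profile.
        import numpy as np

        G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
        PC = 3.0857e16         # parsec [m]
        MSUN = 1.989e30        # solar mass [kg]

        def virial_mass(sigma_v_kms, r_pc, k=5.0):
            """Virial mass M = k * sigma_v^2 * R / G, in solar masses."""
            sigma_v = sigma_v_kms * 1e3      # -> m/s
            r = r_pc * PC                    # -> m
            return k * sigma_v**2 * r / G / MSUN

        # Hypothetical 1-D velocity dispersion of 15 km/s, for illustration.
        print(f"M_vir ~ {virial_mass(sigma_v_kms=15.0, r_pc=2.75):.2e} Msun")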

    How flat can you get? A model comparison perspective on the curvature of the Universe

    The question of determining the spatial geometry of the Universe is of greater relevance than ever, as precision cosmology promises to verify inflationary predictions about the curvature of the Universe. We revisit the question of what can be learnt about the spatial geometry of the Universe from the perspective of a three-way Bayesian model comparison. We show that, given current data, the probability that the Universe is spatially infinite lies between 67% and 98%, depending on the choice of priors. For the strongest prior choice, we find odds of order 50:1 (200:1) in favour of a flat Universe when compared with a closed (open) model. We also report a robust, prior-independent lower limit to the number of Hubble spheres in the Universe, N_U > 5 (at 99% confidence). We forecast the accuracy with which future CMB and BAO observations will be able to constrain curvature, finding that a cosmic-variance-limited CMB experiment together with an SKA-like BAO observation will constrain curvature with a precision of about sigma ~ 4.5x10^{-4}. We demonstrate that the risk of 'model confusion' (i.e., wrongly favouring a flat Universe in the presence of curvature) is much larger than might be assumed from parameter-error forecasts for future probes. We argue that a 5-sigma detection threshold guarantees a confusion- and ambiguity-free model selection. Together with inflationary arguments, this implies that the geometry of the Universe is not knowable if the value of the curvature parameter is below |Omega_curvature| ~ 10^{-4}, a bound one order of magnitude larger than the size of curvature perturbations, ~ 10^{-5}. [abridged] Comment: Added discussion on the impact of adopting a (wa, w0) dark energy model; main conclusions unchanged. Version accepted by MNRAS.
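
    The arithmetic behind the quoted probabilities is a simple exercise in three-way model comparison. The sketch below converts the 50:1 and 200:1 odds from the abstract into posterior model probabilities; the equal prior odds across the three models are an assumption made here for illustration.

        # Convert Bayes-factor odds of the flat model over the closed and
        # open models into posterior probabilities, assuming equal model
        # priors. Flat and open universes count as spatially infinite
        # (in the simplest topology).
        def model_probabilities(B_flat_closed, B_flat_open):
            w = {"flat": 1.0,
                 "closed": 1.0 / B_flat_closed,
                 "open": 1.0 / B_flat_open}
            z = sum(w.values())
            return {m: wi / z for m, wi in w.items()}

        p = model_probabilities(B_flat_closed=50.0, B_flat_open=200.0)
        print(p, "P(infinite) =", p["flat"] + p["open"])

    With these inputs the probability of a spatially infinite Universe comes out near 0.98, matching the upper end of the 67%-98% range quoted for the strongest prior choice.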

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization scheme, are derived and classified. The Bayesian methodology presented in this paper aims to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, steepest descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest-performing algorithms are presented, based on an operator scheme which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can be jointly investigated by a Gibbs-sampling process. Such a method can be applied to correct the redshift distortions of observed galaxies and to perform time-reversal reconstructions of the initial density field. Comment: 40 pages, 11 figures.
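
    The operator scheme described above is easy to illustrate: the signal covariance S is diagonal in Fourier space and the noise covariance N is diagonal in pixel space, so the Wiener-filter system (S^{-1} + N^{-1}) s = N^{-1} d can be applied with FFTs and solved by conjugate gradients without ever forming a matrix. The sketch below is a 1-D toy version under assumed choices (power-law signal spectrum, white noise level); it is not the ARGO implementation.

        # Generalized Wiener filter via conjugate gradients, with S applied
        # in Fourier space and N in pixel space. Setup values are assumptions.
        import numpy as np

        n = 256
        rng = np.random.default_rng(0)

        k = np.fft.fftfreq(n) * n
        P = 1.0 / (1.0 + np.abs(k))**2        # assumed signal power spectrum
        P[0] = P[1]                            # regularize the zero mode
        noise_var = 0.1 * np.ones(n)           # assumed noise variance

        def apply_A(s):
            """Apply (S^{-1} + N^{-1}) using one FFT pair per call."""
            s_k = np.fft.fft(s)
            return np.real(np.fft.ifft(s_k / P)) + s / noise_var

        # Synthetic data: a signal drawn (approximately) from S, plus noise.
        signal = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * np.sqrt(P)))
        data = signal + rng.standard_normal(n) * np.sqrt(noise_var)

        # Plain conjugate-gradient iteration for A s = N^{-1} d.
        b = data / noise_var
        s = np.zeros(n)
        r = b - apply_A(s)
        d = r.copy()
        for _ in range(200):
            Ad = apply_A(d)
            alpha = r @ r / (d @ Ad)
            s += alpha * d
            r_new = r - alpha * Ad
            if np.linalg.norm(r_new) < 1e-10:
                break
            d = r_new + (r_new @ r_new) / (r @ r) * d
            r = r_new

        print("residual:", np.linalg.norm(b - apply_A(s)))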

    Sequential Quasi-Monte Carlo

    We derive and study SQMC (Sequential Quasi-Monte Carlo), a class of algorithms obtained by introducing QMC point sets in particle filtering. SQMC is related to, and may be seen as an extension of, the array-RQMC algorithm of L'Ecuyer et al. (2006). The complexity of SQMC is O(N log N), where N is the number of simulations at each iteration, and its error rate is smaller than the Monte Carlo rate O_P(N^{-1/2}). The only requirement to implement SQMC is the ability to write the simulation of particle x_t^n given x_{t-1}^n as a deterministic function of x_{t-1}^n and a fixed number of uniform variates. We show that SQMC is amenable to the same extensions as standard SMC, such as forward smoothing, backward smoothing, unbiased likelihood evaluation, and so on. In particular, SQMC may replace SMC within a PMCMC (particle Markov chain Monte Carlo) algorithm. We establish several convergence results. We provide numerical evidence that SQMC may significantly outperform SMC in practical scenarios. Comment: 55 pages, 10 figures (final version).
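
    The rate claim is the essential point: randomized QMC point sets typically beat the Monte Carlo O_P(N^{-1/2}) error. SQMC embeds such point sets inside the particle filter itself; the sketch below only demonstrates the underlying effect on a toy integral, E[U^2] = 1/3, where the test function and point counts are assumptions made for illustration.

        # Plain Monte Carlo vs. scrambled-Sobol RQMC on a toy expectation.
        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(1)
        f = lambda u: u**2
        true = 1.0 / 3.0

        for m in (6, 10, 14):                 # N = 2^m points
            n = 2**m
            mc = abs(f(rng.random(n)).mean() - true)
            sob = qmc.Sobol(d=1, scramble=True, seed=1).random_base2(m)
            rqmc = abs(f(sob[:, 0]).mean() - true)
            print(f"N={n:6d}  MC error={mc:.2e}  RQMC error={rqmc:.2e}")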

    Insights into the content and spatial distribution of dust from the integrated spectral properties of galaxies

    [Abridged] We present a new approach to investigate the content and spatial distribution of dust in structurally unresolved star-forming galaxies from the observed dependence of integrated spectral properties on galaxy inclination. We develop an innovative combination of generic models of radiative transfer (RT) in dusty media with a prescription for the spectral evolution of galaxies, via the association of different geometric components of galaxies with stars in different age ranges. We show that a wide range of RT models all predict a quasi-universal relation between the slope of the attenuation curve at any wavelength and the V-band attenuation optical depth in the diffuse interstellar medium (ISM), at all galaxy inclinations. This relation predicts steeper (shallower) dust attenuation curves than both the Calzetti and MW curves at small (large) attenuation optical depths, which implies that geometry and orientation effects have a stronger influence on the shape of the attenuation curve than changes in the optical properties of dust grains. We use our combined RT and spectral evolution model to interpret the observed dependence of the Hα/Hβ ratio and the ugrizYJH attenuation curve on inclination in a sample of ~23 000 nearby star-forming galaxies. From a Bayesian MCMC fit, we measure the central face-on B-band optical depth of this sample to be τ_B⊥ ≈ 1.8 ± 0.2. We also quantify the enhanced optical depth towards newly formed stars in their birth clouds, finding this to be significantly larger in galaxies with bulges than in disc-dominated galaxies, while τ_B⊥ is roughly similar in both cases. Finally, we show that neglecting the effect of geometry and orientation on attenuation can severely bias the interpretation of galaxy spectral energy distributions, as the impact on broadband colours can reach up to 0.3-0.4 mag at optical wavelengths and 0.1 mag at near-infrared ones. Comment: 32 pages, 3 tables, 41 figures, MNRAS, in press.
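
    For context on the Hα/Hβ diagnostic used above, the sketch below shows the standard Balmer-decrement attenuation estimate. The intrinsic Case B ratio of 2.86 and the Calzetti-curve values k(Hβ) = 3.61 and k(Hα) = 2.53 are conventional but are assumptions here; the paper itself fits a radiative-transfer model rather than a fixed attenuation curve.

        # Colour excess from an observed Balmer decrement, assuming Case B
        # recombination and a Calzetti-like attenuation curve.
        import numpy as np

        def ebv_from_balmer(ratio_obs, ratio_int=2.86, k_hb=3.61, k_ha=2.53):
            """E(B-V) from an observed Halpha/Hbeta flux ratio."""
            return 2.5 / (k_hb - k_ha) * np.log10(ratio_obs / ratio_int)

        # Hypothetical observed decrement, for illustration only.
        ratio = 4.0
        ebv = ebv_from_balmer(ratio)
        print(f"E(B-V) = {ebv:.2f} mag, A(Halpha) = {2.53 * ebv:.2f} mag")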

    Dark matter interpretations of ATLAS searches for the electroweak production of supersymmetric particles in √s = 8 TeV proton-proton collisions

    A selection of searches by the ATLAS experiment at the LHC for the electroweak production of SUSY particles is used to study their impact on the constraints on dark matter candidates. The searches use 20 fb⁻¹ of proton-proton collision data at √s = 8 TeV. A likelihood-driven scan of a five-dimensional effective model focusing on the gaugino-higgsino and Higgs sector of the phenomenological minimal supersymmetric Standard Model is performed. This scan uses data from direct dark matter detection experiments, the relic dark matter density and precision flavour physics results. Further constraints from the ATLAS Higgs mass measurement and SUSY searches at LEP are also applied. A subset of models selected from this scan is used to assess the impact of the selected ATLAS searches in this five-dimensional parameter space. These ATLAS searches substantially impact those models for which the mass m(χ̃₁⁰) of the lightest neutralino is less than 65 GeV, excluding 86% of such models. The searches have limited impact on models with larger m(χ̃₁⁰), due either to heavy electroweakinos or to compressed mass spectra where the mass splittings between the produced particles and the lightest supersymmetric particle are small.
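
    The skeleton of a likelihood-driven scan of this kind is straightforward even if the physics inside it is not: random points in the five-dimensional parameter box are weighted by a product of likelihood terms for the external constraints. The sketch below is purely schematic; the parameter ranges, the single relic-density constraint, and the toy mapping from parameters to the observable are all placeholders, not pMSSM phenomenology.

        # Schematic likelihood-driven scan over a 5-D effective model.
        import numpy as np

        rng = np.random.default_rng(4)

        # Five effective parameters, e.g. M1, M2, mu, tan(beta), mA
        # (illustrative ranges in GeV, except the dimensionless tan(beta)).
        low = np.array([10.0, 100.0, 100.0, 2.0, 100.0])
        high = np.array([500.0, 2000.0, 2000.0, 60.0, 2000.0])
        theta = rng.uniform(low, high, size=(100000, 5))

        def predict_relic(theta):
            # Toy stand-in for a spectrum calculator + relic-density code.
            return 0.12 * (theta[:, 0] / 250.0) ** 0.5

        omega, sigma = 0.1188, 0.0010   # relic density constraint (Planck-like)
        loglike = -0.5 * ((predict_relic(theta) - omega) / sigma) ** 2

        keep = loglike > loglike.max() - 0.5 * 5.99   # illustrative chi^2-style cut
        print(f"{keep.sum()} of {len(theta)} points retained")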

    New test of modulated electron capture decay of hydrogen-like 142Pm ions: Precision measurement of purely exponential decay

    An experiment addressing the electron capture (EC) decay of hydrogen-like 142Pm60+ ions has been conducted at the experimental storage ring (ESR) at GSI. The decay appears to be purely exponential and no modulations were observed. Decay times for about 9000 individual EC decays have been measured by applying the single-ion decay spectroscopy method. Both visually and automatically analysed data can be described by a single exponential decay, with decay constants of 0.0126(7) s⁻¹ for the automatic analysis and 0.0141(7) s⁻¹ for the manual analysis. If a modulation superimposed on the exponential decay curve is assumed, the best fit gives a modulation amplitude of merely 0.019(15), which is compatible with zero and 4.9 standard deviations smaller than the amplitude of 0.23(4) found in the original observation.
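
    The two competing hypotheses are easy to state as fit models: a pure exponential rate versus an exponential with a superimposed periodic modulation, N(t) = N0 exp(-λt) (1 + a cos(ωt + φ)). The sketch below fits both to synthetic decay times; the simulated data and the ~7 s trial modulation period are illustrative assumptions, not the ESR measurements.

        # Fit a pure exponential and a modulated exponential to binned
        # synthetic decay times; for a purely exponential sample the fitted
        # modulation amplitude should be compatible with zero.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        lam_true = 0.0126                       # decay constant [1/s], from the abstract
        t = rng.exponential(1.0 / lam_true, size=9000)

        counts, edges = np.histogram(t, bins=150, range=(0.0, 150.0))
        tc = 0.5 * (edges[:-1] + edges[1:])

        def pure_exp(t, n0, lam):
            return n0 * np.exp(-lam * t)

        def modulated(t, n0, lam, a, omega, phi):
            return n0 * np.exp(-lam * t) * (1.0 + a * np.cos(omega * t + phi))

        p_exp, _ = curve_fit(pure_exp, tc, counts, p0=(counts[0], 0.01))
        p_mod, _ = curve_fit(modulated, tc, counts,
                             p0=(counts[0], 0.01, 0.1, 2 * np.pi / 7.0, 0.0),
                             maxfev=20000)

        print(f"lambda (pure exponential) = {p_exp[1]:.4f} 1/s")
        print(f"modulation amplitude a    = {p_mod[2]:.3f}")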

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Modular Urbanism: Combining modular and multi-scalar design strategies in creating sustainable landscape architecture design and construction processes

    In the continued effort to fulfill its professional mandate to build sustainably, the discipline of landscape architecture has begun the transition from emphasizing site-specific design and construction (a "one-off" approach) towards more expansive methods that better address material efficiency, life-cycle performance, and end-of-life building practices through redevelopment, adaptive re-use and retrofitting. Within this context, this thesis asks how modular design thinking could offer an alternative approach, especially when combined with the multi-scalar techniques and principles of tactical urbanism and placemaking in the (re)design and construction of sustainable urban spaces. Although modular construction is often thought of as generic, repetitive, and monotonous with regard to the built environment, this thesis suggests that modular design thinking, at the site scale, has direct application to landscape architecture, not only in (re)activating urban spaces but in creating a meaningful sense of place. Highlights include three interdisciplinary design case studies that engaged community and municipal stakeholders. The thesis also touches on the importance of interdisciplinary practice in developing novel forms of urbanism that are specific yet scalable and adaptable yet economical, and in doing so develops possible alternative design processes for generating normative practices in landscape architecture design and construction.