    Cosmology with two compactification scales

    We consider a (4+d)-dimensional spacetime broken up into a (4-n)-dimensional Minkowski spacetime (where n runs from 1 to 3) and a compact (n+d)-dimensional manifold. At the present time the n compactification radii are of the order of the Universe size, while the other d compactification radii are of the order of the Planck length. Comment: 16 pages, LaTeX2e, 7 figures
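
    As a sketch of the dimensional bookkeeping above (our notation, not necessarily the authors'), the corresponding metric ansatz can be written as

    ds^2 = \eta_{\mu\nu}\,dx^\mu dx^\nu - a^2(t)\,d\Sigma_n^2 - b^2(t)\,d\Sigma_d^2 ,

    where \eta_{\mu\nu} is the (4-n)-dimensional Minkowski metric, d\Sigma_n^2 and d\Sigma_d^2 are the metrics of the compact n- and d-dimensional factors, and the total dimension checks out as (4-n) + n + d = 4 + d. Today a(t) is of the order of the Universe size while b(t) is of the order of the Planck length.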

    The damping width of giant dipole resonances of cold and hot nuclei: a macroscopic model

    A phenomenological macroscopic model of the Giant Dipole Resonance (GDR) damping width of cold and hot nuclei with spherical and near-spherical ground-state shapes is developed. The model is based on a generalized Fermi liquid model which takes into account the nuclear surface dynamics. The temperature dependence of the GDR damping width is accounted for in terms of surface and volume components. Parameter-free expressions for the damping width and the effective deformation are obtained. The model is validated with GDR measurements of the following nuclides, ^{39,40}K, ^{42}Ca, ^{45}Sc, ^{59,63}Cu, ^{109-120}Sn, ^{147}Eu, ^{194}Hg, and ^{208}Pb, and is compared with the predictions of other models. Comment: 10 pages, 5 figures
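
    Schematically, the decomposition used above can be written (our shorthand; the paper's parameter-free expressions carry the actual functional forms) as

    \Gamma_{\rm GDR}(T) \simeq \Gamma_{\rm vol}(T) + \Gamma_{\rm surf}(T) ,

    where the volume component reflects bulk Fermi-liquid damping and the surface component reflects the nuclear surface dynamics that the generalized model takes into account.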

    Possible black universes in a brane world

    A black universe is a nonsingular black hole where, beyond the horizon, there is an expanding, asymptotically isotropic universe. Such spherically symmetric configurations have recently been found as solutions to the Einstein equations with phantom scalar fields (fields with negative kinetic energy) as sources of gravity. They have a Schwarzschild-like causal structure but a de Sitter infinity instead of a singularity. We attempt to obtain similar configurations without phantoms, in the framework of an RS2-type brane world scenario, considering the modified Einstein equations that describe gravity on the brane. By building an explicit example, it is shown that black-universe solutions can be obtained there in the presence of a scalar field with positive kinetic energy and a nonzero potential. Comment: 8 pages, 5 figures, gc style
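
    For orientation, the modified Einstein equations on an RS2-type brane are usually quoted in the Shiromizu-Maeda-Sasaki form (a standard result, reproduced here in our conventions, which may differ from the paper's):

    G_{\mu\nu} = -\Lambda_4\, g_{\mu\nu} + 8\pi G_N\, T_{\mu\nu} + \kappa_5^4\, \Pi_{\mu\nu} - E_{\mu\nu} ,

    where \Pi_{\mu\nu} is quadratic in the brane stress-energy T_{\mu\nu} and E_{\mu\nu} is the projection of the bulk Weyl tensor onto the brane. Roughly speaking, it is these extra terms that relax the conditions which, in pure general relativity, force a phantom source for black-universe geometries.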

    Constraining primordial non-Gaussianity with cosmological weak lensing: shear and flexion

    We examine the cosmological constraining power of future large-scale weak lensing surveys modeled on \emph{Euclid}, with particular reference to primordial non-Gaussianity. Our analysis considers several different estimators of the projected matter power spectrum, based on both shear and flexion, for which we review the covariances and Fisher matrices. The bounds provided by cosmic shear alone for the local bispectrum shape, marginalized over \sigma_8, are at the level of \Delta f_\mathrm{NL} \sim 100. We consider three additional bispectrum shapes, for which the cosmic shear constraints range from \Delta f_\mathrm{NL} \sim 340 (equilateral shape) up to \Delta f_\mathrm{NL} \sim 500 (orthogonal shape). The competitiveness of cosmic flexion constraints against cosmic shear ones depends on the galaxy intrinsic flexion noise, which is still virtually unconstrained. Adopting the very high value that has occasionally been used in the literature results in the flexion contribution being basically negligible with respect to the shear one, and for realistic configurations the former does not significantly improve the constraining power of the latter. Since the flexion noise decreases with decreasing scale, extending the analysis up to \ell_\mathrm{max} = 20,000 means that cosmic flexion, while still subdominant, improves the shear constraints by \sim 10% when added. However, on such small scales the highly non-linear clustering of matter and the impact of baryonic physics make any error estimation uncertain. Considering lower, and possibly more realistic, values of the intrinsic flexion shape noise results in the flexion constraining power being a factor of \sim 2 better than that of shear, and in the bounds on \sigma_8 and f_\mathrm{NL} being improved by a factor of \sim 3 upon their combination. (abridged) Comment: 30 pages, 4 figures, 4 tables. To appear in JCAP
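
    The quoted \Delta f_\mathrm{NL} bounds follow the standard Fisher-matrix recipe for projected power spectra; schematically (binning and tomography details are in the paper):

    F_{\alpha\beta} = \sum_\ell \frac{\partial C_\ell}{\partial p_\alpha}\, \mathrm{Cov}^{-1}_\ell\, \frac{\partial C_\ell}{\partial p_\beta} , \qquad \Delta p_\alpha \ge \sqrt{(F^{-1})_{\alpha\alpha}} ,

    where the shot-noise terms entering \mathrm{Cov}_\ell scale as \sigma_\epsilon^2/\bar{n} for shear and \sigma_{\mathcal F}^2/\bar{n} for flexion. This is why the virtually unconstrained intrinsic flexion noise \sigma_{\mathcal F} decides whether flexion is competitive with shear.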

    Dark matter searches in the gamma-ray extragalactic background via cross-correlations with galaxy catalogs

    We compare the measured angular cross-correlation between the Fermi Large Area Telescope gamma-ray sky and catalogs of extragalactic objects with the expected signal induced by weakly interacting massive particle (WIMP) dark matter (DM). We include a detailed description of the contribution of astrophysical gamma-ray emitters such as blazars, misaligned active galactic nuclei (AGNs), and star-forming galaxies, and perform a global fit to the measured cross-correlation. Five catalogs are considered: Sloan Digital Sky Survey (SDSS)-DR6 quasars, Two Micron All Sky Survey galaxies, NRAO VLA Sky Survey radio galaxies, SDSS-DR8 luminous red galaxies, and the SDSS-DR8 main galaxy sample. To model the cross-correlation signal, we use the halo occupation distribution formalism to estimate the number of galaxies of a given catalog in DM halos and their spatial correlation properties. We discuss uncertainties in the predicted cross-correlation signal arising from the DM clustering and from the WIMP microscopic properties, which set the DM gamma-ray emission. The use of different catalogs probing objects at different redshifts significantly, though not completely, reduces the degeneracy among the different gamma-ray components. We find that the presence of a significant WIMP DM signal is allowed by the data but not significantly preferred by the fit, although this is mainly due to a degeneracy with the misaligned-AGN component. With a modest substructure boost, the sensitivity of this method excludes thermal annihilation cross sections at the 95% level for WIMP masses up to a few tens of GeV. Constraining the low-redshift properties of astrophysical populations with future data will further improve the sensitivity to DM.
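
    Schematically, each predicted cross-correlation is computed from a Limber-type projection (standard form; the window functions and the HOD-based power spectrum are specified in the paper):

    C_\ell^{\gamma g} = \int \frac{d\chi}{\chi^2}\, W_\gamma(\chi)\, W_g(\chi)\, P_{\gamma g}\!\left(k = \ell/\chi,\, z(\chi)\right) ,

    where \chi is the comoving distance, W_\gamma encodes the gamma-ray emission (WIMP annihilation or an astrophysical population) per unit \chi, W_g follows the redshift distribution of the given catalog, and P_{\gamma g} is the three-dimensional cross power spectrum built with the halo occupation distribution formalism.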

    Tomography of the Fermi-LAT γ-Ray Diffuse Extragalactic Signal via Cross Correlations with Galaxy Catalogs

    Building on our previous cross-correlation analysis (Xia et al. 2011) between the isotropic γ-ray background (IGRB) and different tracers of the large-scale structure of the universe, we update our results using 60 months of data from the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope (Fermi). We perform a cross-correlation analysis both in configuration and in spherical-harmonic space between the IGRB and objects that may trace the astrophysical sources of the IGRB: QSOs in the Sloan Digital Sky Survey (SDSS) DR6, the SDSS DR8 Main Galaxy Sample, luminous red galaxies (LRGs) in the SDSS catalog, infrared-selected galaxies in the Two Micron All Sky Survey (2MASS), and radio galaxies in the NRAO VLA Sky Survey (NVSS). The benefit of correlating the Fermi-LAT signal with catalogs of objects at various redshifts is to provide tomographic information on the IGRB, which is crucial in separating the various contributions and clarifying its origin. The main result is that, unlike in our previous analysis, we now observe a significant (>3.5σ) cross-correlation signal on angular scales smaller than 1° in the NVSS, 2MASS, and QSO cases and, at lower statistical significance (∼3.0σ), with SDSS galaxies. The signal is stronger in two energy bands, E > 0.5 GeV and E > 1 GeV, but it is also seen at E > 10 GeV. No cross-correlation signal is detected between Fermi data and the LRGs. These results are robust against the choice of the statistical estimator, the estimate of errors, the map cleaning procedure, and instrumental effects. Finally, we test the hypothesis that the IGRB observed by Fermi-LAT originates from the summed contributions of three types of unresolved extragalactic sources: BL Lacertae objects (BL Lacs), flat spectrum radio quasars (FSRQs), and star-forming galaxies (SFGs). We find that a model in which the IGRB is mainly produced by SFGs (% with 2σ errors), with BL Lacs and FSRQs giving a minor contribution, provides a good fit to the data. We also consider a possible contribution from misaligned active galactic nuclei, and we find that, depending on the details of the model and its uncertainty, they can also provide a substantial contribution, partly degenerate with the SFG one. © 2015. The American Astronomical Society
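
    The configuration-space and harmonic-space estimators used here carry the same information and are related by the standard Legendre expansion,

    \mathrm{CCF}(\theta) = \sum_\ell \frac{2\ell+1}{4\pi}\, C_\ell^{\gamma g}\, P_\ell(\cos\theta) ,

    which is why the agreement of the two estimators serves as one of the robustness checks mentioned above.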

    Modeling the drug release from hydrogel-based matrices

    In this work the behavior of hydrogel-based matrices, the most widespread systems for oral controlled release of pharmaceuticals, has been described mathematically. The calculations of the model have been validated against a rich set of experimental data obtained working with tablets made of hydroxypropyl methylcellulose (a hydrogel) and theophylline (a model drug). The model takes into account water uptake, hydrogel swelling, drug release, and polymer erosion. It improves on a previous code by describing diffusion in concentrated systems and by obtaining the erosion front (a moving boundary) from the polymer mass balance; in this way, the number of fitting parameters was also reduced by one. The proposed model was found to describe all the observed phenomena, and can therefore be considered a tool with predictive capabilities, useful in the design and testing of new hydrogel-based dosage systems.
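
    The structure of such a model can be sketched as coupled diffusion equations with a moving boundary (our schematic notation; the constitutive details and boundary conditions are in the paper):

    \frac{\partial C_i}{\partial t} = \nabla \cdot \left( D_i(C_w)\, \nabla C_i \right) , \qquad i = \mathrm{water},\ \mathrm{drug} ,

    with concentration-dependent diffusivities D_i to handle diffusion in concentrated systems, while the position of the erosion front R_{er}(t) follows from the overall polymer mass balance,

    \frac{d}{dt} \int_{V(t)} C_p\, dV = -\oint_{\partial V(t)} j_p\, dA ,

    with j_p the polymer dissolution flux at the eroding surface; tying R_{er}(t) to this balance is what removes one fitting parameter.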

    Euclid preparation. XXIX. Water ice in spacecraft part I: The physics of ice formation and contamination

    Molecular contamination is a well-known problem in space flight. Water is the most common contaminant and alters numerous properties of a cryogenic optical system. Too much ice means that Euclid's calibration requirements and science goals cannot be met. Euclid must then be thermally decontaminated, a long and risky process. We need to understand how iced optics affect the data and when a decontamination is required. This is essential to build adequate calibration and survey plans, yet a comprehensive analysis in the context of an astrophysical space survey has not been done before. In this paper we look at other spacecraft with well-documented outgassing records, and we review the formation of thin ice films. A mix of amorphous and crystalline ices is expected for Euclid. Their surface topography depends on the competing energetic needs of the substrate-water and water-water interfaces, and is hard to predict with current theories. We illustrate this with scanning-tunnelling and atomic-force microscope images. Industrial tools exist to estimate contamination, and we must understand their uncertainties. We find considerable knowledge errors on the diffusion and sublimation coefficients, limiting the accuracy of these tools. We developed a water transport model to compute contamination rates in Euclid, and find general agreement with industry estimates. Tests of the Euclid flight hardware in space simulators did not pick up contamination signals; our in-flight calibration observations will be much more sensitive. We must understand the link between the amount of ice on the optics and its effect on Euclid's data. Little research is available about this link, possibly because other spacecraft can decontaminate easily, quenching the need for a deeper understanding. In our second paper we quantify the various effects of iced optics on spectrophotometric data.
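
    To illustrate one ingredient that any such water transport model needs, here is a minimal sketch of the sublimation flux from an ice film, combining the Hertz-Knudsen relation with a published vapor-pressure parameterization for water ice (Murphy & Koop 2005). This is a toy example of ours, not the consortium's tool, and the sublimation coefficient alpha is exactly the kind of poorly known quantity discussed above:

        import math

        K_B = 1.380649e-23                  # Boltzmann constant, J/K
        M_H2O = 18.015e-3 / 6.02214076e23   # mass of one H2O molecule, kg

        def p_sat_ice(T):
            """Saturation vapor pressure over water ice in Pa.
            Murphy & Koop (2005) parameterization, valid for T > 110 K."""
            return math.exp(9.550426 - 5723.265 / T
                            + 3.53068 * math.log(T) - 0.00728332 * T)

        def sublimation_flux(T, alpha=1.0):
            """Hertz-Knudsen flux of molecules leaving the ice, in m^-2 s^-1.
            alpha is the (poorly known) sublimation coefficient."""
            return alpha * p_sat_ice(T) / math.sqrt(2.0 * math.pi * M_H2O * K_B * T)

        # Example: thinning rate of an ice film on a 140 K surface, assuming
        # free sublimation into vacuum with no re-deposition.
        T = 140.0
        rho_ice = 930.0                     # approximate ice density, kg/m^3
        rate = sublimation_flux(T) * M_H2O / rho_ice   # m/s
        print(f"p_sat = {p_sat_ice(T):.2e} Pa, thinning = {rate:.2e} m/s")

    Even an order-of-magnitude error in alpha or p_sat translates directly into the contamination-rate uncertainty, which is why the knowledge errors on the sublimation coefficients noted above matter.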

    Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process

    Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process is especially popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models, such as the square-root model or models with a non-linear drift, are sometimes applied as well. Data that can be described by such models have to be stationary; thus, these simple models can only be applied over short time windows. However, experimental data show varying time constants, state-dependent noise, a graded firing threshold, and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity to which the neuron is exposed can be reasonably estimated as a thresholded version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
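
    A minimal simulation sketch of the model class described above: a square-root (state-dependent noise) diffusion driven by time-inhomogeneous Poisson input jumps, with a spike intensity depending exponentially on the membrane potential. All names and parameter values below are illustrative assumptions, not the estimates obtained from the turtle data:

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative parameters (not fitted values from the paper)
        dt, T_total = 1e-4, 2.0          # time step and total duration, s
        tau = 0.02                       # membrane time constant, s (fixed here)
        V_rest, V_min = -65.0, -80.0     # resting level and diffusion lower bound, mV
        sigma = 1.5                      # square-root diffusion scale
        jump_amp = 1.0                   # depolarization per synaptic input, mV
        lam0, V_thr, dV_s = 5.0, -58.0, 2.0   # exponential spike-intensity parameters

        n = int(T_total / dt)
        V = np.empty(n)
        V[0] = V_rest
        spike_times = []
        for k in range(n - 1):
            t = k * dt
            # Time-inhomogeneous synaptic input rate (Hz), e.g. a slow modulation
            nu = 200.0 * (1.0 + 0.5 * np.sin(2.0 * np.pi * t))
            dW = rng.normal(0.0, np.sqrt(dt))
            dN = rng.poisson(nu * dt)
            drift = -(V[k] - V_rest) / tau
            diff = sigma * np.sqrt(max(V[k] - V_min, 0.0))   # state-dependent noise
            V[k + 1] = V[k] + drift * dt + diff * dW + jump_amp * dN
            # Soft threshold: spike with intensity lam(V) = lam0 * exp((V - V_thr)/dV_s)
            lam = lam0 * np.exp((V[k + 1] - V_thr) / dV_s)
            if rng.random() < lam * dt:
                spike_times.append(t + dt)
                V[k + 1] = V_rest        # simple post-spike reset

        print(f"{len(spike_times)} spikes in {T_total} s; mean V = {V.mean():.1f} mV")

    The square-root term makes the noise amplitude grow with depolarization, and the exponential intensity replaces a hard threshold with a graded one, matching the two qualitative findings reported above.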