    Hints for families of GRBs improving the Hubble diagram

    As soon as their extragalactic origin was established, the hope of making Gamma-Ray Bursts (GRBs) standardizable candles to probe the very high-z universe opened the search for scaling relations between redshift-independent observable quantities and distance-dependent ones. Although some remarkable successes have been achieved, the empirical correlations found so far are still affected by a significant intrinsic scatter, which degrades the precision of the inferred GRB Hubble diagram. We investigate here whether this scatter may come from fitting together objects belonging to intrinsically different classes. To this end, we rely on a cladistic analysis to partition GRBs into homogeneous families according to their rest-frame properties. Although the poor statistics prevent us from drawing a definitive answer, we find that both the intrinsic scatter and the coefficients of the $E_{peak}$-$E_{iso}$ and $E_{peak}$-$L$ correlations change significantly depending on which subsample is fitted. It turns out that the fit to the full sample leads to a scaling relation which approximately follows the diagonal of the region delimited by the fits to each homogeneous class. We therefore argue that a preliminary identification of the class a GRB belongs to is necessary to select the right scaling relation, so as not to bias the distance determination and hence the Hubble diagram.
    Comment: 10 pages, 6 figures, 4 tables, accepted for publication in MNRA
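Scaling relations such as $E_{peak}$-$E_{iso}$ are power laws, conventionally fit as straight lines in log-log space, with the intrinsic scatter estimated from the residuals. A minimal sketch on synthetic data (slope, intercept, scatter and sample size are all illustrative assumptions, not values from the paper):

```python
import numpy as np

# Hypothetical burst sample: a power-law correlation is a straight line
# in log-log space, log Epeak = a * log Eiso + b, plus intrinsic scatter.
rng = np.random.default_rng(0)
log_Eiso = rng.uniform(51.0, 54.0, 40)            # log10 E_iso [erg] (assumed range)
a_true, b_true, sigma_int = 0.5, -24.0, 0.15      # assumed slope/intercept/scatter
log_Epeak = a_true * log_Eiso + b_true + rng.normal(0.0, sigma_int, 40)

# Ordinary least squares in log space
A = np.vstack([log_Eiso, np.ones_like(log_Eiso)]).T
(a_fit, b_fit), *_ = np.linalg.lstsq(A, log_Epeak, rcond=None)

# Intrinsic-scatter estimate: rms of residuals about the best-fit line
resid = log_Epeak - (a_fit * log_Eiso + b_fit)
scatter = resid.std(ddof=2)
print(f"slope={a_fit:.2f} intercept={b_fit:.1f} scatter={scatter:.2f}")
```

Fitting different subsamples with this procedure and comparing the recovered slopes and scatters is the kind of comparison the abstract describes.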

    Infinite average lifetime of an unstable bright state in the green fluorescent protein

    The time evolution of the fluorescence intensity emitted by well-defined ensembles of Green Fluorescent Proteins has been studied using a standard confocal microscope. In contrast with previous results obtained in single-molecule experiments, the photo-bleaching of the ensemble is well described by a model based on Lévy statistics. Moreover, this simple theoretical model allows us to obtain information about the energy scales involved in the aging process.
    Comment: 4 pages, 4 figures
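The "infinite average lifetime" of the title is characteristic of Lévy-type, heavy-tailed waiting-time distributions: for a tail exponent alpha <= 1 the mean diverges even though the median stays finite. A minimal sketch (the exponent and time scale are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

# Pareto-type lifetimes p(t) ~ t^(-(1+alpha)) for t >= t0: when alpha <= 1
# the mean diverges, so the ensemble-average lifetime never converges,
# while the median remains finite (t0 * 2**(1/alpha)).
rng = np.random.default_rng(1)
alpha, t0 = 0.7, 1.0                      # assumed tail exponent and time scale

def sample_lifetimes(n):
    u = rng.uniform(size=n)
    return t0 * u ** (-1.0 / alpha)       # inverse-CDF sampling of a Pareto law

ts = sample_lifetimes(100_000)
print("median:", np.median(ts))           # finite, near t0 * 2**(1/alpha)
print("sample mean:", ts.mean())          # keeps growing with sample size
```

Re-running with larger `n` shows the sample mean drifting upward instead of settling, which is the signature of a diverging average.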

    The OMPS Limb Profiler Instrument: Two-Dimensional Retrieval Algorithm

    The upcoming Ozone Mapping and Profiler Suite (OMPS), which will be launched on the NPOESS Preparatory Project (NPP) platform in early 2011, will continue monitoring the global distribution of ozone and aerosol in the Earth's middle atmosphere. OMPS is composed of three instruments, namely the Total Column Mapper (heritage: TOMS, OMI), the Nadir Profiler (heritage: SBUV) and the Limb Profiler (heritage: SOLSE/LORE, OSIRIS, SCIAMACHY, SAGE III). The ultimate goal of the mission is to better understand and quantify the rate of stratospheric ozone recovery. The focus of this paper is the Limb Profiler (LP) instrument. The LP will measure the Earth's limb radiance (due to the scattering of solar photons by air molecules, aerosols and the Earth's surface) in the ultraviolet (UV), visible and near-infrared, from 285 to 1000 nm. The LP simultaneously images the whole vertical extent of the Earth's limb through three vertical slits, each covering a vertical tangent-height range of 100 km and spaced 250 km apart in the cross-track direction. Measurements are made every 19 seconds along the orbit track, corresponding to a distance of about 150 km. Several data analysis tools are presently being constructed and tested to retrieve the ozone and aerosol vertical distributions from limb radiance measurements. The primary NASA algorithm is based on earlier algorithms developed for the SOLSE/LORE and SAGE III limb scatter missions. All existing retrieval algorithms rely on a spherical-symmetry assumption for the atmospheric structure. While this assumption is reasonable in most of the stratosphere, it is no longer valid in regions of prime scientific interest, such as the polar vortex and the UTLS. The paper will describe a two-dimensional retrieval algorithm whereby the ozone distribution is retrieved simultaneously in the vertical and horizontal directions for a whole orbit. The retrieval code relies on (1) a forward 2D radiative transfer code (to model limb radiances within a non-uniform atmosphere and evaluate 2D analytical partial derivatives) and (2) an optimal-estimation inversion routine. The algorithm exploits the typically sparse nature of the kernel matrices as well as fast matrix inversion techniques to allow fast inversion of limb data with efficient memory management (as was done for MIPAS data processing). While the method has so far only been developed in the single-scatter context, the paper will show how the CPU-intensive multiple-scatter modeling can be implemented using parallel CPU processing. Initial results will be presented in terms of retrieved ozone profiles and code performance.
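The core of an optimal-estimation inversion with a sparse kernel is a single regularized linear solve, x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a). A toy sketch with a linear forward model (sizes, kernel density and covariances are illustrative, not OMPS values):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy linearized optimal-estimation step. State x is e.g. ozone on a small
# 2D grid; K is the (sparse) kernel of partial derivatives; Se and Sa are
# measurement and prior covariances (diagonal here for simplicity).
rng = np.random.default_rng(2)
n, m = 6, 8
K = sp.random(m, n, density=0.4, random_state=2, format="csr")  # sparse kernel
x_true = rng.normal(size=n)
y = K @ x_true + 0.01 * rng.normal(size=m)      # synthetic limb measurements

x_a = np.zeros(n)                               # prior (a priori) state
Se_inv = sp.identity(m) / 0.01**2               # measurement precision
Sa_inv = sp.identity(n) / 1.0**2                # prior precision

# Normal equations, kept sparse until the final solve
lhs = (K.T @ Se_inv @ K + Sa_inv).tocsc()
rhs = K.T @ Se_inv @ (y - K @ x_a)
x_hat = x_a + spla.spsolve(lhs, rhs)
print(np.round(x_hat, 3))
```

Keeping `lhs` sparse and solving with a sparse factorization is what makes whole-orbit 2D retrievals tractable in memory, as the abstract notes for the MIPAS-style processing.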

    Power consumption evaluation of circuit-switched versus packet-switched optical backbone networks

    While telecommunication networks have historically been dominated by a circuit-switched paradigm, recent decades have seen a clear trend towards packet-switched networks. In this paper we evaluate how both paradigms perform in optical backbone networks from a power consumption point of view, and whether the general agreement that circuit switching is more power-efficient holds. We consider artificially generated topologies of various sizes and mesh degrees, with transport linerates not previously explored in this context. We cross-validate our findings on a number of realistic topologies. Our results show that, as a generalization, packet switching can become preferable when the traffic demands are lower than half the transport linerate. We find that an increase in the network node count does not consistently increase the energy savings of circuit switching over packet switching; these savings are instead heavily influenced by the mesh degree and (to a minor extent) by the average link length
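The half-linerate break-even can be reproduced with a deliberately simple model: a circuit reserves whole linerate channels (flat power per channel regardless of fill), while packet switching consumes power in proportion to carried traffic but with a worse per-bit efficiency. All numbers below, including the assumed 2x per-bit penalty, are illustrative and not taken from the paper:

```python
import math

# Illustrative power model. With a 2x per-bit penalty for packet switching,
# the break-even falls exactly at half the transport linerate.
P_CHANNEL = 100.0    # W per circuit-switched channel (assumption)
LINERATE = 40.0      # Gb/s transport linerate (assumption)
PACKET_W_PER_GBPS = 2.0 * P_CHANNEL / LINERATE   # assumed 2x worse per bit

def circuit_power(demand_gbps):
    # Whole channels are reserved, so power steps up per linerate increment.
    return P_CHANNEL * math.ceil(demand_gbps / LINERATE)

def packet_power(demand_gbps):
    # Power scales with actual carried traffic.
    return PACKET_W_PER_GBPS * demand_gbps

for d in (10.0, 20.0, 30.0):
    winner = "packet" if packet_power(d) < circuit_power(d) else "circuit"
    print(f"{d:5.1f} Gb/s -> {winner}")
```

Under these assumptions demands below LINERATE/2 favour packet switching and demands above it favour circuit switching, matching the qualitative finding in the abstract.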

    A Multistage Method for SCMA Codebook Design Based on MDS Codes

    Sparse Code Multiple Access (SCMA) has recently been proposed for the future generation of wireless communication standards. SCMA system design involves specifying several parameters. In order to simplify the procedure, most works consider a multistage design approach. Two main stages are usually emphasized in these methods: sparse signature design (equivalently, resource allocation) and codebook design. In this paper, we present a novel SCMA codebook design method. The proposed method considers SCMA codebooks structured with an underlying vector space obtained from classical block codes. In particular, when using maximum distance separable (MDS) codes, our proposed design provides maximum signal-space diversity with a relatively small alphabet. The use of small alphabets also helps to maintain desired properties in the codebooks, such as a low peak-to-average power ratio and low-complexity detection.
    Comment: Submitted to IEEE Wireless Communication Letter

    Båth's Law Derived from the Gutenberg-Richter Law and from Aftershock Properties

    The empirical Båth's law states that the average difference in magnitude between a mainshock and its largest aftershock is 1.2, regardless of the mainshock magnitude. Following Vere-Jones [1969] and Console et al. [2003], we show that the origin of Båth's law is to be found in the selection procedure used to define mainshocks and aftershocks, rather than in any difference between the mechanisms controlling the magnitude of the mainshock and those of the aftershocks. We use the ETAS model of seismicity, which provides a more realistic model of aftershocks, based on (i) a universal Gutenberg-Richter (GR) law for all earthquakes, and (ii) the increase of the number of aftershocks with the mainshock magnitude. Using numerical simulations of the ETAS model, we show that this model is in good agreement with Båth's law in a certain range of the model parameters.
    Comment: major revisions, in press in Geophys. Res. Let
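The two ingredients the abstract lists are enough to see a roughly constant magnitude gap emerge in a toy Monte Carlo, without the full ETAS machinery: all magnitudes follow the same GR law, and the aftershock count scales as 10^(alpha*(M - m0)) with the mainshock magnitude M. The parameter values below (alpha = b = 1, m0, and the productivity k) are illustrative choices, not the paper's fitted ETAS parameters:

```python
import numpy as np

# Minimal Bath's-law sketch (not the full ETAS model). GR magnitudes above
# m0 are exponential with rate b*ln(10); the expected number of aftershocks
# of a magnitude-M mainshock is k * 10**(alpha*(M - m0)).
rng = np.random.default_rng(3)
b, alpha, m0, k = 1.0, 1.0, 2.0, 0.035    # illustrative parameters

def delta_m(M, trials=2000):
    """Average magnitude gap between a mainshock M and its largest aftershock."""
    diffs = []
    for _ in range(trials):
        n = rng.poisson(k * 10 ** (alpha * (M - m0)))
        if n == 0:
            continue
        aftershocks = m0 + rng.exponential(1.0 / (b * np.log(10)), size=n)
        diffs.append(M - aftershocks.max())
    return float(np.mean(diffs))

gaps = {M: delta_m(M) for M in (5.0, 6.0, 7.0)}
print(gaps)  # roughly constant near 1.2, independent of M
```

With alpha = b the expected gap is independent of the mainshock magnitude, and k can be chosen so that it sits near the empirical value of 1.2; the magnitude-independence is the content of Båth's law.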

    Analysis of indentation size effect in copper and its alloys

    Numerous models relating the load or hardness to the indent dimensions have been proposed to describe the indentation size effect (ISE). Unfortunately, it is still difficult to associate the different parameters involved in such relationships with physical or mechanical properties of the material. The problem remains unsolved because the ISE can arise from various causes, such as work hardening, roughness, piling-up, sinking-in, indenter tip geometry, surface energy, varying composition and crystal anisotropy. To interpret the change in hardness with indent size, an original approach is proposed on the basis of composite hardness modelling, together with a simple model that allows the determination of the hardness-depth profile. Applied to copper and copper alloys, it is shown that it is possible to determine the maximum hardness value reached at the outer surface of the material and the distance over which both the ISE and the work hardening take place
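A common functional form for the ISE is the Nix-Gao relation H(h) = H0*sqrt(1 + h*/h), which becomes a straight line when H^2 is plotted against 1/h; the sketch below fits it to synthetic data. This is a stand-in for illustration only: the paper's composite-hardness model is different, and the H0 and h* values here are assumed:

```python
import numpy as np

# Nix-Gao form of the ISE: H(h) = H0 * sqrt(1 + h_star/h), where H0 is the
# macroscopic hardness and h_star a characteristic depth. Values assumed.
rng = np.random.default_rng(4)
H0_true, hstar_true = 1.0, 0.4            # GPa, micrometres (illustrative)
h = np.linspace(0.1, 2.0, 20)             # indent depths (micrometres)
H = H0_true * np.sqrt(1.0 + hstar_true / h) * (1 + 0.01 * rng.normal(size=20))

# Linearize: H^2 = H0^2 + H0^2 * h_star * (1/h) -> straight line in 1/h
A = np.vstack([np.ones_like(h), 1.0 / h]).T
(c0, c1), *_ = np.linalg.lstsq(A, H**2, rcond=None)
H0_fit, hstar_fit = np.sqrt(c0), c1 / c0
print(f"H0={H0_fit:.2f} GPa, h*={hstar_fit:.2f} um")
```

The fitted H0 plays the role of the bulk hardness far from the surface, while h* sets the depth scale over which the size effect fades, analogous to the characteristic distances extracted in the abstract.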