
    Is MS1054-03 an exceptional cluster? A new investigation of ROSAT/HRI X-ray data

    We reanalyzed the ROSAT/HRI observation of MS1054-03, optimizing the HRI channel selection and including a new exposure of 68 ksec. From a wavelet analysis of the HRI image we identify the main cluster component and find evidence for substructure in the west, which might be either a group of galaxies falling onto the cluster or a foreground source. Our 1-D and 2-D analyses of the data show that the cluster is well fitted by a classical beta-model centered only 20 arcsec away from the central cD galaxy. The core radius and beta values derived from the spherical model (beta = 0.96_-0.22^+0.48) and the elliptical model (beta = 0.73 +/- 0.18) are consistent. We derived the gas mass and total mass of the cluster from the beta-model fit and the previously published ASCA temperature (12.3^{+3.1}_{-2.2} keV). The gas mass fraction at the virial radius is fgas = (14[-3,+2.5]+/-3)% for Omega_0 = 1, where the errors in brackets come from the uncertainty on the temperature and the remaining errors from the HRI imaging data. The gas mass fraction computed for the best-fit ASCA temperature is significantly lower than that found for nearby hot clusters, fgas = (20.1 +/- 1.6)%. This local value can be matched if the actual virial temperature of MS1054-03 were close to the lower ASCA limit (~10 keV), with an even lower value of 8 keV giving the best agreement. Such a bias between the virial and measured temperature could be due to the presence of shock waves in the intracluster medium stemming from recent mergers. Another possibility, which reconciles a high temperature with the local gas mass fraction, is the existence of a non-zero cosmological constant. Comment: 12 pages, 5 figures, accepted for publication in Ap
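
The gas mass quoted above follows from integrating the beta-model density over spherical shells. A minimal sketch of that step (normalization, core radius, and units are arbitrary placeholders, not the paper's calibrated values):

```python
import numpy as np

def beta_model_density(r, n0, r_c, beta):
    # Spherical beta-model: n(r) = n0 * (1 + (r/r_c)**2)**(-3*beta/2)
    return n0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def gas_mass(r_max, n0, r_c, beta, n_steps=100_000):
    # Shell integration M(<r_max) = integral of 4*pi*r^2 n(r) dr,
    # evaluated with a simple midpoint rule (arbitrary units).
    dr = r_max / n_steps
    r = (np.arange(n_steps) + 0.5) * dr
    return float(np.sum(4.0 * np.pi * r**2 * beta_model_density(r, n0, r_c, beta)) * dr)
```

For beta close to 1 the density falls as r^-3 at large radii, so the enclosed mass grows only logarithmically there; the choice of outer radius (here the virial radius) therefore matters, which is why the abstract quotes fgas specifically at that radius.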

    An SZ/X-ray galaxy cluster model and the X-ray follow-up of the Planck clusters

    Sunyaev-Zel'dovich (SZ) cluster surveys will become an important cosmological tool over the next few years, and it will be essential to relate these new surveys to cluster surveys in other wavebands. We present an empirical model of cluster SZ and X-ray observables constructed to address this question and to motivate, dimension and guide X-ray follow-up of SZ surveys. As an example application of the model, we discuss potential XMM-Newton follow-up of Planck clusters. Comment: 4 pages, 5 figures. To appear in the proceedings of the XXXXIIIrd Rencontres de Moriond

    The Absolute Abundance of Iron in the Solar Corona

    We present a measurement of the abundance of Fe relative to H in the solar corona using a technique which differs from previous spectroscopic and solar wind measurements. Our method combines EUV line data from the CDS spectrometer on SOHO with thermal bremsstrahlung radio data from the VLA. The coronal Fe abundance is derived by equating the thermal bremsstrahlung radio emission calculated from the EUV Fe line data to that observed with the VLA, treating the Fe/H abundance as the sole unknown. We apply this technique to a compact cool active region and find Fe/H = 1.56 x 10^{-4}, or about 4 times its value in the solar photosphere. Uncertainties in the CDS radiometric calibration, the VLA intensity measurements, the atomic parameters, and the assumptions made in the spectral analysis yield net uncertainties of order 20%. This result implies that low first ionization potential elements such as Fe are enhanced in the solar corona relative to photospheric values. Comment: Astrophysical Journal Letters, in press
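
The one-unknown solve described above rests on a scaling argument: the EUV Fe line intensities scale with A_Fe times the emission measure, while the radio bremsstrahlung scales with the emission measure alone, so the radio brightness predicted from the lines scales as 1/A_Fe. A toy sketch of that argument (the function and all inputs are hypothetical illustrations, not the paper's actual radiative-transfer calculation):

```python
def infer_fe_abundance(t_b_observed, t_b_predicted_at_ref, a_fe_ref):
    # The emission measure inferred from Fe lines scales as 1/A_Fe, so the
    # radio brightness temperature predicted from it does too:
    #   T_b_pred(A_Fe) = T_b_pred(a_fe_ref) * (a_fe_ref / A_Fe)
    # Setting T_b_pred(A_Fe) equal to the observed value and solving:
    return a_fe_ref * t_b_predicted_at_ref / t_b_observed
```

If the brightness predicted at a reference abundance comes out a factor of four too high, the true abundance is four times the reference, which is the sense of the photosphere-to-corona enhancement reported.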

    Extracting galactic binary signals from the first round of Mock LISA Data Challenges

    We report on the performance of an end-to-end Bayesian analysis pipeline for detecting and characterizing galactic binary signals in simulated LISA data. Our principal analysis tool is the Blocked-Annealed Metropolis-Hastings (BAM) algorithm, which has been optimized to search for tens of thousands of overlapping signals across the LISA band. The BAM algorithm employs Bayesian model selection to determine the number of resolvable sources, and provides posterior distribution functions for all the model parameters. The BAM algorithm performed almost flawlessly on all the Round 1 Mock LISA Data Challenge data sets, including those with many highly overlapping sources. The only misses were later traced to a coding error that affected high frequency sources. In addition to the BAM algorithm we also successfully tested a Genetic Algorithm (GA), but only on data sets with isolated signals, as the GA has yet to be optimized to handle large numbers of overlapping signals. Comment: 13 pages, 4 figures, submitted to Proceedings of GWDAW-11 (Berlin, Dec. '06)
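
The core update inside any Metropolis-Hastings sampler, BAM included, can be illustrated on a one-parameter toy posterior. This is only the generic random-walk step, not the blocked-annealed, multi-source pipeline described above:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    # Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    # accept with probability min(1, exp(log_post(x') - log_post(x))).
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: a standard-normal log-posterior; after burn-in the chain
# should wander around 0 even though it starts far away at x0 = 5.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, n_steps=20000)
```

BAM extends this basic step with blocked proposals over correlated source parameters and simulated annealing of the likelihood, which is what makes tens of thousands of overlapping signals tractable.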

    The hot gas content of fossil galaxy clusters

    We investigate the properties of the hot gas in four fossil galaxy systems detected at high significance in the Planck Sunyaev-Zeldovich (SZ) survey. XMM-Newton observations reveal overall temperatures of kT ~ 5-6 keV and yield hydrostatic masses M500,HE > 3.5 x 10^14 Msun, confirming their nature as bona fide massive clusters. We measure the thermodynamic properties of the hot gas in X-rays (out to beyond R500 in three cases) and derive their individual pressure profiles out to R ~ 2.5 R500 with the SZ data. We combine the X-ray and SZ data to measure hydrostatic mass profiles and to examine the hot gas content and its radial distribution. The average Navarro-Frenk-White (NFW) concentration parameter, c500 = 3.2 +/- 0.4, is the same as that of relaxed `normal' clusters. The gas mass fraction profiles exhibit striking variation in the inner regions, but converge to approximately the cosmic baryon fraction (corrected for depletion) at R500. Beyond R500 the gas mass fraction profiles again diverge, which we interpret as being due to a difference in gas clumping and/or a breakdown of hydrostatic equilibrium in the external regions. Overall our observations point to considerable radial variation in the hot gas content and in the gas clumping and/or hydrostatic equilibrium properties in these fossil clusters, at odds with the interpretation of their being old, evolved and undisturbed. At least some fossil objects appear to be dynamically young. Comment: 4 pages, 2 figures. Accepted for publication in A&A
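
The NFW profile quoted above has a closed-form enclosed mass, so a concentration such as c500 = 3.2 fixes the whole shape of the mass profile in units of R500. A small sketch (radius and mass units are arbitrary; this is the standard NFW formula, not the paper's fitting code):

```python
import math

def nfw_mass_ratio(x, c):
    # Enclosed NFW mass normalized to the mass inside R_delta, with
    # x = r / R_delta and concentration c = R_delta / r_s:
    #   M(<x) / M(<1) = mu(c * x) / mu(c),
    #   mu(y) = ln(1 + y) - y / (1 + y)
    mu = lambda y: math.log(1.0 + y) - y / (1.0 + y)
    return mu(c * x) / mu(c)
```

Dividing a measured gas mass profile by this dark-matter-dominated total mass profile is what produces the gas mass fraction profiles whose inner variation and outer divergence the abstract describes.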

    Chandra Observations of low velocity dispersion groups

    Deviations of galaxy groups from cluster scaling relations can be understood in terms of an excess of entropy in groups. The main effect of this excess is to reduce the density, and thus the luminosity, of the intragroup gas. Given this, groups should also show a steep relationship between X-ray luminosity and velocity dispersion. However, previous work suggests that this is not the case, with many studies measuring slopes flatter than the cluster relation. Examining the group L_X:\sigma relation shows that much of the flattening is caused by a small subset of groups which show very high X-ray luminosities for their velocity dispersions (or vice versa). Detailed Chandra study of two such groups shows that earlier ROSAT results were subject to significant (~30-40%) point source contamination, but confirms that a significant hot IGM is present in these groups, although these are two of the coolest systems in which intergalactic X-ray emission has been detected. Their X-ray properties are shown to be broadly consistent with those of other galaxy groups, although the gas entropy in NGC 1587 is unusually low, and its X-ray luminosity correspondingly high for its temperature, compared to most groups. This leads us to suggest that the velocity dispersion in these systems has been reduced in some way, and we consider how this might have come about. Comment: Accepted for publication in Ap
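
The slope of the L_X:\sigma relation discussed above is conventionally measured as a straight-line fit in log-log space, so the fitted slope is the exponent of the L_X ~ sigma^slope scaling. A minimal sketch on synthetic data (the values are illustrative, not the groups in the paper):

```python
import numpy as np

def lx_sigma_slope(sigma, l_x):
    # Fit log10(L_X) = slope * log10(sigma) + const by least squares;
    # the slope is the exponent of the L_X ~ sigma^slope power law.
    slope, _ = np.polyfit(np.log10(sigma), np.log10(l_x), 1)
    return slope

# Synthetic groups obeying L_X ~ sigma^4 exactly (velocity dispersions in
# km/s, luminosities in arbitrary units) recover a slope of 4.
sigma = np.array([150.0, 250.0, 400.0, 600.0])
l_x = 1e40 * (sigma / 300.0) ** 4
```

A few outliers with high L_X at low sigma pull such a fit toward flatter slopes, which is the effect the abstract attributes to the small subset of anomalous groups.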

    Improving the performance of a stochastic model for generating hourly hyetographs: application to the French Mediterranean seaboard

    For several years, a stochastic model for generating hourly hyetographs has been developed at the Aix-en-Provence group of Cemagref, to be coupled with rainfall-runoff modelling, thereby providing a multitude of flood scenarios that are analysed statistically and used for flood discharge predetermination. Extending the application area of the hourly rainfall model beyond the region where it was designed revealed heterogeneity in the results. This finding led to several modifications of the model: the search for a theoretical probability law, insensitive to sampling problems, for one model variable (storm intensity); an original treatment of the observed dependence between two model variables (storm duration and intensity); and the modelling of storm persistence within a single rainy episode. These modifications to the initial model markedly improved its performance over the roughly fifty raingauge stations of the French Mediterranean seaboard. The result is a much more robust tool, validated over an extended area, capable of producing many hyetograph shapes covering the whole range of frequencies, thereby avoiding reliance on a single design storm. We also introduce a new approach to the tail behaviour of rainfall frequency distributions, which sometimes appears to exceed a strictly exponential trend.
Moreover, studying several events per year, each providing several realisations of the model variables, increases the sample sizes analysed, seemingly making the method reliable more quickly than a classical statistical approach based, for example, on fitting annual maxima.

A stochastic model for generating hourly hyetographs has recently been developed at the Cemagref in Aix-en-Provence, to be coupled with rainfall-runoff modelling. By simulating very long periods (1000 years, for example), we obtain a large number of hourly hyetographs and flood scenarios that are studied statistically and used in flood predetermination problems. The rainfall model is based on the premise that rainfall can be treated as a random, intermittent process whose evolution is described by stochastic laws. It also rests on the hypothesis of independence between the variables describing hyetographs and on the hypothesis that the phenomenon studied is stationary. Generating a rainfall time series involves two steps: a descriptive study of the phenomenon (nine independent variables are chosen to describe it, each defined by a theoretical probability law fitted to the observations), and the creation of a rainfall time series from descriptive variables generated randomly according to their probability laws. Initially developed on data from the Réal Collobrier watershed, the model has been applied to fifty raingauges located on the French Mediterranean seaboard. Extending the model's application area revealed heterogeneity in the results, and modifications were therefore made to improve its performance; three of them produced notable improvements. A sensitivity study showed that the parameters of the shape variables and of some other variables had only a slight influence on the depth of the generated rainfalls.
The law of mean rainfall intensities, however, clearly differentiates the stations. A theoretical probability distribution for the storm-intensity variable, less sensitive to sampling problems, was therefore sought. An exponential distribution is fitted to values smaller than four times the mean of the variable, and a slope breakage is introduced to generate all values beyond this limit. The breakpoint at four times the mean, and the modelling of the breakage, were based on a study of so-called "regional" distributions of the storm-intensity variable, constructed by pooling the homogenised values of the variable over all 50 stations studied. A second modification introduced a new model for the observed dependence between two variables (storm duration and intensity). This dependence was treated directly through the cumulative frequencies of the two variables: an additional parameter was defined to model the dependence between their probabilities, characterising the cumulative frequency curve of the sum of the two probabilities. This point, long neglected, proved very important in improving the model. Finally, the modelling of storm persistence within a single rainfall episode was studied in order to generate high 24-hour maximum rainfalls. Persistence modelling is justified by the fact that "ordinary storms" cluster around the "main storm" (the "main storm" being the largest storm of an episode, the "ordinary storms" the others). Extending the study of this phenomenon shows a positive dependence between the occurrence probability of the "main storm" and that of the storms which precede or follow it.
Two combined effects occur: within a rainy episode, the strongest "ordinary storms" cluster preferentially around the "main storm"; and, over all episodes, the strongest storms close to the "main storm" are preferentially associated with the strongest "main storms", and vice versa. This modification improves performance at the high-altitude raingauges, which are characterised by high daily rainfall accumulations. Together, the modifications to the initial model yield very substantial improvements in the calibration of the fifty raingauges studied on the French Mediterranean seaboard. The model's ability to generate the highly variable rainfall observed in the Mediterranean climate supports the idea of applying it over a much larger area. Generating hyetographs makes maximum use of the temporal information in the rainfall record. We thus obtain a reliable tool, validated over a large area, for simulating hyetographs and hourly flood scenarios at all frequencies, to be used in place of a single design storm and design flood. The approach also allows a new extrapolation of the cumulative probability curve, which sometimes appears to exceed an exponential behaviour. Moreover, studying many events per year, each with many occurrences of the different model variables, increases the analysed sample size and seems to make the method reliable more quickly than a statistical approach based simply, for example, on fitting annual maximum values.
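
The slope-breakage model for storm intensities described above (exponential below four times the mean, a second slope beyond) can be sketched as a two-piece exponential sampler. Parameter names and the tail scale here are illustrative assumptions, not the paper's calibrated values:

```python
import random

def sample_intensity(mean, tail_scale, rng, break_factor=4.0):
    # Two-piece exponential: below the breakpoint (break_factor * mean)
    # the intensity is exponential with scale `mean`; beyond it the tail
    # follows a second exponential with scale `tail_scale`, giving a
    # heavier-than-exponential tail when tail_scale > mean (the "slope
    # breakage" of the fitted distribution).
    x = rng.expovariate(1.0 / mean)
    b = break_factor * mean
    if x <= b:
        return x
    # Re-draw the excess over the breakpoint with the tail scale; the
    # probability of exceeding the breakpoint is unchanged, so the CDF
    # stays continuous at the break.
    return b + rng.expovariate(1.0 / tail_scale)
```

Because only the slope beyond the break changes, the bulk of the generated intensities is untouched while the rare extreme storms become larger, which is how the model can exceed a strictly exponential frequency trend at long return periods.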

    Time-Varying Graphs and Dynamic Networks

    The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems -- delay-tolerant networks, opportunistic-mobility networks, social networks -- obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms, and results found in the literature into a unified framework, which we call TVG (for time-varying graphs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are a-temporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs. Comment: A short version appeared in ADHOC-NOW'11. This version is to be published in International Journal of Parallel, Emergent and Distributed Systems
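
A minimal version of a time-varying graph, with edges carrying a discrete presence function and reachability defined by time-respecting journeys, might look like the sketch below. The class and method names are illustrative, not the paper's formalism, and presence is simplified to a finite set of time steps:

```python
from collections import defaultdict

class TVG:
    # Minimal time-varying graph: each undirected edge carries the set of
    # discrete time steps at which it is present.
    def __init__(self):
        self.adj = defaultdict(list)  # node -> [(neighbor, presence_times)]

    def add_edge(self, u, v, times):
        self.adj[u].append((v, set(times)))
        self.adj[v].append((u, set(times)))

    def journey_exists(self, src, dst, t_max):
        # A journey is a path whose edges are crossed at non-decreasing
        # times; search over (node, earliest-arrival-time) states.
        frontier = {(src, 0)}
        seen = set(frontier)
        while frontier:
            nxt = set()
            for node, t in frontier:
                if node == dst:
                    return True
                for nb, times in self.adj[node]:
                    for t2 in times:
                        if t <= t2 <= t_max and (nb, t2 + 1) not in seen:
                            seen.add((nb, t2 + 1))
                            nxt.add((nb, t2 + 1))
            frontier = nxt
        return False
```

Note that temporal reachability is not symmetric: if edge (a,b) exists only at time 3 and edge (b,c) only at time 1, a journey from a to c is impossible even though one from c to a exists, a distinction that underlies several classes in the TVG hierarchy.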