
    A New Local Temperature Distribution Function for X-ray Clusters: Cosmological Applications

    (abridged) We present a new determination of the local temperature function of X-ray clusters. We use a new sample comprising fifty clusters for which temperature information is now available, making it the largest complete sample of its kind. It is therefore expected to significantly improve the estimation of the temperature distribution function of moderately hot clusters. We find that the resulting temperature function is higher than previous estimations, but agrees well with the temperature distribution function inferred from the BCS and RASS luminosity function. We have used this sample to constrain the amplitude of the matter fluctuations on the cluster scale of $8\,\Omega_0^{-1/3}\,h^{-1}$ Mpc, assuming a mass-temperature relation based on recent numerical simulations. We find $\sigma_8 = 0.6 \pm 0.02$ for an $\Omega_0 = 1$ model. Our sample provides an ideal reference at $z \sim 0$ for the application of the cosmological test based on the evolution of the X-ray cluster abundance (Oukbir & Blanchard 1992, 1997). Using Henry's sample, we find that the abundance of clusters at $z = 0.33$ is significantly smaller, by a factor larger than 2, which shows that the EMSS sample provides strong evidence for evolution of the cluster abundance. A likelihood analysis leads to a rather high value of the mean density parameter of the universe: $\Omega = 0.92 \pm 0.22$ (open case) and $\Omega = 0.86 \pm 0.25$ (flat case), which is consistent with a previous, independent estimation based on the full EMSS sample by Sadat et al. (1998). Some systematic uncertainties which could alter this result are briefly discussed.
    Comment: 31 pages, 12 figures, matches the version published in Astronomy and Astrophysics
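    To make the quoted constraint concrete, the sketch below converts a Press-Schechter mass function into a temperature function via an assumed power-law $\sigma(M)$ and a hypothetical mass-temperature normalization. The constants ($M_8$, $T_0$, the slope $\alpha$) are illustrative stand-ins, not the calibration used in the paper; they only show how a higher $\sigma_8$ raises the predicted abundance of hot clusters.

```python
# Hedged sketch: Press-Schechter temperature function with a power-law sigma(M)
# and an assumed mass-temperature normalization (illustrative, not the paper's calibration).
import numpy as np

delta_c = 1.686              # spherical-collapse threshold
rho_bar = 2.78e11            # mean matter density for Omega_0 = 1, in h^2 Msun / Mpc^3
M8      = 6.0e14             # assumed mass inside the 8 Omega_0^{-1/3} h^-1 Mpc sphere (h^-1 Msun)
alpha   = 0.5                # assumed effective slope of sigma(M) ~ M^(-alpha) on cluster scales
M0, T0  = 1.0e15, 6.5        # hypothetical M-T normalization: T = T0 * (M/M0)^(2/3), T in keV

def sigma_M(M, sigma8):
    """Power-law rms mass fluctuation on mass scale M, normalized by sigma8 at M8."""
    return sigma8 * (M / M8) ** (-alpha)

def dn_dM(M, sigma8):
    """Press-Schechter comoving mass function dn/dM."""
    nu = delta_c / sigma_M(M, sigma8)
    return np.sqrt(2.0 / np.pi) * (rho_bar / M**2) * nu * alpha * np.exp(-0.5 * nu**2)

def dn_dT(T_keV, sigma8):
    """Temperature function via dn/dT = dn/dM * dM/dT with M = M0 * (T/T0)^(3/2)."""
    M = M0 * (T_keV / T0) ** 1.5
    return dn_dM(M, sigma8) * 1.5 * M / T_keV

# A higher sigma8 predicts many more hot clusters at a fixed temperature:
for s8 in (0.5, 0.6, 0.7):
    print(s8, dn_dT(6.0, s8))
```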

    The form factors of South African trees: is it possible to classify them easily using field measurements and photographs?

    A research report submitted to the Faculty of Sciences, University of the Witwatersrand, Johannesburg in partial fulfilment of the requirements for the degree of Master of Science in Environmental Sciences, 2017.
    Modern tree biomass allometry makes use of the "form factor", which is the ratio of the true volume to the apparent volume. However, there is no database of form factors of South African trees, hence this study was undertaken to assess the possibility of assigning form factors to trees in a quick and easy way, either by visual assessment of an image of the tree or by simple field measurements. Stem diameter, taper and node length data for 112 trees were collected using both in situ measurements and in-lab measurements from photos taken of the same trees in the field. The data were used to model tree volume using the fractal properties of branching architecture. The estimated tree volume was then used along with basal diameter and tree height to calculate the form factor. Results showed that measurements taken from images underestimated stem diameter and node length by 4% and 5% respectively, but the fractal allometry relationships developed using either the manual in-field or the image analysis approach were not statistically different. This indicates that dry-season photography is sufficiently accurate for establishing the relationships needed to construct a fractal model of tree volume. The image analysis approach requires a clear, unobstructed view of the sample tree, which made it less effective when trees were in close proximity and when branches overlapped. The photographic approach also took twice as long as the manual in-field approach. Form factor varied between species, but the variation was not statistically significant (p=0.579). The mean form factor per species ranged from 0.43 to 0.69. Form factors were negatively correlated with wood density (-0.177), basal diameter (-0.547) and height (-0.649). Due to the unavailability of an independent tree biomass dataset, it was not possible to validate the allometric equations based on estimated form factors and wood density. The inclusion of form factor was shown to improve the accuracy of biomass estimation by 11%. Principal component analysis showed that form factors can be assigned using tree height and the form quotient.
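    As a small illustration of the quantity being assigned above, the snippet below computes a form factor from a modelled stem volume, basal diameter and height, taking the apparent volume as the basal-area-times-height cylinder. The numbers are hypothetical, chosen only to land inside the reported 0.43-0.69 range.

```python
# Hedged sketch of the form-factor definition used above: the ratio of the
# (fractal-model) estimated true volume to the apparent cylindrical volume
# implied by basal diameter and tree height. All numbers are illustrative.
import math

def apparent_volume(basal_diameter_m: float, height_m: float) -> float:
    """Cylinder volume: basal cross-sectional area times height."""
    basal_area = math.pi * (basal_diameter_m / 2.0) ** 2
    return basal_area * height_m

def form_factor(true_volume_m3: float, basal_diameter_m: float, height_m: float) -> float:
    """Form factor = true volume / apparent volume."""
    return true_volume_m3 / apparent_volume(basal_diameter_m, height_m)

# Hypothetical tree: 0.30 m basal diameter, 8 m height, 0.28 m^3 modelled volume.
ff = form_factor(0.28, 0.30, 8.0)
print(round(ff, 2))  # ~0.5, within the 0.43-0.69 range of species means reported above
```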

    Ant-Inspired Density Estimation via Random Walks

    Many ant species employ distributed population density estimation in applications ranging from quorum sensing [Pra05], to task allocation [Gor99], to appraisal of enemy colony strength [Ada90]. It has been shown that ants estimate density by tracking encounter rates -- the higher the population density, the more often the ants bump into each other [Pra05,GPT93]. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a small number of steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides new tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks and density estimation for robot swarms.
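    A minimal simulation sketch of the idea follows (illustrative parameters, a lazy walk on a torus, and co-location counted as an encounter; this is not the exact model analyzed in the paper): each anonymous agent divides its encounter count by the number of steps taken and obtains an estimate close to the true density.

```python
# Hedged sketch: anonymous agents do lazy random walks on an n x n torus and each
# estimates the global density as (encounters observed) / (steps taken).
import random
from collections import Counter

def simulate(n=50, num_agents=250, steps=2000, seed=0):
    rng = random.Random(seed)
    pos = [(rng.randrange(n), rng.randrange(n)) for _ in range(num_agents)]
    encounters = [0] * num_agents
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]   # lazy walk: stay or step
    for _ in range(steps):
        new_pos = []
        for (x, y) in pos:
            dx, dy = rng.choice(moves)
            new_pos.append(((x + dx) % n, (y + dy) % n))
        pos = new_pos
        occupancy = Counter(pos)
        for i, p in enumerate(pos):
            encounters[i] += occupancy[p] - 1            # other agents sharing my cell
    estimates = [e / steps for e in encounters]
    return sum(estimates) / num_agents                   # average per-agent estimate

true_density = 250 / (50 * 50)                           # agents per grid cell
print(true_density, simulate())                          # estimates concentrate near the truth
```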

    Metamodel-based importance sampling for structural reliability analysis

    Structural reliability methods aim at computing the probability of failure of systems with respect to some prescribed performance functions. In modern engineering such functions usually resort to running an expensive-to-evaluate computational model (e.g. a finite element model). In this respect, simulation methods, which may require $10^{3-6}$ runs, cannot be used directly. Surrogate models such as quadratic response surfaces, polynomial chaos expansions or kriging (which are built from a limited number of runs of the original model) are then introduced as a substitute for the original model to cope with the computational cost. In practice, though, it is almost impossible to quantify the error made by this substitution. In this paper we propose to use a kriging surrogate of the performance function as a means to build a quasi-optimal importance sampling density. The probability of failure is eventually obtained as the product of an augmented probability, computed by substituting the meta-model for the original performance function, and a correction term which ensures that there is no bias in the estimation even if the meta-model is not fully accurate. The approach is applied to analytical and finite element reliability problems and proves efficient up to 100 random variables.
    Comment: 20 pages, 7 figures, 2 tables. Preprint submitted to Probabilistic Engineering Mechanics
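    The following is a condensed sketch of the two-term estimator described above, with a scikit-learn Gaussian process standing in for the kriging surrogate, a toy linear limit-state function, and weighted resampling used as a simple stand-in for drawing from the quasi-optimal importance sampling density. It illustrates the decomposition (augmented probability times correction term); it is not the paper's implementation.

```python
# Hedged sketch: kriging surrogate -> probabilistic classifier pi(x) = Phi(-mu/sigma),
# failure probability = (augmented probability E[pi(X)]) x (correction term that
# re-evaluates the true g on a few surrogate-informed samples).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):                                  # toy performance function; failure when g <= 0
    return 5.0 - x[:, 0] - x[:, 1]

# 1) Fit the surrogate on a small design of experiments.
X_doe = 2.0 * rng.standard_normal((40, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_doe, g(X_doe))

# 2) Augmented probability E_phi[pi(X)] under the standard normal input model.
X = rng.standard_normal((50_000, 2))
mu, std = gp.predict(X, return_std=True)
pi = norm.cdf(-mu / np.maximum(std, 1e-12))
P_aug = pi.mean()

# 3) Correction term from a few true-model runs, resampled with weights pi
#    (sampling-importance-resampling approximates the quasi-optimal IS density).
idx = rng.choice(len(X), size=200, replace=True, p=pi / pi.sum())
alpha = np.mean((g(X[idx]) <= 0.0) / pi[idx])

print("Pf estimate:", P_aug * alpha, " crude MC:", np.mean(g(X) <= 0.0))
```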

    Measuring the growth of matter fluctuations with third-order galaxy correlations

    Measurements of the linear growth factor $D$ at different redshifts $z$ are key to distinguishing among cosmological models. One can estimate the derivative $dD(z)/d\ln(1+z)$ from redshift space measurements of the 3D anisotropic galaxy two-point correlation $\xi(z)$, but the degeneracy of its transverse (or projected) component with galaxy bias $b$, i.e. $\xi_{\perp}(z) \propto D^2(z) b^2(z)$, introduces large errors in the growth measurement. Here we present a comparison between two methods which break this degeneracy by combining second- and third-order statistics. One uses the shape of the reduced three-point correlation and the other a combination of third-order one- and two-point cumulants. These methods use the fact that, for Gaussian initial conditions and scales larger than $20\,h^{-1}$ Mpc, the reduced third-order matter correlations are independent of redshift (and therefore of the growth factor), while the third-order galaxy correlations depend on $b$. We use matter and halo catalogs from the MICE-GC simulation to test how well we can recover $b(z)$ and therefore $D(z)$ with these methods in 3D real space. We also present a new approach, which enables us to measure $D$ directly from the redshift evolution of second- and third-order galaxy correlations without the need of modelling matter correlations. For haloes with masses lower than $10^{14}\,h^{-1}M_\odot$, we find 10% deviations between the different estimates of $D$, which are comparable to current observational errors. At higher masses we find larger differences that can probably be attributed to the breakdown of the bias model and non-Poissonian shot noise.
    Comment: 24 pages, 20 figures, 2 tables, accepted for publication in MNRAS
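    For reference, the linear growth factor whose evolution is being measured can be computed for a flat LCDM model from the standard integral expression. The sketch below does this with an illustrative $\Omega_m$ (not a value fitted in the paper) and normalizes $D(z=0)=1$.

```python
# Hedged sketch: linear growth factor D(z) for flat LCDM via the standard formula
# D(a) proportional to H(a) * integral_0^a da' / (a' H(a'))^3, normalized to D(0) = 1.
import numpy as np
from scipy.integrate import quad

Omega_m = 0.25          # illustrative value, not fitted in the paper
Omega_L = 1.0 - Omega_m

def E(a):
    """H(a) / H0 for a flat LCDM model."""
    return np.sqrt(Omega_m * a**-3 + Omega_L)

def growth_unnormalized(a):
    integral, _ = quad(lambda ap: (ap * E(ap)) ** -3, 1e-8, a)
    return E(a) * integral

def D(z):
    a = 1.0 / (1.0 + z)
    return growth_unnormalized(a) / growth_unnormalized(1.0)

for z in (0.0, 0.5, 1.0):
    print(z, round(D(z), 3))    # D decreases with increasing redshift
```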

    Portfolio optimization when risk factors are conditionally varying and heavy tailed

    Assumptions about the dynamic and distributional behavior of risk factors are crucial for the construction of optimal portfolios and for risk assessment. Although asset returns are generally characterized by conditionally varying volatilities and fat tails, the normal distribution with constant variance continues to be the standard framework in portfolio management. Here we propose a practical approach to portfolio selection. It takes both the conditionally varying volatility and the fat-tailedness of risk factors explicitly into account, while retaining analytical tractability and ease of implementation. An application to a portfolio of nine German DAX stocks illustrates that the model is strongly favored by the data and that it is practically implementable.
    Classification: C13, C32, G11, G14, G18
    Risk assessment and the optimal composition of securities portfolios depend in particular on the assumptions made about the underlying dynamics and the distributional properties of the risk factors. It is widely accepted in empirical financial market analysis that the returns of financial time series exhibit time-varying volatility (heteroskedasticity) and that their conditional distribution deviates from the normal distribution. In particular, the tails of the distribution carry a higher probability density than under normality ('fat tails'), and the observed distribution is often asymmetric. Nevertheless, the assumption of a normal distribution with constant variance remains the basis of the mean-variance approach to portfolio optimization. In this study we propose a practical mean-scale approach to portfolio selection that accounts for both the conditional heteroskedasticity of the returns and their departures from normality. To this end we use GARCH-type dynamics for the risk factors and stable distributions in place of the normal distribution. The proposed factor model retains good analytical properties and is, moreover, easy to implement. An illustrative application of the model to nine stocks from the German stock index (DAX) demonstrates its better fit to the data and its applicability to portfolio optimization.
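    A minimal sketch of a portfolio selection step in this spirit, under loudly simplified assumptions: an EWMA filter stands in for the GARCH-type factor dynamics, a Student-t fit stands in for the stable distributions, the returns are simulated, and the optimized quantity is a crude one-step 1% return quantile. Nothing here reproduces the DAX application.

```python
# Hedged sketch only: EWMA volatility (stand-in for GARCH-type dynamics) plus a
# Student-t tail fit (stand-in for stable distributions), used to pick weights
# that maximize an approximate one-step 1% return quantile subject to a target mean.
import numpy as np
from scipy.stats import t as student_t
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, n_assets = 1000, 4
returns = 0.01 * rng.standard_t(df=5, size=(T, n_assets))   # toy heavy-tailed returns

# 1) Conditionally varying scale: RiskMetrics-style EWMA covariance filter.
lam = 0.94
cov = np.cov(returns.T)
for r in returns:
    cov = lam * cov + (1.0 - lam) * np.outer(r, r)

# 2) Fat tails: fit a Student-t to pooled standardized returns and take its 1% quantile
#    (a deliberate simplification of the stable-distribution factor model above).
z = (returns / returns.std(axis=0)).ravel()
nu, loc, scale = student_t.fit(z)
q01 = student_t.ppf(0.01, nu, loc=loc, scale=scale)          # a negative number

mu_hat = returns.mean(axis=0)

def neg_return_quantile(w):
    """Negative of the approximate 1% portfolio return quantile (to be minimized)."""
    sigma_p = np.sqrt(w @ cov @ w)
    return -(w @ mu_hat + q01 * sigma_p)

constraints = ({"type": "eq",   "fun": lambda w: w.sum() - 1.0},               # fully invested
               {"type": "ineq", "fun": lambda w: w @ mu_hat - mu_hat.mean()})  # target mean
res = minimize(neg_return_quantile, np.full(n_assets, 1.0 / n_assets),
               bounds=[(0.0, 1.0)] * n_assets, constraints=constraints)
print("weights:", np.round(res.x, 3))
```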