
    Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    We discuss here the relative merits of these two numbers, the golden ratio and pi, as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance on these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand function, and by the quasi-random Halton generator, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. It also reveals that pi is a better source of a random sequence than the golden ratio as far as the accuracy of the integration is concerned.
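A minimal sketch of the idea described above, assuming an arbitrary integrand (x^2 on [0, 1]) and 5-digit blocks; this is illustrative only and not the paper's code. The digit string is the first 50 fractional digits of pi.

```python
# Consecutive 5-digit blocks of pi's fractional expansion are mapped to
# floats in [0, 1) and used as the random source for Monte Carlo
# integration. First 50 fractional digits of pi:
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def digit_block_uniforms(digits, block=5):
    """Map consecutive digit blocks to floats in [0, 1)."""
    return [int(digits[i:i + block]) / 10**block
            for i in range(0, len(digits) - block + 1, block)]

def mc_integrate(f, samples):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(x) for x in samples) / len(samples)

samples = digit_block_uniforms(PI_DIGITS)          # 10 uniform variates
estimate = mc_integrate(lambda x: x * x, samples)  # true value is 1/3
```

With only ten samples the estimate is necessarily rough; the paper's observation is that long runs of consecutive digit blocks behave like a usable uniform source.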

    Integrating and Ranking Uncertain Scientific Data

    Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions (though it does not improve predictions for well-known functions). This suggests that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings. This suggests that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently. This suggests that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
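As an illustration of ranking under a probabilistic uncertainty model (a sketch only; the function names and probabilities below are invented, and this is not the BioRank implementation), independent evidence for a candidate annotation can be combined with a noisy-OR and the candidates ranked by the result:

```python
# Hypothetical sketch: each candidate protein function is supported by
# several uncertain join paths; assuming the paths are independent,
# their probabilities are combined with a noisy-OR, and the candidates
# are ranked by the combined probability.
def noisy_or(probs):
    """P(at least one supporting path is correct), assuming independence."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

# candidate function -> probabilities of the join paths supporting it
evidence = {
    "kinase activity": [0.6, 0.3],
    "DNA binding":     [0.9],
    "transport":       [0.2, 0.2, 0.2],
}
ranking = sorted(evidence, key=lambda f: noisy_or(evidence[f]), reverse=True)
```

Note how several weak paths ("transport") still rank below one strong path ("DNA binding"), which is the kind of behaviour a probabilistic ranking is meant to capture.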

    K^+ -> pi^+ nu nu-bar and K_L -> pi^0 nu nu-bar Decays in the General MSSM

    We reanalyze the rare decays K^+ -> pi^+ nu nu-bar and K_L -> pi^0 nu nu-bar in a general MSSM with conserved R-parity. Working in the mass eigenstate basis and performing an adaptive scan of a large space of supersymmetric parameters, 16 parameters in the constrained scan and 63 in the extended scan, we find that large departures from the Standard Model expectations are possible while satisfying all existing constraints. Both branching ratios can be as large as a few times 10^{-10}, with Br(K_L -> pi^0 nu nu-bar) often larger than Br(K^+ -> pi^+ nu nu-bar) and close to its model-independent upper bound. We give examples of supersymmetric parameters for which large departures from the SM expectations can be found and emphasize that the present 90% C.L. experimental upper bound on Br(K^+ -> pi^+ nu nu-bar) gives a non-trivial constraint on the MSSM parameter space. Unlike previous analyses, we find that chargino box diagrams can give a significant contribution already for moderately light charged sleptons. As a byproduct we find that the ranges for the angles beta and gamma in the unitarity triangle are relaxed, due to the presence of new CP-violating phases in K^0 - K^0-bar and B^0_d - B^0_d-bar mixing, to 12 degrees <= beta <= 27 degrees and 20 degrees <= gamma <= 110 degrees.
    Comment: 36 pages, 27 figures, latex, uses axodraw.sty
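The scanning strategy can be caricatured in a few lines. This sketch is not the paper's 16- or 63-parameter adaptive scan: it uses two made-up parameters, a made-up observable standing in for a branching ratio, and a made-up constraint standing in for the experimental bounds.

```python
import random

# Toy constrained parameter-space scan: sample points uniformly, keep
# those passing the (invented) constraint, and record the largest value
# of the (invented) observable still allowed.
random.seed(1)

def observable(x, y):
    return x * x + y          # stand-in for a branching ratio

def passes_constraints(x, y):
    return x + y < 1.5        # stand-in for experimental bounds

accepted = []
for _ in range(10_000):
    x, y = random.uniform(0, 1), random.uniform(0, 1)
    if passes_constraints(x, y):
        accepted.append((observable(x, y), x, y))

best = max(accepted)  # largest observable compatible with the constraint
```

A real scan would adapt its sampling density toward interesting regions rather than sampling uniformly, but the accept/record structure is the same.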

    Cosmological parameter inference with Bayesian statistics

    Bayesian statistics and Markov Chain Monte Carlo (MCMC) algorithms have found their place in the field of Cosmology. They have become important mathematical and numerical tools, especially in parameter estimation and model comparison. In this paper, we review some fundamental concepts needed to understand Bayesian statistics and then introduce MCMC algorithms and samplers that allow us to perform the parameter inference procedure. We also introduce a general description of the standard cosmological model, known as the ΛCDM model, along with several alternatives, and current datasets coming from astrophysical and cosmological observations. Finally, with the tools acquired, we use an MCMC algorithm implemented in Python to test several cosmological models and find the combination of parameters that best describes the Universe.
    Comment: 30 pages, 17 figures, 5 tables; accepted for publication in Universe; references added
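A minimal Metropolis-Hastings sampler, of the kind such a review introduces, can be sketched as follows; the toy model (inferring the mean of a Gaussian with a flat prior) and all tuning choices are assumptions for illustration, not the paper's pipeline.

```python
import math, random

# Random-walk Metropolis-Hastings on a toy problem: infer the mean of a
# Gaussian from simulated data. With a flat prior, the log-posterior is
# the log-likelihood up to a constant.
random.seed(42)
data = [random.gauss(2.0, 1.0) for _ in range(200)]

def log_post(mu):
    return -0.5 * sum((x - mu) ** 2 for x in data)

chain, mu, step = [], 0.0, 0.3
for _ in range(5000):
    prop = mu + random.gauss(0.0, step)       # random-walk proposal
    # accept with probability min(1, posterior ratio)
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)

posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The same accept/reject skeleton underlies the cosmological fits; only the likelihood (built from astrophysical datasets) and the parameter space change.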

    Rotational Velocities of Individual Components in Very Low Mass Binaries

    We present rotational velocities for individual components of 11 very low mass (VLM) binaries with spectral types between M7 and L7.5. These results are based on observations taken with the near-infrared spectrograph, NIRSPEC, and the Keck II laser guide star adaptive optics system. We find that the observed sources tend to be rapid rotators (v sin i > 10 km s^(–1)), consistent with previous seeing-limited measurements of VLM objects. The two sources with the largest v sin i, LP 349–25B and HD 130948C, are rotating at ~30% of their break-up speed, and are among the most rapidly rotating VLM objects known. Furthermore, five binary systems, all with orbital semimajor axes ≲3.5 AU, have component v sin i values that differ by greater than 3σ. To bring the binary components with discrepant rotational velocities into agreement would require the rotational axes to be inclined with respect to each other, and that at least one component is inclined with respect to the orbital plane. Alternatively, each component could be rotating at a different rate, even though they have similar spectral types. Both differing rotational velocities and inclinations have implications for binary star formation and evolution. We also investigate possible dynamical evolution in the triple system HD 130948A–BC. The close binary brown dwarfs B and C have significantly different v sin i values. We demonstrate that components B and C could have been torqued into misalignment by the primary star, A, via orbital precession. Such a scenario can also be applied to another triple system in our sample, GJ 569A–Bab. Interactions such as these may play an important role in the dynamical evolution of VLM binaries. Finally, we note that two of the binaries with large differences in component v sin i, LP 349–25AB and 2MASS 0746+20AB, are also known radio sources.

    Search for the neutrinoless double β-decay in Gerda Phase I using a Pulse Shape Discrimination technique

    The Germanium Detector Array (Gerda) experiment, located underground at the INFN Laboratori Nazionali del Gran Sasso (LNGS) in Italy, deploys high-purity germanium detectors to search for the neutrinoless double β-decay (0νββ) of 76Ge. An observation of this lepton-number-violating process, which is expected in many extensions of the Standard Model, would not only generate a fundamental shift in our understanding of particle physics, but also unambiguously prove the neutrino to have a non-vanishing Majorana mass component. A first phase of data recording lasted from November 2011 to May 2013, resulting in a total exposure (defined as the product of detector mass and measurement time) of 21.6 kg·yr. Within this thesis a thorough study of this data has been performed, with special emphasis on the development and scrutiny of an active background suppression technique by means of a signal shape analysis. Among several investigated multivariate approaches, a selection algorithm based on an artificial neural network is found to yield the best performance; inter alia, the background index close to the Q-value of the 0νββ decay could be suppressed by 45% to 1·10^{-2} cts/(keV·kg·yr), while still retaining a considerably high signal survival fraction of (83 ± 3)%, leading to a significant improvement of the experimental sensitivity. The efficiency is derived from a simulation and further validated by substantiated consistency checks availing themselves of measurements taken with different calibration sources and physics data. No signal is observed and a new lower limit of T^{0ν}_{1/2} > 2.2·10^{25} yr (90% C.L.) for the half-life of the neutrinoless double β-decay of 76Ge is established.
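The thesis trains an artificial neural network on pulse shapes; as a hedged stand-in, the sketch below trains a single logistic neuron (the simplest possible network) to separate signal-like from background-like events using one synthetic pulse-shape feature. The feature, data, and training settings are all invented for illustration and bear no relation to the Gerda analysis chain.

```python
import math, random

# A one-neuron classifier on a synthetic scalar pulse-shape feature:
# signal-like events cluster at higher feature values than background.
random.seed(0)
signal     = [(random.gauss(1.2, 0.2), 1) for _ in range(200)]
background = [(random.gauss(0.4, 0.2), 0) for _ in range(200)]
events = signal + background

w, b, lr = 0.0, 0.0, 1.0
for _ in range(500):                      # batch gradient descent
    gw = gb = 0.0
    for x, y in events:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid output
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(events)
    b -= lr * gb / len(events)

def is_signal(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

accuracy = sum(is_signal(x) == bool(y) for x, y in events) / len(events)
```

The real analysis balances the cut so that background suppression is traded against the signal survival fraction quoted above; here the 0.5 threshold is arbitrary.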

    Computer-aided circuit analysis Annual report, May 15, 1965 - May 14, 1966

    Research on digital computer-aided analysis of electric circuits.

    Fitting the Phenomenological MSSM

    We perform a global Bayesian fit of the phenomenological minimal supersymmetric standard model (pMSSM) to current indirect collider and dark matter data. The pMSSM contains the most relevant 25 weak-scale MSSM parameters, which are simultaneously fit using 'nested sampling' Monte Carlo techniques in more than 15 years of CPU time. We calculate the Bayesian evidence for the pMSSM and constrain its parameters and observables in the context of two widely different, but reasonable, priors to determine which inferences are robust. We make inferences about sparticle masses, the sign of the μ parameter, the amount of fine-tuning, dark matter properties and the prospects for direct dark matter detection without assuming a restrictive high-scale supersymmetry-breaking model. We find the inferred mass of the lightest CP-even Higgs boson to be an example of an approximately prior-independent observable. This analysis constitutes the first statistically convergent pMSSM global fit to all current data.
    Comment: Added references, paragraph on fine-tuning
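The paper's 'nested sampling' is done with production tools; purely to illustrate the idea, the toy below runs nested sampling on a one-dimensional problem with a uniform prior on [0, 1] and a Gaussian likelihood. Everything here (prior, likelihood, live-point count) is an assumption for illustration.

```python
import math, random

random.seed(7)

def loglike(theta):
    """Gaussian likelihood peaked at 0.5 (peak value 1)."""
    return -0.5 * ((theta - 0.5) / 0.1) ** 2

n_live, z, vol = 100, 0.0, 1.0
live = [random.random() for _ in range(n_live)]   # draws from the U(0,1) prior
for it in range(500):
    worst_i = min(range(n_live), key=lambda i: loglike(live[i]))
    threshold = loglike(live[worst_i])
    vol_next = math.exp(-(it + 1) / n_live)       # expected shrunken prior volume
    z += math.exp(threshold) * (vol - vol_next)   # evidence contribution
    vol = vol_next
    while True:   # replace the worst point by a prior draw above the threshold
        cand = random.random()
        if loglike(cand) > threshold:
            live[worst_i] = cand
            break
z += vol * sum(math.exp(loglike(t)) for t in live) / n_live  # leftover volume
```

For this toy problem the analytic evidence is 0.1·sqrt(2π) ≈ 0.25, which the estimate should land near; the evidence Z is exactly the quantity the paper computes (for the pMSSM) to compare priors and models.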

    Statistical approaches to viral phylodynamics

    Recent years have witnessed a rapid increase in the quantity and quality of genomic data collected from human and animal pathogens, viruses in particular. When coupled with mathematical and statistical models, these data allow us to combine evolutionary theory and epidemiology to understand pathogen dynamics. While these developments have allowed important epidemiological questions to be tackled, they have also exposed the need for improved analytical methods. In this thesis I employ modern statistical techniques to address two pressing issues in phylodynamics: (i) computational tools for Bayesian phylogenetics and (ii) data integration. I detail the development and testing of new transition kernels for Markov Chain Monte Carlo (MCMC) for time-calibrated phylogenetics in Chapter 2 and show that an adaptive kernel leads to improved MCMC performance in terms of mixing for a range of data sets, in particular for a challenging Ebola virus phylogeny with 1610 taxa/sequences. As a trade-off, I also found that the new adaptive kernels have longer warm-up times in general, suggesting room for improvement. Chapter 3 shows how to apply state-of-the-art techniques to visualise and analyse phylogenetic space and MCMC for time-calibrated phylogenies, which are crucial to the viral phylodynamics analysis pipeline. I describe a pipeline for a typical phylodynamic analysis which includes convergence diagnostics for continuous parameters and in phylogenetic space, extending existing methods to deal with large time-calibrated phylogenies. In addition I investigate different representations of phylogenetic space through multi-dimensional scaling (MDS) or univariate distributions of distances to a focal tree, and show that even for the simplest toy examples phylogenetic space remains complex; in particular, not all metrics lead to desirable or useful representations.
On the data integration front, Chapters 4 and 5 detail the use of data from the 2013-2016 Ebola virus disease (EVD) epidemic in West Africa to show how one can combine phylogenetic and epidemiological data to tackle epidemiological questions. I explore the determinants of the Ebola epidemic in Chapter 4 through a generalised linear model framework coupled with Bayesian stochastic search variable selection (BSSVS) to assess the relative importance of climatic and socio-economic variables for the number of EVD cases. In Chapter 5 I tackle the question of whether a particular glycoprotein mutation could lead to increased human mortality from EVD. I show that a principled analysis of the available data, which accounts for several sources of uncertainty as well as shared ancestry between samples, does not allow us to ascertain the presence of such an effect of a viral mutation on mortality. Chapter 6 brings the findings of the thesis together and discusses how the field of phylodynamics, in particular its methodological aspects, might move forward.
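The adaptive-kernel idea of Chapter 2 can be sketched in miniature. The real kernels act on time-calibrated phylogenies inside a Bayesian phylogenetics sampler; here, as an assumed stand-in, a random-walk Metropolis sampler on a one-dimensional Gaussian tunes its proposal scale on the fly toward a target acceptance rate.

```python
import math, random

random.seed(3)

def log_target(x):
    return -0.5 * x * x          # standard Gaussian target

# 0.44 is the classical near-optimal acceptance rate for a 1-D random walk
x, log_scale, target_rate = 0.0, math.log(5.0), 0.44
chain = []
for i in range(5000):
    prop = x + random.gauss(0.0, math.exp(log_scale))
    accepted = math.log(random.random()) < log_target(prop) - log_target(x)
    if accepted:
        x = prop
    # Robbins-Monro style update: grow the scale after an acceptance,
    # shrink it after a rejection, with a decaying adaptation rate.
    log_scale += ((1.0 if accepted else 0.0) - target_rate) / (i + 1) ** 0.6
    chain.append(x)

tuned_scale = math.exp(log_scale)
```

The decaying adaptation rate is what keeps the sampler asymptotically valid while still letting a badly chosen initial scale (here 5.0) correct itself, which is the trade-off behind the longer warm-up times noted above.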