
    A Kolmogorov-Smirnov test for the molecular clock on Bayesian ensembles of phylogenies

    Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists of using the one-sample KS test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths, and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and for pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch-length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch-length ECD, instead of one consensus tree, yields a considerable reduction of the effects of small sample size and provides a gain of power. Comment: 14 pages, 9 figures, 8 tables. Minor revision, addition of a new example and new title. Software: https://github.com/FernandoMarcon/PKS_Test.gi
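    As a rough illustration of the two tests described above, the sketch below applies SciPy's one-sample and two-sample KS tests to branch-length arrays; the arrays are synthetic placeholders standing in for lengths extracted (and thinned) from a posterior ensemble of trees, not output of the authors' PKS_Test software.

```python
# Sketch of the two KS tests described above, assuming branch lengths have
# already been extracted (and thinned) from the posterior ensemble of trees
# into flat NumPy arrays; all data below are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- One-sample test (strict clock): discretized branch lengths vs. Poisson ---
branch_lengths = rng.poisson(lam=3.0, size=500)      # stand-in for discretized lengths
lam = branch_lengths.mean()                          # lambda = average branch length
res_1s = stats.kstest(branch_lengths, stats.poisson(mu=lam).cdf)
print("one-sample KS:", res_1s.statistic, res_1s.pvalue)

# --- Two-sample test (Poisson mixtures / relaxed clocks) ---
# Compare continuous branch-length samples from a clock-constrained and an
# unconstrained ensemble directly.
clock_lengths = rng.gamma(shape=2.0, scale=1.5, size=500)          # placeholder sample
unconstrained_lengths = rng.gamma(shape=2.0, scale=1.6, size=500)  # placeholder sample
res_2s = stats.ks_2samp(clock_lengths, unconstrained_lengths)
print("two-sample KS:", res_2s.statistic, res_2s.pvalue)
```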

    Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program
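    For illustration only (this is not the SCARE code), the sketch below fits a two-parameter Weibull by maximum likelihood with SciPy and computes a KS goodness-of-fit statistic against the fitted distribution; the strength values are synthetic placeholders.

```python
# Minimal sketch: two-parameter Weibull fit by maximum likelihood, followed by
# a Kolmogorov-Smirnov check against the fitted distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
strengths = rng.weibull(10.0, size=30) * 350.0   # synthetic fracture strengths, MPa

# Two-parameter Weibull: fix the location at zero so only the shape (Weibull
# modulus m) and scale (characteristic strength sigma_0) are estimated.
m_hat, loc, sigma0_hat = stats.weibull_min.fit(strengths, floc=0)
print(f"shape (modulus) m = {m_hat:.2f}, scale sigma_0 = {sigma0_hat:.1f}")

# KS goodness-of-fit statistic against the fitted Weibull distribution.
ks = stats.kstest(strengths, stats.weibull_min(m_hat, loc=0, scale=sigma0_hat).cdf)
print("KS statistic:", ks.statistic, "p-value:", ks.pvalue)
```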

    Software Reliability Models

    The problem considered here is the building of Non-homogeneous Poisson Process (NHPP) models. Currently popular NHPP models such as the Goel-Okumoto (G-O) and Yamada et al. models suffer from the drawback that the probability density function of the inter-failure times is an improper density function. This is because the event of no failure in (0, ∞) is allowed in these models. In real-life situations we cannot draw samples from such a population, and none of the moments of the inter-failure times exist. Therefore, these models are unsuitable for modelling real software error data. On the other hand, if the density function of the inter-failure times is made proper by multiplying by a constant, then we cannot assume a finite number of expected faults in the system, which is the basic assumption in building software reliability models. Taking these factors into consideration, we have introduced an extra parameter, say c, in both the G-O and Yamada et al. models in order to obtain a new model. We find that a specific value of this new parameter gives rise to a proper density for the inter-failure times. The G-O and Yamada et al. models are special cases of these models corresponding to c = 0. This raises the question: "Can we do better than the existing G-O and Yamada et al. models when 0 < c < 1?" The answer is 'yes'. With this objective, the behavior of the software failure counting process {N(t), t > 0} has been studied. Several measures, such as the number of failures by some prespecified time, the number of errors remaining in the system at a future time, the distribution of the remaining number of faults in the system, and the reliability during a mission, have been proposed in this research. The maximum likelihood estimation method was used to estimate the parameters. Sufficient conditions for the existence of roots of the ML equations were derived. Some important statistical aspects of the G-O and Yamada et al. models, such as conditions for the existence and uniqueness of the roots of the ML equations, had not previously been worked out in the literature. We have derived these conditions and proved uniqueness of the roots for these models. Finally, four different sets of actual failure time data were analyzed.
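    As background for the NHPP framework discussed above, the sketch below fits the classical G-O model (the c = 0 baseline, not the modified model proposed here) by maximizing the NHPP log-likelihood numerically; the failure times are synthetic placeholders.

```python
# Hedged sketch: maximum likelihood fit of the Goel-Okumoto NHPP, whose mean
# value function is m(t) = a*(1 - exp(-b*t)) and intensity is a*b*exp(-b*t).
import numpy as np
from scipy.optimize import minimize

# Placeholder failure times observed over (0, T].
t = np.array([9., 21., 32., 36., 43., 45., 50., 58., 63., 70.,
              71., 77., 78., 87., 91., 92., 95., 98., 104., 105.])
T = 110.0

def neg_log_lik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # NHPP log-likelihood: sum of log intensities at the failure times minus
    # the expected number of failures m(T) over the observation window.
    return -(np.sum(np.log(a * b) - b * t) - a * (1.0 - np.exp(-b * T)))

res = minimize(neg_log_lik, x0=[len(t) * 1.5, 0.01], method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.4f}")
print("expected faults remaining at T:", a_hat * np.exp(-b_hat * T))
```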

    Feasibility of diffusion and probabilistic white matter analysis in patients implanted with a deep brain stimulator.

    Deep brain stimulation (DBS) for Parkinson's disease (PD) is an established advanced therapy that produces therapeutic effects through high-frequency stimulation. Although this therapeutic option leads to improved clinical outcomes, the mechanisms underlying the efficacy of this treatment are not well understood. Therefore, investigation of DBS and its postoperative effects on brain architecture is of great interest. Diffusion weighted imaging (DWI) is an advanced imaging technique with the ability to estimate the structure of white matter fibers; however, clinical application of DWI after DBS implantation is challenging due to the strong susceptibility artifacts caused by implanted devices. This study aims to evaluate the feasibility of generating meaningful white matter reconstructions after DBS implantation, and to quantify the degree to which these tracts are affected by post-operative device-related artifacts. DWI was safely performed before and after implanting electrodes for DBS in 9 PD patients. Within-subject differences between pre- and post-implantation fractional anisotropy (FA), mean diffusivity (MD), and radial diffusivity (RD) values for 123 regions of interest (ROIs) were calculated. While differences were noted globally, they were larger in regions directly affected by the artifact. White matter tracts were generated from each ROI with probabilistic tractography, revealing significant differences in the reconstruction of several white matter structures after DBS. Tracts pertinent to PD, such as regions of the substantia nigra and nigrostriatal tracts, were largely unaffected. The aim of this study was to demonstrate the feasibility and clinical applicability of acquiring and processing DWI post-operatively in PD patients after DBS implantation. The presence of global differences provides an impetus for acquiring DWI shortly after implantation to establish a new baseline against which longitudinal changes in brain connectivity in DBS patients can be compared. Understanding that post-operative fiber tracking in patients is feasible on a clinically relevant scale has significant implications for increasing our current understanding of the pathophysiology of movement disorders, and may provide insights into better defining the pathophysiology and therapeutic effects of DBS
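    As a minimal sketch of the per-ROI pre/post comparison described above (the array layout and the choice of a paired Wilcoxon test are illustrative assumptions, not the authors' exact pipeline), the within-subject differences could be computed like this:

```python
# Placeholder pre/post FA values with shape (n_subjects, n_rois); a paired
# test is run per ROI and mean within-subject changes are summarized.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_rois = 9, 123
fa_pre = rng.normal(0.45, 0.05, size=(n_subjects, n_rois))
fa_post = fa_pre + rng.normal(0.0, 0.02, size=(n_subjects, n_rois))

# Per-ROI paired comparison across subjects (uncorrected p-values).
p_values = np.array([
    stats.wilcoxon(fa_pre[:, r], fa_post[:, r]).pvalue for r in range(n_rois)
])
mean_change = (fa_post - fa_pre).mean(axis=0)   # mean within-subject change per ROI
print("ROIs with uncorrected p < 0.05:", int((p_values < 0.05).sum()))
```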

    Learning Audacity for the editing and production of digital didactic content

    In present-day society it is essential to provide adequate training to future teachers so that they can implement innovative teaching and learning methodologies, in which ICT and digital didactic resources play a key role and enable students' knowledge and skills to be developed successfully. This research takes a quantitative approach, using an ad hoc questionnaire on the learning and assessment of the Audacity software tool for creating digital didactic resources in the Early Childhood Education Degree at the University of Cordoba. The results show a positive assessment of the experience as well as of the tool studied and its subsequent use for audiovisual productions

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance
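    Since the survey highlights importance sampling as the technique used to accelerate Monte Carlo simulation, the short sketch below illustrates the idea on a toy rare-event problem (estimating a small failure probability for an exponential lifetime); the distributions and numbers are placeholders, not taken from the surveyed studies.

```python
# Importance sampling for a rare failure event: estimate P(X > x0) for an
# exponential lifetime by sampling from a heavier proposal and reweighting.
import numpy as np

rng = np.random.default_rng(3)
rate, x0, n = 1.0, 12.0, 100_000          # true P(X > x0) = exp(-12) ~ 6.1e-6

# Crude Monte Carlo: almost no samples land in the failure region.
x_mc = rng.exponential(1.0 / rate, size=n)
p_mc = (x_mc > x0).mean()

# Importance sampling: draw from a slower proposal (rate q < rate) and
# reweight each sample by the likelihood ratio f(x)/g(x).
q = 0.1
x_is = rng.exponential(1.0 / q, size=n)
weights = (rate * np.exp(-rate * x_is)) / (q * np.exp(-q * x_is))
p_is = np.mean((x_is > x0) * weights)

print(f"crude MC: {p_mc:.2e}  importance sampling: {p_is:.2e}  exact: {np.exp(-rate * x0):.2e}")
```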

    A methodology for deriving performance measures from spatio-temporal traffic contour maps using digital image analysis procedures

    The main focus of this study is to improve the data analysis tools used in performance monitoring and level-of-service assessment of freeway systems. The proposed study presents a methodology to develop new second-order statistical measures derived from texture characterization techniques in the field of digital image analysis. The new measures are capable of extracting properties such as smoothness, homogeneity, regularity, and randomness in traffic behavior from spatio-temporal traffic contour maps. To study the new performance measures, a total of 14,270 15-minute traffic contour maps were generated for a 3.4-mile section of I-4 in Orlando, Florida, over 24 hours for a period of 5 weekdays. A correlation matrix of the obtained measures for all the constructed maps was examined to check for information redundancy. This resulted in retaining a set of three second-order statistical measures: angular second moment (ASM), contrast (CON), and entropy (ENT). The retained measures were analyzed to examine their sensitivity to various traffic conditions, expressed by the overall mean speed of each contour map. The measures were also used to evaluate the level of service for each contour map. The sensitivity analysis and level-of-service criteria can be implemented in real time using a stand-alone module that was developed in this study. The study also presents a methodology to compare the traffic characteristics of various congested conditions. To examine the congestion characteristics, a total of 10,290 traffic contour maps were generated from a 7.5-mile section of the freeway for a period of 5 weekdays
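    Since ASM, contrast, and entropy are standard second-order texture measures defined on a gray-level co-occurrence matrix, the sketch below shows one way to compute them for a speed contour map; the 8-level quantization, the horizontal-neighbor offset, and the synthetic map are illustrative assumptions, not the study's exact settings.

```python
# Second-order texture measures (ASM, contrast, entropy) from a co-occurrence
# matrix built over horizontally adjacent cells of a quantized contour map.
import numpy as np

def glcm_measures(contour_map, levels=8):
    # Quantize speeds into discrete gray levels 0..levels-1.
    edges = np.linspace(contour_map.min(), contour_map.max(), levels + 1)[1:-1]
    q = np.digitize(contour_map, edges)

    # Symmetric co-occurrence counts for horizontally adjacent cells.
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
        glcm[b, a] += 1
    p = glcm / glcm.sum()

    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                               # angular second moment
    con = np.sum(((i - j) ** 2) * p)                   # contrast
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # entropy
    return asm, con, ent

# Example: a synthetic 15-minute speed contour map (time bins x detector stations).
speeds = np.random.default_rng(4).uniform(20, 70, size=(30, 12))
print(glcm_measures(speeds))
```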