    Evidence accumulation models with R: A practical guide to hierarchical Bayesian methods

    Evidence accumulation models are a useful tool for investigating the latent cognitive variables that underlie response time and response accuracy. However, applying evidence accumulation models can be difficult because they lack easily computable forms; numerical methods are required to determine the parameters of evidence accumulation that best fit the data. When applied to complex cognitive models, such numerical methods can require substantial computational power, which can lead to infeasibly long compute times. In this paper, we provide efficient, practical software and a step-by-step guide to fitting evidence accumulation models with Bayesian methods. The software, written in C++, is provided in an R package, 'ggdmc'. It incorporates three important ingredients of Bayesian computation: (1) the likelihood functions of two common response time models, (2) the Markov chain Monte Carlo (MCMC) algorithm, and (3) a population-based MCMC sampling method. The software has gone through the stringent checks required for hosting on the Comprehensive R Archive Network (CRAN) and is free to download. We illustrate its basic use and give an example of fitting complex hierarchical Wiener diffusion models to four shooting-decision data sets.
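
    ggdmc itself is an R package with C++ internals, and the abstract does not spell out its interface. Purely as a language-neutral sketch of the ingredients listed above (a Wiener first-passage likelihood without a simple closed form, approximated here by a truncated series, plus MCMC sampling; a plain Metropolis step stands in for the population-based sampler), the following Python example fits the drift rate of a two-boundary diffusion to simulated data. All parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(v, a, w=0.5, dt=1e-3):
    """Euler simulation of a two-boundary Wiener diffusion:
    dX = v*dt + dW, absorbed at 0 (lower) or a (upper)."""
    x, t = w * a, 0.0
    while 0.0 < x < a:
        x += v * dt + rng.normal(0.0, np.sqrt(dt))
        t += dt
    return t, int(x >= a)              # (response time, 1 = upper boundary)

def wiener_logpdf(t, choice, v, a, w=0.5, k_max=20):
    """Truncated large-time series for the Wiener first-passage density;
    this stands in for the 'easily computable form' the models lack."""
    if choice == 1:                    # mirror drift/start for the upper bound
        v, w = -v, 1.0 - w
    k = np.arange(1, k_max + 1)
    series = np.sum(k * np.exp(-(k * np.pi) ** 2 * t / (2 * a ** 2))
                    * np.sin(k * np.pi * w))
    dens = (np.pi / a ** 2) * np.exp(-v * a * w - v ** 2 * t / 2) * series
    return np.log(max(dens, 1e-300))   # guard against truncation underflow

# Simulate data with a known drift rate, then sample v with Metropolis.
data = [simulate_trial(v=1.2, a=1.5) for _ in range(100)]
loglik = lambda v: sum(wiener_logpdf(t, c, v, a=1.5) for t, c in data)

v, chain = 0.5, []
ll = loglik(v)
for _ in range(1500):
    prop = v + rng.normal(0.0, 0.2)
    ll_prop = loglik(prop)
    if np.log(rng.random()) < ll_prop - ll:    # flat prior on the drift rate
        v, ll = prop, ll_prop
    chain.append(v)

print("posterior mean drift:", round(float(np.mean(chain[500:])), 2))
```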

    Analysis of Software Aging in a Web Server

    A number of recent studies have reported the phenomenon of "software aging", characterized by progressive performance degradation and/or an increased occurrence rate of hang/crash failures of a software system due to the exhaustion of operating-system resources or the accumulation of errors. To counteract this phenomenon, a proactive technique called 'software rejuvenation' has been proposed. It essentially involves stopping the running software, cleaning its internal state and/or its environment, and then restarting it. Software rejuvenation, being preventive in nature, raises the question of when to schedule it. Periodic rejuvenation, while straightforward to implement, may not yield the best results, because the rate at which software ages is not constant but depends on the time-varying system workload. Software rejuvenation should therefore be planned and initiated based on the actual system behavior. This requires the measurement, analysis, and prediction of system resource usage. In this paper, we study the evolution of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource and activity parameters. Non-parametric statistical methods are then applied to detect and estimate trends in the data sets. Finally, we fit time series models to the collected data. Unlike the models used previously in research on software aging, these time series models allow for seasonal patterns, and we show how exploiting the seasonal variation can help in adequately predicting future resource usage. Based on the models employed here, proactive management techniques such as software rejuvenation triggered by actual measurements can be built.
    Keywords: software aging, software rejuvenation, Linux, Apache, web server, performance monitoring, prediction of resource utilization, non-parametric trend analysis, time series analysis
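
    The abstract does not name the specific models fitted; as an illustration of the general recipe it describes (a non-parametric trend check followed by a seasonal time series forecast of resource usage), here is a Python sketch on synthetic 'free memory' data. The Mann-Kendall statistic and the SARIMA orders are assumptions for the example, not the paper's choices.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)

# Synthetic stand-in for a monitored resource (free memory, MB): a slow
# aging trend plus a daily cycle (24 hourly samples) plus noise.
n = 24 * 14                                      # two weeks, hourly
t = np.arange(n)
free_mem = (4000 - 0.8 * t
            + 150 * np.sin(2 * np.pi * t / 24)
            + rng.normal(0, 30, n))

def mann_kendall_S(x):
    """Mann-Kendall S statistic: the sum of signs of all pairwise
    differences. Strongly negative S suggests a downward (aging) trend."""
    return sum(np.sign(x[j] - x[i])
               for i in range(len(x)) for j in range(i + 1, len(x)))

print("Mann-Kendall S:", mann_kendall_S(free_mem))

# Seasonal ARIMA with a 24-sample seasonal component; the orders here
# are illustrative, not the ones fitted in the paper.
fit = SARIMAX(free_mem, order=(1, 1, 1),
              seasonal_order=(1, 0, 1, 24)).fit(disp=False)
print("predicted free memory 48 h ahead:", fit.forecast(steps=48)[-1])
```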

    The effect of accumulation reservoir on dynamic behaviour of concrete gravity dam Suhorka

    The present thesis deals with the influence of the accumulation reservoir on the dynamic behavior of the concrete gravity dam Suhorka. The first part of the thesis gives a theoretical framework for analyzing the influence of the accumulation reservoir on the natural vibration periods of dams with two different software tools, DIN3D and CADAM. In the second part, a parametric analysis of the natural vibration of the dam-reservoir system is performed on multiple models with both tools, and the purpose of this type of analysis is explained on the basis of the output results. By comparing the results from the two tools, the adequacy of the more sophisticated modeling approach in DIN3D and of the simplified method in CADAM is evaluated, and their deficiencies and practical applicability are further assessed.
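
    Simplified treatments of the reservoir, such as CADAM's, are commonly based on an added-mass idealization. As a rough illustration of how a full reservoir lengthens a dam's natural vibration period, the sketch below attaches Westergaard (1933) added masses to a lumped-mass shear cantilever and solves the generalized eigenproblem. All structural values are invented for the example and are not Suhorka's.

```python
import numpy as np
from scipy.linalg import eigh

# Lumped-mass shear-cantilever idealization of a gravity dam monolith.
n, H = 10, 60.0              # nodes over the height; dam/reservoir height, m
m_dam = 3.0e6                # structural mass per node, kg
k = 5.0e10                   # inter-node shear stiffness, N/m

K = np.zeros((n, n))         # tridiagonal stiffness, node 0 fixed to the base
for i in range(n):
    K[i, i] = 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
M_dry = np.eye(n) * m_dam

# Westergaard (1933) added mass: (7/8)*rho_w*sqrt(H*y) per unit upstream
# face area, with y the depth of a node below the reservoir surface.
rho_w, width, dz = 1000.0, 10.0, H / n
y = np.linspace(dz, H, n)[::-1]              # node 0 (base) is deepest
M_wet = M_dry + np.diag((7 / 8) * rho_w * np.sqrt(H * y) * width * dz)

for label, M in (("empty reservoir", M_dry), ("full reservoir", M_wet)):
    w2 = eigh(K, M, eigvals_only=True)       # K x = w^2 M x
    print(f"{label}: T1 = {2 * np.pi / np.sqrt(w2[0]):.3f} s")
```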

    Quantifying cancer progression with conjunctive Bayesian networks

    Motivation: Cancer is an evolutionary process characterized by accumulating mutations. However, the precise timing and the order of genetic alterations that drive tumor progression remain enigmatic. Results: We present a specific probabilistic graphical model for the accumulation of mutations and their interdependencies. The Bayesian network models cancer progression by an explicit unobservable accumulation process in time that is separated from the observable but error-prone detection of mutations. Model parameters are estimated by an Expectation-Maximization algorithm, and the underlying interaction graph is obtained by a simulated annealing procedure. Applying this method to cytogenetic data for different cancer types, we find multiple complex oncogenetic pathways deviating substantially from simplified models such as linear pathways or trees. We further demonstrate how the inferred progression dynamics can be used to improve genetics-based survival predictions, which could support diagnostics and prognosis. Availability: The software package ct-cbn is available under a GPL license on the web site cbg.ethz.ch/software/ct-cbn. Contact: [email protected]
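
    As a toy illustration of the two model layers described above (a lattice of genotypes compatible with a partial order of mutations, observed through an error-prone detection step), here is a Python sketch. The event graph, the error rate, and the uniform weighting over the lattice (standing in for the timed accumulation process) are all assumptions for the example; ct-cbn's actual EM and simulated annealing estimation is not reproduced.

```python
import itertools
import numpy as np

# Toy conjunctive Bayesian network: an event (mutation) can occur only
# once all of its parent events have occurred. Edges are invented.
parents = {0: [], 1: [0], 2: [0], 3: [1, 2]}    # event 3 needs both 1 and 2

def is_compatible(genotype):
    """A genotype is in the CBN lattice iff every present event has all
    of its parents present."""
    return all(not present or all(genotype[p] for p in parents[e])
               for e, present in enumerate(genotype))

lattice = [g for g in itertools.product((0, 1), repeat=len(parents))
           if is_compatible(g)]

def obs_loglik(observed, eps=0.05):
    """Error-prone detection: each true bit flips independently with
    probability eps. Marginalize over compatible true genotypes, with a
    uniform weight standing in for the timed accumulation process."""
    lik = 0.0
    for true in lattice:
        flips = sum(o != t for o, t in zip(observed, true))
        lik += eps ** flips * (1 - eps) ** (len(observed) - flips)
    return np.log(lik / len(lattice))

# An observation that violates the partial order (event 3 without 2) is
# still explained, but only through detection error:
print(obs_loglik((1, 1, 0, 1)))
```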

    Sediment accumulation rates in subarctic lakes: Insights into age-depth modeling from 22 dated lake records from the Northwest Territories, Canada

    Age-depth modeling using Bayesian statistics requires well-informed prior information about the behavior of sediment accumulation. Here we present average sediment accumulation rates (represented as deposition times, DT, in yr/cm) for lakes in an Arctic setting, and we examine the variability across space (intra- and inter-lake) and time (late Holocene). The dataset includes over 100 radiocarbon dates, primarily on bulk sediment, from 22 sediment cores obtained from 18 lakes spanning the boreal-to-tundra ecotone gradients in subarctic Canada. There are four to twenty-five radiocarbon dates per core, depending on the length and character of the sediment records. Deposition times were calculated at 100-year intervals from age-depth models constructed using the 'classical' age-depth modeling software Clam. Lakes in boreal settings have the most rapid accumulation (mean DT 20±10 yr/cm), whereas lakes in tundra settings accumulate at moderate (mean DT 70±10 yr/cm) to very slow (>100 yr/cm) rates. Many of the age-depth models demonstrate fluctuations in accumulation that coincide with lake evolution and post-glacial climate change. Ten of our sediment cores yielded sediments as old as c. 9000 cal BP (BP = years before AD 1950). Between c. 9000 cal BP and c. 6000 cal BP, sediment accumulation was relatively rapid (DT of 20-60 yr/cm). Accumulation slowed between c. 5500 and c. 4000 cal BP as vegetation expanded northward in response to warming. A short period of rapid accumulation occurred near 1200 cal BP at three lakes. Our research will help inform priors in Bayesian age modeling.
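
    Clam handles radiocarbon calibration and several curve types; the arithmetic behind deposition times at 100-year intervals, though, is simple to show. The Python sketch below builds a linear-interpolation age-depth model from invented dates and converts it to DT in yr/cm.

```python
import numpy as np

# Invented dated levels for one core: depth (cm) vs calibrated age (BP).
depth_cm = np.array([0.0, 40.0, 95.0, 160.0, 230.0])
age_calBP = np.array([-60.0, 1200.0, 3100.0, 5600.0, 8900.0])

# 'Classical' (Clam-style) model: linear interpolation between dated
# levels, here evaluated on a 100-year grid.
grid_age = np.arange(0, 8800, 100)
grid_depth = np.interp(grid_age, age_calBP, depth_cm)

# Deposition time over each interval = years elapsed / cm accumulated.
dt_yr_per_cm = 100.0 / np.diff(grid_depth)
for age, dt in zip(grid_age[1:], dt_yr_per_cm):
    if age % 2000 == 0:
        print(f"{age:5d} cal BP: DT = {dt:5.1f} yr/cm")
```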

    dMODELS: A free software package to model volcanic deformation

    Shallow magma accumulation in the crust often results in slight movements of the ground surface that can be measured using standard land-surveying techniques or satellite geodesy. Volcano geodesy uses measurements of crustal deformation to investigate volcano unrest and to search for magma reservoirs beneath active volcanic areas. A key assumption behind geodetic monitoring is that ground deformation of the Earth's surface reflects tectonic and volcanic processes at depth (e.g., fault slip and/or mass transport) transmitted to the surface through the mechanical properties of the crust. Measurements and modeling of ground deformation are an indispensable component of any volcano monitoring strategy. The critical questions that emerge when monitoring volcanoes are how to (a) constrain the source of unrest, (b) improve the assessment of hazards associated with the unrest, and (c) refine our ability to forecast volcanic activity. A number of analytical and numerical mathematical models are available in the literature that can be used to fit ground deformation data and infer source location, geometry, depth, and volume change. Analytical models offer a closed-form description of the volcanic source, which allows us, in principle, to readily infer the relative importance of any of the source parameters. The careful use of analytical models together with high-quality data sets can provide valuable insights into the nature of the deformation source (e.g., Battaglia and Hill, 2009). The simplifications that make analytical models tractable, however, may result in misleading interpretations: sources are approximated by fluid-filled pressurized cavities in homogeneous, elastic half-spaces. Although actual magmatic sources are certainly more complex, this approach can mimic the stress or potential field of the magma or other fluid sources beneath a volcano. The use of numerical models (e.g., finite element models) allows for the evaluation of more realistic source characteristics and crustal properties (e.g., vertical and lateral mechanical discontinuities, complex source geometries, topography) but may require expensive proprietary software and powerful computers.
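
    As an example of the closed-form description such analytical models provide, the sketch below implements the classic Mogi (1958) point source: vertical surface uplift above a small spherical source of volume change dV at depth d in a homogeneous elastic half-space. This illustrates the general approach, not dMODELS's own code, and the example numbers are arbitrary.

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement (m) of a Mogi point source: a small
    spherical source with volume change dV (m^3) at `depth` (m) in a
    homogeneous elastic half-space with Poisson's ratio nu."""
    return (1 - nu) * dV * depth / (np.pi * (depth ** 2 + r ** 2) ** 1.5)

# Example: 1e6 m^3 of magma accumulating at 4 km depth.
for r in np.linspace(0.0, 20e3, 5):      # radial distance from the axis, m
    uz = mogi_uz(r, depth=4e3, dV=1e6)
    print(f"r = {r / 1e3:5.1f} km: uplift = {uz * 100:5.2f} cm")
```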

    MONOD, a Collaborative Tool for Manipulating Biological Knowledge

    Research article written in 2004 describing MONOD, an early biological knowledge management system. We describe an open source software tool called MONOD, for Modeler's Notebook and Datastore, designed to capture and communicate knowledge generated during the process of building models of many-component biological systems. We used MONOD to construct a model of the pheromone response signaling pathway of Saccharomyces cerevisiae. MONOD allowed the accumulation, documentation, and exchange of data, valuations, assumptions, and decisions generated during the model building process. MONOD thus helped preserve a record of the steps taken on the path from the experimental data to the computable model. We believe that MONOD and its successors may streamline the processes of building models, communicating with other researchers, and managing and manipulating biological knowledge. "Collaborative annotation", the fine-grained, structured, searchable communication enabled by software tools of this type, could positively affect the practice of biological research.
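
    MONOD's actual schema is not given in the abstract; as a purely hypothetical Python sketch of the kind of fine-grained, searchable record it describes (data, valuations, assumptions, and decisions attached to model elements), consider:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Annotation:
    """One fine-grained, searchable record of model-building knowledge.
    Field names are hypothetical, not MONOD's actual schema."""
    target: str    # model element the note is attached to
    kind: str      # "data" | "valuation" | "assumption" | "decision"
    text: str
    author: str
    stamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

notebook = [
    Annotation("Ste2", "assumption", "Receptor dimerization neglected", "ab"),
    Annotation("Fus3", "decision", "Mass-action, not Michaelis-Menten", "cd"),
]

# "Searchable": filter the shared store by record kind.
for note in notebook:
    if note.kind == "assumption":
        print(note.target, "->", note.text)
```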

    Proactive Scalability and Management of Resources in Hybrid Clouds via Machine Learning

    In this paper, we present a novel framework for supporting the management and optimization of applications subject to software anomalies and deployed on large-scale cloud architectures composed of different geographically distributed cloud regions. The framework uses machine learning models to predict failures caused by the accumulation of anomalies. It introduces a novel workload balancing approach and a proactive system scale-up/scale-down technique. We developed a prototype of the framework and present experiments validating the applicability of the proposed approach.
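
    The abstract does not specify which machine learning models are used; as a hedged sketch of the general pattern it describes (a classifier trained on per-node metrics to predict anomaly-driven failures, feeding a proactive scale-up rule), here is a Python example on synthetic telemetry. The features, labels, and threshold are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic per-node metrics: [memory use %, anomaly count, request rate].
# In the framework these would come from real cloud telemetry; the data,
# labels, and threshold below are all illustrative.
X = rng.uniform([20, 0, 100], [100, 50, 2000], size=(5000, 3))
p_fail = 1 / (1 + np.exp(-(0.06 * X[:, 0] + 0.15 * X[:, 1] - 9)))
y = rng.random(5000) < p_fail        # 1 = failed within the next window

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def plan_capacity(nodes, threshold=0.3):
    """Proactive rule: scale up (spawn a replica / shift load) for any
    node whose predicted failure probability exceeds the threshold."""
    risk = clf.predict_proba(nodes)[:, 1]
    return ["scale up" if r > threshold else "keep" for r in risk]

fleet = np.array([[55, 2, 800], [92, 31, 1500], [70, 12, 400]])
print(plan_capacity(fleet))
```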

    Using species accumulation curves to study change through time in a diverse butterfly fauna along an elevational gradient

    The motivation for this thesis comes from ecological questions about the variability in the population dynamics of butterfly species across geographically diverse locations within Northern California. The goal of this thesis can be summarized in the following parts: i) to parameterize and fit the skewed log-logistic (SLL) model to the observed species accumulation curves at each location; ii) to develop associations between the estimated parameters of the accumulation curves (response) and the weather variables (predictors) for each site; and iii) to analyze the fit of the models and interpret the findings in statistical and ecological terms. Ten locations were analyzed. Annual butterfly species data were available for the period from 1973 to 2016, with small site-to-site variation. Weather variables considered for the models were seasonal and annual precipitation totals and maximum/minimum seasonal temperatures. We found that a majority of inter-annual variation in weather was explained by variation in precipitation. Associations between the parameters of the species accumulation curves and the weather variables were modeled using linear and polynomial regression tools. Fit was assessed using the mean squared error. Models were developed using each SLL parameter as the response variable and the seasonal weather variables as the predictors. Stepwise regression model selection was then used to derive an optimal model from this initial model for each site analyzed. All computing work was done using R software.
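
    The thesis's exact SLL parameterization is not given here; as a sketch of step i) using one common three-shape-parameter form (an assumption), the Python example below fits an SLL curve to a hypothetical single-year accumulation record and reports the mean squared error used above as the fit criterion. Steps ii) and iii) would then regress the fitted parameters on the weather variables.

```python
import numpy as np
from scipy.optimize import curve_fit

def sll(day, s_max, alpha, beta, gamma):
    """One common skewed log-logistic form for a species accumulation
    curve: cumulative species observed by `day` of the season. The
    thesis's exact parameterization may differ."""
    return s_max * (1.0 + (day / alpha) ** (-beta)) ** (-gamma)

# Hypothetical single-year record: day of season vs cumulative species.
days = np.array([15, 40, 70, 100, 130, 160, 190, 220, 250], dtype=float)
species = np.array([5, 18, 41, 62, 78, 88, 94, 97, 99], dtype=float)

popt, _ = curve_fit(sll, days, species, p0=[100, 100, 2, 1],
                    bounds=(0, np.inf))
mse = np.mean((species - sll(days, *popt)) ** 2)   # the fit criterion above
print("fitted [s_max, alpha, beta, gamma]:", np.round(popt, 2))
print("MSE:", round(float(mse), 2))
```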
