
    Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for healthcare policy and decision making

    Objective: Network meta-analyses have been used extensively to compare the effectiveness of multiple interventions for healthcare policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Study design and setting: Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the accuracy of two diagnostic tests: the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA), at two test thresholds each: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold and accounting for the correlations between multiple test accuracy measures from the same study. Results: We developed and successfully fitted a model comparing multiple test/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whilst MMSE at threshold <25/30 appeared to have the best true negative rate. Conclusion: The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostic tests for decision-making.
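    The bivariate MCMC model itself is too involved for a short example, but the core idea of pooling accuracy measures on the logit scale and imposing an ordering constraint across thresholds can be sketched. The following is a minimal numpy illustration under invented per-study counts, not the authors' model: a simple fixed-effect pooling stands in for the bivariate random-effects structure, and the threshold constraint is imposed by a pool-adjacent-violators step rather than inside the sampler.

        # Minimal sketch (assumptions: invented counts, fixed-effect pooling instead of
        # the paper's bivariate MCMC model) of pooling logit-sensitivities at two
        # thresholds and enforcing that sensitivity cannot decrease as the cut-off rises.
        import numpy as np

        def pooled_logit(tp, fn):
            """Inverse-variance pooling of continuity-corrected logit-sensitivity."""
            tp, fn = np.asarray(tp, float), np.asarray(fn, float)
            logit = np.log((tp + 0.5) / (fn + 0.5))
            weight = 1.0 / (1.0 / (tp + 0.5) + 1.0 / (fn + 0.5))
            return np.sum(weight * logit) / np.sum(weight), np.sum(weight)

        # Hypothetical per-study true positives / false negatives at two cut-offs
        low_cut = pooled_logit(tp=[30, 22, 41], fn=[15, 12, 20])    # e.g. MMSE <25/30
        high_cut = pooled_logit(tp=[38, 28, 50], fn=[7, 6, 11])     # e.g. MMSE <27/30

        est = np.array([low_cut[0], high_cut[0]])
        w = np.array([low_cut[1], high_cut[1]])
        if est[0] > est[1]:                       # ordering violated: pool the pair
            est[:] = np.average(est, weights=w)

        sens = 1.0 / (1.0 + np.exp(-est))         # back to the probability scale
        print(dict(zip(["<25/30", "<27/30"], sens.round(3))))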

    The mortality of the Italian population: Smoothing techniques on the Lee--Carter model

    Several approaches have been developed for forecasting mortality using stochastic models. In particular, the Lee-Carter model has become widely used, and various extensions and modifications have been proposed to attain a broader interpretation and to capture the main features of the dynamics of mortality intensity. Hyndman and Ullah propose a particular version of the Lee-Carter methodology, the so-called Functional Demographic Model, which is among the most accurate approaches for some mortality data, particularly for longer forecast horizons where the benefit of a damped trend forecast is greater. The objective of this paper is to single out which of the basic Lee-Carter model and the Functional Demographic Model is more suitable for the Italian mortality data. A comparative assessment is made and the empirical results are presented using a range of graphical analyses. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/10-AOAS394.
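    For orientation, the basic Lee-Carter fit referred to above can be sketched with a standard SVD decomposition and a random-walk-with-drift forecast of the period index. This is a minimal illustration, not the Functional Demographic Model or the paper's comparison; the log death rates below are simulated purely so the example runs and would be replaced by the Italian mortality matrix.

        # Basic Lee-Carter sketch: log m(x,t) = a_x + b_x * k_t, fitted by SVD.
        # The mortality surface is simulated (assumption) so the code is self-contained.
        import numpy as np

        rng = np.random.default_rng(0)
        ages, years = 20, 40
        log_m = (-8 + 0.09 * np.arange(ages)[:, None]
                 - 0.02 * np.arange(years)[None, :]
                 + 0.05 * rng.standard_normal((ages, years)))

        a_x = log_m.mean(axis=1)                              # average age profile
        U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
        b_x = U[:, 0] / U[:, 0].sum()                         # normalise so sum(b_x) = 1
        k_t = s[0] * Vt[0] * U[:, 0].sum()                    # period index, sum(k_t) = 0

        # Usual Lee-Carter forecast: random walk with drift for k_t
        drift = (k_t[-1] - k_t[0]) / (len(k_t) - 1)
        horizon = 10
        k_fore = k_t[-1] + drift * np.arange(1, horizon + 1)
        log_m_fore = a_x[:, None] + b_x[:, None] * k_fore[None, :]
        print(log_m_fore.shape)                               # (ages, horizon)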

    Factor analysis modelling for speaker verification with short utterances

    This paper examines combining both relevance MAP and subspace speaker adaptation processes to train GMM speaker models for use in speaker verification systems, with a particular focus on short utterance lengths. The subspace speaker adaptation method involves developing a speaker GMM mean supervector as the sum of a speaker-independent prior distribution and a speaker-dependent offset constrained to lie within a low-rank subspace, and has been shown to provide improvements in accuracy over ordinary relevance MAP when the amount of training data is limited. Testing on NIST SRE data shows that combining the two processes provides speaker models which lead to modest improvements in verification accuracy in limited-data situations, in addition to improving the performance of the speaker verification system when a larger amount of training data is available.
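    The relevance-MAP step mentioned above can be sketched in a few lines. This is a minimal, self-contained illustration only: the UBM parameters, relevance factor, and enrolment frames are invented, and the low-rank subspace offset that the paper combines with MAP is omitted.

        # Relevance-MAP adaptation of UBM component means (sketch under invented data;
        # the paper's subspace/low-rank offset term is not included here).
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)
        C, D, r = 4, 2, 16.0                      # components, feature dim, relevance factor

        weights = np.full(C, 1.0 / C)             # hypothetical UBM
        means = 3.0 * rng.standard_normal((C, D))
        covs = np.stack([np.eye(D)] * C)
        frames = rng.standard_normal((50, D))     # short enrolment utterance (few frames)

        # Posterior responsibility of each UBM component for each frame
        lik = np.column_stack([weights[c] * multivariate_normal.pdf(frames, means[c], covs[c])
                               for c in range(C)])
        gamma = lik / lik.sum(axis=1, keepdims=True)

        n = gamma.sum(axis=0)                     # soft counts per component
        first_moment = gamma.T @ frames / n[:, None]
        alpha = (n / (n + r))[:, None]            # adaptation coefficients
        adapted_means = alpha * first_moment + (1 - alpha) * means
        print(adapted_means.round(2))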

    A Framework for Evaluating Model-Driven Self-adaptive Software Systems

    In the last few years, Model Driven Development (MDD), Component-based Software Development (CBSD), and context-oriented software have become interesting alternatives for the design and construction of self-adaptive software systems. In general, the ultimate goal of these technologies is to reduce development costs and effort, while improving the modularity, flexibility, adaptability, and reliability of software systems. An analysis of these technologies shows that they all include the principle of the separation of concerns, and their further integration is a key factor in obtaining high-quality and self-adaptable software systems. Each technology identifies different concerns and deals with them separately in order to specify the design of self-adaptive applications and, at the same time, support software with adaptability and context-awareness. This research studies the development methodologies that employ the principles of model-driven development in building self-adaptive software systems. To this aim, this article proposes an evaluation framework for analysing and evaluating the features of model-driven approaches and their ability to support software with self-adaptability and dependability in highly dynamic contextual environments. Such an evaluation framework can help software developers select a development methodology that suits their software requirements and reduces the development effort of building self-adaptive software systems. This study highlights the major drawbacks of the model-driven approaches proposed in the related works, and emphasises the need to consider the volatile aspects of self-adaptive software in the analysis, design, and implementation phases of the development methodologies. In addition, we argue that the development methodologies should leave the selection of modelling languages and modelling tools to the software developers. Comment: model-driven architecture, COP, AOP, component composition, self-adaptive application, context-oriented software development

    Biological processes and links to the physics

    Analysis of the temporal and spatial variability of biological processes and identification of the main variables that drive the dynamic regime of marine ecosystems is complex. Correlations between physical variables and long-term changes in ecosystems have routinely been identified, but the specific mechanisms involved often remain unclear. There could be various reasons for this: the ecosystem can be very sensitive to the seasonal timing of the anomalous physical forcing; it can be influenced simultaneously by many physical variables; and it can generate intrinsic variability on climate time scales. Marine ecosystems are influenced by a variety of physical factors, e.g., light, temperature, transport, and turbulence. Temperature exerts a fundamental forcing function in biology, with direct influences on the rate processes of organisms and on the distribution of mobile species that have preferred temperature ranges. Light and transport also affect the physiology and distribution of marine organisms. Small-scale turbulence determines encounter rates between larval fish and their prey and additionally influences the probability of successful pursuit and ingestion. The impact of physical forcing variations on biological processes is studied through long-term observations, process studies, laboratory experiments, retrospective analysis of existing data sets, and modelling. This manuscript reviews the diversity of physical influences on biological processes, marine organisms, and ecosystems, and their variety of responses to physical forcing, with special emphasis on the dynamics of zooplankton and fish stocks.

    EQUIPT: protocol of a comparative effectiveness research study evaluating cross-context transferability of economic evidence on tobacco control

    Tobacco smoking claims 700 000 lives every year in Europe, and the cost of tobacco smoking in the EU is estimated at between €98 and €130 billion annually; direct medical care costs and indirect costs such as workday losses each represent half of this amount. Policymakers across Europe need bespoke information on the economic and wider returns of investing in evidence-based tobacco control, including smoking cessation agendas. EQUIPT is designed to test the transferability of one such economic evidence base, the English Tobacco Return on Investment (ROI) tool, to other EU member states.