
    How much Detail is needed in Cost Estimation in an Economic Evaluation alongside a Clinical Trial to Optimise Evidence for Decisions?

    Acquiring evidence to support decision making is expensive. Collecting resource use data alongside a randomised controlled clinical trial is particularly so due to the multidimensional nature of costs: different costs are incurred by different agencies, each with its own methods and systems to account for them. Trialists face decisions over how to collect such data; in particular, different ‘levels’ of detail are possible. For example, hospitalisations can be costed (1) on a top-down, per-admission basis multiplied by a representative unit cost, (2) on a bottom-up basis, measuring every component of care such as nursing and medical staff time, investigations, other procedures and drugs used, each multiplied by a relevant unit cost, or (3) at some intermediate level of aggregation. The top-down data will be less expensive to obtain but may be less accurate (biased and/or with over- or under-estimated uncertainty) than the bottom-up data. I refer to these alternative methods as ‘data collection processes’. Currently such decisions are based on the judgement of the trialist(s). However, formal quantification of the added value of one data collection process versus another, compared with its added cost, would inform the efficient allocation of research resources. In this thesis I extend the use of value of information analysis to compare the incremental cost and benefit of one data collection process with another, further extending this to estimate the optimal mix of observations between the two processes. Using an example dataset I find that the method is workable, requiring prior information on the relationship between the two processes, which can be obtained from a pilot or feasibility study or from expert opinion. When combined with other concurrent developments in value of information analysis, the method has the potential to provide a decision-analytic approach to the complete design of clinical trials.
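    To make the core idea concrete, the sketch below contrasts two hypothetical data collection processes, a cheap but noisy top-down costing and an expensive but precise bottom-up costing, by their expected net benefit of sampling. It uses a textbook normal-normal value of information calculation rather than the thesis's actual model, and every number (prior, noise levels, per-observation costs, population size) is an invented placeholder.

```python
# Hedged sketch: comparing two hypothetical data collection processes by expected
# net benefit of sampling (ENBS), using a conjugate normal-normal EVSI calculation.
# All names, numbers and the normal model are illustrative assumptions only.
import numpy as np
from scipy.stats import norm

prior_mean, prior_sd = 500.0, 2000.0   # prior on incremental net benefit per patient
population = 1_000                     # patients affected by the adopt/reject decision

def evsi_normal(n, sampling_sd, prior_mean=prior_mean, prior_sd=prior_sd):
    """Per-patient EVSI for a sample of size n with per-observation noise sampling_sd."""
    prior_var = prior_sd ** 2
    # Variance of the preposterior mean under a normal-normal update.
    prepost_var = prior_var ** 2 / (prior_var + sampling_sd ** 2 / n)
    s = np.sqrt(prepost_var)
    z = prior_mean / s
    # E[max(posterior mean, 0)] - max(prior mean, 0) for a two-option decision.
    return (prior_mean * norm.cdf(z) + s * norm.pdf(z)) - max(prior_mean, 0.0)

processes = {
    "top-down":  {"sampling_sd": 6000.0, "cost_per_obs": 20.0},   # cheap, noisy
    "bottom-up": {"sampling_sd": 3000.0, "cost_per_obs": 150.0},  # expensive, precise
}

for name, p in processes.items():
    ns = np.arange(50, 2001, 50)
    enbs = (population * np.array([evsi_normal(n, p["sampling_sd"]) for n in ns])
            - ns * p["cost_per_obs"])
    print(f"{name:9s}: optimal n ≈ {ns[np.argmax(enbs)]}, max ENBS ≈ £{enbs.max():,.0f}")
```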

    Machine Learning-Based Data and Model Driven Bayesian Uncertainty Quantification of Inverse Problems for Suspended Non-structural System

    Inverse problems involve extracting the internal structure of a physical system from noisy measurement data. In many fields, Bayesian inference is used to address the ill-conditioned nature of the inverse problem by incorporating prior information through a prior distribution. In the nonparametric Bayesian framework, surrogate models such as Gaussian processes or deep neural networks are used as flexible and effective probabilistic modeling tools to overcome the curse of dimensionality and reduce computational costs. In practical systems and computer models, uncertainties can be addressed through parameter calibration, sensitivity analysis, and uncertainty quantification, leading to improved reliability and robustness of decision and control strategies based on simulation or prediction results. However, preventing overfitting in the surrogate model and incorporating reasonable prior knowledge of the embedded physics and models remain challenging. Suspended Nonstructural Systems (SNS) pose a significant challenge in the inverse problem setting, and research on their seismic performance and mechanical models, particularly on the inverse problem and uncertainty quantification, is still lacking. To address this, the author conducts full-scale shaking table dynamic experiments, monotonic and cyclic tests, and simulations of different types of SNS to investigate their mechanical behavior. To quantify the uncertainty of the inverse problem, the author proposes a new framework that adopts machine-learning-based, data- and model-driven stochastic Gaussian process model calibration, quantifying uncertainty via a new black-box variational inference that accounts for a geometric complexity measure, the Minimum Description Length (MDL), through Bayesian inference. The framework is validated on SNS and yields optimal generalizability and computational scalability.
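    As a minimal illustration of surrogate-based calibration in this spirit, the sketch below fits a Gaussian process to a handful of runs of a toy "simulator" and forms a grid posterior over a single calibration parameter, folding the surrogate's predictive variance into the likelihood. The simulator, prior, and noise level are invented, and the paper's MDL-regularised black-box variational inference is not reproduced here.

```python
# Hedged sketch of Gaussian-process surrogate calibration; a generic stand-in for
# the framework described above, not the authors' actual method or data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(theta):
    """Stand-in for an expensive structural model: response vs. a stiffness-like parameter."""
    return np.sin(3.0 * theta) + 0.5 * theta

# 1. A handful of expensive simulator runs to train the surrogate.
theta_train = np.linspace(0.0, 2.0, 12)[:, None]
y_train = simulator(theta_train.ravel())
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4), normalize_y=True)
gp.fit(theta_train, y_train)

# 2. One noisy "experimental" observation of the true response.
theta_true, obs_noise = 1.3, 0.05
y_obs = simulator(theta_true) + rng.normal(0.0, obs_noise)

# 3. Grid posterior over theta using the surrogate instead of the simulator,
#    adding the surrogate's predictive variance to the measurement noise.
theta_grid = np.linspace(0.0, 2.0, 400)[:, None]
mu, sd = gp.predict(theta_grid, return_std=True)
var = sd ** 2 + obs_noise ** 2
log_post = -0.5 * ((y_obs - mu) ** 2 / var + np.log(2 * np.pi * var))  # flat prior
post = np.exp(log_post - log_post.max())
dtheta = theta_grid[1, 0] - theta_grid[0, 0]
post /= post.sum() * dtheta
print("posterior mean of theta ≈", float((theta_grid.ravel() * post).sum() * dtheta))
```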

    A Survey on Economic-driven Evaluations of Information Technology

    The economic-driven evaluation of information technology (IT) has become an important instrument in the management of IT projects. Numerous approaches have been developed to quantify the costs of an IT investment and its assumed profit, to evaluate its impact on business process performance, and to analyze the role of IT in achieving enterprise objectives. This paper discusses approaches for evaluating IT from an economic-driven perspective. Our comparison is based on a framework distinguishing between classification criteria and evaluation criteria. The former allow evaluation approaches to be categorized based on their similarities and differences; the latter, by contrast, represent attributes against which the discussed approaches can be evaluated. Finally, we give an example of a typical economic-driven IT evaluation.
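    As a small worked example of the kind of quantification surveyed here, the sketch below computes net present value and a simple return on investment for a hypothetical IT project. The cash flows and discount rate are invented, and the survey itself covers far richer evaluation approaches than this.

```python
# Hedged sketch of one classical economic-driven evaluation: NPV and ROI of a
# hypothetical IT investment. All figures are illustrative assumptions.
initial_investment = 250_000.0                                      # up-front cost
annual_net_benefits = [60_000, 90_000, 110_000, 110_000, 100_000]   # per year
discount_rate = 0.08

npv = -initial_investment + sum(
    cf / (1 + discount_rate) ** (t + 1) for t, cf in enumerate(annual_net_benefits)
)
roi = (sum(annual_net_benefits) - initial_investment) / initial_investment
print(f"NPV ≈ {npv:,.0f}, undiscounted ROI ≈ {roi:.1%}")
```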

    Development of a geovisual analytics environment using parallel coordinates with applications to tropical cyclone trend analysis

    A global transformation is being fueled by unprecedented growth in the quality, quantity, and number of different parameters in environmental data through the convergence of several technological advances in data collection and modeling. Although these data hold great potential for helping us understand many complex and, in some cases, life-threatening environmental processes, our ability to generate such data is far outpacing our ability to analyze it. In particular, conventional environmental data analysis tools are inadequate for coping with the size and complexity of these data. As a result, users are forced to reduce the problem in order to adapt to the capabilities of the tools. To overcome these limitations, we must complement the power of computational methods with human knowledge, flexible thinking, imagination, and our capacity for insight by developing visual analysis tools that distill information into the actionable criteria needed for enhanced decision support. In light of these challenges, we have integrated automated statistical analysis capabilities with a highly interactive, multivariate visualization interface to produce a promising approach for visual environmental data analysis. By combining advanced interaction techniques such as dynamic axis scaling, conjunctive parallel coordinates, statistical indicators, and aerial perspective shading, we provide an enhanced variant of the classical parallel coordinates plot. Furthermore, the system facilitates statistical processes such as stepwise linear regression and correlation analysis to assist in the identification and quantification of the most significant predictors for a particular dependent variable. These capabilities are combined into a unique geovisual analytics system that is demonstrated via a pedagogical case study and three North Atlantic tropical cyclone climate studies using a systematic workflow. In addition to revealing several significant associations between environmental observations and tropical cyclone activity, this research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.
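    The sketch below illustrates the two basic ingredients combined in this work: a parallel-coordinates view of multivariate environmental data and a simple correlation ranking of candidate predictors for a response variable. The synthetic columns stand in for real observations, and the paper's interaction features (dynamic axis scaling, conjunctive queries, aerial perspective shading) are not reproduced.

```python
# Hedged sketch: parallel coordinates plus correlation-based predictor ranking
# on synthetic, invented data; not the authors' system or datasets.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "sst": rng.normal(28, 1.0, n),       # sea-surface temperature (°C)
    "shear": rng.normal(10, 3.0, n),     # vertical wind shear (m/s)
    "humidity": rng.normal(70, 8.0, n),  # mid-level relative humidity (%)
})
# Synthetic "cyclone activity" response driven mostly by SST and shear.
df["activity"] = 2.0 * df["sst"] - 1.5 * df["shear"] + 0.2 * df["humidity"] + rng.normal(0, 3, n)

# Rank candidate predictors by absolute correlation with the response.
print(df.corr()["activity"].drop("activity").abs().sort_values(ascending=False))

# Parallel-coordinates plot, coloured by high vs. low activity, with axes rescaled to [0, 1].
df["class"] = np.where(df["activity"] > df["activity"].median(), "high", "low")
plot_df = df.drop(columns="activity")
num_cols = [c for c in plot_df.columns if c != "class"]
plot_df[num_cols] = (plot_df[num_cols] - plot_df[num_cols].min()) / (
    plot_df[num_cols].max() - plot_df[num_cols].min())
parallel_coordinates(plot_df, "class", alpha=0.3)
plt.show()
```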

    Early Quantitative Assessment of Non-Functional Requirements

    Non-functional requirements (NFRs) of software systems are a well-known source of uncertainty in effort estimation. Yet, quantitatively approaching NFRs early in a project is hard. This paper takes a step towards reducing the impact of uncertainty due to NFRs. It offers a solution that incorporates NFRs into the functional size quantification process. The merits of our solution are twofold: first, it lets us quantitatively assess the NFR modeling process early in the project, and second, it lets us generate test cases for NFR verification purposes. We chose the NFR framework as a vehicle to integrate NFRs into the requirements modeling process and to apply quantitative assessment procedures. Our solution also rests on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We extend its use for NFR testing purposes, which is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We also discuss the advantages of our approach and the open questions related to its design.
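    For readers unfamiliar with COSMIC-FFP, the sketch below shows the basic counting rule behind functional size measurement: each identified data movement (Entry, Exit, Read, Write) contributes one COSMIC function point (CFP). The example processes, and the idea of listing NFR-induced movements alongside purely functional ones, are illustrative of the approach described above rather than the authors' actual case study.

```python
# Hedged sketch of COSMIC-style size measurement (ISO/IEC 19761): 1 CFP per
# data movement. Example processes and the NFR mapping are invented.
from collections import Counter

# Data movements per functional process: E = Entry, X = Exit, R = Read, W = Write.
functional_processes = {
    "place order":           ["E", "R", "W", "X"],
    "query order status":    ["E", "R", "X"],
    # NFR (auditability) operationalised as extra movements on each update:
    "write audit log entry": ["E", "W"],
}

def cosmic_size(processes):
    """Total functional size in CFP: one CFP per identified data movement."""
    return sum(len(movements) for movements in processes.values())

total = cosmic_size(functional_processes)
by_type = Counter(m for moves in functional_processes.values() for m in moves)
print(f"Total size: {total} CFP, by movement type: {dict(by_type)}")
```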

    Non-functional requirements: size measurement and testing with COSMIC-FFP

    The non-functional requirements (NFRs) of software systems are well known to add a degree of uncertainty to the process of estimating the cost of any project. This paper contributes to more precise project size measurement by incorporating NFRs into the functional size quantification process. We report on an initial solution proposed to deal with the problem of quantitatively assessing the NFR modeling process early in the project, and of generating test cases for NFR verification purposes. The NFR framework has been chosen for the integration of NFRs into the requirements modeling process and for their quantitative assessment. Our proposal is based on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We also extend the use of COSMIC-FFP for NFR testing purposes, an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We discuss the merits of the proposed approach and the open questions related to its design.
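    Complementing the size-counting sketch above, the snippet below illustrates the testing side of the idea: deriving candidate NFR test cases from the same data movements that were counted for functional size. The "one test case per data movement" rule and the example security NFR are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch: candidate NFR test cases derived from COSMIC data movements.
# The derivation rule and example NFR are invented for illustration.
nfr_processes = {
    # Security NFR operationalised as functional processes with data movements.
    "authenticate user": [("E", "credentials"), ("R", "user record"), ("X", "auth result")],
    "log failed login":  [("E", "failure event"), ("W", "security log")],
}

def derive_test_cases(processes):
    """One candidate test case per data movement of each NFR-induced process."""
    cases = []
    for process, movements in processes.items():
        for kind, data_group in movements:
            cases.append(f"verify {kind}-movement of '{data_group}' in '{process}'")
    return cases

for case in derive_test_cases(nfr_processes):
    print(case)
```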

    Real-time and Probabilistic Temporal Logics: An Overview

    Over the last two decades, there has been extensive study of logical formalisms for specifying and verifying real-time systems. Temporal logics have been an important research subject within this direction. Although numerous logics have been introduced for the formal specification of real-time and complex systems, an up-to-date, comprehensive analysis of these logics does not exist in the literature. In this paper we analyse real-time and probabilistic temporal logics that have been widely used in this field. We examine the notions of decidability, axiomatizability, expressiveness, model checking, etc. for each logic analysed, and we provide a comparison of the features of the temporal logics discussed.
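    To give a concrete flavour of the kinds of formalisms surveyed, the block below shows two standard textbook-style properties, not drawn from the paper itself: a bounded-response property in Metric Temporal Logic and a probabilistic reachability property in PCTL.

```latex
% Illustrative formulas only; not taken from the surveyed paper.
% MTL bounded response: every request is answered within 5 time units.
\Box \,\bigl(\mathit{request} \rightarrow \Diamond_{\le 5}\, \mathit{response}\bigr)

% PCTL probabilistic reachability: with probability at least 0.99,
% the system reaches a "done" state within 10 steps.
\mathrm{P}_{\ge 0.99}\bigl[\, \Diamond^{\le 10}\, \mathit{done} \,\bigr]
```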

    Needs and challenges for assessing the environmental impacts of engineered nanomaterials (ENMs).

    The potential environmental impact of nanomaterials is a critical concern, and the ability to assess these potential impacts is a top priority for the progress of sustainable nanotechnology. Risk assessment tools are needed to enable decision makers to rapidly assess the potential risks posed by engineered nanomaterials (ENMs), particularly when confronted with the reality of limited hazard or exposure data. In this review, we examine a range of available risk assessment frameworks, considering the contexts in which different stakeholders may need to assess the potential environmental impacts of ENMs. Assessment frameworks and tools that are suitable for the different decision analysis scenarios are then identified. In addition, we identify the gaps that currently exist between the needs of decision makers, for a range of decision scenarios, and the abilities of present frameworks and tools to meet those needs.
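    As a very simple illustration of the screening-level reasoning a decision maker might apply when quantitative data on an ENM are sparse, the sketch below combines coarse hazard and exposure bands into a priority score. The bands, scores, and example materials are invented and do not come from any framework reviewed in the paper.

```python
# Hedged sketch of a generic hazard-band x exposure-band screening score;
# all bands, weights and materials are illustrative assumptions.
hazard_band = {"low": 1, "medium": 2, "high": 3}
exposure_band = {"low": 1, "medium": 2, "high": 3}

enms = {
    "ENM-A": ("medium", "low"),     # (hazard band, exposure band), illustrative
    "ENM-B": ("high", "medium"),
}

for name, (hazard, exposure) in enms.items():
    score = hazard_band[hazard] * exposure_band[exposure]
    priority = "further assessment" if score >= 4 else "low priority"
    print(f"{name}: screening score {score} -> {priority}")
```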

    Bayesian data assimilation to support informed decision-making in individualized chemotherapy

    An essential component of therapeutic drug/biomarker monitoring (TDM) is to combine patient data with prior knowledge for model-based predictions of therapy outcomes. Current Bayesian forecasting tools typically rely only on the most probable model parameters (the maximum a-posteriori (MAP) estimate). This MAP-based approach, however, neither necessarily predicts the most probable outcome nor quantifies the risks of treatment inefficacy or toxicity. Bayesian data assimilation (DA) methods overcome these limitations by providing a comprehensive uncertainty quantification. We compare DA methods with MAP-based approaches and show how probabilistic statements about key markers related to chemotherapy-induced neutropenia can be leveraged for more informative decision support in individualized chemotherapy. Sequential Bayesian DA proved to be the most computationally efficient approach for handling interoccasion variability and integrating TDM data. For new digital monitoring devices enabling more frequent data collection, these features will be of critical importance for improving patient care decisions in various therapeutic areas.
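    The sketch below illustrates the contrast drawn above between a MAP-based point prediction and a full-posterior prediction of neutropenia risk. The one-parameter lognormal "posterior" and the nadir model are toy stand-ins, invented for illustration, and do not reproduce the paper's PK/PD models or data.

```python
# Hedged sketch: MAP point prediction vs. full-posterior risk statement for
# grade-4 neutropenia, using an invented toy model.
import numpy as np

rng = np.random.default_rng(42)

# Toy posterior over an individual drug-sensitivity parameter (lognormal).
post_mu, post_sigma = np.log(1.0), 0.4
map_estimate = np.exp(post_mu - post_sigma ** 2)   # mode of the lognormal

def predicted_nadir(sensitivity, baseline_anc=5.0):
    """Toy model: higher sensitivity -> deeper neutrophil nadir (10^9 cells/L)."""
    return baseline_anc * np.exp(-1.2 * sensitivity)

threshold = 0.5   # grade-4 neutropenia threshold (10^9 cells/L)

# MAP-based: a single point prediction, which here suggests no grade-4 risk.
print("MAP nadir:", round(float(predicted_nadir(map_estimate)), 2),
      "-> below threshold?", bool(predicted_nadir(map_estimate) < threshold))

# Full posterior: sample the parameter and report the *probability* of grade-4 neutropenia.
samples = rng.lognormal(post_mu, post_sigma, size=50_000)
risk = np.mean(predicted_nadir(samples) < threshold)
print(f"P(grade-4 neutropenia) ≈ {risk:.2f}")
```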