
    SENSITIVITY ANALYSIS OF A 100% INSPECTION MODEL WITH CLASSIFICATION OF CONFORMING ITEMS (A Case Study at PT ABN Padalarang)

    A model is analyzed in order to understand its behavior, strengths, and weaknesses. This study performs a sensitivity analysis of an economic model for a 100% inspection procedure. The inspection is carried out in two stages using two mutually independent characteristic variables. The outcome of the inspection is a decision that classifies each inspected product not only as conforming or nonconforming, but also assigns conforming products to quality classes. The results show that changing the stage-1 inspection cost by 10%, 20%, or 30% does not affect the lower specification limit of quality-2A products (L2), and that the lower specification limit for quality-2B products is likewise insensitive to 10%, 20%, and 30% changes in the input parameter values. Keywords: 100% inspection model, conforming item classification, sensitivity analysis
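
    To make the procedure concrete, the sketch below shows a one-at-a-time sensitivity analysis in Python: an input parameter (here, the stage-1 inspection cost) is perturbed by 10%, 20%, and 30%, and the optimal lower specification limit is recomputed. The cost function is a hypothetical placeholder, since the abstract does not specify the economic model.

        # One-at-a-time sensitivity analysis: perturb one input parameter and
        # re-optimize. total_cost() is a made-up stand-in for the two-stage
        # inspection cost model; only the perturb-and-reoptimize pattern
        # reflects the analysis described in the abstract.
        from scipy.optimize import minimize_scalar

        def total_cost(L, c1):
            # hypothetical objective: inspection cost rises with c1, while
            # nonconformance cost falls as the lower limit L is raised
            return c1 * (1.0 + L) + 5.0 / (0.1 + L)

        def optimal_L(c1):
            return minimize_scalar(total_cost, bounds=(0.0, 10.0),
                                   args=(c1,), method="bounded").x

        base_c1 = 2.0
        L_base = optimal_L(base_c1)
        for pct in (0.10, 0.20, 0.30):
            for sign in (+1, -1):
                L_new = optimal_L(base_c1 * (1 + sign * pct))
                print(f"c1 {sign * pct:+.0%}: L shifts {(L_new - L_base) / L_base:+.2%}")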

    Bayesian approaches for the analysis of sequential parallel comparison design in clinical trials

    Placebo response, an apparent improvement in the clinical condition of patients randomly assigned to the placebo treatment, is a major issue in clinical trials on psychiatric and pain disorders. Properly addressing the placebo response is critical to an accurate assessment of the efficacy of a therapeutic agent. The Sequential Parallel Comparison Design (SPCD) is one approach for addressing the placebo response. An SPCD trial runs in two stages, re-randomizing placebo patients in the second stage; the analysis pools the data from both stages. In this thesis, we propose a Bayesian approach for analyzing SPCD data. Our primary proposed model overcomes some of the limitations of existing methods and offers greater flexibility in performing the analysis. We find that our model performs on par with, and under certain conditions better than, existing methods in preserving the type I error rate and minimizing mean squared error. We further develop our model in two ways. First, through prior specification we provide three approaches to modeling the relationship between the treatment effects from the two stages, rather than arbitrarily specifying the relationship as was done in previous studies. Under proper specification these approaches have greater statistical power than the initial analysis and give accurate estimates of this relationship. Second, we revise the model to treat the placebo response as a continuous rather than a binary characteristic. The binary classification, which groups patients into "placebo responders" and "placebo non-responders", can lead to misclassification, which can adversely affect the estimate of the treatment effect. As an alternative, we propose to view the placebo response in each patient as an unknown continuous characteristic. This characteristic is estimated and then used to measure the contribution (or weight) of each patient to the treatment effect. Building upon this idea, we propose two models that weight the contribution of placebo patients to the estimated second-stage treatment effect. We show that this method is more robust against the potential misclassification of responders than previous methods. We demonstrate our methodology using data from the ADAPT-A SPCD trial.
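
    The weighting idea can be illustrated with a short sketch: each stage-1 placebo patient receives a weight equal to an estimated probability of not being a placebo responder, and the stage-2 treatment effect becomes a weighted difference in means. The data and weights below are toy placeholders; the thesis's full Bayesian model is not reproduced here.

        # Weighted stage-2 treatment effect under a continuous placebo-response
        # characteristic. All inputs are simulated placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        w = rng.beta(2, 2, size=n)          # placeholder estimate of P(non-responder)
        arm2 = rng.integers(0, 2, size=n)   # stage-2 re-randomization: 1 = drug
        y2 = rng.normal(0.5 * arm2, 1.0)    # stage-2 outcomes (toy data)

        # Patients judged likely placebo responders (low w) contribute less
        # to the estimated stage-2 treatment effect.
        drug, plac = arm2 == 1, arm2 == 0
        effect2 = (np.average(y2[drug], weights=w[drug])
                   - np.average(y2[plac], weights=w[plac]))
        print(f"weighted stage-2 treatment effect: {effect2:.3f}")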

    Statistical Methods for Monte-Carlo based Multiple Hypothesis Testing

    Statistical hypothesis testing is a key technique for performing statistical inference. The main focus of this work is to investigate multiple testing under the assumption that the analytical p-values underlying the tests of all hypotheses are unknown. Instead, we assume that they can be approximated by drawing Monte Carlo samples under the null. The first part of this thesis focuses on the computation of test results with a guarantee on their correctness, that is, decisions on multiple hypotheses that are identical to the ones obtained with the unknown p-values. We present MMCTest, an algorithm implementing a multiple testing procedure that yields correct decisions on all hypotheses (up to a pre-specified error probability) based solely on Monte Carlo simulation. MMCTest offers novel ways to evaluate multiple hypotheses, as it allows one to obtain the (previously unknown) correct decisions on hypotheses (for instance, genes) in real data studies (again, up to an error probability pre-specified by the user). The ideas behind MMCTest are generalised in a framework for Monte Carlo based multiple testing, demonstrating that existing methods giving no guarantees on their test results can be modified to yield certain theoretical guarantees on the correctness of their outputs. The second part deals with multiple testing from a practical perspective. In practice, it may be desirable to forgo the additional computational effort needed to obtain guaranteed decisions and to invest it instead in the computation of a more accurate ad hoc test result. This is attempted by QuickMMCTest, an algorithm which adaptively allocates more samples to hypotheses whose decisions are more prone to random fluctuations, thereby achieving improved accuracy. This work also derives the optimal allocation of a finite number of samples to finitely many hypotheses under a normal approximation, where the optimal allocation is understood as the one minimising the expected number of erroneously classified hypotheses (with respect to the classification based on the analytical p-values). An empirical comparison of the optimal allocation with the one computed by QuickMMCTest indicates that the behaviour of QuickMMCTest might not be far from optimal.
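
    The following sketch illustrates the basic ingredients: Monte Carlo p-value estimates of the form (k + 1)/(n + 1) and an adaptive rule that spends additional null samples on the hypotheses whose decisions are least certain. It is in the spirit of QuickMMCTest but is not the published algorithm; the test statistics and null distribution are toy placeholders.

        # Monte Carlo p-values with adaptive sample allocation (illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        m = 50                                    # number of hypotheses
        obs = np.linspace(0.0, 4.0, m)            # toy observed test statistics

        def draw_null(size):                      # toy null: standard normal
            return rng.normal(0.0, 1.0, size=size)

        # initial batch of 100 Monte Carlo samples per hypothesis
        exceed = np.array([np.sum(draw_null(100) >= t) for t in obs], float)
        drawn = np.full(m, 100.0)

        for _ in range(100):                      # adaptive rounds
            p = (exceed + 1) / (drawn + 1)        # standard MC p-value estimate
            # spend the next batch on the most uncertain decisions: estimated
            # p-values closest to the 5% rejection threshold
            for i in np.argsort(np.abs(p - 0.05))[:10]:
                exceed[i] += np.sum(draw_null(100) >= obs[i])
                drawn[i] += 100

        rejected = (exceed + 1) / (drawn + 1) <= 0.05
        print(f"{rejected.sum()} of {m} hypotheses rejected at the 5% level")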

    Buy-Sell Dependence and Classification Error in Market Microstructure Time-Series Models : A Markov Switching Regression Approach

    This paper conducts an empirical test of a market microstructure model using a new econometric approach. I treat the direction of a trade as a discrete latent variable following a stationary Markov chain. By overlaying a three-state Markov chain on a familiar market microstructure model, I can extract information on the directions of trades efficiently from time-series data. An analysis of 100 large and 100 small firms for the year 1990 yields several important results: (1) Order types (sale, cross, purchase) are serially correlated, and the mean transition probability matrix is very similar for large and small firms. (2) Information asymmetry is greater for smaller firms. (3) The per-share order-processing cost is greater for larger firms. (4) When trades are classified by the bid-ask test supplemented by the tick test, the estimated misclassification probabilities are typically small for sales and purchases, but often fairly large for crosses. (5) Buy-sell classification error results in systematic biases in the regression coefficients.
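
    For reference, the sketch below implements the standard bid-ask test with a tick-test fallback, the trade-classification rule whose error the paper quantifies; the quotes and prices are illustrative placeholders.

        # Bid-ask test with tick-test fallback: above the quote midpoint = buy
        # (purchase), below = sell (sale); at the midpoint, compare against the
        # most recent different price. Trades with nothing to go on are treated
        # as crosses here, loosely mirroring the paper's three order types.
        def classify_trade(price, bid, ask, prev_prices):
            """Return 'buy', 'sell', or 'cross' for one trade."""
            mid = (bid + ask) / 2.0
            if price > mid:
                return "buy"
            if price < mid:
                return "sell"
            for prev in reversed(prev_prices):    # tick test
                if price > prev:
                    return "buy"
                if price < prev:
                    return "sell"
            return "cross"

        trades = [(10.02, 10.00, 10.04), (10.00, 10.00, 10.04),
                  (10.03, 10.01, 10.05)]
        history = []
        for price, bid, ask in trades:
            print(price, classify_trade(price, bid, ask, history))
            history.append(price)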

    Predictive Modelling Approach to Data-Driven Computational Preventive Medicine

    This thesis contributes novel predictive modelling approaches to data-driven computational preventive medicine and offers an alternative framework to statistical analysis in preventive medicine research. In the early parts of this research, this thesis proposes a synergy of machine learning methods for detecting patterns and developing inexpensive predictive models from healthcare data to classify the potential occurrence of adverse health events. In particular, the data-driven methodology is founded upon a heuristic-systematic assessment of several machine learning methods, data preprocessing techniques, model training estimation and optimisation, and performance evaluation, yielding a novel computational data-driven framework, Octopus. Midway through this research, this thesis advances research in preventive medicine and data mining by proposing several new extensions in data preparation and preprocessing. It offers new recommendations for data quality assessment checks, a novel multimethod imputation (MMI) process for missing data mitigation, and a novel imbalanced resampling approach, minority pattern reconstruction (MPR), led by information theory. This thesis also extends the area of model performance evaluation with a novel classification performance ranking metric called XDistance. The experimental results show that building predictive models with the methods guided by the new framework (Octopus) yields reliable models approved by domain experts. Performing the data quality checks and applying the MMI process led healthcare practitioners to favour predictive reliability over interpretability. The application of MPR and its hybrid resampling strategies produced better performances, in line with experts' success criteria, than traditional imbalanced data resampling techniques. Finally, the XDistance performance ranking metric was found to be more effective in ranking the performances of several classifiers while offering an indication of class bias, unlike existing performance metrics. The overall contributions of this thesis can be summarised as follows. First, several data mining techniques were thoroughly assessed to formulate the new Octopus framework and produce new reliable classifiers; this work also offers a further understanding of the impact of the newly engineered features, the physical activity index (PAI) and biological effective dose (BED). Second, new methods for data preparation, preprocessing, and performance evaluation were developed within the framework. Third, the newly accepted predictive models help detect adverse health events, namely visceral fat-associated diseases and advanced breast cancer radiotherapy toxicity side effects. These contributions could guide future theories, experiments, and healthcare interventions in preventive medicine and data mining.
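
    As an illustration of one plausible reading of multimethod imputation, the sketch below fits several candidate imputers and scores each by how well it reconstructs deliberately masked entries; the thesis's actual MMI process is not detailed in the abstract and may differ.

        # Compare candidate imputers by masked-value reconstruction error.
        # The data are synthetic; in practice one would mask a holdout subset
        # of genuinely observed entries rather than known ground truth.
        import numpy as np
        from sklearn.impute import SimpleImputer, KNNImputer

        rng = np.random.default_rng(2)
        X = rng.normal(size=(200, 4))
        X[:, 1] = 2 * X[:, 0] + 0.1 * X[:, 1]     # induce correlation
        X_missing = X.copy()
        mask = rng.random(X.shape) < 0.1          # hide 10% of entries
        X_missing[mask] = np.nan

        candidates = {"mean": SimpleImputer(strategy="mean"),
                      "median": SimpleImputer(strategy="median"),
                      "knn": KNNImputer(n_neighbors=5)}
        for name, imp in candidates.items():
            X_hat = imp.fit_transform(X_missing)
            rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
            print(f"{name}: masked-value RMSE = {rmse:.3f}")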

    Patterns of Lung Cancer Care and Associated Health Outcomes Among Elderly Medicare Fee For Service Beneficiaries in West Virginia and in the United States

    The elderly carry a disproportionate burden of lung cancer in the US. Although significant improvements have been made in cancer treatment during the past decade, substantial disparities still exist in guideline-based lung cancer care and outcomes. Such variation in lung cancer care is a cause for major concern in rural areas like West Virginia (WV). The purpose of this study was to conduct a comprehensive evaluation of variations in lung cancer care and associated health outcomes in the elderly. This retrospective study was conducted using SEER-Medicare and WVCR-Medicare linked data files for the years 2002-2007. As part of the project, three studies were conducted. In the first study, we compared geographic variations in clinical guideline-based lung cancer care and associated health outcomes among elderly Medicare fee-for-service (FFS) beneficiaries. The study found disparities in the receipt of minimally appropriate care in both the WV and US populations; receipt of minimally appropriate care was associated with longer survival times. In the second study, we compared geographic variations in the timeliness of lung cancer care and found significant variation in delays in diagnosis and treatment in both the WV and US populations; however, non-timely care was not associated with poorer prognosis. The third study examined patterns of receipt of tobacco-use cessation counseling services and found that such services were received by more than half of all beneficiaries. Overall, the findings highlight the critical need to address disparities in the receipt of guideline-based appropriate and timely lung cancer care among Medicare FFS beneficiaries. The findings also reveal an urgent need for future cancer prevention efforts directed towards promoting smoking cessation in the rural WV population. In the long run, such cancer prevention efforts can help reduce lung cancer incidence, which in turn can help reduce geographic disparities in lung cancer mortality.

    Detection of Building Damages in High Resolution SAR Images based on SAR Simulation

    Postmarket sequential database surveillance of medical products

    Thesis (Ph.D.), Massachusetts Institute of Technology, Engineering Systems Division, 2013, by Judith C. Maro. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 193-212). This dissertation focuses on the capabilities of a novel public health data system, the Sentinel System, to supplement existing postmarket surveillance systems of the U.S. Food and Drug Administration (FDA). The Sentinel System is designed to identify and assess safety risks associated with drugs, therapeutic biologics, vaccines, and medical devices that emerge post-licensure. Per the initiating legislation, the FDA must complete a priori evaluations of the Sentinel System's technical capabilities to support regulatory decision-making. This research develops qualitative and quantitative tools to aid the FDA in such evaluations, particularly with regard to the Sentinel System's novel sequential database surveillance capabilities. Sequential database surveillance is a "near real-time" sequential statistical method to evaluate pre-specified exposure-outcome pairs. A "signal" is detected when the data suggest an excess risk that is statistically significant. The qualitative tool, the Sentinel System Pre-Screening Checklist, is designed to determine whether the Sentinel System is well suited, on its face, to evaluate a pre-specified exposure-outcome pair. The quantitative tool, the Sequential Database Surveillance Simulator, allows the user to explore virtually whether sequential database surveillance of a particular exposure-outcome pair is likely to generate evidence to identify and assess safety risks in a timely manner to support regulatory decision-making. Particular attention is paid to accounting for uncertainties, including medical product adoption and utilization, misclassification error, and the unknown true excess risk in the environment. Using vaccine examples and the simulator to illustrate, this dissertation first demonstrates the tradeoffs associated with sample size calculations in sequential statistical analysis, particularly the tradeoff between statistical power and median sample size. Second, it demonstrates differences in performance between various surveillance configurations when using distributed database systems. Third, it demonstrates the effects of misclassification error on sequential database surveillance, and specifically how such errors may be accounted for in the design of surveillance. Fourth, it considers the complexities of modeling new medical product adoption, and specifically the existence of a "dual market" phenomenon for these new medical products. This finding raises non-trivial generalizability concerns regarding evidence generated via sequential database surveillance when performed immediately post-licensure.
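
    The core logic of such a simulator can be sketched in a few lines: at each periodic "look", observed adverse events among the exposed are compared with the expectation under a background rate, and a signal is raised when the Poisson tail probability falls below a per-look alpha. The Sentinel System's actual sequential methods, and its handling of misclassification, are more sophisticated; every number below is an illustrative assumption.

        # Toy sequential database surveillance simulation.
        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(3)
        background = 1e-3            # assumed baseline events per person-month
        true_rr = 2.0                # simulated (unknown) true excess risk
        looks, alpha = 12, 0.05
        alpha_per_look = alpha / looks        # crude Bonferroni alpha spending

        exposure, events = 0.0, 0
        for look in range(1, looks + 1):
            new_exposure = 5000 * look        # adoption grows over time
            exposure += new_exposure
            events += rng.poisson(background * true_rr * new_exposure)
            expected = background * exposure
            p = poisson.sf(events - 1, expected)   # P(X >= events | expected)
            if p < alpha_per_look:
                print(f"signal at look {look}: {events} events vs "
                      f"{expected:.1f} expected (p = {p:.2e})")
                break
        else:
            print("no signal after all looks")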

    Crop Identification Technology Assessment for Remote Sensing (CITARS). Volume 1: Task design plan

    A plan for quantifying the crop identification performances resulting from the remote identification of corn, soybeans, and wheat is described. Steps for the conversion of multispectral data tapes to classification results are specified. The crop identification performances resulting from the use of several basic types of automatic data processing techniques are compared and examined for significant differences. The techniques are also evaluated for changes in geographic location, time of year, management practices, and other physical factors. The results of the Crop Identification Technology Assessment for Remote Sensing task will be applied extensively in the Large Area Crop Inventory Experiment.
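
    The kind of comparison the plan calls for can be sketched as follows: two simple classifiers, a Gaussian maximum likelihood rule and a minimum-distance-to-mean rule, are trained on labeled multispectral pixels and compared on identification accuracy. The data below are synthetic stand-ins, not CITARS tapes, and the actual CITARS procedures differed.

        # Compare two simple crop classifiers on synthetic two-band pixels.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(4)
        crops = ["corn", "soybeans", "wheat"]
        means = np.array([[40, 80], [55, 70], [70, 60]])   # 2 spectral bands
        train = {c: rng.normal(means[i], 6, size=(100, 2))
                 for i, c in enumerate(crops)}
        test_X = np.vstack([rng.normal(means[i], 6, size=(50, 2))
                            for i in range(3)])
        test_y = np.repeat(crops, 50)

        def max_likelihood(x):           # Gaussian maximum likelihood rule
            return max(crops, key=lambda c: multivariate_normal.logpdf(
                x, train[c].mean(0), np.cov(train[c].T)))

        def nearest_mean(x):             # minimum-distance-to-mean rule
            return min(crops, key=lambda c: np.linalg.norm(x - train[c].mean(0)))

        for name, rule in [("max likelihood", max_likelihood),
                           ("nearest mean", nearest_mean)]:
            acc = np.mean([rule(x) == y for x, y in zip(test_X, test_y)])
            print(f"{name}: {acc:.1%} correct")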