
    Model selection and model averaging on mortality of upper gastrointestinal bleed patients

    Model Selection (MS) is known to introduce uncertainty into the model-building process, and the process itself is complex and time consuming. Model Averaging (MA) has therefore been proposed as an alternative to overcome these issues. This research provides guidelines for obtaining the best model using two modelling approaches, Model Selection (MS) and Model Averaging (MA), and compares the performance of both methods. The corrected Akaike Information Criterion (AICc) and the Bayesian Information Criterion (BIC) were applied in model-building with MS to help determine the best model. In the MA process, model selection criteria are needed to compute the weights of each candidate model. The two criteria (AICc and BIC) were compared to observe which produces a model with better performance. To illustrate the guidelines, data on Upper Gastrointestinal Bleed (UGIB) patients were explored to identify influential factors that lead to patient mortality. At the end of the study, the best model obtained using MA was shown to have better performance, and AICc proved to be the better model selection criterion in MA. In conclusion, the most significant factors for mortality of UGIB patients were identified to be shock score, comorbidity and rebleed.
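
    As a rough illustration of the MA weighting step described above, the sketch below computes AICc values and the resulting Akaike weights; the log-likelihoods and parameter counts are hypothetical, not taken from the UGIB data.

```python
# Minimal sketch of AICc-based model-averaging weights.
# The candidate model scores below are hypothetical examples.
import numpy as np

def aicc(log_lik: float, k: int, n: int) -> float:
    """Corrected AIC for a model with k parameters fit to n observations."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Turn AICc scores into model-averaging weights that sum to 1."""
    delta = np.asarray(scores, dtype=float)
    delta -= delta.min()                 # differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three hypothetical candidate models (log-likelihood, #parameters), n = 200
scores = [aicc(-120.4, 3, 200), aicc(-119.8, 5, 200), aicc(-121.0, 2, 200)]
print(akaike_weights(scores))  # weight each model's coefficients by these
```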

    Selection of the Best New Better than used Population Based on Subsamples

    The present study considers the problem of selecting the ‘best’ new better than used (NBU) population among several NBU populations. The procedure for selecting the ‘best’ NBU population is developed based on a measure of departure from exponentiality towards NBU, proposed by Pandit and Math (2009) for the problem of testing exponentiality against NBU alternatives in the one-sample setting. The selection procedure is based on large-sample properties of the statistic proposed in Pandit and Math (2009). We also indicate some applications of the selection procedure.
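
    The abstract does not reproduce the Pandit and Math (2009) statistic, so the sketch below substitutes a classical triple-count measure of departure from exponentiality (P(X1 > X2 + X3) = 1/4 under the exponential, smaller under NBU) to illustrate the selection rule; the subsamples in the usage example are synthetic.

```python
# Hedged sketch of selecting the 'best' NBU population. The departure
# measure here is a classical triple-count stand-in, NOT the statistic of
# Pandit and Math (2009).
import itertools
import numpy as np

def nbu_departure(x: np.ndarray) -> float:
    """Estimate 1/4 - P(X1 > X2 + X3): larger values indicate a stronger
    departure from exponentiality towards NBU."""
    count = sum(1 for i, j, k in itertools.permutations(range(len(x)), 3)
                if x[i] > x[j] + x[k])
    total = len(x) * (len(x) - 1) * (len(x) - 2)
    return 0.25 - count / total

def select_best_nbu(subsamples):
    """Pick the population whose subsample departs most towards NBU."""
    return int(np.argmax([nbu_departure(np.asarray(s)) for s in subsamples]))

# Synthetic usage: Weibull shapes > 1 are NBU, the exponential is the boundary
rng = np.random.default_rng(0)
pops = [rng.weibull(1.5, 50), rng.weibull(2.0, 50), rng.exponential(1.0, 50)]
print(select_best_nbu(pops))
```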

    Optimizing Lossy Compression Rate-Distortion from Automatic Online Selection between SZ and ZFP

    With ever-increasing volumes of scientific data produced by HPC applications, significantly reducing data size is critical because of the limited capacity of storage space and potential bottlenecks on I/O or networks when writing/reading or transferring data. SZ and ZFP are the two leading lossy compressors available for scientific data sets. However, their performance is not consistent across different data sets, or across different fields of the same data set: some fields are better compressed with SZ, while others are better compressed with ZFP. This situation raises the need for an automatic online (during compression) selection between SZ and ZFP, with minimal overhead. In this paper, the automatic selection optimizes the rate-distortion, an important statistical quality metric based on the signal-to-noise ratio. To optimize for rate-distortion, we investigate the principles of SZ and ZFP. We then propose an efficient online, low-overhead selection algorithm that accurately predicts the compression quality of the two compressors in early processing stages and selects the best-fit compressor for each data field. We implement the selection algorithm in an open-source library, and we evaluate the effectiveness of our proposed solution against plain SZ and ZFP in a parallel environment with 1,024 cores. Evaluation results on three data sets comprising about 100 fields show that our selection algorithm improves the compression ratio by up to 70% at the same level of data distortion, thanks to very accurate selection (around 99%) of the best-fit compressor, with little overhead (less than 7% in the experiments).
    Comment: 14 pages, 9 figures, first revision
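
    A conceptual sketch of the selection idea: estimate the rate (bits per value) and distortion (PSNR) of each candidate on a cheap sample of the field, then keep the more efficient compressor. The compress/decompress hooks are hypothetical placeholders, not the real SZ or ZFP APIs or the paper's actual prediction models.

```python
# Hedged sketch of online compressor selection by sampled rate-distortion.
# `candidates` maps a name to hypothetical (compress, decompress) callables;
# the real SZ/ZFP bindings and the paper's predictors are not used here.
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB (the distortion side of the metric)."""
    mse = np.mean((original - decoded) ** 2)
    if mse == 0.0:
        return float("inf")
    value_range = float(original.max() - original.min())
    return 10.0 * np.log10(value_range ** 2 / mse)

def select_compressor(field: np.ndarray, candidates: dict) -> str:
    """Return the candidate with the best PSNR per bit on a small sample."""
    sample = field.ravel()[:: max(1, field.size // 10_000)]  # cheap subsample
    best_name, best_score = None, -np.inf
    for name, (compress, decompress) in candidates.items():
        blob = compress(sample)
        bits_per_value = 8.0 * len(blob) / sample.size
        score = psnr(sample, decompress(blob)) / bits_per_value
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```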

    Genetic algorithms applied to the scheduling of the Hubble Space Telescope

    A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used, and appropriate genetic operators have been created. Several crossover styles (random point selection, evolving points, and smart point selection) are tested, and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results, and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system, and the NN solutions tend to be better.
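
    The sketch below contrasts two of the crossover styles mentioned above on simplified list genomes; the encoding and the toy fitness function are illustrative assumptions, not the prototype's actual schedule representation.

```python
# Hedged sketch of random-point vs. smart-point crossover on list genomes.
import random

def crossover_random_point(a, b):
    """Random point selection: split both parents at one random locus."""
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def crossover_smart_point(a, b, fitness):
    """Smart point selection: try every split point, keep the fittest child."""
    children = (a[:p] + b[p:] for p in range(1, len(a)))
    return max(children, key=fitness)

# Toy usage: binary genomes with a hypothetical fitness (here just the sum)
parent1, parent2 = [1, 0, 1, 1, 0, 1], [0, 1, 1, 0, 1, 0]
print(crossover_smart_point(parent1, parent2, fitness=sum))
```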

    Sophisticated and small versus simple and sizeable: When does it pay off to introduce drifting coefficients in Bayesian VARs?

    We assess the relationship between model size and complexity in the time-varying parameter VAR framework via thorough predictive exercises for the Euro Area, the United Kingdom and the United States. It turns out that sophisticated dynamics through drifting coefficients are important in small data sets, while simpler models tend to perform better in sizeable data sets. To combine the best of both worlds, novel shrinkage priors help to mitigate the curse of dimensionality, resulting in competitive forecasts for all scenarios considered. Furthermore, we discuss dynamic model selection to improve upon the best performing individual model at each point in time.
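
    One common way to operationalize dynamic model selection is via discounted predictive likelihoods, sketched below on the assumption that one-step-ahead predictive log scores for each model are already available; the discount factor is an illustrative choice, not a value from the paper.

```python
# Illustrative sketch of dynamic model selection from per-period predictive
# log likelihoods; the 0.95 discount factor is an assumed example value.
import numpy as np

def dynamic_selection(log_scores: np.ndarray, discount: float = 0.95):
    """log_scores: (T, M) array of one-step-ahead predictive log
    likelihoods for M models over T periods. Returns the model index
    selected at each date."""
    T, M = log_scores.shape
    log_w = np.full(M, -np.log(M))             # start from equal weights
    choices = np.empty(T, dtype=int)
    for t in range(T):
        choices[t] = int(np.argmax(log_w))     # choose before seeing period t
        log_w = discount * log_w + log_scores[t]  # discount, then update
        log_w -= np.logaddexp.reduce(log_w)    # renormalize in log space
    return choices
```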

    Two-stage hybrid feature selection algorithms for diagnosing erythemato-squamous diseases

    This paper proposes two-stage hybrid feature selection algorithms to build stable and efficient diagnostic models, with a new accuracy measure introduced to assess the models. The two-stage hybrid algorithms adopt Support Vector Machines (SVM) as the classification tool; the extended Sequential Forward Search (SFS), Sequential Forward Floating Search (SFFS), and Sequential Backward Floating Search (SBFS), respectively, as search strategies; and the generalized F-score (GF) to evaluate the importance of each feature. The new accuracy measure is used as the criterion to evaluate the performance of a temporary SVM and to direct the feature selection algorithms. These hybrid methods combine the advantages of filters and wrappers to select the optimal feature subset from the original feature set and build stable and efficient classifiers. To obtain stable and optimal classifiers, we conduct 10-fold cross-validation experiments in the first stage; then, for each algorithm, we merge the 10 feature subsets selected across the 10 folds into a new full feature set for second-stage feature selection. We repeat each hybrid feature selection algorithm in the second stage on the fold that obtained the best result in the first stage. Experimental results show that our proposed two-stage hybrid feature selection algorithms construct efficient diagnostic models with better accuracy than those built by the corresponding hybrid feature selection algorithms without the second-stage feature selection procedure. Furthermore, our methods achieve better classification accuracy than the available algorithms for diagnosing erythemato-squamous diseases.
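
    A compressed sketch of the filter-plus-wrapper idea follows: rank features with an F-score-style filter, then grow the subset only when 10-fold SVM cross-validation accuracy improves. The simple Fisher score and greedy forward search are stand-ins for the paper's generalized F-score and extended SFS/SFFS/SBFS strategies.

```python
# Hedged sketch: F-score filter followed by a greedy forward wrapper scored
# by 10-fold SVM cross-validation (a simplification of the extended SFS).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def f_score(X, y):
    """Per-feature Fisher-style score for a binary target (0/1 labels)."""
    x0, x1 = X[y == 0], X[y == 1]
    between = (x0.mean(0) - X.mean(0)) ** 2 + (x1.mean(0) - X.mean(0)) ** 2
    within = x0.var(0) + x1.var(0) + 1e-12
    return between / within

def forward_select(X, y, max_features=10, cv=10):
    """Add filter-ranked features one by one, keeping each only if it
    improves mean cross-validated SVM accuracy."""
    ranked = np.argsort(f_score(X, y))[::-1]
    selected, best_acc = [], 0.0
    for f in ranked:
        trial = selected + [int(f)]
        acc = cross_val_score(SVC(), X[:, trial], y, cv=cv).mean()
        if acc > best_acc:
            selected, best_acc = trial, acc
        if len(selected) == max_features:
            break
    return selected, best_acc
```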