
    Statistical Algorithms for Optimal Experimental Design with Correlated Observations

    This research is in three parts with related objectives. The first part developed an efficient, modified simulated annealing algorithm to solve the D-optimal (determinant-maximization) design problem for 2-way polynomial regression with correlated observations. Much of the previous work on D-optimal design for regression models with correlated errors focused on polynomial models with a single predictor variable, in large part because an analytic solution is intractable. In this research, we present an improved simulated annealing algorithm, providing practical approaches to specifying the annealing cooling parameters, thresholds, and search neighborhoods for the perturbation scheme, which finds approximate D-optimal designs for 2-way polynomial regression under a variety of correlation structures with a given correlation coefficient. Results in each correlated-errors case are compared with the best design selected from the class of designs known to be D-optimal in the uncorrelated case: the annealing results generally had higher D-efficiency than the best comparison design, especially when the correlation parameter was well away from 0. The second objective, using Balanced Incomplete Block Designs (BIBDs), was to construct weakly universally optimal block designs for the nearest-neighbor correlation structure with multiple block sizes, for the hub correlation structure with any block size, and for circulant correlation with odd block size. We also constructed approximately weakly universally optimal block designs for block-structured correlation. Lastly, we developed an improved Particle Swarm Optimization (PSO) algorithm with time-varying parameters and used it to solve the D-optimal design problem for linear regression. Building on that improved algorithm, we combined the non-linear regression problem with decision making, developing a nested PSO algorithm that finds (nearly) optimal experimental designs under each of the pessimistic, index-of-optimism, and regret criteria for the Michaelis-Menten model and the logistic regression model.
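    The first part above rests on a Metropolis-style search: perturb the current design and accept the move with a probability that shrinks as a temperature parameter cools. The minimal Python sketch below illustrates that idea for determinant maximization; it is not the dissertation's algorithm. The quadratic two-factor model, the candidate grid, the cooling defaults, and all names (model_matrix, anneal, V_inv) are illustrative assumptions, and the observation correlation matrix is simply taken as given.

        import numpy as np

        def model_matrix(points):
            # Assumed 2-way quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2.
            x1, x2 = points[:, 0], points[:, 1]
            return np.column_stack([np.ones(len(points)), x1, x2, x1 * x2, x1**2, x2**2])

        def log_d_criterion(points, V_inv):
            # log det(X' V^{-1} X); -inf marks singular (worthless) designs.
            X = model_matrix(points)
            sign, logdet = np.linalg.slogdet(X.T @ V_inv @ X)
            return logdet if sign > 0 else -np.inf

        def anneal(candidates, n, V_inv, T0=1.0, cooling=0.95, iters_per_T=200, T_min=1e-4, seed=None):
            # V_inv is the n x n inverse correlation matrix of the n observations.
            rng = np.random.default_rng(seed)
            design = candidates[rng.choice(len(candidates), n, replace=False)]
            best, best_val = design.copy(), log_d_criterion(design, V_inv)
            current_val, T = best_val, T0
            while T > T_min:
                for _ in range(iters_per_T):
                    # Perturbation: swap one design point for a random candidate.
                    trial = design.copy()
                    trial[rng.integers(n)] = candidates[rng.integers(len(candidates))]
                    trial_val = log_d_criterion(trial, V_inv)
                    # Metropolis rule: accept improvements always, worse moves sometimes.
                    if trial_val > current_val or rng.random() < np.exp((trial_val - current_val) / T):
                        design, current_val = trial, trial_val
                        if current_val > best_val:
                            best, best_val = design.copy(), current_val
                T *= cooling  # geometric cooling schedule
            return best, best_val

    The practical choices the abstract highlights (cooling parameters, thresholds, search neighborhoods) correspond here to cooling, iters_per_T, and the single-point swap; the dissertation's contribution lies precisely in tuning such choices.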

    Design and Analysis of Screening Experiments Assuming Effect Sparsity

    Many initial experiments for industrial and engineering applications employ screening designs to determine which of possibly many factors are significant. These screening designs are usually highly fractionated factorials or Plackett-Burman designs that focus on main effects and provide limited information about interactions. To simplify the analysis of these experiments, it is customary to assume that only a few of the effects are actually important; this assumption is known as ‘effect sparsity’. This dissertation explores both design and analysis aspects of screening experiments assuming effect sparsity. In 1989, Russell Lenth proposed a method for analyzing unreplicated factorials that has become popular due to its simplicity and satisfactory power relative to alternative methods. We propose and illustrate the use of p-values, estimated by simulation, for Lenth t-statistics. This approach is recommended for its versatility: whereas tabulated critical values are restricted to the case of uncorrelated estimates, we illustrate the use of p-values for both orthogonal and nonorthogonal designs. For cases with limited replication, we suggest computing t-statistics and p-values using an estimator that combines the pure-error mean square with a modified Lenth pseudo standard error. Supersaturated designs (SSDs) examine more factors than the available runs. SSDs were introduced to handle situations in which a large number of factors are of interest but runs are expensive or time-consuming. We begin by assessing the null-model performance of SSDs under all-subsets and forward-selection regression, highlighting the propensity of model selection criteria to overfit. We subsequently propose a strategy for analyzing SSDs that combines all-subsets regression with permutation tests, and we illustrate the methods with several examples. In contrast to the usual sequential nature of response surface methods (RSM), recent literature has proposed performing both screening and response surface exploration with a single three-level design, an approach named “one-step RSM”. We discuss and illustrate two shortcomings of current one-step RSM designs and analyses. We then propose a new class of three-level designs, and an analysis strategy unique to these designs, that address these shortcomings and help the user judge factor importance appropriately. We illustrate the designs and analysis with simulated and real data.
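    Lenth's pseudo standard error and the simulation-based p-values described above are concrete enough to sketch. The Python sketch below computes the PSE and estimates individual p-values by simulating inert (standard normal) effects; it is a plausible reading of the approach, not the dissertation's code, and the simulation size and function names are illustrative.

        import numpy as np

        def lenth_pse(effects):
            # Lenth (1989): s0 = 1.5 * median|c|, then re-estimate after
            # trimming effects that look active (|c| >= 2.5 * s0).
            abs_e = np.abs(effects)
            s0 = 1.5 * np.median(abs_e)
            return 1.5 * np.median(abs_e[abs_e < 2.5 * s0])

        def lenth_t(effects):
            return np.asarray(effects, dtype=float) / lenth_pse(effects)

        def simulated_p_values(effects, n_sim=10_000, seed=None):
            # Null reference: m inert effects, i.i.d. standard normal, so a
            # p-value is how often a null Lenth t exceeds the observed one.
            rng = np.random.default_rng(seed)
            t_obs = np.abs(lenth_t(effects))
            null = rng.standard_normal((n_sim, len(t_obs)))
            null_t = np.abs(np.apply_along_axis(lenth_t, 1, null))
            return (null_t >= t_obs).mean(axis=0)

    Because the reference distribution is simulated rather than tabulated, nothing here requires the effect estimates to be uncorrelated, which is the versatility the abstract emphasizes; for nonorthogonal designs one would simulate from the appropriate null distribution of the estimates instead of i.i.d. normals.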

    Proceedings of the Twentieth Conference of the Association of Christians in the Mathematical Sciences

    The proceedings of the twentieth conference of the Association of Christians in the Mathematical Sciences, held at Redeemer University College, May 27-30, 2015.

    Proceedings of MathSport International 2017 Conference

    Proceedings of the MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017. MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet. Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match-outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports.

    Seventh International Workshop on Simulation, 21-25 May, 2013, Department of Statistical Sciences, Unit of Rimini, University of Bologna, Italy. Book of Abstracts

    Seventh International Workshop on Simulation, 21-25 May, 2013, Department of Statistical Sciences, Unit of Rimini, University of Bologna, Italy. Book of Abstracts.

    Framework for privacy-aware content distribution in peer-to- peer networks with copyright protection

    The use of peer-to-peer (P2P) networks for multimedia distribution has spread globally in recent years. This popularity is driven primarily by the efficiency of content distribution, but it also gives rise to piracy, copyright infringement, and privacy concerns. An end user (buyer) of a P2P content distribution system does not want to reveal his or her identity during a transaction with a content owner (merchant), whereas the merchant does not want the buyer to redistribute the content illegally. There is therefore a strong need for content distribution mechanisms over P2P networks that pose no security or privacy threats to copyright holders and end users, respectively. However, the current systems developed to provide copyright and privacy protection to merchants and end users employ cryptographic mechanisms that incur high computational and communication costs, making these systems impractical for the distribution of large files, such as music albums or movies.

    Subject Index Volumes 1–200


    Validating supervised learning approaches to the prediction of disease status in neuroimaging

    Alzheimer’s disease (AD) is a serious global health problem with growing human and monetary costs. Neuroimaging data offers a rich source of information about pathological changes in the brain related to AD, but its high dimensionality makes it difficult to fully exploit using conventional methods. Automated neuroimage assessment (ANA) uses supervised learning to model the relationships between imaging signatures and measures of disease. ANA methods are assessed on the basis of their predictive performance, which is measured using cross-validation (CV). Despite its ubiquity, CV is not always well understood, and there is a lack of guidance as to best practice. This thesis is concerned with the practice of validation in ANA. It introduces several key challenges and considers potential solutions, including several novel contributions. Part I reviews the field and introduces key theoretical concepts related to CV. Part II is concerned with bias due to selective reporting of performance results; it describes an empirical investigation into the likely level of this bias in the ANA literature and the relative importance of several contributory factors, and then discusses mitigation strategies. Part III is concerned with the optimal selection of CV strategy with respect to bias, variance, and computational cost. Part IV is concerned with the statistical analysis of CV performance results; it discusses the failure of conventional statistical procedures, reviews previous alternative approaches, and demonstrates a new heuristic solution that fares well in preliminary investigations. Though the focus of this thesis is AD ANA, the issues it addresses are of great importance to all applied machine-learning fields where samples are limited and predictive performance is critical.
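    As a point of reference for the validation practice the thesis examines, a minimal K-fold cross-validation estimator is sketched below in Python. It is generic scaffolding, not a method from the thesis: fit and predict stand in for any supervised learner, and all names are placeholders.

        import numpy as np

        def k_fold_cv(X, y, fit, predict, k=5, seed=None):
            # Shuffle once, split into k folds, train on k-1 folds, test on the rest.
            rng = np.random.default_rng(seed)
            folds = np.array_split(rng.permutation(len(y)), k)
            scores = []
            for i in range(k):
                test = folds[i]
                train = np.concatenate([folds[j] for j in range(k) if j != i])
                model = fit(X[train], y[train])
                scores.append(np.mean(predict(model, X[test]) == y[test]))
            # Mean fold accuracy is the CV estimate; the fold-to-fold spread
            # hints at (but understates) the estimator's variance.
            return float(np.mean(scores)), float(np.std(scores))

    Choices such as k, repetition, and whether splits respect subject or scanner structure are the bias/variance/cost trade-offs Part III weighs, and naive reuse of fold scores in conventional significance tests is the failure mode Part IV addresses.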