
    Generalized resolution for orthogonal arrays

    The generalized word length pattern of an orthogonal array allows a ranking of orthogonal arrays in terms of the generalized minimum aberration criterion (Xu and Wu [Ann. Statist. 29 (2001) 1066-1077]). We provide a statistical interpretation for the number of shortest words of an orthogonal array in terms of sums of R^2 values (based on orthogonal coding) or sums of squared canonical correlations (based on arbitrary coding). Directly related to these results, we derive two versions of generalized resolution for qualitative factors, both of which are generalizations of the generalized resolution by Deng and Tang [Statist. Sinica 9 (1999) 1071-1082] and Tang and Deng [Ann. Statist. 27 (1999) 1914-1926]. We provide a sufficient condition for one of these to attain its upper bound, and we provide explicit upper bounds for two classes of symmetric designs. Factor-wise generalized resolution values provide useful additional detail. Comment: Published at http://dx.doi.org/10.1214/14-AOS1205 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
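
    For reference, the two-level generalized resolution of Deng and Tang that these results generalize can be sketched as follows (our notation, not the paper's; D is an n-run design with ±1-coded columns, s a set of k columns, and J_k(s) its J-characteristic):

    % Sketch of the two-level generalized resolution (Deng and Tang, 1999).
    % r is the length of the shortest "word", i.e. the smallest k with a nonzero J_k.
    \[
      J_k(s) \;=\; \Bigl|\sum_{i=1}^{n} \prod_{c \in s} x_{ic}\Bigr|,
      \qquad
      r \;=\; \min\{\,k : \max_{|s|=k} J_k(s) > 0\,\},
    \]
    \[
      \mathrm{GR}(D) \;=\; r + 1 - \frac{\max_{|s|=r} J_r(s)}{n}.
    \]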

    Tuning optimization software parameters for mixed integer programming problems

    The tuning of optimization software is of key interest to researchers solving mixed integer programming (MIP) problems. The efficiency of the optimization software can be greatly impacted by the solver’s parameter settings and the structure of the MIP. A designed-experiment approach is used to fit a statistical model that suggests parameter settings providing the largest reduction in the primal integral metric. Using tuning exemplars with six and 59 factors (parameters) of the optimization software, experimentation takes place on three classes of MIPs: survivable fixed telecommunication network design, a formulation of the support vector machine with the ramp loss and L1-norm regularization, and node packing for coding theory graphs. This research presents and demonstrates a framework for tuning a portfolio of MIP instances, not only to obtain good parameter settings for future instances of the same class of MIPs, but also to gain insight into which parameters and parameter interactions are significant for that class of MIPs. The framework is also used to benchmark solvers with tuned parameters on a portfolio of instances. A group screening method reduces the number of factors in a design and shortens the tuning process. Portfolio benchmarking provides performance information for optimization solvers on a class of instances with a similar structure.
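
    As a rough illustration of the designed-experiment idea (not the dissertation's actual framework), the sketch below fits a main-effects-plus-interactions model to primal-integral measurements from a small two-level factorial over hypothetical solver parameters; the parameter names and the run_solver stub are made up.

    # Hedged sketch: fit a main-effects + two-way-interaction model to
    # primal-integral results from a 2^3 factorial over hypothetical solver knobs.
    import itertools
    import numpy as np

    PARAMS = ["presolve", "cuts", "heuristics"]   # hypothetical parameter names

    def run_solver(setting):
        """Placeholder for solving one MIP instance and returning its primal integral."""
        rng = np.random.default_rng(hash(setting) % (2**32))
        return 10.0 - 2.0 * setting[1] + rng.normal(scale=0.5)   # synthetic response

    # 2^3 full factorial in coded -1/+1 levels.
    design = list(itertools.product([-1.0, 1.0], repeat=len(PARAMS)))
    y = np.array([run_solver(s) for s in design])

    # Model matrix: intercept, main effects, two-way interactions.
    X = []
    for s in design:
        row = [1.0] + list(s)
        row += [s[i] * s[j] for i, j in itertools.combinations(range(len(s)), 2)]
        X.append(row)
    X = np.array(X)

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    labels = (["intercept"] + PARAMS +
              [f"{a}*{b}" for a, b in itertools.combinations(PARAMS, 2)])
    for name, c in zip(labels, coef):
        print(f"{name:>20s}: {c:+.3f}")
    # The fitted effects point to the settings with the largest estimated
    # reduction in the primal integral.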

    Implications of heterogeneity in discrete choice analysis

    This dissertation carries out a series of Monte Carlo simulations to assess the implications for welfare estimates of three research practices commonly implemented in empirical applications of mixed logit and latent class logit. Chapter 3 compares welfare measures across conditional logit, mixed logit, and latent class logit. The practice of comparing welfare estimates is widely used in the field. However, this chapter shows that such comparisons seem unable to provide reliable information about the differences in welfare estimates that result from controlling for unobserved heterogeneity, because estimates from mixed logit and latent class logit are inherently inefficient and inaccurate. Researchers tend to use their own judgement to select the number of classes of a latent class logit. Chapter 4 studies the reliability of welfare estimates obtained under two scenarios in which an empirical researcher using his or her judgement would arguably choose fewer classes than the true number. Results show that models with fewer classes than the true number tend to yield downward-biased and inaccurate estimates. The latent class logit with the true number of classes always yields unbiased estimates, but their accuracy may be worse than that of models with fewer classes. Studies implementing discrete choice experiments commonly obtain estimates of preference parameters from latent class logit models. This practice, however, implies a mismatch: discrete choice experiments are designed under the assumption of homogeneous preferences, whereas latent class logit searches for heterogeneous preferences. Chapter 5 studies whether welfare estimates are robust to this mismatch and whether the number of choice tasks impacts the reliability of welfare estimates. The findings show that welfare estimates are unbiased regardless of the number of choice tasks, and their accuracy increases with the number of choice tasks. However, some of the welfare estimates are so inefficient that they cannot be statistically distinguished from zero, regardless of the number of choice tasks. Implications of these findings for the empirical literature are discussed.
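
    To make the latent class logit structure concrete, the sketch below computes class-mixed choice probabilities and per-class willingness-to-pay with made-up class shares and coefficients; it is illustrative only and not the chapters' Monte Carlo design.

    # Hedged sketch of a latent class logit: class-mixed choice probabilities and
    # willingness-to-pay (WTP) per class, with illustrative numbers only.
    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    # Two alternatives described by (price, quality); beta_c = (b_price, b_quality).
    X = np.array([[4.0, 1.0],
                  [6.0, 2.0]])
    classes = [
        {"share": 0.6, "beta": np.array([-0.8, 1.2])},   # price-sensitive class
        {"share": 0.4, "beta": np.array([-0.2, 0.9])},   # quality-oriented class
    ]

    probs = sum(c["share"] * softmax(X @ c["beta"]) for c in classes)
    print("mixed choice probabilities:", probs)

    for k, c in enumerate(classes):
        wtp = -c["beta"][1] / c["beta"][0]               # marginal WTP for quality
        print(f"class {k}: WTP for one unit of quality = {wtp:.2f}")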

    Analyzing policy capturing data using structural equation modeling for within-subject experiments (SEMWISE)

    We present the SEMWISE (structural equation modeling for within-subject experiments) approach for analyzing policy capturing data. Policy capturing entails estimating the weights (or utilities) of experimentally manipulated attributes in predicting a response variable of interest (e.g., the effect of experimentally manipulated market-technology combination characteristics on perceived entrepreneurial opportunity). In the SEMWISE approach, a factor model is specified in which latent weight factors capture individually varying effects of experimentally manipulated attributes on the response variable. We describe the core SEMWISE model and propose several extensions (how to incorporate nonbinary attributes and interactions, model multiple indicators of the response variable, relate the latent weight factors to antecedents and/or consequences, and simultaneously investigate several populations of respondents). The primary advantage of the SEMWISE approach is that it facilitates the integration of individually varying policy capturing weights into a broader nomological network while accounting for measurement error. We illustrate the approach with two empirical examples, compare and contrast the SEMWISE approach with multilevel modeling (MLM), discuss how researchers can choose between SEMWISE and MLM, and provide implementation guidelines.
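
    As a point of comparison for the MLM alternative mentioned above (not the SEMWISE factor model itself), a minimal random-slope sketch in Python, with hypothetical column names and simulated data, might look like the following; the estimated random-slope variance for attr1 plays the role of individually varying policy capturing weights.

    # Hedged sketch of the multilevel-modeling (MLM) alternative: respondent-specific
    # random slopes capture individually varying attribute weights. Column names
    # and data are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_resp, n_scen = 50, 16
    df = pd.DataFrame({
        "respondent": np.repeat(np.arange(n_resp), n_scen),
        "attr1": rng.choice([0, 1], size=n_resp * n_scen),
        "attr2": rng.choice([0, 1], size=n_resp * n_scen),
    })
    slopes = rng.normal(1.0, 0.5, size=n_resp)    # individually varying weight on attr1
    df["rating"] = (slopes[df["respondent"]] * df["attr1"]
                    + 0.5 * df["attr2"]
                    + rng.normal(scale=1.0, size=len(df)))

    # Random intercept and random slope for attr1 within respondents.
    model = smf.mixedlm("rating ~ attr1 + attr2", df,
                        groups=df["respondent"], re_formula="~attr1")
    print(model.fit().summary())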

    Cyber-physical business systems modelling: advancing Industry 4.0

    The dynamic digital age drives contemporary multinationals to focus on delivering world-class business solutions through the use of advanced technology. Contemporary multinationals are present-day businesses primarily engaged in generating profit. These complex multinationals offer value through the manufacture, sale, and management of products and services. Disruptive strategies in operations, driven by emerging technological innovations, demand continuous business improvement. These opportunities span operations, enterprise systems, engineering management, and research. Business sustainability is a strategic priority for delivering exceptional digital solutions. The Fourth Industrial Revolution (4IR) offers significant technological advancements for total business sustainability; its underlying technologies include Cyber-Physical Systems (CPS). The collective challenges of a large global business are not easy to predict. CPS approaches offer the sustainable prospects required to integrate and model physical systems in real time, driven by 4IR implementations. The goal of this thesis is to develop a CPS model suitable for self-prediction and for determining ideal operational practice driven by 4IR technologies. The model is intended as a novel tool for comprehensive business evaluation and optimisation, able to work alongside current operations and to predict the impact of change on a complex business. D.Phil. (Engineering Management)

    Wind-Tunnel Balance Characterization for Hypersonic Research Applications

    Wind-tunnel research was recently conducted at the NASA Langley Research Center's 31-Inch Mach 10 Hypersonic Facility in support of the Mars Science Laboratory's aerodynamic program. Researchers were interested in understanding the interaction between the freestream flow and the reaction control system onboard the entry vehicle. A five-component balance, designed for hypersonic testing with pressurized flow-through capability, was used. In addition to the aerodynamic forces, the balance was exposed to both thermal gradients and varying internal cavity pressures. Historically, the effect of these environmental conditions on the response of the balance has not been fully characterized due to limitations in the calibration facilities. Through statistical design of experiments, thermal and pressure effects were strategically and efficiently integrated into the calibration of the balance. As a result of this new approach, researchers were able to use the balance continuously throughout the wide range of temperatures and pressures and obtain real-time results. Although this work focused on a specific application, the methodology shown can be applied more generally to any force-measurement system calibration.
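
    To illustrate the kind of expanded calibration model such an approach implies (not the facility's actual math model), the sketch below regresses a synthetic bridge response on load, temperature, and cavity-pressure terms; the term list and numbers are assumptions.

    # Hedged sketch: a calibration-style regression in which a balance component's
    # response is modeled from applied load plus temperature and cavity-pressure
    # terms. The term list is illustrative, not the facility's actual math model.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    load = rng.uniform(-1, 1, n)          # coded applied load
    temp = rng.uniform(-1, 1, n)          # coded temperature gradient
    pres = rng.uniform(-1, 1, n)          # coded internal cavity pressure

    # Synthetic bridge output with small thermal and pressure sensitivities.
    response = (2.0 * load + 0.15 * temp + 0.08 * pres + 0.05 * load * temp
                + rng.normal(scale=0.02, size=n))

    X = np.column_stack([np.ones(n), load, temp, pres, load * temp, load * pres])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    for name, c in zip(["intercept", "load", "temp", "pres", "load*temp", "load*pres"], coef):
        print(f"{name:>10s}: {c:+.4f}")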

    Self-Validated Ensemble Modelling

    An important objective when performing designed experiments is to build models that predict the future performance of the system under study, e.g. to predict future yields of a bio-process used to manufacture therapeutic proteins. Because experimentation is costly, experimental designs are structured to be efficient in terms of the number of trials while providing substantial information about the behavior of the physical system. The strategy for building accurate predictive models on larger data sets is to partition the data into a training set, used to fit the model, and a validation set, used to assess prediction performance. Models are selected that have the lowest prediction error on the validation set. However, designed experiments are usually small in sample size and have a fixed structure, which precludes partitioning of any kind; the entire set must be used for training. Contemporary methods use information criteria such as the AICc or BIC with model algorithms such as Forward Selection or Lasso to select candidate models. These surrogate prediction measures often produce models with poor prediction performance relative to models selected using a validation procedure such as cross-validation. This approach also relies on a single fit from a model algorithm, which we show to be insufficient. We propose a novel approach that allows the original data set to function as both a training set and a validation set. We accomplish this auto-validation strategy by employing a unique fractionally re-weighted bootstrapping technique. The weighting scheme is structured to induce anti-correlation between the original set and the auto-validation copy. We randomly assign new fractional weights using the bootstrap algorithm and fit a predictive model. This procedure is iterated many times, producing a new model each time; the final model is the average of these models. We refer to this new methodology as Self-Validated Ensemble Modeling (SVEM). In this dissertation we investigate the performance of the SVEM algorithm across various scenarios: different model selection algorithms, different designs with varying sample sizes, model noise levels, and sparsity. This investigation shows that SVEM outperforms contemporary one-shot model selection approaches.
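
    A minimal sketch of one fractionally re-weighted auto-validation loop in this spirit is given below. The specific weighting scheme (training weights -ln u, anti-correlated validation weights -ln(1-u)), the Lasso base learner, and the simulated data are assumptions for illustration and may differ from the dissertation's exact procedure.

    # Hedged sketch of a fractionally re-weighted auto-validation loop in the
    # spirit of SVEM: anti-correlated training/validation weights from the same
    # uniform draws, pick the lasso penalty with the lowest weighted validation
    # error, and average the selected models' coefficients.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(42)
    n, p = 20, 8
    X = rng.normal(size=(n, p))
    y = X @ np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0]) + rng.normal(scale=0.5, size=n)

    alphas = np.logspace(-3, 0, 20)
    n_boot = 100
    coef_sum = np.zeros(p)
    intercept_sum = 0.0

    for _ in range(n_boot):
        u = rng.uniform(size=n)
        w_train = -np.log(u)            # fractional training weights
        w_valid = -np.log(1.0 - u)      # anti-correlated validation weights
        best_err, best_model = np.inf, None
        for a in alphas:
            m = Lasso(alpha=a, max_iter=10_000).fit(X, y, sample_weight=w_train)
            err = np.average((y - m.predict(X)) ** 2, weights=w_valid)
            if err < best_err:
                best_err, best_model = err, m
        coef_sum += best_model.coef_
        intercept_sum += best_model.intercept_

    print("ensemble coefficients:", np.round(coef_sum / n_boot, 3))
    print("ensemble intercept:", round(intercept_sum / n_boot, 3))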

    Joint optimization of allocation and release policy decisions for surgical block time under uncertainty

    The research presented in this dissertation contributes to the growing literature on applications of operations research methodology to healthcare problems through the development and analysis of mathematical models and simulation techniques that find practical solutions to fundamental problems facing nearly all hospitals. In practice, surgical block schedule allocation is usually determined without regard to the stochastic nature of case demand and duration. Once blocks are allocated, the associated block time release policies, if used at all, are often simple rules that may be far from optimal. Although previous research has examined these decisions individually, our model considers them jointly. A multi-objective model that characterizes financial, temporal, and clinical measures is used within a simulation optimization framework. The model is also used to test “conventional wisdom” solutions and to identify improved practical approaches. Our results from scheduling multi-priority patients at the Stafford hospital highlight the importance of jointly optimizing the block schedule and the block release policy for quality of care and revenue, taking into account current resources and performance. The proposed model suggests a new approach for hospitals and OR managers to investigate the dynamic interaction of these decisions and to evaluate the impact of changes in the surgical schedule on operating room usage and patient waiting time, where patients have different sensitivities to waiting time. This study also investigated the performance of multiple scheduling policies with multi-priority patients; experiments were conducted to assess their impact on patient waiting time and hospital profit. Our results confirmed that the proposed threshold-based reserve policy outperforms common scheduling policies by preserving a specific amount of OR time for late-arriving, high-priority demand.
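
    As a purely illustrative sketch (not the dissertation's simulation optimization model), the toy simulation below compares reserve thresholds that hold back part of a block's capacity for late-arriving, high-priority cases; all durations, arrival rates, and the acceptance rule are assumptions.

    # Hedged toy simulation of a threshold-based reserve policy: elective requests
    # are accepted only while booked time stays below (capacity - reserve), so the
    # reserved slack remains available for late, high-priority cases.
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate(reserve, capacity=480, days=1000):
        urgent_overflows, elective_rejections = [], 0
        for _ in range(days):
            booked = 0.0
            # Elective requests arrive first (durations in minutes).
            for dur in rng.normal(120, 30, size=rng.poisson(4)):
                if booked + dur <= capacity - reserve:
                    booked += dur
                else:
                    elective_rejections += 1
            # Late-arriving urgent cases use whatever time is left.
            for dur in rng.normal(90, 20, size=rng.poisson(1)):
                urgent_overflows.append(max(0.0, booked + dur - capacity))
                booked += dur
        return np.mean(urgent_overflows), elective_rejections

    for reserve in (0, 60, 120):
        overflow, rejected = simulate(reserve)
        print(f"reserve={reserve:3d} min: mean urgent overflow={overflow:5.1f} min, "
              f"elective requests turned away={rejected}")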

    Analysing freight shippers' mode choice preference heterogeneity using latent class modelling

    This paper describes a study to improve understanding of the decision-making process of New Zealand firms, freight shippers and agents when making freight transport mode choice decisions. Such studies, despite their importance, are relatively scarce due to data confidentiality concerns, which restrain firms from taking part. To achieve the objective, we use latent class (LC) modelling, which postulates that firms’ behaviour depends on two components: 1) observable attributes, such as travel distance and size of operations; and 2) unobserved latent heterogeneity. The latter is taken into account by sorting firms into a number of classes based on similarities in their characteristics. Subsequently, the behaviour of firms in each class is explained by a set of parameter estimates that differs from the sets assigned to other classes. In this study, data were gathered through stated preference surveys of 190 NZ firms, freight shippers and agents. Based on their freight operations, participants were grouped into: 1) long-haul and large shipments, and 2) long-haul and small shipments. As each participant evaluated 18 choice scenarios, the data set contains 3,420 choice records. The results of the LC modelling allow policy makers to design more appropriate strategies and policies for different segments of the population to improve intermodal transport and to attract the largest latent class in both cases. In addition, the LC model indicates that the potential improvement in modal shift achievable through different policy options varies with both transport distance and shipment size. Furthermore, in order to promote sustainable freight transport, one policy would be to increase the reliability of both rail and sea freight transport services.
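
    For reference, the panel form of the latent class likelihood underlying such an analysis (written here in standard notation, which may differ from the paper's) treats a firm's T = 18 choice tasks as conditionally independent within a class:

    % Panel latent class logit likelihood (standard form; notation is ours).
    % Firm n answers T choice tasks; pi_c are class shares, beta_c class-specific
    % coefficients, and y_{nt} the alternative chosen in task t.
    \[
      L_n \;=\; \sum_{c=1}^{C} \pi_c \prod_{t=1}^{T}
        \frac{\exp\bigl(x_{n t y_{nt}}'\beta_c\bigr)}
             {\sum_{j=1}^{J} \exp\bigl(x_{ntj}'\beta_c\bigr)},
      \qquad
      \log L \;=\; \sum_{n=1}^{N} \log L_n .
    \]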