29 research outputs found

    Alternative Options for Business Decisions Using Nearly Optimal Programming

    No full text
    Linear Programming is a quantitative method for finding the optimal solution when there are limitations on resources. The technique is used extensively in a variety of areas, including business decision making. One shortcoming of mathematical modeling in general, and linear programming in particular, is that these models can only approximate the actual constraints in the system. The optimal solution may therefore not be the best course of action for a company, since there may be unquantifiable restrictions that could not be represented in the model. This paper illustrates a technique called Nearly Optimal Programming, which generates multiple solutions whose objective values are very close to the optimum. This allows the decision maker to choose from a variety of solutions, or even to combine different solutions to produce another solution with desired characteristics. The example, taken from the literature, concerns a multinational company.
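    A minimal sketch of one way such near-optimal alternatives can be generated: solve a small linear program, then re-optimize secondary objectives while the original cost is constrained to stay within a small tolerance of the optimum. The LP data below are illustrative placeholders, not the multinational example from the paper, and the re-optimization scheme is one common approximation of Nearly Optimal Programming rather than the paper's exact procedure.

    # Sketch: enumerate near-optimal alternatives for a small, illustrative LP.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 5.0])              # illustrative unit costs (minimize)
    A = np.array([[1.0, 2.0],             # illustrative resource usage limits
                  [3.0, 1.0],
                  [-1.0, -1.0]])          # demand x1 + x2 >= 5, written as -x1 - x2 <= -5
    b = np.array([14.0, 18.0, -5.0])

    opt = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    z_star = opt.fun

    tol = 0.02                             # accept solutions within 2% of the optimal cost
    A_near = np.vstack([A, c])             # add the constraint c @ x <= (1 + tol) * z_star
    b_near = np.concatenate([b, [(1 + tol) * z_star]])

    alternatives = [opt.x]
    for j in range(len(c)):
        for sign in (1.0, -1.0):           # push each decision variable up and down
            c_alt = np.zeros(len(c))
            c_alt[j] = sign
            res = linprog(c_alt, A_ub=A_near, b_ub=b_near,
                          bounds=[(0, None)] * 2, method="highs")
            if res.status == 0:
                alternatives.append(res.x)

    for x in alternatives:
        print(np.round(x, 3), "cost =", round(float(c @ x), 3))

    Each printed solution is feasible and costs at most 2% more than the optimum, giving the decision maker a menu of alternatives rather than a single answer.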

    Educational Institutions Must Keep Pace With Changing Computer Technology

    No full text
    Focuses on the need for educational institutions to keep pace with changes in computer technology. Topics include curriculum development and reform; stages of concern about innovation; and an examination of departmental and institutional variables.

    The Challenges of Real World Analytics: A Case Study of a Municipal Water Authority Data Analysis

    No full text
    In the consulting world, companies hire statistical consultants to help them understand very large data sets. From the consultant’s point of view, these data sets will often, if not always, contain many inconsistencies. Without careful review and preparation of the data a company submits, it can take a significant amount of time simply to get the data into the correct format for analysis. It is often true that the company’s management not only has the data in very poor shape, but also finds it hard to state clearly what goal it has or what question it would like answered. This is a twofold problem: not only is cleaning the data imperative, but the consultant must also begin to make overall sense of the data and formulate questions that can be answered with what is available. This seems straightforward, but it is quite time-consuming. This paper reviews a consulting project with a municipal water authority. The water authority wanted analytical help understanding data it had collected, and it had only vague notions of what information could truly be gleaned from the data. It took well over a year to clean the data into an understandable format and to clarify the actual, meaningful questions that could be addressed. This is a problem that consultants wrestle with on a regular basis. In the final analysis for this project, the consultants could only suggest to the water authority that more information needed to be secured so that a more in-depth analysis could be completed.
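    A minimal, illustrative sketch of the kind of data preparation described above. The file name and column names (meter_id, read_date, usage_gal) are hypothetical; the water authority's actual data and its specific inconsistencies are not reproduced here.

    # Sketch: normalize column names, coerce types, and flag unusable rows for review.
    import pandas as pd

    raw = pd.read_csv("meter_readings.csv", dtype=str)   # hypothetical input file

    clean = (
        raw.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
           .drop_duplicates()
    )

    # Coerce dates and usage values, marking anything that cannot be parsed.
    clean["read_date"] = pd.to_datetime(clean["read_date"], errors="coerce")
    clean["usage_gal"] = pd.to_numeric(clean["usage_gal"], errors="coerce")

    # Keep a log of rows dropped for missing key fields so the client can review them.
    bad_rows = clean[clean["read_date"].isna() | clean["usage_gal"].isna()]
    clean = clean.dropna(subset=["read_date", "usage_gal"])

    print(f"kept {len(clean)} rows, flagged {len(bad_rows)} for review")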

    The Importance of Teaching Power in Statistical Hypothesis Testing

    No full text
    In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value when the null hypothesis turns out to be false. Although power is an important concept, since failing to reject a false null hypothesis may have serious consequences, it is a topic not often covered in any depth in a basic statistics class, and it is often ignored by practitioners. Considerations of power help to determine appropriate sample sizes for studies and also force one to consider different effect sizes. These are important concepts but difficult for beginning statistics students to understand. We illustrate how one can provide a simple classroom demonstration, using applets provided by Visual Statistics 2.0, of how to calculate power and at the same time convince students of the importance of power considerations. Specifically, for beginning students, we focus on a common statistical hypothesis testing example, a one-sample test of hypothesis for the mean. For this case, we examine the power of the test at varying levels of significance, sample sizes, standard deviations, and effect sizes, all factors that are important to the results of a testing situation. We then illustrate how students, depending on time and resources, can reproduce these power calculations themselves using several statistical software packages. The use of statistical software will also be very helpful to those students who become practitioners upon graduation. Finally, examples of power analyses are provided for more advanced problems, such as analysis of variance and regression analysis, which might be used in a second-semester statistics course.
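    A minimal sketch of the kind of classroom power calculation described above, assuming a two-sided one-sample z-test with known standard deviation. The paper's applet-based demonstration is not reproduced; the particular sample sizes, effect sizes, and sigma below are illustrative.

    # Sketch: power of a two-sided one-sample z-test across sample sizes and effect sizes.
    import numpy as np
    from scipy.stats import norm

    def power_one_sample_z(effect, sigma, n, alpha=0.05):
        """Power to detect a shift of `effect` in the mean with a two-sided z-test."""
        z_crit = norm.ppf(1 - alpha / 2)
        shift = effect * np.sqrt(n) / sigma
        return norm.cdf(-z_crit + shift) + norm.cdf(-z_crit - shift)

    # Power rises with sample size and effect size, and falls as sigma grows.
    for n in (10, 30, 100):
        for effect in (0.2, 0.5, 1.0):
            print(f"n={n:3d}  effect={effect:.1f}  "
                  f"power={power_one_sample_z(effect, sigma=1.0, n=n):.3f}")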

    Using Simulation as an Integrated Teaching Tool in the Mathematics Classroom

    No full text
    Illustrates how simulation can be used as an integral teaching tool in different mathematics courses. Simulation offers the student a hands-on type of learning that can greatly clarify theoretical methodology in mathematics. The student can explore new dimensions of a concept by introducing different parameters into the simulation model. The use of simulation is discussed in the context of three different mathematics courses: statistics, calculus, and quantitative analysis. Illustrations are provided to show how easily simulation can be performed using several different but popular packages on a personal computer.
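    A minimal sketch of the kind of classroom simulation described above: a Monte Carlo estimate of a definite integral (calculus) and of a sampling distribution (statistics). The specific examples are illustrative and are not taken from the paper.

    # Sketch: two small classroom simulations with NumPy.
    import numpy as np

    rng = np.random.default_rng(1)

    # Calculus: estimate the integral of x**2 on [0, 3] (exact value 9) by sampling.
    x = rng.uniform(0.0, 3.0, size=100_000)
    print("Monte Carlo integral estimate:", 3.0 * np.mean(x ** 2))

    # Statistics: sampling distribution of the mean of an exponential(1) sample of size 30.
    means = rng.exponential(scale=1.0, size=(10_000, 30)).mean(axis=1)
    print("mean of sample means:", means.mean(), " standard error:", means.std(ddof=1))

    Changing the sample size, the distribution, or the integrand lets students see directly how the simulated results respond to the parameters of the model.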

    An Oversampling Technique for Classifying Imbalanced Datasets

    No full text
    We propose an oversampling technique to increase the true positive rate (sensitivity) in classifying imbalanced datasets (i.e., those in which one value of the target variable occurs with small frequency) and hence boost overall performance measures such as balanced accuracy, G-mean, and the area under the receiver operating characteristic (ROC) curve, AUC. The oversampling method is based on the idea of applying the Synthetic Minority Oversampling Technique (SMOTE) to only a selective portion of the dataset instead of the entire dataset. We demonstrate the effectiveness of the method with four real and simulated datasets generated from three models.
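    A minimal sketch of SMOTE-style oversampling applied to a selected subset of the minority class. The data and the selection rule below are illustrative assumptions; the paper's actual criterion for choosing the portion of the dataset to oversample is not reproduced here.

    # Sketch: generate synthetic minority points by interpolating toward nearest neighbors.
    import numpy as np

    rng = np.random.default_rng(0)

    def smote_subset(points, n_synthetic, k=5):
        """Create synthetic points between each chosen point and one of its k nearest neighbors."""
        synthetic = []
        for _ in range(n_synthetic):
            i = rng.integers(len(points))
            d = np.linalg.norm(points - points[i], axis=1)   # distances to the other points
            neighbors = np.argsort(d)[1:k + 1]
            j = rng.choice(neighbors)
            gap = rng.uniform()                              # interpolate a random fraction of the way
            synthetic.append(points[i] + gap * (points[j] - points[i]))
        return np.array(synthetic)

    # Illustrative imbalanced data: 500 majority points, 25 minority points.
    X_maj = rng.normal(0.0, 1.0, size=(500, 2))
    X_min = rng.normal(2.0, 1.0, size=(25, 2))

    # Hypothetical "selective" step: oversample only the minority points closest to the
    # majority centroid (a stand-in for whatever portion of the data the method targets).
    dist_to_maj = np.linalg.norm(X_min - X_maj.mean(axis=0), axis=1)
    selected = X_min[np.argsort(dist_to_maj)[:10]]

    X_new = smote_subset(selected, n_synthetic=100)
    print("synthetic minority points generated:", X_new.shape)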

    Defined Benefit and Defined Contribution Retirement Plan Simulations

    No full text
    The focus of this paper is the effect of changes to employer-sponsored retirement plans on employee retirement benefits. Today’s retirement benefits consist mainly of three types of plans: defined benefit (DB), defined contribution (DC), and “hybrid” plans. Many employers have changed the type of plan they offer in recent years; specifically, there has been a shift from DB plans to DC plans. A retirement benefit comparison is made between a DB plan and a DC plan under different scenarios that depend on years of work, market yield, interest rates, and predicted wage increases. Using simulation modeling, DB and DC benefits are compared over different career lengths for a worker with a starting salary of $50,000. Simulated fluctuations in annual market yield and average national wage increases are used to project DC balances. DB benefits are simulated using random fluctuations in both wage increases and the interest rates used for lump-sum conversions. The resulting simulated benefits show that DC plans are generally inferior to DB plans. In addition, simulated DC retirement account balances have a much higher standard deviation than the present value of traditional DB plan annuities. However, DB benefits that are converted to lump sums have standard deviations closer to those of DC lump sums, due to the variability of the interest rates used in converting DB annuities to a lump sum.
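    A minimal sketch of the kind of career-length simulation described above. All parameters (contribution rate, yield and wage-growth distributions, DB accrual rate) are illustrative assumptions, not the paper's actual inputs, and the DB lump-sum conversion step is omitted.

    # Sketch: simulate a DC account balance and a stylized DB annuity over one career.
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_career(years, start_salary=50_000, contrib_rate=0.10,
                        yield_mean=0.06, yield_sd=0.12, wage_mean=0.03, wage_sd=0.01):
        salary, dc_balance = start_salary, 0.0
        for _ in range(years):
            dc_balance *= 1 + rng.normal(yield_mean, yield_sd)   # random annual market yield
            dc_balance += contrib_rate * salary                  # annual contribution
            salary *= 1 + rng.normal(wage_mean, wage_sd)         # random wage increase
        # Stylized DB benefit: 1.5% of final salary per year of service, as an annual annuity.
        db_annuity = 0.015 * years * salary
        return dc_balance, db_annuity

    results = np.array([simulate_career(30) for _ in range(5_000)])
    dc, db = results[:, 0], results[:, 1]
    print(f"DC balance at retirement: mean ${dc.mean():,.0f}, sd ${dc.std(ddof=1):,.0f}")
    print(f"DB annual annuity:        mean ${db.mean():,.0f}, sd ${db.std(ddof=1):,.0f}")

    Under these illustrative assumptions, the spread of simulated DC balances is driven mainly by the year-to-year variability in market yield, which is the mechanism behind the higher standard deviations reported for DC accounts.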

    The Comparative Efficacy of Imputation Methods for Missing Data in Structural Equation Modeling

    No full text
    Missing data is a problem that permeates much of the research being done today. Traditional techniques for replacing missing values may have serious limitations. Recent developments in computing allow more sophisticated techniques to be used. This paper compares the efficacy of five current and promising methods for dealing with missing data, judged by the percent bias in the resulting parameter estimates. The focus of the paper is structural equation modeling (SEM), a popular statistical technique that subsumes many traditional statistical procedures. To make the comparison, the paper examines a full structural equation model generated by simulation in accord with previous research. The five techniques compared are expectation maximization (EM), full information maximum likelihood (FIML), mean substitution (Mean), multiple imputation (MI), and regression imputation (Regression). All of these techniques, other than FIML, impute missing data and result in a complete dataset that can be used by researchers for other purposes; FIML, on the other hand, estimates the model parameters directly without producing a completed dataset. The study involves two levels of sample size (100 and 500) and seven levels of incomplete data (2%, 4%, 8%, 12%, 16%, 24%, and 32% missing completely at random). After extensive bootstrapping and simulation, the results indicate that FIML is a superior method for estimating most types of parameters in an SEM framework, while MI is found to be superior in estimating standard errors. MI is also an excellent estimator overall, with the exception of datasets with over 24% missing information. Considering that FIML is a direct method and does not actually impute the missing data, whereas MI does, and so can yield a complete dataset for the researcher to analyze, we conclude that MI, because of its theoretical and distributional underpinnings, is probably the most promising method for future applications in this field.
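    A minimal sketch comparing two of the imputation methods mentioned above, mean substitution and regression imputation, on a simple simulated regression with data missing completely at random. The full SEM comparison, FIML, EM, and MI are not reproduced; the model, missingness rate, and sample sizes here are illustrative.

    # Sketch: bias in an estimated slope under mean substitution vs. regression imputation.
    import numpy as np

    rng = np.random.default_rng(7)
    true_slope, n, reps, miss_rate = 0.8, 500, 200, 0.16

    bias_mean, bias_reg = [], []
    for _ in range(reps):
        x = rng.normal(size=n)
        y = true_slope * x + rng.normal(scale=0.6, size=n)
        missing = rng.random(n) < miss_rate              # MCAR mask on y

        # Mean substitution: replace missing y with the observed mean.
        y_mean = y.copy()
        y_mean[missing] = y[~missing].mean()
        bias_mean.append(np.polyfit(x, y_mean, 1)[0] - true_slope)

        # Regression imputation: predict missing y from x using the complete cases.
        b, a = np.polyfit(x[~missing], y[~missing], 1)
        y_reg = y.copy()
        y_reg[missing] = a + b * x[missing]
        bias_reg.append(np.polyfit(x, y_reg, 1)[0] - true_slope)

    print("mean substitution      average slope bias:", round(float(np.mean(bias_mean)), 4))
    print("regression imputation  average slope bias:", round(float(np.mean(bias_reg)), 4))

    In this toy setting, mean substitution attenuates the slope roughly in proportion to the missingness rate, while regression imputation recovers it nearly without bias, illustrating why the choice of method matters as the fraction of missing data grows.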