
    A Flexible Method for Empirically Estimating Probability Functions

    This paper presents a hyperbolic trigonometric (HT) transformation procedure for empirically estimating a cumulative distribution function (cdf), from which the probability density function (pdf) can be obtained by differentiation. Maximum likelihood (ML) is the appropriate estimation technique, but a particularly appealing feature of the HT transformation, as opposed to other zero-one transformations, is that the transformed cdf can be fitted with ordinary least squares (OLS) regression. Although OLS estimates are biased and inconsistent, they are usually very close to ML estimates; thus, using OLS estimates as starting values greatly facilitates the numerical search procedures needed to obtain ML estimates, which have desirable asymptotic properties. The procedure is no more difficult to use than unconstrained nonlinear regression. Advantages of the procedure over alternative procedures for fitting probability functions are discussed, and its use is illustrated by application to two sets of yield response data.
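
    The abstract does not give the paper's exact functional form, so the sketch below assumes an illustrative parameterization F(y) = (1 + tanh(a + b*y))/2, which the inverse transform artanh(2F - 1) = a + b*y turns into a linear model: the empirical cdf can then be fitted by OLS, and those estimates seed a numerical ML search. The data and parameterization are assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data standing in for the paper's yield response observations.
y = np.sort(np.random.default_rng(0).gamma(shape=4.0, scale=2.0, size=200))

# Empirical cdf via plotting positions, kept strictly inside (0, 1).
F_hat = (np.arange(1, len(y) + 1) - 0.5) / len(y)

# The HT transform linearizes the assumed cdf F(y) = (1 + tanh(a + b*y)) / 2:
# artanh(2*F - 1) = a + b*y, so OLS supplies starting values for a and b.
z = np.arctanh(2.0 * F_hat - 1.0)
b0, a0 = np.polyfit(y, z, 1)

# Refine by ML; the implied pdf is f(y) = (b/2) * sech(a + b*y)**2.
def neg_log_lik(theta):
    a, b = theta
    return -np.sum(np.log(b / 2.0) - 2.0 * np.log(np.cosh(a + b * y)))

res = minimize(neg_log_lik, x0=[a0, b0], method="L-BFGS-B",
               bounds=[(None, None), (1e-9, None)])  # b > 0 for a valid pdf
print("OLS start:", (a0, b0), "-> ML estimate:", res.x)
```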

    pythOPT: A problem-solving environment for optimization methods

    Optimization is the process of finding the best solutions to problems based on mathematical models. There are numerous methods for solving optimization problems, and no single method is superior for all problems. This study focuses on the Particle Swarm Optimization (PSO) family of methods, which is based on the swarm behaviour of biological organisms. These methods are easily adjustable and scalable, and they have proven successful in solving optimization problems. This study examines the performance of nine optimization methods on four sets of problems. The performance analysis is based on two metrics (win-draw-loss and performance profiles) applied to experimental data gathered by running each optimization method in multiple configurations on four classes of problems. A software package, pythOPT, was created: a problem-solving environment comprising a library, a framework, and a system for benchmarking optimization methods. pythOPT includes code that prepares experiments, executes computations on a distributed system, stores results in a database, analyzes those results, and visualizes the analyses. It also includes a framework for building PSO-based methods and a library of benchmark functions used in one of the presented analyses. Using pythOPT, the performance of the nine methods is compared with respect to three parameters: the number of available function evaluations, the accuracy of solutions, and the communication topology. The experiments demonstrate that two methods (SPSO and GCPSO) are superior in finding solutions for the tested classes of problems. Finally, pythOPT makes it possible to recreate this study and produce similar ones by changing the parameters of an experiment; new methods can be added and their performance evaluated, which helps in developing new optimization methods.
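
    pythOPT's own API is not shown in the abstract; the sketch below is a generic global-best PSO of the kind the framework builds on. The particle count, inertia weight w, and acceleration coefficients c1 and c2 are illustrative defaults, and the global-best update corresponds to one simple choice of communication topology.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO; f maps an (n_dim,) array to a scalar cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()                                      # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function in five dimensions.
best_x, best_val = pso(lambda p: np.sum(p ** 2),
                       (np.full(5, -5.0), np.full(5, 5.0)))
print(best_x, best_val)
```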

    Performance analysis of parallel branch and bound search with the hypercube architecture

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is least-cost branch and bound search applied to deadline job scheduling. An object-oriented design methodology was used to map the problem onto a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.
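
    The paper's hypercube implementation is not reproduced here; the sketch below shows the sequential least-cost (best-first) branch and bound skeleton that such a design parallelizes, applied to a toy unit-time deadline scheduling instance. The node encoding and the toy data are illustrative assumptions.

```python
import heapq

def branch_and_bound(root, expand, cost, bound, is_complete):
    """Least-cost (best-first) branch and bound: always expand the live node
    with the smallest optimistic bound; prune nodes that cannot beat the
    incumbent solution."""
    best, best_cost = None, float("inf")
    heap, counter = [(bound(root), 0, root)], 1  # counter breaks heap ties
    while heap:
        b, _, node = heapq.heappop(heap)
        if b >= best_cost:                       # cannot improve: prune
            continue
        if is_complete(node):
            if cost(node) < best_cost:
                best, best_cost = node, cost(node)
            continue
        for child in expand(node):
            cb = bound(child)
            if cb < best_cost:
                heapq.heappush(heap, (cb, counter, child))
                counter += 1
    return best, best_cost

# Toy instance: unit-time jobs (deadline, penalty if rejected), by deadline.
jobs = sorted([(2, 10), (1, 5), (2, 8), (3, 7)])
n = len(jobs)

def expand(node):
    i, used, pen = node                     # (next job, slots used, penalty)
    d, p = jobs[i]
    kids = [(i + 1, used, pen + p)]         # reject job i, pay its penalty
    if used + 1 <= d:
        kids.append((i + 1, used + 1, pen)) # schedule it before its deadline
    return kids

best, best_cost = branch_and_bound(
    (0, 0, 0), expand,
    cost=lambda nd: nd[2],
    bound=lambda nd: nd[2],                 # accrued penalty lower-bounds cost
    is_complete=lambda nd: nd[0] == n)
print(best, best_cost)                      # minimum total penalty: 5
```

    In a parallel version of this search, the priority queue of live nodes is what gets distributed across processors, which is why load balance appears alongside speedup in the experiments.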

    Exploring novel designs of NLP solvers: Architecture and Implementation of WORHP

    Mathematical Optimization in general, and Nonlinear Programming (NLP) in particular, are applied in many scientific and industrial fields, such as the automotive sector, the aerospace industry, and the space agencies. With some established NLP solvers having been available for decades, and with the mathematical community being rather conservative in this respect, many of their programming standards are severely outdated. It is safe to assume that such usability shortcomings impede the wider use of NLP methods; a representative example is the use of static workspaces by legacy FORTRAN codes. This dissertation gives an account of the construction of the European NLP solver WORHP, using and combining software standards and techniques that have not previously been applied to mathematical software to this extent. Examples include automatic code generation, a consistent reverse communication architecture, and the elimination of static workspaces. The result is a novel, industrial-grade NLP solver that overcomes many technical weaknesses of established NLP solvers and other mathematical software.
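
    WORHP's actual interface is not shown in the abstract, but reverse communication, one of the architectural choices mentioned, can be sketched: the solver never calls user code; it returns control with a request, the caller computes the requested quantity, and the solver is re-entered. The toy solver, request names, and gradient-descent update below are illustrative assumptions, not WORHP's API.

```python
# A real solver exposes more requests (objective, gradient, Hessian, ...);
# this toy uses a single gradient request to keep the control flow visible.
EVAL_G, SOLVED = "eval_g", "solved"

class RCSolver:
    """Toy gradient-descent solver driven by reverse communication: it never
    calls user code, it only issues requests and consumes supplied values."""
    def __init__(self, x0, step=0.1, tol=1e-8, max_iter=500):
        self.x, self.step, self.tol, self.max_iter = list(x0), step, tol, max_iter
        self.g, self.iters, self.request = None, 0, EVAL_G

    def advance(self):
        """Consume whatever the caller supplied and issue the next request."""
        if self.request == EVAL_G and self.g is not None:
            if max(abs(gi) for gi in self.g) < self.tol or self.iters >= self.max_iter:
                self.request = SOLVED
            else:
                self.x = [xi - self.step * gi for xi, gi in zip(self.x, self.g)]
                self.g, self.iters = None, self.iters + 1
        return self.request

# The caller owns the loop and supplies evaluations on request; this inverts
# the usual callback design and avoids fixed user-function signatures.
solver = RCSolver(x0=[3.0, -2.0])
while solver.advance() != SOLVED:
    x = solver.x
    solver.g = [2 * x[0], 2 * x[1]]   # gradient of f(x) = x1**2 + x2**2
print(solver.x, solver.iters)
```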

    Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    A review is given of future power processing systems planned for the next 20 years and of the state of the art in power processing design modeling and analysis techniques used to optimize power processing systems. A methodology for modeling and analysis of power processing equipment and systems has been formulated to support future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented, so that meaningful results can be obtained each year to aid power processing system engineers and power processing equipment circuit designers in their conceptual and detailed design and analysis tasks.

    Hybrid Shrinkage Estimators Using Penalty Bases For The Ordinal One-Way Layout

    This paper constructs improved estimators of the means in the Gaussian saturated one-way layout with an ordinal factor. The least squares estimator for the mean vector in this saturated model is usually inadmissible. The hybrid shrinkage estimators of this paper exploit the possibility of slow variation in the dependence of the means on the ordered factor levels, but do not assume it, and respond well to faster variation if present. To motivate the development, candidate penalized least squares (PLS) estimators for the mean vector of a one-way layout are represented as shrinkage estimators relative to the penalty basis for the regression space. This canonical representation suggests further classes of candidate estimators for the unknown means: monotone shrinkage (MS) estimators, soft-thresholding (ST) estimators, or, most generally, hybrid shrinkage (HS) estimators that combine the preceding two strategies. Adaptation selects the estimator within a candidate class that minimizes estimated risk. Under the Gaussian saturated one-way layout model, such adaptive estimators minimize risk asymptotically over the class of candidate estimators as the number of factor levels tends to infinity. Thereby, adaptive HS estimators asymptotically dominate adaptive MS and adaptive ST estimators as well as the least squares estimator. Local annihilators of polynomials, among them difference operators, generate penalty bases suitable for a range of numerical examples.

    Comment: Published at http://dx.doi.org/10.1214/009053604000000652 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
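
    The paper's penalty bases and adaptive rule are not reproduced here; the sketch below conveys the mechanism with a stand-in smoothness basis (an orthonormal discrete cosine transform) and a soft-thresholding estimator whose threshold minimizes Stein's unbiased risk estimate (SURE), a common surrogate for the estimated-risk minimization the abstract describes. The simulated means and noise level are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
p = 64                                     # number of ordered factor levels
mu = np.sin(np.linspace(0.0, 3.0, p))      # slowly varying true means
sigma = 0.5
y = mu + rng.normal(scale=sigma, size=p)   # saturated layout, one obs per level

z = dct(y, norm="ortho")                   # coefficients in the smoothness basis

# SURE for the soft-thresholding estimator at threshold t (known sigma).
def sure(t):
    return (np.sum(np.minimum(z ** 2, t ** 2))
            + 2.0 * sigma ** 2 * np.sum(np.abs(z) > t)
            - p * sigma ** 2)

ts = np.linspace(0.0, np.abs(z).max(), 200)
t_hat = ts[np.argmin([sure(t) for t in ts])]
z_st = np.sign(z) * np.maximum(np.abs(z) - t_hat, 0.0)  # shrink coefficients
mu_hat = idct(z_st, norm="ortho")                       # back to mean scale

print("least squares error:", np.mean((y - mu) ** 2))   # LS estimate is y itself
print("soft-threshold error:", np.mean((mu_hat - mu) ** 2))
```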

    Asymmetric Load Balancing on a Heterogeneous Cluster of PCs

    In recent years, high-performance computing with commodity clusters of personal computers has become an active area of research. Many organizations build them because they need the computational speedup provided by parallel processing but cannot afford to purchase a supercomputer. With commercial supercomputers and homogeneous clusters of PCs, applications that can be statically load balanced are handled by assigning equal tasks to each processor. With heterogeneous clusters, the system designers have the option of quickly adding newer hardware that is more powerful than the existing hardware. When this is done, the assignment of equal tasks to each processor results in suboptimal performance. This research addresses techniques by which the size of the tasks assigned to processors is matched to the processors themselves, so that the more powerful processors do more work and the less powerful processors do less. We find that when the range of processing power is narrow, some benefit can be achieved with asymmetric load balancing. When the range of processing power is broad, dramatic improvements in performance are realized; our experiments have shown up to 92% improvement when asymmetrically load balancing a modified version of the NAS Parallel Benchmarks' LU application.
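
    As a concrete illustration of the idea (not the paper's implementation), the sketch below splits a fixed pool of work units in proportion to measured relative processor speeds, so faster nodes receive proportionally larger tasks. The speeds and unit counts are hypothetical.

```python
def asymmetric_partition(n_units, speeds):
    """Split n_units of work so each processor's share is proportional to its
    measured relative speed; leftover units go to the largest remainders."""
    total = sum(speeds)
    shares = [n_units * s / total for s in speeds]
    counts = [int(sh) for sh in shares]
    leftovers = sorted(range(len(speeds)),
                       key=lambda i: shares[i] - counts[i], reverse=True)
    for i in leftovers[: n_units - sum(counts)]:
        counts[i] += 1
    return counts

# Example: 1000 rows of work across four nodes whose relative speeds
# (e.g., measured by a small timing benchmark) are 1.0, 1.0, 2.5, and 4.0.
print(asymmetric_partition(1000, [1.0, 1.0, 2.5, 4.0]))  # [118, 118, 294, 470]
```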