
    An improved response surface methodology algorithm with an application to traffic signal optimization for urban networks

    This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization for urban networks. It also quantifies the gains of using the common random number (CRN) variance reduction strategy in such an optimization procedure. An enhanced RSM algorithm, which employs conjugate gradient search techniques and successive second-order models, is presented in place of the conventional approach. An illustrative example using an urban traffic network exhibits the superiority of the CRN strategy over direct simulation in performing traffic signal optimization. The relative performance of the two strategies is quantified with computational results using the total network-wide delay as the measure of effectiveness.
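    The interplay of RSM and CRN described above can be sketched in a few lines: fit a local low-order model to noisy simulation responses and take a descent step, reusing the same random seed at every design point so the noise cancels in the estimated slope. The quadratic "delay" function and every constant below are hypothetical stand-ins for a traffic simulator and its signal setting, not the paper's actual model.

```python
import numpy as np

# One RSM step with common random numbers (CRN): evaluate a local design with
# the same seed at every point so the noise cancels in the estimated slope,
# then move downhill.
SEED = 42

def simulated_delay(x, seed):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 0.5)       # CRN: same seed => identical noise draw
    return (x - 3.0) ** 2 + 10.0 + noise

def rsm_step(x0, h=0.5, step=0.2):
    # two-point local design; with CRN the noise terms subtract out exactly
    y_minus = simulated_delay(x0 - h, SEED)
    y_plus = simulated_delay(x0 + h, SEED)
    slope = (y_plus - y_minus) / (2 * h)   # first-order (linear) model
    return x0 - step * slope               # steepest-descent move

x = 0.0
for _ in range(50):
    x = rsm_step(x)
print(round(x, 2))  # settles at the noise-free optimum x = 3
```

    Without CRN, each design point would draw independent noise and the estimated slope would inherit its full variance; with CRN the difference of the two responses is noise-free, which is precisely the variance-reduction gain the paper quantifies.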

    Statistical Testing of Optimality Conditions in Multiresponse Simulation-Based Optimization (Replaced by Discussion Paper 2007-45)

    This paper derives a novel procedure for testing the Karush-Kuhn-Tucker (KKT) first-order optimality conditions in models with multiple random responses. Such models arise in simulation-based optimization with multivariate outputs. This paper focuses on expensive simulations, which have small sample sizes. The paper estimates the gradients (in the KKT conditions) through low-order polynomials, fitted locally. These polynomials are estimated using Ordinary Least Squares (OLS), which also enables estimation of the variability of the estimated gradients. Using these OLS results, the paper applies the bootstrap (resampling) method to test the KKT conditions. Furthermore, it applies the classic Student t test to check whether the simulation outputs are feasible, and whether any constraints are binding. The paper applies the new procedure to both a synthetic example and an inventory simulation; the empirical results are encouraging.
    Keywords: stopping rule; metaheuristics; RSM; design of experiments
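    The gradient-estimation ingredient of the test above can be illustrated with a toy version: fit a local first-order polynomial by OLS, then bootstrap the residuals to gauge the variability of the estimated gradient. The linear response with true gradient (2, -1) is invented for illustration; the paper's actual KKT test statistic is more involved.

```python
import numpy as np

# Toy version of the paper's gradient step: OLS fit of a local first-order
# polynomial to noisy outputs, then a residual bootstrap for the variability
# of the estimated gradient (the ingredient the KKT test needs).
rng = np.random.default_rng(0)

X = rng.uniform(-1.0, 1.0, size=(20, 2))        # small local design
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 20)

A = np.column_stack([np.ones(len(X)), X])       # intercept + linear terms
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta_hat

grads = []
for _ in range(500):                            # residual bootstrap
    y_star = A @ beta_hat + rng.choice(resid, size=len(resid), replace=True)
    b_star, *_ = np.linalg.lstsq(A, y_star, rcond=None)
    grads.append(b_star[1:])                    # slopes = estimated gradient
grads = np.asarray(grads)

# bootstrap mean should sit near the true gradient (2, -1)
print(np.round(grads.mean(axis=0), 1), np.round(grads.std(axis=0), 2))
```

    The bootstrap distribution of the slope coefficients is what allows a hypothesis test on the KKT conditions despite the small sample size.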

    Design Optimization of Composite Deployable Bridge Systems Using Hybrid Meta-heuristic Methods for Rapid Post-disaster Mobility

    Recent decades have witnessed increasing damage to transportation infrastructure caused by natural disasters such as earthquakes, high winds, and floods, as well as by man-made disasters. Such damage disrupts the transportation network and hence limits post-disaster relief operations. This has created an urgent need for effective deployable bridge systems for rapid post-disaster mobility that minimize the weight-to-capacity ratio. Recent assessments of mobile bridging requirements concluded that current metallic deployable bridge systems are approaching the end of their service life, are unable to meet the increase in vehicle design loads, and that any attempt to strengthen the structures would sacrifice ease of mobility. Therefore, this research focuses on developing a lightweight deployable bridge system using composite laminates for lightweight bridging in the aftermath of natural disasters. The research investigates structural design optimization for composite laminate deployable bridge systems, as well as the design, development and testing of composite sandwich core sections that act as the compression-bearing element in a deployable bridge treadway structure. The thesis is organized into two parts. The first part includes a new improved particle swarm meta-heuristic approach capable of effectively optimizing deployable bridge systems. The developed approach is extended with modifications for the discrete design of composite laminates and the maximum-strength design of composite sandwich core sections. The second part focuses on developing, experimentally testing and numerically investigating the performance of different sandwich core configurations to be used as the compression-bearing element in a deployable fibre-reinforced polymer (FRP) bridge girder. The first part investigated different optimization algorithms used for structural optimization.
The uncertainty in the effectiveness of available methods for handling complex structural models emphasized the need for an enhanced version of the Particle Swarm Optimizer (PSO) that does not require performing multiple operations using different techniques. The new technique implements a better emulation of the attraction and repulsion behavior of the swarm; the resulting algorithm is called the Controlled Diversity Particle Swarm Optimizer (CD-PSO). The algorithm improved the performance of the classical PSO in terms of solution stability, quality, convergence rate and computational time. The CD-PSO is then hybridized with the Response Surface Methodology (RSM) to redirect the swarm search toward probing feasible solutions in hyperspace using only the design parameters with strong influence on the objective function; this is triggered when the algorithm fails to obtain good solutions using CD-PSO alone. The performance of CD-PSO is tested on benchmark structures and compared to others in the literature. Both techniques, CD-PSO and hybrid CD-PSO, are then applied to the minimum-weight design of a large-scale deployable bridge structure. Furthermore, a discrete version of the algorithm is created to handle the discrete nature of the composite laminate sandwich core design. The second part focuses on achieving an effective composite deployable bridge system; this is realized by maximizing the shear strength, compression strength and stiffness of the lightweight composite sandwich cores of the treadway bridge's compression deck. Different composite sandwich cores are investigated and their progressive failure is numerically evaluated. The performance of the sandwich cores is experimentally tested in terms of flatwise compressive strength, edgewise compressive strength and shear strength capacities. Further, the cores' compression and shear strength capacities are numerically simulated and the results are validated against the experimental work.
Based on the numerical and experimental findings, the sandwich cores' plate properties are quantified for future implementation in an optimized, scaled deployable bridge treadway.
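    The classical PSO that CD-PSO improves on can be sketched compactly: each particle is pulled toward its personal best and the swarm's global best, with an inertia term damping the motion. The sphere function and all parameter values below are conventional illustrative choices, not the thesis's structural objective or tuning.

```python
import numpy as np

# Minimal classical PSO sketch (the baseline that CD-PSO improves on),
# minimizing the sphere function as a stand-in for a structural weight
# objective.
rng = np.random.default_rng(1)

def sphere(x):
    return np.sum(x ** 2, axis=-1)

n_particles, dim = 30, 5
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and attraction weights
pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(200):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # velocity update: inertia + pull toward personal and global bests
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(round(float(sphere(gbest)), 6))  # near 0, the global minimum
```

    CD-PSO's contribution, per the abstract, is a controlled emulation of attraction and repulsion that stabilizes exactly this update; the hybrid variant hands the search to RSM when the update above stalls.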

    Structural Reliability Assessment Based on the Improved Constrained Differential Evolution Algorithm

    In this work, reliability analysis is employed to take into account the uncertainties in a structure. Reliability analysis is a tool to compute the probability of failure corresponding to a given failure mode. In this study, one of the most commonly used reliability analysis methods, namely the first-order reliability method, is used to calculate the probability of failure. Since finding the most probable point (MPP), or design point, is a constrained optimization problem, this study departs from all previous studies based on the penalty function method or the preference-of-feasible-solutions technique and instead utilizes one of the latest versions of the differential evolution metaheuristic, the improved (μ+λ)-constrained differential evolution (ICDE), which is based on a multi-objective constraint-handling technique. The ICDE is very easy to implement because there is no need for the time-consuming task of fine-tuning penalty parameters. Several test problems are used to verify the accuracy and efficiency of the ICDE. Statistical comparisons revealed that the performance of ICDE is better than or comparable with the other considered methods. It also shows an acceptable convergence rate in the process of finding the design point. Given these results and the easier implementation of ICDE, the proposed method can be expected to become a robust alternative to penalty-function-based methods for reliability assessment problems in future work.
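    The MPP search described above can be sketched with a toy differential evolution run: in standard normal space the MPP is the failure-region point closest to the origin, and the reliability index is its norm. The ICDE's multi-objective constraint handling is replaced here by Deb's feasibility rules, a different but likewise penalty-parameter-free scheme; the linear limit state is invented so the exact answer (beta = 3) is known.

```python
import numpy as np

# FORM sketch: minimize ||u|| over the failure region g(u) <= 0 in standard
# normal space; beta = ||u*||. Deb's rules stand in for ICDE's
# constraint handling -- no penalty parameters to tune in either case.
rng = np.random.default_rng(3)

def g(u):
    # hypothetical linear limit state; failure when g(u) <= 0, exact beta = 3
    return 3.0 - (u[0] + u[1]) / np.sqrt(2.0)

def better(a, b):
    # Deb's rules: feasible beats infeasible; otherwise compare objective
    # (both feasible) or constraint violation (both infeasible)
    va, vb = max(g(a), 0.0), max(g(b), 0.0)
    if va == 0.0 and vb == 0.0:
        return np.linalg.norm(a) < np.linalg.norm(b)
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va < vb

pop_size, F, CR = 40, 0.7, 0.9
pop = rng.uniform(-6.0, 6.0, (pop_size, 2))
for _ in range(400):
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        mutant = a + F * (b - c)                      # DE/rand/1 mutation
        trial = np.where(rng.random(2) < CR, mutant, pop[i])
        if better(trial, pop[i]):
            pop[i] = trial

best = min(pop, key=lambda u: (max(g(u), 0.0), np.linalg.norm(u)))
beta = float(np.linalg.norm(best))
print(round(beta, 2))  # the analytic reliability index here is 3.0
```

    The probability of failure then follows from the standard normal CDF as Phi(-beta), which is the first-order approximation FORM delivers.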

    Toward a fast and accurate modeling strategy for thermal management in air-cooled data centers

    Computational fluid dynamics (CFD) has become a popular tool compared to experimental measurement for thermal management in data centers. However, it is very time-consuming and resource-intensive when used to model large-scale data centers, and may not be ready for real-time thermal monitoring. In this thesis, the two main goals are, first, to develop rapid flow simulation that reduces the computing time while maintaining good accuracy, and second, to develop a whole-building energy simulation (BES) strategy for data center modeling. To this end, hybrid modeling and model training methodologies are investigated for rapid flow simulation, and a multi-zone model is proposed for BES. In the scope of hybrid modeling, two methods are proposed: a hybrid zero/two-equation turbulence model utilizing the zone partitioning technique, and a combination of turbulence and floor tile models for the development of a composite performance index. The zero-equation model coupled with either the body force or the modified body force tile model shows the best potential for reducing the computing time while preserving reasonable accuracy. The hybrid zero/two-equation method cuts the computing time in half compared to the traditional practice of using only the two-equation model. In the scope of model training, a reduced-order method via proper orthogonal decomposition (POD) and response surface methodology (RSM) are comprehensively studied for data center modeling. Both methods can quickly reconstruct the data center thermal profile while retaining good accuracy. The RSM method in particular shows numerous advantages in several optimization studies of data centers: whether for selecting tiles to control the server rack temperature difference or for informing the choice of input design parameters in the early stage of data center infrastructure design, RSM can replace costly experiments and time-consuming, resource-intensive CFD simulations.
Finally, for the whole-building BES study, the proposed multi-zone model is found to be much more effective than the commonly used single-zone model. The location factor plays an important role in determining whether certain boundary conditions affect the cooling electricity consumption. In addition, supply temperature and volumetric flow rate have significant effects on energy consumption.
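    The POD idea behind the rapid reconstruction above fits in a few lines: collect field "snapshots", extract dominant spatial modes by SVD, and rebuild any field from a handful of mode coefficients. The synthetic one-dimensional snapshots below are stand-ins for CFD temperature fields; the thesis's actual data and mode counts differ.

```python
import numpy as np

# POD sketch for fast thermal-field reconstruction: snapshots -> SVD modes ->
# low-order reconstruction from a few coefficients.
rng = np.random.default_rng(7)

x = np.linspace(0.0, 1.0, 200)
snapshots = np.array([
    p * np.sin(np.pi * x) + q * np.sin(2 * np.pi * x)   # rank-2 toy fields
    for p, q in rng.uniform(0.0, 1.0, (30, 2))
]).T                                     # columns are snapshots

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :2]                         # two modes capture this field

coeffs = modes.T @ snapshots[:, 0]       # project one snapshot onto the modes
recon = modes @ coeffs                   # low-order reconstruction
err = np.linalg.norm(recon - snapshots[:, 0]) / np.linalg.norm(snapshots[:, 0])
print(err < 1e-10)  # True: two modes suffice for a rank-2 field
```

    In practice the snapshot matrix is tall (one row per CFD grid cell) and the singular value decay dictates how many modes are needed; the reconstruction step itself is just a small matrix product, which is what makes POD fast enough for near-real-time monitoring.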

    Simulation optimization: A comprehensive review on theory and applications

    For several decades, simulation has been used as a descriptive tool by the operations research community in the modeling and analysis of a wide variety of complex real systems. With recent developments in simulation optimization and advances in computing technology, it has now become feasible to use simulation as a prescriptive tool in decision support systems. In this paper, we present a comprehensive survey of techniques for simulation optimization, with emphasis on recent developments. We classify the existing techniques according to problem characteristics such as shape of the response surface (global versus local optimization), objective functions (single or multiple objectives) and parameter spaces (discrete or continuous parameters). We discuss the major advantages and possible drawbacks of the different techniques. A comprehensive bibliography and future research directions are also provided in the paper.

    Optimal Design of Beam-Based Compliant Mechanisms via Integrated Modeling Frameworks

    Beam-based Compliant Mechanisms (CMs) are increasingly studied and implemented in precision engineering due to their advantages over classic rigid-body mechanisms, such as scalability and reduced need for maintenance. Straight beams with uniform cross section are the basic modules in several concepts, and can be analyzed with a large variety of techniques, such as Euler-Bernoulli beam theory, the Pseudo-Rigid Body (PRB) method, chain algorithms (e.g. the Chained Beam-Constraint Model, CBCM) and Finite Element Analysis (FEA). This variety is considerably reduced for problems involving special geometries, such as curved or spline beams, variable-section beams, nontrivial shapes and, eventually, contacts between bodies. 3D FEA (solid elements) can provide excellent results, but the solutions require high computational times. This work compares the characteristics of modern and computationally efficient modeling techniques (1D FEA, the PRB method and the CBCM), focusing on their applicability to nonstandard problems. In parallel, as an attempt to provide an easy-to-use environment for CM analysis and design, a multi-purpose tool comprising Matlab and modern Computer-Aided Design/Engineering (CAD/CAE) packages is presented. The framework can implement different solvers depending on the adopted behavioral models. Summary tables are reported to guide designers in selecting the most appropriate technique and software architecture. The second part of this work reports demonstrative case studies involving either complex shapes of the flexible members or contacts between the members. For clarity, each example has been defined so as to present a specific set of features, which guides the choice of one technique over the others. When available, theoretical models are provided to support the design studies, which are solved using optimization approaches. Software implementations are discussed throughout the thesis.
Starting from previous work in the literature, this research introduces novel concepts in the fields of constant-force CMs and statically balanced CMs. Finally, it provides a first formulation for modeling mutual contacts with the CBCM. For validation purposes, the majority of the computed behaviors are compared with experimental data obtained from purposely designed test rigs.
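    The PRB method mentioned above can be sketched for the textbook case: a flexible cantilever under a transverse tip force is replaced by a rigid link of length gamma*L pivoting about a torsional spring K = gamma*K_theta*E*I/L. The values gamma ≈ 0.85 and K_theta ≈ 2.65 are approximate standard PRB characteristic values from Howell's tables; the beam dimensions and load below are hypothetical.

```python
import math

# PRB 1R sketch of a cantilever with a transverse tip force, compared against
# the linear Euler-Bernoulli estimate. All dimensions are illustrative.
E, L, w, t = 200e9, 0.1, 0.01, 0.001      # hypothetical steel strip, metres
I = w * t ** 3 / 12                       # second moment of area
F = 5.0                                   # tip force, newtons
gamma, K_theta = 0.85, 2.65               # approximate PRB characteristic values
K = gamma * K_theta * E * I / L           # equivalent torsional spring

# moment balance about the pivot: K*theta = F * gamma*L * cos(theta),
# solved by fixed-point iteration
theta = 0.0
for _ in range(100):
    theta = F * gamma * L * math.cos(theta) / K

tip_deflection = gamma * L * math.sin(theta)      # PRB tip deflection
euler_bernoulli = F * L ** 3 / (3 * E * I)        # linear small-deflection value
print(round(tip_deflection * 1e3, 2), round(euler_bernoulli * 1e3, 2))  # mm
```

    For this small load the two estimates nearly agree; the PRB model's value is in slightly capturing the geometric nonlinearity, and it scales to large deflections where the linear formula fails, which is why it remains a staple for CM design despite the more general chain methods (CBCM) the thesis compares it against.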

    Experimental Investigations of Millimeter Wave Beamforming

    The millimeter wave (mmW) band, commonly referred to as the frequency band between 30 GHz and 300 GHz, is seen as a possible candidate to increase achievable rates for mobile applications due to the availability of free spectrum. However, the high path loss necessitates the use of highly directional antennas. Furthermore, impairments and power constraints make it difficult to realize fully digital beamforming systems. In this thesis, we approach this problem by proposing effective beam alignment and beam tracking algorithms for low-complexity analog beamforming (ABF) systems, showing their applicability through experimental demonstration. After taking a closer look at particular features of mmW channel properties and introducing beamforming as a spatial filter, we begin our investigations with the application of detection theory to the non-convex beam alignment problem. Based on an M-ary hypothesis test, we derive algorithms for efficiently setting the length of the training signal. Using the concept of black-box optimization, which allows the optimization of non-convex problems, we propose a beam alignment algorithm for codebook-based ABF systems, which is shown to reduce the training overhead significantly. As a low-complexity alternative, we propose a two-staged gradient-based beam alignment algorithm that uses convex optimization strategies after finding a subregion of the beam alignment function in which the function can be regarded as convex. This algorithm is implemented in a real-time prototype system and shows its superiority over the exhaustive search approach in simulations and experiments. Finally, we propose a beam tracking algorithm to support mobility. Experiments and comparisons with a ray-tracing channel model show that it can be used efficiently in line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios for walking-speed movements.
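    The beam alignment problem described above can be made concrete with a toy uniform linear array: an analog beamformer steers one phase-shifted beam at a time, so alignment means searching a codebook of steering angles for the strongest receive power. The exhaustive grid search below is the baseline the thesis's black-box and gradient-based algorithms improve on; the array size, angle grid, and single-path channel are all illustrative assumptions.

```python
import numpy as np

# Codebook-based beam alignment sketch for a half-wavelength uniform linear
# array with one line-of-sight path.
N = 16                                   # number of array elements
n = np.arange(N)

def steering(theta):
    # array response for steering/arrival angle theta (radians)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N)

true_aoa = np.deg2rad(17.0)              # hypothetical angle of arrival
channel = steering(true_aoa)             # single LoS path, unit gain

codebook = np.deg2rad(np.linspace(-60, 60, 121))   # 1-degree grid
powers = [abs(np.conj(steering(t)) @ channel) ** 2 for t in codebook]
best = codebook[int(np.argmax(powers))]
print(round(float(np.rad2deg(best))))    # aligns to the 17-degree path
```

    Each codebook entry costs one training measurement, so the 121-point sweep above is exactly the overhead that hypothesis-test-based training lengths, black-box search, and the two-staged gradient method are designed to cut.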