Optimal Design of Steel Towers Using a Multi-Metaheuristic Based Search Method
In metaheuristic algorithms, parameter tuning is one of the most important issues and can be highly time-consuming. To overcome this difficulty, a number of researchers have improved the performance of their methods through enhancement and hybridization with other algorithms. In the present paper, efforts are made to search the design space simultaneously by the Multi-Metaheuristic based Search Method (MMSM). In the proposed method, the optimization process is performed by dividing the initial population into five subsets, so-called islands, on each of which an improved metaheuristic is employed. After a certain number of iterations (the migration interval), a percentage of each island's best members are transferred to another island (migration), replacing the members with the lowest fitness. In the migration phase, the target island is chosen randomly. Examples with large design spaces are used to investigate the efficiency of the proposed method. For this purpose, steel towers are optimized using the proposed method. The results indicate improvements over previously available responses.
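The island-model scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' MMSM: the inner metaheuristic is replaced by a simple greedy mutation step, and all parameter values (five islands, migration interval, migration rate) are placeholders.

```python
import random

def island_search(objective, bounds, n_islands=5, island_size=10,
                  iters=200, migration_interval=50, migration_rate=0.2):
    """Island model: each island evolves independently; every
    migration_interval iterations, the best members of each island move
    to a randomly chosen target island, replacing its worst members."""
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    islands = [[rand_point() for _ in range(island_size)]
               for _ in range(n_islands)]

    def step(pop):
        # Stand-in for one iteration of any metaheuristic: perturb each
        # member and keep the better of parent and child.
        out = []
        for x in pop:
            y = [xi + random.gauss(0, 0.1) for xi in x]
            out.append(min(x, y, key=objective))
        return out

    for t in range(1, iters + 1):
        islands = [step(pop) for pop in islands]
        if t % migration_interval == 0:
            n_mig = max(1, int(migration_rate * island_size))
            for i, pop in enumerate(islands):
                target = random.choice([j for j in range(n_islands) if j != i])
                pop.sort(key=objective)          # best members first
                tgt = islands[target]
                tgt.sort(key=objective)
                # best of source replace worst (lowest-fitness) of target
                tgt[-n_mig:] = [list(x) for x in pop[:n_mig]]
    best = min((x for pop in islands for x in pop), key=objective)
    return best, objective(best)
```

On a simple test function such as the sphere, the islands converge independently while migration spreads good members across the population.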
Model Identification, Updating, and Validation of an Active Magnetic Bearing High-Speed Machining Spindle for Precision Machining Operation
High-Speed Machining (HSM) spindles equipped with Active Magnetic Bearings (AMBs) are envisioned to be capable of autonomous self-identification and performance self-optimization for stable, high-speed, high-quality machining operation. High-speed machining requires carefully selected parameters for reliable and optimal machining performance. For this reason, the accuracy of the spindle model in terms of physical and dynamic properties is essential to substantiate confidence in its predictive aptitude for subsequent analyses. This dissertation addresses system identification, open-loop model development and updating, and closed-loop model validation. System identification was performed in situ utilizing the existing AMB hardware. A simplified, nominal open-loop rotor model was developed based on available geometrical and material information. The nominal rotor model demonstrated poor correlation when compared with open-loop system identification data. Since considerable model error was realized, the nominal rotor model was corrected by employing an optimization methodology to minimize the error between the modeled and experimental resonance and antiresonance frequencies. Validity of the updated open-loop model was demonstrated through successful implementation of a MIMO μ-controller. Since the μ-controller is generated from the spindle model, robust levitation of the real machining spindle is achieved only when the model is of high fidelity. Spindle performance characterization was carried out at the tool location through evaluations of the dynamic stiffness as well as orbits at various rotational speeds. Updated model simulations exhibited high-fidelity correspondence to experimental data, confirming the predictive aptitude of the updated model. Further, a case study is presented which illustrates the improved performance of the μ-controller when designed with lower uncertainty about the model's accuracy.
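The model-updating step described above, minimizing the mismatch between modeled and measured resonance frequencies, can be illustrated with a toy example. Everything here is an assumption for illustration: the frequency values, the two-parameter (stiffness/mass scale) model, and the coordinate-descent search are not taken from the dissertation.

```python
import math

# Hypothetical measured resonance frequencies (Hz) from open-loop
# system identification; the numbers are illustrative only.
measured = [120.0, 745.0, 2050.0]

def model_frequencies(EI_scale, rho_scale):
    """Natural frequencies of an idealized rotor model scale as
    sqrt(stiffness / mass); the two factors correct the nominal
    stiffness (EI) and mass (rho*A) estimates of the model."""
    nominal = [100.0, 700.0, 2000.0]          # nominal-model frequencies
    factor = math.sqrt(EI_scale / rho_scale)  # stiffness up -> f up, mass up -> f down
    return [f * factor for f in nominal]

def error(p):
    # Sum of squared relative frequency errors, modeled vs. measured.
    fs = model_frequencies(*p)
    return sum((fm - fe) ** 2 / fe ** 2 for fm, fe in zip(fs, measured))

# Crude coordinate descent on the two correction factors.
p, step = [1.0, 1.0], 0.1
while step > 1e-6:
    improved = False
    for i in range(2):
        for d in (+step, -step):
            q = list(p)
            q[i] = max(q[i] + d, 1e-6)
            if error(q) < error(p):
                p, improved = q, True
    if not improved:
        step *= 0.5
```

Only the stiffness-to-mass ratio is identifiable here; a real updating problem would fit many more parameters against both resonances and antiresonances.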
Porting the Sisal functional language to distributed-memory multiprocessors
Parallel computing has become increasingly ubiquitous in recent years, and the sizes of application problems continue to grow for solving real-world problems. Distributed-memory multiprocessors have been regarded as a viable architecture of scalable and economical design for building large-scale parallel machines. While these machines provide substantial computational capability, programming them is often very difficult due to many practical issues, including parallelization, data distribution, workload distribution, and remote memory latency.
This thesis proposes to solve the programmability and performance issues of distributed-memory machines using the Sisal functional language. Programs written in Sisal are automatically parallelized, scheduled, and run on distributed-memory multiprocessors with no programmer intervention. Specifically, the proposed approach consists of the following steps. Given a program written in Sisal, the front-end Sisal compiler generates a directed acyclic graph (DAG) to expose parallelism in the program. The DAG is partitioned and scheduled based on loop parallelism. The scheduled DAG is then translated into C programs with machine-specific parallel constructs. The parallel C programs are finally compiled by the target machine's compilers to generate executables.
A distributed-memory parallel machine, the 80-processor ETL EM-X, was chosen for experiments, and the entire procedure has been implemented on the EM-X multiprocessor. Four problems were selected for experiments: bitonic sorting, search, dot-product, and Fast Fourier Transform. Preliminary execution results indicate that automatic parallelization of Sisal programs based on loop parallelism is effective: the speedup for these four problems ranges from 17 to 60 on a 64-processor EM-X. The results further indicate that programming distributed-memory multiprocessors using a functional language indeed frees programmers from low-level programming details while allowing them to focus on algorithmic performance improvement.
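The loop-parallel execution model described above can be illustrated with a small sketch: block-partitioning a loop's iteration space across processors and reducing partial results, as a compiler might do for one of the benchmark kernels (dot-product). The helper names are invented for illustration.

```python
def block_partition(n_iters, n_procs):
    """Split a parallel loop's iteration space into contiguous blocks,
    one per processor, as a compiler targeting a distributed-memory
    machine might do for a data-parallel Sisal loop."""
    base, extra = divmod(n_iters, n_procs)
    bounds, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def parallel_dot(a, b, n_procs=4):
    """Simulated distributed dot-product: each 'processor' reduces its
    own block of the iteration space; the partial sums are then
    combined, mirroring the map-then-reduce structure extracted from
    the program's DAG."""
    partials = [sum(a[i] * b[i] for i in range(lo, hi))
                for lo, hi in block_partition(len(a), n_procs)]
    return sum(partials)
```

On a real distributed-memory machine, each block would also imply a data distribution: array segments are placed in the memory of the processor that iterates over them to minimize remote accesses.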
Physical parameter-aware Networks-on-Chip design
Networks-on-Chip (NoCs) have been proposed as a scalable, reliable and power-efficient communication fabric for chip multiprocessors (CMPs) and multiprocessor systems-on-chip (MPSoCs). NoCs determine both the performance and the reliability of such systems, with a significant power demand that is expected to increase due to developments in both technology and architecture. In terms of architecture, an important trend in many-core systems is to increase the number of cores on a chip while reducing their individual complexity. This trend increases communication power relative to computation power. Moreover, technology-wise, power-hungry wires are dominating logic as power consumers as technology scales down. For these reasons, the design of future very large scale integration (VLSI) systems is moving from being computation-centric to communication-centric. On the other hand, the integrity of a chip's physical parameters, especially power and thermal integrity, is crucial for reliable VLSI systems. However, guaranteeing this integrity is becoming increasingly difficult at higher scales of integration, due to increased power density and operating frequencies that result in continuously rising temperatures and voltage drops in the chip. This is a challenge that may prevent further shrinking of devices. Thus, tackling the challenge of power and thermal integrity of future many-core systems at only one level of abstraction, for example chip and package design, is no longer sufficient to ensure the integrity of physical parameters. New design-time and run-time strategies may need to work together at different levels of abstraction, such as package, application and network, to provide the required physical parameter integrity for these large systems. This necessitates strategies that work at the level of the on-chip network with its rising power budget.
This thesis proposes models, techniques and architectures to improve the power and thermal integrity of Network-on-Chip (NoC)-based many-core systems. The thesis is composed of two major parts: i) minimization and modelling of power supply variations to improve power integrity; and ii) dynamic thermal adaptation to improve thermal integrity. This thesis makes four major contributions. The first is a computational model of on-chip power supply variations in NoCs. The proposed model embeds a power delivery model, an NoC activity simulator and a power model. The model is verified with SPICE simulation and employed to analyse power supply variations in synthetic and real NoC workloads. Novel observations regarding the correlation of power supply noise with different traffic patterns and routing algorithms are found. The second is a new application mapping strategy aiming to minimize power supply noise in NoCs. This is achieved by defining a new metric, switching activity density, and employing a force-based objective function that minimizes switching density. Significant reductions in power supply noise (PSN) are achieved with a low energy penalty. This reduction in PSN also results in better link timing accuracy. The third contribution is a new dynamic thermal-adaptive routing strategy to effectively diffuse heat from NoC-based three-dimensional (3D) CMPs, using a dynamic programming (DP)-based distributed control architecture. Moreover, a new approach for the efficient extension of two-dimensional (2D) partially-adaptive routing algorithms to 3D is presented. This approach improves three-dimensional network-on-chip (3D NoC) routing adaptivity while ensuring deadlock-freeness. Finally, the proposed thermal-adaptive routing is implemented in a field-programmable gate array (FPGA), and implementation challenges for both thermal sensing and the dynamic control architecture are addressed. The proposed routing implementation is evaluated in terms of both functionality and performance.
The methodologies and architectures proposed in this thesis open a new direction for improving the power and thermal integrity of future NoC-based 2D and 3D many-core architectures.
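The dynamic programming (DP)-based thermal-adaptive routing idea can be sketched as value iteration over a 2D mesh, in which each node's cost-to-destination accumulates neighbour temperatures so that routes bend around hotspots. This is an illustrative sketch, not the thesis's distributed control architecture, and it ignores deadlock-freedom constraints.

```python
def thermal_route_costs(temps, dest):
    """DP sketch of thermal-adaptive routing on a 2D mesh: each node
    repeatedly updates its cost-to-destination as the minimum over
    neighbours of (neighbour temperature + neighbour cost), steering
    traffic away from hot regions. temps maps (x, y) -> temperature."""
    INF = float("inf")
    cost = {n: (0.0 if n == dest else INF) for n in temps}
    changed = True
    while changed:                      # iterate to the DP fixed point
        changed = False
        for (x, y) in temps:
            if (x, y) == dest:
                continue
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in temps:
                    c = temps[nb] + cost[nb]
                    if c < cost[(x, y)] - 1e-12:
                        cost[(x, y)] = c
                        changed = True
    return cost

def next_hop(node, temps, cost):
    """Route by picking the neighbour with lowest temperature-weighted
    cost-to-go, as each node's local controller would."""
    x, y = node
    nbs = [nb for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
           if nb in temps]
    return min(nbs, key=lambda nb: temps[nb] + cost[nb])
```

In a distributed implementation, each router would hold only its own cost value and exchange updates with its neighbours, which is what makes the DP formulation attractive for on-chip control.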
Conserve and Protect Resources in Software-Defined Networking via the Traffic Engineering Approach
Software Defined Networking (SDN) is revolutionizing the architecture and operation of computer networks and promises a more agile and cost-efficient network management. SDN centralizes the network control logic and separates the control plane from the data plane, thus enabling flexible management of networks. A network based on SDN consists of a data plane and a control plane. To assist management of devices and data flows, a network also has an independent monitoring plane. These coexisting network planes have various types of resources, such as bandwidth utilized to transmit monitoring data, energy spent to power data forwarding devices, and computational resources to control a network. Unwise management, or even abusive utilization, of these resources degrades network performance and increases the Operating Expenditure (Opex) of the network owner. Conserving and protecting limited network resources is thus among the key requirements for efficient networking.
However, the heterogeneity of network hardware and network traffic workloads expands the configuration space of SDN, making it a challenging task to operate a network efficiently. Furthermore, existing approaches usually lack the capability to automatically adapt network configurations to handle network dynamics and diverse optimization requirements. Additionally, a centralized SDN controller has to run in an environment protected against certain attacks. This thesis builds upon the centralized management capability of SDN and uses cross-layer network optimizations to perform joint traffic engineering over, e.g., routing, hardware and software configurations. The overall goal is to overcome the management complexities in conserving and protecting resources in multiple functional planes in SDN when facing network heterogeneity and system dynamics. This thesis presents four contributions: (1) resource-efficient network monitoring, (2) resource-efficient data forwarding, (3) using self-adaptive algorithms to improve network resource efficiency, and (4) mitigating abusive usage of resources for network controlling.
The first contribution of this thesis is a resource-efficient network monitoring solution. In this thesis, we consider one specific type of virtual network management function: flow packet inspection. This type of network monitoring application requires duplicating packets of target flows and sending them to packet monitors for in-depth analysis. To avoid competition for resources between the original and duplicated data, the network operators can transmit the data flows through physically (e.g., different communication mediums) or virtually (e.g., distinct network slices) separated channels having different resource consumption properties. We propose the REMO solution, namely Resource Efficient distributed Monitoring, to reduce the overall network resource consumption incurred by both types of data, by jointly considering the locations of the packet monitors, the selection of devices forking the data packets, and flow path scheduling strategies.
In the second contribution of this thesis, we investigate the resource efficiency problem in hybrid, server-centric data center networks equipped with both traditional wired connections (e.g., InfiniBand or Ethernet) and advanced high-data-rate wireless links (e.g., directional 60GHz wireless technology). The configuration space of a hybrid SDN equipped with both wired and wireless communication technologies is massive due to the complexity introduced by device heterogeneity. To tackle this problem, we present the ECAS framework to reduce power consumption while maintaining network performance.
Approaches based on optimization models and heuristic algorithms are the traditional way to reduce operational and facility resource consumption in SDN. These approaches are either difficult to solve directly or specific to a particular problem space. As the third contribution of this thesis, we investigate the use of Deep Reinforcement Learning (DRL) to improve the adaptivity of the management modules for network resource and data flow scheduling. The goal of the DRL agent in the SDN network is to reduce the power consumption of SDN networks without severely degrading network performance.
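As a stand-in for the DRL agent, a tiny tabular Q-learning example shows the power/performance trade-off being learned: the agent decides whether to sleep a redundant link given the offered load. The states, actions, rewards, and dynamics are all invented for illustration and are far simpler than the networks considered in the thesis.

```python
import random

def train_link_agent(episodes=3000, alpha=0.2, gamma=0.9, eps=0.1, seed=1):
    """Toy tabular Q-learning: state is the offered load (low/high),
    action is to sleep or keep a redundant link, and the reward trades
    off link power against a congestion penalty."""
    rng = random.Random(seed)
    states, actions = ("low", "high"), ("sleep", "keep")
    Q = {(s, a): 0.0 for s in states for a in actions}

    def reward(s, a):
        power = 1.0 if a == "keep" else 0.2               # link power cost
        congestion = 5.0 if (s == "high" and a == "sleep") else 0.0
        return -(power + congestion)

    s = "low"
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.choice(actions) if rng.random() < eps \
            else max(actions, key=lambda x: Q[(s, x)])
        r = reward(s, a)
        s2 = rng.choice(states)                           # load fluctuates
        # standard Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2
    return Q
```

After training, the greedy policy sleeps the link under low load and keeps it under high load, i.e., it saves power only when doing so does not hurt performance.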
The fourth contribution of this thesis is a protection mechanism based upon flow rate limiting to mitigate abusive usage of SDN control plane resources. Due to the centralized architecture of SDN and its handling mechanism for new data flows, the network controller can become a point of failure under crafted cyber-attacks, especially the Control-Plane-Saturation (CPS) attack. We propose an In-Network Flow mAnagement Scheme (INFAS) to effectively reduce the generation of malicious control packets, depending on the parameters configured for the proposed mitigation algorithm.
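Flow-rate limiting of the kind such a mitigation relies on can be illustrated with a classic token bucket applied to new-flow (table-miss) events; the class and parameter names below are hypothetical and not taken from INFAS.

```python
import time

class FlowRateLimiter:
    """Token-bucket sketch of in-network flow-request rate limiting: a
    switch may forward at most `rate` new-flow (table-miss) events per
    second to the controller, with bursts up to `burst`, damping a
    control-plane saturation attack. Names are illustrative."""
    def __init__(self, rate, burst, now=time.monotonic):
        self.rate, self.burst, self.now = rate, burst, now
        self.tokens = float(burst)
        self.last = now()

    def allow(self):
        # Refill tokens proportionally to elapsed time, capped at burst.
        t = self.now()
        self.tokens = min(self.burst,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True      # forward the flow request to the controller
        return False         # drop or queue: suspected abusive burst
```

Injecting the clock (`now`) keeps the limiter testable; a deployment would tune `rate` and `burst` per port so legitimate flow setup is unaffected while a flood of crafted table-miss packets is throttled.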
In summary, the contributions of this thesis address various unique challenges in constructing resource-efficient and secure SDN. This is achieved by designing and implementing novel and intelligent models and algorithms to configure networks and perform network traffic engineering in the protected, centralized network controller.
An Environment for Derivative-Free Optimization
ABSTRACT: Derivative-free optimization (DFO) is a branch of optimization that studies problems for which the derivatives of the objective function and/or constraints are not available. The functions involved generally come from simulations, and evaluating them can be expensive in execution time or memory; they may also be noisy, non-differentiable, or simply inaccessible for confidentiality reasons. DFO algorithms rely on concepts designed specifically to handle such functions. The terms grey-box or black-box optimization are also used to emphasize how little information is available about the objective function and/or constraints. There is a rich literature of solvers and toolboxes specialized in DFO problems. The goal of this thesis is to present an environment, developed entirely in Python, that gathers tools, modules, and solvers useful in a DFO context. The environment is written in a modular fashion, which gives the user freedom to manipulate, customize, and develop DFO algorithms. This document presents the general structure of the library, named DFO.py, along with the implementation details of each solver, indicating which parts can be modified and the options available. A benchmark study is also provided to highlight the effect of the chosen options on the efficiency of each solver. These comparisons are visualized using performance and data profiles, implemented in an independent module named Profiles.py, which is also presented in this document. Keywords: Derivative-free optimization, black-box optimization, Python, performance profile, data profile.
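The performance profiles mentioned above follow the standard Dolan-Moré construction, which can be computed in a few lines. This sketch assumes a simple dict-of-lists input format and is not the actual API of Profiles.py.

```python
def performance_profile(times, taus):
    """Dolan-Moré performance profile: times[s][p] is the cost (e.g.,
    number of evaluations) of solver s on problem p, float('inf') for a
    failure. Returns, for each solver, rho(tau) = fraction of problems
    whose performance ratio t_{p,s} / min_s t_{p,s} is at most tau."""
    solvers = list(times)
    n_probs = len(next(iter(times.values())))
    # Best cost achieved on each problem by any solver.
    best = [min(times[s][p] for s in solvers) for p in range(n_probs)]
    profile = {}
    for s in solvers:
        ratios = [times[s][p] / best[p] for p in range(n_probs)]
        profile[s] = [sum(r <= tau for r in ratios) / n_probs
                      for tau in taus]
    return profile
```

Plotting rho(tau) against tau then shows, at tau = 1, the fraction of problems each solver wins, and for large tau, its overall robustness.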