
    Combining filter method and dynamically dimensioned search for constrained global optimization

    In this work we present an algorithm that combines the filter technique and the dynamically dimensioned search (DDS) for solving nonlinear and nonconvex constrained global optimization problems. DDS is a stochastic global algorithm for bound-constrained problems that, at each iteration, generates a trial point by randomly perturbing some coordinates of the current best point. The filter technique controls progress towards optimality and feasibility by defining a forbidden region of points rejected by the algorithm. This region can be given by the flat or the slanting filter rule. The proposed algorithm does not compute or approximate any derivatives of the objective and constraint functions. Preliminary experiments show that the proposed algorithm gives competitive results when compared with other methods.

The first author was supported by a scholarship under the International Cooperation Program CAPES/COFECUB at the University of Minho. The second and third authors thank the support given by FCT (Fundação para Ciência e Tecnologia, Portugal) within the scope of the projects UID/MAT/00013/2013 and UID/CEC/00319/2013. The fourth author was partially supported by CNPq-Brazil grants 308957/2014-8 and 401288/2014-5.
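The DDS perturbation step described above can be sketched as follows. This is a minimal illustration following the published DDS mechanism (Tolson and Shoemaker, 2007), not the authors' implementation; the function name and the default neighborhood parameter r = 0.2 are assumptions.

```python
import math
import random

def dds_trial_point(x_best, lo, hi, i, max_iter, r=0.2):
    """Generate one DDS trial point by perturbing a random subset of
    coordinates of the current best point.  The inclusion probability
    decays with the iteration counter i, so the search transitions from
    global to local as the budget max_iter is consumed."""
    n = len(x_best)
    p = 1.0 - math.log(i) / math.log(max_iter)  # per-coordinate perturbation probability
    dims = [j for j in range(n) if random.random() < p]
    if not dims:                       # always perturb at least one coordinate
        dims = [random.randrange(n)]
    x_new = list(x_best)
    for j in dims:
        x_new[j] += r * (hi[j] - lo[j]) * random.gauss(0.0, 1.0)
        # reflect a violated bound back into the box
        if x_new[j] < lo[j]:
            x_new[j] = lo[j] + (lo[j] - x_new[j])
        if x_new[j] > hi[j]:
            x_new[j] = hi[j] - (x_new[j] - hi[j])
        # if the reflection overshoots the opposite bound, clip
        x_new[j] = min(max(x_new[j], lo[j]), hi[j])
    return x_new
```

The filter then decides whether such a trial point is accepted or falls in the forbidden region.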

    Filter-based stochastic algorithm for global optimization

    We propose the general Filter-based Stochastic Algorithm (FbSA) for the global optimization of nonconvex and nonsmooth constrained problems. Under certain conditions on the probability distributions that generate the sample points, almost sure convergence is proved. In order to optimize problems with computationally expensive black-box objective functions, we develop the FbSA-RBF algorithm, based on the general FbSA and assisted by Radial Basis Function (RBF) surrogate models that approximate the objective function. At each iteration, the resulting algorithm constructs/updates a surrogate model of the objective function and generates trial points using a dynamic coordinate search strategy similar to the one used in the Dynamically Dimensioned Search method. To identify the most promising trial point, a non-dominance concept based on the values of the surrogate model and the constraint violation at the trial points is used. Theoretical results concerning sufficient conditions for the almost sure convergence of the algorithm are presented. Preliminary numerical experiments show that FbSA-RBF is competitive when compared with other methods known in the literature.

The authors are grateful to the anonymous referees for their fruitful comments and suggestions. The first and second authors were partially supported by Brazilian funds through CAPES and CNPq, grants PDSE 99999.009400/2014-01 and 309303/2017-6. The research of the third and fourth authors was partially financed by Portuguese funds through FCT (Fundação para a Ciência e a Tecnologia) within the projects UIDB/00013/2020 and UIDP/00013/2020 of CMAT-UM and UIDB/00319/2020.
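The non-dominance comparison between trial points can be sketched as follows, assuming both the surrogate value and the constraint violation are to be minimized (the tuple encoding and function names are illustrative, not the paper's code):

```python
def dominates(a, b):
    """a and b are (surrogate_value, constraint_violation) pairs, both to
    be minimized.  a dominates b if it is no worse in both components and
    strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def non_dominated(points):
    """Keep the trial points not dominated by any other trial point;
    the best trial point would be chosen among these."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A feasible point with a slightly worse surrogate value can thus survive alongside an infeasible point with a better surrogate value, which is what lets the filter balance optimality against feasibility.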

    A hybrid approach to constrained global optimization

    In this paper, we propose a novel hybrid global optimization method to solve constrained optimization problems. An exact penalty function is first applied to approximate the original constrained optimization problem by a sequence of optimization problems with bound constraints. To solve each of these box-constrained optimization problems, two hybrid methods are introduced, in which two different strategies are used to combine limited memory BFGS (L-BFGS) with Greedy Diffusion Search (GDS). The convergence of the two hybrid methods is addressed. To evaluate the effectiveness of the proposed algorithm, 18 box-constrained and 4 general constrained problems from the literature are tested. Numerical results show that the proposed hybrid algorithm obtains more accurate solutions than the methods it is compared against.
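The exact-penalty reformulation in the first step can be sketched as below. This is a generic l1 penalty shown only to illustrate the idea of converting constraints into a bound-constrained objective; the paper's actual penalty function may differ.

```python
def l1_penalty(f, gs, mu):
    """Build an exact l1 penalty function: the constrained problem
    min f(x) s.t. g_i(x) <= 0 becomes the bound-constrained problem
    min f(x) + mu * sum(max(0, g_i(x))).  For a sufficiently large
    penalty parameter mu, minimizers of the two problems coincide."""
    def penalized(x):
        violation = sum(max(0.0, g(x)) for g in gs)
        return f(x) + mu * violation
    return penalized
```

Each member of the sequence of box-constrained problems would then be handed to the L-BFGS/GDS hybrid.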

    Model Calibration in Watershed Hydrology

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This chapter reviews the current state of the art of model calibration in watershed hydrology, with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
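A single-objective calibration objective of the kind adjusted during automatic calibration might look like the following minimal sketch (names are illustrative; a real watershed simulator is far more complex and expensive):

```python
def calibration_objective(simulate, params, observed):
    """Sum-of-squared-errors calibration objective: run the hydrologic
    model with a candidate parameter set and compare the simulated to
    the observed response.  `simulate` is a stand-in for the (typically
    computationally expensive) watershed model."""
    simulated = simulate(params)
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))
```

In practice an optimization algorithm (manual or automatic, single- or multi-objective) searches the parameter space to minimize one or more such error measures.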

    Developing Parsimonious and Efficient Algorithms for Water Resources Optimization Problems

    In the current water resources scientific literature, a wide variety of engineering design problems are solved in a simulation-optimization framework. These problems can have single or multiple objective functions, and their decision variables can take discrete or continuous values. The majority of the current literature in the field of water resources systems optimization reports using heuristic global optimization algorithms, including evolutionary algorithms, with great success. These algorithms have multiple parameters that control their behavior, both in terms of computational efficiency and the ability to find near globally optimal solutions. Values of these parameters are generally obtained by trial and error and are case study dependent. On the other hand, water resources simulation-optimization problems often involve computationally intensive simulation models that can require seconds to hours for a single simulation. Furthermore, analysts may have a limited computational budget to solve these problems and thus may be unable to spend part of that budget fine-tuning algorithm settings and parameter values. In general, then, algorithm parsimony in the number of parameters is an important factor in the applicability and performance of optimization algorithms for solving computationally intensive problems. A major contribution of this thesis is the development of a highly efficient, single-objective, parsimonious optimization algorithm for solving problems with discrete decision variables. The algorithm is called Hybrid Discrete Dynamically Dimensioned Search, HD-DDS, and is designed based on Dynamically Dimensioned Search (DDS), which was developed by Tolson and Shoemaker (2007) for solving single-objective hydrologic model calibration problems with continuous decision variables. The motivation for developing HD-DDS comes from the parsimony and high performance of the original version of DDS.
Similar to DDS, HD-DDS has a single parameter with a robust default value. HD-DDS is successfully applied to several benchmark water distribution system design problems where decision variables are pipe sizes among the available pipe size options. Results show that HD-DDS exhibits superior performance in specific comparisons to state-of-the-art optimization algorithms. The parsimony and efficiency of the original and discrete versions of DDS and their successful application to single objective water resources optimization problems with discrete and continuous decision variables motivated the development of a multi-objective optimization algorithm based on DDS. This algorithm is called Pareto Archived Dynamically Dimensioned Search (PA-DDS). The algorithm parsimony is a major factor in the design of PA-DDS. PA-DDS has a single parameter from its search engine DDS. In each iteration, PA-DDS selects one archived non-dominated solution and perturbs it to search for new solutions. The solution perturbation scheme of PA-DDS is similar to the original and discrete versions of DDS depending on whether the decision variable is discrete or continuous. So, PA-DDS can handle both types of decision variables. PA-DDS is applied to several benchmark mathematical problems, water distribution system design problems, and water resources model calibration problems with great success. It is shown that hypervolume contribution, HVC1, as defined in Knowles et al. (2003) is the superior selection metric for PA-DDS when solving multi-objective optimization problems with Pareto fronts that have a general (unknown) shape. However, one of the main contributions of this thesis is the development of a selection metric specifically designed for solving multi-objective optimization problems with a known or expected convex Pareto front such as water resources model calibration problems. 
The selection metric is called convex hull contribution (CHC) and makes the optimization algorithm sample solely from the subset of archived solutions that form the convex approximation of the Pareto front. Although CHC is generally applicable to any stochastic search optimization algorithm, it is applied here to PA-DDS for solving six water resources calibration case studies with two or three objective functions. These case studies are solved by PA-DDS with CHC and HVC1 selections using 1,000 solution evaluations, and by PA-DDS with CHC selection and two popular multi-objective optimization algorithms, AMALGAM and ε-NSGAII, using 10,000 solution evaluations. Results are compared based on the best-case and worst-case performances (out of multiple optimization trials) of each algorithm to measure its expected performance range. Comparing the best-case performance of these algorithms shows that PA-DDS with CHC selection using 1,000 solution evaluations performs very well in five out of six case studies. Comparing the worst-case performance shows that, with 1,000 solution evaluations, PA-DDS with CHC selection performs well in four out of six case studies. Furthermore, PA-DDS with CHC selection using 10,000 solution evaluations performs comparably to AMALGAM and ε-NSGAII. Therefore, it is concluded that PA-DDS with CHC selection is a powerful optimization algorithm for finding high-quality solutions to multi-objective water resources model calibration problems with convex Pareto fronts, especially when the computational budget is limited.
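For two objectives, the convex approximation of the Pareto front that CHC samples from can be sketched as the lower convex hull of the archived (minimization) objective vectors. This is a generic monotone-chain construction, not the thesis code:

```python
def _cross(o, a, b):
    # z-component of (a - o) x (b - o); sign tells the turn direction
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_convex_hull(points):
    """Lower convex hull of a 2-objective minimization archive (Andrew's
    monotone chain).  A CHC-style selection would sample new parent
    solutions only from this subset of the non-dominated archive."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    hull = []
    for p in pts:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```

Archived solutions above the hull (such as (2, 2) in the test below) are non-dominated but excluded from the convex approximation, which is what focuses the search when a convex front is expected.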

    Surrogate Model Algorithms for Computationally Expensive Black-Box Global Optimization Problems

    Surrogate models (also called response surface models or metamodels) have been widely used in the literature to solve continuous black-box global optimization problems that have computationally expensive objective functions. Different surrogate models such as radial basis functions, kriging, multivariate adaptive regression splines, and polynomial regression models have been used in various applications. In general, however, it is unknown which model will perform best for a given application, and computation time restrictions do not allow trying different models. Thus, in the first part of this thesis, a family of algorithms (SO-M, SO-M-c, SO-M-s) based on using a mixture of surrogate models is developed. The second part of the thesis extends the research to using surrogate models for mixed-integer (algorithm SO-MI) and purely integer (algorithm SO-I) optimization problems. Finally, a real-world application problem arising in the agricultural land use management of a watershed is examined (algorithm SO-Ic). The algorithm SO-M uses Dempster-Shafer theory to combine information derived from various model characteristics in order to determine the influence of individual models in the mixture. Extensions of SO-M with respect to the sampling strategy (algorithms SO-M-c and SO-M-s) were compared in numerical experiments, and it was found that whenever it is a priori unknown which surrogate model should be used, it is advisable to use a mixture model in order to avoid accidentally selecting the worst model. Mixture models containing radial basis function interpolants were shown to work very well in general, whereas using only polynomial regression models should be avoided. Moreover, algorithms using mixture models often outperform the algorithms that use only the single models contributing to the mixture.
Although many computationally expensive black-box optimization applications have integer variables in addition to continuous variables, or have only integer variables, algorithms for solving these types of problems are scarce. In the second part of this thesis, two algorithms, namely SO-MI for mixed-integer problems and SO-I for purely integer problems, were developed and shown to find accurate solutions for computationally expensive problems with black-box objective functions and possibly black-box constraints. The constraints were treated with a penalty approach, and numerical experiments showed that the surrogate model based algorithms outperformed commonly used algorithms for (mixed-)integer problems such as branch and bound and genetic algorithms. NOMAD (Nonsmooth Optimization by Mesh Adaptive Direct Search) was also included in the comparison. NOMAD is suitable for integer and mixed-integer black-box problems, but its performance for these problem types has not been studied in the literature. In the numerical experiments, NOMAD also proved superior to branch and bound and the genetic algorithm, but it performed worse than SO-I and SO-MI on most test problems. Lastly, the algorithm SO-I was further extended to handle constraints directly with a response surface. The resulting algorithm, SO-Ic, was developed specifically for a watershed management problem that has only one constraint, but it is easily generalizable to problems with more constraints. In the considered application problem, parts of the agricultural land in the Cannonsville reservoir watershed in upstate New York have to be retired in order to decrease the total phosphorus runoff to a given limit at minimal cost. A computationally expensive simulation model has to be used to compute the costs and the phosphorus runoff.
The performance of SO-Ic was compared to a genetic algorithm, NOMAD, and the discrete dynamically dimensioned search algorithm on three problem instances with different sizes of the feasible region. The surrogate model based algorithm SO-Ic also performed significantly better than all other algorithms on these problems and was shown to be the most robust.
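A greatly simplified stand-in for the mixture idea is shown below: weighting each surrogate's prediction by its past accuracy. SO-M's actual Dempster-Shafer combination of model characteristics is more elaborate; the inverse-error weighting and all names here are illustrative only.

```python
def mixture_prediction(models, model_errors, x):
    """Combine surrogate model predictions with weights inversely
    proportional to each model's past (e.g. cross-validation) error, so
    that historically accurate models dominate the mixture.  A
    simplified illustration of mixture surrogates, not SO-M itself."""
    inv = [1.0 / (e + 1e-12) for e in model_errors]   # avoid division by zero
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * m(x) for w, m in zip(weights, models))
```

With equal errors this reduces to a plain average; as one model's error grows, its influence on the mixture shrinks, which is the behavior that protects against accidentally relying on the worst model.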

    Optimal Control of an Uninhabited Loyal Wingman

    As researchers strive to achieve autonomy in systems, many believe the goal is not full machine autonomy, but rather the right level of autonomy for an appropriate man-machine interaction. A common phrase for this interaction is manned-unmanned teaming (MUM-T), a subset of which, for unmanned aerial vehicles, is the concept of the loyal wingman. This work demonstrates the use of optimal control and stochastic estimation techniques as an autonomous near real-time dynamic route planner for the DoD concept of the loyal wingman. First, the optimal control problem is formulated for a static threat environment and a hybrid numerical method is demonstrated. The optimal control problem is transcribed to a nonlinear program using direct orthogonal collocation, and a heuristic particle swarm optimization algorithm is used to supply an initial guess to the gradient-based nonlinear programming solver. Next, a dynamic and measurement update model and a Kalman filter estimation tool are used to solve the loyal wingman optimal control problem in the presence of moving, stochastic threats. Finally, an algorithm is written to determine if and when the loyal wingman should dynamically re-plan the trajectory, based on a critical distance metric that uses the speed and stochastics of the moving threat as well as the relative distance and angle of approach of the loyal wingman to the threat. These techniques are demonstrated through simulation by computing the global outer-loop optimal path for a minimum-time rendezvous with a manned lead while avoiding static as well as moving, non-deterministic threats, then updating the global outer-loop optimal path based on changes in the threat mission environment. Results demonstrate a methodology for rapidly computing an optimal solution to the loyal wingman optimal control problem.
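The heuristic initial-guess stage can be illustrated with a minimal particle swarm optimizer whose best point would seed the gradient-based NLP solver. This is a generic textbook PSO, not the thesis implementation; all parameter defaults are assumptions.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over the box [lo, hi].  Returns
    the best position found and its objective value; in the hybrid
    scheme above, that position serves as the NLP solver's initial guess."""
    dim = len(lo)
    xs = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]              # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo[d]), hi[d])
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
    return gbest, gbest_f
```

The payoff of the hybrid is that the derivative-free swarm supplies a basin-of-attraction guess, after which the gradient-based solver converges quickly and accurately.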

    Real-time Control and Optimization of Water Supply and Distribution Infrastructure

    Across North America, water supply and distribution systems (WSDs) are controlled manually by operational staff, who place a heavy reliance on their experience and judgement when rendering operational decisions. These decisions include scheduling the operation of pumps and valves and chemical dosing in the system. However, due to the uncertainty of demand, stringent water quality regulatory constraints, external forcing from the environment (cold or drought climates, fires, bursts), and the non-stationarity of climate change, operators tend to control their systems conservatively and reactively. WSDs operated in such a fashion are said to be 'reactive' because: (i) the operators manually react to changes in system behaviour, as measured by Supervisory Control and Data Acquisition (SCADA) systems; and (ii) they are not always aware of anomalies in the system until these are reported by consumers and authorities. The net result is that the overall operations of WSDs are suboptimal with respect to energy consumption, water losses, infrastructure damage and water quality. In this research, an intelligent platform, namely the Real-time Dynamically Dimensioned Scheduler (RT-DDS), is developed and quantitatively assessed for the proactive control and optimization of WSD operations. The RT-DDS platform was configured to solve a dynamic control problem at every timestep (hour) of the day. The control problem involved the minimization of energy costs over the 24-hour period by recommending 'near-optimal' pump schedules while satisfying hydraulic reliability constraints. These constraints were predefined by operational staff and regulatory limits, and define a tolerance band for pressure and storage levels across the WSD system. The RT-DDS platform includes three essential modules. The first module produces high-resolution forecasts of water demand via ensemble machine learning techniques.
A water demand profile for the next 24 hours is predicted based on historical demand, ambient conditions (i.e. temperature, precipitation) and current calendar information. The predicted profile is then fed into the second module, which involves a simulation model of the WSD. The model is used to determine the hydraulic impacts of particular control settings. The results of the simulation model are used to guide the search strategy of the final module, a stochastic single-solution optimization algorithm. The optimizer is parallelized for computational efficiency, such that the reporting frequency of the platform is within 15 minutes of execution time. The fidelity of the prediction engine of the RT-DDS platform was evaluated with an Advanced Metering Infrastructure (AMI) driven case study, whereby the short-term water consumption of residential units in the city was predicted. A Multi-Layer Perceptron (MLP) model and ensemble learning techniques (random forests, bagged trees and boosted trees) were built, trained and validated as part of this research. A three-stage validation process was adopted to assess the replicative, predictive and structural validity of the models. Further, the models were assessed in their predictive capacity at two different spatial resolutions: at a single meter and at the city level. While the models proved to have strong generalization capability, via good performance in cross-validation testing, they displayed slight biases when aiming to predict extreme peak events in the single-meter dataset. It was concluded that the models performed far better at a lower spatial resolution (at the city or district level), where peak events are far more normalized. In general, the models demonstrated the feasibility of using machine learning techniques for short-term water demand forecasting, particularly for real-time control and optimization.
In determining the optimal representation of pump schedules for real-time optimization, multiple control variable formulations were assessed. These included binary control statuses and time-controlled triggers, whereby the pump schedule was represented as a sequence of on/off binary variables or of active/idle discrete time periods, respectively. While the time-controlled trigger representation systematically outperformed the binary representation in terms of computational efficiency, it was found that both formulations led to conditions whereby the system would violate the predefined maximum number of pump switches per calendar day. This occurred because, at each timestep, the control variable formulation was unaware of the pump switches that had already elapsed in the preceding hours. Violations of the maximum pump switch limit lead to transient instabilities and thus create hydraulically undesirable conditions. As such, a novel feedback architecture was proposed, whereby at every timestep the number of switches that had elapsed in the previous hours was explicitly encoded into the formulation. In this manner, the maximum number of switches per calendar day was never violated, since the optimizer was aware of the current trajectory of the system. Using this novel formulation, daily energy cost savings of up to 25% were achievable on an average day, leading to cost savings of over 2.3 million dollars over a ten-year period. Moreover, stable hydraulic conditions were produced in the system, changing very little when compared to baseline operations in terms of quality of service and overall condition of assets.
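The feedback encoding of the pump-switch budget can be sketched as follows. This is a minimal illustration with assumed names; the platform's actual formulation is embedded in its optimizer.

```python
def switches_remaining(elapsed_statuses, max_switches_per_day):
    """Count the on/off transitions that have already occurred today and
    return the remaining switch budget.  Feeding this back into the
    formulation is what keeps the optimizer aware of the system's
    current trajectory."""
    used = sum(1 for a, b in zip(elapsed_statuses, elapsed_statuses[1:]) if a != b)
    return max(0, max_switches_per_day - used)

def schedule_is_admissible(elapsed_statuses, proposed, max_switches_per_day):
    """A proposed schedule for the rest of the day is admissible if its
    own transitions, plus the transition at the seam with the elapsed
    schedule, fit within the remaining switch budget."""
    remaining = switches_remaining(elapsed_statuses, max_switches_per_day)
    full = list(proposed)
    if elapsed_statuses:                       # include the seam transition
        full = [elapsed_statuses[-1]] + full
    new_switches = sum(1 for a, b in zip(full, full[1:]) if a != b)
    return new_switches <= remaining
```

Restricting the search to admissible schedules is what guarantees, by construction, that the daily switch limit is never violated.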

    Methods and Techniques for Dynamic Deployability of Software-Defined Security Services

    With the recent trend of “network softwarisation”, enabled by emerging technologies such as Software-Defined Networking and Network Function Virtualisation, system administrators of data centres and enterprise networks have started replacing dedicated hardware-based middleboxes with virtualised network functions running on servers and end hosts. This radical change has facilitated the provisioning of advanced and flexible network services, ultimately helping system administrators and network operators to cope with the rapid changes in service requirements and networking workloads. This thesis investigates the challenges of provisioning network security services in “softwarised” networks, where the security of residential and business users can be provided by means of sets of software-based network functions running on high performance servers or on commodity devices. The study is approached from the perspective of the telecom operator, whose goal is to protect the customers from network threats and, at the same time, maximize the number of provisioned services, and thereby revenue. Specifically, the overall aim of the research presented in this thesis is proposing novel techniques for optimising the resource usage of software-based security services, hence for increasing the chances for the operator to accommodate more service requests while respecting the desired level of network security of its customers. In this direction, the contributions of this thesis are the following: (i) a solution for the dynamic provisioning of security services that minimises the utilisation of computing and network resources, and (ii) novel methods based on Deep Learning and Linux kernel technologies for reducing the CPU usage of software-based security network functions, with specific focus on the defence against Distributed Denial of Service (DDoS) attacks. 
The experimental results reported in this thesis demonstrate that the proposed solutions for service provisioning and DDoS defence require fewer computing resources, compared to similar approaches available in the scientific literature or adopted in production networks.

    Towards cognitive in-operation network planning

    Next-generation internet services such as live TV and video on demand require high bandwidth and ultra-low latency. The ever-increasing volume, dynamicity and stringent requirements of these services' demands are generating new challenges for today's telecom networks. To decrease expenses, service-layer content providers are delivering their content near the end users, allowing low-latency and tailored content delivery. As a consequence, previously unseen traffic dynamicity is arising in metro and even core networks, with changes in the volume and direction of the traffic along the day. A tremendous effort to efficiently manage networks is currently under way towards the realisation of 5G networks. This translates into a search for network architectures that support dynamic resource allocation, fulfil strict service requirements and minimise the total cost of ownership (TCO). In this regard, in-operation network planning was recently proven to successfully support various network reconfiguration use cases in prospective scenarios. Nevertheless, additional research is required to extend in-operation planning capabilities from typical reactive optimization schemes to proactive and predictive schemes based on the analysis of network monitoring data. A hot topic attracting increasing attention is cognitive networking, where an elevated knowledge of the network could be obtained as a result of introducing data analytics into the telecom operator's infrastructure. By using predictive knowledge about the network traffic, in-operation network planning mechanisms could be enhanced to efficiently adapt the network by means of future traffic prediction, thus achieving cognitive in-operation network planning. In this thesis, we focus on studying mechanisms to enable cognitive in-operation network planning in core networks. In particular, we focus on dynamically reconfiguring virtual network topologies (VNT) at the MPLS layer, covering a number of detailed objectives.
First, we study mechanisms for network traffic flow modelling, from monitoring and data transformation to the estimation of predictive traffic models based on these data. Using these traffic models, we then tackle a cognitive approach that periodically adapts the core VNT to current and future traffic, using predicted traffic matrices based on origin-destination (OD) predictive models. This optimization approach, named VENTURE, is efficiently solved using dedicated heuristic algorithms, and its feasibility is demonstrated in an experimental in-operation network planning environment. Next, we extend VENTURE to consider core flow dynamicity resulting from the re-routing of metro flows, which represents a meaningful dynamic traffic scenario. This extension, which entails enhancements to coordinate metro and core network controllers with the aim of allowing fast adaptation of core OD traffic models, is evaluated and validated in terms of traffic model accuracy and experimental feasibility. Finally, we propose two network architectures needed to apply the above mechanisms in experimental environments, using state-of-the-art protocols such as OpenFlow and IPFIX. The work is evaluated first numerically, using a network simulator designed and developed entirely for this thesis; after this simulation-based validation, the experimental feasibility of the proposed network architectures is assessed in a distributed testbed.