11 research outputs found

    Multi-operator Differential Evolution with MOEA/D for Solving Multi-objective Optimization Problems, Journal of Telecommunications and Information Technology, 2022, no. 3

    In this paper, we propose a multi-operator differential evolution variant that incorporates three diverse mutation strategies in MOEA/D. Instead of exploiting the local region, the proposed approach continues to search for optimal solutions in the entire objective space. It explicitly maintains the diversity of the population by relying on the benefits of clustering. To promote convergence, solutions close to the ideal position in the objective space are given preference in the evolutionary process. The core idea is to ensure diversity of the population by applying multiple mutation schemes, and a faster convergence rate by giving preference to solutions based on their proximity to the ideal position, within the MOEA/D paradigm. The performance of the proposed algorithm is evaluated on two popular test suites. The experimental results demonstrate that the proposed approach outperforms other MOEA/D algorithms.
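    The abstract does not specify the paper's exact operator pool or selection rule; as an illustrative sketch only, three classic DE mutation strategies that multi-operator variants commonly combine (rand/1, best/1 and current-to-rand/1) can be written in a few lines of Python. The uniform random operator choice below is a placeholder, not the paper's mechanism.

```python
import random

def de_rand_1(pop, i, F=0.5):
    # rand/1: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def de_best_1(pop, i, best, F=0.5):
    # best/1: v = x_best + F * (x_r1 - x_r2)
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [a + F * (b - c) for a, b, c in zip(best, pop[r1], pop[r2])]

def de_current_to_rand_1(pop, i, F=0.5, K=0.5):
    # current-to-rand/1: v = x_i + K*(x_r1 - x_i) + F*(x_r2 - x_r3)
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [x + K * (a - x) + F * (b - c)
            for x, a, b, c in zip(pop[i], pop[r1], pop[r2], pop[r3])]

def mutate(pop, i, best):
    # A multi-operator variant picks one strategy per offspring; uniform
    # random choice stands in here for the actual selection scheme.
    op = random.choice([de_rand_1, de_best_1, de_current_to_rand_1])
    return op(pop, i, best) if op is de_best_1 else op(pop, i)

random.seed(0)
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
child = mutate(pop, 0, pop[1])
print(len(child))  # trial vector has the same dimension as the parents
```

    In MOEA/D-style frameworks the resulting trial vector would then be crossed over with the parent and evaluated against the subproblem's scalarising function.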

    Static and dynamic global stiffness analysis for automotive pre-design

    Thesis under a cotutelle agreement between the Universitat Politècnica de Catalunya and Swansea University. In order to be worldwide competitive, the automotive industry is constantly challenged to produce higher quality vehicles in the shortest time possible and with the minimum costs of production. Most of the problems with new products derive from poor quality design processes, which often lead to undesired issues at a stage where changes are extremely expensive. During the preliminary design phase, designers have to deal with complex parametric problems where the material and geometric characteristics of the car components are unknown. Any change in these parameters might significantly affect the global behaviour of the car. A target which is very sensitive to small variations of the parameters is the noise and vibration response of the vehicle (NVH study), which strictly depends on its global static and dynamic stiffness. In order to find the optimal solution, many configurations exploring all the possible parametric combinations need to be tested. The current state of the art in the automotive design context is still based on standard numerical simulations, which are computationally very expensive when applied to this kind of multidimensional problem. As a consequence, a limited number of configurations is usually analysed, leading to suboptimal products. An alternative is represented by reduced order modelling (ROM) techniques, which are based on the idea that the essential behaviour of complex systems can be accurately described by simplified low-order models. This thesis proposes a novel extension of the proper generalized decomposition (PGD) method to optimize the design process of a car structure with respect to its global static and dynamic stiffness properties. 
In particular, the PGD method is coupled with the inertia relief (IR) technique and the inverse power method (IPM) to solve, respectively, the parametric static and dynamic stiffness analyses of an unconstrained car structure and to extract its noise and vibration properties. A main advantage is that, unlike many other ROM methods, the proposed approach does not require any pre-processing phase to collect prior knowledge of the solution. Moreover, the PGD solution is computed with only one offline computation and presents an explicit dependency on the introduced design variables. This allows the solutions to be computed at a negligible computational cost and therefore opens the door to fast optimisation studies and real-time visualisation of the results in a pre-defined range of parameters. A novel algebraic approach is also proposed which allows both material and complex geometric parameters to be involved, such that shape optimisation studies can be performed. In addition, the method is developed in a non-intrusive format, such that interaction with commercial software is possible, which makes it particularly interesting for industrial applications. Finally, in order to support the designers in the decision-making process, a graphical interface app is developed which allows visualising in real time how changes in the design variables affect pre-defined quantities of interest.
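    The abstract does not reproduce any formulas; the central PGD idea it relies on, a separated (low-rank) representation of the parametric solution, is conventionally written as follows (the symbols here are generic PGD notation, not necessarily those of the thesis):

```latex
% Separated representation of a parametric solution u: space x and the
% design parameters \mu_1, \dots, \mu_p decouple into a finite sum of
% products of single-variable modes, computed once in an offline stage.
u(\mathbf{x}, \mu_1, \dots, \mu_p) \;\approx\; \sum_{m=1}^{M} F_m(\mathbf{x}) \prod_{j=1}^{p} G_m^{j}(\mu_j)
```

    Once the modes F_m and G_m^j are available, evaluating the solution for a new parameter combination reduces to products of precomputed scalar functions, which is what makes the real-time visualisation described above feasible.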

    Time series prediction and forecasting using Deep learning Architectures

    Nature produces time series data every day and everywhere: for example, weather data, physiological and biomedical signals, and financial and business recordings. Predicting the future observations of a collected sequence of historical observations is called time series forecasting. Forecasts are essential, considering that they guide decisions in many areas of scientific, industrial and economic activity, such as meteorology, telecommunications, finance, sales and stock exchange rates. A massive amount of research has been carried out over many years to develop models that improve time series forecasting accuracy. The major aim of time series modelling is to scrupulously examine the past observations of a time series and to develop an appropriate model which elucidates the inherent behaviour and patterns existing in the series. The behaviour and patterns of different time series may follow different conventions and in fact require specific countermeasures for modelling. Consequently, training neural networks to predict time series from unknown domains remains particularly challenging. Time series forecasting remains an arduous problem despite substantial improvements in machine learning approaches, in part because different time series may exhibit very different behaviour. In real-world time series data, the discriminative patterns residing in the series are often distorted by random noise and affected by high-frequency perturbations. The major aim of this thesis is to contribute to the study and development of time series prediction and multi-step-ahead forecasting methods based on deep learning algorithms. Time series forecasting using deep learning models is still in its infancy compared to other research areas. A variety of time series data has been considered in this research.
We explored several deep learning architectures on sequential data, such as Deep Belief Networks (DBNs), Stacked AutoEncoders (SAEs), Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Moreover, we also proposed two new methods for multi-step-ahead forecasting of time series data. A comparison with state-of-the-art methods is also exhibited. The research conducted in this thesis makes theoretical, methodological and empirical contributions to time series prediction and multi-step-ahead forecasting using deep learning architectures.
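    The thesis's two proposed methods are not detailed in this abstract. As a minimal, architecture-independent illustration of multi-step-ahead forecasting, the common recursive strategy feeds each one-step prediction back as input for the next step; a linear autoregressor fitted by least squares stands in below for a trained network, so the model choice is purely illustrative.

```python
import numpy as np

def fit_ar(series, order):
    # Least-squares fit of a linear AR(order) one-step-ahead predictor:
    # each row of X holds `order` consecutive values, y is the next value.
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_recursive(series, coef, steps):
    # Recursive multi-step forecast: each prediction is appended to the
    # sliding window and reused as an input for the following step.
    order = len(coef)
    window = list(series[-order:])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, window))
        out.append(nxt)
        window = window[1:] + [nxt]
    return out

t = np.arange(200)
series = np.sin(0.1 * t)            # toy deterministic signal
coef = fit_ar(series, order=5)
preds = forecast_recursive(series, coef, steps=10)
print(len(preds))
```

    The alternative "direct" strategy trains one model per forecast horizon instead of feeding predictions back, trading more training for immunity to error accumulation.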

    Adaptive estimation and change detection of correlation and quantiles for evolving data streams

    Streaming data processing is increasingly playing a central role in enterprise data architectures due to an abundance of available measurement data from a wide variety of sources and advances in data capture and infrastructure technology. Data streams arrive, with high frequency, as never-ending sequences of events, where the underlying data generating process always has the potential to evolve. Business operations often demand real-time processing of data streams for keeping models up-to-date and timely decision-making. For example in cybersecurity contexts, analysing streams of network data can aid the detection of potentially malicious behaviour. Many tools for statistical inference cannot meet the challenging demands of streaming data, where the computational cost of updates to models must be constant to ensure continuous processing as data scales. Moreover, these tools are often not capable of adapting to changes, or drift, in the data. Thus, new tools for modelling data streams with efficient data processing and model updating capabilities, referred to as streaming analytics, are required. Regular intervention for control parameter configuration is prohibitive to the truly continuous processing constraints of streaming data. There is a notable absence of such tools designed with both temporal-adaptivity to accommodate drift and the autonomy to not rely on control parameter tuning. Streaming analytics with these properties can be developed using an Adaptive Forgetting (AF) framework, with roots in adaptive filtering. The fundamental contributions of this thesis are to extend the streaming toolkit by using the AF framework to develop autonomous and temporally-adaptive streaming analytics. The first contribution uses the AF framework to demonstrate the development of a model, and validation procedure, for estimating time-varying parameters of bivariate data streams from cyber-physical systems. 
This is accompanied by a novel continuous-monitoring change detection system that compares adaptive and non-adaptive estimates. The second contribution is the development of a streaming analytic for the correlation coefficient and an associated change detector to monitor changes to correlation structures across streams. This is demonstrated on cybersecurity network data. The third contribution is a procedure for estimating time-varying binomial data, with thorough exploration of the nuanced behaviour of this estimator. The final contribution is a framework to enhance extant streaming quantile estimators with autonomous, temporally-adaptive properties. In addition, a novel streaming quantile procedure is developed and demonstrated, in an extensive simulation study, to show appealing performance.
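    The thesis's AF estimators are not reproduced here; as a rough sketch of the baseline they build on, a fixed-forgetting-factor streaming correlation estimate can be maintained in constant time per observation with exponentially weighted moments (adaptive-forgetting schemes would tune lam online instead of fixing it):

```python
class ForgettingCorrelation:
    """Exponentially weighted streaming estimate of corr(x, y).

    A fixed forgetting factor lam < 1 downweights old observations, so the
    estimate tracks drift; updates cost O(1) per observation. Illustrative
    sketch only, not the thesis's estimator.
    """

    def __init__(self, lam=0.99):
        self.lam = lam
        self.w = 0.0                          # total discounted weight
        self.mx = self.my = 0.0               # weighted means
        self.sxx = self.syy = self.sxy = 0.0  # weighted (co)variances

    def update(self, x, y):
        self.w = self.lam * self.w + 1.0
        a = 1.0 / self.w                      # effective step size
        dx, dy = x - self.mx, y - self.my
        self.mx += a * dx
        self.my += a * dy
        self.sxx = (1 - a) * (self.sxx + a * dx * dx)
        self.syy = (1 - a) * (self.syy + a * dy * dy)
        self.sxy = (1 - a) * (self.sxy + a * dx * dy)

    def corr(self):
        denom = (self.sxx * self.syy) ** 0.5
        return self.sxy / denom if denom > 0 else 0.0

import random
random.seed(1)
est = ForgettingCorrelation(lam=0.99)
for _ in range(5000):
    x = random.gauss(0, 1)
    y = x + 0.5 * random.gauss(0, 1)   # strongly positively correlated pair
    est.update(x, y)
print(round(est.corr(), 2))
```

    A change detector of the kind described above would then compare such an adaptive estimate against a slower, non-adaptive one and flag divergence between them.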

    Adaptive Modelling and Planning for Learning Intelligent Behaviour

    Institute of Perception, Action and Behaviour. An intelligent agent must be capable of using its past experience to develop an understanding of how its actions affect the world in which it is situated. Given some objective, the agent must be able to effectively use its understanding of the world to produce a plan that is robust to the uncertainty present in the world. This thesis presents a novel computational framework called the Adaptive Modelling and Planning System (AMPS) that aims to meet these requirements for intelligence. The challenge for the agent is to use its experience in the world to generate a model. In problems with large state and action spaces, the agent can generalise from limited experience by grouping together similar states and actions, effectively partitioning the state and action spaces into finite sets of regions. This process is called abstraction. Several different abstraction approaches have been proposed in the literature, but the existing algorithms have many limitations. They generally only increase resolution, require a large amount of data before changing the abstraction, do not generalise over actions, and are computationally expensive. AMPS aims to solve these problems using a new kind of approach. AMPS splits and merges existing regions in its abstraction according to a set of heuristics. The system introduces splits using a mechanism related to supervised learning and is defined in a general way, allowing AMPS to leverage a wide variety of representations. The system merges existing regions when an analysis of the current plan indicates that doing so could be useful. Because several different regions may require revision at any given time, AMPS prioritises revision to best utilise whatever computational resources are available. Changes in the abstraction lead to changes in the model, requiring changes to the plan. AMPS prioritises the planning process, and when the agent has time, it replans in high-priority regions.
This thesis demonstrates the flexibility and strength of this approach in learning intelligent behaviour from limited experience.
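    AMPS's actual heuristics are only summarised above. The prioritised-revision idea, keeping a queue of abstraction regions ranked by urgency and processing the most urgent first within a compute budget, can be illustrated generically; the priority scores and unit-cost revisions below are placeholders:

```python
import heapq

def revise(regions, budget):
    """Process the highest-priority regions within a fixed budget.

    regions: list of (priority, name) pairs; higher priority = more urgent.
    Returns the names revised, most urgent first. In AMPS-like systems the
    "revision" would be a split, merge, or replanning step in that region.
    """
    heap = [(-p, name) for p, name in regions]  # max-heap via negation
    heapq.heapify(heap)
    revised = []
    while heap and len(revised) < budget:
        _, name = heapq.heappop(heap)
        revised.append(name)
    return revised

queue = [(0.2, "region-A"), (0.9, "region-B"), (0.5, "region-C")]
print(revise(queue, budget=2))  # → ['region-B', 'region-C']
```

    The budget makes the scheme anytime: whatever compute is available is always spent on the regions whose revision is expected to improve the plan most.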

    Virtual Factory: a systemic approach to building smart factories


    Adaptive Simulation Modelling Using The Digital Twin Paradigm

    Structural Health Monitoring (SHM) involves the application of qualified standards, by competent people, using appropriate processes and procedures throughout the structure's life cycle, from design to decommissioning. The main goal is to ensure that, through an ongoing process of risk management, the structure's continued fitness-for-purpose (FFP) is maintained, allowing for optimal use of the structure with a minimal chance of downtime and catastrophic failure. While undertaking the SHM task, engineers use models to predict the risk to the structure from degradation mechanisms such as corrosion and cracking. These predictive models are physics-based, data-driven or hybrid. The process of building these predictive models tends to involve processing input parameters related to the material properties (e.g. mass density, modulus of elasticity, polarisation current curve) and/or the environment, calibrating the model, and using it for predictive simulation. The accuracy of the predictions is therefore very much dependent upon the input data describing the properties of the materials and/or the environmental conditions the structure experiences. For structures with non-uniform and complex degradation behaviour, this process is repeated over the lifetime of the structure, i.e., when each new survey is performed (or new data is available), and the survey data are then used to infer changes in the material or environmental properties. This conventional parameter tuning and updating approach is computationally expensive and time-consuming, as multiple simulations are needed and manual intervention is expected to determine the optimal model parameters. There is therefore a need for a fundamental paradigm shift to address the shortcomings of conventional approaches. 
The Digital Twin (DT) offers such a paradigm shift in that it integrates ultra-high-fidelity simulation models with other related structural data to mirror the structural behaviour of its corresponding physical twin. The DT's inherent ability to handle large data allows for the inclusion of an evolving set of data relating to the structure over time, and provides for the adaptation of the simulation model with very little need for human intervention. This research project investigated the DT as an alternative to the existing model calibration and adaptation approach. It developed a design-of-experiment platform for online model validation and adaptation (i.e., parameter updating) solvers within the Digital Twin paradigm. The design-of-experiment platform provided a basis upon which an approach based on the creation of surrogates and a reduced order model (ROM)-assisted parameter search was developed for improving the efficiency of model calibration and adaptation. Furthermore, the developed approach formed a basis for developing solvers which provide the self-calibration and self-adaptation capability required for the prediction and analysis of an asset's structural behaviour over time. The research successfully demonstrated that such solvers can be used to efficiently calibrate an ultra-high-fidelity simulation model within a DT environment for the accurate prediction of the status of a real-world engineering structure.
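    The thesis's solvers are not described in reproducible detail. As a generic illustration of the calibration loop they accelerate, model calibration can be posed as a parameter search minimising the mismatch between simulated and surveyed responses; the exponential-decay "model" below is a trivially cheap stand-in, where a real DT would call a high-fidelity model or its surrogate/ROM.

```python
import numpy as np

def simulate(theta, t):
    # Stand-in "simulation model": exponential decay with rate theta.
    # In a DT workflow this would be an expensive solver or a ROM surrogate.
    return np.exp(-theta * t)

def calibrate(t, observed, candidates):
    # Pick the parameter whose simulated response best matches the survey
    # data in the least-squares sense. Surrogate/ROM-assisted search would
    # replace this brute-force sweep in practice.
    errors = [np.sum((simulate(th, t) - observed) ** 2) for th in candidates]
    return candidates[int(np.argmin(errors))]

t = np.linspace(0, 5, 50)
rng = np.random.default_rng(0)
observed = simulate(0.7, t) + 0.01 * rng.standard_normal(50)  # noisy "survey"
best = calibrate(t, observed, candidates=np.linspace(0.1, 2.0, 39))
print(round(float(best), 2))  # close to the true rate 0.7
```

    Repeating this loop each time new survey data arrives is what the self-calibrating solvers described above automate, removing the manual parameter-tuning step.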