26 research outputs found

    Arquiteturas de hardware para aceleração de algoritmos de controle preditivo não-linear

    Get PDF
    Doctoral thesis—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2018. Model-based Predictive Control (MPC) is an advanced control technique that has been gaining adoption in industry and academia over the last few decades. Its ability to incorporate system constraints into the control law, and to be applied to anything from simple linear systems to complex nonlinear systems with multiple inputs and outputs, makes it attractive. However, its high computational cost often hinders its use, especially on embedded nonlinear systems with fast dynamics, where computational and energy-consumption restrictions apply. This work therefore develops algorithms and hardware architectures that enable the application of Nonlinear Model Predictive Control (NMPC) on embedded systems. Two approaches are developed. The first applies machine learning techniques, using Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs), to create solutions that approximate the NMPC behavior in hardware. ANN and SVM training techniques are explored with the aim of generalizing the control solution over a wide range of control references. Floating-point hardware architectures implementing Radial Basis Function (RBF) ANNs and SVMs are then developed, along with an automatic configuration tool that generates the VHDL (VHSIC Hardware Description Language) code of each architecture from its training results and topology. The resulting architectures are tested on a low-cost FPGA (Field-Programmable Gate Array) and compute solutions in under 1 s.
    In the second approach, the heuristic Particle Swarm Optimization (PSO) algorithm is studied and adapted to perform the optimal control sequence search phase of the NMPC. The main modifications include penalty functions to handle the system state constraints, and an improved Knowledge-Based PSO (KPSO) technique, named KPSO+SS, in which results from previous sampling periods are combined with steady-state control information and the control signal's maximum and minimum values to speed up the search for the optimal solution. Once again, floating-point hardware architectures are developed to enable the NMPC-PSO controller on embedded systems, and a code generator for the NMPC-PSO solution is proposed so that the same architecture can be applied to other systems. The solution is then tested on a real-time inverted pendulum swing-up procedure using a hardware-in-the-loop (HIL) platform, yielding good performance and control results and computing solutions in under 3 ms. Finally, the NMPC-PSO solution is further validated on a twin-pendulum system and on a satellite attitude control platform, a system with multiple inputs and outputs. Supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Decanato de Pesquisa e Inovação (DPI/UnB).
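The NMPC-PSO search loop described above (penalty functions for state constraints, KPSO+SS-style warm starting from the previous sampling period and the steady-state control) can be sketched in software. The plant model, weights, and parameter values below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def simulate(x0, u_seq, dt=0.05):
    """Toy first-order plant x[k+1] = x[k] + (u[k] - x[k]) * dt (illustrative only)."""
    x, traj = x0, []
    for u in u_seq:
        x = x + (u - x) * dt
        traj.append(x)
    return np.array(traj)

def cost(x0, u_seq, ref, u_ss, x_max=2.0, rho=1e3):
    traj = simulate(x0, u_seq)
    tracking = np.sum((traj - ref) ** 2)
    effort = 1e-2 * np.sum((u_seq - u_ss) ** 2)
    # Penalty term for state-constraint violations, as in the thesis' modified PSO.
    penalty = rho * np.sum(np.maximum(0.0, np.abs(traj) - x_max) ** 2)
    return tracking + effort + penalty

def nmpc_pso_step(x0, ref, u_prev_best, u_ss, u_min=-5.0, u_max=5.0,
                  n_particles=30, iters=50, rng=np.random.default_rng(0)):
    N = len(u_prev_best)
    # KPSO+SS-style warm start: shifted previous solution, a steady-state
    # guess, and random particles within the control bounds.
    swarm = rng.uniform(u_min, u_max, (n_particles, N))
    swarm[0] = np.append(u_prev_best[1:], u_ss)   # shifted previous optimum
    swarm[1] = np.full(N, u_ss)                   # steady-state control guess
    vel = np.zeros_like(swarm)
    pbest = swarm.copy()
    pcost = np.array([cost(x0, p, ref, u_ss) for p in swarm])
    for _ in range(iters):
        g = pbest[np.argmin(pcost)]
        r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (g - swarm)
        swarm = np.clip(swarm + vel, u_min, u_max)  # input constraints by clipping
        c = np.array([cost(x0, p, ref, u_ss) for p in swarm])
        better = c < pcost
        pbest[better], pcost[better] = swarm[better], c[better]
    return pbest[np.argmin(pcost)]

u = nmpc_pso_step(x0=0.0, ref=1.0, u_prev_best=np.zeros(10), u_ss=1.0)
print(u[0])  # first control move of the optimized sequence, applied to the plant
```

In the thesis this loop is realized as a floating-point hardware architecture; the sketch only mirrors the algorithmic structure: warm-started swarm, penalized cost, and clipped control inputs.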

    Nonlinear moving-horizon state estimation for hardware implementation and a model predictive control application

    Get PDF
    Master's dissertation—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2021. In this dissertation, we explore the application of radial basis function (RBF) artificial neural networks embedded in hardware for real-time state estimation and control, namely the Moving-Horizon Estimation (MHE) and Model Predictive Control (MPC) algorithms. These algorithms are approximated using RBFs and implemented on a Field-Programmable Gate Array (FPGA), showing good results in terms of accuracy and computational time. We show that state estimation using the approximate version of the MHE can be run on a laboratory-scale kit of approximately 500 kHz for an inverted pendulum, at a clock rate of about 110 MHz. The latency of each estimate can be reduced further on FPGAs with higher clock rates, since the artificial neural network architecture is inherently parallel. Upon further inspection, it was found possible to reduce the chip-area cost by switching the cost function for one with more easily representable results: a 32-bit representation could then be used and the CORDIC module removed, replaced by a simpler 2nd-order Taylor approximation. We then expand on this by probing the idea of using a single neural network to replace both the controller and the state estimator. Compared to an MPC with full state information, the version using the MHE did not perform well against output noise. At first it was not possible to approximate the pendulum control and estimation with good results; separating the control into two parts, however, yielded better outcomes. Lastly, we verify that such a neural network was capable of stabilizing the inverted pendulum system, but not of approximating its nonlinear swing-up phase. The solution presented here can be extended to more complex, nonlinear systems, provided an architecture of reasonable complexity is found for the artificial neural network to be implemented. Supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
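The two evaluation paths discussed above can be sketched minimally: an exact Gaussian RBF layer, and the cheaper 2nd-order Taylor variant that allows dropping a CORDIC-style exponential unit in hardware. The network sizes and values below are made up for illustration:

```python
import numpy as np

def rbf_exact(x, centers, widths, weights, bias):
    """Gaussian RBF network: y = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 s_i^2)).
    Each basis function is independent, which is why the layer parallelizes well."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return bias + weights @ np.exp(-d2 / (2 * widths ** 2))

def rbf_taylor2(x, centers, widths, weights, bias):
    """Same network with exp(-z) replaced by its 2nd-order Taylor polynomial
    1 - z + z^2/2 (accurate only near z = 0; clipped below at 0)."""
    z = np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2)
    approx = np.clip(1 - z + z ** 2 / 2, 0.0, None)
    return bias + weights @ approx

# Toy network: one Gaussian centered at the origin.
centers, widths = np.array([[0.0, 0.0]]), np.array([1.0])
weights, bias = np.array([2.0]), 0.5
x = np.array([0.1, 0.0])
print(rbf_exact(x, centers, widths, weights, bias),
      rbf_taylor2(x, centers, widths, weights, bias))
```

Near a center the two outputs agree closely, which is the regime where the hardware simplification trades negligible accuracy for a much smaller exponential unit.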

    Embedded Electronic Systems for Electronic Skin Applications

    Get PDF
    The advances in sensor devices are potentially providing new solutions to many applications, including prosthetics and robotics. Endowing upper-limb prostheses with tactile sensors (electronic/sensitive skin) can provide tactile sensory feedback to amputees. In this regard, the prosthetic device is meant to be equipped with a tactile sensing system allowing the user's limb to receive tactile feedback about objects and contact surfaces. Thus, an embedded tactile sensing system is required, with wearable sensors that cover wide areas of the prosthesis. However, embedding a sensing system involves a set of challenges in terms of power consumption, data processing, real-time response, and design scalability (an e-skin may include a large number of tactile sensors). The tactile sensing system consists of: (i) a tactile sensor array, (ii) an interface electronics circuit, (iii) an embedded processing unit, and (iv) a communication interface to transmit tactile data. The objective of the thesis is to develop an efficient embedded tactile sensing system targeting e-skin applications (e.g. prosthetics) by: 1) developing a low-power and miniaturized interface electronics circuit operating in real time; 2) proposing an efficient algorithm for embedded tactile data processing, which affects the system's time latency and power consumption; 3) implementing an efficient communication channel/interface suitable for the large amount of data generated by a large number of sensors. Most of the interface electronics for tactile sensing systems proposed in the literature are composed of signal conditioning and commercial data acquisition (DAQ) devices. However, these devices are bulky (PC-based) and thus not suitable for portable prosthetics from the size, power consumption, and scalability points of view. Regarding tactile data processing, some works have exploited machine learning methods for extracting meaningful information from tactile data.
    However, embedding these algorithms poses challenges because of 1) the high amount of data to be processed, which significantly affects real-time functionality, and 2) the complex processing tasks, which impose a burden in terms of power consumption. On the other hand, the literature lacks studies addressing data transfer in tactile sensing systems; dealing with a large number of sensors will pose challenges for communication bandwidth and reliability. Therefore, this thesis exploits three approaches: 1) Developing a low-power and miniaturized Interface Electronics (IE) capable of interfacing with and acquiring signals from a large number of tactile sensors in real time. We developed a portable IE system based on a low-power ARM microcontroller and a DDC232 A/D converter that handles an array of 32 tactile sensors. Upon touch applied to the sensors, the IE acquires and pre-processes the sensor signals at low power consumption, achieving a battery lifetime of about 22 hours. We then assessed the functionality of the IE by carrying out electrical and electromechanical characterization experiments to monitor the response of the interface electronics with PVDF-based piezoelectric sensors; the results validate the correct functionality of the proposed system. In addition, we implemented filtering methods on the IE that reduced the effect of noise in the system. Furthermore, we evaluated the proposed IE by integrating it into a tactile sensory feedback system, showing effective delivery of tactile data to the user. The proposed system outperforms similar state-of-the-art solutions, dealing with a higher number of input channels while maintaining real-time functionality. 2) Optimizing and implementing a tensorial-based machine learning algorithm for touch modality classification on an embedded Zynq System-on-Chip (SoC).
    The algorithm is based on a Support Vector Machine classifier that discriminates between three input touch modality classes: "brushing", "rolling", and "sliding". We introduced an efficient algorithm minimizing the hardware implementation complexity in terms of number of operations and memory storage, which directly affect time latency and power consumption. With respect to the original algorithm, the proposed approach, implemented on the Zynq SoC, reduced the number of operations per inference from 545 M-ops to 18 M-ops and the memory storage from 52.2 KB to 1.7 KB. Moreover, the proposed method speeds up the inference time by a factor of 43.7 at a cost of only a 2% loss in accuracy, enabling the algorithm to run on an embedded processing unit and to extract tactile information in real time. 3) Implementing a robust and efficient data transfer channel to transfer aggregated data at a high transmission data rate and low power consumption. In this approach, we proposed and demonstrated a tactile sensory feedback system based on an optical communication link for prosthetic applications. The optical link features low power consumption and a wide transmission bandwidth, which makes the feedback system suitable for a large number of tactile sensors; the low-power transmission is due to the employed UWB-based optical modulation. We implemented a system prototype consisting of digital transmitter and receiver boards and acquisition circuits interfacing 32 piezoelectric sensors, then evaluated the system performance by measuring, processing, and transmitting the data of the 32 piezoelectric sensors at a 100 Mbps data rate through the optical link, at 50 pJ/bit communication energy consumption. Experimental results validated the functionality and demonstrated the real-time operation of the proposed sensory feedback system.
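The reported optical-link figures can be sanity-checked with one line of arithmetic: at 100 Mbps and 50 pJ/bit, the average transmit power is 5 mW, and the per-sensor share of the bandwidth follows directly:

```python
# Back-of-the-envelope check of the reported optical-link figures.
data_rate = 100e6        # bits per second (100 Mbps link)
energy_per_bit = 50e-12  # joules per bit (50 pJ/bit)
n_sensors = 32

link_power = data_rate * energy_per_bit          # watts
per_sensor_rate = data_rate / n_sensors          # bits per second per sensor
print(round(link_power * 1e3, 3), "mW")          # average transmit power at full rate
print(per_sensor_rate / 1e6, "Mbps per sensor")
```

This confirms the claim that the link's power budget (a few milliwatts) is compatible with a battery-powered wearable system even at full data rate.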

    Probabilistic performance validation of deep learning-based robust NMPC controllers

    Get PDF
    Solving nonlinear model predictive control problems in real time remains an important challenge despite recent advances in computing hardware, optimization algorithms, and tailored implementations. This challenge is even greater when uncertainty is present due to disturbances, unknown parameters, or measurement and estimation errors. To enable the application of advanced control schemes to fast systems and on low-cost embedded hardware, we propose to approximate a robust nonlinear model predictive controller using deep learning and to verify its quality using probabilistic validation techniques. We propose a probabilistic validation technique based on finite families, combined with the idea of the generalized maximum and constraint backoff, to enable statistically valid conclusions about general performance indicators. The potential of the proposed approach is demonstrated with simulation results of an uncertain nonlinear system. Supported by Agencia Estatal de Investigación (AEI), Spain, Grant PID2019-106212RB-C41/AEI/10.13039/501100011.
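The generalized-maximum idea admits a compact sketch: if the empirical maximum of a performance indicator is taken over N i.i.d. sampled uncertainty scenarios, then N ≥ ln(δ)/ln(1 − ε) samples suffice for a new scenario to exceed that maximum with probability at most ε, at confidence 1 − δ. The indicator below is synthetic; only the sample-size bound is the point:

```python
import math
import random

def sample_size(eps, delta):
    """Smallest N with (1 - eps)^N <= delta: after N i.i.d. scenario draws,
    the empirical maximum of a performance indicator upper-bounds a fresh
    scenario's indicator with probability >= 1 - eps, at confidence 1 - delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

N = sample_size(eps=0.01, delta=1e-6)
print(N)  # number of closed-loop simulations needed

# Empirical maximum of a synthetic performance indicator (stand-in for, e.g.,
# the worst constraint violation of the learned controller per scenario).
random.seed(0)
indicator = [random.gauss(0.0, 1.0) for _ in range(N)]
bound = max(indicator)
```

The appeal for validating learned controllers is that the guarantee needs only closed-loop simulations, not any structure of the deep-learning approximation itself.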


    Seleção de parâmetros de máquinas de vetores de suporte usando otimização multiobjetivo baseada em meta-heurísticas

    Get PDF
    Doctoral thesis—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2019. Support Vector Machines (SVMs) are a Machine Learning (ML) technique for modeling classifiers and regressors, widely used mainly because of their mathematical properties, which include good generalization capacity and robustness. The SVM training model seeks to minimize the empirical risk while maximizing the generalization capacity. However, to obtain models with good precision and low complexity, it is necessary to define the kernel and its parameters, as well as the parameters of the training model; together, these are called the hyperparameters of the SVM parameter selection problem. Minimizing the complexity and maximizing the generalization capacity of SVMs/SVRs are conflicting criteria, and therefore, in this work, the parameter selection problem is modeled as a multiobjective optimization problem (MOOP). To solve it, a multiobjective meta-heuristic called Adaptive Parameter with Mutant Tournament Multi-Objective Differential Evolution (APMT-MODE) was developed, together with a Multi-Objective Particle Swarm Optimization (MOPSO). The algorithms were tested on training sets for classifiers and regressors (obtained from the University of California, Irvine repository), combined with the Gaussian, Cauchy, polynomial, and arc-cosine kernels. To validate the developed meta-heuristics, the Friedman statistical test and post hoc tests were performed, showing that APMT-MODE is superior to the classic Non-dominated Sorting Genetic Algorithm II (NSGA-II). In addition, the computational complexity of SVM models with different kernels was compared, with the Gaussian and polynomial kernels configured by APMT-MODE obtaining the best performance. As a real-world application, APMT-MODE was used to obtain models for predicting the penetration and width of weld beads, which are used as parameters for weld process control. The analysis concluded that the models generated by APMT-MODE are more efficient than those found for perceptron-type neural networks. Finally, Grid Search was used to characterize the search space and the complexity of the parameter selection problem modeled as a MOOP.
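The conflicting criteria can be made concrete with a small sketch: after evaluating candidate hyperparameter settings, a multiobjective selection keeps only the non-dominated (error, complexity) pairs, i.e. the Pareto front that meta-heuristics like APMT-MODE search for. The (C, γ) results below are hypothetical numbers for illustration:

```python
import numpy as np

def pareto_front(points):
    """Indices of the non-dominated points when minimizing both objectives.
    points: sequence of (validation error, number of support vectors) pairs."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in both objectives
        # and strictly < in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (error, #SV) outcomes for a few (C, gamma) candidates:
results = [(0.10, 120), (0.08, 200), (0.08, 150), (0.15, 60), (0.30, 40)]
front = pareto_front(results)
print([results[i] for i in front])
```

The dominated candidate (0.08, 200) is discarded because (0.08, 150) achieves the same error with fewer support vectors; everything left is a genuine accuracy/complexity trade-off for the user to choose from.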

    Arquiteturas de hardware para aceleração de algoritmos de reconstrução morfológica

    Get PDF
    This work presents a study of the implementation of algorithms for the morphological reconstruction of biomedical images on FPGAs (Field-Programmable Gate Arrays). The architectures were based on the Sequential Reconstruction (SR) and Fast Hybrid (FH) algorithms, described in VHDL (VHSIC Hardware Description Language). The platform was evaluated by verifying the designed architecture in QuestaSim, providing the images to be reconstructed as input data; the architecture's results were validated against C or Matlab implementations (using the imreconstruct function). Additionally, hardware resource consumption was studied for different image sizes and contents in order to verify the applicability of the algorithms on reconfigurable architectures. To accelerate the image reconstruction process, a reconfigurable architecture based on the FH algorithm is proposed together with a machine learning algorithm, specifically a support vector machine (SVM). The SVM was trained using a verification/validation methodology that produced approximately 20,000 training samples. Finally, an architecture was implemented that partitions the original image among four processing units, processing each unit in parallel. The final system delivers one processed pixel per clock cycle after a latency period, being approximately 8 times faster than its unpartitioned version. Lastly, comparisons were made by running the morphological reconstruction algorithms on an ARM processor embedded within the FPGA. Supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
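The operation being accelerated can be stated compactly: morphological reconstruction by dilation iterates J = min(dilate(J), mask) until stable. Below is a NumPy sketch with 4-connectivity, a reference definition of the operation rather than the SR/FH hardware formulation:

```python
import numpy as np

def reconstruct(marker, mask):
    """Grayscale morphological reconstruction by dilation, 4-connectivity:
    repeat J = min(dilate(J), mask) until the image stops changing."""
    J = np.minimum(marker, mask).astype(float)
    while True:
        d = J.copy()
        d[1:, :] = np.maximum(d[1:, :], J[:-1, :])   # neighbor above
        d[:-1, :] = np.maximum(d[:-1, :], J[1:, :])  # neighbor below
        d[:, 1:] = np.maximum(d[:, 1:], J[:, :-1])   # neighbor to the left
        d[:, :-1] = np.maximum(d[:, :-1], J[:, 1:])  # neighbor to the right
        d = np.minimum(d, mask)                      # geodesic clip by the mask
        if np.array_equal(d, J):
            return J
        J = d

# Binary example: a seed in the left blob reconstructs only that blob.
mask = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=float)
marker = np.zeros_like(mask)
marker[0, 0] = 1.0
print(reconstruct(marker, mask))
```

The repeated full-image passes of this naive loop are exactly what the SR algorithm's ordered scans and the FH algorithm's queue-based pass reduce, and what the partitioned four-unit architecture parallelizes.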

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Get PDF
    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
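The canonical PSO that all of these variants build on fits in a few lines: each particle's velocity update combines an inertia term, a cognitive pull toward its personal best, and a social pull toward the swarm's best. The coefficients below are common textbook choices, and the sphere function is a standard benchmark:

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n, iters = 5, 30, 200
w, c1, c2 = 0.72, 1.49, 1.49          # inertia, cognitive, social coefficients

x = rng.uniform(-5, 5, (n, dim))      # particle positions
v = np.zeros((n, dim))                # particle velocities
pbest, pval = x.copy(), np.sum(x**2, axis=1)

for _ in range(iters):
    gbest = pbest[np.argmin(pval)]    # swarm-wide best position
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.sum(x**2, axis=1)          # sphere benchmark f(x) = ||x||^2
    improved = f < pval
    pbest[improved], pval[improved] = x[improved], f[improved]

print(pval.min())  # best objective value found; approaches 0
```

The modifications the survey catalogs mostly act on exactly these pieces: the topology changes which `gbest` a particle sees, hybridizations replace or augment the update rule, and parallel implementations evaluate `f` across particles concurrently.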

    Real-time Optimal Energy Management System for Plug-in Hybrid Electric Vehicles

    Get PDF
    Air pollution and rising fuel costs are becoming increasingly important concerns for the transportation industry. Hybrid electric vehicles (HEVs) are seen as a solution to these problems as they offer lower emissions and better fuel economy compared to conventional internal combustion engine vehicles. A typical HEV powertrain consists of an internal combustion engine, an electric motor/generator, and a power storage device (usually a battery). Another type of HEV is the plug-in hybrid electric vehicle (PHEV), which is conceptually similar to the fully electric vehicle. The battery in a PHEV is designed to be fully charged using a conventional home electric plug or a charging station. As such, the vehicle can travel further in full-electric mode, which greatly improves the fuel economy of PHEVs compared to HEVs. In this study, an optimal energy management system (EMS) for a PHEV is designed to minimize fuel consumption while also reducing engine emissions. This is achieved using the model predictive control (MPC) approach. MPC is an optimal model-based approach that can accommodate the many constraints involved in the design of EMSs, and is suitable for real-time implementations. The design and real-time implementation of such a control approach involves control-oriented modeling, controller design (including high-level and low-level controllers), and control scheme performance evaluation; all of these issues are addressed in this thesis. A control-relevant parameter estimation (CRPE) approach is used to make the control-oriented model more accurate. This improves the EMS performance while maintaining its real-time implementation capability. To reduce the computational complexity, the standard MPC controller is replaced by its explicit form. The explicit model predictive controller (eMPC) achieves the same performance as the implicit MPC but requires less computational effort, which leads to a fast and reliable implementation.
    The performance of the control scheme is evaluated through different stages of model-in-the-loop (MIL) simulation with an equation-based, validated high-fidelity simulation model of a PHEV powertrain. Finally, the CRPE-eMPC EMS is validated through a hardware-in-the-loop (HIL) test. The HIL simulation shows that the proposed EMS can be implemented on commercial control hardware in real time and yields promising fuel economy figures and emissions performance while maintaining vehicle drivability.
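The computational saving of the explicit form comes from precomputing the MPC solution offline as a piecewise-affine law over polyhedral state-space regions; online work reduces to point location plus one affine evaluation. The two-region scalar law below is a hypothetical example, not the thesis controller:

```python
import numpy as np

# Hypothetical two-region explicit MPC law for a scalar state x:
# each region is {x : H x <= h}, with affine control u = F x + g inside it.
regions = [
    {"H": np.array([[ 1.0]]), "h": np.array([0.0]), "F": np.array([-0.5]), "g": 0.0},
    {"H": np.array([[-1.0]]), "h": np.array([0.0]), "F": np.array([-1.2]), "g": 0.1},
]

def empc(x):
    """Point location + affine evaluation: the only online work an eMPC does,
    replacing the implicit MPC's per-step optimization solve."""
    x = np.atleast_1d(x)
    for r in regions:
        if np.all(r["H"] @ x <= r["h"] + 1e-9):
            return float(r["F"] @ x + r["g"])
    raise ValueError("state outside the explicit law's feasible set")

print(empc(-2.0))  # control for a state in the first region
```

In a real eMPC the region list comes from a multiparametric QP solved offline, and efficient point location (e.g. a binary search tree over the regions) keeps the worst-case evaluation time small and predictable, which is what makes the approach attractive for automotive control hardware.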

    Industrial Applications: New Solutions for the New Era

    Get PDF
    This book reprints articles from the Special Issue "Industrial Applications: New Solutions for the New Era" published online in the open-access journal Machines (ISSN 2075-1702). It consists of twelve published articles and belongs to the journal's "Mechatronic and Intelligent Machines" section.