
    Proceedings of SIRM 2023 - The 15th European Conference on Rotordynamics

    It was our great honor and pleasure to host the SIRM Conference in Darmstadt for the third time, after 2003 and 2011. Rotordynamics covers a huge variety of applications and challenges, all of which are in the scope of this conference. The conference was opened with a keynote lecture by Rainer Nordmann, one of the three founders of SIRM “Schwingungen in rotierenden Maschinen”. In total, 53 papers passed our strict review process and were presented, which impressively shows that rotordynamics is as relevant as ever. These contributions cover a very wide spectrum of session topics: fluid bearings and seals; air foil bearings; magnetic bearings; rotor blade interaction; rotor fluid interactions; unbalance and balancing; vibrations in turbomachines; vibration control; instability; electrical machines; monitoring, identification and diagnosis; advanced numerical tools and nonlinearities; as well as general rotordynamics. The international character of the conference has been significantly enhanced by the Scientific Board since the 14th SIRM, resulting on the one hand in an expanded Scientific Committee, which meanwhile consists of 31 members from 13 European countries, and on the other hand in the new name “European Conference on Rotordynamics”. This new international profile was also emphasized by the participants of the 15th SIRM, who came from 17 countries on three continents. We experienced a vital discussion and dialogue between industry and academia at the conference, where roughly one third of the papers were presented by industry and two thirds by academia: an excellent basis for the bidirectional transfer we call xchange at Technical University of Darmstadt. We also want to give our special thanks to the eleven industry sponsors for their great support of the conference.
On behalf of the Darmstadt Local Committee, I welcome you to read the papers of the 15th SIRM, which give further insight into the topics and presentations.

    Brief Review on Identification, Categorization and Elimination of Power Quality Issues in a Microgrid Using Artificial Intelligent Techniques

    Power quality disturbances are manifestations of disruptions in supply voltage, current, or frequency that damage utility equipment, and they have become an important issue with the introduction of more sophisticated and sensitive devices. Supply power quality thus remains a major challenge, since its degradation can severely destabilize electrical networks. Because renewable energy sources are intermittent by nature, a microgrid needs an energy storage system containing advanced power electronic converters, which are the root cause of the majority of power quality disturbances. The integration of non-linear and unbalanced loads into the grid adds further power quality problems. This article gives a compact overview of the identification, categorization, and mitigation of these power quality events in a microgrid using various artificial-intelligence-based techniques, including optimization techniques, adaptive learning techniques, signal processing and pattern recognition, neural networks, and fuzzy logic.
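The categorization step in such surveys typically starts from simple signal features before any AI is applied. As a minimal illustration (not a method from this review), the sketch below labels per-cycle RMS voltage against bands loosely modeled on common power-quality definitions; the exact thresholds and the 230 V nominal are assumptions:

```python
import math

def rms(samples):
    """Root-mean-square of one cycle of voltage samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def categorize(v_rms, v_nominal=230.0):
    """Label a per-cycle RMS value against its nominal magnitude.
    Bands loosely follow common power-quality definitions (interruption
    below 0.1 pu, sag 0.1-0.9 pu, swell above 1.1 pu); treat them as
    illustrative rather than normative."""
    pu = v_rms / v_nominal          # per-unit magnitude
    if pu < 0.1:
        return "interruption"
    if pu < 0.9:
        return "sag"
    if pu > 1.1:
        return "swell"
    return "normal"

# One cycle of a 0.5 pu sag on a 230 V (RMS) sine wave, 64 samples/cycle
cycle = [0.5 * 230.0 * math.sqrt(2) * math.sin(2 * math.pi * n / 64)
         for n in range(64)]
print(categorize(rms(cycle)))   # a 0.5 pu cycle is labelled "sag"
```

A real pipeline would feed such features (plus wavelet or spectral descriptors) into the AI classifiers the article surveys.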

    Power Quality Management and Classification for Smart Grid Application using Machine Learning

    An Efficient Wavelet-based Convolutional Transformer network (EWT-ConvT) is proposed to detect power quality disturbances in the time-frequency domain using an attention mechanism. Machine learning support further improves the network's accuracy through synthetic signal generation while reducing system complexity in practical environments. The proposed EWT-ConvT achieves 94.42% accuracy, superior to other deep learning models. The EWT-ConvT disturbance detector can also be implemented in smart grid applications for real-time embedded system development.
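The time-frequency detection idea can be illustrated without any deep-learning machinery. The sketch below is not the EWT-ConvT architecture; it merely shows how windowed DFT magnitudes localize a harmonic disturbance in time, with the window length, harmonic choice, and threshold all invented for illustration:

```python
import math

def dft_bin_mag(samples, k):
    """Magnitude of DFT bin k of `samples` (direct evaluation)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im)

def harmonic_flags(signal, win=64, fund_bin=1, ratio=0.2):
    """Flag each window whose 3rd-harmonic magnitude exceeds `ratio`
    times the fundamental's magnitude. Threshold is illustrative."""
    flags = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        fund = dft_bin_mag(w, fund_bin)
        third = dft_bin_mag(w, 3 * fund_bin)   # 3rd harmonic as a proxy
        flags.append(third > ratio * fund)
    return flags

# Two clean cycles, then two cycles with a strong 3rd harmonic added
clean = [math.sin(2 * math.pi * n / 64) for n in range(128)]
dirty = [math.sin(2 * math.pi * n / 64) + 0.5 * math.sin(6 * math.pi * n / 64)
         for n in range(128)]
print(harmonic_flags(clean + dirty))   # later windows are flagged
```

A network like EWT-ConvT learns richer time-frequency features than this fixed rule, but the localization principle is the same.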

    Modeling and Simulation in Engineering

    The Special Issue Modeling and Simulation in Engineering, belonging to the Engineering Mathematics section of the journal Mathematics, publishes original research papers dealing with advanced simulation and modeling techniques. The present book, “Modeling and Simulation in Engineering I, 2022”, contains 14 papers accepted after peer review by recognized specialists in the field. The papers address different topics occurring in engineering, such as ferrofluid transport in magnetic fields, non-fractal signal analysis, fractional derivatives, applications of swarm algorithms and evolutionary algorithms (genetic algorithms), inverse methods for inverse problems, numerical analysis of heat and mass transfer, numerical solutions for fractional differential equations, Kriging modelling, theory of the modelling methodology, and artificial neural networks for fault diagnosis in electric circuits. It is hoped that the papers selected for this issue will attract a significant audience in the scientific community and will further stimulate research involving modelling and simulation in mathematical physics and in engineering.

    Selected Papers from 2020 IEEE International Conference on High Voltage Engineering (ICHVE 2020)

    The 2020 IEEE International Conference on High Voltage Engineering (ICHVE 2020) was held on 6–10 September 2020 in Beijing, China. The conference was organized by Tsinghua University, China, and endorsed by the IEEE Dielectrics and Electrical Insulation Society. The conference attracted a great deal of attention from researchers around the world in the field of high voltage engineering. The forum offered the opportunity to present the latest developments and emerging challenges in high voltage engineering, including the topics of ultra-high voltage, smart grids, and insulating materials.

    Usefulness of oximetry and airflow signals in the simplified diagnosis of obstructive sleep apnea: design of an automatic home test

    Obstructive Sleep Apnea (OSA) is a respiratory disorder characterized by recurrent episodes of total (apnea) or partial (hypopnea) absence of airflow during sleep. Untreated OSA produces a significant decrease in quality of life and is associated with the main causes of mortality in industrialized countries. However, OSA is considered an underdiagnosed chronic disease. Continuous positive airway pressure (CPAP) is the most common therapeutic option. Nocturnal polysomnography (PSG) in a specialized sleep unit is the reference diagnostic method, although it has low availability and accessibility. Consequently, in recent years there has been a significant demand for abbreviated methods, most of them at home, to reduce waiting lists. The fundamental hypothesis is that automatic processing techniques based on machine learning tools could maximize the diagnostic accuracy of a reduced set of combined biomedical signals: overnight oximetry (SpO2) and airflow (AF) recorded at the patient's home. The main objective was to evaluate whether the joint analysis, by means of machine learning algorithms, of unsupervised SpO2 and AF signals acquired at the patient's home leads to a significant increase in diagnostic performance compared to single-channel approaches.
A prospective observational study was carried out in which a population referred consecutively to the Sleep Unit with moderate-to-high clinical suspicion of OSA was analyzed. All patients underwent an unsupervised PSG at home (gold standard), from which the SpO2 and AF signals were extracted and subsequently processed offline. The apnea-hypopnea index (AHI) derived from the PSG was used to confirm or rule out the presence of the disease. Three approaches for screening patients with suspected OSA were assessed in terms of the source of information used: single-channel based on SpO2, single-channel based on AF, and two-channel combining information from both SpO2 and AF. The automatic processing of the SpO2 and AF signals was developed in 4 stages: preprocessing, feature extraction, feature selection, and pattern recognition. Unsupervised SpO2 and AF recordings were parameterized using the fast correlation-based filter (FCBF) algorithm. The following machine learning methods were used: multiple linear regression (MLR), multilayer perceptron neural networks (MLP), and support vector machines (SVM). The population was divided into independent training (60%) and test (40%) groups. Agreement between the estimated AHI and the actual AHI derived from at-home PSG was assessed, and typical OSA cutoff points (5, 15, and 30 events/h) were applied. A total of 299 unattended PSGs were performed at home, with a validity percentage of 85.6%. The highest agreement between the estimated AHI and the PSG AHI was reached by the SVMSpO2+AF model, with an intraclass correlation coefficient (ICC) of 0.93, a 4-class kappa index of 0.71, and an overall accuracy for the 4 OSA severity categories of 81.25%, significantly higher than the individual analyses of the SpO2 and airflow signals. The SVMSpO2+AF model achieved the highest diagnostic performance of all algorithms for the detection of severe OSA, with an accuracy of 95.83% and an AUC ROC of 0.98.
In addition, the AUC ROC of the dual-channel models was significantly higher (p < 0.01) than that achieved by all the single-channel approaches for the cutoff of 15 events/h. The proposed methodology based on the joint automatic analysis of the SpO2 and AF signals acquired at home showed a high complementarity that led to a remarkable increase in diagnostic performance compared to single-channel approaches. The automatic models outperformed the conventional indices (desaturation and airflow-derived indices) both in correlation and concordance with the AHI from PSG and in overall diagnostic accuracy, providing a moderate increase in diagnostic performance, particularly in the detection of moderate-to-severe OSA. Our findings suggest that the joint analysis of oximetry and airflow signals by means of machine learning methods allows a simplified as well as accurate screening of OSA at the patient's home.
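The final pattern-recognition stage, estimating an AHI and mapping it to the four severity categories, can be sketched as follows. The cutoffs (5, 15, 30 events/h) come from the abstract, but the input features and regression weights below are placeholders, not the thesis's fitted models:

```python
def estimate_ahi(odi3, flow_event_index, w=(0.6, 0.5, 1.0)):
    """Toy regression stage: combine an oximetry feature (e.g. a 3% oxygen
    desaturation index) and an airflow feature into an AHI estimate.
    The weights are invented placeholders, not fitted coefficients."""
    w_odi, w_flow, bias = w
    return max(0.0, w_odi * odi3 + w_flow * flow_event_index + bias)

def severity(ahi):
    """Four OSA categories using the standard cutoffs from the study:
    5, 15 and 30 events/h."""
    if ahi < 5:
        return "no OSA"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

ahi = estimate_ahi(odi3=28.0, flow_event_index=25.0)
print(round(ahi, 1), severity(ahi))   # 0.6*28 + 0.5*25 + 1.0 = 30.3 -> "severe"
```

The study's actual models (MLR, MLP, SVM over FCBF-selected features) play the role of `estimate_ahi` here.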

    Evaluating footwear “in the wild”: Examining wrap and lace trail shoe closures during trail running

    Trail running participation has grown over the last two decades, and as a result an increasing number of studies have examined the sport. Despite this growth, little is understood about the effects of footwear on trail running biomechanics in ecologically valid conditions. The purpose of our study was to evaluate how a Wrap versus a Lace closure (on the same shoe) impacts running biomechanics on a trail. Thirty subjects ran a trail loop in each shoe while wearing a global positioning system (GPS) watch, heart rate monitor, inertial measurement units (IMUs), and plantar pressure insoles. The Wrap closure reduced peak foot eversion velocity (measured via IMU), which has been associated with fit. The Wrap closure also increased heel contact area, which is likewise associated with fit and may explain the subjective preference for the Wrap. Lastly, runners showed a small but significant increase in running speed in the Wrap shoe with no differences in heart rate or subjective exertion. In total, the Wrap closure fit better than the Lace closure on a variety of terrain. This study demonstrates the feasibility of detecting meaningful biomechanical differences between footwear features in the wild through appropriate statistical tools and study design. Evaluating footwear in ecologically valid environments often creates additional variance in the data. This variance should not be treated as noise; instead, it is critical to capture this additional variance and the challenges of ecologically valid terrain if we hope to use biomechanics to impact the development of new products.
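The kind of within-runner comparison described above can be sketched as a paired-difference test. The data below are hypothetical, and the study's actual statistical analysis may differ:

```python
import math
import statistics

def paired_t(a, b):
    """Paired t statistic for two within-subject conditions:
    t = mean(d) / (sd(d) / sqrt(n)), with d_i = a_i - b_i."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Hypothetical per-runner peak eversion velocity (deg/s): Lace vs Wrap
lace = [412.0, 388.0, 401.0, 395.0, 420.0, 407.0]
wrap = [398.0, 380.0, 390.0, 391.0, 405.0, 399.0]
t = paired_t(lace, wrap)
print(round(t, 2))   # positive t: eversion velocity is lower in the Wrap
```

Pairing each runner with themselves is what lets the extra terrain-induced variance be separated from the shoe effect.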

    Machine Learning and Its Application to Reacting Flows

    This open access book introduces and explains machine learning (ML) algorithms and techniques developed for statistical inference on complex processes or systems, and their application to simulations of chemically reacting turbulent flows. These two fields, ML and turbulent combustion, each have a large body of work and knowledge of their own, and this book brings them together and explains the complexities and challenges involved in applying ML techniques to simulate and study reacting flows. This matters for the world's total primary energy supply (TPES): more than 90% of this supply comes from combustion technologies, and combustion has non-negligible effects on the environment. Although alternative technologies based on renewable energies are emerging, their share of the TPES is currently less than 5%, and a complete paradigm shift would be needed to replace combustion sources. Whether this is practical is an entirely different question, and the answer depends on the respondent. However, a pragmatic analysis suggests that the combustion share of TPES is likely to remain above 70% even by 2070. Hence, it is prudent to take advantage of ML techniques to improve combustion science and technology, so that efficient and “greener” combustion systems that are friendlier to the environment can be designed. The book covers the current state of the art in these two topics and outlines the challenges, merits, and drawbacks of using ML for turbulent combustion simulations, including avenues that can be explored to overcome the challenges. The required mathematical equations and background are discussed, with ample references for readers who wish to find further detail. The book is unique in its coverage of topics, ranging from big data analysis and machine learning algorithms to their applications in combustion science and system design for energy generation.

    Artificial Intelligence-based Control Techniques for HVDC Systems

    The electrical energy industry depends, among other things, on the ability of networks to deal with uncertainties from several directions. Artificial intelligence (AI) techniques applied to smart-grid systems in high-voltage direct current (HVDC) networks are a reliable way to achieve this goal, as they solve complex problems in power system engineering using AI algorithms. Due to their distinctive characteristics, they are usually effective approaches for optimization problems and have been successfully applied to HVDC systems. This paper presents a number of issues in HVDC transmission systems. It reviews AI applications such as HVDC transmission system controllers and power flow control within DC grids in multi-terminal HVDC systems. Advances in HVDC systems enable better performance under varying conditions and an optimal dynamic response in practical settings; however, these systems are non-linear and complex, which makes mathematical modeling difficult. ANN-based controllers have replaced traditional PI controllers in the rectifier of the HVDC link. Moreover, the combination of ANNs and fuzzy logic has proven to be a powerful strategy for controlling highly non-linear loads. Future research can focus on developing AI algorithms for an advanced control scheme for UPFC devices, as well as on a comprehensive analysis of the power fluctuations and steady-state errors that can be eliminated by the quick response of such a control scheme. This survey was motivated by the need to develop adaptive AI controllers that enhance the performance of HVDC systems, building on their promising results in the control of power systems. Doi: 10.28991/ESJ-2023-07-02-024
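As a point of reference for the controllers being replaced, a discrete PI loop can be sketched in a few lines. The plant, gains, and time constant below are illustrative, not a model of a real HVDC link:

```python
def simulate_pi(kp=2.0, ki=8.0, setpoint=1.0, dt=0.001, steps=5000):
    """Discrete PI loop driving a first-order plant
    (dy/dt = (u - y) / tau) toward `setpoint`.
    Gains and plant time constant are illustrative, not tuned for a real link."""
    tau = 0.05                              # plant time constant (s)
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        y += dt * (u - y) / tau             # Euler step of the plant
    return y

print(round(simulate_pi(), 3))   # settles near the 1.0 pu setpoint
```

An ANN-based controller replaces the fixed `kp`/`ki` law with a learned mapping, which is what gives it an edge on non-linear loads.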

    Neural networks in control engineering

    The purpose of this thesis is to investigate the viability of integrating neural networks into control structures. These networks are an attempt to create artificially intelligent systems with the ability to learn and remember. They mathematically model the biological structure of the brain and consist of a large number of simple interconnected processing units emulating brain cells. Due to the highly parallel and consequently computationally expensive nature of these networks, intensive research in this field has only become feasible due to the availability of powerful personal computers in recent years. Consequently, attempts at exploiting the attractive learning and nonlinear optimization characteristics of neural networks have been made in most fields of science and engineering, including process control. The control structures suggested in the literature for the inclusion of neural networks in control applications can be divided into four major classes. The first class includes approaches in which the network forms part of an adaptive mechanism which modulates the structure or parameters of the controller. In the second class the network forms part of the control loop and replaces the conventional control block, thus leading to a pure neural network control law. The third class consists of topologies in which neural networks are used to produce models of the system which are then utilized in the control structure, whilst the fourth category includes suggestions which are specific to the problem or system structure and not suitable for a generic neural network-based approach to control problems. Although several of these approaches show promising results, only model based structures are evaluated in this thesis.
This is due to the fact that many of the topologies in other classes require system estimation to produce the desired network output during training, whereas the training data for network models is obtained directly by sampling the system input(s) and output(s). Furthermore, many suggested structures lack the mathematical motivation to consider them for a general structure, whilst the neural network model topologies form natural extensions of their linear model based origins. Since it is impractical and often impossible to collect sufficient training data prior to implementing the neural network based control structure, the network models have to be suited to on-line training during operation. This limits the choice of network topologies for models to those that can be trained on a sample by sample basis (pattern learning) and furthermore are capable of learning even when the variation in training data is relatively slow, as is the case for most controlled dynamic systems. A study of feedforward topologies (one of the main classes of networks) shows that the multilayer perceptron network with its backpropagation training is well suited to model nonlinear mappings but fails to learn and generalize when subjected to slowly varying training data. This is due to the global input interpretation of this structure, in which any input affects all hidden nodes such that no effective partitioning of the input space can be achieved. This problem is overcome in a less flexible feedforward structure, known as the regular Gaussian network. In this network, the response of each hidden node is limited to a sphere of fixed radius around its center, and these centers are fixed in a uniform distribution over the entire input space. Each input to such a network is therefore interpreted locally and only affects nodes with their centers in close proximity.
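A minimal sketch of such a regular Gaussian network, with fixed uniformly spaced centers, locally responding Gaussian nodes, and sample-by-sample (pattern) training of the output weights, might look like this; the width, learning rate, and target function are illustrative:

```python
import math

class RegularGaussianNet:
    """Regular Gaussian network on [0, 1]: hidden-node centers are fixed on a
    uniform grid, so each input only excites nearby nodes (local
    interpretation); only the output weights are trained, sample by sample."""
    def __init__(self, n_centers=15, width=0.08):
        self.centers = [i / (n_centers - 1) for i in range(n_centers)]
        self.width = width                 # illustrative radius
        self.w = [0.0] * n_centers

    def _phi(self, x):
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def predict(self, x):
        return sum(wi * pi for wi, pi in zip(self.w, self._phi(x)))

    def train_sample(self, x, target, rate=0.5):
        """Pattern learning: one LMS gradient step per observed sample."""
        phi = self._phi(x)
        err = target - self.predict(x)
        for i, pi in enumerate(phi):
            self.w[i] += rate * err * pi
        return err

net = RegularGaussianNet()
# Slowly varying training data, as produced by a controlled dynamic system
for epoch in range(300):
    for k in range(50):
        x = k / 49
        net.train_sample(x, math.sin(2 * math.pi * x))
print(round(net.predict(0.25), 2))   # close to sin(pi/2) = 1.0
```

Because a distant input barely activates a node, updates stay local, which is exactly the property that lets this structure cope with slowly varying data.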
A deficiency common to all feedforward networks, when considered as models for dynamic systems, is their inability to conserve previous outputs and states for future predictions. Since this absence of dynamic capability requires the user to identify the order of the system prior to training and is therefore not entirely self-learning, more advanced network topologies are investigated. The most versatile of these structures, known as a fully recurrent network, re-uses the previous state of each of its nodes for subsequent outputs. However, despite its superior modelling capability, the tests performed using the Williams and Zipser training algorithm show that such structures often fail to converge and require excessive computing power and time, when increased in size. Despite its rigid structure and lack of dynamic capability, the regular Gaussian network produces the most reliable and robust models and was therefore selected for the evaluations in this study. To overcome the network initialization problem, found when using a pure neural network model, a combination structure in which the network operates in parallel with a mathematical model is suggested. This approach allows the controller to be implemented without any prior network training and initially relies purely on the mathematical model, much like conventional approaches. The network portion is then trained during on-line operation in order to improve the model. Once trained, the enhanced model can be used to improve the system response, since model exactness plays an important role in the control action achievable with model based structures. The applicability of control structures based on neural network models is evaluated by comparing the performance of two network approaches to that of a linear structure, using a simulation of a nonlinear tank system.
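The combination structure can be sketched as a prior mathematical model in parallel with a small trainable corrector that learns the model/plant mismatch online; the plant, model, and learning rate below are invented for illustration:

```python
def linear_model(u):
    """Prior mathematical model of the plant (deliberately inexact)."""
    return 0.8 * u

def true_plant(u):
    """The actual nonlinear system the combined model must match."""
    return u + 0.3 * u ** 2

class CombinationModel:
    """Network in parallel with the mathematical model: the trainable part
    only has to learn the model/plant mismatch, so control can start with
    the untrained corrector contributing nothing."""
    def __init__(self):
        # Two-parameter quadratic corrector standing in for the network portion
        self.w1, self.w2 = 0.0, 0.0

    def predict(self, u):
        return linear_model(u) + self.w1 * u + self.w2 * u ** 2

    def train_sample(self, u, y, rate=0.05):
        err = y - self.predict(u)       # residual after the prior model
        self.w1 += rate * err * u
        self.w2 += rate * err * u ** 2
        return err

model = CombinationModel()
for _ in range(5000):                    # on-line operation, sample by sample
    for u in [0.2, 0.5, 0.8, 1.1]:
        model.train_sample(u, true_plant(u))
print(round(model.predict(1.0), 2))      # approaches true_plant(1.0) = 1.3
```

Before any training, `predict` reduces to the mathematical model alone, which is what makes immediate deployment possible.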
The first network controller is developed from the internal model control (IMC) structure, which includes a forward and inverse model of the system to be controlled. Both models can be replaced by a combination of mathematical and neural topologies, the network portion of which is trained on-line to compensate for the discrepancies between the linear model and nonlinear system. Since the network has no dynamic capacity, former system outputs are used as inputs to the forward and inverse model. Due to this direct feedback, the trained structure can be tuned to perform within limits not achievable using a conventional linear system. As mentioned previously, the IMC structure uses both forward and inverse models. Since the control law requires that these models are exact inverses, an iterative inversion algorithm has to be used to improve the values produced by the inverse combination model. Due to deadtimes and right-half-plane zeroes, many systems are furthermore not directly invertible. Whilst such unstable elements can be removed from mathematical models, the inverse network is trained directly from the forward model and can not be compensated. These problems could be overcome by a control structure for which only a forward model is required. The neural predictive controller (NPC) presents such a topology. Based on the optimal control philosophy, this structure uses a model to predict several future outputs. The errors between these and the desired output are then collected to form the cost function, which may also include other factors such as the magnitude of the change in input. The input value that optimally fulfils all the objectives used to formulate the cost function can then be found by locating its minimum. Since the model in this structure includes a neural network, the optimization can not be formulated in a closed mathematical form and has to be performed using a numerical method.
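The predictive idea can be sketched with a toy model and a coarse grid search standing in for the numerical optimizer; the dynamics, horizon, and cost weighting below are assumptions:

```python
def plant_model(y, u):
    """One-step model of a first-order nonlinear system (stands in for the
    trained network model; the dynamics are invented for illustration)."""
    return 0.7 * y + 0.2 * u - 0.05 * y ** 2

def npc_step(y0, u_prev, setpoint, horizon=5, weight=0.1):
    """Choose the input minimizing the predictive cost
    J(u) = sum_k (setpoint - y_k)^2 + weight * (u - u_prev)^2
    over a coarse grid (a numerical stand-in for the optimizer)."""
    best_u, best_cost = u_prev, float("inf")
    for i in range(201):
        u = -5.0 + i * 0.05                 # candidate inputs in [-5, 5]
        y, cost = y0, weight * (u - u_prev) ** 2
        for _ in range(horizon):            # roll the model forward
            y = plant_model(y, u)
            cost += (setpoint - y) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: apply the optimized input to the same model
y, u = 0.0, 0.0
for _ in range(30):
    u = npc_step(y, u, setpoint=1.0)
    y = plant_model(y, u)
print(round(y, 2))   # regulated near the setpoint of 1.0
```

Because only a forward model is rolled out, no model inversion is needed, which is the advantage over the IMC structure noted above.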
For the NPC topology, as for the neural network IMC structure, former system outputs are fed back to the model, and again the trained network approach produces results not achievable with a linear model. Due to the single network approach, the NPC topology furthermore overcomes the limitations described for the neural network IMC structure and can be extended to include multivariable systems. This study shows that the nonlinear modelling capability of neural networks can be exploited to produce learning control structures with improved responses for nonlinear systems. Many of the difficulties described are due to the computational burden of these networks and associated algorithms. These are likely to become less significant due to the rapid development in computer technology and advances in neural network hardware. Although neural network based control structures are unlikely to replace the well understood linear topologies, which are adequate for the majority of applications, they might present a practical alternative where, due to nonlinearity or modelling errors, the conventional controller cannot achieve the required control action.