
    Data-driven power system operation: Exploring the balance between cost and risk

    Supervised machine learning has been successfully used in the past to infer a system's security boundary by training classifiers (also referred to as security rules) on a large number of simulated operating conditions. Although significant research has been carried out on using classifiers for the detection of critical operating points, using classifiers for the subsequent identification of suitable preventive/corrective control actions remains underdeveloped. This paper focuses on addressing the challenges that arise when utilizing security rules for control purposes. The inherent trade-off between operating cost and security risk is explored in detail. To optimally navigate this trade-off, a novel approach is proposed that uses an ensemble learning method (AdaBoost) to infer a probabilistic description of a system's security boundary and Platt calibration to correct the introduced bias. Subsequently, a general-purpose framework for building probabilistic and disjunctive security rules of a system's secure operating domain is developed that can be embedded within classic operation formulations. Through case studies on the IEEE 39-bus system, it is showcased how security rules can be efficiently utilized to optimally operate the system under multiple uncertainties while respecting a user-defined cost-risk balance. This is a fundamental step towards embedding data-driven models within classic optimisation approaches.
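    The calibration step lends itself to a compact illustration. Below is a minimal sketch in scikit-learn of a probabilistically calibrated security rule (AdaBoost plus Platt scaling); the feature matrix of operating conditions and the secure/insecure labelling rule are synthetic placeholders, not the paper's actual case-study data.

```python
# Minimal sketch: probabilistic security rule = AdaBoost + Platt scaling.
# The operating points and labels below are synthetic stand-ins for the
# paper's simulated operating conditions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                # stand-in operating conditions
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in secure/insecure labels

# Platt calibration (method="sigmoid") corrects AdaBoost's characteristically
# biased scores; cross-validated fitting keeps the calibration data held out.
rule = CalibratedClassifierCV(
    AdaBoostClassifier(n_estimators=200, random_state=0),
    method="sigmoid", cv=5,
)
rule.fit(X, y)

x_new = rng.normal(size=(3, 10))               # candidate operating points
p_secure = rule.predict_proba(x_new)[:, 1]     # calibrated probability of security
```

    The calibrated probabilities, rather than hard class labels, are what allow a user-defined cost-risk balance to be enforced inside an optimisation formulation.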

    PEMFC performance improvement through oxygen starvation prevention, modeling, and diagnosis of hydrogen leakage

    Catalyst degradation leads to the emergence of pinholes in Proton Exchange Membrane Fuel Cells (PEMFCs) and, subsequently, to hydrogen leakage. Oxygen starvation resulting from hydrogen leaks is one of the primary life-limiting factors in PEMFCs: the cell voltage drops and performance deteriorates. A starved PEMFC also acts as a hydrogen pump, increasing the amount of hydrogen on the cathode side and resulting in hydrogen emissions. Therefore, it is important to delay the occurrence of oxygen starvation within the Membrane Electrode Assembly (MEA) while simultaneously being able to diagnose hydrogen crossover through the pinholes. In this work, first, we focus on catalyst configuration as a novel method to prevent oxygen starvation and catalyst degradation. It is hypothesized that redistribution of the platinum catalyst can increase the maximum current density and prevent oxygen starvation and catalyst degradation. Therefore, a multi-objective optimization problem is defined to maximize fuel cell efficiency and to prevent oxygen starvation in the PEMFC. Results indicate that the maximum current density rises by about eight percent, while the maximum PEMFC power density increases by twelve percent. In the next step, a previously developed pseudo two-dimensional model is used to simulate fuel cell behavior in the normal and starvation modes. This model is developed further to capture the effect of the hydrogen pumping phenomenon and to predict the amount of hydrogen at the outlet of the cathode channel. The results obtained from the model are compared with experimental data, and the validation shows that the proposed model is fast and precise. Next, Machine Learning (ML) estimators are used first to detect whether there is hydrogen crossover in the fuel cell and second to quantify the amount of crossover. K-Nearest Neighbour (KNN) and Artificial Neural Network (ANN) estimators are chosen for leakage detection and classification. Eventually, an ANN classifier-regressor pair is chosen to first isolate leaky PEMFCs and then quantify the amount of leakage. The classifier and regressor are both trained on datasets generated by the pseudo two-dimensional model. Different performance indexes are evaluated to ensure that the model is neither underfitting nor overfitting. This ML diagnosis algorithm can be employed as an onboard diagnosis system to detect and possibly prevent cell reversal failures.
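    A compact sketch of the two-stage classifier-regressor diagnosis described above, using scikit-learn MLPs as the ANN pair. The four-feature input (voltage, current, temperature, cathode-outlet hydrogen) and the synthetic data are illustrative assumptions; in the work itself the training set comes from the pseudo two-dimensional model.

```python
# Minimal sketch of the ANN classifier-regressor diagnosis pair.
# Features and data are illustrative placeholders, not the paper's dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))     # e.g. voltage, current, temperature, cathode H2
leak_rate = np.clip(0.3 * X[:, 3] + rng.normal(0.0, 0.05, size=2000), 0.0, None)
is_leaky = (leak_rate > 0.05).astype(int)      # stand-in leak labels

# Stage 1: ANN classifier isolates leaky cells.
detector = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=1).fit(X, is_leaky)

# Stage 2: ANN regressor quantifies leakage, trained only on leaky samples.
mask = is_leaky == 1
quantifier = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                          random_state=1).fit(X[mask], leak_rate[mask])

x_new = X[:5]
flags = detector.predict(x_new)                # leaky or not?
rates = np.where(flags == 1, quantifier.predict(x_new), 0.0)
```

    Training the regressor only on samples flagged as leaky keeps the quantification stage from being biased by the large population of healthy cells.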

    Use of Machine Learning for Automated Convergence of Numerical Iterative Schemes

    Convergence of a numerical solution scheme occurs when a sequence of increasingly refined iterative solutions approaches a value consistent with the modeled phenomenon. Approximations using iterative schemes need to satisfy convergence criteria, such as reaching a specific error tolerance or number of iterations. The schemes often bypass the criteria or converge prematurely because of oscillations that may be inherent to the solution. Using a Support Vector Machine (SVM) machine learning approach, an algorithm is designed to use the source data to train a model that predicts convergence in the solution process and stops unnecessary iterations. The discretization of the Navier-Stokes (NS) equations for a transient local hemodynamics case requires determining a pressure correction term from a Poisson-like equation at every time step. The pressure correction solution must converge fully to avoid introducing a mass imbalance. Considering time-, frequency-, and time-frequency-domain features of the residual's behavior, the algorithm trains an SVM model to predict the convergence of the Poisson equation's iterative solver so that the time-marching process can move forward efficiently and effectively. The fluid flow model integrates peripheral circulation using a lumped-parameter model (LPM) to capture the field pressures and flows across various circulatory compartments. Machine learning opens the door to an intelligent approach for iterative solutions by replacing prescribed criteria with an algorithm that uses the data set itself to predict convergence.
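    A minimal sketch of such a convergence predictor: each window of recent solver residuals is reduced to a few time- and frequency-domain features and fed to an SVM. The particular feature set, the synthetic residual histories, and the convergence labels are all illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of an SVM convergence predictor for an iterative solver.
# The features, synthetic residual windows, and labels are illustrative.
import numpy as np
from sklearn.svm import SVC

def residual_features(r):
    """Summarize a 1-D residual window with mixed-domain features."""
    logr = np.log10(np.abs(r) + 1e-30)
    slope = np.polyfit(np.arange(len(r)), logr, 1)[0]  # decay rate (time domain)
    spectrum = np.abs(np.fft.rfft(r - r.mean()))       # frequency-domain content
    return [slope, logr[-1], r.std(), spectrum.max()]

# Synthetic residual histories with varying decay rates; label = "effectively
# converged" when the final residual is below a tolerance.
rng = np.random.default_rng(2)
windows = [10.0 ** (-0.1 * d * np.arange(50)) * (1 + 0.1 * rng.normal(size=50))
           for d in rng.uniform(0.0, 1.0, 400)]
labels = [int(w[-1] < 1e-3) for w in windows]

X = np.array([residual_features(w) for w in windows])
model = SVC(kernel="rbf", C=10.0).fit(X, labels)

# At run time, stop iterating once the model predicts convergence:
# if model.predict([residual_features(recent_residuals)])[0]: break
```

    The point of the feature design is that oscillatory residuals, which fool a plain tolerance check, still exhibit a measurable decay trend and spectral signature.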

    Modelling and diagnosis of solid oxide fuel cell (SOFC)

    The development of mathematical models and numerical simulations is crucial to the design improvement, optimization, and control of solid oxide fuel cells (SOFCs). The current study introduces a novel and computationally efficient pseudo-two-dimensional (pseudo-2D) model for simulating a single cell of a high-temperature hydrogen-fueled SOFC. The simplified pseudo-2D model can evaluate the cell polarization curve, species concentrations along the channel, cell temperature, and the current density distribution. The model takes the cell voltage as an input and computes the total current as an output. A full-physics three-dimensional model is then developed in ANSYS Fluent, with a complete step-by-step modeling approach explained, to study the same cell under identical operating conditions. The 3D model is validated against other numerical and experimental studies available in the literature. It is shown that although the pseudo-2D solution converges significantly faster than the 3D case, the results of the two models match closely, especially for the species distributions. The simplified model is then used to conduct a sensitivity analysis of the effects of the multi-physiochemical properties of the porous electrodes on the polarization curve of the cell. A systematic inverse approach is then used to estimate these properties by applying the pattern search optimization algorithm to the polarization curve produced by the pseudo-2D model. Finally, nine different input parameters of the model are varied to find the hydrogen distribution for each case, generating a large dataset of nearly half a million operating points. The data is successfully employed to design a novel classifier-regressor pair as a virtual hydrogen sensor for online tracking of the hydrogen concentration along the cell to avoid fuel starvation.
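    The inverse estimation step can be sketched compactly. SciPy ships no pattern-search routine, so Powell's derivative-free method stands in for it here; sofc_model() is a hypothetical placeholder for the pseudo-2D solver, and the two fitted parameters are illustrative surrogates for the porous-electrode properties.

```python
# Minimal sketch of the inverse parameter-estimation step: minimize the misfit
# between a reference polarization curve and the model's prediction. Powell's
# derivative-free method stands in for pattern search (not in SciPy), and
# sofc_model() is a placeholder, not the study's pseudo-2D solver.
import numpy as np
from scipy.optimize import minimize

def sofc_model(voltage, a, b):
    """Placeholder polarization model: current density vs. cell voltage."""
    return a * (1.1 - voltage) + b * (1.1 - voltage) ** 2   # illustrative only

v = np.linspace(0.6, 1.05, 20)
i_ref = sofc_model(v, 2.0, 1.5)     # synthetic "measured" polarization curve

def misfit(theta):
    return np.sum((sofc_model(v, *theta) - i_ref) ** 2)

fit = minimize(misfit, x0=[1.0, 1.0], method="Powell",
               bounds=[(0.1, 5.0), (0.1, 5.0)])
print(fit.x)                        # recovered surrogate electrode parameters
```

    Because the forward model is cheap, a derivative-free search over the parameter bounds is practical even when the misfit surface is non-smooth.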

    An extensive study on iterative solver resilience: characterization, detection and prediction

    Soft errors caused by transient bit flips have the potential to significantly impact an application's behavior. This has motivated the design of an array of techniques to detect, isolate, and correct soft errors using microarchitectural, architectural, compilation-based, or application-level techniques to minimize their impact on the executing application. The first step toward the design of good error detection/correction techniques involves an understanding of an application's vulnerability to soft errors. This work focuses on the effects of silent data corruption on iterative solvers and on efforts to mitigate those effects. In this thesis, we first present the first comprehensive characterization of the impact of soft errors on the convergence characteristics of six iterative methods using application-level fault injection. We analyze the impact of soft errors in terms of the type of error (single- vs. multi-bit), the distribution and location of the bits affected, the data structure and statement impacted, and the variation with time. We create a public-access database with more than 1.5 million fault injection results. We then analyze the performance of existing soft error detection mechanisms and present comparative results. Motivated by our observations, we evaluate a machine-learning based detector that takes as features the runtime observations used by the individual detectors to arrive at their conclusions. Our evaluation demonstrates improved results over the individual detectors. We then propose a machine-learning based method to predict a program's error behavior in order to make fault injection studies more efficient. We demonstrate this method by assessing the performance of soft error detectors. We show that our method maintains 84% accuracy on average at up to 53% less cost. We also show that, once a model is trained, further fault injection tests would cost 10% of the expected full fault injection runs.
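    The combined detector admits a short sketch: runtime scores produced by the individual detectors become the feature vector of a single classifier. The detector names, feature scores, and synthetic labels below are illustrative assumptions; the thesis trains on its own fault-injection database.

```python
# Minimal sketch of a combined ML-based soft-error detector: the per-detector
# runtime scores are features of one classifier. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 5000
features = np.column_stack([
    rng.normal(size=n),   # detector 1 score: residual-norm jump
    rng.normal(size=n),   # detector 2 score: value-range violation
    rng.normal(size=n),   # detector 3 score: convergence-rate anomaly
])
# Stand-in ground truth: corruption correlates with the detector scores.
corrupted = (features @ np.array([1.0, 0.8, 0.5])
             + 0.3 * rng.normal(size=n) > 0.5).astype(int)

combined = RandomForestClassifier(n_estimators=200, random_state=3)
print(cross_val_score(combined, features, corrupted, cv=5).mean())
```

    Letting a learned model weigh the detectors' evidence is what yields the improvement over any single detector's fixed threshold.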

    Algorithms for Verification of Analog and Mixed-Signal Integrated Circuits

    Over the past few decades, the tremendous growth in the complexity of analog and mixed-signal (AMS) systems has posed great challenges to AMS verification, resulting in a rapidly growing verification gap. Existing formal methods provide appealing completeness and reliability, yet they suffer from limited efficiency and scalability. Data-oriented, machine-learning based methods offer efficient and scalable solutions but do not guarantee completeness or full coverage. Additionally, the trend towards shorter time to market for AMS chips urges the development of efficient verification algorithms to accelerate the joint design and testing phases. This dissertation envisions a hierarchical and hybrid AMS verification framework, consolidating assorted algorithms to embrace efficiency, scalability, and completeness in a statistical sense. Leveraging the advantages of various verification techniques, this dissertation develops algorithms in several categories. In the context of formal methods, it proposes a generic and comprehensive model abstraction paradigm to model AMS content with a unifying analog representation. Moreover, an algorithm is proposed to parallelize reachability analysis by decomposing AMS systems into subsystems of lower complexity and dividing the exploration of the circuit's reachable state space, which is formulated as a satisfiability problem, into subproblems with a reduced number of constraints. The proposed modeling method and the hierarchical parallelization enhance the efficiency and scalability of reachability analysis for AMS verification. On the subject of learning-based methods, the dissertation proposes to convert the verification problem into a binary classification problem solved using support vector machine (SVM) based learning algorithms. To reduce the number of simulations needed for training-sample collection, an active learning strategy based on probabilistic version space reduction is proposed to perform adaptive sampling. An extension of the active learning strategy for conservative prediction is leveraged to minimize the occurrence of false negatives. Moreover, another learning-based method is proposed to characterize AMS systems with a sparse Bayesian learning regression model. An implicit feature-weighting mechanism based on the kernel method is embedded in the Bayesian learning model for concurrent quantification of the influence of circuit parameters on the targeted specification, which can be solved efficiently with an iterative method similar to the expectation-maximization (EM) algorithm. Besides, the achieved sparse parameter weighting offers favorable assistance to design analysis and test optimization.
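    The SVM-plus-active-learning idea admits a short sketch. Margin-based uncertainty sampling stands in here for the dissertation's probabilistic version-space reduction (a related but simpler query rule), and simulate() is a hypothetical placeholder for the circuit simulation that labels a parameter point as pass or fail.

```python
# Minimal sketch: SVM verification with active learning. Each round queries
# the unlabeled candidates nearest the current decision boundary (margin-based
# uncertainty sampling, a simple stand-in for version-space reduction).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def simulate(x):
    """Placeholder for a circuit-level pass/fail check at parameter point x."""
    return int(x[0] ** 2 + 0.5 * x[1] < 1.0)   # illustrative spec boundary

pool = rng.uniform(-2, 2, size=(2000, 2))      # candidate parameter points
labels = {int(i): simulate(pool[i])            # small random seed set
          for i in rng.choice(len(pool), 20, replace=False)}

for _ in range(10):                            # active-learning rounds
    X = pool[list(labels)]
    y = np.array([labels[i] for i in labels])
    svm = SVC(kernel="rbf").fit(X, y)
    margins = np.abs(svm.decision_function(pool))
    margins[list(labels)] = np.inf             # skip already-labeled points
    for i in np.argsort(margins)[:20]:         # query the most uncertain
        labels[int(i)] = simulate(pool[i])
# svm now approximates the pass/fail boundary from ~220 simulations
# instead of labeling all 2000 candidates.
```

    Concentrating simulations near the decision boundary is what lets the classifier reach a given accuracy with far fewer training samples than uniform sampling.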