42 research outputs found

    Robustification of Chaos in 2D Maps


    Sensitivitätsanalyse und robustes Prozessdesign pharmazeutischer Herstellungsprozesse

    The existence of parameter uncertainties (PU) limits model-based process design techniques. It also hinders the modernization of pharmaceutical manufacturing processes, which is necessitated by intensified market competition and Quality by Design (QbD) principles. Thus, in this thesis, approaches are proposed for efficient and effective sensitivity analysis and robust design of pharmaceutical processes. The point estimate method (PEM) and polynomial chaos expansion (PCE) are implemented for uncertainty propagation and quantification (UQ) in the proposed approaches. Global sensitivity analysis (GSA) provides quantitative measures of the influence of PU on process outputs over the entire parameter domain. Two GSA techniques are presented in detail and computed with the PCE. The results from case studies show that GSA is able to quantify the heterogeneity of the information in PU, and that model structure and parameter dependencies significantly affect the final GSA result as well as the output variation. Frameworks for robust process design are introduced to alleviate the adverse effect of PU on process performance. The first robust design framework is developed based on the PEM. The proposed approach has high computational efficiency and is able to take parameter dependencies into account. A novel approach, in which the Gaussian mixture distribution (GMD) concept is combined with the PEM, is then proposed to handle non-Gaussian distributions. The resulting GMD-PEM concept provides a better trade-off between process efficiency and the probability of constraint violations than other approaches. The second robust design framework is based on an iterative back-off strategy and the PCE. It provides designs with the desired robustness, while the associated computational expense is independent of the optimization problem. The decoupling of optimization and UQ makes it possible to apply robust process design to more complex pharmaceutical manufacturing processes with a large number of PU. In this thesis, the case studies include unit operations for (bio)chemical synthesis, separation (crystallization), and formulation (freeze-drying), which cover the complete production chain of pharmaceutical manufacturing. Results from the case studies reveal the significant impact of PU on process design, and they show the efficiency and effectiveness of the proposed frameworks regarding process performance and robustness in the context of QbD.
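    The point estimate method propagates uncertainty through a model with a small, deterministic set of sample points instead of random sampling. A minimal sketch using unscented-transform-style sigma points and weights (an illustration of the general idea; the exact weighting scheme used in the thesis may differ):

```python
import numpy as np

def pem_mean_var(f, mu, Sigma, kappa=2.0):
    """Point-estimate-style propagation of a Gaussian uncertainty
    N(mu, Sigma) through a model f, using 2n+1 deterministic points
    (unscented-transform weights; a sketch, not the thesis's scheme)."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * Sigma)   # scaled matrix square root
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    ys = np.array([f(p) for p in pts])
    mean = w @ ys                                  # weighted output mean
    var = w @ (ys - mean) ** 2                     # weighted output variance
    return mean, var

# quadratic toy model: the scheme recovers E[x^2] = mu^2 + sigma^2 exactly
f = lambda x: float(x[0] ** 2)
m, v = pem_mean_var(f, np.array([1.0]), np.array([[0.25]]))
# m == 1.25, the exact second moment of N(1, 0.25)
```

    Because the sigma points match the first two moments of the input distribution, the propagated mean is exact for polynomials up to degree three, which is what makes the method attractive for repeated evaluations inside a robust-design optimization loop.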

    Development of new learning control approaches

    Ph.D. (Doctor of Philosophy)

    Mean Subtraction and Mode Selection in Dynamic Mode Decomposition

    Koopman mode analysis has provided a framework for the analysis of nonlinear phenomena across a plethora of fields. Its numerical implementation via Dynamic Mode Decomposition (DMD) has been extensively deployed and improved upon over the last decade. We address the problems of mean subtraction and DMD mode selection in the context of finite-dimensional Koopman invariant subspaces. Preprocessing of data by subtraction of the temporal mean of a time series has been a point of contention in companion-matrix-based DMD, since this preprocessing can render DMD equivalent to a temporal DFT. We prove that this equivalence is impossible when the order of the DMD-based representation of the dynamics exceeds the dimension of the system. Moreover, this parity of DMD and DFT is mostly indicative of an inadequacy of data, in the sense that the number of snapshots taken is not enough to represent the true dynamics of the system. We then vindicate the practice of pruning DMD eigenvalues based on the norm of the respective modes. Once a minimum number of time delays has been taken, DMD eigenvalues corresponding to DMD modes with low norm are shown to be spurious, and hence must be discarded. When dealing with mean-subtracted data, the above criterion for detecting synthetic eigenvalues can be applied after additional preprocessing, which takes the form of an eigenvalue constraint on Companion DMD or yet another time delay. (Comment: 43 pages, 7 figures.)
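    Whatever the variant, DMD fits a linear operator mapping each snapshot to the next and reads off its eigenvalues and modes. A minimal SVD-based ("exact DMD") sketch, not taken from the paper, including the mode-norm ranking that the pruning criterion above relies on:

```python
import numpy as np

def dmd(X, r=None):
    """Exact DMD of a snapshot matrix X (state dim x num snapshots).
    Returns DMD eigenvalues and modes; r optionally truncates the SVD."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # projected linear operator A_tilde = U* A U
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    # rank eigenvalues by mode norm; low-norm modes are pruning candidates
    order = np.argsort(-np.linalg.norm(modes, axis=0))
    return eigvals[order], modes[:, order]

# snapshots of a known linear system: DMD recovers its eigenvalues (0.9, 0.5)
A = np.array([[0.9, 0.2], [0.0, 0.5]])
x0 = np.array([1.0, 1.0])
X = np.column_stack([np.linalg.matrix_power(A, k) @ x0 for k in range(12)])
eigvals, modes = dmd(X)
```

    Note that X here is not mean-subtracted; subtracting the temporal mean of the snapshots before this step is exactly the contested preprocessing the paper analyzes.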

    Tropical Cyclone Data Assimilation: Experiments with a Coupled Global-Limited-Area Analysis System

    This study investigates the benefits of employing a limited-area data assimilation (DA) system to enhance lower-resolution global analyses in the Northwest Pacific tropical cyclone (TC) basin. Numerical experiments are carried out with a global analysis system at horizontal resolution T62 and a limited-area analysis system at resolutions from 200 km to 36 km. The global and limited-area DA systems, which are both based on the Local Ensemble Transform Kalman Filter algorithm, are implemented using a unique configuration, in which the global DA system provides information about the large-scale analysis and background uncertainty to the limited-area DA system. In experiments that address the global-to-limited-area resolution ratio, the limited-area analyses of the storm locations for experiments in which the ratio is 1:2 are, on average, more accurate than those from the global analyses. Increasing the resolution of the limited-area system beyond 100 km adds little direct benefit to the analysis of position or intensity, although 48 km analyses reduce boundary effects of coupling the models and may benefit analyses in which observations with larger representativeness error are assimilated. Two factors contribute to the higher accuracy of the limited-area analyses. First, the limited-area system improves the accuracy of the location estimates for strong storms, which is introduced when the background is updated by the global assimilation. Second, it improves the accuracy of the background estimate of the storm locations for moderate and weak storms. Improvements in the steering flow analysis due to increased resolution are modest and short-lived in the forecasts. Limited-area track forecasts are more accurate, on average, than global forecasts, independently of the strength of the storms up to five days. 
This forecast improvement is due to the more accurate analysis of the initial position of the storms and the better representation of the interactions between the storms and their immediate environment. Experiments that test the treatment and quality control (QC) methods of TC observations show that significant improvements can be achieved in the analyses and forecasts of TCs when observations with large representativeness error are not discarded by the online QC procedure. These experiments examine the impact of assimilating TCVitals SLP, QuikSCAT 10 m wind components, and reconnaissance dropsondes alongside the conventional observations assimilated by NCEP in real time. A combined method, which clips the special TC observations via Huberization when multiple observation types are unavailable and keeps the TCVitals observation when other special observations are present, yields significant systematic improvements in the analyses and forecasts of strong and moderate storms.
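The core analysis step of the ensemble transform filter family used here can be sketched as follows. This is an illustration of the standard ETKF update, without the localization that gives the LETKF its "local" prefix, and is not the study's implementation:

```python
import numpy as np

def etkf_update(X, y, H, R):
    """ETKF analysis step (sketch; localization omitted).
    X: n x k background ensemble, y: m observations,
    H: m x n observation operator, R: m x m observation-error covariance."""
    n, k = X.shape
    xb = X.mean(axis=1, keepdims=True)
    Xp = X - xb                                   # background perturbations
    Yp = H @ Xp                                   # perturbations in obs space
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp)
    wa = Pa @ Yp.T @ Rinv @ (y.reshape(-1, 1) - H @ xb)   # mean-update weights
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T        # symmetric square root
    return xb + Xp @ (wa + Wa)                    # analysis ensemble
```

In the coupled configuration described above, the global system would supply the large-scale background ensemble entering this update, while the limited-area system refines it at higher resolution.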

    Proceedings of the 2021 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    In 2021, the annual joint workshop of Fraunhofer IOSB and KIT IES was hosted at the IOSB in Karlsruhe. For a week, from the 2nd to the 6th of July, the doctoral students gave extensive reports on the status of their research. The results and ideas presented at the workshop are collected in this book in the form of detailed technical reports.


    Stochastic and Optimal Distributed Control for Energy Optimization and Spatially Invariant Systems

    Improving the energy efficiency and grid responsiveness of buildings requires sensing, computing, and communication to enable stochastic decision-making and distributed operations. Optimal control synthesis plays a significant role in dealing with the complexity and uncertainty associated with energy systems. The dissertation studies the general area of complex networked systems that consist of interconnected components and usually operate in uncertain environments. Specifically, this dissertation develops tools using stochastic and optimal distributed control to overcome these challenges and improve the sustainability of electric energy systems. The first tool is a unifying stochastic control approach for improving energy efficiency while meeting probabilistic constraints. This algorithm is applied to improve energy efficiency in buildings and the operational efficiency of virtualized web servers. Although all the optimization in this technique takes the form of convex optimization, it relies heavily on semidefinite programming (SDP). A generic SDP solver can handle only up to hundreds of variables, so for a large-scale system the existing off-the-shelf algorithms may not be an appropriate tool for optimal control; the optimization is therefore subsequently carried out in a distributed way. The second tool is a concrete study of optimal distributed control for spatially invariant systems. Spatial invariance means that the dynamics of the system do not vary as we translate along some spatial axis. The optimal H2 decentralized control problem is solved by computing an orthogonal projection onto a class of Youla parameters with a decentralized structure. Optimal H∞ performance is posed as a distance minimization in a general L∞ space from a vector function to a subspace with a mixed L∞ and H∞ structure. In this framework, the dual and pre-dual formulations lead to finite-dimensional convex optimizations that approximate the optimal solution within the desired accuracy. Furthermore, a mixed L2/H∞ synthesis problem for spatially invariant systems is formulated to trade off transient performance against robustness. Finally, the Non-Markovian decentralized stochastic control problem, a more general networked-system setting, is addressed using the stochastic maximum principle via Malliavin calculus.
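    For Gaussian uncertainty, a probabilistic constraint of the kind mentioned above is commonly reduced to a deterministic one by tightening each linear constraint with a quantile-scaled standard deviation. A sketch of that standard reformulation, verified by Monte Carlo (the dissertation's actual formulation may differ):

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

# Chance constraint P(a @ x <= b) >= 1 - eps for x ~ N(mu, Sigma)
# tightens to the deterministic a @ mu + z_{1-eps} * sqrt(a @ Sigma @ a) <= b.
mu = np.array([1.0, 2.0])                       # hypothetical state mean
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])    # hypothetical covariance
a = np.array([1.0, 1.0])
eps = 0.05
z = NormalDist().inv_cdf(1 - eps)
b = a @ mu + z * sqrt(a @ Sigma @ a)            # tightest feasible bound

# Monte Carlo check: the violation probability should be close to eps
rng = np.random.default_rng(0)
xs = rng.multivariate_normal(mu, Sigma, size=200_000)
p_viol = np.mean(xs @ a > b)
```

    Because the tightened constraint is linear in mu, it slots directly into a convex (e.g. SDP-based) synthesis problem without sampling at optimization time.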

    Simulation and Robust Optimization for Electric Devices with Uncertainties

    This dissertation deals with the modeling, simulation, and optimization of low-frequency electromagnetic devices and the quantification of the impact of uncertainties on these devices. The emphasis is on the application of these methods to electric machines. A Permanent Magnet Synchronous Machine (PMSM) is simulated using Iso-Geometric Analysis (IGA). An efficient modeling procedure is established by incorporating a harmonic stator-rotor coupling. The procedure is found to be stable, and a strong reduction in computational time is observed with respect to a classical monolithic finite element method. The properties of the ingredients of IGA, i.e., B-splines and Non-Uniform Rational B-Splines (NURBS), are exploited to conduct a shape optimization for the example of a Stern-Gerlach magnet. It is shown that the IGA framework is a reliable and promising tool for simulating and optimizing electric devices. Different formulations for robust optimization are recalled and tested for the optimization of the size of the permanent magnet in a PMSM. It is shown that under linearization the deterministic and the stochastic formulations are equivalent. An efficient deterministic optimization algorithm is constructed by implementing an affine decomposition; it outperforms the widely used stochastic algorithms for this application. Finally, different models are developed to incorporate uncertainties in the simulation of PMSMs. They cover different types of rotor eccentricity, uncertainties in the permanent magnets (geometric and material-related), and uncertainties introduced by the welding processes during manufacturing. Their influences are studied using stochastic collocation and the classical Monte Carlo method. Furthermore, the Multilevel Monte Carlo approach is combined with error estimation and applied to determine high-dimensional uncertainties in a PMSM.
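    The Multilevel Monte Carlo idea mentioned above telescopes the expectation over a hierarchy of model resolutions, spending most samples on the cheap coarse levels and only a few on expensive fine ones. A toy sketch with a hypothetical level model standing in for the PMSM solver (not the dissertation's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def G(z, l):
    """Toy level-l approximation of a quantity of interest; the level
    bias 2**-l stands in for discretization error (hypothetical model)."""
    return z**2 * (1.0 + 2.0**-l)

def mlmc_estimate(n_samples):
    """Multilevel Monte Carlo: telescoping sum over correction levels.
    n_samples[l] draws are used for the level-l correction E[G_l - G_{l-1}]."""
    est = 0.0
    for l, n in enumerate(n_samples):
        z = rng.standard_normal(n)              # same draws couple both levels
        if l == 0:
            est += np.mean(G(z, 0))             # coarse-level expectation
        else:
            est += np.mean(G(z, l) - G(z, l - 1))  # cheap, low-variance correction
    return est

# decreasing sample counts toward the expensive fine levels
estimate = mlmc_estimate([200_000, 50_000, 10_000, 5_000])
# telescopes to E[G(Z, 3)] = 1 + 2**-3 = 1.125 for Z ~ N(0, 1)
```

    The savings come from the level corrections having much smaller variance than the quantity itself, so the fine-level (costly) evaluations need only a few samples.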