91 research outputs found

    Identification and control of deposition processes

    The electrochemical deposition process is defined as the production of a coating on a surface from an aqueous solution composed of several substances. Electrochemical deposition processes are characterized by strong nonlinearity, high complexity, and disturbances. Improving production quality therefore requires identifying a reasonably accurate model from data, in a reasonable amount of time and with reasonable computational effort. Such identification makes it possible to predict the behavior of unmeasured signals and to design a control algorithm that meets consumer demands. This thesis addresses the identification and control of deposition processes. A model for an electrochemical cell that takes into account both electrode interfaces and the activity of the ions participating in the deposition process is developed, and a method for accounting for uncompensated resistance is proposed. The identifiability of two models, the conventional model and the developed model, is investigated under step and sweep forms of applied voltage. It is proven that the conventional electrochemical cell model can be identified uniquely from measurements of the cell current, using either a series of step-voltage experiments or a single linear sweep voltammetry experiment. The Zakai filtering and pathwise filtering methods are applied to an electrochemical cell model that is nonlinear in its parameters, in order to estimate the electrode kinetics and mass-transfer parameters of the copper electrodeposition process. For the case of known parameters, feedforward controllers that force the concentration at the boundary to follow a desired reference concentration are designed for the deposition processes. The adaptive boundary concentration control problem for the electrochemical cell, with simultaneous parameter identification, is solved using the Zakai filtering method. Such control can avoid depletion in industrial applications such as copper deposition baths. Finally, a method is proposed for identifying the kinetic parameters and the time-varying mixed potential of the nonlinear electroless nickel plating model. The method converts the original nonlinear time-varying identification problem into a time-invariant quadratic optimization problem solvable by conventional least squares.
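    The final step above reduces identification to ordinary least squares. Below is a minimal, self-contained sketch of that step, assuming the nonlinear time-varying problem has already been recast as a linear regression y = phi @ theta; the regressors and parameter values are invented for illustration and are not the thesis's actual plating model.

```python
# Hypothetical example: fit two kinetic-style parameters by ordinary least
# squares after the identification problem has been recast as y = phi @ theta.
import numpy as np

def identify_parameters(phi: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the quadratic problem min_theta ||phi @ theta - y||^2."""
    theta, _, _, _ = np.linalg.lstsq(phi, y, rcond=None)
    return theta

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
phi = np.column_stack([np.exp(-t), t])        # assumed regressor matrix
theta_true = np.array([0.8, 0.3])             # assumed "true" parameters
y = phi @ theta_true + 0.01 * rng.standard_normal(t.size)  # noisy measurements
print(identify_parameters(phi, y))            # recovers approximately [0.8, 0.3]
```

    Once the problem is in this form, the quadratic cost has a unique minimizer whenever the regressor matrix has full column rank, which is what makes the conversion to a time-invariant problem valuable.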

    Beyond Gaussian Statistical Modeling in Geophysical Data Assimilation

    This review discusses recent advances in geophysical data assimilation beyond Gaussian statistical modeling, in the fields of meteorology, oceanography, as well as atmospheric chemistry. The non-Gaussian features are stressed rather than the nonlinearity of the dynamical models, although both aspects are entangled. Ideas recently proposed to deal with these non-Gaussian issues, in order to improve the state or parameter estimation, are emphasized. The general Bayesian solution to the estimation problem and the techniques to solve it are first presented, as well as the obstacles that hinder their use in high-dimensional and complex systems. Approximations to the Bayesian solution relying on Gaussian, or on second-order moment closure, have been wholly adopted in geophysical data assimilation (e.g., Kalman filters and quadratic variational solutions). Yet, nonlinear and non-Gaussian effects remain. They essentially originate in the nonlinear models and in the non-Gaussian priors. How these effects are handled within algorithms based on Gaussian assumptions is then described. Statistical tools that can diagnose them and measure deviations from Gaussianity are recalled. The following advanced techniques that seek to handle the estimation problem beyond Gaussianity are reviewed: maximum entropy filter, Gaussian anamorphosis, non-Gaussian priors, particle filter with an ensemble Kalman filter as a proposal distribution, maximum entropy on the mean, or strictly Bayesian inferences for large linear models, etc. Several ideas are illustrated with recent or original examples that possess some features of high-dimensional systems. Many of the new approaches are well understood only in special cases and have difficulties that remain to be circumvented. Some of the suggested approaches are quite promising, and sometimes already successful for moderately large though specific geophysical applications. Hints are given as to where progress might come from.
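    As a concrete illustration of one technique named above, the following is a minimal sketch of empirical Gaussian anamorphosis, assuming a simple rank-based estimate of the CDF; tie handling and tail extrapolation, which matter in practice, are ignored here, and the example is not drawn from the review itself.

```python
# Empirical Gaussian anamorphosis: map a non-Gaussian ensemble through its
# empirical CDF and then the inverse standard-normal CDF, so the analysis
# can be performed in a space where Gaussian assumptions hold more closely.
import numpy as np
from scipy.stats import norm, rankdata

def anamorphosis(x: np.ndarray) -> np.ndarray:
    """Transform samples x to an approximately N(0, 1) sample."""
    u = rankdata(x) / (x.size + 1.0)   # empirical CDF values, kept inside (0, 1)
    return norm.ppf(u)                 # inverse Gaussian CDF of those values

rng = np.random.default_rng(1)
ensemble = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # skewed ensemble
z = anamorphosis(ensemble)
print(round(z.mean(), 3), round(z.std(), 3))  # close to 0 and 1
```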

    Air-data estimation for air-breathing hypersonic vehicles

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1996. Includes bibliographical references (p. 194-198). By Bryan Heejin Kang.

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues involved in ensuring optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference.

    In the first part of the thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies, and game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These schemes are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Learning-based Byzantine identification schemes learn the identities of the Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine-tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance: the codes correct the erroneous information from the Byzantines and thereby counter their attack.

    The second line of research considers human-only networks, referred to as human networks. A similar strategy is adopted: the effect of unskilled humans sharing beliefs with a central observer, called the CEO, is analyzed, and the loss in performance due to such unskilled humans is characterized. This problem falls within the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit as the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance is established between the exponential behavior of discrete CEO problems and the 1/R behavior of Gaussian CEO problems. In short, sharing beliefs (uniform) is fundamentally easier, in terms of convergence rate, than sharing measurements (Gaussian), but sharing decisions (discrete) is easier still. Beyond theoretical analysis, results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-code-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach over the majority-voting approach are highlighted using simulated and real datasets (a sketch of the coding idea appears below).

    In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks faster and more efficiently. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios, where humans and machines constantly interact to perform even the simplest of tasks. While machines perform best on some tasks, humans still give better results on tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be further explored.
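    A minimal sketch of the coding idea referenced above, under simplified assumptions: each worker answers one binary micro-task defined by a column of a code matrix, and the final answer is decoded by minimum Hamming distance, so a bounded number of unreliable responses can be corrected. The specific code and error model are illustrative, not those of the thesis.

```python
# Coding-based fusion of unreliable binary answers: decode by minimum
# Hamming distance to the rows (codewords) of an assumed code matrix.
import numpy as np

def decode(code: np.ndarray, answers: np.ndarray) -> int:
    """Return the class whose codeword is nearest to the answers in Hamming distance."""
    return int(np.argmin(np.sum(code != answers, axis=1)))

# Assumed 4-class code for 7 workers; its minimum distance is 4, so any
# single flipped answer is still decoded correctly.
code = np.array([[0, 0, 0, 1, 1, 1, 1],
                 [0, 1, 1, 0, 0, 1, 1],
                 [1, 0, 1, 0, 1, 0, 1],
                 [1, 1, 0, 1, 0, 0, 1]])
rng = np.random.default_rng(2)
true_class = 2
answers = code[true_class].copy()
answers[rng.integers(0, 7)] ^= 1     # one unreliable worker flips a bit
print(decode(code, answers))         # prints 2 despite the error
```

    In this framing, majority voting corresponds to a repetition code; codes with larger minimum distance trade more micro-tasks for greater tolerance of unreliable workers.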

    Nonlinear Bayesian Estimation via Solution of The Fokker-Planck Equation

    A general approach to optimal nonlinear filtering is the recursive Bayesian approach. The key step in this approach is to determine the probability density function of the state vector conditioned on the available measurements. However, an optimal solution to the Bayesian filtering problem can be obtained exactly only for a small class of problems, such as the linear Gaussian case. Therefore, in practice, approximate solutions such as the extended Kalman filter have been used.

    Optimal nonlinear filtering in the recursive Bayesian approach is a two-step process consisting of prediction and update. In the update step, the prior conditional state probability density function (PDF) from the prediction step is updated through Bayes' rule using measurements from sensors. The prediction of the conditional state PDF can be made by solving the Fokker-Planck equation (FPE), which governs the time evolution of the conditional state PDF. However, it is extremely difficult to obtain an analytical solution of the Fokker-Planck equation except in a few special cases. So far, this estimation method has not been employed much in practice because of the high computational cost of solving the FPE numerically. In this dissertation, methods to improve the efficiency of numerically solving the FPE are investigated in order to make the nonlinear filter more efficient.

    Two finite difference methods, namely i) the explicit forward method and ii) the alternating direction implicit (ADI) method, are used to solve the FPE numerically. Although the explicit forward method is much simpler to implement, the ADI method is preferred for its low computational cost. To reduce the computational cost further, as the first contribution of the dissertation, a moving domain scheme is developed that reduces the domain of integration required for solving the Fokker-Planck equation numerically. Simulation results show that the accuracy of the estimation is improved compared with the extended Kalman filter, while the computational cost is significantly lower with the proposed moving domain scheme than without it.

    Recently, a nonlinear filtering algorithm using a direct quadrature method of moments was proposed, in which the associated Fokker-Planck equation is solved efficiently via discrete quadrature based on moment constraints. For some problems, however, this approach exhibits a phenomenon similar to the "degeneracy" of a particle filter, namely the concentration of weight on particular particles. A possible cause is that only the weights are updated through the modified Bayes' rule. Therefore, as another contribution of this dissertation, a new hybrid filter is proposed in which the measurement update equations of the extended or unscented Kalman filter are used along with the direct quadrature method of moments to solve the FPE. In this way the degeneracy problem can be mitigated.

    The proposed filtering methods are then applied to several challenging problems, namely i) the bearing-only tracking problem, ii) the relative orbit position estimation problem, and iii) the orbit determination problem, to demonstrate their advantages. Simulation results indicate that the performance of the proposed filters is better than that of existing nonlinear filtering methods, such as the extended Kalman filter, especially with fewer measurement updates.
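    The following is a minimal sketch of one prediction-update cycle as described above, for a scalar state: an explicit forward finite-difference step of the FPE for the prediction, followed by a Bayes' rule update with a Gaussian likelihood. The drift, noise levels, and grid are illustrative assumptions, not the dissertation's test problems.

```python
# One prediction-update cycle for a scalar SDE dx = f(x) dt + sigma dW:
# explicit forward-Euler finite differences for the Fokker-Planck equation
# dp/dt = -d(f p)/dx + (sigma**2 / 2) d2p/dx2, then a Bayes' rule update.
import numpy as np

x = np.linspace(-5.0, 5.0, 401)          # fixed spatial grid
dx = x[1] - x[0]
dt = 1e-4                                # small step keeps the explicit scheme stable
sigma = 0.5                              # assumed process-noise intensity

def f(x):
    return -x                            # assumed drift: a stable linear SDE

def fpe_predict(p, steps):
    """Propagate the PDF p through the FPE with explicit forward differences."""
    for _ in range(steps):
        dflux = np.gradient(f(x) * p, dx)              # advection term
        d2p = np.gradient(np.gradient(p, dx), dx)      # diffusion term
        p = np.clip(p + dt * (-dflux + 0.5 * sigma**2 * d2p), 0.0, None)
        p /= p.sum() * dx                              # renormalize the density
    return p

def bayes_update(p, z, r):
    """Multiply the predicted PDF by a Gaussian likelihood N(z; x, r)."""
    post = p * np.exp(-0.5 * (z - x) ** 2 / r)
    return post / (post.sum() * dx)

p = np.exp(-0.5 * (x - 2.0) ** 2)        # Gaussian prior: mean 2, variance 1
p /= p.sum() * dx
p = fpe_predict(p, steps=100)            # prediction over a short horizon
p = bayes_update(p, z=1.0, r=0.25)       # update with measurement z = 1.0
print((x * p).sum() * dx)                # posterior mean, roughly 1.2
```

    A moving domain scheme of the kind described above would shrink this fixed grid to the region where the density has appreciable mass, which is where the claimed computational savings come from.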