
    Modeling and Intelligent Control for Spatial Processes and Spatially Distributed Systems

    Dynamical systems are often characterized by their time-dependent evolution, named temporal dynamics. The space-dependent evolution of dynamical systems, named spatial dynamics, is another important domain of interest for many engineering applications. By studying both the spatial and temporal evolution, novel modeling and control applications may be developed for many industrial processes. One process of special interest is additive manufacturing, where a three-dimensional object is manufactured in a layer-wise fashion via a numerically controlled process. The material is printed over a spatial domain in each layer and subsequent layers are printed on top of each other. The spatial dynamics of the printing process over the layers is named the layer-to-layer spatial dynamics. Additive manufacturing provides great flexibility in terms of material selection and design geometry for modern manufacturing applications, and has been hailed as a cornerstone technology for smart manufacturing, or Industry 4.0, applications in industry. However, due to the issues in reliability and repeatability, the applicability of additive manufacturing in industry has been limited. Layer-to-layer spatial dynamics represent the dynamics of the printed part. Through the layer-to-layer spatial dynamics, it is possible to represent the physical properties of the part such as dimensional properties of each layer in the form of a heightmap over a spatial domain. Thus, by considering the spatial dynamics, it is possible to develop models and controllers for the physical properties of a printed part. This dissertation develops control-oriented models to characterize the spatial dynamics and layer-to-layer closed-loop controllers to improve the performance of the printed parts in the layer-to-layer spatial domain. In practice, additive manufacturing resources are often utilized as a fleet to improve the throughput and yield of a manufacturing system. 
An additive manufacturing fleet poses additional challenges in modeling, analysis, and control at the system level. An additive manufacturing fleet is an instance of the more general class of spatially distributed systems, where the resources in the system (e.g., additive manufacturing machines, robots) are spatially distributed within the system. The goal is to efficiently model, analyze, and control spatially distributed systems by considering the system-level interactions of the resources. This dissertation develops a centralized system-level modeling and control framework for additive manufacturing fleets. Many monitoring and control applications rely on the availability of run-time, up-to-date representations of the physical resources (e.g., the spatial state of a process, connectivity and availability of resources in a fleet). Purpose-driven digital representations of the physical resources, known as digital twins, provide up-to-date digital representations of resources at run-time for analysis and control. This dissertation develops an extensible digital twin framework for cyber-physical manufacturing systems. The proposed digital twin framework is demonstrated through experimental case studies on abnormality detection, cyber-security, and spatial monitoring for additive manufacturing processes. The results and the contributions presented in this dissertation improve the performance and reliability of additive manufacturing processes and fleets for industrial applications, which in turn enables next-generation manufacturing systems with enhanced control and analysis capabilities through intelligent controllers and digital twins.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169635/1/baltaefe_1.pd
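As a rough illustration of the layer-to-layer control idea described above (not the dissertation's actual model), one can treat the heightmap over a spatial grid as a state that evolves from layer to layer, h[k+1] = h[k] + eta * u[k], and correct accumulated height errors with per-layer feedback. All parameter values here (deposition efficiencies, gain, layer height) are hypothetical:

```python
import numpy as np

eta_true = 0.75    # actual deposition efficiency (unknown to the controller)
eta_model = 0.80   # efficiency assumed by the controller's model
gain = 0.9         # layer-to-layer feedback gain
L = 0.1            # desired height added per layer
n_points, n_layers = 50, 30

h = np.zeros(n_points)                 # heightmap over a 1-D spatial grid
for k in range(1, n_layers + 1):
    prev_ref = (k - 1) * L             # height the previous layer should have reached
    # Feedforward for one nominal layer, plus feedback on the accumulated
    # height error, both scaled by the modeled deposition efficiency.
    u = (L + gain * (prev_ref - h)) / eta_model
    h = h + eta_true * u               # actual layer-to-layer height update

final_error = n_layers * L - h.mean()  # residual height shortfall
```

Because the controller's efficiency model is wrong, a small steady-state shortfall per layer remains, but the feedback keeps it bounded instead of letting it accumulate over the layers.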

    Director's discretionary fund report for fiscal year 1994

    This technical memorandum contains brief technical papers describing research and technology development programs sponsored by the Ames Research Center Director's Discretionary Fund during fiscal year 1994 (October 1993 through September 1994). An appendix provides administrative information for each of the sponsored research programs.

    Languages and Tools for Optimization of Large-Scale Systems

    Modeling and simulation are established techniques for solving design problems in a wide range of engineering disciplines today. Dedicated computer languages, such as Modelica, and efficient software tools are available. In this thesis, Optimica, an extension of Modelica targeted at dynamic optimization of Modelica models, is proposed. In order to demonstrate the Optimica extension, supporting software has been developed. This includes a modularly extensible Modelica compiler, the JModelica compiler, and an extension that also supports Optimica. A Modelica library for paper machine dryer section modeling, DryLib, has been developed. The classes in the library enable structured and hierarchical modeling of dryer sections at the application-user level, while offering extensibility for the expert user. Based on DryLib, a parameter optimization problem, a model reduction problem, and an optimization-based control problem have been formulated and solved. A start-up optimization problem for a plate reactor has been formulated in Optimica and solved by means of the Optimica compiler. In addition, the robustness properties of the start-up trajectories have been evaluated by means of Monte Carlo simulation. In many control systems, it is necessary to consider interaction with a user. In this thesis, a manual control scheme for an unstable inverted pendulum system, where the inputs are bounded, is presented. The proposed controller is based on the notion of reachability sets and guarantees semi-global stability for all references. A two-wheeled inverted pendulum robot has been developed. A distributed control system, including sensor processing algorithms and a stabilizing control scheme, has been implemented on three on-board embedded processors.
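Languages like Optimica let the user state a dynamic optimization problem declaratively over a Modelica model. As a language-agnostic sketch of what such a problem amounts to, the snippet below solves a toy optimal control problem by direct single shooting with piecewise-constant inputs; the dynamics, cost weights, and horizon are illustrative, not taken from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

# Toy dynamic optimization: steer dx/dt = -x + u from x(0) = 1 toward zero
# while penalizing control effort, over N piecewise-constant input intervals.
N, dt = 20, 0.1

def cost(u):
    x, J = 1.0, 0.0
    for k in range(N):
        J += (x**2 + 0.1 * u[k]**2) * dt   # running cost
        x += dt * (-x + u[k])              # forward-Euler integration step
    return J

# Optimize the input trajectory directly (single shooting).
res = minimize(cost, np.zeros(N), method="BFGS")
u_opt = res.x
```

A tool chain like the Optimica compiler automates exactly this translation, from a declarative problem statement to a transcribed numerical optimization, typically with far more capable discretizations than the forward-Euler single shooting used here.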

    On the 3D electromagnetic quantitative inverse scattering problem: algorithms and regularization

    In this thesis, 3D quantitative microwave imaging algorithms are developed with emphasis on efficiency of the algorithms and quality of the reconstruction. First, a fast simulation tool has been implemented which makes use of a volume integral equation (VIE) to solve the forward scattering problem. The solution of the resulting linear system is done iteratively. To do this efficiently, two strategies are combined. First, the matrix-vector multiplications needed in every step of the iterative solution are accelerated using a combination of the Fast Fourier Transform (FFT) method and the Multilevel Fast Multipole Algorithm (MLFMA). It is shown that this hybrid MLFMA-FFT method is most suited for large, sparse scattering problems. Second, the number of iterations is reduced by using an extrapolation technique to determine suitable initial guesses, which are already close to the solution. This technique combines a marching-on-in-source-position scheme with a linear extrapolation over the permittivity under the form of a Born approximation. It is shown that this forward simulator indeed exhibits a better efficiency. The fast forward simulator is incorporated in an optimization technique which minimizes the discrepancy between measured data and simulated data by adjusting the permittivity profile. A Gauss-Newton optimization method with line search is employed in this dissertation to minimize a least squares data fit cost function with additional regularization. Two different regularization methods were developed in this research. The first regularization method penalizes strong fluctuations in the permittivity by imposing a smoothing constraint, which is a widely used approach in inverse scattering. However, in this thesis, this constraint is incorporated in a multiplicative way instead of in the usual additive way, i.e., its weight in the cost function is reduced with an improving data fit.
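The regularized Gauss-Newton update at the heart of such a reconstruction can be sketched on a toy nonlinear least-squares problem. The exponential model below, the identity regularizer R, and its weight lam are illustrative stand-ins for the permittivity unknowns and regularization terms of the actual imaging problem:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)             # "measured" data from a * exp(b * t)

def residual(p):                       # data-fit residual r(p)
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):                       # Jacobian of r with respect to (a, b)
    a, b = p
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])

p = np.array([1.0, 0.0])               # initial guess
lam, R = 1e-8, np.eye(2)               # stand-in regularization weight and matrix
for _ in range(25):
    r, J = residual(p), jacobian(p)
    # Regularized Gauss-Newton step: (J^T J + lam * R) dp = -J^T r
    p = p + np.linalg.solve(J.T @ J + lam * R, -J.T @ r)
```

In the dissertation's setting the update system is large and ill-conditioned and is itself solved iteratively (with a line search on the step); this dense two-parameter version only illustrates the structure of the normal equations.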
The second regularization method is Value Picking regularization, which is a new method proposed in this dissertation. This regularization is designed to reconstruct piecewise homogeneous permittivity profiles. Such profiles are hard to reconstruct since sharp interfaces between different permittivity regions have to be preserved, while other strong fluctuations need to be suppressed. Instead of operating on the spatial distribution of the permittivity, as certain existing methods for edge preservation do, it imposes the restriction that only a few different permittivity values should appear in the reconstruction. The permittivity values just mentioned do not have to be known in advance, however, and their number is also updated in a stepwise relaxed VP (SRVP) regularization scheme. Both regularization techniques have been incorporated in the Gauss-Newton optimization framework and yield significantly improved reconstruction quality. The efficiency of the minimization algorithm can also be improved. In every step of the iterative optimization, a linear Gauss-Newton update system has to be solved. This typically is a large system and therefore is solved iteratively. However, these systems are ill-conditioned as a result of the ill-posedness of the inverse scattering problem. Fortunately, the aforementioned regularization techniques allow for the use of a subspace preconditioned LSQR method to solve these systems efficiently, as is shown in this thesis. Finally, the incorporation of constraints on the permittivity through a modified line search path, helps to keep the forward problem well-posed and thus the number of forward iterations low. Another contribution of this thesis is the proposal of a new Consistency Inversion (CI) algorithm. 
It is based on the same principles as another well-known reconstruction algorithm, the Contrast Source Inversion (CSI) method, which considers the contrast currents – equivalent currents that generate a field identical to the scattered field – as fundamental unknowns together with the permittivity. In the CI method, however, the permittivity variables are eliminated from the optimization and are only reconstructed in a final step. This avoids alternating updates of permittivity and contrast currents, which may result in a faster convergence. The CI method has also been supplemented with VP regularization, yielding the VPCI method. The quantitative electromagnetic imaging methods developed in this work have been validated on both synthetic and measured data, for both homogeneous and inhomogeneous objects, and yield a high reconstruction quality in all these cases. The successful, completely blind reconstruction of an unknown target from measured data, provided by the Institut Fresnel in Marseille, France, demonstrates at once the validity of the forward scattering code, the performance of the reconstruction algorithm, and the quality of the measurements. The reconstruction of a numerical MRI-based breast phantom is encouraging for the further development of biomedical microwave imaging and of microwave breast cancer screening in particular.
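The core of the Value Picking idea, restricting a reconstruction to a few unknown values, can be sketched as an alternating minimization. The noisy one-dimensional "permittivity" profile and the fixed number of picked values below are illustrative; the actual SRVP scheme embeds this in the Gauss-Newton cost function and also relaxes and updates the number of values stepwise:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.where(np.arange(100) < 40, 1.0, 3.0)   # piecewise-homogeneous profile
eps = truth + 0.2 * rng.standard_normal(100)      # noisy raw reconstruction

values = np.array([eps.min(), eps.max()])         # crude initial picked values
for _ in range(20):
    # Assign every pixel to its nearest picked value ...
    labels = np.argmin(np.abs(eps[:, None] - values[None, :]), axis=1)
    # ... then re-pick each value as the mean of its assigned pixels.
    for j in range(values.size):
        if np.any(labels == j):
            values[j] = eps[labels == j].mean()

vp_profile = values[labels]   # reconstruction restricted to the picked values
```

The picked values need not be known in advance: they are recovered from the data themselves, and the sharp interface between the two regions is preserved because no spatial smoothing is applied.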

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.

    Research and development programs Quarterly report, Jan. - Mar. 1967

    Thin film and semiconductor microelectronics, radar scattering, radome thermal stress, boundary layer phenomena, guided missile parts, turbulent mixing, antenna systems, and plasma dynamics.

    Hemodynamic Quantifications By Contrast-Enhanced Ultrasound: From In-Vitro Modelling To Clinical Validation
