
    Concurrent Design of Embedded Control Software

    Embedded software design for mechatronic systems is becoming an increasingly time-consuming and error-prone task. To cope with this heterogeneity and complexity, a systematic model-driven design approach is needed in which several parts of the system can be designed concurrently. There is, however, a trade-off between concurrency efficiency and integration efficiency. In this paper, we present a case study on the development of the embedded control software for a real-world mechatronic system, in order to evaluate how concurrently and largely independently designed embedded software parts can be integrated efficiently. The case study was executed using our embedded control system design methodology, which employs a systematic model-based design approach that ensures a concurrent design process while still allowing a fast integration phase through automatic code synthesis. The result was a predictable, concurrently designed embedded software realization with a short integration time.

    Distortion of Gravitational-Wave Packets Due to their Self-Gravity

    When a source emits a gravitational-wave (GW) pulse over a short period of time, the leading edge of the GW signal is redshifted more than the inner boundary of the pulse. The GW pulse is distorted by the gravitational effect of the self-energy residing in between these shells. We illustrate this distortion for GW pulses from the final plunge of black hole (BH) binaries, leading to an evolution of the GW profile as a function of the radial distance from the source. The distortion depends on the total GW energy released and on the duration of the emission, scaled by the total binary mass, M. The effect should be relevant in finite-box simulations where the waveforms are extracted within a radius of ≲ 100M. For characteristic emission parameters at the final plunge between binary BHs of arbitrary spins, this effect could distort the simulated GW templates for LIGO and LISA by a fraction of 0.001. Accounting for the wave distortion would significantly decrease the waveform-extraction errors in numerical simulations. Comment: accepted for publication in Physical Review.

    Stable super-resolution limit and smallest singular value of restricted Fourier matrices

    Super-resolution refers to the process of recovering the locations and amplitudes of a collection of point sources, represented as a discrete measure, given M+1 of its noisy low-frequency Fourier coefficients. The recovery process is highly sensitive to noise whenever the distance Δ between the two closest point sources is less than 1/M. This paper studies the fundamental difficulty of super-resolution and the performance guarantees of a subspace method called MUSIC in the regime Δ < 1/M. The most important quantity in our theory is the minimum singular value of the Vandermonde matrix whose nodes are specified by the source locations. Under the assumption that the nodes are closely spaced within several well-separated clumps, we derive a sharp and non-asymptotic lower bound for this quantity. Our estimate is given as a weighted ℓ² sum, where each term depends only on the configuration of the corresponding clump. This implies that, as the noise increases, the super-resolution capability of MUSIC degrades according to a power law whose exponent depends on the cardinality of the largest clump. Numerical experiments validate our theoretical bounds for the minimum singular value and the resolution limit of MUSIC. When there are S point sources located on a grid with spacing 1/N, the fundamental difficulty of super-resolution can be quantitatively characterized by a min-max error, which is the reconstruction error incurred by the best possible algorithm in the worst-case scenario. We show that the min-max error is closely related to the minimum singular value of Vandermonde matrices, and we provide a non-asymptotic and sharp estimate for it, whose dominant term is (N/M)^{2S-1}. Comment: 47 pages, 8 figures.
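    The central quantity above, the minimum singular value of the Vandermonde matrix built from the source locations, can be probed numerically. The sketch below (illustrative only; the names, node values, and M are not from the paper) builds the (M+1)-row Fourier/Vandermonde matrix for a set of nodes on [0, 1) and compares well-separated nodes against a clump spaced closer than 1/M:

    ```python
    import numpy as np

    def min_singular_value(nodes, M):
        """Smallest singular value of the (M+1) x S matrix with entries
        exp(2*pi*1j*m*t_j), m = 0..M -- the Vandermonde matrix whose
        nodes e^{2*pi*i*t_j} are set by the source locations t_j."""
        m = np.arange(M + 1)
        V = np.exp(2j * np.pi * np.outer(m, np.asarray(nodes)))
        return np.linalg.svd(V, compute_uv=False)[-1]

    M = 50                                   # M+1 Fourier coefficients
    separated = [0.10, 0.40, 0.70]           # spacings well above 1/M
    clumped = [0.10, 0.10 + 0.5 / M, 0.70]   # two sources closer than 1/M
    print(min_singular_value(separated, M), min_singular_value(clumped, M))
    ```

    Running this shows the clumped configuration yields a far smaller minimum singular value, which is exactly the mechanism behind the noise sensitivity the abstract describes.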

    Doctor of Philosophy

    Recent trends in high-performance computing present larger and more diverse computers, using multicore nodes, possibly with accelerators and/or coprocessors, and reduced memory. These changes pose formidable challenges for application code to attain scalability. Software frameworks that execute machine-independent application code using a runtime system that shields users from architectural complexities offer a portable solution for easy programming. The Uintah framework, for example, solves a broad class of large-scale problems on structured adaptive grids using fluid-flow solvers coupled with particle-based solids methods. However, the original Uintah code had limited scalability, as tasks were run in a predefined order based solely on static analysis of the task graph, and it used only the Message Passing Interface (MPI) for parallelism. By using a new hybrid multithread/MPI runtime system, this research has made it possible for Uintah to scale to 700K central processing unit (CPU) cores when solving challenging fluid-structure interaction problems. Such problems often involve moving objects with adaptive mesh refinement and thus highly variable and unpredictable work patterns. This research has also demonstrated the ability to run capability jobs on heterogeneous systems with Nvidia graphics processing unit (GPU) accelerators or Intel Xeon Phi coprocessors. The new runtime system for Uintah executes directed acyclic graphs of computational tasks with a scalable, asynchronous, and dynamic runtime system for multicore CPUs and/or accelerators/coprocessors on a node. Uintah's clear separation between application and runtime code has led to scalability increases without significant changes to application code. This research concludes that the adaptive directed acyclic graph (DAG)-based approach provides a very powerful abstraction for solving challenging multiscale multiphysics engineering problems. Excellent scalability with regard to the different processors and communications performance is achieved on some of the largest and most powerful computers available today.

    Investigating applications portability with the Uintah DAG-based runtime system on PetaScale supercomputers

    Present trends in high-performance computing present formidable challenges for application code using multicore nodes, possibly with accelerators and/or co-processors and reduced memory, while still attaining scalability. Software frameworks that execute machine-independent application code using a runtime system that shields users from architectural complexities offer a possible solution. The Uintah framework, for example, solves a broad class of large-scale problems on structured adaptive grids using fluid-flow solvers coupled with particle-based solids methods. Uintah executes directed acyclic graphs of computational tasks with a scalable, asynchronous, and dynamic runtime system for CPU cores and/or accelerators/coprocessors on a node. Uintah's clear separation between application and runtime code has led to scalability increases of 1000x without significant changes to application code. This methodology is tested on three leading Top500 machines (OLCF Titan, TACC Stampede, and ALCF Mira) using three diverse and challenging application problems. This investigation of scalability with regard to the different processors and communications performance leads to the overall conclusion that the adaptive DAG-based approach provides a very powerful abstraction for solving challenging multi-scale multi-physics engineering problems on some of the largest and most powerful computers available today.
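    The core runtime idea in the two Uintah abstracts above — executing a task as soon as its prerequisites complete, rather than in a statically predefined order — can be sketched in a few lines. This is a minimal illustration with hypothetical task names, not Uintah's actual API:

    ```python
    import threading
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def run_dag(tasks, deps, workers=4):
        """Run callables in `tasks` as soon as their prerequisites in
        `deps` have finished, on a thread pool (a sketch of asynchronous,
        out-of-order DAG execution; not Uintah's runtime)."""
        if not tasks:
            return []
        remaining = {t: set(deps.get(t, ())) for t in tasks}
        dependents = defaultdict(set)        # task -> tasks waiting on it
        for t, ds in remaining.items():
            for d in ds:
                dependents[d].add(t)
        lock = threading.Lock()
        all_done = threading.Event()
        finished = []                        # completion order

        with ThreadPoolExecutor(max_workers=workers) as pool:
            def work(name):
                tasks[name]()                # execute the task body
                newly_ready = []
                with lock:
                    finished.append(name)
                    for child in dependents[name]:
                        remaining[child].discard(name)
                        if not remaining[child]:
                            newly_ready.append(child)
                    if len(finished) == len(tasks):
                        all_done.set()
                for child in newly_ready:    # schedule unblocked tasks
                    pool.submit(work, child)

            seeds = [t for t, ds in remaining.items() if not ds]
            for t in seeds:                  # start tasks with no inputs
                pool.submit(work, t)
            all_done.wait()
        return finished
    ```

    For a chain such as `{"solve": {"interpolate"}, "output": {"interpolate", "solve"}}`, `run_dag` starts `interpolate` immediately and releases each downstream task the moment its last prerequisite finishes, which is the abstraction that lets independent branches of a real task graph run concurrently.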

    An Automated Method For Model-Plant Mismatch Detection And Correction In Process Plants Employing Model Predictive Control (MPC)

    A model predictive controller (MPC) uses the process model to predict future outputs of the system; hence, its performance is directly related to the quality of the model. The difference between the model and the actual plant is termed model-plant mismatch (MPM). Since MPM has a significant effect on MPC performance, the model has to be corrected and updated whenever high MPM is detected. Re-identification of a process model with a large number of inputs and outputs is costly due to potential production losses and high manpower effort. Therefore, the location of the mismatch must be detected so that only that channel is re-identified. Detection methods using partial correlation analysis, among others, have been developed, but these are qualitative methods that do not clearly indicate the extent of the mismatch or whether corrective action is necessary. The methodology proposed in this project uses a quantitative variable, e/u, the model error divided by the manipulated variable, to identify changes in the plant gain and hence the mismatch. Taguchi experiments were carried out to identify the gains contributing most to the overall process, and the threshold limits of mismatch for these major contributors were then found by trial and error. When the mismatch indicated by e/u exceeds the threshold limit, the model gain of the controller is automatically corrected to match the new plant gain. The proposed method was assessed in simulations using MATLAB and Simulink on the Wood-Berry distillation column case study and was successfully validated. Testing various mismatch scenarios for the two major contributors to the process, the algorithm was able to bring the output back to the desired set-point in a very short time.
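    The e/u idea can be illustrated on a single static-gain channel y = K·u, where the ratio of prediction error to input move recovers the gain mismatch directly. All numbers, names, and the threshold below are hypothetical; the thesis itself works on the Wood-Berry column in MATLAB/Simulink:

    ```python
    import numpy as np

    def mismatch_ratio(y_plant, y_model, u):
        """The e/u metric: model error divided by the manipulated variable.
        For a static channel y = K*u, e/u = K_plant - K_model, so the
        ratio estimates the gain mismatch directly (illustrative)."""
        return float(np.mean((y_plant - y_model) / u))

    K_plant, K_model = 12.8, 10.0     # hypothetical steady-state gains
    u = np.linspace(0.5, 2.0, 20)     # nonzero manipulated-variable moves
    ratio = mismatch_ratio(K_plant * u, K_model * u, u)

    THRESHOLD = 1.0                   # hypothetical; e.g. found by trial
    if abs(ratio) > THRESHOLD:        # significant mismatch detected
        K_model += ratio              # auto-correct the model gain
    ```

    On dynamic channels the error also carries transient terms, which is why the thesis restricts attention to the major-contributor gains identified by the Taguchi screening before applying a correction.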