36 research outputs found

    Dynamically reconfigurable management of energy, performance, and accuracy applied to digital signal, image, and video processing applications

    Get PDF
    There is strong interest in the development of dynamically reconfigurable systems that can meet real-time constraints in energy/power-performance-accuracy (EPA/PPA). In this dissertation, I introduce a framework for implementing dynamically reconfigurable digital signal, image, and video processing systems. The basic idea is to first generate a collection of Pareto-optimal realizations in the EPA/PPA space. Dynamic EPA/PPA management is then achieved by selecting the Pareto-optimal implementations that can meet the real-time constraints. The systems are then demonstrated using Dynamic Partial Reconfiguration (DPR) and dynamic frequency control on FPGAs. The framework is demonstrated on: i) a dynamic pixel processor, ii) a dynamically reconfigurable 1-D digital filtering architecture, and iii) a dynamically reconfigurable 2-D separable digital filtering system. Efficient implementations of the pixel processor are based on the use of look-up tables and local multiplexers to minimize FPGA resources. For the pixel processor, different realizations are generated based on the number of input bits, the number of cores, the number of output bits, and the frequency of operation. For each parameter combination, there is a different pixel-processor realization. Pareto-optimal realizations are selected based on measurements of energy per frame, PSNR accuracy, and performance in terms of frames per second. Dynamic EPA/PPA management is demonstrated for a sequential list of real-time constraints by selecting optimal realizations and implementing them using DPR and dynamic frequency control. Efficient FPGA implementations of the 1-D and 2-D FIR filters are based on the use of a distributed arithmetic technique. Different realizations are generated by varying the number of coefficients, the coefficient bitwidth, and the output bitwidth. Pareto-optimal realizations are selected in the EPA space. Dynamic EPA management is demonstrated by applying real-time EPA constraints to a digital video. The results suggest that the general framework can be applied to a variety of digital signal, image, and video processing systems. It is based on offline processing to determine the Pareto-optimal realizations. Real-time constraints are met by selecting Pareto-optimal realizations pre-loaded in memory, which are then implemented efficiently using DPR and/or dynamic frequency control.
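
    As a rough illustration of the selection step described above (not code from the dissertation), the sketch below builds a hypothetical table of characterized realizations, extracts the Pareto front in the energy-accuracy-throughput space, and picks the lowest-energy realization that satisfies given frame-rate and PSNR constraints; all names and numbers are made up.

```python
# Illustrative sketch only: selecting a Pareto-optimal realization in the
# energy-performance-accuracy (EPA) space. The realization table and the
# constraint values below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Realization:
    name: str            # e.g. a DPR bitstream identifier (hypothetical)
    energy_mj: float     # measured energy per frame (mJ)
    psnr_db: float       # accuracy (PSNR, dB)
    fps: float           # throughput (frames per second)

def dominates(a: Realization, b: Realization) -> bool:
    """a dominates b if it is no worse in all objectives and better in at least one."""
    no_worse = a.energy_mj <= b.energy_mj and a.psnr_db >= b.psnr_db and a.fps >= b.fps
    better = a.energy_mj < b.energy_mj or a.psnr_db > b.psnr_db or a.fps > b.fps
    return no_worse and better

def pareto_front(realizations):
    return [r for r in realizations
            if not any(dominates(other, r) for other in realizations if other is not r)]

def select(realizations, min_fps, min_psnr_db):
    """Pick the lowest-energy Pareto-optimal realization meeting the constraints."""
    feasible = [r for r in pareto_front(realizations)
                if r.fps >= min_fps and r.psnr_db >= min_psnr_db]
    return min(feasible, key=lambda r: r.energy_mj) if feasible else None

# Hypothetical offline characterization results:
table = [Realization("8bit_1core_50MHz", 2.1, 38.0, 30),
         Realization("8bit_2core_100MHz", 4.0, 38.0, 60),
         Realization("6bit_2core_100MHz", 3.2, 33.5, 60)]
print(select(table, min_fps=60, min_psnr_db=35))  # -> 8bit_2core_100MHz
```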

    Design methodology for embedded computer vision systems

    Get PDF
    Computer vision has emerged as one of the most popular domains of embedded applications. Though various new powerful embedded platforms to support such applications have emerged in recent years, there is a distinct lack of efficient domain-specific synthesis techniques for optimized implementation of such systems. In this thesis, four different aspects that contribute to efficient design and synthesis of such systems are explored: (1) Graph Transformations: Dataflow modeling is widely used in digital signal processing (DSP) systems. However, support for dynamic behavior in such systems exists mainly at the modeling level and there is a lack of optimized synthesis techniques for these models. New transformation techniques for efficient system-on-chip (SoC) design methods are proposed and implemented for cyclo-static dataflow and its parameterized version (parameterized cyclo-static dataflow) -- two powerful models that allow dynamic reconfigurability and phased behavior in DSP systems. (2) Design Space Exploration: The broad range of target platforms along with the complexity of applications provides a vast design space, calling for efficient tools to explore this space and produce effective design choices. A novel architectural-level design methodology based on a formalism called multirate synchronization graphs is presented along with methods for performance evaluation. (3) Multiprocessor Communication Interface: Efficient code synthesis for emerging new parallel architectures is an important and sparsely explored problem. A widely encountered problem in this regard is efficient communication between processors running different sub-systems. A widely used tool in the domain of general-purpose multiprocessor clusters is MPI (Message Passing Interface). However, this does not scale well for embedded DSP systems. A new, powerful, and highly optimized communication interface for multiprocessor signal processing systems is presented in this work that is based on the integration of relevant properties of MPI with dataflow semantics. (4) Parameterized Design Framework for Particle Filters: Particle filter systems constitute an important class of applications used in a wide number of fields. An efficient design and implementation framework for such systems has been implemented based on the observation that a large number of such applications exhibit similar properties. The key properties of such applications are identified and parameterized appropriately to realize different systems that represent useful trade-off points in the space of possible implementations.
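
    For readers unfamiliar with dataflow analysis, the following sketch shows the kind of static computation such synthesis builds on: deriving a repetitions vector from the balance equations of a plain synchronous dataflow (SDF) graph. This is simplified background only; the thesis itself targets cyclo-static and parameterized cyclo-static dataflow, and the example graph here is hypothetical.

```python
# Background sketch: repetitions-vector computation for a synchronous dataflow
# (SDF) graph via balance equations. Consistency checking for cyclic graphs is
# omitted for brevity; the graph below is a hypothetical example.

from fractions import Fraction
from math import lcm

def repetitions_vector(edges, actors):
    """edges: list of (src, dst, produce_rate, consume_rate)."""
    reps = {actors[0]: Fraction(1)}
    changed = True
    while changed:                                 # propagate balance equations
        changed = False
        for src, dst, p, c in edges:
            if src in reps and dst not in reps:
                reps[dst] = reps[src] * p / c      # reps[src]*p == reps[dst]*c
                changed = True
            elif dst in reps and src not in reps:
                reps[src] = reps[dst] * c / p
                changed = True
    scale = lcm(*(r.denominator for r in reps.values()))
    return {a: int(r * scale) for a, r in reps.items()}

# Hypothetical graph: A produces 2 tokens per firing onto the edge B consumes 3 from, etc.
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
print(repetitions_vector(edges, ["A", "B", "C"]))  # {'A': 3, 'B': 2, 'C': 1}
```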

    Data and Design: Advancing Theory for Complex Adaptive Systems

    Get PDF
    Complex adaptive systems exhibit certain types of behaviour that are difficult to predict or understand using reductionist approaches, such as linearization or assuming conditions of optimality. This research focuses on the complex adaptive systems associated with public health. These are noted for being driven by many latent forces, shaped centrally by human behaviour. Dynamic simulation techniques, including agent-based models (ABMs) and system dynamics (SD) models, have been used to study the behaviour of complex adaptive systems, including in public health. While much has been learned, such work is still hampered by important limitations. Models of complex systems themselves can be quite complex, increasing the difficulty in explaining unexpected model behaviour, whether that behaviour comes from model code errors or is due to new learning. Model complexity also leads to model designs that are hard to adapt to growing knowledge about the subject area, further reducing model-generated insights. In the current literature of dynamic simulations of human public health behaviour, few focus on capturing explicit psychological theories of human behaviour. Given that human behaviour, especially health and risk behaviour, is so central to the understanding of processes in public health, this work explores several methods to improve the utility and flexibility of dynamic models in public health. This work is undertaken in three projects. The first uses a machine learning algorithm, the particle filter, to augment a simple ABM in the presence of continuous disease prevalence data from the modelled system. It is shown that, while using the particle filter improves the accuracy of the ABM, when compared with previous work using SD with a particle filter, the ABM has some limitations, which are discussed. The second presents a model design pattern that focuses on scalability and modularity to improve the development time, testability, and flexibility of a dynamic simulation for tobacco smoking. This method also supports a general pattern of constructing hybrid models --- those that contain elements of multiple methods, such as agent-based or system dynamics. This method is demonstrated with a stylized example of tobacco smoking in a human population. The final line of work implements this modular design pattern, with differing mechanisms of addiction dynamics, within a rich behavioural model of tobacco purchasing and consumption. It integrates the results from a discrete choice experiment, which is a widely used economic method for studying human preferences. It compares and contrasts four independent addiction modules under different population assumptions. A number of important insights are discussed: no single module was universally more accurate across all human subpopulations, demonstrating the benefit of exploring a diversity of approaches; increasing the number of parameters does not necessarily improve a module's predictions, since the overall least accurate module had the second-highest number of parameters; and slight changes in module structure can lead to drastic improvements, implying the need to be able to iteratively learn from model behaviour.
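
    The sketch below gives a minimal bootstrap particle filter of the kind used in the first project: particles carrying an SIR-style disease state are propagated, weighted against observed prevalence, and resampled. The dynamics, noise levels, and observation series are hypothetical stand-ins, not the thesis model.

```python
# Minimal bootstrap particle filter sketch against prevalence observations.
# SIR-style dynamics, parameters, and the observation series are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
N, POP, BETA, GAMMA = 2000, 10_000, 0.3, 0.1

def step(state):
    """One stochastic SIR step per particle; state columns are (S, I)."""
    s, i = state[:, 0], state[:, 1]
    new_inf = rng.binomial(s.astype(int), 1 - np.exp(-BETA * i / POP))
    new_rec = rng.binomial(i.astype(int), 1 - np.exp(-GAMMA))
    return np.column_stack([s - new_inf, i + new_inf - new_rec])

def filter_prevalence(observations, obs_sd=50.0):
    particles = np.column_stack([np.full(N, POP - 10.0), np.full(N, 10.0)])
    estimates = []
    for y in observations:                      # y = observed prevalence (infectious count)
        particles = step(particles)
        w = np.exp(-0.5 * ((y - particles[:, 1]) / obs_sd) ** 2)  # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)        # multinomial resampling
        particles = particles[idx]
        estimates.append(particles[:, 1].mean())
    return estimates

print(filter_prevalence([15, 25, 40, 60, 90])[-1])  # filtered prevalence estimate
```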

    Data Informed Health Simulation Modeling

    Get PDF
    Combining reliable data with dynamic models can enhance the understanding of health-related phenomena. Smartphone sensor data characterizing discrete states is often suitable for analysis with machine learning classifiers. For dynamic models with continuous states, high-velocity data also serves an important role in model parameterization and calibration. Particle filtering (PF), combined with dynamic models, can support accurate recurrent estimation of continuous system state. This thesis explored these and related ideas with several case studies. The first employed multivariate Hidden Markov models (HMMs) to identify smoking intervals, using time series of smartphone-based sensor data. Findings demonstrated that multivariate HMMs can achieve notable accuracy in classifying smoking state, with performance being strongly elevated by appropriate data conditioning. Reflecting the advantages of dynamic simulation models, this thesis has contributed two applications of articulated dynamic models: an agent-based model (ABM) of smoking and E-Cigarette use and a hybrid multi-scale model of diabetes in pregnancy (DIP). The ABM of smoking and E-Cigarette use, informed by cross-sectional data, supports investigations of smoking behavior change in light of the influence of social networks and E-Cigarette use. The DIP model was informed by both longitudinal and cross-sectional data, and is notable for its use of interwoven ABM, system dynamics (SD), and discrete event simulation elements to explore the interaction of risk factors, coupled dynamics of glycemia regulation, and intervention tradeoffs to address the growing incidence of DIP in the Australian Capital Territory. The final study applied PF with an SD model of mosquito development to estimate the underlying Culex mosquito population using various direct observations, including time series of weather-related factors and mosquito trap counts. The results demonstrate the effectiveness of PF in regrounding the states and evolving model parameters based on incoming observations. Using PF in the context of automated model calibration allows optimization of the values of parameters to markedly reduce model discrepancy. Collectively, the thesis demonstrates how characteristics and availability of data can influence model structure and scope, how dynamic model structure directly affects the ways that data can be used, and how advanced analysis methods for calibration and filtering can enhance model accuracy and versatility.
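
    To illustrate the HMM-based smoking-state classification at a toy scale, the sketch below runs Viterbi decoding for a two-state HMM over a discretized sensor feature. The transition and emission probabilities and the observation sequence are invented for illustration; the thesis fits multivariate HMMs to real smartphone sensor time series.

```python
# Illustrative Viterbi decoding for a two-state HMM ("not smoking" / "smoking").
# All parameters and the observation sequence are hypothetical.

import numpy as np

states = ["not_smoking", "smoking"]
log_pi = np.log([0.95, 0.05])                     # initial state probabilities
log_A = np.log([[0.98, 0.02],                     # state transition matrix
                [0.10, 0.90]])
# Emission probabilities of a discretized "hand-to-mouth gesture" feature (0/1/2)
log_B = np.log([[0.70, 0.25, 0.05],
                [0.10, 0.40, 0.50]])

def viterbi(obs):
    T, S = len(obs), len(states)
    delta = np.full((T, S), -np.inf)              # best log-probability per state
    back = np.zeros((T, S), dtype=int)            # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # indexed as (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack the most likely path
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 2, 2, 1, 2, 0]))
```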

    An efficient polynomial chaos-based proxy model for history matching and uncertainty quantification of complex geological structures

    Get PDF
    A novel polynomial chaos proxy-based history matching and uncertainty quantification method is presented that can be employed for complex geological structures in inverse problems. For complex geological structures, when there are many unknown geological parameters with highly nonlinear correlations, typically more than 10^6 full reservoir simulation runs might be required to accurately probe the posterior probability space given the production history of the reservoir. This is not practical for high-resolution geological models. One solution is to use a "proxy model" that replicates the simulation model for selected input parameters. The main advantage of the polynomial chaos proxy compared to other proxy models and response surfaces is that it is generally applicable and converges systematically as the order of the expansion increases. The Cameron-Martin theorem states that the convergence rate of standard polynomial chaos expansions is exponential for Gaussian random variables. To improve the convergence rate for non-Gaussian random variables, the generalized polynomial chaos is implemented, which uses the Askey scheme to choose the optimal basis for polynomial chaos expansions [199]. Additionally, for non-Gaussian distributions that can be effectively approximated by a mixture of Gaussian distributions, we use a mixture-modeling-based clustering approach in which, within each cluster, the polynomial chaos proxy converges exponentially fast and the overall posterior distribution can be estimated more efficiently using different polynomial chaos proxies. The main disadvantage of the polynomial chaos proxy is that, for high-dimensional problems, the number of polynomial chaos terms increases drastically as the order of the polynomial chaos expansion increases. Although different non-intrusive methods have been developed in the literature to address this issue, a large number of simulation runs is still required to compute the high-order terms of the polynomial chaos expansion. This work resolves this issue by proposing a reduced-terms polynomial chaos expansion, which preserves only the relevant terms in the polynomial chaos representation. We demonstrated that the sparsity pattern in the polynomial chaos expansion, when used with the Karhunen-Loève decomposition method or kernel PCA, can be systematically captured. A probabilistic framework based on the polynomial chaos proxy is also suggested in the context of Bayesian model selection to study the plausibility of different geological interpretations of the sedimentary environments. The proposed surrogate-accelerated Bayesian inverse analysis can be coherently used in practical reservoir optimization workflows and uncertainty assessments.
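
    The following toy sketch illustrates the non-intrusive polynomial chaos idea in a single Gaussian variable: an inexpensive stand-in "simulator" is sampled and a Hermite expansion is fitted by least squares to act as a proxy. The function, sample sizes, and expansion order are arbitrary; the thesis addresses high-dimensional reservoir models, reduced-term expansions, and Karhunen-Loève / kernel-PCA parameterizations, which are not reproduced here.

```python
# Toy non-intrusive polynomial chaos proxy in one Gaussian variable.
# The "expensive simulator" is a hypothetical cheap stand-in function.

import numpy as np
from numpy.polynomial.hermite_e import hermevander  # probabilists' Hermite basis

def expensive_simulator(xi):
    """Stand-in for a full reservoir simulation run."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

rng = np.random.default_rng(1)
order = 5
xi_train = rng.standard_normal(200)            # samples of the Gaussian input
y_train = expensive_simulator(xi_train)

Phi = hermevander(xi_train, order)             # design matrix of He_0 ... He_5
coeffs, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

xi_test = rng.standard_normal(5)
proxy = hermevander(xi_test, order) @ coeffs   # evaluate the proxy instead of the simulator
print(np.max(np.abs(proxy - expensive_simulator(xi_test))))  # small proxy error
```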

    Automatic Algorithm Selection for Complex Simulation Problems

    Get PDF
    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and the runtime environment, which may strongly affect the overall performance. The thesis consists of three parts. The first part surveys existing approaches to solving the algorithm selection problem and discusses techniques to analyze simulation algorithm performance. The second part introduces a software framework for automatic simulation algorithm selection, which is evaluated in the third part.
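
    As a schematic illustration of empirical algorithm selection (not the framework developed in the thesis), the sketch below benchmarks two hypothetical candidate simulators on a representative problem instance and returns the faster one.

```python
# Schematic algorithm selection by benchmarking: the candidate "simulators"
# below are hypothetical placeholders standing in for real implementations.

import timeit

def simulate_event_driven(problem):
    return sum(i * problem["rate"] for i in range(problem["size"]))

def simulate_time_stepped(problem):
    total, t = 0.0, 0
    while t < problem["size"]:
        total += t * problem["rate"]
        t += 1
    return total

CANDIDATES = {"event_driven": simulate_event_driven,
              "time_stepped": simulate_time_stepped}

def select_algorithm(problem, repeats=5):
    """Time each candidate on the problem instance and pick the fastest."""
    timings = {name: min(timeit.repeat(lambda: algo(problem), number=3, repeat=repeats))
               for name, algo in CANDIDATES.items()}
    return min(timings, key=timings.get), timings

best, timings = select_algorithm({"size": 100_000, "rate": 0.5})
print(best, timings)
```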

    Advanced Operation and Maintenance in Solar Plants, Wind Farms and Microgrids

    Get PDF
    This reprint presents advances in operation and maintenance in solar plants, wind farms, and microgrids. This compendium of scientific articles clarifies the current state of the art in the subject and should be of interest to readers working in the field.

    Metrics for Specification, Validation, and Uncertainty Prediction for Credibility in Simulation of Active Perception Sensor Systems

    Get PDF
    The immense effort required for the safety validation of an automated driving system of SAE Level 3 or higher is known not to be feasible through real test drives alone. Therefore, simulation is key to the homologation of automated driving functions, even for limited operational design domains. Consequently, all simulation models used as tools for this purpose must be qualified beforehand. For this, in addition to their verification and validation, uncertainty quantification (VV&UQ) and prediction for the application domain are required for the credibility of the simulation model. To enable such VV&UQ, a specifically developed lidar sensor system simulation is utilized to present new metrics that can be applied holistically to demonstrate the credibility and maturity of simulation models of active perception sensor systems. The holistic process towards model credibility starts with the formulation of the requirements for the models. In this context, the threshold values of the metrics, serving as acceptance criteria, can be quantified by a relevance analysis of the cause-effect chains prevailing in different scenarios, and should intuitively be expressed in the same unit as the simulated quantity. These relationships can be inferred via the presented aligned methods “Perception Sensor Collaborative Effect and Cause Tree” (PerCollECT) and “Cause, Effect, and Phenomenon Relevance Analysis” (CEPRA). For sample validation, each experiment must be accompanied by reference measurements, as these then serve as simulation input. Since the reference data collection is subject to epistemic as well as aleatory uncertainty, which are both propagated through the simulation in the form of input data variation, this leads to several slightly different simulation results. In the simulation of measured signals and data over time considered here, this combination of uncertainties is best expressed as superimposed cumulative distribution functions. The metric must therefore be able to handle such so-called p-boxes resulting from the large set of simulations. In the present work, the area validation metric (AVM) is selected through detailed analysis as the best of the metrics already in use and is extended to fulfill all the requirements. This results in the corrected AVM (CAVM), which quantifies the model scattering error with respect to the real scatter. Finally, the double validation metric (DVM) is elaborated as a two-component vector combining the former metric with an estimate of the model bias. The novel metric is applied, by way of example, to the empirical cumulative distribution functions of lidar measurements and to the p-boxes from their re-simulations. In this regard, aleatory and epistemic uncertainties are taken into account for the first time and the novel metrics are successfully established. The quantification of the uncertainties and the error prediction of a sensor model based on the sample validation are also demonstrated for the first time.
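
    The sketch below illustrates the general idea of an area-based validation metric against a p-box: an experimental ECDF is compared with the envelope of CDFs from repeated simulation runs, and the area lying outside the envelope is integrated. It is a generic illustration with synthetic data, not the CAVM/DVM formulation defined in the thesis.

```python
# Generic area-based validation metric between an experimental ECDF and a
# p-box formed by repeated simulation runs; all data below are synthetic.

import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at the points in `grid`."""
    sample = np.sort(sample)
    return np.searchsorted(sample, grid, side="right") / sample.size

def area_metric(experiment, simulation_runs):
    """Area between the experimental ECDF and the p-box envelope of the runs."""
    grid = np.sort(np.concatenate([experiment, *simulation_runs]))
    f_exp = ecdf(experiment, grid)
    sims = np.vstack([ecdf(run, grid) for run in simulation_runs])
    lower, upper = sims.min(axis=0), sims.max(axis=0)   # pointwise p-box bounds
    # distance of the experimental CDF to the box (zero where it lies inside)
    dist = np.maximum(f_exp - upper, 0) + np.maximum(lower - f_exp, 0)
    return np.sum(dist[:-1] * np.diff(grid))            # step-function integral

rng = np.random.default_rng(2)
experiment = rng.normal(10.0, 1.0, size=100)            # e.g. measured ranges (m)
runs = [rng.normal(10.0 + b, 1.1, size=100) for b in (-0.2, 0.0, 0.3)]
print(area_metric(experiment, runs))
```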

    Custom optimization algorithms for efficient hardware implementation

    No full text
    The focus is on real-time optimal decision making with applications in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than are currently available for extending their application to highly dynamical systems and setups with resource-constrained embedded computing platforms. A range of techniques are proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on top of parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance of our implementations. To take advantage of the trend towards parallel computing architectures and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation was devised for reducing the computational effort in solving certain problems independently of the computing platform used. In order to be able to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating-point, tailored linear algebra algorithms were developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees for reliable operation. We also provide finite-precision error analysis for fixed-point implementations of first-order methods that can be used to minimize the use of resources while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
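
    As a plain floating-point illustration of the class of first-order methods targeted here, the sketch below applies projected gradient descent to a small box-constrained quadratic program of the kind arising in input-constrained predictive control. The problem data are hypothetical; the thesis's contribution lies in the fixed-point, FPGA (VHDL) implementations and their error guarantees, which are not reproduced in this sketch.

```python
# Projected gradient method for a box-constrained QP (hypothetical small problem).

import numpy as np

def projected_gradient(H, f, lo, hi, iters=200):
    """minimize 0.5 x'Hx + f'x  subject to  lo <= x <= hi  (H symmetric positive definite)."""
    L = np.linalg.eigvalsh(H).max()        # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(f), lo, hi)
    for _ in range(iters):
        grad = H @ x + f
        x = np.clip(x - grad / L, lo, hi)  # gradient step followed by projection onto the box
    return x

H = np.array([[2.0, 0.5], [0.5, 1.0]])
f = np.array([-1.0, 1.0])
lo, hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])
print(projected_gradient(H, f, lo, hi))    # constrained minimizer
```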

    Visuelle Analyse großer Partikeldaten (Visual Analysis of Large Particle Data)

    Get PDF
    Particle simulations are a proven and widely used numerical method in research and engineering. For example, particle simulations are used to study fuel atomization in aircraft turbines, and the formation of the universe is investigated by simulating dark matter particles. The volumes of data produced in the process are immense: current simulations contain trillions of particles that move and interact with one another over time. Visualization offers great potential for the exploration, validation, and analysis of scientific data sets and of the underlying models. However, the focus is usually on structured data with a regular topology. In contrast, particles move freely through space and time; in physics, this point of view is known as the Lagrangian frame of reference. Particles can be converted from the Lagrangian into a regular Eulerian frame of reference, such as a uniform grid, but for a large number of particles this involves considerable effort. Moreover, this conversion usually leads to a loss of precision together with increased memory consumption. In this dissertation, I explore new visualization techniques that are based specifically on the Lagrangian view. These enable efficient and effective visual analysis of large particle data.
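
    To make the Lagrangian-to-Eulerian conversion mentioned above concrete, the sketch below bins randomly generated particle positions into a uniform density grid; the particle count and grid resolution are arbitrary. At the scale of trillions of particles this conversion becomes costly and lossy, which is what motivates techniques that operate directly on the Lagrangian representation.

```python
# Illustration of Lagrangian-to-Eulerian conversion: scatter particle positions
# into a uniform density grid. Particle count and resolution are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
positions = rng.random((1_000_000, 3))                 # particle positions in [0,1)^3

grid, edges = np.histogramdd(positions, bins=(64, 64, 64),
                             range=[(0, 1), (0, 1), (0, 1)])
density = grid / positions.shape[0]                    # fraction of particles per cell

print(density.shape, density.sum())                    # (64, 64, 64) 1.0
print("grid memory (MB):", density.nbytes / 1e6)       # resolution drives memory cost
```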