
    A Networked Dataflow Simulation Environment for Signal Processing and Data Mining Applications

    In networked signal processing systems, dataflow graphs can be used to describe the processing on individual network nodes. However, to analyze the correctness and performance of these systems, designers must understand not only the characteristics of the individual "node-level" dataflow graphs, but also how these graphs interact as they communicate across the network. In this thesis, we present a novel simulation environment, called the NS-2--TDIF SIMulation environment (NT-SIM). NT-SIM provides integrated co-simulation of networked systems, combining the network analysis capabilities of the Network Simulator (ns) with the scheduling capabilities of a dataflow-based framework, and thereby enables more comprehensive simulation and analysis of networked signal processing systems. We present two case studies that concretely demonstrate the utility of NT-SIM in the contexts of heterogeneous signal processing and data mining system design.
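
    To make the dataflow side of such a co-simulation concrete, the sketch below shows a minimal token-based firing loop for a node-level dataflow graph. This is an illustrative Python sketch, not NT-SIM's actual API: the Actor class, the per-edge token rates, and the simulate loop are all assumptions.

        from collections import deque

        class Actor:
            """Dataflow actor: fires when every input edge holds enough tokens."""
            def __init__(self, name, consume, produce, fn):
                self.name = name
                self.consume = consume   # tokens read per input edge per firing
                self.produce = produce   # tokens written per output edge per firing
                self.fn = fn             # processing kernel

        def simulate(actors, edges, steps):
            """Fire ready actors repeatedly; edges maps (src, dst) -> deque of tokens."""
            for _ in range(steps):
                fired = False
                for a in actors:
                    ins  = [q for (s, d), q in edges.items() if d == a.name]
                    outs = [q for (s, d), q in edges.items() if s == a.name]
                    if all(len(q) >= a.consume for q in ins):
                        tokens = [q.popleft() for q in ins for _ in range(a.consume)]
                        result = a.fn(tokens)
                        for q in outs:
                            q.extend([result] * a.produce)
                        fired = True
                if not fired:
                    break  # no actor can fire: graph drained or deadlocked

        # Example: a three-actor pipeline source -> scale -> sink.
        src   = Actor("src",   0, 1, lambda _: 1.0)
        scale = Actor("scale", 1, 1, lambda t: 2.0 * t[0])
        sink  = Actor("sink",  1, 0, lambda t: print("sink got", t[0]))
        edges = {("src", "scale"): deque(), ("scale", "sink"): deque()}
        simulate([src, scale, sink], edges, steps=4)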

    Tuning the Computational Effort: An Adaptive Accuracy-aware Approach Across System Layers

    This thesis introduces a novel methodology for realizing accuracy-aware systems, helping designers integrate accuracy awareness into their designs. It proposes an adaptive accuracy-aware approach that addresses current challenges in this domain by combining and tuning accuracy-aware methods across different system layers. To widen the scope of accuracy-aware computing, including approximate computing, to other domains, the thesis presents innovative accuracy-aware methods and techniques for each system layer. The required tuning is handled by a configuration layer that adjusts the available knobs of the accuracy-aware methods integrated into a system.
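
    As a rough illustration of what tuning such a knob in a configuration layer can look like, the Python sketch below increases a single accuracy knob (here, the number of series terms) until a measured error meets a budget. The knob, the doubling schedule, and the error model are illustrative assumptions, not the thesis's actual methods.

        import math

        def tune_knob(run_approx, measure_error, error_budget, knob=1, knob_max=1024):
            """Increase computational effort until the measured error meets the budget."""
            while knob <= knob_max:
                output = run_approx(knob)
                if measure_error(output) <= error_budget:
                    return knob, output   # cheapest tested setting within budget
                knob *= 2                 # spend more computational effort
            raise RuntimeError("error budget unreachable within knob range")

        # Example knob: number of terms in the Leibniz series for pi.
        def run_approx(n_terms):
            return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

        knob, value = tune_knob(run_approx, lambda v: abs(v - math.pi),
                                error_budget=1e-2)
        print(knob, value)   # 256 terms suffice for a 1e-2 budget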

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
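
    For a concrete feel of the tensor train (TT) format discussed above, the following NumPy sketch implements the standard TT-SVD procedure, which factorizes a d-way array into a chain of third-order cores via sequential truncated SVDs. This is a minimal illustration of TT decomposition in general, not code from the monograph.

        import numpy as np

        def tt_svd(tensor, eps=1e-10):
            """Decompose a d-way array into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
            dims = tensor.shape
            d = len(dims)
            cores, rank = [], 1
            unfold = tensor.reshape(rank * dims[0], -1)
            for k in range(d - 1):
                u, s, vt = np.linalg.svd(unfold, full_matrices=False)
                keep = max(1, int(np.sum(s > eps * s[0])))   # truncate small singular values
                cores.append(u[:, :keep].reshape(rank, dims[k], keep))
                rank = keep
                unfold = (s[:keep, None] * vt[:keep]).reshape(rank * dims[k + 1], -1)
            cores.append(unfold.reshape(rank, dims[-1], 1))
            return cores

        # Sanity check: contract the cores back together and compare.
        x = np.random.rand(4, 5, 6)
        cores = tt_svd(x)
        full = cores[0]
        for g in cores[1:]:
            full = np.tensordot(full, g, axes=([-1], [0]))
        assert np.allclose(full.reshape(x.shape), x)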

    Evolvability-guided Optimization of Linear Deformation Setups for Evolutionary Design Optimization

    Richter A. Evolvability-guided Optimization of Linear Deformation Setups for Evolutionary Design Optimization. Bielefeld: Universität Bielefeld; 2019. Andreas Richter gratefully acknowledges the financial support from Honda Research Institute Europe (HRI-EU).

    This thesis targets efficient solutions for setting up optimal representations in evolutionary design optimization problems. The representation maps the abstract parameters of an optimizer to a meaningful variation of the design model, e.g., the shape of a car, and thereby determines the convergence speed and the quality of the final result. Engineers are therefore eager to employ well-tuned representations to achieve high-quality design solutions, but setting up an optimal representation is a cumbersome process: it requires detailed knowledge about the objective functions, e.g., a fluid dynamics simulation, and about the parameters of the representation itself. We therefore target efficient routines that set up representations automatically, relieving engineers of this tedious, partly manual work. Inspired by the concept of evolvability, we present novel quality criteria for the evaluation of linear deformations, which are commonly applied as representations. We define and analyze three criteria, variability, regularity, and improvement potential, which measure the expected quality and convergence speed of an evolutionary design optimization process based on a given linear deformation setup. Moreover, we target the efficient optimization of deformation setups with respect to these three criteria. In dynamic design optimization scenarios, a suitable compromise between exploration and exploitation is crucial for efficient solutions; because our criteria characterize exploration and exploitation, we discuss how to construct optimal compromises for these dynamic scenarios. As a result, an engineer can use our methods to initialize and adjust the deformation setup for improved convergence speed and enhanced quality of the design solutions.
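
    The sketch below gives plausible numeric proxies for the three criteria applied to a linear deformation x' = x + A p: a rank-based variability, a singular-value ratio for regularity, and the reachable fraction of a desired design change for improvement potential. These formulas are hedged stand-ins and may differ from the thesis's exact definitions.

        import numpy as np

        def evolvability_criteria(A, g=None):
            """A: deformation matrix (design dims x parameters); g: optional
            desired design change, e.g., an estimated gradient direction."""
            s = np.linalg.svd(A, compute_uv=False)
            tol = s[0] * max(A.shape) * np.finfo(float).eps
            rank = int(np.sum(s > tol))
            variability = rank / A.shape[1]   # independent design directions per knob
            regularity = s[rank - 1] / s[0]   # 1.0 = directions uniformly scaled
            potential = None
            if g is not None:
                p, *_ = np.linalg.lstsq(A, g, rcond=None)
                # fraction of the desired change g reachable within span(A)
                potential = np.linalg.norm(A @ p) / np.linalg.norm(g)
            return variability, regularity, potential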

    Sensor Resource Management: Intelligent Multi-objective Modularized Optimization Methodology and Models

    The importance of the optimal Sensor Resource Management (SRM) problem is growing. The number of Radar, EO/IR, Overhead Persistent InfraRed (OPIR), and other sensors with the best capabilities is limited relative to sensing needs in stressing tasking environments. Sensor assets differ significantly in number, location, and capability over time. To determine on which object a sensor should collect measurements during the next observation period, known algorithms favor the object whose expected measurements would yield the largest gain in relative information. We propose a new sensor tasking paradigm, OPTIMA, that goes beyond information gain. It includes a Sensor Resource Analyzer and a Sensor Tasking Algorithm (Tasker). The Tasker maintains timing constraints and accounts for resolution and geometric differences between sensors, relative to the tasking requirements on track quality and on the quality of object characterization measurements. It does this using a computational intelligence approach to multi-objective optimization that involves evolutionary methods.
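
    To illustrate the multi-objective flavor of such tasking, the sketch below filters candidate sensor-to-object assignments down to the Pareto front over two objectives, track quality and characterization quality. The Candidate fields and the scores are assumptions; OPTIMA's actual objectives and evolutionary machinery are richer than this.

        from dataclasses import dataclass

        @dataclass
        class Candidate:
            sensor: str
            target: str
            track_quality: float          # higher is better
            characterization: float       # higher is better

        def dominates(a, b):
            """True if a is at least as good as b in both objectives and better in one."""
            return (a.track_quality >= b.track_quality
                    and a.characterization >= b.characterization
                    and (a.track_quality > b.track_quality
                         or a.characterization > b.characterization))

        def pareto_front(candidates):
            """Keep assignments not dominated in both objectives."""
            return [c for c in candidates
                    if not any(dominates(o, c) for o in candidates if o is not c)]

        tasks = pareto_front([
            Candidate("radar-1", "obj-7", 0.9, 0.4),
            Candidate("eo-2",    "obj-7", 0.6, 0.8),
            Candidate("opir-3",  "obj-7", 0.5, 0.3),   # dominated, filtered out
        ])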