
    Markov Chain Methods For Analyzing Complex Transport Networks

    We have developed a steady-state theory of complex transport networks used to model the flow of commodities, information, viruses, opinions, or traffic. Our approach is based on Markov chains defined on the graph representations of transport networks, allowing for effective network design, network performance evaluation, embedding, partitioning, and network fault tolerance analysis. Random walks embed graphs into a Euclidean space in which distances and angles acquire a clear statistical interpretation. Defined on the dual graph representations of transport networks, random walks describe the equilibrium configurations of non-random commodity flows on the primary graphs. This theory unifies many network concepts into one framework and can also be elegantly extended to describe networks represented by directed graphs and multiple interacting networks.
    Comment: 26 pages, 4 figures
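    A minimal sketch of the kind of object this abstract refers to (not the authors' formulation): the row-stochastic transition matrix of a random walk on a small undirected toy graph and its stationary distribution, which for an undirected graph is proportional to node degree and underlies a steady-state analysis.

# Minimal sketch (illustrative toy graph, not the paper's model).
import numpy as np

# Adjacency matrix of a hypothetical 4-node transport graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]            # transition matrix P = D^{-1} A (row-stochastic)

# Stationary distribution by power iteration; for an undirected graph
# pi_i = deg_i / (2|E|), which the degree-normalized check confirms.
pi = np.full(len(A), 1.0 / len(A))
for _ in range(1000):
    pi = pi @ P
print("stationary distribution :", pi)
print("degree-normalized check :", deg / deg.sum())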

    Achieving High Speed CFD simulations: Optimization, Parallelization, and FPGA Acceleration for the unstructured DLR TAU Code

    Today, large-scale parallel simulations are fundamental tools for handling complex problems. The number of processors in current computing platforms has recently increased, so it is necessary to optimize application performance and enhance the scalability of massively parallel systems. In addition, new heterogeneous architectures, which combine conventional processors with specific hardware such as FPGAs to accelerate the most time-consuming functions, are considered a strong alternative for boosting performance. In this paper, the performance of the DLR TAU code is analyzed and optimized. The improvement of code efficiency is addressed through three key activities: optimization, parallelization, and hardware acceleration. First, a profiling analysis of the most time-consuming processes of the Reynolds-averaged Navier-Stokes flow solver on a three-dimensional unstructured mesh is performed. Then, the scalability of the code is studied and new partitioning algorithms are tested to identify the most suitable ones for the selected applications. Finally, a feasibility study on the application of FPGAs and GPUs for the hardware acceleration of CFD simulations is presented
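    For illustration only (this is not the DLR TAU solver): a profiling pass of the kind described here identifies the hot loops, such as a mock edge-based flux accumulation typical of unstructured-mesh solvers, before any optimization, repartitioning, or hardware offload is attempted.

# Illustrative profiling sketch using Python's standard profiler.
import cProfile
import numpy as np

def flux_accumulation(edges, state):
    # Mock edge-based residual loop typical of unstructured-mesh solvers.
    residual = np.zeros_like(state)
    for i, j in edges:
        f = 0.5 * (state[i] + state[j])   # placeholder "flux"
        residual[i] += f
        residual[j] -= f
    return residual

rng = np.random.default_rng(0)
n_nodes, n_edges = 5000, 20000
edges = rng.integers(0, n_nodes, size=(n_edges, 2))
state = rng.random(n_nodes)

# Report where the time goes, sorted by cumulative cost.
cProfile.run("flux_accumulation(edges, state)", sort="cumulative")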

    An Automated Design-flow for FPGA-based Sequential Simulation

    In this paper we describe an automated design flow that transforms and maps a given homogeneous or heterogeneous hardware design onto an FPGA that performs a cycle-accurate simulation. The flow replaces the previously required manual transformation and can be embedded in existing standard synthesis flows. Compared to the earlier manually translated designs, this automated flow resulted in a reduced number of FPGA hardware resources and higher simulation frequencies. The implementation of the complete design flow is work in progress.
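    A schematic illustration of what "cycle-accurate simulation" means here (an assumed structure, not the paper's generated FPGA design): every component computes its next state from the current inputs, and all state is committed at once per simulated clock cycle.

# Schematic cycle-accurate simulation loop (illustrative sketch).
class Register:
    def __init__(self, value=0):
        self.value = value
        self._next = value
    def schedule(self, value):
        self._next = value      # combinational result, not yet visible
    def tick(self):
        self.value = self._next # clock edge: commit state

def simulate(cycles):
    counter = Register(0)
    trace = []
    for _ in range(cycles):
        counter.schedule((counter.value + 1) % 16)  # "logic" for this cycle
        counter.tick()                              # advance one clock cycle
        trace.append(counter.value)
    return trace

print(simulate(5))  # -> [1, 2, 3, 4, 5]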

    A High-Performance Triple Patterning Layout Decomposer with Balanced Density

    Triple patterning lithography (TPL) has received more and more attention from industry as one of the leading candidates for the 14nm/11nm nodes. In this paper, we propose a high-performance layout decomposer for TPL. Density balancing is seamlessly integrated into all key steps of our TPL layout decomposition, including density-balanced semidefinite programming (SDP), density-based mapping, and density-balanced graph simplification. Our new TPL decomposer obtains high performance even compared to previous state-of-the-art layout decomposers that are not balanced-density aware, e.g., those by Yu et al. (ICCAD'11), Fang et al. (DAC'12), and Kuang et al. (DAC'13). Furthermore, the balanced-density version of our decomposer provides more balanced density, which leads to less edge placement error (EPE), while the conflict and stitch numbers remain comparable to our non-balanced-density baseline
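    A minimal sketch of the underlying combinatorial problem (not the paper's SDP-based, density-balanced decomposer): TPL decomposition amounts to assigning one of three masks to each layout feature so that no two conflicting features share a mask, i.e. 3-coloring the conflict graph. The small graph below is hypothetical and for illustration only.

# Backtracking 3-coloring of a toy conflict graph (illustrative sketch).
def three_color(conflicts, n_features):
    colors = [None] * n_features

    def ok(v, c):
        # A mask is usable if no conflicting neighbor already uses it.
        return all(colors[u] != c for u in conflicts.get(v, ()))

    def assign(v):
        if v == n_features:
            return True
        for c in range(3):                 # masks 0, 1, 2
            if ok(v, c):
                colors[v] = c
                if assign(v + 1):
                    return True
                colors[v] = None
        return False                       # would need a stitch or a redesign

    return colors if assign(0) else None

# Toy conflict graph: pairs of features closer than the coloring distance.
conflicts = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(three_color(conflicts, 4))           # e.g. [0, 1, 2, 0]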

    Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis

    Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. Using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
    Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012)
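    One textbook illustration of the general idea (not one of the article's case studies): stopping an iterative least-squares solver early behaves much like explicit ridge regularization, so the approximation itself has a regularizing effect. The data and penalty below are synthetic and chosen only to make the comparison visible.

# Early-stopped gradient descent vs. explicit ridge (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

# Exact ridge solution for an assumed penalty lambda.
lam = 10.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Early-stopped gradient descent on the *unregularized* least-squares loss;
# truncating the iterations acts as an implicit shrinkage of the solution.
w = np.zeros(d)
step = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(50):
    w -= step * X.T @ (X @ w - y)

print("distance to ridge solution:", np.linalg.norm(w - w_ridge))
print("distance to exact OLS fit :",
      np.linalg.norm(w - np.linalg.lstsq(X, y, rcond=None)[0]))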