451 research outputs found

    A load-sharing architecture for high performance optimistic simulations on multi-core machines

    In Parallel Discrete Event Simulation (PDES), the simulation model is partitioned into a set of distinct Logical Processes (LPs) which are allowed to concurrently execute simulation events. In this work we present an innovative approach to load-sharing on multi-core/multiprocessor machines, targeted at the optimistic PDES paradigm, where LPs speculatively process simulation events with no prior verification of causal consistency, and actual consistency violations (if any) are recovered via rollback techniques. In our approach, each simulation kernel instance, in charge of hosting and executing a specific set of LPs, runs a set of worker threads that can be dynamically activated/deactivated on the basis of a distributed algorithm. The algorithm relies in turn on an analytical model that indicates how to reassign processor/core usage across the kernels in order to handle the simulation workload as efficiently as possible. We also present a real implementation of our load-sharing architecture within the ROme OpTimistic Simulator (ROOT-Sim), an open-source C-based simulation platform implemented according to the PDES paradigm and the optimistic synchronization approach. Experimental results assessing the validity of our proposal are also presented.
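    As an illustration of the optimistic synchronization described above, the following sketch shows a single speculative Logical Process in Python: events are executed without prior consistency checks, state snapshots are saved, and a straggler message triggers a rollback. The class and its methods are hypothetical and greatly simplified (no anti-messages, no GVT computation, no worker-thread management); they are not ROOT-Sim's actual C API.

```python
# Hypothetical, simplified sketch of one optimistic LP: speculative event
# execution with snapshot-based rollback. Not ROOT-Sim's API.
import copy
import heapq
import itertools

class OptimisticLP:
    def __init__(self, initial_state):
        self.state = initial_state
        self.lvt = 0.0                 # local virtual time
        self.pending = []              # min-heap of (timestamp, seq, event)
        self.history = []              # (timestamp, snapshot, event) for rollback
        self._seq = itertools.count()  # tie-breaker for equal timestamps

    def schedule(self, timestamp, event):
        # A message older than the local virtual time is a straggler:
        # speculation violated causality, so restore an earlier snapshot.
        if timestamp < self.lvt:
            self.rollback(timestamp)
        heapq.heappush(self.pending, (timestamp, next(self._seq), event))

    def rollback(self, timestamp):
        while self.history and self.history[-1][0] >= timestamp:
            ts, snapshot, event = self.history.pop()
            self.state = snapshot                     # undo the event's effects
            heapq.heappush(self.pending, (ts, next(self._seq), event))
        self.lvt = self.history[-1][0] if self.history else 0.0

    def execute_next(self):
        if not self.pending:
            return False
        ts, _, event = heapq.heappop(self.pending)
        self.history.append((ts, copy.deepcopy(self.state), event))
        event(self.state)              # apply the event to the LP state
        self.lvt = ts
        return True
```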

    Warping Cache Simulation of Polyhedral Programs

    Techniques to evaluate a program’s cache performance fall into two camps: 1. Traditional trace-based cache simulators precisely account for sophisticated real-world cache models and support arbitrary workloads, but their runtime is proportional to the number of memory accesses performed by the program under analysis. 2. Relying on implicit workload characterizations such as the polyhedral model, analytical approaches often achieve problem-size-independent runtimes, but so far have been limited to idealized cache models. We introduce a hybrid approach, warping cache simulation, that aims to achieve applicability to real-world cache models and problem-size-independent runtimes. Like prior analytical approaches, we focus on programs in the polyhedral model, which allows us to reason analytically about the sequence of memory accesses. Combining this analytical reasoning with information about the cache behavior obtained from explicit cache simulation allows us to soundly fast-forward the simulation. By this process of warping, we accelerate the simulation so that its cost is often independent of the number of memory accesses.
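    The following toy Python sketch conveys the flavor of fast-forwarding a cache simulation, under assumptions much stronger than the paper's (a single LRU set and a memory-access trace that is an exact repetition of one fixed period): once the set's state at the start of a period has been seen before, the per-cycle hit/miss counts repeat and the remaining cycles can be skipped. The function name and structure are illustrative only and do not reflect the paper's algorithm or cache models.

```python
# Toy fast-forwarding of an LRU cache-set simulation for a periodic trace.
# Illustrative only; not the paper's warping algorithm or cache models.
from collections import OrderedDict

def simulate_lru_set(trace_period, num_periods, ways=4):
    """Hits/misses for `trace_period` repeated `num_periods` times in one
    `ways`-way LRU set, skipping ("warping over") repeated steady state."""
    cache = OrderedDict()              # cached lines in LRU -> MRU order
    hits = misses = 0
    seen = {}                          # set state at period start -> (period, hits, misses)
    period = 0
    while period < num_periods:
        state = tuple(cache.keys())
        if state in seen:              # state recurred: counts now repeat per cycle
            p0, h0, m0 = seen[state]
            cycle = period - p0
            skip = (num_periods - period) // cycle
            hits += skip * (hits - h0)
            misses += skip * (misses - m0)
            period += skip * cycle
            seen.clear()               # do not warp twice on the same entry
            if period >= num_periods:
                break
        else:
            seen[state] = (period, hits, misses)
        for line in trace_period:      # explicit simulation of one period
            if line in cache:
                hits += 1
                cache.move_to_end(line)
            else:
                misses += 1
                cache[line] = None
                if len(cache) > ways:
                    cache.popitem(last=False)   # evict least recently used line
        period += 1
    return hits, misses

# Example: five distinct lines thrash a 4-way LRU set, so every access misses.
print(simulate_lru_set([0, 1, 2, 3, 4], 10_000))   # -> (0, 50000)
```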

    Discrete-Time continuous-dilation construction of linear scale-invariant systems and multi-dimensional self-similar signals

    This dissertation presents novel models for purely discrete-time self-similar processes and scale-invariant systems. The results developed are based on the definition of a discrete-time scaling (dilation) operation through a mapping between discrete and continuous frequencies. It is shown that continuous scaling factors are possible through this operation even though the signal itself is discrete-time. Both deterministic and stochastic discrete-time self-similar signals are studied. Existence conditions for self-similar signals are provided. Constructions of discrete-time linear scale-invariant (LSI) systems and white-noise-driven models of self-similar stochastic processes are discussed. It is shown that, unlike continuous-time self-similar signals, a wide class of non-trivial discrete-time self-similar signals can be constructed through these models. The results obtained in the one-dimensional case are extended to the multi-dimensional case. Constructions of discrete-space self-similar random fields are shown to be potentially useful for the generation, modeling and analysis of multi-dimensional self-similar signals such as textures. The constructions of discrete-time and discrete-space self-similar signals presented in the dissertation provide potential tools for applications such as image segmentation and classification, pattern recognition, image compression, digital halftoning, computer vision, and computer graphics. The other aspect of the dissertation deals with the construction of a discrete-time continuous-dilation wavelet transform and its existence condition, based on the defined discrete-time continuous-dilation scaling operator.
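    A concrete, if rough, illustration of a frequency-domain dilation operator for a discrete-time signal is sketched below in Python. The specific discrete-to-continuous frequency mapping used here, Omega = tan(omega/2), and the use of linear interpolation on FFT samples are assumptions made for the sake of a runnable example; the dissertation's own mapping and construction may differ.

```python
# Rough sketch of a discrete-time dilation with a continuous factor `a`,
# implemented through a discrete-to-continuous frequency mapping. The mapping
# Omega = tan(omega/2) and the FFT-plus-interpolation implementation are
# assumptions for illustration, not necessarily the dissertation's construction.
import numpy as np

def dilate(x, a):
    """Approximate stretch of x by the (possibly non-integer) factor a > 0;
    amplitude normalization is omitted."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n)      # discrete frequencies in [-pi, pi)
    Omega = np.tan(omega / 2)                  # map onto a continuous frequency axis
    src = 2 * np.arctan(a * Omega)             # dilation acts on the continuous axis
    order = np.argsort(omega)                  # np.interp needs increasing abscissae
    Xr = np.interp(src, omega[order], X.real[order])
    Xi = np.interp(src, omega[order], X.imag[order])
    return np.fft.ifft(Xr + 1j * Xi).real

# Example: a smooth pulse centred at sample 0 (so its spectrum is smooth),
# stretched by the non-integer factor 1.3.
t = np.arange(256)
x = np.exp(-0.5 * (np.minimum(t, 256 - t) / 10.0) ** 2)
y = dilate(x, 1.3)
```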

    High-Order Flux Reconstruction on Stretched and Warped Meshes

    High-order computational fluid dynamics is gathering broadening interest as a future industrial tool, with one such approach being flux reconstruction (FR). However, due to the need to mesh complex geometries if FR is to displace current lower-order methods, FR will likely have to be applied to stretched and warped meshes. We therefore investigate the analytical and numerical behavior of FR on deformed meshes for both the one-dimensional linear advection equation and the two-dimensional Euler equations. The analytical foundation of this work is a modified von Neumann analysis for linearly deformed grids, which is presented. The temporal stability limits for linear advection on such grids are also explored analytically and numerically, with Courant-Friedrichs-Lewy (CFL) limits set out for several Runge-Kutta schemes; the primary trend is that contracting mesh regions give rise to higher CFL limits, whereas expansion leads to lower CFL limits. Lastly, FR is benchmarked against finite difference and finite volume schemes, as are common in industry, with the comparison showing FR's increased wave-propagating ability on warped and stretched meshes, and hence its increased resilience to mesh deformation.
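    For readers unfamiliar with how a CFL limit is extracted from a spatial discretization, the sketch below performs a generic eigenvalue-based stability check in Python: it builds a first-order upwind operator for linear advection on a geometrically stretched periodic grid and bisects for the largest stable time step under the classical RK4 stability polynomial. This is a plain finite-difference stand-in, not the paper's modified von Neumann analysis for FR; the operator, grid, and scheme are assumptions for illustration.

```python
# Generic eigenvalue/CFL sketch (a finite-difference stand-in, not the paper's
# modified von Neumann analysis for flux reconstruction).
import numpy as np

def upwind_operator(dx):
    """Semi-discrete du/dt = L u for u_t + u_x = 0 (unit wave speed),
    first-order upwind differences on a periodic grid with cell sizes dx."""
    n = len(dx)
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -1.0 / dx[i]
        L[i, (i - 1) % n] = 1.0 / dx[i]
    return L

def rk4_stable(z):
    """True if all points z lie inside the classical RK4 stability region."""
    R = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24
    return bool(np.all(np.abs(R) <= 1.0 + 1e-12))

def max_stable_dt(L, hi=10.0, iters=60):
    """Bisect for the largest dt such that dt * eig(L) stays RK4-stable."""
    lam = np.linalg.eigvals(L)
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rk4_stable(mid * lam) else (lo, mid)
    return lo

# Geometrically stretched periodic grid: 64 cells growing by 5% each.
dx = 0.02 * 1.05 ** np.arange(64)
print(max_stable_dt(upwind_operator(dx)))
```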

    Parallel and Distributed Immersive Real-Time Simulation of Large-Scale Networks


    A Programmable Display-Layer Architecture for Virtual-Reality Applications

    Two important technical objectives of virtual-reality systems are to provide compelling visuals and effective 3D user interaction. In this respect, modern virtual-reality system architectures suffer from a number of shortcomings. The reduction of end-to-end latency, crosstalk and judder are especially difficult challenges, each of which negatively affects visual quality or user interaction. In order to provide higher-quality visuals, complex scenes consisting of large models are often used. Rendering such a complex scene is a time-consuming process resulting in high end-to-end latency, thereby hampering user interaction. Classic virtual-reality architectures cannot adequately address these challenges due to their inherent design principles. In particular, the tight coupling between input devices, the rendering loop and the display system inhibits these systems from addressing all the aforementioned challenges simultaneously. In this thesis, a virtual-reality architecture design is introduced that is based on the addition of a new logical layer: the Programmable Display Layer (PDL). The governing idea is that an extra layer is inserted between the rendering system and the display. In this way, the display can be updated at a fast rate and in a custom manner, independent of the other components in the architecture, including the rendering system. To generate intermediate display updates at a fast rate, the PDL performs per-pixel depth-image warping by utilizing the application data. Image warping is the process of computing a new image by transforming individual depth-pixels from a closely matching previous image to their updated locations. The PDL architecture can be used for a range of algorithms and to solve problems that are not easily solved using classic architectures. In particular, techniques to reduce crosstalk, judder and latency are examined using algorithms implemented on top of the PDL. Concerning user-interaction techniques, several six-degrees-of-freedom input methods exist, of which optical tracking is a popular option. However, optical tracking methods also introduce several constraints that depend on the camera setup, such as line-of-sight requirements, the volume of the interaction space and the achieved tracking accuracy. These constraints generally cause a decline in the effectiveness of user interaction. To investigate the effectiveness of optical tracking methods, an optical tracker simulation framework has been developed, including a novel optical tracker to test this framework. In this way, different optical tracking algorithms can be simulated and quantitatively evaluated under a wide range of conditions. A common approach in virtual reality is to implement an algorithm and then to evaluate its efficacy by either subjective, qualitative metrics or quantitative user experiments, after which an updated version of the algorithm may be implemented and the cycle repeated. A different approach is followed here: throughout this thesis, an attempt is made to automatically detect and quantify errors using completely objective and automated quantitative methods, and to subsequently resolve these errors dynamically.
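    As a point of reference for the per-pixel depth-image warping that the PDL performs, the Python sketch below forward-warps a color+depth image from an old camera pose to a new one: each pixel is unprojected with the old camera, transformed into the new camera's frame, and splatted with a simple depth test. It is a minimal, unoptimized illustration of the general technique under assumed conventions (pinhole intrinsics K, 4x4 camera-to-world poses, +z pointing into the scene), not the thesis's PDL implementation.

```python
# Minimal sketch of forward depth-image warping (illustrative conventions:
# pinhole intrinsics K, 4x4 camera-to-world poses, +z pointing into the scene).
# Not the thesis's PDL implementation.
import numpy as np

def warp_depth_image(color, depth, K, pose_old, pose_new):
    """color: HxWx3, depth: HxW view-space depth of the previously rendered
    frame. Returns that frame re-projected into the new camera pose."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T.astype(float)
    rays = np.linalg.inv(K) @ pix                     # unproject pixels to camera rays
    d = depth.reshape(-1)
    pts_cam = rays * d                                # 3D points in the old camera frame
    pts_world = pose_old @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_new = np.linalg.inv(pose_new) @ pts_world     # into the new camera frame
    proj = K @ pts_new[:3]
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    src = color.reshape(-1, 3)
    for i in range(u.size):                           # splat with a simple depth test
        if d[i] <= 0 or pts_new[2, i] <= 0:           # skip invalid or behind-camera points
            continue
        if 0 <= u[i] < w and 0 <= v[i] < h and pts_new[2, i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = pts_new[2, i]
            out[v[i], u[i]] = src[i]
    return out
```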