
    Lightweight Asynchronous Snapshots for Distributed Dataflows

    Distributed stateful stream processing enables the deployment and execution of large-scale continuous computations in the cloud, targeting both low latency and high throughput. One of the most fundamental challenges of this paradigm is providing processing guarantees under potential failures. Existing approaches rely on periodic global state snapshots that can be used for failure recovery. Those approaches suffer from two main drawbacks. First, they often stall the overall computation, which impacts ingestion. Second, they eagerly persist all records in transit along with the operator states, which results in larger snapshots than required. In this work we propose Asynchronous Barrier Snapshotting (ABS), a lightweight algorithm suited for modern dataflow execution engines that minimises space requirements. ABS persists only operator states on acyclic execution topologies, while keeping a minimal record log on cyclic dataflows. We implemented ABS on Apache Flink, a distributed analytics engine that supports stateful stream processing. Our evaluation shows that our algorithm does not have a heavy impact on the execution, maintaining linear scalability and performing well with frequent snapshots. (8 pages, 7 figures)
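
    The core of ABS can be pictured as a barrier-alignment step at each operator. The sketch below is a minimal, illustrative rendering of that idea, not Apache Flink's actual API: the Operator class, its method names, and the state_backend/downstream interfaces are all hypothetical.

```python
# Minimal sketch of barrier alignment in Asynchronous Barrier Snapshotting (ABS).
# Assumes a single-threaded operator with a fixed set of input channels; all
# names (Operator, state_backend, emit_barrier, ...) are illustrative.

class Operator:
    def __init__(self, input_channels, state_backend, downstream):
        self.input_channels = set(input_channels)
        self.aligned = set()             # channels whose barrier has already arrived
        self.state_backend = state_backend
        self.downstream = downstream
        self.state = {}                  # the operator's own mutable state

    def on_barrier(self, channel, checkpoint_id):
        # Block further records from this channel until every input has delivered
        # the barrier for this checkpoint.
        self.aligned.add(channel)
        if self.aligned == self.input_channels:
            # Barriers aligned: persist only the operator state (no in-flight
            # records are logged on acyclic topologies), then forward the barrier.
            self.state_backend.persist(checkpoint_id, dict(self.state))
            self.downstream.emit_barrier(checkpoint_id)
            self.aligned.clear()         # resume consuming from all channels
```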

    The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows

    The Gauss-Newton with approximated tensors (GNAT) method is a nonlinear model reduction method that operates on fully discretized computational models. It achieves dimension reduction by a Petrov-Galerkin projection associated with residual minimization; it delivers computational efficiency by a hyper-reduction procedure based on the 'gappy POD' technique. Originally presented in Ref. [1], where it was applied to implicit nonlinear structural-dynamics models, this method is further developed here and applied to the solution of a benchmark turbulent viscous flow problem. To begin, this paper develops global state-space error bounds that justify the method's design and highlight its advantages in terms of minimizing components of these error bounds. Next, the paper introduces a 'sample mesh' concept that enables a distributed, computationally efficient implementation of the GNAT method in finite-volume-based computational-fluid-dynamics (CFD) codes. The suitability of GNAT for parameterized problems is highlighted with the solution of an academic problem featuring moving discontinuities. Finally, the capability of this method to reduce by orders of magnitude the core-hours required for large-scale CFD computations, while preserving accuracy, is demonstrated with the simulation of turbulent flow over the Ahmed body. For an instance of this benchmark problem with over 17 million degrees of freedom, GNAT outperforms several other nonlinear model-reduction methods, reduces the required computational resources by more than two orders of magnitude, and delivers a solution that differs by less than 1% from its high-dimensional counterpart.
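
    To make the roles of the Petrov-Galerkin projection and the gappy-POD hyper-reduction concrete, here is a rough sketch of a single hyper-reduced Gauss-Newton step in the spirit of GNAT. It assumes access to the full discrete residual and Jacobian (dense here, purely for illustration); V is a state POD basis, Phi_r a residual POD basis, and sample_idx the rows selected by a 'sample mesh'. All function names and interfaces are assumptions, not the authors' implementation.

```python
import numpy as np

def gnat_step(x_ref, w, V, Phi_r, sample_idx, residual_fn, jacobian_fn):
    """One illustrative Gauss-Newton step on the hyper-reduced residual."""
    x = x_ref + V @ w                        # reconstruct full state from reduced coords w
    r_s = residual_fn(x)[sample_idx]         # residual evaluated only on the sample mesh
    J_s = jacobian_fn(x)[sample_idx] @ V     # reduced Jacobian restricted to sampled rows
    # Gappy-POD operator: least-squares fit of sampled quantities in the residual basis.
    P = np.linalg.pinv(Phi_r[sample_idx])
    # Gauss-Newton update: minimize || P (J_s dw + r_s) ||_2 over dw.
    dw, *_ = np.linalg.lstsq(P @ J_s, -(P @ r_s), rcond=None)
    return w + dw
```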

    Adaptive beamforming for large arrays in satellite communications systems with dispersed coverage

    Conventional multibeam satellite communications systems ensure coverage of wide areas through multiple fixed beams, where all users inside a beam share the same bandwidth. We consider a new and more flexible system in which each user is assigned a dedicated beam and the users can be widely dispersed geographically. This is achieved through the use of a large direct radiating array (DRA) coupled with adaptive beamforming, so as to reject interference and provide maximal gain toward the user of interest. New fast-converging adaptive beamforming algorithms are presented, which make it possible to obtain a good signal-to-interference-and-noise ratio (SINR) with a number of snapshots much lower than the number of antennas in the array. These beamformers are evaluated on reference scenarios.
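
    The regime described here, far fewer snapshots than antenna elements, is exactly where a plain sample-matrix-inversion beamformer breaks down. As a point of reference only (it is not claimed to be the algorithm proposed in the paper), the sketch below shows a diagonally loaded MVDR beamformer, a standard baseline for that regime; the names and the loading heuristic are illustrative.

```python
import numpy as np

def loaded_mvdr_weights(snapshots, steering_vec, loading=1e-2):
    """snapshots: (n_antennas, n_snapshots) complex array; steering_vec: (n_antennas,)."""
    n_ant, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap                  # sample covariance
    R += loading * np.trace(R).real / n_ant * np.eye(n_ant)      # diagonal loading
    r_inv_a = np.linalg.solve(R, steering_vec)
    return r_inv_a / (steering_vec.conj() @ r_inv_a)             # unit gain toward the user
```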

    Randomized Dynamic Mode Decomposition

    This paper presents a randomized algorithm for computing the near-optimal low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging techniques that compute low-rank matrix approximations at a fraction of the cost of deterministic algorithms, easing the computational challenges arising in the area of 'big data'. The idea is to derive a small matrix from the high-dimensional data, which is then used to efficiently compute the dynamic modes and eigenvalues. The algorithm is presented in a modular probabilistic framework, and the approximation quality can be controlled via oversampling and power iterations. The effectiveness of the resulting randomized DMD algorithm is demonstrated on several benchmark examples of increasing complexity, providing an accurate and efficient approach to extract spatiotemporal coherent structures from big data in a framework that scales with the intrinsic rank of the data rather than the ambient measurement dimension. For this work we assume that the dynamics of the problem under consideration evolve on a low-dimensional subspace that is well characterized by a fast-decaying singular value spectrum.
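
    The two-stage structure described above (sketch the data with a random test matrix, then run standard DMD on the small projected matrix) can be written down compactly. The following is a generic sketch of randomized DMD with oversampling and power iterations, assuming real-valued snapshots stored column-wise in time; the names and defaults are illustrative rather than the paper's reference implementation.

```python
import numpy as np

def randomized_dmd(data, target_rank, oversampling=10, n_power_iter=2, seed=None):
    """data: (n_features, n_snapshots) array of sequential snapshots."""
    rng = np.random.default_rng(seed)
    m, n = data.shape
    k = target_rank + oversampling
    # Stage 1: orthonormal basis Q approximating the range of the data (randomized range finder).
    Q, _ = np.linalg.qr(data @ rng.standard_normal((n, k)))
    for _ in range(n_power_iter):                # power iterations sharpen the basis
        Q, _ = np.linalg.qr(data @ (data.T @ Q))
    # Stage 2: standard DMD on the small projected matrix B = Q^T data.
    B = Q.T @ data
    X, Y = B[:, :-1], B[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :target_rank], s[:target_rank], Vh[:target_rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Q @ ((Y @ Vh.conj().T / s) @ W)      # dynamic modes lifted back to full space
    return eigvals, modes
```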