
    On non-normality and classification of amplification mechanisms in stability and resolvent analysis

    We seek to quantify non-normality of the most amplified resolvent modes and predict their features based on the characteristics of the base or mean velocity profile. A 2-by-2 model linear Navier-Stokes (LNS) operator illustrates how non-normality from mean shear distributes perturbation energy in different velocity components of the forcing and response modes. The inverse of their inner product, which is unity for a purely normal mechanism, is proposed as a measure to quantify non-normality. In flows where there is downstream spatial dependence of the base/mean, mean flow advection separates the spatial support of forcing and response modes, which impacts the inner product. Success of mean stability analysis depends on the normality of amplification. If the amplification is normal, the resolvent operator written in its dyadic representation reveals that the adjoint and forward stability modes are proportional to the forcing and response resolvent modes. If the amplification is non-normal, then resolvent analysis is required to understand the origin of observed flow structures. Eigenspectra and pseudospectra are used to characterize these phenomena. Two test cases are studied: low Reynolds number cylinder flow and turbulent channel flow. The first deals mainly with normal mechanisms; quantification of non-normality using the inverse inner product of the leading forcing and response modes agrees well with the product of the resolvent norm and the distance between the imaginary axis and the least stable eigenvalue. In turbulent channel flow, structures result from both normal and non-normal mechanisms. Mean shear is exploited most efficiently by stationary disturbances, while bounds on the pseudospectra illustrate how non-normality is responsible for the most amplified disturbances at spatial wavenumbers and temporal frequencies corresponding to well-known turbulent structures.
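    The proposed measure, the inverse inner product of the leading forcing and response resolvent modes, can be sketched for a 2-by-2 model operator. The matrix entries and frequency below are illustrative stand-ins, not the paper's LNS operator; the off-diagonal term plays the role of mean shear.

    ```python
    import numpy as np

    def nonnormality_measure(A, omega):
        """1 / |<forcing, response>| for the leading resolvent modes.

        Equals 1 for a purely normal mechanism and grows as the leading
        forcing and response modes decouple into different components.
        """
        n = A.shape[0]
        H = np.linalg.inv(1j * omega * np.eye(n) - A)  # resolvent operator
        U, s, Vh = np.linalg.svd(H)
        response = U[:, 0]         # leading left singular vector (response mode)
        forcing = Vh[0, :].conj()  # leading right singular vector (forcing mode)
        return 1.0 / abs(np.vdot(forcing, response))

    # Normal (diagonal) operator: forcing and response modes coincide.
    A_normal = np.diag([-1.0, -2.0])
    # Shear-coupled operator: off-diagonal term mimics mean shear.
    A_shear = np.array([[-1.0, 50.0], [0.0, -2.0]])

    print(nonnormality_measure(A_normal, 0.0))  # exactly 1: normal mechanism
    print(nonnormality_measure(A_shear, 0.0))   # much larger than 1
    ```

    For the sheared operator, the forcing mode concentrates in the second component and the response in the first, so their inner product is small and the measure is large.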

    On Correcting Inputs: Inverse Optimization for Online Structured Prediction

    Algorithm designers typically assume that the input data is correct, and then proceed to find "optimal" or "sub-optimal" solutions using this input data. However, this assumption of correct data does not always hold in practice, especially in the context of online learning systems where the objective is to learn appropriate feature weights given some training samples. Such scenarios necessitate the study of inverse optimization problems, where one is given an input instance as well as a desired output and the task is to adjust the input data so that the given output is indeed optimal. Motivated by learning structured prediction models, in this paper we consider inverse optimization with a margin, i.e., we require the given output to be better than all other feasible outputs by a desired margin. We consider such inverse optimization problems for maximum weight matroid basis, matroid intersection, perfect matchings, minimum cost maximum flows, and shortest paths, and derive the first known results for such problems with a non-zero margin. The effectiveness of these algorithmic approaches to online learning for structured prediction is also discussed.
    Comment: Conference version to appear in FSTTCS, 201
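    The shortest-path variant can be illustrated with a deliberately naive adjustment: raise every off-path edge cost uniformly until the desired path wins by the margin. This is only a feasible correction, not the paper's algorithm, and the graph below is a made-up toy example.

    ```python
    def simple_paths(graph, s, t):
        """Enumerate all simple s-t paths (fine for tiny illustrative graphs)."""
        stack = [(s, [s])]
        while stack:
            node, path = stack.pop()
            if node == t:
                yield path
                continue
            for nxt in graph.get(node, {}):
                if nxt not in path:
                    stack.append((nxt, path + [nxt]))

    def path_cost(graph, path):
        return sum(graph[u][v] for u, v in zip(path, path[1:]))

    def enforce_margin(graph, desired, margin):
        """Raise every off-path edge cost uniformly until `desired` beats all
        other s-t paths by `margin`.

        Any alternative path uses at least one off-path edge, so adding delta
        to each off-path edge raises every alternative's cost by at least
        delta while leaving the desired path's cost unchanged.
        """
        s, t = desired[0], desired[-1]
        on_path = set(zip(desired, desired[1:]))
        c_des = path_cost(graph, desired)
        alts = [path_cost(graph, p) for p in simple_paths(graph, s, t)
                if p != desired]
        delta = max(0.0, c_des + margin - min(alts)) if alts else 0.0
        adjusted = {u: {v: c + (0.0 if (u, v) in on_path else delta)
                        for v, c in nbrs.items()}
                    for u, nbrs in graph.items()}
        return adjusted, delta
    ```

    On a two-route graph where both routes initially cost the same, a margin of 1 forces each off-path edge up by 1, after which the desired route is margin-optimal.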

    A tutorial on recursive models for analyzing and predicting path choice behavior

    The problem at the heart of this tutorial consists in modeling the path choice behavior of network users. This problem has been extensively studied in transportation science, where it is known as the route choice problem. In this literature, individuals' choices of paths are typically predicted using discrete choice models. This article is a tutorial on a specific category of discrete choice models called recursive, and it makes three main contributions: First, for the purpose of assisting future research on route choice, we provide a comprehensive background on the problem, linking it to different fields including inverse optimization and inverse reinforcement learning. Second, we formally introduce the problem and the recursive modeling idea along with an overview of existing models, their properties and applications. Third, we extensively analyze illustrative examples from different angles so that a novice reader can gain intuition on the problem and the advantages provided by recursive models in comparison to path-based ones.
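    The core of a recursive model such as the recursive logit can be sketched in a few lines: the expected maximum utility V at each node satisfies a logsumexp fixed point over outgoing links, and link choice probabilities follow a multinomial logit form. The toy network and arc utilities below are illustrative, not from the tutorial.

    ```python
    import math

    def recursive_logit_values(arcs, dest, iters=100):
        """Value iteration for the recursive logit fixed point:
        V(k) = log sum_a exp(v(a|k) + V(next(a))), with V(dest) = 0.

        `arcs` maps each non-destination node to [(next_node, utility), ...].
        """
        V = {k: 0.0 for k in arcs}
        V[dest] = 0.0
        for _ in range(iters):
            for k in arcs:
                V[k] = math.log(sum(math.exp(u + V[nxt]) for nxt, u in arcs[k]))
        return V

    def link_probs(arcs, V, k):
        """Multinomial-logit link choice probabilities at node k."""
        weights = [math.exp(u + V[nxt]) for nxt, u in arcs[k]]
        total = sum(weights)
        return {nxt: w / total for (nxt, _), w in zip(arcs[k], weights)}

    # Toy network: origin 'o' reaches destination 'd' via 'a' or 'b'.
    arcs = {'o': [('a', -1.0), ('b', -2.0)], 'a': [('d', -1.0)], 'b': [('d', -1.0)]}
    V = recursive_logit_values(arcs, 'd')
    print(link_probs(arcs, V, 'o'))  # cheaper route via 'a' is more likely
    ```

    Because choices are made link by link using downstream values, no path set has to be enumerated, which is the advantage over path-based models highlighted in the abstract.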

    Reed-Solomon Forward Error Correction (FEC) Schemes, RFC 5510

    This document describes a Fully-Specified Forward Error Correction (FEC) Scheme for the Reed-Solomon FEC codes over GF(2^^m), where m is in {2..16}, and its application to the reliable delivery of data objects on the packet erasure channel (i.e., a communication path where packets are either received without any corruption or discarded during transmission). This document also describes a Fully-Specified FEC Scheme for the special case of Reed-Solomon codes over GF(2^^8) when there is no encoding symbol group. Finally, in the context of the Under-Specified Small Block Systematic FEC Scheme (FEC Encoding ID 129), this document assigns an FEC Instance ID to the special case of Reed-Solomon codes over GF(2^^8). Reed-Solomon codes belong to the class of Maximum Distance Separable (MDS) codes, i.e., they enable a receiver to recover the k source symbols from any set of k received symbols. The schemes described here are compatible with the implementation from Luigi Rizzo.
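    The MDS property, recovering the k source symbols from any k received symbols, can be sketched with a toy Reed-Solomon code. For arithmetic clarity this sketch works over the prime field GF(257) rather than the RFC's GF(2^^m), and uses plain polynomial-evaluation encoding with Lagrange interpolation for decoding; it is not the RFC's Fully-Specified Scheme.

    ```python
    P = 257  # prime field stand-in for GF(2^^m), chosen so % arithmetic is obvious

    def rs_encode(source, n):
        """Evaluate the degree-(k-1) polynomial whose coefficients are the k
        source symbols at points x = 1..n, producing n encoding symbols."""
        return [sum(c * pow(x, i, P) for i, c in enumerate(source)) % P
                for x in range(1, n + 1)]

    def rs_decode(points, k):
        """Recover the k source coefficients from any k (x, y) pairs via
        Lagrange interpolation over GF(P)."""
        coeffs = [0] * k
        for i, (xi, yi) in enumerate(points):
            num = [1]   # basis polynomial numerator, low-degree coefficient first
            denom = 1
            for j, (xj, _) in enumerate(points):
                if j == i:
                    continue
                # multiply num by (x - xj)
                num = [(b - xj * a) % P for a, b in zip(num + [0], [0] + num)]
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, P - 2, P) % P  # Fermat inverse of denom
            for d in range(k):
                coeffs[d] = (coeffs[d] + scale * num[d]) % P
        return coeffs

    symbols = rs_encode([5, 7], 4)          # k = 2 source symbols, n = 4 sent
    recovered = rs_decode([(2, symbols[1]), (4, symbols[3])], 2)
    print(recovered)  # any 2 of the 4 symbols suffice: [5, 7]
    ```

    Losing any n - k symbols to erasures is harmless, which is exactly the packet erasure channel behavior the RFC targets.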

    Approximated Lax Pairs for the Reduced Order Integration of Nonlinear Evolution Equations

    A reduced-order model algorithm, called ALP, is proposed to solve nonlinear evolution partial differential equations. It is based on approximations of generalized Lax pairs. Contrary to other reduced-order methods, like Proper Orthogonal Decomposition, the basis on which the solution is searched for evolves in time according to a dynamics specific to the problem. It is therefore well-suited to solving problems with progressive fronts or wave propagation. Another difference with other reduced-order methods is that it is not based on an off-line / on-line strategy. Numerical examples are shown for the linear advection, KdV and FKPP equations, in one and two dimensions.
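    The motivation for a time-evolving basis can be illustrated by how poorly a fixed POD basis compresses a travelling pulse compared with a stationary one. The grid, pulse width, and snapshot count below are arbitrary choices for illustration, not the paper's test cases.

    ```python
    import numpy as np

    def pod_energy_fraction(snapshots, r):
        """Fraction of snapshot energy captured by the first r POD modes
        (POD modes are the left singular vectors of the snapshot matrix)."""
        s = np.linalg.svd(snapshots, compute_uv=False)
        return (s[:r] ** 2).sum() / (s ** 2).sum()

    x = np.linspace(0.0, 1.0, 400)
    # Travelling front: a Gaussian pulse advected across the domain.
    travelling = np.stack([np.exp(-((x - c) / 0.03) ** 2)
                           for c in np.linspace(0.1, 0.9, 60)], axis=1)
    # Stationary pulse: same shape, fixed position (rank-one snapshot data).
    stationary = np.stack([np.exp(-((x - 0.5) / 0.03) ** 2)] * 60, axis=1)

    print(pod_energy_fraction(stationary, 1))  # ~1: one fixed mode suffices
    print(pod_energy_fraction(travelling, 1))  # far below 1: many modes needed
    ```

    A single fixed mode captures essentially all of the stationary data but only a small fraction of the travelling pulse, which is the regime where a basis that evolves with the problem's dynamics pays off.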