47,929 research outputs found

    Optimal bounds with semidefinite programming: an application to stress driven shear flows

    Full text link
    We introduce an innovative numerical technique based on convex optimization to solve a range of infinite-dimensional variational problems arising from the application of the background method to fluid flows. In contrast to most existing schemes, we do not consider the Euler--Lagrange equations for the minimizer. Instead, we use series expansions to formulate a finite-dimensional semidefinite program (SDP) whose solution converges to that of the original variational problem. Our formulation accounts for the influence of all modes in the expansion, and the feasible set of the SDP corresponds to a subset of the feasible set of the original problem. Moreover, SDPs can easily be formulated when the fluid is subject to imposed boundary fluxes, which pose a challenge for traditional methods. We apply this technique to compute rigorous and near-optimal upper bounds on the dissipation coefficient for flows driven by a surface stress. We improve previous analytical bounds by more than a factor of 10, and show that the bounds become independent of the domain aspect ratio in the limit of vanishing viscosity. We also confirm that the dissipation properties of stress driven flows are similar to those of flows subject to a body force localized in a narrow layer near the surface.
    Comment: 17 pages; typos removed; extended discussion of linear matrix inequalities in Section III; revised argument in Section IVC, results unchanged; extended discussion of computational setup and limitations in Sections IVE-IVF. Submitted to Phys. Rev.
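    A minimal sketch of how a truncated variational problem of this kind can be posed as a semidefinite program, using cvxpy. The matrix A, the structure of the linear matrix inequality, and the normalization below are illustrative assumptions, not the paper's formulation: the bound U is minimized subject to an LMI stating that U dominates the largest eigenvalue of a symmetric operator depending affinely on the background-field coefficients b.

```python
import cvxpy as cp
import numpy as np

# Illustrative toy problem (assumed structure, not the paper's equations):
# choose background-field coefficients b (zero mean) to minimize a bound U
# that must dominate the largest eigenvalue of A + diag(b); written as an
# LMI, the whole problem is a semidefinite program.
n = 8                                    # number of retained expansion modes
rng = np.random.default_rng(0)
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                        # hypothetical symmetric mode-coupling matrix

U = cp.Variable()                        # the bound being optimized
b = cp.Variable(n)                       # background-field expansion coefficients

lmi = U * np.eye(n) - (A + cp.diag(b)) >> 0   # spectral constraint as an LMI
problem = cp.Problem(cp.Minimize(U), [lmi, cp.sum(b) == 0])
problem.solve()

print("optimal bound U =", U.value)
print("check against largest eigenvalue:",
      np.linalg.eigvalsh(A + np.diag(b.value)).max())
```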

    Time-stepping approach for solving upper-bound problems: Application to two-dimensional Rayleigh-Bénard convection

    Get PDF
    An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the global optimum of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents alpha and beta in the presumed Nu ~ Pr^alpha Ra^beta scaling relation. The computations clearly show that for Ra <= 10^10 at fixed L = 2√2, Nu <= 0.106 Pr^0 Ra^(5/12), which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.
    Funding: NSF DMS-0928098, DMS-1515161, DMS-0927587, PHY-1205219; Simons Foundation; NSF; ONR; Institute for Computational Engineering and Sciences (ICES)
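    A minimal sketch of the time-stepping idea (an illustrative stand-in, not the paper's actual optimality system): instead of applying Newton iteration to the optimality conditions, evolve an artificial gradient flow whose only steady state is the optimizer, and march it forward in time from an arbitrary initial iterate.

```python
import numpy as np

# Hypothetical strictly convex quadratic functional F(u) = 0.5 u'Ku - f'u,
# with K a 1D discrete Laplacian standing in for the real variational problem.
n = 50
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

u = np.zeros(n)            # no carefully chosen initial iterate is needed
dt = 0.4 * h**2            # explicit-Euler stability restriction for K
for step in range(200000):
    g = K @ u - f          # gradient of F; vanishes only at the optimizer
    u -= dt * g            # one "time step" of the artificial flow du/dt = -grad F(u)
    if np.linalg.norm(g) < 1e-8:
        break

print("converged after", step, "steps; max |residual| =", np.abs(K @ u - f).max())
```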

    Hyperuniformity, quasi-long-range correlations, and void-space constraints in maximally random jammed particle packings. I. Polydisperse spheres

    Full text link
    Hyperuniform many-particle distributions possess a local number variance that grows more slowly than the volume of an observation window, implying that the local density is effectively homogeneous beyond a few characteristic length scales. Previous work on maximally random strictly jammed sphere packings in three dimensions has shown that these systems are hyperuniform and possess unusual quasi-long-range pair correlations, resulting in anomalous logarithmic growth in the number variance. However, recent work on maximally random jammed sphere packings with a size distribution has suggested that such quasi-long-range correlations and hyperuniformity are not universal among jammed hard-particle systems. In this paper we show that such systems are indeed hyperuniform with signature quasi-long-range correlations by characterizing the more general local-volume-fraction fluctuations. We argue that the regularity of the void space induced by the constraints of saturation and strict jamming overcomes the local inhomogeneity of the disk centers to induce hyperuniformity in the medium, with a linear small-wavenumber nonanalytic behavior in the spectral density resulting in quasi-long-range spatial correlations. A numerical and analytical analysis of the pore-size distribution for a binary MRJ system, in addition to a local characterization of the n-particle loops governing the void space surrounding the inclusions, is presented in support of our argument. This paper is the first part of a series of two papers considering the relationships among hyperuniformity, jamming, and regularity of the void space in hard-particle packings.
    Comment: 40 pages, 15 figures
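    A minimal sketch of the basic diagnostic behind these statements (a generic estimate, not the paper's analysis): measure the variance of the number of points falling in randomly placed observation windows of growing radius. A hyperuniform pattern shows variance growing more slowly than the window volume, whereas the Poisson pattern generated below grows in proportion to it.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4000, 1.0
points = rng.random((N, 2)) * L           # Poisson (non-hyperuniform) reference pattern

def number_variance(points, R, trials=2000, L=1.0, rng=rng):
    """Variance of the number of points in a randomly placed disk of radius R."""
    centers = rng.random((trials, 2)) * L
    counts = np.empty(trials)
    for i, c in enumerate(centers):
        d = points - c
        d -= L * np.round(d / L)          # periodic boundary conditions
        counts[i] = np.count_nonzero(np.einsum('ij,ij->i', d, d) < R * R)
    return counts.var()

for R in (0.02, 0.04, 0.08):
    print(f"R = {R:.2f}   sigma^2(R) = {number_variance(points, R):.2f}")
# For the Poisson pattern the printed values scale roughly as R^2 (window area);
# a hyperuniform pattern would grow more slowly, e.g. logarithmically times R.
```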

    On-Line Learning of Linear Dynamical Systems: Exponential Forgetting in Kalman Filters

    Full text link
    The Kalman filter is a key tool for time-series forecasting and analysis. We show that the dependence of the Kalman filter's prediction on the past decays exponentially whenever the process noise is non-degenerate. Therefore, the Kalman filter may be approximated by regression on a few recent observations. Surprisingly, we also show that having some process noise is essential for the exponential decay. With no process noise, it may happen that the forecast depends on all of the past uniformly, which makes forecasting more difficult. Based on this insight, we devise an on-line algorithm for improper learning of a linear dynamical system (LDS), which considers only a few of the most recent observations. We use our decay results to provide the first regret bounds w.r.t. Kalman filters in the context of learning an LDS. That is, we compare the results of our algorithm to the best, in hindsight, Kalman filter for a given signal. The algorithm is also practical: its per-update run-time is linear in the regression depth.
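    A minimal scalar illustration of the forgetting property described above (my own toy example, not the paper's algorithm): with non-degenerate process noise, the steady-state Kalman predictor is a geometrically decaying linear function of past observations, so truncating it to a small regression depth changes the prediction very little.

```python
import numpy as np

a, q, r = 0.9, 0.1, 1.0          # hypothetical state coefficient, process/observation noise
# Steady-state predictive variance from the scalar Riccati recursion.
p = 1.0
for _ in range(1000):
    k = p / (p + r)               # Kalman gain
    p = a * a * p * (1 - k) + q
k = p / (p + r)

# Unrolled predictor:  y_hat_{t+1} = sum_j  a*k*(a*(1-k))**j * y_{t-j}
coeffs = np.array([a * k * (a * (1 - k)) ** j for j in range(30)])
print("first predictor coefficients:", np.round(coeffs[:6], 4))   # geometric decay

# Simulate the system and compare the full predictor with a depth-5 truncation.
rng = np.random.default_rng(1)
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

full = np.array([coeffs @ y[t:t - 30:-1] for t in range(100, T - 1)])
trunc = np.array([coeffs[:5] @ y[t:t - 5:-1] for t in range(100, T - 1)])
print("mean |full - truncated| prediction gap:", np.abs(full - trunc).mean())
```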

    A Spectral Learning Approach to Range-Only SLAM

    Full text link
    We present a novel spectral learning algorithm for simultaneous localization and mapping (SLAM) from range data with known correspondences. This algorithm is an instance of a general spectral system identification framework, from which it inherits several desirable properties, including statistical consistency and no local optima. Compared with popular batch optimization or multiple-hypothesis tracking (MHT) methods for range-only SLAM, our spectral approach offers guaranteed low computational requirements and good tracking performance. Compared with popular extended Kalman filter (EKF) or extended information filter (EIF) approaches, and many MHT ones, our approach does not need to linearize a transition or measurement model; such linearizations can cause severe errors in EKFs and EIFs, and to a lesser extent in MHT, particularly for the highly non-Gaussian posteriors encountered in range-only SLAM. We provide a theoretical analysis of our method, including finite-sample error bounds. Finally, we demonstrate on a real-world robotic SLAM problem that our algorithm is not only theoretically justified but works well in practice: in a comparison of multiple methods, the lowest errors come from a combination of our algorithm with batch optimization, but our method alone produces nearly as good a result at far lower computational cost.
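    A minimal sketch of the generic spectral system-identification step such methods build on (not the SLAM-specific construction in the paper): stack measurement windows into a Hankel-style trajectory matrix and factor it with a single SVD, so the latent state dimension and an observability subspace are recovered by linear algebra alone, with no iterative optimization and hence no local optima. The dynamics, sensor model, and noise levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 500, 2                                       # trajectory length, true latent dimension
A = np.array([[0.95, 0.2], [-0.2, 0.95]]) * 0.98    # hypothetical stable dynamics
C = rng.normal(size=(4, d))                         # four range-like sensors

x = rng.normal(size=d)
Y = np.empty((T, 4))
for t in range(T):
    Y[t] = C @ x + 0.01 * rng.normal(size=4)        # noisy measurements
    x = A @ x + 0.01 * rng.normal(size=d)           # noisy state transition

# Block-Hankel matrix whose columns are consecutive measurement windows.
k = 5
H = np.column_stack([Y[i:i + k].ravel() for i in range(T - k)])

U, s, Vt = np.linalg.svd(H, full_matrices=False)
print("leading singular values:", np.round(s[:6], 2))
# The drop after the 2nd singular value reflects the 2-dimensional latent state;
# the leading left singular vectors span an (approximate) observability subspace.
```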

    Learning Linear Dynamical Systems via Spectral Filtering

    Full text link
    We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: we carefully overparameterize the class of LDSs by a polylogarithmic factor, in exchange for convexity of the loss functions. From this arises a polynomial-time algorithm with a near-optimal regret guarantee, and an analogous sample complexity bound for agnostic learning. Our algorithm is based on a novel filtering technique, which may be of independent interest: we convolve the time series with the eigenvectors of a certain Hankel matrix.
    Comment: Published as a conference paper at NIPS 2017
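    A minimal sketch of the filtering step. The specific Hankel matrix below, with entries Z_ij = 2/((i+j)^3 - (i+j)), is my recollection of the construction used in this line of work and should be treated as an assumption; the essential point is only that the filters are eigenvectors of a fixed, data-independent positive semidefinite Hankel matrix, and the online learner then fits a linear predictor on the resulting features.

```python
import numpy as np

T, n_filters = 128, 8
i = np.arange(1, T + 1)
# Fixed, data-independent Hankel matrix (assumed form; see note above).
Z = 2.0 / ((i[:, None] + i[None, :]) ** 3 - (i[:, None] + i[None, :]))

eigvals, eigvecs = np.linalg.eigh(Z)               # Z is symmetric positive semidefinite
filters = eigvecs[:, -n_filters:][:, ::-1]         # top eigenvectors used as filters
print("top eigenvalues:", np.round(eigvals[-n_filters:][::-1], 6))

# Feature map: project the most recent T inputs onto each filter
# (a convolution of the input series with the filter at one time step).
rng = np.random.default_rng(0)
u = rng.normal(size=1000)                          # hypothetical scalar input series
t = 500
window = u[t - T:t][::-1]                          # most recent T inputs, newest first
features = filters.T @ window                      # one feature per filter
print("feature vector:", np.round(features, 3))
```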

    Approximate Matrix Multiplication with Application to Linear Embeddings

    Full text link
    In this paper, we study the problem of approximately computing the product of two real matrices. In particular, we analyze a dimensionality-reduction-based approximation algorithm due to Sarlos [1], introducing the notion of nuclear rank as the ratio of the nuclear norm to the spectral norm. The presented bound has an improved dependence on the approximation error compared to previous approaches, whereas the subspace onto which we project the input matrices has dimension proportional to the maximum of their nuclear ranks and is independent of the input dimensions. In addition, we provide an application of this result to linear low-dimensional embeddings. Namely, we show that any Euclidean point set with bounded nuclear rank is amenable to projection onto a number of dimensions that is independent of the input dimensionality, while achieving additive error guarantees.
    Comment: 8 pages; International Symposium on Information Theory
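    A minimal sketch of the dimensionality-reduction idea being analyzed (a plain Gaussian sketch with hypothetical sizes, not necessarily the exact construction in [1]): compress the shared inner dimension with a random projection S satisfying E[S^T S] = I, so that A B is approximated by (A S^T)(S B). For matrices of low effective rank, as in the Gram-matrix example below, the error depends on that rank rather than on the ambient dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 300, 2000, 5                       # k = effective rank of A
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # approximately low-rank input
B = A.T                                      # Gram-matrix example: A @ A.T
exact = A @ B

for r in (50, 200, 800):
    S = rng.normal(size=(r, n)) / np.sqrt(r)     # random projection, E[S.T @ S] = I
    approx = (A @ S.T) @ (S @ B)                 # sketched product
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"projection dimension r = {r:4d}   relative error = {err:.3f}")
# The error shrinks roughly like sqrt(k / r), independent of the ambient
# dimension n, while the dominant cost drops from O(m^2 n) to O(r m n + r m^2).
```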