    A variational approach to modeling slow processes in stochastic dynamical systems

    The slow processes of metastable stochastic dynamical systems are difficult to access by direct numerical simulation due to the sampling problem. Here, we suggest an approach for modeling the slow parts of Markov processes by approximating the dominant eigenfunctions and eigenvalues of the propagator. To this end, a variational principle is derived that is based on the maximization of a Rayleigh coefficient. It is shown that this Rayleigh coefficient can be estimated from statistical observables that can be obtained from short distributed simulations starting from different parts of state space. The approach forms a basis for the development of adaptive and efficient computational algorithms for simulating and analyzing metastable Markov processes while avoiding the sampling problem. Since any stochastic process with finite memory can be transformed into a Markov process, the approach is applicable to a wide range of processes relevant for modeling complex real-world phenomena.
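The variational estimate described above can be sketched numerically: expanding trial eigenfunctions in a basis reduces the maximization of the Rayleigh coefficient to a generalized eigenvalue problem between time-lagged correlation matrices. The double-well potential, the Gaussian basis, and all parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import scipy.linalg as la

# Hypothetical 1D double-well diffusion, V(x) = (x^2 - 1)^2, simulated
# with Euler-Maruyama (illustrative stand-in for distributed short runs).
rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 100_000
x = np.empty(n_steps)
x[0] = -1.0
for t in range(n_steps - 1):
    drift = -4.0 * x[t] * (x[t] ** 2 - 1.0)
    x[t + 1] = x[t] + drift * dt + np.sqrt(2 * dt) * rng.standard_normal()

# Basis functions chi_i(x): Gaussians spread across state space.
centers = np.linspace(-1.5, 1.5, 6)
chi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.5 ** 2))

# Time-lagged correlation matrices; maximizing the Rayleigh coefficient
# <v, C(tau) v> / <v, C(0) v> over the basis span is equivalent to the
# generalized eigenvalue problem C(tau) v = lambda C(0) v.
lag = 500                                            # tau = 0.5
C0 = chi[:-lag].T @ chi[:-lag] / (n_steps - lag)
Ct = chi[:-lag].T @ chi[lag:] / (n_steps - lag)
Ct = 0.5 * (Ct + Ct.T)                               # symmetrize (reversibility)
C0 += 1e-10 * np.trace(C0) * np.eye(len(centers))    # regularize C0

evals = la.eigh(Ct, C0, eigvals_only=True)[::-1]     # descending eigenvalues
print(evals[:3])  # leading eigenvalue ~1 (stationarity), then slow relaxation
```

The eigenvalues approximate those of the propagator from below, so enlarging the basis can only improve the estimate; this is what makes the principle variational.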

    B-spline quasi-interpolant representations and sampling recovery of functions with mixed smoothness

    Let $\xi = \{x^j\}_{j=1}^n$ be a grid of $n$ points in the $d$-cube $\mathbb{I}^d := [0,1]^d$, and $\Phi = \{\varphi_j\}_{j=1}^n$ a family of $n$ functions on $\mathbb{I}^d$. We define the linear sampling algorithm $L_n(\Phi,\xi,\cdot)$ for an approximate recovery of a continuous function $f$ on $\mathbb{I}^d$ from the sampled values $f(x^1),\dots,f(x^n)$ by $L_n(\Phi,\xi,f) := \sum_{j=1}^n f(x^j)\varphi_j$. For the Besov class $B^\alpha_{p,\theta}$ of mixed smoothness $\alpha$ (defined as the unit ball of the Besov space), to study the optimality of $L_n(\Phi,\xi,\cdot)$ in $L_q(\mathbb{I}^d)$ we use the quantity $r_n(B^\alpha_{p,\theta})_q := \inf_{\Phi,\xi} \sup_{f \in B^\alpha_{p,\theta}} \|f - L_n(\Phi,\xi,f)\|_q$, where the infimum is taken over all grids $\xi = \{x^j\}_{j=1}^n$ and all families $\Phi = \{\varphi_j\}_{j=1}^n$ in $L_q(\mathbb{I}^d)$. We explicitly construct linear sampling algorithms $L_n(\Phi,\xi,\cdot)$ on the grid $\xi = G^d(m) := \{(2^{-k_1}s_1,\dots,2^{-k_d}s_d) \in \mathbb{I}^d : k_1 + \dots + k_d \le m\}$, with $\Phi$ a family of linear combinations of mixed B-splines, which are tensor products of either integer or half-integer translated dilations of the centered B-spline of order $r$. The grid $G^d(m)$ is of size $2^m m^{d-1}$ and sparse in comparison with the generating dyadic coordinate cube grid of size $2^{dm}$. For various $0 < p, q, \theta \le \infty$ and $1/p < \alpha < r$, we prove upper bounds for the worst-case error $\sup_{f \in B^\alpha_{p,\theta}} \|f - L_n(\Phi,\xi,f)\|_q$ which coincide with the asymptotic order of $r_n(B^\alpha_{p,\theta})_q$ in some cases. A key role in constructing these linear sampling algorithms is played by a quasi-interpolant representation of functions $f \in B^\alpha_{p,\theta}$ by mixed B-spline series.
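The general form $L_n(\Phi,\xi,f) = \sum_j f(x^j)\varphi_j$ can be illustrated in the simplest setting: a uniform 1D grid with hat functions (B-splines of order 2) as $\varphi_j$, which makes $L_n$ piecewise-linear interpolation. The grid, target function, and parameters are illustrative assumptions, not the sparse grid $G^d(m)$ of the paper.

```python
import numpy as np

n = 33
xi = np.linspace(0.0, 1.0, n)               # grid {x^j} of n points in [0, 1]
f = lambda x: np.sin(2 * np.pi * x)         # continuous function to recover

def hat(x, j, grid):
    """B-spline of order 2 (hat function) centered at grid[j]."""
    h = grid[1] - grid[0]
    return np.clip(1.0 - np.abs(x - grid[j]) / h, 0.0, None)

def L_n(x):
    """Linear sampling algorithm: sum_j f(x^j) * phi_j(x)."""
    return sum(f(xi[j]) * hat(x, j, xi) for j in range(n))

# Empirical worst-case error over a fine test grid.
x_test = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(f(x_test) - L_n(x_test)))
print(f"worst-case error: {err:.2e}")
```

The algorithm is linear in the sampled values, which is exactly why its worst-case error over a function class is the natural benchmark against the optimal quantity $r_n$.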

    A Primer on Reproducing Kernel Hilbert Spaces

    Reproducing kernel Hilbert spaces are elucidated without assuming prior familiarity with Hilbert spaces. Compared with extant pedagogic material, greater care is placed on motivating the definition of reproducing kernel Hilbert spaces and explaining when and why these spaces are efficacious. The novel viewpoint is that reproducing kernel Hilbert space theory studies extrinsic geometry, associating with each geometric configuration a canonical overdetermined coordinate system. This coordinate system varies continuously with changing geometric configurations, making it well-suited for studying problems whose solutions also vary continuously with changing geometry. This primer can also serve as an introduction to infinite-dimensional linear algebra because reproducing kernel Hilbert spaces have more properties in common with Euclidean spaces than do more general Hilbert spaces.
    Comment: Revised version submitted to Foundations and Trends in Signal Processing
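A concrete instance of the canonical coordinate system mentioned above: in an RKHS, the minimum-norm interpolant of data $(x_i, y_i)$ lies in the span of the kernel sections $k(x_i, \cdot)$, with coordinates obtained from the Gram system $K\alpha = y$. The Gaussian kernel and the data below are illustrative choices, not from the primer itself.

```python
import numpy as np

def k(x, y, sigma=0.5):
    """Gaussian kernel; its sections k(x_i, .) span the interpolant."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # sample locations
y = np.sin(2 * np.pi * x)                    # sampled values

K = k(x[:, None], x[None, :])                # Gram matrix (positive definite)
alpha = np.linalg.solve(K, y)                # coordinates in the kernel basis

# By the reproducing property, evaluating f = sum_i alpha_i k(x_i, .) at t
# is the inner product <f, k(t, .)> in the RKHS.
f = lambda t: k(x, t) @ alpha
print([round(f(t), 6) for t in x])           # matches y at the data points
```

That pointwise evaluation is a continuous linear functional, represented by an inner product with a kernel section, is precisely the property that distinguishes an RKHS from a general Hilbert space.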