
    Quantitative Anderson localization of Schrödinger eigenstates under disorder potentials

    This paper concerns spectral properties of linear Schrödinger operators under oscillatory high-amplitude potentials on bounded domains. Depending on the degree of disorder, we prove the existence of spectral gaps amongst the lowermost eigenvalues and the emergence of exponentially localized states. We quantify the rate of decay in terms of geometric parameters that characterize the potential. The proofs are based on the convergence theory of iterative solvers for eigenvalue problems and their optimal local preconditioning by domain decomposition.
    Comment: accepted for publication in M3A
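As a sketch of the setting described above (all symbols are ours, not taken from the paper), one may think of the Dirichlet eigenvalue problem for a Schrödinger operator with a high-amplitude disorder potential, and of localization as exponential decay of eigenstates away from some center:

```latex
% Illustrative formulation (notation ours): eigenvalue problem on a
% bounded domain D with a disordered, high-amplitude potential V >= 0.
-\Delta u + V u = \lambda u \quad \text{in } D, \qquad u = 0 \ \text{on } \partial D.
% Exponential localization of a lowermost eigenstate u around some x_0:
|u(x)| \le C\, e^{-\gamma\, \mathrm{dist}(x, x_0)},
% with a decay rate \gamma depending on geometric parameters of V.
```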

    Uniqueness of gradient Gibbs measures with disorder

    We consider, in the uniformly strictly convex potential regime, two versions of random gradient models with disorder. In model (A) the interface feels a bulk term of random fields, while in model (B) the disorder enters through the potential acting on the gradients. We assume a general distribution on the disorder with uniformly bounded finite second moments. It is well known that for gradient models without disorder there are no infinite-volume Gibbs measures in dimension d = 2, while there are shift-invariant gradient Gibbs measures describing an infinite-volume distribution for the gradients of the field, as was shown by Funaki and Spohn. Van Enter and Kuelske proved in 2008 that adding a disorder term as in model (A) prohibits the existence of such gradient Gibbs measures for general interaction potentials in d = 2. In Cotar and Kuelske (2012) we proved the existence of shift-covariant random gradient Gibbs measures for model (A) when d ≥ 3, the disorder is i.i.d. and has mean zero, and for model (B) when d ≥ 1 and the disorder has stationary distribution. In the present paper, we prove existence and uniqueness of shift-covariant random gradient Gibbs measures with a given expected tilt u ∈ R^d and with the corresponding annealed measure being ergodic: for model (A) when d ≥ 3 and the disordered random fields are i.i.d. and symmetrically distributed, and for model (B) when d ≥ 1 and for any stationary disorder dependence structure. For both models we also compute, for any gradient Gibbs measure constructed as in Cotar and Kuelske (2012), when the disorder is i.i.d. and its distribution satisfies a Poincaré inequality assumption, the optimal decay of covariances with respect to the averaged-over-the-disorder gradient Gibbs measure.
    Comment: 39 pages. arXiv admin note: text overlap with arXiv:1012.437
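Schematically, the two disorder mechanisms can be written as follows (a sketch in generic notation; the paper's precise definitions may differ):

```latex
% Model (A): i.i.d. random fields (\eta_x) enter as a bulk term
% coupled linearly to the interface heights \varphi:
H^\eta_\Lambda(\varphi) = \sum_{\langle x,y \rangle \subset \Lambda} V(\varphi_x - \varphi_y)
                        \;-\; \sum_{x \in \Lambda} \eta_x\, \varphi_x .
% Model (B): the disorder enters through the gradient interaction itself,
% via bond-indexed random potentials:
H^\eta_\Lambda(\varphi) = \sum_{\langle x,y \rangle \subset \Lambda} V^{\eta_{xy}}(\varphi_x - \varphi_y).
```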

    On the Wiener disorder problem

    In the Wiener disorder problem, the drift of a Wiener process changes suddenly at some unknown and unobservable disorder time. The objective is to detect this change as quickly as possible after it happens. Earlier work on the Bayesian formulation of this problem brings optimal (or asymptotically optimal) detection rules assuming that the prior distribution of the change time is given at time zero, and additional information is received by observing the Wiener process only. Here, we consider a different information structure where possible causes of this disorder are observed. More precisely, we assume that we also observe an arrival/counting process representing external shocks. The disorder happens because of these shocks, and the change time coincides with one of the arrival times. Such a formulation arises, for example, from detecting a change in financial data caused by major financial events, or detecting damages in structures caused by earthquakes. In this paper, we formulate the problem in a Bayesian framework assuming that those observable shocks form a Poisson process. We present an optimal detection rule that minimizes a linear Bayes risk, which includes the expected detection delay and the probability of early false alarms. We also give the solution of the "variational formulation" where the objective is to minimize the detection delay over all stopping rules for which the false alarm probability does not exceed a given constant.
    Comment: Published at http://dx.doi.org/10.1214/09-AAP655 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
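In classical Shiryaev-style notation (ours, for orientation), the two optimization criteria mentioned above read:

```latex
% Linear Bayes risk, minimized over stopping times \tau of the
% observation filtration, with disorder time \theta and delay cost c > 0:
R(\tau) = \mathbb{P}(\tau < \theta) \;+\; c\, \mathbb{E}\big[(\tau - \theta)^+\big].
% Variational formulation: minimize expected delay subject to a bound
% \alpha on the false alarm probability:
\inf_{\tau}\ \mathbb{E}\big[(\tau - \theta)^+\big]
\quad \text{subject to} \quad \mathbb{P}(\tau < \theta) \le \alpha .
```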

    Adaptive Poisson disorder problem

    We study the quickest detection problem of a sudden change in the arrival rate of a Poisson process from a known value to an unknown and unobservable value at an unknown and unobservable disorder time. Our objective is to design an alarm time which is adapted to the history of the arrival process and detects the disorder time as soon as possible. In previous solvable versions of the Poisson disorder problem, the arrival rate after the disorder has been assumed to be a known constant. In reality, however, we may at most have some prior information about the likely values of the new arrival rate before the disorder actually happens, and insufficient estimates of the new rate after the disorder happens. Consequently, we assume in this paper that the new arrival rate after the disorder is a random variable. The detection problem is shown to admit a finite-dimensional Markovian sufficient statistic if the new rate has a discrete distribution with finitely many atoms. Furthermore, the detection problem is cast as a discounted optimal stopping problem with running cost for a finite-dimensional piecewise-deterministic Markov process. This optimal stopping problem is studied in detail in the special case where the new arrival rate has a Bernoulli distribution. This is a nontrivial optimal stopping problem for a two-dimensional piecewise-deterministic Markov process driven by the same point process. Using a suitable single-jump operator, we solve it fully, describe the analytic properties of the value function and the stopping region, and present methods for their numerical calculation.
We provide a concrete example where the value function does not satisfy the smooth-fit principle on a proper subset of the connected, continuously differentiable optimal stopping boundary, whereas it does on the complement of this set.
    Comment: Published at http://dx.doi.org/10.1214/105051606000000312 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
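As a toy illustration of detecting a rate change in a Poisson arrival stream, the sketch below simulates arrivals whose rate jumps at a disorder time and runs a classical CUSUM rule that assumes a known post-change rate. This is deliberately not the paper's Bayesian rule (there the new rate is random and the problem is solved via a finite-dimensional Markovian sufficient statistic); all function names and parameter values here are our own.

```python
import math
import random

def simulate_arrivals(lam0, lam1, theta, horizon, rng):
    """Arrival times of a Poisson process whose rate jumps from lam0 to
    lam1 at the disorder time theta, simulated by thinning at the
    dominating rate max(lam0, lam1)."""
    lam_max = max(lam0, lam1)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t >= horizon:
            return arrivals
        rate = lam0 if t < theta else lam1
        if rng.random() < rate / lam_max:
            arrivals.append(t)

def cusum_detect(arrivals, lam0, lam1, threshold):
    """Classical CUSUM rule for a KNOWN post-change rate lam1: the
    log-likelihood ratio drifts at -(lam1 - lam0) between arrivals and
    jumps by log(lam1 / lam0) at each arrival; raise an alarm when it
    exceeds its running minimum by `threshold`. Returns the alarm time,
    or None if no alarm is raised on this sample path."""
    s, s_min, prev = 0.0, 0.0, 0.0
    for t in arrivals:
        s -= (lam1 - lam0) * (t - prev)   # continuous drift between arrivals
        s += math.log(lam1 / lam0)        # jump contribution at an arrival
        s_min = min(s_min, s)
        prev = t
        if s - s_min > threshold:
            return t
    return None

rng = random.Random(1)
arrivals = simulate_arrivals(lam0=1.0, lam1=10.0, theta=10.0, horizon=30.0, rng=rng)
alarm = cusum_detect(arrivals, lam0=1.0, lam1=10.0, threshold=4.0)
```

With a tenfold rate increase the CUSUM statistic gains roughly log(10) per post-change arrival net of drift, so an alarm typically fires shortly after the disorder time; replacing the known lam1 with a prior over several candidate rates is where the paper's finite-dimensional Bayesian statistic comes in.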

    Multisource Bayesian sequential change detection

    Suppose that local characteristics of several independent compound Poisson and Wiener processes change suddenly and simultaneously at some unobservable disorder time. The problem is to detect the disorder time as quickly as possible after it happens and minimize the rate of false alarms at the same time. These problems arise, for example, from managing product quality in manufacturing systems and preventing the spread of infectious diseases. The promptness and accuracy of detection rules improve greatly if multiple independent information sources are available. Earlier work on sequential change detection in continuous time does not provide optimal rules for situations in which several marked count data and continuously changing signals are simultaneously observable. In this paper, optimal Bayesian sequential detection rules are developed for such problems when the marked count data is in the form of independent compound Poisson processes, and the continuously changing signals form a multi-dimensional Wiener process. An auxiliary optimal stopping problem for a jump-diffusion process is solved by transforming it first into a sequence of optimal stopping problems for a pure diffusion by means of a jump operator. This method is new and can be very useful in other applications as well, because it allows the use of the powerful optimal stopping theory for diffusions.
    Comment: Published at http://dx.doi.org/10.1214/07-AAP463 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
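The multisource observation structure can be sketched as follows (generic notation, not the paper's):

```latex
% n independent compound Poisson sources X^i whose rates and mark
% distributions all change at the common unobservable disorder time \theta:
X^i:\ (\lambda^i_0, \nu^i_0) \;\longrightarrow\; (\lambda^i_1, \nu^i_1),
\qquad i = 1, \dots, n,
% and a multi-dimensional Wiener signal Y that acquires a drift at \theta:
dY_t = \mu\, \mathbf{1}_{\{t \ge \theta\}}\, dt + \sigma\, dB_t .
```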