
    Motion of Inertial Observers Through Negative Energy

Recent research has indicated that negative energy fluxes due to quantum coherence effects obey uncertainty-principle-type inequalities of the form $|\Delta E|\,\Delta\tau \lesssim 1$. Here $|\Delta E|$ is the magnitude of the negative energy which is transmitted on a timescale $\Delta\tau$. Our main focus in this paper is on negative energy fluxes which are produced by the motion of observers through static negative energy regions. We find that although a quantum inequality appears to be satisfied for radially moving geodesic observers in two- and four-dimensional black hole spacetimes, an observer orbiting close to a black hole will see a constant negative energy flux. In addition, we show that inertial observers moving slowly through the Casimir vacuum can achieve arbitrarily large violations of the inequality. It seems likely that, in general, these types of negative energy fluxes are not constrained by inequalities on the magnitude and duration of the flux. We construct a model of a non-gravitational stress-energy detector, which is rapidly switched on and off, and discuss the strengths and weaknesses of such a detector.
Comment: 18pp + 1 figure (not included, available on request), in LaTeX, TUPT-93-

    Low Complexity Regularization of Linear Inverse Problems

Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
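
    As a concrete illustration of the forward-backward proximal splitting scheme mentioned in item (iii), here is a minimal sketch for $\ell^1$-regularized least squares, where the proximal map of the $\ell^1$ norm is entrywise soft-thresholding. The operator A, data y and weight lam below are illustrative placeholders, not objects taken from the chapter.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=1000):
    """Forward-backward splitting (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2                # step size in (0, 2/||A||^2)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                           # forward (gradient) step on the smooth term
        x = soft_threshold(x - gamma * grad, gamma * lam)  # backward (proximal) step on the l1 term
    return x

# Toy usage: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true
x_hat = forward_backward_l1(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))                 # ideally recovers the support {3, 17, 42}
```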

    Unruh--DeWitt detectors in spherically symmetric dynamical space-times

In the present paper, Unruh--DeWitt detectors are used to investigate the issue of the temperature associated with spherically symmetric dynamical space-times. First, we review the semi-classical tunneling method; then we introduce the Unruh--DeWitt detector approach. We show that for the generic static black hole case and the FRW de Sitter case, making use of peculiar Kodama trajectories, semiclassical and quantum field theoretic techniques give the same standard and well-known thermal interpretation, with an associated temperature corrected by appropriate Tolman factors. For a FRW space-time interpolating between de Sitter space and the Einstein--de Sitter universe (a more realistic situation in the framework of $\Lambda$CDM cosmologies), we show that the detector response splits into a de Sitter contribution plus a fluctuating term containing no trace of Boltzmann-like factors, but rather describing the way thermal equilibrium is reached in the late-time limit. As a consequence, and unlike the case of black holes, the identification of the dynamical surface gravity of a cosmological trapping horizon as an effective temperature parameter seems lost, at least for our co-moving simplified detectors. The possibility remains that a detector performing a proper motion along a Kodama trajectory may register something more, in which case the horizon surface gravity would more likely be associated with vacuum correlations than with particle creation.
Comment: 19 pages, to appear in IJTP. arXiv admin note: substantial text overlap with arXiv:1101.525
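
    For orientation, the detector response and the Tolman correction referred to above are usually written as follows (these are the textbook expressions, not formulas quoted from the paper):
\[
\mathcal{F}(E) \;=\; \int d\tau \int d\tau'\, e^{-iE(\tau-\tau')}\, W\big(x(\tau),x(\tau')\big),
\qquad
T_{\rm loc}(r) \;=\; \frac{T_\infty}{\sqrt{-g_{tt}(r)}},
\]
    where $W$ is the Wightman function of the field evaluated along the detector trajectory and, for a static black hole, $T_\infty = \kappa/2\pi$ is the Hawking temperature set by the horizon surface gravity $\kappa$.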

    A new duality theory for mathematical programming


    Descent methods with linesearch in the presence of perturbations

We consider the class of descent algorithms for unconstrained optimization with an Armijo-type stepsize rule in the case when the gradient of the objective function is computed inexactly. An important novel feature in our theoretical analysis is that perturbations associated with the gradient are not assumed to be relatively small or to tend to zero in the limit (as a practical matter, we expect them to be reasonably small, so that a meaningful approximate solution can be obtained). This feature makes our analysis applicable to various difficult problems encountered in practice. We propose a modified Armijo-type rule for computing the stepsize which guarantees that the algorithm obtains a reasonable approximate solution. Furthermore, if perturbations are small relative to the size of the gradient, then our algorithm retains all the standard convergence properties of descent methods.
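
    As a rough illustration of the setting (not the paper's modified rule, whose safeguard against large perturbations is precisely the contribution of the paper), the sketch below runs plain steepest descent with Armijo backtracking on a perturbed gradient oracle; the test function and noise level are made up for the example.

```python
import numpy as np

def armijo_descent_inexact(f, grad_inexact, x0, sigma=1e-4, beta=0.5,
                           t0=1.0, max_iter=200):
    """Steepest descent with Armijo backtracking using an inexact gradient oracle.

    Note: this is the standard Armijo rule applied to the perturbed gradient;
    the paper's modified rule differs in how it handles large perturbations.
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_inexact(x)                  # true gradient plus an unknown perturbation
        t = t0
        # Backtrack until sufficient decrease, measured with the inexact gradient.
        while f(x - t * g) > f(x) - sigma * t * np.dot(g, g):
            t *= beta
            if t < 1e-12:                    # stepsize collapsed: perturbation dominates
                return x
        x = x - t * g
    return x

# Toy usage: quadratic objective with a noisy gradient oracle.
rng = np.random.default_rng(1)
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad_inexact = lambda x: Q @ x + 1e-3 * rng.standard_normal(2)   # bounded noise
print(armijo_descent_inexact(f, grad_inexact, np.array([5.0, -3.0])))
```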

    Maximal monotonicity, conjugation and the duality product

Recently, the authors studied the connection between each maximal monotone operator T and a family H(T) of convex functions. Each member of this family characterizes the operator and satisfies two particular inequalities. The aim of this paper is to establish the converse of the latter fact, namely that every convex function satisfying those two particular inequalities is associated to a unique maximal monotone operator.
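
    In the notation commonly used for this circle of results, with $\langle\cdot,\cdot\rangle$ the duality product and $h^*$ the Fenchel conjugate, the two inequalities in question are usually stated as
\[
h(x,x^*) \;\ge\; \langle x, x^*\rangle
\qquad\text{and}\qquad
h^*(x^*,x) \;\ge\; \langle x, x^*\rangle
\qquad\text{for all }(x,x^*)\in X\times X^*,
\]
    with the operator recovered as the set of pairs where the first inequality holds with equality; the precise formulation used in the paper may differ in detail.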

    A robust Kantorovich’s theorem on the inexact Newton method with relative residual error tolerance

We prove that under semi-local assumptions, the inexact Newton method with a fixed relative residual error tolerance converges Q-linearly to a zero of the nonlinear operator under consideration. Using this result we show that the Newton method for minimizing a self-concordant function or finding a zero of an analytic function can be implemented with a fixed relative residual error tolerance. In the absence of errors, our analysis retrieves the classical Kantorovich theorem on the Newton method.
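
    A generic sketch of the fixed relative residual criterion (a step s is accepted as soon as $\|F(x)+F'(x)s\| \le \theta\,\|F(x)\|$) is given below; the inner solver, the test system and the parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inexact_newton(F, J, x0, theta=0.1, tol=1e-10, max_iter=50, inner_iter=500):
    """Inexact Newton: accept any step s with ||F(x) + J(x) s|| <= theta * ||F(x)||.

    The inner solve is a crude gradient iteration on 0.5*||J s + F||^2, stopped
    as soon as the fixed relative residual tolerance theta is met.
    """
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        Jx = J(x)
        s = np.zeros_like(x)
        alpha = 1.0 / np.linalg.norm(Jx, 2) ** 2          # safe step for the inner iteration
        for _ in range(inner_iter):
            r = Jx @ s + Fx                               # current linear residual
            if np.linalg.norm(r) <= theta * np.linalg.norm(Fx):
                break                                     # relative residual tolerance met
            s = s - alpha * (Jx.T @ r)                    # gradient step on the least-squares model
        x = x + s
    return x

# Toy usage: a small nonlinear system F(x) = 0.
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**3])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, -3*x[1]**2]])
print(inexact_newton(F, J, np.array([0.8, 0.8])))
```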

    Kantorovich's Theorem on Newton's Method in Riemannian Manifolds

Newton's method for finding a zero of a vectorial function is a powerful theoretical and practical tool. One of the drawbacks of the classical convergence proof is that closeness to a non-singular zero must be assumed a priori. Kantorovich's theorem on Newton's method has the advantage of proving existence of a solution and convergence to it under very mild conditions. This theorem holds in Banach spaces. Newton's method has been extended to the problem of finding a singularity of a vector field on a Riemannian manifold. We extend Kantorovich's theorem on Newton's method to Riemannian manifolds.
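
    For reference, one common (affine-invariant) statement of the Banach-space hypotheses that the paper extends reads roughly as follows; the Riemannian version proved in the paper replaces norms and differences by the corresponding manifold notions:
\[
\|F'(x_0)^{-1}F(x_0)\| \le \eta,
\qquad
\|F'(x_0)^{-1}\big(F'(x)-F'(y)\big)\| \le K\,\|x-y\|,
\qquad
2K\eta \le 1,
\]
    in which case $F$ has a zero in the closed ball of radius $t_* = \big(1-\sqrt{1-2K\eta}\,\big)/K$ around $x_0$ and the Newton iterates starting at $x_0$ converge to it.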

    Forcing strong convergence of proximal point iterations in a Hilbert space

This paper concerns convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible, since it amounts, at most, to solving a linear system of two equations in two unknowns.
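
    The mechanism described above (a proximal step followed by projecting the anchor point onto the intersection of two halfspaces containing the solution set) can be sketched for the simple case $T(x)=Qx-b$ on $\mathbb{R}^n$ with exact proximal subproblems; the halfspace definitions below follow the usual form of such hybrid schemes and are an assumption-laden illustration rather than a transcription of the paper's algorithm.

```python
import numpy as np

def project_two_halfspaces(x0, a1, b1, a2, b2):
    """Project x0 onto {z: a1.z <= b1} intersected with {z: a2.z <= b2} (assumed nonempty)."""
    inside = lambda x, a, b: a @ x <= b + 1e-12
    proj = lambda x, a, b: x - max(0.0, a @ x - b) / (a @ a) * a
    if inside(x0, a1, b1) and inside(x0, a2, b2):
        return x0
    p1 = proj(x0, a1, b1)
    if inside(p1, a2, b2):
        return p1
    p2 = proj(x0, a2, b2)
    if inside(p2, a1, b1):
        return p2
    # Both constraints active: the projection reduces to a 2x2 linear system.
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    c = np.array([a1 @ x0 - b1, a2 @ x0 - b2])
    mu = np.linalg.solve(G, c)
    return x0 - mu[0] * a1 - mu[1] * a2

def hybrid_prox_projection(Q, b, x0, c=1.0, n_iter=100):
    """Zero of T(x) = Qx - b (Q symmetric positive definite) via exact proximal
    steps combined with projections of the anchor x0 onto two halfspaces."""
    x = x0.copy()
    n = len(x0)
    for _ in range(n_iter):
        y = np.linalg.solve(np.eye(n) + c * Q, x + c * b)   # prox step: y = (I + cT)^{-1} x
        v = Q @ y - b                                       # v lies in T(y)
        if np.linalg.norm(v) < 1e-14:
            return y                                        # y is (numerically) a zero of T
        # H = {z: <z - y, v> <= 0},  W = {z: <z - x, x0 - x> <= 0}
        x = project_two_halfspaces(x0, v, v @ y, x0 - x, (x0 - x) @ x)
    return x

# Toy usage on a strongly monotone affine operator.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5)); Q = M @ M.T + np.eye(5)
b = rng.standard_normal(5)
x_hat = hybrid_prox_projection(Q, b, np.zeros(5))
print(np.linalg.norm(Q @ x_hat - b))    # residual of the inclusion, driven toward zero
```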

    Error bounds for proximal point subproblems and associated inexact proximal point algorithms
