
    Efficient entanglement concentration for arbitrary less-entangled N-atom state

    A recent paper (Phys. Rev. A 86, 034305 (2012)) proposed an entanglement concentration protocol (ECP) for a less-entangled N-atom GHZ state with the help of photonic Faraday rotation. It was shown that a maximally entangled atom state can be distilled from two pairs of less-entangled atom states. In this paper, we put forward an improved ECP for an arbitrary less-entangled N-atom GHZ state that requires only one pair of less-entangled atom states, one auxiliary atom, and one auxiliary photon. Moreover, our ECP can be applied repeatedly to obtain a higher success probability. When practical operations and imperfect detection are taken into account, our protocol is more efficient. This ECP may be useful in current quantum information processing. Comment: 10 pages, 5 figures
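
    As a rough numeric illustration of why repetition helps, the Python sketch below iterates the textbook Schmidt-projection success recursion for a less-entangled pair α|00⟩ + β|11⟩. It models only the probability bookkeeping, not the paper's atom-photon Faraday-rotation optics, and the per-round formula is a generic stand-in rather than this protocol's own success probability.

        # Probability bookkeeping for a repeated entanglement concentration
        # protocol on a less-entangled pair a|00> + b|11>, a^2 + b^2 = 1.
        # Generic Schmidt-projection-style recursion, NOT the paper's
        # photonic-Faraday-rotation scheme; it only shows how repeating
        # the protocol raises the total success probability.

        def repeated_ecp_success(a2, rounds=10):
            """a2 = initial squared Schmidt coefficient alpha^2 (0 < a2 < 1)."""
            u, v = a2, 1.0 - a2                  # unnormalized squared coefficients
            total, survive = 0.0, 1.0
            for n in range(1, rounds + 1):
                p = 2.0 * u * v / (u + v) ** 2   # conditional success in round n
                total += survive * p             # unconditional contribution
                survive *= 1.0 - p
                u, v = u * u, v * v              # coefficients after a failure
                print(f"round {n}: cumulative success = {total:.6f}")
            return total

        repeated_ecp_success(0.8)   # converges to 2*min(a^2, b^2) = 0.4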

    Efficient single-photon entanglement concentration for quantum communications

    We present two protocols for single-photon entanglement concentration. With the help of a 50:50 beam splitter, a variable beam splitter, and an auxiliary photon, we can concentrate a less-entangled single-photon state into a maximally entangled single-photon state with some probability. The first protocol is implemented with linear optics and the second with the cross-Kerr nonlinearity. Neither protocol requires two pairs of entangled states shared by the two parties, which makes our protocols more economical. In particular, in the second protocol, with the help of the cross-Kerr nonlinearity, sophisticated single-photon detectors are not required. Moreover, the second protocol can be repeated to achieve a higher success probability. These advantages may make our protocols useful in long-distance quantum communication. Comment: 9 pages, 3 figures
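
    The filtering step such protocols rely on can be sketched directly: for a single photon shared across two modes, α|10⟩ + β|01⟩ with |α| > |β|, attenuating the stronger arm with a variable beam splitter of amplitude transmissivity t = β/α balances the amplitudes, heralding a maximally entangled state with probability 2|β|². The sketch below assumes this standard linear-optics filtering picture and does not model the auxiliary photon or the cross-Kerr variant.

        # Heralded balancing of a|10> + b|01> (|a| > |b|) with a variable
        # beam splitter (VBS) on the stronger arm, amplitude transmissivity
        # t = b/a. Illustrative of the filtering step only.

        import numpy as np

        def vbs_concentration(a, b):
            t = b / a                            # chosen so t*a == b
            amp_10 = a * t                       # |10> amplitude after the VBS (no loss-mode click)
            amp_01 = b
            p_success = amp_10**2 + amp_01**2    # heralded success probability = 2*b^2
            norm = np.sqrt(p_success)
            return p_success, (amp_10 / norm, amp_01 / norm)

        p, state = vbs_concentration(np.sqrt(0.8), np.sqrt(0.2))
        print(p)        # 0.4 = 2*b^2
        print(state)    # (0.7071..., 0.7071...) -> maximally entangled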

    Alternative approach to derive q-potential measures of refracted spectrally negative Lévy processes

    For a refracted Lévy process driven by a spectrally negative Lévy process, we use a different approach to derive expressions for its q-potential measures without killing. Unlike previous methods, whose derivations depend on scale functions defined only for spectrally negative Lévy processes, our approach is free of scale functions. This makes it possible to extend the results to quite general refracted Lévy processes by applying the approach presented here.
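
    Since the abstract does not fix a concrete model, the sketch below takes a Brownian motion with drift plus downward exponential jumps as an assumed spectrally negative driver and estimates the q-potential density of the refracted process by Monte Carlo. All parameters are illustrative; the paper's exact expressions are not reproduced.

        # Monte Carlo check of a q-potential measure for a refracted process
        # dU_t = dX_t - delta*1{U_t > b} dt, where X_t is Brownian motion with
        # drift plus downward Exp(eta) jumps (a concrete spectrally negative
        # example). Estimates V^q(x0, dy)/dy = int_0^inf e^{-qt} P_x0(U_t in dy) dt.

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma, delta, b, q = 1.0, 0.5, 0.4, 0.0, 0.05
        lam, eta = 0.5, 2.0                      # jump rate, Exp(eta) downward jump sizes
        dt, T, npaths, x0 = 0.01, 100.0, 4000, 0.0
        edges = np.linspace(-5.0, 5.0, 101)
        wsum = np.zeros(len(edges) - 1)

        u = np.full(npaths, x0)
        for step in range(int(T / dt)):
            t = (step + 1) * dt
            drift = mu - delta * (u > b)         # refraction above the level b
            u = u + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(npaths)
            jumps = rng.random(npaths) < lam * dt
            u = u - jumps * rng.exponential(1.0 / eta, npaths)
            hist, _ = np.histogram(u, bins=edges)
            wsum += np.exp(-q * t) * dt * hist   # e^{-qt} 1{U_t in bin} dt

        density = wsum / (npaths * np.diff(edges))   # approximate potential density from x0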

    Random gradient extrapolation for distributed and stochastic optimization

    In this paper, we consider a class of finite-sum convex optimization problems defined over a distributed multiagent network with m agents connected to a central server. In particular, the objective function consists of the average of m (m ≥ 1) smooth components associated with each network agent together with a strongly convex term. Our major contribution is to develop a new randomized incremental gradient algorithm, namely the random gradient extrapolation method (RGEM), which does not require any exact gradient evaluation even for the initial point, but can achieve the optimal O(log(1/ϵ)) complexity bound in terms of the total number of gradient evaluations of component functions to solve the finite-sum problems. Furthermore, we demonstrate that for stochastic finite-sum optimization problems, RGEM maintains the optimal O(1/ϵ) complexity (up to a certain logarithmic factor) in terms of the number of stochastic gradient computations, but attains an O(log(1/ϵ)) complexity in terms of communication rounds (each round involves only one agent). It is worth noting that the former bound is independent of the number of agents m, while the latter one depends only linearly on m, or even on √m for ill-conditioned problems. To the best of our knowledge, this is the first time that these complexity bounds have been obtained for distributed and stochastic optimization problems. Moreover, our algorithms were developed based on a novel dual perspective of Nesterov's accelerated gradient method.
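
    A minimal sketch of the randomized incremental-gradient pattern described here: one component gradient per iteration, a gradient table that starts at zero (so no exact gradient evaluation is ever needed, not even initially), and a corrected aggregate estimate. The correction term and the constant stepsize below are simple stand-ins, not the paper's extrapolation step or its optimal parameter schedule.

        # One component gradient per iteration on
        # (1/m) sum_i 0.5*(a_i @ x - b_i)^2 + (mu_reg/2)||x||^2,
        # with a gradient table and a corrected aggregate estimate.

        import numpy as np

        rng = np.random.default_rng(1)
        m, d, mu_reg, eta = 50, 10, 0.1, 0.01
        A = rng.standard_normal((m, d))
        bvec = rng.standard_normal(m)            # f_i(x) = 0.5*(a_i @ x - b_i)^2

        x = np.zeros(d)
        G = np.zeros((m, d))                     # stored component gradients; no initial evaluation
        gbar = G.mean(axis=0)

        for t in range(30000):
            i = rng.integers(m)
            gi = (A[i] @ x - bvec[i]) * A[i]     # fresh gradient of the sampled component only
            est = gbar + (gi - G[i])             # corrected aggregate gradient estimate
            x -= eta * (est + mu_reg * x)        # step on smooth average + strongly convex term
            gbar += (gi - G[i]) / m              # keep the running mean consistent with the table
            G[i] = gi

        # x now approximates the minimizer of the regularized finite sum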

    Asynchronous decentralized accelerated stochastic gradient descent

    In this work, we introduce an asynchronous decentralized accelerated stochastic gradient descent type method for decentralized stochastic optimization, in which communication and synchronization are the major bottlenecks. We establish O(1/ϵ) (resp., O(1/√ϵ)) communication complexity and O(1/ϵ²) (resp., O(1/ϵ)) sampling complexity for solving general convex (resp., strongly convex) problems.
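
    For orientation, the sketch below runs a plain synchronous gossip-SGD loop over a ring of agents, showing the communication pattern (one mixing step plus one local stochastic gradient step per round). The asynchrony and acceleration that drive the paper's improved complexities are deliberately omitted, and the quadratic local objectives are assumed for illustration.

        # Synchronous decentralized SGD over a ring: each round, every agent
        # averages with its two neighbors (gossip) and takes a noisy local
        # gradient step. Consensus forms at the minimizer of the average objective.

        import numpy as np

        rng = np.random.default_rng(2)
        m, d, eta = 6, 4, 0.1
        targets = rng.standard_normal((m, d))    # agent i privately minimizes 0.5||x - targets[i]||^2

        W = np.zeros((m, m))                     # doubly stochastic ring mixing matrix
        for i in range(m):
            W[i, i] = 0.5
            W[i, (i - 1) % m] = 0.25
            W[i, (i + 1) % m] = 0.25

        X = np.zeros((m, d))                     # row i = agent i's local iterate
        for t in range(2000):
            G = (X - targets) + 0.1 * rng.standard_normal((m, d))   # noisy local gradients
            X = W @ X - eta * G                  # gossip with neighbors, then local SGD step

        print(X.mean(axis=0) - targets.mean(axis=0))   # ~0: consensus near the global minimizer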

    Pricing variable annuities with multi-layer expense strategy

    We study the problem of pricing variable annuities under a multi-layer expense strategy, in which the insurer charges fees from the policyholder's account only when the account value lies in some pre-specified disjoint intervals; on each such interval the fee rate is fixed and may differ from the rates on the other intervals. We model the underlying fund of the variable annuity by a hyper-exponential jump diffusion process. Theoretically, for a jump diffusion process with hyper-exponential jumps and a three-valued drift, we obtain expressions for the Laplace transforms of its distribution and of its occupation times, i.e., the time it spends below or above a pre-specified level. With these results, we derive closed-form formulas to determine the fair fee rate. Moreover, the total fees that will be collected by the insurer and the total time of deducting fees are also computed. In addition, some numerical examples are presented to illustrate our results.
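
    The layered fee mechanics can be illustrated numerically. The sketch below simulates the fund as a Kou-type double-exponential jump diffusion (the simplest hyper-exponential case) and accrues fees at the rate of whichever layer contains the current account value, estimating the expected present value of collected fees. The fair-rate equation and the paper's closed-form Laplace-transform formulas are not reproduced, and all parameters are made up for illustration.

        # Expected PV of fees under a two-layer fee schedule: the fee rate
        # drops once the account value exceeds 100. Fund is risk-neutral
        # with continuous fee deduction at the layer's rate.

        import numpy as np

        rng = np.random.default_rng(3)
        r, sigma, T, dt = 0.03, 0.2, 10.0, 1.0 / 52
        lam, p_up, eta_up, eta_dn = 0.3, 0.4, 8.0, 6.0
        layers = [(0.0, 100.0, 0.02), (100.0, np.inf, 0.005)]   # (low, high, fee rate)
        F0, npaths, nsteps = 100.0, 2000, int(T / dt)
        comp = lam * (p_up * eta_up / (eta_up - 1.0)
                      + (1.0 - p_up) * eta_dn / (eta_dn + 1.0) - 1.0)  # jump compensator

        fees_pv = 0.0
        for _ in range(npaths):
            F, disc = F0, 1.0
            for _ in range(nsteps):
                c = next(rate for lo, hi, rate in layers if lo <= F < hi)
                z = rng.standard_normal()
                F *= np.exp((r - comp - c - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
                if rng.random() < lam * dt:      # double-exponential jump
                    j = rng.exponential(1/eta_up) if rng.random() < p_up else -rng.exponential(1/eta_dn)
                    F *= np.exp(j)
                disc *= np.exp(-r * dt)
                fees_pv += disc * c * F * dt     # continuously deducted fee, per-step approximation

        print(fees_pv / npaths)                  # expected PV of fees under the layered schedule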

    Algorithms for stochastic optimization with functional or expectation constraints

    This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a functional or expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with constraints on decision variables. We show that this algorithm exhibits the optimal O(1/ϵ²) rate of convergence, in terms of both optimality gap and constraint violation, when the objective and constraint functions are generally convex, where ϵ denotes the optimality gap and infeasibility. Moreover, we show that this rate of convergence can be improved to O(1/ϵ) if the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation where the constraint is defined over problem parameters, and show that it exhibits a similar optimal rate of convergence. It is worth noting that CSA and CSPA are primal methods that require neither iterations in the dual space nor estimates of the size of the dual variables. To the best of our knowledge, this is the first time such optimal SA methods for solving functional or expectation constrained stochastic optimization have been presented in the literature.
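
    A minimal sketch of the cooperative-SA pattern for the decision-variable case: at each iteration, take a stochastic objective step if the constraint looks nearly satisfied and a constraint-descent step otherwise, then average only the objective iterates. The toy problem and the 1/√t tolerance and stepsize schedules are assumptions; the paper's CSA uses tuned schedules, and CSPA additionally covers parameter constraints.

        # Cooperative-SA-style loop for min E[f(x, xi)] s.t. g(x) <= 0,
        # alternating between stochastic objective steps and constraint
        # descent, averaging only the "objective" iterates.

        import numpy as np

        rng = np.random.default_rng(4)
        d, N = 5, 50000
        target = np.full(d, 2.0)   # minimize E 0.5||x - (target + xi)||^2  s.t.  ||x||^2 - 1 <= 0

        x = np.zeros(d)
        acc, good = np.zeros(d), 0
        for t in range(1, N + 1):
            tol = 1.0 / np.sqrt(t)               # constraint tolerance eta_t (assumed schedule)
            step = 0.5 / np.sqrt(t)              # stepsize gamma_t (assumed schedule)
            if x @ x - 1.0 <= tol:
                xi = rng.standard_normal(d)
                grad = x - (target + xi)         # stochastic gradient of the objective
                acc += x
                good += 1                        # only near-feasible iterates enter the output
            else:
                grad = 2.0 * x                   # gradient of the constraint g(x) = ||x||^2 - 1
            x = x - step * grad

        xbar = acc / good
        print(xbar, xbar @ xbar)   # ~ target/||target|| on the unit sphere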

    Occupation times of generalized Ornstein-Uhlenbeck processes with two-sided exponential jumps

    For an Ornstein-Uhlenbeck process driven by a double exponential jump diffusion process, we obtain formulas for the joint Laplace transform of the process and its occupation times. The approach used is noteworthy in that it can be extended to investigate the occupation times of an Ornstein-Uhlenbeck process driven by a more general Lévy process.
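
    The transform in question is easy to check by simulation. The sketch below estimates E[exp(−uX_T − λA_T)] by Monte Carlo, where X is an Ornstein-Uhlenbeck process driven by an assumed double-exponential jump diffusion and A_T is its occupation time below a level up to time T; all parameters are illustrative.

        # Joint Laplace transform E[exp(-u*X_T - lam*A_T)] for an OU process
        # with two-sided exponential jumps, where A_T is the occupation time
        # of (-inf, level) up to T, estimated by Euler simulation.

        import numpy as np

        rng = np.random.default_rng(5)
        theta, sigma = 1.0, 0.3                        # mean reversion, diffusion coefficient
        lam_j, p_up, eta1, eta2 = 0.5, 0.5, 4.0, 4.0   # jump rate, up-probability, Exp rates
        T, dt, npaths = 5.0, 0.01, 20000
        level, u, lam = 0.0, 1.0, 0.8                  # occupation level, transform arguments

        x = np.zeros(npaths)
        occ = np.zeros(npaths)
        for _ in range(int(T / dt)):
            x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(npaths)
            jump = rng.random(npaths) < lam_j * dt
            up = rng.random(npaths) < p_up
            size = np.where(up, rng.exponential(1/eta1, npaths), -rng.exponential(1/eta2, npaths))
            x += jump * size
            occ += (x < level) * dt                    # accumulate occupation time below the level

        print(np.mean(np.exp(-u * x - lam * occ)))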

    Occupation times of refracted Lévy processes with jumps having rational Laplace transforms

    We investigate a refracted Lévy process driven by a jump diffusion process whose jumps have rational Laplace transforms. For such a stochastic process, formulas for the Laplace transform of its occupation times are deduced. To derive the main results, some modifications to our previous approach have been made. In addition, we obtain a very interesting identity, which is conjectured to hold for general refracted Lévy processes.

    Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization

    In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal O(1/ϵ⁴) rate of convergence in terms of the total number of required scenarios when applied to a three-stage stochastic optimization problem. We further show that this rate of convergence can be improved to O(1/ϵ²) when the objective function is strongly convex. We also discuss variants of DSA for solving more general multi-stage stochastic optimization problems with the number of stages T > 3. The developed DSA algorithms only need to go through the scenario tree once in order to compute an ϵ-solution of the multi-stage stochastic optimization problem. As a result, the memory required by DSA grows only linearly with the number of stages. To the best of our knowledge, this is the first time that stochastic approximation type methods have been generalized to multi-stage stochastic optimization with T ≥ 3.
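
    The single-pass, nested structure can be conveyed with a two-stage unconstrained toy: each outer iteration samples one fresh stage-2 scenario, runs a few inner SA steps on the stage-2 problem, and plugs the inexact solution into the stage-1 stochastic gradient. The quadratic model, the inner iteration count, and the stepsizes below are all assumptions; the paper's DSA handles conic constraints and T ≥ 3 stages with derived stepsize policies.

        # Two-stage nested stochastic approximation:
        # stage 1: min_x 0.5||x||^2 + E[v(x, xi)],
        # stage 2: v(x, xi) = min_y 0.5||y - (x + xi)||^2 + 0.5||y||^2.
        # Closed form: x* = -E[xi]/3, used only to check the sketch.

        import numpy as np

        rng = np.random.default_rng(6)
        d, mu_xi = 3, np.ones(3)

        x = np.zeros(d)
        for t in range(1, 5001):
            xi = mu_xi + rng.standard_normal(d)  # one fresh scenario: a single pass over the tree
            y = np.zeros(d)
            for k in range(1, 21):               # inner SA on the stage-2 problem
                y -= ((y - (x + xi)) + y) / (k + 1)
            gx = x + ((x + xi) - y)              # stage-1 gradient with inexact stage-2 solution
            x -= 2.0 / (t + 10) * gx             # diminishing outer stepsize (assumed)

        print(x, -mu_xi / 3)                     # x should be close to -E[xi]/3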