    Well-posedness and exponential equilibration of a volume-surface reaction-diffusion system with nonlinear boundary coupling

    We consider a model system consisting of two reaction-diffusion equations, where one species diffuses in a volume while the other species diffuses on the surface surrounding that volume. The two equations are coupled via a nonlinear reversible Robin-type boundary condition for the volume species and a matching reversible source term for the surface species. As a consequence of the coupling, the total mass of the two species is conserved. The system is motivated, for instance, by models of asymmetric stem cell division. First, we prove the existence of a unique weak solution via an iterative scheme of converging upper and lower solutions, which overcomes the difficulties posed by the nonlinear boundary terms. Second, as our main result, we show explicit exponential convergence to equilibrium via an entropy method, after deriving a suitable entropy entropy-dissipation estimate for the considered nonlinear volume-surface reaction-diffusion system. Comment: 31 pages
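
    For orientation, a minimal schematic of such a volume-surface system (with illustrative notation $u$, $v$, $d_u$, $d_v$, $k_1$, $k_2$, $\alpha$, $\beta$ chosen here, not taken from the paper) couples diffusion of $u$ in the volume $\Omega$ to diffusion of $v$ on the boundary $\Gamma = \partial\Omega$ through a reversible nonlinear Robin condition:
    \begin{align*}
        \partial_t u &= d_u\,\Delta u && \text{in } \Omega,\\
        d_u\,\partial_\nu u &= -\,k_1 u^{\alpha} + k_2 v^{\beta} && \text{on } \Gamma,\\
        \partial_t v &= d_v\,\Delta_\Gamma v + k_1 u^{\alpha} - k_2 v^{\beta} && \text{on } \Gamma.
    \end{align*}
    Integrating the first equation over $\Omega$, the third over $\Gamma$, and adding the results shows that $\int_\Omega u\,dx + \int_\Gamma v\,dS$ is constant in time, which is exactly the conserved total mass mentioned above.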

    A survey of uncertainty principles and some signal processing applications

    The goal of this paper is to review the main trends in the domain of uncertainty principles and localization, to emphasize their mutual connections, and to investigate practical consequences. The discussion is strongly oriented towards, and motivated by, signal processing problems, in which significant advances have been made recently. Relations with sparse approximation and coding problems are emphasized.
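
    As one concrete instance of the discrete uncertainty principles such a survey covers, the Donoho–Stark inequality states that any nonzero $x \in \mathbb{C}^N$ and its DFT $\hat{x}$ satisfy $|\mathrm{supp}(x)| \cdot |\mathrm{supp}(\hat{x})| \ge N$. The short check below (an illustrative example chosen here, not taken from the paper) verifies the equality case for a Dirac comb:

    import numpy as np

    # Donoho-Stark discrete uncertainty principle:
    # |supp(x)| * |supp(DFT(x))| >= N for any nonzero x of length N.
    N = 64
    x = np.zeros(N)
    x[::8] = 1.0                                 # Dirac comb: 8 spikes in time

    X = np.fft.fft(x)                            # its DFT is again a comb
    supp_t = np.count_nonzero(np.abs(x) > 1e-9)
    supp_f = np.count_nonzero(np.abs(X) > 1e-9)
    print(supp_t, supp_f, supp_t * supp_f >= N)  # 8 8 True -- the equality case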

    Efficient Approximation of Quantum Channel Capacities

    We propose an iterative method for approximating the capacity of classical-quantum channels with a discrete input alphabet and a finite-dimensional output, possibly under additional constraints on the input distribution. Based on duality of convex programming, we derive explicit upper and lower bounds for the capacity. To provide an $\varepsilon$-close estimate of the capacity, the presented algorithm has complexity $O(\tfrac{(N \vee M)\, M^3 \log(N)^{1/2}}{\varepsilon})$, where $N$ denotes the input alphabet size and $M$ the output dimension. We then generalize the method to the task of approximating the capacity of classical-quantum channels with a bounded continuous input alphabet and a finite-dimensional output. For channels with a finite-dimensional quantum mechanical input and output, the idea of a universal encoder allows us to approximate the Holevo capacity using the same method. In particular, we show that the problem of approximating the Holevo capacity can be reduced to a multidimensional integration problem. For families of quantum channels fulfilling a certain assumption, we show that the complexity of deriving an $\varepsilon$-close solution to the Holevo capacity is subexponential or even polynomial in the problem size. We provide several examples to illustrate the performance of the approximation scheme in practice. Comment: 36 pages, 1 figure
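
    The paper's algorithm itself is based on convex duality; purely as a point of reference, the sketch below (an illustrative stand-in, not the paper's method) runs the classical Blahut–Arimoto-type iteration for classical-quantum channels, which likewise brackets the capacity between the lower bound $\chi(p)$ and the upper bound $\max_x D(\rho_x\,\Vert\,\sigma_p)$:

    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -tr(rho log rho), in nats."""
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-np.sum(w * np.log(w)))

    def relative_entropy(rho, sigma):
        """D(rho||sigma) = tr rho (log rho - log sigma); assumes supp(rho) <= supp(sigma)."""
        wr, vr = np.linalg.eigh(rho)
        ws, vs = np.linalg.eigh(sigma)
        log_rho = vr @ np.diag(np.log(np.clip(wr, 1e-12, None))) @ vr.conj().T
        log_sigma = vs @ np.diag(np.log(np.clip(ws, 1e-12, None))) @ vs.conj().T
        return float(np.real(np.trace(rho @ (log_rho - log_sigma))))

    def cq_capacity(rhos, iters=300):
        """Blahut-Arimoto-type iteration p_x <- p_x exp(D(rho_x||sigma_p)) (normalized)
        for a cq channel x -> rhos[x]; returns the final input distribution and a
        (lower, upper) capacity bracket in nats."""
        p = np.full(len(rhos), 1.0 / len(rhos))
        for _ in range(iters):
            sigma = sum(px * rho for px, rho in zip(p, rhos))
            d = np.array([relative_entropy(rho, sigma) for rho in rhos])
            p = p * np.exp(d)
            p /= p.sum()
        sigma = sum(px * rho for px, rho in zip(p, rhos))
        chi = von_neumann_entropy(sigma) - sum(px * von_neumann_entropy(r) for px, r in zip(p, rhos))
        return p, chi, max(relative_entropy(r, sigma) for r in rhos)

    # Example: binary cq channel with pure-state outputs |0> and |+>.
    ket0 = np.array([[1.0], [0.0]])
    ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)
    p, lower, upper = cq_capacity([ket0 @ ket0.T, ketp @ ketp.T])
    print(p, lower, upper)   # lower and upper bounds nearly coincide at the capacity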

    Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso

    We present exponential finite-sample nonasymptotic deviation inequalities for the SAA estimator's near-optimal solution set over the class of stochastic optimization problems with heavy-tailed random \emph{convex} functions in the objective and constraints. Such a setting is better suited for problems where a sub-Gaussian data-generating distribution is not expected, e.g., in stochastic portfolio optimization. One of our contributions is to exploit \emph{convexity} of the perturbed objective and the perturbed constraints as a property which entails \emph{localized} deviation inequalities for joint feasibility and optimality guarantees. This means that our bounds are significantly tighter in terms of diameter and metric entropy, since they depend only on the near-optimal solution set and not on the whole feasible set. As a result, we obtain a much sharper sample complexity estimate when compared to a general nonconvex problem. In our analysis, we derive some localized deterministic perturbation error bounds for convex optimization problems which are of independent interest. To obtain our results, we only assume a metrically regular convex feasible set, possibly not satisfying the Slater condition and not having a metrically regular solution set. In this general setting, joint near-feasibility and near-optimality are guaranteed. If in addition the set satisfies the Slater condition, we obtain finite-sample simultaneous \emph{exact} feasibility and near-optimality guarantees (for a sufficiently small tolerance). Another contribution of our work is to present, as a proof of concept of our localized techniques, a persistence result for a variant of the LASSO estimator under very weak assumptions on the data-generating distribution. Comment: 34 pages. Some corrections
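
    As a loose numerical companion to that proof of concept (the simulation setup below is chosen for illustration and is not the paper's), one can check that the Lasso's out-of-sample prediction risk remains small when both the design and the noise are heavy-tailed Student-$t$:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s = 200, 500, 5                          # n samples, p features, s-sparse truth
    beta = np.zeros(p)
    beta[:s] = 1.0

    X = rng.standard_t(df=4, size=(n, p))          # heavy-tailed design
    y = X @ beta + rng.standard_t(df=4, size=n)    # heavy-tailed noise

    # sqrt(log p / n)-scale regularization (an illustrative choice of tuning).
    model = Lasso(alpha=np.sqrt(2 * np.log(p) / n), fit_intercept=False).fit(X, y)

    # Excess prediction risk E[(x' beta_hat - x' beta)^2], estimated on fresh data.
    X_new = rng.standard_t(df=4, size=(10_000, p))
    risk = np.mean((X_new @ model.coef_ - X_new @ beta) ** 2)
    print(f"excess prediction risk of the Lasso: {risk:.3f}")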