8,759 research outputs found

    Bethe Ansatz and Q-operator for the open ASEP

    In this paper, we study the asymmetric simple exclusion process (ASEP) with open boundaries and a current-counting deformation. We construct a two-parameter family of transfer matrices that commute with the deformed Markov matrix of the system. We show that these transfer matrices can be factorised into two commuting matrices, each depending on a single parameter, which can be identified with Baxter's Q-operator. For certain values of the product of those parameters, they decompose into a sum of two commuting matrices, one of which is the Bethe transfer matrix for a given dimension of the auxiliary space. Using this, we find the T-Q equation for the open ASEP and, through functional Bethe Ansatz techniques, obtain an exact expression for the dominant eigenvalue of the deformed Markov matrix.
    Comment: 46 pages. New version: references updated and typos corrected
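    For orientation, a generic Baxter T-Q relation has the schematic form below. This is only an illustration of the type of functional equation meant here: the ASEP-specific coefficients and the precise shift of the spectral parameter are derived in the paper, and the functions a(x), d(x) and the parameter q below are placeholders, not the paper's results.

    % Schematic Baxter T-Q relation; a(x), d(x) and q are placeholders,
    % not the coefficients derived in the paper.
    \begin{equation*}
      T(x)\, Q(x) \;=\; a(x)\, Q(qx) \;+\; d(x)\, Q(q^{-1}x),
      \qquad [T(x), Q(y)] = [Q(x), Q(y)] = 0 .
    \end{equation*}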

    Opinion-Based Centrality in Multiplex Networks: A Convex Optimization Approach

    Most people simultaneously belong to several distinct social networks, in which their relations can differ. They hold opinions on certain topics, which they share and spread on these networks, and they are in turn influenced by the opinions of others. In this paper, we build upon this observation to propose a new nodal centrality measure for multiplex networks. Our measure, called Opinion centrality, is based on a stochastic model representing opinion propagation dynamics in such a network. We formulate an optimization problem that consists in maximizing the opinion of the whole network when controlling an external influence able to affect each node individually. We derive a closed-form solution to this problem and use it to define our centrality measure. According to the Opinion centrality, the more external influence it is worth investing in a node, the more central that node is. We perform an empirical study of the proposed centrality on a toy network as well as on a collection of real-world networks. Our measure is generally negatively correlated with existing multiplex centrality measures and, consistent with its definition, highlights different types of nodes.
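    A minimal sketch of the general idea follows, assuming a linear propagation model at equilibrium on a supra-adjacency matrix A with attenuation a and an L2 budget on the external-influence vector; the paper's actual stochastic model, constraint and closed form may differ, and the function and variable names are illustrative.

    # Illustrative sketch (not the paper's exact model): assume equilibrium
    # opinions x = (I - a*A)^{-1} (b + u), where u is the controlled external
    # influence with ||u||_2 <= r. Maximizing 1^T x over u gives
    # u* proportional to (I - a*A)^{-T} 1, which we use as a node score.
    import numpy as np

    def opinion_centrality_sketch(A: np.ndarray, a: float = 0.1) -> np.ndarray:
        """Hypothetical centrality: optimal external-influence direction
        for the linear model described above."""
        n = A.shape[0]
        M = np.eye(n) - a * A                       # requires spectral radius of a*A < 1
        score = np.linalg.solve(M.T, np.ones(n))    # (I - a*A)^{-T} 1
        return score / np.linalg.norm(score)

    # Toy usage on a small random supra-adjacency matrix:
    #   A = (np.random.default_rng(0).random((6, 6)) < 0.3).astype(float)
    #   np.fill_diagonal(A, 0.0)
    #   print(opinion_centrality_sketch(A))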

    Computational Complexity versus Statistical Performance on Sparse Recovery Problems

    We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes for first-order methods solving a broad range of compressed sensing problems, in which sharpness at the optimum controls the convergence speed. We show that for sparse recovery problems this sharpness can be written as a condition number, given by the ratio between the true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar's condition number is a data-driven complexity measure for convex programs, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure taken over all signals matches the restricted singular value of the observation matrix, which controls robust recovery performance. Overall, this means that in both cases a single parameter directly controls both computational complexity and recovery performance in compressed sensing problems. Numerical experiments illustrate these points using several classical algorithms.
    Comment: Final version, to appear in Information and Inference
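    A minimal sketch of a fixed-schedule restart scheme wrapped around FISTA for the LASSO, one standard first-order method for sparse recovery, is given below. How the paper ties the restart schedule to sharpness is not reproduced; the restart and inner-iteration counts here are arbitrary illustrative choices.

    # Illustrative restart scheme: momentum is reset every `inner` iterations.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fista(A, b, lam, x0, iters, step):
        """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1, started at x0."""
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            grad = A.T @ (A @ y - b)
            x_new = soft_threshold(y - step * grad, step * lam)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    def restarted_fista(A, b, lam, restarts=10, inner=50):
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part
        x = np.zeros(A.shape[1])
        for _ in range(restarts):                   # restart from the current iterate
            x = fista(A, b, lam, x, inner, step)
        return x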

    Electron acceleration in vacuum by ultrashort and tightly focused radially polarized laser pulses

    Exact closed-form solutions to Maxwell's equations are used to investigate electron acceleration driven by radially polarized laser beams in the nonparaxial and ultrashort pulse regime. Besides allowing for higher energy gains, such beams could generate synchronized counterpropagating electron bunches.
    Comment: 3 pages, 3 figures. To appear in the proceedings of the Ultrafast Phenomena XVIII conference

    Calibration of One-Class SVM for MV set estimation

    A general approach to anomaly detection or novelty detection consists in estimating high-density regions, or Minimum Volume (MV) sets. The One-Class Support Vector Machine (OCSVM) is a state-of-the-art algorithm for estimating such regions from high-dimensional data, yet it suffers from practical limitations. When applied to a limited number of samples it can lead to poor performance, even when the best hyperparameters are chosen. Moreover, the solution of the OCSVM is very sensitive to the selection of hyperparameters, which makes it hard to optimize in an unsupervised setting. We present a new approach to estimating MV sets with the OCSVM, using a different choice of the parameter controlling the proportion of outliers. The solution function of the OCSVM is learnt on a training set, and the desired probability mass is obtained by adjusting the offset on a test set in order to prevent overfitting. Models learnt on different train/test splits are then aggregated to reduce the variance induced by such random splits. Our approach makes it possible to tune the hyperparameters automatically and to obtain nested set estimates. Experimental results show that our approach outperforms the standard OCSVM formulation while suffering less from the curse of dimensionality than kernel density estimates. Results on real data sets are also presented.
    Comment: IEEE DSAA'2015, Oct 2015, Paris, France
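    A minimal sketch of the calibration idea using scikit-learn's OneClassSVM is shown below. It assumes an RBF kernel, a fixed nu that is deliberately decoupled from the target mass, and a simple quantile rule for the offset; the paper's exact calibration and aggregation procedure may differ in its details.

    # Sketch: fit OCSVMs on train halves, re-tune each offset on the held-out
    # half so the estimated set has mass about 1 - alpha, then average scores.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import OneClassSVM

    def calibrated_ocsvm_scores(X, alpha=0.05, n_splits=5, gamma="scale"):
        """Average OCSVM scores with offsets recalibrated on held-out data."""
        scores = np.zeros(len(X))
        for seed in range(n_splits):
            X_tr, X_te = train_test_split(X, test_size=0.5, random_state=seed)
            # nu is fixed here and not tied to the target mass alpha.
            clf = OneClassSVM(kernel="rbf", gamma=gamma, nu=0.5).fit(X_tr)
            offset = np.quantile(clf.score_samples(X_te), alpha)  # recalibrated offset
            scores += clf.score_samples(X) - offset               # > 0 <=> inside the MV set
        return scores / n_splits                                  # aggregate over splits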