153 research outputs found

    λ_∞ & Maximum Variance Embedding: Measuring and Optimizing Connectivity of A Graph Metric

    Bobkov, Houdré, and the last author [2000] introduced a Poincaré-type functional parameter, λ_∞, of a graph and related it to the connectivity of the graph via Cheeger-type inequalities. A work by the second author, Raghavendra, and Vempala [2013] related the complexity of λ_∞ to the so-called small-set expansion (SSE) problem and further set forth the desiderata for NP-hardness of this optimization problem. We confirm the conjecture that computing λ_∞ is NP-hard for weighted trees. Beyond measuring connectivity, in many applications we want to optimize it. This, via convex duality, leads to a problem in machine learning known as Maximum Variance Embedding (MVE). The output is a function from vertices to a low-dimensional Euclidean space, subject to bounds on Euclidean distances between neighbors; the objective is to maximize the variance of the output. Special cases of MVE into n and 1 dimensions lead to the absolute algebraic connectivity [1990] and the spread constant [1998], which measure the connectivity of the graph and of its Cartesian n-power, respectively. MVE has further applications in measuring diffusion speed and robustness of networks, clustering, and dimension reduction. We show that computing MVE in tree-width many dimensions is NP-hard, while only one additional dimension beyond the width of a given tree decomposition places the problem in P. We show that MVE of a tree in 2 dimensions defines a non-convex yet benign optimization landscape, i.e., local optima are global. We further develop a linear-time combinatorial algorithm for this case. Finally, we show that approximate MVE is tractable in significantly fewer dimensions. For trees and general graphs, for which MVE cannot be solved in fewer than 2 and Ω(n) dimensions, we provide (1+ε)-approximation algorithms for embedding into 1 and O(log n/ε²) dimensions, respectively.
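
    The abstract does not spell out a formulation, but the variance-maximization problem it describes (embed vertices so that neighbors stay close while total variance is maximized) is closely related to the well-known maximum variance unfolding SDP. A minimal sketch of that relaxation is below; the example graph, the edge bound, and the use of cvxpy are illustrative assumptions, not the paper's construction.

```python
# Sketch of a maximum-variance-style embedding as an SDP (illustrative only):
# maximize the total variance tr(G) of a centered Gram matrix G, subject to
# squared-distance bounds on edges. Graph and bound are made-up toy values.
import numpy as np
import cvxpy as cp

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle as a toy example
bound = 1.0                                        # allowed squared distance per edge

G = cp.Variable((n, n), PSD=True)                  # Gram matrix of the embedding
constraints = [cp.sum(G) == 0]                     # centering: embedding has zero mean
for i, j in edges:
    constraints.append(G[i, i] + G[j, j] - 2 * G[i, j] <= bound)

cp.Problem(cp.Maximize(cp.trace(G)), constraints).solve()

# Recover low-dimensional coordinates from the top eigenpairs of G.
eigvals, eigvecs = np.linalg.eigh(G.value)
coords = eigvecs[:, -2:] * np.sqrt(np.maximum(eigvals[-2:], 0.0))   # 2-D embedding
print(coords)
```

    Note that this rank-unconstrained SDP is tractable; the hardness results quoted in the abstract concern embeddings restricted to a fixed, small number of dimensions.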

    Detecting relative amplitude of IR signals with active sensors and its application to a positioning system

    Nowadays, there is increasing interest in smart systems, e.g., smart metering or smart spaces, in which active sensing plays an important role. In such systems, the sample or environment to be measured is irradiated with a signal (acoustic, infrared, radio-frequency…) and some of its features are determined from the transmitted or reflected part of the original signal. In this work, infrared (IR) signals are emitted from different sources (four in this case) and received by a single quadrature angular diversity aperture (QADA) sensor. A code division multiple access (CDMA) technique is used to deal with the simultaneous transmission of all the signals and their separation (depending on the source) at the receiver’s processing stage. Furthermore, the use of correlation techniques allows the receiver to determine the amount of energy received from each transmitter by quantifying the main correlation peaks. This technique can be used in any system requiring active sensing; in the particular case of the IR positioning system presented here, the relative amplitudes of those peaks are used to determine the central incidence point of the light from each emitter on the QADA. The proposal tackles the typical phenomena, such as distortions caused by the transducer impulse response, the near-far effect in CDMA-based systems, multipath transmissions, the correlation degradation from non-coherent demodulations, etc. Finally, for each emitter, the angle of incidence on the QADA receiver is estimated, assuming that the receiver lies on a horizontal plane, although with an arbitrary rotation about the vertical axis Z. With the estimated angles and the known positions of the LED emitters, the position (x, y, z) of the receiver is determined. The system is validated at different positions in a volume of 3 × 3 × 3.4 m³, obtaining average errors of 7.1, 5.4, and 47.3 cm in the X, Y, and Z axes, respectively. Funding: Agencia Estatal de Investigación, Universidad de Alcalá, Junta de Comunidades de Castilla-La Mancha.
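
    As a rough illustration of the correlation step described above (not the actual signal chain in the paper), the following sketch spreads each emitter's contribution with its own pseudo-random code, sums them at a single receiver, and recovers the relative amplitudes from the main correlation peaks. The code length, amplitudes, and noise level are made-up values, and random ±1 codes merely stand in for the Kasami/Gold-type sequences typically used in CDMA systems.

```python
# Illustrative CDMA-style separation by correlation (all parameters made up):
# each emitter spreads its signal with its own pseudo-random code, the receiver
# observes the noisy sum, and correlating with each code estimates the relative
# amplitude contributed by that emitter.
import numpy as np

rng = np.random.default_rng(0)
n_emitters, code_len = 4, 1023
codes = rng.choice([-1.0, 1.0], size=(n_emitters, code_len))   # stand-in for Kasami/Gold codes
true_amplitudes = np.array([1.0, 0.7, 0.4, 0.2])               # relative received amplitudes

received = true_amplitudes @ codes + 0.05 * rng.standard_normal(code_len)

# The correlation peak with each code (normalized by the code length)
# estimates that emitter's relative amplitude.
estimated = codes @ received / code_len
print(np.round(estimated, 3))   # close to [1.0, 0.7, 0.4, 0.2] up to noise and cross-correlation
```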

    Statistical Analysis of Networks

    This book is a general introduction to the statistical analysis of networks and can serve both as a research monograph and as a textbook. Numerous fundamental tools and concepts needed for the analysis of networks are presented, such as network modeling, community detection, graph-based semi-supervised learning, and sampling in networks. The description of these concepts is self-contained, with both theoretical justifications and applications provided for the presented algorithms. Researchers, including postgraduate students, working in the areas of network science, complex network analysis, or social network analysis will find up-to-date statistical methods relevant to their research tasks. The book can also serve as textbook material for courses on the statistical approach to the analysis of complex networks. In general, the chapters are fairly independent and self-contained, and the book could be used for course composition “à la carte”. Nevertheless, Chapter 2 is needed to a certain degree for all parts of the book. It is also recommended to read Chapter 4 before reading Chapters 5 and 6, but this is not absolutely necessary. Reading Chapter 3 can also be helpful before reading Chapters 5 and 7. As prerequisites, basic knowledge of probability and linear algebra and elementary notions of graph theory are advised. Appendices describing the required notions from the above-mentioned disciplines have been added to help readers gain further understanding.

    Physarum Powered Differentiable Linear Programming Layers and Applications

    Consider a learning algorithm which involves an internal call to an optimization routine such as a generalized eigenvalue problem, a cone programming problem, or even sorting. Integrating such a method as a layer within a trainable deep network in a numerically stable way is not simple -- for instance, only recently have strategies emerged for eigendecomposition and differentiable sorting. We propose an efficient and differentiable solver for general linear programming problems which can be used in a plug-and-play manner as a layer within deep neural networks. Our development is inspired by a fascinating but not widely used link between the dynamics of slime mold (physarum) and mathematical optimization schemes such as steepest descent. We describe our development and demonstrate the use of our solver in a video object segmentation task and in meta-learning for few-shot learning. We review the relevant known results and provide a technical analysis describing its applicability to our use cases. Our solver performs comparably with a customized projected gradient descent method on the first task and outperforms the very recently proposed differentiable CVXPY solver on the second task. Experiments show that our solver converges quickly without the need for a feasible initial point. Interestingly, our scheme is easy to implement and can easily serve as a layer whenever a learning procedure needs a fast approximate solution to an LP within a larger network.
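
    The abstract does not state the update rule, but the link between physarum dynamics and linear programming is usually presented (e.g., in the Straszak–Vishnoi line of work) as an iteration on a standard-form LP min cᵀx subject to Ax = b, x ≥ 0. The sketch below is that generic discretized dynamic, not the paper's differentiable layer; the toy LP, step size, and iteration count are assumptions for illustration.

```python
# Sketch of discretized physarum dynamics for an LP of the form
#   min c^T x   subject to   A x = b,  x >= 0.
# One commonly studied update: W = diag(x / c), p = (A W A^T)^{-1} b,
# q = W A^T p, then x <- (1 - h) x + h q. Toy problem and parameters are made up.
import numpy as np

def physarum_lp(A, b, c, h=0.1, iters=2000):
    x = np.ones(A.shape[1])                     # strictly positive start; feasibility not required
    for _ in range(iters):
        W = np.diag(x / c)
        p = np.linalg.solve(A @ W @ A.T, b)     # potentials from the weighted system
        q = W @ A.T @ p                         # flow induced by the potentials
        x = (1 - h) * x + h * q                 # move capacities toward the induced flow
    return x

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at x = [1, 0]).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
print(np.round(physarum_lp(A, b, c), 3))        # approximately [1.0, 0.0]
```

    In a differentiable-layer setting, the same iteration would be written with autodiff-friendly tensor operations so gradients can flow through the solve; the numpy version above is only meant to show the dynamics.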

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany
