566 research outputs found

    Fusion of Head and Full-Body Detectors for Multi-Object Tracking

    In order to track all persons in a scene, the tracking-by-detection paradigm has proven to be a very effective approach. Yet relying solely on a single detector is a major limitation, as useful image information might be ignored. Consequently, this work demonstrates how to fuse two detectors into a tracking system. To obtain the trajectories, we propose to formulate tracking as a weighted graph labeling problem, resulting in a binary quadratic program. As such problems are NP-hard, the solution can only be approximated. Based on the Frank-Wolfe algorithm, we present a new solver that is crucial for handling such difficult problems. Evaluation on pedestrian tracking is provided for multiple scenarios, showing superior results over single-detector tracking and standard QP solvers. Finally, our tracker ranks 2nd on the MOT16 benchmark and 1st on the new MOT17 benchmark, outperforming over 90 trackers. Comment: 10 pages, 4 figures; winner of the MOT17 challenge; CVPRW 201
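    The sketch below is only a generic illustration of the idea described above, not the paper's actual tracking formulation or solver: relax a binary quadratic program to the box [0,1]^n, run Frank-Wolfe using its cheap linear minimization oracle, and round the result. The cost matrix Q, the linear term c, and the naive rounding step are placeholder assumptions.

```python
# Minimal Frank-Wolfe sketch for a relaxed binary quadratic program,
#   min_{x in [0,1]^n}  0.5 * x^T Q x + c^T x,
# followed by naive rounding. Q, c, and the rounding are toy placeholders.
import numpy as np

def frank_wolfe_box_qp(Q, c, n_iters=200):
    n = c.shape[0]
    x = np.full(n, 0.5)                      # start in the interior of the box
    for t in range(n_iters):
        grad = Q @ x + c                     # gradient of the quadratic objective
        s = (grad < 0).astype(float)         # LMO over [0,1]^n: set 1 where the gradient is negative
        gamma = 2.0 / (t + 2.0)              # standard diminishing step size
        x = x + gamma * (s - x)              # move toward the LMO vertex
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    Q = A @ A.T                              # toy positive semidefinite cost matrix
    c = rng.standard_normal(5)
    x_relaxed = frank_wolfe_box_qp(Q, c)
    labels = (x_relaxed > 0.5).astype(int)   # naive rounding to a binary labeling
    print(x_relaxed, labels)
```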

    Conditional Gradient Methods

    The purpose of this survey is to serve both as a gentle introduction to and a coherent overview of state-of-the-art Frank--Wolfe algorithms, also called conditional gradient algorithms, for function minimization. These algorithms are especially useful in convex optimization when linear optimization is cheaper than projection. The selection of the material has been guided by the principle of highlighting crucial ideas as well as presenting new approaches that we believe might become important in the future, with ample citations, even of older works that were imperative in the development of newer methods. Yet our selection is sometimes biased and need not reflect the consensus of the research community, and we have certainly missed recent important contributions. After all, the research area of Frank--Wolfe methods is very active, making it a moving target. We apologize sincerely in advance for any such distortions, and we fully acknowledge: we stand on the shoulders of giants. Comment: 238 pages with many figures. The FrankWolfe.jl Julia package (https://github.com/ZIB-IOL/FrankWolfe.jl) provides state-of-the-art implementations of many Frank--Wolfe methods.
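    As a companion to the survey's framing, here is a textbook conditional gradient loop over the probability simplex, where the linear minimization oracle is a single argmin over coordinates and therefore far cheaper than a Euclidean projection. The quadratic objective is a toy assumption; this sketch is not drawn from the survey or from FrankWolfe.jl.

```python
# Textbook conditional gradient (Frank-Wolfe) over the probability simplex.
# The LMO is just an argmin over coordinates; the objective is a toy quadratic.
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=500):
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        i = int(np.argmin(g))                # LMO over the simplex: best vertex e_i
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)              # agnostic step size from the classical analysis
        x = (1.0 - gamma) * x + gamma * s    # convex combination stays feasible
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((10, 10))
    Q = M @ M.T + np.eye(10)                 # toy strongly convex quadratic
    b = rng.standard_normal(10)
    x = frank_wolfe_simplex(lambda x: Q @ x - b, np.full(10, 0.1))
    print(x.sum(), x.min())                  # stays on the simplex: sums to 1, non-negative
```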

    Robust Mixing for Ab-Initio Quantum Mechanical Calculations

    We study the general problem of mixing for ab-initio quantum-mechanical calculations. Guided by general mathematical principles and the underlying physics, we propose a multisecant form of Broyden's second method for solving the self-consistent field equations of Kohn-Sham density functional theory. The algorithm is robust, requires relatively little fine-tuning, and appears to outperform the current state of the art, converging in cases that defeat many other methods. We compare our technique to conventional methods on problems ranging from simple to nearly pathological. Comment: 32 pages, 4 figures
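    As a rough illustration of the kind of mixing discussed above, the sketch below applies plain (single-secant) Broyden's second method to a generic fixed-point problem x = g(x) with residual F(x) = x - g(x). It is not the paper's multisecant variant, and g here is a toy linear contraction, not a Kohn-Sham self-consistent field map.

```python
# Broyden's second ("bad Broyden") method used as a mixer for x = g(x),
# solving F(x) = x - g(x) = 0 with an approximate inverse Jacobian H.
import numpy as np

def broyden2_mix(g, x0, beta=0.5, tol=1e-10, max_iter=100):
    x = x0.astype(float).copy()
    F = x - g(x)                              # fixed-point residual, zero at the solution
    H = beta * np.eye(x.size)                 # initial inverse-Jacobian guess (plain linear mixing)
    for _ in range(max_iter):
        x_new = x - H @ F                     # quasi-Newton step on F(x) = 0
        F_new = x_new - g(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        dx, dF = x_new - x, F_new - F
        H += np.outer(dx - H @ dF, dF) / (dF @ dF)   # Broyden's second update of H
        x, F = x_new, F_new
    return x

if __name__ == "__main__":
    A = np.array([[0.4, 0.1], [0.2, 0.3]])    # toy contraction: g(x) = A x + b
    b = np.array([1.0, -2.0])
    x_star = broyden2_mix(lambda x: A @ x + b, np.zeros(2))
    print(x_star, np.linalg.norm(x_star - (A @ x_star + b)))
```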

    Unveiling Biases in Word Embeddings: An Algorithmic Approach for Comparative Analysis Based on Alignment

    Word embeddings are state-of-the-art vectorial representations of words, built to preserve semantic similarity. They are the result of specific learning algorithms trained on usually large corpora, and consequently they inherit all the biases of the corpora on which they were trained. The goal of the thesis is to devise and adapt an efficient algorithm to compare two different word embeddings in order to highlight the biases they are subject to. Specifically, we look for an alignment between the two vector spaces, corresponding to the two word embeddings, that minimises the difference between the stable words, i.e. the ones that have not changed between the two embeddings, thus highlighting the differences between the ones that did change. In this work, we test this idea by adapting a machine translation framework called MUSE which, after some improvements, can run over multiple cores in an HPC environment managed with SLURM. We also provide an amplpy implementation of linear and convex programming algorithms adapted to our case. We then test these techniques on a corpus of text taken from Italian newspapers in order to identify which words are more subject to change among the different pairs of corpora.
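    For concreteness, the sketch below shows the standard orthogonal Procrustes alignment that underlies refinement steps of the kind used in MUSE: given the embeddings of a set of presumed-stable anchor words in two spaces, it finds the orthogonal map minimising the Frobenius distance between them. The random matrices are placeholders, and the thesis's full pipeline (including its linear and convex programming variants) is not reproduced here.

```python
# Orthogonal Procrustes alignment of two embedding spaces over anchor words.
# The matrices are synthetic stand-ins for real word embeddings.
import numpy as np

def procrustes_align(X, Y):
    """Return the orthogonal matrix W minimizing ||X @ W - Y||_F."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, n_anchors = 50, 200
    Y = rng.standard_normal((n_anchors, d))                          # "target" embeddings of anchor words
    R_true, _ = np.linalg.qr(rng.standard_normal((d, d)))            # hidden rotation between the spaces
    X = Y @ R_true.T + 0.01 * rng.standard_normal((n_anchors, d))    # rotated, slightly noisy "source"
    W = procrustes_align(X, Y)
    residual = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
    print(f"relative alignment residual: {residual:.4f}")            # large per-word residuals flag changed words
```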

    Exploring the Power of Rescaling

    The goal of our research is a comprehensive exploration of the power of rescaling to improve the efficiency of various algorithms for linear optimization and related problems. Linear optimization and linear feasibility problems are arguably the fundamental problems of optimization. Advances in solving these problems impact the core of optimization theory and, consequently, its practical applications. The development and analysis of solution methods for linear optimization is one of the major topics in optimization research. Although the polynomial-time ellipsoid method has excellent theoretical properties, it turned out to be inefficient in practice. Still today, in spite of the dominance of interior point methods, various algorithms, such as perceptron algorithms, rescaling perceptron algorithms, von Neumann algorithms, and Chubanov's method, as well as related problems such as the colorful feasibility problem -- whose complexity status is still undecided -- are studied. Motivated by the successful application of a rescaling principle to the perceptron algorithm, our research aims to explore the power of rescaling on other algorithms too, and to improve their computational complexity. We focus on algorithms for solving linear feasibility and related problems whose complexity depends on a quantity ρ, a condition number measuring the distance to feasibility or infeasibility of the problem. These algorithms include the von Neumann algorithm and the perceptron algorithm.

    First, we discuss the close duality relationship between the perceptron and the von Neumann algorithms. This observation allows us to interpret one algorithm as a variant of the other, and to transfer their complexity results between them. The discovery of this duality not only provides a profound insight into both algorithms, but also yields new variants of them. Based on this duality relationship, we propose a deterministic rescaling von Neumann algorithm, which computationally outperforms the original von Neumann algorithm. Although its complexity has not been proved yet, we construct a von Neumann example showing that the rescaling steps cannot keep the quantity ρ increasing monotonically. Since showing a monotonic increase of ρ is a common technique for proving the complexity of rescaling algorithms, this example demonstrates that a different proof method is needed to establish the complexity of the deterministic rescaling von Neumann algorithm. Furthermore, this von Neumann example serves as the foundation of a perceptron example, which verifies that ρ is not always increasing after one rescaling step in the polynomial-time deterministic rescaling perceptron algorithm either.

    After that, we adapt the idea of Chubanov's method to our rescaling framework and develop a polynomial-time column-wise rescaling von Neumann algorithm. Chubanov recently proposed a simple polynomial-time algorithm for solving homogeneous linear systems with positive variables. The Basic Procedure of Chubanov's method can either find a feasible solution or identify an upper bound for at least one coordinate of any feasible solution. The column-wise rescaling von Neumann algorithm combines the Basic Procedure with column-wise rescaling to identify zero coordinates in all feasible solutions and remove the corresponding columns from the coefficient matrix. This is the first variant of the von Neumann algorithm with polynomial-time complexity. Furthermore, compared with the original von Neumann algorithm, which returns an approximate solution, this rescaling variant guarantees an exact solution for feasible problems.

    Finally, we develop the methodology of higher-order rescaling and propose a higher-order perceptron algorithm. We implement the perceptron improvement phase on parallel processors, so that in a multi-core environment several rescaling vectors can be obtained without extra wall-clock time. Using these rescaling vectors in a single higher-order rescaling step, better rescaling rates may be expected, and computational efficiency is thus improved.
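    As a baseline for the methods discussed above, the sketch below implements the classical perceptron algorithm for the homogeneous feasibility problem of finding w with a_i^T w > 0 for every row a_i of A; its iteration bound scales like 1/ρ², which is precisely what rescaling aims to improve. The rescaling variants, the von Neumann algorithm, and their duality are not implemented here, and the data is synthetic.

```python
# Classical perceptron algorithm for homogeneous linear feasibility:
# find w with a_i^T w > 0 for all rows a_i of A (rows normalized to unit length).
import numpy as np

def perceptron_feasibility(A, max_iter=100000):
    m, n = A.shape
    w = np.zeros(n)
    for _ in range(max_iter):
        violated = A @ w <= 0                    # constraints with a_i^T w <= 0
        if not violated.any():
            return w                             # strictly feasible point found
        i = int(np.argmax(violated))             # index of the first violated constraint
        w = w + A[i]                             # perceptron update
    return None                                  # no feasible point found within the budget

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    w_true = rng.standard_normal(10)
    A = rng.standard_normal((200, 10))
    A[A @ w_true < 0] *= -1                      # flip rows so that w_true is feasible
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    w = perceptron_feasibility(A)
    print(w is not None and bool((A @ w > 0).all()))
```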

    The Shallow and the Deep: A biased introduction to neural networks and old school machine learning

    The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes would not be able to cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that having a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.
