Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature.
For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and for computing the sum of a list of numbers.
We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
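The (commutative) matrix scaling problem mentioned above has a classical iterative baseline, Sinkhorn's algorithm, which alternately normalizes rows and columns of a positive matrix; the classical and quantum algorithms in the abstract improve on this kind of method. A minimal sketch (mine, not the thesis's algorithm):

```python
import numpy as np

def sinkhorn_scale(A, iters=500, tol=1e-9):
    """Alternately rescale rows and columns of a positive matrix A
    until the result is (approximately) doubly stochastic."""
    A = np.asarray(A, dtype=float)
    c = np.ones(A.shape[1])
    B = A.copy()
    for _ in range(iters):
        r = 1.0 / (A @ c)       # choose row scalings to fix row sums
        c = 1.0 / (A.T @ r)     # choose column scalings to fix column sums
        B = A * np.outer(r, c)  # scaled matrix D_r A D_c
        if (abs(B.sum(axis=1) - 1).max() < tol
                and abs(B.sum(axis=0) - 1).max() < tol):
            break
    return B
```

For strictly positive matrices this iteration converges to a doubly stochastic scaling; the convergence rate, not the idea, is where the sophisticated algorithms differ.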
LIPIcs, Volume 251, ITCS 2023, Complete Volume
A First Course in Causal Inference
I developed these lecture notes based on my "Causal Inference" course at the University of California, Berkeley over the past seven years. Since half of the students were undergraduates, the lecture notes only require basic knowledge of probability theory, statistical inference, and linear and logistic regression.
Neural Architecture Search for Image Segmentation and Classification
Deep learning (DL) is a class of machine learning algorithms that relies on deep neural networks (DNNs) for computations. Unlike traditional machine learning algorithms, DL can learn from raw data directly and effectively. Hence, DL has been successfully applied to tackle many real-world problems. When applying DL to a given problem, the primary task is designing the optimum DNN. This task relies heavily on human expertise, is time-consuming, and requires many trial-and-error experiments.
This thesis aims to automate the laborious task of designing the optimum DNN by exploring the neural architecture search (NAS) approach. We propose two new NAS algorithms for two real-world problems: pedestrian lane detection for assistive navigation and hyperspectral image segmentation for biosecurity scanning. We also introduce a new dataset-agnostic predictor of neural network performance, which can be used to speed up NAS algorithms that require the evaluation of candidate DNNs.
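As a point of reference for what a NAS algorithm automates, here is a minimal random-search baseline over a toy search space. The search space, the `score_fn` interface and the toy proxy score are illustrative stand-ins of my own, not the algorithms proposed in the thesis (a real run would train and evaluate each candidate DNN, or query a performance predictor like the one the thesis introduces):

```python
import random

def random_search_nas(search_space, score_fn, n_trials=20, seed=0):
    """Baseline NAS: sample architectures at random, keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        # sample one value per architectural choice
        arch = {k: rng.choice(v) for k, v in search_space.items()}
        s = score_fn(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch, best_score

# Toy search space and a hypothetical proxy score (NOT a real evaluator).
space = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [3, 5]}
toy_score = lambda a: a["depth"] * a["width"]
arch, score = random_search_nas(space, toy_score)
```

Sophisticated NAS methods replace the random sampler with evolutionary, gradient-based or predictor-guided search, but they keep this sample-score-select loop.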
A One-Stop 3D Target Reconstruction and Multilevel Segmentation Method
3D object reconstruction and multilevel segmentation are fundamental to computer vision research. Existing algorithms usually perform 3D scene reconstruction and target object segmentation independently, and performance is not fully guaranteed due to the challenge of 3D segmentation. Here we propose an open-source, one-stop 3D target reconstruction and multilevel segmentation framework (OSTRA), which performs segmentation on 2D images, tracks multiple instances with segmentation labels in the image sequence, and then reconstructs labelled 3D objects or multiple parts with Multi-View Stereo (MVS) or RGBD-based 3D reconstruction methods. We extend object tracking and 3D reconstruction algorithms to support continuous segmentation labels, leveraging advances in 2D image segmentation, especially the Segment-Anything Model (SAM), which uses a pretrained neural network without additional training for new scenes, for 3D object segmentation. OSTRA supports most popular 3D object representations, including point clouds, meshes and voxels, and achieves high performance for semantic segmentation, instance segmentation and part segmentation on several 3D datasets. It even surpasses manual segmentation in scenes with complex structures and occlusions. Our method opens up a new avenue for reconstructing 3D targets embedded with rich multi-scale segmentation information in complex scenes. OSTRA is available at https://github.com/ganlab/OSTRA.
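One concrete step at the heart of such a pipeline, carrying 2D segmentation labels onto reconstructed 3D points, can be illustrated by back-projecting a labelled depth map through the camera intrinsics. This is a generic sketch with my own function name, not OSTRA's actual code:

```python
import numpy as np

def labelled_backprojection(depth, labels, K):
    """Lift a 2D segmentation into a labelled 3D point cloud: back-project
    every pixel with valid depth through intrinsics K, carrying its
    segmentation label (e.g. produced by SAM) along with the 3D point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                                            # keep pixels with depth
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])   # homogeneous pixel coords
    rays = np.linalg.inv(K) @ pix                            # camera-frame viewing rays
    points = (rays * z).T[valid]                             # (N, 3) labelled 3D points
    return points, labels.ravel()[valid]
```

An RGBD- or MVS-based system would fuse such labelled points across many views, which is where consistent instance tracking across the image sequence becomes essential.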
Towards Reliable and Accurate Global Structure-from-Motion
Reconstruction of objects or scenes from sparse point detections across multiple views is one of the most tackled problems in computer vision. Given the coordinates of 2D points tracked in multiple images, the problem consists of estimating the corresponding 3D points and camera calibrations (intrinsics and pose), and can be solved by minimizing reprojection errors using bundle adjustment. However, given bundle adjustment's nonlinear objective function and iterative nature, a good starting guess is required to converge to a global minimum. Global and Incremental Structure-from-Motion methods appear as ways to provide good initializations to bundle adjustment, each with different properties. While Global Structure-from-Motion has been shown to result in more accurate reconstructions than Incremental Structure-from-Motion, the latter has better scalability: by starting with a small subset of images and sequentially adding new views, it allows reconstruction of sequences with millions of images. Additionally, both Global and Incremental Structure-from-Motion methods rely on accurate models of the scene or object, and under noisy conditions or high model uncertainty they might produce poor initializations for bundle adjustment. Recently, pOSE, a class of matrix factorization methods, has been proposed as an alternative to conventional Global SfM methods. These methods use VarPro, a second-order optimization method, to minimize a linear combination of an approximation of the reprojection errors and a regularization term based on an affine camera model, and have been shown to converge to global minima at a high rate even when starting from random camera calibration estimates. This thesis aims at improving the reliability and accuracy of global SfM through different approaches.
First, we study conditions for global optimality of point set registration, a point cloud averaging method that can be used when (incomplete) 3D point clouds of the same scene in different coordinate systems are available. Second, we extend pOSE methods to different Structure-from-Motion problem instances, such as non-rigid SfM and radial-distortion-invariant SfM. Third and finally, we replace the regularization term of pOSE methods with an exponential regularization on the projective depths of the 3D point estimates, resulting in a loss that achieves reconstructions with accuracy close to bundle adjustment.
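The reprojection error that bundle adjustment minimizes can be written down in a few lines. A minimal sketch for a single pinhole camera, in the generic textbook formulation rather than this thesis's code:

```python
import numpy as np

def reprojection_error(P, X, x):
    """Sum of squared reprojection errors for one pinhole camera.
    P: (3, 4) camera matrix, X: (N, 3) estimated 3D points,
    x: (N, 2) observed 2D detections. Bundle adjustment minimizes the
    sum of such terms over all cameras and points."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous 3D points
    proj = (P @ Xh.T).T                        # project into the image
    proj = proj[:, :2] / proj[:, 2:3]          # perspective division
    return np.sum((proj - x) ** 2)
```

Because this objective is nonlinear in the cameras and points jointly, iterative solvers such as Levenberg-Marquardt only find the global minimum from a good initialization, which is exactly what Global and Incremental SfM (and pOSE) aim to supply.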
Pairwise versus mutual independence: visualisation, actuarial applications and central limit theorems
Accurately capturing the dependence between risks, if it exists, is an increasingly relevant topic of actuarial research. In recent years, several authors have started to relax the traditional 'independence assumption' in a variety of actuarial settings. While it is known that 'mutual independence' between random variables is not equivalent to their 'pairwise independence', this thesis aims to provide a better understanding of the materiality of this difference. The distinction between mutual and pairwise independence matters because, in practice, dependence is often assessed via pairs only, e.g., through correlation matrices, rank-based measures of association, scatterplot matrices, heat-maps, etc. Using such pairwise methods, it is possible to miss some forms of dependence. In this thesis, we explore, from several angles, how material the difference between pairwise and mutual independence is.
We provide relevant background and motivation for this thesis in Chapter 1, then conduct a literature review in Chapter 2.
In Chapter 3, we focus on visualising the difference between pairwise and mutual independence. To do so, we propose a series of theoretical examples (some of them new) where random variables are pairwise independent but (mutually) dependent, in short, PIBD. We then develop new visualisation tools and use them to illustrate what PIBD variables can look like. We showcase that the dependence involved is possibly very strong. We also use our visualisation tools to identify subtle forms of dependence, which would otherwise be hard to detect.
In Chapter 4, we review common dependence models (such as elliptical distributions and Archimedean copulas) used in actuarial science and show that they do not allow for the possibility of PIBD data. We also investigate concrete consequences of the 'non-equivalence' between pairwise and mutual independence. We establish that many results which hold for mutually independent variables do not hold under pairwise independence alone. These include results about finite sums of random variables, extreme value theory and bootstrap methods. This part thus illustrates what can potentially 'go wrong' if one assumes mutual independence where only pairwise independence holds.
Lastly, in Chapters 5 and 6, we investigate the question of what happens for PIBD variables 'in the limit', i.e., when the sample size goes to infinity. We want to see if the 'problems' caused by dependence vanish for sufficiently large samples. This is a broad question, and we concentrate on the important classical Central Limit Theorem (CLT), for which we find that the answer is largely negative. In particular, we construct new sequences of PIBD variables (with arbitrary margins) for which a CLT does not hold. We derive explicitly the asymptotic distribution of the standardised mean of our sequences, which allows us to illustrate the extent of the 'failure' of a CLT for PIBD variables. We also propose a general methodology to construct dependent K-tuplewise independent (K an arbitrary integer) sequences of random variables with arbitrary margins. In the case K = 3, we use this methodology to derive explicit examples of triplewise independent sequences for which no CLT holds. These results illustrate that mutual independence is a crucial assumption within CLTs, and that having larger samples is not always a viable solution to the problem of non-independent data.
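A minimal instance of pairwise-independent-but-dependent variables is the classical construction Z = XY from two fair ±1 coins X and Y: every pair is independent, yet the third variable is fully determined by the other two. This textbook example is mine and is far simpler than the arbitrary-margin sequences constructed in the thesis; the snippet verifies both properties by exact enumeration:

```python
from itertools import product
from fractions import Fraction

# X, Y fair +/-1 coins, Z = X*Y: four equally likely outcomes.
outcomes = [(x, y, x * y) for x, y in product([-1, 1], repeat=2)]
p = Fraction(1, 4)

def prob(event):
    """Exact probability of an event over the four outcomes."""
    return sum(p for o in outcomes if event(o))

# Pairwise independence: P(A=a, B=b) = P(A=a) P(B=b) for every pair of coords.
pairs = [(0, 1), (0, 2), (1, 2)]
pairwise_ok = all(
    prob(lambda o: o[i] == a and o[j] == b)
    == prob(lambda o: o[i] == a) * prob(lambda o: o[j] == b)
    for i, j in pairs for a in (-1, 1) for b in (-1, 1)
)

# But not mutually independent: P(X=1, Y=1, Z=-1) = 0, not 1/8.
triple = prob(lambda o: o == (1, 1, -1))
```

Pairwise checks (correlation matrices, scatterplot matrices, and so on) see three perfectly independent-looking coins here, which is precisely the blind spot the thesis studies.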
Consistency of scalar and vector effective field theories
In the absence of a theory of everything, modern physicists need to rely on other predictive tools and have turned to Effective Field Theories (EFTs) in a number of fields, including but not limited to statistical mechanics, condensed matter, particle physics, cosmology and gravity. The coefficients of an EFT can be constrained with high precision by experiments, for instance at high-energy particle colliders, but are generally left free from the theoretical point of view. The focus of this thesis is to use various consistency criteria to obtain theoretical constraints on the low-energy coefficients of EFTs. In particular, we construct a new model of a massive spin-1 field by requiring that the theory is free of any ghostly degree of freedom. We then study its cosmological perturbations and require that all propagating modes are stable and subluminal, reducing the space of viable cosmological solutions. Finally, we implement a method to obtain ‘causality bounds’, which are derived by requiring infrared causality; this is imposed by forbidding any resolvable time advance in the EFT. We derive such causality bounds for shift-symmetric and Galileon scalar EFTs, before turning to gauge-symmetric vector fields. We show that our causality bounds can be competitive with positivity bounds and can even be used in scenarios that are out of reach of the positivity approach. The result of this thesis, obtained by exploring several consistency criteria, is a set of compact causality bounds for low-energy EFT coefficients, in addition to constraints coming from the absence of ghosts, stability and cosmological viability.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015, in international conferences, seminars, workshops and journals, or they are new. The contributions in each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the publication of the fourth DSmT book in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, a generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
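For readers unfamiliar with the PCR rules recurring throughout the volume, here is a minimal sketch of the basic two-source PCR5 combination. It is my own simplified implementation, assuming strictly positive masses and omitting the refinements (degree of intersection, neutrality-preserving variants, fast fusion) discussed above:

```python
from itertools import product

def pcr5(m1, m2):
    """PCR5 combination of two belief mass assignments.
    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    The conjunctive consensus is computed first; each pairwise conflicting
    product is then redistributed back to the two sets that caused it,
    proportionally to the masses they were assigned."""
    out = {}
    for (X, a), (Y, b) in product(m1.items(), m2.items()):
        Z = X & Y
        if Z:  # compatible hypotheses: conjunctive consensus
            out[Z] = out.get(Z, 0.0) + a * b
        else:  # conflicting product a*b: split proportionally between X and Y
            out[X] = out.get(X, 0.0) + a * a * b / (a + b)
            out[Y] = out.get(Y, 0.0) + b * b * a / (a + b)
    return out
```

Unlike Dempster's rule, no mass is discarded through global normalization: each elementary conflict is returned to its two "culprits", which keeps the combined masses summing to one.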
Estimating Appearance Models for Image Segmentation via Tensor Factorization
Image segmentation is one of the core tasks in computer vision, and solving it often depends on modeling the image appearance data via the color distributions of each of its constituent regions. Whereas many segmentation algorithms handle the dependence on appearance models using alternation or implicit methods, we propose here a new approach that directly estimates them from the image without prior information on the underlying segmentation. Our method uses local high-order color statistics from the image as input to a tensor factorization-based estimator for latent variable models. This approach is able to estimate models in multi-region images and automatically output the region proportions without prior user interaction, overcoming the drawbacks of a prior attempt at this problem. We also demonstrate the performance of our proposed method in many challenging synthetic and real imaging scenarios and show that it leads to an efficient segmentation algorithm.