
    Multiple structure recovery with maximum coverage

    We present a general framework for geometric model fitting based on a set-coverage formulation that caters for intersecting structures and outliers in a simple and principled manner. The multi-model fitting problem is formulated as the optimization of a consensus-based global cost function, which makes it possible to sidestep the pitfalls of preference approaches based on clustering and to avoid the difficult trade-off between data fidelity and model complexity found in other optimization formulations. Two especially appealing characteristics of this method are the ease with which it can be implemented and its modularity with respect to the solver and the sampling strategy. Only a few intelligible parameters need to be set and tuned, namely the inlier threshold and the number of desired models. In summary, our experiments show that the method compares favourably with its competitors overall, and it is always either the best performer or nearly on par with the best performer in each specific scenario.
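
    As a rough, hedged illustration of the set-coverage idea (not the paper's exact cost function or solver), the selection step can be thought of as greedy maximum coverage over consensus sets; the Python sketch below uses toy line hypotheses, and its data, thresholds, and function names are assumptions:

        # Illustrative sketch only: greedy maximum-coverage selection of hypothesised
        # models from their consensus sets; hypotheses, data, and thresholds are toy.
        import numpy as np

        def consensus_set(model, points, inlier_threshold):
            """Indices of points whose residual w.r.t. a line a*x + b*y + c = 0
            falls below the inlier threshold."""
            a, b, c = model
            residuals = np.abs(points @ np.array([a, b]) + c) / np.hypot(a, b)
            return set(np.nonzero(residuals < inlier_threshold)[0])

        def greedy_max_coverage(models, points, inlier_threshold, n_models):
            """Pick n_models hypotheses whose consensus sets jointly cover as many
            points as possible (the standard greedy heuristic for set coverage)."""
            sets = [consensus_set(m, points, inlier_threshold) for m in models]
            covered, chosen = set(), []
            for _ in range(n_models):
                best = max(range(len(models)), key=lambda i: len(sets[i] - covered))
                if not sets[best] - covered:
                    break  # no remaining hypothesis adds new inliers
                chosen.append(models[best])
                covered |= sets[best]
            return chosen, covered

        # Toy usage: two noisy lines plus uniform outliers, three candidate hypotheses.
        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 100)
        pts = np.vstack([np.c_[x[:50], 0.5 * x[:50] + 0.01 * rng.standard_normal(50)],
                         np.c_[x[50:], -x[50:] + 1 + 0.01 * rng.standard_normal(50)],
                         rng.uniform(-1, 2, (30, 2))])
        hypotheses = [(0.5, -1, 0), (1, 1, -1), (1, 0, 0)]   # lines as (a, b, c)
        models, covered = greedy_max_coverage(hypotheses, pts, 0.05, 2)
        print(len(models), "models selected, covering", len(covered), "points")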

    Fairness-Aware Hyperparameter Optimization

    In recent years, increased usage of machine learning algorithms has been accompanied by several reports of machine bias in areas ranging from recidivism assessment to job-applicant screening tools and estimating mortgage default risk. Additionally, recent advances in machine learning have prominently featured so-called "black-box" models (e.g. neural networks), whose inputs and outputs we can observe but whose decision-making process we have limited capability to inspect. As a result, it is increasingly imperative to monitor and control the fairness of developed models in order to detect discrimination against sub-groups of the population (e.g. based on race, gender, or age). State-of-the-art machine learning algorithms require the definition of a large number of hyperparameters that govern how they learn and generalize to unseen data. Current hyperparameter search algorithms aim to tune these knobs in order to optimize a global performance metric (e.g. accuracy). Fairness metrics are equally affected by varying hyperparameter values, yet there is comparatively little research on optimizing for multiple objectives. Consequently, we aim to study how to achieve efficient hyperparameter optimization for multi-objective goals, and the corresponding trade-offs. We develop a hyperparameter optimization framework that supports the definition of secondary objectives or constraints, and experiment with multiple fairness metrics (e.g. equality of opportunity). Furthermore, we explore a fraud detection case study and assess the framework's effectiveness in this context.
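
    As a hedged sketch of this kind of constrained, fairness-aware search (not the authors' framework; the synthetic data, random-forest search space, and the 0.05 tolerance on the equality-of-opportunity gap below are assumptions), a random search that treats the fairness metric as a hard constraint could look like this:

        # Sketch only: random hyperparameter search that records accuracy and an
        # equality-of-opportunity gap, then keeps the most accurate "fair enough" config.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic data with a binary sensitive attribute derived from column 0.
        X = rng.normal(size=(2000, 5))
        group = (X[:, 0] > 0).astype(int)
        y = (X[:, 1] + 0.5 * X[:, 2] + 0.3 * group + 0.2 * rng.normal(size=2000) > 0).astype(int)
        X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

        def eq_opportunity_gap(y_true, y_pred, g):
            """Absolute difference in true-positive rates between the two groups."""
            tpr = [y_pred[(g == v) & (y_true == 1)].mean() for v in (0, 1)]
            return abs(tpr[0] - tpr[1])

        best = None
        for _ in range(20):                       # plain random search
            params = {"n_estimators": int(rng.integers(50, 300)),
                      "max_depth": int(rng.integers(2, 12)),
                      "min_samples_leaf": int(rng.integers(1, 20))}
            clf = RandomForestClassifier(random_state=0, **params).fit(X_tr, y_tr)
            pred = clf.predict(X_te)
            acc = (pred == y_te).mean()
            gap = eq_opportunity_gap(y_te, pred, g_te)
            # Secondary objective expressed as a hard constraint on the fairness gap.
            if gap <= 0.05 and (best is None or acc > best[0]):
                best = (acc, gap, params)

        print("best constrained configuration:", best)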

    Structured Sparsity Promoting Functions: Theory and Applications

    Motivated by variable selection based on the minimax concave penalty in high-dimensional linear regression, we introduce a simple scheme to construct structured semiconvex sparsity promoting functions from convex sparsity promoting functions and their Moreau envelopes. Properties of these functions are developed by leveraging their structure. In particular, we show that the behavior of the constructed function can be easily controlled by assumptions on the original convex function. We provide sparsity guarantees for the general family of functions via the proximity operator. Results related to the Fenchel conjugate and Łojasiewicz exponent of these functions are also provided. We further study the behavior of the proximity operators of several special functions, including indicator functions of closed convex sets, piecewise quadratic functions, and linear combinations of the two. To demonstrate these properties, several concrete examples are presented and existing instances are featured as special cases. We explore the effect of these functions on the penalized least squares problem and discuss several algorithms for solving this problem that rely on the particular structure of our functions. We then apply these methods to the total variation denoising problem from signal processing.
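
    As a hedged illustration of the kind of construction the abstract points to (the precise definitions are in the paper; the notation below is assumed), one can take a convex sparsity promoting function f, subtract its Moreau envelope, and recover the minimax concave penalty in the scalar case f(t) = λ|t|:

        \[
          \operatorname{env}_{\gamma} f(x) \;=\; \min_{u}\Big\{ f(u) + \tfrac{1}{2\gamma}\lVert x-u\rVert_2^2 \Big\},
          \qquad
          \phi_{\gamma}(x) \;=\; f(x) - \operatorname{env}_{\gamma} f(x),
        \]
        \[
          f(t) = \lambda\lvert t\rvert
          \;\Longrightarrow\;
          \phi_{\gamma}(t) =
          \begin{cases}
            \lambda\lvert t\rvert - \dfrac{t^{2}}{2\gamma}, & \lvert t\rvert \le \gamma\lambda,\\[4pt]
            \dfrac{\gamma\lambda^{2}}{2}, & \lvert t\rvert > \gamma\lambda,
          \end{cases}
        \]

    which is exactly the minimax concave penalty with parameters (λ, γ). The difference f − env_γ f is semiconvex in the sense that adding (1/(2γ))‖x‖² restores convexity, since (1/(2γ))‖x‖² − env_γ f(x) is a supremum of affine functions of x.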

    An integrated theory of language production and comprehension

    Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.

    MULTIPLE STRUCTURE RECOVERY VIA PREFERENCE ANALYSIS IN CONCEPTUAL SPACE

    Finding multiple models (or structures) that fit data corrupted by noise and outliers is an omnipresent problem in the empirical sciences, including Computer Vision, where organizing unstructured visual data into higher-level geometric structures is a necessary and basic step towards better descriptions and understanding of a scene. This challenging problem has a chicken-and-egg pattern: in order to estimate models one needs to first segment the data, and in order to segment the data it is necessary to know which structure each point belongs to. Most of the multi-model fitting techniques proposed in the literature can be divided into two classes, according to which horn of the chicken-and-egg dilemma is addressed first: consensus analysis and preference analysis. Consensus-based methods put the emphasis on the estimation part of the problem and focus on models that describe as many points as possible. Preference analysis, on the other hand, concentrates on the segmentation side in order to find a proper partition of the data, from which model estimation follows. The research conducted in this thesis attempts to provide theoretical footing for the preference approach and to improve it in terms of performance and robustness. In particular, we derive a conceptual space in which preference analysis can be performed robustly thanks to three different formulations of multiple structure recovery, i.e. linkage clustering, spectral analysis, and set coverage. In this way we are able to propose new and effective strategies that link consensus- and preference-based criteria to overcome the limitations of both. In order to validate our research, we have applied our methodologies to several significant Computer Vision tasks, including geometric primitive fitting (e.g. line fitting, circle fitting, 3D plane fitting), multi-body segmentation, plane segmentation, and video motion segmentation.
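
    For illustration only, the linkage-clustering side of preference analysis can be sketched in the spirit of J-Linkage: describe each point by the set of hypotheses it prefers (residual below a threshold) and greedily merge clusters whose preference sets still intersect. The Python below is a generic sketch, not the conceptual-space formulation developed in the thesis, and its line model, threshold, and helper names are assumptions:

        # Generic preference-analysis sketch (J-Linkage-style); toy line hypotheses.
        import numpy as np

        def preference_matrix(points, hypotheses, threshold):
            """P[i, j] is True when point i is an inlier of line hypothesis j."""
            P = np.zeros((len(points), len(hypotheses)), dtype=bool)
            for j, (a, b, c) in enumerate(hypotheses):        # lines a*x + b*y + c = 0
                residuals = np.abs(points @ np.array([a, b]) + c) / np.hypot(a, b)
                P[:, j] = residuals < threshold
            return P

        def jaccard_distance(s, t):
            union = np.logical_or(s, t).sum()
            return 1.0 if union == 0 else 1.0 - np.logical_and(s, t).sum() / union

        def linkage_clustering(P):
            """Merge the pair of clusters with the smallest Jaccard distance between
            their (intersected) preference sets until all remaining distances are 1."""
            clusters = [[i] for i in range(len(P))]
            prefs = [P[i].copy() for i in range(len(P))]
            while len(clusters) > 1:
                best, pair = 1.0, None
                for a in range(len(clusters)):
                    for b in range(a + 1, len(clusters)):
                        d = jaccard_distance(prefs[a], prefs[b])
                        if d < best:
                            best, pair = d, (a, b)
                if pair is None:
                    break                     # remaining preference sets are disjoint
                a, b = pair
                clusters[a] += clusters.pop(b)
                prefs[a] = np.logical_and(prefs[a], prefs.pop(b))
            return clusters

    In a full pipeline the hypotheses would be generated by fitting minimal random samples of the data rather than being fixed in advance, and each surviving cluster would yield a refitted model.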

    Programming tools for intelligent systems

    Programming tools are computer programs which help humans program computers. Tools come in all shapes and forms, from editors and compilers to debuggers and profilers. Each of these tools facilitates a core task in the programming workflow which consumes cognitive resources when performed manually. In this thesis, we explore several tools that facilitate the process of building intelligent systems, and which reduce the cognitive effort required to design, develop, test and deploy intelligent software systems. First, we introduce an integrated development environment (IDE) for programming Robot Operating System (ROS) applications, called Hatchery (Chapter 2). Second, we describe Kotlin∇, a language and type system for differentiable programming, an emerging paradigm in machine learning (Chapter 3). Third, we propose a new algorithm for automatically testing differentiable programs, drawing inspiration from techniques in adversarial and metamorphic testing (Chapter 4), and demonstrate its empirical efficiency in the regression setting. Fourth, we explore a container infrastructure based on Docker, which enables reproducible deployment of ROS applications on the Duckietown platform (Chapter 5). Finally, we reflect on the current state of programming tools for these applications and speculate what intelligent systems programming might look like in the future (Chapter 6).
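
    The abstract does not spell out the testing algorithm, so the following is only a generic, hedged illustration of a property-based derivative check (a finite-difference consistency test, written in Python rather than Kotlin∇, and not the adversarial/metamorphic method proposed in the thesis; the function under test is made up):

        # Illustration only: check an analytic gradient against central finite
        # differences at random inputs; a common sanity test for differentiable code.
        import numpy as np

        def f(x):
            """Toy differentiable program under test."""
            return np.sin(x[0]) * x[1] ** 2 + np.exp(0.1 * x[0])

        def grad_f(x):
            """Hand-written gradient that the test is supposed to validate."""
            return np.array([np.cos(x[0]) * x[1] ** 2 + 0.1 * np.exp(0.1 * x[0]),
                             2.0 * np.sin(x[0]) * x[1]])

        def finite_difference(func, x, eps=1e-6):
            g = np.zeros_like(x)
            for i in range(len(x)):
                e = np.zeros_like(x)
                e[i] = eps
                g[i] = (func(x + e) - func(x - e)) / (2 * eps)
            return g

        def test_gradient(trials=100, tol=1e-4, seed=0):
            rng = np.random.default_rng(seed)
            for _ in range(trials):
                x = rng.uniform(-3, 3, size=2)
                assert np.allclose(grad_f(x), finite_difference(f, x), atol=tol), x
            print(f"gradient agrees with finite differences on {trials} random inputs")

        test_gradient()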

    Towards a complete multiple-mechanism account of predictive language processing [Commentary on Pickering & Garrod]

    Although we agree with Pickering & Garrod (P&G) that prediction-by-simulation and prediction-by-association are important mechanisms of anticipatory language processing, this commentary suggests that they (1) overlook other potential mechanisms that might underlie prediction in language processing, (2) overestimate the importance of prediction-by-association in early childhood, and (3) underestimate the complexity and significance of several factors that might mediate prediction during language processing.