100,654 research outputs found

    Robust matching in an uncertain world

    Get PDF
    ISSN: 1051-4651, Print ISBN: 978-1-4244-7542-1. Finding point correspondences which are consistent with a geometric constraint is one of the cornerstones of many computer vision problems. This is a difficult task because spurious measurements lead to ambiguously matched points and because point locations are uncertain. In this article we address these problems and propose a new robust algorithm that explicitly takes location uncertainty into account. We propose applications to SIFT matching and 3D data fusion.
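    The mechanism at the heart of this approach can be sketched as follows: each putative correspondence is scored by a squared Mahalanobis distance whose covariance combines the location uncertainty of both points, and it is accepted only below a chi-square quantile. A minimal sketch, assuming an affine transform and illustrative names throughout (not the paper's exact formulation):

```python
import numpy as np

# Chi-square 95% quantile with 2 degrees of freedom: a standard gate for
# squared Mahalanobis distances between 2D points.
CHI2_GATE = 5.99

def mahalanobis_sq(x, y, A, t, cov_x, cov_y):
    """Squared Mahalanobis distance between y and the affine image A @ x + t.
    The residual covariance combines both points' location covariances
    (first-order propagation of cov_x through A)."""
    r = y - (A @ x + t)              # 2D residual
    S = cov_y + A @ cov_x @ A.T      # residual covariance
    return float(r @ np.linalg.solve(S, r))

# A correspondence is kept only when its distance falls below the gate, so
# points with larger location covariance get a proportionally wider
# tolerance instead of a one-size-fits-all pixel threshold.
x, y = np.array([10.0, 20.0]), np.array([11.0, 19.5])
A, t = np.eye(2), np.zeros(2)
cov = 0.5 * np.eye(2)
accept = mahalanobis_sq(x, y, A, t, cov, cov) < CHI2_GATE
```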

    Robust matching in an uncertain world

    Get PDF
    Finding a registration between two sets of corresponding 2D or 3D points is one of the keystones of many computer vision tasks. This is difficult since some points may have no correspondence at all, and point locations are often corrupted by measurement noise. In this report we propose new robust algorithms, namely an adaptation of the MSAC algorithm and a new a contrario model. Both are based on statistics over the Mahalanobis distance and explicitly take location uncertainty into account. We outline applications to SIFT keypoint matching and 3D data fusion.
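    A sketch of how the MSAC adaptation could look, with residuals scored by their Mahalanobis distance under a truncated loss (the minimal-sample rigid fit, the fixed gate, and all names are illustrative assumptions; the a contrario model is not reproduced here):

```python
import numpy as np

CHI2_GATE = 5.99  # 95% gate for a 2-dof squared Mahalanobis distance

def estimate_rigid(src, dst):
    """Least-squares rigid fit (rotation + translation) via SVD (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def msac_register(src, dst, covs, iters=500, gate=CHI2_GATE, seed=0):
    """MSAC-style robust registration: residuals are scored by their
    squared Mahalanobis distance, truncated at the gate, rather than
    counted as binary inliers. covs holds one (approximate) residual
    covariance per correspondence."""
    rng = np.random.default_rng(seed)
    inv_covs = np.linalg.inv(covs)                         # (n, 2, 2)
    best_cost, best_model = np.inf, None
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)  # minimal sample
        R, t = estimate_rigid(src[idx], dst[idx])
        r = dst - src @ R.T - t
        d2 = np.einsum('ni,nij,nj->n', r, inv_covs, r)
        cost = np.minimum(d2, gate).sum()                  # truncated MSAC loss
        if cost < best_cost:
            best_cost, best_model = cost, (R, t)
    return best_model, best_cost
```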

    Scalable Robust Kidney Exchange

    Full text link
    In barter exchanges, participants directly trade their endowed goods in a constrained economic setting without money. Transactions in barter exchanges are often facilitated via a central clearinghouse that must match participants even in the face of uncertainty---over participants, and over the existence and quality of potential trades. Leveraging robust combinatorial optimization techniques, we address uncertainty in kidney exchange, a real-world barter market where patients swap (in)compatible paired donors. We provide two scalable robust methods to handle two distinct types of uncertainty in kidney exchange---over the quality and over the existence of a potential match. The latter case directly addresses a weakness of all stochastic-optimization-based methods for the kidney exchange clearing problem, which necessarily require explicit estimates of the probability that a transaction exists---a still-unsolved problem in this nascent market. We also propose a novel, scalable kidney exchange formulation that eliminates the need for an exponential-time constraint-generation process in competing formulations, maintains provable optimality, and serves as a subsolver for our robust approach. For each type of uncertainty we demonstrate the benefits of robustness on real data from a large, fielded kidney exchange in the United States. We conclude by drawing parallels between robustness and notions of fairness in the kidney exchange setting.
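    To make the cycle-cover structure concrete, here is a brute-force toy (emphatically not the paper's scalable formulation): cycles of length at most three are enumerated, each cycle's value is its nominal weight minus a worst-case discount standing in for quality uncertainty, and the best vertex-disjoint set is selected. All names and the discount rule are illustrative assumptions:

```python
import itertools

def find_cycles(adj, max_len=3):
    """Enumerate simple directed 2- and 3-cycles (the usual length cap in
    fielded kidney exchanges), each listed once with its smallest vertex
    first. adj[i] is the set of successors of vertex i."""
    cycles = []
    for i in range(len(adj)):
        for j in adj[i]:
            if i in adj[j] and i < j:
                cycles.append((i, j))                  # 2-cycle
            for k in adj[j]:
                if k not in (i, j) and i in adj[k] and i < j and i < k:
                    cycles.append((i, j, k))           # 3-cycle
    return cycles

def robust_clearing(adj, weight, uncertainty, gamma=1.0):
    """Brute-force robust clearing, for intuition only (exponential time):
    score each cycle by nominal weight minus gamma times its worst edge
    uncertainty, then pick the best vertex-disjoint set of cycles."""
    cycles = find_cycles(adj)
    def score(c):
        edges = list(zip(c, c[1:] + c[:1]))
        return (sum(weight[e] for e in edges)
                - gamma * max(uncertainty[e] for e in edges))
    best, best_val = [], 0.0
    for r in range(1, len(cycles) + 1):
        for subset in itertools.combinations(cycles, r):
            verts = [v for c in subset for v in c]
            if len(verts) == len(set(verts)):          # vertex-disjoint
                val = sum(score(c) for c in subset)
                if val > best_val:
                    best, best_val = list(subset), val
    return best, best_val
```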

    A General Spatio-Temporal Clustering-Based Non-local Formulation for Multiscale Modeling of Compartmentalized Reservoirs

    Full text link
    Representing the reservoir as a network of discrete compartments with neighbor and non-neighbor connections is a fast, yet accurate method for analyzing oil and gas reservoirs. Automatic and rapid detection of coarse-scale compartments with distinct static and dynamic properties is an integral part of such high-level reservoir analysis. In this work we present a novel hybrid framework for reservoir analysis that couples a physics-based non-local multiscale modeling approach with data-driven techniques for automatic detection of clusters from spatial and temporal field data, providing fast and accurate multiscale modeling of compartmentalized reservoirs. This research also adds to the literature by presenting a comprehensive treatment of spatio-temporal clustering for reservoir studies that accounts for the complexities of clustering, the intrinsically sparse and noisy nature of the data, and the interpretability of the outcome. Keywords: Artificial Intelligence; Machine Learning; Spatio-Temporal Clustering; Physics-Based Data-Driven Formulation; Multiscale Modeling
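    One plausible reading of the clustering step, as a minimal sketch (the feature choices, scaling, and use of k-means are illustrative assumptions; the paper's non-local physics coupling is not reproduced): cluster cells on scaled spatial coordinates concatenated with their dynamic response, with a weight trading spatial compactness against dynamic similarity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def compartmentalize(coords, pressure_series, n_compartments=5, alpha=1.0):
    """Cluster reservoir cells into coarse compartments from spatial
    coordinates plus a dynamic signal (e.g., a pressure time series per
    cell). alpha weights spatial compactness against dynamic similarity."""
    spatial = StandardScaler().fit_transform(coords)           # (n_cells, 3)
    dynamic = StandardScaler().fit_transform(pressure_series)  # (n_cells, n_t)
    features = np.hstack([alpha * spatial, dynamic])
    return KMeans(n_clusters=n_compartments, n_init=10,
                  random_state=0).fit_predict(features)
```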

    Named Entity Extraction and Disambiguation: The Reinforcement Effect.

    Get PDF
    Named entity extraction and disambiguation have received much attention in recent years. Typical fields addressing these topics are information retrieval, natural language processing, and the semantic web. Although these tasks are strongly interdependent, almost no existing work examines this dependency. The aim of this paper is to examine that dependency and show how each task affects the other. We conducted experiments on a set of holiday-home descriptions with the aim of extracting and disambiguating toponyms as a representative example of named entities, and experimented with three disambiguation approaches for inferring the country of the holiday home. We examined how the effectiveness of extraction influences the effectiveness of disambiguation, and reciprocally, how filtering out ambiguous names (an activity that depends on the disambiguation process) improves the effectiveness of extraction. Since this, in turn, may improve the effectiveness of disambiguation again, it shows that extraction and disambiguation may reinforce each other.
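    The reinforcement loop can be illustrated with a toy gazetteer (all entries, names, and the majority-vote rule below are invented for illustration): extraction proposes toponyms, disambiguation votes on a country, and names inconsistent with that country are filtered before extraction runs again.

```python
from collections import Counter

GAZETTEER = {  # toy gazetteer: surface form -> candidate countries
    "Paris":  ["France", "United States"],
    "Lyon":   ["France"],
    "Nice":   ["France"],   # also an English adjective, hence ambiguous
    "Berlin": ["Germany", "United States"],
}

def extract(text, skip=frozenset()):
    """Naive extraction: any gazetteer entry occurring in the text,
    minus names the disambiguation step flagged for filtering."""
    return [w for w in text.split() if w in GAZETTEER and w not in skip]

def disambiguate(toponyms):
    """Infer the description's country by majority vote over candidates."""
    votes = Counter(c for t in toponyms for c in GAZETTEER[t])
    return votes.most_common(1)[0][0] if votes else None

text = "Nice holiday home in Lyon near Paris and Berlin"
country = disambiguate(extract(text))                 # -> "France"
skip = {t for t in extract(text) if country not in GAZETTEER[t]}
improved = extract(text, skip=skip)                   # "Berlin" filtered out
```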

    Indeterministic Handling of Uncertain Decisions in Duplicate Detection

    Get PDF
    In current research, duplicate detection is usually treated as a deterministic process in which tuples are either declared to be duplicates or not. Most often, however, it is not completely clear whether two tuples represent the same real-world entity. Deterministic approaches ignore this uncertainty, which in turn can lead to false decisions. In this paper, we present an indeterministic approach for handling uncertain decisions in a duplicate detection process by using a probabilistic target schema. Thus, instead of deciding between multiple possible worlds, all of these worlds can be modeled in the resulting data. This approach minimizes the negative impact of false decisions. Furthermore, the duplicate detection process becomes almost fully automatic, and human effort can be reduced to a large extent. Unfortunately, a fully indeterministic approach is by definition too expensive (in time as well as in storage) and hence impractical. For that reason, we additionally introduce several semi-indeterministic methods for heuristically reducing the set of indeterministically handled decisions in a meaningful way.
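    A minimal sketch of the three-way decision this implies (the thresholds, similarity measure, and reduction heuristic are illustrative assumptions, not the paper's method): pairs above a high threshold are merged, pairs below a low threshold are kept apart, and the band in between is retained indeterministically; a semi-indeterministic variant caps how many uncertain decisions survive.

```python
from difflib import SequenceMatcher

T_HIGH, T_LOW = 0.9, 0.6          # assumed decision thresholds
MID = (T_HIGH + T_LOW) / 2

def classify_pairs(records):
    """Three-way decisions over record pairs: certain duplicate, certain
    non-duplicate, or indeterministic (kept with its probability, so both
    possible worlds survive into the resulting data)."""
    sure, uncertain = [], []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            p = SequenceMatcher(None, records[i], records[j]).ratio()
            if p >= T_HIGH:
                sure.append((i, j))
            elif p > T_LOW:
                uncertain.append(((i, j), p))
    return sure, uncertain

def semi_indeterministic(uncertain, k=10):
    """Heuristic reduction: keep only the k most ambiguous decisions
    (probability closest to the band's midpoint) as indeterministic and
    resolve the rest deterministically by nearest threshold."""
    ranked = sorted(uncertain, key=lambda d: abs(d[1] - MID))
    keep = ranked[:k]
    resolved = [(pair, p >= MID) for pair, p in ranked[k:]]
    return keep, resolved
```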

    Controlling for Unobserved Confounds in Classification Using Correlational Constraints

    Full text link
    As statistical classifiers become integrated into real-world applications, it is important to consider not only their accuracy but also their robustness to changes in the data distribution. In this paper, we consider the case where there is an unobserved confounding variable z that influences both the features x and the class variable y. When the influence of z changes from training to testing data, we find that classifier accuracy can degrade rapidly. In our approach, we assume that we can predict the value of z at training time with some error. The prediction for z is then fed to Pearl's back-door adjustment to build our model. Because of the attenuation bias caused by measurement error in z, standard approaches to controlling for z are ineffective. In response, we propose a method to properly control for the influence of z by first estimating its relationship with the class variable y, then updating predictions for z to match that estimated relationship. By adjusting the influence of z, we show that we can build a model that exceeds competing baselines on accuracy as well as on robustness over a range of confounding relationships.
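    Two of the ingredients named above can be sketched directly; the classical attenuation correction below (Spearman's) stands in for the paper's relationship-matching step, and the toy numbers are invented:

```python
import numpy as np

def backdoor_adjust(p_y_given_xz, p_z):
    """Pearl's back-door adjustment: p(y=1 | do(x)) = sum_z p(y=1 | x, z) p(z).
    p_y_given_xz has shape (n_x, n_z); p_z has shape (n_z,)."""
    return p_y_given_xz @ p_z

def deattenuate(corr_observed, reliability_z):
    """Classical correction for attenuation: measurement error in a proxy
    for z shrinks the observed z-y correlation by sqrt(reliability)."""
    return corr_observed / np.sqrt(reliability_z)

# Toy usage: two treatment levels, one binary confounder z.
p_y_given_xz = np.array([[0.2, 0.8],    # p(y=1 | x=0, z=0..1)
                         [0.4, 0.9]])   # p(y=1 | x=1, z=0..1)
p_z = np.array([0.7, 0.3])
print(backdoor_adjust(p_y_given_xz, p_z))   # p(y=1 | do(x)) per x
```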