12 research outputs found

    Supervised Feature Compression based on Counterfactual Analysis

    Counterfactual Explanations are becoming a de facto standard in post-hoc interpretable machine learning. For a given classifier and an instance classified in an undesired class, a counterfactual explanation is a small perturbation of that instance that changes the classification outcome. This work leverages Counterfactual Explanations to detect the important decision boundaries of a pre-trained black-box model. This information is then used to build a supervised discretization of the features in the dataset with tunable granularity. On the discretized dataset, a smaller and therefore more interpretable Decision Tree can be trained, which also enhances the stability and robustness of the baseline Decision Tree. Numerical results on real-world datasets show the effectiveness of the approach in terms of accuracy and sparsity compared to the baseline Decision Tree. (29 pages, 12 figures)
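As a rough illustration of the idea (not the authors' implementation), the decision-boundary detection can be sketched as a bisection search for the feature value at which a black-box prediction flips; the crossing points then serve as cut points for a supervised discretization. The function names and the toy classifier below are hypothetical:

```python
import numpy as np

def counterfactual_cut(clf, x, feature, lo, hi, tol=1e-4):
    """Bisection search for the value of one feature at which the classifier's
    prediction flips, holding the other features fixed. The crossing point
    approximates a local decision boundary of the black box."""
    x_lo, x_hi = x.copy(), x.copy()
    x_lo[feature], x_hi[feature] = lo, hi
    y_lo = clf(x_lo)
    if clf(x_hi) == y_lo:          # no boundary crossed on this segment
        return None
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        x_mid = x.copy()
        x_mid[feature] = mid
        if clf(x_mid) == y_lo:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def discretize(values, cuts):
    """Map raw feature values to interval indices given the cut points."""
    return np.searchsorted(np.sort(np.asarray(cuts)), values)

# Toy black box: class 1 iff feature 0 exceeds 0.37.
clf = lambda x: int(x[0] > 0.37)
cut = counterfactual_cut(clf, np.array([0.1, 0.5]), feature=0, lo=0.0, hi=1.0)
# cut is close to the true boundary 0.37
```

A small decision tree trained on the discretized features would then only ever split at these boundary-derived cut points.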

    A Fully Instantiated Approach to HTN Planning

    Many planning techniques have been developed to allow autonomous systems to act and make decisions based on their perception of the environment. Among these techniques, HTN (Hierarchical Task Network) planning is one of the most widely used in practice. Unlike classical approaches to planning, HTN planning works by recursively decomposing a complex task into subtasks until each subtask can be achieved by executing an action. This hierarchical view of planning allows a richer representation of planning problems, while guiding the search for a solution plan and providing knowledge to the underlying algorithms. In this article, we propose a new approach to HTN planning in which, as in classical planning, we instantiate the whole set of planning operators before searching for a solution plan. This approach has proven itself in classical planning and is used by most contemporary planners, but it has, to our knowledge, never been applied to HTN planning. Instantiating the planning operators is nevertheless necessary for developing efficient heuristics and for encoding HTN planning problems into other formalisms such as SAT or CSP. In the remainder of the article we present a generic instantiation mechanism. This mechanism implements simplification techniques, inspired by those used in classical planning, that reduce the complexity of the instantiation process. Finally, we present results obtained on a set of problems from the international planning competitions with a modified version of the SHOP planner that uses our instantiation technique.
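The grounding step the article argues for can be sketched, under heavy simplification, as enumerating all object bindings for each operator schema and pruning bindings whose static preconditions can never hold; the schema format and the `static_` naming convention below are invented for illustration:

```python
from itertools import product

def ground_operators(schemas, objects, static_facts):
    """Instantiate every operator schema over all object tuples, pruning
    groundings whose static preconditions can never hold (a simplification
    technique borrowed from classical planning)."""
    grounded = []
    for name, params, precond in schemas:
        for combo in product(objects, repeat=len(params)):
            binding = dict(zip(params, combo))
            facts = [(p, tuple(binding[a] for a in args)) for p, args in precond]
            # Prune: a static predicate absent from the initial facts can
            # never become true, so the grounding is unreachable.
            if all(f in static_facts for f in facts if f[0].startswith("static_")):
                grounded.append((name, combo))
    return grounded

schemas = [("move", ("?from", "?to"),
            [("static_adjacent", ("?from", "?to"))])]
objects = ["a", "b", "c"]
static_facts = {("static_adjacent", ("a", "b")), ("static_adjacent", ("b", "c"))}
ops = ground_operators(schemas, objects, static_facts)
# only the two adjacent pairs survive out of the 9 candidate groundings
```

Real grounders also propagate reachability through task decompositions; this sketch only shows the operator-level pruning.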

    Towards a Cognitive Probabilistic Representation of Space for Mobile Robots

    Robots are rapidly evolving from factory workhorses into robot companions. The future of robots as our companions depends highly on their ability to understand, interpret, and represent the environment efficiently and consistently, in a way that is comprehensible to humans. This paper moves in that direction. It suggests a hierarchical probabilistic representation of space that is based on objects: a global topological representation of places, with object graphs serving as local maps. Experiments on place classification and place recognition are reported to demonstrate the applicability of such a representation for understanding space and thereby performing spatial cognition. The theme of the work is thus representation for spatial cognition.
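One way such an object-based local map could support place classification (a hedged sketch, not the paper's model) is a naive-Bayes classifier over the objects observed at a place:

```python
import math
from collections import Counter

class ObjectPlaceClassifier:
    """Naive-Bayes place classifier: a place category (e.g. kitchen, office)
    is recognised from the objects observed in it, mirroring the idea that
    objects characterise places."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing
        self.counts, self.totals = {}, Counter()

    def fit(self, labelled_scenes):
        for place, objects in labelled_scenes:
            self.counts.setdefault(place, Counter()).update(objects)
            self.totals[place] += 1

    def predict(self, objects):
        vocab = {o for c in self.counts.values() for o in c}
        best, best_lp = None, -math.inf
        for place, c in self.counts.items():
            # log prior + smoothed log likelihood of each observed object
            lp = math.log(self.totals[place] / sum(self.totals.values()))
            denom = sum(c.values()) + self.alpha * len(vocab)
            for o in objects:
                lp += math.log((c[o] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = place, lp
        return best

clf = ObjectPlaceClassifier()
clf.fit([("kitchen", ["oven", "sink", "mug"]),
         ("office", ["desk", "monitor", "mug"])])
# clf.predict(["sink", "oven"]) → "kitchen"
```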

    INTELLIGENT VISION-BASED NAVIGATION SYSTEM

    This thesis presents a complete vision-based navigation system that can plan and follow an obstacle-avoiding path to a desired destination on the basis of an internal map updated with information gathered from its visual sensor. For vision-based self-localization, the system uses new floor-edge-specific filters for detecting floor edges and their pose, a new algorithm for determining the orientation of the robot, and a new procedure for selecting the initial positions in the self-localization procedure. Self-localization is based on matching visually detected features with those stored in a prior map. For planning, the system demonstrates for the first time a real-world application of the neural-resistive grid method to robot navigation. The neural-resistive grid is modified with a new connectivity scheme that allows the collision-free space of a robot with finite dimensions to be represented via divergent connections between the spatial memory layer and the neuro-resistive grid layer. A new control system is proposed: it uses a Smith Predictor architecture modified for navigation applications and for the intermittent, delayed feedback typical of artificial vision. A receding-horizon control strategy is implemented using Normalised Radial Basis Function nets as path encoders, to ensure continuous motion during the delay between measurements. The system is tested in a simplified environment where an obstacle placed anywhere is detected visually and integrated in the path-planning process. The results show the validity of the control concept and the crucial importance of a robust vision-based self-localization process.
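The core of resistive-grid planning can be sketched as relaxation toward a discrete harmonic potential (the neural and divergent-connection details of the thesis are omitted here); because a harmonic potential has no local maxima, steepest ascent on the converged field yields a collision-free path. The grid layout below is invented:

```python
import numpy as np

def resistive_grid(occ, goal, iters=3000):
    """Jacobi relaxation on a resistive grid: obstacle cells are clamped to
    potential 0, the goal to 1, and every free cell converges to the mean of
    its four neighbours (a discrete harmonic function)."""
    v = np.zeros(occ.shape)
    free = ~occ
    for _ in range(iters):
        avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
               np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        v = np.where(free, avg, 0.0)
        v[goal] = 1.0                           # clamp the goal potential
    return v

def steepest_ascent(v, start, goal, occ):
    """Follow the potential uphill from start until the goal is reached."""
    path, cur = [start], start
    for _ in range(v.size):                     # safety bound on the walk
        if cur == goal:
            break
        r, c = cur
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if not occ[r + dr, c + dc]]
        cur = max(nbrs, key=lambda p: v[p])
        path.append(cur)
    return path

# 9x9 world: border cells and a wall across row 4 (gap at column 7) are occupied.
occ = np.zeros((9, 9), dtype=bool)
occ[0, :] = occ[-1, :] = occ[:, 0] = occ[:, -1] = True
occ[4, 1:7] = True
v = resistive_grid(occ, goal=(7, 4))
path = steepest_ascent(v, (1, 4), (7, 4), occ)
# the path detours around the wall through the gap at (4, 7)
```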

    An evolutionary approach to optimising neural network predictors for passive sonar target tracking

    Object tracking is important in autonomous robotics, military applications, financial time-series forecasting, and mobile systems. In order to track correctly through clutter, algorithms that predict the next value in a time series are essential. The competence of standard machine learning techniques to produce bearing prediction estimates was examined. The results show that the classification-based algorithms produce more accurate estimates than the state-of-the-art statistical models. Artificial Neural Networks (ANNs) and K-Nearest Neighbour were both used, demonstrating that the technique is not specific to a single classifier.
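A minimal sketch of this style of next-value prediction, here with k-nearest-neighbour regression on lagged windows rather than the thesis's ANN models, on an invented bearing series:

```python
import numpy as np

def knn_predict_next(series, window=3, k=3):
    """Predict the next value of a time series by k-nearest-neighbour
    regression on lagged windows: find the k historical windows closest to
    the most recent one and average the values that followed them."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])               # value following each window
    query = np.array(series[-window:])
    d = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(y[nearest]))

bearings = list(range(10))                      # 0, 1, ..., 9: a linear drift
pred = knn_predict_next(bearings, window=3, k=1)
# the nearest past window [6, 7, 8] was followed by 9, so pred == 9.0
```

Note the characteristic behaviour on a monotone trend: pure nearest-neighbour lookup replays past outcomes, so it lags an ever-increasing series, which is one reason learned regressors such as ANNs can do better.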

    Memory Management for Loop Closure Detection for Real-Time Mapping by a Mobile Robot

    To carry out complex tasks, an autonomous robot must be able to map its environment and localize itself within it. In the long term, to correct its global map, it must detect places it has already visited. This is one of the most important capabilities in simultaneous localization and mapping (SLAM), but also its main limitation: the computational load grows with the size of the environment, until the algorithms can no longer run in real time. To address this problem, the objective is to develop a new algorithm for real-time detection of previously visited places that works regardless of the size of the environment. Loop closure detection, i.e. the recognition of already visited places, is performed by a robust probabilistic algorithm that evaluates the similarity between images acquired by a camera at regular intervals. To manage the computational load of this algorithm efficiently, the robot's memory is divided into long-term memory (a database) and short-term and working memories (RAM). The working memory keeps the most characteristic images of the environment in order to satisfy the real-time constraint. When the real-time limit is reached, the images of the places seen least often for the longest time are transferred from working memory to long-term memory. These transferred images can be retrieved from long-term memory back into working memory when a neighbouring image in working memory receives a high probability that the robot has already passed through that place, thereby increasing the ability to detect already visited places with the subsequent images.
The system was tested with data previously collected on the Université de Sherbrooke campus to evaluate its performance over long distances, as well as with four other standard datasets to evaluate its ability to adapt to different environments. The results suggest that the algorithm meets the stated objectives and outperforms existing approaches. This new loop closure detection algorithm can be used directly as a topological SLAM technique, or in parallel with an existing SLAM technique, to detect the places already visited by an autonomous robot. When a loop closure is detected, the global map can then be corrected using the new constraint created between the new location and the similar old one.
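The transfer policy between working and long-term memory might be sketched as follows; the weight bookkeeping and eviction rule are simplified stand-ins for the actual algorithm:

```python
class MemoryManager:
    """Sketch of a working-memory / long-term-memory policy: each location
    carries a weight counting how often it was seen; when working memory
    exceeds its real-time budget, the oldest of the least-weighted locations
    is moved to long-term memory, and a location is retrieved back into
    working memory when it is observed (or scores well) again."""
    def __init__(self, wm_capacity):
        self.wm_capacity = wm_capacity
        self.working, self.long_term = {}, {}   # location id -> weight

    def observe(self, loc_id):
        if loc_id in self.long_term:            # retrieval on revisit
            self.working[loc_id] = self.long_term.pop(loc_id)
        self.working[loc_id] = self.working.get(loc_id, 0) + 1
        if len(self.working) > self.wm_capacity:
            self._transfer()

    def _transfer(self):
        # dicts preserve insertion order, so among the minimal weights the
        # earliest-inserted (oldest) location is evicted first
        victim = min(self.working, key=lambda i: self.working[i])
        self.long_term[victim] = self.working.pop(victim)

mm = MemoryManager(wm_capacity=2)
for loc in ["a", "a", "b", "c"]:
    mm.observe(loc)
# "b" (weight 1, older than "c") was transferred to long-term memory
```

In the real system the retrieval is triggered by a high loop-closure probability on a neighbouring image, not by direct re-observation; this sketch collapses the two.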

    Some Bayes methods for biclustering and vector data with binary coordinates

    We consider Bayes methods for two problems that share a common need to partition index sets encoding commonalities between observations. The first is a biclustering problem. The second is inference for mixture models for p-vectors with binary coordinates. Standard one-way clustering methods form homogeneous groups in a set of objects. Biclustering methods simultaneously cluster rows and columns of a rectangular dataset so that responses are homogeneous within every row-cluster by column-cluster group. Assuming that data entries follow a normal distribution with a bicluster-specific mean term and a common variance, we propose a Bayes methodology for biclustering and corresponding Markov chain Monte Carlo (MCMC) algorithms. Our proposed method not only identifies homogeneous biclusters, but also generates plausible predictions for missing/unobserved entries in the rectangular dataset, as illustrated through simulation studies and applications to real datasets. In the second problem, we propose a tractable symmetric distribution for modeling multivariate vectors of 0s and 1s in p dimensions that allows for nontrivial amounts of variation around some central value. We then consider Bayesian analysis of mixture models whose component distributions have the above form. Inferences are made from the posterior samples generated by MCMC algorithms. We also extend the proposed Bayesian mixture model analysis to datasets with missing entries. Model performance is illustrated through simulation studies and applications to real datasets.
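For the biclustering model, conditioning on fixed row and column partitions makes the normal-normal update conjugate. A hedged sketch (with invented helper names, assuming known variances `sigma2` and `tau2` and a N(mu0, tau2) prior on each bicluster mean) of the posterior bicluster means and the resulting imputation of missing entries:

```python
import numpy as np

def bicluster_posterior_means(Y, row_z, col_z, sigma2=1.0, tau2=10.0, mu0=0.0):
    """Given row/column partitions, each bicluster mean has a conjugate
    normal posterior: a precision-weighted average that shrinks the cell
    average toward the prior mean mu0. Missing entries (NaN) are imputed
    by their bicluster's posterior mean."""
    R, C = max(row_z) + 1, max(col_z) + 1
    M = np.zeros((R, C))
    for r in range(R):
        for c in range(C):
            cells = Y[np.ix_([i for i, z in enumerate(row_z) if z == r],
                             [j for j, z in enumerate(col_z) if z == c])]
            obs = cells[~np.isnan(cells)]
            n = obs.size
            # conjugate normal update: (prior precision * mu0 + data) / precisions
            M[r, c] = (mu0 / tau2 + obs.sum() / sigma2) / (1 / tau2 + n / sigma2)
    imputed = Y.copy()
    for i, zi in enumerate(row_z):
        for j, zj in enumerate(col_z):
            if np.isnan(imputed[i, j]):
                imputed[i, j] = M[zi, zj]
    return M, imputed

Y = np.array([[5.0, 5.0, 0.0],
              [5.0, np.nan, 0.0]])
M, Yhat = bicluster_posterior_means(Y, row_z=[0, 0], col_z=[0, 0, 1])
# the missing cell is filled with the (0, 0) bicluster's posterior mean
```

The full method of course also samples the partitions themselves via MCMC; this sketch shows only the conjugate step conditional on a partition.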

    Cold-Start Collaborative Filtering

    Collaborative Filtering (CF) is a technique for generating personalised recommendations for a user from a collection of correlated past preferences. In general, the effectiveness of CF depends greatly on the amount of available information about the target user and the target item. The cold-start problem, i.e. the difficulty of making recommendations when the users or the items are new, remains a great challenge for CF. Traditionally, this problem is tackled by resorting to an additional interview process to establish the user (item) profile before making any recommendations; during that process the user's information need is not addressed. In this thesis, however, we argue that recommendations should preferably be provided right from the beginning, and that the goal of solving the cold-start problem should be to maximise the overall recommendation utility over all interactions with the recommender system. In other words, we should not distinguish between the information-gathering and recommendation-making phases, but seamlessly integrate them. This mechanism naturally addresses the cold-start problem, as any user (item) can immediately receive sequential recommendations without providing extra information beforehand. This thesis solves the cold-start problem in an interactive setting by focusing on four interconnected aspects. First, we consider a continuous sequential recommendation process with CF and relate it to the exploitation-exploration (EE) trade-off. By employing probabilistic matrix factorization, we obtain a structured decision space and are thus able to leverage several EE algorithms, such as Thompson sampling and upper confidence bounds, to select items. Second, we extend the sequential recommendation process to a batch mode where multiple recommendations are made at each interaction stage.
We specifically discuss the case of two consecutive interaction stages and model it with a partially observable Markov decision process (POMDP) to obtain its exact theoretical solution. Through an in-depth analysis of the POMDP value-iteration solution, we find that an exact solution can be abstracted as selecting users (items) that are not only highly relevant to the target according to the initial-stage information, but also highly correlated with other potential users (items) for the next stage. Third, we consider intra-stage recommendation optimisation and focus on the problem of personalised item diversification. We reformulate the latent factor models using the mean-variance analysis from portfolio theory in economics. The resulting portfolio ranking algorithm naturally captures the user's interest range and the uncertainty of the user's preferences through the variance of the learned user latent factors, leading to a diversified item list adapted to the individual user. Finally, we relate the diversification algorithm back to the interactive process by considering inter-stage joint portfolio diversification, where the recommendations are optimised jointly with the user's past preference records.
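The Thompson-sampling item selection of the first aspect can be sketched as drawing a user vector from its posterior over latent factors and ranking items by the sampled score; the toy item factors and posterior below are invented:

```python
import numpy as np

def thompson_select(mu_user, Sigma_user, item_factors, rng):
    """One step of Thompson sampling in latent-factor space: draw a user
    vector from its posterior N(mu, Sigma) and recommend the item whose
    factor vector gives the largest sampled score. Uncertain (cold-start)
    users get exploratory draws; well-known users get exploitative ones."""
    u = rng.multivariate_normal(mu_user, Sigma_user)
    scores = item_factors @ u                   # predicted ratings
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
items = np.array([[1.0, 0.0],                  # item 0 loads on factor 0
                  [0.0, 1.0]])                 # item 1 loads on factor 1
# a confident posterior pointing at the first latent dimension
pick = thompson_select(np.array([1.0, 0.0]), 1e-6 * np.eye(2), items, rng)
# with negligible uncertainty the sampled vector ≈ mu, so item 0 wins
```

For a genuinely cold user one would set `Sigma_user` large, so successive draws scatter across the factor space and the system explores before converging on the user's true preferences.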