9 research outputs found

    UJM at CLEF in Author Verification based on optimized classification trees

    Get PDF
    http://ceur-ws.org/Vol-1180/CLEF2014wn-Pan-FreryEt2014.pdf
    This article describes our proposal for the Author Identification task of the PAN CLEF 2014 Challenge. We adopt a machine learning approach based on several representations of the texts and on optimized decision trees that take various attributes as input and are learned separately for each training corpus. Our method ranked 2nd overall, with an AUC of 70.7% and a C@1 of 68.4%, and placed between 1st and 6th on the six individual corpora.
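    As a rough illustration of the setup the abstract describes (one optimized decision tree per training corpus, evaluated with AUC and C@1), here is a minimal sketch. It is not the authors' pipeline: the features are synthetic, the hyper-parameter grid is arbitrary, and the "unanswered" score band in the C@1 helper is an assumption (PAN treats a score of exactly 0.5 as a left-open problem).

```python
# Minimal sketch (not the authors' exact method): a grid-search-optimized decision
# tree per corpus, scored with AUC and the PAN C@1 measure.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def c_at_1(y_true, y_score, lo=0.45, hi=0.55):
    """C@1 = (n_correct + n_unanswered * n_correct / n) / n.
    Here scores inside (lo, hi) are treated as unanswered -- an assumption."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    unanswered = (y_score > lo) & (y_score < hi)
    answered = ~unanswered
    n = len(y_true)
    n_c = int(np.sum((y_score[answered] > 0.5) == (y_true[answered] == 1)))
    n_u = int(np.sum(unanswered))
    return (n_c + n_u * n_c / n) / n

def fit_corpus_model(X_train, y_train):
    """Optimize a small decision tree separately for one training corpus."""
    grid = {"max_depth": [2, 3, 4, 5], "min_samples_leaf": [1, 2, 5]}
    search = GridSearchCV(DecisionTreeClassifier(random_state=0), grid,
                          cv=3, scoring="roc_auc")
    search.fit(X_train, y_train)
    return search.best_estimator_

# Synthetic per-problem feature vectors (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)
model = fit_corpus_model(X[:150], y[:150])
scores = model.predict_proba(X[150:])[:, 1]
print("AUC:", roc_auc_score(y[150:], scores), "C@1:", c_at_1(y[150:], scores))
```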

    Deep Neural Networks for Encrypted Inference with TFHE

    Get PDF
    Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decrypting it. FHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information. A common way to provide a valuable service on such data is through machine learning, and, at this time, neural networks are the dominant machine learning model for unstructured data. In this work we show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows computation circuits of arbitrary depth. We discuss these constraints and present DNN architectures for two computer vision tasks. We benchmark the architectures using the Concrete stack, an open-source implementation of TFHE.
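    As a rough sketch of the workflow the abstract describes, the example below defines a small, quantization-friendly PyTorch network and compiles it with Concrete ML (part of the Concrete stack). The exact API names (compile_torch_model, the n_bits argument, the fhe= inference mode) are quoted from the Concrete ML documentation as recalled here and should be checked against the installed release; the architecture and calibration data are purely illustrative.

```python
# Hedged sketch: a shallow, low-precision-friendly DNN compiled for TFHE inference
# with Concrete ML. Verify the compile/inference API against the current release.
import numpy as np
import torch
from torch import nn
from concrete.ml.torch.compile import compile_torch_model  # assumed import path

class SmallCNN(nn.Module):
    """Shallow network built from FHE-friendly ops (conv/linear + ReLU)."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3, stride=2)
        self.fc1 = nn.Linear(4 * 13 * 13, 32)
        self.fc2 = nn.Linear(32, n_classes)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = torch.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)

model = SmallCNN()
calib = torch.randn(100, 1, 28, 28)  # calibration inputs used for quantization
quantized_module = compile_torch_model(model, calib, n_bits=6)

# FHE-simulated inference on one sample; fhe="execute" would run real encrypted inference.
sample = np.random.randn(1, 1, 28, 28).astype(np.float32)
logits = quantized_module.forward(sample, fhe="simulate")
print(logits.shape)
```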

    Privacy-Preserving Tree-Based Inference with Fully Homomorphic Encryption

    Get PDF
    Privacy-enhancing technologies (PETs) have been proposed as a way to protect the privacy of data while still allowing for data analysis. In this work, we focus on Fully Homomorphic Encryption (FHE), a powerful tool that allows arbitrary computations to be performed on encrypted data. FHE has received a lot of attention in the past few years and has reached realistic execution times and correctness. More precisely, we explain in this paper how we apply FHE to tree-based models to obtain state-of-the-art solutions over encrypted tabular data. We show that our method is applicable to a wide range of tree-based models, including decision trees, random forests and gradient-boosted trees, and has been implemented within the Concrete-ML library, which is open-source at https://github.com/zama-ai/concrete-ml. On a selected set of use cases, we demonstrate that our FHE version is very close to the unprotected version in terms of accuracy.
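    Since the abstract points to the open-source Concrete-ML library, here is a minimal sketch of what tree-based encrypted inference looks like with its scikit-learn-style interface. The class name, the compile() step, the n_bits quantization argument and the fhe="execute" prediction flag follow the library's documented usage as recalled here and may differ between releases.

```python
# Hedged sketch of tree-based inference on encrypted tabular data with Concrete-ML.
# Check the fit / compile / predict(..., fhe=...) API against the installed version.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import DecisionTreeClassifier  # quantized, FHE-compatible tree

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=4, n_bits=6)  # n_bits controls quantization
model.fit(X_train, y_train)

# Clear-data accuracy, for reference against the encrypted version.
print("clear accuracy:", model.score(X_test, y_test))

# Compile the trained tree to an FHE circuit, using the training data as input set.
model.compile(X_train)

# Encrypted inference on a few samples (slow; fhe="simulate" skips real encryption).
y_pred_fhe = model.predict(X_test[:5], fhe="execute")
print("FHE predictions:", y_pred_fhe)
```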

    Apprentissage Ensembliste sur des flux de données extrêmement déséquilibrés (Ensemble Learning on Extremely Imbalanced Data Streams)

    No full text
    Machine learning is the study of designing algorithms that learn from training data to achieve a specific task. The resulting model is then used to predict over new (unseen) data points without any outside help. This data can take many forms, such as images (matrices of pixels), signals (sounds, ...), transactions (age, amount, merchant, ...) or logs (time, alerts, ...). Datasets may be defined to address a specific task such as object recognition, voice identification, anomaly detection, etc. In these tasks, the knowledge of the expected outputs encourages a supervised learning approach, in which every observed data point is assigned a label that defines what the model's prediction should be. For example, in object recognition, an image could be associated with the label "car", which tells the learning algorithm that a car is contained somewhere in this picture. This is in contrast with unsupervised learning, where the task at hand has no explicit labels. For example, one popular topic in unsupervised learning is to discover underlying structures contained in visual data (images), such as the geometric forms of objects, lines or depth, before learning a specific task. This kind of learning is obviously much harder, as there might be a potentially infinite number of concepts to grasp in the data. In this thesis, we focus on a specific scenario of the supervised learning setting: 1) the label of interest is under-represented (e.g. anomalies) and 2) the dataset grows over time as we receive data from real-life events (e.g. credit card transactions). In fact, these two settings are very common in the industrial domain in which this thesis takes place.

    Author Identification by Automatic Learning

    No full text

    Efficient top rank optimization with gradient boosting for supervised anomaly detection

    No full text
    In this paper we address the anomaly detection problem in a supervised setting where positive examples may be very sparse. We tackle this task with a learning-to-rank strategy, optimizing a differentiable smoothed surrogate of the so-called Average Precision (AP). Despite its non-convexity, we show how to use this surrogate efficiently in a stochastic gradient boosting framework. We show that optimizing AP ranks the top alerts much better than state-of-the-art measures do, and we demonstrate on anomaly detection tasks that the benefit of our method is even stronger in highly imbalanced scenarios.
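    To make the idea of a differentiable AP surrogate concrete, the sketch below replaces the hard indicator "example j is ranked above example i" with a sigmoid, yielding a smooth approximation of Average Precision that could serve as a custom objective inside a boosting loop. This is a generic illustration of the technique, not the paper's exact surrogate or its gradient boosting integration; the temperature parameter and all names are hypothetical.

```python
# Hedged sketch: sigmoid-smoothed Average Precision (AP). A generic differentiable
# approximation, not the paper's formulation: the comparison score_j >= score_i is
# smoothed by a sigmoid with temperature `tau`.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smoothed_ap(scores, labels, tau=0.1):
    """For each positive i, approximate its rank and the number of positives ranked
    at or above it with sigmoid-smoothed comparisons, then average the ratios."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = np.where(labels == 1)[0]
    ap = 0.0
    for i in pos:
        # soft count of all examples scored at least as high as example i
        soft_rank = np.sum(sigmoid((scores - scores[i]) / tau))
        # soft count of positives scored at least as high as example i
        soft_pos_above = np.sum(sigmoid((scores[pos] - scores[i]) / tau))
        ap += soft_pos_above / soft_rank
    return ap / max(len(pos), 1)

# Toy check: a ranking that puts the rare positives first gets a higher smoothed AP.
labels = np.array([1, 0, 0, 1, 0, 0, 0, 0])
good = np.array([0.9, 0.2, 0.1, 0.8, 0.3, 0.2, 0.1, 0.0])
bad = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.6, 0.5, 0.4])
print(smoothed_ap(good, labels), ">", smoothed_ap(bad, labels))
```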

    Online Non-Linear Gradient Boosting in Multi-Latent Spaces

    No full text
    Gradient boosting is a popular ensemble method that linearly combines diverse, weak hypotheses to build a strong classifier. In this work, we propose a new Online Non-Linear gradient Boosting (ONLB) algorithm in which we jointly learn different combinations of the same set of weak classifiers in order to capture the idiosyncrasies of the target concept. To expand the expressiveness of the final model, our method leverages the non-linear complementarity of these combinations. We perform an experimental study showing that ONLB (i) outperforms recent online boosting methods in terms of both convergence rate and accuracy and (ii) learns diverse and useful new latent spaces.
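    The core architectural idea, several combinations of the same weak learners whose outputs are then merged non-linearly, can be sketched as follows. This is a simplified batch illustration of the structure only, not the authors' online algorithm: the latent projections here are random rather than jointly learned, and all sizes and names are hypothetical.

```python
# Hedged sketch of the multi-latent-space idea behind ONLB: K different linear
# combinations of the same weak learners form K latent features, merged by a small
# non-linear combiner. Batch, simplified; ONLB itself is online and trained jointly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) A shared pool of weak learners (decision stumps on random feature subsets).
rng = np.random.default_rng(0)
stumps = []
for k in range(20):
    feats = rng.choice(X_tr.shape[1], size=5, replace=False)
    stump = DecisionTreeClassifier(max_depth=1, random_state=k).fit(X_tr[:, feats], y_tr)
    stumps.append((feats, stump))

def weak_outputs(X):
    """Stack the weak learners' probabilistic outputs into one feature matrix."""
    return np.column_stack([s.predict_proba(X[:, f])[:, 1] for f, s in stumps])

H_tr, H_te = weak_outputs(X_tr), weak_outputs(X_te)

# 2) K different linear combinations of the same weak outputs = K latent spaces.
K = 4
projections = rng.normal(size=(H_tr.shape[1], K))
Z_tr, Z_te = H_tr @ projections, H_te @ projections

# 3) A small non-linear combiner on top of the latent spaces.
combiner = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
combiner.fit(Z_tr, y_tr)

# Baseline: a single linear combination of the weak outputs.
linear = LogisticRegression(max_iter=2000).fit(H_tr, y_tr)
print("non-linear multi-latent:", combiner.score(Z_te, y_te))
print("single linear combination:", linear.score(H_te, y_te))
```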