19 research outputs found

    On the Usefulness of Similarity Based Projection Spaces for Transfer Learning

    No full text
    talk: http://videolectures.net/simbad2011_morvant_transfer/, 16 pages. International audience. Similarity functions are widely used in many machine learning and pattern recognition tasks. We consider here a recent framework for binary classification, proposed by Balcan et al., that allows learning in a potentially non-geometrical space based on good similarity functions. This framework generalizes the notion of kernels used in support vector machines, in the sense that it allows one to use similarity functions that need not be positive semi-definite nor symmetric. The similarities are then used to define an explicit projection space in which a linear classifier with good generalization properties can be learned. In this paper, we propose to study experimentally the usefulness of similarity-based projection spaces for transfer learning issues. More precisely, we consider the problem of domain adaptation, where the distributions generating the learning data and the test data are somewhat different. We consider the case where no information on the test labels is available. We show that a simple renormalization of a good similarity function, taking the test data into account, allows us to learn classifiers that perform better on the target distribution for difficult adaptation problems. Moreover, this normalization always helps to improve the model when we try to regularize the similarity-based projection space in order to bring the two distributions closer. We provide experiments on a toy problem and on a real image annotation task.
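    As an illustrative sketch only (not the authors' implementation), the following Python snippet shows the general idea: examples are mapped into an explicit space whose coordinates are similarities to a set of landmark points, and a linear classifier is trained there. The similarity function and the per-landmark normalization using the unlabeled target sample are hypothetical stand-ins for the renormalization studied in the paper.

        import numpy as np
        from sklearn.svm import LinearSVC

        def similarity(a, b, gamma=0.5):
            # Hypothetical similarity; it need not be PSD or symmetric.
            return np.exp(-gamma * np.linalg.norm(a - b))

        def project(X, landmarks, norm=None):
            # Coordinates of the explicit projection space = similarities to landmarks.
            S = np.array([[similarity(x, l) for l in landmarks] for x in X])
            return S if norm is None else S / norm

        rng = np.random.default_rng(0)
        X_src = rng.normal(size=(100, 5)); y_src = (X_src[:, 0] > 0).astype(int)
        X_tgt = rng.normal(loc=0.3, size=(50, 5))   # unlabeled target sample

        landmarks = X_src[rng.choice(len(X_src), 20, replace=False)]
        # One way to "take the test data into account" (an assumption, not the
        # paper's exact scheme): rescale each coordinate by its mean similarity
        # on the target sample.
        norm = project(X_tgt, landmarks).mean(axis=0) + 1e-12

        clf = LinearSVC().fit(project(X_src, landmarks, norm), y_src)
        target_predictions = clf.predict(project(X_tgt, landmarks, norm))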

    A Review of Supervised Machine Learning Applied to Ageing Research

    Get PDF
    Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been published recently, so it is timely to review these works and discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: specific types of DNA repair are linked with ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; and ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advancing our knowledge and has provided novel insights on ageing, yet future work should place greater emphasis on validating the predictions.
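    A minimal sketch of the supervised set-up described above, on entirely synthetic data (the features and labels below are hypothetical and not taken from any of the reviewed studies): rows stand for genes, columns for features such as network degree, and the model infers the "ageing-related" annotation for held-out genes.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 4))                  # hypothetical gene features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "ageing-related" label

        # Learn from the annotated training set, then infer annotations for unseen genes.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
        model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
        print("held-out accuracy:", model.score(X_te, y_te))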

    Transfer Learning with Local Smoothness Regularizer

    No full text

    AADS: A Noise-Robust Anomaly Detection Framework for Industrial Control Systems

    No full text
    Deep Neural Networks are emerging as effective techniques to detect sophisticated cyber-attacks targeting Industrial Control Systems (ICSs). In general, these techniques focus on learning the “normal” behavior of the system, so that noteworthy deviations from it can then be labeled as anomalies. However, during operations, ICSs inevitably and continuously evolve their behavior, due to, e.g., replacement of devices, workflow modifications, or other reasons. As a consequence, the quality of the anomaly detection process may be dramatically affected, with a considerable amount of false alarms being generated. This paper presents AADS (Adaptive Anomaly Detection in industrial control Systems), a novel framework based on neural networks and greedy algorithms that tailors the learning-based anomaly detection process to the changing nature of ICSs. AADS efficiently adapts a pre-trained model to learn new changes in the system behavior with a small number of data samples (i.e., time steps) and a few gradient updates. The performance of AADS is evaluated using the Secure Water Treatment (SWaT) dataset, and its sensitivity to additive noise is investigated. Our results show an increased detection rate compared to state-of-the-art approaches, as well as more robustness to additive noise.
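    As a rough sketch of the adaptation idea only (AADS's greedy sample selection and actual architecture are not reproduced here), the snippet below fine-tunes a stand-in reconstruction-based detector with a few gradient updates on a small batch of post-change "normal" samples, so its notion of normal behavior follows the evolved system.

        import torch
        import torch.nn as nn

        # Stand-in for a pre-trained anomaly detector (hypothetical sizes).
        model = nn.Sequential(nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 10))
        new_normal = torch.randn(32, 10)   # small batch of post-change sensor readings

        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(5):                 # "a few gradient updates"
            loss = nn.functional.mse_loss(model(new_normal), new_normal)
            opt.zero_grad(); loss.backward(); opt.step()

        def is_anomaly(x, threshold=0.5):
            # Flag samples whose reconstruction error exceeds a threshold.
            return ((model(x) - x) ** 2).mean(dim=1) > threshold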

    Robustness of classifiers to changing environments

    No full text
    In this paper, we test some of the most commonly used classifiers to identify which ones are the most robust to changing environments. The environment may change over time due to contextual or definitional changes, and it may change with location. It would be surprising if the performance of common classifiers did not degrade with these changes. The question we address here is whether or not some types of classifier are inherently more immune than others to these effects. In this study, we simulate the changing environment by reducing the influence on the class of the most significant attributes. Based on our analysis, K-Nearest Neighbor and Artificial Neural Networks are the most robust learners, ensemble algorithms are somewhat robust, whereas Naive Bayes, Logistic Regression, and particularly Decision Trees are the most affected.
    Peer reviewed: Yes. NRC publication: Yes.
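    An illustrative sketch of this kind of evaluation (adding noise to the top attributes is used here as a stand-in for the paper's way of reducing their influence on the class): train several classifiers, perturb the most informative attributes of the test set, and compare the resulting accuracy drops.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, n_features=10, n_informative=3, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Rank attributes by their influence on the class, then weaken the top ones.
        top = np.argsort(mutual_info_classif(X_tr, y_tr, random_state=0))[-3:]
        X_shift = X_te.copy()
        X_shift[:, top] += np.random.default_rng(0).normal(scale=2.0, size=(len(X_te), len(top)))

        for name, clf in [("kNN", KNeighborsClassifier()),
                          ("Naive Bayes", GaussianNB()),
                          ("Decision Tree", DecisionTreeClassifier(random_state=0))]:
            clf.fit(X_tr, y_tr)
            print(name, "original:", clf.score(X_te, y_te), "shifted:", clf.score(X_shift, y_te))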