100 research outputs found

    Multi-task Deep Neural Networks in Automated Protein Function Prediction

    In recent years, deep learning algorithms have outperformed state-of-the-art methods in several areas thanks to efficient methods for training and for preventing overfitting, advances in computer hardware, and the availability of vast amounts of data. The high performance of multi-task deep neural networks in drug discovery has drawn attention to deep learning algorithms in the bioinformatics area. Here, we proposed a hierarchical multi-task deep neural network architecture based on Gene Ontology (GO) terms as a solution to the protein function prediction problem and investigated various aspects of the proposed architecture through several experiments. First, we showed that there is a positive correlation between the performance of the system and the size of the training datasets. Second, we investigated whether the level of GO terms in the GO hierarchy is related to their performance, and showed that there is no relation between the depth of GO terms in the GO hierarchy and their performance. In addition, we included all annotations in the training of a set of GO terms to investigate whether adding noisy data to the training datasets changes the performance of the system. The results showed that including less reliable annotations in the training of deep neural networks significantly increased the performance of the poorly performing GO terms. We evaluated the performance of the system using a hierarchical evaluation method. The Matthews correlation coefficient was calculated as 0.75, 0.49 and 0.63 for the molecular function, biological process and cellular component categories, respectively. We showed that deep learning algorithms have great potential in the protein function prediction area. We plan to further improve DEEPred by including other types of annotations from various biological data sources and to make DEEPred available as an open-access online tool. Comment: 19 pages, 4 figures, 4 tables
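    The abstract does not spell out the architecture, so the sketch below only illustrates the general multi-task idea it refers to: a shared trunk with one binary output head per GO term. The feature dimension, layer sizes, and number of GO terms are illustrative assumptions, not the published DEEPred configuration.

```python
# Illustrative sketch (not the published DEEPred architecture): a multi-task
# network with a shared trunk and one binary output head per GO term.
# Feature dimension, layer sizes, and the number of GO terms are assumptions.
import torch
import torch.nn as nn

class MultiTaskGOModel(nn.Module):
    def __init__(self, n_features=1024, n_go_terms=100, hidden=512):
        super().__init__()
        # Shared representation learned jointly for all GO terms.
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One independent binary head per GO term (the multi-task outputs).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(n_go_terms)]
        )

    def forward(self, x):
        h = self.trunk(x)
        # Concatenate per-term logits into a (batch, n_go_terms) tensor.
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MultiTaskGOModel()
x = torch.randn(8, 1024)                         # 8 proteins, 1024 features each
targets = torch.randint(0, 2, (8, 100)).float()  # per-term binary annotations
loss = nn.BCEWithLogitsLoss()(model(x), targets)
loss.backward()
```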

    Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

    One obstacle that has so far prevented the adoption of machine learning models, particularly in critical areas, is the lack of explainability. In this work, a practicable approach to gaining explainability of deep artificial neural networks (NNs) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and at the same time higher fidelity compared to other regularizers. Comment: 8 pages, 18th IEEE International Conference on Machine Learning and Applications (ICMLA) 2019
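    One plausible reading of the regularizer described above, shown as a minimal PyTorch sketch: an L1 penalty pushing each weight matrix toward orthogonality, added to the task loss during training. The exact formulation and the penalty weight used in the paper may differ.

```python
# Hedged sketch of an L1 orthogonality penalty on the weight matrices of a
# network, added to the task loss during training. The formulation assumed
# here is ||W W^T - I||_1 summed over the linear layers.
import torch
import torch.nn as nn

def l1_orthogonality_penalty(model):
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight                      # shape (out, in)
            gram = w @ w.t()                       # (out, out)
            eye = torch.eye(gram.size(0), device=w.device)
            penalty = penalty + (gram - eye).abs().sum()
    return penalty

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
task_loss = nn.CrossEntropyLoss()(net(x), y)
lam = 1e-3                                         # regularization strength (assumed)
loss = task_loss + lam * l1_orthogonality_penalty(net)
loss.backward()
```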

    Neural Networks: A Study of Their Decision-Making Rules

    The question of obtaining a better understanding of the behaviour of neural networks is highly relevant, especially in areas with high levels of risk. To address this problem, we investigated the capabilities of the new decompositional algorithm DeepRED, which can extract the decision rules of deep neural networks (DNNs) with several hidden layers. The DeepRED algorithm was studied on the task of extracting rules from an experimental neural network classifying images from the MNIST database of handwritten digits, which revealed a number of limitations of the DeepRED algorithm
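    A minimal sketch of the decompositional idea behind DeepRED (not the original C4.5-based implementation): fit one surrogate decision tree per layer transition and leave the final rule-merging step implied. The sklearn digits dataset stands in for MNIST, and all hyperparameters are assumptions.

```python
# Hedged sketch of layer-wise (decompositional) rule extraction in the spirit
# of DeepRED: one surrogate tree per layer transition; the merging of per-layer
# rules into input-level rules is only described, not implemented.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)               # small stand-in for MNIST
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X, y)

# Activations of the single hidden layer (ReLU is MLPClassifier's default).
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

# Hidden layer -> output: a tree predicting the *network's* class from hidden units.
output_tree = DecisionTreeClassifier(max_depth=5, random_state=0)
output_tree.fit(hidden, net.predict(X))

# Inputs -> hidden layer: one tree per hidden unit, predicting whether the unit
# is "active" (here: above its median activation) from the raw pixels.
unit_trees = []
for j in range(hidden.shape[1]):
    active = (hidden[:, j] > np.median(hidden[:, j])).astype(int)
    unit_trees.append(DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, active))

# DeepRED would now substitute the input-level rules into the output-level rules
# to obtain if-then rules over pixels; that merging step is omitted here.
print("fidelity of the output tree to the network:",
      output_tree.score(hidden, net.predict(X)))
```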

    Extracting Decision Rules from Neural Networks

    This thesis is devoted to the problem of extracting decision rules from neural networks that solve classification tasks, using the decompositional approach DeepRED. The aim of the work is to study the possibilities of practical use of the DeepRED algorithm for extracting and analyzing rules. The thesis considers the basic principles of the process of extracting rules from neural networks and presents a study of the DeepRED algorithm. While surveying the field of rule extraction, a fairly detailed analysis of the elements of neural network architectures and their principles of operation (including the training process) is given. To better understand possible approaches to rule extraction, existing rule extraction methods are also reviewed. In the main part of the work, the DeepRED algorithm and the possibility of its practical application are investigated. DeepRED is currently the most promising decompositional rule extraction algorithm. Consideration of the algorithm and several of its improvements, together with an analysis of the extracted rules and a comparison of a series of its runs under different conditions, gave a general picture of its practical usability and of the limitations that are currently present. The total volume of the work is 82 pages, 23 figures, 26 tables, and 28 references
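    Comparisons of extraction runs such as those mentioned above are typically reported in terms of fidelity (agreement with the network), accuracy (agreement with the true labels), and comprehensibility (rule set size). The sketch below computes these three quantities for a simple pedagogical surrogate tree; it illustrates the metrics only, not DeepRED's decompositional extraction, and all hyperparameters are assumptions.

```python
# Hedged sketch of the usual evaluation trio for rule extraction: fidelity,
# accuracy, and comprehensibility of a surrogate fitted to a trained network.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X_train, y_train)
# Pedagogical surrogate: trained on the network's predictions, not the labels.
surrogate = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, net.predict(X_train))

fidelity = np.mean(surrogate.predict(X_test) == net.predict(X_test))
accuracy = np.mean(surrogate.predict(X_test) == y_test)
print(f"fidelity={fidelity:.3f}  accuracy={accuracy:.3f}  leaves={surrogate.get_n_leaves()}")
```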

    Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach

    Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt "pedagogical approaches" (from the literature on neural networks) to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches have to be adapted to handle the large-scale relational aspects of knowledge bases, and we demonstrate their strengths and weaknesses experimentally. Comment: presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden
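    A toy sketch of the pedagogical idea: label entity pairs with a black-box scorer standing in for a trained embedding model (here a hand-written stand-in function, an assumption), featurize each pair by which relation paths connect its entities, and fit a decision tree whose branches read like Horn-rule bodies. The knowledge base, relations, and feature set are all illustrative.

```python
# Hedged sketch of pedagogical rule extraction: the "embedding model" is a
# stand-in function, the relation-path features are illustrative, and the tree
# paths approximate weighted Horn-rule bodies for a target relation.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

entities = list(range(30))
# Toy KB: two background relations as sets of (head, tail) pairs.
r1 = {(i, (i + 1) % 30) for i in entities}
r2 = {(i, (i + 7) % 30) for i in entities}

def black_box_predicts(h, t):
    # Stand-in for an embedding model's prediction on target(h, t); here the
    # "model" has latched onto the composition r1(h, z) AND r2(z, t).
    return any((h, z) in r1 and (z, t) in r2 for z in entities)

def features(h, t):
    # Path features over the pair: r1(h,t), r2(h,t), and the two-hop path r1;r2.
    comp = any((h, z) in r1 and (z, t) in r2 for z in entities)
    return [(h, t) in r1, (h, t) in r2, comp]

pairs = list(itertools.product(entities, entities))
X = np.array([features(h, t) for h, t in pairs], dtype=int)
y = np.array([black_box_predicts(h, t) for h, t in pairs], dtype=int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["r1(h,t)", "r2(h,t)", "r1;r2(h,t)"]))
```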

    Symbolic XAI: automatic programming II

    Explainable artificial intelligence (XAI) is a rapidly growing field. With the popularity of opaque systems, the need for explanation methods that shed light on how these systems work has risen as well. In this work, we propose the use of symbolic machine learning systems as explanation methods, a line that has yet to be fully explored. We do this by reviewing these symbolic systems, analyzing the existing taxonomies of explanation methods, and fitting the systems into the taxonomies. Finally, we also perform some tests on solving numerical problems with symbolic systems
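    A very small, self-contained sketch of the general idea of using a symbolic system to explain a numerical model: fit an opaque regressor, then pick, from a tiny library of candidate expressions, the one that best reproduces the opaque model's own predictions. The target function and the candidate library are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: a symbolic expression chosen for fidelity to an opaque
# regressor's predictions acts as an explanation of that regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = X[:, 0] ** 2 + 3 * X[:, 1]                 # hidden ground-truth relation (assumed)

opaque = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)

# A tiny library of candidate symbolic expressions over the two inputs.
candidates = {
    "x0 + x1":      lambda a, b: a + b,
    "x0**2 + 3*x1": lambda a, b: a ** 2 + 3 * b,
    "x0 * x1":      lambda a, b: a * b,
    "3*x0 - x1":    lambda a, b: 3 * a - b,
}

preds = opaque.predict(X)
# Pick the expression that best reproduces the opaque model's own predictions
# (fidelity), which is what makes it an explanation rather than a new model.
best = min(candidates, key=lambda k: np.mean((candidates[k](X[:, 0], X[:, 1]) - preds) ** 2))
print("symbolic explanation of the opaque model:", best)
```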