
    Explicative Deep Learning with Probabilistic Formal Concepts in a Natural Language Processing Task

    Despite the high effectiveness of Deep Learning methods, they remain a "thing in themselves": a "black box" whose decisions cannot be trusted. This is critical for areas such as medicine, financial investment, and military applications, where the cost of an error is too high. In this regard, the European Union planned to require from 2018 that companies provide users with an explanation of decisions produced by automatic systems. In this paper, we offer an alternative, logical-probabilistic Deep Learning method that can explain its decisions. It is a hierarchical clustering method based on an original logical-probabilistic generalization of formal concepts \cite{PFC}. For comparison with deep learning based on neural networks, we chose the work \cite{UMLS}, which solves a natural language processing task on the \textit{UMLS} dataset. To apply the logical-probabilistic generalization of formal concepts, we define a classification algorithm, Energy Learning, based on the energy of contradictions \cite{DL_Energy}. Logical-probabilistic formal concepts are defined through fixed points, just as formal concepts themselves are, except that probabilistic rules are used in place of exact ones. The energy of contradictions allows us to resolve the contradictions arising at the fixed points that form probabilistic formal concepts. It is shown that this clustering algorithm is not inferior in accuracy to the Deep Learning method of \cite{UMLS}, while the solutions it obtains are explained by the set of probabilistic fixed-point rules.
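    The abstract describes concepts obtained as fixed points of derivation rules. As a minimal illustration of the classical (non-probabilistic) case that the paper generalizes, the sketch below computes a formal concept of a toy object-attribute context by iterating the two derivation operators to a fixed point. The context `CONTEXT` and all names are hypothetical; the paper's probabilistic rules and energy-based contradiction resolution are not modeled here.

    ```python
    # Toy formal context: objects mapped to their attribute sets (hypothetical data).
    CONTEXT = {
        "g1": {"a", "b"},
        "g2": {"a", "b", "c"},
        "g3": {"b", "c"},
    }

    def intent(objects):
        """Attributes shared by all given objects (the A' operator)."""
        sets = [CONTEXT[g] for g in objects]
        if not sets:
            return {a for s in CONTEXT.values() for a in s}
        return set.intersection(*sets)

    def extent(attrs):
        """Objects possessing all given attributes (the B' operator)."""
        return {g for g, s in CONTEXT.items() if attrs <= s}

    def concept(seed_objects):
        """Iterate A -> extent(intent(A)) until a fixed point: a formal concept (A, B)."""
        A = set(seed_objects)
        while True:
            B = intent(A)
            A_next = extent(B)
            if A_next == A:
                return A, B
            A = A_next
    ```

    For example, seeding with `{"g1"}` closes to the concept `({"g1", "g2"}, {"a", "b"})`. In the paper's probabilistic generalization, exact implications in this closure would be replaced by probabilistic rules, with contradictions at the fixed point resolved by the energy of contradictions.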