
    Graph edit distance or graph edit pseudo-distance?

    Graph Edit Distance has been used intensively since its introduction in 1983. It is well suited to comparing a pair of attributed graphs from any domain, since it returns not only a distance value but also the best correspondence between the nodes of the two graphs. In this paper, we analyse whether the Graph Edit Distance can truly be considered a distance or only a pseudo-distance, since some axioms of a distance function are not fulfilled. The distinction matters because some methods, for instance certain graph retrieval techniques, require a true distance in order to return exact rather than approximate results. Experimental validation shows that in most cases it is not appropriate to call the Graph Edit Distance a distance; it is a pseudo-distance instead, since the triangle inequality is not fulfilled. In these cases, therefore, such graph retrieval techniques do not always return the optimal graph
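    The triangle-inequality failure the abstract describes can be checked mechanically. The sketch below, a minimal illustration not taken from the paper, counts ordered triples in a matrix of pairwise (approximate) graph edit distances that violate d(a, c) ≤ d(a, b) + d(b, c); the distance values are invented illustrative data.

    ```python
    # Hedged sketch: detect triangle-inequality violations in a matrix of
    # pairwise (approximate) graph edit distances.  The matrix below is
    # illustrative data, not output of any specific edit-distance algorithm.
    from itertools import permutations

    def triangle_violations(d):
        """Return ordered triples (a, b, c) with d[a][c] > d[a][b] + d[b][c]."""
        n = len(d)
        return [
            (a, b, c)
            for a, b, c in permutations(range(n), 3)
            if d[a][c] > d[a][b] + d[b][c] + 1e-9
        ]

    d = [
        [0.0, 1.0, 5.0],
        [1.0, 0.0, 1.0],
        [5.0, 1.0, 0.0],
    ]
    print(triangle_violations(d))  # → [(0, 1, 2), (2, 1, 0)]
    ```

    If this list is non-empty, the values form only a pseudo-distance, which is exactly the situation that breaks exactness guarantees of distance-based retrieval methods.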

    An Algebraic Framework to Represent Finite State Machines in Single-Layer Recurrent Neural Networks

    In this paper we present an algebraic framework to represent finite state machines (FSMs) in single-layer recurrent neural networks (SLRNNs), which unifies and generalizes some previous proposals. The framework is based on formulating both the state transition function and the output function of an FSM as a linear system of equations, and it permits an analytical explanation of the representational capabilities of first-order and higher-order SLRNNs. It can be used to insert symbolic knowledge into RNNs prior to learning from examples and to preserve this knowledge while training the network. The approach is valid for a wide range of activation functions, provided that some stability conditions are met. The framework has already been used in practice in a hybrid method for grammatical inference reported elsewhere (Sanfeliu and Alquezar, 1994).
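    To make the "transition function as a linear system" idea concrete, here is a hedged toy sketch, not the paper's construction: a DFA's transition function encoded as a linear map acting on the Kronecker product of one-hot state and symbol vectors, the kind of second-order (multiplicative) connection higher-order SLRNNs provide.

    ```python
    # Hedged sketch (illustrative, not the paper's exact formulation):
    # encode a DFA transition function as a linear map on one-hot vectors.
    import numpy as np

    # Toy 2-state, 2-symbol parity automaton: the state flips on symbol 1.
    n_states, n_symbols = 2, 2
    delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    # W maps kron(one-hot(state), one-hot(symbol)) to one-hot(next state).
    W = np.zeros((n_states, n_states * n_symbols))
    for (q, a), q_next in delta.items():
        W[q_next, q * n_symbols + a] = 1.0

    def step(q, a):
        x = np.zeros(n_states); x[q] = 1.0   # one-hot current state
        u = np.zeros(n_symbols); u[a] = 1.0  # one-hot input symbol
        return int(np.argmax(W @ np.kron(x, u)))

    print(step(0, 1))  # → 1 (state flips on symbol 1)
    print(step(1, 1))  # → 0
    ```

    Because every transition is a row of a linear system, prior symbolic knowledge (known transitions) can be written directly into W before any training, which is the insertion mechanism the abstract alludes to.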

    Solving Partially Observable Markov Decision Processes by Optimization Neural Networks

    Partially Observable Markov Decision Processes (POMDPs) model sequential decision processes in which an agent tries to maximize some reward without complete knowledge of the process. These models are of interest for quality control, machine maintenance, reinforcement learning, etc. More generally, Monahan [9] has shown that many tasks in partially observable environments can be viewed as POMDPs. A solution to a POMDP gives the best behaviour of the agent with respect to the environment, defined over the whole state space, which is continuous and lies inside an integral polytope. The approaches proposed so far use linear programming (LP) to solve the optimization problem in this type of process. On the other hand, Neural Networks (NNs) have shown promising potential for solving optimization problems; in particular, they have been used to solve quadratic 0-1 programming problems [4, 6]. In this paper, we use optimization neural networks as an alternative way to solve the optimization problem in the POMDP, which allows a parallel hardware implementation
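    The quadratic 0-1 programming connection mentioned in the abstract can be illustrated with a minimal Hopfield-style descent. This is a hedged sketch of the general technique, not the paper's network: the objective Q, c and the update rule are invented for illustration.

    ```python
    # Hedged sketch: greedy Hopfield-style bit updates minimizing a tiny
    # quadratic 0-1 objective x^T Q x + c^T x.  Q and c are illustrative,
    # not taken from the paper; real optimization NNs update in parallel.
    import numpy as np

    def hopfield_descent(Q, c, x, iters=20):
        obj = lambda v: v @ Q @ v + c @ v
        for _ in range(iters):
            changed = False
            for i in range(len(x)):       # asynchronous sweep over bits
                y = x.copy()
                y[i] = 1 - y[i]           # try flipping bit i
                if obj(y) < obj(x):       # keep the flip if it helps
                    x, changed = y, True
            if not changed:               # converged to a local minimum
                break
        return x

    Q = np.array([[0.0, 2.0], [2.0, 0.0]])  # penalize selecting both bits
    c = np.array([-1.0, -1.0])              # reward selecting each bit alone
    print(hopfield_descent(Q, c, np.zeros(2, dtype=int)))  # → [1 0]
    ```

    The appeal noted in the abstract is that each bit's update depends only on local terms of Q and c, so the whole sweep maps naturally onto parallel hardware.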

    How considering incompatible state mergings may reduce the DFA induction search tree
