
    The Grand Challenges and Myths of Neural-Symbolic Computation

    The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer science and cognitive science. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive science and computer science rely heavily on several non-classical reasoning formalisms, methodologies and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them in view of recent research advances.
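    As a hedged illustration of how classical reasoning can be carried out inside a network (in the spirit of translation schemes such as CILP, not the specific constructions discussed in this abstract), the sketch below encodes a single propositional Horn clause as a threshold neuron; the clause, weights, and threshold are illustrative assumptions.

```python
# Minimal sketch (not this paper's algorithm): encoding the propositional
# Horn clause  C <- A, B  as a single threshold neuron, in the spirit of
# logic-program-to-network translations such as CILP.
import numpy as np

def horn_neuron(inputs, weights, threshold):
    """Fire (return 1.0) when the weighted evidence for the body meets the threshold."""
    return 1.0 if np.dot(inputs, weights) >= threshold else 0.0

# Truth values for the body literals A and B (1 = true, 0 = false).
A, B = 1.0, 1.0
# One positive weight per positive body literal; the threshold demands all of them.
weights = np.array([1.0, 1.0])
threshold = 2.0  # illustrative: the neuron fires only when both A and B are true

C = horn_neuron(np.array([A, B]), weights, threshold)
print("C is", "true" if C == 1.0 else "false")
```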

    Dynamic PRA: an Overview of New Algorithms to Generate, Analyze and Visualize Data

    State-of-the-art PRA methods, i.e. Dynamic PRA (DPRA) methodologies, largely employ system simulator codes to accurately model system dynamics. Typically, these system simulator codes (e.g., RELAP5) are coupled with other codes (e.g., ADAPT, RAVEN) that monitor and control the simulation. The latter codes, in particular, introduce both deterministic (e.g., system control logic, operating procedures) and stochastic (e.g., component failures, variable uncertainties) elements into the simulation. A typical DPRA analysis is performed by:
    1. Sampling values of a set of parameters from the uncertainty space of interest
    2. Simulating the system behavior for that specific set of parameter values
    3. Analyzing the set of simulation runs
    4. Visualizing the correlations between parameter values and simulation outcome
    Step 1 is typically performed either by randomly sampling from a given distribution (i.e., Monte Carlo) or by taking such parameter values as inputs from the user (i.e., Dynamic Event Tree).
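    A minimal sketch of steps 1-3 under toy assumptions follows: the simulator stand-in, parameter distributions, and outcome rule are illustrative placeholders, not the interfaces of RELAP5, RAVEN, or ADAPT.

```python
# Minimal sketch of DPRA steps 1-3 with a toy stand-in for the system
# simulator; every name, distribution, and threshold here is an
# illustrative assumption, not part of the actual DPRA codes.
import numpy as np

rng = np.random.default_rng(42)

def toy_simulator(failure_time, recovery_time):
    """Stand-in for a system simulator: report 'failure' if recovery comes too late."""
    return 1 if recovery_time > failure_time + 2.0 else 0

# Step 1: Monte Carlo sampling of uncertain parameters.
n_runs = 10_000
failure_times = rng.exponential(scale=5.0, size=n_runs)        # component failure times [h]
recovery_times = rng.normal(loc=6.0, scale=2.0, size=n_runs)   # operator recovery times [h]

# Step 2: simulate the system for each sampled parameter set.
outcomes = np.array([toy_simulator(f, r) for f, r in zip(failure_times, recovery_times)])

# Step 3: analyze the set of runs (estimated failure probability and a crude
# look at how each parameter correlates with the outcome).
print("Estimated failure probability:", outcomes.mean())
print("corr(failure_time, outcome):", np.corrcoef(failure_times, outcomes)[0, 1])
print("corr(recovery_time, outcome):", np.corrcoef(recovery_times, outcomes)[0, 1])
```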

    Spatial groundings for meaningful symbols

    The increasing availability of ontologies raises the need to establish relationships and make inferences across heterogeneous knowledge models. The approach proposed and supported by knowledge representation standards consists in establishing formal symbolic descriptions of a conceptualisation which, it has been argued, lack grounding and are not expressive enough to allow relations across separate ontologies to be identified. Ontology mapping approaches address this issue by exploiting structural or linguistic similarities between symbolic entities, which is costly, error-prone, and in most cases lacks cognitive soundness. We argue that knowledge representation paradigms should provide better support for similarity and propose two distinct approaches to achieve it. We first present a representational approach which grounds symbolic ontologies using Conceptual Spaces (CS), allowing automated computation of similarities between instances across ontologies. An alternative approach is then presented, which considers symbolic entities as contextual interpretations of processes in spacetime, or Differences. By becoming a process of interpretation, symbols acquire the same status as other processes in the world and can be described (tagged) as well, which allows the bottom-up production of meaning.
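    A minimal sketch of the Conceptual Spaces idea, assuming illustrative quality dimensions, weights, and instances (not the paper's actual model): entities from two ontologies are grounded as points in a shared space, and similarity decays with their weighted distance.

```python
# Illustrative sketch, not the paper's implementation: instances from two
# ontologies grounded as points in a shared conceptual space, with similarity
# derived from a weighted Euclidean distance (Shepard-style exponential decay).
import numpy as np

# Assumed quality dimensions of the conceptual space, for illustration only.
DIMENSIONS = ("hue", "size", "weight")
DIM_WEIGHTS = np.array([1.0, 0.5, 0.5])  # salience of each dimension

def similarity(point_a, point_b, weights=DIM_WEIGHTS, c=1.0):
    """Similarity decreases exponentially with weighted distance in the space."""
    distance = np.sqrt(np.sum(weights * (point_a - point_b) ** 2))
    return np.exp(-c * distance)

# Instances from two separate ontologies, grounded in the same space.
apple_from_ontology_1 = np.array([0.80, 0.30, 0.20])
apple_from_ontology_2 = np.array([0.70, 0.35, 0.25])
truck_from_ontology_2 = np.array([0.10, 0.90, 0.95])

print("apple vs apple:", similarity(apple_from_ontology_1, apple_from_ontology_2))
print("apple vs truck:", similarity(apple_from_ontology_1, truck_from_ontology_2))
```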

    Computing Vertex Centrality Measures in Massive Real Networks with a Neural Learning Model

    Vertex centrality measures are a multi-purpose analysis tool, commonly used in many application environments to retrieve information and unveil knowledge from graph and network structural properties. However, the algorithms for such metrics are expensive in terms of computational resources when run in real-time applications or on massive real-world networks. Thus, approximation techniques have been developed and used to compute the measures in such scenarios. In this paper, we demonstrate and analyze the use of neural network learning algorithms to tackle this task and compare their performance in terms of solution quality and computation time with other techniques from the literature. Our work offers several contributions. We highlight both the pros and cons of approximating centralities through neural learning. By empirical means and statistics, we then show that the regression model generated with a feedforward neural network trained by the Levenberg-Marquardt algorithm is not only the best option considering computational resources, but also achieves the best solution quality for relevant applications and large-scale networks.
    Keywords: Vertex Centrality Measures, Neural Networks, Complex Network Models, Machine Learning, Regression Model
    Comment: 8 pages, 5 tables, 2 figures, version accepted at IJCNN 2018. arXiv admin note: text overlap with arXiv:1810.1176
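    As a hedged illustration of the general idea (not the paper's exact pipeline, which trains with Levenberg-Marquardt), the sketch below fits a small feedforward regressor from cheap per-vertex features to betweenness centrality using networkx and scikit-learn; the graphs, features, and hyperparameters are assumptions for demonstration.

```python
# Illustrative sketch: approximate betweenness centrality with a small
# feedforward regressor over cheap per-vertex features, instead of running
# the exact (expensive) algorithm on every new graph.
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def node_features(G):
    """Cheap structural features per vertex: degree, avg neighbour degree, clustering."""
    deg = dict(G.degree())
    avg_nbr = nx.average_neighbor_degree(G)
    clust = nx.clustering(G)
    return np.array([[deg[v], avg_nbr[v], clust[v]] for v in G.nodes()])

# Train on a graph small enough that the exact centrality is still affordable.
G_train = nx.erdos_renyi_graph(300, 0.05, seed=1)
X_train = node_features(G_train)
y_train = np.array(list(nx.betweenness_centrality(G_train).values()))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Predict on a new graph instead of rerunning the exact algorithm.
G_test = nx.erdos_renyi_graph(300, 0.05, seed=2)
y_true = np.array(list(nx.betweenness_centrality(G_test).values()))
y_pred = model.predict(node_features(G_test))
print("Mean absolute error:", np.abs(y_true - y_pred).mean())
```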