699 research outputs found

    Data based identification and prediction of nonlinear and complex dynamical systems

    We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities, Beijing Nova Programme. Peer reviewed. Postprint.

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    Graceful Degradation and Related Fields

    When machine learning models encounter data which is out of the distribution on which they were trained, they have a tendency to behave poorly, most prominently displaying over-confidence in erroneous predictions. Such behaviours can have disastrous effects on real-world machine learning systems. In this field, graceful degradation refers to the optimisation of model performance as the model encounters out-of-distribution data. This work presents a definition and discussion of graceful degradation and where it can be applied in deployed visual systems. Following this, a survey of relevant areas is undertaken, novelly splitting the graceful degradation problem into active and passive approaches. In passive approaches, graceful degradation is handled and achieved by the model in a self-contained manner; in active approaches, the model is updated upon encountering epistemic uncertainties. This work communicates the importance of the problem and aims to prompt the development of machine learning strategies that are aware of graceful degradation.
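    In this taxonomy a passive approach is one the deployed model realises on its own, for instance by abstaining when its predictive confidence is low instead of emitting an over-confident error. A minimal sketch of that idea, where the max-softmax score and the 0.8 threshold are illustrative choices rather than the survey's prescription:

    ```python
    # Passive graceful-degradation sketch: abstain on low-confidence inputs,
    # using the maximum softmax probability as a crude out-of-distribution proxy.
    import numpy as np

    def softmax(logits):
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def predict_with_abstention(logits, threshold=0.8):
        """Return the class index, or -1 (abstain) when confidence is low."""
        probs = softmax(np.asarray(logits, dtype=float))
        confidence = probs.max(axis=-1)
        predictions = probs.argmax(axis=-1)
        return np.where(confidence >= threshold, predictions, -1)

    logits = np.array([[4.0, 0.5, 0.2],    # peaked logits -> confident prediction
                       [1.1, 1.0, 0.9]])   # flat logits   -> abstain
    print(predict_with_abstention(logits))  # [ 0 -1]
    ```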

    Analysis of ethnographic, anthropological and archaeological data: an approach from the digital humanities and complex systems

    The arrival of Computer Science, Big Data, Data Analysis, Machine Learning and Data Mining has changed the way science is done in every scientific field, in turn giving rise to new disciplines such as Computational Mechanics, Bioinformatics, Health Engineering, Computational Social Science, Computational Economics, Computational Archaeology and the Digital Humanities, among others. All of these new disciplines are still very young and continuously growing, so contributing to their advancement and consolidation has great scientific value. In this doctoral thesis we contribute to the development of a new line of research devoted to the use of formal models, analytical methods and computational approaches for the study of human societies, both present and past.
    Funding: Ministerio de Ciencia e Innovación
    • SimulPast project, "Social and environmental transitions: simulating the past to understand human behaviour" (CSD2010-00034 CONSOLIDER-INGENIO 2010).
    • CULM project, "Modelling cultivation in prehistory" (HAR2016-77672-P).
    • SimPastNet network of excellence, "Simulating the past to understand human behaviour" (HAR2017-90883-REDC).
    • SocioComplex network of excellence, "Socio-Technological Complex Systems" (RED2018-102518-T).
    Consejería de Educación de la Junta de Castilla y León
    • Grant to the research line "Understanding human behaviour, an approach from complex systems and the digital humanities", under the programme of support for recognised research groups (GIR) at the public universities of Castilla y León (BDNS 425389).

    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the most rich and distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again on the basis mostly of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
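    The core principle here, that proximal representations in a vector space have similar semantic values, is simple to state in code. A toy sketch, where the hand-made three-dimensional vectors and the cosine measure are illustrative stand-ins for embeddings a real model would learn from corpora:

    ```python
    # Toy vector space semantics: semantic similarity as proximity in a vector space.
    import numpy as np

    def cosine_similarity(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hand-made "embeddings"; real systems learn these from text corpora.
    embeddings = {
        "cat":   np.array([0.9, 0.8, 0.1]),
        "dog":   np.array([0.8, 0.9, 0.2]),
        "truck": np.array([0.1, 0.2, 0.9]),
    }

    # Semantically related words sit closer together than unrelated ones.
    print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high (~0.99)
    print(cosine_similarity(embeddings["cat"], embeddings["truck"]))  # low  (~0.30)
    ```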

    A Unified Framework for Gradient-based Hyperparameter Optimization and Meta-learning

    Machine learning algorithms and systems are progressively becoming part of our societies, leading to a growing need to build a vast multitude of accurate, reliable and interpretable models which should possibly exploit similarities among tasks. Automating segments of machine learning itself seems a natural step to undertake to deliver increasingly capable systems able to perform well in both the big-data and the few-shot learning regimes. Hyperparameter optimization (HPO) and meta-learning (MTL) constitute two building blocks of this growing effort. We explore these two topics under a unifying perspective, presenting a mathematical framework linked to bilevel programming that captures existing similarities and translates into procedures of practical interest rooted in algorithmic differentiation. We discuss the derivation, applicability and computational complexity of these methods and establish several approximation properties for a class of objective functions of the underlying bilevel programs. In HPO, these algorithms generalize and extend previous work on gradient-based methods. In MTL, the resulting framework subsumes classic and emerging strategies and provides a starting basis from which to build and analyze novel techniques. A series of examples and numerical simulations offer insight and highlight some limitations of these approaches. Experiments on larger-scale problems show the potential gains of the proposed methods in real-world applications. Finally, we develop two extensions of the basic algorithms, apt to optimize a class of discrete hyperparameters (graph edges) in an application to relational learning, and to tune online learning rate schedules for training neural network models, an old but crucially important issue in machine learning.
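    The bilevel structure can be seen in miniature: an inner problem fits parameters given a hyperparameter, an outer problem scores them on validation data, and the hypergradient chains the two. A sketch for ridge regression, where the closed-form inner solution, the implicit-differentiation route, and the single penalty `lam` are deliberate simplifications of the general framework, not the thesis's algorithms:

    ```python
    # Gradient-based HPO as bilevel programming: hypergradient of the
    # validation loss w.r.t. the ridge penalty lam, via implicit differentiation.
    import numpy as np

    rng = np.random.default_rng(0)
    X_tr, y_tr = rng.normal(size=(40, 5)), rng.normal(size=40)
    X_val, y_val = rng.normal(size=(20, 5)), rng.normal(size=20)

    def inner_solution(lam):
        """Inner problem: w*(lam) = argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2."""
        A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
        return np.linalg.solve(A, X_tr.T @ y_tr), A

    def hypergradient(lam):
        """Implicit function theorem: A w* = X_tr^T y_tr gives
        dw*/dlam = -A^{-1} w*, hence dL_val/dlam = grad_w L_val . dw*/dlam."""
        w, A = inner_solution(lam)
        grad_w = 2 * X_val.T @ (X_val @ w - y_val)  # outer gradient in w
        dw_dlam = -np.linalg.solve(A, w)            # implicit inner derivative
        return grad_w @ dw_dlam

    # Outer optimization: descend on the hyperparameter itself
    # (log-space keeps lam positive).
    log_lam = 0.0
    for _ in range(100):
        lam = np.exp(log_lam)
        log_lam -= 0.05 * hypergradient(lam) * lam  # chain rule through exp
    print(f"tuned ridge penalty: {np.exp(log_lam):.4f}")
    ```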

    More is Different: Modern Computational Modeling for Heterogeneous Catalysis

    The combination of experimental observations and Density Functional Theory (DFT) studies is one of the pillars of modern chemical research. Because they make it possible to collect additional physical information about a chemical system that is hardly accessible through the experimental setting, DFT studies are widely employed to model and predict the behavior of a diverse variety of chemical compounds under unique environments. In heterogeneous catalysis in particular, DFT models are commonly employed to evaluate the interaction between molecular compounds and catalysts, later linking these interpretations with experimental results. However, the high complexity found in both catalytic settings and reactivity implies the need for sophisticated methodologies involving automation, storage and analysis to correctly study these systems. This work presents the development and combination of multiple methodologies for correctly assessing the complexity of these chemical systems, and shows how the provided techniques have been actively used to study novel catalytic settings of academic and industrial interest.
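    The workhorse quantity such automated pipelines compute over and over is an adsorption energy: the total energy of the adsorbate-covered slab minus those of the clean slab and the gas-phase molecule. A minimal sketch with the ASE library, using its toy EMT potential as a stand-in for a real DFT calculator; the Pt(111) surface, CO adsorbate, and geometry parameters are illustrative choices, not systems taken from this thesis:

    ```python
    # Adsorption-energy sketch with ASE; EMT is a toy potential standing in
    # for a DFT code, and single-point energies stand in for full relaxations.
    from ase.build import add_adsorbate, fcc111, molecule
    from ase.calculators.emt import EMT

    def energy(atoms):
        atoms.calc = EMT()
        return atoms.get_potential_energy()

    # Clean Pt(111) slab and gas-phase CO.
    slab = fcc111("Pt", size=(2, 2, 3), vacuum=10.0)
    e_slab = energy(slab)
    e_co = energy(molecule("CO"))

    # Slab with CO placed on a top site.
    slab_co = fcc111("Pt", size=(2, 2, 3), vacuum=10.0)
    add_adsorbate(slab_co, molecule("CO"), height=2.0, position="ontop")
    e_slab_co = energy(slab_co)

    # Negative values indicate favorable binding.
    e_ads = e_slab_co - (e_slab + e_co)
    print(f"adsorption energy: {e_ads:.2f} eV")
    ```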