
    AI: Limits and Prospects of Artificial Intelligence

    The emergence of artificial intelligence has triggered enthusiasm and the promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias, that are associated with the current and future development of artificial intelligence.

    Thermodynamic AI and the fluctuation frontier

    Many Artificial Intelligence (AI) algorithms are inspired by physics and employ stochastic fluctuations. We connect these physics-inspired AI algorithms by unifying them under a single mathematical framework that we call Thermodynamic AI. Seemingly disparate algorithmic classes can be described by this framework, for example: (1) generative diffusion models, (2) Bayesian neural networks, (3) Monte Carlo sampling and (4) simulated annealing. Such Thermodynamic AI algorithms are currently run on digital hardware, ultimately limiting their scalability and overall potential. Stochastic fluctuations naturally occur in physical thermodynamic systems, and such fluctuations can be viewed as a computational resource. Hence, we propose a novel computing paradigm, where software and hardware become inseparable. Our algorithmic unification allows us to identify a single full-stack paradigm, involving Thermodynamic AI hardware, that could accelerate such algorithms. We contrast Thermodynamic AI hardware with quantum computing, where noise is a roadblock rather than a resource. Thermodynamic AI hardware can be viewed as a novel form of computing, since it uses a novel fundamental building block. We identify stochastic bits (s-bits) and stochastic modes (s-modes) as the respective building blocks for discrete and continuous Thermodynamic AI hardware. In addition to these stochastic units, Thermodynamic AI hardware employs a Maxwell's demon device that guides the system to produce non-trivial states. We provide a few simple physical architectures for building these devices and we develop a formalism for programming the hardware via gate sequences. We hope to stimulate discussion around this new computing paradigm. Beyond acceleration, we believe it will impact the design of both hardware and algorithms, while also deepening our understanding of the connection between physics and intelligence.
    Comment: 47 pages, 18 figures.
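
    The framework itself is the paper's contribution; as a rough illustration of the algorithm family it unifies, the sketch below runs overdamped Langevin dynamics, where injected Gaussian noise is exactly the kind of stochastic fluctuation the abstract treats as a computational resource. The quadratic target, step size, and step count are illustrative assumptions, not details from the paper.

        import numpy as np

        def grad_log_density(x):
            # Score of a standard Gaussian target; stands in for a learned
            # score network (diffusion models) or a log-posterior gradient
            # (Bayesian neural networks, Monte Carlo sampling).
            return -x

        def langevin_sample(n_steps=1000, step=1e-2, dim=2, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.normal(size=dim)                  # arbitrary initial state
            for _ in range(n_steps):
                noise = rng.normal(size=dim)          # the stochastic fluctuation
                x = x + step * grad_log_density(x) + np.sqrt(2 * step) * noise
            return x

        # The noisy dynamics, not the gradient step alone, make this a sampler:
        # sample statistics approach the target's (mean ~ 0, std ~ 1).
        samples = np.stack([langevin_sample(seed=s) for s in range(500)])
        print(samples.mean(axis=0), samples.std(axis=0))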

    Sparse Distributed Memory is a Continual Learner

    Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a core neural circuit with the powerful Transformer model, we create a modified Multi-Layer Perceptron (MLP) that is a strong continual learner. We find that every component of our MLP variant translated from biology is necessary for continual learning. Our solution is also free from any memory replay or task information, and introduces novel methods to train sparse networks that may be broadly applicable.
    Comment: 9 pages. ICLR acceptance.
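
    A minimal sketch of the kind of mechanism the abstract alludes to (an assumed illustration, not the authors' model): an MLP hidden layer with top-k sparse activation, so only a few units respond to any input and updates for one task overwrite little of what other tasks use. The layer sizes and k below are assumptions.

        import numpy as np

        def top_k_sparse(h, k):
            # Zero all but the k largest activations per row, mimicking SDM's
            # sparse address decoding; sparsity limits cross-task interference.
            kth = np.partition(h, -k, axis=-1)[..., -k][..., None]
            return np.where(h >= kth, h, 0.0)

        rng = np.random.default_rng(0)
        W1 = rng.normal(scale=0.1, size=(784, 1024))  # input -> hidden
        W2 = rng.normal(scale=0.1, size=(1024, 10))   # hidden -> classes

        def forward(x, k=32):
            h = np.maximum(x @ W1, 0.0)  # ReLU pre-activations
            h = top_k_sparse(h, k)       # only 32 of 1024 units stay active
            return h @ W2

        x = rng.normal(size=(4, 784))    # a dummy batch of flattened images
        print(forward(x).shape)          # (4, 10)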

    A Deep Learning Approach to Evaluating Disease Risk in Coronary Bifurcations

    Cardiovascular disease represents a large burden on modern healthcare systems, requiring significant resources for patient monitoring and clinical interventions. It has been shown that the blood flow through coronary arteries, shaped by the artery geometry unique to each patient, plays a critical role in the development and progression of heart disease. However, popular and well-tested cardiovascular disease risk models such as Framingham and QRISK3 are not able to take these differences into account when predicting disease risk. Over the last decade, medical imaging and image processing have advanced to the point that non-invasive high-resolution 3D imaging is routinely performed for any patient suspected of coronary artery disease. This allows for the construction of virtual 3D models of the coronary anatomy and in-silico analysis of blood flow within the coronaries. However, several challenges still preclude the large-scale patient-specific simulations necessary for incorporating haemodynamic risk metrics into disease risk prediction. In particular, despite the large amount of available coronary medical imaging, extraction of the structures of interest from medical images remains a manual and laborious task. There is significant variation in how geometric features of the coronary arteries are measured, which makes comparisons between different studies difficult. Modelling blood flow conditions in the coronary arteries likewise requires manual preparation of the simulations and significant computational cost. This thesis aims to address these challenges. The "Automated Segmentation of Coronary Arteries" (ASOCA) benchmark establishes a dataset of coronary arteries and their associated 3D reconstructions, currently the largest openly available dataset of coronary artery models, with a wide range of applications such as computational modelling, 3D printing for experiments, developing and testing medical devices such as stents, and virtual reality applications for education and training. An automated computational modelling workflow is developed to set up, run and postprocess simulations on the left main bifurcation and calculate relevant shape metrics. Finally, a convolutional neural network model is developed to replace the computational fluid dynamics process; it can predict haemodynamic metrics such as wall shear stress in minutes rather than the several hours needed by traditional computational modelling, reducing the computation and labour cost involved in performing such simulations.
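
    The surrogate idea in the last sentence can be sketched with a small 3D convolutional regressor that maps a voxelised bifurcation geometry to a scalar haemodynamic metric; the architecture, input resolution, and output target below are illustrative assumptions, not the network developed in the thesis.

        import torch
        import torch.nn as nn

        class WSSRegressor(nn.Module):
            # Maps a binary occupancy grid of the artery lumen to one number,
            # e.g. a predicted mean wall shear stress, replacing a CFD run.
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.head = nn.Linear(64, 1)

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        model = WSSRegressor()
        geometry = torch.rand(2, 1, 64, 64, 64)  # two dummy voxelised vessels
        print(model(geometry).shape)             # torch.Size([2, 1])

    Once trained, inference with such a surrogate takes well under a second per geometry, which is what makes the minutes-versus-hours comparison in the abstract plausible.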

    Data Driven Forecast for Fire

    Being able to forecast the evolution of a fire is essential for fire safety design and fire response strategies. Despite advances in understanding fire dynamics and improvements in computational capability, the ability to predict the evolution of a fire remains limited due to the large uncertainties associated with multiple scales and the non-linearities involved. The data-driven approach provides a viable technique in which models are corrected by observations. However, the complicated coupling between the gaseous and condensed phases has, in the past, prevented proper prediction with a positive lead time. This work proposes and investigates a series of approaches to data-driven hybrid modelling that integrate analytical and numerical descriptions to address the coupling effects. The data-driven hybrid model is developed for different scenarios covering various complexities and scales. Different approaches are evaluated to reflect the dominant physics; nevertheless, all are structured by differentiating the condensed and gas phases. The initial scenario corresponds to one-dimensional convective-diffusive droplet combustion in micro- and normal gravity. The second considers concurrent flame spread in micro- and normal gravity, where a two-dimensional boundary-layer combustion approach is implemented. Finally, the Malveira fire test represents a large-scale, three-dimensional, travelling fire. Coefficients assimilated from experimental observations are used to adjust the analytical formulations describing the gas and condensed phases. By separating the phases, the data-driven hybrid model can forecast various types of variables while reducing processing resources. Convergence of the assimilated coefficients is used as an indicator that the model represents the fire appropriately and is therefore suitable for predictions. The proposed methodology still requires ongoing research, however. This work provides evidence for specific approaches and identifies areas where additional attention is necessary. It has become apparent that adequately predicting real-scale fires requires more sophisticated descriptions of heat and mass transfer and of the interactions between the fire and its environment.
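
    A minimal sketch of the assimilation idea (assumed details, not the thesis code): fit the growth coefficient of an analytical t-squared fire, Q(t) = alpha * t^2, to streaming heat-release observations, and treat convergence of the assimilated coefficient as the indicator that the model is ready to forecast with positive lead time.

        import numpy as np

        rng = np.random.default_rng(1)
        true_alpha = 0.047                 # kW/s^2, a "medium" growth fire
        t = np.arange(1.0, 301.0)          # 5 min of 1 Hz observations
        q_obs = true_alpha * t**2 * (1 + 0.1 * rng.normal(size=t.size))

        # Least-squares estimate of alpha from the first n observations,
        # re-assimilated as each new observation arrives.
        alpha_hist = np.array([(t[:n]**2 @ q_obs[:n]) / (t[:n]**2 @ t[:n]**2)
                               for n in range(10, t.size + 1)])

        # Stability of the coefficient signals the model is usable for forecasts.
        converged = np.abs(np.diff(alpha_hist[-30:])).max() < 1e-4
        forecast = alpha_hist[-1] * 600.0**2   # extrapolate to t = 600 s
        print(f"converged: {converged}, alpha ~ {alpha_hist[-1]:.4f}, "
              f"Q(600 s) ~ {forecast:.0f} kW")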

    Choosing observation operators to mitigate model error in Bayesian inverse problems

    In statistical inference, a discrepancy between the parameter-to-observable map that generates the data and the parameter-to-observable map that is used for inference can lead to misspecified likelihoods and thus to incorrect estimates. In many inverse problems, the parameter-to-observable map is the composition of a linear state-to-observable map called an 'observation operator' and a possibly nonlinear parameter-to-state map called the 'model'. We consider such Bayesian inverse problems where the discrepancy in the parameter-to-observable map is due to the use of an approximate model that differs from the best model, i.e. to nonzero 'model error'. Multiple approaches have been proposed to address such discrepancies, each leading to a specific posterior. We show how to use local Lipschitz stability estimates of posteriors with respect to likelihood perturbations to bound the Kullback-Leibler divergence of the posterior of each approach with respect to the posterior associated with the best model. Our bounds lead to criteria for choosing observation operators that mitigate the effect of model error in Bayesian inverse problems of this type. We illustrate the feasibility of one such criterion on an advection-diffusion-reaction PDE inverse problem, and use this example to discuss the importance and challenges of model error-aware inference.
    Comment: 33 pages, 5 figures.
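
    The criterion can be made concrete in a small linear-Gaussian setting (a sketch with made-up matrices, not the paper's advection-diffusion-reaction example): for each candidate observation operator O, compare the posterior obtained with an approximate model against the posterior under the best model via their KL divergence, and prefer the operator with the smaller divergence.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 4
        M = rng.normal(size=(d, d))                    # best parameter-to-state map
        M_apx = M + 0.1 * rng.normal(size=(d, d))      # approximate model (model error)
        Gamma, Sigma = 0.05**2 * np.eye(2), np.eye(d)  # noise and prior covariances

        def posterior(O, model, y):
            # Gaussian posterior for y = (O @ model) x + noise, x ~ N(0, Sigma)
            A = O @ model
            C = np.linalg.inv(A.T @ np.linalg.inv(Gamma) @ A + np.linalg.inv(Sigma))
            return C @ A.T @ np.linalg.inv(Gamma) @ y, C

        def kl_gauss(m0, C0, m1, C1):
            # KL( N(m0, C0) || N(m1, C1) ) in closed form
            iC1 = np.linalg.inv(C1)
            logdet = np.linalg.slogdet(C1)[1] - np.linalg.slogdet(C0)[1]
            return 0.5 * (np.trace(iC1 @ C0)
                          + (m1 - m0) @ iC1 @ (m1 - m0) - d + logdet)

        x_true = rng.normal(size=d)
        for name in ("O1", "O2"):
            O = rng.normal(size=(2, d))      # a candidate observation operator
            y = O @ M @ x_true               # data generated by the best model
            (m_a, C_a), (m_b, C_b) = posterior(O, M_apx, y), posterior(O, M, y)
            print(name, kl_gauss(m_a, C_a, m_b, C_b))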

    Modelling the interaction between induced pluripotent stem cell-derived cardiomyocyte patches and recipient hearts

    Cardiovascular diseases are the main cause of death worldwide, and the single biggest killer is ischemic heart disease. Myocardial infarction causes the formation of non-conductive, non-contractile, scar-like tissue in the heart, which can hamper the heart's physiological function and cause pathologies ranging from arrhythmias to heart failure. Because of the myocardium's limited ability to regenerate, the heart cannot recover the tissue lost to myocardial infarction. The only available treatment is heart transplant, which is limited by the number of donors and can elicit an adverse response from the recipient's immune system. Recently, regenerative medicine has been proposed as an alternative approach to help post-myocardial infarction hearts recover their functionality. Among the various techniques, the application of cardiac patches of engineered heart tissue in combination with electroactive materials constitutes a promising technology. However, many challenges need to be faced in the development of this treatment. One of the main concerns is the immature phenotype of the stem cell-derived cardiomyocytes used to fabricate the engineered heart tissue: their electrophysiological differences with respect to the host myocardium may contribute to an increased arrhythmia risk. A large number of animal experiments are needed to optimize the patches' characteristics and to better understand the implications of the electrical interaction between patches and host myocardium. In this thesis we leveraged cardiac computational modelling to simulate in silico the electrical propagation in scarred heart tissue in the presence of a patch of engineered heart tissue and conductive polymer engrafted at the epicardium. This work is composed of two studies. In the first study we designed a tissue model with simplified geometry and used machine learning and global sensitivity analysis techniques to identify engineered heart tissue patch design variables that are important for restoring physiological electrophysiology in the host myocardium. Additionally, we showed how engineered heart tissue properties could be tuned to restore physiological activation while reducing arrhythmic risk. In the second study we moved to more realistic geometries and devised a way to manipulate ventricle meshes obtained from magnetic resonance images in order to apply in silico engineered heart tissue epicardial patches. We then investigated how patches with different conduction velocities and action potential durations influence the host ventricle's electrophysiology. Specifically, we showed that appropriately located patches can reduce the predisposition to anatomical isthmus-mediated re-entry, and that patches with a physiological action potential duration and higher conduction velocity were most effective in reducing this risk. We also demonstrated that patches with the conduction velocity and action potential duration typical of immature stem cell-derived cardiomyocytes were associated with the onset of sustained functional re-entry in an ischemic cardiomyopathy model with a large transmural scar. Finally, we demonstrated that patches electrically coupled to the host myocardium reduce the likelihood of propagation of focal ectopic impulses.
    This thesis demonstrates how computational modelling can be successfully applied to the field of regenerative medicine and constitutes the first step towards the creation of patient-specific models for developing and testing patches for cardiac regeneration.
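
    As a toy illustration of the class of question studied here (a drastically simplified sketch, far from the thesis's ventricular models): a 1D monodomain cable with FitzHugh-Nagumo kinetics, in which a region of raised diffusivity stands in for a well-coupled engineered heart tissue patch, and local activation times quantify its effect on propagation. All parameters are illustrative, not physiological fits.

        import numpy as np

        n, dx, dt, steps = 400, 0.025, 0.005, 80000
        D = np.full(n, 0.01)             # baseline tissue diffusivity
        D[150:250] = 0.04                # "patch" region conducts faster
        v, w = np.zeros(n), np.zeros(n)  # membrane potential and recovery
        v[:10] = 1.0                     # stimulus at the left end
        activation = np.full(n, np.nan)

        for step in range(steps):
            lap = np.empty(n)
            lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
            lap[0] = 2 * (v[1] - v[0]) / dx**2       # no-flux boundaries
            lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
            v_new = v + dt * (D * lap + v * (v - 0.1) * (1 - v) - w)
            w += dt * 0.01 * (0.5 * v - w)           # slow recovery variable
            v = v_new
            newly = np.isnan(activation) & (v > 0.5)
            activation[newly] = step * dt            # record activation times

        # Re-running with a slower patch (e.g. D[150:250] = 0.0025) delays
        # downstream activation, the kind of effect the thesis quantifies.
        print(f"far-end activation time: {activation[-1]:.1f} model time units")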

    The Fifteenth Marcel Grossmann Meeting

    The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories, including recent developments in string theory, to precision tests of general relativity, including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma-ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics. Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma-ray bursts, supernovae, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large-scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma-ray astronomy, cosmic rays and the history of general relativity.