Deeply Rational Machines -- What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence -- Sample Chapter 1 -- "Moderate Empiricism and Machine Learning"
This book provides a framework for thinking about foundational philosophical questions surrounding the use of deep artificial neural networks ("deep learning") to achieve artificial intelligence. Specifically, it links recent breakthroughs in deep learning to classical empiricist philosophy of mind. In recent assessments of deep learning's current capabilities and future potential, prominent scientists have cited historical figures from the perennial philosophical debate between nativism and empiricism, which primarily concerns the origins of abstract knowledge. The empiricists cited were generally faculty psychologists; that is, they argued that the extraction of abstract knowledge from perceptual experience involves the active engagement of general psychological faculties, such as perception, memory, imagination, attention, and empathy. This book explains how recent headline-grabbing deep learning achievements were enabled by adding functionality to these networks that models forms of processing attributed to these faculties by philosophers such as Aristotle, Ibn Sina (Avicenna), John Locke, David Hume, William James, and Sophie de Grouchy. It illustrates the utility of this interdisciplinary connection by showing how it can benefit both philosophy and computer science: computer scientists can continue to mine the history of philosophy for ideas and aspirational targets on the way to building more robustly rational artificial agents, and philosophers can see how some of the historical empiricists' most ambitious speculations can now be realized in specific computational systems.
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses prospects and the field of optical
quantum computing, providing insights into the potential applications of this
technology.
Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202
Informationsströme in digitalen Kulturen (Information Flows in Digital Cultures)
We are surrounded by a multitude of information flows that seem self-evident to us. To describe these digital cultures, work in media studies develops theories of a world in flux. In doing so, its diagnoses often succumb to a fetishization of technology and neglect social structures. Mathias Denecke offers a systematic critique of this theory-building. To that end, he traces the history of talk of flowing information through the development of digital computers and discusses how the concept can be made productive for descriptions of the present.
Revisiting neural information, computing and linking capacity
Neural information theory represents a fundamental method for modelling dynamic relations in biological systems. However, the notion of information, its representation, its content, and how it is processed are the subject of fierce debate. Since the limiting capacity of neuronal links strongly depends on how neurons are hypothesized to work, their operating modes are revisited by analyzing the differences between the results of the communication models published over the past seven decades and those of the recently developed generalization of classical information theory. It is pointed out that the operating mode of neurons resembles an appropriate combination of the formerly hypothesized analog and digital working modes, and, furthermore, that the notion of neural information and its processing must be reinterpreted. Given that the transmission channel is passive in Shannon's model, the active role of the transfer channels (the axons) may introduce further transmission limits beyond those derived from information theory. The time-aware operating model enables us to explain why (depending on the researcher's point of view) the operation can be considered either purely analog or purely digital.
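The capacity limit referenced here is Shannon's classical result for a passive, band-limited channel with additive noise. As a hedged illustration only (the standard textbook formula, not the paper's time-aware or generalized model, and with made-up numbers):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Classical Shannon capacity C = B * log2(1 + S/N) for a passive channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Purely illustrative numbers: a hypothetical 1 kHz channel at a linear SNR of 15
c = shannon_capacity(1e3, 15.0)
print(c)  # 4000.0 bits/s, since log2(16) = 4
```

Any active contribution of the axon, as the abstract notes, would impose additional limits not captured by this passive-channel formula.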
Harnessing Evolution in-Materio as an Unconventional Computing Resource
This thesis illustrates the use and development of physical conductive analogue systems for unconventional computing using the Evolution in-Materio (EiM) paradigm. EiM uses an Evolutionary Algorithm to configure and exploit a physical material (or medium) for computation. While EiM processors show promise, fundamental questions and scaling issues remain. Additionally, their development is hindered by slow manufacturing and physical experimentation. This work addressed these issues by implementing simulated models to speed up research efforts, followed by investigations of physically implemented novel in-materio devices.
Initial work leveraged simulated conductive networks as single-substrate "monolithic" EiM processors, performing classification by formulating the system as an optimisation problem, solved using Differential Evolution. Different material properties and algorithm parameters were isolated and investigated, clarifying the role of configurable parameters and showing that the ideal nanomaterial choice depends on problem complexity. Subsequently, drawing from concepts in the wider Machine Learning field, several enhancements to monolithic EiM processors were proposed and investigated. These ensured more efficient use of training data, better classification decision boundary placement, an independently optimised readout layer, and a smoother search space. Finally, scalability and performance issues were addressed by constructing in-Materio Neural Networks (iM-NNs), where several EiM processors were stacked in parallel and operated as physical realisations of Hidden Layer neurons. Greater flexibility in system implementation was achieved by re-using a single physical substrate recursively as several virtual neurons, but this sacrificed faster parallelised execution. These novel iM-NNs were first implemented using Simulated in-Materio neurons and trained for classification as Extreme Learning Machines, which were found to outperform artificial networks of a similar size. Physical iM-NNs were then implemented using a Raspberry Pi, a custom Hardware Interface, and Lambda Diode based Physical in-Materio neurons, which were trained successfully with neuroevolution. A more complex AutoEncoder structure was then proposed and implemented physically to perform dimensionality reduction on a handwritten digits dataset, outperforming both Principal Component Analysis and artificial AutoEncoders.
This work presents an approach to exploit systems with interesting physical dynamics and leverage them as a computational resource. Such systems could become low-power, high-speed unconventional computing assets in the future.
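The EiM workflow described above treats the configuration of a physical material as a black-box optimisation problem solved by Differential Evolution. A minimal sketch of the standard DE/rand/1/bin loop, using a stand-in objective (the sphere function as a placeholder "material response error", not an actual material model from the thesis):

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=150):
    """Minimal DE/rand/1/bin: evolve real-valued configuration vectors."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct donors, none equal to the target vector i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation (rand/1) plus binomial crossover
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if random.random() < CR else pop[i][d]
                     for d in range(dim)]
            # Clamp to the configuration bounds (e.g. allowed voltages)
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            f_trial = objective(trial)
            if f_trial < scores[i]:  # greedy selection
                pop[i], scores[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

random.seed(0)  # reproducible demo run
best_x, best_f = differential_evolution(lambda x: sum(v * v for v in x),
                                        bounds=[(-5.0, 5.0)] * 3)
print(best_f)
```

In the thesis's setting, the objective would instead measure classification error of the physical (or simulated) material's response under a candidate configuration of control voltages.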
Beyond Quantity: Research with Subsymbolic AI
How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?
Molecular dynamics simulations of nanoclusters in neuromorphic systems
Neuromorphic computing is a new computing paradigm that performs computing tasks using interconnected artificial neurons inspired by the natural neurons of the human brain. This computational architecture is more efficient at many complex tasks, such as pattern recognition, and shows promise at overcoming some of the limitations of conventional computers. Among the emerging types of artificial neurons, the cluster-based neuromorphic device is a promising system with good cost-efficiency because of its simple fabrication process. This device functions through the formation and breakage of connections ("synapses") between clusters, driven by the bias voltage applied to the clusters. The mechanisms of the formation and breakage of these connections are thus of the utmost interest. In this thesis, the molecular dynamics simulation method is used to explore the mechanisms of the formation and breakage of the connections ("filaments") between the clusters in a model neuromorphic device. First, the Joule-heating mechanism of filament breakage is explored using a model consisting of an Au nanowire that connects two Au1415 clusters. Upon heating, the atoms of the nanofilament gradually aggregate towards the clusters, causing the middle of the wire to gradually thin and then suddenly break. Most of the system remains crystalline during this process, but the centre becomes molten. The terminal clusters raise the melting point of the nanowires by pinning them and act as recrystallisation regions. A strong dependence of the breaking temperature is found not only on the width of the nanowires but also on their length and atomic structure. Secondly, the bridge formation and thermal breaking processes between Au1415 clusters on a graphite substrate are also simulated. The bridging process, which can heal a broken filament, is driven by diffusion of gold along the graphite substrate.
The characteristic times of bridge formation are explored at elevated simulation temperatures to estimate the longer characteristic times of formation under room-temperature conditions. The width of the bridge formed has a power-law dependence on the simulation time, and the mechanism is a combination of diffusion and viscous flow. Simulations of bridge breaking are also conducted and reveal the existence of a voltage threshold that must be reached to activate the breakage. The role of the substrate in the bridge formation and breakage processes is revealed to be that of a diffusion medium. Lastly, to explore potential future cluster materials, the thermal behaviour of Pb-Al core-shell clusters is studied. The core and shell are found to melt separately. In fact, the core atoms of nanoclusters tend to escape their shells and partially cover them, leading to a preference for a segregated state. The melting point of the core can be either depressed or elevated, depending on the thickness of the shell, due to different mechanisms.
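A power-law dependence like the bridge width on simulation time, w ∝ t^α, is typically extracted by a linear least-squares fit in log-log coordinates. A hedged sketch on synthetic data (the exponent 0.5 and prefactor 2 below are invented for illustration, not values from the thesis):

```python
import math

def fit_power_law(times, widths):
    """Least-squares fit of log(w) = alpha * log(t) + log(k); returns (alpha, k)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(w) for w in widths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the power-law exponent alpha
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - alpha * mx)  # intercept gives the prefactor
    return alpha, k

# Synthetic data generated from w = 2 * t^0.5 (illustrative values only)
ts = [1.0, 2.0, 4.0, 8.0, 16.0]
ws = [2.0 * t ** 0.5 for t in ts]
alpha, k = fit_power_law(ts, ws)
print(round(alpha, 3), round(k, 3))  # 0.5 2.0
```

The same fit at several elevated temperatures, combined with an Arrhenius-type extrapolation, is the usual route to estimating the much longer room-temperature formation times.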
Implementation-as: From Art & Science to Computing
This paper vindicates interpretational accounts of physical computation. Specifically, recent agential approaches that couch implementation in terms of scientific representation are corroborated. Such accounts are strengthened by the introduction of a novel notion: Implementation-as. Implementation-as is theoretically underpinned by the DEKI account (Frigg & Nguyen 2018), a formalized account of scientific representation relying on Goodman's and Elgin's notion of representation-as. The ensuing result is a philosophically robust account, satisfying the most important desiderata for accounts of computation in physical systems. The upshot is that physical computation occurs when agents use material systems as epistemic tools to compute a function. Application of this new framework is illustrated for the case of the MONIAC (an analog device) and the IAS-machine (a digital computer).