Synaptic plasticity and memory addressing in biological and artificial neural networks
Biological brains are composed of neurons, interconnected by synapses to create large complex networks. Learning and memory occur, in large part, due to synaptic plasticity -- modifications in the efficacy of information transmission through these synaptic connections. Artificial neural networks model these elements with neural "units" that communicate through synaptic weights. Models of learning and memory propose synaptic plasticity rules that describe and predict the weight modifications. An equally important but under-evaluated question is the selection of which synapses should be updated in response to a memory event. In this work, we attempt to separate the question of synaptic plasticity from that of memory addressing.
Chapter 1 provides an overview of the problem of memory addressing and a summary of the solutions that have been considered in computational neuroscience and artificial intelligence, as well as those that may exist in biology. Chapter 2 presents in detail a solution to memory addressing and synaptic plasticity in the context of familiarity detection, suggesting strong feedforward weights and anti-Hebbian plasticity as the respective mechanisms. Chapter 3 proposes a model of recall, with storage performed by addressing through local third factors and neo-Hebbian plasticity, and retrieval by content-based addressing. In Chapter 4, we consider the problem of concurrent memory consolidation and memorization. Both storage and retrieval are performed by content-based addressing, but the plasticity rule itself is implemented by gradient descent, modulated according to whether an item should be stored in a distributed manner or memorized verbatim. However, the classical method for computing gradients in recurrent neural networks, backpropagation through time, is generally considered unbiological. In Chapter 5 we suggest a more realistic implementation through an approximation of recurrent backpropagation.
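The anti-Hebbian familiarity mechanism described for Chapter 2 can be illustrated with a toy sketch (synthetic sparse patterns and illustrative parameters, not the model from the thesis): strong uniform feedforward weights serve as the address, while joint input/output activity weakens exactly the weights a pattern drives, so previously stored patterns later evoke a weaker response than novel ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 200

# Sparse binary patterns (10 active units each) with little mutual overlap.
def make_pattern():
    x = np.zeros(n_inputs)
    x[rng.choice(n_inputs, size=10, replace=False)] = 1.0
    return x

stored = [make_pattern() for _ in range(10)]
novel = [make_pattern() for _ in range(10)]

w = np.ones(n_inputs)   # strong uniform feedforward weights act as the "address"
eta = 0.05              # illustrative learning rate

def response(x):
    return w @ x

# Anti-Hebbian storage: joint input/output activity *decreases* the weights,
# so previously seen patterns later evoke a weaker (familiar) response.
for x in stored:
    w -= eta * response(x) * x

r_familiar = np.mean([response(x) for x in stored])
r_novel = np.mean([response(x) for x in novel])
assert r_familiar < r_novel   # familiarity shows up as a lowered response
```

Because the patterns barely overlap, the weight reduction is concentrated on each stored pattern's own active units, which is what makes the familiar/novel gap large and robust here.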
Taken together, these results propose a number of potential mechanisms for memory storage and retrieval, each of which separates the mechanism of synaptic updating -- plasticity -- from that of synapse selection -- addressing. Explicit studies of memory addressing may find applications not only in artificial intelligence but also in biology. In artificial networks, for example, selectively updating memories in large language models can help improve user privacy and security. In biological ones, understanding memory addressing can help improve health outcomes and treat memory-based illnesses such as Alzheimer's disease or PTSD.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Vibration-based damage localisation: Impulse response identification and model updating methods
Structural health monitoring has gained increasing interest over recent decades. As the technology has matured and monitoring systems are employed commercially, the development of more powerful and precise methods is the logical next step in this field. Vibration sensor networks with few measurement points, combined with the utilisation of ambient vibration sources, are especially attractive for practical applications, as this approach promises to be cost-effective while requiring minimal modification to the monitored structures. Since efficient methods for damage detection have already been developed for such sensor networks, the research focus shifts towards extracting more information from the measurement data, in particular towards the localisation and quantification of damage.
Two main concepts have produced promising results for damage localisation. The first approach involves a mechanical model of the structure, which is used in a model updating scheme to find the damaged areas of the structure. Second, there is a purely data-driven approach, which relies on residuals of vibration estimations to find regions where damage is probable. While much research has been conducted following these two concepts, different approaches are rarely directly compared using the same data sets. Therefore, this thesis presents advanced methods for vibration-based damage localisation using model updating as well as a data-driven method and provides a direct comparison using the same vibration measurement data.
The model updating approach presented in this thesis relies on multiobjective optimisation. Hence, the applied numerical optimisation algorithms are presented first. On this basis, the model updating parameterisation and objective function formulation are developed. The data-driven approach employs residuals from vibration estimations obtained using multiple-input finite impulse response filters. Both approaches are then verified using a simulated cantilever beam considering multiple damage scenarios. Finally, experimentally obtained data from an outdoor girder mast structure is used to validate the approaches. In summary, this thesis provides an assessment of model updating and residual-based damage localisation by means of verification and validation cases. It is found that the residual-based method exhibits numerical performance sufficient for real-time applications while providing a high sensitivity towards damage. However, the localisation accuracy is found to be superior using the model updating method.
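The residual idea behind the data-driven approach can be sketched in a toy single-input setting (synthetic data and hypothetical coefficients; the thesis uses multiple-input FIR filters on real sensor networks): an FIR model identified on healthy data predicts the sensor output well, and its residual grows once the transfer path changes.

```python
import numpy as np

rng = np.random.default_rng(1)

def fir_predict(u, h):
    """Causal FIR response: convolve the input with the coefficients."""
    return np.convolve(u, h)[: len(u)]

# Healthy system: the output is a fixed FIR response to ambient excitation.
n, order = 2000, 8
u = rng.standard_normal(n)              # ambient input (single input here)
h_true = rng.standard_normal(order) * 0.5
y_healthy = fir_predict(u, h_true)

# Identify FIR coefficients from healthy data via least squares.
U = np.column_stack([np.concatenate([np.zeros(k), u[: n - k]])
                     for k in range(order)])
h_est, *_ = np.linalg.lstsq(U, y_healthy, rcond=None)

# "Damage" alters the transfer path; the healthy model's residual grows.
h_damaged = h_true.copy()
h_damaged[2] += 0.5
y_damaged = fir_predict(u, h_damaged)

res_healthy = np.std(y_healthy - U @ h_est)
res_damaged = np.std(y_damaged - U @ h_est)
assert res_damaged > 10 * res_healthy   # the residual flags the change
```

With several sensors, comparing such residuals across measurement points is what turns this detection signal into a localisation indicator.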
Merkle Tree Ladder Mode: Reducing the Size Impact of NIST PQC Signature Algorithms in Practice
We introduce the Merkle Tree Ladder (MTL) mode of operation for signature schemes. MTL mode signs messages using an underlying signature scheme in such a way that the resulting signatures are condensable: a set of MTL mode signatures can be conveyed from a signer to a verifier in fewer bits than if the MTL mode signatures were sent individually. In MTL mode, the signer sends a shorter condensed signature for each message of interest and occasionally provides a longer reference value that helps the verifier process the condensed signatures. We show that in a practical scenario involving random access to an initial series of 10,000 signatures that expands gradually over time, MTL mode can reduce the size impact of the NIST PQC signature algorithms, which have signature sizes of 666 to 7856 bytes with example parameter sets, to a condensed signature size of 472 bytes per message. Even adding the overhead of the reference values, MTL mode signatures still reduce the overall signature size impact under a range of operational assumptions. Because MTL mode itself is quantum-safe, the mode can support long-term cryptographic resiliency in applications where signature size impact is a concern without limiting cryptographic diversity only to algorithms whose signatures are naturally short.
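As background for why tree commitments allow this kind of condensation, a minimal Merkle tree sketch (plain SHA-256, illustrative only, and not the MTL ladder construction itself) shows that one root commits to many messages while each per-message proof carries only logarithmically many sibling hashes:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_paths(leaves):
    """Build a Merkle tree; return the root and one authentication path per leaf."""
    level = [h(m) for m in leaves]
    paths = [[] for _ in leaves]
    idx = list(range(len(leaves)))        # leaf -> position in current level
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level.append(level[-1])
        for i, pos in enumerate(idx):
            paths[i].append(level[pos ^ 1])   # record the sibling hash
            idx[i] = pos // 2
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0], paths

def verify(msg, path, pos, root):
    node = h(msg)
    for sib in path:
        node = h(node + sib) if pos % 2 == 0 else h(sib + node)
        pos //= 2
    return node == root

msgs = [f"message {i}".encode() for i in range(16)]
root, paths = merkle_root_and_paths(msgs)
assert verify(msgs[5], paths[5], 5, root)
# Each proof has log2(16) = 4 sibling hashes, regardless of the message count.
assert len(paths[5]) == 4
```

In MTL mode the analogous short per-message material plays the role of the condensed signature, while the occasionally transmitted reference value plays the role the root plays here.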
Decision-making with gaussian processes: sampling strategies and monte carlo methods
We study Gaussian processes and their application to decision-making in the real world. We begin by reviewing the foundations of Bayesian decision theory and show how these ideas give rise to methods such as Bayesian optimization. We investigate practical techniques for carrying out these strategies, with an emphasis on estimating and maximizing acquisition functions. Finally, we introduce pathwise approaches to conditioning Gaussian processes and demonstrate key benefits for representing random variables in this manner.
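Pathwise conditioning is commonly derived from Matheron's rule: a posterior sample is obtained by drawing a joint prior sample and correcting it with the data residual. A minimal NumPy sketch (toy kernel, data, and hyperparameters; not the thesis's implementation) illustrates the update:

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, ls=0.5):
    """Squared-exponential kernel (illustrative hyperparameters)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# Noise-free observations and test locations
X = np.array([-1.0, 0.0, 1.0])
y = np.array([0.5, -0.2, 0.8])
Xs = np.linspace(-2, 2, 50)

jitter = 1e-6
Xall = np.concatenate([X, Xs])
Kall = k(Xall, Xall) + jitter * np.eye(len(Xall))

# 1. Draw one joint sample from the *prior* at train and test locations.
L = np.linalg.cholesky(Kall)
f = L @ rng.standard_normal(len(Xall))
fX, fXs = f[:3], f[3:]

# 2. Matheron's rule: correct the prior sample with the data residual,
#    yielding a draw from the posterior over the test locations.
Kxx = k(X, X) + jitter * np.eye(3)
alpha = np.linalg.solve(Kxx, y - fX)
posterior_sample = fXs + k(Xs, X) @ alpha

# The corrected sample interpolates the (noise-free) observations.
at_train = fX + k(X, X) @ alpha
assert np.allclose(at_train, y, atol=1e-4)
```

Each posterior draw is a deterministic function of one prior draw, which is what makes this representation convenient when acquisition functions must be evaluated or maximized over many sampled paths.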
Inverse Global Illumination using a Neural Radiometric Prior
Inverse rendering methods that account for global illumination are becoming more popular, but current methods require evaluating and automatically differentiating millions of path integrals by tracing multiple light bounces, which remains expensive and prone to noise. Instead, this paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer, while still correctly accounting for global illumination. Inspired by the Neural Radiosity technique, we use a neural network as a radiance function, and we introduce a prior consisting of the norm of the residual of the rendering equation in the inverse rendering loss. We train our radiance network and optimize scene parameters simultaneously using a loss consisting of both a photometric term between renderings and the multi-view input images, and our radiometric prior (the residual term). This residual term enforces a physical constraint on the optimization that ensures that the radiance field accounts for global illumination. We compare our method to a vanilla differentiable path tracer, and more advanced techniques such as Path Replay Backpropagation. Despite the simplicity of our approach, we can recover scene parameters with comparable and in some cases better quality, at considerably lower computation times.
Homepage: https://inverse-neural-radiosity.github.i
Toward Dynamic Social-Aware Networking Beyond Fifth Generation
The rise of the intelligent information world presents significant challenges for the telecommunication industry in meeting the service-level requirements of future applications and incorporating societal and behavioral awareness into the Internet of Things (IoT) objects. Social Digital Twins (SDTs), or Digital Twins augmented with social capabilities, have the potential to revolutionize digital transformation and meet the connectivity, computing, and storage needs of IoT devices in dynamic Fifth-Generation (5G) and Beyond Fifth-Generation (B5G) networks.
This research focuses on enabling dynamic social-aware B5G networking. The main contributions of this work include: (i) the design of a reference architecture for the orchestration of SDTs at the network edge to accelerate the service discovery procedure across the Social Internet of Things (SIoT); (ii) a methodology to evaluate the highly dynamic system performance considering jointly communication and computing resources; (iii) a set of practical conclusions and outcomes helpful in designing future digital twin-enabled B5G networks. Specifically, we propose an orchestration for SDTs and an SIoT-Edge framework aligned with the Multi-access Edge Computing (MEC) architecture ratified by the European Telecommunications Standards Institute (ETSI). We formulate the optimal placement of SDTs as a Quadratic Assignment Problem (QAP) and propose a graph-based approximation scheme considering the different types of IoT devices, their social features, mobility patterns, and the limited computing resources of edge servers. We also study the appropriate intervals for re-optimizing the SDT deployment at the network edge. The results demonstrate that accounting for social features in SDT placement offers considerable improvements in the SIoT browsing procedure. Moreover, recent advancements in wireless communications, edge computing, and intelligent device technologies are expected to promote the growth of SIoT with pervasive sensing and computing capabilities, ensuring seamless connections among SIoT objects.
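The QAP formulation of SDT placement can be sketched on a tiny synthetic instance (made-up flow and distance matrices, exhaustive search): the cost couples how intensely two SDTs interact with how far apart their hosting edge servers are, and the exponential search space is why the work resorts to a graph-based approximation.

```python
from itertools import permutations

import numpy as np

rng = np.random.default_rng(2)
n = 5  # SDTs = edge servers (toy square instance)

# F[i, j]: social/communication intensity between SDTs i and j (assumed data)
# D[a, b]: network distance between edge servers a and b (assumed data)
F = rng.random((n, n)); F = (F + F.T) / 2; np.fill_diagonal(F, 0)
D = rng.random((n, n)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)

def qap_cost(perm):
    """Cost of placing SDT i on server perm[i]."""
    return sum(F[i, j] * D[perm[i], perm[j]]
               for i in range(n) for j in range(n))

# Exhaustive search is feasible only for tiny instances; QAP is NP-hard,
# so realistic deployments need approximation schemes.
best = min(permutations(range(n)), key=qap_cost)
assert qap_cost(best) <= qap_cost(tuple(range(n)))
```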
We then offer a performance evaluation methodology for eXtended Reality (XR) services in edge-assisted wireless networks and propose fluid approximations to characterize the XR content evolution. The approach captures the time and space dynamics of the content distribution process during its transient phase, including time-varying loads, which are affected by arrival, transition, and departure processes. We examine the effects of XR user mobility on both communication and computing patterns. The results demonstrate that communication and computing planes are the key barriers to meeting the requirement for real-time transmissions. Furthermore, due to the trend toward immersive, interactive, and contextualized experiences, new use cases affect user mobility patterns and, therefore, system performance.