Joint Design of Source-Channel Codes with Linear Source Encoding Complexity and Good Channel Thresholds Based on Double-Protograph LDPC Codes
We propose the use of a lower or upper triangular sub-base matrix to replace
the identity matrix in the source-check-channel-variable linking protomatrix of
a double-protograph low-density parity-check joint-source-channel code (DP-LDPC
JSCC). The elements along the diagonal of the proposed lower or upper
triangular sub-base matrix are assigned as "1" and the other non-zero elements
can take any non-negative integral values. Compared with the traditional
DP-LDPC JSCC designs, the new designs show a theoretical channel threshold
improvement of up to 0.41 dB and a simulated source symbol error rate
improvement of up to 0.5 dB at an error rate of 1e-6. Comment: 7 pages, 5 figures, 3 tables, to appear in IEEE Communications
Letters
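The structural constraint described above (a triangular sub-base matrix with "1" along the diagonal and free non-negative integers on one side) can be sketched in a few lines. This is an illustrative toy only: the function names and random entries are ours, and the actual protograph design, lifting, and threshold analysis of the paper are not modeled.

```python
import numpy as np

def lower_triangular_linking_matrix(n, max_val=3, seed=0):
    """Build an n x n lower-triangular sub-base matrix with 1s on the
    diagonal and arbitrary non-negative integers strictly below it,
    as a stand-in for the identity linking protomatrix."""
    rng = np.random.default_rng(seed)
    m = np.zeros((n, n), dtype=int)
    for i in range(n):
        m[i, i] = 1  # diagonal entries fixed to 1
        for j in range(i):
            m[i, j] = rng.integers(0, max_val + 1)  # free non-negative entries
    return m

def is_valid_linking_matrix(m):
    """Check the stated constraints: unit diagonal, zeros above it."""
    return bool(np.all(np.diag(m) == 1) and np.all(np.triu(m, k=1) == 0))

M = lower_triangular_linking_matrix(4)
print(M)
print(is_valid_linking_matrix(M))  # True
```

The identity matrix of the traditional design is the special case with all off-diagonal entries zero, so the proposed family strictly generalizes it.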
The Public Performance Of Sanctions In Insolvency Cases: The Dark, Humiliating, And Ridiculous Side Of The Law Of Debt In The Italian Experience. A Historical Overview Of Shaming Practices
This study provides a diachronic comparative overview of how the law of debt has been applied by certain institutions in Italy. Specifically, it offers historical and comparative insights into the public performance of sanctions for insolvency through shaming and customary practices in Roman Imperial Law, in the Middle Ages, and in later periods.
The first part of the essay focuses on the Roman bonorum cessio culo nudo super lapidem and on the medieval customary institution called pietra della vergogna (stone of shame), which originates from the Roman model.
The second part of the essay analyzes the social function of the zecca and the pittima Veneziana during the Republic of Venice, and of the practice of lu soldate a castighe (no translation is possible).
The author uses a functionalist approach to apply some arguments and concepts from the current context to this historical analysis of ancient institutions that we would now consider ridiculous.
The article shows that the customary norms that today play a crucial regulatory role in online interactions also operated in the public square of the past. One of these tools is shaming. As in contemporary online settings, shaming practices in the public squares of historic periods were used to enforce the rules of civility in a given community. Such practices can be seen as virtuous when they are intended as a tool to pursue positive change in forces entrenched in the culture, and thus to address social wrongs considered outside the reach of the law, or to address human rights abuses
Characterization and mass formulas of symplectic self-orthogonal and LCD codes and their application
The object of this paper is to study two very important classes of codes in
coding theory, namely self-orthogonal (SO) and linear complementary dual (LCD)
codes under the symplectic inner product, involving characterization,
constructions, and their application. Using such a characterization, we
determine the mass formulas of symplectic SO and LCD codes by considering the
action of the symplectic group, and further obtain some asymptotic results.
Finally, under the Hamming distance, we obtain some symplectic SO (resp. LCD)
codes with improved parameters directly compared with Euclidean SO (resp. LCD)
codes. Under the symplectic distance, we obtain some additive SO (resp.
additive complementary dual) codes with improved parameters directly compared
with Hermitian SO (resp. LCD) codes. Further, we also construct many good
additive codes that outperform the best-known linear codes in Grassl's code table.
As an application, we construct a number of record-breaking
(entanglement-assisted) quantum error-correcting codes, which improve Grassl's
code table
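For concreteness, the symplectic inner product underlying these code classes can be illustrated with a small binary example. The snippet below is our own toy check, not the paper's construction: over GF(2) (where subtraction equals addition), a code is symplectic self-orthogonal exactly when all pairs of generator rows are symplectically orthogonal, since the form is bilinear.

```python
import itertools
import numpy as np

def symplectic_ip(u, v):
    """Symplectic inner product over GF(2): for u = (a|b), v = (c|d),
    <u, v>_s = a.d + b.c (mod 2)."""
    n = len(u) // 2
    a, b = u[:n], u[n:]
    c, d = v[:n], v[n:]
    return (int(np.dot(a, d)) + int(np.dot(b, c))) % 2

def is_symplectic_self_orthogonal(gen_rows):
    """Check symplectic self-orthogonality by testing all pairs of
    generator rows (sufficient because the form is bilinear)."""
    return all(symplectic_ip(u, v) == 0
               for u, v in itertools.product(gen_rows, repeat=2))

# Toy generator rows of a length-4 (n = 2) binary code, written as (a|b).
G = [np.array([1, 0, 0, 0]), np.array([0, 1, 0, 0])]
print(is_symplectic_self_orthogonal(G))  # True: the "b" halves are zero
```

The symplectic distance mentioned in the abstract is measured on the paired coordinates (a_i, b_i) rather than on single symbols, which is why symplectic constructions connect naturally to additive and quantum codes.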
Visual Cortical Traveling Waves: From Spontaneous Spiking Populations to Stimulus-Evoked Models of Short-Term Prediction
Thanks to recent advances in neurotechnology, waves of activity sweeping across entire cortical regions are now routinely observed. Moreover, these waves have been found to impact neural responses as well as perception, and the responses themselves are found to be structured as traveling waves. How exactly do these waves arise? Do they confer any computational advantages? These traveling waves represent an opportunity for an expanded theory of neural computation, in which dynamic local network activity may complement the moment-to-moment variability of our sensory experience.
This thesis aims to help uncover the origin and role of traveling waves in the visual cortex through three Works. In Work 1, by simulating a network of conductance-based spiking neurons with realistically large network size and synaptic density, distance-dependent horizontal axonal time delays were found to be important for the widespread emergence of spontaneous traveling waves consistent with those observed in vivo. Furthermore, these waves were found to be a dynamic mechanism of gain modulation that may explain the in-vivo finding that they modulate perception. In Work 2, the Kuramoto oscillator model was formulated in the complex domain to study a network with distance-dependent time delays. As in Work 1, these delays produced traveling waves, and the eigenspectrum of the complex-valued delayed matrix, which contains a delay operator, provided an analytical explanation for them. In Work 3, the model from Work 2 was adapted into a recurrent neural network for the task of forecasting video frames, addressing the question of how such a biologically constrained model may be useful in visual computation. We found that the wave activity emerging in this network was helpful: it was tightly linked with high forecast performance, and shuffle controls abolished both the waves and the performance simultaneously.
Altogether, these works shed light on the possible origins and uses of traveling waves in the visual cortex. In particular, time delays profoundly shape the spatiotemporal dynamics into traveling waves. This was confirmed numerically (Work 1) and analytically (Work 2). In Work 3, these waves were found to aid in the dynamic computation of visual forecasting
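The delay mechanism at the heart of Works 1 and 2 can be sketched with a minimal simulation, written as our own toy rather than the thesis code: a ring of Kuramoto phase oscillators whose interactions arrive with distance-proportional delays, integrated by a simple Euler scheme over a phase-history buffer. All parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_delayed_kuramoto(n=50, k=1.0, omega=1.0, delay_per_unit=0.1,
                              dt=0.01, steps=2000, seed=0):
    """Euler simulation of a ring of Kuramoto oscillators whose pairwise
    interaction is delayed in proportion to ring distance (a stand-in
    for distance-dependent axonal conduction delays)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    # shortest ring distance between oscillators i and j
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    delay_steps = np.rint(dist * delay_per_unit / dt).astype(int)
    max_delay = delay_steps.max()

    # history[t] holds all phases at integration step t
    history = np.zeros((steps + max_delay + 1, n))
    history[:max_delay + 1] = rng.uniform(0, 2 * np.pi, n)  # constant past

    for t in range(max_delay, max_delay + steps):
        theta = history[t]
        # entry (i, j): phase of j as seen by i, read at the delayed time
        delayed = history[t - delay_steps, idx[None, :]]
        coupling = np.sin(delayed - theta[:, None]).mean(axis=1)
        history[t + 1] = theta + dt * (omega + k * coupling)
    return history[max_delay:]

phases = simulate_delayed_kuramoto()
```

With `delay_per_unit=0` the same network tends toward global synchrony rather than waves, which is the qualitative contrast consistent with the role the thesis attributes to time delays.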
Amortized Bootstrapping Revisited: Simpler, Asymptotically-faster, Implemented
Micciancio and Sorrell (ICALP 2018) proposed a bootstrapping algorithm that can refresh many messages at once with sublinearly many homomorphic operations per message. However, despite the attractive asymptotic cost, it is unclear if their algorithm could ever be practical, which reduces the impact of their results. In this work, we follow their general framework but propose an amortized bootstrapping algorithm that is conceptually simpler and asymptotically cheaper. We reduce the number of homomorphic operations per refreshed message from to , and the noise overhead from to . We also make the algorithm more general by handling non-binary messages and applying programmable bootstrapping.
To obtain a concrete instantiation of our bootstrapping algorithm, we propose a double-CRT (aka RNS) version of the GSW scheme, including a new operation, called shrinking, used to speed up homomorphic operations by reducing the dimension and ciphertext modulus of the ciphertexts. We also provide a C++ implementation of our algorithm, thus showing for the first time the practicality of amortized bootstrapping. Moreover, it is competitive with existing bootstrapping algorithms, running even around 3.4 times faster than an equivalent non-amortized version of our bootstrapping
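The double-CRT (RNS) idea itself is standard and easy to demonstrate: an integer modulo Q = q1*q2*... is stored as its vector of residues, so arithmetic stays word-sized and is done residue-wise. The snippet below is a generic illustration of that representation, not the paper's GSW instantiation (the shrinking operation, for instance, is not modeled).

```python
from math import prod

def to_rns(x, moduli):
    """Represent x by its residues modulo each pairwise-coprime modulus."""
    return [x % q for q in moduli]

def from_rns(residues, moduli):
    """Reconstruct x mod prod(moduli) via the Chinese Remainder Theorem."""
    Q = prod(moduli)
    x = 0
    for r, q in zip(residues, moduli):
        Qi = Q // q
        x += r * Qi * pow(Qi, -1, q)  # modular inverse of Qi mod q
    return x % Q

moduli = [97, 101, 103]  # small pairwise-coprime "CRT primes"
a, b = 123456, 654321
Q = prod(moduli)
# multiplication is performed residue-wise, never on the big integer
c = [(ra * rb) % q for ra, rb, q in zip(to_rns(a, moduli),
                                        to_rns(b, moduli), moduli)]
print(from_rns(c, moduli) == (a * b) % Q)  # True
```

Dropping one modulus from the basis shrinks the effective modulus Q, which conveys the flavor (though not the details) of modulus reduction in RNS-based schemes.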
Adaptive dynamical networks
It is a fundamental challenge to understand how the function of a network is related to its structural organization. Adaptive dynamical networks represent a broad class of systems that can change their connectivity over time depending on their dynamical state. The most important feature of such systems is that their function depends on their structure and vice versa. While the properties of static networks have been extensively investigated in the past, the study of adaptive networks is much more challenging. Moreover, adaptive dynamical networks are of tremendous importance for various application fields, in particular for models of neuronal synaptic plasticity and for adaptive networks in chemical, epidemic, biological, transport, and social systems, to name a few. In this review, we provide a detailed description of adaptive dynamical networks, show their applications in various areas of research, highlight their dynamical features, describe the arising dynamical phenomena, and give an overview of the available mathematical methods developed for understanding adaptive dynamical networks
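A minimal instance of such a system, written here as a rough Euler-integration sketch of a paradigmatic model from this literature, is a network of phase oscillators whose coupling weights slowly co-evolve with the phases; all parameter values are illustrative assumptions.

```python
import numpy as np

def adaptive_kuramoto(n=10, eps=0.01, beta=0.0, dt=0.01, steps=5000, seed=1):
    """Adaptive network toy model: the node states (phases) drive the
    slow evolution of the coupling weights, and the weights in turn
    shape the phase dynamics:
        dtheta_i/dt = omega_i - (1/n) * sum_j k_ij * sin(theta_i - theta_j)
        dk_ij/dt    = -eps * (k_ij + sin(theta_i - theta_j + beta))
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    omega = np.zeros(n)                 # identical natural frequencies
    k = rng.uniform(-1, 1, (n, n))      # initial adaptive weights
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]
        dtheta = omega - (k * np.sin(diff)).mean(axis=1)
        dk = -eps * (k + np.sin(diff + beta))
        theta = theta + dt * dtheta
        k = k + dt * dk
    return theta, k

theta, k = adaptive_kuramoto()
print(np.all(np.abs(k) <= 1 + 1e-9))  # True: weights stay in [-1, 1]
```

The separation of time scales (eps much smaller than 1) is the typical setting: fast node dynamics on a slowly adapting structure, which is precisely the structure-function interplay the review describes.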
Synergies between Numerical Methods for Kinetic Equations and Neural Networks
The overarching theme of this work is the efficient computation of large-scale systems. Here we deal with two types of mathematical challenges, which are quite different at first glance but offer similar opportunities and challenges upon closer examination.
Physical descriptions of phenomena and their mathematical modeling are performed on diverse scales, ranging from nano-scale interactions of single atoms to the macroscopic dynamics of the earth's atmosphere. We consider such systems of interacting particles and explore methods to simulate them efficiently and accurately, with a focus on the kinetic and macroscopic description of interacting particle systems.
Macroscopic governing equations describe the evolution of a system in time and space, whereas the more fine-grained kinetic description additionally takes the particle velocity into account.
Discretizing kinetic equations that depend on space, time, and velocity variables is challenging due to the need to preserve physical solution bounds (e.g. positivity), avoid spurious artifacts, and maintain computational efficiency.
In the pursuit of overcoming the challenge of computability in both kinetic and multi-scale modeling, a wide variety of approximative methods have been established in the realm of reduced-order modeling, surrogate modeling, and model compression. For kinetic models, this may manifest in hybrid numerical solvers that switch between macroscopic and mesoscopic simulation, asymptotic-preserving schemes that bridge the gap between both physical resolution levels, or surrogate models that operate on the kinetic level but replace computationally heavy operations of the simulation with fast approximations.
Thus, for the simulation of kinetic and multi-scale systems with high spatial resolution and a long temporal horizon, the quote by Paul Dirac (that the governing equations are known, but far too complicated to solve) is as relevant as it was almost a century ago.
The first goal of the dissertation is therefore the development of acceleration strategies for kinetic discretization methods that preserve the structure of their governing equations. In particular, we investigate the use of convex neural networks to accelerate the minimal entropy closure method. Further, we develop a neural-network-based hybrid solver for multi-scale systems, where kinetic and macroscopic methods are chosen based on local flow conditions.
Furthermore, we deal with the compression and efficient computation of neural networks. Neural networks are now successfully used in various forms in countless scientific works and technical systems, with well-known applications in image recognition and computer-aided language translation, but also as surrogate models for numerical mathematics.
Although the first neural networks were already presented in the 1950s, the scientific discipline has enjoyed increasing popularity mainly during the last 15 years, since only now is sufficient computing capacity available. Remarkably, the increasing availability of computing resources is accompanied by a hunger for larger models, fueled by the common conception among machine learning practitioners and researchers that more trainable parameters equal higher performance and better generalization capabilities. The increase in model size exceeds the growth of available computing resources by orders of magnitude. Since 2012, the computational resources used in the largest neural network models have doubled every 3.4 months (see https://openai.com/blog/ai-and-compute/), as opposed to Moore's Law, which proposes a two-year doubling period in available computing power.
To some extent, Dirac's statement also applies to the recent computational challenges in the machine-learning community. The desire to evaluate and train on resource-limited devices sparked interest in model compression, where neural networks are sparsified or factorized, typically after training. The second goal of this dissertation is thus a low-rank method, originating from numerical methods for kinetic equations, that compresses neural networks during training via low-rank factorization.
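The basic factorization step behind such compression can be sketched as follows. This is a generic truncated-SVD illustration of replacing a dense weight matrix by two thin factors, not the dissertation's dynamical low-rank training scheme; the matrix sizes and rank are illustrative assumptions.

```python
import numpy as np

def low_rank_factorize(w, rank):
    """Replace a dense layer weight w (m x n) by factors u (m x r) and
    v (r x n) via truncated SVD, so the layer stores m*r + r*n numbers
    instead of m*n and applies x -> (x @ u) @ v."""
    U, s, Vt = np.linalg.svd(w, full_matrices=False)
    u = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    v = Vt[:rank]
    return u, v

rng = np.random.default_rng(0)
# a weight matrix with intrinsic rank 8, plus small noise
w = (rng.normal(size=(256, 8)) @ rng.normal(size=(8, 256))
     + 0.01 * rng.normal(size=(256, 256)))
u, v = low_rank_factorize(w, rank=8)
params_full = w.size
params_lr = u.size + v.size
rel_err = np.linalg.norm(w - u @ v) / np.linalg.norm(w)
print(params_full, params_lr)  # 65536 vs 4096 parameters: a 16x reduction
print(rel_err < 0.05)          # True: near-lossless when w is close to low rank
```

Training-time low-rank methods keep the factors u and v (or a time-evolving low-rank representation) throughout optimization rather than factorizing once after training, which is the distinction the dissertation draws against post-hoc compression.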
This dissertation thus considers synergies between kinetic models, neural networks, and numerical methods in both disciplines to develop time-, memory- and energy-efficient computational methods for both research areas