Character-level Recurrent Neural Networks in Practice: Comparing Training and Sampling Schemes
Recurrent neural networks are nowadays successfully used in an abundance of
applications, going from text, speech and image processing to recommender
systems. Backpropagation through time is the algorithm that is commonly used to
train these networks on specific tasks. Many deep learning frameworks have
their own implementation of training and sampling procedures for recurrent
neural networks, while there are in fact multiple other possibilities to choose
from and other parameters to tune. In existing literature this is very often
overlooked or ignored. In this paper we therefore give an overview of possible
training and sampling schemes for character-level recurrent neural networks to
solve the task of predicting the next token in a given sequence. We test these
different schemes on a variety of datasets, neural network architectures and
parameter settings, and formulate a number of take-home recommendations. The
choice of training and sampling scheme turns out to be subject to a number of
trade-offs, such as training stability, sampling time, model performance and
implementation effort, but is largely independent of the data. Perhaps the most
surprising result is that transferring hidden states for correctly initializing
the model on subsequences often leads to unstable training behavior depending
on the dataset. Comment: 23 pages, 11 figures, 4 tables
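The next-token prediction task described in this abstract can be illustrated with a minimal data-preparation sketch. This is a hedged illustration, not the paper's code: `make_pairs` is a hypothetical helper that splits text into consecutive (input, target) subsequences, where the target is the input shifted by one character.

```python
# Minimal sketch (assumed details, not the paper's implementation): building
# character-level next-token training pairs from a text.

def make_pairs(text, seq_len):
    """Split text into (input, target) subsequences for next-char prediction:
    the target is the input shifted one character to the right."""
    pairs = []
    for i in range(0, len(text) - seq_len, seq_len):
        x = text[i : i + seq_len]
        y = text[i + 1 : i + seq_len + 1]
        pairs.append((x, y))
    return pairs

pairs = make_pairs("hello world", 4)
# Because the subsequences are consecutive, a "stateful" training scheme can
# transfer the RNN hidden state from one pair to the next; a "stateless"
# scheme resets the hidden state for every subsequence instead.
```

The choice between carrying or resetting the hidden state across these consecutive subsequences is exactly the kind of training/sampling scheme the paper compares.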
Learning Large-scale Neural Fields via Context Pruned Meta-Learning
We introduce an efficient optimization-based meta-learning technique for
large-scale neural field training by realizing significant memory savings
through automated online context point selection. This is achieved by focusing
each learning step on the subset of data with the highest expected immediate
improvement in model quality, resulting in the almost instantaneous modeling of
global structure and subsequent refinement of high-frequency details. We
further improve the quality of our meta-learned initialization by introducing a
bootstrap correction resulting in the minimization of any error introduced by
reduced context sets while simultaneously mitigating the well-known myopia of
optimization-based meta-learning. Finally, we show how gradient re-scaling at
meta-test time allows the learning of extremely high-quality neural fields in
significantly shortened optimization procedures. Our framework is
model-agnostic, intuitive, straightforward to implement, and shows significant
reconstruction improvements for a wide range of signals. We provide an
extensive empirical evaluation on nine datasets across multiple
modalities, demonstrating state-of-the-art results while providing additional
insight through careful analysis of the algorithmic components constituting our
method. Code is available at https://github.com/jihoontack/GradNCPComment: Published as a conference proceeding for NeurIPS 202
Objective acceleration for unconstrained optimization
Acceleration schemes can dramatically improve existing optimization
procedures. In most of the work on these schemes, such as nonlinear Generalized
Minimal Residual (N-GMRES), acceleration is based on minimizing the
norm of some target on subspaces of R^n. There are many numerical
examples that show how accelerating general purpose and domain-specific
optimizers with N-GMRES results in large improvements. We propose a natural
modification to N-GMRES, which significantly improves the performance in a
testing environment originally used to advocate N-GMRES. Our proposed approach,
which we refer to as O-ACCEL (Objective Acceleration), is novel in that it
minimizes an approximation to the \emph{objective function} on subspaces of
R^n. We prove that O-ACCEL reduces to the Full Orthogonalization
Method for linear systems when the objective is quadratic, which differentiates
our proposed approach from existing acceleration methods. Comparisons with
L-BFGS and N-CG indicate the competitiveness of O-ACCEL. As it can be combined
with domain-specific optimizers, it may also be beneficial in areas where
L-BFGS or N-CG are not suitable. Comment: 18 pages, 6 figures, 5 tables
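The subspace-acceleration idea behind N-GMRES can be sketched concretely: combine previous iterates with affine weights chosen to minimize the norm of the combined gradient. This is a hedged, simplified illustration in the N-GMRES spirit (O-ACCEL instead minimizes an approximation to the objective itself); `accelerate` is a hypothetical helper, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of subspace acceleration: find affine weights c over previous
# iterates that minimize ||sum_i c_i * grad_i||, then return the weighted
# combination of the iterates.

def accelerate(xs, grads):
    """Affine combination of iterates `xs` minimizing ||sum_i c_i g_i||
    subject to sum_i c_i = 1, via the KKT system of the constrained
    least-squares problem."""
    G = np.stack(grads, axis=1)         # columns are gradients
    k = G.shape[1]
    A = np.block([[G.T @ G, np.ones((k, 1))],
                  [np.ones((1, k)), np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(k), [1.0]])
    c = np.linalg.solve(A, rhs)[:k]
    return np.stack(xs, axis=1) @ c

# Quadratic f(x) = (x - 2)^2 with gradient 2(x - 2): two crude iterates.
xs = [np.array([0.0]), np.array([1.0])]
grads = [np.array([-4.0]), np.array([-2.0])]
x_acc = accelerate(xs, grads)           # the combination lands on the minimizer
```

For a quadratic objective the combined-gradient condition is exact, which is consistent with the abstract's observation that such schemes reduce to Krylov-type methods on quadratics.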
Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures
Quantum computers have recently made great strides and are on a long-term
path towards useful fault-tolerant computation. A dominant overhead in
fault-tolerant quantum computation is the production of high-fidelity encoded
qubits, called magic states, which enable reliable error-corrected computation.
We present the first detailed designs of hardware functional units that
implement space-time optimized magic-state factories for surface code
error-corrected machines. Interactions among distant qubits require surface
code braids (physical pathways on chip) which must be routed. Magic-state
factories are circuits composed of a complex set of braids that is more
difficult to route than quantum circuits considered in previous work [1]. This
paper explores the impact of scheduling techniques, such as gate reordering and
qubit renaming, and we propose two novel mapping techniques: braid repulsion
and dipole moment braid rotation. We combine these techniques with graph
partitioning and community detection algorithms, and further introduce a
stitching algorithm for mapping subgraphs onto a physical machine. Our results
show a factor of 5.64 reduction in space-time volume compared to the best-known
previous designs for magic-state factories. Comment: 13 pages, 10 figures
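The gate-reordering idea mentioned in this abstract rests on a standard observation: gates acting on disjoint qubits have no dependency between them and can be reordered or scheduled in the same time step. A hedged, generic sketch using Kahn's-algorithm layering of a gate dependency DAG (not the paper's mapping pipeline; gate names here are illustrative):

```python
# Hedged sketch of gate reordering as dependency-graph scheduling: group the
# gates of a circuit into layers, where all gates in one layer are mutually
# independent and can run concurrently.

def schedule(gates, deps):
    """gates: list of gate names; deps: list of (earlier, later) pairs.
    Returns gates grouped into layers via Kahn's algorithm."""
    indeg = {g: 0 for g in gates}
    succ = {g: [] for g in gates}
    for a, b in deps:
        succ[a].append(b)
        indeg[b] += 1
    layer = [g for g in gates if indeg[g] == 0]
    layers = []
    while layer:
        layers.append(layer)
        nxt = []
        for g in layer:
            for s in succ[g]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        layer = nxt
    return layers

# H on q0 and X on q1 touch different qubits, so they share a layer;
# CNOT(q0, q1) must wait for both.
layers = schedule(["H", "X", "CNOT"], [("H", "CNOT"), ("X", "CNOT")])
```

In a braid-routing setting, the freedom to permute gates within a layer is what the paper's scheduling techniques exploit to reduce congestion on the chip's physical pathways.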