Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature.

For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show that no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list and for computing the sum of a list of numbers.

We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
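To illustrate what the commutative matrix scaling problem asks for, the following is a minimal sketch of the classical Sinkhorn iteration, which rescales a positive matrix toward a doubly stochastic one. This is an illustration only, not the thesis's quantum or interior-point algorithms.

```python
# Minimal Sinkhorn iteration: given a positive matrix A, find diagonal
# scalings r (rows) and c (columns) so that the rescaled matrix has
# unit row and column sums. Pure-Python sketch for illustration.

def sinkhorn(A, iters=200):
    n = len(A)
    r = [1.0] * n  # row scalings
    c = [1.0] * n  # column scalings
    for _ in range(iters):
        # normalise rows, then columns, alternately
        for i in range(n):
            r[i] = 1.0 / sum(A[i][j] * c[j] for j in range(n))
        for j in range(n):
            c[j] = 1.0 / sum(r[i] * A[i][j] for i in range(n))
    return [[r[i] * A[i][j] * c[j] for j in range(n)] for i in range(n)]

B = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
row_sums = [sum(row) for row in B]
col_sums = [sum(B[i][j] for i in range(2)) for j in range(2)]
```

For strictly positive matrices the alternating normalisation converges geometrically, which is why a few hundred iterations suffice in this toy example.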
LIPIcs, Volume 251, ITCS 2023, Complete Volume
On the path integration system of insects: there and back again
Navigation is an essential capability of animate organisms and robots. Among animate organisms of particular interest are insects because they are capable of a variety of navigation competencies solving challenging problems with limited resources, thereby providing inspiration for robot navigation.
Ants, bees and other insects are able to return to their nest using a navigation strategy known as path integration. During path integration, the animal maintains a running estimate of the distance and direction to its nest as it travels. This estimate, known as the `home vector', enables the animal to return to its nest.
Path integration was the technique used by sea navigators to cross the open seas in the past. To perform path integration, both sailors and insects need access to two pieces of information, their direction and their speed of motion over time. Neurons encoding the heading and speed have been found to converge on a highly conserved region of the insect brain, the central complex. It is, therefore, believed that the central complex is key to the computations pertaining to path integration.
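The running estimate described above is dead reckoning: accumulate displacement from heading and speed samples, and negate the result to point home. A minimal sketch for a hypothetical agent with unit timesteps:

```python
import math

# Toy path integrator: accumulate displacement from (heading, speed)
# samples; the negated accumulator is the "home vector".

def integrate_path(steps):
    """steps: list of (heading_radians, speed) samples, unit timestep."""
    x = y = 0.0
    for heading, speed in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    # the home vector points from the current position back to the nest
    return (-x, -y)

# Outbound trip: 3 units east, then 4 units north.
hx, hy = integrate_path([(0.0, 1.0)] * 3 + [(math.pi / 2, 1.0)] * 4)
distance_home = math.hypot(hx, hy)  # 5.0, by Pythagoras
```

The biological circuit presumably implements something like this accumulation in neural activity rather than Cartesian coordinates, but the computational requirement is the same.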
However, several questions remain about the exact structure of the neuronal circuit that tracks the animal's heading, how it differs between insect species, and how the speed and direction are integrated into a home vector and maintained in memory. In this thesis, I have combined behavioural, anatomical, and physiological data with computational modelling and agent simulations to tackle these questions.
Analysis of the internal compass circuit of two insect species with highly divergent ecologies, the fruit fly Drosophila melanogaster and the desert locust Schistocerca gregaria, revealed that despite 400 million years of evolutionary divergence, both species share a fundamentally common internal compass circuit that keeps track of the animal's heading. However, subtle differences in the neuronal morphologies result in distinct circuit dynamics adapted to the ecology of each species, thereby providing insights into how neural circuits evolved to accommodate species-specific behaviours.
The fast-moving insects need to update their home vector memory continuously as they move, yet they can remember it for several hours. This conjunction of fast updating and long persistence of the home vector does not directly map to current short, mid, and long-term memory accounts. An extensive literature review revealed a lack of available memory models that could support the home vector memory requirements.
A comparison of existing behavioural data with the homing behaviour of simulated robot agents illustrated that the prevalent hypothesis, which posits that the neural substrate of the path integration memory is a bump attractor network, is contradicted by behavioural evidence.
An investigation of the type of memory utilised during path integration revealed that cold-induced anaesthesia disrupts the ability of ants to return to their nest, but it does not eliminate their ability to move in the correct homing direction. Using computational modelling and simulated agents, I argue that the best explanation for this phenomenon is not two separate memories differently affected by temperature but a shared memory that encodes both the direction and distance.
The results presented in this thesis shed some more light on the labyrinth that researchers of animal navigation have been exploring in their attempts to unravel a few more rounds of Ariadne's thread back to its origin. The findings provide valuable insights into the path integration system of insects, as well as inspiration for future memory research, for advancing path integration techniques in robotics, and for developing novel neuromorphic solutions to computational problems.
Nonlocal games and their device-independent quantum applications
Device-independence is a property of certain protocols that allows one to ensure their proper execution given only classical interaction with devices and assuming the correctness of the laws of physics. This scenario describes the most general form of cryptographic security, in which no trust is placed in the hardware involved; indeed, one may even take it to have been prepared by an adversary.
Many quantum tasks have been shown to admit device-independent protocols by augmentation with "nonlocal games". These are games in which noncommunicating parties jointly attempt to fulfil some conditions imposed by a referee. We introduce examples of such games and examine the optimal strategies of players who are allowed access to different possible shared resources, such as entangled quantum states. We then study their role in self-testing, private random number generation, and secure delegated quantum computation. Hardware imperfections are naturally incorporated in the device-independent scenario as adversarial, and we thus also perform noise robustness analysis where feasible.
We first study a generalization of the Mermin–Peres magic square game to arbitrary rectangular dimensions. After exhibiting some general properties, these "magic rectangle" games are fully characterized in terms of their optimal win probabilities for quantum strategies. We find that for m×n magic rectangle games with dimensions m,n ≥ 3, there are quantum strategies that win with certainty, while for dimensions 1×n quantum strategies do not outperform classical strategies. The final case of dimensions 2×n is richer, and we give upper and lower bounds that both outperform the classical strategies. As an initial usage scenario, we apply our findings to quantum certified randomness expansion to find noise tolerances and rates for all magic rectangle games. To do this, we use our previous results to obtain the winning probabilities of games with a distinguished input for which the devices give a deterministic outcome, and follow the analysis of C. A. Miller and Y. Shi [SIAM J. Comput. 46, 1304 (2017)].
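The classical baseline against which these quantum strategies are compared can be checked directly: a short brute force over deterministic classical strategies for the 3×3 magic square game recovers its well-known classical value of 8/9.

```python
from itertools import product

# Brute-force the classical value of the 3x3 Mermin-Peres magic square
# game. Alice fills her assigned row with entries in {+1,-1} whose
# product is +1; Bob fills his assigned column with product -1; they win
# iff they agree on the shared cell. For the classical value it suffices
# to maximise over deterministic strategies.

row_fills = [f for f in product([1, -1], repeat=3) if f[0] * f[1] * f[2] == 1]
col_fills = [f for f in product([1, -1], repeat=3) if f[0] * f[1] * f[2] == -1]

best = 0.0
# A deterministic strategy fixes one filling per row (Alice) / column (Bob).
for alice in product(row_fills, repeat=3):
    for bob in product(col_fills, repeat=3):
        wins = sum(alice[r][c] == bob[c][r] for r in range(3) for c in range(3))
        best = max(best, wins / 9)

# best == 8/9: no classical strategy wins every question pair, whereas a
# quantum strategy sharing two Bell states wins with certainty.
```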
Self-testing is a method to verify that one has a particular quantum state from purely classical statistics. For practical applications, such as device-independent delegated verifiable quantum computation, it is crucial that one self-tests multiple Bell states in parallel while keeping the quantum capabilities required of one side to a minimum. We use our 3×n magic rectangle games to obtain a self-test for n Bell states where one side needs only to measure single-qubit Pauli observables. The protocol requires small input sizes [constant for Alice and O(log n) bits for Bob] and is robust with robustness O(n^{5/2}√ε), where ε is the closeness of the ideal (perfect) correlations to those observed. To achieve the desired self-test, we introduce a one-side-local quantum strategy for the magic square game that wins with certainty, we generalize this strategy to the family of 3×n magic rectangle games, and we supplement these nonlocal games with extra check rounds (of single and pairs of observables).
Finally, we introduce a device-independent two-prover scheme in which a classical verifier can use a simple untrusted quantum measurement device (the client device) to securely delegate a quantum computation to an untrusted quantum server. To do this, we construct a parallel self-testing protocol to perform device-independent remote state preparation of n qubits and compose this with the unconditionally secure universal verifiable blind quantum computation (VBQC) scheme of J. F. Fitzsimons and E. Kashefi [Phys. Rev. A 96, 012303 (2017)]. Our self-test achieves a multitude of desirable properties for the application we consider, giving rise to practical and fully device-independent VBQC. It certifies parallel measurements of all cardinal and intercardinal directions in the XY-plane as well as the computational basis, uses few input questions (of size logarithmic in n for the client and a constant number communicated to the server), and requires only single-qubit measurements to be performed by the client device.
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.

This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
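A standard preprocessing step that typically sits in front of decoders like those above (not specific to any single paper here) is sliding-window feature extraction from the raw EMG, for example root-mean-square amplitude. A minimal sketch with made-up signal values:

```python
import math

# Sketch of sliding-window RMS feature extraction from one EMG channel.
# The window hops by `step` samples; each window yields one amplitude
# feature, a common input representation for myoelectric decoders.

def rms_windows(signal, window, step):
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        feats.append(math.sqrt(sum(s * s for s in seg) / window))
    return feats

emg = [0.0, 1.0, -1.0, 2.0, -2.0, 1.0, 0.5, -0.5]   # toy samples
features = rms_windows(emg, window=4, step=2)       # 3 overlapping windows
```

In practice this is computed per electrode channel, and the per-window feature vectors across channels form the decoder's input.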
Functional connectivity and dendritic integration of feedback in visual cortex
A fundamental question in neuroscience is how different brain regions communicate with each other. Sensory processing engages distributed circuits across many brain areas and involves information flow in the feedforward and feedback direction. While feedforward processing is conceptually well understood, feedback processing has remained mysterious. Cortico-cortical feedback axons are enriched in layer 1, where they form synapses with the apical dendrites of pyramidal neurons. The organization and dendritic integration of information conveyed by these axons, however, are unknown. This thesis describes my efforts to link the circuit-level and dendritic-level organization of cortico-cortical feedback in the mouse visual system. First, using cellular resolution all-optical interrogation across cortical areas, I characterized the functional connectivity between the lateromedial higher visual area (LM) and primary visual cortex (V1). Feedback influence had both facilitating and suppressive effects on visually-evoked activity in V1 neurons, and was spatially organized: retinotopically aligned feedback was relatively more suppressive, while retinotopically offset feedback was relatively more facilitating. Second, to examine how feedback inputs are integrated in apical dendrites, I optogenetically stimulated presynaptic neurons in LM while using 2-photon calcium imaging to map feedback-recipient spines in the apical tufts of layer 5 neurons in V1. Activation of a single feedback-providing input was sufficient to boost calcium signals and recruit branch-specific local events in the recipient dendrite, suggesting that feedback can engage dendritic nonlinearities directly. Finally, I measured the recruitment of apical dendrites during visual stimulus processing. Surround visual stimuli, which should recruit relatively more facilitating feedback, drove local calcium events in apical tuft branches. 
Moreover, global dendritic event size was not purely determined by somatic activity but was modulated by visual stimuli and behavioural state, in a manner consistent with the spatial organization of feedback. In summary, these results point toward a possible involvement of active dendritic processing in the integration of feedback signals. Active dendrites could thus provide a biophysical substrate for the integration of essential top-down information streams, including contextual or predictive processing.
Swoosh: Practical Lattice-Based Non-Interactive Key Exchange
The advent of quantum computers has sparked significant interest in post-quantum cryptographic schemes, as a replacement for currently used cryptographic primitives. In this context, lattice-based cryptography has emerged as the leading paradigm to build post-quantum cryptography. However, all existing viable replacements of the classical Diffie-Hellman key exchange require additional rounds of interaction, thus failing to achieve all the benefits of this protocol. Although earlier work has shown that lattice-based Non-Interactive Key Exchange (NIKE) is theoretically possible, it has been considered too inefficient for real-life applications.
In this work, we challenge this folklore belief and provide the first evidence against it. We construct a practical lattice-based NIKE whose security is based on the standard module learning with errors (M-LWE) problem in the quantum random oracle model. Our scheme is obtained in two steps: (i) a passively-secure construction that achieves a strong notion of correctness, coupled with (ii) a generic compiler that turns any such scheme into an actively-secure one. To substantiate our efficiency claim, we provide an optimised implementation of our construction in Rust and Jasmin. Our implementation demonstrates the scheme's applicability to real-world scenarios, yielding public keys of approximately KBs. Moreover, the computation of shared keys takes fewer than million cycles on an Intel Skylake CPU, offering a post-quantum security level exceeding bits.
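The shape of a lattice NIKE, and why correctness is the delicate part, can be illustrated with a toy, entirely insecure plain-LWE analogue (not the Swoosh construction itself; all parameters are illustrative). Both parties derive noisy approximations of the same inner product and must round away the small error terms:

```python
import random

# Toy plain-LWE-style NIKE sketch: Alice and Bob each publish a noisy
# linear function of a shared public matrix A, then both compute
# approximations of sA^T A sB that differ only by small cross terms.
# Illustration only; real schemes use module lattices and careful
# parameters/reconciliation.

random.seed(1)
n, q = 16, 1 << 15

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
ternary = lambda: [random.choice([-1, 0, 1]) for _ in range(n)]
sA, eA = ternary(), ternary()   # Alice's secret and error
sB, eB = ternary(), ternary()   # Bob's secret and error

# Public keys: pA = A^T sA + eA,  pB = A sB + eB   (mod q)
pA = [(sum(A[i][j] * sA[i] for i in range(n)) + eA[j]) % q for j in range(n)]
pB = [(sum(A[i][j] * sB[j] for j in range(n)) + eB[i]) % q for i in range(n)]

kA = sum(sA[i] * pB[i] for i in range(n)) % q   # sA^T A sB + <sA, eB>
kB = sum(pA[j] * sB[j] for j in range(n)) % q   # sA^T A sB + <eA, sB>

# The raw keys differ only by the cross terms <sA,eB> - <eA,sB>, which
# are tiny relative to q, so coarse rounding yields matching key bits
# except near rounding boundaries.
diff = min((kA - kB) % q, (kB - kA) % q)
```

The "strong notion of correctness" mentioned in the abstract is precisely about making this agreement-after-rounding hold with overwhelming probability.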
One-out-of-Many Unclonable Cryptography: Definitions, Constructions, and More
The no-cloning principle of quantum mechanics enables us to achieve amazing unclonable cryptographic primitives, which are impossible in classical cryptography. However, the security definitions for unclonable cryptography are tricky, and achieving desirable security notions for unclonability is a challenging task. In particular, there is no indistinguishable-secure unclonable encryption or quantum copy-protection for single-bit output point functions in the standard model. To tackle this problem, we introduce and study relaxed but meaningful security notions for unclonable cryptography in this work. We call the new security notion one-out-of-many unclonable security.
We obtain the following results.
- We show that one-time strong anti-piracy secure secret key single-decryptor encryption (SDE) implies one-out-of-many indistinguishable-secure unclonable encryption.
- We construct a one-time strong anti-piracy secure secret key SDE scheme in the standard model from the LWE assumption.
- We construct one-out-of-many copy-protection for single-bit output point functions from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption.
- We construct one-out-of-many unclonable predicate encryption (PE) from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption.
Thus, we obtain one-out-of-many indistinguishable-secure unclonable encryption, one-out-of-many copy-protection for single-bit output point functions, and one-out-of-many unclonable PE in the standard model from the LWE assumption. In addition, our one-time SDE scheme is the first SDE scheme that does not rely on any oracle heuristics or strong assumptions such as indistinguishability obfuscation and witness encryption.
Des-q: a quantum algorithm to construct and efficiently retrain decision trees for regression and binary classification
Decision trees are widely used in machine learning due to their simplicity in construction and interpretability. However, as data sizes grow, traditional methods for constructing and retraining decision trees become increasingly slow, scaling polynomially with the number of training examples. In this work, we introduce a novel quantum algorithm, named Des-q, for constructing and retraining decision trees in regression and binary classification tasks. Assuming the data stream produces small increments of new training examples, we demonstrate that our Des-q algorithm significantly reduces the time required for tree retraining, achieving a poly-logarithmic time complexity in the number of training examples, even accounting for the time needed to load the new examples into quantum-accessible memory.

Our approach involves building a decision tree algorithm to perform k-piecewise linear tree splits at each internal node. These splits simultaneously generate multiple hyperplanes, dividing the feature space into k distinct regions. To determine the k suitable anchor points for these splits, we develop an efficient quantum-supervised clustering method, building upon the q-means algorithm of Kerenidis et al. Des-q first efficiently estimates each feature weight using a novel quantum technique to estimate the Pearson correlation. Subsequently, we employ weighted distance estimation to cluster the training examples into k disjoint regions and then proceed to expand the tree using the same procedure.

We benchmark the performance of the simulated version of our algorithm against the state-of-the-art classical decision tree for regression and binary classification on multiple data sets with numerical features. Further, we showcase that the proposed algorithm exhibits similar performance to the state-of-the-art decision tree while significantly speeding up the periodic tree retraining.

Comment: 48 pages, 4 figures, 4 tables
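The feature-weighting step that Des-q performs with a quantum subroutine can be sketched classically: score each feature by the magnitude of its Pearson correlation with the label. The dataset below is made up purely for illustration.

```python
import math

# Classical sketch of Pearson-correlation feature weighting: each
# feature is scored by |corr(feature, label)|. Des-q estimates these
# correlations with a quantum technique; here we compute them directly.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Toy data: feature 0 tracks the label perfectly; feature 1 does not.
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 4.0], [4.0, 2.0]]
y = [1.0, 2.0, 3.0, 4.0]

weights = [abs(pearson([row[j] for row in X], y)) for j in range(2)]
# weights[0] is 1.0 (up to float error); weights[1] is much smaller.
```

These weights then feed the weighted distance estimation used to cluster examples into the k regions of each split.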
Deep Learning Models of Learning in the Brain
This thesis considers deep learning theories of brain function, and in particular biologically plausible deep learning. The idea is to treat a standard deep network as a high-level model of a neural circuit (e.g., the visual stream), adding biological constraints to some clearly artificial features. Two big questions arise. First, how can deep networks be trained in a biologically realistic manner? The standard approach, supervised training via backpropagation, needs overly complicated machinery for propagating errors, as well as precise labels (which are somewhat scarce in the real world). The first result in this thesis approaches the first problem, backpropagation, by avoiding it completely. A layer-wise objective is proposed, which results in local, Hebbian weight updates that use a global error signal. The second result approaches the need for precise labels. It is focused on a principled approach to self-supervised learning, framing the problem as dependence maximisation using kernel methods. Although this is a deep learning study, it is relevant to neuroscience: self-supervised learning appears to be a suitable learning paradigm for the brain, as it only requires binary (same source or not) teaching signals for pairs of inputs. Second, how realistic is the architecture itself? For instance, most well-performing networks have some form of weight sharing - having the same weights for different neurons at all times. Convolutional networks share filter weights among neurons, and transformers do so for matrix-matrix products. While the operation is biologically implausible, the third result of this thesis shows that it can be successfully approximated with a separate phase of weight-sharing-inducing Hebbian learning.
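The flavour of "local, Hebbian weight updates that use a global error signal" can be sketched with a three-factor rule, where each synapse combines its own pre- and postsynaptic activity with one broadcast scalar. This is an illustration of the idea only, not the thesis's exact layer-wise objective.

```python
import random

# Three-factor Hebbian update sketch: each weight changes by
# (broadcast error signal) x (presynaptic activity) x (postsynaptic
# activity). Every factor is local to the synapse except one globally
# broadcast scalar, so no backpropagated per-weight gradients are needed.

random.seed(0)
n_in, n_out, lr = 4, 3, 0.1

w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
pre = [1.0, 0.0, -1.0, 0.5]                        # presynaptic activity
post = [sum(w[i][j] * pre[j] for j in range(n_in)) for i in range(n_out)]
error = 0.8                                        # globally broadcast scalar

w_new = [[w[i][j] + lr * error * post[i] * pre[j] for j in range(n_in)]
         for i in range(n_out)]
```

Note that a synapse whose presynaptic input is silent (here `pre[1] == 0`) is left unchanged, which is exactly the locality property backpropagation lacks.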