UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri–St. Louis.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Nonlocal games and their device-independent quantum applications
Device-independence is a property of certain protocols that allows one to ensure their proper execution given only classical interaction with devices and assuming the correctness of the laws of physics. This scenario describes the most general form of cryptographic security, in which no trust is placed in the hardware involved; indeed, one may even take it to have been prepared by an adversary.
Many quantum tasks have been shown to admit device-independent protocols by augmentation with "nonlocal games". These are games in which noncommunicating parties jointly attempt to fulfil some conditions imposed by a referee. We introduce examples of such games and examine the optimal strategies of players who are allowed access to different possible shared resources, such as entangled quantum states. We then study their role in self-testing, private random number generation, and secure delegated quantum computation. Hardware imperfections are naturally incorporated in the device-independent scenario as adversarial, and we thus also perform noise robustness analysis where feasible.
We first study a generalization of the Mermin–Peres magic square game to arbitrary rectangular dimensions. After exhibiting some general properties, these "magic rectangle" games are fully characterized in terms of their optimal win probabilities for quantum strategies. We find that for m×n magic rectangle games with dimensions m,n ≥ 3, there are quantum strategies that win with certainty, while for dimensions 1×n quantum strategies do not outperform classical strategies. The final case of dimensions 2×n is richer, and we give upper and lower bounds that both outperform the classical strategies. As an initial usage scenario, we apply our findings to quantum certified randomness expansion to find noise tolerances and rates for all magic rectangle games. To do this, we use our previous results to obtain the winning probabilities of games with a distinguished input for which the devices give a deterministic outcome, and follow the analysis of C. A. Miller and Y. Shi [SIAM J. Comput. 46, 1304 (2017)].
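For orientation, the classical bound that these games generalize can be checked directly: no deterministic strategy wins the 3×3 magic square game with probability above 8/9. A brute-force sketch (an independent verification, not the paper's proof technique):

```python
from itertools import product

# Alice answers a row with entries in {+1, -1} of even parity (product +1);
# Bob answers a column with odd parity (product -1), as the rules require.
alice_rows = [t for t in product([1, -1], repeat=3) if t[0] * t[1] * t[2] == 1]
bob_cols   = [t for t in product([1, -1], repeat=3) if t[0] * t[1] * t[2] == -1]

best = 0.0
# A deterministic strategy fixes one answer per question for each player.
for a in product(alice_rows, repeat=3):        # Alice: one filling per row
    for b in product(bob_cols, repeat=3):      # Bob: one filling per column
        # They win question (r, c) iff they agree on the intersection cell.
        wins = sum(a[r][c] == b[c][r] for r in range(3) for c in range(3))
        best = max(best, wins / 9)
print(best)  # 8/9: no classical strategy wins with certainty
```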
Self-testing is a method to verify that one has a particular quantum state from purely classical statistics. For practical applications, such as device-independent delegated verifiable quantum computation, it is crucial that one self-tests multiple Bell states in parallel while keeping the quantum capabilities required of one side to a minimum. We use our 3×n magic rectangle games to obtain a self-test for n Bell states where one side needs only to measure single-qubit Pauli observables. The protocol requires small input sizes [constant for Alice and O(log n) bits for Bob] and is robust with robustness O(n^{5/2}√ε), where ε is the closeness of the ideal (perfect) correlations to those observed. To achieve the desired self-test, we introduce a one-side-local quantum strategy for the magic square game that wins with certainty, we generalize this strategy to the family of 3×n magic rectangle games, and we supplement these nonlocal games with extra check rounds (of single and pairs of observables).
Finally, we introduce a device-independent two-prover scheme in which a classical verifier can use a simple untrusted quantum measurement device (the client device) to securely delegate a quantum computation to an untrusted quantum server. To do this, we construct a parallel self-testing protocol to perform device-independent remote state preparation of n qubits and compose this with the unconditionally secure universal verifiable blind quantum computation (VBQC) scheme of J. F. Fitzsimons and E. Kashefi [Phys. Rev. A 96, 012303 (2017)]. Our self-test achieves a multitude of desirable properties for the application we consider, giving rise to practical and fully device-independent VBQC. It certifies parallel measurements of all cardinal and intercardinal directions in the XY-plane as well as the computational basis, uses few input questions (of size logarithmic in n for the client and a constant number communicated to the server), and requires only single-qubit measurements to be performed by the client device.
Foundations of Node Representation Learning
Low-dimensional node representations, also called node embeddings, are a cornerstone in the modeling and analysis of complex networks. In recent years, advances in deep learning have spurred the development of novel neural-network-inspired methods for learning node representations, which have largely surpassed classical 'spectral' embeddings in performance. Yet little work asks the central questions of this thesis: Why do these novel deep methods outperform their classical predecessors, and what are their limitations?
We pursue several paths to answering these questions. To further our understanding of deep embedding methods, we explore their relationship with spectral methods, which are better understood, and show that some popular deep methods are equivalent to spectral methods in a certain natural limit. We also introduce the problem of inverting node embeddings in order to probe what information they contain. Further, we propose a simple, non-deep method for node representation learning, and find it to often be competitive with modern deep graph networks in downstream performance.
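As background, the classical 'spectral' baselines mentioned above take eigenvectors of a graph matrix as node coordinates. A minimal Laplacian-eigenmaps-style sketch, assuming the unnormalized Laplacian variant (the thesis may analyze a different one):

```python
import numpy as np

def spectral_embedding(adj, dim):
    """Laplacian eigenmaps: embed nodes using the eigenvectors of the
    unnormalized Laplacian L = D - A for the `dim` smallest nonzero
    eigenvalues."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    vals, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]          # skip the constant 0-eigenvector

# Two triangles joined by one edge: the embedding separates the triangles.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(spectral_embedding(A, 2).round(2))
```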
To better understand the limitations of node embeddings, we prove some upper and lower bounds on their capabilities. Most notably, we prove that node embeddings are capable of exact low-dimensional representation of networks with bounded max degree or arboricity, and we further show that a simple algorithm can find such exact embeddings for real-world networks. By contrast, we also prove inherent limits on the ability of random graph models, including those derived from node embeddings, to capture key structural properties of networks without simply memorizing a given graph.
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state of the art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology.
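The special-purpose optimizers surveyed are only named here, but the common principle behind analogue Ising-type photonic optimizers can be illustrated numerically: relax binary spins to continuous amplitudes and let pumped, coupled dynamics settle into a low-energy configuration. A toy simulation sketch (the dynamics, coupling matrix, and pump schedule are illustrative assumptions, not taken from the review):

```python
import numpy as np

def ising_energy(J, s):
    """Ising energy E = -1/2 * s^T J s for spins s in {-1, +1}."""
    return -0.5 * s @ J @ s

def simulated_cim(J, steps=2000, dt=0.01, seed=0):
    """Toy mean-field dynamics in the style of a coherent Ising machine.

    Each spin is relaxed to a continuous amplitude a_i; a pump term is
    ramped up while the coupling term pulls amplitudes toward a
    low-energy spin configuration, and the cubic term caps their growth.
    """
    rng = rng_state = np.random.default_rng(seed)
    n = J.shape[0]
    a = 0.01 * rng.standard_normal(n)     # small random initial amplitudes
    for t in range(steps):
        pump = -1.0 + 2.0 * t / steps     # ramp pump from below to above threshold
        a += dt * ((pump - a ** 2) * a + J @ a)
    return np.sign(a)                     # read out binary spins

# Illustrative 4-spin antiferromagnetic ring (assumed problem instance).
J = np.zeros((4, 4))
for i in range(4):
    J[i, (i + 1) % 4] = J[(i + 1) % 4, i] = -1.0
s = simulated_cim(J)
print(s, ising_energy(J, s))  # alternating spins, minimal energy -4.0
```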
Discovering structure without labels
The scarcity of labels combined with an abundance of data makes unsupervised learning more attractive than ever. Without annotations, inductive biases must guide the identification of the most salient structure in the data. This thesis contributes to two aspects of unsupervised learning: clustering and dimensionality reduction.
The thesis falls into two parts. In the first part, we introduce Mod Shift, a clustering method for point data that uses a distance-based notion of attraction and repulsion to determine the number of clusters and the assignment of points to clusters. It iteratively moves points towards crisp clusters like Mean Shift but also has close ties to the Multicut problem via its loss function. As a result, it connects signed graph partitioning to clustering in Euclidean space.
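Mod Shift itself is introduced in the thesis, so it is not reproduced here; the Mean Shift iteration it is compared against is standard, though: each point repeatedly moves to the kernel-weighted average of the data until the iterates pile up at cluster modes. A minimal sketch with an assumed Gaussian kernel:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iterations=50):
    """Standard Mean Shift: move each point to the kernel-weighted mean
    of all data points until the iterates collapse onto cluster modes."""
    x = points.copy()
    for _ in range(iterations):
        # pairwise squared distances between current iterates and the data
        d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
        x = (w @ points) / w.sum(axis=1, keepdims=True)
    return x  # points near the same mode share (almost) the same coordinates

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
modes = mean_shift(data, bandwidth=0.5)
print(np.unique(modes.round(1), axis=0))  # roughly two distinct modes
```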
The second part treats dimensionality reduction and, in particular, the prominent neighbor embedding methods UMAP and t-SNE. We analyze the details of UMAP's implementation and find its actual loss function. It differs drastically from the one usually stated. This discrepancy allows us to explain some typical artifacts in UMAP plots, such as the dataset size-dependent tendency to produce overly crisp substructures. Contrary to existing belief, we find that UMAP's high-dimensional similarities are not critical to its success.
Based on UMAP's actual loss, we describe its precise connection to the other state-of-the-art visualization method, t-SNE. The key insight is a new, exact relation between two contrastive loss functions: negative sampling, employed by UMAP, and noise-contrastive estimation, which has been used to approximate t-SNE. As a result, we explain that UMAP embeddings appear more compact than t-SNE plots due to increased attraction between neighbors. Varying the attraction strength further, we obtain a spectrum of neighbor embedding methods, encompassing both UMAP- and t-SNE-like versions as special cases. Moving from more attraction to more repulsion shifts the focus of the embedding from continuous, global structure to more discrete and local structure of the data. Finally, we emphasize the link between contrastive neighbor embeddings and self-supervised contrastive learning. We show that different flavors of contrastive losses can work for both of them with few noise samples.
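The negative-sampling objective referred to above can be stated compactly. The sketch below is one illustrative variant for a 2-D embedding, with a Cauchy-style similarity and uniformly sampled negatives as simplifying assumptions; it is not the exact UMAP loss derived in the thesis:

```python
import numpy as np

def neg_sampling_loss(emb, edges, n_vertices, m_neg=5, rng=None):
    """Illustrative negative-sampling loss on a node embedding.

    Attraction: -log q(d) over observed neighbor pairs (edges).
    Repulsion:  -log(1 - q(d)) over randomly sampled non-pairs,
    with q(d) = 1 / (1 + d^2), a Cauchy-like low-dimensional similarity.
    """
    rng = rng or np.random.default_rng(0)
    q = lambda i, j: 1.0 / (1.0 + ((emb[i] - emb[j]) ** 2).sum(-1))
    i, j = edges[:, 0], edges[:, 1]
    attract = -np.log(q(i, j)).sum()
    k = rng.integers(n_vertices, size=(len(edges), m_neg))  # random negatives
    repel = -np.log(1.0 - q(i[:, None], k) + 1e-9).sum()
    return attract + repel

emb = np.random.default_rng(1).normal(size=(10, 2))
edges = np.array([[0, 1], [2, 3], [4, 5]])
print(neg_sampling_loss(emb, edges, n_vertices=10))
```

Increasing the weight of the attractive term relative to the repulsive one moves the embedding along the UMAP-to-t-SNE spectrum described above.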
Automorphisms of rank-one generated hyperbolicity cones and their derivative relaxations
A hyperbolicity cone is said to be rank-one generated (ROG) if all its extreme rays have rank one, where the rank is computed with respect to the underlying hyperbolic polynomial. This is a natural class of hyperbolicity cones which are strictly more general than the ROG spectrahedral cones. In this work, we present a study of the automorphisms of ROG hyperbolicity cones and their derivative relaxations. One of our main results states that the automorphisms of the derivative relaxations are exactly the automorphisms of the original cone fixing a certain direction. As an application, we completely determine the automorphisms of the derivative relaxations of the nonnegative orthant and of the cone of positive semidefinite matrices. More generally, we also prove relations between the automorphisms of a spectral cone and the underlying permutation-invariant set, which might be of independent interest.
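For readers outside the area, the objects involved have short standard definitions (stated here from the general literature on hyperbolic polynomials, not quoted from the paper):

```latex
% A homogeneous polynomial p is hyperbolic with respect to a direction e
% if p(e) != 0 and, for every x, the univariate polynomial t -> p(x - t e)
% has only real roots; these roots are the "eigenvalues" of x.
% The (closed) hyperbolicity cone is
\[
  \Lambda_+(p, e) \;=\; \{\, x \;:\; \text{all eigenvalues of } x \text{ are } \geq 0 \,\},
\]
% the rank of x is its number of nonzero eigenvalues, and the derivative
% relaxation is the (larger) hyperbolicity cone of the directional derivative:
\[
  D_e p(x) \;=\; \left.\tfrac{\mathrm{d}}{\mathrm{d}t}\, p(x + t e)\right|_{t = 0},
  \qquad \Lambda_+(p, e) \subseteq \Lambda_+(D_e p, e).
\]
```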
Parallel and Flow-Based High Quality Hypergraph Partitioning
Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits.
Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks with bounded size, while minimizing an objective function on the hyperedges that span multiple blocks.
In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge.
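Concretely, a hyperedge touching λ(e) blocks contributes λ(e) − 1 to the connectivity objective, so uncut hyperedges cost nothing. A minimal evaluation sketch, with hyperedges as vertex lists and the partition as a block-id array (a data layout assumed for illustration, not Mt-KaHyPar's internal representation):

```python
def connectivity_metric(hyperedges, block_of):
    """Sum over hyperedges of (lambda(e) - 1), where lambda(e) is the
    number of distinct blocks containing a pin of hyperedge e."""
    total = 0
    for e in hyperedges:
        blocks = {block_of[v] for v in e}  # distinct blocks spanned by e
        total += len(blocks) - 1
    return total

# Tiny example: 6 vertices in k = 2 blocks, three hyperedges.
block_of = [0, 0, 0, 1, 1, 1]
hyperedges = [[0, 1, 2], [2, 3], [3, 4, 5]]
print(connectivity_metric(hyperedges, block_of))  # 1 (only [2, 3] is cut)
```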
The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases.
In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs.
Once sufficiently small, an initial partition is computed.
Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level.
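In outline, the three phases compose into the following recursive skeleton (function names and the size threshold are placeholders for illustration, not the framework's actual API):

```python
def multilevel_partition(hg, k, coarsen, initial_partition, refine,
                         small=160):
    """Skeleton of multilevel partitioning: coarsen until small, solve the
    small instance, then project and refine back up the hierarchy."""
    if hg.num_vertices() <= small * k:
        return initial_partition(hg, k)
    coarse_hg, mapping = coarsen(hg)          # contract vertex clusters
    coarse_part = multilevel_partition(coarse_hg, k, coarsen,
                                       initial_partition, refine, small)
    # project the coarse partition onto the finer hypergraph ...
    part = [coarse_part[mapping[v]] for v in range(hg.num_vertices())]
    return refine(hg, part, k)                # ... and locally improve it
```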
An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time.
The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem.
Existing algorithms either are slow and sequential but offer high solution quality, or are simple, fast, and easy to parallelize but offer low quality.
While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible.
We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, but employed in two different ways.
Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines.
In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof.
We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic, and one based on label propagation.
For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements.
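For the connectivity objective, such gains can be derived from per-block pin counts; a sequential sketch of the move gain follows (the bookkeeping structures are assumed for illustration, and the parallel gain-accuracy techniques mentioned above are beyond this snippet):

```python
def move_gain(v, target, incident_edges, block_of, pins_in_block):
    """Connectivity gain of moving vertex v to block `target`:
    +1 for each incident hyperedge that would vacate v's old block,
    -1 for each incident hyperedge not yet touching the target block."""
    gain = 0
    for e in incident_edges[v]:
        if pins_in_block[e][block_of[v]] == 1:    # v is the last pin there
            gain += 1
        if pins_in_block[e].get(target, 0) == 0:  # e would gain a new block
            gain -= 1
    return gain
```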
For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on-the-fly.
Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, as well as recursive partitioning with work-stealing, we obtain our first parallel multilevel framework.
It is the fastest partitioner known and achieves medium-high quality: it beats all parallel partitioners and comes close to the highest-quality sequential partitioner.
Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level.
This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential.
We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening, and later fully asynchronous uncoarsening.
In addition, we adapt our refinement algorithms, and also use the preprocessing and portfolio.
This scheme is highly scalable, and achieves the same quality as the highest quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to fine-grained uncoarsening.
The last ingredient for high quality is an iterative improvement algorithm based on maximum flows.
In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts.
Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel.
Beyond striving for the highest quality, we present a deterministically parallel partitioning framework.
We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement.
Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small.
All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets.
To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar.
While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain.
With the multilevel approach, even the inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense.
Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory and as local building blocks in the distributed algorithm.
Geometric optimization problems in quantum computation and discrete mathematics: Stabilizer states and lattices
This thesis consists of two parts:
Part I deals with properties of stabilizer states and their convex hull, the stabilizer polytope. Stabilizer states, Pauli measurements and Clifford unitaries are the three building blocks of the stabilizer formalism, whose computational power is limited by the Gottesman–Knill theorem. This model is usually enriched by a magic state to get a universal model for quantum computation, referred to as quantum computation with magic states (QCM). The first part of this thesis will investigate the role of stabilizer states within QCM from three different angles.
The first considered quantity is the stabilizer extent, which provides a tool to measure the non-stabilizerness or magic of a quantum state. It assigns a quantity to each state, roughly measuring how many stabilizer states are required to approximate the state. It has been shown that the extent is multiplicative under taking tensor products when the considered state is a product state whose components are composed of at most three qubits. In Chapter 2, we will prove that this property does not hold in general; more precisely, the stabilizer extent is strictly submultiplicative. We obtain this result as a consequence of rather general properties of stabilizer states. Informally, our result implies that one should not expect the extent associated with a dictionary to be multiplicative under taking tensor products whenever the dictionary size grows subexponentially in the dimension.
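For reference, the standard definition of the stabilizer extent from the magic-monotone literature (the thesis presumably works with an equivalent formulation) is:

```latex
% For an n-qubit pure state |psi>, minimize over decompositions into
% the dictionary S_n of pure stabilizer states:
\[
  \xi(\psi) \;=\; \min \Big\{ \big(\textstyle\sum_i |c_i|\big)^2 \;:\;
  |\psi\rangle = \sum_i c_i |s_i\rangle,\; |s_i\rangle \in \mathcal{S}_n \Big\}.
\]
% Multiplicativity would mean xi(psi (x) phi) = xi(psi) * xi(phi);
% submultiplicativity (<=) always holds, and Chapter 2 exhibits states
% for which the inequality is strict.
```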
In Chapter 3, we consider QCM from a resource-theoretic perspective. The resource theory of magic is based on two types of quantum channels: completely stabilizer-preserving maps and stabilizer operations. Both classes have the property that they cannot generate additional magic resources. We will show that these two classes of quantum channels do not coincide; specifically, stabilizer operations are a strict subset of the set of completely stabilizer-preserving channels. This might have the consequence that certain tasks which are usually realized by stabilizer operations could in principle be performed better by completely stabilizer-preserving maps.
In Chapter 4, the last one of Part I, we consider QCM via the polar dual stabilizer polytope (also called the Λ-polytope). This polytope is a superset of the quantum state space, and every quantum state can be written as a convex combination of its vertices. A way to classically simulate quantum computing with magic states is based on simulating Pauli measurements and Clifford unitaries on the vertices of the Λ-polytope. The complexity of classical simulation with respect to the polytope Λ is determined by classically simulating the updates of vertices under Clifford unitaries and Pauli measurements. However, a complete description of this polytope as a convex hull of its vertices is only known in low dimensions (for up to two qubits, or one qudit when odd-dimensional systems are considered). We make progress on this question by characterizing a certain class of operators that live on the boundary of the Λ-polytope when the underlying dimension is an odd prime. This class encompasses, for instance, Wigner operators, which have been shown to be vertices of Λ. We conjecture that this class contains even more vertices of Λ. Eventually, we will briefly sketch why applying Clifford unitaries and Pauli measurements to this class of operators can be efficiently classically simulated.
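As background, the Λ-polytope is usually defined as the polar dual of the stabilizer polytope; the formulation below follows the general literature on this simulation method, not the thesis text:

```latex
% Hermitian operators of unit trace that assign nonnegative "probability"
% to every pure stabilizer state |s> of the given system:
\[
  \Lambda \;=\; \big\{\, X = X^\dagger \;:\; \mathrm{Tr}\,X = 1,\;
  \langle s | X | s \rangle \ge 0 \ \text{ for all stabilizer states } |s\rangle \,\big\}.
\]
% Every density matrix lies in Lambda, so each quantum state is a convex
% combination of Lambda's finitely many vertices.
```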
Part II of this thesis deals with lattices. Lattices are discrete subgroups of Euclidean space. They occur in various different areas of mathematics, physics and computer science. We will investigate two types of optimization problems related to lattices.
In Chapter 6 we are concerned with optimization within the space of lattices. That is, we want to compare the Gaussian potential energy of different lattices. To make the energy of lattices comparable, we focus on lattices with point density one. In particular, we focus on even unimodular lattices and show that, up to dimension 24, they are all critical for the Gaussian potential energy. Furthermore, we find that all n-dimensional even unimodular lattices with n ≤ 24 are local minima or saddle points. In contrast, in dimension 32 there are even unimodular lattices which are local maxima and others which are not even critical.
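The energy functional in question is, in a standard normalization from the literature on universal optimality (the thesis's convention may differ):

```latex
% Gaussian potential energy of a lattice L of point density one,
% at Gaussian width parameter alpha > 0:
\[
  E_\alpha(L) \;=\; \sum_{x \in L \setminus \{0\}} e^{-\pi \alpha \|x\|^2}.
\]
% Chapter 6 compares lattices by this energy and asks whether a given
% lattice is a critical point of E_alpha under density-preserving
% deformations.
```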
In Chapter 7 we consider flat tori R^n/L, where L is an n-dimensional lattice. A flat torus comes with a metric, and our goal is to approximate this metric with a Hilbert space metric. To achieve this, we derive an infinite-dimensional semidefinite optimization program that computes the least distortion embedding of the metric space R^n/L into a Hilbert space. This program allows us to make several interesting statements about the nature of least distortion embeddings of flat tori. In particular, we give a simple proof for a lower bound which gives a constant factor improvement over the previously best lower bound on the minimal distortion of an embedding of an n-dimensional flat torus. Furthermore, we show that there is always an optimal embedding into a finite-dimensional Hilbert space. Finally, we construct optimal least distortion embeddings for the standard torus R^n/Z^n and all 2-dimensional flat tori.
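For context, the distortion being minimized is the usual bi-Lipschitz distortion of metric embeddings (a standard definition, not quoted from the thesis):

```latex
% For an embedding f of the metric space (R^n/L, d) into a Hilbert space H,
% the distortion is the product of the Lipschitz constants of f and f^{-1}:
\[
  \mathrm{dist}(f) \;=\;
  \sup_{x \neq y} \frac{\|f(x) - f(y)\|_H}{d(x, y)}
  \;\cdot\;
  \sup_{x \neq y} \frac{d(x, y)}{\|f(x) - f(y)\|_H},
\]
% and the least distortion of R^n/L is the infimum of dist(f) over all
% embeddings f.
```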