LIPIcs, Volume 251, ITCS 2023, Complete Volume
Foundations of Node Representation Learning
Low-dimensional node representations, also called node embeddings, are a cornerstone in the modeling and analysis of complex networks. In recent years, advances in deep learning have spurred development of novel neural network-inspired methods for learning node representations which have largely surpassed classical 'spectral' embeddings in performance. Yet little work asks the central questions of this thesis: Why do these novel deep methods outperform their classical predecessors, and what are their limitations?
We pursue several paths to answering these questions. To further our understanding of deep embedding methods, we explore their relationship with spectral methods, which are better understood, and show that some popular deep methods are equivalent to spectral methods in a certain natural limit. We also introduce the problem of inverting node embeddings in order to probe what information they contain. Further, we propose a simple, non-deep method for node representation learning, and find it to often be competitive with modern deep graph networks in downstream performance.
To better understand the limitations of node embeddings, we prove upper and lower bounds on their capabilities. Most notably, we prove that node embeddings are capable of exact low-dimensional representation of networks with bounded maximum degree or arboricity, and we further show that a simple algorithm can find such exact embeddings for real-world networks. By contrast, we also prove inherent limits on the ability of random graph models, including those derived from node embeddings, to capture key structural properties of networks without simply memorizing a given graph.
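As a concrete illustration of the classical 'spectral' embeddings the thesis compares against, the sketch below computes a two-dimensional node embedding from eigenvectors of the symmetric normalized graph Laplacian. This is a minimal NumPy example on a made-up toy graph; none of the code or names come from the thesis itself.

```python
import numpy as np

# Hypothetical toy graph: a 6-node path (illustrative data only).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0

deg = A.sum(axis=1)
# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(6) - D_inv_sqrt @ A @ D_inv_sqrt

# A classical 'spectral' embedding: eigenvectors of L with the smallest
# nonzero eigenvalues serve as low-dimensional node coordinates.
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
embedding = eigvecs[:, 1:3]  # skip the trivial eigenvector, keep 2 dims

print(embedding.shape)  # (6, 2)
```

Deep embedding methods replace this fixed eigendecomposition with learned, nonlinear objectives; the thesis shows some of them coincide with spectral methods in a natural limit.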
Group-theoretic error mitigation enabled by classical shadows and symmetries
Estimating expectation values is a key subroutine in many quantum algorithms.
However, near-term implementations face two major challenges: a limited number
of samples to learn a large collection of observables, and the accumulation of
errors in devices without quantum error correction. To address these challenges
simultaneously, we develop a quantum error-mitigation strategy which unifies
the group-theoretic structure of classical-shadow tomography with symmetries in
quantum systems of interest. We refer to our protocol as "symmetry-adjusted
classical shadows," as it mitigates errors by adjusting estimators according to
how known symmetries are corrupted under those errors. As a concrete example,
we highlight global symmetry, which manifests in fermions as
particle number and in spins as total magnetization, and illustrate their
unification with respective classical-shadow protocols. One of our main results
establishes rigorous error and sampling bounds under readout errors obeying
minimal assumptions. Furthermore, to probe mitigation capabilities against a
more comprehensive class of gate-level errors, we perform numerical experiments
with a noise model derived from existing quantum processors. Our analytical and
numerical results reveal symmetry-adjusted classical shadows as a flexible and
low-cost strategy to mitigate errors from noisy quantum experiments in the
ubiquitous presence of symmetry.
45 pages, 13 figures. Open-source code available at
https://github.com/zhao-andrew/symmetry-adjusted-classical-shadow
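The building block of this abstract, a classical-shadow estimator of an expectation value, can be sketched for a single qubit under random Pauli-basis measurements. This toy example illustrates plain classical shadows only, not the symmetry-adjusted protocol; the state, sample count, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# True state: |0><0|, so <Z> = 1 exactly.
rho = np.array([[1, 0], [0, 0]], dtype=complex)

# Eigenbases of X, Y, Z (columns are eigenvectors).
bases = {
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
    "Z": I2,
}

def snapshot(rho, rng):
    """One classical-shadow snapshot under a random Pauli measurement."""
    U = bases[rng.choice(list(bases))]
    # Born-rule outcome probabilities in the chosen basis.
    probs = np.real([U[:, b].conj() @ rho @ U[:, b] for b in range(2)])
    b = rng.choice(2, p=probs / probs.sum())
    v = U[:, b][:, None]
    # Inverse of the single-qubit measurement channel: 3|v><v| - I
    return 3 * v @ v.conj().T - I2

# Averaging Tr(Z * snapshot) over many shots estimates <Z>.
n = 20000
est = np.mean([np.real(np.trace(Z @ snapshot(rho, rng))) for _ in range(n)])
```

The symmetry-adjusted protocol of the paper further rescales such estimators according to how known conserved quantities are corrupted by noise.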
LIPIcs, Volume 261, ICALP 2023, Complete Volume
An annotated graph model with differential degree heterogeneity for directed networks
Directed networks are conveniently represented as graphs in which ordered edges encode interactions between vertices. Despite their wide availability, there is a shortage of statistical models amenable to inference, especially when contextual information and degree heterogeneity are present. This paper presents an annotated graph model with parameters explicitly accounting for these features. To overcome the curse of dimensionality due to modelling degree heterogeneity, we introduce a sparsity assumption and propose a penalized likelihood approach with ℓ1-regularization for parameter estimation. We study the estimation and selection consistency of this approach under a sparse network assumption, and show that inference on the covariate parameter is straightforward, thus bypassing the need for the kind of debiasing commonly employed in ℓ1-penalized likelihood estimation. Simulation and data analysis corroborate our theoretical findings.
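The ℓ1-penalized estimation described above can be illustrated with a generic proximal-gradient (ISTA) solver for ℓ1-penalized least squares, a simplified stand-in for the paper's penalized likelihood. The data, dimensions, and penalty level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse-parameter setup: y = X @ beta_true + noise,
# with most entries of beta_true equal to zero.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: componentwise shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA for: minimize (1/2n)||y - X b||^2 + lam * ||b||_1
lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y) / n
    beta = soft_threshold(beta - step * grad, step * lam)
```

The ℓ1 penalty drives irrelevant coefficients exactly to zero, which is the selection behaviour whose consistency the paper analyses for its annotated graph model.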
Integrality and cutting planes in semidefinite programming approaches for combinatorial optimization
Many real-life decision problems are discrete in nature. To solve such problems as mathematical optimization problems, integrality constraints are commonly incorporated in the model to reflect the choice of finitely many alternatives. At the same time, it is known that semidefinite programming is very suitable for obtaining strong relaxations of combinatorial optimization problems. In this dissertation, we study the interplay between semidefinite programming and integrality, where a special focus is put on the use of cutting-plane methods. Although the notions of integrality and cutting planes are well-studied in linear programming, integer semidefinite programs (ISDPs) have only recently been considered. We show that many combinatorial optimization problems can be modeled as ISDPs. Several theoretical concepts, such as the Chvátal-Gomory closure, total dual integrality and integer Lagrangian duality, are studied for the case of integer semidefinite programming. On the practical side, we introduce an improved branch-and-cut approach for ISDPs and a cutting-plane augmented Lagrangian method for solving semidefinite programs with a large number of cutting planes. Throughout the thesis, we apply our results to a wide range of combinatorial optimization problems, among which the quadratic cycle cover problem, the quadratic traveling salesman problem and the graph partition problem. Our approaches lead to novel, strong and efficient solution strategies for these problems, with the potential to be extended to other problem classes.
Efficient Methods for Large-Scale Dynamic Optimization with Applications to Inventory Management Problems
In this thesis, we study large-scale dynamic optimization problems in the context of inventory management. We analyze inventory problems with constraints coupling the items or facility locations in the inventory systems, and we propose efficient solutions that are asymptotically optimal or empirically near-optimal.
In Chapter 1, we analyze multi-item, single-location inventory systems with storage capacity limits formulated as both unconditional expected value constraints and unconditional probability constraints. We first show that problems with unconditional expected value constraints only can be solved to optimality through Lagrangian relaxation. Then, under an assumption on the correlation structure of the demands that is valid in most practical settings, we show that the original problem can be sandwiched between two other problems with expected value constraints only. One of these problems yields a feasible solution to the original problem that is asymptotically optimal as the number of items grows.
In Chapter 2, we consider the same problem but with conditional probability constraints, which impose limits on the overflow frequency for every possible state in each period. We construct an efficient feasible solution in two steps. First, we solve an unconditional expected value constrained problem with reduced capacity. Second, in each period, given the state information, we solve a single-period convex optimization problem with a conditional expected value constraint. We further show that the heuristic is asymptotically optimal as the number of items I grows. In addition, we design another efficient method for moderate values of I, which performs well empirically in an extensive numerical study. Moreover, we extract key managerial insights from the numerical study which are critical to decision making in real business problems.
In Chapter 3, we analyze single-item, multi-location systems on inventory networks that can be described by directed acyclic graphs (DAGs). We propose an innovative reformulation of the problem so that Lagrangian relaxation can still be applied; instead of decomposing the problem by facility location, it aggregates the state information, leading to a tractable lower bound approximation for the problem. The Lagrange multiplier, which provides information on the value function from the lower bound dynamic program, is used in designing a feasible heuristic. An extensive numerical study suggests that both the lower bound approximation and the upper bound heuristic perform very well.
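The Lagrangian-relaxation idea running through these chapters can be illustrated on a static toy problem with a single coupling capacity constraint: dualize the constraint, solve each item's subproblem independently, and search for the multiplier that restores feasibility. The example is hypothetical and far simpler than the dynamic systems studied in the thesis.

```python
import numpy as np

# Toy problem: maximize sum_i a_i * sqrt(x_i)  s.t.  sum_i x_i <= C, x_i >= 0.
# Illustrative data only.
a = np.array([4.0, 1.0])
C = 5.0

def argmax_item(ai, lam):
    # Per-item subproblem: maximize ai*sqrt(x) - lam*x  =>  x* = (ai/(2*lam))^2
    return (ai / (2.0 * lam)) ** 2

# Bisection on the multiplier lam until total usage at the relaxed
# optimum matches the capacity C (usage is decreasing in lam).
lo, hi = 1e-6, 100.0
for _ in range(100):
    lam = 0.5 * (lo + hi)
    used = sum(argmax_item(ai, lam) for ai in a)
    if used > C:
        lo = lam
    else:
        hi = lam

x = np.array([argmax_item(ai, lam) for ai in a])
```

Dualizing the coupling constraint decomposes the joint problem into independent per-item subproblems, which is exactly the scalability lever exploited (in dynamic form) by the thesis.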
Global Optimization for Cardinality-constrained Minimum Sum-of-Squares Clustering via Semidefinite Programming
The minimum sum-of-squares clustering (MSSC), or k-means type clustering, has
been recently extended to exploit prior knowledge on the cardinality of each
cluster. Such knowledge is used to increase performance as well as solution
quality. In this paper, we propose a global optimization approach based on the
branch-and-cut technique to solve the cardinality-constrained MSSC. For the
lower bound routine, we use the semidefinite programming (SDP) relaxation
recently proposed by Rujeerapaiboon et al. [SIAM J. Optim. 29(2), 1211-1239,
(2019)]. However, this relaxation can be used in a branch-and-cut method only
for small-size instances. Therefore, we derive a new SDP relaxation that scales
better with the instance size and the number of clusters. In both cases, we
strengthen the bound by adding polyhedral cuts. Benefiting from a tailored
branching strategy which enforces pairwise constraints, we reduce the
complexity of the problems arising in the children nodes. For the upper bound,
instead, we present a local search procedure that exploits the solution of the
SDP relaxation solved at each node. Computational results show that the
proposed algorithm globally solves, for the first time, real-world instances of
size 10 times larger than those solved by state-of-the-art exact methods.
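For intuition, the cardinality-constrained MSSC objective can be stated and globally minimized by brute force on a tiny instance. The data are illustrative only; the paper replaces this enumeration with SDP-based lower bounds inside branch-and-cut to reach large instances.

```python
import itertools
import numpy as np

# Tiny cardinality-constrained MSSC instance (illustrative data).
points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
cards = (2, 2)  # required cluster sizes

def sse(assignment):
    """Sum of squared distances of each point to its cluster centroid."""
    total = 0.0
    for c in range(len(cards)):
        members = points[[i for i, a in enumerate(assignment) if a == c]]
        total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

# Global optimum by enumerating all assignments with the right cardinalities.
best = min(
    (a for a in itertools.product(range(2), repeat=len(points))
     if tuple(np.bincount(a, minlength=2)) == cards),
    key=sse,
)
```

Enumeration is exponential in the number of points, which is why exact methods need tight (here, SDP-based) lower bounds to prune the search tree.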
Gaussian resource theories and semidefinite programming hierarchies for quantum information
Determining which quantum tasks we can perform with currently available tools and devices is one of the most important goals of quantum information science today. To achieve this requires careful investigation of the capability of current quantum tools as well as development of classical protocols which can assist quantum tasks and amplify their abilities. In this thesis, we approach this problem through two different topics in quantum information theory: Gaussian resource theories and semidefinite programming hierarchies.
In the first part of this thesis, we examine the possibility of implementing quantum information processing tasks in the Gaussian platform through the eyes of quantum resource theories. Gaussian states and operations are primary tools for the study of continuous-variable quantum information processing due to their easy accessibility and concise mathematical descriptions, although it has been discovered that they are subject to a number of limitations for advanced quantum information processing tasks. We explore the capability of the Gaussian platform further in the first part of this thesis. Firstly, we investigate whether introducing convex structure to the Gaussian framework can circumvent the known no-go theorem of Gaussian resource distillation. Surprisingly, we find that resource distillation becomes possible — albeit in a limited fashion — when convexity is introduced. Then, we consider the quantum resource theory of Gaussian thermal operations when catalysts are allowed, and examine the abilities of catalytic Gaussian thermal operations by characterising all possible state transformations under them.
In the second part of this thesis, we address the problem of characterising quantum correlations via semidefinite programming hierarchies. In particular, we focus on characterising quantum correlations of fixed dimension, which is practically relevant to the field of semi-device-independent quantum information processing. Semidefinite programming is a special type of mathematical optimisation, and it is known that some important but difficult problems in quantum information theory admit semidefinite programming relaxations; these include the characterisation of general quantum correlations in the context of non-locality and the distinction of quantum separable states from entangled states. In this second part, we show how to construct a hierarchy of semidefinite programming relaxations for quantum correlations of fixed dimension and derive analytical bounds on the convergence speed of the hierarchy. For the proof, we make a connection to a variant of the quantum separability problem and employ multipartite quantum de Finetti theorems with linear constraints.
Robust Active and Passive Beamforming for RIS-Assisted Full-Duplex Systems under Imperfect CSI
The sixth-generation (6G) wireless technology recognizes the potential of
reconfigurable intelligent surfaces (RIS) as an effective technique for
intelligently manipulating channel paths through reflection to serve desired
users. Full-duplex (FD) systems, enabling simultaneous transmission and
reception from a base station (BS), offer the theoretical advantage of doubled
spectrum efficiency. However, the presence of strong self-interference (SI) in
FD systems significantly degrades performance, which can be mitigated by
leveraging the capabilities of RIS. Moreover, accurately obtaining channel
state information (CSI) from RIS poses a critical challenge. Our objective is
to maximize downlink (DL) user data rates while ensuring quality-of-service
(QoS) for uplink (UL) users under imperfect CSI from reflected channels. To
address this, we introduce the robust active BS and passive RIS beamforming
(RAPB) scheme for RIS-FD, accounting for both SI and imperfect CSI. RAPB
incorporates distributionally robust design, conditional value-at-risk (CVaR),
and penalty convex-concave programming (PCCP) techniques. Additionally, RAPB
extends to active and passive beamforming (APB) with perfect channel
estimation. Simulation results demonstrate the UL/DL rate improvements achieved
considering various levels of imperfect CSI. The proposed RAPB/APB schemes
validate their effectiveness across different RIS deployment and RIS/BS
configurations. Benefiting from robust beamforming, RAPB outperforms existing
baselines, including non-robust designs, deployment without RIS, conventional
successive convex approximation, and half-duplex systems.