Essays in Behavioral Economics and Game Theory
This thesis consists of three papers. Chapter 1 conducts experimental research on individual bounded rationality in games, Chapter 2 introduces a novel equilibrium solution concept in behavioral game theory, and Chapter 3 investigates confirmation bias within the framework of game theory.
In Chapter 1 (joint with Wei James Chen and Po-Hsuan Lin), we investigate individuals’ depth of strategic reasoning by matching human subjects with fully rational computer players in the lab, which isolates limited reasoning ability from beliefs about opponents and from social preferences. Our findings reveal that when matched with robots, subjects’ strategic thinking depths are more stable across games than when they are matched with humans.
In Chapter 2 (joint with Po-Hsuan Lin and Thomas R. Palfrey), we investigate how players’ misunderstanding of the relationship between opponents’ private information and strategies influences their equilibrium behavior in dynamic environments. This theoretical study introduces a framework that extends the analysis of cursed equilibrium from strategic-form games to multi-stage games and applies it to various applications in economics and political science.
In Chapter 3, I employ a game-theoretic framework to model how decision makers strategically interpret signals, particularly when they incur a utility loss from holding beliefs that differ from their partners’. The study reveals that the emergence of confirmation bias is positively associated with the strength of prior beliefs about a state, while the impact of signal accuracy remains ambiguous.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Investigations into Proof Structures
We introduce and elaborate a novel formalism for the manipulation and
analysis of proofs as objects in a global manner. In this first approach the
formalism is restricted to first-order problems characterized by condensed
detachment. It is applied in an exemplary manner to a coherent and
comprehensive formal reconstruction and analysis of historical proofs of a
widely-studied problem due to {\L}ukasiewicz. The underlying approach opens the
door towards new systematic ways of generating lemmas in the course of proof
search, with the effect of reducing the search effort and finding shorter proofs.
Among the numerous reported experiments along this line, a proof of
{\L}ukasiewicz's problem was automatically discovered that is much shorter than
any proof found before by man or machine.
Comment: This article is a continuation of arXiv:2104.1364
Model Checking Strategies from Synthesis Over Finite Traces
The innovations in reactive synthesis from {\em Linear Temporal Logics over
finite traces} (LTLf) will be amplified by the ability to verify the
correctness of the strategies generated by LTLf synthesis tools. This motivates
our work on {\em LTLf model checking}. LTLf model checking, however, is not
straightforward. The strategies generated by LTLf synthesis may be represented
using {\em terminating} transducers or {\em non-terminating} transducers where
executions are of finite-but-unbounded length or infinite length, respectively.
For synthesis, there is no evidence that one type of transducer is better than
the other, since both exhibit the same complexity and admit similar
algorithms.
In this work, we show that for model checking, the two types of transducers
are fundamentally different. Our central result is that LTLf model checking of
non-terminating transducers is \emph{exponentially harder} than that of
terminating transducers. We show that the problems are EXPSPACE-complete and
PSPACE-complete, respectively. Hence, considering the feasibility of
verification, LTLf synthesis tools should synthesize terminating transducers.
This is, to the best of our knowledge, the \emph{first} evidence for preferring
one type of transducer over the other in LTLf synthesis.
Comment: Accepted by ATVA 2
On the Computation of Multi-Scalar Multiplication for Pairing-Based zkSNARKs
Multi-scalar multiplication refers to the operation of computing multiple scalar multiplications in an elliptic curve group and then adding them together. It is an essential operation for proof generation and verification in pairing-based trusted setup zero-knowledge succinct non-interactive argument of knowledge (zkSNARK) schemes, which enable privacy-preserving features in many blockchain applications. Pairing-based trusted setup zkSNARKs usually follow a common paradigm: a public string composed of a list of fixed points in an elliptic curve group, called the common reference string, is generated in a trusted setup and is accessible to all parties involved. The prover generates a zkSNARK proof by computing multi-scalar multiplications over the points in the common reference string, along with other operations. The verifier verifies the proof by computing multi-scalar multiplications and elliptic curve bilinear pairings.
Multi-scalar multiplication in pairing-based trusted setup zkSNARKs has two characteristics. First, all the points are fixed once the common reference string is generated. Second, the number of points n is typically large, with this thesis targeting n = 2^e (10 ≤ e ≤ 21). Our goal in this thesis is to propose and implement efficient algorithms for computing multi-scalar multiplication in order to enable efficient zkSNARKs.
This thesis comprises three parts. First, the background knowledge is introduced and the classical multi-scalar multiplication algorithms are reviewed. Second, two frameworks for computing multi-scalar multiplications over fixed points, together with five corresponding auxiliary set pairs, are proposed. Finally, the theoretical analysis, software implementation, and experimental tests of representative instantiations of the proposed frameworks are presented.
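As context for the classical algorithms the abstract mentions, the standard windowed bucket method (often attributed to Pippenger) can be sketched generically, since it uses only group additions. The sketch below is not the thesis's proposed frameworks: it uses plain integers as stand-in "points" (so k * P is ordinary multiplication), and the window width c and scalar bit-length are illustrative choices.

```python
# A generic sketch of the classical windowed "bucket" method for
# multi-scalar multiplication: compute sum(k_i * P_i) using only group
# additions. Here the "points" are plain integers purely for illustration;
# the same code shape applies to elliptic curve points with point addition.

def msm_bucket(scalars, points, c=4, bits=32):
    """Windowed bucket method; c is the window width in bits."""
    n_windows = (bits + c - 1) // c
    total = 0                              # group identity
    for w in reversed(range(n_windows)):   # high window -> low window
        if w != n_windows - 1:
            for _ in range(c):             # shift the accumulator up c bits
                total = total + total      # (c doublings in the group)
        buckets = [0] * (1 << c)           # buckets[j] collects points whose
        for k, p in zip(scalars, points):  # current c-bit digit equals j
            digit = (k >> (w * c)) & ((1 << c) - 1)
        # wait for all points before combining
            if digit:
                buckets[digit] = buckets[digit] + p
        # Combine buckets: sum of j * buckets[j] via a running-sum trick,
        # costing about 2 * 2^c additions instead of scalar multiplications.
        running, window_sum = 0, 0
        for j in range((1 << c) - 1, 0, -1):
            running = running + buckets[j]
            window_sum = window_sum + running
        total = total + window_sum
    return total
```

Partitioning b-bit scalars into b/c windows of c bits trades the per-scalar doublings of naive double-and-add for roughly b/c rounds of n bucket additions plus about 2^(c+1) combining additions per round, which is why large n favors a larger window width c.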
Foundations of Node Representation Learning
Low-dimensional node representations, also called node embeddings, are a cornerstone in the modeling and analysis of complex networks. In recent years, advances in deep learning have spurred the development of novel neural network-inspired methods for learning node representations, which have largely surpassed classical 'spectral' embeddings in performance. Yet little work asks the central questions of this thesis: Why do these novel deep methods outperform their classical predecessors, and what are their limitations?
We pursue several paths to answering these questions. To further our understanding of deep embedding methods, we explore their relationship with the better-understood spectral methods and show that some popular deep methods are equivalent to spectral methods in a certain natural limit. We also introduce the problem of inverting node embeddings in order to probe what information they contain. Further, we propose a simple, non-deep method for node representation learning and find that it is often competitive with modern deep graph networks in downstream performance.
To better understand the limitations of node embeddings, we prove upper and lower bounds on their capabilities. Most notably, we prove that node embeddings are capable of exact low-dimensional representation of networks with bounded maximum degree or arboricity, and we further show that a simple algorithm can find such exact embeddings for real-world networks. By contrast, we also prove inherent limits on the ability of random graph models, including those derived from node embeddings, to capture key structural properties of networks without simply memorizing a given graph.
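The classical spectral baseline the abstract contrasts against can be sketched in a few lines: use the eigenvectors of the graph Laplacian for the smallest nonzero eigenvalues as node coordinates. The toy graph, the embedding dimension, and the choice of the unnormalized Laplacian below are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def spectral_embedding(adj, dim=2):
    """Classical spectral embedding: rows of the eigenvectors for the `dim`
    smallest nonzero eigenvalues of the (unnormalized) graph Laplacian."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    # Column 0 is the constant eigenvector (eigenvalue 0 on a connected graph).
    return eigvecs[:, 1:dim + 1]

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

X = spectral_embedding(A, dim=2)
# The first coordinate (the Fiedler vector) separates the two triangles by sign.
```

The first embedding coordinate is the Fiedler vector, whose sign pattern recovers the natural two-community split of the toy graph, which is the sense in which spectral embeddings capture coarse network structure.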
Trocq: Proof Transfer for Free, With or Without Univalence
Libraries of formalized mathematics use a possibly broad range of different
representations for the same mathematical concept. Yet manual input from users,
ranging from light to substantial, most often remains required to obtain the
corresponding variants of theorems, even though such replacements are typically
left implicit on paper. This article presents Trocq, a new proof transfer framework
for dependent type theory. Trocq is based on a novel formulation of type
equivalence, used to generalize the univalent parametricity translation. This
framework takes care of avoiding dependency on the axiom of univalence when
possible, and may be used with more relations than just equivalences. We have
implemented a corresponding plugin for the Coq proof assistant, in the CoqElpi
meta-language. We use this plugin on a gallery of representative examples of
proof transfer issues in interactive theorem proving, and illustrate how Trocq
covers the spectrum of several existing tools, used in program verification as
well as in formalized mathematics in the broad sense.
The Distributed Complexity of Locally Checkable Labeling Problems Beyond Paths and Trees
We consider locally checkable labeling (LCL) problems in the LOCAL model of
distributed computing. Since 2016, there has been a substantial body of work
examining the possible complexities of LCL problems. For example, it has been
established that there are no LCL problems exhibiting deterministic
complexities falling between ω(log* n) and o(log n). This line of
inquiry has yielded a wealth of algorithmic techniques and insights that are
useful for algorithm designers.
While the complexity landscape of LCL problems on general graphs, trees, and
paths is now well understood, graph classes beyond these three cases remain
largely unexplored. Indeed, recent research trends have shifted towards a
fine-grained study of special instances within the domains of paths and trees.
In this paper, we generalize the line of research on characterizing the
complexity landscape of LCL problems to a much broader range of graph classes.
We propose a conjecture that characterizes the complexity landscape of LCL
problems for an arbitrary class of graphs that is closed under minors, and we
prove a part of the conjecture.
Some highlights of our findings are as follows.
1. We establish a simple characterization of the minor-closed graph classes
sharing the same deterministic complexity landscape as paths, where O(1),
Θ(log* n), and Θ(n) are the only possible complexity classes.
2. It is natural to conjecture that any minor-closed graph class shares the
same complexity landscape as trees if and only if the graph class has bounded
treewidth and unbounded pathwidth. We prove the "only if" part of the
conjecture.
3. In addition to the well-known complexity landscapes for paths, trees, and
general graphs, there are infinitely many different complexity landscapes among
minor-closed graph classes.
Cognitive Hierarchies in Multi-Stage Games of Incomplete Information: Theory and Experiment
Sequential equilibrium is the conventional approach for analyzing multi-stage
games of incomplete information. It relies on mutual consistency of beliefs. To
relax mutual consistency, I theoretically and experimentally explore the
dynamic cognitive hierarchy (DCH) solution. One property of DCH is that the
solution can vary between two different games sharing the same reduced normal
form, i.e., it violates invariance under strategic equivalence. I test this
prediction in a laboratory experiment using two strategically equivalent
versions of the dirty-faces game. The game parameters are calibrated to
maximize the expected difference in behavior between the two versions, as
predicted by DCH. The experimental results indicate significant differences in
behavior between the two versions, and more importantly, the observed
differences align with DCH. This suggests that implementing a dynamic game
experiment in reduced normal form (using the "strategy method") could lead to
distortions in behavior.
Comment: 48 pages for the main text, 52 pages for the appendix
The Geometric Median and Applications to Robust Mean Estimation
This paper is devoted to the statistical and numerical properties of the
geometric median, and its applications to the problem of robust mean estimation
via the median of means principle. Our main theoretical results include (a) an
upper bound for the distance between the mean and the median for general
absolutely continuous distributions in R^d, and examples of specific classes of
distributions for which these bounds do not depend on the ambient dimension;
(b) exponential deviation inequalities for the distance between the sample
and the population versions of the geometric median, which again depend only on
the trace-type quantities and not on the ambient dimension. As a corollary, we
deduce improved bounds for the (geometric) median of means estimator that hold
for large classes of heavy-tailed distributions. Finally, we address the error
of numerical approximation, which is an important practical aspect of any
statistical estimation procedure. We demonstrate that the objective function
minimized by the geometric median satisfies a "local quadratic growth"
condition that allows one to translate suboptimality bounds for the objective
function to the corresponding bounds for the numerical approximation to the
median itself. As a corollary, we propose a simple stopping rule (applicable to
any optimization method) which yields explicit error guarantees. We conclude
with the numerical experiments including the application to estimation of mean
values of log-returns for S&P 500 data.
Comment: 28 pages, 2 figures
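The two ingredients discussed, the geometric median and the median-of-means principle, can be sketched with Weiszfeld's classical fixed-point iteration. This is a generic sketch, not the paper's method: the distance-based stopping rule below is a simple placeholder for the paper's objective-based rule, and the block count is an arbitrary illustrative choice.

```python
import numpy as np

def geometric_median(X, tol=1e-8, max_iter=500):
    """Weiszfeld's fixed-point iteration for argmin_m sum_i ||m - x_i||:
    repeatedly average the points with weights 1 / distance."""
    m = X.mean(axis=0)
    for _ in range(max_iter):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)  # avoid /0
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:  # placeholder stopping rule
            break
        m = m_new
    return m_new

def median_of_means(X, n_blocks=10):
    """Median-of-means: average within blocks, then take the geometric
    median of the block means."""
    means = np.stack([b.mean(axis=0) for b in np.array_split(X, n_blocks)])
    return geometric_median(means)
```

A single extreme outlier drags the sample mean arbitrarily far, but since it can corrupt only one block mean, the geometric median of the block means stays near the bulk of the data, which is the robustness the paper quantifies.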