Computational Complexity of Computing a Quasi-Proper Equilibrium
We study the computational complexity of computing or approximating a
quasi-proper equilibrium for a given finite extensive form game of perfect
recall. We show that the task of computing a symbolic quasi-proper equilibrium
is PPAD-complete for two-player games. For the case of zero-sum
games we obtain a polynomial time algorithm based on Linear Programming. For
general n-player games we show that computing an approximation of a
quasi-proper equilibrium is FIXP_a-complete.
Comment: Full version of paper to appear at the 23rd International Symposium
on Fundamentals of Computation Theory (FCT 2021)
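For intuition on the zero-sum case, here is a minimal sketch of the classical max-min linear program for a matrix game. Note this is the textbook LP for an equilibrium of a zero-sum game, not the perturbed/symbolic LPs the paper needs for a quasi-proper equilibrium; the function name and the matching-pennies example are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Max-min strategy for the row player of a zero-sum matrix game A
    (row player maximizes), via the standard LP formulation."""
    m, n = A.shape
    # Decision variables: x (row strategy, length m) and the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                       # maximize v  <=>  minimize -v
    # For every column j: v - sum_i A[i, j] * x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # v is unconstrained
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the unique equilibrium is uniform with value 0.
x, v = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

Quasi-properness is a trembling-hand-style refinement, so in practice the LP above would be solved under vanishing perturbations of the strategy polytope rather than once.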
Distributed Signaling Games
A recurring theme in recent computer science literature is that proper design
of signaling schemes is a crucial aspect of effective mechanisms aiming to
optimize social welfare or revenue. One of the research endeavors of this line
of work is understanding the algorithmic and computational complexity of
designing efficient signaling schemes. In reality, however, information is
typically not held by a central authority, but is distributed among multiple
sources (third-party "mediators"), a fact that dramatically changes the
strategic and combinatorial nature of the signaling problem, making it a game
between information providers, as opposed to a traditional mechanism design
problem.
In this paper we introduce {\em distributed signaling games}, while using
display advertising as a canonical example for introducing this foundational
framework. A distributed signaling game may be a pure coordination game (i.e.,
a distributed optimization task), or a non-cooperative game. In the context of
pure coordination games, we show a wide gap between the computational
complexity of the centralized and distributed signaling problems. On the other
hand, we show that if the information structure of each mediator is assumed to
be "local", then there is an efficient algorithm that finds a near-optimal
distributed signaling scheme.
In the context of non-cooperative games, the outcome generated by the
mediators' signals may have a different value to each mediator (due to the
auctioneer's desire to align the incentives of the mediators with his own by
relative compensations). We design a mechanism for this problem via a novel
application of Shapley's value, and show that it possesses some interesting
properties; in particular, it always admits a pure Nash equilibrium, and it
never decreases the revenue of the auctioneer.
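Since the mechanism builds on Shapley's value, a minimal sketch of the classical Shapley value computation may help. This shows only the textbook permutation-averaging definition, not the paper's mechanism; the function names and the toy coalitional game are invented for illustration.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (exponential; fine for small n)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orderings = factorial(len(players))
    return {p: total / n_orderings for p, total in phi.items()}

# Toy game: a coalition has value 1 iff it contains both mediators a and b;
# mediator c contributes nothing, so its Shapley value is 0.
v = lambda S: 1.0 if {"a", "b"} <= S else 0.0
phi = shapley_values(["a", "b", "c"], v)
```

By symmetry, a and b each receive 1/2; the dummy player c receives 0, which is the kind of incentive-aligned cost sharing Shapley's value provides.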
High-performance model reduction techniques in computational multiscale homogenization
A novel model-order reduction technique for the solution of the fine-scale equilibrium problem appearing in computational homogenization is presented. The reduced set of empirical shape functions is obtained using a partitioned version of the Proper Orthogonal Decomposition (POD) that accounts for the elastic/inelastic character of the solution. On the other hand, it is shown that the standard approach of replacing the nonaffine term by an interpolant constructed using only POD modes leads to ill-posed formulations. We demonstrate that this ill-posedness can be avoided by enriching the approximation space with the span of the gradient of the empirical shape functions. Furthermore, interpolation points are chosen guided not only by accuracy requirements, but also by stability considerations.
The approach is assessed in the homogenization of a highly complex porous metal material. Computed results show that computational complexity is independent of the size and geometrical complexity of the Representative Volume Element. The speedup factor is over three orders of magnitude, as compared with finite element analysis, whereas the maximum error in stresses is less than 10%.
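As background for how empirical shape functions are typically extracted, here is a minimal sketch of plain (unpartitioned) POD via the SVD of a snapshot matrix with an energy-based truncation. The function name, energy tolerance, and synthetic separable snapshots are assumptions for illustration, not the paper's partitioned elastic/inelastic variant.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Empirical shape functions via POD: left singular vectors of the
    snapshot matrix, truncated to capture the requested fraction of the
    squared singular-value energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k]

# Synthetic snapshot matrix built from three separable space-time modes,
# so its numerical rank (and hence the POD basis size) is exactly three.
x = np.linspace(0.0, 1.0, 50)    # "spatial" coordinate
t = np.linspace(0.0, 1.0, 200)   # snapshot parameter
snapshots = sum((1.0 / k) * np.outer(np.sin(k * np.pi * x),
                                     np.cos(k * np.pi * t))
                for k in (1, 2, 3))
basis = pod_basis(snapshots)     # shape (50, 3)
```

In a homogenization setting the columns of `snapshots` would instead be fine-scale equilibrium solutions sampled over loading histories.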
Model Order Reduction for the 1D Boltzmann-BGK Equation: Identifying Intrinsic Variables Using Neural Networks
Kinetic equations are crucial for modeling non-equilibrium phenomena, but
their computational complexity is a challenge. This paper presents a
data-driven approach using reduced order models (ROM) to efficiently model
non-equilibrium flows in kinetic equations by comparing two ROM approaches:
Proper Orthogonal Decomposition (POD) and autoencoder neural networks (AE).
While AEs initially demonstrate higher accuracy, POD's precision improves as
more modes are considered. Notably, our work recognizes that the classical
POD-MOR approach, although capable of accurately representing the non-linear
solution manifold of the kinetic equation, may not provide a parsimonious model
of the data due to the inherently non-linear nature of the data manifold. We
demonstrate how AEs can be used to find the intrinsic dimension of a system
and to correlate the intrinsic quantities with macroscopic quantities that
have a physical interpretation.
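The point about POD lacking parsimony on nonlinear manifolds can be seen in a toy example: data on a circle has intrinsic dimension one (a single latent angle, which a nonlinear AE can learn), yet linear POD cannot be truncated below two modes without large error. A minimal sketch under these toy assumptions (not the paper's Boltzmann-BGK setup):

```python
import numpy as np

# Points on a circle: a one-parameter (intrinsically 1-D) data manifold
# embedded nonlinearly in R^2.
theta = np.linspace(0.0, 2.0 * np.pi, 500, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)])   # shape (2, 500)

# POD/SVD sees two equally important linear modes ...
s = np.linalg.svd(X, compute_uv=False)
# ... so a linear basis needs both directions, even though one latent
# coordinate (theta) describes the data exactly.
```

The two singular values are equal, so no linear truncation to one mode is possible; a one-neuron-bottleneck AE, by contrast, can represent this manifold with a single intrinsic variable.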