5,458 research outputs found
Designing the Game to Play: Optimizing Payoff Structure in Security Games
Effective game-theoretic modeling of defender-attacker behavior is becoming
increasingly important. In many domains, the defender functions not only as a
player but also as the designer of the game's payoff structure. We study
Stackelberg Security Games in which the defender, in addition to allocating
defensive resources to protect targets from the attacker, can strategically
manipulate the attacker's payoffs subject to a budget constraint expressed as a
weighted L^p-norm of the amount of change. Focusing on problems with a weighted
L^1-norm constraint, we present (i) a mixed integer linear program-based
algorithm with an approximation guarantee; (ii) a branch-and-bound based
algorithm whose efficiency is improved by effective pruning; and (iii) a
polynomial time approximation scheme for a special but practical class of
problems. In addition, we show that problems with budget constraints in L^0-norm
and weighted L^\infty-norm form can be solved in polynomial time. We provide an
extensive experimental evaluation of our proposed algorithms.
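As a rough illustration of the setting described above (not the paper's MILP,
branch-and-bound, or PTAS algorithms), the following Python sketch brute-forces a
tiny security game: it searches discretized attacker-payoff changes that satisfy a
weighted L^1 budget and, for each, the defender's best coverage against the
attacker's best response. All payoffs, weights, the budget, and the discretization
are illustrative assumptions.

```python
# Brute-force sketch of payoff design under a weighted L^1 budget (illustrative only).
import itertools
import numpy as np

n_targets = 3
def_reward  = np.array([ 5.0,  3.0,  8.0])   # defender utility if the attacked target is covered
def_penalty = np.array([-4.0, -2.0, -9.0])   # defender utility if the attacked target is uncovered
atk_reward  = np.array([ 6.0,  2.0,  9.0])   # attacker utility on an uncovered target
atk_penalty = np.array([-3.0, -1.0, -5.0])   # attacker utility on a covered target
weights = np.array([1.0, 2.0, 1.0])          # per-target cost of changing the attacker's reward
budget  = 4.0                                # weighted L^1 budget on the payoff change
resources = 1.0                              # one divisible defensive resource

def attacker_best_target(cov, rew, pen):
    """Attacker picks the target maximizing expected utility under coverage cov."""
    eu = cov * pen + (1.0 - cov) * rew
    return int(np.argmax(eu))

def defender_value(cov, manipulated_rew):
    t = attacker_best_target(cov, manipulated_rew, atk_penalty)
    return cov[t] * def_reward[t] + (1.0 - cov[t]) * def_penalty[t]

# Coarse grids over payoff changes and coverage vectors.
deltas = [np.array(d) for d in itertools.product(np.linspace(-2, 0, 5), repeat=n_targets)]
covs = [np.array(c) for c in itertools.product(np.linspace(0, 1, 6), repeat=n_targets)
        if sum(c) <= resources + 1e-9]

best = (-np.inf, None, None)
for d in deltas:
    if np.sum(weights * np.abs(d)) > budget:          # weighted L^1 constraint
        continue
    for cov in covs:
        v = defender_value(cov, atk_reward + d)
        if v > best[0]:
            best = (v, d, cov)

print("defender value %.2f with payoff change %s and coverage %s" % best)
```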
Empirical Bayes inference in sparse high-dimensional generalized linear models
High-dimensional linear models have been extensively studied in the recent
literature, but developments in high-dimensional generalized linear models, or
GLMs, have been much slower. In this paper, we propose the use of an empirical,
or data-driven, prior specification leading to an empirical Bayes posterior
distribution that can be used for estimation of, and inference on, the
coefficient vector in a high-dimensional GLM, as well as for variable selection.
For the proposed method, we prove that the posterior distribution concentrates
around the true sparse coefficient vector at the optimal rate and, furthermore,
we provide conditions under which the posterior achieves variable selection
consistency. Computation of the proposed empirical Bayes posterior is simple and
efficient and, in terms of variable selection in logistic and Poisson
regression, is shown in simulations to perform well compared to existing
Bayesian and non-Bayesian methods.
Comment: 30 pages, 2 tables
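The abstract does not specify the prior or the sampler, so the Python sketch below
only illustrates the general flavor of a data-driven prior in a GLM: a
logistic-regression posterior whose Gaussian prior is centered at a crude ridge
estimate, sampled by random-walk Metropolis. The prior scale, ridge penalty, and
step size are illustrative assumptions, and this is not the authors' estimator.

```python
# Toy "data-driven prior" for logistic regression (illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])              # sparse truth
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Data-driven prior center: a crude ridge-regularized Newton step from beta = 0,
# where the logistic Hessian at 0 is X'X / 4 and the gradient is X'(y - 1/2).
lam = 1.0
beta_hat = np.linalg.solve(0.25 * X.T @ X + lam * np.eye(p), X.T @ (y - 0.5))

def log_post(beta, tau=2.0):
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))          # Bernoulli log-likelihood
    logprior = -0.5 * np.sum((beta - beta_hat) ** 2) / tau**2  # prior centered at beta_hat
    return loglik + logprior

# Random-walk Metropolis over the coefficient vector.
beta, lp, samples = np.zeros(p), log_post(np.zeros(p)), []
for it in range(20000):
    prop = beta + 0.05 * rng.normal(size=p)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if it > 5000:
        samples.append(beta.copy())

print("posterior means:", np.array(samples).mean(axis=0).round(2))
```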
Junior Recital: Rachel Stein, soprano
This recital is presented in partial fulfillment of requirements for the degree Bachelor of Music in Performance. Ms. Stein studies voice with Eileen Moremen.
Denoising Particle-In-Cell Data via Smoothness-Increasing Accuracy-Conserving Filters with Application to Bohm Speed Computation
The simulation of plasma physics is computationally expensive because the
underlying physical system is high-dimensional, requiring three spatial
dimensions and three velocity dimensions. One popular numerical approach is the
Particle-In-Cell (PIC) method, owing to its ease of implementation and favorable
scalability in high-dimensional problems. An unfortunate drawback of the method
is the statistical noise introduced by using finitely many particles. In this
paper we examine the application of the Smoothness-Increasing Accuracy-Conserving
(SIAC) family of convolution kernel filters as denoisers for moment data arising
from PIC simulations. We show that SIAC filtering is a promising tool for
denoising PIC data in physical space as well as capturing the appropriate scales
in Fourier space. Furthermore, we demonstrate how applying the SIAC technique
reduces the amount of information necessary to compute quantities of interest in
plasma physics, such as the Bohm speed.
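As a simplified illustration of convolution-kernel post-processing of noisy moment
data (not the full SIAC construction, which uses a particular linear combination of
B-splines), the Python sketch below smooths a noisy 1D profile with a single
normalized B-spline kernel; the grid, noise level, and kernel width are assumptions.

```python
# Smooth a noisy 1D "moment" profile with a discretized B-spline kernel (simplified stand-in).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 400)
dx = x[1] - x[0]
clean = 1.0 + 0.5 * np.sin(2 * np.pi * x)            # smooth underlying profile
noisy = clean + 0.05 * rng.normal(size=x.size)       # proxy for particle-sampling noise

def bspline_kernel(order, width_cells):
    """Discretized central B-spline: repeated convolution of a box filter."""
    box = np.ones(width_cells) / width_cells
    kernel = box
    for _ in range(order - 1):
        kernel = np.convolve(kernel, box)
    return kernel / kernel.sum()                     # unit mass, so the mean is conserved

kernel = bspline_kernel(order=3, width_cells=9)
denoised = np.convolve(noisy, kernel, mode="same")   # boundary effects ignored in this sketch

print("L2 error before: %.4f  after: %.4f"
      % (np.sqrt(dx) * np.linalg.norm(noisy - clean),
         np.sqrt(dx) * np.linalg.norm(denoised - clean)))
```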
Low-redundancy codes for correcting multiple short-duplication and edit errors
Due to its high data density, longevity, energy efficiency, and ease of
generating copies, DNA is considered a promising storage technology for
satisfying future needs. However, a diverse set of errors, including deletions,
insertions, duplications, and substitutions, may arise in DNA at different
stages of data storage and retrieval. The current paper constructs
error-correcting codes that simultaneously correct short (tandem) duplications
and a bounded number of edits, where a short duplication generates a copy of a
short substring and inserts the copy immediately after the original substring,
and an edit is a substitution, deletion, or insertion. Compared to the
state-of-the-art codes for duplications only, the proposed codes correct the
additional edits at a small additional cost in redundancy, thereby achieving the
same asymptotic rate. Furthermore, the time complexities of both the encoding
and decoding processes are polynomial in the code length when the number of
edits is constant.
Comment: 21 pages. The paper has been submitted to IEEE Transactions on
Information Theory. Furthermore, the paper was presented in part at ISIT2021 and
ISIT202
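To make the error model concrete (this sketches only the channel, not the code
construction), the Python snippet below applies random short tandem duplications
and a random edit to a q-ary string; the alphabet size and maximum duplication
length are illustrative assumptions.

```python
# Toy channel: short tandem duplications plus one edit (substitution/insertion/deletion).
import random

random.seed(0)
ALPHABET = "0123"          # assumed q = 4, e.g., {A, C, G, T}
MAX_DUP_LEN = 3            # assumed maximum length of a "short" duplication

def short_duplication(s):
    """Copy a substring of length <= MAX_DUP_LEN and insert it right after the original."""
    k = random.randint(1, MAX_DUP_LEN)
    i = random.randint(0, len(s) - k)
    return s[:i + k] + s[i:i + k] + s[i + k:]

def random_edit(s):
    """Apply one substitution, insertion, or deletion at a random position."""
    kind = random.choice(["sub", "ins", "del"])
    i = random.randint(0, len(s) - 1)
    if kind == "sub":
        return s[:i] + random.choice(ALPHABET) + s[i + 1:]
    if kind == "ins":
        return s[:i] + random.choice(ALPHABET) + s[i:]
    return s[:i] + s[i + 1:]

word = "01230123012301230123"
corrupted = word
for _ in range(2):                       # a few short duplications
    corrupted = short_duplication(corrupted)
corrupted = random_edit(corrupted)       # plus one edit
print("stored:   ", word)
print("retrieved:", corrupted)
```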