Byzantine Attack and Defense in Cognitive Radio Networks: A Survey
The Byzantine attack in cooperative spectrum sensing (CSS), also known as the
spectrum sensing data falsification (SSDF) attack in the literature, is one of
the key adversaries to the success of cognitive radio networks (CRNs). In the
past couple of years, the research on the Byzantine attack and defense
strategies has gained worldwide increasing attention. In this paper, we provide
a comprehensive survey and tutorial on the recent advances in the Byzantine
attack and defense for CSS in CRNs. Specifically, we first briefly present the
preliminaries of CSS for general readers, including signal detection
techniques, hypothesis testing, and data fusion. Second, we analyze the spear
and shield relation between Byzantine attack and defense from three aspects:
the vulnerability of CSS to attack, the obstacles in CSS to defense, and the
games between attack and defense. Then, we propose a taxonomy of the existing
Byzantine attack behaviors and elaborate on the corresponding attack
parameters, which determine where, who, how, and when to launch attacks. Next,
from the perspectives of homogeneous or heterogeneous scenarios, we classify
the existing defense algorithms, and provide an in-depth tutorial on the
state-of-the-art Byzantine defense schemes, commonly known as robust or secure
CSS in the literature. Furthermore, we highlight the unsolved research
challenges and outline future research directions. Comment: Accepted by IEEE Communications Surveys and Tutorials
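The data fusion step this survey covers is exactly where Byzantine sensors do their damage. A minimal sketch of hard-decision majority fusion under a spectrum sensing data falsification attack; all function names, sensor counts, and probabilities below are illustrative, not taken from the survey:

```python
# Illustrative sketch: hard-decision data fusion in cooperative spectrum
# sensing, with some Byzantine sensors flipping their reports (SSDF attack).
import random

def fuse_reports(honest_decisions, n_byzantine, rule="majority"):
    """Fuse binary sensing reports; the first n_byzantine sensors flip theirs."""
    reports = list(honest_decisions)
    for i in range(min(n_byzantine, len(reports))):
        reports[i] = 1 - reports[i]          # decision-flipping falsification
    votes = sum(reports)
    if rule == "majority":
        return int(votes > len(reports) / 2)
    raise ValueError("unknown fusion rule")

random.seed(0)
truth = 1                                    # primary user present
honest = [truth if random.random() < 0.9 else 1 - truth for _ in range(11)]
print(fuse_reports(honest, n_byzantine=0))   # fusion without attackers
print(fuse_reports(honest, n_byzantine=6))   # a majority of flipped reports
```

With enough colluding sensors, the flipped votes dominate the majority rule, which is the vulnerability the survey's defense schemes try to close.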
Falsification of Cyber-Physical Systems with Robustness-Guided Black-Box Checking
For exhaustive formal verification, industrial-scale cyber-physical systems
(CPSs) are often too large and complex, and lightweight alternatives (e.g.,
monitoring and testing) have attracted the attention of both industrial
practitioners and academic researchers. Falsification is a popular testing
method for CPSs that utilizes stochastic optimization. In state-of-the-art
falsification methods, the results of previous falsification trials are
discarded, and each new trial starts without any prior knowledge. To
concisely memorize such prior information on the CPS model and exploit it, we
employ Black-box checking (BBC), which is a combination of automata learning
and model checking. Moreover, we enhance BBC using the robust semantics of STL
formulas, which is the essential gadget in falsification. Our experiment
results suggest that our robustness-guided BBC outperforms a state-of-the-art
falsification tool. Comment: Accepted to HSCC 202
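The robust semantics of STL mentioned above assigns a signed margin to a trace rather than a Boolean verdict, and falsification searches for inputs that drive this margin negative. A minimal sketch for a simple "globally" formula; the function name and traces are illustrative:

```python
# Illustrative sketch of the robust semantics of an STL "globally" formula,
# the quantitative signal that guides falsification.

def robustness_globally_leq(trace, threshold):
    """Robustness of G(x <= threshold): min over the trace of threshold - x.
    Positive => satisfied with a margin; negative => formula falsified."""
    return min(threshold - x for x in trace)

safe_trace = [0.2, 0.5, 0.8]
bad_trace = [0.2, 1.3, 0.8]
print(robustness_globally_leq(safe_trace, 1.0))  # positive margin
print(robustness_globally_leq(bad_trace, 1.0))   # negative: a counterexample
```

A stochastic optimizer minimizes this robustness value over candidate inputs; any trace with negative robustness is a concrete falsifying counterexample.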
Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack
In this paper, we show synchronization for a group of output passive agents
that communicate with each other according to an underlying communication graph
to achieve a common goal. We propose a distributed event-triggered control
framework that will guarantee synchronization and considerably decrease the
required communication load on the band-limited network. We define a general
Byzantine attack on the event-triggered multi-agent network system and
characterize its negative effects on synchronization. The Byzantine agents are
capable of intelligently falsifying their data and manipulating the underlying
communication graph by altering their respective control feedback weights. We
introduce a decentralized detection framework and analyze its steady-state and
transient performances. We propose a way of identifying individual Byzantine
neighbors and a learning-based method of estimating the attack parameters.
Lastly, we propose learning-based control approaches to mitigate the negative
effects of the adversarial attack.
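The event-triggered idea above can be sketched in a few lines: an agent broadcasts its state only when it has drifted from the last broadcast value by more than a threshold, which is what reduces load on the band-limited network. The state sequence and threshold below are illustrative:

```python
# Illustrative sketch of an event-triggered transmission rule: broadcast
# only when the state deviates enough from the last broadcast value.

def simulate_event_triggered(states, threshold):
    """Return the time indices at which a broadcast is triggered."""
    triggered = [0]                 # the initial state is always broadcast
    last = states[0]
    for k, x in enumerate(states[1:], start=1):
        if abs(x - last) > threshold:
            triggered.append(k)
            last = x                # remember the last broadcast value
    return triggered

states = [0.0, 0.05, 0.12, 0.13, 0.30, 0.31]
print(simulate_event_triggered(states, threshold=0.1))
```

Only a subset of time steps generates traffic, while small fluctuations stay local; a Byzantine agent in this setting could falsify the broadcast values themselves, which is the attack the paper analyzes.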
Philosophy and the practice of Bayesian statistics
A substantial school in the philosophy of science identifies Bayesian
inference with inductive inference and even rationality as such, and seems to
be strengthened by the rise and practical success of Bayesian statistics. We
argue that the most successful forms of Bayesian statistics do not actually
support that particular philosophy but rather accord much better with
sophisticated forms of hypothetico-deductivism. We examine the actual role
played by prior distributions in Bayesian models, and the crucial aspects of
model checking and model revision, which fall outside the scope of Bayesian
confirmation theory. We draw on the literature on the consistency of Bayesian
updating and also on our experience of applied work in social science.
Clarity about these matters should benefit not just philosophy of science,
but also statistical practice. At best, the inductivist view has encouraged
researchers to fit and compare models without checking them; at worst,
theorists have actively discouraged practitioners from performing model
checking because it does not fit into their framework. Comment: 36 pages, 5
figures. v2: Fixed typo in caption of figure 1. v3: Further typo fixes. v4:
Revised in response to referee
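The model checking the authors defend can be illustrated with a posterior predictive check: simulate replicated datasets under posterior draws and compare a test statistic against its observed value. The model, seed, and crude posterior approximation below are illustrative, not from the paper:

```python
# Illustrative sketch of a posterior predictive check for a Normal(mu, 1)
# model: extreme posterior predictive p-values signal model misfit.
import random
import statistics

random.seed(1)
data = [random.gauss(0, 1) for _ in range(50)]

# Stand-in posterior draws for mu (roughly mean(data) +/- 1/sqrt(n)).
posterior_mu = [statistics.mean(data) + random.gauss(0, 0.14)
                for _ in range(200)]

# Replicate data under each draw and compare a test statistic (the max).
observed_stat = max(data)
replicated = [max(random.gauss(mu, 1) for _ in range(len(data)))
              for mu in posterior_mu]
p_value = sum(s >= observed_stat for s in replicated) / len(replicated)
print(round(p_value, 2))  # values near 0 or 1 would indicate misfit
```

The point of the paper is that this kind of check sits outside Bayesian confirmation theory: it deliberately looks for ways the fitted model fails, in a hypothetico-deductive spirit.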
Further support for the role of heroism in human mate choice
This is an accepted manuscript of an article published by the American Psychological Association in Evolutionary Behavioral Sciences on 03-09-2020.
The accepted version of the publication may differ from the final published version, accessible at https://psycnet.apa.org/doi/10.1037/ebs0000230.

Although evidence suggests that altruistic behavior can act as a mating signal, little research has explored the role of heroism in mate choice. Previous research has focused on women only, ignoring the role of heroism in male mate choice. Here, we extended and replicated previous research on the role of heroism in human mate choice. Participants (N = 276) rated how desirable targets, who varied in heroism, were for a short-term and a long-term relationship. The findings showed that men and women reported higher desirability for heroic targets for long-term compared to short-term relationships, although this pattern was more prominent in women. These findings add support to the role of heroism in mate choice by exploring the role of heroism in male and female mate choice.
Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-sample Generalization
Brain imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple comparisons problem, and limited scaling to currently growing data repositories. Yet the ever-growing information granularity of neuroimaging data repositories has launched a rapidly increasing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present paper portrays commonalities and differences between long-standing classical inference and upcoming generalization inference relevant for conducting neuroimaging research.
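The out-of-sample generalization contrasted above can be illustrated with a toy held-out evaluation: fit on a training split only, then score on unseen data. The classifier and data below are illustrative, not a neuroimaging pipeline:

```python
# Illustrative sketch: empirical evaluation by extrapolation to unseen data,
# in contrast to per-voxel null-hypothesis tests on the full sample.
import random

random.seed(0)
# Two classes of 1-D "features" with different means.
data = [(random.gauss(0, 1), 0) for _ in range(100)] + \
       [(random.gauss(1, 1), 1) for _ in range(100)]
random.shuffle(data)
train, test = data[:100], data[100:]

# Fit a nearest-centroid rule on the training split only.
mean0 = (sum(x for x, y in train if y == 0)
         / sum(1 for _, y in train if y == 0))
mean1 = (sum(x for x, y in train if y == 1)
         / sum(1 for _, y in train if y == 1))
predict = lambda x: 0 if abs(x - mean0) < abs(x - mean1) else 1

# Held-out accuracy is the generalization criterion.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(accuracy > 0.5)  # better than chance on data the model never saw
```

The model is judged by its performance on data it never touched, which is the "generalization inference" the paper contrasts with classical significance testing.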
The Falsification Adaptive Set in Linear Models with Instrumental Variables that Violate the Exogeneity or Exclusion Restriction
For the classical linear model with an endogenous variable estimated by the
method of instrumental variables (IVs) with multiple instruments, Masten and
Poirier (2021) introduced the falsification adaptive set (FAS). When a model is
falsified, the FAS reflects the model uncertainty that arises from
falsification of the baseline model. It is the set of just-identified IV
estimands, where each relevant instrument is considered as the just-identifying
instrument in turn, whilst all other instruments are included as controls. It
therefore applies to the case where the exogeneity assumption holds and invalid
instruments violate the exclusion assumption only. We propose a generalized FAS
that reflects the model uncertainty when some instruments violate the
exogeneity assumption and/or some instruments violate the exclusion assumption.
This FAS is the set of all possible just-identified IV estimands where the
just-identifying instrument is relevant. There are at most k·2^(k-1) such
estimands, where k is the number of instruments.
If there is at least one relevant instrument that is valid in the sense that it
satisfies the exogeneity and exclusion assumptions, then this generalized FAS
is guaranteed to contain the parameter of interest and therefore to be the
identified set for it…
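The combinatorics behind the generalized FAS can be sketched by direct enumeration: each instrument in turn is treated as just-identifying, and each remaining instrument is either included as a control or excluded. Instrument names and the helper function are illustrative:

```python
# Illustrative sketch: enumerating the candidate just-identified
# configurations behind the generalized FAS. For each of k instruments,
# the other k-1 are each included as a control or dropped, giving at
# most k * 2**(k-1) configurations.
from itertools import combinations

def fas_configurations(instruments):
    """Return (just-identifying instrument, tuple of controls) pairs."""
    configs = []
    for just_id in instruments:
        others = [z for z in instruments if z != just_id]
        for r in range(len(others) + 1):
            for controls in combinations(others, r):
                configs.append((just_id, controls))
    return configs

configs = fas_configurations(["z1", "z2", "z3"])
print(len(configs))  # 3 * 2**2 = 12 configurations
```

Each configuration corresponds to one just-identified IV estimand; the generalized FAS collects the estimands from the configurations whose just-identifying instrument is relevant.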