Faster Family-wise Error Control for Neuroimaging with a Parametric Bootstrap
In neuroimaging, hundreds to hundreds of thousands of tests are performed
across a set of brain regions or all locations in an image. Recent studies have
shown that the most common family-wise error rate (FWER) controlling
procedures in imaging, which rely on classical mathematical inequalities or
Gaussian random field theory, yield FWERs that are far from the nominal level.
Depending on the approach used, the FWER can be exceedingly small or grossly inflated. Given
the widespread use of neuroimaging as a tool for understanding neurological and
psychiatric disorders, it is imperative that reliable multiple testing
procedures are available. To our knowledge, only permutation joint testing
procedures have been shown to reliably control the FWER at the nominal level.
However, these procedures are computationally intensive, particularly given
the increasingly large sample sizes and image dimensionality now available, and analyses can
take days to complete. Here, we develop a parametric bootstrap joint testing
procedure. The parametric bootstrap procedure works directly with the test
statistics, which leads to much faster estimation of adjusted \emph{p}-values
than resampling-based procedures while reliably controlling the FWER in sample
sizes available in many neuroimaging studies. We demonstrate that the procedure
controls the FWER in finite samples using simulations, and present region- and
voxel-wise analyses to test for sex differences in developmental trajectories
of cerebral blood flow.
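The max-statistic idea behind such a parametric bootstrap can be illustrated compactly: rather than resampling subjects and re-fitting the model, one simulates the joint null distribution of the test statistics directly from a multivariate normal with the estimated correlation structure. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function name and the assumption of standard-normal marginals under the null are ours.

```python
import numpy as np

def maxT_adjusted_pvalues(tstats, corr, n_boot=10000, seed=0):
    """Toy parametric-bootstrap (max-T) FWER adjustment.

    tstats : (m,) observed test statistics, assumed N(0, 1) under the null
    corr   : (m, m) estimated correlation matrix of the test statistics
    """
    rng = np.random.default_rng(seed)
    m = len(tstats)
    # Simulate the joint null law of the statistics directly; avoiding
    # re-fitting the model to resampled data is the source of the speed-up.
    draws = rng.multivariate_normal(np.zeros(m), corr, size=n_boot)
    max_null = np.abs(draws).max(axis=1)  # null distribution of the maximum
    # FWER-adjusted p-value: chance the null maximum exceeds each |t_v|
    return np.array([(max_null >= abs(t)).mean() for t in tstats])
```

For voxel-wise analyses with very large m one would simulate from a low-rank or sparse representation of the correlation rather than the dense matrix used in this toy version.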
A Hierarchy Theorem for Interactive Proofs of Proximity
The number of rounds, or round complexity, used in an interactive
protocol is a fundamental resource. In this work we consider the
significance of round complexity in the context of Interactive
Proofs of Proximity (IPPs). Roughly speaking, IPPs are interactive proofs in which the verifier runs in sublinear time and is only required to reject inputs that are far from the language.
Our main result is a round hierarchy theorem for IPPs, showing
that the power of IPPs grows with the number of rounds. More
specifically, we show that there exists a gap function
g(r) = \Theta(r^2) such that for every constant r \geq 1 there exists a language that (1) has a g(r)-round IPP with verification time t = t(n, r) but (2) does not have an r-round IPP with verification time t (or even verification time t' = poly(t)).
In fact, we prove a stronger result by exhibiting a single language L such that, for every constant r \geq 1, there is an
O(r^2)-round IPP for L with t = n^{O(1/r)} verification time, whereas the verifier in any r-round IPP for L must run in time at least t^{100}. Moreover, we show an IPP for L with a poly-logarithmic number of rounds and only poly-logarithmic verification time, yielding a sub-exponential separation between the power of constant-round IPPs versus general (unbounded-round) IPPs.
From our hierarchy theorem we also derive implications to standard
interactive proofs (in which the verifier can run in polynomial
time). Specifically, we show that the round reduction technique of
Babai and Moran (JCSS, 1988) is (almost) optimal among all black-box transformations, and we show a connection to the algebrization framework of Aaronson and Wigderson (TOCT, 2009).
Proofs of proximity for context-free languages and read-once branching programs
Proofs of proximity are probabilistic proof systems in which the verifier only queries a sub-linear number of input bits, and soundness only means that, with high probability, the input is close to an accepting input. In their minimal form, called Merlin-Arthur proofs of proximity (MAPs), the verifier receives, in addition to query access to the input, free access to an explicitly given short (sub-linear) proof. A more general notion is that of an interactive proof of proximity (IPP), in which the verifier is allowed to interact with an all-powerful, yet untrusted, prover. MAPs and IPPs may be thought of as the NP and IP analogues of property testing, respectively.
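To make the proximity soundness condition concrete, consider the simplest proof-less special case (a property tester, i.e., a MAP with an empty proof). The sketch below is a toy tester for the language of all-zero strings; the language choice and all names are ours, purely for illustration.

```python
import math
import random

def zeros_proximity_tester(query, n, eps, delta=0.01):
    """Toy tester for L = {0^n}: queries O(log(1/delta)/eps) of n input bits.

    Accepts the all-zeros input with certainty; rejects any input that is
    eps-far from L (i.e., has at least eps*n ones) with prob >= 1 - delta.
    """
    q = math.ceil(math.log(1 / delta) / eps)  # each probe finds a 1 w.p. >= eps
    for _ in range(q):
        if query(random.randrange(n)) == 1:
            return False  # a single 1 is proof the input is not 0^n
    return True

# Usage: the verifier never reads the whole input, only q random bits.
x = [0] * 900 + [1] * 100                     # 0.1-far from the all-zeros string
print(zeros_proximity_tester(lambda i: x[i], len(x), eps=0.1))  # ~ False
```

A MAP would additionally hand this verifier a short explicit proof string, and an IPP would let it exchange messages with an untrusted prover.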
Striatal intrinsic reinforcement signals during recognition memory: relationship to response bias and dysregulation in schizophrenia
Ventral striatum (VS) is a critical brain region for reinforcement learning and motivation, and VS hypofunction is implicated in psychiatric disorders including schizophrenia. Providing rewards or performance feedback has been shown to activate VS. Intrinsically motivated subjects performing challenging cognitive tasks are likely to engage reinforcement circuitry even in the absence of external feedback or incentives. However, such intrinsic reinforcement responses have received little attention, have not been examined in relation to behavioral performance, and have not been evaluated for impairment in neuropsychiatric disorders such as schizophrenia. Here we used fMRI to examine a challenging “old” vs. “new” visual recognition task in healthy subjects and patients with schizophrenia. Targets were unique fractal stimuli previously presented as salient distractors in a visual oddball task, producing incidental memory encoding. Based on the prediction error theory of reinforcement learning, we hypothesized that correct target recognition would activate VS in controls, and that this activation would be greater in subjects with lower expectation of responding correctly, as indexed by a more conservative response bias. We also predicted these effects would be reduced in patients with schizophrenia. Consistent with these predictions, controls activated VS and other reinforcement processing regions during correct recognition, with greater VS activation in those with a more conservative response bias. Patients did not show either effect, with significant group differences suggesting hyporesponsivity in patients to internally generated feedback. These findings highlight the importance of accounting for intrinsic motivation and reward when studying cognitive tasks, and add to growing evidence of reward circuit dysfunction in schizophrenia that may impact cognition and function.
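For readers unfamiliar with response bias: in signal detection theory a common index is the criterion c, computed from hit and false-alarm rates, with more positive c indicating a more conservative bias (a reluctance to respond “old”). A minimal sketch assuming this standard measure follows; the abstract does not specify the exact estimator the authors used.

```python
from statistics import NormalDist

def criterion_c(hit_rate, fa_rate):
    """Signal-detection response bias c = -(z(H) + z(FA)) / 2.

    c > 0: conservative (biased toward "new"); c < 0: liberal.
    Rates should be kept off 0 and 1 (e.g., via a log-linear correction).
    """
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Example: few false alarms relative to hits -> conservative bias (c > 0)
print(criterion_c(0.60, 0.05))  # ~0.70
```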
Relaxed Locally Correctable Codes
Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only few bits from a noisy codeword. These codes have found numerous applications both in theory and in practice.
A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the decoder to reject (i.e., refuse to answer) in case it detects that the codeword is corrupt. They call such a decoder a relaxed decoder and construct a constant-query relaxed LDC with almost-linear blocklength, which is sub-exponentially better than what is known for (full-fledged) LDCs in the constant-query regime.
We consider an analogous relaxation for local correction. Thus, a relaxed local corrector reads only few bits from a (possibly) corrupt codeword and either recovers the desired bit of the codeword, or rejects in case it detects a corruption.
We give two constructions of relaxed LCCs in two regimes, where the first optimizes the query complexity and the second optimizes the rate:
1. Constant Query Complexity: A relaxed LCC with polynomial blocklength whose corrector only reads a constant number of bits of the codeword. This is a sub-exponential improvement over the best known constant-query (full-fledged) LCCs.
2. Constant Rate: A relaxed LCC with constant rate (i.e., linear blocklength) and quasi-polylogarithmic query complexity. This is a nearly sub-exponential improvement over the query complexity of a recent (full-fledged) constant-rate LCC of Kopparty et al. (STOC, 2016).
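The reject option is easy to visualize on a classical locally correctable code. The sketch below uses the (exponential-blocklength) Hadamard code, which is not one of the paper's constructions: it runs the standard two-query self-correction twice and rejects if the runs disagree, illustrating the relaxed-corrector interface of "output the bit, or reject on detected corruption". All names are ours.

```python
import random

def hadamard_relaxed_correct(w, k, a):
    """Toy relaxed local correction of position a of a Hadamard codeword.

    w : sequence of 2^k bits, indexed by vectors r in {0,1}^k (as ints),
        where an uncorrupted codeword satisfies w[r] = <x, r> mod 2.
    Returns the corrected bit, or None ("reject") when the two independent
    self-correction runs disagree, i.e., corruption was detected.
    """
    answers = []
    for _ in range(2):
        r = random.randrange(2 ** k)
        # Linearity of the code: C(x)[a] = C(x)[r] XOR C(x)[a XOR r]
        answers.append(w[r] ^ w[a ^ r])
    return answers[0] if answers[0] == answers[1] else None

# Usage: encode x = 0b101 (k = 3) as w[r] = parity(x & r), then correct w[6].
x, k = 0b101, 3
w = [bin(x & r).count("1") % 2 for r in range(2 ** k)]
print(hadamard_relaxed_correct(w, k, 6))  # parity(x & 6) = 1
```

On an uncorrupted codeword both runs always agree, so a valid word is never rejected; on a mildly corrupted word, rejecting on disagreement trades some answers for a lower chance of outputting a wrong bit.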