A construction of pooling designs with surprisingly high degree of error correction
It is well-known that many famous pooling designs are constructed from
mathematical structures by the "containment matrix" method. In this paper, we
propose another method and obtain a family of pooling designs with surprisingly
high degree of error correction based on a finite set. Given the numbers of
items and pools, the error-tolerant property of our designs is much better than
that of Macula's designs when the size of the set is large enough.
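The "containment matrix" construction mentioned above can be sketched concretely. The following is an illustrative sketch of a Macula-style design matrix over a finite set (the function name and the small parameters are our own choices, not from the paper): rows are indexed by the d-subsets of an n-set (pools) and columns by the k-subsets (items), with a 1 wherever the row's subset is contained in the column's subset.

```python
from itertools import combinations

import numpy as np

def containment_matrix(n, d, k):
    """Binary matrix for a Macula-style pooling design M(d, k, n):
    rows are the d-subsets of {0, ..., n-1} (pools), columns are the
    k-subsets (items); entry (i, j) = 1 iff pool i is contained in item j."""
    pools = [frozenset(c) for c in combinations(range(n), d)]
    items = [frozenset(c) for c in combinations(range(n), k)]
    M = np.zeros((len(pools), len(items)), dtype=int)
    for i, p in enumerate(pools):
        for j, q in enumerate(items):
            if p <= q:  # containment relation defines the entry
                M[i, j] = 1
    return M

# Tiny instance: C(4,1) = 4 pools by C(4,2) = 6 items.
M = containment_matrix(4, 1, 2)
print(M.shape)
```

Each column here has exactly C(k, d) ones (the number of d-subsets inside a k-subset), which is what makes the combinatorial error-correction analysis tractable.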
Pooling designs with surprisingly high degree of error correction in a finite vector space
Pooling designs are standard experimental tools in many biotechnical
applications. It is well-known that all famous pooling designs are constructed
from mathematical structures by the "containment matrix" method. In particular,
Macula's designs (resp. Ngo and Du's designs) are constructed by the
containment relation of subsets (resp. subspaces) in a finite set (resp. vector
space). Recently, we generalized Macula's designs and obtained a family of
pooling designs with a higher degree of error correction based on subsets of a
finite set. In this paper, as a generalization of Ngo and Du's designs, we
study the corresponding problems in a finite vector space and obtain a family
of pooling designs with surprisingly high degree of error correction. Our
designs and Ngo and Du's designs have the same number of items and pools,
respectively, but the error-tolerant property is much better than that of Ngo
and Du's designs, which was given by D'yachkov et al. \cite{DF}, when the
dimension of the space is large enough.
Statistical Network Analysis for Functional MRI: Summary Networks and Group Comparisons
Comparing weighted networks in neuroscience is hard, because the topological
properties of a given network are necessarily dependent on the number of edges
of that network. This problem arises in the analysis of both weighted and
unweighted networks. The term density is often used in this context, in order
to refer to the mean edge weight of a weighted network, or to the number of
edges in an unweighted one. Comparing families of networks is therefore
statistically difficult because differences in topology are necessarily
associated with differences in density. In this review paper, we consider this
problem from two different perspectives, which include (i) the construction of
summary networks, such as how to compute and visualize the mean network from a
sample of network-valued data points; and (ii) how to test for topological
differences, when two families of networks also exhibit significant differences
in density. In the first instance, we show that the task of summarizing a
family of networks can be addressed by adopting a mass-univariate approach,
which produces a statistical parametric network (SPN). In the second part of
this review, we then highlight the inherent problems associated with the
comparison of topological functions of families of networks that differ in
density. In particular, we show that a wide range of topological summaries,
such as global efficiency and network modularity, are highly sensitive to
differences in density. Moreover, these problems are not restricted to
unweighted metrics, as we demonstrate that the same issues remain present when
considering the weighted versions of these metrics. We conclude by encouraging
caution, when reporting such statistical comparisons, and by emphasizing the
importance of constructing summary networks.
Comment: 16 pages, 5 figures
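The density confound described above is easy to reproduce. The sketch below (our own illustration using networkx, not the authors' code) builds two random graphs that differ only in edge probability and shows that a topological summary such as global efficiency moves with density, so a raw group comparison would conflate the two.

```python
import networkx as nx

# Two "families" represented by single G(n, p) graphs with the same
# number of nodes but different edge probabilities, i.e. densities.
g_sparse = nx.gnp_random_graph(50, 0.10, seed=1)
g_dense = nx.gnp_random_graph(50, 0.30, seed=1)

# Density: fraction of all possible edges that are present.
d_sparse = nx.density(g_sparse)
d_dense = nx.density(g_dense)

# Global efficiency (mean inverse shortest-path length) rises with
# density even though nothing else about the generative process changed.
e_sparse = nx.global_efficiency(g_sparse)
e_dense = nx.global_efficiency(g_dense)

print(d_sparse, d_dense)
print(e_sparse, e_dense)
```

Any between-group difference in a summary like this is therefore ambiguous unless density is matched or explicitly modelled, which is the review's central caution.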
Covariate-assisted ranking and screening for large-scale two-sample inference
Two-sample multiple testing has a wide range of applications. The conventional practice first reduces the original observations to a vector of p-values and then chooses a cutoff to adjust for multiplicity. However, this data reduction step could cause significant loss of information and thus lead to suboptimal testing procedures. We introduce a new framework for two-sample multiple testing by incorporating a carefully constructed auxiliary variable in inference to improve the power. A data-driven multiple-testing procedure is developed by employing a covariate-assisted ranking and screening (CARS) approach that optimally combines the information from both the primary and the auxiliary variables. The proposed CARS procedure is shown to be asymptotically valid and optimal for false discovery rate control. The procedure is implemented in the R package CARS. Numerical results confirm the effectiveness of CARS in false discovery rate control and show that it achieves substantial power gain over existing methods. CARS is also illustrated through an application to the analysis of a satellite imaging data set for supernova detection.
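The "conventional practice" this abstract contrasts itself with can be sketched as plain p-value reduction followed by a false discovery rate cutoff. Below is a minimal Benjamini-Hochberg step-up procedure as a generic baseline; it is not the CARS procedure itself, which additionally ranks hypotheses using an auxiliary covariate.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classic BH step-up procedure: sort p-values, find the largest k
    with p_(k) <= alpha * k / m, and reject the k smallest."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True  # reject the k smallest p-values
    return rejected

rej = benjamini_hochberg([0.01, 0.5, 0.03, 0.02], alpha=0.05)
print(rej)
```

Because this baseline looks only at the p-values, it discards whatever the auxiliary variable knows about which hypotheses are likely non-null; that discarded information is exactly what CARS is designed to recover.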
A lunar space station
A concept for a space station to be placed in low lunar orbit in support of the eventual establishment of a permanent moon base is proposed. This space station would have several functions: (1) a complete support facility for the maintenance of the permanent moon base and its population; (2) an orbital docking area to facilitate the ferrying of materials and personnel to and from Earth; (3) a zero gravity factory using lunar raw materials to grow superior GaAs crystals for use in semiconductors and mass-produce inexpensive fiberglass; and (4) a space garden for the benefit of the air and food cycles. The mission scenario, design requirements, and technology needs and developments are included as part of the proposal.
Absence of Barren Plateaus in Quantum Convolutional Neural Networks
Quantum neural networks (QNNs) have generated excitement around the
possibility of efficiently analyzing quantum data. But this excitement has been
tempered by the existence of exponentially vanishing gradients, known as barren
plateau landscapes, for many QNN architectures. Recently, Quantum Convolutional
Neural Networks (QCNNs) have been proposed, involving a sequence of
convolutional and pooling layers that reduce the number of qubits while
preserving information about relevant data features. In this work we rigorously
analyze the gradient scaling for the parameters in the QCNN architecture. We
find that the variance of the gradient vanishes no faster than polynomially,
implying that QCNNs do not exhibit barren plateaus. This provides an analytical
guarantee for the trainability of randomly initialized QCNNs, in contrast to
many other QNN architectures. To derive our results, we introduce a novel
graph-based method to
analyze expectation values over Haar-distributed unitaries, which will likely
be useful in other contexts. Finally, we perform numerical simulations to
verify our analytical results.
Comment: 9 + 20 pages, 7 + 8 figures, 3 tables. Updated to published version