InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
This paper describes InfoGAN, an information-theoretic extension to the
Generative Adversarial Network that is able to learn disentangled
representations in a completely unsupervised manner. InfoGAN is a generative
adversarial network that also maximizes the mutual information between a small
subset of the latent variables and the observation. We derive a lower bound to
the mutual information objective that can be optimized efficiently, and show
that our training procedure can be interpreted as a variation of the Wake-Sleep
algorithm. Specifically, InfoGAN successfully disentangles writing styles from
digit shapes on the MNIST dataset, pose from lighting of 3D rendered images,
and background digits from the central digit on the SVHN dataset. It also
discovers visual concepts that include hair styles, presence/absence of
eyeglasses, and emotions on the CelebA face dataset. Experiments show that
InfoGAN learns interpretable representations that are competitive with
representations learned by existing fully supervised methods.
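The mutual-information term above is made tractable with a variational lower bound, L_I(G, Q) = E[log Q(c|x)] + H(c), estimated through an auxiliary network Q. A minimal numpy sketch of the Monte-Carlo estimate for a categorical code (the uniform code prior and all function names here are assumptions of this illustration, not the paper's implementation):

```python
import numpy as np

def mi_lower_bound(codes, q_probs):
    """Monte-Carlo estimate of the variational lower bound
    L_I = E[log Q(c|x)] + H(c) for a categorical latent code.

    codes   : (n,) int array, sampled code index per generated sample
    q_probs : (n, k) array, auxiliary network's Q(c|x) per sample
    """
    n, k = q_probs.shape
    # E[log Q(c|x)]: log-probability Q assigns to the true sampled code
    log_q = np.log(q_probs[np.arange(n), codes] + 1e-12)
    # H(c): entropy of the (assumed uniform) code prior, a constant
    h_c = np.log(k)
    return log_q.mean() + h_c

# toy check: a perfectly informative Q recovers the full entropy H(c)
rng = np.random.default_rng(0)
codes = rng.integers(0, 10, size=1000)
perfect_q = np.eye(10)[codes]       # Q puts all mass on the true code
print(round(mi_lower_bound(codes, perfect_q), 6))  # ≈ log(10) ≈ 2.302585
```

A Q that ignores x (uniform predictions) drives the estimate to zero, which is what the generator is penalised for: codes that leave no trace in the observation carry no mutual information.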
Optimizing gravitational-wave searches for a population of coalescing binaries: Intrinsic parameters
We revisit the problem of searching for gravitational waves from inspiralling
compact binaries in Gaussian coloured noise. For binaries with quasicircular
orbits and non-precessing component spins, considering dominant mode emission
only, if the intrinsic parameters of the binary are known then the optimal
statistic for a single detector is the well-known two-phase matched filter.
However, the matched filter signal-to-noise ratio is not in general an
optimal statistic for an astrophysical population of signals, since their
distribution over the intrinsic parameters will almost certainly not mirror
that of noise events, which is determined by the (Fisher) information metric.
Instead, the optimal statistic for a given astrophysical distribution will be
the Bayes factor, which we approximate using the output of a standard template
matched filter search. We then quantify the possible improvement in number of
signals detected for various populations of non-spinning binaries: for a
distribution of signals uniformly distributed in volume and with component
masses distributed uniformly over the range,
at fixed expected SNR, we find more
signals at a false alarm threshold of Hz in a single detector. The
method may easily be generalized to binaries with non-precessing spins.
Comment: Version accepted by Phys. Rev.
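The two-phase matched filter mentioned above maximises the filter output analytically over the signal's unknown coalescence phase by combining two quadrature templates. A toy numpy sketch for white Gaussian noise (a real search filters coloured noise in the frequency domain; the function name, frequencies, and amplitudes are illustrative):

```python
import numpy as np

def two_phase_snr(data, h_cos, h_sin, sigma=1.0):
    """Two-phase matched-filter SNR for a template of unknown phase in
    white Gaussian noise of standard deviation sigma (toy sketch)."""
    # unit-normalise each quadrature against the noise level, so the
    # filter output has unit variance under pure noise
    u = h_cos / (np.linalg.norm(h_cos) * sigma)
    v = h_sin / (np.linalg.norm(h_sin) * sigma)
    # filter output per phase, then maximise analytically over the
    # unknown phase: rho = sqrt(x^2 + y^2)
    x, y = data @ u, data @ v
    return np.hypot(x, y)

# toy example: a 60 Hz sinusoid of unknown phase in unit-variance noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)
h_c, h_s = np.cos(2*np.pi*60*t), np.sin(2*np.pi*60*t)
signal = 8.0 * np.cos(2*np.pi*60*t + 0.7)   # amplitude sets the SNR
data = signal + rng.normal(size=t.size)
print(two_phase_snr(data, h_c, h_s))        # large rho: confident trigger
```

The abstract's point is that ranking triggers by this rho alone is suboptimal for a population: the Bayes factor additionally reweights triggers by how likely their intrinsic parameters are under the astrophysical prior relative to the noise background.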
Post-processing partitions to identify domains of modularity optimization
We introduce the Convex Hull of Admissible Modularity Partitions (CHAMP)
algorithm to prune and prioritize different network community structures
identified across multiple runs of possibly various computational heuristics.
Given a set of partitions, CHAMP identifies the domain of modularity
optimization for each partition, i.e., the parameter-space domain where it
has the largest modularity relative to the input set, discarding partitions
with empty domains to obtain the subset of partitions that are "admissible"
candidate community structures that remain potentially optimal over indicated
parameter domains. Importantly, CHAMP can be used for multi-dimensional
parameter spaces, such as those for multilayer networks where one includes a
resolution parameter and interlayer coupling. Using the results from CHAMP, a
user can more appropriately select robust community structures by observing the
sizes of domains of optimization and the pairwise comparisons between
partitions in the admissible subset. We demonstrate the utility of CHAMP with
several example networks. In these examples, CHAMP focuses attention onto
pruned subsets of admissible partitions that are 20-to-1785 times smaller than
the sets of unique partitions obtained by community detection heuristics that
were input into CHAMP.
Comment: http://www.mdpi.com/1999-4893/10/3/9
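The geometric fact behind CHAMP is that, for a fixed partition, modularity is an affine function of the resolution parameter, so the optimal partition at each parameter value lies on the upper envelope of a set of lines (planes, in the multilayer case). A grid-based 1-D toy of that idea (the coefficients and function name are invented; the actual algorithm computes exact domains via a convex-hull/halfspace routine):

```python
import numpy as np

def champ_1d(lines, gammas):
    """Toy 1-D CHAMP: partition i contributes the line
    Q_i(gamma) = a_i - gamma * b_i.  A partition is 'admissible' iff
    its line reaches the upper envelope somewhere on the gamma grid;
    its domain of optimization is the interval where it is the argmax."""
    a = np.array([l[0] for l in lines])
    b = np.array([l[1] for l in lines])
    q = a[:, None] - np.outer(b, gammas)   # (n_partitions, n_gammas)
    winners = q.argmax(axis=0)             # optimal partition per gamma
    domains = {}
    for i in np.unique(winners):
        g = gammas[winners == i]
        domains[int(i)] = (g.min(), g.max())  # contiguous by convexity
    return domains

# three partitions; the middle line is dominated everywhere, so it is
# pruned (empty domain), mirroring how CHAMP shrinks the input set
lines = [(1.0, 0.2), (0.9, 0.5), (1.4, 1.0)]
gammas = np.linspace(0.0, 2.0, 201)
print(champ_1d(lines, gammas))   # only partitions 0 and 2 get domains
```

Inspecting the returned intervals is the 1-D analogue of "observing the sizes of domains of optimization": partitions that win over wide parameter ranges are the robust candidates.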