478,069 research outputs found
Counting with Focus for Free
This paper aims to count arbitrary objects in images. The leading counting
approaches start from point annotations per object from which they construct
density maps. Then, their training objective transforms input images to density
maps through deep convolutional networks. We posit that the point annotations
serve more supervision purposes than just constructing density maps. We
introduce ways to repurpose the points for free. First, we propose supervised
focus from segmentation, where points are converted into binary maps. The
binary maps are combined with a network branch and accompanying loss function
to focus on areas of interest. Second, we propose supervised focus from global
density, where the ratio of point annotations to image pixels is used in
another branch to regularize the overall density estimation. To assist both the
density estimation and the focus from segmentation, we also introduce an
improved kernel size estimator for the point annotations. Experiments on six
datasets show that all our contributions reduce the counting error, regardless
of the base network, resulting in state-of-the-art accuracy using only a single
network. Finally, we are the first to count on WIDER FACE, allowing us to show
the benefits of our approach in handling varying object scales and crowding
levels. Code is available at
https://github.com/shizenglin/Counting-with-Focus-for-Free
Comment: ICCV, 201
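To make the idea of repurposing point annotations concrete, the sketch below derives the three supervision signals the abstract mentions (a density map, a binary focus map, and the global density ratio) from a list of point coordinates. This is a hypothetical NumPy helper with a fixed Gaussian width, not the paper's code: the paper instead estimates the kernel size per point with its improved estimator.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """A size x size Gaussian normalized to unit mass, so each
    annotated point contributes exactly 1 to the density map."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def point_supervision(points, h, w, size=15, sigma=4.0):
    """Turn point annotations into three supervision signals:
    density map, binary segmentation (focus) map, global ratio."""
    density = np.zeros((h, w))
    k, r = gaussian_kernel(size, sigma), size // 2
    for (y, x) in points:
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        ky0, kx0 = y0 - (y - r), x0 - (x - r)
        density[y0:y1, x0:x1] += k[ky0:ky0 + (y1 - y0),
                                   kx0:kx0 + (x1 - x0)]
    binary = (density > 0).astype(np.float32)       # focus-from-segmentation target
    global_ratio = len(points) / float(h * w)       # focus-from-global-density target
    return density, binary, global_ratio
```

Because the kernel is normalized, the density map integrates to the object count, which is what makes the same points usable for all three objectives.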
Towards Complexity for Quantum Field Theory States
We investigate notions of complexity of states in continuous quantum many-body
systems. We focus on Gaussian states which include ground states of free
quantum field theories and their approximations encountered in the context of
the continuous version of Multiscale Entanglement Renormalization Ansatz. Our
proposal for quantifying state complexity is based on the Fubini-Study metric.
It leads to counting the number of applications of each gate (infinitesimal
generator) in the transformation, subject to a state-dependent metric. We
minimize the defined complexity with respect to momentum preserving quadratic
generators which form algebras. On the manifold of
Gaussian states generated by these operations the Fubini-Study metric
factorizes into hyperbolic planes with minimal complexity circuits reducing to
known geodesics. Despite working with quantum field theories far outside the
regime where Einstein gravity duals exist, we find striking similarities
between our results and holographic complexity proposals.
Comment: 6+7 pages, 6 appendices, 2 figures; v2: references added;
acknowledgments expanded; appendix F added, reviewing similarities and
differences with hep-th/1707.08570; v3: version published in PR
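The Fubini-Study construction the abstract refers to can be summarized in two standard formulas (a schematic restatement for orientation, not quoted verbatim from the paper): a line element on the space of states, and complexity as the minimal path length from a reference state $\psi_R$ to a target state $\psi_T$.

```latex
% Fubini-Study line element along a path of states |\psi(\sigma)\rangle
ds_{FS} = d\sigma \sqrt{\,\langle \partial_\sigma \psi \,|\, \partial_\sigma \psi \rangle
        - \left| \langle \psi \,|\, \partial_\sigma \psi \rangle \right|^2 }

% Complexity: minimal Fubini-Study length over circuits from reference to target
\mathcal{C}\!\left(\psi_R,\psi_T\right) = \min_{\psi(\sigma)} \int_0^1 ds_{FS},
\qquad \psi(0) = \psi_R, \quad \psi(1) = \psi_T
```

Minimizing over the momentum-preserving quadratic generators mentioned above then reduces the problem to geodesics on the resulting manifold of Gaussian states.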
Mapping 6D N = 1 supergravities to F-theory
We develop a systematic framework for realizing general anomaly-free chiral
6D supergravity theories in F-theory. We focus on 6D (1, 0) models with one
tensor multiplet whose gauge group is a product of simple factors (modulo a
finite abelian group) with matter in arbitrary representations. Such theories
can be decomposed into blocks associated with the simple factors in the gauge
group; each block depends only on the group factor and the matter charged under
it. All 6D chiral supergravity models can be constructed by gluing such blocks
together in accordance with constraints from anomalies. Associating a geometric
structure to each block gives a dictionary for translating a supergravity model
into a set of topological data for an F-theory construction. We construct the
dictionary of F-theory divisors explicitly for some simple gauge group factors
and associated matter representations. Using these building blocks we analyze a
variety of models. We identify some 6D supergravity models which do not map to
integral F-theory divisors, possibly indicating quantum inconsistency of these
6D theories.
Comment: 37 pages, no figures; v2: references added, minor typos corrected;
v3: minor corrections to DOF counting in section
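The anomaly constraints that govern the gluing include the standard 6D gravitational anomaly cancellation condition (a textbook Green-Schwarz relation, stated here for orientation rather than quoted from the paper), counting hypermultiplets $H$, vector multiplets $V$, and tensor multiplets $T$:

```latex
% 6D (1,0) gravitational anomaly cancellation
H - V + 29T = 273

% For the one-tensor-multiplet models considered here (T = 1):
H - V = 244
```

Together with the gauge and mixed anomaly conditions per simple factor, this is what constrains which blocks can be glued into a consistent model.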
Development and test of photon-counting microchannel plate detector arrays for use on space telescopes
The full sensitivity, dynamic range, and photometric stability of microchannel array plates (MCPs) are incorporated into a photon-counting detection system for space operations. Components of the system include feedback-free MCPs for high gain and a saturated output pulse-height distribution with a stable response; multi-anode readout arrays mounted in proximity focus with the output face of the MCP; and multi-layer ceramic headers that provide the electrical interface between the anode array in a sealed detector tube and the associated electronics
Investigating a Method to Measure Sperm Transfer in Chelidonura sandrana (Opisthobranchia: Cephalaspidea)
This paper investigates possible methods for measuring sperm transfer in the internal fertilizing, simultaneous hermaphrodite Chelidonura sandrana (Opisthobranchia: Cephalaspidea). Comparing sperm amount transferred in copulations has significance for testing the assumption that sperm transfer is linearly correlated with copulation duration as well as providing a tool for future studies. Various methods of preparing, treating, and viewing sperm samples were attempted. Two unsuccessful pilot studies were conducted to test free sperm counts and measuring sperm pellet surface area. Future research should focus on optimizing centrifugation method for surface area measurements of sperm clusters and resuspending sperm clusters to enable sperm counting. In conclusion, this study provides a background for future work measuring sperm transfer in C. sandrana
Neutrinos in a spherical box
In the present paper we study some neutrino properties as they may appear in
the low energy neutrinos emitted in triton decay with maximum neutrino energy
of 18.6 keV. The technical challenges to this end can be achieved by building a
very large TPC capable of detecting low energy recoils, down to a few tenths
of a keV, within the required low-background constraints. More specifically, we
propose the development of a spherical gaseous TPC of about 10 m in radius and
a 200 Mcurie triton source in the center of curvature. One can list a number of
exciting studies, concerning fundamental physics issues, that could be made
using a large volume TPC and low energy antineutrinos: 1) The oscillation
length involving the small angle of the neutrino mixing matrix, directly
measured in this disappearance experiment, is fully contained inside the
detector. Measuring the counting rate of neutrino-electron elastic scattering
as a function of the distance from the source will give a precise and
unambiguous measurement of the oscillation parameters, free of systematic
errors. In fact,
first estimates show that even with a year's data taking a sensitivity of a few
percent for the measurement of the above angle will be achieved. 2) The low
energy detection threshold offers a unique sensitivity for the neutrino
magnetic moment which is about two orders of magnitude beyond the current
experimental limit. 3) Scattering at such low neutrino energies has never been
studied and any departure from the expected behavior may be an indication of
new physics beyond the standard model. In this work we mainly focus on the
various theoretical issues involved including a precise determination of the
Weinberg angle at very low momentum transfer.
Comment: 16 pages, LaTeX, 7 figures, talk given at NANP 2003, Dubna, Russia,
June 23, 200
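To see why the oscillation is fully contained, one can evaluate the standard two-flavour disappearance formula at the triton endpoint energy of 18.6 keV. The sketch below is illustrative only; the value of Δm² ≈ 2.5e-3 eV² is an assumed present-day input, not a number taken from the talk.

```python
import math

def survival_probability(L_m, E_MeV, dm2_eV2, sin2_2theta):
    """Two-flavour survival probability P(L) = 1 - sin^2(2θ) sin^2(1.27 Δm² L/E).
    Standard convention: L in metres, E in MeV, Δm² in eV²."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

def oscillation_length_m(E_MeV, dm2_eV2):
    """Full oscillation length: the distance at which 1.27 Δm² L/E = π."""
    return math.pi * E_MeV / (1.27 * dm2_eV2)

# Triton endpoint neutrinos (18.6 keV = 0.0186 MeV), assumed Δm² = 2.5e-3 eV²:
L_osc = oscillation_length_m(0.0186, 2.5e-3)
```

With these inputs the full oscillation length comes out near 18 m, so the first minimum (half a period, roughly 9 m) falls inside a detector of 10 m radius, consistent with the claim above.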
Anonymous Readers Counting: A Wait-Free Multi-Word Atomic Register Algorithm for Scalable Data Sharing on Multi-Core Machines
In this article we present Anonymous Readers Counting (ARC), a multi-word atomic (1,N) register algorithm for multi-core machines. ARC exploits Read-Modify-Write (RMW) instructions to coordinate the writer and reader threads in a wait-free manner and enables large-scale data sharing by admitting up to (2^32 - 2) concurrent readers on off-the-shelf 64-bit machines, as opposed to the most advanced RMW-based approach, which is limited to 58 readers on the same kind of machines. Further, ARC avoids making multiple copies of the register content on access, a problem that affects classical register algorithms based on atomic read/write operations on single words, and thus allows for higher scalability with respect to the register size. Moreover, ARC explicitly reduces overall power consumption via a proper limitation of RMW instructions when read operations re-access a still-valid snapshot of the register content, and it offers constant time for read operations and amortized constant time for write operations. Our proposal therefore has a strong focus on real-world off-the-shelf architectures, allowing us to capture properties that benefit both performance and power consumption. A proof of correctness of our register algorithm is also provided, together with experimental data for a comparison with literature proposals. Beyond assessing ARC on physical platforms, we also carry out an experimentation on virtualized infrastructures, which shows the resilience of the wait-free synchronization provided by ARC with respect to CPU-steal times, typical of modern paradigms such as cloud computing. Finally, we discuss how to extend ARC to scenarios with multiple writers and multiple readers, the so-called (M,N) register.
This is achieved not by changing the operations (and their wait-free nature) executed along the critical path of the threads, but only by changing the ratio between the number of buffers keeping the register snapshots and the number of threads to coordinate, as well as the number of bits used for counting readers within a 64-bit mask accessed via RMW instructions, depending on the target balance between the number of readers and the number of writers to be supported
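A drastically simplified sketch of the control-word idea, a single 64-bit word packing the current buffer index together with a reader count updated via RMW instructions, is shown below, with the hardware RMW primitives emulated by a lock. The names (`AtomicWord`, `Register`) are invented for illustration; reader deregistration, buffer reclamation, and the power-saving read path of the actual ARC algorithm are all omitted.

```python
import threading

class AtomicWord:
    """Emulates a 64-bit word supporting the RMW primitives
    (fetch-and-add, compare-and-swap) that ARC uses in hardware."""
    def __init__(self, value=0):
        self._v = value
        self._lock = threading.Lock()

    def fetch_add(self, delta):
        with self._lock:
            old = self._v
            self._v = (self._v + delta) & (2 ** 64 - 1)
            return old

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._v == expected:
                self._v = new
                return True
            return False

    def load(self):
        with self._lock:
            return self._v

READER_BITS = 32  # low 32 bits: reader count on the current buffer

class Register:
    """Toy (1,N) register in the spirit of ARC: one control word packs
    <current buffer index, number of readers counted on it>."""
    def __init__(self, nbuffers=3, initial=None):
        self.buffers = [initial] * nbuffers
        self.nbuffers = nbuffers
        self.ctrl = AtomicWord(0)  # buffer 0 current, zero readers

    def read(self):
        # One RMW both registers this reader and tells it, atomically,
        # which buffer snapshot to read.
        word = self.ctrl.fetch_add(1)
        idx = word >> READER_BITS
        return self.buffers[idx]

    def write(self, value):
        # Single writer: fill the next buffer, then publish it (and
        # reset the reader count) with a single CAS on the control word.
        while True:
            old = self.ctrl.load()
            idx = old >> READER_BITS
            new_idx = (idx + 1) % self.nbuffers  # assumed reader-free here
            self.buffers[new_idx] = value
            if self.ctrl.compare_and_swap(old, new_idx << READER_BITS):
                return
```

The point of the packing is that a reader never takes a lock and never copies the register: a single fetch-and-add makes its presence visible to the writer and selects a consistent snapshot in one atomic step.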
- …