Statistical Assertions for Validating Patterns and Finding Bugs in Quantum Programs
In support of the growing interest in quantum computing experimentation,
programmers need new tools to write quantum algorithms as program code.
Compared to debugging classical programs, debugging quantum programs is
difficult because programmers have limited ability to probe the internal states
of quantum programs; those states are difficult to interpret even when
observations exist; and programmers do not yet have guidelines for what to
check for when building quantum programs. In this work, we present quantum
program assertions based on statistical tests on classical observations. These
allow programmers to check whether a quantum program state matches its expected
value for classical, superposition, or entangled states. We
extend an existing quantum programming language with the ability to specify
quantum assertions, which our tool then checks in a quantum program simulator.
We use these assertions to debug three benchmark quantum programs in factoring,
search, and chemistry. We share what types of bugs are possible, and lay out a
strategy for using quantum programming patterns to place assertions and prevent
bugs.
Comment: In The 46th Annual International Symposium on Computer Architecture (ISCA '19). arXiv admin note: text overlap with arXiv:1811.0544
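The core idea of a statistical assertion can be sketched concretely: a chi-squared test on classical measurement counts decides whether they are consistent with a hypothesized state, here a uniform superposition. The paper extends an existing quantum language and checks assertions in a simulator; the Python function below, its name, the shot counts, and the fixed 5% critical value are illustrative assumptions, not the authors' implementation.

```python
def assert_uniform_superposition(counts, critical=7.81):
    """Pearson chi-squared test that measured basis-state counts are
    consistent with a uniform superposition. 7.81 is the 5% critical
    value for 3 degrees of freedom (four measurement outcomes)."""
    n = sum(counts)
    expected = n / len(counts)
    chi2 = sum((c - expected) ** 2 / expected for c in counts)
    return chi2 <= critical

# Counts from 4000 hypothetical shots of a 2-qubit circuit:
print(assert_uniform_superposition([1012, 987, 1003, 998]))  # True: balanced
print(assert_uniform_superposition([2450, 180, 1190, 180]))  # False: skewed
```

An entanglement assertion would instead test for correlation between qubit outcomes (e.g. a contingency-table chi-squared test), following the same pattern of hypothesis testing on repeated classical observations.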
Intermediate Mass Black Hole Induced Quenching of Mass Segregation in Star Clusters
In many theoretical scenarios it is expected that intermediate-mass black
holes (IMBHs, with masses M ~ 100-10000 solar masses) reside at the centers of
some globular clusters. However, observational evidence for their existence is
limited. Several previous numerical investigations have focused on the impact
of an IMBH on the cluster dynamics or brightness profile. Here we instead
present results from a large set of direct N-body simulations including single
and binary stars. These show that there is a potentially more detectable IMBH
signature, namely on the variation of the average stellar mass between the
center and the half-light radius. We find that the existence of an IMBH
quenches mass segregation and causes the average mass to exhibit only modest
radial variation in collisionally relaxed star clusters, in contrast to
clusters without an IMBH. Measuring this observationally requires high-resolution
imaging at the level of that already available from the Hubble Space Telescope
(HST) for the cores of a large sample of galactic globular clusters. With a
modest additional investment of HST time to acquire fields around the
half-light radius, it will be possible to identify the best candidate clusters
to harbor an IMBH. This test can be applied only to globulars with a half-light
relaxation time less than or equal to 1 Gyr, which is required to guarantee
efficient energy equipartition due to two-body relaxation.
Comment: 15 pages, 3 figures, ApJ, in press
Mass Segregation in NGC 2298: limits on the presence of an Intermediate Mass Black Hole
[abridged] Theoretical investigations have suggested the presence of
Intermediate Mass Black Holes (IMBHs, with masses in the 100-10000 Msun range)
in the cores of some Globular Clusters (GCs). In this paper we present the
first application of a new technique to determine the presence or absence of a
central IMBH in globular clusters that have reached energy equipartition via
two-body relaxation. The method is based on the measurement of the radial
profile for the average mass of stars in the system, using the fact that a
quenching of mass segregation is expected when an IMBH is present. Here we
measure the radial profile of mass segregation using main-sequence stars for
the globular cluster NGC 2298 from resolved source photometry based on HST-ACS
data. The observations are compared to expectations from direct N-body
simulations of the dynamics of star clusters with and without an IMBH. The mass
segregation profile for NGC 2298 is quantitatively matched to that inferred
from simulations without a central massive object over the entire radial range
probed by the observations, that is, from the center to about two half-mass
radii. Profiles from simulations containing an IMBH more massive than ~ 300-500
Msun (depending on the assumed total mass of NGC 2298) are instead inconsistent
with the data at about 3 sigma confidence, irrespective of the IMF and binary
fraction chosen for these runs. While providing a null result in the quest of
detecting a central black hole in globular clusters, the data-model comparison
carried out here demonstrates the feasibility of the method which can also be
applied to other globular clusters with resolved photometry in their cores.
Comment: 21 pages, 3 figures, ApJ, accepted
Ocean data assimilation using optimal interpolation with a quasi-geostrophic model
Optimal interpolation (OI) has been used to produce analyses of quasi-geostrophic (QG) stream
function over a 59-day period in a 150-km-square domain off northern California. Hydrographic
observations acquired over five surveys, each of about 6 days' duration, were assimilated into a QG
open boundary ocean model. Since the true forecast error covariance function required for the OI is
unknown, assimilation experiments were conducted separately for individual surveys to investigate
the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error
covariance function. The analyses were intercompared through dynamical hindcasts between surveys,
since there were too few independent data for other verification of the various analyses. For the
hindcasts, the QG model was initialized with an analysis for one survey and then integrated according
to boundary data supplied by the corresponding analysis for the next survey. Two sets of such
hindcasts were conducted, since there were only three statistically independent realizations of the
stream function field for the entire observing period. For the irregular sampling strategy of the first half
of the observing period, the best hindcast was obtained using the smooth analyses produced with
assumed error decorrelation scales identical to those of the observed stream function (about 80 km):
the root mean square difference between the hindcast stream function and the final analysis was only
23% of the observation standard deviation. The best hindcast (with a 31% error) for the second half of
the observing period was obtained using an initial analysis based on an 80-km decorrelation scale and
a final analysis based on a 40-km decorrelation scale. The change in decorrelation scale was apparently
associated with a change in sampling strategy and the importance of the resolution of small-scale
vorticity input across the open boundary. The last survey used a regular sampling scheme with good
coverage (about 20-km resolution) of the entire domain so that smaller-scale features were resolved by
the data. The earlier surveys used a coarser (about 75 km) sampling resolution, and smaller-scale
features that were not well-resolved could not be inferred correctly even with short error covariance
scales. During the hindcast integrations, the dynamical model effectively filtered the stream function
fields to reduce differences between the various initial fields. Differences between the analyses near
inflow boundary points ultimately dominated the differences between dynamical hindcasts. Analyses
for the entire 59-day observing period of the five independent surveys were produced using continuous
assimilation. A modified form of OI in which the forecast error variances were updated according to
the analysis error variances and an assumed model error growth rate was also used, allowing the OI
to retain information about prior assimilation. The analyses using the updated variances were spatially
smoother and often in better agreement with the observations than the OI analyses using constant
variances. The two sets of OI analyses were temporally smoother than the fields from statistical
objective analysis (OA) and in good agreement with the only independent data available for
comparison. Unfortunately, the limiting factor in the validation of the assimilation methodology
remains the paucity of observations.
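The OI update itself can be sketched in a few lines. The snippet below is a minimal single-observation illustration (real OI, including the analyses above, solves a linear system over all observations, and the QG stream function is two-dimensional); it shows how the assumed Gaussian error covariance with decorrelation scale L spreads an observed increment across the grid. All names and numbers are hypothetical.

```python
import math

def oi_update(xg, xf, x_obs, y_obs, L, sig_b=1.0, sig_o=0.2):
    """Single-observation optimal-interpolation update on a 1-D grid.
    xg: grid coordinates (km), xf: forecast on the grid, (x_obs, y_obs):
    one observation, L: assumed error decorrelation scale (km), the
    tunable parameter the experiments vary between 40 and 80 km."""
    # forecast value at the observation point (nearest grid node)
    j = min(range(len(xg)), key=lambda i: abs(xg[i] - x_obs))
    innovation = y_obs - xf[j]
    # scalar gain: background variance over total (background + obs) variance
    gain = sig_b ** 2 / (sig_b ** 2 + sig_o ** 2)
    # spread the correction with a Gaussian correlation of scale L
    return [xf[i] + gain * innovation *
            math.exp(-(xg[i] - x_obs) ** 2 / (2.0 * L ** 2))
            for i in range(len(xg))]

xg = [5.0 * i for i in range(31)]   # 150-km domain on a 5-km grid
xf = [0.0] * len(xg)                # flat first guess
xa = oi_update(xg, xf, 75.0, 1.0, L=80.0)
```

With a long scale (L = 80 km) the increment is smooth across the whole domain; shortening L to 40 km localizes the correction, which is the trade-off the hindcast comparisons above explore.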
Community characterization of heterogeneous complex systems
We introduce an analytical statistical method to characterize the communities
detected in heterogeneous complex systems. By posing a suitable null
hypothesis, our method makes use of the hypergeometric distribution to assess
the probability that a given property is over-expressed in the elements of a
community with respect to all the elements of the investigated set. We apply
our method to two specific complex networks, namely a network of world movies
and a network of physics preprints. The characterization of the elements and of
the communities is done in terms of languages and countries for the movie
network and of journals and subject categories for papers. We find that our
method is able to clearly characterize the identified communities. Moreover,
our method works well for both large and small communities.
Comment: 8 pages, 1 figure, and 2 tables
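The hypergeometric test at the heart of the method can be sketched as follows. Under the null hypothesis, the k property carriers in a community of size n behave like a random draw from the N elements, K of which carry the property; over-expression is assessed by the right-tail probability. The movie numbers are made up for illustration, and a real analysis would correct the p-values for multiple comparisons.

```python
from math import comb

def overexpression_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the probability that
    a community of n elements drawn at random from N, of which K carry
    the property, contains k or more carriers."""
    hi = min(K, n)
    denom = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, hi + 1)) / denom

# Hypothetical numbers: 1000 movies, 120 of them French; a detected
# community of 30 movies contains 12 French ones (~3.6 expected).
p = overexpression_pvalue(1000, 120, 30, 12)
print(p)
```

A small p-value flags "French" as an over-expressed attribute of that community; repeating the test for every (community, attribute) pair yields the characterization described above.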
A New Scintillator Tile/Fiber Preshower Detector for the CDF Central Calorimeter
A detector designed to measure early particle showers has been installed in
front of the central CDF calorimeter at the Tevatron. This new preshower
detector is based on scintillator tiles coupled to wavelength-shifting fibers
read out by multi-anode photomultipliers and has a total of 3,072 readout
channels. The replacement of the old gas detector was required due to an
expected increase in instantaneous luminosity of the Tevatron collider in the
next few years. Calorimeter coverage, jet energy resolution, and electron and
photon identification are among the expected improvements. The final detector
design, together with the R&D studies that led to the choice of scintillator
and fiber, mechanical assembly, and quality control are presented. The detector
was installed in the fall 2004 Tevatron shutdown and started collecting
colliding beam data by the end of the same year. First measurements indicate a
light yield of 12 photoelectrons/MIP, a more than two-fold increase over the
design goals.
Comment: 5 pages, 10 figures (changes are minor; this is the final version published in IEEE Trans. Nucl. Sci.)
Fractional and noncommutative spacetimes
We establish a mapping between fractional and noncommutative spacetimes in
configuration space. Depending on the scale at which the relation is
considered, there arise two possibilities. For a fractional spacetime with
log-oscillatory measure, the effective measure near the fundamental scale
determining the log-period coincides with the non-rotation-invariant but
cyclicity-preserving measure of \kappa-Minkowski. At scales larger than the
log-period, the fractional measure is averaged and becomes a power-law with
real exponent. This can be also regarded as the cyclicity-inducing measure in a
noncommutative spacetime defined by a certain nonlinear algebra of the
coordinates, which interpolates between \kappa-Minkowski and canonical
spacetime. These results are based upon a braiding formula valid for any
nonlinear algebra which can be mapped onto the Heisenberg algebra.
Comment: 15 pages. v2: typos corrected
Statistically validated networks in bipartite complex systems
Many complex systems present an intrinsic bipartite nature and are often
described and modeled in terms of networks [1-5]. Examples include movies and
actors [1, 2, 4], authors and scientific papers [6-9], email accounts and
emails [10], plants and animals that pollinate them [11, 12]. Bipartite
networks are often very heterogeneous in the number of relationships that the
elements of one set establish with the elements of the other set. When one
constructs a projected network with nodes from only one set, the system
heterogeneity makes it very difficult to identify preferential links between
the elements. Here we introduce an unsupervised method to statistically
validate each link of the projected network against a null hypothesis taking
into account the heterogeneity of the system. We apply our method to three
different systems, namely the set of clusters of orthologous genes (COG) in
completely sequenced genomes [13, 14], a set of daily returns of 500 US
financial stocks, and the set of world movies of the IMDb database [15]. In all
these systems, both different in size and level of heterogeneity, we find that
our method is able to detect network structures which are informative about the
system and are not simply expression of its heterogeneity. Specifically, our
method (i) identifies the preferential relationships between the elements, (ii)
naturally highlights the clustered structure of investigated systems, and (iii)
allows links to be classified according to the type of statistically validated
relationship between the connected nodes.
Comment: Main text: 13 pages, 3 figures, and 1 table. Supplementary information: 15 pages, 3 figures, and 2 tables
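A hedged sketch of the link-validation step: for two nodes of the projected network connected to na and nb of the N elements of the other set, the observed co-occurrence count is tested against a hypergeometric null that accounts for their degrees, with a Bonferroni correction over the number of tested links. The numbers, the default thresholds, and the function names are illustrative, not taken from the paper's datasets.

```python
from math import comb

def link_pvalue(N, na, nb, nab):
    """P(X >= nab) under the hypergeometric null: the probability that
    two nodes linked to na and nb of the N elements of the other set
    share nab or more neighbours purely by chance."""
    denom = comb(N, nb)
    return sum(comb(na, i) * comb(N - na, nb - i)
               for i in range(nab, min(na, nb) + 1)) / denom

def validated(N, na, nb, nab, alpha=0.01, n_tests=10000):
    """Bonferroni-corrected validation of one projected-network link."""
    return link_pvalue(N, na, nb, nab) < alpha / n_tests

# Illustrative: 1000 movies; two actors appearing in 50 and 40 of
# them co-star in 15, far above the ~2 co-occurrences expected by
# chance -- significant even after the multiple-comparison correction.
print(validated(1000, 50, 40, 15))
```

Keeping only validated links filters out the spurious connections that degree heterogeneity alone would produce in the projected network.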