The Adaptive Sampling Revisited
The problem of estimating the number n of distinct keys of a large collection of data
is well known in computer science. A classical algorithm is adaptive sampling (AS):
n can be estimated by R 2^D, where R is the final bucket (cache) size and D is the
final depth at the end of the process. Several new interesting questions can be asked
about AS (some of them were suggested by P. Flajolet and popularized by J. Lumbroso).
The distribution of the final depth D is known; we rederive this distribution in a
simpler way. We provide new results on the moments of D and of the estimate R 2^D.
We also analyze the final cache size distribution. We consider colored keys: assume
that, among the n distinct keys, n_C have color C. We show how to estimate the
proportion n_C/n. We also study colored keys with some multiplicity given by some
distribution function; we want to estimate the mean and variance of this distribution.
Finally, we consider the case where neither colors nor multiplicities are known. There
we want to estimate the related parameters. An appendix is devoted to the case where
the hashing function provides bits with probability different from 1/2.
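For readers unfamiliar with the algorithm, a minimal Python sketch of adaptive sampling follows, assuming a uniform hash; the cache capacity b, the SHA-256 hash, and the helper leading_zero_bits are illustrative choices, not details taken from the paper.

```python
import hashlib

def leading_zero_bits(key, bits=64):
    """Number of leading zero bits in a `bits`-bit hash of `key` (illustrative hash)."""
    h = int.from_bytes(hashlib.sha256(str(key).encode()).digest()[:bits // 8], "big")
    return bits - h.bit_length()

def adaptive_sampling_estimate(keys, b=64):
    """Estimate the number of distinct keys as R * 2**D (final cache size times 2^depth)."""
    depth, cache = 0, set()
    for key in keys:
        if leading_zero_bits(key) >= depth:
            cache.add(key)
            while len(cache) > b:                 # cache overflow: deepen and subsample
                depth += 1
                cache = {k for k in cache if leading_zero_bits(k) >= depth}
    return len(cache) * 2 ** depth                # R * 2^D
```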
Cygnus A super-resolved via convex optimisation from VLA data
We leverage the Sparsity Averaging Reweighted Analysis (SARA) approach for
interferometric imaging, which is based on convex optimisation, for the
super-resolution of Cyg A from observations at the frequencies 8.422 GHz and
6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average
sparsity and positivity priors enable image reconstruction beyond instrumental
resolution. An adaptive Preconditioned Primal-Dual algorithmic structure is
developed for imaging in the presence of unknown noise levels and calibration
errors. We demonstrate the superior performance of the algorithm with respect
to the conventional CLEAN-based methods, reflected in super-resolved images
with high fidelity. The high resolution features of the recovered images are
validated by referring to maps of Cyg A at higher frequencies, more precisely
17.324 GHz and 14.252 GHz. We also confirm the recent discovery of a radio
transient in Cyg A, revealed in the recovered images of the investigated data
sets. Our MATLAB code is available online on GitHub.
Comment: 14 pages, 7 figures (3/7 animated figures), accepted for publication in MNRAS
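The SARA prior and the adaptive preconditioned primal-dual solver are developed in the paper itself; as a much simpler illustration of how sparsity and positivity priors drive such reconstructions, a toy proximal-gradient sketch (with a dense matrix A standing in for the measurement operator, and lam an assumed regularisation weight) might look like this:

```python
import numpy as np

def ista_positive_l1(A, y, lam, n_iter=300):
    """Toy solver for min_x 0.5*||A x - y||^2 + lam*||x||_1 subject to x >= 0.
    A far simpler stand-in than the SARA prior and the paper's adaptive
    preconditioned primal-dual algorithm, shown only to convey how sparsity
    and positivity priors shape the reconstruction."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                     # gradient of the data-fidelity term
        x = np.maximum(x - step * grad - step * lam, 0.0)   # soft-threshold + positivity
    return x
```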
Adaptive Filters Revisited - RFI Mitigation in pulsar observations
Pulsar detection and timing experiments are applications where adaptive
filters seem eminently suitable tools for radio-frequency-interference (RFI)
mitigation. We describe a novel variant which works well in field trials of
pulsar observations centred on an observing frequency of 675 MHz, a bandwidth
of 64 MHz and with 2-bit sampling. Adaptive filters have generally received bad
press for RFI mitigation in radio astronomical observations, with their most
serious drawback being a spectral echo of the RFI embedded in the filtered
signals. Pulsar observations are intrinsically less sensitive to this as they
operate in the (pulsar period) time domain. The field trials have allowed us to
identify those issues which limit the effectiveness of the adaptive filter. We
conclude that adaptive filters can significantly improve pulsar observations in
the presence of RFI.
Comment: Accepted for publication in Radio Science
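The paper's novel filter variant is not detailed in this abstract; purely as background, a textbook least-mean-squares (LMS) canceller, which subtracts an adaptively filtered copy of a reference (RFI-only) channel from the astronomy channel, can be sketched as below. The signal names, tap count and step size are assumptions for illustration.

```python
import numpy as np

def lms_canceller(primary, reference, n_taps=32, mu=1e-3):
    """Textbook LMS canceller: estimate the RFI in `primary` from the `reference`
    (RFI-only) channel and subtract it; the error signal is the cleaned output."""
    w = np.zeros(n_taps)                          # adaptive filter weights
    cleaned = np.copy(primary).astype(float)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]         # most recent reference samples
        y = w @ x                                 # current RFI estimate
        e = primary[n] - y                        # cleaned sample = error signal
        w += 2.0 * mu * e * x                     # LMS weight update
        cleaned[n] = e
    return cleaned
```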
Robust Covariance Adaptation in Adaptive Importance Sampling
Importance sampling (IS) is a Monte Carlo methodology that allows for
approximation of a target distribution using weighted samples generated from
another proposal distribution. Adaptive importance sampling (AIS) implements an
iterative version of IS which adapts the parameters of the proposal
distribution in order to improve estimation of the target. While the adaptation
of the location (mean) of the proposals has been largely studied, an important
challenge of AIS relates to the difficulty of adapting the scale parameter
(covariance matrix). In the case of weight degeneracy, adapting the covariance
matrix using the empirical covariance results in a singular matrix, which leads
to poor performance in subsequent iterations of the algorithm. In this paper,
we propose a novel scheme which exploits recent advances in the IS literature
to prevent the so-called weight degeneracy. The method efficiently adapts the
covariance matrix of a population of proposal distributions and achieves a
significant performance improvement in high-dimensional scenarios. We validate
the new method through computer simulations.
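As a rough illustration of the issue described above, the sketch below performs one Gaussian AIS iteration and shrinks the weighted empirical covariance toward the identity so that it cannot become singular; the shrinkage is a generic safeguard, not the specific scheme proposed in the paper.

```python
import numpy as np

def ais_step(log_target, mean, cov, n_samples=200, shrink=0.1, rng=None):
    """One Gaussian adaptive importance sampling iteration with a shrinkage-regularised
    covariance update (generic safeguard against weight degeneracy)."""
    rng = rng or np.random.default_rng()
    d = len(mean)
    x = rng.multivariate_normal(mean, cov, size=n_samples)
    diff = x - mean
    maha = np.sum(diff * np.linalg.solve(cov, diff.T).T, axis=1)
    log_q = -0.5 * (maha + np.linalg.slogdet(2.0 * np.pi * cov)[1])   # proposal log-pdf
    log_w = np.array([log_target(xi) for xi in x]) - log_q            # log importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                                      # self-normalised weights
    new_mean = w @ x                                                  # weighted mean
    c = x - new_mean
    emp_cov = c.T @ (c * w[:, None])                                  # weighted covariance
    new_cov = (1.0 - shrink) * emp_cov + shrink * np.eye(d)           # keep it non-singular
    return new_mean, new_cov
```

Iterating ais_step and feeding the returned mean and covariance back into the proposal gives the basic AIS loop; the paper's contribution concerns how to make that covariance update robust.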
eXtended Variational Quasicontinuum Methodology for Lattice Networks with Damage and Crack Propagation
Lattice networks with dissipative interactions are often employed to analyze
materials with discrete micro- or meso-structures, or for a description of
heterogeneous materials which can be modelled discretely. They are, however,
computationally prohibitive for engineering-scale applications. The
(variational) QuasiContinuum (QC) method is a concurrent multiscale approach
that reduces their computational cost by fully resolving the (dissipative)
lattice network in small regions of interest while coarsening elsewhere. When
applied to damageable lattices, moving crack tips can be captured by adaptive
mesh refinement schemes, whereas fully-resolved trails in crack wakes can be
removed by mesh coarsening. In order to address crack propagation efficiently
and accurately, we develop in this contribution the necessary generalizations
of the variational QC methodology. First, a suitable definition of crack paths
in discrete systems is introduced, which allows for their geometrical
representation in terms of the signed distance function. Second, special
function enrichments based on the partition of unity concept are adopted, in
order to capture kinematics in the wakes of crack tips. Third, a summation rule
that reflects the adopted enrichment functions with sufficient degree of
accuracy is developed. Finally, as our standpoint is variational, we discuss
implications of the mesh refinement and coarsening from an energy-consistency
point of view. All theoretical considerations are demonstrated using two
numerical examples for which the resulting reaction forces, energy evolutions,
and crack paths are compared to those of the direct numerical simulations.
Comment: 36 pages, 23 figures, 1 table, 2 algorithms; small changes after review, paper title change
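The paper's formulation is considerably richer, but two of the ingredients named above (a signed-distance representation of the crack path and a partition-of-unity jump enrichment) can be illustrated with standard XFEM-style building blocks; the polyline crack representation and the single reference normal in this sketch are crude simplifying assumptions.

```python
import numpy as np

def signed_distance(p, path, normal):
    """Signed distance from point `p` to a polyline crack `path` (list of vertices);
    the sign is taken from a user-supplied reference `normal`, so the two crack
    faces receive opposite signs. Illustrative only."""
    best, closest = np.inf, None
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
        q = a + t * ab
        d = np.linalg.norm(p - q)
        if d < best:
            best, closest = d, q
    sign = 1.0 if (p - closest) @ normal >= 0.0 else -1.0
    return sign * best

def heaviside_enrichment(phi):
    """Partition-of-unity (XFEM-style) jump enrichment across the crack."""
    return 1.0 if phi >= 0.0 else -1.0
```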
Coarse-to-Fine: Learning Compact Discriminative Representation for Single-Stage Image Retrieval
Image retrieval aims to find images in a database that are visually similar to the
query image. Two-stage methods following the retrieve-and-rerank paradigm have
achieved excellent performance, but their separate local and global modules are
inefficient for real-world applications. To better trade off retrieval efficiency
and accuracy, some approaches fuse global and local features into a joint
representation to perform single-stage image retrieval. However, they still face
various challenging situations, e.g., background, occlusion and viewpoint. In this
work, we design a Coarse-to-Fine framework to learn a Compact Discriminative
representation (CFCD) for end-to-end single-stage image retrieval, requiring only
image-level labels.
Specifically, we first design a novel adaptive softmax-based loss which
dynamically tunes its scale and margin within each mini-batch and increases
them progressively to strengthen supervision during training and intra-class
compactness. Furthermore, we propose a mechanism which attentively selects
prominent local descriptors and infuses fine-grained semantic relations into the
global representation by a hard negative sampling strategy to optimize
inter-class distinctiveness at a global scale. Extensive experimental results
have demonstrated the effectiveness of our method, which achieves
state-of-the-art single-stage image retrieval performance on benchmarks such as
Revisited Oxford and Revisited Paris. Code is available at
https://github.com/bassyess/CFCD.
Comment: Accepted to ICCV 2023
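CFCD's adaptive loss is defined in the paper; as a generic illustration of a margin-softmax objective combined with a progressively ramped scale and margin, one might write the following NumPy sketch (all function names, values and the linear ramp are assumptions for illustration, not the paper's rule).

```python
import numpy as np

def margin_softmax_loss(features, class_weights, labels, scale, margin):
    """Generic ArcFace-style margin softmax on L2-normalised features; a stand-in
    illustration, not CFCD's adaptive loss."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos = f @ w.T                                          # cosine logits
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))
    rows = np.arange(len(labels))
    logits = scale * cos
    logits[rows, labels] = scale * np.cos(theta[rows, labels] + margin)  # angular margin
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[rows, labels].mean()

def ramp(step, total_steps, start, end):
    """Progressively increase a hyper-parameter (e.g. scale or margin) during training."""
    return start + (end - start) * min(step / total_steps, 1.0)
```

Calling, e.g., ramp(step, total_steps, 0.1, 0.3) for the margin at each mini-batch realises the "increase progressively" idea in a crude, linear way; the paper's per-batch adaptive rule is more refined.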
The Core-Collapse Supernova Rate in Arp299 Revisited
We present a study of the core-collapse supernova (CCSN) rate in nuclei A and B1 of the luminous
infrared galaxy Arp299, based on 11 years of Very Large Array monitoring of
their radio emission at 8.4 GHz. Significant variations in the nuclear radio
flux density can be used to identify the CCSN activity in the absence of
high-resolution very long baseline interferometry observations. In the case of
the B1-nucleus, the small variations in its measured diffuse radio emission are
below the fluxes expected from radio supernovae (RSNe), thus making it well-suited to
detect RSNe through flux density variability. In fact, we find strong evidence
for at least three RSNe this way, which results in a lower limit for the CCSN
rate of 0.28 +/- 0.16 per year. In the A-nucleus, we did not detect any
significant variability and found a SN detection threshold luminosity which
allows only the detection of the most luminous RSNe known. Our method is
basically blind to normal CCSN explosions occurring within the A-nucleus, which
result in variations of the nuclear flux density that are too small and remain
diluted by the strong diffuse emission of the nucleus itself. Additionally, we have
attempted to find near-infrared counterparts for the earlier reported RSNe in
the Arp299 nucleus A, by comparing NIR adaptive optics images from the Gemini-N
telescope with contemporaneous observations from the European VLBI Network.
However, we were not able to detect NIR counterparts for the reported radio SNe
within the innermost regions of nucleus A. While our NIR observations were
sensitive to typical CCSNe at 300 mas from the centre of nucleus A, suffering
from extinction of up to A_V ~ 15 mag, they were not sensitive to such
highly obscured SNe within the innermost nuclear regions where most of the EVN
sources were detected. (abridged)
Comment: 12 pages, 4 figures and 7 tables. Accepted for publication in MNRAS