Deep Optimal Transport: A Practical Algorithm for Photo-realistic Image Restoration
We propose an image restoration algorithm that can control the perceptual
quality and/or the mean square error (MSE) of any pre-trained model, trading
one off against the other at test time. Our algorithm is few-shot: given about a dozen
images restored by the model, it can significantly improve the perceptual
quality and/or the MSE of the model for newly restored images without further
training. Our approach is motivated by a recent theoretical result that links
the minimum MSE (MMSE) predictor with the predictor that minimizes the
MSE under a perfect perceptual quality constraint. Specifically, it has been
shown that the latter can be obtained by optimally transporting the output of
the former, such that its distribution matches the source data. Thus, to
improve the perceptual quality of a predictor that was originally trained to
minimize MSE, we approximate the optimal transport by a linear transformation
in the latent space of a variational auto-encoder, which we compute in
closed-form using empirical means and covariances. Going beyond the theory, we
find that applying the same procedure to models that were initially trained to
achieve high perceptual quality typically improves their perceptual quality
even further. Moreover, by interpolating the results with the original output of the
model, we can improve their MSE at the expense of perceptual quality. We
illustrate our method on a variety of degradations applied to general content
images of arbitrary dimensions.
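The closed-form latent-space transformation described above is the classical linear (Monge) optimal transport map between two Gaussians, computed from empirical means and covariances. A minimal sketch, assuming the latent codes have already been extracted as NumPy arrays (function and variable names are hypothetical, not the authors'):

```python
import numpy as np

def gaussian_ot_map(src, tgt):
    """Closed-form linear map transporting N(mu_s, Sigma_s) onto N(mu_t, Sigma_t).

    src, tgt: (n, d) arrays of latent vectors (e.g., codes of model outputs
    and of source-distribution images). Returns T(x) = mu_t + A (x - mu_s).
    """
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    sig_s = np.cov(src, rowvar=False)
    sig_t = np.cov(tgt, rowvar=False)

    def sqrtm(m):
        # symmetric PSD square root via eigendecomposition
        w, v = np.linalg.eigh(m)
        return (v * np.sqrt(np.clip(w, 0, None))) @ v.T

    s = sqrtm(sig_s)
    s_inv = np.linalg.pinv(s)
    # Monge map between Gaussians: A = S^{-1} (S Sigma_t S)^{1/2} S^{-1}
    a = s_inv @ sqrtm(s @ sig_t @ s) @ s_inv
    return lambda x: mu_t + (x - mu_s) @ a.T
```

Applying the returned map to the latent codes of newly restored images matches their first two moments to those of the target set, which is the few-shot correction step the abstract describes; in the paper this happens inside a VAE latent space rather than on raw vectors.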
Prefix Filter: Practically and Theoretically Better Than Bloom
Many applications of approximate membership query data structures, or
filters, require only an incremental filter that supports insertions but not
deletions. However, the design space of incremental filters is missing a "sweet
spot" filter that combines space efficiency, fast queries, and fast insertions.
Incremental filters, such as the Bloom and blocked Bloom filter, are not space
efficient. Dynamic filters (i.e., supporting deletions), such as the cuckoo or
vector quotient filter, are space efficient but do not exhibit consistently
fast insertions and queries.
In this paper, we propose the prefix filter, an incremental filter that
addresses the above challenge: (1) its space (in bits) is similar to
state-of-the-art dynamic filters; (2) query throughput is high and is
comparable to that of the cuckoo filter; and (3) insert throughput is high, with
overall build times faster than those of the vector quotient filter and the
cuckoo filter.
We present a rigorous analysis of the prefix filter that also holds for
practical set sizes. The analysis deals with the probability
of failure, false positive rate, and probability that an operation requires
accessing more than a single cache line.
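For context, the incremental-filter interface this work targets (insertions and membership queries, no deletions) can be illustrated with a minimal Bloom filter, the space-inefficient baseline mentioned above. This is a plain illustration of the API, not the prefix filter; the sizing constants are arbitrary:

```python
import hashlib

class BloomFilter:
    """Minimal incremental filter: insertions and queries only.

    False positives occur at a rate controlled by m (bits) and k (hashes);
    false negatives never occur. No deletions -- the 'incremental' setting.
    """
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # derive k independent bit positions by salting one hash function
        for i in range(self.k):
            h = hashlib.blake2b(item.encode(), digest_size=8,
                                salt=i.to_bytes(8, "little")).digest()
            yield int.from_bytes(h, "little") % self.m

    def insert(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def query(self, item):
        # True for every inserted item; True with small probability otherwise
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))
```

The prefix filter keeps this insert/query interface while matching the space of dynamic filters, which a plain Bloom filter cannot do.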
BLEACH: Cleaning Errors in Discrete Computations over CKKS
Approximated homomorphic encryption (HE) schemes such as CKKS are commonly used to perform computations over encrypted real numbers. It is commonly assumed that these schemes are not “exact” and thus they cannot execute circuits with unbounded depth over discrete sets, such as binary or integer numbers, without error overflows. These circuits are usually executed using BGV and B/FV for integers and TFHE for binary numbers. This artificial separation can cause users to favor one scheme over another for a given computation, without even exploring other, perhaps better, options.
We show that by treating step functions as “clean-up” utilities and by leveraging the SIMD capabilities of CKKS, we can extend the homomorphic encryption toolbox with efficient tools. These tools use CKKS to run unbounded circuits that operate over binary and small-integer elements and even combine these circuits with fixed-point real-number circuits. We demonstrate the results using the Turing-complete Conway’s Game of Life. In our evaluation, for boards of size 128x128, these tools achieved an order of magnitude lower latency than previous implementations using other HE schemes. We argue and demonstrate that for large enough real-world inputs, performing binary circuits over CKKS, while treating it as an “exact” scheme, results in comparable or even better performance than using other schemes tailored for similar inputs.
Quantitative analytical tools for bee health (Apis mellifera) assessment
Background: The number of honeybee (Apis mellifera) colony losses has grown significantly in the past decade, endangering pollination of agricultural crops. Research indicates that no single factor is sufficient to explain colony losses and that a combination of stressors appears to impact hive health. Accurate evaluation of the different factors, such as pathogen load, environmental conditions, nutrition and foraging, is important to understanding colony loss. Commonly used colony assessment methods are subjective and imprecise, making it difficult to compare bee hive parameters between studies. Finding robust, validated methods to assess bee and hive health has become a key area of focus for bee health and bee risk assessment.

Results: Our study focused on developing and implementing quantitative analytical tools that allowed us to investigate the contribution of different factors to colony loss. These validated methods include: adult bee and brood cell imaging and automated counting (IndiCounter, WSC Regexperts), cellular transmitting scales and weather monitoring (Phytech, ILS) and pathogen detection (QuantiGene® Plex 2.0 RNA assay platform from Affymetrix). These techniques enable accurate assessment of colony state.

Conclusion: A major challenge to date for bee health is to identify the events leading to colony loss. Our study describes validated molecular and computational tools to assess colony health that can prospectively describe the etiology of potential diseases and in some cases identify the cause leading to colony collapse.

Keywords: colony loss, colony assessment methods, cellular transmitting scales, weather monitoring, QuantiGene® Plex 2.0
First measurements of radon-220 diffusion in mice tumors, towards treatment planning in diffusing alpha-emitters radiation therapy
Alpha-DaRT is a new method for treating solid tumors with alpha particles,
relying on the release of the alpha-emitting daughter atoms of radium-224 from
sources inserted into the tumor. The most important model parameters for
Alpha-DaRT dosimetry are the diffusion lengths of radon-220 and lead-212, and
their estimation is essential for treatment planning. The aim of this work is
to provide first experimental estimates for the diffusion length of radon-220.
The diffusion length of radon-220 was estimated from autoradiography
measurements of histological sections taken from 24 mouse-borne subcutaneous
tumors of five different types. Experiments were done in two sets: fourteen
in-vivo tumors, where during the treatment the tumors were still carried by the
mice with active blood supply, and ten ex-vivo tumors, where the tumors were
excised before source insertion and kept in a medium at 37 degrees C with the
source inside. The measured diffusion lengths of radon-220 lie in the range
0.25-0.6 mm, with no significant difference between the average values measured
in in-vivo and ex-vivo tumors: 0.40 ± 0.08 mm for in-vivo vs. 0.39
± 0.07 mm for ex-vivo. However, in-vivo tumors display an enhanced spread of
activity 2-3 mm away from the source. This effect is not explained by the
current model and is much less pronounced in ex-vivo tumors. The average
measured radon-220 diffusion lengths in both in-vivo and ex-vivo tumors lie
close to the upper limit of the previously estimated range of 0.2-0.4 mm. The
observation that close to the source there is no apparent difference between
in-vivo and ex-vivo tumors, and the good agreement with the theoretical model
in this region suggest that the spread of radon-220 is predominantly diffusive
in this region. The departure from the model prediction in in-vivo tumors at
large radial distances may hint at a potential vascular contribution.
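The estimation step can be made concrete: in the diffusion-dominated regime, the steady-state activity around a point source falls off as exp(-r/L)/r, so ln(r·C) is linear in r with slope -1/L. A sketch on synthetic "autoradiography" data; the functional form is the standard point-source diffusion solution, not necessarily the authors' exact fitting procedure:

```python
import numpy as np

L_TRUE = 0.40  # mm, a mid-range radon-220 diffusion length from the abstract

# synthetic radial activity profile around the source
r = np.linspace(0.2, 2.0, 50)                        # mm from the source
activity = np.exp(-r / L_TRUE) / r                   # diffusion solution
rng = np.random.default_rng(1)
measured = activity * rng.normal(1.0, 0.05, r.size)  # 5% measurement noise

# ln(r * C) = const - r / L, so a straight-line fit recovers L from the slope
slope, intercept = np.polyfit(r, np.log(r * measured), 1)
L_fit = -1.0 / slope
print(f"fitted diffusion length: {L_fit:.2f} mm")
```

The reported in-vivo excess spread at 2-3 mm would show up in such a fit as a systematic departure of the measured points from this straight line at large r.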
The E705K Mutation in hPMS2 Exerts Recessive, Not Dominant, Effects on Mismatch Repair
The hPMS2 mutation E705K is associated with Turcot syndrome. To elucidate the pathogenesis of hPMS2-E705K, we modeled this mutation in yeast and characterized its expression and effects on mutation avoidance in mammalian cells. We found that while hPMS2-E705K (pms1-E738K in yeast) did not significantly affect hPMS2 (Pms1p in yeast) stability or interaction with MLH1, it could not complement the mutator phenotype in MMR-deficient mouse or yeast cells. Furthermore, hPMS2-E705K/pms1-E738K inhibited MMR in wild-type (WT) mammalian cell extracts or yeast cells only when present in excess amounts relative to WT PMS2. Our results strongly suggest that hPMS2-E705K is a recessive loss-of-function allele.
Time Variations in the Scale of Grand Unification
We study the consequences of time variations in the scale of grand
unification, M_GUT, when the Planck scale and the value of the unified coupling
at the Planck scale are held fixed. We show that the relation between the
variations of the low-energy gauge couplings is highly model dependent. It is
even possible, in principle, that the electromagnetic coupling α_em varies,
but the strong coupling α_s does not (to leading approximation). We
investigate whether the interpretation of recent observations of quasar
absorption lines in terms of time variation in α_em can be accounted for by
time variation in M_GUT. Our formalism can be applied to any scenario where a
time variation in an intermediate scale induces, through threshold corrections,
time variations in the effective low-scale couplings.

Comment: 14 pages, revtex4; updated observational results and improved
statistical analysis (Section IV); added reference.
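The model dependence can be made concrete with one-loop running between the Planck scale and a low scale. A sketch under simplifying assumptions (one-loop running only, no threshold corrections; the unified beta coefficient b_U below is a hypothetical model choice, not taken from the paper):

```python
import math

# one-loop SM coefficients (GUT-normalized hypercharge): b_1, b_2, b_3
b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7.0}

def coupling_shift(alpha, b_i, b_unified, dln_m_gut):
    """Relative shift of a low-energy coupling alpha_i induced by a
    fractional change dln_m_gut = delta M_GUT / M_GUT, with the Planck
    scale and the unified coupling at the Planck scale held fixed.

    With running dalpha^{-1}/dln(mu) = -b/(2 pi) above and below M_GUT:
        d(1/alpha_i)/d ln M_GUT = (b_i - b_U) / (2 pi)
        => delta alpha_i / alpha_i = -alpha_i (b_i - b_U) dln_m_gut / (2 pi)
    """
    return -alpha * (b_i - b_unified) * dln_m_gut / (2 * math.pi)

# illustrative (hypothetical) choice: b_U equal to b_3, so the strong
# coupling is insensitive to M_GUT while the U(1) coupling still shifts
b_unified = b["SU(3)"]
print(coupling_shift(1 / 59, b["U(1)"], b_unified, 1e-5))   # nonzero
print(coupling_shift(0.118, b["SU(3)"], b_unified, 1e-5))   # exactly zero
```

The second call shows, in miniature, the abstract's point: whenever b_i equals the unified coefficient, that coupling is insensitive to the variation, so the pattern of low-energy shifts depends entirely on the model's beta coefficients.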
Genomic data provide insights into the classification of extant termites.
The higher classification of termites requires substantial revision as the Neoisoptera, the most diverse termite lineage, comprise many paraphyletic and polyphyletic higher taxa. Here, we produce an updated termite classification using genomic-scale analyses. We reconstruct phylogenies under diverse substitution models with ultraconserved elements analyzed as concatenated matrices or within the multi-species coalescent framework. Our classification is further supported by analyses controlling for rogue loci and taxa, and topological tests. We show that the Neoisoptera are composed of seven family-level monophyletic lineages, including the Heterotermitidae Froggatt, Psammotermitidae Holmgren, and Termitogetonidae Holmgren, raised from subfamilial rank. The species-rich Termitidae are composed of 18 subfamily-level monophyletic lineages, including the new subfamilies Crepititermitinae, Cylindrotermitinae, Forficulitermitinae, Neocapritermitinae, Protohamitermitinae, and Promirotermitinae; and the revived Amitermitinae Kemner, Microcerotermitinae Holmgren, and Mirocapritermitinae Kemner. Building an updated taxonomic classification on the foundation of unambiguously supported monophyletic lineages makes it highly resilient to potential destabilization caused by the future availability of novel phylogenetic markers and methods. The taxonomic stability is further guaranteed by the modularity of the new termite classification, designed to accommodate as-yet undescribed species with uncertain affinities to the herein delimited monophyletic lineages in the form of new families or subfamilies.