Heaviest Induced Ancestors and Longest Common Substrings
Suppose we have two trees on the same set of leaves, in which nodes are
weighted such that children are heavier than their parents. We say a node from
the first tree and a node from the second tree are induced together if they
have a common leaf descendant. In this paper we describe data structures that
efficiently support the following heaviest-induced-ancestor query: given a node
from the first tree and a node from the second tree, find an induced pair of
their ancestors with maximum combined weight. Our solutions are based on a
geometric interpretation that enables us to find heaviest induced ancestors
using range queries. We then show how to use these results to build an
LZ-compressed index with which we can quickly find, with high probability, a
longest substring common to the indexed string and a given pattern.
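The heaviest-induced-ancestor query can be stated concretely with a small brute-force reference implementation. The trees, node names, and weights below are invented for illustration, and this naive version enumerates all ancestor pairs; the paper's contribution is answering the same query much faster via a geometric interpretation and range queries.

```python
# Naive reference for the heaviest-induced-ancestor (HIA) query:
# two trees share a leaf set, children are heavier than parents, and
# hia(u, v) finds an induced ancestor pair with maximum combined weight.

class Node:
    def __init__(self, name, weight, children=()):
        self.name, self.weight, self.children = name, weight, list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

    def leaves(self):
        if not self.children:
            return {self.name}
        out = set()
        for c in self.children:
            out |= c.leaves()
        return out

    def ancestors(self):  # the node itself plus all proper ancestors
        n = self
        while n is not None:
            yield n
            n = n.parent

def hia(u, v):
    """Among ancestor pairs (a, b) of (u, v) that share a common leaf
    descendant ("induced"), maximize a.weight + b.weight."""
    best = None
    for a in u.ancestors():
        for b in v.ancestors():
            if a.leaves() & b.leaves():  # induced: common leaf descendant
                if best is None or a.weight + b.weight > best[0]:
                    best = (a.weight + b.weight, a.name, b.name)
    return best

# Toy example: leaf set {x, y, z}; weights grow with depth in both trees.
x1, y1, z1 = Node("x", 3), Node("y", 3), Node("z", 2)
t1 = Node("r1", 1, [Node("p", 2, [x1, y1]), z1])
x2, y2, z2 = Node("x", 2), Node("y", 3), Node("z", 3)
t2 = Node("r2", 1, [x2, Node("q", 2, [y2, z2])])
print(hia(x1, y2))  # → (5, 'p', 'y')
```

The quadratic enumeration over ancestor pairs is exactly what the paper's data structures avoid.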
Heavy Higgs Bosons at 14 TeV and 100 TeV
Searching for Higgs bosons beyond the Standard Model (BSM) is one of the most
important missions for hadron colliders. As a landmark of BSM physics, the MSSM
Higgs sector at the LHC is expected to be tested up to the scale of the
decoupling limit of O(1) TeV, except for a wedge region centered around
, which has been known to be difficult to probe. In this
article, we present a dedicated study testing the decoupled MSSM Higgs sector,
at the LHC and a next-generation -collider, proposing to search in channels
with associated Higgs productions, with the neutral and charged Higgs further
decaying into and , respectively. In the case of the neutral Higgs we are
able to probe the so-far-uncovered wedge region via . Additionally, we cover
the high region with . The combination of these searches with channels
dedicated to the low region, such as and , potentially covers the full
range. The search for the charged Higgs has slightly smaller sensitivity in
the moderate region, but probes the higher and lower regions with even
greater sensitivity, via . While the LHC will be able
to probe the whole range for Higgs masses of O(1) TeV by combining
these channels, we show that a future 100 TeV -collider has the potential to
push the sensitivity reach up to TeV. In order to deal
with the novel kinematics of top quarks produced by heavy Higgs decays, the
multivariate Boosted Decision Tree (BDT) method is applied in our collider
analyses. The BDT-based tagging efficiencies of both hadronic and leptonic
top-jets, and their mutual fake rates as well as the faking rates by other jets
(, , , , etc.) are also presented.
Comment: published version
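The BDT-based tagging step can be sketched with an off-the-shelf boosted-decision-tree classifier. The jet features and the synthetic Gaussian data below are invented purely for illustration (the paper trains BDTs on simulated collider events), but the sketch shows how a tagging efficiency and fake rate fall out of such a classifier.

```python
# Illustrative BDT top-tagging sketch: separate "top-jet" from "other-jet"
# candidates and read off efficiency and fake rate at a working point.
# Features and data are synthetic stand-ins, not real collider observables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-jet features: (mass, n_subjets, subjettiness ratio).
top_jets = rng.normal([173.0, 3.0, 0.5], [15.0, 0.5, 0.1], size=(n, 3))
other_jets = rng.normal([80.0, 2.0, 0.8], [30.0, 0.7, 0.1], size=(n, 3))
X = np.vstack([top_jets, other_jets])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_tr, y_tr)

# Tagging efficiency (true-positive rate) and fake rate (false-positive
# rate) at the default 0.5 decision threshold.
pred = bdt.predict(X_te)
eff = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
fake = ((pred == 1) & (y_te == 0)).sum() / (y_te == 0).sum()
```

In an actual analysis the working point is scanned to trade efficiency against the mutual and other-jet fake rates quoted in the paper.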
Maximum Inner-Product Search using Tree Data-structures
The problem of {\em efficiently} finding the best match for a query in a
given set with respect to the Euclidean distance or the cosine similarity has
been extensively studied in the literature. However, a closely related problem of
efficiently finding the best match with respect to the inner product has never
been explored in the general setting to the best of our knowledge. In this
paper we consider this general problem and contrast it with the existing
best-match algorithms. First, we propose a general branch-and-bound algorithm
using a tree data structure. Subsequently, we present a dual-tree algorithm for
the case where there are multiple queries. Finally we present a new data
structure for increasing the efficiency of the dual-tree algorithm. These
branch-and-bound algorithms involve novel bounds suited for the purpose of
best-matching with inner products. We evaluate our proposed algorithms on a
variety of data sets from various applications, and exhibit up to five orders
of magnitude improvement in query time over the naive search technique.
Comment: Under submission in KDD 201
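The single-tree branch-and-bound idea can be sketched in a few lines. The node layout and the bound are simplified for illustration: for a ball with center c and radius r containing a subtree's points, Cauchy-Schwarz gives max⟨q, p⟩ ≤ ⟨q, c⟩ + r‖q‖, so a whole subtree can be pruned whenever that bound cannot beat the best inner product found so far. (The paper's bounds are tailored further, and its dual-tree variant also batches the queries.)

```python
# Minimal ball-tree branch-and-bound for maximum inner-product search (MIPS).
import numpy as np

class BallNode:
    def __init__(self, points, leaf_size=8):
        self.points = points
        self.center = points.mean(axis=0)
        self.radius = np.linalg.norm(points - self.center, axis=1).max()
        self.left = self.right = None
        if len(points) > leaf_size:
            d = points.std(axis=0).argmax()        # split widest dimension
            order = points[:, d].argsort()
            mid = len(points) // 2
            self.left = BallNode(points[order[:mid]], leaf_size)
            self.right = BallNode(points[order[mid:]], leaf_size)

def mips(node, q, best=-np.inf):
    """Largest inner product <q, p> over points stored in the tree."""
    bound = q @ node.center + node.radius * np.linalg.norm(q)
    if bound <= best:
        return best                    # prune: no point here can win
    if node.left is None:
        return max(best, (node.points @ q).max())
    best = mips(node.left, q, best)
    return mips(node.right, q, best)

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 16))
q = rng.normal(size=16)
tree = BallNode(data)
assert np.isclose(mips(tree, q), (data @ q).max())  # agrees with naive scan
```

The bound is valid because every point of a node lies inside its ball, so pruning never discards the true maximizer.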
Next-to-leading order QCD corrections to Higgs boson production in association with a photon via weak-boson fusion at the LHC
Higgs boson production in association with a hard central photon and two
forward tagging jets is expected to provide valuable information on Higgs boson
couplings in a range where it is difficult to disentangle weak-boson fusion
processes from large QCD backgrounds. We present next-to-leading order QCD
corrections to Higgs production in association with a photon via weak-boson
fusion at a hadron collider in the form of a flexible parton-level Monte Carlo
program. The QCD corrections to integrated cross sections are found to be small
for experimentally relevant selection cuts, while the shape of kinematic
distributions can be distorted by up to 20% in some regions of phase space.
Residual scale uncertainties at next-to-leading order are at the few-percent
level.
Comment: 17 pages, 7 figures, 1 table
Faster tuple lattice sieving using spherical locality-sensitive filters
To overcome the large memory requirement of classical lattice sieving
algorithms for solving hard lattice problems, Bai-Laarhoven-Stehl\'{e} [ANTS
2016] studied tuple lattice sieving, where tuples instead of pairs of lattice
vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017]
recently improved upon their results for arbitrary tuple sizes, for example
showing that a triple sieve can solve the shortest vector problem (SVP) in
dimension in time , using a technique similar to
locality-sensitive hashing for finding nearest neighbors.
In this work, we generalize the spherical locality-sensitive filters of
Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near
neighbor searching on dense data sets, and we apply these techniques to tuple
lattice sieving to obtain even better time complexities. For instance, our
triple sieve heuristically solves SVP in time . For
practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this
shows that a triple sieve uses less space and less time than the current best
near-linear space double sieve.
Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology
ePrint Archive 2017/228, available at https://ia.cr/2017/122
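The core tuple-sieving idea can be illustrated with a toy brute force: where a pairwise sieve only considers differences x1 - x2 of lattice vectors, a triple sieve also searches signed combinations of three vectors for a shorter lattice vector. The basis vectors below are an arbitrary example; real sieves use locality-sensitive filters precisely to avoid this cubic enumeration.

```python
# Toy triple-combination search: the shortening step of a triple sieve,
# written as a brute force over signed sums of three lattice vectors.
import itertools
import numpy as np

def shortest_triple_combination(vectors):
    """Shortest nonzero vector among s1*x1 + s2*x2 + s3*x3
    with signs si in {-1, 0, 1}, over all triples from `vectors`."""
    best, best_norm = None, np.inf
    for x1, x2, x3 in itertools.combinations(vectors, 3):
        for s in itertools.product((-1, 0, 1), repeat=3):
            v = s[0] * x1 + s[1] * x2 + s[2] * x3
            norm = np.linalg.norm(v)
            if 0 < norm < best_norm:
                best, best_norm = v, norm
    return best, best_norm

vecs = [np.array(v) for v in ((3, 0), (2, 1), (1, 2))]
v, n = shortest_triple_combination(vecs)   # finds (2,1) - (1,2) = (1,-1)
```

Allowing a zero sign means the search subsumes the pairwise differences of a classical double sieve.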
Automated generation of computationally hard feature models using evolutionary algorithms
This is the post-print version of the final paper published in Expert Systems with Applications. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2014 Elsevier B.V.

A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools with average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an optimization problem, and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.

European Commission (FEDER), the Spanish Government and the Andalusian Government
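The ETHOM loop described above follows a standard evolutionary-search skeleton: keep a population of fixed-size inputs, score each by how expensive it is for the tool, and breed the most expensive ones. The sketch below is schematic, with bit-strings standing in for tree-shaped feature models and a hypothetical `cost` function standing in for the measured execution time or memory of an analysis operation.

```python
# Schematic ETHOM-style loop: evolve fixed-size inputs that maximize a
# black-box cost.  Bit-strings and the cost function are illustrative
# stand-ins for real feature models and real tool measurements.
import random

random.seed(0)
SIZE, POP, GENS = 32, 20, 30

def cost(model):
    # Hypothetical stand-in for "how long the tool takes on this model".
    return sum(model[i] ^ model[i + 1] for i in range(SIZE - 1))

def mutate(model, rate=0.05):
    return [b ^ (random.random() < rate) for b in model]

def crossover(a, b):
    cut = random.randrange(1, SIZE)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(POP)]
initial_best = max(cost(m) for m in pop)

for _ in range(GENS):
    pop.sort(key=cost, reverse=True)
    elite = pop[: POP // 2]            # keep the hardest models found so far
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

hardest = max(pop, key=cost)
```

Elitism guarantees the best-so-far cost never decreases, which mirrors how ETHOM steadily accumulates harder inputs than random generation produces.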
Probing new physics with displaced vertices: muon tracker at CMS
Long-lived particles can manifest themselves at the LHC via "displaced
vertices" - several charged tracks originating from a position separated from
the proton interaction point by a macroscopic distance. Here we demonstrate a
potential of the muon trackers at the CMS experiment for displaced vertex
searches. We use heavy neutral leptons and Chern-Simons portal as two examples
of long-lived particles for which the CMS muon tracker can provide essential
information about their properties.
Comment: Journal version
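Why an outer detector layer helps can be seen from the exponential decay law: a long-lived particle with proper lifetime τ and boost γβ has mean lab-frame flight length L = γβcτ, so the probability of decaying inside a shell of radii [r1, r2] is exp(-r1/L) - exp(-r2/L). The radii and lifetime below are illustrative round numbers, not actual CMS geometry.

```python
# Back-of-the-envelope decay-position probabilities for a long-lived
# particle; radii and lifetime are illustrative, not CMS geometry.
import math

def decay_probability(r1_m, r2_m, gamma_beta, c_tau_m):
    """P(decay at flight distance in [r1, r2]) for mean lab-frame
    flight length L = gamma*beta*c*tau (all distances in meters)."""
    L = gamma_beta * c_tau_m
    return math.exp(-r1_m / L) - math.exp(-r2_m / L)

# Example: c*tau = 1 m, boost gamma*beta = 10 -> mean flight length 10 m.
p_inner = decay_probability(0.0, 1.0, 10.0, 1.0)  # inner-tracker-like shell
p_muon = decay_probability(3.0, 7.0, 10.0, 1.0)   # muon-system-like shell
```

For sufficiently long lifetimes, a sizable fraction of decays lands at muon-system radii, which is what makes the muon tracker a useful displaced-vertex detector.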