A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings
Stable matching is a classical combinatorial problem that has been the
subject of intense theoretical and empirical study since its introduction in
1962 in a seminal paper by Gale and Shapley. In this paper, we provide a new
upper bound on f(n), the maximum number of stable matchings that a stable
matching instance with n men and n women can have. It has been a
long-standing open problem to understand the asymptotic behavior of f(n) as
n tends to infinity, first posed by Donald Knuth in the 1970s. Until now the best
lower bound was approximately 2.28^n, and the best upper bound was
2^(n log n − O(n)). In this paper, we show that for all n, f(n) ≤ c^n for some
universal constant c. This matches the lower bound up to the base of the
exponent. Our proof is based on a reduction to counting the number of downsets
of a family of posets that we call "mixing". The latter might be of independent
interest.
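The 1962 Gale and Shapley algorithm mentioned above (deferred acceptance) finds one stable matching, the man-optimal one; the bound in this paper concerns how many stable matchings an instance can have in total. A minimal sketch, with preference lists as 0-indexed Python lists (all names here are illustrative):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: men propose, women tentatively accept.

    men_prefs[m] is man m's preference list of women (best first);
    women_prefs[w] likewise. Returns a stable matching as a dict
    mapping each man to his matched woman.
    """
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list (lower is better)
    rank = [{m: i for i, m in enumerate(women_prefs[w])} for w in range(n)]
    next_proposal = [0] * n    # index of each man's next woman to propose to
    fiance = [None] * n        # fiance[w] = man currently held by woman w
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if fiance[w] is None:
            fiance[w] = m                 # w was unmatched: accept
        elif rank[w][m] < rank[w][fiance[w]]:
            free_men.append(fiance[w])    # w trades up; old partner is free
            fiance[w] = m
        else:
            free_men.append(m)            # w rejects m; he proposes again later
    return {m: w for w, m in enumerate(fiance)}
```

Each man proposes at most n times, so the algorithm runs in O(n^2); counting all stable matchings, by contrast, is the hard problem this paper bounds.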
Parallel Construction of Wavelet Trees on Multicore Architectures
The wavelet tree has become a very useful data structure to efficiently
represent and query large volumes of data in many different domains, from
bioinformatics to geographic information systems. One problem with wavelet
trees is their construction time. In this paper, we introduce two algorithms
that reduce the time complexity of a wavelet tree's construction by taking
advantage of now-ubiquitous multicore machines.
Our first algorithm constructs all the levels of the wavelet tree in parallel in
O(n) time and O(n lg σ + σ lg n) bits of working space, where n
is the size of the input sequence and σ is the size of the alphabet. Our
second algorithm constructs the wavelet tree in a domain-decomposition fashion,
using our first algorithm in each segment, reaching O(lg n + n lg σ / p) time and
O(n lg σ + p σ lg n) bits of extra space, where p is the
number of available cores. Both algorithms are practical and report good
speedup for large real datasets.

Comment: This research has received funding from the European Union's Horizon
2020 research and innovation programme under the Marie Skłodowska-Curie
Actions H2020-MSCA-RISE-2015 BIRDS GA No. 69094
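To make the per-level structure concrete, here is a minimal serial sketch of level-by-level wavelet tree construction (illustrative code, not the paper's implementation). Because each level's bitmap depends only on the original sequence, the levels are the natural units of work that a parallel construction can build concurrently:

```python
from math import ceil, log2

def wavelet_levels(seq, sigma):
    """Build the per-level bitmaps of a pointerless wavelet tree.

    seq: list of symbols in [0, sigma). Level l stores, with symbols
    arranged in that level's implicit node order (a stable partition
    by the top l bits), bit l (counted from the most significant) of
    every symbol. Each level is computed independently of the others.
    """
    height = max(1, ceil(log2(sigma)))
    levels = []
    for l in range(height):
        shift = height - 1 - l
        # stable-partition positions by their symbol's top l bits,
        # then emit bit number l of each symbol in that order
        order = sorted(range(len(seq)), key=lambda i: seq[i] >> (shift + 1))
        levels.append([(seq[i] >> shift) & 1 for i in order])
    return levels
```

Python's sorted is stable, which is what keeps symbols in sequence order inside each node; a real implementation would use packed bitvectors rather than lists of ints.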
SILC: a new Planck Internal Linear Combination CMB temperature map using directional wavelets
We present new clean maps of the CMB temperature anisotropies (as measured by
Planck) constructed with a novel internal linear combination (ILC) algorithm
using directional, scale-discretised wavelets (Scale-discretised,
directional wavelet ILC, or SILC). Directional wavelets, when convolved with
signals on the sphere, can separate the anisotropic filamentary structures
which are characteristic of both the CMB and foregrounds. Extending previous
component separation methods, which use the frequency, spatial and harmonic
signatures of foregrounds to separate them from the cosmological background
signal, SILC can additionally use morphological information in the foregrounds
and CMB to better localise the cleaning algorithm. We test the method on Planck
data and simulations, demonstrating consistency with existing component
separation algorithms, and discuss how to optimise the use of morphological
information by varying the number of directional wavelets as a function of
spatial scale. We find that combining the use of directional and axisymmetric
wavelets depending on scale could yield higher quality CMB temperature maps.
Our results set the stage for the application of SILC to polarisation
anisotropies through an extension to spin wavelets.

Comment: 15 pages, 13 figures. Minor changes to match version published in
MNRAS. Map products available at http://www.silc-cmb.or
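The core of any ILC step can be sketched in a few lines: per scale (and, in SILC, per direction), solve for the minimum-variance weights with unit response to the CMB. The snippet below shows the generic ILC weight formula on toy data, not the exact SILC pipeline; all names and the simulated channels are illustrative:

```python
import numpy as np

def ilc_weights(cov, a):
    """Minimum-variance ILC weights for one scale/direction.

    cov: (nfreq, nfreq) empirical covariance of the frequency maps
    (in a wavelet ILC, of their wavelet coefficients at one scale);
    a:   (nfreq,) CMB response, all ones for maps in thermodynamic
    temperature units. Minimises output variance subject to w @ a == 1.
    """
    cinv_a = np.linalg.solve(cov, a)
    return cinv_a / (a @ cinv_a)

# toy example: 3 "frequency channels" = same CMB + independent noise
rng = np.random.default_rng(0)
cmb = rng.standard_normal(10_000)
maps = np.stack([cmb + s * rng.standard_normal(10_000) for s in (0.5, 1.0, 2.0)])
w = ilc_weights(np.cov(maps), np.ones(3))
cleaned = w @ maps    # variance-minimising combination, unit CMB response
```

Because the weights sum to one over channels with equal CMB response, the CMB passes through unchanged while uncorrelated foreground and noise power is suppressed; directional wavelets let the covariance (and hence the weights) adapt to local, oriented structure.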
One machine, one minute, three billion tetrahedra
This paper presents a new scalable parallelization scheme to generate the 3D
Delaunay triangulation of a given set of points. Our first contribution is an
efficient serial implementation of the incremental Delaunay insertion
algorithm. A simple dedicated data structure, an efficient sorting of the
points, and an optimized insertion algorithm allowed us to accelerate
reference implementations by a factor of three. Our second contribution
is a multi-threaded version of the Delaunay kernel that is able to concurrently
insert vertices. Moore curve coordinates are used to partition the point set,
avoiding heavy synchronization overheads. Conflicts are managed by modifying
the partitions with a simple rescaling of the space-filling curve. The
performance of our implementation has been measured on three different
processors: an Intel Core i7, an Intel Xeon Phi and an AMD EPYC, on which we
have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds
to a generation rate of over 55 million tetrahedra per second. We finally show
how this very efficient parallel Delaunay triangulation can be integrated into a
Delaunay refinement mesh generator that takes as input the triangulated
surface boundary of the volume to mesh.
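Spatial sorting along a space-filling curve is the heart of the partitioning step. The paper uses Moore curve coordinates; the sketch below uses the simpler Morton (Z-order) key as an illustrative stand-in: interleaving the bits of the quantised coordinates yields a one-dimensional key whose sort order keeps spatial neighbours together, so consecutive chunks of the sorted array form compact per-thread partitions and concurrent Delaunay insertions rarely conflict.

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of quantised 3D coordinates into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (3 * b)        # x bit -> position 3b
        key |= ((y >> b) & 1) << (3 * b + 1)    # y bit -> position 3b+1
        key |= ((z >> b) & 1) << (3 * b + 2)    # z bit -> position 3b+2
    return key

def sort_points(points, bits=10):
    """Sort points in the unit cube by their Morton key.

    Splitting the sorted list into p equal chunks gives one spatially
    compact partition per thread; rescaling the quantisation grid is
    the kind of cheap re-partitioning the paper uses to resolve
    conflicts (with a Moore rather than a Morton curve).
    """
    scale = (1 << bits) - 1
    return sorted(points, key=lambda p: morton3(
        int(p[0] * scale), int(p[1] * scale), int(p[2] * scale), bits))
```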
Characterizing the spectral properties and time variation of the in-vehicle wireless communication channel
To deploy effective communication systems in vehicle cavities, it is critical to understand the time variation of the in-vehicle channel. Rapid channel variation is addressed first; it is characterised in the frequency domain as a Doppler spread. It is then shown that for typical Doppler spreads, the in-vehicle channel is underspread, and therefore the
information capacity approaches the capacity achieved with perfect receiver channel state information in the infinite bandwidth limit. Measurements are performed for a number of channel variation scenarios (absorptive motion, reflective motion, one antenna moving, both antennas moving), at a number of carrier frequencies and for a number of cavity loading scenarios. It is found that the Doppler spread increases with carrier frequency; however, the type of channel variation and loading appear to have little effect.
Channel variation over a longer time period is also measured, to characterise the slower channel variation. Channel variation is a function of the cavity occupant motion, which is difficult to model theoretically; therefore an empirical model for the slow channel variation is proposed, which leads to an improved estimate of the channel state.

This work is supported by the U.K. Engineering and Physical Sciences
Research Council (EPSRC) and National Physical Laboratory (NPL) under an
EPSRC-NPL Industrial CASE studentship programme on the subject of intra-Vehicular Wireless Sensor Networks. The work of T. H. Loh was supported by
the 2009 - 2012 Physical Program and 2012 - 2015 Electromagnetic Metrology
Program of the National Measurement Office, an Executive Agency of the
U.K. Department for Business, Innovation and Skills, under Projects 113860
and EMT13020, respectively. This is the author accepted manuscript. The final version can be found on the publisher's website at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=682581
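The Doppler spread measured above can be estimated directly from a sampled channel gain; a common estimator (illustrative here, not necessarily the authors' exact procedure) is the RMS width of the Doppler power spectrum:

```python
import numpy as np

def doppler_spread(h, fs):
    """RMS Doppler spread (Hz) of a sampled complex channel gain h.

    h:  channel gain time series at one carrier frequency, sampled at
        fs Hz. The Doppler power spectrum is |FFT(h)|^2 over Doppler
    frequency; the function returns the square root of its second
    central moment. The channel is underspread when this spread times
    the channel's delay spread is much less than one.
    """
    spectrum = np.abs(np.fft.fft(h)) ** 2
    freqs = np.fft.fftfreq(len(h), d=1.0 / fs)
    mean = (freqs * spectrum).sum() / spectrum.sum()
    return np.sqrt(((freqs - mean) ** 2 * spectrum).sum() / spectrum.sum())
```

For example, a gain oscillating at 10 Hz (fading through nulls 10 times a second) has a 10 Hz RMS Doppler spread, while a static channel has a spread near zero.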
Succinct Oblivious RAM
As online storage services become increasingly common, it is important that users' private information is protected from database access pattern analyses. Oblivious RAM (ORAM) is a cryptographic primitive that enables users to perform arbitrary database accesses without revealing any information about the access pattern to the server. Previous ORAM studies focused mostly on reducing the access overhead. Consequently, the access overhead of the state-of-the-art ORAM constructions is almost at practical levels in certain application scenarios, such as secure processors. However, we anticipate that server space usage could become a new important issue in the coming big-data era. To enable large-scale computation in security-aware settings, it is necessary to rethink the ORAM server space cost by big-data standards.
In this paper, we introduce "succinctness" as a theoretically tractable and practically relevant criterion of ORAM server space efficiency in the big-data era. We then propose two succinct ORAM constructions that also exhibit state-of-the-art performance in terms of bandwidth blowup and user space. We also give non-asymptotic analyses and simulation results which indicate that the proposed ORAM constructions are practically effective.
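For intuition about what ORAM guarantees and what it costs, here is the simplest scheme that leaks nothing (a toy baseline, not one of the paper's constructions): hide the access pattern by touching every server block on every access. Its bandwidth blowup is O(N), which is what tree-based ORAMs reduce to polylogarithmic; succinctness, in the usual data-structures sense, additionally asks that the server store only (1 + o(1)) times the plaintext size.

```python
class LinearScanORAM:
    """Toy baseline ORAM: scan all N blocks on every logical access.

    The server sees the identical sequence of touches regardless of
    which address was requested, so the access pattern reveals nothing.
    A real ORAM would also encrypt and re-randomise each block on
    every pass; that layer is omitted here for clarity.
    """
    def __init__(self, n, default=0):
        self.server = [default] * n

    def access(self, op, addr, value=None):
        result = None
        for i in range(len(self.server)):   # same N touches for every query
            block = self.server[i]
            if i == addr:
                result = block
                if op == "write":
                    block = value
            self.server[i] = block          # re-write every block
        return result
```

The per-access cost of N block transfers is exactly the "bandwidth blowup" the abstract refers to; the paper's contribution is keeping that blowup competitive while also making the server array succinct.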
Journal of Open Source Software (JOSS): design and first-year review
This article describes the motivation, design, and progress of the Journal of Open Source Software (JOSS). JOSS is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, JOSS addresses the dearth of rewards for key contributions to science made in the form of software. JOSS publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A JOSS article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a JOSS submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, JOSS does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, JOSS gives it a DOI, deposits its metadata in Crossref, and the article can begin collecting citations on indexers like Google Scholar and other services. Authors retain copyright of their JOSS article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, JOSS published 111 articles, with more than 40 additional articles currently under review. JOSS is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative
Prescription of analgesics in the emergency room of the Antonio Lenín Fonseca National Reference Hospital, District II, April to October 2011
This drug utilization study, of the prescription-indication type, was carried out to determine how analgesics are used, evaluating their effectiveness and safety. A total of 168 cases were studied, corresponding to the same number of patients who presented with some type of pain and were attended in the hospital's Emergency Department between April and October 2011.

Data collection forms were used for the study; part of the information needed to complete each form was taken from each individual's record sheet, while for certain parameters the patients themselves were interviewed, providing the information required to meet the objectives of this investigation.

In both sexes, musculoskeletal pain predominated at 15%, followed by chest pain and headache, which were among the most frequent complaints in the patients studied, at 9% each. The most frequently prescribed drug was metamizole, given to 80% of the patients studied, followed by ketorolac and diclofenac at 16% and 3%, respectively.

These analgesics produced adverse drug reactions, including burning sensation, dizziness and drowsiness; metamizole predominated, accounting for 27% of the adverse reactions, followed by ketorolac and morphine at 1% each. Of the analgesics used, 94% were prescribed correctly, with ketorolac being the most effective: it relieved the patients' reported pain within the first 30 minutes after administration.
Current challenges of research on filamentous fungi in relation to human welfare and a sustainable bio-economy: a white paper.
The EUROFUNG network is a virtual centre of multidisciplinary expertise in the field of fungal biotechnology. The first academic-industry Think Tank was hosted by EUROFUNG to summarise the state of the art and future challenges in fungal biology and biotechnology in the coming decade. Currently, fungal cell factories are important for bulk manufacturing of organic acids, proteins, enzymes, secondary metabolites and active pharmaceutical ingredients in white and red biotechnology. In contrast, fungal pathogens of humans kill more people than malaria or tuberculosis. Fungi significantly impact global food security, damaging crop production worldwide, causing disease in domesticated animals, and spoiling an estimated 10% of harvested crops. A number of challenges now need to be addressed to improve our strategies to control fungal pathogenicity and to optimise the use of fungi as sources of novel compounds and as cell factories for large-scale manufacture of bio-based products. This white paper reports on the discussions of the Think Tank meeting and the suggestions made for moving fungal bio(techno)logy forward.
