On hardcoding finite state automata processing
In this paper, we present various experiments in hardcoding the transition table of a finite state machine directly into string-recognizing code. Measurements are provided to show the time-efficiency gains achieved by various hardcoded versions over the traditional table-driven approach.
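As an illustration of the contrast (a minimal Python sketch, assuming a two-state automaton that accepts binary strings with an even number of 1s; the paper itself measures compiled implementations):

```python
# Table-driven recognizer: transitions are looked up in a data structure
# at run time.
TABLE = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
ACCEPTING = {0}

def accepts_table(s):
    state = 0
    for ch in s:
        state = TABLE[(state, ch)]
    return state in ACCEPTING

# Hardcoded recognizer: the same transition table is compiled into the
# control flow, eliminating the per-symbol table lookup.
def accepts_hardcoded(s):
    state = 0
    for ch in s:
        if state == 0:
            state = 1 if ch == '1' else 0
        else:
            state = 0 if ch == '1' else 1
    return state == 0
```

Both functions accept the same language; the hardcoded version trades flexibility for removal of the table indirection that such measurements target.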
Hardcoding and dynamic implementation of finite automata
The theoretical complexity of a string recognizer is linear in the length of the string being tested for acceptance. However, for some kinds of strings the processing time largely depends on the number of states visited by the recognizer at run time. Various experiments are conducted in order to compare the time efficiency of hardcoded and table-driven algorithms on such string patterns. The results of the experiments are cross-compared to show the efficiency of the hardcoded algorithm over its table-driven counterpart. This helps further the investigation of the dynamic implementation of finite automata. It is shown that, in the dynamic framework, we can rely on the history of previously visited states to predict the suitable algorithm for acceptance testing.
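A hedged sketch of predicting the recognizer from run-time history (the automaton, the policy, and the threshold below are illustrative assumptions on my part, not the paper's experimental setup):

```python
# Hypothetical dispatcher: choose an implementation for future inputs
# based on which states past inputs actually visited.
TABLE = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}

def states_visited(s, start=0):
    """Run the table-driven recognizer, recording every state visited."""
    state, seen = start, {start}
    for ch in s:
        state = TABLE[(state, ch)]
        seen.add(state)
    return seen

def predict_recognizer(history, threshold=2):
    """If past inputs kept revisiting a small set of states, a version
    hardcoding just those states is predicted to pay off; otherwise the
    generic table-driven recognizer is kept."""
    visited = set()
    for s in history:
        visited |= states_visited(s)
    return "hardcoded" if len(visited) <= threshold else "table-driven"
```

The policy is deliberately crude: it only counts distinct states, whereas a real predictor could also weight states by visit frequency.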
Kinematic modelling of the Milky Way using the RAVE and GCS stellar surveys
We investigate the kinematic parameters of the Milky Way disc using the RAVE
and GCS stellar surveys. We do this by fitting a kinematic model to the data
taking the selection function of the data into account. For stars in the GCS we
use all phase-space coordinates, but for RAVE stars we use only . Using an MCMC
technique, we investigate the full posterior distributions of the parameters
given the data. We investigate the 'age-velocity dispersion'
relation for the three kinematic components
(), the radial dependence of the velocity
dispersions, the Solar peculiar motion (), the
circular speed at the Sun and the fall of mean azimuthal motion with
height above the mid-plane. We confirm that the Besançon-style Gaussian
model accurately fits the GCS data, but fails to match the details of the more
spatially extended RAVE survey. In particular, the Shu distribution function
(DF) handles non-circular orbits more accurately and provides a better fit to
the kinematic data. The Gaussian distribution function not only fits the data
poorly but systematically underestimates the fall of velocity dispersion with
radius. We find that correlations exist between a number of parameters, which
highlights the importance of doing joint fits. The large size of the RAVE
survey allows us to get precise values for most parameters. However, large
systematic uncertainties remain, especially in and . We
find that, for an extended sample of stars, is underestimated by as
much as if the vertical dependence of the mean azimuthal motion is
neglected. Using a simple model for vertical dependence of kinematics, we find
that it is possible to match the Sgr A* proper motion without any need for
being larger than that estimated locally by surveys like GCS. Comment: 27 pages, 13 figures, accepted for publication in ApJ
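The kind of posterior exploration described can be sketched in miniature. The following toy (an assumption on my part, not the paper's pipeline) fits a single velocity-dispersion parameter to synthetic one-dimensional velocities with random-walk Metropolis:

```python
import math
import random

random.seed(42)

# Synthetic stand-in data: 1-D stellar velocities drawn from a Gaussian
# with an assumed true dispersion of 30 km/s.
TRUE_SIGMA = 30.0
data = [random.gauss(0.0, TRUE_SIGMA) for _ in range(500)]

def log_posterior(sigma):
    """Gaussian likelihood with a flat prior on sigma > 0 (constants dropped)."""
    if sigma <= 0:
        return float('-inf')
    return -len(data) * math.log(sigma) - sum(v * v for v in data) / (2 * sigma ** 2)

# Random-walk Metropolis: propose a nearby sigma, accept with the usual ratio.
sigma = 20.0
lp = log_posterior(sigma)
chain = []
for _ in range(5000):
    proposal = sigma + random.gauss(0.0, 1.0)
    lp_prop = log_posterior(proposal)
    if math.log(random.random()) < lp_prop - lp:
        sigma, lp = proposal, lp_prop
    chain.append(sigma)

# Discard burn-in and summarize the posterior.
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

The real analysis fits many correlated parameters jointly, which is exactly why the abstract stresses joint fits; this one-parameter toy only shows the sampling mechanics.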
The RAVE survey: the Galactic escape speed and the mass of the Milky Way
We construct new estimates on the Galactic escape speed at various
Galactocentric radii using the latest data release of the Radial Velocity
Experiment (RAVE DR4). Compared to previous studies we have a database larger
by a factor of 10 as well as reliable distance estimates for almost all stars.
Our analysis is based on the statistical analysis of a rigorously selected
sample of 90 high-velocity halo stars from RAVE and a previously published data
set. We calibrate and extensively test our method using a suite of cosmological
simulations of the formation of Milky Way-sized galaxies. Our best estimate of
the local Galactic escape speed, which we define as the minimum speed required
to reach three virial radii, is km/s (90%
confidence) with an additional 5% systematic uncertainty, where is
the Galactocentric radius encompassing a mean over-density of 340 times the
critical density for closure in the Universe. From the escape speed we further
derive estimates of the mass of the Galaxy using a simple mass model with two
options for the mass profile of the dark matter halo: an unaltered and an
adiabatically contracted Navarro, Frenk & White (NFW) sphere. If we fix the
local circular velocity the latter profile yields a significantly higher mass
than the un-contracted halo, but if we instead use the statistics on halo
concentration parameters in large cosmological simulations as a constraint we
find very similar masses for both models. Our best estimate for , the
mass interior to (dark matter and baryons), is M (corresponding to M). This estimate is in good agreement with recently published
independent mass estimates based on the kinematics of more distant halo stars
and the satellite galaxy Leo I. Comment: 16 pages, 15 figures; accepted for
publication in Astronomy & Astrophysics
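As a back-of-envelope companion to the escape-speed definition above (minimum speed to reach three virial radii), here is a hedged sketch with an un-contracted NFW halo; the mass, radius, and concentration are illustrative placeholders, not the paper's fitted values:

```python
import math

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def phi_nfw(r_kpc, m_vir, r_vir, c):
    """Potential of an NFW sphere with virial mass m_vir, virial radius
    r_vir, and concentration c."""
    rs = r_vir / c
    norm = math.log(1 + c) - c / (1 + c)
    return -(G * m_vir / norm) * math.log(1 + r_kpc / rs) / r_kpc

def v_escape(r_kpc, m_vir, r_vir, c):
    # Minimum speed to climb from r to 3 r_vir (not to infinity).
    dphi = phi_nfw(3 * r_vir, m_vir, r_vir, c) - phi_nfw(r_kpc, m_vir, r_vir, c)
    return math.sqrt(2 * dphi)

# Illustrative Milky Way-like numbers (placeholders, not the paper's fit):
v_sun = v_escape(8.3, 1.3e12, 290.0, 10.0)  # km/s near the solar radius
```

Truncating the climb at three virial radii, as in the paper's definition, gives a smaller number than escape to infinity; the adiabatically contracted profile discussed in the abstract would change the inner potential and hence this value.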
Computer-aided design of nano-filter construction using DNA self-assembly
Computer-aided design plays a fundamental role in both top-down and bottom-up nano-system fabrication. This paper presents a bottom-up nano-filter patterning process based on DNA self-assembly. In this study we designed a new method to construct fully designed nano-filters with pores between 5 nm and 9 nm in diameter. Our calculations illustrate that by constructing such a nano-filter we would be able to separate many molecules.
Dengue in Madeira Island
This is a preprint of a paper whose final and definite form will be published in the volume
Mathematics of Planet Earth that initiates the book series CIM Series in Mathematical Sciences
(CIM-MS) published by Springer. Submitted Oct/2013; Revised 16/July/2014 and 20/Sept/2014;
Accepted 28/Sept/2014. Dengue is a vector-borne disease, and 40% of the world population is at risk.
Dengue transcends international borders and can be found in tropical and subtropical
regions around the world, predominantly in urban and semi-urban areas. A
model for dengue disease transmission, composed of mutually exclusive compartments
representing the human and vector dynamics, is presented in this study. The
data are from Madeira, a Portuguese island, where an unprecedented outbreak was
detected in October 2012. The aim of this work is to simulate the repercussions
of the control measures in the fight against the disease.
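A minimal sketch of a host-vector compartmental model in the spirit of the abstract (all rates and initial conditions below are illustrative placeholders, not the Madeira calibration):

```python
# Human compartments (fractions): susceptible, infected, recovered.
# Vector compartments (fractions): susceptible, infected.
beta_hv = 0.3   # transmission rate, infectious vector -> human (1/day)
beta_vh = 0.3   # transmission rate, infectious human -> vector (1/day)
gamma = 0.1     # human recovery rate (1/day)
mu_v = 1 / 14   # mosquito birth/death rate (1/day)
dt, days = 0.1, 120

S_h, I_h, R_h = 0.999, 0.001, 0.0
S_v, I_v = 0.99, 0.01

# Forward-Euler integration of the coupled ODEs.
for _ in range(int(days / dt)):
    new_h = beta_hv * S_h * I_v        # new human infections
    new_v = beta_vh * S_v * I_h        # new vector infections
    dS_h = -new_h
    dI_h = new_h - gamma * I_h
    dR_h = gamma * I_h
    dS_v = mu_v * (1 - S_v) - new_v    # births replace dead mosquitoes
    dI_v = new_v - mu_v * I_v
    S_h += dt * dS_h
    I_h += dt * dI_h
    R_h += dt * dR_h
    S_v += dt * dS_v
    I_v += dt * dI_v
```

Control measures such as vector reduction would enter by raising mu_v or lowering the transmission rates; the human compartments stay mutually exclusive, so their fractions always sum to one.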
Survival probability of mutually killing Brownian motions and the O'Connell process
Recently O'Connell introduced an interacting diffusive particle system in
order to study a directed polymer model in 1+1 dimensions. The infinitesimal
generator of the process is a harmonic transform of the quantum Toda-lattice
Hamiltonian by the Whittaker function. As a physical interpretation of this
construction, we show that the O'Connell process without drift is realized as a
system of mutually killing Brownian motions conditioned on all particles
surviving forever. When the characteristic length of the killing interaction
goes to zero, the process is reduced to the noncolliding Brownian
motion (the Dyson model).Comment: v2: AMS-LaTeX, 20 pages, 2 figures, minor corrections made for
publication in J. Stat. Phys.
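A toy Monte Carlo for the killing interaction (an illustration assumed by me, not O'Connell's construction): one-dimensional Brownian particles are killed whenever any pair comes within a distance eps, and the fraction of runs in which everyone survives estimates the survival probability up to a finite horizon:

```python
import random

random.seed(7)

def survival_probability(n=3, eps=0.1, t_max=1.0, dt=0.002, trials=1000):
    """Estimate the probability that n Brownian particles, started one unit
    apart, all stay farther than eps from each other up to time t_max."""
    sqdt = dt ** 0.5
    steps = int(t_max / dt)
    survived = 0
    for _ in range(trials):
        x = [float(i) for i in range(n)]  # starts at 0, 1, ..., n-1
        alive = True
        for _ in range(steps):
            for i in range(n):
                x[i] += random.gauss(0.0, sqdt)
            xs = sorted(x)
            if any(xs[i + 1] - xs[i] < eps for i in range(n - 1)):
                alive = False  # a mutual killing event occurred
                break
        if alive:
            survived += 1
    return survived / trials

p = survival_probability()
```

As eps shrinks, the surviving paths are exactly the non-colliding ones, in line with the reduction to the Dyson model noted in the abstract.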
Grain Surface Models and Data for Astrochemistry
Abstract: The cross-disciplinary field of astrochemistry exists to understand the formation, destruction, and survival of molecules in astrophysical environments. Molecules in space are synthesized via a large variety of gas-phase reactions, and via reactions on dust-grain surfaces, where the surface acts as a catalyst. A broad consensus has been reached in the astrochemistry community on how to suitably treat gas-phase processes in models, and also on how to present the necessary reaction data in databases; however, no such consensus has yet been reached for grain-surface processes. A team of ∼25 experts covering observational, laboratory, and theoretical (astro)chemistry met in the summer of 2014 at the Lorentz Center in Leiden with the aim of providing solutions for this problem and of reviewing the current state of the art of grain-surface models, both in terms of technical implementation into models and in terms of the most up-to-date information available from experiments and chemical computations. This review builds on the results of that workshop and gives an outlook on future directions.
Research trends in combinatorial optimization
Acknowledgments: This work has been partially funded by the Spanish Ministry of Science, Innovation, and Universities through the project COGDRIVE (DPI2017-86915-C3-3-R). In this context, we would also like to thank the Karlsruhe Institute of Technology. Open access funding enabled and organized by Projekt DEAL. Peer reviewed. Publisher PDF.
Projected WIMP sensitivity of the LUX-ZEPLIN dark matter experiment
LUX-ZEPLIN (LZ) is a next-generation dark matter direct detection experiment that will operate 4850 feet underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota, USA. Using a two-phase xenon detector with an active mass of 7 tonnes, LZ will search primarily for low-energy interactions with weakly interacting massive particles (WIMPs), which are hypothesized to make up the dark matter in our galactic halo. In this paper, the projected WIMP sensitivity of LZ is presented based on the latest background estimates and simulations of the detector. For a 1000 live day run using a 5.6-tonne fiducial mass, LZ is projected to exclude at 90% confidence level spin-independent WIMP-nucleon cross sections above 1.4 × 10−48 cm2 for a 40 GeV/c2 mass WIMP.
Additionally, a 5σ discovery potential is projected, reaching cross sections below the exclusion limits of recent experiments. For spin-dependent WIMP-neutron(-proton) scattering, a sensitivity of 2.3 × 10−43 cm2 (7.1 × 10−42 cm2) for a 40 GeV/c2
mass WIMP is expected. With underground installation well underway, LZ is on track for commissioning at SURF in 2020.
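For orientation only, a hedged back-of-envelope of how an exposure of this size maps onto a cross-section exclusion in a zero-background counting approximation (the signal scale factor is a made-up placeholder; the actual projection uses full background models and profile-likelihood statistics):

```python
import math

fiducial_kg = 5.6 * 1000   # 5.6-tonne fiducial mass in kg
live_days = 1000
exposure = fiducial_kg * live_days  # kg*day

# Assumed signal yield: expected events per kg*day at a reference
# cross section of 1e-46 cm^2 for a 40 GeV/c^2 WIMP (illustrative only).
events_per_kgday_at_ref = 1e-5
ref_sigma = 1e-46  # cm^2

# With zero observed events, the 90% CL Poisson upper limit on the
# expected signal count is -ln(0.10), about 2.3 events.
n_limit = -math.log(0.10)

# The excluded cross section scales inversely with exposure times yield.
sigma_limit = ref_sigma * n_limit / (exposure * events_per_kgday_at_ref)
```

The point of the sketch is the scaling: doubling the exposure halves the excluded cross section in this approximation, which is why fiducial mass and live days dominate the sensitivity projection.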