An Efficient Monte Carlo-based Probabilistic Time-Dependent Routing Calculation Targeting a Server-Side Car Navigation System
Incorporating speed probability distributions into the route-planning
computation of car navigation systems yields more accurate and precise
responses. In this paper, we propose a novel approach for dynamically selecting
the number of samples used for the Monte Carlo simulation to solve the
Probabilistic Time-Dependent Routing (PTDR) problem, thus improving the
computation efficiency. The proposed method proactively determines, for each
specific request, the number of simulations needed to extract the travel-time
estimate while respecting an error threshold on output quality. The methodology
requires little effort on the application development side: we adopted an
aspect-oriented programming language (LARA) to instrument the code and a
flexible dynamic autotuning library (mARGOt) to take the tuning decisions on
the number of samples, improving execution efficiency. Experimental results demonstrate
that the proposed adaptive approach saves a large fraction of simulations
(between 36% and 81%) with respect to a static approach while considering
different traffic situations, paths and error requirements. Given the
negligible runtime overhead of the proposed approach, it results in an
execution-time speedup between 1.5x and 5.1x. This speedup is reflected at
infrastructure-level in terms of a reduction of around 36% of the computing
resources needed to support the whole navigation pipeline.
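The core idea of trading sample count against an output error threshold can be illustrated with a short sequential stopping rule. Note that the paper's method is proactive (it chooses the sample count per request up front); the sketch below, including the speed distribution and all names, is a hypothetical illustration rather than the authors' implementation:

```python
import random
import statistics

def adaptive_travel_time(sample_speed, distance_km, rel_err=0.05,
                         batch=100, max_samples=10_000):
    """Estimate mean travel time by Monte Carlo, adding samples in
    batches until the 95% confidence half-width falls below
    rel_err * mean (or max_samples is reached)."""
    times = []
    while len(times) < max_samples:
        times.extend(distance_km / sample_speed() for _ in range(batch))
        mean = statistics.mean(times)
        half_width = 1.96 * statistics.stdev(times) / len(times) ** 0.5
        if half_width <= rel_err * mean:
            break
    return mean, len(times)

# Hypothetical speed distribution (km/h) for one road segment.
speeds = lambda: max(5.0, random.gauss(60.0, 15.0))
est, n = adaptive_travel_time(speeds, distance_km=12.0)
```

Tighter error thresholds or wider speed distributions drive `n` up; the savings the paper reports come from not paying a worst-case `n` on every request.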
An Incremental Language Conversion Method to Convert C++ into Ada95
This thesis develops a methodology to incrementally convert a legacy object-oriented C++ application into Ada95. Using the experience of converting a graphics application, called Remote Debriefing Tool (RDT), in the Graphics Lab of the Air Force Institute of Technology (AFIT), this effort defined a process to convert a C++ application into Ada95. The methodology consists of five phases: (1) reorganizing the software application, (2) breaking mutual dependencies, (3) creating package specifications to interface with the existing C++ classes, (4) converting C++ code into Ada programs, and (5) embellishing. This methodology used GNAT's low-level C++ interface capabilities to support the incremental conversion. The goal of this methodology is not only to correctly convert C++ code into Ada95, but also to take advantage of Ada's features which support good software engineering principles.
SimInf: An R package for Data-driven Stochastic Disease Spread Simulations
We present the R package SimInf which provides an efficient and very flexible
framework to conduct data-driven epidemiological modeling in realistic large
scale disease spread simulations. The framework integrates infection dynamics
in subpopulations as continuous-time Markov chains using the Gillespie
stochastic simulation algorithm and incorporates available data such as births,
deaths and movements as scheduled events at predefined time-points. Using C
code for the numerical solvers and OpenMP to divide work over multiple
processors ensures high performance when simulating a sample outcome. One of
our design goals was to make SimInf extensible and to enable use of the
numerical solvers from other R extension packages, in order to facilitate
complex epidemiological research. In this paper, we provide a technical description of
the framework and demonstrate its use on some basic examples. We also discuss
how to specify and extend the framework with user-defined models.
Comment: The manual has been updated to the latest version of SimInf (v6.0.0).
41 pages, 16 figures.
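SimInf itself drives compiled C solvers from R, but the Gillespie stochastic simulation algorithm it builds on fits in a few lines. A minimal sketch for a single SIR subpopulation (the function name and parameters are illustrative, not SimInf's API):

```python
import random

def gillespie_sir(s, i, r, beta, gamma, t_end):
    """Gillespie SSA for an SIR subpopulation: draw an exponential
    waiting time from the total event rate, then pick infection or
    recovery in proportion to its rate."""
    t = 0.0
    while t < t_end and i > 0:
        rate_inf = beta * s * i / (s + i + r)  # frequency-dependent infection
        rate_rec = gamma * i                   # recovery
        total = rate_inf + rate_rec
        if total == 0:
            break
        t += random.expovariate(total)         # time to next event
        if random.random() < rate_inf / total:
            s, i = s - 1, i + 1                # S -> I
        else:
            i, r = i - 1, r + 1                # I -> R
    return s, i, r

random.seed(1)
final = gillespie_sir(s=990, i=10, r=0, beta=0.5, gamma=0.25, t_end=100.0)
```

SimInf's contribution on top of this loop is the data-driven part: interleaving scheduled events (births, deaths, movements) at predefined time points and running many subpopulations in parallel via OpenMP.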
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects to middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, by definition MTC applications are structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
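The task-graph structure described above, discrete tasks with explicit input and output dependencies as edges, is what an MTC dispatcher must respect when minimizing dispatch overhead. A minimal sketch using Kahn's topological sort over a hypothetical four-task workflow (all task names are invented for illustration):

```python
from collections import deque

def dispatch_order(tasks):
    """Given a mapping task -> list of prerequisite tasks (the explicit
    input dependencies forming the graph edges), return one valid
    dispatch order via Kahn's topological sort."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents = {t: [] for t in tasks}
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()          # a real dispatcher would launch t here
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:   # all inputs available
                ready.append(nxt)
    return order

# Hypothetical workflow: two preprocessing tasks feed an analysis
# task, whose output feeds a summary task.
graph = {"pre_a": [], "pre_b": [], "analyze": ["pre_a", "pre_b"],
         "summarize": ["analyze"]}
order = dispatch_order(graph)
```

In an MTC setting the `ready` queue can hold many independent short tasks at once, which is why dispatch latency and file-system-mediated communication dominate the engineering constraints the report discusses.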
CALIBRATION, VERIFICATION, AND DIAGNOSIS OF A SEASON-AHEAD DROUGHT PREDICTION MODEL: LIMITS TO PREDICTABILITY IN CENTRAL TEXAS
The Variable Infiltration Capacity (VIC) land surface hydrology model was calibrated and verified for prediction of naturalized flows into the Highland Lakes system in central Texas. Using seasonal climate forecasts downscaled to daily precipitation, maximum and minimum temperatures, and wind speeds, the VIC model was run to generate ensemble inflow hindcasts for two seasons – March through June and July through October – corresponding to the period of 1960 through 2010. A diagnosis of the seasonal hindcast results determined that inflows are not as heavily influenced by the physical soil moisture state as expected, and that variability in statistical precipitation downscaling can combine with hydrologic model errors to degrade the skill in streamflow forecasts. Recommendations are made for future work to improve forecast skill
World Agriculture and Climate Change: Economic Adaptations
Recent studies suggest that possible global increases in temperature and changes in precipitation patterns during the next century will affect world agriculture. Because of the ability of farmers to adapt, however, these changes are not likely to imperil world food production. Nevertheless, world production of all goods and services may decline if climate change is severe enough or if cropland expansion is hindered. Impacts are not equally distributed around the world.
Keywords: climate change, world agriculture, Environmental Economics and Policy
A new P-Lingua toolkit for agile development in membrane computing
Membrane computing is a massively parallel and non-deterministic bioinspired computing paradigm whose models are called P systems. Validating and testing such models is a challenge which is being overcome by developing simulators. Regardless of their heterogeneity, such simulators need to read and interpret the models to be simulated. To this end, P-Lingua is a high-level P system definition language which has been widely used in the last decade. The P-Lingua ecosystem includes not only the language, but also libraries and software tools for parsing and simulating membrane computing models. Each version of P-Lingua supported new types or variants of P systems. This leads to a shortcoming: only a predefined list of variants can be used, thus making it difficult for researchers to study custom ones. Moreover, derivation modes cannot be user-defined, i.e., the way in which P system computations should be generated is determined by the simulation algorithm in the source code.
The main contribution of this paper is a completely new design of the P-Lingua language, called P-Lingua 5, in which the user can define custom variants and derivation modes, among other improvements such as procedural programming and simulation directives. It is worth mentioning that it is backward-compatible with previous versions of the language. A completely new set of command-line tools is provided for parsing and simulating P-Lingua 5 files. Finally, several examples are included in this paper covering the most common P system types.
Agencia Estatal de Investigación TIN2017-89842-
Development of a computer model to predict platform station keeping requirements in the Gulf of Mexico using remote sensing data
Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternative approaches to the project, and the details of the simulation are presented.