Comparison of Gaussian process modeling software
Gaussian process fitting, or kriging, is often used to create a model from a
set of data. Many available software packages do this, but we show that very
different results can be obtained from different packages even when using the
same data and model. We describe the parameterization, features, and
optimization used by eight different fitting packages that run on four
different platforms. We then compare these eight packages using various test
functions and data sets, revealing stark differences between the packages. In
addition to prediction accuracy, we also evaluate the predictive
variance--which is important for assessing the precision of predictions and is
often used in stopping criteria.
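The predictive variance compared across packages falls directly out of the standard GP equations. A minimal, package-free sketch (the squared-exponential kernel, length-scale, and jitter values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rbf(a, b, ell=0.2, sf2=1.0):
    # Squared-exponential kernel k(x, x') = sf2 * exp(-(x - x')^2 / (2 ell^2))
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

X = np.linspace(0.0, 1.0, 8)          # training inputs
y = np.sin(2 * np.pi * X)             # training observations
noise = 1e-8                          # jitter / nugget term

K = rbf(X, X) + noise * np.eye(len(X))
Kinv_y = np.linalg.solve(K, y)

def predict(xs):
    Ks = rbf(xs, X)
    mean = Ks @ Kinv_y
    # Predictive variance: k(x*, x*) - k(x*, X) K^{-1} k(X, x*)
    var = rbf(xs, xs).diagonal() - np.einsum(
        "ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

# Variance is near zero at a training point and grows between points,
# which is why it is useful in stopping criteria.
m, v = predict(np.array([0.0, 0.5]))
```

Differences between packages typically come from how the kernel is parameterized (length-scale vs. inverse length-scale, variance vs. standard deviation) and how the hyperparameters are optimized, which this sketch fixes by hand.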
Simulation of Water Distribution Systems
In this paper a software package offering a means of simulating
complex water distribution systems is described. It has been
developed in the course of our investigations into the applicability
of neural networks and fuzzy systems for the implementation of
decision support systems in operational control of industrial
processes with case-studies taken from the water industry.
Examples of how the simulation package has been used in the design and
testing of algorithms for state estimation, confidence limit analysis and
fault detection are presented. Arguments for using suitable graphical
visualization techniques in solving problems such as meter placement or
leakage diagnosis are also given and supported by a set of examples.
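The abstract does not reproduce its algorithms, but state estimation for a water network is commonly posed as a weighted least-squares fit of a measurement model to meter readings, with confidence limits taken from the estimate covariance. A generic sketch of that idea (the matrix H, meter accuracies, and 95% limits are illustrative assumptions, not the authors' model):

```python
import numpy as np

# Hypothetical linearized measurement model z = H @ x + noise, relating
# meter readings z to unknown network states x (e.g. nodal heads).
H = np.array([[1.0,  0.0],
              [1.0, -1.0],
              [0.0,  1.0],
              [1.0,  1.0]])
sigma = np.array([0.05, 0.05, 0.05, 0.05])   # assumed meter accuracies
W = np.diag(1.0 / sigma**2)                  # weight = 1 / variance

x_true = np.array([2.0, 1.0])
rng = np.random.default_rng(0)
z = H @ x_true + rng.normal(0.0, sigma)      # simulated noisy readings

# Weighted least-squares state estimate
A = H.T @ W @ H
x_hat = np.linalg.solve(A, H.T @ W @ z)

# Confidence limits from the estimate covariance (H^T W H)^{-1}
cov = np.linalg.inv(A)
half_width = 1.96 * np.sqrt(np.diag(cov))    # approximate 95% limits
```

The same covariance also supports fault detection: a reading whose residual falls outside its confidence limits flags a suspect meter or leak.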
Data analysis with R in an experimental physics environment
A software package has been developed to bridge the R analysis model with the
conceptual analysis environment typical of radiation physics experiments. The
new package has been used in the context of a project for the validation of
simulation models, where it has demonstrated its capability to satisfy typical
requirements pertinent to the problem domain.
Comment: IEEE Nuclear Science Symposium 201
ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization
ROOT is an object-oriented C++ framework conceived in the high-energy physics
(HEP) community, designed for storing and analyzing petabytes of data in an
efficient way. Any instance of a C++ class can be stored into a ROOT file in a
machine-independent compressed binary format. In ROOT the TTree object
container is optimized for statistical data analysis over very large data sets
by using vertical data storage techniques. These containers can span a large
number of files on local disks, the web, or a number of different shared file
systems. In order to analyze this data, the user can choose from a wide set of
mathematical and statistical functions, including linear algebra classes,
numerical algorithms such as integration and minimization, and various methods
for performing regression analysis (fitting). In particular, ROOT offers
packages for complex data modeling and fitting, as well as multivariate
classification based on machine learning techniques. A central piece of these
analysis tools is the set of histogram classes, which provide binning of one-
and multi-dimensional data. Results can be saved in high-quality graphical
formats like PostScript and PDF or in bitmap formats like JPG or GIF. The result can
also be stored into ROOT macros that allow a full recreation and rework of the
graphics. Users typically create their analysis macros step by step, making use
of the interactive C++ interpreter CINT, while running over small data samples.
Once the development is finished, they can run these macros at full compiled
speed over large data sets, using on-the-fly compilation, or by creating a
stand-alone batch program. Finally, if processing farms are available, the user
can reduce the execution time of intrinsically parallel tasks - e.g. data
mining in HEP - by using PROOF, which will take care of optimally distributing
the work over the available resources in a transparent way.
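The "vertical data storage" mentioned above is a columnar layout: each variable lives in its own contiguous array (a TTree branch), so an analysis that histograms one variable reads only that column rather than every event record. A ROOT-free Python sketch of the idea (the dict-of-arrays layout stands in for TTree branches and is not ROOT's actual file format):

```python
import numpy as np

# Row-wise layout: a list of event records; reading one variable
# still touches every record.
events_rows = [{"pt": 10.0 + i, "eta": 0.1 * i, "phi": 0.01 * i}
               for i in range(1000)]

# Column-wise ("vertical") layout: one contiguous array per variable,
# analogous to TTree branches.
events_cols = {
    "pt":  np.array([e["pt"] for e in events_rows]),
    "eta": np.array([e["eta"] for e in events_rows]),
    "phi": np.array([e["phi"] for e in events_rows]),
}

# Histogramming one variable touches a single contiguous column,
# which is what makes the layout efficient for statistical analysis.
counts, edges = np.histogram(events_cols["pt"], bins=10)
```

In ROOT itself the columns are additionally compressed and can be split across many files; the access pattern, not the storage medium, is the point of the sketch.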
Investigating distributed simulation at the Ford motor company
Engine production is a complex process that requires the manufacturing and assembly of a wide variety of components to create a varied product mix. Simulation plays a key role in the planning process of a new production line to determine if it can meet expected demand. However, these simulations can be very time consuming and can often take up to a day to execute a single run. This paper investigates how distributed simulation based on the IEEE 1516 High Level Architecture and the emerging standard COTS Simulation Package Interoperability Product Development Group (CSPI-PDG) Type I Interoperability Reference Model could be used to reduce the time taken for a single simulation run. CSP interoperability and the problem of integrating CSPs with HLA software (the runtime infrastructure) are presented. New prototype benchmarking software, the COTS Simulation Package Emulator (CSPE), which is being developed to investigate distributed simulation problems, is discussed. The paper then develops a case study of how the CSPE was used to investigate the feasibility of using distributed simulation at Ford. The paper discusses results obtained from this case study and suggests that distributed simulation could indeed be beneficial to Ford.