musrfit: A free platform-independent framework for muSR data analysis
A free data-analysis framework for muSR has been developed. musrfit is fully
written in C++, runs under GNU/Linux, Mac OS X, and Microsoft Windows, and is
distributed under the terms of the GNU GPL. It is based on the CERN ROOT
framework and uses the Minuit optimization routines for fitting. It consists of
a set of programs that allow the user to analyze and visualize the data. The
fitting process is controlled by an ASCII input file with an extended syntax. A
dedicated text editor helps the user to create and handle these files
efficiently, execute fits, display the data, access online help, and so on. A
versatile tool for generating new input files and extracting fit parameters is
provided as well. musrfit offers a plugin mechanism for invoking user-defined
functions, so the functionality of the framework can be extended with minimal
overhead for the user. Currently, musrfit can read the following facility
raw-data formats: PSI-BIN, MDU (PSI), ROOT (LEM/PSI), WKM (an outdated ASCII
format), MUD (TRIUMF), and NeXus (ISIS).
Comment: 4 pages, 4 figures
On the Communication of Scientific Results: The Full-Metadata Format
In this paper, we introduce a scientific format for text-based data files,
which facilitates storing and communicating tabular data sets. The so-called
Full-Metadata Format builds on the widely used INI standard and is based on
four principles: readable self-documentation, flexible structure, fail-safe
compatibility, and searchability. As a consequence, all metadata required to
interpret the tabular data are stored in the same file, allowing for the
automated generation of publication-ready tables and graphs and the semantic
searchability of data file collections. The Full-Metadata Format is introduced
on the basis of three comprehensive examples. The complete format and syntax
are given in the appendix.
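The core principle, metadata and tabular data living in one INI-like, human-readable file, can be sketched with a tiny parser. The section names and keys below are illustrative assumptions in the spirit of the format, not the normative FMF syntax given in the paper's appendix.

```python
# Minimal sketch: one self-documented file holds both metadata (key: value
# sections) and the tabular data itself. Section/key names are hypothetical.

import io

FMF_TEXT = """\
[*reference]
creator: A. Author
title: Example measurement
[*data definitions]
voltage: V [V]
current: I [mA]
[*data]
0.0 0.00
1.0 0.52
2.0 1.07
"""

def parse(stream):
    sections, current = {}, None
    for line in stream:
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    # metadata sections hold "key: value" pairs; the data section holds rows
    meta = dict(item.split(":", 1) for item in sections["*reference"])
    rows = [tuple(float(x) for x in r.split()) for r in sections["*data"]]
    return {k.strip(): v.strip() for k, v in meta.items()}, rows

meta, rows = parse(io.StringIO(FMF_TEXT))
print(meta["title"], len(rows))  # metadata and data recovered from one file
```

Because everything needed to interpret the columns travels with the numbers, a collection of such files can be searched and turned into tables or plots without out-of-band documentation.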
Disentanglement of Syntactic Components for Text Generation
Modelling human generated text, i.e., natural language data, is an important challenge in artificial intelligence. A good AI program should be able to understand and analyze natural language, and generate fluent and accurate responses. This standard is seen in applications of AI for natural language like machine translation, summarization, and dialog generation, all of which require the above ability. This work examines the application of deep neural networks for natural language generation. We explore how graph convolutional networks (GCNs) can be paired with recurrent neural networks (RNNs) for text generation.
GCNs have the advantage of being able to leverage the inherent graphical nature of text. Sentences can be expressed as dependency trees, and GCNs can incorporate this information to generate sentences in a syntax-aware manner. Modelling sentences with both dependency trees and word representations allows us to disentangle the syntactic components of sentences and generate sentences while fusing parts of speech from multiple sentences. Our methodology combines the sentence representations from an RNN with that of a GCN to allow a decoder to gain syntactic information while reconstructing a sentence. We explore different ways of separating the syntax components in a sentence and inspect how the generation operates.
We report BLEU and perplexity scores to evaluate how well the model incorporates the content based on its syntax from multiple sentences. We also observe, qualitatively, how the model generates fluent and coherent sentences while assimilating syntactic components from multiple sentences.
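The syntax-aware encoding described above rests on graph convolution over a sentence's dependency tree. A single propagation step, in the common GCN formulation H' = ReLU(Â H W), can be sketched with NumPy; the toy three-word sentence graph and dimensions are illustrative, not the paper's exact model.

```python
# One graph-convolution step over a toy dependency graph: each token's new
# representation mixes features from its syntactic neighbours.

import numpy as np

# Edges for a 3-token sentence (0-1, 1-2), made symmetric, with self-loops
# already included, as in the standard GCN formulation.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

# Symmetric normalisation: A_hat = D^{-1/2} A D^{-1/2}
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))   # input word representations, one row per token
W = rng.normal(size=(4, 2))   # learnable projection

H_next = np.maximum(A_hat @ H @ W, 0.0)  # ReLU(A_hat @ H @ W)
print(H_next.shape)
```

In the architecture the abstract describes, representations like `H_next` would be combined with an RNN's sentence encoding before decoding, so the decoder sees both sequential content and dependency-tree structure.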