eBank UK: linking research data, scholarly communication and learning
This paper includes an overview of the changing landscape of scholarly communication and describes outcomes from the innovative eBank UK project, which seeks to build links from e-research through to e-learning. As an introduction, the scholarly knowledge cycle is described, and the role of digital repositories and aggregator services in linking datasets from Grid-enabled projects to e-prints and on to peer-reviewed articles, as resources in portals and Learning Management Systems, is assessed. The development outcomes from the eBank UK project are presented, including the distributed information architecture and requirements for common ontologies, data models, metadata schemas, open linking technologies, provenance and workflows. Some emerging challenges for the future are presented in conclusion.
Character-level Chinese-English Translation through ASCII Encoding
Character-level Neural Machine Translation (NMT) models have recently
achieved impressive results on many language pairs. They mainly do well for
Indo-European language pairs, where the languages share the same writing
system. However, for translating between Chinese and English, the gap between
the two different writing systems poses a major challenge because of a lack of
systematic correspondence between the individual linguistic units. In this
paper, we enable character-level NMT for Chinese by breaking down Chinese
characters into linguistic units similar to those of Indo-European languages. We
use the Wubi encoding scheme, which preserves the original shape and semantic
information of the characters, while also being reversible. We show promising
results from training Wubi-based models on the character- and subword-level
with recurrent as well as convolutional models.Comment: 7 pages, 3 figures, 3rd Conference on Machine Translation (WMT18),
201
POLLUX : a database of synthetic stellar spectra
Synthetic spectra are needed to determine fundamental stellar and wind
parameters of all types of stars. They are also used for the construction of
theoretical spectral libraries helpful for stellar population synthesis.
Therefore, a database of theoretical spectra is required to allow rapid and
quantitative comparisons to spectroscopic data. We provide such a database
offering an unprecedented coverage of the entire Hertzsprung-Russell diagram.
We present the POLLUX database of synthetic stellar spectra. For objects with
Teff < 6 000 K, MARCS atmosphere models are computed and the program
TURBOSPECTRUM provides the synthetic spectra. ATLAS12 models are computed for
stars with 7 000 K < Teff < 15 000 K. SYNSPEC gives the corresponding spectra.
Finally, the code CMFGEN provides atmosphere models for the hottest stars (Teff
> 25 000 K). Their spectra are computed with CMF_FLUX. Both high resolution
(R > 150 000) optical spectra in the range 3 000 to 12 000 Å and spectral energy
distributions extending from the UV to the near-IR are presented. These
spectra cover the HR diagram at solar metallicity. We propose a wide variety of
synthetic spectra for various types of stars in a format that is compliant with
the Virtual Observatory standards. A user-friendly web interface allows an
easy selection of spectra and data retrieval. Upcoming developments will
include an extension to a large range of metallicities and to high-resolution
near-IR spectra, as well as a better coverage of the HR diagram, with the
inclusion of models for Wolf-Rayet stars and large datasets for cool stars. The
POLLUX database is accessible at http://pollux.graal.univ-montp2.fr/ and
through the Virtual Observatory.

Comment: 9 pages, 5 figures, accepted for publication in Astronomy and Astrophysics
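The temperature-dependent choice of atmosphere model and synthesis code described above amounts to a simple lookup over Teff ranges. The sketch below only restates the ranges quoted in the abstract and is not part of the POLLUX software:

```python
def pollux_pipeline(teff: float):
    """Return the (atmosphere model, synthesis code) pair used for a given
    effective temperature, following the ranges quoted in the abstract.
    Temperatures falling in the gaps (e.g. 6 000-7 000 K) return None."""
    if teff < 6_000:
        return ("MARCS", "TURBOSPECTRUM")
    if 7_000 < teff < 15_000:
        return ("ATLAS12", "SYNSPEC")
    if teff > 25_000:
        return ("CMFGEN", "CMF_FLUX")
    return None  # range not covered by the three pipelines
```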
Managing complexity in a distributed digital library
As the capabilities of distributed digital libraries increase, managing organizational and software complexity becomes a key issue. How can collections and indexes be updated without impacting queries currently in progress? How can the system handle several user-interface clients for the same collections? Computer science professors and lecturers from the University of Waikato have developed a software structure that successfully manages this complexity in the New Zealand Digital Library. The researchers' primary goal has been to minimize the effort required to keep the system operational while continuing to expand its offerings.
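One common answer to the first question above, updating an index without impacting in-progress queries, is an atomic reference swap: build the new index aside, then replace the reference in one step. This is a generic illustration of the pattern, not the actual New Zealand Digital Library design:

```python
import threading

class Collection:
    """Copy-on-write index sketch: queries read a snapshot reference, while
    writers build a new index aside and swap it in atomically, so in-flight
    queries are never disturbed by an update."""

    def __init__(self):
        self._index = {}                     # term -> list of document ids
        self._write_lock = threading.Lock()  # serialises writers only

    def query(self, term):
        index = self._index                  # snapshot; later swaps don't affect us
        return index.get(term, [])

    def rebuild(self, documents):
        new_index = {}
        for doc_id, text in documents.items():
            for word in text.split():
                new_index.setdefault(word, []).append(doc_id)
        with self._write_lock:
            self._index = new_index          # atomic reference swap
```

Readers never take the lock: a query that started against the old index simply finishes against its snapshot, while new queries see the rebuilt one.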
An extensible benchmark and tooling for comparing reverse engineering approaches
Various tools exist to reverse engineer software source code and generate design information, such as UML projections. Each has specific strengths and weaknesses; however, no standardised benchmark exists that can be used to evaluate and compare their performance and effectiveness in a systematic manner. To facilitate such comparison, in this paper we introduce the Reverse Engineering to Design Benchmark (RED-BM), which consists of a comprehensive set of Java-based targets for reverse engineering and a formal set of performance measures with which tools and approaches can be analysed and ranked. When used to evaluate 12 industry-standard tools, performance figures range from 8.82% to 100%, demonstrating the ability of the benchmark to differentiate between tools. To aid the comparison, analysis and further use of reverse engineering XMI output, we have developed a parser which can interpret the XMI output format of the most commonly used reverse engineering applications and is used in a number of tools.
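The kind of percentage-based ranking the abstract reports can be sketched as follows. The measure and the names here are hypothetical illustrations, not RED-BM's formal performance measures:

```python
def recovery_score(detected: set, expected: set) -> float:
    """Hypothetical benchmark-style measure: percentage of expected design
    elements (e.g. classes in a UML projection) that a tool recovered.
    The real RED-BM defines a formal set of measures; this only
    illustrates how percentage scores allow tools to be ranked."""
    found = len(expected & detected)
    return 100.0 * found / len(expected)

# hypothetical tool outputs against a known ground-truth target
expected = {"ClassX", "ClassY", "ClassZ"}
results = {"toolA": {"ClassX", "ClassY"}, "toolB": {"ClassX"}}
ranking = sorted(results, key=lambda t: recovery_score(results[t], expected),
                 reverse=True)
```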
Fast, parallel and secure cryptography algorithm using Lorenz's attractor
A novel cryptography method based on the Lorenz's attractor chaotic system is
presented. The proposed algorithm is secure and fast, making it practical for
general use. We introduce the chaotic operation mode, which provides an
interaction among the password, message and a chaotic system. It ensures that
the algorithm yields a secure codification, even if the nature of the chaotic
system is known. The algorithm has been implemented in two versions: one
sequential and slow and the other, parallel and fast. Our algorithm assures the
integrity of the ciphertext (we know if it has been altered, which is not
assured by traditional algorithms) and consequently its authenticity. Numerical
experiments are presented and discussed, showing the behavior of the method in
terms of security and performance. The fast version of the algorithm has
performance comparable to AES, an encryption algorithm in widespread commercial
use, but it is more secure, which makes it immediately suitable for
general-purpose cryptography applications. An internet page has been set up,
which enables readers to test the algorithm and also to try to break the
cipher.
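The general idea of driving a cipher from the Lorenz system can be illustrated with a keystream construction. This is a toy sketch, not the authors' algorithm and not secure; the key schedule, step size, burn-in and byte-extraction rule are all assumptions made for illustration:

```python
# Toy illustration: integrate the Lorenz system (sigma=10, rho=28, beta=8/3,
# the classic chaotic parameters) from a key-derived initial condition and
# extract a pseudo-random keystream, XORed with the message. NOT the paper's
# algorithm and NOT cryptographically secure.

def lorenz_keystream(key: bytes, n: int, dt: float = 0.001,
                     burn_in: int = 5000) -> bytes:
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    h = sum(key) or 1                       # toy key schedule (assumption)
    x, y, z = 1.0 + (h % 97) / 97.0, 1.0, 1.0
    out = []
    for i in range(burn_in + n):            # Euler steps along the attractor
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:                    # discard the transient
            out.append(int(abs(x) * 1e6) % 256)  # extract one byte per step
    return bytes(out)

def xor_cipher(message: bytes, key: bytes) -> bytes:
    ks = lorenz_keystream(key, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))
```

Because the trajectory is deterministic given the key, applying `xor_cipher` twice with the same key recovers the plaintext; sensitivity to the initial condition is what makes the keystream hard to reproduce without the key.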