WISeREP - An Interactive Supernova Data Repository
We have entered an era of massive data sets in astronomy. In particular, the
number of supernova (SN) discoveries and classifications has substantially
increased over the years from a few tens to thousands per year. It is no longer
the case that observations of a few prototypical events encapsulate most
spectroscopic information about SNe, motivating the development of modern tools
to collect, archive, organize and distribute spectra in general, and SN spectra
in particular. For this reason we have developed the Weizmann Interactive
Supernova data REPository - WISeREP - an SQL-based database (DB) with an
interactive web-based graphical interface. The system serves as an archive of
high-quality SN spectra, including both historical (legacy) data and data
accumulated by ongoing modern programs. The archive provides
information about objects, their spectra, and related meta-data. Utilizing
interactive plots, we provide a graphical interface to visualize data, perform
line identification of the major relevant species, determine object redshifts,
classify SNe and measure expansion velocities. Guest users may view and
download spectra or other data that have been placed in the public domain.
Registered users may also view and download data that are proprietary to
specific programs with which they are associated. The DB currently holds >8000
spectra, of which >5000 are public; the latter include published spectra from
the Palomar Transient Factory, all of the SUSPECT archive, the
Caltech-Core-Collapse Program, the CfA SN spectra archive and published spectra
from the UC Berkeley SNDB repository. It offers an efficient and convenient way
to archive data and share it with colleagues, and we expect that data stored in
this way will be easy to access, increasing its visibility, usefulness and
scientific impact.

Comment: To be published in PASP. WISeREP:
http://www.weizmann.ac.il/astrophysics/wiserep
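To make the "SQL-based database with public and proprietary spectra" concrete, here is a minimal sketch of the kind of relational schema and access query such an archive might use. The table and column names are illustrative assumptions, not WISeREP's actual schema.

```python
# Toy sketch of an SQL spectra archive; schema names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE objects (
    obj_id   INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    sn_type  TEXT,        -- e.g. 'Ia', 'IIP'
    redshift REAL
);
CREATE TABLE spectra (
    spec_id  INTEGER PRIMARY KEY,
    obj_id   INTEGER REFERENCES objects(obj_id),
    obs_date TEXT,        -- UTC date of observation
    public   INTEGER      -- 1 if in the public domain
);
""")
conn.execute("INSERT INTO objects VALUES (1, 'SN 2011fe', 'Ia', 0.0008)")
conn.execute("INSERT INTO spectra VALUES (10, 1, '2011-08-25', 1)")
conn.execute("INSERT INTO spectra VALUES (11, 1, '2011-09-03', 0)")

# A guest user sees only public spectra of a given object;
# registered users would additionally match their program's data.
public_specs = conn.execute(
    "SELECT s.spec_id, s.obs_date FROM spectra s "
    "JOIN objects o ON o.obj_id = s.obj_id "
    "WHERE o.name = 'SN 2011fe' AND s.public = 1"
).fetchall()
```

The `public` flag captures the guest/registered distinction described in the abstract: the same query with the flag dropped (plus a program-membership join) would serve registered users.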
Using Provenance to support Good Laboratory Practice in Grid Environments
Conducting experiments and documenting results is the daily business of
scientists. Good, traceable documentation enables other scientists to
confirm procedures and results, increasing credibility. Documentation and
scientific conduct are regulated under the term "good laboratory practice."
Laboratory notebooks are used to record each step in conducting an experiment
and processing data. Originally, these notebooks were paper based. Due to
computerised research systems, acquired data became more elaborate, thus
increasing the need for electronic notebooks with data storage, computational
features and reliable electronic documentation. As a new approach to this, a
scientific data management system (DataFinder) is enhanced with features for
traceable documentation. Provenance recording is used to meet requirements of
traceability, and this information can later be queried for further analysis.
DataFinder has further important features for scientific documentation: It
employs a heterogeneous and distributed data storage concept. This enables
access to different types of data storage systems (e.g. Grid data
infrastructure, file servers). In this chapter we describe a number of building
blocks that are available or close to finished development. These components
are intended for assembling an electronic laboratory notebook for use in Grid
environments, while retaining maximal flexibility in usage scenarios as well as
maximal mutual compatibility. Through the use of such a
system, provenance can successfully be used to trace the scientific workflow of
preparation, execution, evaluation, interpretation and archiving of research
data. The reliability of research results increases and the research process
remains transparent to remote research partners.

Comment: Book Chapter for "Data Provenance and Data Management for eScience,"
Studies in Computational Intelligence series, Springer. 25 pages, 8
figures
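The provenance-recording idea above can be sketched in a few lines: each workflow step (preparation, execution, evaluation, interpretation, archiving) appends a traceable record of who did what, with which inputs and outputs, and the log can later be queried for analysis. The function names and record structure are illustrative assumptions, not DataFinder's actual API.

```python
# Toy provenance log for an electronic lab notebook; names are hypothetical.
from datetime import datetime, timezone

provenance_log = []

def record_step(actor, activity, inputs, outputs):
    """Append one traceable provenance record for a workflow step."""
    provenance_log.append({
        "actor": actor,
        "activity": activity,
        "inputs": list(inputs),
        "outputs": list(outputs),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def derivation_chain(artifact):
    """Query the log: which steps produced a given artifact?"""
    return [r for r in provenance_log if artifact in r["outputs"]]

record_step("alice", "acquire", [], ["raw.dat"])
record_step("alice", "calibrate", ["raw.dat"], ["calib.dat"])
record_step("bob", "evaluate", ["calib.dat"], ["results.csv"])
```

A query like `derivation_chain("calib.dat")` then reconstructs how a result came to be, which is the traceability requirement the chapter describes.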
The XENON1T Data Distribution and Processing Scheme
The XENON experiment is looking for non-baryonic particle dark matter in the
universe. The setup is a dual phase time projection chamber (TPC) filled with
3200 kg of ultra-pure liquid xenon. The setup is operated at the Laboratori
Nazionali del Gran Sasso (LNGS) in Italy. We present a full overview of the
computing scheme for data distribution and job management in XENON1T. The
software package Rucio, which is developed by the ATLAS collaboration,
facilitates data handling on Open Science Grid (OSG) and European Grid
Infrastructure (EGI) storage systems. A tape copy at the PDC Center for High
Performance Computing is managed by the Tivoli Storage Manager (TSM).
Data reduction and Monte Carlo production are handled by CI Connect which is
integrated into the OSG network. The job submission system connects resources
at the EGI, OSG, SDSC's Comet, and the campus HPC resources for distributed
computing. The success of the XENON1T computing scheme is also the
starting point for its successor experiment, XENONnT, which starts taking data
in autumn 2019.

Comment: 8 pages, 2 figures, CHEP 2018 proceedings
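The core bookkeeping behind Rucio-style data distribution can be sketched as follows: a replication rule requests a number of copies of a dataset across storage sites, and the system works out which new replicas are needed. This is a toy model of the concept only, not Rucio's actual data model or API; all names here are hypothetical.

```python
# Toy model of a replication rule: ensure `copies` replicas of a dataset.
def place_replicas(dataset, copies, sites, existing):
    """Return sites where new replicas are needed so `dataset`
    reaches the requested number of copies."""
    have = existing.get(dataset, set())      # sites that already hold it
    needed = copies - len(have)
    candidates = [s for s in sites if s not in have]
    return candidates[:max(needed, 0)]

# One replica already exists on an OSG site; the rule asks for two.
existing = {"xenon1t/raw/run_001": {"OSG_Chicago"}}
sites = ["OSG_Chicago", "EGI_Lyon", "PDC_Stockholm"]
new_sites = place_replicas("xenon1t/raw/run_001", 2, sites, existing)
```

In the real system the transfer engine would then move the data to the chosen sites and update the replica catalogue; the tape copy at PDC plays the role of an archival replica outside this placement loop.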
Identification of Design Principles
This report identifies those design principles for a (possibly new) query and transformation
language for the Web supporting inference that are considered essential. Based upon these
design principles an initial strawman is selected. Scenarios for querying the Semantic Web
illustrate the design principles and their reflection in the initial strawman, i.e., a first draft of
the query language to be designed and implemented by the REWERSE working group I4.
RACOFI: A Rule-Applying Collaborative Filtering System
In this paper we give an overview of the RACOFI (Rule-Applying Collaborative Filtering) multidimensional rating system and its related technologies. This will be exemplified with RACOFI Music, an implemented collaboration agent that assists on-line users in the rating and recommendation of audio (Learning) Objects. It lets users rate contemporary Canadian music in the five dimensions of impression, lyrics, music, originality, and production. The collaborative filtering algorithms STI Pearson, STIN2, and the Per Item Average algorithms are then employed together with RuleML-based rules to recommend music objects that best match user queries. RACOFI has been on-line since August 2003 at http://racofi.elg.ca.
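Of the algorithms the abstract names, Per Item Average is the simplest to make concrete: predict a user's rating of an item as the mean of all ratings that item has received in a given dimension. The sketch below uses illustrative data and dimension names from the abstract's five-dimension scheme; it is not RACOFI's implementation.

```python
# Per Item Average: predict a rating as the item's mean in one dimension.
def per_item_average(ratings, item, dimension):
    """Mean rating of `item` in one of the five rating dimensions
    (impression, lyrics, music, originality, production)."""
    values = [r[dimension] for (user, it), r in ratings.items() if it == item]
    return sum(values) / len(values) if values else None

# (user, item) -> per-dimension ratings; data is illustrative.
ratings = {
    ("u1", "song_a"): {"lyrics": 4, "music": 5},
    ("u2", "song_a"): {"lyrics": 2, "music": 3},
    ("u1", "song_b"): {"lyrics": 5, "music": 4},
}
prediction = per_item_average(ratings, "song_a", "lyrics")
```

The Pearson-based variants the paper mentions would instead weight other users' ratings by their correlation with the active user; RuleML rules then filter or re-rank these predictions against the user's query.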
MDA-based ATL transformation to generate MVC 2 web models
Development and maintenance of Web applications is still a complex and
error-prone process. We need integrated techniques and tool support for
automated generation of Web systems and a ready prescription for easy
maintenance. The MDA approach proposes an architecture taking into account the
development and maintenance of large and complex software. In this paper, we
apply the MDA approach for generating a PSM from a UML design to an MVC 2 Web
implementation. To that end, we have developed two meta-models handling UML
class diagrams and MVC 2 Web applications, and then set up
transformation rules. These rules are expressed in the ATL language. To specify
the transformation rules (especially CRUD methods), we used UML profiles. To
clearly illustrate the result generated by this transformation, we converted
the XMI file generated in an EMF (Eclipse Modeling Framework) model.

Comment: International Journal of Computer Science & Information
Technology-201
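The shape of such a model-to-model transformation rule can be sketched as follows, in Python rather than ATL for readability: each rule maps an element of the source UML class model to the corresponding elements of a target MVC 2 web model (model bean, CRUD actions, view). The toy dictionaries stand in for the paper's EMF models, and all names are illustrative.

```python
# ATL-like rule sketched in Python: UML class -> MVC 2 web-model elements.
def transform_class(uml_class):
    """Map one UML class to a model bean, CRUD actions, and a view."""
    name = uml_class["name"]
    return {
        "bean": {"name": name + "Bean",
                 "fields": list(uml_class["attributes"])},
        "actions": [f"{verb}{name}Action"
                    for verb in ("Create", "Read", "Update", "Delete")],
        "view": name.lower() + ".jsp",
    }

source_model = {"name": "Customer", "attributes": ["id", "email"]}
target = transform_class(source_model)
```

In the actual MDA pipeline, ATL matches source metamodel elements declaratively and the UML profile marks which classes get CRUD methods; this sketch only shows the per-class mapping that such a rule performs.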