10,787 research outputs found
Easylife: the data reduction and survey handling system for VIPERS
We present Easylife, the software environment developed within the framework
of the VIPERS project for automatic data reduction and survey handling.
Easylife is a comprehensive system to automatically reduce spectroscopic data,
to monitor the survey advancement at all stages, to distribute data within the
collaboration and to release data to the whole community. It is based on the
OPTICON-funded project FASE, and inherits the FASE capabilities of modularity
and scalability. After describing the software architecture, the main reduction
and quality control features and the main services made available, we show its
performance in terms of reliability of results. We also show how it can be
ported to other projects with different characteristics.
Comment: pre-print, 17 pages, 4 figures, accepted for publication in Publications of the Astronomical Society of the Pacific.
Relating geometry descriptions to its derivatives on the web
Sharing building information over the Web is becoming more popular, leading to advances in describing building models in a Semantic Web context. However, those descriptions lack unified approaches for linking geometry descriptions to building elements, derived properties and other derived geometry descriptions. To bridge this gap, we analyse the basic characteristics of geometric dependencies and propose the Ontology for Managing Geometry (OMG) based on this analysis. In this paper, we present our results and show how the OMG provides means to link geometric and non-geometric data in meaningful ways. Thus, exchanging building data, including geometry, on the Web becomes more efficient.
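As a rough illustration of the linking the OMG abstract describes, the sketch below attaches a geometry description to a building element and marks a property as derived from that geometry, using Python and rdflib. The property names (omg:hasGeometry, omg:isDerivedFromGeometry) and the example resources are assumptions made for illustration and may not match the exact terms of the published OMG ontology.

# Minimal sketch, assuming illustrative OMG-style property names; not verified
# against the published ontology.
from rdflib import Graph, Namespace, Literal

OMG = Namespace("https://w3id.org/omg#")
EX = Namespace("http://example.org/building#")

g = Graph()
g.bind("omg", OMG)
g.bind("ex", EX)

wall = EX.wall_01                # a building element
wall_geom = EX.wall_01_geometry  # one geometry description of that element
wall_area = EX.wall_01_area      # a property derived from the geometry

g.add((wall, OMG.hasGeometry, wall_geom))                 # element -> geometry description
g.add((wall_area, OMG.isDerivedFromGeometry, wall_geom))  # derived property -> source geometry
g.add((wall_area, EX.value, Literal(12.5)))               # example derived value (m^2)

print(g.serialize(format="turtle"))

Recording which properties and which other geometry descriptions were derived from a given geometry is what makes such dependencies queryable alongside the non-geometric building data.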
Similarities, challenges and opportunities of wikipedia content and open source projects
Several years of research and evidence have demonstrated that Open Source Software (OSS) portals often contain a large number of software projects that simply do not evolve, are developed by relatively small communities, and struggle to attract a sustained number of contributors. These portals have increasingly come to act as storage for abandoned projects, and researchers and practitioners should try to point out how to take advantage of such content. Similarly, other online content portals (like Wikipedia) could be harvested for valuable content. In this paper we argue that, even with differences in the required expertise, many projects reliant on content and contributions by users undergo a similar evolution and follow similar patterns: when a project fails to attract contributors, it appears not to be evolving, or to be abandoned. Far from being a negative finding, even those projects could provide valuable content that should be harvested and identified based on common characteristics: by using the attributes of "usefulness" and "modularity" we isolate valuable content in both Wikipedia pages and OSS projects.
Towards a dynamic rule-based business process
IJWGS is now included in the Science Citation Index Expanded (SCIE), starting from volume 4, 2008. The first impact factor, which will be for 2010, is expected to be published in mid-2011.
Assessing architectural evolution: A case study
This paper proposes to use a historical perspective on generic laws, principles, and guidelines, like Lehman's software evolution laws and Martin's design principles, in order to achieve a multi-faceted process and structural assessment of a system's architectural evolution. We present a simple structural model with associated historical metrics and visualizations that could form part of an architect's dashboard. We perform such an assessment for the Eclipse SDK, as a case study of a large, complex, and long-lived system for which sustained effective architectural evolution is paramount. The twofold aim of checking generic principles on a well-known system is, on the one hand, to see whether there are certain lessons that could be learned for best practice of architectural evolution, and on the other hand, to gain more insight into the applicability of such principles. We find that while the Eclipse SDK does follow several of the laws and principles, there are some deviations, and we discuss areas of architectural improvement and limitations of the assessment approach.
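A minimal sketch of the kind of historical metric such an architect's dashboard could track, here checking a module-count trend against Lehman's law of continuing growth; the release identifiers and counts are invented for illustration and are not data from the paper.

# Minimal sketch: track one structural metric (module count) across releases
# and check whether it grows monotonically, as Lehman's law of continuing
# growth would predict. The numbers below are invented, not Eclipse data.
releases = [
    ("3.0", 60),
    ("3.2", 85),
    ("3.4", 110),
    ("3.6", 128),
]

def continuing_growth(history):
    """True if the metric never decreases from one release to the next."""
    counts = [n for _, n in history]
    return all(a <= b for a, b in zip(counts, counts[1:]))

for (prev_rel, prev_n), (rel, n) in zip(releases, releases[1:]):
    print(f"{prev_rel} -> {rel}: modules {prev_n} -> {n} ({n - prev_n:+d})")

print("Consistent with continuing growth:", continuing_growth(releases))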
A Semantic Web Annotation Tool for a Web-Based Audio Sequencer
Music and sound have a rich semantic structure which is so clear to the composer and the listener, but that remains mostly hidden to computing machinery. Nevertheless, in recent years, the introduction of software tools for music production has enabled new opportunities for migrating this knowledge from humans to machines. A new generation of these tools may exploit the coupling of sound samples and semantic information for the creation not only of a musical, but also of a "semantic" composition. In this paper we describe an ontology-driven content annotation framework for a web-based audio editing tool. In a supervised approach, during the editing process, the graphical web interface allows the user to annotate any part of the composition with concepts from publicly available ontologies. As a test case, we developed a collaborative web-based audio sequencer that provides users with the functionality to remix the audio samples from the Freesound website and subsequently annotate them. The annotation tool can load any ontology and thus gives users the opportunity to augment the work with annotations on the structure of the composition, the musical materials, and the creator's reasoning and intentions. We believe this approach will provide several novel ways to make not only the final audio product, but also the creative process, first-class citizens of the Semantic Web.
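A minimal sketch of the kind of RDF annotation such a sequencer could emit for one time region of a composition, written with rdflib; the namespace, the timeline terms, and the linked concept are assumptions for illustration, not the tool's actual data model.

# Minimal sketch with an invented vocabulary; a real tool would reuse published
# timeline/annotation ontologies rather than the ad-hoc terms used here.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/sequencer#")

g = Graph()
g.bind("ex", EX)

region = EX.region_42
g.add((region, RDF.type, EX.AudioRegion))
g.add((region, EX.partOf, EX.composition_7))
g.add((region, EX.startSeconds, Literal("12.0", datatype=XSD.decimal)))
g.add((region, EX.endSeconds, Literal("19.5", datatype=XSD.decimal)))
# Link the region to a concept taken from whichever ontology the user loaded.
g.add((region, EX.annotatedWith, EX.FieldRecording))

print(g.serialize(format="turtle"))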
Investigating the use of background knowledge for assessing the relevance of statements to an ontology in ontology evolution
The tasks of learning and enriching ontologies with new concepts and relations have attracted a lot of attention in the research community, leading to a number of tools facilitating the process of building and updating ontologies. These tools often discover new elements of information to be included in the considered ontology from external data sources such as text documents or databases, transforming these elements into ontology-compatible statements or axioms. While some techniques are used to make sure that statements to be added are compatible with the ontology (e.g. through conflict detection), such tools generally pay little attention to the relevance of the statement in question. It is either assumed that any statement extracted from a data source is relevant, or that the user will assess whether a statement adds value to the ontology. In this paper, we investigate the use of background knowledge about the context where statements appear to assess their relevance. We devise a methodology to extract such a context from ontologies available online, to map it to the considered ontology, and to visualize this mapping in a way that allows us to study the intersection and complementarity of the two sources of knowledge. By applying this methodology to several examples, we identified an initial set of patterns giving strong indications concerning the relevance of a statement, as well as interesting issues to be considered when applying such techniques.
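A rough sketch of the intersection/complementarity idea mentioned above: compare the terms appearing in a statement's online context with the terms of the target ontology, so that the degree of overlap hints at relevance. The term extraction and the overlap measure are simplifying assumptions, not the paper's actual method.

# Rough sketch (illustrative only): how much of a statement's context is already
# covered by the target ontology, and which context terms are complementary.
def context_overlap(context_terms, ontology_terms):
    context = {t.lower() for t in context_terms}
    ontology = {t.lower() for t in ontology_terms}
    shared = context & ontology
    coverage = len(shared) / len(context) if context else 0.0
    return coverage, sorted(shared), sorted(context - ontology)

# Invented example data.
coverage, shared, complementary = context_overlap(
    ["Researcher", "Publication", "ReviewProcess", "Conference"],
    ["researcher", "publication", "person", "organisation"],
)
print(f"coverage={coverage:.2f}, shared={shared}, complementary={complementary}")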
Simple and Effective Multi-Paragraph Reading Comprehension
We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well-calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system.
Comment: 11 pages, updated a reference.
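A minimal sketch of the shared-normalization objective described above, assuming a model that already produces a score per candidate answer span: scores from all sampled paragraphs of the same document are pooled and passed through a single softmax, so the model is trained toward globally comparable confidences rather than per-paragraph ones. The single-gold-span assumption and the tensor shapes are simplifications, not the authors' exact implementation.

# Minimal sketch in PyTorch; not the authors' code. Assumes one gold span per
# document and precomputed candidate-span scores per sampled paragraph.
import torch
import torch.nn.functional as F

def shared_norm_loss(span_scores, gold_index):
    """span_scores: list of 1-D tensors, one per paragraph of the SAME document.
    gold_index: position of the correct span in the concatenated score vector.
    The softmax runs jointly over all paragraphs' candidates (shared
    normalization), instead of independently within each paragraph."""
    all_scores = torch.cat(span_scores)           # pool candidates across paragraphs
    log_probs = F.log_softmax(all_scores, dim=0)  # one distribution per document
    return -log_probs[gold_index]                 # NLL of the gold span

# Invented example: two paragraphs sampled from one document.
p1 = torch.tensor([0.2, 1.5, -0.3])  # paragraph 1 candidate-span scores
p2 = torch.tensor([2.1, 0.0])        # paragraph 2 scores; its first span is gold
print(float(shared_norm_loss([p1, p2], gold_index=3)))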
- …