Towards the improvement of the Semantic Web technology
The Semantic Web technology needs to be thoroughly evaluated to provide objective results and to achieve a significant improvement in its quality, so that it can be consolidated in both the industrial and the academic world. This paper presents software benchmarking as a process to be carried out over the Semantic Web technology in order to improve it and to search for best practices. It also describes a software benchmarking methodology and provides recommendations for performing evaluations in benchmarking activities.
Benchmarking in the Semantic Web
The Semantic Web technology needs to be thoroughly evaluated to provide objective results and to achieve a significant improvement in its quality, thereby speeding up its transfer from research to industry. This chapter presents software benchmarking, a process that aims to improve the Semantic Web technology and to identify best practices. The chapter also describes a specific software benchmarking methodology and shows how it has been used to benchmark the interoperability of ontology development tools, employing RDF(S) as the interchange language.
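As a rough sketch of the kind of interoperability test described above, the following round-trips an ontology through RDF(S) and compares the result with the original. It uses the rdflib Python library purely for illustration; the file names, and the use of rdflib rather than the ontology development tools actually benchmarked, are assumptions.

```python
# Minimal sketch of an RDF(S) round-trip interoperability check.
# Assumes rdflib is installed; "tool_export.rdf" is a hypothetical file
# produced by exporting the ontology from a second development tool.
from rdflib import Graph
from rdflib.compare import isomorphic

original = Graph()
original.parse("original.rdf", format="xml")          # ontology as first exported

round_tripped = Graph()
round_tripped.parse("tool_export.rdf", format="xml")  # after import/export by another tool

# Interoperability criterion: no knowledge lost or added in the exchange.
if isomorphic(original, round_tripped):
    print("Round trip preserved all triples")
else:
    print(f"Triples differ: {len(original)} vs {len(round_tripped)}")
```

Graph isomorphism is a strict pass/fail criterion; the benchmarking methodology described in the chapter may well rely on finer-grained comparisons.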
New results from the SoftLAB benchmark of antenna software
This paper gives an overview of the antenna software benchmarking activity that took place within the Software Group of the EurAAP association in 2009. A particular focus is placed on one of the proposed test cases, which consists of a non-conventional horn antenna.
Scope Management of Non-Functional Requirements
In order to meet commitments in software projects, a realistic assessment must be made of project scope. Such an assessment relies on the availability of knowledge of the user-defined project requirements, together with their effort estimates, priorities and risks. This knowledge enables analysts, managers and software engineers to identify the most significant requirements from the list of requirements initially defined by the user. In practice, this scope assessment is applied to the Functional Requirements (FRs) provided by users who are unaware of, or ignore, the Non-Functional Requirements (NFRs). This paper presents ongoing research which aims at managing NFRs during the software development process. Establishing the relative priority of each NFR, and obtaining a rough estimate of the effort and risk associated with it, is integral to the software development process and to resource management. Our work extends the taxonomy of the NFR framework by integrating the concept of the "hardgoal". A functional size measure of NFRs is applied to facilitate the effort estimation process. The functional size measurement method we have chosen is COSMIC-FFP, which is theoretically sound and a de facto standard in the software industry.
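To make the idea of scope assessment over sized requirements concrete, here is a minimal sketch that selects requirements by priority under an effort budget derived from a functional size measure. The data structure, the flat size-to-effort conversion factor and the greedy selection rule are illustrative assumptions, not the method proposed in the paper.

```python
# Illustrative sketch: scope selection over requirements (FRs and NFRs)
# annotated with priority, functional size and risk. Converting size
# (COSMIC function points) to effort via a single productivity factor
# is a simplifying assumption for demonstration only.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    priority: int        # 1 = highest
    size_cfp: float      # functional size in COSMIC function points
    risk: float          # 0.0 (low) .. 1.0 (high)

HOURS_PER_CFP = 8.0      # assumed productivity factor

def effort_hours(req: Requirement) -> float:
    return req.size_cfp * HOURS_PER_CFP

def select_scope(reqs: list[Requirement], budget_hours: float) -> list[Requirement]:
    """Greedily admit requirements by priority (then low risk)
    until the effort budget is exhausted."""
    selected, spent = [], 0.0
    for req in sorted(reqs, key=lambda r: (r.priority, r.risk)):
        cost = effort_hours(req)
        if spent + cost <= budget_hours:
            selected.append(req)
            spent += cost
    return selected

reqs = [
    Requirement("login (FR)", priority=1, size_cfp=6, risk=0.2),
    Requirement("response < 2s (NFR)", priority=1, size_cfp=10, risk=0.6),
    Requirement("audit log (NFR)", priority=2, size_cfp=8, risk=0.3),
]
for r in select_scope(reqs, budget_hours=140):
    print(r.name)
```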
Grinder: a versatile amplicon and shotgun sequence simulator
We introduce Grinder (http://sourceforge.net/projects/biogrinder/), an open-source bioinformatic tool to simulate amplicon and shotgun (genomic, metagenomic, transcriptomic and metatranscriptomic) datasets from reference sequences. It is the first tool to simulate the amplicon datasets (e.g. 16S rRNA) widely used by microbial ecologists. Grinder can create sequence libraries with a specific community structure, α and β diversities and experimental biases (e.g. chimeras, gene copy number variation) for commonly used sequencing platforms. This versatility allows the creation of the simple to complex read datasets necessary for hypothesis testing when developing bioinformatic software, benchmarking existing tools or designing sequence-based experiments. Grinder is particularly useful for simulating clinical or environmental microbial communities and complements the use of in vitro mock communities.
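Grinder itself is a full-featured simulator; as a conceptual illustration only, the following minimal sketch samples shotgun reads from reference sequences according to an assumed community structure. It is not Grinder's code or command-line interface, and it omits the error models, chimeras and copy-number biases Grinder supports.

```python
# Conceptual sketch of shotgun read simulation from reference sequences,
# in the spirit of what Grinder automates. Sequence data, read length
# and the abundance model are toy assumptions.
import random

references = {
    "genomeA": "ACGT" * 500,   # hypothetical reference sequences
    "genomeB": "GGCA" * 500,
}
abundances = {"genomeA": 0.8, "genomeB": 0.2}  # assumed community structure
READ_LEN = 100

def simulate_reads(n_reads: int, seed: int = 42):
    rng = random.Random(seed)
    names = list(references)
    weights = [abundances[n] for n in names]
    reads = []
    for i in range(n_reads):
        name = rng.choices(names, weights=weights)[0]
        seq = references[name]
        start = rng.randrange(len(seq) - READ_LEN + 1)
        reads.append((f"read{i}_{name}", seq[start:start + READ_LEN]))
    return reads

for rid, seq in simulate_reads(5):
    print(f">{rid}\n{seq}")
```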
Evaluating the use of rCBV as a tumor grade and treatment response classifier across NCI Quantitative Imaging Network sites: Part II of the DSC-MRI digital reference object (DRO) challenge
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they varied. Our goal in this study was to determine the impact of rCBV reproducibility on tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites, and of software benchmarking.
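The study's own threshold analysis is not reproduced here, but an "optimal threshold" for a continuous marker such as rCBV is commonly found by maximizing Youden's J over candidate cutoffs; the sketch below shows that generic procedure on synthetic data. The data values and the choice of Youden's J are assumptions, not the paper's actual method or results.

```python
# Generic sketch: pick a classification threshold for a continuous marker
# (e.g. rCBV) by maximizing Youden's J = sensitivity + specificity - 1.
# Synthetic data; NOT the study's actual analysis or values.
import numpy as np

rng = np.random.default_rng(0)
low_grade = rng.normal(1.5, 0.5, 100)    # assumed marker values, class 0
high_grade = rng.normal(3.0, 0.8, 100)   # assumed marker values, class 1

values = np.concatenate([low_grade, high_grade])
labels = np.concatenate([np.zeros(100), np.ones(100)])

best_j, best_t = -1.0, None
for t in np.unique(values):
    pred = values >= t
    sens = np.mean(pred[labels == 1])    # true positive rate
    spec = np.mean(~pred[labels == 0])   # true negative rate
    j = sens + spec - 1
    if j > best_j:
        best_j, best_t = j, t

print(f"optimal threshold ~ {best_t:.2f} (Youden J = {best_j:.2f})")
```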
A Bootstrap Theory: the SEMAT Kernel Itself as Runnable Software
The SEMAT kernel is a thoroughly thought-out generic framework for practical Software Engineering system development. But one should be able to test its characteristics by means of a no less generic theory matching the SEMAT kernel. This paper claims that such a matching theory is attainable and describes its main principles. The conceptual starting point is the robustness of the Kernel alphas to variations in the nature of the software system, viz. to software automation, distribution and self-evolution. From these and from observed Kernel properties follows the proposed bootstrap principle: a software system theory should itself be runnable software. Thus, the kernel alphas can be viewed as a top-level ontology, indeed the Essence of Software Engineering. Among the interesting consequences of this bootstrap theory, observable system characteristics can now be formally tested. For instance, one can check the system's completeness, viz. that the software system's modules fulfill each one of the system requirements.
(Comment: 8 pages; 2 figures. Preprint of paper accepted for the GTSE 2014 Workshop, within ICSE 2014.)
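As a toy illustration of the completeness check mentioned at the end of the abstract, the sketch below represents modules and the requirements they claim to fulfill, then verifies that every requirement is covered. The data model and the requirement names are hypothetical assumptions, not the paper's formal theory.

```python
# Toy sketch of the completeness check described above: every system
# requirement must be fulfilled by at least one module.
requirements = {"R1: authenticate users", "R2: log transactions", "R3: export reports"}

modules = {
    "auth":   {"R1: authenticate users"},
    "ledger": {"R2: log transactions"},
    # note: no module claims R3
}

def completeness_gaps(reqs: set[str], mods: dict[str, set[str]]) -> set[str]:
    """Return the requirements not fulfilled by any module."""
    covered = set().union(*mods.values()) if mods else set()
    return reqs - covered

gaps = completeness_gaps(requirements, modules)
print("system complete" if not gaps else f"unfulfilled: {sorted(gaps)}")
```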