Evaluation of Software Product Quality Metrics
Computing devices and associated software govern everyday life, and form the
backbone of safety critical systems in banking, healthcare, automotive and
other fields. Increasing system complexity, quickly evolving technologies and
paradigm shifts have kept software quality research at the forefront. Standards
such as ISO/IEC 25010 express software quality in terms of sub-characteristics
such as maintainability, reliability and security. A significant body of
literature attempts to link these sub-characteristics with software metric
values, with the
end goal of creating a metric-based model of software product quality. However,
research also identifies the most important existing barriers. Among them we
mention the diversity of software application types, development platforms and
languages. Additionally, unified definitions that would make software metrics
truly language-agnostic do not exist, and they would be difficult to implement
given the variety of programming languages. This is compounded by the fact that
many existing studies do not detail their methodology and tooling, which
precludes researchers from creating surveys to enable data analysis on a larger
scale. In our paper, we propose a comprehensive study of metric values in the
context of three complex, open-source applications. We align our methodology
and tooling with that of existing research, and present it in detail in order
to facilitate comparative evaluation. We study metric values during the entire
18-year development history of our target applications, in order to capture the
longitudinal view that we found lacking in existing literature. We identify
metric dependencies and check their consistency across applications and their
versions. At each step, we carry out comparative evaluation with existing
research and present our results.

Comment: Published in: Molnar AJ., Neamțu A., Motogna S. (2020) Evaluation
of Software Product Quality Metrics. In: Damiani E., Spanoudakis G.,
Maciaszek L. (eds) Evaluation of Novel Approaches to Software Engineering.
ENASE 2019. Communications in Computer and Information Science, vol 1172.
Springer, Cham. https://doi.org/10.1007/978-3-030-40223-5_
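The metric-dependency checks described above can be sketched as a rank-correlation test between pairs of metrics across the classes of a release. This is a minimal illustration, not the paper's tooling; the metric names (LOC, WMC) and the per-class values are invented for the example.

```python
# Hypothetical sketch: testing a metric dependency (e.g. LOC vs. WMC)
# via Spearman rank correlation, computed from scratch.

def rank(values):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Per-class LOC and WMC for one release (toy numbers).
loc = [120, 45, 300, 80, 210]
wmc = [15, 6, 40, 9, 22]
rho = spearman(loc, wmc)
print(f"Spearman rho: {rho:.2f}")  # a perfectly monotone pair gives 1.00
```

A longitudinal check would repeat this per release and flag dependencies whose correlation drifts between versions.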
Health Figures: An Open Source JavaScript Library for Health Data Visualization
The way we look at data has a great impact on how we can understand it,
particularly when the data is related to health and wellness. Due to the
increased use of self-tracking devices and the ongoing shift towards preventive
medicine, a better understanding of our health data is an important part of
improving the general welfare of citizens. Electronic Health Records,
self-tracking devices and mobile applications provide a rich variety of data,
but it is often difficult to understand. We implemented the hFigures library,
inspired by the hGraph visualization, with additional improvements. The
purpose of the library is to provide a visual representation of the evolution
of health measurements in a complete and useful manner. We researched the
usefulness and usability of the library by building an application for health
data visualization in a health coaching program. We performed a user evaluation
with Heuristic Evaluation, Controlled User Testing and Usability
Questionnaires. In the Heuristic Evaluation the average response was 6.3 out
of 7 points and the Cognitive Walkthrough done by usability experts indicated
no design or mismatch errors. In the CSUQ usability test the system obtained an
average score of 6.13 out of 7, and in the ASQ usability test the overall
satisfaction score was 6.64 out of 7. We developed hFigures, an open source
library for visualizing a complete, accurate and normalized graphical
representation of health data. The idea is based on the concept of the hGraph
but it provides additional key features, including a comparison of multiple
health measurements over time. We conducted a usability evaluation of the
library as a key component of an application for health and wellness
monitoring. The results indicate that the data visualization library was
helpful in assisting users in understanding health data and its evolution over
time.

Comment: BMC Medical Informatics and Decision Making 16.1 (2016
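The normalization step that such a visualization needs can be sketched as follows. This is an illustrative example only, not the hFigures API; the healthy ranges are assumed for demonstration and are not clinical guidance.

```python
# Illustrative sketch: mapping heterogeneous health measurements onto a
# shared unitless scale before plotting, as hGraph-style visualizations do.
# Values inside the healthy range map to [0, 1]; values outside fall below
# 0 or rise above 1 in proportion to their deviation.

def normalize(value, low, high):
    """Score a measurement against its healthy range [low, high]."""
    span = high - low
    if low <= value <= high:
        return (value - low) / span       # inside range: 0..1
    if value < low:
        return -(low - value) / span      # below range: negative
    return 1 + (value - high) / span      # above range: > 1

# Assumed (illustrative) fasting-glucose range in mg/dL.
print(normalize(95, 70, 110))   # 0.625 -> comfortably within range
print(normalize(130, 70, 110))  # 1.5   -> above range
```

Normalizing every measurement this way lets readings with different units sit on one chart and be compared over time.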
An Empirical Study of Cohesion and Coupling: Balancing Optimisation and Disruption
Search-based software engineering has been extensively applied to the problem of finding improved modular structures that maximise cohesion and minimise coupling. However, there has, hitherto, been no longitudinal study of developers’ implementations over a series of sequential releases. Moreover, results validating whether developers respect the fitness functions are scarce, and the potentially disruptive effect of search-based remodularisation is usually overlooked. We present an empirical study of 233 sequential releases of 10 different systems; the largest empirical study reported in the literature so far, and the first longitudinal one. Our results provide evidence that developers do, indeed, respect the fitness functions used to optimise cohesion/coupling (they are statistically significantly better than arbitrary choices, with p << 0.01), yet they also leave considerable room for further improvement (cohesion/coupling can be improved by 25% on average). However, we also report that optimising the structure is highly disruptive (on average more than 57% of the structure must change), while our results reveal that developers tend to avoid such disruption. Therefore, we introduce and evaluate a multi-objective evolutionary approach that minimises disruption while maximising cohesion/coupling improvement. This allows developers to balance their reticence to disrupt existing modular structure against their competing need to improve cohesion and coupling. The multi-objective approach is able to find modular structures that improve the cohesion of developers’ implementations by 22.52%, while causing an acceptably low level of disruption (within that already tolerated by developers).
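The quantities being optimised and constrained above can be sketched as simple graph computations. This is a toy illustration under assumed definitions (intra- vs. inter-module edge counts for cohesion/coupling, and the fraction of moved classes for disruption), not the paper's actual fitness functions.

```python
# Toy sketch of cohesion/coupling and disruption measures over a
# class-dependency graph and a candidate module partition.

def cohesion_coupling(edges, module_of):
    """Return (intra, inter) dependency counts for a partition."""
    intra = inter = 0
    for src, dst in edges:
        if module_of[src] == module_of[dst]:
            intra += 1   # dependency stays inside one module (cohesion)
        else:
            inter += 1   # dependency crosses modules (coupling)
    return intra, inter

def disruption(old_partition, new_partition):
    """Fraction of classes whose module assignment changed."""
    moved = sum(1 for c in old_partition
                if old_partition[c] != new_partition[c])
    return moved / len(old_partition)

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "E")]
old = {"A": 0, "B": 0, "C": 1, "D": 1, "E": 1}
new = {"A": 0, "B": 0, "C": 0, "D": 1, "E": 1}
print(cohesion_coupling(edges, old))  # (3, 2)
print(disruption(old, new))           # 0.2 -> one of five classes moved
```

A multi-objective search in this spirit would trade off maximising `intra` (and minimising `inter`) against keeping `disruption` low.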
MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging -- design, implementation and application on the example of DCE-MRI
Many medical imaging techniques utilize fitting approaches for quantitative
parameter estimation and analysis. Common examples are pharmacokinetic modeling
in DCE MRI/CT, ADC calculations and IVIM modeling in diffusion-weighted MRI and
Z-spectra analysis in chemical exchange saturation transfer MRI. Most available
software tools are limited to a special purpose and do not allow for custom
developments and extensions. Furthermore, they are mostly designed as
stand-alone solutions using external frameworks and thus cannot be easily
incorporated natively in the analysis workflow. We present a framework for
medical image fitting tasks that is included in MITK, following a rigorous
open-source, well-integrated and operating-system-independent policy. From a
software engineering perspective, the local models, the fitting infrastructure
and the results representation are abstracted, and can thus be easily adapted to any model
fitting task on image data, independent of image modality or model. Several
ready-to-use libraries for model fitting and use-cases, including fit
evaluation and visualization, were implemented. Their embedding into MITK
allows for easy data loading, pre- and post-processing and thus a natural
inclusion of model fitting into an overarching workflow. As an example, we
present a comprehensive set of plug-ins for the analysis of DCE MRI data, which
we validated on existing and novel digital phantoms, yielding competitive
deviations between fit and ground truth. Providing a very flexible environment,
our software mainly addresses developers of medical imaging software that
includes model fitting algorithms and tools. Additionally, the framework is of
high interest to users in the domain of perfusion MRI, as it offers
feature-rich, freely available, validated tools to perform pharmacokinetic
analysis on DCE MRI data, with both interactive and automated batch processing
workflows.

Comment: 31 pages, 11 figures. URL: http://mitk.org/wiki/MITK-ModelFi
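One of the simplest fitting tasks mentioned above, the ADC calculation in diffusion-weighted MRI, can be sketched as a per-voxel log-linear least-squares fit. This is an illustrative Python sketch, not MITK code; the b-values and signal are synthetic.

```python
# Sketch: fitting the mono-exponential ADC model S(b) = S0 * exp(-b * ADC).
# Taking logs makes the model linear in b, so ordinary least squares
# recovers ln(S0) (intercept) and -ADC (slope) per voxel.
import math

def fit_adc(b_values, signals):
    """Log-linear least-squares fit; returns (S0, ADC)."""
    xs = b_values
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    s0 = math.exp(my - slope * mx)
    return s0, -slope  # ADC is the negative slope

# Synthetic noise-free voxel: S0 = 1000, ADC = 1.0e-3 mm^2/s.
b = [0, 500, 1000]
s = [1000 * math.exp(-bi * 1.0e-3) for bi in b]
s0, adc = fit_adc(b, s)
print(f"S0 = {s0:.1f}, ADC = {adc:.2e}")
```

A framework like the one described abstracts the model (`S(b)` here) from this fitting machinery, so the same infrastructure serves pharmacokinetic or Z-spectra models as well.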
Benchmarking of a software stack for autonomous racing against a professional human race driver
The path to full autonomy for public road vehicles requires the step-by-step
replacement of the human driver, with the ultimate goal of removing the driver
entirely. Eventually, the driving software has to be able to handle all
situations on its own, even emergencies. These particular
situations require extreme combined braking and steering actions at the limits
of handling to avoid an accident or to diminish its consequences. An average
human driver is not trained to handle such extreme and rarely occurring
situations and therefore often fails to do so. However, professional race
drivers are trained to drive a vehicle utilizing the maximum amount of possible
tire forces. These abilities are of high interest for the development of
autonomous driving software. Here, we compare a professional race driver and
our software stack developed for autonomous racing with data analysis
techniques established in motorsports. The goal of this research is to derive
indications for further improvement of the performance of our software and to
identify areas where it still fails to meet the performance level of the human
race driver. Our results are used to extend our software's capabilities and
also to incorporate our findings into the research and development of public
road autonomous vehicles.

Comment: Accepted at 2020 Fifteenth International Conference on Ecological
Vehicles and Renewable Energies (EVER
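One motorsport-style analysis such a comparison relies on is a per-sector time delta between the two drivers. The following sketch is illustrative only; all lap data is invented.

```python
# Toy sketch: split a lap into sectors and compute per-sector time deltas
# between the human driver and the software stack, then locate the sector
# where the software loses the most time.

def sector_deltas(human, software):
    """Per-sector time difference (software - human), in seconds."""
    return [s - h for h, s in zip(human, software)]

human_lap = [28.4, 31.2, 25.9]     # sector times in seconds (invented)
software_lap = [28.9, 31.0, 26.8]

deltas = sector_deltas(human_lap, software_lap)
worst = max(range(len(deltas)), key=lambda i: deltas[i])
print(deltas)  # positive delta -> software slower in that sector
print(f"largest gap in sector {worst + 1}: {deltas[worst]:+.1f}s")
```

In practice the same idea extends to continuous channels (speed, throttle, combined tire forces) sampled along the lap, which is how the gaps are traced back to specific driving behaviours.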
Evaluating Digital Libraries: A Longitudinal and Multifaceted View
Optimizing sequencing protocols for leaderboard metagenomics by combining long and short reads.
As metagenomic studies move to increasing numbers of samples, communities like the human gut may benefit more from the assembly of abundant microbes in many samples, rather than the exhaustive assembly of fewer samples. We term this approach leaderboard metagenome sequencing. To explore protocol optimization for leaderboard metagenomics in real samples, we introduce a benchmark of library prep and sequencing using internal references generated by synthetic long-read technology, allowing us to evaluate high-throughput library preparation methods against gold-standard reference genomes derived from the samples themselves. We introduce a low-cost protocol for high-throughput library preparation and sequencing