Systematic evaluation of design choices for software development tools
[Abstract]: Most design and evaluation of software tools
is based on the intuition and experience of the designers.
Software tool designers consider themselves typical users
of the tools that they build and tend to evaluate their products subjectively rather than objectively with established usability methods. This subjective approach is inadequate if the quality of software tools is to improve, and the use of more systematic methods is advocated. This paper summarises a sequence of studies that show how user interface design choices for software development tools can be evaluated using established usability engineering techniques. The techniques used included guideline review, predictive modelling, and experimental studies with users.
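The predictive modelling the abstract mentions belongs to the same family as the Keystroke-Level Model (KLM); the paper does not say which specific model it used, so the sketch below is only an illustration of the technique, with the commonly cited average operator times rather than values from the study.

```python
# Minimal KLM sketch: predict task time by summing per-operator times.
# Operator times are the commonly cited averages, not values from the paper.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with mouse
    "H": 0.40,  # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(ops: str) -> float:
    """Total predicted task time (seconds) for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in ops)

# Hypothetical task: mentally prepare, point at a field, type a 7-char name.
print(round(klm_estimate("MPK" + "K" * 6), 2))
```

A designer can compare two candidate interfaces by writing out their operator sequences and comparing the predicted times, before any user study is run.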
Evolution of statistical analysis in empirical software engineering research: Current state and steps forward
Software engineering research is evolving and papers are increasingly based
on empirical data from a multitude of sources, using statistical tests to
determine if and to what degree empirical evidence supports their hypotheses.
To investigate the practices and trends of statistical analysis in empirical
software engineering (ESE), this paper presents a review of a large pool of
papers from top-ranked software engineering journals. First, we manually
reviewed 161 papers and in the second phase of our method, we conducted a more
extensive semi-automatic classification of 5,196 papers spanning the years
2001--2015. Results from both review steps were used to: i) identify and
analyze the predominant practices in ESE (e.g., using t-test or ANOVA), as well
as relevant trends in usage of specific statistical methods (e.g.,
nonparametric tests and effect size measures) and, ii) develop a conceptual
model for a statistical analysis workflow with suggestions on how to apply
different statistical methods as well as guidelines to avoid pitfalls. Lastly,
we confirm existing claims that current ESE practices lack a standard to report
practical significance of results. We illustrate how practical significance can
be discussed in terms of both the statistical analysis and in the
practitioner's context.
Comment: journal submission, 34 pages, 8 figures
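One concrete way to report practical significance alongside a statistical test is an effect-size measure such as Cohen's d. The sketch below uses only the standard library and invented data; it is an illustration of the measure the abstract discusses, not code from the paper.

```python
# Cohen's d (pooled-standard-deviation variant) in pure Python.
# The two samples below are made up for illustration.
from statistics import mean, stdev

def cohens_d(a, b):
    """Effect size between two samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

baseline = [12.1, 11.8, 12.4, 12.0, 11.9]   # e.g. build times, tool A
treated  = [11.2, 11.0, 11.5, 11.3, 11.1]   # e.g. build times, tool B
print(f"Cohen's d = {cohens_d(baseline, treated):.2f}")
```

By convention, |d| around 0.2 is a small effect and 0.8 or more a large one; reporting d next to the p-value lets practitioners judge whether a statistically significant difference matters in their context.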
Astrophysics datamining in the classroom: Exploring real data with new software tools and robotic telescopes
Within the efforts to bring frontline interactive astrophysics and astronomy
to the classroom, the Hands on Universe (HOU) project developed a set of exercises and a platform using real data obtained by some of the most advanced ground and space observatories. The backbone of this endeavour is a new free software Web tool: Such A Lovely Software for Astronomy based on ImageJ (Salsa J). It is student-friendly, was developed specifically for the HOU project, and targets middle and high schools. It allows students to display, analyze, and explore professionally obtained astronomical images while learning concepts in gravitational dynamics, kinematics, nuclear fusion, and electromagnetism. The continuously evolving set of exercises and tutorials is being completed with real
(professionally obtained) data to download and detailed tutorials. The
flexibility of the Salsa J platform tool enables students and teachers to
extend the exercises with their own observations. The software developed for
the HOU program has been designed to be a multi-platform, multi-lingual
experience for image manipulation and analysis in the classroom. Its design
enables easy implementation of new facilities (extensions and plugins), minimal
in-situ maintenance, and flexibility for exercise plugins. Here, we describe some
of the most advanced exercises about astrophysics in the classroom, addressing
particular examples on gravitational dynamics, concepts currently introduced in
most science curricula in middle and high schools.
Comment: 10 pages, 12 images, submitted to the special theme issue Using Astronomy and Space Science Research in Physics Courses of the American Journal of Physics
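A typical measurement students make in image-analysis tools of this kind is estimating a star's brightness by summing pixel values in a small aperture and subtracting the sky background. The sketch below is a hypothetical, toy version of that exercise; the 5x5 "image" is invented, whereas real HOU exercises use professionally obtained data.

```python
# Toy aperture photometry: sum pixels around a "star", subtract background.
# The image below is invented; real exercises load FITS frames.
image = [
    [1, 1, 1, 1, 1],
    [1, 2, 5, 2, 1],
    [1, 5, 9, 5, 1],
    [1, 2, 5, 2, 1],
    [1, 1, 1, 1, 1],
]

def aperture_sum(img, cx, cy, r):
    """Sum of pixels within a square aperture of half-width r around (cx, cy)."""
    total = 0
    for y in range(max(0, cy - r), min(len(img), cy + r + 1)):
        for x in range(max(0, cx - r), min(len(img[0]), cx + r + 1)):
            total += img[y][x]
    return total

star = aperture_sum(image, 2, 2, 1)   # signal plus background (3x3 box)
sky = image[0][0] * 9                 # crude background: 9 sky-level pixels
print(star - sky)                     # → 28, background-subtracted brightness
```

The same sum over two images of a variable star, taken on different nights, is enough for students to plot a light curve.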
Lifecycle information for e-literature: full report from the LIFE project
This Report is a record of the LIFE Project. The Project ran for one year and its aim was to deliver crucial information about the cost and management of digital material. This information should in turn be applicable to any institution with an interest in preserving and providing access to electronic collections.
The Project is a joint venture between The British Library and UCL Library Services. It is funded by JISC under programme area (i), Institutional Management Support and Collaboration, as listed in paragraph 16 of the JISC 4/04 circular, and as such has set requirements and outcomes which must be met; the Project has done its best to meet them. Where the Project has been unable to answer specific questions, strong recommendations have been made for future Project work to do so.
The outcomes of this Project are expected to be a practical set of guidelines and a framework within which costs can be applied to digital collections in order to answer the following questions:
• What is the long-term cost of preserving digital material?
• Who is going to do it?
• What are the long-term costs for a library in HE/FE to partner with another institution to carry out long-term archiving?
• What are the comparative long-term costs of a paper and a digital copy of the same publication?
• At what point will there be sufficient confidence in the stability and maturity of digital preservation to switch from paper for publications available in parallel formats?
• What are the relative risks of digital versus paper archiving?
The Project has attempted to answer these questions by using a developing
lifecycle methodology and three diverse collections of digital content. The LIFE Project team chose UCL e-journals, BL Web Archiving and the BL VDEP digital collections to provide a strong challenge to the methodology as well as to help reach the key Project aim of attributing long term cost to digital collections. The results from the Case Studies and the Project findings are both surprising
and illuminating.
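The lifecycle costing framework the report describes amounts to attributing one-off and recurring costs to a collection over a preservation horizon. The sketch below is illustrative only: the stage names and figures are hypothetical and are not taken from the LIFE report.

```python
# Illustrative lifecycle cost: one-off stage costs plus recurring annual
# costs over a preservation horizon. All names and figures are invented.
ONE_OFF = {"acquisition": 500.0, "ingest": 300.0, "metadata": 200.0}
ANNUAL = {"storage": 50.0, "preservation_watch": 80.0, "access": 40.0}

def lifecycle_cost(years: int) -> float:
    """Total cost of keeping a digital collection for `years` years."""
    return sum(ONE_OFF.values()) + years * sum(ANNUAL.values())

print(lifecycle_cost(10))   # cost over a 10-year horizon
```

Comparing this figure against the equivalent paper-archiving cost over the same horizon is the kind of question the Project's framework is meant to make answerable.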
Generating natural language specifications from UML class diagrams
Early phases of software development are known to be problematic and difficult to manage, and errors occurring during these phases are expensive to correct. Many systems have been developed to aid the transition from informal Natural Language requirements to semi-structured or formal specifications. Furthermore, consistency checking is seen by many software engineers as the solution to reducing the number of errors occurring during the software development life cycle and to allowing early verification and validation of software systems. However, this is confined to the models developed during analysis and design and fails to include the early Natural Language requirements. This excludes proper user involvement and creates a gap between the original requirements and the updated and modified models and implementations of the system. To improve this process, we propose a system that generates Natural Language specifications from UML class diagrams. We first investigate the variation of the input language used in naming the components of a class diagram, based on a study of a large number of examples from the literature, and then develop rules for removing ambiguities in the subset of Natural Language used within UML. We use WordNet, a linguistic ontology, to disambiguate the lexical structures of the UML string names and generate semantically sound sentences. Our system is developed in Java and is tested on an independent, though academic, case study.
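A first step any such generator needs is splitting UML string names (camelCase or PascalCase identifiers) into word sequences that a lexical resource like WordNet can look up. The sketch below shows that step plus a single template sentence; the template is a hypothetical simplification, not the paper's actual rule set (which is in Java).

```python
# Split UML identifiers into words, then render a simple template sentence.
# The sentence template is a hypothetical stand-in for the paper's rules.
import re

def split_identifier(name: str) -> list[str]:
    """Split 'customerOrderDate' into ['customer', 'order', 'date']."""
    return [w.lower()
            for w in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)]

def attribute_sentence(class_name: str, attr_name: str) -> str:
    """Render one attribute of a class as a Natural Language sentence."""
    cls = " ".join(split_identifier(class_name))
    attr = " ".join(split_identifier(attr_name))
    return f"Each {cls} has a {attr}."

print(attribute_sentence("LibraryMember", "membershipExpiryDate"))
# → Each library member has a membership expiry date.
```

The word lists this produces are exactly what would be passed to WordNet to pick the intended sense of ambiguous terms before generating the final sentence.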
On A Simpler and Faster Derivation of Single Use Reliability Mean and Variance for Model-Based Statistical Testing
Markov chain usage-based statistical testing has proved sound and effective in providing audit trails of evidence in certifying software-intensive systems. The system end-to-end reliability is derived analytically in closed form, following an arc-based Bayesian model. System reliability is represented by an important statistic called single use reliability, defined as the probability of a randomly selected use being successful. This paper continues our earlier work on a simpler and faster derivation of the single use reliability mean, and proposes a new derivation of the single use reliability variance by applying a well-known theorem and eliminating the need to compute the second moments of arc failure probabilities. Our new results complete an analysis that is simpler, faster, and more direct, while also offering a more intuitive explanation. Our new theory is illustrated with three simple Markov chain usage models, with manual derivations and experimental results.
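To make the single use reliability statistic concrete: in a Markov chain usage model each arc carries a usage probability p and a success probability r, and the reliability of a state is the chance that a use starting there reaches the exit with every traversed arc succeeding. The toy model and numbers below are invented, and the simple fixed-point iteration stands in for the paper's closed-form derivation.

```python
# Single use reliability on a toy Markov chain usage model.
# Each arc is (usage probability p, success probability r, target state);
# R(state) = sum over arcs of p * r * R(target), with R("exit") = 1.
# Model and numbers are invented for illustration.
MODEL = {
    "enter": [(1.0, 0.999, "menu")],
    "menu":  [(0.7, 0.995, "work"), (0.3, 0.999, "exit")],
    "work":  [(0.8, 0.990, "menu"), (0.2, 0.998, "exit")],
    "exit":  [],
}

def single_use_reliability(model, start="enter", iterations=200):
    """Fixed-point iteration for the probability a random use succeeds."""
    R = {s: 0.0 for s in model}
    R["exit"] = 1.0
    for _ in range(iterations):
        for s, arcs in model.items():
            if arcs:
                R[s] = sum(p * r * R[t] for p, r, t in arcs)
    return R[start]

print(round(single_use_reliability(MODEL), 4))
```

The iteration converges because every cycle in the usage model loses probability mass through the arc failure terms; the paper's contribution is obtaining the mean and variance of this quantity analytically rather than numerically.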