Towards a Theory of Software Development Expertise
Software development includes diverse tasks such as implementing new
features, analyzing requirements, and fixing bugs. Being an expert in those
tasks requires a certain set of skills, knowledge, and experience. Several
studies investigated individual aspects of software development expertise, but
what is missing is a comprehensive theory. We present a first conceptual theory
of software development expertise that is grounded in data from a mixed-methods
survey with 335 software developers and in literature on expertise and expert
performance. Our theory currently focuses on programming, but already provides
valuable insights for researchers, developers, and employers. The theory
describes important properties of software development expertise and which
factors foster or hinder its formation, including how developers' performance
may decline over time. Moreover, our quantitative results show that developers'
expertise self-assessments are context-dependent and that experience is not
necessarily related to expertise.
Comment: 14 pages, 5 figures, 26th ACM Joint European Software Engineering
Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE
2018), ACM, 201
Iterative criteria-based approach to engineering the requirements of software development methodologies
Software engineering endeavours are typically based on and governed by the requirements of the target software; requirements identification is therefore an integral part of software development methodologies. Similarly, engineering a software development methodology (SDM) involves the identification of the requirements of the target methodology. Methodology engineering approaches pay special attention to this issue; however, they make little use of existing methodologies as sources of insight into methodology requirements. The authors propose an iterative method for eliciting and specifying the requirements of an SDM using existing methodologies as supplementary resources. The method is performed as the analysis phase of a methodology engineering process aimed at the ultimate design and implementation of a target methodology. An initial set of requirements is first identified through analysing the characteristics of the development situation at hand and/or via delineating the general features desirable in the target methodology. These initial requirements are used as evaluation criteria and refined through iterative application to a select set of relevant methodologies. The finalised criteria highlight the qualities that the target methodology is expected to possess, and are therefore used as a basis for defining the final set of requirements. In an example, the authors demonstrate how the proposed elicitation process can be used for identifying the requirements of a general object-oriented SDM. Owing to its basis in knowledge gained from existing methodologies and practices, the proposed method can help methodology engineers produce a set of requirements that is not only more complete in span, but also more concrete and rigorous.
Ontology modelling methodology for temporal and interdependent applications
The increasing adoption of Semantic Web technology by several classes of applications in recent years has made ontology engineering a crucial part of application development. Nowadays, the abundant accessibility of interdependent information from multiple sources, representing various fields such as health, transport, and banking, further evidences the growing need for utilising ontologies in the development of Web applications. While there have been several advances in the adoption of ontologies for application development, less emphasis has been placed on modelling methodologies for representing modern-day applications that are characterised by the temporal nature of the data they process, which is captured from multiple sources. Taking into account the benefits of a methodology in system development, we propose a novel methodology for modelling ontologies representing Context-Aware Temporal and Interdependent Systems (CATIS). CATIS is an ontology development methodology for modelling temporal, interdependent applications, in order to achieve the desired results when modelling sophisticated applications with temporal and interdependent attributes that suit today's application requirements.
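The core idea the abstract describes, facts that carry their own temporal context and depend on facts from other sources, can be sketched in plain Python. This is a minimal, hypothetical illustration of reified temporal facts; the class and field names are assumptions for the example, not part of the CATIS methodology itself.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch: each assertion is reified so it carries its own
# validity time and source, and facts can depend on one another
# (the "interdependent" aspect). Names are illustrative only.

@dataclass(frozen=True)
class TemporalFact:
    subject: str
    predicate: str
    value: object
    valid_at: datetime        # temporal context of the assertion
    source: str               # which of the multiple sources produced it
    depends_on: tuple = ()    # interdependency: upstream facts

reading = TemporalFact("patient42", "heartRate", 72,
                       datetime(2024, 1, 5, 10, 0), "wearableSensor")
alert = TemporalFact("patient42", "tachycardiaAlert", False,
                     datetime(2024, 1, 5, 10, 0), "rulesEngine",
                     depends_on=(reading,))

def stale(fact, updated_subjects):
    """A fact becomes stale when any upstream fact's subject was updated."""
    return any(d.subject in updated_subjects for d in fact.depends_on)

print(stale(alert, {"patient42"}))   # the alert depends on patient42's reading
print(stale(reading, {"patient42"}))  # the raw reading depends on nothing
```

In an ontology-based implementation the same pattern would appear as reification (or named graphs) with temporal annotations, but the dependency-tracking logic is the same.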
A guidance and evaluation approach for mHealth education applications
© Springer International Publishing AG 2017. A growing number of mobile applications for health education are being used to support different stakeholders, from health professionals to software developers to patients and more general users. There is, however, a lack of a critical evaluation framework to ensure the usability and reliability of these mobile health education applications (MHEAs); such a framework would save time and effort for the different user groups. This paper describes a framework for evaluating mobile applications for health education, including a guidance tool to help different stakeholders select the application most suitable for them. The framework is intended to meet the needs and requirements of the different user categories, as well as to improve the development of MHEAs through software engineering approaches. A description of the evaluation framework is provided, with its efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) factors. Lastly, an account of the quantitative and qualitative results for the framework applied to Medscape and other mobile apps is given. The proposed framework, an Evaluation Framework for Mobile Health Education Apps, consists of a hybrid of five metrics selected from a larger set during heuristic and usability evaluation, the choice being based on interviews with patients, software developers and health professionals.
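A hybrid of HE and UE metrics of the kind the abstract describes can be blended into a single score. The following sketch is purely illustrative: the metric names, the 1-to-5 rating scale, and the equal weighting are assumptions for the example, not the framework's actual five metrics or weights.

```python
# Hypothetical blend of heuristic-evaluation (HE) and usability-evaluation
# (UE) ratings for one mHealth education app. Metric names and the 50/50
# weighting are illustrative assumptions.

HE_WEIGHT, UE_WEIGHT = 0.5, 0.5  # assumed equal weighting

def hybrid_score(he_ratings, ue_ratings):
    """Average each evaluation's 1-5 ratings, then blend the two averages."""
    he_avg = sum(he_ratings.values()) / len(he_ratings)
    ue_avg = sum(ue_ratings.values()) / len(ue_ratings)
    return HE_WEIGHT * he_avg + UE_WEIGHT * ue_avg

he = {"error_prevention": 4, "consistency": 5, "visibility_of_status": 3}
ue = {"learnability": 4, "satisfaction": 5}
print(round(hybrid_score(he, ue), 2))  # 4.25
```

In practice the weights would be derived from the stakeholder interviews the paper mentions rather than fixed at 0.5.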
Improving (Software) Patent Quality Through the Administrative Process
The available evidence indicates that patent quality, particularly in the area of software, needs improvement. This Article argues that even an agency as institutionally constrained as the U.S. Patent and Trademark Office ("PTO") could implement a portfolio of pragmatic, cost-effective quality improvement strategies. The argument in favor of these strategies draws upon not only legal theory and doctrine but also new data from a PTO software examination unit with relatively strict practices. Strategies that revolve around Section 112 of the patent statute could usefully be deployed at the initial examination stage. Other strategies could be deployed within the new post-issuance procedures available to the agency under the America Invents Act. Notably, although the strategies the Article discusses have the virtue of being neutral as to technology, they are likely to have a very significant practical impact in the area of software.
Integrating Research Data Management into Geographical Information Systems
Ocean modelling requires the production of high-fidelity computational meshes
upon which to solve the equations of motion. The production of such meshes by
hand is often infeasible, considering the complexity of the bathymetry and
coastlines. The use of Geographical Information Systems (GIS) is therefore a
key component to discretising the region of interest and producing a mesh
appropriate to resolve the dynamics. However, all data associated with the
production of a mesh must be provided in order to contribute to the overall
recomputability of the subsequent simulation. This work presents the
integration of research data management in QMesh, a tool for generating meshes
using GIS. The tool uses the PyRDM library to provide a quick and easy way for
scientists to publish meshes, and all data required to regenerate them, to
persistent online repositories. These repositories are assigned unique
identifiers to enable proper citation of the meshes in journal articles.
Comment: Accepted, camera-ready version. To appear in the Proceedings of the
5th International Workshop on Semantic Digital Archives
(http://sda2015.dke-research.de/), held in Poznań, Poland on 18 September
2015 as part of the 19th International Conference on Theory and Practice of
Digital Libraries (http://tpdl2015.info/)
Contribution structures
The invisibility of the individuals and groups that gave rise to requirements artifacts has
been identified as a primary reason for the persistence of requirements traceability
problems. This paper presents an approach, based on modelling the dynamic contribution
structures underlying requirements artifacts, which addresses this issue. We show how
these structures can be defined, using information about the agents who have contributed
to artifact production, in conjunction with details of the numerous traceability relations
that hold within and between artifacts themselves. We describe a scheme, derived from
work in sociolinguistics, which can be used to indicate the capacities in which agents
contribute. We then show how this information can be used to infer details about the
social roles and commitments of agents with respect to their various contributions and to
each other. We further propose a categorisation for artifact-based traceability relations
and illustrate how they impinge on the identification and definition of these structures.
Finally, we outline how this approach can be implemented and supported by tools,
explain the means by which requirements change can be accommodated in the
corresponding contribution structures, and demonstrate the potential it provides for
"personnel-based" requirements traceability