22 research outputs found
Teaching global software development through game design
In order to be prepared for careers in today's global
economy, software engineering students need to understand the
issues, methods, and practices associated with Global Software
Development (GSD). One approach to teaching GSD is to conduct
a GSD project class involving student teams from different institutions
in different countries. This approach has the advantage
of giving students first-hand experience with the barriers to
collaboration and other issues faced by software development
teams engaged in GSD. However, this approach is resource-intensive
and requires cooperation among institutions.
This paper presents an alternate approach based on game
design, where students learn GSD concepts by developing a GSD
simulation game. Following this approach, students learn about
GSD through implementing a game engine that simulates the
effects of global distance on a distributed software project. The
experience shows that students seem to grasp the concepts and
issues as a side effect of implementing the game.
Requirements elicitation in open source software development: a case study
A growing body of empirical research has examined large, successful
open source software projects such as the Linux kernel,
Apache web server, and Mozilla web browser. Do these results
extend to small open source efforts involving a handful of developers?
A study of the OpenEMR open source electronic medical
record project was conducted, with the goal of understanding how
requirements are elicited, documented, agreed, and validated in a
small open source software project. The results show that the majority
of features are asserted by developers, based on either their
personal experience, or knowledge of users’ needs. Relatively few
were requested directly by users. Validation and documentation
took the form of informal discussions via the project’s developer
mailing list. These results are consistent with an earlier study of
the Firefox web browser, suggesting that there is a common open
source requirements approach that is independent of project size.
A qualitative study of open source software development: the OpenEMR project
Open Source software is competing successfully in many areas. The commercial sector is recognizing the benefits offered by Open Source development methods that
lead to high-quality software. Can these benefits be realized in specialized domains where expertise is rare? This study
examined discussion forums of an Open Source project in a particular specialized application domain – electronic medical
records – to see how development roles are carried out, and by whom. We found through a qualitative analysis that the core developers in this system include doctors and clinicians who also use the product. We also found that the size of the community associated with the project is an order of magnitude smaller than predicted, yet still maintains a high degree of responsiveness to issues raised by users. The implication is that a few experts and a small core of dedicated programmers can achieve success using an Open Source approach in a specialized domain.
A resource flow approach to modelling care pathways
Attempts to extend process management to support pathways in the health domain have not been as successful as workflow for routine business processes. In part this is due to the dynamic nature of knowledge-intensive work such as care pathways: the actions performed change continuously in response to the knowledge developed by those actions. Also, care pathways involve significant informal communications between those involved in caring for the patient and between these carers and the patient / patient family, which are difficult to capture. We propose an approach to supporting care pathways that embraces these difficulties. Rather than attempting to capture every nuance of individual activities, we seek to facilitate communication and coordination among knowledge workers to disseminate knowledge and pathway expertise throughout the organization.
UTP semantics for shared-state, concurrent, context-sensitive process models
Process Modelling Language (PML) is a notation
for describing software development and business processes. It
takes the form of a shared-state concurrent imperative language
describing tasks as activities that require resources to start
and provide resources when they complete. Its syntax covers
sequential composition, parallelism, iteration and choice, but
without explicit iteration and choice conditions. It is intended
to support a range of context-sensitive interpretations, from a
rough guide for intended behaviour, to being very prescriptive
about the order in which tasks must occur. We are using Unifying
Theories of Programming (UTP) to model this range of semantic
interpretations, with formal links between them, typically of the
nature of a refinement. We address a number of challenges
that arise when trying to develop a compositional semantics
for PML and its shared-state concurrent underpinnings, most
notably in how UTP observations need to distinguish between
dynamic state-changes and static context parameters. The formal
semantics are intended as the basis for tool support for process
analysis, with applications in the healthcare domain, covering
such areas as healthcare pathways and software development
and certification processes for medical device software.
Global software development and collaboration : barriers and solutions
While organisations recognise the advantages offered by global software development, there are many socio-technical barriers that affect successful collaboration in this inter-cultural environment. In this paper we present a review of the global software development literature in which we highlight collaboration problems experienced by a cross-section of organisations in twenty-six studies. We also look to the literature to answer how organisations are overcoming these barriers in practice. We build on our previous study on global software development, where we define collaboration as four practices related to agreeing, allocating, and planning goals, objectives, and tasks among distributed teams. We found that the key barriers to collaboration are geographic, temporal, cultural, and linguistic distance; the primary solutions to overcoming these barriers include site visits, synchronous communication technology, and knowledge sharing infrastructure to capture implicit knowledge and make it explicit.
Can automated text classification improve content analysis of software project data?
Content analysis is a useful approach for analyzing
unstructured software project data, but it is labor-intensive and
slow. Can automated text classification (using supervised machine
learning) be used to reduce the labor or improve the speed of
content analysis?
We conducted a case study involving data from a previous
study that employed content analysis of an open source software
project. We used a human-coded data set with 3256 samples to
create different size training sets ranging in size from 100 to
3000 samples to train an “ensemble” text classifier to assign one
of five different categories to a test set of samples.
The results show that the automated classifier could be trained
to recognize categories, but much less accurately than the human
classifiers. In particular, both precision and recall for low-frequency
categories were very low (less than 20%). Nevertheless,
we hypothesize that automated classifiers could be used to filter a
sample to identify common categories before human researchers
examine the remainder for more difficult categories.
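As an illustration of the kind of evaluation described above (not the study's actual pipeline), per-category precision and recall for a majority-vote ensemble can be sketched in a few lines of Python. The category labels and member classifiers here are hypothetical:

```python
from collections import Counter

def majority_vote(votes):
    """Ensemble decision: the most common label among member classifiers."""
    return Counter(votes).most_common(1)[0][0]

def precision_recall(gold, pred, category):
    """Per-category precision and recall against human-coded labels."""
    tp = sum(g == category and p == category for g, p in zip(gold, pred))
    fp = sum(g != category and p == category for g, p in zip(gold, pred))
    fn = sum(g == category and p != category for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical human-coded labels and three member classifiers' votes.
gold = ["bug", "feature", "feature", "question", "bug"]
member_votes = [
    ["bug", "feature", "feature", "bug", "bug"],
    ["bug", "bug", "feature", "question", "feature"],
    ["feature", "bug", "bug", "question", "bug"],
]
pred = [majority_vote(sample) for sample in zip(*member_votes)]
print(pred)                                  # ensemble predictions
print(precision_recall(gold, pred, "bug"))   # (precision, recall) for "bug"
```

Low-frequency categories suffer in exactly this way: with few true positives available, a handful of misclassifications drags precision or recall toward zero.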
Experience of industry case studies: a comparison of multi-case and embedded case study methods
This research comprises a methodological comparison of two
independent empirical case studies in industry: Case Study
A and Case Study B. Case Study A is a multiple-case study
involving a set of short-duration data collections with 46
practitioners at 9 international companies engaged in offshoring
and outsourcing. Case Study B, in contrast, is a
single-case, participant-observation embedded case study
lasting 13 months in a mid-sized Irish software company
with geographically distributed software teams. Both cases
were exploring similar problems of understanding the activities
performed by various actors involved in scrum software
development teams. In this study, we examine the findings
from both studies, the efficiency of the different case study
methods and the contributions offered by each approach. We
adopted naturalistic research criteria to evaluate the two case
study approaches. We found that both multiple-case and
embedded case studies are suitable for exploratory research
(hypothesis development) but that embedded research may
also be more suitable for explanatory research (hypothesis
testing). We also found that longitudinal case studies offer
better confirmability, while multi-case studies offer better
transferability. We propose a set of illustrative research questions
to assist with the selection of the appropriate case study
method.
Crafting a global teaming model for architectural knowledge
In this paper, we present the Global Teaming Model (GTM), which is empirically grounded and outlines practices that managers need to consider when managing virtual teams. We explain how the model can be adapted to specific areas of software development, using architectural knowledge management (AKM) as our exemplar. We focus on specific practices relating to how teams collaborate and share essential architectural knowledge across multiple sites. Through a review of the literature, we develop an in-depth view of recommended practices associated with AKM in a global environment. We then consider how we can incorporate these AKM practices into our Global Teaming Model to ensure managers are given the necessary support. Our contribution to research, therefore, is to present AKM practices within the context of all other Global Software Development processes.
Do scaling agile frameworks address global software development risks? An empirical study
Driven by the need to coordinate the activities of multiple agile development teams cooperating to produce a large software product, software-intensive organizations are turning to scaling agile software development frameworks. Despite the growing adoption of various scaling agile frameworks, there is little empirical evidence of how effective their practices are in mitigating risk, especially in global software development (GSD), where project failure is a known problem. In this study, we develop a GSD Risk Catalog of 63 risks to assess the degree to which two scaling agile frameworks, Disciplined Agile Delivery (DAD) and the Scaled Agile Framework (SAFe), address software project risks in GSD. We examined data from two longitudinal case studies implementing each framework to identify the extent to which the framework practices address GSD risks. Scaling agile frameworks appear to help companies eliminate or mitigate many traditional risks in GSD, especially those relating to users and customers. However, several important risks were not eliminated or mitigated. These persistent risks mainly belonged to the Environment quadrant, highlighting the inherent risk in developing software across geographic boundaries. Perhaps these frameworks (and arguably any framework) would have difficulty alleviating issues that appear to be outside the immediate control of the organization.