46,397 research outputs found
Agile Requirements Engineering: A systematic literature review
Nowadays, Agile Software Development (ASD) is used to cope with increasing complexity in system development. Hybrid development models, integrating User-Centered Design (UCD), are applied with the aim of delivering competitive products with a suitable User Experience (UX). Stakeholder and user involvement during Requirements Engineering (RE) is therefore essential to establish a collaborative environment with constant feedback loops. The aim of this study is to capture the current state of the art of the literature on Agile RE, with a focus on stakeholder and user involvement. In particular, we investigate which approaches exist to involve stakeholders in the process, which methodologies are commonly used to represent the user perspective, and how requirements management is carried out.
We conduct a Systematic Literature Review (SLR) with an extensive quality assessment of the included studies. We identified 27 relevant papers. After analyzing them in detail, we derive deep insights into the following aspects of Agile RE: stakeholder and user involvement, data gathering, user perspective, integrated methodologies, shared understanding, artifacts, documentation, and Non-Functional Requirements (NFR). Agile RE is a complex research field with cross-functional influences. This study contributes to the software development body of knowledge by assessing the involvement of stakeholders and users in Agile RE, providing methodologies that make ASD more human-centric, and giving an overview of requirements management in ASD.
Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R; Ministerio de Economía y Competitividad TIN2015-71938-RED
Human-Centered Approaches in Geovisualization Design: Investigating Multiple Methods Through a Long-Term Case Study
Working with three domain specialists, we investigate human-centered approaches to geovisualization following an ISO 13407 taxonomy covering context of use, requirements, and early stages of design. Our case study, undertaken over three years, draws attention to repeating trends: generic approaches fail to elicit adequate requirements for geovis application design; the use of real data is key to understanding needs and possibilities; trust and knowledge must be built and developed with collaborators. These processes take time, but modified human-centered approaches can be effective. A scenario developed through contextual inquiry, supplemented with domain data and graphics, is useful to geovis designers. Wireframe, paper, and digital prototypes enable successful communication between the specialist and geovis domains when they incorporate real and interesting data, prompting exploratory behaviour and eliciting previously unconsidered requirements. Paper prototypes are particularly successful at eliciting suggestions, especially for novel visualization. Enabling specialists to explore their data freely with a digital prototype is as effective as using a structured task protocol and is easier to administer. Autoethnography has potential for framing the design process. We conclude that a common understanding of context of use, domain data, and visualization possibilities is essential to successful geovis design and develops as this progresses. Human-centered approaches can make a significant contribution here; however, modified approaches, applied with flexibility, are most promising. We advise early, collaborative engagement with data, through simple, transient visual artefacts supported by data sketches and existing designs, before moving to successively more sophisticated data wireframes and data prototypes.
Relevance, benefits, and problems of software modelling and model driven techniques—A survey in the Italian industry
Context: Claimed benefits of software modelling and model-driven (MD) techniques are improvements in productivity, portability, maintainability, and interoperability. However, little effort has been devoted to collecting evidence to evaluate their actual relevance, benefits, and usage complications.
Goal: The main goals of this paper are to: (1) assess the diffusion and relevance of software modelling and MD techniques in the Italian industry, (2) understand the expected and achieved benefits, and (3) identify which problems limit or prevent their diffusion.
Method: We conducted an exploratory personal opinion survey with a sample of 155 Italian software professionals by means of a Web-based questionnaire, online from February to April 2011.
Results: Software modelling and MD techniques are very relevant in the Italian industry. The adoption of simple modelling brings common benefits (better design support, documentation improvement, better maintenance, and higher software quality), while MD techniques make it easier to achieve improved standardization, higher productivity, and platform independence. We identified problems, some hindering adoption (too much effort required and limited usefulness) and others preventing it (lack of competencies and supporting tools).
Conclusions: The relevance represents an important objective motivation for researchers in this area. The relationship between techniques and attainable benefits represents an instrument for practitioners planning the adoption of such techniques. In addition, the findings may provide hints for companies and universities.
Realization of Semantic Atom Blog
A Web blog is used as a collaborative platform to publish and share information. The information accumulated in a blog intrinsically contains knowledge, and the knowledge shared by a community of people has an intangible value proposition. The blog can be viewed as a multimedia information resource available on the Internet, in which information in the form of text, images, audio, and video builds up exponentially. However, the multimedia information contained in an Atom blog lacks the machine-processable structure required for its content to be accessed, processed, and reused by software over the Internet. This shortcoming is addressed by exploring OWL knowledge modelling, semantic annotation, and semantic categorization techniques in the Atom blogosphere. By adopting these techniques, futuristic Atom blogs can be created and deployed over the Internet.
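As an illustration only (not the paper's implementation), semantic categorization of an Atom entry can be sketched by pointing the entry's standard category elements at classes in an OWL ontology. The ontology IRI and class names below are hypothetical; the Atom element and attribute names follow the Atom syndication format.

```python
# Minimal sketch: semantically categorizing an Atom entry by making its
# <category> elements reference classes in a (hypothetical) OWL ontology.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ONTOLOGY = "http://example.org/blog-ontology#"  # assumed ontology IRI

def annotate_entry(title, text, concepts):
    """Build an Atom entry whose categories reference ontology classes."""
    ET.register_namespace("", ATOM)  # serialize with Atom as default namespace
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = title
    ET.SubElement(entry, f"{{{ATOM}}}content", type="text").text = text
    for concept in concepts:
        # scheme identifies the ontology; term names the class within it
        ET.SubElement(entry, f"{{{ATOM}}}category",
                      scheme=ONTOLOGY, term=concept)
    return entry

entry = annotate_entry("Gravitational waves explained",
                       "An introductory post...",
                       ["Physics", "Tutorial"])
print(ET.tostring(entry, encoding="unicode"))
```

A software agent that knows the ontology can then resolve each category term to an OWL class and reason over the blog's content, which is the kind of machine processing the abstract says plain Atom blogs do not support.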
Managing Research Data in Big Science
The project which led to this report was funded by JISC in 2010–2011 as part of its 'Managing Research Data' programme, to examine the way in which Big Science data is managed, and produce any recommendations which may be appropriate. Big science data is different: it comes in large volumes, and it is shared and exploited in ways which may differ from other disciplines. This project has explored these differences using as a case-study Gravitational Wave data generated by the LSC, and has produced recommendations intended to be useful variously to JISC, the funding council (STFC) and the LSC community. In Sect. 1 we define what we mean by 'big science', describe the overall data culture there, laying stress on how it necessarily or contingently differs from other disciplines. In Sect. 2 we discuss the benefits of a formal data-preservation strategy, and the cases for open data and for well-preserved data that follow from that. This leads to our recommendations that, in essence, funders should adopt rather light-touch prescriptions regarding data preservation planning: normal data management practice, in the areas under study, corresponds to notably good practice in most other areas, so that the only change we suggest is to make this planning more formal, which makes it more easily auditable, and more amenable to constructive criticism. In Sect. 3 we briefly discuss the LIGO data management plan, and pull together whatever information is available on the estimation of digital preservation costs. The report is informed, throughout, by the OAIS reference model for an open archive.
Managing Research Data: Gravitational Waves
The project which led to this report was funded by JISC in 2010–2011 as part of its ‘Managing Research Data’ programme, to examine the way in which Big Science data is managed, and produce any recommendations which may be appropriate. Big science data is different: it comes in large volumes, and it is shared and exploited in ways which may differ from other disciplines. This project has explored these differences using as a case-study Gravitational Wave data generated by the LSC, and has produced recommendations intended to be useful variously to JISC, the funding council (STFC) and the LSC community.
In Sect. 1 we define what we mean by ‘big science’, describe the overall data culture there, laying stress on how it necessarily or contingently differs from other disciplines. In Sect. 2 we discuss the benefits of a formal data-preservation strategy, and the cases for open data and for well-preserved data that follow from that. This leads to our recommendations that, in essence, funders should adopt rather light-touch prescriptions regarding data preservation planning: normal data management practice, in the areas under study, corresponds to notably good practice in most other areas, so that the only change we suggest is to make this planning more formal, which makes it more easily auditable, and more amenable to constructive criticism. In Sect. 3 we briefly discuss the LIGO data management plan, and pull together whatever information is available on the estimation of digital preservation costs.
The report is informed, throughout, by the OAIS reference model for an open archive. Some of the report’s findings and conclusions were summarised in [1]. See the document history on page 37.
Integrating the common variability language with multilanguage annotations for web engineering
Web application development involves managing a high diversity of files and resources, such as code, pages, or style sheets, implemented in different languages. To deal with the automatic generation of custom-made configurations of web applications, industry usually adopts annotation-based approaches, even though the majority of studies encourage the use of composition-based approaches to implement Software Product Lines. Recent work tries to combine both approaches to obtain their complementary benefits. However, technological companies are reluctant to adopt new development paradigms such as feature-oriented programming or aspect-oriented programming. Moreover, it is extremely difficult, or even impossible, to apply these programming models to web applications, mainly because of their multilingual nature, since their development involves multiple types of source code (Java, Groovy, JavaScript), templates (HTML, Markdown, XML), style sheet files (CSS and its variants, such as SCSS), and other files (JSON, YML, shell scripts). We propose to use the Common Variability Language as a composition-based approach and integrate annotations to manage the fine-grained variability of a Software Product Line for web applications. In this paper, we (i) show that existing composition- and annotation-based approaches, including some well-known combinations, are not appropriate to model and implement the variability of web applications; and (ii) present a combined approach that effectively integrates annotations into a composition-based approach for web applications. We implement our approach and show its applicability with an industrial real-world system.
Funding: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
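To illustrate the kind of annotation-based, fine-grained variability the abstract describes, here is a minimal sketch (our own, not the paper's approach or CVL syntax): comment markers delimit feature-specific regions in an arbitrary text file, and a resolver keeps or drops each region according to the selected feature set. The "// #if <feature>" / "// #endif" markers and the feature names are assumptions for the example.

```python
# Illustrative sketch of annotation-based variability resolution.
# Regions marked "// #if <feature>" ... "// #endif" survive only when
# that feature is in the selected configuration; markers may nest.
def resolve(lines, features):
    """Return the product variant for the given set of selected features."""
    out, keep = [], [True]  # keep[-1]: is the current region enabled?
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("// #if "):
            feature = stripped.split()[-1]
            keep.append(keep[-1] and feature in features)
        elif stripped == "// #endif":
            keep.pop()
        elif keep[-1]:
            out.append(line)
    return out

# Works on any text resource (HTML here), since only the markers matter.
template = [
    "<nav>",
    "// #if premium",
    "  <a href='/reports'>Reports</a>",
    "// #endif",
    "</nav>",
]
print("\n".join(resolve(template, {"basic"})))   # link removed
print("\n".join(resolve(template, {"premium"})))  # link kept
```

Because the resolver is oblivious to the host language, the same marker convention can annotate Java, Groovy, CSS, or JSON files alike, which is why annotation-based approaches cope well with the multilingual nature of web applications; a composition-based layer such as CVL would then handle the coarse-grained placement of whole fragments.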
Well Performance Tracking in a Mature Waterflood Asset