
    Understanding requirements dependency in requirements prioritization: a systematic literature review

    Requirement prioritization (RP) is a crucial task in managing requirements, as it determines the order of implementation and, thus, the delivery of a software system. Improper RP may cause software project failures due to budget and schedule overruns as well as a low-quality product. Several factors influence RP, one of which is requirements dependency. Inappropriate handling of requirements dependencies can lead to software development failures: if a requirement that serves as a prerequisite for other requirements is given low priority, it affects the overall project completion time. Despite its importance, little is known about requirements dependency in RP, particularly its impacts, types, and techniques. This study therefore aims to understand the phenomenon by analyzing the existing literature. It addresses three objectives: to investigate the impacts of requirements dependency on RP, to identify different types of requirements dependency, and to discover the techniques used for requirements dependency problems in RP. To fulfill these objectives, this study adopts the Systematic Literature Review (SLR) method. Applying the SLR protocol, this study selected forty primary articles, comprising 58% journal papers, 32% conference proceedings, and 10% book sections. The results of the data synthesis indicate that requirements dependency has significant impacts on RP, and that there are a number of requirements dependency types as well as techniques for addressing requirements dependency problems in RP. The research discovered various techniques employed, including graphs for requirements dependency visualization, machine learning for handling large-scale RP, decision-making methods for handling multiple criteria, and optimization techniques based on evolutionary algorithms. The study also reveals that the existing techniques have serious limitations in terms of scalability, time consumption, interdependencies of requirements, and the limited types of requirement dependencies covered.
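
    To illustrate how prerequisite dependencies constrain prioritization, the sketch below orders requirements by business priority while never scheduling a requirement before its prerequisites, using a dependency-aware topological ordering. This is a minimal, hypothetical example; the requirement IDs, priorities, and dependency links are illustrative and not drawn from the reviewed studies.

        # Minimal sketch: priority-aware topological ordering of requirements.
        # Requirement IDs, priorities, and dependencies are hypothetical examples.

        # priority: lower number = more important to the business
        priorities = {"R1": 3, "R2": 1, "R3": 2, "R4": 1}

        # "R2": {"R1"} means R1 must be implemented before R2
        depends_on = {"R2": {"R1"}, "R3": {"R1"}, "R4": {"R3"}}

        def prioritize(priorities, depends_on):
            remaining = dict(priorities)
            done, order = set(), []
            while remaining:
                # requirements whose prerequisites are all satisfied
                ready = [r for r in remaining if depends_on.get(r, set()) <= done]
                if not ready:
                    raise ValueError("cyclic dependency among requirements")
                # among the ready ones, pick the highest business priority
                best = min(ready, key=lambda r: remaining[r])
                order.append(best)
                done.add(best)
                del remaining[best]
            return order

        print(prioritize(priorities, depends_on))  # ['R1', 'R2', 'R3', 'R4']

    Note that R1 is scheduled first despite having the lowest business priority, because it is a prerequisite of everything else; this is exactly the completion-time effect the abstract describes.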

    An improved requirement change management model for agile software development

    Business requirements for software development projects are volatile and continuously need improvement. Hence, the popularity of the Agile methodology has increased, as it welcomes requirement changes during Agile Software Development (ASD). However, existing models focus mainly on changes to functional requirements, which is not adequate to achieve software sustainability or to support requirement change processes. Therefore, this study proposes an improved Agile Requirement Change Management (ARCM) Model which provides better support for non-functional requirement changes in ASD to achieve software sustainability. The study was carried out in four phases. Phase one is a theoretical study that examined the important issues and practices of requirement change in ASD. In phase two, an exploratory study was conducted to investigate current practices of requirement changes in ASD; the study involved 137 software practitioners from Pakistan. In phase three, the findings from the previous phases were used to construct the ARCM model. The model was constructed by adapting the Plan-Do-Check-Act (PDCA) method, which consists of four stages, each providing well-defined aims, processes, activities, and practices. Finally, the model was evaluated using expert review and case study approaches: six experts were involved to verify the model, and two case studies involving two software companies from Pakistan were carried out to validate the applicability of the proposed model. The proposed ARCM model consists of three main components: sustainability characteristics for handling non-functional requirements, a sustainability analysis method for performing impact and risk analysis, and an assessment mechanism for ARCM based on the Goal Question Metric (GQM) method. The evaluation results show that the ARCM Model gained software practitioners' satisfaction and can be executed in a real environment. From the theoretical perspective, this study introduces the ARCM Model as a contribution to the field of Agile Requirement Management, together with empirical findings on the current issues, challenges, and practices of RCM. Moreover, the ARCM model provides a solution for handling non-functional requirement changes in ASD. These findings are beneficial to Agile software practitioners and researchers in ensuring that software sustainability is fulfilled, thereby empowering companies to improve their value delivery.
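
    For context, the Goal Question Metric (GQM) method referenced above structures an assessment as goals refined into questions, each answered by measurable metrics. The snippet below is a minimal, hypothetical sketch of such a structure for assessing requirement changes; the goal, questions, and metrics shown are illustrative and are not taken from the ARCM model itself.

        # Hypothetical GQM structure for assessing a requirement change process.
        # The goal, questions, and metrics are illustrative only.
        gqm = {
            "goal": "Assess the handling of non-functional requirement changes",
            "questions": [
                {
                    "question": "How quickly are change requests processed?",
                    "metrics": ["average days from request to decision",
                                "share of requests decided within one sprint"],
                },
                {
                    "question": "How often do accepted changes affect sustainability attributes?",
                    "metrics": ["number of changes touching performance or energy use",
                                "ratio of sustainability-related changes to all changes"],
                },
            ],
        }

        for q in gqm["questions"]:
            print(q["question"])
            for m in q["metrics"]:
                print("  metric:", m)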

    Embedded Machine-Learning For Variable-Rate Fertiliser Systems: A Model-Driven Approach To Precision Agriculture

    Efficient use of fertilisers, in particular Nitrogen (N), is one of the rate-limiting factors in meeting global food production requirements. While N is a key driver of increased crop yields, overuse can also lead to negative environmental and health impacts. It has been suggested that Variable-Rate Fertiliser (VRF) techniques may help to reduce excessive N applications. VRF seeks to spatially vary fertiliser input based on estimated crop requirements; however, a major challenge in the operational deployment of VRF systems is the automated processing of large amounts of sensor data in real time. Machine Learning (ML) algorithms have shown promise in their ability to process these large, high-velocity data streams and to produce accurate predictions. The newly developed Fuzzy Boxes (FB) algorithm has been designed with VRF applications in mind; however, no publicly available software implementation currently exists. Therefore, the development of a prototype implementation of FB forms a component of this work. This thesis also employs a Hardware-in-the-Loop (HWIL) testing methodology using a potential target device in order to simulate a real-world VRF deployment environment. Using this environment simulation, two existing ML algorithms (Artificial Neural Network (ANN) and Support Vector Machine (SVM)) are compared against the prototype implementation of FB for applicability to VRF applications. It is shown that all tested algorithms could potentially be suitable for high-speed VRF when measured on prediction time and various accuracy metrics. All algorithms achieved higher than 84.5% accuracy, with FB20 reaching 87.21%. Prediction times varied widely: the fastest average predictor was an ANN (16.64 μs), while the slowest was FB20 (502.77 μs). All average prediction times were fast enough to achieve a spatial resolution of 31 mm when operating at 60 m/s, making all tested algorithms fast enough for VRF applications.
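
    The spatial resolution quoted above follows directly from prediction latency and ground speed: resolution is roughly speed multiplied by prediction time. A minimal check of this arithmetic, using only the figures reported in the abstract, is sketched below.

        # Spatial resolution implied by prediction latency at a given ground speed.
        # Figures are taken from the abstract above; 60 m/s is the stated speed.
        speed_m_per_s = 60.0
        prediction_times_us = {"ANN": 16.64, "FB20": 502.77}

        for name, t_us in prediction_times_us.items():
            resolution_mm = speed_m_per_s * (t_us * 1e-6) * 1000
            print(f"{name}: one prediction every {resolution_mm:.2f} mm of travel")

        # FB20, the slowest predictor, still yields roughly 30 mm between
        # predictions, consistent with the reported 31 mm resolution bound.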

    Queensland University of Technology: Annual Report 2010

    Our annual report provides an evaluation of our performance and achievements during the previous year, measured against our goals and strategic plans. It documents our performance in the three key areas of teaching and learning, research, and community service. The report includes a summary of financial performance and a copy of our audited accounts.

    Continuous Rationale Management

    Continuous Software Engineering (CSE) is a software life cycle model open to frequent changes in requirements or technology. During CSE, software developers continuously make decisions on the requirements and design of the software or the development process. They establish essential decision knowledge, which they need to document and share so that it supports the evolution and changes of the software. The management of decision knowledge is called rationale management. Rationale management provides an opportunity to support the change process during CSE. However, rationale management is not well integrated into CSE. The overall goal of this dissertation is to provide workflows and tool support for continuous rationale management. The dissertation contributes an interview study with practitioners from industry, which investigates rationale management problems, current practices, and features that practitioners consider beneficial for supporting continuous rationale management. Problems of rationale management in practice are threefold: First, documenting decision knowledge is intrusive to the development process and requires additional effort. Second, the high amount of distributed decision knowledge documentation is difficult to access and use. Third, the documented knowledge can be of low quality, e.g., outdated, which impedes its use. The dissertation contributes a systematic mapping study on recommendation and classification approaches to treat the rationale management problems. The major contribution of this dissertation is a validated approach for continuous rationale management consisting of the ConRat life cycle model extension and the comprehensive ConDec tool support. To reduce intrusiveness and additional effort, ConRat integrates rationale management activities into existing workflows, such as requirements elicitation, development, and meetings. ConDec integrates into standard development tools instead of providing a separate tool. ConDec enables lightweight capturing and use of decision knowledge from various artifacts and reduces the developers' effort through automatic text classification, recommendation, and nudging mechanisms for rationale management. To enable access and use of distributed decision knowledge documentation, ConRat defines a knowledge model of decision knowledge and other artifacts. ConDec instantiates the model as a knowledge graph and offers interactive knowledge views with useful tailoring, e.g., transitive linking. To operationalize high quality, ConRat introduces the rationale backlog, a definition of done for knowledge documentation, and metrics for intra-rationale completeness and for decision coverage of requirements and code. ConDec implements these agile concepts for rationale management and a knowledge dashboard. ConDec also supports consistent changes through change impact analysis. The dissertation shows the feasibility, effectiveness, and user acceptance of ConRat and ConDec in six case study projects in an industrial setting. In addition, it comprehensively analyses the rationale documentation created in the projects. The validation indicates that ConRat and ConDec benefit CSE projects. Based on the dissertation, continuous rationale management should become a standard part of CSE, like automated testing or continuous integration.
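
    As a rough illustration of the kind of knowledge model described above, the sketch below links decisions to requirements and code in a small graph and computes a naive decision-coverage metric for requirements. It is a simplified assumption-based example, not the actual ConDec data model; the node names, link labels, and coverage rule are hypothetical.

        # Minimal sketch of a decision-knowledge graph and a naive
        # decision-coverage metric. Names and links are hypothetical.
        edges = [
            ("DEC-1", "covers", "REQ-1"),
            ("DEC-1", "realized_by", "payment.py"),
            ("DEC-2", "covers", "REQ-2"),
        ]
        requirements = ["REQ-1", "REQ-2", "REQ-3"]

        covered = {target for source, label, target in edges
                   if label == "covers" and source.startswith("DEC-")}

        coverage = len([r for r in requirements if r in covered]) / len(requirements)
        print(f"decision coverage of requirements: {coverage:.0%}")  # 67%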

    Understanding, Analysis, and Handling of Software Architecture Erosion

    Architecture erosion occurs when a software system's implemented architecture diverges from the intended architecture over time. Studies show that erosion impacts development, maintenance, and evolution, since it accumulates imperceptibly. Identifying early symptoms, such as architectural smells, enables managing erosion through refactoring. However, research lacks a comprehensive understanding of erosion, it is unclear which symptoms are most common, and detection methods are missing. This thesis establishes an erosion landscape, investigates symptoms, and proposes identification approaches. A mapping study covers erosion definitions, symptoms, causes, and consequences. Key findings: 1) "Architecture erosion" is the most used term, with four perspectives on definitions and respective symptom types. 2) Technical and non-technical reasons contribute to erosion, negatively impacting quality attributes; practitioners can advocate addressing erosion to prevent failures. 3) Detection and correction approaches are categorized, with consistency- and evolution-based approaches most commonly mentioned. An empirical study explores practitioner perspectives through communities, surveys, and interviews. The findings reveal that associated practices, such as code review, and tools are used to identify symptoms, while the collected measures address erosion during implementation. Studying code review comments analyzes erosion in practice. One study reveals that architectural violations, duplicate functionality, and cyclic dependencies are the most frequent symptoms. Symptoms decreased over time, indicating increased stability, and most were addressed after review. A second study explores violation symptoms in four projects, identifying 10 categories; refactoring and removing code address most violations, while some are disregarded. Machine learning classifiers using pre-trained word embeddings identify violation symptoms from code reviews. Key findings: 1) SVM with word2vec achieved the highest performance. 2) fastText embeddings also worked well. 3) 200-dimensional embeddings outperformed 100- and 300-dimensional ones. 4) An ensemble classifier improved performance. 5) Practitioners found the results valuable, confirming the approach's potential. An automated recommendation system identifies qualified reviewers for violations using similarity detection on file paths and comments. Experiments show that common similarity methods perform well, outperforming a baseline approach, and that sampling techniques impact recommendation performance.
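
    To make the classification setup concrete, the sketch below shows one common way to combine pre-trained word embeddings with an SVM: average the word vectors of each review comment and feed the result to a linear classifier. It is a generic illustration under assumed inputs; the comments, labels, and the gensim model name are placeholders, and this is not necessarily the thesis's exact pipeline.

        # Generic sketch: classify code-review comments as violation symptoms
        # or not, using averaged pre-trained word vectors and an SVM.
        # The data and the embedding model name are placeholders.
        import numpy as np
        import gensim.downloader as api
        from sklearn.svm import SVC

        comments = [
            "this introduces a cyclic dependency between the ui and storage modules",
            "typo in the log message, otherwise looks good",
        ]
        labels = [1, 0]  # 1 = architectural violation symptom, 0 = not

        # Any pre-trained KeyedVectors model works here; this name is one example.
        wv = api.load("glove-wiki-gigaword-200")

        def embed(text):
            vectors = [wv[w] for w in text.lower().split() if w in wv]
            return np.mean(vectors, axis=0) if vectors else np.zeros(wv.vector_size)

        X = np.stack([embed(c) for c in comments])
        clf = SVC(kernel="linear").fit(X, labels)
        print(clf.predict([embed("duplicate functionality copied from the parser")]))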

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has shown great flexibility in incrementally evolving an SPL by letting the products grow and later pruning the product functionalities deemed useful by refactoring and merging them back into the reusable SPL core-asset base. This Thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e. analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs for conducting this practice. To address this issue, this Thesis elaborates on the SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Herein, synchronizing both parties through sync paths is required. However, state-of-the-art tools are not tailored to SPL sync paths, which hinders synchronizing core assets and products. To address this issue, this Thesis proposes to leverage existing Version Control Systems (i.e. git/GitHub) to provide sync operations as first-class constructs.
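
    As a rough illustration of the two sync directions described above, pruning can be expressed as cherry-picking a product customization onto the core-asset branch, while growing a product means merging a newer core-asset release into the product branch. The sketch below drives plain git from Python; it is a hypothetical example, not the tooling proposed in the thesis, and the branch names, tag, and commit id are made up.

        # Hypothetical sketch of SPL sync operations on top of git.
        # Branch names, the release tag, and the commit id are placeholders.
        import subprocess

        def git(*args, repo="."):
            subprocess.run(["git", "-C", repo, *args], check=True)

        def prune(customization_commit, core_branch="core-assets"):
            """Port a product customization back to the core-asset base."""
            git("checkout", core_branch)
            git("cherry-pick", customization_commit)

        def grow(product_branch, core_release_tag):
            """Upgrade a product with a newer core-asset release."""
            git("checkout", product_branch)
            git("merge", core_release_tag)

        # Example usage (placeholders):
        # prune("a1b2c3d")                # port a customization into core
        # grow("product-x", "core-v2.1")  # bring product X up to release v2.1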

    On the application of artificial intelligence and human computation to the automation of agile software task effort estimation

    Software effort estimation (SEE), as part of the wider project planning and product road mapping process, occurs throughout a software development life cycle. A variety of effort estimation methods have been proposed in the literature, including algorithmic methods, expert-based methods, and, more recently, methods based on techniques drawn from machine learning and natural language processing. In general, the consensus in the literature is that expert-based methods such as Planning Poker are more reliable than automated effort estimation. However, these methods are labour intensive and difficult to scale to large projects. To address this limitation, this thesis investigates the feasibility of using human computation techniques to coordinate crowds of inexpert workers to produce expert-comparable effort estimates for a given software development task. The research followed an empirical methodology and used four different methods: literature review, replication, a series of laboratory experiments, and ethnography. The literature review uncovered a lack of suitable datasets that include descriptive text (a corpus), actual cost, and expert estimates for a given software development task; thus, a new dataset was developed to meet these requirements. Next, effort estimation based on recent natural language processing advances was evaluated and compared with expert estimates. The results suggest that there was no significant improvement and that the automated approach was still outperformed by expert estimates. Therefore, the feasibility of scaling the Planning Poker effort estimation method using human computation in a micro-task crowdsourcing environment was explored. A series of pilot experiments were conducted to find a suitable design for adapting Planning Poker to a crowd environment, resulting in a new estimation method called Crowd Planning Poker (CPP). The pilot experiments revealed that a significant proportion of the crowd submitted poor-quality assignments; therefore, an approach to actively managing the quality of SEE work was proposed and evaluated before being integrated into the CPP method. A substantial overall evaluation was then conducted. The results demonstrated that crowd workers were able to discriminate between tasks of varying complexity and to produce estimates comparable with those of experts, at substantially reduced cost compared with small teams of domain experts. It was further noted that crowd workers provide useful insights into the resolution of the task. Therefore, as a final step, fine-grained details of crowd workers' behaviour, including actions taken and artifacts reviewed, were used in an ethnographic study to understand how effort estimation takes place in a crowd. Four persona archetypes were developed to describe crowd behaviours, and the results of the behaviour analysis were confirmed by surveying the crowd workers.
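
    As a simple illustration of aggregating crowd estimates in a Planning Poker style, the snippet below drops obviously low-effort submissions and reports the median story-point estimate snapped to a Fibonacci-like scale. This is a hypothetical sketch, not the CPP method itself; the submissions, the quality filter, and the consensus rule are assumptions made for illustration.

        # Hypothetical sketch of aggregating crowd story-point estimates.
        # The quality gate and consensus rule are illustrative assumptions only.
        import statistics

        SCALE = [1, 2, 3, 5, 8, 13, 21]

        submissions = [
            {"estimate": 5, "justification": "touches two modules and needs new tests"},
            {"estimate": 3, "justification": "small change to the parser"},
            {"estimate": 21, "justification": ""},  # no rationale given
            {"estimate": 5, "justification": "similar to a past task that took a week"},
        ]

        # crude quality gate: ignore submissions without any written justification
        valid = [s["estimate"] for s in submissions if s["justification"].strip()]

        median = statistics.median(valid)
        # snap the median back onto the estimation scale
        consensus = min(SCALE, key=lambda p: abs(p - median))
        print(f"crowd estimate: {consensus} story points from {len(valid)} valid votes")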