    Automated Game Design Learning

    While general game playing is an active field of research, the learning of game design has tended either to be a secondary goal of such research or to remain solely the domain of humans. We propose a field of research, Automated Game Design Learning (AGDL), with the direct purpose of learning game designs through interaction with games in the mode that most people experience them: via play. We detail existing work that touches the edges of this field, describe current successful projects in AGDL and the theoretical foundations that enable them, point to promising applications enabled by AGDL, and discuss next steps for this exciting area of study. The key moves of AGDL are to use game programs as the ultimate source of truth about their own design, and to make these design properties available to other systems and avenues of inquiry. Comment: 8 pages, 2 figures. Accepted for CIG 201

    Revitalizing Legacy Systems: Extracting Key Features for Software Transplantation

    The creation of intelligent software depends on the ability to transfer software without restriction. This article discusses a crucial stage in software engineering: feature extraction for effective software transplantation. As hardware, operating systems, or other factors change, it is commonly necessary to move software from one environment to another. To carry out efficient software transplantation, it is vital to identify and extract the relevant software characteristics, which can be challenging given the complexity of software. Moreover, extracting these attributes from the software can be time-consuming and requires extensive understanding. To address this, we propose a transplantation strategy that prioritizes automation with the help of AWS. Our approach involves an agent running on the on-premises application server that performs feature identification, extraction, and deployment to the AWS Cloud. Currently, our strategy is confined to Java and .NET applications.

    VTrace-A Tool for Visualizing Traceability Links Among Software Artefacts for an Evolving System

    Traceability management plays a key role in tracing the life of a requirement through all the specifications produced during the development phase of a software project. A lack of traceability information not only hinders understanding of the system but also becomes a bottleneck in its future maintenance. Projects that maintain traceability information during the development stages often fail to upgrade their artefacts or to maintain traceability among the different versions of the artefacts produced during the maintenance phase. As a result, the software artefacts lose their trustworthiness and engineers mostly work from the source code for impact analysis. The goal of our research is to understand the impact of visualizing traceability links on change management tasks for an evolving system. As part of our research we have implemented a traceability visualization tool, VTrace, that manages software artefacts and also enables the visualization of traceability links. The results of our controlled experiment show that subjects who used the tool were more accurate and faster on change management tasks than subjects who did not.


    Traceability approach for conflict dissolution in handling requirements crosscutting

    Requirements crosscutting in software development and maintenance has gradually become an important issue in software engineering. There is a growing need for traceability support to achieve a better understanding of requirements crosscutting throughout the phases of the software lifecycle. The aim is to manage a practical process for addressing requirements crosscutting at the various phases in order to comply with industrial standards. However, due to its distinct nature, many recent works focus on the identification, modularization, composition and conflict dissolution of requirements crosscutting, mostly confined to the requirements level. These works fail to practically specify crosscutting properties for functional and non-functional requirements at the requirements, analysis and design phases. This situation therefore leaves software engineers without sufficient support for managing requirements crosscutting across the remaining development phases. This thesis proposes a new approach called Identification, Modularization, Design Composition Rules and Conflict Dissolution (IM-DeCRuD) that provides special traceability to facilitate better understanding and reasoning in engineering tasks involving requirements crosscutting during software development and evolution. This study also promotes a simple but significant way to support pragmatic changes to crosscutting properties at the requirements, analysis and design phases for medium-sized software development and maintenance projects. A tool was developed based on the proposed approach to support four main perspectives, namely requirements specification definition, requirements specification modification, requirements prioritization setting and graphical visualization. Software design components are generated using the Generic Modeling Environment (GME) with a Java language interpreter to incorporate all these features.
    The proposed IM-DeCRuD was applied to an industrial-strength case study of a medium-scale system called myPolicy. The tool was evaluated and the results were reviewed by experts for validation and opinion. The feedback was then gathered and analyzed using the DESMET qualitative method. The outcomes show that IM-DeCRuD is applicable to the tedious engineering work of handling crosscutting properties at the requirements, analysis and design phases during system development and evolution.

    Process improvement for traceability: A study of human fallibility

    Human analysts working with results from automated traceability tools often make incorrect decisions that lead to lower-quality final trace matrices. Because the human must vet the results of trace tools for mission- and safety-critical systems, the hope of developing expedient and accurate tracing procedures lies in understanding how analysts work with trace matrices. This paper describes a study of when and why humans make correct and incorrect decisions during tracing tasks, based on logs of analyst actions. In addition to the traditional measures of recall and precision used to describe the accuracy of the results, we introduce and study new measures that focus on analyst work quality: potential recall, sensitivity, and effort distribution. We use these measures to visualize analyst progress towards the final trace matrix, identifying factors that may influence analyst performance and determining how actual tracing strategies, derived from analyst logs, affect the results.
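    Recall and precision, the traditional accuracy measures mentioned in this abstract, can be sketched for trace matrices as below; the link pairs and requirement/component names are purely illustrative, not data from the study.

```python
# Minimal sketch: recall and precision of a candidate trace matrix
# against a ground-truth matrix. Links are (source, target) pairs;
# all identifiers below are illustrative.

def recall(candidate, truth):
    """Fraction of true links that the candidate matrix contains."""
    return len(candidate & truth) / len(truth) if truth else 1.0

def precision(candidate, truth):
    """Fraction of candidate links that are actually true."""
    return len(candidate & truth) / len(candidate) if candidate else 1.0

truth = {("R1", "C1"), ("R1", "C2"), ("R2", "C3")}
candidate = {("R1", "C1"), ("R2", "C3"), ("R2", "C4")}

print(round(recall(candidate, truth), 3))     # 2 of 3 true links found
print(round(precision(candidate, truth), 3))  # 2 of 3 candidate links correct
```

    The paper's new measures (potential recall, sensitivity, effort distribution) build on the same vetted-link sets, but their exact definitions are specific to the study and are not reproduced here.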

    Clustering and its Application in Requirements Engineering

    Large-scale software systems challenge almost every activity in the software development life-cycle, including tasks related to eliciting, analyzing, and specifying requirements. Fortunately, many of these complexities can be addressed by clustering the requirements to create abstractions that are meaningful to human stakeholders. For example, the requirements elicitation process can be supported by dynamically clustering incoming stakeholder requests into themes. Cross-cutting concerns, which have a significant impact on the architectural design, can be identified through the use of fuzzy clustering techniques and metrics designed to detect when a theme cross-cuts the dominant decomposition of the system. Finally, traceability techniques, required in critical software projects by many regulatory bodies, can be automated and enhanced by the use of cluster-based information retrieval methods. Unfortunately, despite a significant body of work describing document clustering techniques, there is almost no prior work that directly addresses the challenges, constraints, and nuances of requirements clustering. As a result, the effectiveness of software engineering tools and processes that depend on requirements clustering is severely limited. This report directly addresses the problem of clustering requirements by surveying standard clustering techniques and discussing their application to the requirements clustering process.
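    As a rough illustration of the idea, the sketch below groups requirement texts into themes by lexical (cosine) similarity over bag-of-words vectors; the greedy single-pass scheme, the 0.5 threshold, and the sample requirements are assumptions for the example, not techniques prescribed by the report.

```python
# A minimal sketch of requirements clustering, assuming a plain
# bag-of-words model; the greedy scheme, threshold, and sample
# requirements are illustrative, not the report's method.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster(requirements, threshold=0.5):
    """Greedy single-pass clustering: join each requirement to the first
    cluster whose seed is similar enough, else start a new cluster."""
    vectors = [Counter(r.lower().split()) for r in requirements]
    clusters = []  # list of (seed_vector, member_indices) pairs
    for i, vec in enumerate(vectors):
        for seed, members in clusters:
            if cosine(seed, vec) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

reqs = [
    "the system shall encrypt user passwords",
    "the system shall encrypt stored data",
    "the interface shall display error messages",
]
print(cluster(reqs))  # the two encryption requirements form one theme
```

    Real requirements clustering would use stemming, stop-word removal, and TF-IDF weighting rather than raw counts; the single-pass scheme merely shows how themes can emerge dynamically as requests arrive.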

    Improving Traceability Recovery Techniques Through the Study of Tracing Methods and Analyst Behavior

    Developing complex software systems often involves multiple stakeholder interactions, coupled with frequent requirements changes, while operating under time constraints and budget pressures. Such conditions can lead to hidden problems that manifest when software modifications cause unexpected component interactions, which can result in catastrophic or fatal situations. A critical step in ensuring the success of software systems is to verify that all requirements can be traced to the design, source code, test cases, and any other software artifacts generated during the software development process. The focus of this research is to improve the trace matrix generation process and to study how human analysts create the final trace matrix using traceability information generated by automated methods. This dissertation presents new results in the automated generation of traceability matrices and in the analysis of analyst actions during a tracing task. The key contributions of this dissertation are as follows: (1) development of a proximity-based vector space model for the automated generation of trace matrices (TMs); (2) use of Mean Average Precision (a ranked retrieval-based measure) and the 21-point interpolated precision-recall graph (a set-based measure) for the statistical evaluation of automated methods; (3) logging and visualization of analyst actions during a tracing task; (4) study of human analyst tracing behavior, considering the decisions made during the tracing task and analyst tracing strategies; (5) use of potential recall, sensitivity, and effort distribution as analyst performance measures. Results show that using both a ranked retrieval-based and a set-based measure with statistical rigor provides a framework for evaluating automated methods. Studying the human analyst provides insight into how analysts use traceability information to create the final trace matrix and identifies areas for improvement in the traceability process.
    Analyst performance measures can be used to identify analysts who perform the tracing task well and who use effective tracing strategies to generate a high-quality final trace matrix.
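    Mean Average Precision, the ranked retrieval-based measure named among the contributions, can be illustrated as follows; the ranked candidate lists and true-link sets are invented for the example, not drawn from the dissertation's datasets.

```python
# An illustrative computation of Mean Average Precision (MAP) over
# ranked candidate link lists returned for each requirement; the
# data below is made up for the example.

def average_precision(ranked, relevant):
    """Precision averaged over the ranks at which true links appear."""
    hits, total = 0, 0.0
    for rank, link in enumerate(ranked, start=1):
        if link in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

# Ranked candidate links returned for two requirements, with the
# set of true links for each:
runs = [
    (["C1", "C7", "C2"], {"C1", "C2"}),  # hits at ranks 1 and 3
    (["C9", "C3"], {"C3"}),              # hit at rank 2
]
map_score = sum(average_precision(r, rel) for r, rel in runs) / len(runs)
print(round(map_score, 3))  # (5/6 + 1/2) / 2 = 0.667
```

    Because MAP rewards placing true links high in the ranking, it complements set-based measures such as the interpolated precision-recall graph, which ignore ordering within the retrieved set.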

    Improving Automated Requirements Trace Retrieval Through Term-Based Enhancement Strategies

    Requirements traceability is concerned with managing and documenting the life of requirements. Its primary goal is to support critical software development activities such as evaluating whether a generated software system satisfies the specified set of requirements, checking that all requirements have been implemented by the end of the lifecycle, and analyzing the impact of proposed changes on the system. Various approaches for improving requirements traceability practices have been proposed in recent years. Automated traceability methods that utilize information retrieval (IR) techniques have been recognized as effective support for the trace generation and retrieval process. IR-based approaches not only significantly reduce the human effort involved in manual trace generation and maintenance, but also allow the analyst to perform tracing on an "as-needed" basis. IR-based automated traceability tools typically retrieve a large number of potentially relevant traceability links between requirements and other software artifacts in order to return to the analyst as many true links as possible. As a result, the precision of the retrieval results is generally low and the analyst often needs to manually filter out a large number of unwanted links. The low precision among the retrieved links consequently limits the usefulness of IR-based tools. The analyst's confidence in the effectiveness of the approach can be negatively affected both by the presence of a large number of incorrectly retrieved traces and by the number of true traces that are missed. In this thesis we present three enhancement strategies that aim to improve precision in trace retrieval results while still striving to retrieve a large number of traceability links. The three strategies are:
    1) Query term coverage (TC). This strategy assumes that a software artifact sharing a larger proportion of distinct words with a requirement is more likely to be relevant to that requirement. This concept is defined as query term coverage (TC). A new approach is introduced to incorporate the TC factor into the basic IR model such that the relevance ranking of query-document pairs that share two or more distinct terms is increased and the retrieval precision is improved.
    2) Phrasing. The standard IR models generate similarity scores for links between a query and a document based on the distribution of single terms in the document collection. Several studies in the general IR area have shown that phrases can provide a more accurate description of document content and therefore lead to improvement in retrieval [21, 23, 52]. This thesis therefore presents an approach that uses phrase detection to enhance the basic IR model and improve its retrieval accuracy.
    3) Utilizing a project glossary. Terms and phrases defined in the project glossary tend to capture the critical meaning of a project and can therefore be regarded as more meaningful for detecting relations between documents than other, more general terms. A new enhancement technique is introduced in this thesis that utilizes the information in the project glossary and increases the weights of the terms and phrases it contains. This strategy aims at increasing the relevance ranking of documents containing glossary items and consequently at improving the retrieval precision.
    The incorporation of these three enhancement strategies into the basic IR model, both individually and synergistically, is presented. Extensive empirical studies have been conducted to analyze and compare the retrieval performance of the three strategies. In addition to the standard performance metrics used in IR, a new metric, average precision change [80], is introduced in this thesis to measure the accuracy of the retrieval techniques. Empirical results on datasets with various characteristics show that the three enhancement methods are generally effective in improving the retrieval results. The improvement is especially significant at the top of the retrieval results, which contains the links that the analyst will see and inspect first. The improvement is therefore especially meaningful, as it implies the analyst may be able to evaluate those important links earlier in the process. As the performance of these enhancement strategies varies from project to project, the thesis identifies a set of metrics as possible predictors of the effectiveness of these enhancement approaches. Two such predictors, namely average query term coverage (QTC) and average phrasal term coverage (PTC), are introduced for the TC and phrasing approaches respectively. These predictors can be employed to identify which enhancement algorithm should be used in the tracing tool to improve the retrieval performance for specific document collections. Results of a small-scale study indicate that the predictor values can provide useful guidelines for selecting a specific tracing approach when there is no prior knowledge of a given project. The thesis also presents criteria for evaluating whether an existing project glossary can be used to enhance results in a given project. The project glossary approach will not be effective if the existing glossary is not consistently followed during software development. The thesis therefore presents a new procedure to automatically extract critical keywords and phrases from the requirements collection of a given project. The experimental results suggest that these extracted terms and phrases can be used effectively in lieu of a missing or ineffective project glossary to help improve the precision of the retrieval results. To summarize, the work presented in this thesis supports the development and application of automated tracing tools. The three strategies share the same goal of improving precision in the retrieval results to address the low precision problem, which is a major concern with IR-based tracing methods. Furthermore, the predictors for the individual enhancement strategies presented in this thesis can be utilized to identify which strategy will be effective for specific tracing tasks. These predictors can be adopted to define intelligent tracing tools that automatically determine which enhancement strategy should be applied in order to achieve the best retrieval results on the basis of the metric values. A tracing tool incorporating one or more of these methods is expected to achieve higher precision in trace retrieval results than the basic IR model. Such improvement will not only reduce the analyst's effort in inspecting the retrieval results, but also increase his or her confidence in the accuracy of the tracing tool.
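    As a hedged sketch of the query term coverage strategy, the snippet below boosts a base similarity score when a query-document pair shares two or more distinct terms; the multiplicative boost and its magnitude are assumptions for illustration, not the thesis's actual weighting scheme.

```python
# A sketch of the query term coverage (TC) idea: raise the relevance
# score of query-document pairs sharing two or more distinct terms.
# The multiplicative boost below is an assumption for illustration,
# not the thesis's actual weighting scheme.

def shared_terms(query, document):
    """Distinct terms the query and document have in common."""
    return set(query.lower().split()) & set(document.lower().split())

def tc_adjusted_score(base_score, query, document, boost=0.1):
    """Boost the base IR similarity when coverage reaches two terms."""
    coverage = len(shared_terms(query, document))
    if coverage >= 2:
        return base_score * (1 + boost * coverage)
    return base_score

# Two shared terms -> boosted; one shared term -> unchanged.
print(tc_adjusted_score(0.5, "encrypt user passwords", "user passwords stored"))
print(tc_adjusted_score(0.5, "encrypt user passwords", "display passwords"))
```

    In the same spirit, the phrasing and glossary strategies would adjust term weights before similarity scoring rather than after, but their exact formulations are specific to the thesis.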