
    Towards Knowledge Based Risk Management Approach in Software Projects

    All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, due to the uniqueness of software projects, uncertainty about the final result will always accompany software development. While risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly grounded in the experience and knowledge acquired in former projects. The greater the project manager's experience, the better his or her ability to identify risks, estimate their likelihood and impact, and define an appropriate risk response plan. Thus risk knowledge cannot remain an individual asset; rather, it must be made available to the organization, which needs it to learn and to improve its performance in facing risks. If this does not occur, project managers can inadvertently repeat past mistakes, simply because they do not know or do not remember the mitigation actions successfully applied in the past, or because they are unable to foresee the risks caused by certain project constraints and characteristics. Risk knowledge therefore has to be packaged and stored throughout project execution for future reuse. Risk management methodologies are usually based on questionnaires for risk identification and templates for investigating critical issues. Such artefacts are often not related to each other, so there is usually no documented cause-effect relation between issues, risks and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects.
To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and on the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, from here on we will refer to it as “FIRM”. Finally, in order to quantitatively evaluate the proposal, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were conducted in the FIRM context and involved legacy system transformation projects. The first investigation involved 7 already executed projects, while the second one involved 5 in itinere (ongoing) projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than that obtained without it? Does the proposed knowledge-based framework lead to more precise risk management than that obtained without it? The rest of the paper is organized as follows: section 2 provides a brief overview of the main research activities presented in the literature on the same topics; section 3 presents the proposed framework, while section 4 describes its specialization in the FIRM context; section 5 describes the empirical studies we executed; results and discussion are presented in section 6. Finally, conclusions are drawn in section 7.

    Software Analytics to Support Students in Object-Oriented Programming Tasks: An Empirical Study

    The computing education community has long been interested in how to analyze the Object-Oriented (OO) source code developed by students in order to provide them with useful formative tips. Instructors need to understand students' difficulties to provide precise feedback on the most frequent mistakes and to shape, design, and effectively drive the course. This paper proposes and evaluates an approach for analyzing students' source code and automatically generating feedback about the most common violations in the produced code. The approach is implemented through a cloud-based tool that monitors how students use language constructs, based on the analysis of the most common violations of the Object-Oriented paradigm in student source code. Moreover, the tool supports the generation of reports about students' mistakes and misconceptions that can be used to improve their education. The paper reports the results of a quasi-experiment performed in a class of a CS1 course to investigate the effects of the provided reports on coding ability (concerning the correctness and quality of the produced source code). Results show that, after the course, the treatment group obtained higher scores and produced better source code than the control group, following the feedback provided by the teachers.
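As a concrete illustration of this kind of analysis (the paper's actual cloud-based tool and its rule catalogue are not reproduced here), the sketch below uses Python's standard ast module with two invented example checks, purely to show the general shape of a student-code violation analyzer:

```python
# Illustrative sketch only: the checks below are invented examples, not the
# rule set of the tool described in the abstract.
import ast

def analyze_student_code(source: str) -> list[str]:
    """Return human-readable violation messages for a snippet of student code."""
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Example check 1: a class that defines no __init__ constructor
        # (attributes are likely created ad hoc, a common beginner OO mistake).
        if isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            if "__init__" not in methods:
                violations.append(f"class {node.name}: missing __init__ constructor")
        # Example check 2: use of 'global' instead of object state.
        if isinstance(node, ast.Global):
            violations.append("use of 'global' instead of object state")
    return violations

report = analyze_student_code(
    "class Stack:\n"
    "    def push(self, x):\n"
    "        global size\n"
    "        size = size + 1\n"
)
```

Aggregating such per-snippet reports over a class is what would then feed the instructor-facing summaries the abstract describes.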

    Enhancing Bug-Fixing Time Prediction with LSTM-Based Approach

    This work presents an approach based on Long Short-Term Memory (LSTM) networks for estimating bug-fixing time in the bug triage process. Existing bug-fixing time predictors underutilize useful semantic information and long-term dependencies between activities in the bug-fixing sequence. The proposed approach is therefore a deep learning-based model that converts activities into vectors of real numbers based on their semantic meaning. It then uses an LSTM to identify long-term dependencies between activities and classifies sequences as having either a short or a long fixing time. The evaluation on bug reports from the Eclipse project shows that this approach performs slightly better than the current best in the literature, with improvements in accuracy, precision, F-score, and recall.
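The core idea can be sketched in a few lines of numpy; the activity vocabulary, embedding sizes, and random weights below are toy assumptions, not the authors' trained model:

```python
# Minimal numpy sketch: each bug-fixing activity is mapped to an embedding,
# an LSTM cell consumes the sequence, and the final hidden state is squashed
# into P(long fixing time). All dimensions and weights are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 4, 3                                 # toy embedding / hidden sizes
vocab = {"REPORTED": 0, "ASSIGNED": 1, "PATCH": 2, "VERIFIED": 3}
E = rng.normal(size=(len(vocab), EMB))          # activity embeddings
Wx = rng.normal(size=(4 * HID, EMB))            # input weights (i, f, o, g gates)
Wh = rng.normal(size=(4 * HID, HID))            # recurrent weights
b = np.zeros(4 * HID)
w_out = rng.normal(size=HID)                    # classification head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_long_fix(activities):
    """Run one LSTM pass over an activity sequence; return P(long fixing time)."""
    h = np.zeros(HID)
    c = np.zeros(HID)
    for a in activities:
        z = Wx @ E[vocab[a]] + Wh @ h + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
        h = sigmoid(o) * np.tanh(c)                    # hidden state
    return sigmoid(w_out @ h)

p = predict_long_fix(["REPORTED", "ASSIGNED", "PATCH", "VERIFIED"])
```

In the real model the embeddings and gate weights are learned end-to-end; the sketch only shows how the sequence of activities flows through the recurrence into a binary short/long decision.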

    Predicting Bug-Fixing Time: DistilBERT Versus Google BERT

    The problem of predicting bug-fixing time can be treated as a supervised text categorization task in Natural Language Processing (NLP). In recent years, following the adoption of deep learning in NLP, pre-trained contextualized word representations have become widespread. One of the most used pre-trained language representation models is Google BERT (hereinafter, for brevity, BERT). BERT uses a self-attention mechanism that allows it to learn the bidirectional context representation of a word in a sentence, which constitutes one of its main advantages over previously proposed solutions. However, due to its large size, BERT is difficult to put into production. To address this issue, a smaller, faster, cheaper, and lighter version of BERT, named DistilBERT, was introduced at the end of 2019. This paper compares the efficacy of BERT and DistilBERT, combined with Logistic Regression, in predicting bug-fixing time from bug reports of a large-scale open-source software project, LiveCode. In the experimentation carried out, DistilBERT retains almost 100% of its language understanding capabilities and, in the best case, is 63.28% faster than BERT. Moreover, with an inexpensive tuning of the C parameter in Logistic Regression, DistilBERT provides even better accuracy than BERT.
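The downstream classification step can be sketched as follows. Since the DistilBERT encoder itself is out of scope here, random vectors stand in for the bug-report embeddings; what the sketch shows is the inexpensive grid search over the C parameter of scikit-learn's LogisticRegression mentioned in the abstract:

```python
# Sketch of the downstream step only: random vectors stand in for the
# DistilBERT embeddings of bug reports, and the labels follow an artificial
# rule, just to demonstrate the cheap grid search over C.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))          # stand-in for 768-d DistilBERT embeddings
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)   # 0 = fast, 1 = slow
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

best_c, best_acc = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):        # the "not time-consuming" C tuning
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    if acc > best_acc:
        best_c, best_acc = C, acc
```

In the paper's pipeline, the same loop would simply be run twice, once over BERT embeddings and once over DistilBERT embeddings, to produce the accuracy comparison.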

    Knowledge extraction from on-line open source bug tracking systems to predict bug-fixing time

    For large-scale software systems, many bugs can be reported over a long period of time. For software quality assurance and software project management, it is important to assign adequate resources to resolve the reported bugs. An important issue concerning assignment is the ability to predict bug-fixing time, because it can help a project team better estimate software maintenance effort and better manage software projects. In this paper, we propose a model that predicts bug-fixing time using the text information extracted from Bugzilla, an on-line open-source Bug Tracking System (BTS). We perform an empirical investigation on the bugs of Novell, OpenOffice and LiveCode, three open-source projects using Bugzilla. The proposed model is based on historical data stored in the BTS. For each bug report we build a classification model that predicts its resolution time as slow or fast. In this work we used a Support Vector Machine (SVM) as the classifier, but a different classifier can easily be used. Our model, differently from existing work reported in the literature, selects all and only the attributes useful for prediction and appropriately filters the attributes for the test set. Experimental results show that the model is effective. In the future, we will use and compare other classification methods to select the best one for a specific data set.
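The shape of such a pipeline can be sketched with scikit-learn; the bug summaries below are invented toy data rather than real Bugzilla reports, and fitting the vectorizer on the training reports only mirrors the idea of restricting the test set to the attributes selected from training:

```python
# Hedged sketch of the pipeline shape with invented toy bug summaries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_reports = [
    "crash on startup null pointer",       # toy data, not from Bugzilla
    "memory leak after long session",
    "typo in menu label",
    "wrong tooltip text in dialog",
]
train_labels = ["slow", "slow", "fast", "fast"]   # bug-fixing time classes

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_reports)  # attributes chosen on train only
clf = LinearSVC().fit(X_train, train_labels)

# transform() keeps only the attributes learned from the training set,
# so unseen test-set words are filtered out.
X_test = vectorizer.transform(["crash with null pointer on startup"])
prediction = clf.predict(X_test)[0]
```

Swapping LinearSVC for another scikit-learn classifier is a one-line change, matching the abstract's remark that a different classifier can easily be used.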

    Building a Knowledge Experience Base for Facilitating Innovation

    This paper presents a framework aimed at supporting knowledge transfer inside and outside an organization for innovation purposes. To this end, the authors propose a Knowledge Experience Base (KEB), which collects Knowledge Experience Packages (KEPs), to support the formalization and packaging of the knowledge and experience of innovation stakeholders, encouraging the gradual explication of the tacit knowledge of its bearers in order to facilitate its transfer while minimizing costs and risks.

    Experience Formalized as a Service for Geographical and Temporal Remote Collaboration

    Many technological solutions, especially in the fields of computer science and software engineering, are poorly supported by empirical evidence of their effectiveness and by experience of their adoption in different industrial contexts. The lack of empirical evidence makes managers less confident in applying technological solutions proposed by the research community. Moreover, the lack of experience in adopting technological solutions in different industrial contexts makes their acquisition highly risky. These two issues represent a barrier to the diffusion of innovative technological solutions. This paper presents a Knowledge Management System (KMS), called PROMETHEUS, consisting of a platform that manages a Knowledge Experience Base (KEB), which collects Knowledge Experience Packages (KEPs). The KMS thus formed supports the formalization and packaging of the knowledge and experience of producers and innovation transferors, encouraging the gradual elicitation of the tacit information of bearers of knowledge to facilitate its transfer. The KMS enables the cooperative production and evolution of KEPs between different authors and users.

    Maintenance-oriented selection of software components

    Component-based software engineering is a new, promising, and rapidly growing discipline in both academia and industry. However, maintaining component-based systems (CBSs) introduces new issues: choosing components requires identifying a set of parameters that characterize them, in order to select the appropriate ones for a specific software system. In our research we propose a characterization of components aimed at foreseeing the maintenance effort of the CBS. In this paper we perform an empirical study in the context of three industrial software projects to assess these parameters. Our experience suggests a number of component characteristics that can be useful for this purpose. Moreover, the study produced some lessons learned, useful for building software applications that are easy to maintain. The results suggest that the lessons learned could be generalized, although further empirical studies are required.

    Distributed Software Development with Knowledge Experience Packages

    In the software production process, a lot of knowledge is created but remains tacit. Therefore, it cannot be reused to improve the effectiveness and efficiency of the process. This problem is amplified in the case of distributed production. In fact, distributed software development requires complex, context-specific knowledge regarding the particularities of different technologies, the potential of existing software, and the needs and expectations of the users. This knowledge, which is gained during project execution, is usually tacit and is completely lost by the company when production is completed. Moreover, each time a new production unit is hired, it is necessary to standardize the working skills and methods of the different teams, despite the diversity of culture and capacity of their people, if the company wants to maintain the quality level of its processes and products. In this context, we used the concept of the Knowledge Experience Package (KEP), already specified in previous works, and the tool realized to support the KEP approach. In this work, we carried out an experiment in an industrial context in which we compared software development supported by KEPs with development achieved without it.