6 research outputs found

    A Bibliometric Survey on the Reliable Software Delivery Using Predictive Analysis

    Delivering a reliable software product is a fairly complex process that requires proper coordination among the various teams involved in planning, execution, and testing. Most of the development time and the software budget's cost is spent on finding and fixing bugs. Rework and side-effect costs, caused by inherent bugs in the modified code, are mostly not visible in the planned estimates; they impact the software delivery timeline and increase the cost. Advances in artificial intelligence make it possible to predict probable defects through classification based on software code changes, helping the software development team make rational decisions. Optimizing software cost and improving software quality are the industry's topmost priorities for remaining profitable in a competitive market. Hence, there is a strong need to improve software delivery quality by minimizing defects and maintaining reasonable control over predicted defects. This paper presents a bibliometric study of reliable software delivery using predictive analysis, based on 450 documents selected from the Scopus database using keywords such as software defect prediction, machine learning, and artificial intelligence. The study covers the years 2010 to 2021. The survey shows that software defect prediction has received substantial attention among researchers, and there are great possibilities to predict and improve overall software product quality using artificial intelligence techniques.
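    The classification idea mentioned in this abstract can be illustrated with a minimal sketch. The change metrics, toy training data, and plain logistic regression below are illustrative assumptions, not a model from any of the surveyed papers:

    ```python
    import math

    # Toy change metrics: (lines_changed, files_touched, prior_bugs_in_file).
    # Label 1 means the change introduced a defect. All values are invented.
    train = [
        ((200, 8, 5), 1), ((150, 6, 4), 1), ((300, 10, 7), 1),
        ((10, 1, 0), 0), ((25, 2, 1), 0), ((5, 1, 0), 0),
    ]

    def normalize(x):
        # Scale each metric into [0, 1] using the largest training value.
        return (x[0] / 300.0, x[1] / 10.0, x[2] / 7.0)

    w, b = [0.0, 0.0, 0.0], 0.0

    def predict(x):
        # Logistic regression: sigmoid of a weighted sum of change metrics.
        z = sum(wi * xi for wi, xi in zip(w, normalize(x))) + b
        return 1.0 / (1.0 + math.exp(-z))

    # Plain stochastic gradient descent on the log loss.
    for _ in range(2000):
        for x, y in train:
            g = predict(x) - y
            xn = normalize(x)
            for i in range(3):
                w[i] -= 0.5 * g * xn[i]
            b -= 0.5 * g

    risky = predict((250, 9, 6))  # large change touching many files
    safe = predict((8, 1, 0))     # small, isolated change
    ```

    On this toy data the model assigns a high defect probability to the large, spread-out change and a low one to the small, isolated change — the kind of signal a development team could use to prioritize review effort.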

    Survival analysis and classification learning of software process improvement initiatives, and their consequences in small companies

    116 p.
    Software plays a fundamental role in most businesses. Indeed, it can be said to be one of the main keys to any business's competitive advantage. This software may be created by large, medium-sized, or small companies. In this context, such organizations choose to launch process improvement initiatives with the aim of improving the quality of the services or final products they offer in the market. It is therefore common for large and medium-sized companies to define development processes for their final products, and even to adopt exemplary quality models with good practices derived from industry. Indeed, there are many reference models and standards to help carry out an improvement initiative, and organizations often have to satisfy the requirements of several models at the same time. Within these standards there are often similar practices or requirements (duplications), or requirements designed for large organizations. In the context of small organizations, these duplications create overhead in such initiatives. As a consequence, when defining processes linked to the reference models, the bureaucratic workload grows, and organizations are forced to eliminate the duplications across these models and to revise their processes taking several standards into account simultaneously. This situation is especially delicate for small organizations with fewer than 25 employees, also known as Very Small Entities (VSEs). These organizations use their resources as best they can and, from their point of view, these reference models are an expense rather than an investment; consequently, they do not launch process improvement initiatives. Along these lines, ISO/IEC 29110 was created to offer these organizations a model adapted to the needs of VSEs. The first edition of the ISO/IEC 29110 standard was published in 2011 and, since then, several research works and industry experiences have been developed in this context.
    On the one hand, there is not enough industry experience related to VSEs, so it is not easy to know how VSEs behave. Since 2011, several works related to the ISO/IEC 29110 standard have been published, but, until now, the typology of those works has been very diverse. It is therefore essential to study and understand these first experiences in order to classify that early work. On the other hand, process improvement initiatives are not always successful, and the duration of an initiative of this kind is not certain either. Accordingly, the survival level of these initiatives in these contexts needs to be analyzed, along with the work patterns that may arise while developing and implementing process improvement initiatives in VSEs. Finally, VSEs are usually especially concerned with the security of the products they develop, so they need to establish mechanisms to manage the main security aspects. First, in this work we have carried out a methodical review of the articles related to the ISO/IEC 29110 standard, collecting the main research areas and the most important types of work carried out. Second, we have proposed a framework for analyzing the survival of this kind of process improvement initiative launched by VSEs. Third, to characterize their behavior, we have developed an approach for identifying the patterns that occur in these initiatives. Fourth, we have made a proposal for adding security aspects to the software development life cycle of VSEs and for managing technical debt.

    Predicting the delay of issues with due dates in software projects

    Issue-tracking systems (e.g. JIRA) have increasingly been used in many software projects. An issue can represent a software bug, a new requirement or user story, or even a project task. A deadline can be imposed on an issue either by explicitly assigning a due date to it, or implicitly by assigning it to a release so that it inherits the release's deadline. This paper presents a novel approach to providing automated support for project managers and other decision makers in predicting whether an issue is at risk of being delayed against its deadline. A set of features (hereafter called risk factors) characterizing delayed issues was extracted from eight open source projects: Apache, Duraspace, Java.net, JBoss, JIRA, Moodle, Mulesoft, and WSO2. Risk factors with good discriminative power were selected to build predictive models that predict whether the resolution of an issue will be at risk of being delayed. Our predictive models are able to predict both the extent of the delay and the likelihood of its occurrence. The evaluation results demonstrate the effectiveness of our predictive models, achieving on average 79% precision, 61% recall, 68% F-measure, and 83% Area Under the ROC Curve. Our predictive models also have low error rates: on average 0.66 for Macro-averaged Mean Cost-Error and 0.72 for Macro-averaged Mean Absolute Error.
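    A minimal sketch of the risk-factor idea described above. The issues, the two risk factors, and the rule-based predictor are invented for illustration (the paper builds learned models from data mined from eight real projects), but the precision/recall/F-measure computation is the standard one the paper reports:

    ```python
    # Each toy issue carries two hypothetical risk factors and its actual outcome.
    issues = [
        {"reassignments": 3, "open_days": 40, "delayed": True},
        {"reassignments": 2, "open_days": 30, "delayed": True},
        {"reassignments": 1, "open_days": 35, "delayed": True},
        {"reassignments": 2, "open_days": 25, "delayed": False},
        {"reassignments": 0, "open_days": 5,  "delayed": False},
        {"reassignments": 0, "open_days": 10, "delayed": False},
        {"reassignments": 1, "open_days": 8,  "delayed": False},
    ]

    def at_risk(issue):
        # Rule-based stand-in for the paper's learned models: an issue that has
        # been reassigned and has stayed open a long time is flagged as at risk.
        return issue["reassignments"] >= 1 and issue["open_days"] >= 20

    def evaluate(data, predictor):
        # Standard precision / recall / F-measure over the predictions.
        tp = sum(1 for i in data if predictor(i) and i["delayed"])
        fp = sum(1 for i in data if predictor(i) and not i["delayed"])
        fn = sum(1 for i in data if not predictor(i) and i["delayed"])
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    precision, recall, f1 = evaluate(issues, at_risk)
    ```

    The rule catches all three delayed issues but also flags one on-time issue, so on this toy data recall is perfect while precision is not — the same trade-off the paper's averaged metrics reflect.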

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS gives innovators a platform to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations among them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    Intelligent Software Tooling For Improving Software Development

    Software has eaten the world, with many of the necessities and quality-of-life services people rely on requiring software. Therefore, tools that improve the software development experience can have a significant impact on the world, such as generating code and test cases, detecting bugs, question answering, etc. The success of Deep Learning (DL) over the past decade has shown huge advancements in automation across many domains, including software development processes. One of the main reasons behind this success is the availability of large datasets to train on, such as open-source code available through GitHub or image datasets of mobile Graphical User Interfaces (GUIs) like RICO and ReDRAW. Therefore, the central research question my dissertation explores is: In what ways can the software development process be improved through leveraging DL techniques on the vast amounts of unstructured software engineering artifacts? We coin the approaches that leverage DL to automate or augment various software development tasks as Intelligent Software Tools. To guide our research on these intelligent software tools, we performed a systematic literature review to understand the current landscape of research on applying DL techniques to software tasks and to identify any gaps that exist. From this literature review, we found code generation to be one of the most studied tasks, while other tasks and artifacts, such as impact analysis or tasks involving images and videos, are understudied. Therefore, we set out to explore the application of DL to these understudied tasks and artifacts, as well as the limitations of DL models on the well-studied task of code completion, a subfield of code generation. Specifically, we developed a tool for automatically detecting duplicate mobile bug reports from user-submitted videos. We used the popular Convolutional Neural Network (CNN) to learn important features from a large collection of mobile screenshots.
    Using this model, we could then compute the similarity between a newly submitted bug report and existing ones to produce a ranked list of duplicate candidates that can be reviewed by a developer. Next, we explored impact analysis, a critical software maintenance task that identifies potential adverse effects of a given code change on the larger software system. To this end, we created Athena, a novel approach to impact analysis that integrates knowledge of a software system through its call graph along with high-level representations of the code inside the system to improve impact analysis performance. Lastly, we explored the task of code completion, which has seen heavy interest from industry and academia. Specifically, we explored various methods that modify the positional encoding scheme of the Transformer architecture, allowing these models to incorporate longer sequences of tokens when predicting completions than seen during their training, as this can significantly improve training times.
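    The duplicate-detection step described above reduces to ranking existing reports by embedding similarity. A minimal sketch, with hand-made vectors standing in for the CNN-learned screenshot embeddings (the report IDs and vector values are hypothetical):

    ```python
    import math

    # Hand-made stand-ins for learned screenshot embeddings of existing reports.
    existing_reports = {
        "BUG-101": [0.9, 0.1, 0.0],
        "BUG-102": [0.1, 0.9, 0.2],
        "BUG-103": [0.0, 0.2, 0.9],
    }

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    def rank_duplicates(new_embedding, corpus):
        # Most similar existing reports first: a ranked candidate list a
        # developer can review instead of searching the whole tracker.
        return sorted(corpus,
                      key=lambda rid: cosine(new_embedding, corpus[rid]),
                      reverse=True)

    new_report = [0.85, 0.15, 0.05]   # embedding of the newly submitted video
    ranking = rank_duplicates(new_report, existing_reports)
    ```

    Here the new report's embedding sits closest to BUG-101, so that report tops the candidate list; in the actual tool the embeddings come from the trained CNN rather than being written by hand.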

    Deep Learning In Software Engineering

    Software evolves and therefore requires an evolving field of Software Engineering. The evolution of software can be seen on an individual project level through the software life cycle, as well as on a collective level, as we study the trends and uses of software in the real world. As the needs and requirements of users change, so must software evolve to reflect those changes. This cycle is never ending and has led to continuous and rapid development of software projects. More importantly, it has put a great responsibility on software engineers, causing them to adopt practices and tools that allow them to increase their efficiency. However, these tools suffer the same fate as software designed for the general population; they need to change in order to reflect the user’s needs. Fortunately, the demand for this evolving software has given software engineers a plethora of data and artifacts to analyze. The challenge arises when attempting to identify and apply patterns learned from the vast amount of data. In this dissertation, we explore and develop techniques to take advantage of the vast amount of software data and to aid developers in software development tasks. Specifically, we exploit the tool of deep learning to automatically learn patterns discovered within previous software data and automatically apply those patterns to present day software development. We first set out to investigate the current impact of deep learning in software engineering by performing a systematic literature review of top tier conferences and journals. This review provides guidelines and common pitfalls for researchers to consider when implementing DL (Deep Learning) approaches in SE (Software Engineering). In addition, the review provides a research road map for areas within SE where DL could be applicable. Our next piece of work developed an approach that simultaneously learned different representations of source code for the task of clone detection. 
    We found that the use of multiple representations, such as identifiers, ASTs, CFGs, and bytecode, can lead to the identification of similar code fragments. Through the use of deep learning strategies, we automatically learned these different representations without the need for hand-crafted features. Lastly, we designed a novel approach for automating the generation of assert statements through seq2seq learning, with the goal of increasing the efficiency of software testing. Given a test method and the context of its associated focal method, we automatically generated semantically and syntactically correct assert statements for the given, unseen test method. We demonstrate that the techniques presented in this dissertation provide a meaningful advancement to the field of software engineering and the automation of software development tasks. We provide analytical evaluations and empirical evidence that substantiate the impact of our findings and the usefulness of our approaches to the software engineering community.
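    The multi-representation clone-detection idea above can be sketched minimally. The fragment names and feature counts below are invented, and where the dissertation learns representations with deep models, this sketch simply concatenates hand-made feature vectors and thresholds their cosine similarity:

    ```python
    import math

    # Hand-made feature counts from two representations of each fragment
    # (identifier tokens and AST node types); all names and values are
    # hypothetical stand-ins for learned representations.
    fragments = {
        "copy_list_a":  {"ident": [3, 1, 0], "ast": [2, 2, 1]},
        "copy_list_b":  {"ident": [3, 1, 1], "ast": [2, 2, 1]},
        "parse_config": {"ident": [0, 4, 2], "ast": [5, 0, 3]},
    }

    def combined(frag):
        # Fuse the two representations by simple concatenation.
        return frag["ident"] + frag["ast"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    def is_clone(a, b, threshold=0.95):
        # Fragments whose fused vectors are nearly parallel are flagged as clones.
        return cosine(combined(a), combined(b)) >= threshold

    near_duplicate = is_clone(fragments["copy_list_a"], fragments["copy_list_b"])
    unrelated = is_clone(fragments["copy_list_a"], fragments["parse_config"])
    ```

    The two near-identical fragments clear the similarity threshold while the unrelated one does not; combining several representations in this way gives the comparison more signal than any single view of the code.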