16 research outputs found

    A systematic review of unsupervised learning techniques for software defect prediction

    National Key Basic Research Program of China [2018YFB1004401]; the National Natural Science Foundation of China [61972317, 61402370]

    A new transformation for embedded convolutional neural network approach toward real-time servo motor overload fault-detection

    Overloading in DC servo motors is a major concern in industry: many companies struggle to find expert operators, and human monitoring is often not an effective solution. This paper therefore proposes an embedded artificial intelligence (AI) approach based on a convolutional neural network (CNN) with a new transformation that extracts faults from real-time input signals without human interference. The main purpose is to extract as many features as possible from the input signal, yielding a relaxed dataset and hence an effective yet compact network that provides real-time fault detection even on a low-memory microcontroller. Besides the fault-detection method, a synchronous dual-motor system is also proposed to take action in faulty events. To this end, the one-dimensional output-current signal of each DC servo motor is monitored and transformed into a 3-D stack of data, and the CNN is deployed on the processor to detect any fault corresponding to overloading. The experimental setup achieves 99.9997% testing accuracy with a model of nearly 8,000 parameters. In addition, the proposed dual-motor system reduces overload, provides fault tolerance, and is shown to consume less energy.
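    The abstract does not specify the exact transformation, but the core idea of turning a one-dimensional current signal into an image-like input for a 2-D CNN can be sketched as follows (the window width, height, and depth are illustrative assumptions, not the paper's values):

```python
import numpy as np

def signal_to_stack(signal, width=32, height=32, depth=3):
    # Slice the 1-D motor-current signal into consecutive windows and
    # stack them into a (height, width, depth) array, an image-like
    # input suitable for a small 2-D CNN.
    n = width * height * depth
    if len(signal) < n:
        raise ValueError("signal too short for the requested stack")
    return np.asarray(signal[:n], dtype=np.float32).reshape(height, width, depth)

# Hypothetical usage on a synthetic current trace:
t = np.linspace(0.0, 1.0, 32 * 32 * 3)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.cos(2 * np.pi * 3 * t)
stack = signal_to_stack(current)
```

    A real deployment would slide this window over the live signal and feed each stack to the embedded CNN.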

    Improved point center algorithm for K-Means clustering to increase software defect prediction

    The k-means clustering algorithm is popular and easy to use, but it is sensitive to the randomly chosen initial centroids and therefore may not produce optimal results. This research aimed to improve the performance of the k-means algorithm with a proposed algorithm called point center, which overcomes the random centroid initialization and is then applied to predicting defective software modules. The point center algorithm determines the initial centroid values for k-means optimization; the selection of the X and Y variables then determines the cluster-center members. Ten datasets were used for testing, nine of which were used for software defect prediction. The proposed point center algorithm showed the lowest errors and improved k-means performance by an average of 12.82% fewer cluster errors compared with the randomly obtained centroids of the simple k-means algorithm. The findings contribute to developing clustering models that handle data such as software defect modules more accurately.
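    The abstract does not give the exact point-center rule, but the general idea of replacing random initialization with a deterministic one can be sketched as follows; the distance-to-minimum-corner ordering below is an illustrative stand-in, not the paper's actual method:

```python
import numpy as np

def deterministic_init(X, k):
    # Stand-in for a point-center-style rule: order points by their
    # distance to the data's minimum corner and pick k evenly spaced
    # points as initial centroids (no randomness involved).
    d = np.linalg.norm(X - X.min(axis=0), axis=1)
    idx = np.argsort(d)[np.linspace(0, len(X) - 1, k, dtype=int)]
    return X[idx].astype(float).copy()

def kmeans(X, k, iters=100):
    # Standard Lloyd iterations starting from deterministic centroids,
    # so repeated runs on the same data give the same clustering.
    C = deterministic_init(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        newC = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels
```

    Because the initial centroids are a fixed function of the data, the cluster-error comparison against random initialization becomes reproducible.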

    Forecasting model with machine learning in higher education ICFES exams

    In this paper, we build several forecasting models for university education using the k-means, k-nearest neighbors, neural network, and naïve Bayes algorithms, applied to the specific engineering, teaching, and scientific-mathematical-thinking exams of Colombia's Saber Pro. ICFES Saber Pro is an exam required for graduation of all students in undergraduate higher-education programs. The Colombian government regulated this exam in 2009 through Decree 3963, intending to verify the development of competencies, the level of knowledge, and the quality of programs and institutions. The objective is to turn data into information, search for patterns, select the best variables, and harness the potential of the data (on average 650,000 records per semester). The study found that women have greater participation (68%) in mathematics, engineering, and teaching careers; urban areas remain the preferred place from which to apply for higher studies (94%); Internet use increased by 50% in the last year; and the support of the family nucleus is still relevant to the children's education.

    A Bibliometric Survey on the Reliable Software Delivery Using Predictive Analysis

    Delivering a reliable software product is a fairly complex process involving proper coordination among the various teams in planning, execution, and testing. Most of the development time and the software budget's cost is spent finding and fixing bugs. Rework and side-effect costs, caused by bugs inherent in the modified code, are mostly invisible in the planned estimates, impacting the software delivery timeline and increasing cost. Advances in artificial intelligence can predict probable defects through classification based on software code changes, helping the software development team make rational decisions. Optimizing software cost and improving software quality are top priorities for the industry to remain profitable in a competitive market. Hence, there is a strong need to improve software delivery quality by minimizing defects and keeping reasonable control over predicted defects. This paper presents a bibliometric study of reliable software delivery using predictive analysis, selecting 450 documents from the Scopus database with keywords such as software defect prediction, machine learning, and artificial intelligence. The study covers the period from 2010 to 2021. The survey shows that software defect prediction has received considerable focus among researchers, and there are great possibilities to predict and improve overall software product quality using artificial intelligence techniques.

    Improving practices in a medium french company: First step

    Legacy systems are old software that still performs useful tasks. In industrial software companies, legacy systems are often crucial to the company's business model and represent a long-term business investment. Legacy systems are known to be hard to maintain. This is the case in a French company whose main product is a twenty-year-old software system written in PowerBuilder. Our long-term goal is to help the company re-engineer this system. But how can we validate our intervention? Little data is available on the system and, in particular, past versions of the source code are not easy to recover, which constrained the metrics we could use. In this paper, we present a lightweight model to characterize the current situation of the system and allow us to monitor it in the future.

    A Cross-project Defect Prediction Model Using Feature Transfer and Ensemble Learning

    Cross-project defect prediction (CPDP) trains prediction models on existing data from other projects (the source projects) and uses the trained model to predict defects in the target projects. To address two major problems in CPDP, namely variability in data distribution and class imbalance, this paper proposes a CPDP model combining feature transfer and ensemble learning, with two stages: feature transfer and classification. The feature-transfer method is based on the Pearson correlation coefficient, which reduces the dimension of the feature space and the difference in feature distribution between projects. Class imbalance is addressed at both the data and algorithm levels by SMOTE and voting. Experimental results on 20 source-target project pairs show that our method yields significant improvement on CPDP.
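    As a minimal sketch of the correlation-based stage (the paper's full transfer-and-voting pipeline is not reproduced here), source-project features can be ranked by the absolute Pearson correlation between each metric and the defect label, keeping only the top m:

```python
import numpy as np

def pearson_select(X_src, y_src, m):
    # Rank each feature column by |Pearson correlation| with the
    # defect label and keep the indices of the top m features --
    # a simple filter-style stand-in for the correlation-based
    # feature-transfer stage described above.
    corrs = np.array([abs(np.corrcoef(X_src[:, j], y_src)[0, 1])
                      for j in range(X_src.shape[1])])
    keep = np.argsort(-corrs)[:m]
    return np.sort(keep)
```

    The selected feature subset would then be applied identically to the source and target projects before classification.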

    COMPUTERIZED SOFTWARE QUALITY EVALUATION WITH NOVEL ARTIFICIAL INTELLIGENCE APPROACH

    Software quality assurance has grown in importance in the fast-paced world of software development. One of the trickiest parts of creating and maintaining software is predicting how well it will perform. The term "computerized evaluation" refers to the use of advanced AI techniques in software quality assurance, replacing human evaluations and paving the way for a new era in software evaluation. We propose a Hybrid Elephant Herding Optimized Conditional Long Short-Term Memory (HEHO-CLSTM) model for software quality prediction. Software quality prediction encompasses a wide range of activities aimed at improving the quality of software systems via data-driven approaches to prediction, evaluation, and enhancement. We collected software defect data and extracted the attributes using linear discriminant analysis (LDA). The suggested system improves accuracy, AUC, and buggy-instance detection compared with current methods.
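    The HEHO-CLSTM model itself is beyond an abstract-level sketch, but the LDA feature-extraction step can be illustrated with a two-class Fisher discriminant; the data shapes and regularization constant below are illustrative assumptions:

```python
import numpy as np

def fisher_lda_direction(X, y):
    # Two-class Fisher discriminant: project features onto the
    # direction w = Sw^{-1} (mu1 - mu0) that best separates the
    # defective (y == 1) and non-defective (y == 0) classes -- a
    # minimal sketch of LDA-style feature extraction.
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # Small ridge term keeps the within-class scatter invertible.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)
    return w / np.linalg.norm(w)
```

    The scalar projections `X @ w` would then serve as the extracted attribute fed to the downstream classifier.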

    Statistical Analysis for Revealing Defects in Software Projects

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management.
    Defect detection in software is the procedure of identifying parts of the software that may contain defects. Software companies always seek to improve the performance of software projects in terms of quality and efficiency, and to deliver software projects to their communities without any defects and just in time. Early revelation of defects in software projects also helps avoid project failure and saves costs, team effort, and time. Therefore, these companies need an intelligent model capable of detecting software defects accurately and efficiently. This study pursues two main objectives: first, to build a statistical model that identifies the critical defect factors influencing software projects; second, to build a statistical model that reveals defects early in software projects with reasonable accuracy. A bibliometric map (VOSviewer) was used to find the relationships between the common terms in these domains. The results of this study are divided into three parts. In the first part, the term "software engineering" is connected to "cluster," "regression," and "neural network"; moreover, the terms "random forest" and "feature selection" are connected to "neural network," "recall," "software engineering," "cluster," "regression," "fault prediction model," "software defect prediction," and "defect density." In the second part, 29 manuscripts were checked and analyzed in detail, their major contributions summarized, and a few research gaps identified. In the third part, software companies seek the critical factors that affect the detection of software defects and the intelligent or statistical methods that help build a model capable of detecting those defects with high accuracy.
    Two statistical models, multiple linear regression (MLR) and logistic regression (LR), were used to find the critical factors and, through them, to detect software defects accurately. MLR was executed with two methods: critical defect factors (CDF) and the premier list of software defect factors (PLSDF). The accuracy of MLR-CDF and MLR-PLSDF is 82.3% and 79.9%, and their standard errors are 26% and 28%, respectively. LR was likewise executed with the CDF and PLSDF methods. The accuracy of LR-CDF and LR-PLSDF is 86.4% and 83.8%, and their standard errors are 22% and 25%, respectively. Therefore, LR-CDF outperforms all the proposed models and state-of-the-art methods in terms of accuracy and standard error.
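    As a minimal sketch of the LR side (the actual CDF and PLSDF factor lists are not reproduced here), a defect model can be fitted by gradient-descent logistic regression over whatever factor matrix is chosen:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=2000):
    # Plain batch gradient descent on the logistic loss; X holds the
    # chosen defect factors (hypothetical stand-ins for CDF/PLSDF),
    # y is 1 for defective modules and 0 otherwise.
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient step
    return w

def predict(w, X):
    # Threshold the fitted probabilities at 0.5.
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

    Accuracy is then the fraction of modules where `predict` matches the recorded defect label, mirroring how the reported percentages would be computed.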

    Computer vision-based wood identification: a review

    Wood identification is an important tool in many areas, from biology to cultural heritage. In the fight against illegal logging, its application is especially necessary and impactful. Identifying a wood sample to genus or species level is difficult, expensive, and time-consuming, even with the most recent methods, resulting in a growing need for a readily accessible and field-applicable method of scientific wood identification. Providing fast results and ease of use, computer vision-based technology is an economically accessible option currently applied to meet the demand for automated wood identification. However, despite its promising characteristics and accurate results, this method remains a niche research area in wood sciences and is little known in other fields of application such as cultural heritage. To share the results and applicability of computer vision-based wood identification, this paper reviews the most frequently cited and relevant published research based on computer vision and machine-learning techniques, aiming to facilitate and promote the use of this technology in research and encourage its application among end-users who need quick and reliable results.