
    Enhancing similarity distances using mandatory and optional notations for early fault detection

    Software Product Line (SPL) describes the procedures, techniques, and tools used in software engineering to produce a family of software systems from a shared set of software assets through a common method of production. In SPL, similarity-based prioritization can approximate combinatorial interaction testing in a scalable and efficient way by selecting and prioritizing the configurations that are most dissimilar. However, the similarity distances used in SPL still do not cover the basic details of feature models, namely their notations. Moreover, configurations have usually been prioritized based on domain knowledge, while little attention has been paid to feature model notations. In this paper, we propose the use of mandatory and optional notations in similarity distances. The objective is to improve the average percentage of faults detected (APFD). We investigate four different distances and modify them to increase the APFD value; the modifications incorporate mandatory and optional notations into the similarity distances. The results are the APFD values for all the similarity distances, both original and modified. Overall, the results show that subtracting the optional notation value can increase the APFD by 3.71% over the original similarity distance.
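
    To make the evaluation metric concrete, below is a minimal sketch of the standard APFD formula, APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), together with a notation-aware distance; the subtraction of an optional-notation term is an illustrative guess at the paper's modification, not its exact formula, and all data is invented.

```python
# Sketch: Hamming-style similarity distance between SPL configurations,
# adjusted by feature-model notations, plus the standard APFD metric.
# The "subtract optional notation value" adjustment is a hypothetical
# reading of the paper's idea, not its exact formula.

def hamming_distance(cfg_a, cfg_b):
    """Fraction of features on which two configurations differ."""
    assert len(cfg_a) == len(cfg_b)
    return sum(a != b for a, b in zip(cfg_a, cfg_b)) / len(cfg_a)

def modified_distance(cfg_a, cfg_b, optional_mask, weight=0.1):
    """Hypothetical variant: subtract a value tied to shared optional features."""
    optional_overlap = sum(
        opt and a == b for a, b, opt in zip(cfg_a, cfg_b, optional_mask)
    ) / len(cfg_a)
    return hamming_distance(cfg_a, cfg_b) - weight * optional_overlap

def apfd(order, faults_per_test, num_faults):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n)."""
    n, m = len(order), num_faults
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_per_test[test]:
            first_pos.setdefault(fault, pos)  # TF_i: first test revealing fault i
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

# Example: 3 configurations over 4 features (1 = selected), feature 3 optional.
configs = [(1, 0, 1, 0), (1, 1, 1, 0), (0, 1, 0, 1)]
optional = (False, False, True, False)
print(modified_distance(configs[0], configs[1], optional))
faults = {0: {"f1"}, 1: set(), 2: {"f2", "f3"}}
print(apfd([2, 0, 1], faults, num_faults=3))  # 1 - 4/9 + 1/6 ~ 0.722
```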

    A reliability estimation model using integrated tasks and resources

    With the growing size of modern systems, the composition of resources within a system is becoming increasingly complex. A reliability analysis for such a system is therefore essential, especially at design time. The reliability estimation model is rapidly becoming a crucial part of the system development life cycle, and a new model is needed to enable an early analysis of reliability estimation for the system under study. However, the existing approaches neglect the correlation between resources and system tasks when estimating system reliability. This restricts the accuracy of the estimation results and could thus misguide the reliability analysis in general. This paper proposes a reliability estimation model that computes system reliability as a product of resource and system task reliabilities, where system task reliability is treated as the probability of transition from a resource to subsequent resources during execution. To validate the model, one real case study is used, and the accuracy of the estimation result is compared with the actual reliability values. The result shows that the estimation accuracy is at an acceptable level, and some of the scenarios recorded higher accuracy values than previous models. To evaluate the model, the result is compared with that of the existing model, showing that our model provides a more accurate estimation for more complex scenarios.
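
    A minimal sketch of the core idea follows, under the assumption that a scenario can be encoded as a sequence of (resource reliability, task transition probability) pairs; the encoding and numbers are illustrative, not the paper's actual model.

```python
# Sketch: system reliability as a product of resource reliabilities and
# task transition probabilities. The data layout is an illustrative
# assumption, not the paper's actual model.

def scenario_reliability(steps):
    """Each step is (resource_reliability, transition_probability)."""
    r = 1.0
    for resource_rel, transition_prob in steps:
        r *= resource_rel * transition_prob
    return r

# Example: three resources executing in sequence.
steps = [(0.99, 0.95), (0.97, 1.0), (0.995, 0.9)]
print(f"Estimated scenario reliability: {scenario_reliability(steps):.4f}")
```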

    Test case prioritization approaches in regression testing: A systematic literature review

    Context: Software quality can be assured through the software testing process. However, the software testing phase is expensive, as it consumes considerable time. By scheduling the execution order of test cases through a prioritization approach, software testing efficiency can be improved, especially during regression testing. Objective: Prioritization is a notable step in constructing an effective software testing environment that can increase a system's commercial value. The main idea of this review is to examine and classify the current test case prioritization approaches based on the articulated research questions. Method: A set of search keywords applied to appropriate repositories was utilized to extract the most relevant studies fulfilling all the defined criteria, classified under the journal, conference paper, symposium, and workshop categories. 69 primary studies were selected through the review strategy. Results: The primary studies comprised 40 journal articles, 21 conference papers, three workshop articles, and five symposium articles. The results indicate that TCP approaches are still broadly open to improvement. Each TCP approach has its own potential value, advantages, and limitations. Additionally, we found that variations in the starting point of the TCP process among the approaches provide different timelines and benefits, helping project managers choose the approaches that suit the project schedule and available resources. Conclusion: Test case prioritization has already been considerably discussed in the software testing domain. However, quite a number of existing prioritization techniques can still be improved, especially in the data they use and the execution process of each approach.
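
    As a concrete instance of the kind of approach such a review classifies, the sketch below shows the classic greedy "additional coverage" prioritization, one widely studied TCP technique; the coverage data is invented for illustration and is not drawn from the review.

```python
# Sketch: greedy "additional" coverage-based test case prioritization,
# a classic TCP approach of the kind surveyed. Data invented for illustration.

def additional_greedy(coverage):
    """Repeatedly pick the test covering the most not-yet-covered items."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining[best] - covered
        if not gain:
            if not remaining[best]:      # only zero-coverage tests left
                order.extend(remaining)  # append them in arbitrary order
                break
            covered = set()              # all items seen: reset and re-rank
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "t1": {"s1", "s2"},
    "t2": {"s2", "s3", "s4"},
    "t3": {"s4"},
}
print(additional_greedy(coverage))  # ['t2', 't1', 't3']
```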

    The Enhancement of Evolving Spiking Neural Network with Firefly Algorithm

    This study presents the integration of the Evolving Spiking Neural Network (ESNN) with the Firefly Algorithm (FA) for parameter optimization of the ESNN model. Since ESNN lacks the ability to select its optimum parameters automatically, the Firefly Algorithm, one of the nature-inspired metaheuristic algorithms, is used as a new parameter optimizer for ESNN. The proposed method, ESNN-FA, is used to determine the optimum values of the ESNN parameters: the modulation factor (Mod), similarity factor (Sim), and threshold factor (C). Five standard datasets from the UCI Machine Learning Repository are used to measure the effectiveness of the proposed work. The classification results show higher accuracy than standard ESNN for all datasets except the iris dataset, for which the accuracy is 84%, lower than the standard ESNN's 89.33%. The remaining datasets achieved higher classification accuracy than standard ESNN: breast cancer with 92.12% versus 66.18%, diabetes with 68.25% versus 38.46%, heart with 78.15% versus 66.3%, and wine with 78.66% versus 44.45%.
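
    A minimal sketch of a standard Firefly Algorithm tuning three parameters in (0, 1) is shown below; the fitness function is a placeholder standing in for ESNN classification accuracy, and the bounds and constants are assumptions, not the paper's settings.

```python
import math
import random

# Sketch: Firefly Algorithm tuning three ESNN parameters (Mod, Sim, C).
# The fitness function here is a stand-in; in the paper it would be the
# ESNN classification accuracy obtained with the candidate parameters.

random.seed(42)
BOUNDS = [(0.0, 1.0)] * 3            # Mod, Sim, C assumed to lie in (0, 1)
ALPHA, BETA0, GAMMA = 0.2, 1.0, 1.0  # typical FA constants

def fitness(params):
    """Placeholder objective; replace with ESNN training accuracy."""
    mod, sim, c = params
    return -((mod - 0.9) ** 2 + (sim - 0.6) ** 2 + (c - 0.7) ** 2)

def clip(params):
    return [min(hi, max(lo, v)) for v, (lo, hi) in zip(params, BOUNDS)]

def firefly(n_fireflies=15, n_iters=50):
    swarm = [clip([random.random() for _ in range(3)]) for _ in range(n_fireflies)]
    for _ in range(n_iters):
        scores = [fitness(f) for f in swarm]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if scores[j] > scores[i]:  # move dimmer firefly toward brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(swarm[i], swarm[j]))
                    beta = BETA0 * math.exp(-GAMMA * r2)
                    swarm[i] = clip([
                        a + beta * (b - a) + ALPHA * (random.random() - 0.5)
                        for a, b in zip(swarm[i], swarm[j])
                    ])
    return max(swarm, key=fitness)

mod, sim, c = firefly()
print(f"Mod={mod:.3f}, Sim={sim:.3f}, C={c:.3f}")
```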

    Strategy for scalable scenarios modeling and calculation in early software reliability engineering

    System scenarios derived from requirements specifications play an important role in early software reliability engineering. A great deal of research effort has been devoted to predicting the reliability of a system at early design stages, but the existing approaches are unable to handle scalability and the calculation of scenario reliability for large systems. This paper proposes modeling scenarios in a scalable way by using a scenario language that describes system scenarios in a compact and concise manner, which can result in a reduced number of scenarios. Furthermore, it proposes a calculation strategy to achieve better traceability of scenarios and avoid computational complexity. The scenarios are pragmatically modeled and translated to finite state machines, where each state machine represents the behaviour of a component instance within the scenario. The probability of failure of each component exhibited in the scenario is calculated separately based on the finite state machines. Finally, the reliability of the whole scenario is calculated from the components' behaviour models and their failure information using a modified mathematical formula. In this paper, an example related to a case study of an automated railcar system is used to verify and validate the proposed strategy for scalable system modeling.
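
    The calculation step can be pictured with a short sketch: each component's failure probability would come from its finite state machine, and the scenario reliability is then composed from the component results. The simple product form and the railcar component names below are illustrative assumptions; the paper's modified formula is not reproduced.

```python
# Sketch: scenario reliability from per-component failure probabilities.
# Each component's failure probability would be derived from its finite
# state machine; here per-state reliabilities are given directly, and a
# plain product form stands in for the paper's modified formula.

def component_failure_prob(state_reliabilities):
    """Failure probability of one component's behaviour (chain of states)."""
    survive = 1.0
    for r in state_reliabilities:
        survive *= r
    return 1.0 - survive

def scenario_reliability(components):
    """Scenario succeeds only if every participating component succeeds."""
    rel = 1.0
    for states in components.values():
        rel *= 1.0 - component_failure_prob(states)
    return rel

# Example: hypothetical railcar scenario with three component instances.
components = {
    "arrival_sensor": [0.999, 0.998],
    "control_unit":   [0.995, 0.999, 0.997],
    "door_actuator":  [0.990],
}
print(f"Scenario reliability: {scenario_reliability(components):.4f}")
```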

    Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy

    Soft Computing-based estimation by analogy is a lucrative research domain for the software engineering research community, and a considerable number of models have been proposed in this area. Researchers are therefore interested in comparing the models to identify the best one for software development effort estimation. This research showed that most studies used the mean magnitude of relative error (MMRE) and percentage of prediction (PRED) to compare their estimation models. However, this study also found that renowned authors have raised quite a number of criticisms of accuracy statistics such as MMRE and PRED; MMRE in particular has been found to be an unbalanced, biased, and inappropriate performance measure for identifying the best among competing estimation models. Such accuracy statistics are nevertheless still adopted in the evaluation criteria by domain researchers, on the grounds that they are "widely used," which is not a valid reason. This study identified that researchers continue to adopt these measures because no practical replacement for MMRE and PRED has been provided so far. The approach of partitioning a large dataset into subsamples was tried in this paper using the estimation by analogy (EBA) model. One small and one large dataset were considered: Desharnais and ISBSG release 11, where the ISBSG dataset is large compared with Desharnais. The ISBSG dataset was partitioned into subsamples. The results suggest that when large datasets are partitioned, MMRE produces the same or nearly the same results as it produces for the small dataset. It is observed that MMRE can be trusted as a performance metric if large datasets are partitioned into subsamples.
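
    The two metrics and the subsampling idea are easy to state in code: MRE = |actual - predicted| / actual, MMRE is its mean, and PRED(25) is the fraction of estimates with MRE at or below 0.25. The sketch below uses invented effort values, not the Desharnais or ISBSG data.

```python
import statistics

# Sketch: MMRE and PRED(25), plus the idea of partitioning a large dataset
# into subsamples and comparing MMRE across them. Data is invented.

def mre(actual, predicted):
    return abs(actual - predicted) / actual

def mmre(actuals, predictions):
    return statistics.mean(mre(a, p) for a, p in zip(actuals, predictions))

def pred(actuals, predictions, level=0.25):
    hits = sum(mre(a, p) <= level for a, p in zip(actuals, predictions))
    return hits / len(actuals)

def partition(xs, size):
    return [xs[i:i + size] for i in range(0, len(xs), size)]

actuals     = [120, 300, 80, 450, 95, 210, 600, 150]
predictions = [130, 270, 90, 500, 85, 230, 550, 160]

print(f"MMRE     : {mmre(actuals, predictions):.3f}")
print(f"PRED(25) : {pred(actuals, predictions):.3f}")
for a_part, p_part in zip(partition(actuals, 4), partition(predictions, 4)):
    print(f"subsample MMRE: {mmre(a_part, p_part):.3f}")
```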

    A Comparison on Similarity Distances and Prioritization Techniques for Early Fault Detection Rate

    Nowadays, the Software Product Line (SPL) has replaced conventional product development systems. Much research has been carried out to ensure that SPL usage delivers its benefits to recent technologies. However, some problems still exist within the concept itself, such as variability and commonality. Due to this variability, exhaustive testing is not possible, and various solutions have been proposed to lessen the problem. One of them is test case prioritization, which rearranges test cases to achieve a specific performance goal. In this paper, early fault detection is selected as the performance goal, and a similarity function is used within our prioritization approach. Five different types of prioritization techniques are used in the experiment. The experiment results indicate that the greed-aided-clustering ordered sequence (GOS) shows the highest rate of early fault detection.
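
    The general shape of similarity-based prioritization can be sketched as follows: repeatedly pick the configuration most dissimilar (here by Jaccard distance) from those already selected. This illustrates the family of techniques compared; the GOS technique's greed-aided clustering is not reproduced, and the configurations are invented.

```python
# Sketch: similarity-based prioritization of SPL configurations, picking the
# configuration most dissimilar (by Jaccard distance) from those already
# selected. Illustrative only; the paper's GOS technique additionally uses
# greed-aided clustering, which is not reproduced here.

def jaccard_distance(a, b):
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def dissimilarity_order(configs):
    order = [0]  # start from the first configuration
    remaining = list(range(1, len(configs)))
    while remaining:
        # Pick the config whose minimum distance to the selected set is largest.
        best = max(remaining, key=lambda i: min(
            jaccard_distance(configs[i], configs[j]) for j in order))
        order.append(best)
        remaining.remove(best)
    return order

# Each configuration is the set of its selected features.
configs = [{"a", "b"}, {"a", "b", "c"}, {"d", "e"}, {"a", "d"}]
print(dissimilarity_order(configs))  # [0, 2, 3, 1]
```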

    Organizational socialization process of MBA graduates

    Extant literature on organizational socialization has not focused on how education and learning relate to adjusting to work. As a result, little is known about the contribution of an MBA learning experience to the process of organizational socialization. The purpose of this study is to understand the organizational socialization process of MBA graduates in their first six months of work. This qualitative study involved interviews with MBA graduates who had been employed within a period of one to six months. The study found that MBA graduates utilized the communication and analytical skills enhanced during their MBA education in support of their socialization tactics of relationship building, information gathering, and learning. Graduates' skills and prior experiences were mobilized through the facilitation of immediate superiors and/or supported by mentorship and help from senior co-workers.
