A Strategy for Automatic Quality Signing and Verification Processes for Hardware and Software Testing
We propose a novel strategy to optimize the test suite required for testing both hardware and software in a production line. The strategy is based on two processes: the Quality Signing Process and the Quality Verification Process. Unlike earlier work, the proposed strategy integrates black-box and white-box techniques to derive an optimal test suite during the Quality Signing Process. The generated optimal test suite, in turn, significantly improves the Quality Verification Process. Across both processes, the novelty of the proposed strategy lies in the fact that the test suite is optimized and reduced by selecting only mutant-killing test cases from the cumulative t-way test cases. As such, the proposed strategy can potentially enhance product quality with minimal cost in terms of overall resource usage and execution time. As a case study, this paper describes the step-by-step application of the strategy to testing a 4-bit magnitude comparator integrated circuit in a production line. Our results demonstrate that the proposed strategy outperforms the traditional block partitioning strategy, achieving a mutation score of 100% versus 90% with the same number of test cases.
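The core reduction step, keeping only mutant-killing test cases from the cumulative t-way pool, can be pictured as a greedy filter. The sketch below is illustrative only, not the authors' implementation; the `kills` predicate and all names are hypothetical stand-ins for whatever the mutation tool reports.

```python
def reduce_suite(t_way_tests, mutants, kills):
    """Greedily keep only test cases that kill at least one not-yet-killed
    mutant; every other test case is dropped from the suite."""
    remaining = set(mutants)          # mutants still alive
    reduced = []
    for test in t_way_tests:
        newly_killed = {m for m in remaining if kills(test, m)}
        if newly_killed:              # test adds value: it kills new mutants
            reduced.append(test)
            remaining -= newly_killed
    return reduced, remaining
```

With this filter, a test case that only re-kills already-dead mutants contributes nothing and is excluded, which is how the suite shrinks without losing mutation score.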
Adopting Jaya Algorithm for Team Formation Problem
This paper presents a simple yet powerful metaheuristic algorithm, Jaya, applied to solve the team formation (TF) problem, a fundamental problem in many databases, expert collaboration networks, and web applications. Jaya does not need any algorithm-specific parameters that require comprehensive tuning, which is usually troublesome and inefficient. Among several optimization methods, Jaya is chosen for the TF problem because of its simplicity and because it always moves toward the global best solution while avoiding the worst solutions. This property makes the Jaya Algorithm powerful and significant compared to other contemporary optimization algorithms. To evaluate the efficiency of the Jaya Algorithm (JA) against another metaheuristic, the Sine-Cosine Algorithm (SCA), both algorithms are tested and assessed on the TF problem using an ACM dataset containing experts and their skills. The experimental results validate the improved quality of the optimization solutions and the potential of JA, with faster convergence than SCA, for solving TF problems.
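The parameter-free character of Jaya comes from its single update rule: every candidate moves toward the current best and away from the current worst, and a move is kept only if it improves fitness. A minimal sketch for a minimization problem (function and variable names are ours, not from the paper):

```python
import random

def jaya_step(population, fitness):
    """One Jaya iteration for minimization:
    x' = x + r1*(best - |x|) - r2*(worst - |x|), accepted only if better."""
    best = min(population, key=fitness)
    worst = max(population, key=fitness)
    new_population = []
    for x in population:
        candidate = [
            xi
            + random.random() * (best[i] - abs(xi))
            - random.random() * (worst[i] - abs(xi))
            for i, xi in enumerate(x)
        ]
        # Greedy acceptance: never replace a solution with a worse one.
        new_population.append(candidate if fitness(candidate) < fitness(x) else x)
    return new_population
```

Because acceptance is greedy, the best fitness in the population is monotonically non-increasing across iterations, which is the "always avoids the worst, moves toward the best" behavior the abstract describes.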
Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation
Combinatorial interaction testing is an important software testing technique
that has attracted significant recent interest. It can reduce the number of test
cases needed by considering interactions between combinations of input parameters.
Empirical evidence shows that it effectively detects faults, in particular, for
highly configurable software systems. In real-world software testing, the input
variables may vary in how strongly they interact; variable-strength
combinatorial interaction testing (VS-CIT) can exploit this for higher
effectiveness. The generation of variable-strength test suites is a
non-deterministic polynomial-time (NP) hard computational problem
\cite{BestounKamalFuzzy2017}. Research has shown that stochastic
population-based algorithms such as particle swarm optimization (PSO) can be
efficient compared to alternatives for VS-CIT problems. Nevertheless, they
require detailed control for the exploitation and exploration trade-off to
avoid premature convergence (i.e. being trapped in local optima) as well as to
enhance solution diversity. Here, we present a new variant of PSO based on
a Mamdani fuzzy inference system
\cite{Camastra2015,TSAKIRIDIS2017257,KHOSRAVANIAN2016280}, to permit adaptive
selection of its global and local search operations. We detail the design of
this combined algorithm and evaluate it through experiments on multiple
synthetic and benchmark problems. We conclude that fuzzy adaptive selection of
global and local search operations is, at the least, feasible, as it performs
second-best only to a discrete variant of PSO called DPSO. In terms of the best
mean test suite size, the fuzzy adaptation occasionally even outperforms DPSO.
We discuss the reasons behind this performance and outline relevant areas of
future work.
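As a rough illustration of the exploitation/exploration trade-off being adapted at runtime, one PSO iteration with a crude diversity-driven inertia switch might look like the sketch below. This is a simplified stand-in for the paper's Mamdani fuzzy controller (which would interpolate smoothly over fuzzy rules rather than threshold); all names are hypothetical.

```python
import random

def pso_step(swarm, velocities, pbest, fitness, diversity):
    """One PSO iteration (minimization) with adaptive inertia: low swarm
    diversity -> high inertia (explore), high diversity -> low inertia
    (exploit). A Mamdani fuzzy system would interpolate here instead."""
    c1 = c2 = 2.0
    w = 0.9 if diversity < 0.1 else 0.4   # crude two-rule stand-in
    gbest = min(pbest, key=fitness)
    for i, x in enumerate(swarm):
        for d in range(len(x)):
            velocities[i][d] = (w * velocities[i][d]
                + c1 * random.random() * (pbest[i][d] - x[d])
                + c2 * random.random() * (gbest[d] - x[d]))
            x[d] += velocities[i][d]
        if fitness(x) < fitness(pbest[i]):
            pbest[i] = list(x)            # personal best only ever improves
    return min(pbest, key=fitness)
```

Since personal bests are only ever replaced by strictly better positions, the returned global best is non-increasing in fitness over iterations, regardless of how the inertia is chosen.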
Constraints T-Way Testing Strategy With Modified Condition/Decision Coverage (MC/DC)
Modern society in today’s digital era depends heavily on software in almost every aspect of daily life. In fact, whenever possible, most hardware implementations are now being replaced by software counterparts. From washing machine controllers and mobile phone applications to sophisticated airplane control systems, the growing dependency on software can be attributed to a number of factors. Unlike hardware, software does not wear out; thus, the use of software can also help to control maintenance costs. Additionally, software is malleable and can easily be changed and customized as the need arises. With advances in computer hardware technology, software applications have grown drastically in terms of lines of code to keep up with ever-increasing customer demands for new functionalities and innovations. As such, ensuring software quality can be a daunting task. Exhaustive testing is practically infeasible given the large domain of inputs and the enormous number of possible execution paths. Over the years, many sampling techniques (or strategies) have been proposed to select subsets of test cases for testing consideration. In many applications, sampling strategies based on boundary value analysis, equivalence partitioning, cause-and-effect analysis, and decision tables are sufficiently useful, but they are not designed to address faults due to interaction. In other applications, particularly those involving structural (predicate) testing (e.g. in the avionics industry), sampling strategies based on coverage criteria such as statement, decision, and path coverage are deemed necessary; however, they often suffer from the effect of masking (i.e. due to the resulting AND and OR operations). Currently, researchers in combinatorial testing have developed strategies based on interaction testing (termed t-way testing) in order to detect bugs due to interaction. Here, depending on the value of the interaction strength (t), all desired t-way interactions are faithfully covered in the resulting test cases. Although useful, much existing work on t-way testing has not sufficiently considered modified condition/decision coverage (MC/DC) as the criterion for test generation. In many critical applications, particularly those involving airborne systems, compliance with MC/DC is required by law [1]. Proposed by NASA in 1992, MC/DC is a white-box testing criterion ensuring that each condition within a predicate can independently influence the outcome of the decision while the outcomes of all other conditions remain constant. In this manner, the MC/DC criterion subsumes other well-known coverage criteria such as statement, decision, and path coverage [2]. Addressing some of the aforementioned issues, this research discusses the design of a new constraints-based t-way strategy with the MC/DC criterion for structural (predicate) testing. In doing so, this paper also highlights possible implementations.
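The MC/DC requirement, that each condition can independently flip the decision while all other conditions are held constant, can be checked mechanically by pairing inputs that differ in exactly one condition. A small illustrative sketch (the predicate and helper names are hypothetical examples, not from the paper):

```python
from itertools import product

def independence_pairs(decision, n):
    """For each condition i, collect input pairs that differ ONLY in
    condition i yet flip the decision outcome -- the independent-influence
    requirement at the heart of MC/DC."""
    pairs = {i: [] for i in range(n)}
    for i in range(n):
        for bits in product([False, True], repeat=n):
            if bits[i]:              # count each unordered pair once
                continue
            flipped = list(bits)
            flipped[i] = True
            if decision(*bits) != decision(*flipped):
                pairs[i].append((bits, tuple(flipped)))
    return pairs

# Hypothetical example predicate: a and (b or c)
predicate = lambda a, b, c: a and (b or c)
```

For `a and (b or c)`, condition `a` flips the outcome whenever `b or c` holds, while `b` (respectively `c`) flips it only when `a` is true and the other operand of the OR is false, which is exactly the masking effect the abstract mentions.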
Construction of Prioritized T-Way Test Suite Using Bi-Objective Dragonfly Algorithm
Software testing is important for ensuring the reliability of software systems. In software testing, effective test case generation is essential as an alternative to exhaustive testing. To improve software testing technology, the t-way testing technique combined with metaheuristic algorithms has proven effective at analyzing large numbers of combinations to obtain optimal solutions. However, most existing t-way strategies consider only test case weight while generating test suites; test case priority has not been fully considered in previous work, although in practice it is frequently necessary to distinguish between high-priority and low-priority test cases. The significance of test case prioritization is therefore quite high. For this reason, this paper proposes a t-way strategy that implements an adaptive Dragonfly Algorithm (DA) to construct prioritized t-way test suites. In this strategy, test case weight and test case priority have equal significance during test suite generation. We have designed and implemented a Bi-objective Dragonfly Algorithm (BDA) for prioritized t-way test suite generation, the two objectives being test case weight and test case priority. The test results demonstrate that BDA performs competitively against existing t-way strategies in terms of test suite size and, in addition, generates prioritized test suites. ©2022 Authors. Published by IEEE. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
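Giving weight and priority equal significance, as BDA does, can be pictured as a scalarized bi-objective score used to rank candidate test cases. The following is a minimal sketch with hypothetical names; the paper's actual dragonfly operators are not reproduced here.

```python
def select_next(candidates, weight, priority, alpha=0.5):
    """Pick the candidate test case maximizing a scalarized bi-objective
    score; alpha=0.5 gives interaction-coverage weight and user-assigned
    priority equal significance, as in BDA's two objectives."""
    return max(candidates,
               key=lambda t: alpha * weight(t) + (1 - alpha) * priority(t))
```

Sliding `alpha` toward 0 ranks purely by priority and toward 1 purely by coverage weight, so the equal-significance setting sits exactly in between.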
Comparative Study to Measure the Quality of Big Scholarly Data and Its Hypothetical Mapping towards Granular Computing
Nowadays, researchers are interested in granular computing as a means of addressing the big data problem. The volume of Big Scholarly Data (BSD) is rapidly growing, and in order to evaluate research performance it is becoming essential to evaluate the impact of BSD. Traditionally, journals have been ranked by their journal impact factor (JIF). However, several impact evaluation methods are used by different BSD digital systems, such as citation analysis, the G-index, the H-index, the i10-index, the journal impact factor (JIF), and the Eigenfactor. This paper presents a detailed study of these impact evaluation methods along with their advantages and disadvantages. From this study, we observe that although the evaluation methods appear highly correlated, they lead to large differences in BSD impact evaluation. We conclude that no single evaluation method is superior, and that the present research gap is to develop standard rubrics and benchmarks with which to evaluate the existing methods. Furthermore, we have hypothetically modeled a new fuzzy granular approach, the evolving structural fuzzy model (ESFM), which incorporates the concept of granular computing; its information granules provide an expressive and functional depiction of the global concept.
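Among the metrics the study compares, the H-index has a particularly crisp definition: the largest h such that h of the author's papers each have at least h citations. A standard sketch:

```python
def h_index(citations):
    """H-index: the largest h such that h papers each have >= h citations.
    Sort descending and find the last rank where citations still cover it."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank      # this many papers each have at least this many cites
        else:
            break
    return h
```

The i10-index (papers with at least 10 citations) is even simpler to compute, which illustrates the study's point: the metrics are cheap to calculate individually, yet can rank the same body of work quite differently.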