36,790 research outputs found
Enhancing mirror adaptive random testing through dynamic partitioning
Context: Adaptive random testing (ART), originally proposed as an enhancement of random testing, is often criticized for the high computation overhead of many ART algorithms. Mirror ART (MART) is a novel approach that can be generally applied to improve the efficiency of various ART algorithms, based on a combination of "divide-and-conquer" and "heuristic" strategies. Objective: The computation overhead of existing MART methods is actually on the same order of magnitude as that of the original ART algorithms. In this paper, we aim to further decrease the order of computation overhead for MART. Method: We conjecture that the mirroring scheme in MART should be dynamic instead of static to deliver higher efficiency. We thus propose a new approach, namely dynamic mirror ART (DMART), which incrementally partitions the input domain and adopts new mirror functions. Results: Our simulations demonstrate that the new DMART approach delivers failure-detection effectiveness comparable to the original MART and ART algorithms while having much lower computation overhead. The experimental studies further show that the new approach also delivers better and more reliable performance on programs with failure-unrelated parameters. Conclusion: In general, DMART is much more cost-effective than MART. Since its mirroring scheme is independent of concrete ART algorithms, DMART can be generally applied to improve the cost-effectiveness of various ART algorithms.
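The mirroring idea can be illustrated with a small sketch. Assuming a one-dimensional unit input domain split into equal subdomains and a simple translation-based mirror function (both illustrative choices, not the paper's DMART), tests generated in one source subdomain are copied into every other subdomain:

```python
import random

# Illustrative sketch of mirroring in MART-style testing: run a test-generation
# strategy only in the first subdomain [0, w) of the unit interval, then
# translate each generated point into the remaining subdomains.

def mirror_tests(source_tests, num_partitions, domain_width=1.0):
    """Map tests from the source subdomain into all num_partitions subdomains."""
    w = domain_width / num_partitions
    mirrored = []
    for x in source_tests:
        for k in range(num_partitions):
            mirrored.append(x + k * w)  # translate into the k-th subdomain
    return mirrored

random.seed(0)
# An ART algorithm would generate these; plain random points stand in here.
source = [random.uniform(0.0, 0.25) for _ in range(3)]
tests = mirror_tests(source, num_partitions=4)
print(len(tests))  # 12 test cases obtained from 3 source tests
```

Only the source subdomain pays the ART selection cost; the mirrored copies are produced by cheap translations, which is where the overhead saving comes from.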
A survey on adaptive random testing
Random testing (RT) is a well-studied testing method that has been widely applied to the testing of many applications, including embedded software systems, SQL database systems, and Android applications. Adaptive random testing (ART) aims to enhance RT's failure-detection ability by more evenly spreading the test cases over the input domain. Since its introduction in 2001, there have been many contributions to the development of ART, including various approaches, implementations, assessment and evaluation methods, and applications. This paper provides a comprehensive survey on ART, classifying techniques, summarizing application areas, and analyzing experimental evaluations. This paper also addresses some misconceptions about ART, and identifies open research challenges to be investigated in future work.
One-domain-one-input: adaptive random testing by orthogonal recursive bisection with restriction
One goal of software testing may be the identification or generation of a series of test cases that can detect a fault with as few test executions as possible. Motivated by insights from research into failure-causing regions of input domains, the even spreading (even distribution) of tests across the input domain has been identified as a useful heuristic for finding failures more quickly. This finding has encouraged a shift in focus from traditional random testing (RT) to its enhancement, adaptive random testing (ART), which retains the randomness of test input selection but also attempts to maintain a more evenly distributed spread of test inputs across the input domain. Given that there are different ways to achieve the even distribution, several different ART methods and approaches have been proposed. This paper presents a new ART method, called ART-ORB, which explores the advantages of repeated geometric bisection of the input domain, combined with restriction regions, to evenly spread test inputs. Experimental results show better performance than RT in terms of the number of test executions needed to find failures. Compared with other ART methods, ART-ORB has comparable performance (in terms of required test executions) but incurs lower test input selection overheads, especially in higher-dimensional input spaces. It is recommended that ART-ORB be used in testing situations involving expensive test input execution.
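As an illustration of the bisection idea (a simplified sketch, not the paper's ART-ORB, and without its restriction regions), the following recursively bisects a 2-D unit square along alternating axes and then draws one random test input per resulting cell, which spreads tests roughly evenly across the input domain:

```python
import random

# Orthogonal recursive bisection sketch: split the domain in half along
# alternating axes `depth` times, yielding 2**depth congruent cells.

def bisect_cells(lo, hi, depth, axis=0):
    if depth == 0:
        return [(lo, hi)]
    mid = (lo[axis] + hi[axis]) / 2.0
    hi1, lo2 = list(hi), list(lo)
    hi1[axis] = mid          # lower half along this axis
    lo2[axis] = mid          # upper half along this axis
    nxt = (axis + 1) % len(lo)
    return (bisect_cells(lo, tuple(hi1), depth - 1, nxt)
            + bisect_cells(tuple(lo2), hi, depth - 1, nxt))

def one_test_per_cell(cells, rng):
    # One uniformly random test input inside each cell.
    return [tuple(rng.uniform(l, h) for l, h in zip(lo, hi))
            for lo, hi in cells]

rng = random.Random(1)
cells = bisect_cells((0.0, 0.0), (1.0, 1.0), depth=4)
tests = one_test_per_cell(cells, rng)
print(len(tests))  # 16 tests, one per cell
```

Because cell boundaries are computed deterministically, the per-test selection cost stays constant, unlike distance-based ART methods whose cost grows with the number of already-executed tests.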
Enhancing adaptive random testing for programs with high dimensional input domains or failure-unrelated parameters
Adaptive random testing (ART), an enhancement of random testing (RT), aims to both randomly select and evenly spread test cases. Recently, it has been observed that the effectiveness of some ART algorithms may deteriorate as the number of program input parameters (dimensionality) increases. In this article, we analyse various problems of one ART algorithm, namely fixed-size-candidate-set ART (FSCS-ART), in the high-dimensional input domain setting, and study how FSCS-ART can be further enhanced to address these problems. We propose adding a filtering process for inputs to FSCS-ART to achieve a more even spread of test cases and better failure-detection effectiveness in high-dimensional space. Our study shows that this solution, termed FSCS-ART-FE, can improve FSCS-ART not only in the case of high-dimensional space, but also in the case of failure-unrelated parameters. Both cases are common in real-life programs. Therefore, we recommend using FSCS-ART-FE instead of FSCS-ART whenever possible. Other ART algorithms may face similar problems to FSCS-ART; hence our study also brings insight into the improvement of other ART algorithms in high-dimensional space.
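For reference, the basic FSCS-ART selection step can be sketched as follows (a minimal illustration without the proposed filtering extension; the candidate-set size k=10 and the unit-cube domain are assumptions for this sketch):

```python
import math
import random

# Fixed-size-candidate-set ART sketch: each round draws k random candidates
# and keeps the one whose minimum distance to all previously selected test
# cases is largest, which pushes new tests away from old ones.

def fscs_art(n_tests, dim, k=10, seed=0):
    rng = random.Random(seed)
    tests = [tuple(rng.random() for _ in range(dim))]  # first test: pure random
    while len(tests) < n_tests:
        candidates = [tuple(rng.random() for _ in range(dim))
                      for _ in range(k)]
        best = max(candidates,
                   key=lambda c: min(math.dist(c, t) for t in tests))
        tests.append(best)
    return tests

tests = fscs_art(n_tests=20, dim=2)
print(len(tests))  # 20
```

The min-distance criterion is exactly what degrades in high dimensions, since candidates near the domain boundary tend to win, motivating the filtering enhancement the abstract describes.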
PVSNet: Palm Vein Authentication Siamese Network Trained using Triplet Loss and Adaptive Hard Mining by Learning Enforced Domain Specific Features
Designing an end-to-end deep learning network to match biometric features with limited training samples is an extremely challenging task. To address this problem, we propose a new way to design an end-to-end deep CNN framework, i.e., PVSNet, that works in two major steps: first, an encoder-decoder network is used to learn generative domain-specific features, followed by a Siamese network in which the convolutional layers are pre-trained in an unsupervised fashion as an autoencoder. The proposed model is trained via a triplet loss function that is adjusted for learning feature embeddings in a way that minimizes the distance between embedding pairs from the same subject and maximizes the distance to those from different subjects, with a margin. In particular, a triplet Siamese matching network using adaptive-margin-based hard negative mining has been suggested. The hyper-parameters associated with the training strategy, like the adaptive margin, have been tuned to make the learning more effective on biometric datasets. In extensive experimentation, the proposed network outperforms most of the existing deep learning solutions on three types of vein datasets, which clearly demonstrates the effectiveness of our proposed method.
Comment: Accepted in the 5th IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), 2019, Hyderabad, India.
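The margin-based triplet objective described above can be sketched in a few lines (a plain fixed-margin version for a single triplet; the paper's adaptive margin and hard negative mining are not modelled here):

```python
# Triplet loss sketch: penalize an anchor embedding that is not at least
# `margin` closer (in squared Euclidean distance) to its positive (same
# subject) than to its negative (different subject).

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(d_pos - d_neg + margin, 0.0)

# Negative already far enough: loss is zero, no gradient pressure.
print(triplet_loss((0.0, 0.0), (0.1, 0.0), (1.0, 0.0)))  # 0.0
```

Hard negative mining then focuses training on triplets where this loss is nonzero, i.e., where the negative sits inside the margin.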
Selective sampling for combined learning from labelled and unlabelled data
This paper examines the problem of selecting a suitable subset of data to be labelled when building pattern classifiers from labelled and unlabelled data. The selection of a representative set is guided by clustering information, and various options for allocating a number of samples within clusters, as well as their distributions, are investigated. The experimental results show that hybrid methods like semi-supervised clustering with selective sampling can result in a classifier that requires much less labelled data to achieve classification performance comparable to that of classifiers built only on labelled data.
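The allocation step can be sketched as follows (an illustrative proportional-allocation scheme; the cluster assignments are assumed to come from any clustering algorithm, and the function name is hypothetical):

```python
import random
from collections import defaultdict

# Selective-sampling sketch: spend a labelling budget across clusters in
# proportion to cluster size, then sample that many points from each cluster
# to send for manual labelling.

def allocate_labels(assignments, budget, seed=0):
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for idx, cluster_id in enumerate(assignments):
        clusters[cluster_id].append(idx)
    n = len(assignments)
    chosen = []
    for members in clusters.values():
        share = max(1, round(budget * len(members) / n))  # proportional share
        chosen.extend(rng.sample(members, min(share, len(members))))
    return chosen[:budget]

assignments = [0] * 50 + [1] * 30 + [2] * 20   # three clusters of sizes 50/30/20
to_label = allocate_labels(assignments, budget=10)
print(len(to_label))  # 10 indices selected for labelling
```

Alternative allocations (equal per cluster, or weighted by cluster spread) are among the options such a study would compare.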
Randomized Local Model Order Reduction
In this paper we propose local approximation spaces for localized model order reduction procedures such as domain decomposition and multiscale methods. Those spaces are constructed from local solutions of the partial differential equation (PDE) with random boundary conditions, yield an approximation that converges provably at a nearly optimal rate, and can be generated at close to optimal computational complexity. In many localized model order reduction approaches, like the generalized finite element method, static condensation procedures, and the multiscale finite element method, local approximation spaces can be constructed by approximating the range of a suitably defined transfer operator that acts on the space of local solutions of the PDE. Optimal local approximation spaces that in general yield an exponentially convergent approximation are given by the left singular vectors of this transfer operator [I. Babuška and R. Lipton 2011; K. Smetana and A. T. Patera 2016]. However, the direct calculation of these singular vectors is computationally very expensive. In this paper, we propose an adaptive randomized algorithm based on methods from randomized linear algebra [N. Halko et al. 2011], which constructs a local reduced space approximating the range of the transfer operator and thus the optimal local approximation spaces. The adaptive algorithm relies on a probabilistic a posteriori error estimator, which we prove to be both efficient and reliable with high probability. Several numerical experiments confirm the theoretical findings.
Comment: 31 pages, 14 figures, 1 table, 1 algorithm.
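The basic (non-adaptive) randomized range finder from [N. Halko et al. 2011] that such an algorithm builds on can be sketched as follows, with the transfer operator represented as a plain matrix for illustration (a pure-Python sketch; practical implementations use a QR factorization instead of Gram-Schmidt):

```python
import random

# Randomized range finder sketch: apply the operator T to random Gaussian
# vectors and orthonormalize the images; the resulting basis approximates
# the range of T.

def matvec(T, x):
    return [sum(t_ij * x_j for t_ij, x_j in zip(row, x)) for row in T]

def gram_schmidt(vectors, tol=1e-12):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                       # remove components along basis
            proj = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - proj * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > tol:                        # drop (numerically) dependent images
            basis.append([wi / norm for wi in w])
    return basis

def randomized_range(T, n_samples, seed=0):
    rng = random.Random(seed)
    dim = len(T[0])
    images = [matvec(T, [rng.gauss(0.0, 1.0) for _ in range(dim)])
              for _ in range(n_samples)]
    return gram_schmidt(images)

# Rank-2 operator: the approximated range has (at most) 2 basis vectors.
T = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0]]
Q = randomized_range(T, n_samples=4)
print(len(Q))  # 2
```

The adaptive variant described in the abstract keeps drawing random samples until a probabilistic a posteriori error estimate certifies the target accuracy, rather than fixing n_samples in advance.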
- …