Automatic generation of smell-free unit tests
Master's thesis in Informatics Engineering, 2022, Universidade de Lisboa, Faculdade de Ciências

Automated test generation tools (such as EvoSuite) typically aim to maximize code
coverage. However, they frequently disregard non-coverage aspects that can be relevant
for testers, such as the quality of the generated tests. Therefore, automatically generated
tests are often affected by a set of test-specific bad programming practices that may hinder
the quality of both test and production code, i.e., test smells. Given that other researchers
have successfully integrated non-coverage quality metrics into EvoSuite, we decided to
extend the EvoSuite tool such that the generated test code is smell-free. To this aim, we
compiled 54 test smells from several sources and selected 16 smells that are relevant to the
context of this work. We then augmented the tool with the respective test smell metrics
and investigated the diffusion of the selected smells and the distribution of the metrics.
Finally, we implemented an approach to optimize the test smell metrics as secondary
criteria. After establishing the optimal configuration to optimize as secondary criteria
(which we used throughout the remainder of the study), we conducted an empirical study
to assess whether the tests became significantly less smelly. Furthermore, we studied
how the proposed metrics affect the fault detection effectiveness, coverage, and size of
the generated tests. Our study revealed that the proposed approach reduces the overall
smelliness of the generated tests; in particular, the diffusion of the “Indirect Testing” and
“Unrelated Assertions” smells improved considerably. Moreover, our approach improved
the smelliness of the tests generated by EvoSuite without compromising the code coverage
or fault detection effectiveness. The size and length of the generated tests were also not
affected by the new secondary criteria.
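To illustrate how a test smell can be turned into a numeric metric suitable for use as a secondary optimization criterion, here is a minimal sketch. The function name and the regex-based detection are hypothetical; the thesis integrates its actual smell metrics into EvoSuite's Java codebase, and real detection works on the AST rather than on source text:

```python
import re

def unrelated_assertion_ratio(test_source: str, target_ident: str) -> float:
    """Fraction of assertions whose arguments never mention the object
    under test -- a crude proxy for the 'Unrelated Assertions' smell.
    0.0 means smell-free; 1.0 means every assertion is unrelated."""
    assertions = re.findall(r"assert\w*\((.*)\);", test_source)
    if not assertions:
        return 0.0
    pattern = re.compile(rf"\b{re.escape(target_ident)}\b")
    unrelated = [a for a in assertions if not pattern.search(a)]
    return len(unrelated) / len(assertions)

# A generated-looking Java test: the second assertion inspects a helper
# class, not the Stack instance 's' under test.
java_test = """
    Stack s = new Stack();
    s.push(1);
    assertEquals(1, s.size());
    assertTrue(Helper.isEnabled());
"""
print(unrelated_assertion_ratio(java_test, "s"))  # 0.5
```

A search-based generator could minimize such a metric as a tie-breaker: among test candidates with equal coverage, prefer the one with the lower smell score.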
AD for optimization in electromagnetism applied to semi-analytical models combining composed functions
International audience
Application of Genetic Programming and Artificial Neural Network Approaches for Reconstruction of Turbulent Jet Flow Fields
Two Machine Learning (ML) methods are considered for the reconstruction of turbulent signals corresponding to the Large Eddy Simulation database obtained by application of the high-resolution CABARET method, accelerated on GPU cards, for flow solutions of NASA Small Hot Jet Acoustic Rig (SHJAR) jets. The first method is the Feedforward Neural Networks technique, which was successfully implemented for a turbulent flow over a plunging aerofoil in (Lui and Wolf, 2019). The second method is based on the application of Genetic Programming, which is well known in optimisation research but has not previously been applied to turbulent flow reconstruction. The reconstruction of local flow velocity and pressure signals, as well as time-dependent principal coefficients of the Spectral Proper Orthogonal Decomposition of turbulent pressure fluctuations, is considered. The stability of the ML algorithms and their dependency on the smoothness and sampling rate of the underlying turbulent flow signals are discussed.
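Genetic Programming for signal reconstruction evolves symbolic expression trees to fit a sampled signal. A minimal, self-contained sketch follows; a toy smooth signal stands in for the SHJAR turbulence data, and the operator set, population size, and mutation scheme are illustrative choices, not those of the paper:

```python
import math
import random

random.seed(0)

# Toy smooth "signal" standing in for a turbulent velocity trace.
ts = [i / 50.0 for i in range(100)]
signal = [math.sin(3 * t) + 0.5 * math.cos(7 * t) for t in ts]

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
TERMINALS = ["t", 1.0, 0.5, 3.0, 7.0]

def random_tree(depth=3):
    """Grow a random expression tree: terminals at the leaves, ops inside."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, t):
    if tree == "t":
        return t
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    v = OPS[op](evaluate(left, t), evaluate(right, t))
    return max(-1e6, min(1e6, v))  # clamp to avoid overflow in deep trees

def fitness(tree):
    # Mean squared error against the sampled target signal (lower is better).
    return sum((evaluate(tree, t) - y) ** 2 for t, y in zip(ts, signal)) / len(ts)

def mutate(tree):
    # Replace a random subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

pop = [random_tree() for _ in range(50)]
initial_mse = min(map(fitness, pop))
for _ in range(30):
    pop.sort(key=fitness)          # elitist selection: keep the 10 best
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]
best = min(pop, key=fitness)
print(f"MSE: {initial_mse:.3f} -> {fitness(best):.3f}")
```

Because the best individuals always survive into the next generation, the error is non-increasing over generations; real GP systems add crossover and parsimony pressure on top of this skeleton.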
Efficient Neural Network Verification Using Branch and Bound
Neural networks have demonstrated great success in modern machine learning systems. However, they remain susceptible to incorrect corner-case behaviors, often behaving unpredictably and producing surprisingly wrong results. It is therefore desirable to formally guarantee their trustworthiness with respect to certain robustness properties when they are applied in safety- or security-sensitive systems such as autonomous vehicles and aircraft. Unfortunately, the task is extremely challenging due to the complexity of neural networks, and traditional formal methods have not been efficient enough to verify practical properties. Recently, the Branch and Bound (BaB) framework has been extended to neural network verification and has shown great success in accelerating verification.
This dissertation focuses on state-of-the-art neural network verifiers using BaB. We first introduce two efficient neural network verifiers, ReluVal and Neurify, which use basic BaB approaches involving two main steps: (1) they recursively split the original verification problem into easier, independent subproblems by splitting input or hidden neurons; (2) for each subproblem, we propose an efficient and tight bound propagation method called symbolic interval analysis, which produces sound estimated output bounds using convex linear relaxations. Both ReluVal and Neurify are three orders of magnitude faster than previous state-of-the-art formal analysis systems on standard verification benchmarks.
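The role of bound propagation in step (2) can be illustrated with plain interval arithmetic, the naive baseline that symbolic interval analysis tightens by additionally tracking linear dependencies on the inputs. The tiny network below uses made-up weights, purely for illustration:

```python
import numpy as np

def interval_forward(weights, biases, lower, upper):
    """Propagate an input box [lower, upper] through a ReLU network,
    returning sound (if loose) element-wise output bounds.
    Naive interval arithmetic: split each weight matrix into its
    positive and negative parts so that lower bounds use the worst-case
    input corner and upper bounds use the best-case corner."""
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W = np.asarray(W, float)
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = pos @ lo + neg @ hi + b
        new_hi = pos @ hi + neg @ lo + b
        if i < len(weights) - 1:              # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Tiny 2-2-1 network (illustrative weights, not from the dissertation).
Ws = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
bs = [np.zeros(2), np.zeros(1)]
lo, hi = interval_forward(Ws, bs, [-1.0, -1.0], [1.0, 1.0])
print(lo, hi)  # output bounded by [0, 3]
```

Here the true output maximum over the input box is 2, but naive intervals report 3: the bounds are sound yet loose, which is exactly the gap that symbolic interval analysis and later linear-relaxation methods close.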
However, basic BaB approaches like Neurify have to formulate each subproblem as a Linear Programming (LP) problem and solve it with expensive LP solvers, significantly limiting overall efficiency. This is because each step of BaB introduces neuron split constraints (e.g., a ReLU neuron constrained to be larger or smaller than 0), which are hard for existing efficient bound propagation methods to handle. We propose novel bound propagation designs, α-CROWN and its improved variant β-CROWN, which solve the verification problem by optimizing Lagrangian multipliers via gradient ascent, without calling any expensive LP solvers. They build on the previous work CROWN, a generalized efficient bound propagation method using linear relaxation. BaB verification using α-CROWN and β-CROWN not only provides tighter output estimations than most bound propagation methods but can also fully leverage GPU acceleration with massive parallelization.
Combining our methods with BaB empowers the state-of-the-art verifier α,β-CROWN (alpha-beta-CROWN), the winning tool of the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score. Our α,β-CROWN can be three orders of magnitude faster than LP-solver-based BaB verifiers and is notably faster than all existing approaches on GPUs. Recently, we further generalized β-CROWN and proposed an efficient iterative approach that tightens all intermediate layer bounds under neuron split constraints, strengthening bound tightness without LP solvers. This new approach can greatly improve the efficiency of α,β-CROWN, especially on several challenging benchmarks.
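The linear relaxation underlying the CROWN family can be illustrated for a single unstable ReLU neuron with pre-activation bounds [l, u] where l < 0 < u. This is a generic sketch of the standard "triangle" relaxation, not the actual α,β-CROWN implementation; the free lower slope is the kind of per-neuron parameter that α-CROWN optimizes by gradient ascent:

```python
def relu_relaxation(l, u, alpha=0.5):
    """Linear bounds for y = ReLU(x) on x in [l, u] with l < 0 < u:
    lower: y >= alpha * x   (any slope alpha in [0, 1] is sound; this is
           the per-neuron parameter optimized by gradient ascent)
    upper: y <= u/(u-l) * (x - l)   (the 'triangle' upper line).
    Returns two (slope, intercept) pairs."""
    assert l < 0 < u
    up_slope = u / (u - l)
    up_bias = -up_slope * l
    return (alpha, 0.0), (up_slope, up_bias)

(lo_s, lo_b), (up_s, up_b) = relu_relaxation(-1.0, 3.0)
# Soundness check: the lines bracket ReLU at sample points in [l, u].
for x in (-1.0, 0.0, 1.5, 3.0):
    y = max(x, 0.0)
    assert lo_s * x + lo_b <= y <= up_s * x + up_b
print(up_s, up_b)  # 0.75 0.75
```

Propagating such (slope, intercept) pairs backward through every layer, instead of solving an LP per subproblem, is what makes these verifiers fast and GPU-friendly.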
Lastly, we study verifiable training, which incorporates verification properties into training procedures to enhance the verifiable robustness of trained models and to scale verification to larger models and datasets. We propose two general verifiable training frameworks: (1) MixTrain, which can significantly improve verifiable training efficiency and scalability, and (2) adaptive verifiable training, which can improve the verifiable robustness of trained models by accounting for label similarity. The combination of verifiable training and BaB-based verifiers opens promising directions for more efficient and scalable neural network verification.