7,577 research outputs found

    Evolve the Model Universe of a System Universe

    Uncertain, unpredictable, real-time, and lifelong evolution causes operational failures in intelligent software systems, leading to significant damage, safety and security hazards, and tragedies. To fully unleash the potential of such systems and facilitate their wider adoption, ensuring the trustworthiness of their decision making under uncertainty is the prime challenge. To overcome this challenge, an intelligent software system and its operating environment should be continuously monitored, tested, and refined during its lifetime operation. Existing technologies, such as digital twins, can enable continuous synchronisation with such systems to reflect their most up-to-date states. Such representations often take the form of prior-knowledge-based and machine learning models, together called the model universe. In this paper, we present our vision of combining techniques from software engineering, evolutionary computation, and machine learning to support the evolution of the model universe.

    VARIwise: a general-purpose adaptive control simulation framework for spatially and temporally varied irrigation at sub-field scale

    Irrigation control strategies may be used to improve the site-specific irrigation of cotton via lateral move and centre pivot irrigation machines. A simulation framework, ‘VARIwise’, has been created to aid the development, evaluation and management of spatially and temporally varied site-specific irrigation control strategies. VARIwise accommodates sub-field-scale variations in all input parameters using a 1 m² cell size, and permits application of differing control strategies within the field, as well as differing irrigation amounts down to this scale. In this paper, the motivation and objectives for the creation of VARIwise are discussed, the structure of the software is outlined, and an example of the use and utility of VARIwise is presented. Three irrigation control strategies have been simulated in VARIwise using a cotton model with a range of input parameters, including spatially variable soil properties, non-uniform irrigation application, three weather profiles and two crop varieties. The simulated yield and water use efficiency were affected by both the combination of input parameters and the control strategy implemented.
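    As a rough illustration of the per-cell control idea described above, the sketch below simulates one day of a simple deficit-based irrigation strategy over a grid of 1 m² cells. All names (Cell, deficit_strategy, simulate_day) and the water-balance details are invented for illustration; they are not VARIwise's actual API or cotton model.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Cell:
        soil_water: float  # plant-available water in the 1 m² cell (mm)
        capacity: float    # field capacity of the cell (mm)

    def deficit_strategy(cell: Cell, trigger: float = 0.5) -> float:
        """Refill a cell to capacity once its water drops below a trigger fraction."""
        if cell.soil_water < trigger * cell.capacity:
            return cell.capacity - cell.soil_water
        return 0.0

    def simulate_day(field: list[list[Cell]], et: float) -> float:
        """Apply one day of crop water use (et, in mm) and per-cell control;
        return the total irrigation applied across the field."""
        applied = 0.0
        for row in field:
            for cell in row:
                cell.soil_water = max(cell.soil_water - et, 0.0)
                dose = deficit_strategy(cell)
                cell.soil_water += dose
                applied += dose
        return applied
    ```

    Because each cell carries its own state and strategy parameters, spatially variable soils and non-uniform application reduce to per-cell differences in these inputs.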

    Towards Quantum Software Requirements Engineering

    Quantum software engineering (QSE) is receiving increasing attention, as evidenced by a growing number of publications on topics such as quantum software modeling, testing, and debugging. However, quantum software requirements engineering (QSRE) remains a relatively under-investigated area in the literature. To this end, in this paper, we provide an initial set of thoughts on how requirements engineering for quantum software might differ from that for classical software, after making an effort to map classical requirements classifications (e.g., functional and extra-functional requirements) into the context of quantum software. Moreover, we discuss various aspects of QSRE that deserve attention from the quantum software engineering community.

    Multi-Objective Search-Based Software Microbenchmark Prioritization

    Ensuring that software performance does not degrade after a code change is paramount. A potential solution, particularly for libraries and frameworks, is regularly executing software microbenchmarks, a performance testing technique similar to (functional) unit tests. However, this often becomes infeasible due to the extensive runtimes of microbenchmark suites. To address this challenge, research has investigated regression testing techniques, such as test case prioritization (TCP), which reorder the execution within a microbenchmark suite to detect larger performance changes sooner. Such techniques are either designed for unit tests and perform sub-par on microbenchmarks, or require complex performance models, drastically reducing their potential application. In this paper, we propose a search-based technique based on multi-objective evolutionary algorithms (MOEAs) to improve the current state of microbenchmark prioritization. The technique utilizes three objectives: coverage, to maximize; coverage overlap, to minimize; and historical performance change detection, to maximize. We find that our technique improves over the best coverage-based, greedy baselines in terms of average percentage of fault-detection on performance (APFD-P) and Top-3 effectiveness by 26 percentage points (pp) and 43 pp (for Additional) and by 17 pp and 32 pp (for Total), reaching 0.77 and 0.24, respectively. Employing the Indicator-Based Evolutionary Algorithm (IBEA) as the MOEA leads to the best effectiveness among the six MOEAs studied. Finally, the technique's runtime overhead is acceptable at 19% of the overall benchmark suite runtime, considering that these runtimes often span multiple hours; the added overhead compared to the greedy baselines is minuscule at 1%. These results mark a step forward for universally applicable performance regression testing techniques.
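    To make the three objectives concrete, here is a minimal sketch of how a fitness function over a benchmark ordering might score coverage (to maximize), coverage overlap (to minimize), and historical performance-change detection (to maximize). The rank weighting and all helper names are assumptions for illustration, not the paper's implementation.

    ```python
    def fitness(order: list[str],
                coverage: dict[str, set[str]],
                history: dict[str, float]) -> tuple[float, float, float]:
        """Score a benchmark ordering on three objectives; earlier
        positions are weighted more heavily than later ones."""
        covered: set[str] = set()
        total_cov = overlap = change_score = 0.0
        n = len(order)
        for rank, bench in enumerate(order):
            weight = (n - rank) / n                       # earlier counts more
            overlap += weight * len(coverage[bench] & covered)
            total_cov += weight * len(coverage[bench] - covered)
            covered |= coverage[bench]
            change_score += weight * history.get(bench, 0.0)
        # A MOEA would maximize total_cov and change_score, minimize overlap.
        return total_cov, overlap, change_score
    ```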

    Digital Twin-based Anomaly Detection with Curriculum Learning in Cyber-physical Systems

    Anomaly detection is critical to ensure the security of cyber-physical systems (CPS). However, due to the increasing complexity of attacks and of CPS themselves, anomaly detection in CPS is becoming increasingly challenging. In our previous work, we proposed a digital twin-based anomaly detection method, called ATTAIN, which takes advantage of both historical and real-time data of a CPS. However, such data vary significantly in difficulty. Therefore, similar to human learning processes, deep learning models (e.g., ATTAIN) can benefit from an easy-to-difficult curriculum. To this end, in this paper, we present a novel approach, named digitaL twin-based Anomaly deTecTion wIth Curriculum lEarning (LATTICE), which extends ATTAIN by introducing curriculum learning to optimize its learning paradigm. LATTICE assigns each sample a difficulty score before it is fed into a training scheduler. The training scheduler samples batches of training data based on these difficulty scores, such that learning proceeds from easy to difficult data. To evaluate LATTICE, we use five publicly available datasets collected from five real-world CPS testbeds. We compare LATTICE with ATTAIN and two other state-of-the-art anomaly detectors. Evaluation results show that LATTICE outperforms all three baselines by 0.906%-2.367% in terms of F1 score. On average, LATTICE also reduces the training time of ATTAIN by 4.2% on the five datasets, and it is on par with the baselines in terms of detection delay time.
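    The core scheduling idea, sampling easy examples first and widening the pool toward harder ones, can be sketched in a few lines. The linear pacing function and all names below are assumptions for illustration, not LATTICE's actual scheduler.

    ```python
    import random

    def curriculum_batches(samples, difficulty, epochs, batch_size=32):
        """Yield (epoch, batch) pairs, drawing from an easy-to-difficult
        pool that grows linearly with the epoch number."""
        ranked = sorted(samples, key=difficulty)   # easy -> difficult
        for epoch in range(1, epochs + 1):
            frac = epoch / epochs                  # linear pacing function
            pool = ranked[: max(batch_size, int(frac * len(ranked)))]
            random.shuffle(pool)                   # shuffle within eligible pool
            for i in range(0, len(pool), batch_size):
                yield epoch, pool[i : i + batch_size]
    ```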

    Uncertainty-wise Test Case Generation and Minimization for Cyber-Physical Systems

    Cyber-Physical Systems (CPSs) typically operate in highly indeterminate environmental conditions, which require the development of testing methods that must explicitly consider uncertainty in test design, test generation, and test optimization. Towards this direction, we propose a set of uncertainty-wise test case generation and test case minimization strategies that rely on test-ready models explicitly specifying subjective uncertainty. We propose two test case generation strategies and four test case minimization strategies based on the Uncertainty Theory and multi-objective search. These strategies include a novel methodology for designing and introducing indeterminacy sources in the environment during test execution and a novel set of uncertainty-wise test verdicts. We performed an extensive empirical study to select the best algorithm out of eight commonly used multi-objective search algorithms, for each of the four minimization strategies, with five use cases of two industrial CPS case studies. The minimized set of test cases obtained with the best algorithm for each minimization strategy was executed on the two real CPSs. The results showed that our best test strategy managed to observe 51% more uncertainties due to unknown indeterminate behaviors of the physical environments of the CPSs as compared to the other test strategies. Also, the same test strategy managed to observe 118% more unknown uncertainties as compared to the unique number of known uncertainties.
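    One plausible way to phrase such a minimization as a multi-objective search problem is sketched below: minimize the number of tests and their execution time while maximizing the uncertainty space they cover. The concrete objective definitions are illustrative assumptions, not the paper's formulation.

    ```python
    def minimization_objectives(subset, uncertainty_space, exec_time):
        """Return (num_tests, total_time, -covered_uncertainties) for a
        candidate subset of test case ids; a MOEA minimizes all three."""
        covered = set()
        for t in subset:
            covered |= uncertainty_space[t]      # uncertainties test t can observe
        return (len(subset),
                sum(exec_time[t] for t in subset),
                -len(covered))                   # negated so smaller is better
    ```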

    EpiTESTER: Testing Autonomous Vehicles with Epigenetic Algorithm and Attention Mechanism

    Testing autonomous vehicles (AVs) under various environmental scenarios that lead the vehicles into unsafe situations is known to be challenging. Given the infinite possible environmental scenarios, it is essential to find critical scenarios efficiently. To this end, we propose a novel testing method, named EpiTESTER, that takes inspiration from epigenetics, which enables species to adapt to sudden environmental changes. In particular, EpiTESTER adopts gene silencing as its epigenetic mechanism, which regulates gene expression to prevent the expression of a certain gene, where the probability of gene expression is dynamically computed as the environment changes. Given the different data modalities (e.g., images, lidar point clouds) in the context of AVs, EpiTESTER benefits from a multi-model fusion transformer to extract high-level feature representations from environmental factors, and then calculates gene expression probabilities from these features with an attention mechanism. To assess the cost-effectiveness of EpiTESTER, we compare it with a classical genetic algorithm (GA) (i.e., without any epigenetic mechanism implemented) and with a variant of EpiTESTER that assigns equal probability to each gene. We evaluate EpiTESTER in four initial environments from CARLA, an open-source simulator for autonomous driving research, with an end-to-end AV controller, Interfuser. Our results show that EpiTESTER achieves promising performance in identifying critical scenarios compared to the baselines, showing that applying epigenetic mechanisms is a good option for solving practical problems.
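    A toy version of the gene-silencing operator can clarify the mechanism: each gene carries an expression probability (computed in EpiTESTER by the attention-based fusion model; here simply given), and silenced genes pass through variation unchanged. Everything below is an illustrative assumption, not EpiTESTER's code.

    ```python
    import random

    def express(genome, expr_prob, sigma=0.1):
        """Mutate only the genes that are 'expressed'; silenced genes are
        kept as-is. expr_prob[i] is gene i's expression probability."""
        child = []
        for gene, p in zip(genome, expr_prob):
            if random.random() < p:                  # expressed: free to vary
                child.append(gene + random.gauss(0.0, sigma))
            else:                                    # silenced: unchanged
                child.append(gene)
        return child
    ```

    With all probabilities fixed at the same value this degenerates into the equal-probability baseline the abstract compares against.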

    Uncertainty-Aware Prediction Validator in Deep Learning Models for Cyber-Physical System Data

    The use of deep learning in Cyber-Physical Systems (CPSs) is gaining popularity due to its ability to bring intelligence to CPS behaviors. However, both CPSs and deep learning have inherent uncertainty. Such uncertainty, if not handled adequately, can lead to unsafe CPS behavior. The first step toward addressing such uncertainty in deep learning is to quantify it. Hence, we propose a novel method called NIRVANA (uNcertaInty pRediction ValidAtor iN Ai) for prediction validation based on uncertainty metrics. To this end, we first employ prediction-time dropout-based neural networks to quantify uncertainty in deep learning models applied to CPS data. Second, such quantified uncertainty is taken as input to predict wrong labels using a support vector machine, with the aim of building a highly discriminating prediction validator model based on uncertainty values. In addition, we investigated the relationship between uncertainty quantification and prediction performance and conducted experiments to obtain optimal dropout ratios. We conducted all experiments with four real-world CPS datasets. Results show that uncertainty quantification is negatively correlated with the prediction performance of a deep learning model on CPS data. Also, our dropout ratio adjustment approach is effective in reducing the uncertainty of correct predictions while increasing the uncertainty of wrong predictions.
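    As described, the pipeline has two stages: quantify uncertainty with prediction-time (Monte Carlo) dropout, then train an SVM on those uncertainty values to flag likely-wrong predictions. The sketch below assumes a dropout-enabled model exposed as mc_predict; that stand-in and the two uncertainty metrics chosen are assumptions, not NIRVANA's exact design.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def uncertainty_features(mc_predict, x, passes=30):
        """Run `passes` stochastic forward passes (dropout active) and derive
        per-input uncertainty metrics from the spread of the predictions."""
        probs = np.stack([mc_predict(x) for _ in range(passes)])  # (passes, n, classes)
        mean = probs.mean(axis=0)
        variance = probs.var(axis=0).sum(axis=1)                  # predictive variance
        entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)      # predictive entropy
        return np.column_stack([variance, entropy])

    def train_validator(features, wrong):
        """wrong[i] = 1 if the deep model mislabeled sample i, else 0."""
        return SVC(kernel="rbf").fit(features, wrong)
    ```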