
    Enhancing adaptive random testing for programs with high dimensional input domains or failure-unrelated parameters

    Adaptive random testing (ART), an enhancement of random testing (RT), aims to both randomly select and evenly spread test cases. Recently, it has been observed that the effectiveness of some ART algorithms may deteriorate as the number of program input parameters (dimensionality) increases. In this article, we analyse various problems of one ART algorithm, namely fixed-sized-candidate-set ART (FSCS-ART), in the high-dimensional input domain setting, and study how FSCS-ART can be further enhanced to address these problems. We propose adding an input-filtering process to FSCS-ART to achieve a more even spread of test cases and better failure-detection effectiveness in high-dimensional space. Our study shows that this solution, termed FSCS-ART-FE, improves on FSCS-ART not only in high-dimensional spaces but also in the presence of failure-unrelated parameters. Both cases are common in real-life programs. Therefore, we recommend using FSCS-ART-FE instead of FSCS-ART whenever possible. Other ART algorithms may face similar problems to FSCS-ART; hence our study also offers insight into improving other ART algorithms in high-dimensional space.
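    The core FSCS-ART selection loop described above can be sketched in a few lines (a minimal illustration, not the paper's implementation; the FE filtering step the article proposes is omitted, and all names and parameters are illustrative):

```python
import math
import random

def fscs_art_next(executed, k=10, dim=2, rng=random):
    """Fixed-size-candidate-set ART: generate k random candidates in
    the unit hypercube and keep the one whose nearest already-executed
    test case is farthest away, spreading tests evenly."""
    candidates = [[rng.random() for _ in range(dim)] for _ in range(k)]
    if not executed:
        return candidates[0]
    def min_dist(c):
        # Distance from candidate c to its closest executed test case.
        return min(math.dist(c, e) for e in executed)
    return max(candidates, key=min_dist)

# Usage: spread 5 test cases over a 2-D unit input domain.
random.seed(0)
tests = []
for _ in range(5):
    tests.append(fscs_art_next(tests))
```

    Note that each selection costs O(k * |executed|) distance computations, which is the overhead the MART line of work (next abstract) tries to reduce.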

    Enhancing mirror adaptive random testing through dynamic partitioning

    Context: Adaptive random testing (ART), originally proposed as an enhancement of random testing, is often criticized for the high computation overhead of many ART algorithms. Mirror ART (MART) is a novel approach that can be generally applied to improve the efficiency of various ART algorithms, based on a combination of "divide-and-conquer" and "heuristic" strategies. Objective: The computation overhead of existing MART methods is actually of the same order of magnitude as that of the original ART algorithms. In this paper, we aim to further reduce the order of computation overhead of MART. Method: We conjecture that the mirroring scheme in MART should be dynamic rather than static to deliver higher efficiency. We thus propose a new approach, namely dynamic mirror ART (DMART), which incrementally partitions the input domain and adopts new mirror functions. Results: Our simulations demonstrate that the new DMART approach delivers failure-detection effectiveness comparable to the original MART and ART algorithms while incurring much lower computation overhead. The experimental studies further show that the new approach also delivers better and more reliable performance on programs with failure-unrelated parameters. Conclusion: In general, DMART is much more cost-effective than MART. Since its mirroring scheme is independent of the concrete ART algorithm, DMART can be generally applied to improve the cost-effectiveness of various ART algorithms.
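    The mirroring idea can be illustrated with a minimal sketch: run ART only in one source subdomain, then map each selected test case into the other subdomains with a cheap mirror function. The translation function and strip partitioning below are assumptions for illustration, not MART's or DMART's actual scheme:

```python
def mirror_translate(test, subdomain_index, m):
    """Translate a test case from the source subdomain
    [0, 1/m) x [0, 1) into subdomain i by shifting its first
    coordinate (a simple translation mirror function; reflection
    is another common choice)."""
    x, y = test
    return (x + subdomain_index / m, y)

# Mirror one ART-generated source test case into all m = 4 strips,
# so only 1 expensive ART selection is needed per m test cases.
m = 4
source = (0.1, 0.5)  # assumed to come from ART inside [0, 1/4) x [0, 1)
mirrored = [mirror_translate(source, i, m) for i in range(m)]
```

    This is why MART reduces overhead: distance computations happen only in the source subdomain, while the other m - 1 test cases are produced in constant time each.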

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    Hybridizing 3-dimensional multiple object tracking with neurofeedback to enhance preparation, performance, and learning

    The domain of cognitive enhancement is vast, spanning behavioral, biochemical, and physical applications. The techniques are as numerous as their limitations: poorly conducted studies, ethically ambiguous practices, limited positive effects, significant side effects, high financial costs, significant time investment, unequal accessibility, and lack of transfer. The purpose of this thesis is to propose a novel way of integrating one of these techniques, neurofeedback, directly into a learning context in order to enhance cognitive performance and learning. This thesis provides the framework, empirical foundations, and supporting evidence for a highly efficient 'closed-loop' learning paradigm. By manipulating task difficulty based on a measure of cognitive load within a classic learning scenario (3-dimensional multiple object tracking) using real-time brain activity, results demonstrate that over 10 sessions, speed and degree of learning can be substantially improved compared with a classic learning system or an active sham-control group. Superior performance persists even once the feedback signal is removed, which suggests that the effects of enhanced training are consolidated and do not rely on continued feedback.
    Next, this thesis examines how these effects occur, exploring the neural correlates of the states of preparedness and performance across baseline and task conditions, further examining correlates related to trial results (correct/incorrect) and task difficulty (slow/medium/fast speeds). Cognitive preparedness, performance, and load are measured using well-established relationships in real-time quantified brain activity as measured by quantitative electroencephalography. It is shown that neurofeedback-based task assistance driven by peak alpha frequency is appropriate to task conditions and manages to influence cognitive load, keeping the subject in the zone of proximal development more often, facilitating learning and improving performance. This type of learning paradigm could contribute to overcoming at least one of the fundamental limitations of neurofeedback and other cognitive enhancement techniques, a lack of observable transfer effects, by utilizing a method that can be directly integrated into the context in which improved performance is sought.
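    A closed-loop difficulty rule of the kind described can be sketched very roughly as follows. The thesis drives task speed from EEG-derived measures such as peak alpha frequency; the load measure, thresholds, and step size below are invented purely for illustration:

```python
def adjust_speed(speed, load, low=0.4, high=0.7, step=0.1):
    """Illustrative closed-loop difficulty rule (not the thesis's
    actual controller): raise tracking speed when measured cognitive
    load is low, lower it when load is high, hold otherwise, keeping
    the learner near the edge of their capacity."""
    if load < low:
        return speed + step          # under-challenged: speed up
    if load > high:
        return max(step, speed - step)  # overloaded: slow down
    return speed                     # in the productive zone: hold

# One simulated session: load readings drive speed up, then back down.
speed = 1.0
for load in [0.2, 0.3, 0.8, 0.5]:
    speed = adjust_speed(speed, load)
```

    The point of the closed loop is exactly this feedback path: task difficulty is a function of the learner's measured state rather than a fixed staircase schedule.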

    Service innovation in an evolutionary perspective


    Metamorphic Testing for Software Libraries and Graphics Compilers

    Metamorphic testing is a technique that mutates existing test cases into semantically equivalent forms by making use of metamorphic relations, thereby avoiding the oracle problem. However, the required relations are not readily available for a given system under test. Defining effective metamorphic relations is difficult, and arguably the main obstacle to the adoption of metamorphic testing in production-level software development. One example application is testing graphics compilers, where the approximate and under-specified nature of the domain makes it hard to apply more traditional techniques. We propose an approach with a lower barrier of entry to applying metamorphic testing to a software library. The user must still identify relations that hold over their particular library, but can do so within a development-like environment. We apply methods from the domains of metamorphic testing and fuzzing to produce complex test cases. We consider the user interaction a bonus, as users can control which parts of the target codebase are tested, potentially focusing on less-tested or critical sections of the codebase. We implement our proposed approach in a tool, MF++, which synthesises C++ test cases for a C++ library, defined by user-provided ingredients. We applied MF++ to 7 libraries in the domains of satisfiability modulo theories and Presburger arithmetic. Our evaluation of MF++ identified 21 bugs in these tools. We additionally provide an automatic reducer for tests generated by MF++, named MF++R. In addition to minimising tests that expose issues, MF++R can also be used to identify incorrect user-provided relations. Additionally, we investigate the combined use of MF++ and MF++R to augment the code coverage of library test suites. We assess the utility of this application by contributing 21 tests aimed at improving coverage across 3 libraries.
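    The idea of a metamorphic relation can be shown in a few lines. These are classic textbook relations, not MF++'s user-provided C++ ingredients, and the function names are hypothetical:

```python
import math
import random

def mr_sine_supplement(x):
    """Metamorphic relation for sin: sin(x) should equal sin(pi - x).
    This sidesteps the oracle problem: we never need the 'true' value
    of sin(x), only that two related outputs agree."""
    return math.isclose(math.sin(x), math.sin(math.pi - x), abs_tol=1e-12)

def mr_sort_permutation(xs, permuted):
    """Metamorphic relation for sorting: any permutation of the input
    must sort to the same output."""
    return sorted(xs) == sorted(permuted)

# Mutate a test input into a semantically equivalent form and compare.
xs = [3, 1, 2, 5, 4]
ys = xs[:]
random.shuffle(ys)
ok = mr_sine_supplement(0.7) and mr_sort_permutation(xs, ys)
```

    A fuzzer in the spirit of MF++ would generate many such mutated inputs automatically and flag any pair for which the user-supplied relation fails.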

    Management And Security Of Multi-Cloud Applications

    Single-cloud management platform technology has reached maturity and is quite successful in information technology applications. Enterprises and application service providers are increasingly adopting a multi-cloud strategy to reduce the risk of cloud service provider lock-in and cloud blackouts and, at the same time, gain benefits such as competitive pricing, flexibility of resource provisioning, and better points of presence. Another class of applications in which cloud service providers are increasingly interested is carriers' virtualized network services. However, virtualized carrier services require high levels of availability and performance and impose stringent requirements on cloud services. They necessitate the use of multi-cloud management and innovative techniques for placement and performance management. We consider two classes of distributed applications, virtual network services and the next generation of healthcare, that would benefit immensely from deployment over multiple clouds. This thesis deals with the design and development of new processes and algorithms to enable these classes of applications. We have evolved a method for the optimization of multi-cloud platforms that paves the way for obtaining optimized placement for both classes of services. The placement approach we have followed is predictive, cost-optimized, latency-controlled virtual resource placement for both types of applications. To improve the availability of virtual network services, we have made innovative use of machine and deep learning to develop a framework for fault detection and localization. Finally, to secure patient data flowing through the wide expanse of sensors, the cloud hierarchy, the virtualized network, and the visualization domain, we have evolved hierarchical autoencoder models for data in motion between the IoT domain and the multi-cloud domain and within the multi-cloud hierarchy.
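    A greatly simplified sketch of cost-optimized, latency-controlled placement follows (a greedy stand-in for the thesis's predictive optimizer; the cloud names, costs, latencies, and service names are made up for illustration):

```python
def place(services, clouds, latency_budget):
    """Greedy placement sketch: for each service, pick the cheapest
    cloud whose predicted latency stays within the budget. A real
    optimizer would also predict demand and solve jointly."""
    placement = {}
    for svc in services:
        feasible = [c for c in clouds if c["latency_ms"] <= latency_budget]
        if not feasible:
            raise ValueError(f"no cloud meets {latency_budget} ms for {svc}")
        placement[svc] = min(feasible, key=lambda c: c["cost"])["name"]
    return placement

clouds = [
    {"name": "cloud-a", "cost": 5, "latency_ms": 40},
    {"name": "cloud-b", "cost": 3, "latency_ms": 90},
    {"name": "cloud-c", "cost": 8, "latency_ms": 20},
]
# cloud-b is cheapest but misses the 50 ms bound, so cloud-a wins.
plan = place(["vFirewall", "vRouter"], clouds, latency_budget=50)
# plan -> {'vFirewall': 'cloud-a', 'vRouter': 'cloud-a'}
```

    The latency constraint acts as a hard filter while cost is the objective, which mirrors the "latency-controlled, cost-optimized" ordering of concerns stated in the abstract.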