23,367 research outputs found

    The ABACOC Algorithm: a Novel Approach for Nonparametric Classification of Data Streams

    Full text link
    Stream mining poses unique challenges to machine learning: predictive models must be scalable, incrementally trainable, bounded in size (even when the data stream is arbitrarily long), and nonparametric in order to achieve high accuracy even in complex and dynamic environments. Moreover, the learning system must be parameterless (traditional tuning methods are problematic in streaming settings) and must not require prior knowledge of the number of distinct class labels occurring in the stream. In this paper, we introduce a new algorithmic approach for nonparametric learning in data streams. Our approach addresses all of the above-mentioned challenges by learning a model that covers the input space using simple local classifiers. The distribution of these classifiers dynamically adapts to the local (unknown) complexity of the classification problem, thus achieving a good balance between model complexity and predictive accuracy. We design four variants of our approach of increasing adaptivity. By means of an extensive empirical evaluation against standard nonparametric baselines, we show state-of-the-art results in terms of accuracy versus model size. For the variant that imposes a strict bound on the model size, we show better performance than all other methods measured at the same model size. Our empirical analysis is complemented by a theoretical performance guarantee which does not rely on any stochastic assumption on the source generating the stream.
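    The abstract gives no pseudocode, but the core idea (covering the input space with simple local classifiers whose placement adapts to the stream) can be sketched in a few lines. The toy class below is only an illustration under assumed design choices: the fixed radius, the per-class count voting, and the least-used eviction rule are placeholders, not the ABACOC variants from the paper.

        import numpy as np

        class LocalBallClassifier:
            """Toy sketch of a bounded-size stream learner built from local classifiers.

            Each "ball" stores a center and per-class counts; prediction uses the
            nearest ball. New balls are spawned on mistakes, and the model stays
            bounded by evicting the least-used ball. All thresholds are illustrative.
            """

            def __init__(self, max_balls=100, radius=1.0):
                self.max_balls = max_balls
                self.radius = radius
                self.centers = []   # ball centers (np.ndarray)
                self.counts = []    # per-ball dict: label -> count
                self.usage = []     # how often each ball was the nearest match

            def _nearest(self, x):
                if not self.centers:
                    return None, float("inf")
                dists = [np.linalg.norm(x - c) for c in self.centers]
                i = int(np.argmin(dists))
                return i, dists[i]

            def predict(self, x):
                i, _ = self._nearest(np.asarray(x, dtype=float))
                return None if i is None else max(self.counts[i], key=self.counts[i].get)

            def partial_fit(self, x, y):
                x = np.asarray(x, dtype=float)
                y_hat = self.predict(x)
                i, dist = self._nearest(x)
                if i is not None and dist <= self.radius:
                    # point lies inside an existing ball: update its class counts
                    self.counts[i][y] = self.counts[i].get(y, 0) + 1
                    self.usage[i] += 1
                elif y_hat != y:
                    # mistake outside every ball: spawn a new local classifier,
                    # evicting the least-used ball if the size bound is reached
                    if len(self.centers) >= self.max_balls:
                        j = int(np.argmin(self.usage))
                        for lst in (self.centers, self.counts, self.usage):
                            del lst[j]
                    self.centers.append(x)
                    self.counts.append({y: 1})
                    self.usage.append(1)

    Note that unseen class labels are handled as they appear, since a new ball simply starts a count for the new label; no prior knowledge of the number of classes is needed, which mirrors the requirement stated in the abstract.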

    Adapting Quality Assurance to Adaptive Systems: The Scenario Coevolution Paradigm

    Full text link
    From formal and practical analysis, we identify new challenges that self-adaptive systems pose to the process of quality assurance. When tackling these, the effort spent on various tasks in the process of software engineering is naturally re-distributed. We claim that all steps related to testing need to become self-adaptive to match the capabilities of the self-adaptive system-under-test. Otherwise, the adaptive system's behavior might elude traditional variants of quality assurance. We thus propose the paradigm of scenario coevolution, which describes a pool of test cases and other constraints on system behavior that evolves in parallel to the (in part autonomous) development of behavior in the system-under-test. Scenario coevolution offers a simple structure for the organization of adaptive testing that allows for both human-controlled and autonomous intervention, supporting software engineering for adaptive systems on a procedural as well as technical level. Comment: 17 pages, published at ISOLA 201
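    As a rough illustration of the coevolution loop (not the paper's formal paradigm), the sketch below evolves a pool of test scenarios in lockstep with an adaptive system under test. The interfaces system.adapt, system.run and scenario.mutate are hypothetical and introduced only for this example.

        import random

        def coevolve(system, scenarios, generations=50, pool_size=20):
            """Sketch of scenario coevolution: the scenario pool and the adaptive
            system under test evolve in parallel. Interfaces are hypothetical:
            system.adapt(pool), system.run(scenario) -> score (lower means the
            scenario still exposes weak behavior), scenario.mutate() -> new scenario.
            """
            pool = list(scenarios)[:pool_size]
            for _ in range(generations):
                # 1. Let the system under test adapt to the current scenario pool.
                system.adapt(pool)

                # 2. Rank scenarios: those the system handles worst remain the most
                #    valuable constraints on its behavior and are kept.
                ranked = sorted(pool, key=system.run)
                survivors = ranked[: pool_size // 2]

                # 3. Refill the pool with mutated variants of the survivors; a human
                #    tester could also inject hand-written scenarios at this point.
                offspring = [random.choice(survivors).mutate()
                             for _ in range(pool_size - len(survivors))]
                pool = survivors + offspring
            return pool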

    Softer perspectives on enhancing the patient experience using IS/IT

    Get PDF
    Purpose – This paper aims to argue that the implementation of the Choose and Book system has failed due to the inability of project sponsors to appreciate the complex and far-reaching softer implications of the implementation, especially in a complex organisation such as the NHS, which has multifarious stakeholders. Design/methodology/approach – The authors use practice-oriented research to try to isolate key parameters. These parameters are compared with existing conventional thinking in a number of focused areas. Findings – Like many previous NHS initiatives, the focus of this system is its obvious link to patients. However, we find that although this project has cultural, social and organisational implications, programme managers and champions of the Connecting for Health programme emphasised the technical domains of IS/IT adoption. Research limitations/implications – This paper has been written in advance of a fully implemented Choose and Book system. Practical implications – The paper requests that more attention be paid to the softer side of IS/IT delivery, implementation, introduction and adoption. Originality/value – The paper shows that patient experience within the UK healthcare sector is still well below what is desired.

    The 1990 progress report and future plans

    Get PDF
    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.

    Principles in Patterns (PiP) : Institutional Approaches to Curriculum Design Institutional Story

    Get PDF
    The principal outputs of the PiP Project surround the Course and Class Approval (C-CAP) system. This web-based system, built on Microsoft SharePoint, addresses and resolves many of the issues identified by the project. Generally well received by both academic and support staff, the system provides personalised views, adaptive forms and contextualised support for all phases of the approval process. Although the system deliberately encapsulates and facilitates existing approval processes, thus achieving buy-in, it is already achieving significant improvements over the previous processes, not only in reducing administrative overheads but also in supporting curriculum design and academic quality. The system is now embedded across three faculties and is considered by the University of Strathclyde to be a "core institutional service". Alongside the C-CAP system, the PiP Project also cultivated a suite of approaches: an incremental systems development methodology; a structured and replicable evaluation approach; and Strathclyde's Lean Approach to Efficiencies in Education Kit (SLEEK) business process improvement methodology. Each is based on recognised formal techniques, providing the basis for a rigorous approach. This is contextualised within and adapted to the HE institutional context, thus building the foundation not only for the project but ultimately for institution-wide process improvement. This "institutional story" report summarises the principal outcomes of the Project.

    Modellbasiertes Regressionstesten von Varianten und Variantenversionen

    Get PDF
    The quality assurance of software product lines (SPL) achieved via testing is a crucial and challenging activity of SPL engineering. In general, the application of single-software testing techniques for SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes by means of redundant test-case executions due to the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality assurance also has to be ensured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy as well as evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection reduces the number of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it reduces the overall effort of testing evolving SPLs.
    Testing is an important part of software product line (SPL) development. Because of the potentially very large number of variants of an SPL, testing each variant individually is generally impractical and additionally results in redundant test-case executions caused by the commonality shared between variants. Existing SPL testing approaches address these challenges, e.g., by reducing the number of variants to be tested. However, each variant is still tested independently, without exploiting the knowledge about commonality and variability to reduce the testing effort. Furthermore, SPL development has to deal with software evolution. This poses additional challenges for SPL testing, since quality must be assured not only for variants but also for their versions.
    In this thesis, we present a framework for model-based regression testing of evolving SPLs that addresses the challenges of redundant testing and software evolution. The framework combines test modeling, change impact analysis, and automated test-case selection to define an incremental testing process that efficiently tests variants and versions of variants by exploiting knowledge about shared functionality and the reuse potential of test artifacts and test results. For test modeling, we develop an approach that incorporates both the variability and the version information of evolving SPLs. For change impact analysis, we define two techniques: one to identify changed execution dependencies between the variants and variant versions to be tested, and one to determine and classify the impact of an evolution step on the set of variants. For test-case selection, we propose a coverage criterion that incorporates the results of the impact analysis in order to make automated decisions about re-executing reusable test cases. The coverage-driven test-case selection thus reduces redundant test-case executions during the incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated on three evolving SPLs. The results show that the effort of testing evolving SPLs is reduced.
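    A minimal sketch of the change-impact-guided retest selection described above might look as follows. The data structures (per-test coverage sets and a set of changed elements reported by the impact analysis) are simplifications introduced for illustration and do not reproduce the thesis's test-modeling formalism or coverage criterion.

        def select_retests(test_coverage, changed, previously_passed):
            """Decide which test cases to re-execute for a new version of a variant.

            test_coverage:      dict test_id -> set of covered elements (e.g. the
                                execution dependencies exercised by the test)
            changed:            set of elements reported as modified by the change
                                impact analysis for this evolution step
            previously_passed:  set of test_ids whose earlier verdicts exist

            Returns (retest, reuse): tests covering at least one changed element
            must be re-executed; tests touching nothing that changed can have
            their previous results reused.
            """
            retest = {t for t, cov in test_coverage.items() if cov & changed}
            reuse = (set(test_coverage) - retest) & previously_passed
            return retest, reuse

        # Example: only t2 exercises a changed dependency, so only t2 is re-run.
        coverage = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"d"}}
        retest, reuse = select_retests(coverage, changed={"c"},
                                       previously_passed={"t1", "t2", "t3"})
        # retest == {"t2"}, reuse == {"t1", "t3"}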

    Structural testing techniques for the selective revalidation of software

    Get PDF
    The research in this thesis addresses the subject of regression testing. Emphasis is placed on developing a technique for selective revalidation which can be used during software maintenance to analyse and retest only those parts of the program affected by changes. In response to proposed program modifications, the technique assists the maintenance programmer in assessing the extent of the program alterations, in selecting a representative set of test cases to rerun, and in identifying any test cases in the test suite which are no longer required because of the program changes. The proposed technique involves the application of code analysis techniques and operations research. Code analysis techniques are described which derive information about the structure of a program and are used to determine the impact of any modifications on the existing program code. Methods adopted from operations research are then used to select an optimal set of regression tests and to identify any redundant test cases. These methods enable software which has been validated using a variety of structural testing techniques to be retested. The development of a prototype tool suite, which can be used to realise the technique for selective revalidation, is described. In particular, the interface between the prototype and existing regression testing tools is discussed. Moreover, the effectiveness of the technique is demonstrated by means of a case study, and the results are compared with traditional regression testing strategies and other selective revalidation techniques described in this thesis.
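    The abstract mentions methods adopted from operations research for selecting an optimal set of regression tests and flagging redundant ones; one common formulation of that selection step is a set-cover style optimisation. The greedy sketch below is an illustrative simplification under that assumption, not necessarily the technique developed in the thesis.

        def greedy_select(test_coverage, affected):
            """Greedily pick regression tests until every program element affected
            by the proposed modification is covered (set-cover heuristic).

            test_coverage: dict test_id -> set of program elements it exercises
            affected:      set of elements impacted by the change analysis
            """
            uncovered = set(affected)
            selected = []
            while uncovered and test_coverage:
                # choose the test covering the most still-uncovered affected elements
                best = max(test_coverage, key=lambda t: len(test_coverage[t] & uncovered))
                gain = test_coverage[best] & uncovered
                if not gain:
                    break  # remaining affected elements are covered by no test
                selected.append(best)
                uncovered -= gain
            # tests exercising no affected element are candidates for omission
            redundant = [t for t in test_coverage if not (test_coverage[t] & affected)]
            return selected, redundant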