3 research outputs found

    How Simulation can Illuminate Pedagogical and System Design Issues in Dynamic Open Ended Learning Environments

    A Dynamic Open-Ended Learning Environment (DOELE) is a collection of learners and learning objects (LOs) that could be constantly changing. In DOELEs, learners need the support of Advanced Learning Technology (ALT), but most ALT is not designed to run in such environments. An architecture for designing advanced learning technology that is compatible with DOELEs is the ecological approach (EA). This thesis looks at how to test and develop ALT based on the EA, and argues that this process would benefit from the use of simulation. The essential components of an EA-based simulation are simulated learners, simulated LOs, and their simulated interactions. In this thesis the value of simulation is demonstrated with two experiments. The first experiment focuses on the pedagogical issue of peer impact: how learning is affected by the performance of peers. By systematically varying the number and type of learners and LOs in a DOELE, the simulation uncovers behaviours that would otherwise go unseen. The second experiment shows how to validate and tune a new instructional planner built on the EA, the Collaborative Filtering based on Learning Sequences (CFLS) planner. When the CFLS planner is configured appropriately, simulated learners achieve higher performance measurements than those using the baseline planners. Simulation results lead to predictions that ultimately need to be proven in the real world, but even without real-world validation such predictions can be useful to researchers to inform the ALT system design process. This thesis work shows that it is not necessary to model all the details of the real world to come to a better understanding of a pedagogical issue such as peer impact. Moreover, simulation allowed for the design of the first known instructional planner to be based on usage data, the CFLS planner. The use of simulation for the design of EA-based systems opens new possibilities for instructional planning without knowledge engineering. Such systems can find niche learning paths that may have never been thought of by a human designer. By exploring pedagogical and ALT system design issues for DOELEs, this thesis shows that simulation is a valuable addition to the toolkit for ALT researchers.
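    The abstract's core loop, simulated learners interacting with simulated LOs while a planner picks the next LO from peers' usage data, can be sketched as follows. This is a minimal illustration under invented assumptions (the Learner and LearningObject classes, the success model, and the recommend function are all hypothetical), not the thesis's actual CFLS planner or simulation.

import random

class LearningObject:
    def __init__(self, lo_id, difficulty):
        self.lo_id = lo_id
        self.difficulty = difficulty

class Learner:
    def __init__(self, learner_id, ability):
        self.learner_id = learner_id
        self.ability = ability
        self.history = []  # list of (lo_id, success) interactions

    def attempt(self, lo):
        # Simulated interaction: success is more likely when ability exceeds difficulty.
        p = max(0.05, min(0.95, 0.5 + self.ability - lo.difficulty))
        success = random.random() < p
        self.history.append((lo.lo_id, success))
        if success:
            self.ability += 0.02  # crude learning effect
        return success

def recommend(learner, peers, los):
    # Usage-data-based choice: prefer an unseen LO on which peers succeeded most often.
    seen = {lo_id for lo_id, _ in learner.history}
    candidates = [lo for lo in los if lo.lo_id not in seen] or los
    def peer_success(lo):
        outcomes = [s for p in peers for lo_id, s in p.history if lo_id == lo.lo_id]
        return sum(outcomes) / len(outcomes) if outcomes else 0.5
    return max(candidates, key=peer_success)

# Vary the learner/LO mix here to probe peer impact.
los = [LearningObject(i, d) for i, d in enumerate([0.2, 0.4, 0.6, 0.8])]
learners = [Learner(i, random.uniform(0.2, 0.8)) for i in range(10)]
for _ in range(20):
    for learner in learners:
        peers = [p for p in learners if p is not learner]
        learner.attempt(recommend(learner, peers, los))
print(sum(l.ability for l in learners) / len(learners))  # average final ability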

    Generative Acceptance Testing for Difficult-to-Test Software

    While there are many excellent acceptance testing tools and frameworks available today, this paper presents an alternative approach that generates code from tests specified in a declarative tabular format within Excel spreadsheets. While this is a general approach, it is most applicable to difficult-to-test situations. Two such situations are presented: one involving complex fixture setup, and another involving complex application workflow concerns. Key Words: automated testing, code generation, domain specific testing language, test automation patterns, testing strategy, user acceptance testing, XML
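    The general idea, generating executable test code from a declarative tabular specification, can be illustrated with a small sketch. The column layout, the generated unittest target, and the run_step fixture hook are assumptions made for illustration; the paper itself reads its tables from Excel spreadsheets.

# Declarative test rows; a real implementation would read these from a spreadsheet.
rows = [
    # (action,        target,           input,     expected)
    ("create_order", "order_service", "3 items", "status=OPEN"),
    ("cancel_order", "order_service", "order 7", "status=CANCELLED"),
]

MODULE_TEMPLATE = '''\
import unittest

class GeneratedAcceptanceTests(unittest.TestCase):
{methods}

if __name__ == "__main__":
    unittest.main()
'''

METHOD_TEMPLATE = '''\
    def test_{name}(self):
        # run_step is hypothetical fixture glue the generated module would import.
        result = run_step({action!r}, {target!r}, {data!r})
        self.assertEqual({expected!r}, result)
'''

def generate_module(rows):
    # Turn each declarative row into one generated test method.
    methods = "\n".join(
        METHOD_TEMPLATE.format(name=f"{i}_{action}", action=action,
                               target=target, data=data, expected=expected)
        for i, (action, target, data, expected) in enumerate(rows)
    )
    return MODULE_TEMPLATE.format(methods=methods)

print(generate_module(rows))  # emits the source of an executable unittest module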

    Model-Driven Test Case Construction by Domain Experts in the Context of System Families (Modellgetriebene Testfallkonstruktion durch Domänenexperten im Kontext von Systemfamilien)

    This work presents MTCC (Model-Driven Test Case Construction), an approach to the construction of acceptance tests by domain experts for testing system families based on feature models. MTCC is applied to the application domain of Digital Libraries. The basic hypothesis of this thesis is that the involvement of domain experts in the testing process for members of system families is possible on the basis of feature models, and that such a testing approach has a positive influence on the efficiency and effectiveness of testing. Application quality benefits from the involvement of domain experts because tests specified by domain experts reflect their needs and requirements and can therefore serve as an executable specification. One prerequisite for the inclusion of domain experts is tooling that supports the specification of automated tests without formal modeling or programming skills. In MTCC, models of automated acceptance tests are constructed with a graphical editor based on models that represent the test-relevant functionality of a system under test as feature models and finite state machines. Feature models for individual testable systems are derived from domain-level models for the system family. The use of feature models by the test reuse system of MTCC facilitates the systematic reuse of test models for the members of system families. MTCC is a model-driven test automation approach that aims at increasing the efficiency of test execution through automation while remaining independent of the implementation of the system under test and of the test harness in use. Because tests in MTCC are abstract models that represent the intent of a test independently of implementation specifics, MTCC employs a template-based code generation approach to generate executable test cases.
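    In the spirit of MTCC's template-based generation, the sketch below turns an abstract test model into an executable script, gated by a feature model. The feature model contents, the test model fields, and the harness calls in the generated script are invented for illustration and are not MTCC's actual formats.

from string import Template

# Feature model for one member of the system family: which features it supports.
feature_model = {"simple_search": True, "faceted_search": False}

# Abstract test model: captures test intent only, no implementation detail.
test_model = {"feature": "simple_search", "query": "digital libraries", "min_hits": 1}

# Target script template; open_system, run_search and result_count stand in for
# whatever test harness the generated script would drive.
SCRIPT_TEMPLATE = Template('''\
# generated acceptance test script
open_system()
run_search("$query")
assert result_count() >= $min_hits
''')

def generate_script(test, features):
    # Only generate tests for features this family member actually offers.
    if not features.get(test["feature"], False):
        raise ValueError("feature not available in this system: " + test["feature"])
    return SCRIPT_TEMPLATE.substitute(query=test["query"], min_hits=test["min_hits"])

print(generate_script(test_model, feature_model))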