7,424 research outputs found

    Topic driven testing

    Modern interactive applications offer so many interaction opportunities that automated exploration and testing become practically impossible without some domain-specific guidance towards relevant functionality. In this dissertation, we present a novel fundamental graphical user interface testing method called topic-driven testing. We mine the semantic meaning of interactive elements, guide testing, and identify the core functionality of applications. The semantic interpretation is close to human understanding and allows us to learn specifications and transfer knowledge across multiple applications independent of the underlying device, platform, programming language, or technology stack; to the best of our knowledge, this is a unique feature of our technique. Our tool ATTABOY is able to take an existing Web application test suite, say from Amazon, execute it on eBay, and thus guide testing to relevant core functionality. Tested on different application domains such as eCommerce, news pages, and mail clients, it can transfer on average sixty percent of the tested application behavior to new apps, without any human intervention. On top of that, topic-driven testing can work with even vaguer instructions such as how-to descriptions or use-case descriptions. Given an instruction, say "add item to shopping cart", it tests the specified behavior in an application, both in a browser and in mobile apps. It thus improves state-of-the-art UI testing frameworks, creates change-resilient UI tests, and lays the foundation for learning, transferring, and enforcing common application behavior. The prototype is up to five times faster than existing random testing frameworks and tests functions that are hard to cover by non-trained approaches.
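    The core transfer step, matching UI elements to an instruction by the semantic similarity of their labels rather than by rigid locators, can be sketched roughly as follows. This is an illustrative assumption, not ATTABOY's implementation; a real system would use trained word embeddings, while a toy bag-of-words cosine similarity stands in here.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two bag-of-words vectors
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_element(instruction: str, labels: list[str]) -> str:
    # pick the UI element whose label is semantically closest to the instruction
    query = Counter(instruction.lower().split())
    return max(labels, key=lambda l: cosine(query, Counter(l.lower().split())))

# Hypothetical labels scraped from a shop's UI
print(best_element("add item to shopping cart",
                   ["Sign in", "Add to cart", "Checkout", "Search"]))
# -> "Add to cart"
```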

    Automated specification-based testing of graphical user interfaces

    Doctoral thesis (Tese de doutoramento), Electronic and Computer Engineering, 2006. Faculdade de Engenharia, Universidade do Porto; Departamento de InformĂĄtica, Escola de Engenharia, Universidade do Minho.

    IOHanalyzer: Performance Analysis for Iterative Optimization Heuristics

    Benchmarking and performance analysis play an important role in understanding the behaviour of iterative optimization heuristics (IOHs) such as local search algorithms, genetic and evolutionary algorithms, and Bayesian optimization algorithms. This task, however, involves manual setup, execution, and analysis of experiments on an individual basis, which is laborious and can be mitigated by a generic and well-designed platform. For this purpose, we propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and visualization of performance data of IOHs. Implemented in R and C++, IOHanalyzer is fully open source; it is available on CRAN and GitHub. IOHanalyzer provides detailed statistics about fixed-target running times and about fixed-budget performance of the benchmarked algorithms on real-valued, single-objective optimization tasks. Performance aggregation over several benchmark problems is possible, for example in the form of empirical cumulative distribution functions. Key advantages of IOHanalyzer over other performance analysis packages are its highly interactive design, which allows users to specify the performance measures, ranges, and granularity that are most useful for their experiments, and the possibility to analyze not only performance traces, but also the evolution of dynamic state parameters. IOHanalyzer can directly process performance data from the main benchmarking platforms, including the COCO platform, Nevergrad, and our own IOHexperimenter. An R programming interface is provided for users who prefer finer control over the implemented functionalities.
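    As a conceptual illustration of the fixed-target ECDF aggregation mentioned above (a Python sketch with made-up run data, not IOHanalyzer's R interface):

```python
import numpy as np

# Fixed-target view: evaluations needed by each run to reach a target.
# The ECDF at budget t is the fraction of (run, target) pairs reached
# within t evaluations. All numbers below are made up.
hitting_times = {
    "target 1e-1": [120, 95, 210, 150],
    "target 1e-2": [480, 390, np.inf, 610],   # inf = target never reached
}

budgets = np.logspace(1, 3, 50)   # budgets from 10 to 1000 evaluations
times = np.concatenate([np.asarray(v, dtype=float)
                        for v in hitting_times.values()])
ecdf = [(times <= b).mean() for b in budgets]

for b, f in zip(budgets[::12], ecdf[::12]):
    print(f"budget {b:7.1f}: {f:.2f} of (run, target) pairs solved")
```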

    Simultaneous multichannel photometry with BUSCA

    This thesis deals with the multicolour simultaneous camera BUSCA (Bonn University Simultaneous CAmera), which was built at the Sternwarte of the University of Bonn for the 2.2 m telescope at the Calar Alto Observatory (Spain). The advantage of this new instrument is the capability of observing simultaneously in four different colour bands ranging from the ultraviolet to the near-infrared. On the one hand, this technique reduces the amount of time needed for full colour coverage of astronomical objects; in comparison, single-filter systems waste all the light except that passing the band filter. On the other hand, the simultaneous observation in four colours should stabilize the colour indices during non-photometric conditions caused by variable extinction, e.g., thin clouds (droplets, ice crystals) or dust. This relies on the assumption that thin clouds are grey, i.e., cause an equal flux loss in all colour bands. Additionally, a new shutter, the "Bonn Shutter", was developed; it provides homogeneous illumination over the whole field of view even at short exposure times. As the first science case, the first simultaneous observations of the rapidly variable sdB (sub-luminous B) star PG1605+072 are presented. PG1605+072 belongs to a new class of pulsating stars named V361 Hya stars, for which more than 30 sdB pulsators are known. The stellar pulsations allow insight into the structure of the stellar atmosphere and therefore indirectly into the evolutionary history of sdB stars. With this asteroseismological analysis tool it is then possible to determine the stellar mass as well as the envelope mass. PG1605+072 is an ideal target for a photometric and spectroscopic analysis, because it has the longest pulsation periods known for this class of variable stars, and with 50 known pulsation frequencies it possesses by far the richest frequency spectrum. The second science project deals with Stroemgren photometry of Galactic globular clusters. One advantage of Stroemgren photometry is that it can be used as a reliable metallicity indicator: the Stroemgren v filter includes several iron absorption lines and, like the m1 = (v-b)-(b-y) index, is sensitive to the iron abundance. For giant and supergiant stars the iron metallicity can be determined directly from the photometry. In this work the two Galactic globular clusters M12 and M71, for which no Stroemgren photometry was available, were observed and analysed with BUSCA. Moreover, M71 shows additional CN variations among its red giant branch (RGB) and main sequence (MS) stars. The origin of these CN variations is not yet understood. Possible explanations are that CN at the stellar surface is enriched with processed material (from the CNO cycle) from inside the star, that CN is accreted from stellar winds (self-pollution) of, e.g., AGB stars and novae, or that the CN distribution in the molecular cloud in which the globular cluster was born was not homogeneous. In the past, CN variations were detected spectroscopically for only a few RGB and MS stars, so with this photometric approach many more RGB stars can be investigated with BUSCA.
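    The m1 metallicity index quoted above is plain arithmetic on the measured magnitudes; a minimal sketch with made-up magnitude values:

```python
def m1_index(v: float, b: float, y: float) -> float:
    """Stroemgren m1 metallicity index: m1 = (v - b) - (b - y)."""
    return (v - b) - (b - y)

# Hypothetical magnitudes for a red giant; the v band is depressed by
# iron absorption lines, so m1 traces the iron abundance.
print(m1_index(v=13.42, b=12.81, y=12.35))  # (0.61) - (0.46) ~ 0.15
```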

    Non Contact Test Points - A high frequency measurement technique for printed circuit boards

    A growing problem in modern electronics is that signal frequencies have increased so dramatically that signals are now very difficult to measure: if a traditional probe is used, the direct contact destroys the signal. The general idea behind this master's thesis project is to make use of the crosstalk between transmission lines to create non-contact test points. The thesis evaluates different designs in order to optimize the crosstalk, and uses signal processing to recover the original signal. First, the problem was tackled with simulations, and the results were analyzed in an attempt to optimize the design. Second, an actual circuit board was produced and the approach was tested in practice. It turned out that the idea of non-contact test points is sound, and the report shows that good measurements can be acquired with little effect on the original signal.
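    A minimal numerical sketch of the recovery idea, assuming the simplified model that a capacitively coupled test point picks up roughly the time derivative of the trace signal (the coupling constant and sample rate are made up, and the thesis's actual signal processing may differ):

```python
import numpy as np

fs = 50e9                               # assumed 50 GS/s sample rate
t = np.arange(0, 2e-9, 1 / fs)          # 2 ns observation window
signal = (t > 0.5e-9).astype(float)     # ideal step edge on the trace

k = 3e-12                               # hypothetical coupling constant [s]
picked_up = k * np.gradient(signal, 1 / fs)   # what the test point sees

# Recover the original by rescaling and numerically integrating
recovered = np.cumsum(picked_up) / fs / k
print(recovered[-1], signal[-1])        # both ~1.0: step amplitude recovered
```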

    Mining Sandboxes

    Modern software is ubiquitous, yet insecure. It has the potential to expose billions of humans to serious harm, up to and including losing fortunes and taking lives. Existing approaches for securing programs are either exceedingly hard and costly to apply, significantly decrease usability, or just don't work well enough against a determined attacker. In this thesis we propose a new solution that significantly increases application security, yet is cheap, easy to deploy, and has minimal usability impact. We combine in a novel way the best of what existing techniques of test generation, dynamic program analysis, and runtime enforcement have to offer: we introduce the concept of sandbox mining. First, in a phase called mining, we use automatic test generation to discover application behavior. Second, we apply a sandbox to limit any behavior during normal usage to the behavior discovered during mining. Users of an application running in a mined sandbox are thus protected from the application suddenly changing its behavior compared to the behavior observed during automatic test generation. As a consequence, backdoors, advanced persistent threats, and other kinds of attacks based on the passage of time become exceedingly hard to conduct covertly. They are either discovered in the secure mining phase, where they can do no damage, or are blocked altogether. Mining is cheap because we leverage fully automated test generation to provide baseline behavior. Usability is not degraded: the runtime enforcement impact of the sandbox is negligible, and the mined behavior is comprehensive and presented in a human-readable format, so any unexpected behavior changes are rare and easy to reason about. Our BOXMATE prototype for Android applications shows the approach is technically feasible, has an easy setup process, and is widely applicable to existing apps. Experiments conducted with BOXMATE show that less than one hour is required to mine an Android application's sandbox, with few to no confirmations required for frequently used functionality.
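    A minimal sketch of the mine-then-enforce idea (class and method names are hypothetical, not BOXMATE's API, and BOXMATE monitors Android API calls rather than Python calls):

```python
class MinedSandbox:
    """Two-phase sandbox: record behavior while mining, then enforce it."""

    def __init__(self):
        self.allowed = set()    # sensitive resources observed during mining
        self.enforcing = False

    def observe(self, resource: str):
        # called on every sensitive access, e.g. "camera" or "contacts"
        if self.enforcing and resource not in self.allowed:
            raise PermissionError(f"blocked unmined access: {resource}")
        if not self.enforcing:
            self.allowed.add(resource)   # mining phase: record as allowed

    def seal(self):
        # switch from mining to enforcement
        self.enforcing = True


box = MinedSandbox()
for resource in ["network", "camera"]:   # accesses seen under test generation
    box.observe(resource)
box.seal()

box.observe("network")                   # fine: behavior seen during mining
try:
    box.observe("contacts")              # new behavior after mining
except PermissionError as e:
    print(e)                             # -> blocked unmined access: contacts
```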

    FROM MUSIC INFORMATION RETRIEVAL (MIR) TO INFORMATION RETRIEVAL FOR MUSIC (IRM)

    This thesis reviews and discusses certain techniques from the domain of (Music) Information Retrieval, in particular some general data mining algorithms, and describes their specific adaptations for use as building blocks in the CACE4 software application. The use of Augmented Transition Networks (ATN) from the field of (Music) Information Retrieval is, to a certain extent, adequate as long as one keeps the underlying tonal constraints and rules as a guide to understanding the structure one is looking for. However, since a large proportion of algorithmic music, including music composed by the author, is atonal, tonal constraints and rules are of little use. Analysis methods from hierarchical clustering techniques (HCT) such as k-means and Expectation-Maximisation (EM) facilitate other approaches and are better suited to finding (clustered) structures in large data sets; ART2 neural networks (Adaptive Resonance Theory), for example, can be used for analysing and categorising these data sets. Statistical tools such as histogram analysis, mean, variance, and correlation calculations can provide information about connections between members of a data set. Altogether this provides a diverse palette of usable data analysis methods and strategies for creating algorithmic atonal music, acting as (software) strategy tools whose use is determined by the quality of their output within a musical context, as demonstrated when developed and programmed into the Computer Assisted Composition Environment CACE4. Music Information Retrieval techniques are thereby inverted: their specific techniques and the associated methods of Information Retrieval and general data mining are used to access the organisation and constraints of abstract (non-specifically musical) data in order to use and transform it in a musical composition.
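    As a rough illustration of the k-means clustering building block described above (the note features and all values are hypothetical, not taken from CACE4):

```python
import random

def dist2(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    # plain k-means over feature vectors, e.g. (pitch, duration) pairs
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign to the nearest centroid
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):    # move centroids to cluster means
            if cl:
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters

# Hypothetical note events as (pitch, duration) features
notes = [(60, 0.5), (62, 0.5), (61, 0.25), (84, 2.0), (86, 1.5), (85, 2.5)]
centroids, clusters = kmeans(notes, k=2)
print(centroids)    # one centroid per discovered group of notes
```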
    • 

    corecore