
    A test selection method for Python programs based on execution coverage data

    Regression testing is a type of testing that aims to verify that the existing test suite will not find any defects in a modified program. Regression tests are usually run after each program modification and may take a lot of processing time to complete. Regression test selection is a process in which only a relevant subset of tests is selected from the test suite for execution, with the goal of reducing the time regression test execution takes. Safe regression test selection methods are ones that can prove that none of the deselected test cases would have found any defects, so that running them is not necessary. Researchers have proposed multiple methods for both safe and unsafe regression test selection. Many of them require a control flow graph or similar information that is extracted during the compilation step, which makes most of these methods unsuitable for dynamically typed, interpreted programming languages where that information is not available. This thesis presents a test-coverage-based regression test selection method that can be used with interpreted programming languages. The presented method does not require any changes to the tested program's source code. Its test selection precision was evaluated on an existing medium-sized proprietary web application, and the results are somewhat mixed: the overhead imposed by the coverage-based test selection increased the test suite's execution time significantly, and while the method managed to select a small subset of the test suite roughly half of the time, in the other half it had to re-run all tests.
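
The coverage-based selection described above can be sketched in a few lines of Python. This is an illustrative toy, not the thesis's implementation: the per-test coverage map, the change set, and the safe fallback for tests without coverage data are all assumptions made for the example.

```python
# Toy sketch of coverage-based regression test selection (hypothetical,
# not the thesis's code): each test maps to the set of (file, line)
# pairs it executed on the previous run; a test is re-run only if its
# coverage intersects the lines modified since then.

def select_tests(coverage_map, changed_lines):
    """Return the subset of tests that must be re-run."""
    selected = set()
    for test, covered in coverage_map.items():
        if not covered:
            # No coverage recorded for this test: be safe and re-run it.
            selected.add(test)
        elif covered & changed_lines:
            # The test executed at least one modified line.
            selected.add(test)
    return selected

coverage = {
    "test_login":  {("auth.py", 10), ("auth.py", 11)},
    "test_search": {("search.py", 5)},
    "test_misc":   set(),  # coverage unknown
}
changed = {("auth.py", 11)}
print(sorted(select_tests(coverage, changed)))  # ['test_login', 'test_misc']
```

A real implementation would collect the coverage map with a tracing hook (e.g. Python's `sys.settrace` or the `coverage` package), which is also where the significant runtime overhead reported in the abstract comes from.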

    Analysis and evaluation of SafeDroid v2.0, a framework for detecting malicious Android applications

    Android smartphones have become a vital component of the daily routine of millions of people, running a plethora of applications available in the official and alternative marketplaces. Although there are many security mechanisms to scan and filter malicious applications, malware is still able to reach the devices of many end-users. In this paper, we introduce the SafeDroid v2.0 framework, a flexible, robust, and versatile open-source solution for statically analysing Android applications, based on machine learning techniques. The main goal of our work, besides the automated production of fully sufficient prediction and classification models in terms of maximum accuracy scores and minimum negative errors, is to offer an out-of-the-box framework that Android security researchers can employ to experiment efficiently in search of effective solutions: SafeDroid v2.0 makes it possible to test many different combinations of machine learning classifiers, with a high degree of freedom and flexibility in the choice of features to consider, such as dataset balance and dataset selection. The framework also provides a server for generating experiment reports, and an Android application for verifying the produced models in real-life scenarios. An extensive campaign of experiments is also presented to show how competitive solutions can be found efficiently: the results of our experiments confirm that SafeDroid v2.0 can reach very good performance, even with highly unbalanced dataset inputs, and always with a very limited overhead.
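
The experiment loop that SafeDroid v2.0 automates, training several classifiers on the same feature vectors and comparing their scores, can be illustrated with a stdlib-only sketch. The two toy classifiers and the tiny dataset below are invented for the example; the actual framework works with real feature extraction and full machine-learning libraries.

```python
# Hypothetical sketch of comparing classifier combinations on one
# feature set; not SafeDroid's actual code.

def knn_1(train, x):
    """1-nearest-neighbour: label of the closest training vector."""
    return min(train, key=lambda t: sum((a - b) ** 2
                                        for a, b in zip(t[0], x)))[1]

def majority(train, x):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(clf, train, test):
    return sum(clf(train, x) == y for x, y in test) / len(test)

train = [((0, 0), "benign"), ((0, 1), "benign"), ((5, 5), "malware")]
test  = [((1, 0), "benign"), ((4, 5), "malware")]

scores = {name: accuracy(fn, train, test)
          for name, fn in [("1-NN", knn_1), ("majority", majority)]}
best = max(scores, key=scores.get)  # the "fittest" classifier wins
```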

    Bounding rare event probabilities in computer experiments

    We are interested in bounding probabilities of rare events in the context of computer experiments. These rare events depend on the output of a physical model with random input variables. Since the model is only known through an expensive black-box function, standard efficient Monte Carlo methods designed for rare events cannot be used. We therefore propose a strategy to deal with this difficulty based on importance sampling methods. This proposal relies on Kriging metamodeling and is able to achieve sharp upper confidence bounds on the rare event probabilities. The variability due to the Kriging metamodeling step is properly taken into account. The proposed methodology is applied to a toy example and compared to more standard Bayesian bounds. Finally, a challenging real case study is analyzed: finding an upper bound of the probability that the trajectory of an airborne load will collide with the aircraft that has released it. Comment: 21 pages, 6 figures
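
The core importance-sampling idea, sampling from a proposal concentrated on the rare region and reweighting by the density ratio, can be shown on a toy one-dimensional problem. This sketch is only the textbook estimator, not the paper's Kriging-based method; the threshold, proposal, and sample count are choices made for the example.

```python
# Toy importance sampling for a rare event: estimate P(X > 4) for
# X ~ N(0, 1) by drawing from the shifted proposal N(4, 1) and
# reweighting each hit by the density ratio p(x) / q(x).
import math
import random

def normal_pdf(x, mu=0.0):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

def rare_prob(threshold=4.0, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)       # sample from proposal N(t, 1)
        if x > threshold:                   # indicator of the rare event
            total += normal_pdf(x) / normal_pdf(x, threshold)
    return total / n

estimate = rare_prob()
# Analytic value: 1 - Phi(4) is about 3.17e-5; plain Monte Carlo with
# the same budget would see only a handful of hits, giving a uselessly
# noisy estimate.
```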

    Photometric redshift estimation based on data mining with PhotoRApToR

    Photometric redshifts (photo-z) are crucial to the scientific exploitation of modern panchromatic digital surveys. In this paper we present PhotoRApToR (Photometric Research Application To Redshift): a Java/C++ based desktop application capable of solving non-linear regression and multi-variate classification problems, specialized in particular for photo-z estimation. It embeds a machine learning algorithm, namely a multilayer neural network trained by the Quasi-Newton learning rule, and special tools dedicated to pre- and post-processing data. PhotoRApToR has been successfully tested on several scientific cases. The application is available for free download from the DAME Program web site. Comment: To appear in Experimental Astronomy, Springer; 20 pages, 15 figures
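
The kind of non-linear regression PhotoRApToR performs can be illustrated with a tiny single-hidden-layer network trained by plain stochastic gradient descent. This is a stand-in sketch only: the real application uses a multilayer network with a Quasi-Newton learning rule, and the toy target function here is invented for the example.

```python
# Minimal one-hidden-layer tanh regressor fit by SGD (illustrative
# stand-in, not PhotoRApToR's network or training rule).
import math
import random

random.seed(0)
H = 8                                   # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

# Toy data: learn the non-linear mapping y = x^2 on [-1, 1].
data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]

lr = 0.05
for _ in range(3000):                   # epochs of per-sample updates
    for x, y in data:
        pred, hidden = forward(x)
        err = pred - y                  # d(loss)/d(pred) for 0.5*err^2
        for j in range(H):
            grad_h = err * w2[j] * (1 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)
```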

    Tupleware: Redefining Modern Analytics

    There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world: petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet, the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to a few terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware's architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders of magnitude performance improvement over alternative systems.

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Darwinian Data Structure Selection

    Data structure selection and tuning is laborious but can vastly improve an application's performance and memory footprint. Some data structures share a common interface and enjoy multiple implementations. We call them Darwinian Data Structures (DDS), since we can subject their implementations to survival of the fittest. We introduce ARTEMIS, a multi-objective, cloud-based, search-based optimisation framework that automatically finds an optimal, tuned DDS modulo a test suite, then changes an application to use that DDS. ARTEMIS achieves substantial performance improvements for every project in 5 Java projects from the DaCapo benchmark, 8 popular projects, and 30 uniformly sampled projects from GitHub. For execution time, CPU usage, and memory consumption, ARTEMIS finds at least one solution that improves all measures for 86% (37/43) of the projects. The median improvement across the best solutions is 4.8%, 10.1%, and 5.1% for runtime, memory, and CPU usage. These aggregate results understate ARTEMIS's potential impact. Some of the benchmarks it improves are libraries or utility functions. Two examples are gson, a ubiquitous Java serialization framework, and xalan, Apache's XML transformation tool. ARTEMIS improves gson by 16.5%, 1%, and 2.2% for memory, runtime, and CPU; ARTEMIS improves xalan's memory consumption by 23.5%. Every client of these projects will benefit from these performance improvements. Comment: 11 pages
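
The survival-of-the-fittest idea, benchmarking interchangeable implementations of one interface and keeping the winner, can be shown in miniature. ARTEMIS operates on Java code; the Python sketch below, with its FIFO workload and two candidate structures, is purely an invented illustration of the selection principle.

```python
# Illustrative "Darwinian" selection between two structures that share
# a queue-like interface (not ARTEMIS itself).
import time
from collections import deque

def fifo_workload(make_queue, n=20_000):
    """Time n appends followed by n removals from the front."""
    q = make_queue()
    start = time.perf_counter()
    for i in range(n):
        q.append(i)
    while q:
        if hasattr(q, "popleft"):
            q.popleft()                 # O(1) on deque
        else:
            q.pop(0)                    # O(n) on list: shifts every element
    return time.perf_counter() - start

candidates = {"list": list, "deque": deque}
timings = {name: fifo_workload(ctor) for name, ctor in candidates.items()}
fittest = min(timings, key=timings.get)  # 'deque' wins this workload
```

For a memory-bound or random-access workload the fittest candidate could differ, which is why a multi-objective search over real measurements is needed rather than a fixed rule of thumb.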

    The Value Driven Pharmacist: Basics of Access, Cost, and Quality 2nd Edition

    https://digitalcommons.butler.edu/butlerbooks/1017/thumbnail.jp