2,601 research outputs found

    FairFuzz: Targeting Rare Branches to Rapidly Increase Greybox Fuzz Testing Coverage

    In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease of use and bug-finding power. However, AFL remains limited in the depth of program coverage it achieves, in particular because it does not consider which parts of program inputs should not be mutated in order to maintain deep program coverage. We propose an approach, FairFuzz, that helps alleviate this limitation in two key steps. First, FairFuzz automatically prioritizes inputs exercising rare parts of the program under test. Second, it automatically adjusts the mutation of inputs so that the mutated inputs are more likely to exercise these same rare parts of the program. We evaluate FairFuzz on real-world programs against state-of-the-art versions of AFL, thoroughly repeating experiments to get good measures of variability. We find that on certain benchmarks FairFuzz shows significant coverage increases after 24 hours compared to state-of-the-art versions of AFL, while on others it achieves high program coverage at a significantly faster rate.
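
    As a rough illustration of the two steps above (not FairFuzz's actual implementation; the branch-ID bookkeeping, the `run` and `coverage_of` callbacks, and the rarity threshold are all assumptions), the following Python sketch prioritizes seeds that reach rarely-hit branches and reverts mutations that would lose the targeted rare branch:

```python
import random
from collections import Counter

branch_hits = Counter()  # hypothetical global hit count per branch ID across the whole fuzzing run

def rare_branches(threshold=10):
    """Branches seen fewer than `threshold` times so far (assumed cutoff)."""
    return {b for b, n in branch_hits.items() if n < threshold}

def pick_seed(queue, coverage_of):
    """Prefer seeds whose coverage includes at least one rare branch."""
    rare = rare_branches()
    preferred = [s for s in queue if coverage_of(s) & rare]
    return random.choice(preferred or queue)

def masked_mutate(seed, run, target_branch, trials=32):
    """Apply random byte mutations, but revert any that lose the target rare branch."""
    data = bytearray(seed)
    for _ in range(trials):
        pos = random.randrange(len(data))
        old = data[pos]
        data[pos] = random.randrange(256)
        if target_branch not in run(bytes(data)):  # run() is assumed to return the set of branches hit
            data[pos] = old  # this position must stay fixed to keep exercising the rare branch
    return bytes(data)
```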

    Project scheduling under uncertainty – survey and research potentials

    The vast majority of research efforts in project scheduling assume complete information about the scheduling problem to be solved and a static, deterministic environment within which the pre-computed baseline schedule will be executed. In the real world, however, project activities are subject to considerable uncertainty that is gradually resolved during project execution. In this survey we review the fundamental approaches to scheduling under uncertainty: reactive scheduling, stochastic project scheduling, stochastic GERT network scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis. We discuss the potential of these approaches for scheduling projects under uncertainty.
    Keywords: Management; Project management; Robustness; Scheduling; Stability
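
    As a small, self-contained illustration of the stochastic project scheduling idea (the activity network and triangular duration estimates below are invented for the example, not taken from the survey), the makespan distribution of a project with uncertain activity durations can be estimated by Monte Carlo simulation:

```python
import random

# Hypothetical mini-network: activity -> (predecessors, (optimistic, most likely, pessimistic) duration)
ACTIVITIES = {
    "design":  ([],               (4, 6, 10)),
    "build":   (["design"],       (8, 12, 20)),
    "test":    (["build"],        (3, 5, 9)),
    "docs":    (["design"],       (2, 3, 6)),
    "release": (["test", "docs"], (1, 1, 2)),
}

def sample_makespan():
    finish = {}
    for act, (preds, (a, m, b)) in ACTIVITIES.items():  # dict order is already topological here
        start = max((finish[p] for p in preds), default=0.0)
        finish[act] = start + random.triangular(a, b, m)  # sample one duration realization
    return max(finish.values())

makespans = sorted(sample_makespan() for _ in range(10_000))
print("median makespan:", round(makespans[len(makespans) // 2], 1))
print("90th percentile:", round(makespans[int(0.9 * len(makespans))], 1))
```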

    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and demonstrate that the AutoFix approach is successfully applicable to reducing the debugging burden in real-world scenarios.
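
    AutoFix itself targets contract-equipped Eiffel programs; as a language-neutral sketch of the underlying idea (the `clamp` example, its fault, and the candidate fix are invented), pre- and postconditions can both expose a fault and validate a proposed correction:

```python
def require(cond, msg="precondition violated"):
    if not cond:
        raise AssertionError(msg)

def ensure(cond, msg="postcondition violated"):
    if not cond:
        raise AssertionError(msg)

def clamp_buggy(x, lo, hi):
    require(lo <= hi)
    result = min(x, hi)           # fault: never enforces the lower bound
    ensure(lo <= result <= hi)    # postcondition documents the intended behavior
    return result

def clamp_candidate_fix(x, lo, hi):
    require(lo <= hi)
    result = max(lo, min(x, hi))  # candidate correction proposed by the repair step
    ensure(lo <= result <= hi)
    return result

def validated(fix, test_inputs):
    """Keep a candidate fix only if no contract fails on any of the test inputs."""
    try:
        for args in test_inputs:
            fix(*args)
        return True
    except AssertionError:
        return False

print(validated(clamp_buggy,         [(5, 0, 10), (-3, 0, 10)]))  # False: postcondition fails on -3
print(validated(clamp_candidate_fix, [(5, 0, 10), (-3, 0, 10)]))  # True
```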

    Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006

    A very extensive report of the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim.

    Optimal metric for condition rating of existing buildings: is five the right number?

    This is an Accepted Manuscript of an article published by Taylor & Francis in Structure and Infrastructure Engineering in January 2019, available online: http://www.tandfonline.com/10.1080/15732479.2018.1557702
    In the built environment, the concept of maintenance has shifted in recent years from corrective to preventive maintenance. There is evidence that preventive maintenance is much more efficient than corrective maintenance, since severe deteriorations that may endanger people are avoided and money is saved. Periodic inspections of buildings are useful to quantify how severe deteriorations are, in order to facilitate decision making and prioritise interventions. Many scales have been and are used to assess the severity of damage and degradation of building components, but there is clearly no consensus among users: the scales differ from one another, with different numbers of degrees and metrics for measuring the condition state. The main goal of this paper is to calculate the optimal metric (the optimal number of degrees) of a severity scale for damage in buildings, so that the corresponding scale could come into widespread, common use among professionals, avoiding problems of comparison between different evaluators. The proposed methodology for calculating the optimal metric of a scale can also be extended to other scopes.
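
    The abstract does not spell out the methodology, so the following is only a hypothetical illustration (not the paper's method) of why the number of degrees matters: more degrees convey more detail, but independent evaluators agree less often. The severity model, noise level, and agreement measure are all assumptions:

```python
import random

def rate(true_severity, degrees, noise=0.08):
    """Map a continuous severity in [0, 1] to a 1..degrees rating with evaluator noise."""
    perceived = min(1.0, max(0.0, true_severity + random.gauss(0, noise)))
    return min(degrees, int(perceived * degrees) + 1)

def agreement(degrees, n_defects=5000):
    """Fraction of defects on which two independent evaluators give the same rating."""
    same = 0
    for _ in range(n_defects):
        s = random.random()
        if rate(s, degrees) == rate(s, degrees):
            same += 1
    return same / n_defects

for d in (3, 5, 7, 9):
    print(f"{d}-degree scale: agreement ~ {agreement(d):.2f}")
```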

    Software development process mining: discovery, conformance checking and enhancement

    Context. Modern software projects require the proper allocation of human, technical, and financial resources. Very often, project managers make decisions supported only by their personal experience, intuition, or simply by mirroring activities performed by others in similar contexts. Most attempts to avoid such practices use models based on lines of code, cyclomatic complexity, or effort estimators, and are thus commonly supported by software repositories which are known to contain several flaws. Objective. Demonstrate the usefulness of process data and mining methods to enhance software development practices, by assessing efficiency and unveiling unknown process insights, thus contributing to the creation of novel models within the software development analytics realm. Method. We mined the development process fragments of multiple developers in three different scenarios by collecting Integrated Development Environment (IDE) events during their development sessions. Furthermore, we used process and text mining to discover developers' workflows and their fingerprints, respectively. Results. We discovered and modeled with good quality developers' processes during programming sessions, based on events extracted from their IDEs. We unveiled insights from coding practices in distinct refactoring tasks, built accurate software complexity forecast models based only on process metrics, and set up a method for coherently characterizing developers' behaviors. The latter may ultimately lead to the creation of a catalog of software development process smells. Conclusions. Our approach is agnostic to programming languages, geographic location, or development practices, making it suitable for challenging contexts such as modern global software development projects using either traditional IDEs or sophisticated low/no-code platforms.
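
    As a minimal sketch of the discovery step (the IDE event names and sessions below are invented, and the actual work presumably relies on established process mining algorithms rather than this toy), a directly-follows relation can be extracted from per-session event logs like so:

```python
from collections import Counter

# Hypothetical IDE event logs, one list of event names per development session
sessions = [
    ["open_file", "edit", "run_tests", "edit", "run_tests", "commit"],
    ["open_file", "edit", "edit", "run_tests", "commit"],
    ["open_file", "run_tests", "edit", "run_tests", "commit"],
]

# Directly-follows relation: how often activity a is immediately followed by activity b
dfg = Counter()
for trace in sessions:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), count in dfg.most_common():
    print(f"{a} -> {b}: {count}")
```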

    EXPECTATIONS - an autonomous mobile vehicle simulator

    This paper describes a fully integrated mobile vehicle simulator, EXPECTATIONS. The simulator has a modular, object-oriented structure in which the virtual environment, static and dynamic objects, and their interactions are constructed hierarchically. It supports 2D/3D real-time graphic rendering of the composite environment, which can be visualized on multiple X-windows in a time-synchronized manner, with vehicle or object movement animated according to the calculations of algorithms written in C/C++. Algorithms such as path planning, behavior learning, collision avoidance, and navigation strategies can easily be plugged in through the so-called Action Decision Interchange concept. Apart from providing a realistic visualization tool for AMV development, it also supports fast algorithmic study and development, and the knowledge learnt through simulation may potentially be used by the physical vehicle in real operations.
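
    The abstract does not detail the Action Decision Interchange concept; the sketch below only illustrates the general plug-and-play pattern it suggests, with an assumed `decide(state) -> action` interface and an invented obstacle-avoidance example:

```python
from abc import ABC, abstractmethod

class ActionDecider(ABC):
    """Hypothetical plug-in interface: any algorithm that maps sensor state to an action."""
    @abstractmethod
    def decide(self, state: dict) -> dict:
        ...

class ObstacleAvoider(ActionDecider):
    def decide(self, state):
        # Slow down and turn when the nearest obstacle is close, otherwise drive straight.
        if state["front_range_m"] < 1.0:
            return {"speed": 0.2, "turn": 0.6}
        return {"speed": 1.0, "turn": 0.0}

def simulate(decider: ActionDecider, steps=3):
    state = {"front_range_m": 2.5}
    for _ in range(steps):
        action = decider.decide(state)
        # A real simulator would integrate vehicle dynamics here; we just shrink the range.
        state["front_range_m"] = max(0.0, state["front_range_m"] - action["speed"])
        print(action, state)

simulate(ObstacleAvoider())
```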

    Fuzzy adaptation for information access to digital libraries

    Different models have been proposed to build systems that adapt to the user, reducing the complexity of the information browsing process, saving the user's time, and offering personalised interfaces. The work described in this paper, named Way-Z39.50, is an example of an adaptive system built specifically for accessing digital libraries. A fuzzy decision mechanism is used to implement the adaptation of both the user interface and the information searching process. This paper describes how the system has been designed and implemented, focusing on its fuzzy-based components. Finally, it also briefly discusses the advantages of using fuzzy logic to implement adaptability of the user interface and the searching process.
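
    The paper's fuzzy decision mechanism is not specified in the abstract; as a generic illustration of fuzzy-logic-driven interface adaptation (the membership functions, the expertise input, and the rule base are all assumptions), consider:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adapt_interface(expertise):
    """expertise in [0, 1], e.g. estimated from the user's past search sessions."""
    novice  = tri(expertise, -0.01, 0.0, 0.5)
    regular = tri(expertise,  0.0,  0.5, 1.0)
    expert  = tri(expertise,  0.5,  1.0, 1.01)

    # Assumed rule base: novice -> guided forms, regular -> standard search, expert -> advanced query syntax
    scores = {"guided": novice, "standard": regular, "advanced": expert}
    return max(scores, key=scores.get)

for e in (0.1, 0.5, 0.9):
    print(e, "->", adapt_interface(e))
```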