9 research outputs found

    Incomplete distinguishing sequences for finite state machines

    Given a Finite State Machine (FSM) M, a Distinguishing Sequence (DS) is a test that identifies the state of M. While there are two types of DSs, preset DSs (PDSs) and adaptive DSs (ADSs), not all FSMs possess a DS. In this paper, we examine the problem of finding incomplete PDSs and ADSs, exploring the associated optimisation problems: finding a largest set of states that has a DS and finding a smallest set of DSs that, between them, distinguish all of the states. We also propose a greedy algorithm to produce a small set of incomplete ADSs and use experiments to compare it with two previously published algorithms for generating state identifiers. We show that the optimisation problems related to incomplete ADSs and PDSs are PSPACE-complete, as are the corresponding approximation problems. In the experiments, we found that the incomplete ADSs produced by the proposed greedy algorithm led to relatively compact state identifiers.
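    The core notion can be made concrete with a small check. The following is a minimal sketch, assuming a deterministic Mealy machine encoded as two Python dictionaries (the names trans and out are illustrative, not from the paper): an input word is a preset DS for a set of states exactly when the output words it produces from those states are pairwise distinct, which also covers the incomplete case where only a subset of the states is considered.

    # Minimal sketch (illustrative, not the paper's algorithm): checking whether an
    # input word is a preset distinguishing sequence for a set of states of a
    # deterministic Mealy FSM given as trans[(state, input)] -> next state and
    # out[(state, input)] -> output symbol.
    def output_word(trans, out, state, word):
        """Output sequence produced when `word` is applied in `state`."""
        outputs = []
        for x in word:
            outputs.append(out[(state, x)])
            state = trans[(state, x)]
        return tuple(outputs)

    def is_preset_ds(trans, out, states, word):
        """`word` distinguishes `states` iff all produced output words differ."""
        produced = [output_word(trans, out, s, word) for s in states]
        return len(set(produced)) == len(produced)

    # Toy 3-state machine over inputs {a, b}; the word "aa" distinguishes all states.
    trans = {('s0', 'a'): 's1', ('s0', 'b'): 's0',
             ('s1', 'a'): 's2', ('s1', 'b'): 's0',
             ('s2', 'a'): 's0', ('s2', 'b'): 's2'}
    out = {('s0', 'a'): 0, ('s0', 'b'): 0,
           ('s1', 'a'): 0, ('s1', 'b'): 1,
           ('s2', 'a'): 1, ('s2', 'b'): 0}
    print(is_preset_ds(trans, out, ['s0', 's1', 's2'], ['a', 'a']))  # True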

    Políticas de Copyright de Publicações Científicas em Repositórios Institucionais: O Caso do INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, gradually moving towards an opening of the research cycle. In the long term, this opening can remove a difficulty that researchers have long faced: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is largely dominated by large commercial publishers and subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. The movement has been gaining importance in Portugal since 2003, with the creation of the first institutional repository at the national level. Institutional repositories emerged as a tool for disseminating an institution's scientific production, with the aim of opening up research results both before publication and the peer-review process itself (preprint) and after it (postprint), and, consequently, increasing the visibility of the work carried out by a researcher and the respective institution. The present study, based on an analysis of the copyright policies of the most relevant scientific publications of INESC TEC, showed not only that publishers are increasingly adopting policies that allow publications to be self-archived in institutional repositories, but also that considerable awareness-raising work remains to be done, not only with researchers but also with the institution and society as a whole. The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving, in the repository, of publications produced in the institutional context, serves as a starting point for a greater appreciation of the scientific production of INESC TEC.

    Evidence-driven testing and debugging of software systems

    Program debugging is the process of testing, exposing, reproducing, diagnosing and fixing software bugs. Many techniques have been proposed to aid developers during software testing and debugging. However, researchers have found that developers hardly use or adopt the proposed techniques in software practice. Evidently, this is because there is a gap between proposed methods and the state of software practice. Most methods fail to address the actual needs of software developers. In this dissertation, we pose the following scientific question: How can we bridge the gap between software practice and the state-of-the-art automated testing and debugging techniques? To address this challenge, we put forward the following thesis: Software testing and debugging should be driven by empirical evidence collected from software practice. In particular, we posit that the feedback from software practice should shape and guide (the automation of) testing and debugging activities. In this thesis, we focus on gathering evidence from software practice by conducting several empirical studies on software testing and debugging activities in the real world. We then build tools and methods that are well-grounded in and driven by the empirical evidence obtained from these experiments. Firstly, we conduct an empirical study on the state of debugging in practice using a survey and a human study. In this study, we ask developers about their debugging needs and observe the tools and strategies they employ while testing, diagnosing and repairing real bugs. Secondly, we evaluate the effectiveness of the state-of-the-art automated fault localization (AFL) methods on real bugs and programs. Thirdly, we conduct an experiment to examine the causes of invalid inputs in software practice. Lastly, we study how to learn input distributions from real-world sample inputs, using probabilistic grammars. To bridge the gap between software practice and the state of the art in software testing and debugging, we proffer the following empirical results and techniques: (1) We collect evidence on the state of practice in program debugging and indeed find that there is a chasm between (available) debugging tools and developer needs. We elicit the actual needs and concerns of developers when testing and diagnosing real faults and provide a benchmark (called DBGBench) to aid the automated evaluation of debugging and repair tools. (2) We provide empirical evidence on the effectiveness of several state-of-the-art AFL techniques (such as statistical debugging formulas and dynamic slicing). Building on the obtained empirical evidence, we provide a hybrid approach that outperforms the state-of-the-art AFL techniques. (3) We evaluate the prevalence and causes of invalid inputs in software practice, and we draw on the lessons learned from this experiment to build a general-purpose algorithm (called ddmax) that automatically diagnoses and repairs real-world invalid inputs. (4) We provide a method to learn the distribution of input elements in software practice using probabilistic grammars, and we further employ the learned distribution to drive the generation of test inputs that are similar (or dissimilar) to sample inputs found in the wild. In summary, we propose an evidence-driven approach to software testing and debugging, which is based on collecting empirical evidence from software practice to guide and direct software testing and debugging.
In our evaluation, we found that our approach improves the effectiveness of several debugging activities in practice. In particular, using our evidence-driven approach, we elicit the actual debugging needs of developers, improve the effectiveness of several automated fault localization techniques, effectively debug and repair invalid inputs, and generate test inputs that are (dis)similar to real-world inputs. Our proposed methods are built on empirical evidence and improve over the state-of-the-art techniques in testing and debugging.
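    As a concrete illustration of the kind of statistical fault localization formula evaluated in this line of work, the sketch below scores program lines with the well-known Ochiai formula computed from per-test coverage. It is a simplified stand-in, not the dissertation's hybrid technique, and the function names and data layout are assumptions.

    # Minimal sketch of statistical fault localization with the Ochiai formula,
    # one of the "statistical debugging formulas" this line of work evaluates.
    # Names and data layout are illustrative, not taken from the dissertation.
    from math import sqrt

    def ochiai_scores(coverage, failing):
        """coverage: dict test_id -> set of covered line ids.
        failing: set of failing test ids.
        Returns dict line -> suspiciousness in [0, 1]."""
        total_failed = len(failing)
        lines = set().union(*coverage.values())
        scores = {}
        for line in lines:
            ef = sum(1 for t, cov in coverage.items() if line in cov and t in failing)
            ep = sum(1 for t, cov in coverage.items() if line in cov and t not in failing)
            denom = sqrt(total_failed * (ef + ep))
            scores[line] = ef / denom if denom else 0.0
        return scores

    # Toy example: line 3 is only covered by the failing test, so it ranks first.
    coverage = {'t1': {1, 2, 3}, 't2': {1, 2}, 't3': {1}}
    failing = {'t1'}
    ranked = sorted(ochiai_scores(coverage, failing).items(), key=lambda kv: -kv[1])
    print(ranked)  # [(3, 1.0), (2, 0.707...), (1, 0.577...)]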

    Improvements in finite state machines

    Finite State Machine (FSM) based testing methods have a history of over half a century, starting in 1956 with the works on machine identification. This was then followed by works checking the conformance of a given implementation to a given specification. When it is possible to identify the states of an FSM using an appropriate input sequence, it has long been known that it is possible to generate a Fault Detection Experiment with fault coverage with respect to a certain fault model in polynomial time. In this thesis, we investigate two notions of fault detection sequences: the Checking Sequence (CS) and the Checking Experiment (CE). Since a fault detection sequence (either a CS or a CE) is constructed once but used many times, the importance of having short fault detection sequences is obvious, and hence recent works in this field aim to generate shorter fault detection sequences. In this thesis, we first investigate a strategy, and related problems, to reduce the length of a CS. A CS consists of several components, such as Reset Sequences and State Identification Sequences. All works assume that, for a given FSM, a reset sequence and a state identification sequence are given together with the specification FSM M. Using the given reset and state identification sequences, a CS is formed that gives full fault coverage under certain assumptions. In other words, any faulty implementation N can be identified by using this test sequence. In the literature, different methods for CS construction take different approaches to putting these components together, with the aim of coming up with a shorter CS incorporating all of these components. One obvious way of keeping a CS short is to keep its components short. As the reset sequence and the state identification sequence are the biggest components, having short reset and state identification sequences is very important as well. It was shown in 1991 that, for a given FSM M, a shortest reset sequence cannot be computed in polynomial time if P ≠ NP. Recently it was shown that when the FSM has a particular type ("monotonic") of transition structure, constructing one of the shortest reset words is polynomial time solvable. However, there has been no work on constructing one of the shortest reset words for monotonic partially specified machines. In this thesis, we show that this problem is NP-hard. On the other hand, in 1994 it was shown that one can check whether M has a special type of state identification sequence (known as an adaptive distinguishing sequence) in polynomial time. The same work also suggests a polynomial time algorithm to construct a state identification sequence when one exists. However, this algorithm generates a state identification sequence without any particular emphasis on generating a short one. There has been no work on the generation of state identification sequences for complete or partial machines after this work. In this thesis, we show that the construction of short state identification sequences is NP-complete and NP-hard to approximate. We propose methods of generating short state identification sequences and experimentally validate that such state identification sequences can reduce the length of fault detection sequences by 29.2% on average. Another line of research in this thesis is devoted to reducing the cost of checking experiments. A checking experiment consists of a set of input sequences, each of which aims to test different properties of the implementation. As in the case of CSs, a large portion of these input sequences contain state identification sequences.
There are several kinds of state identification sequences that are applicable in CEs. In this work, we propose a new kind of state identification sequence and show that the construction of such sequences is PSPACE-complete. We propose a heuristic, perform experiments on benchmark FSMs, and show experimentally that the proposed notion of state identification sequence can reduce the cost of CEs by 65% in the extreme case. Testing distributed architectures is another interesting field for FSM based fault detection sequence generation. The additional challenge when such distributed architectures are considered is to generate a fault detection sequence which does not pose controllability or observability problems. Although the existing methods again assume that a state identification sequence is given, using which a fault detection sequence is constructed, there is no work on how to generate a state identification sequence that does not itself have controllability/observability problems. In this thesis, we investigate the computational complexity of generating such state identification sequences and show that no polynomial time algorithm can construct a state identification sequence for a given distributed FSM.
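    To make the role of reset sequences concrete: an input word is a reset (synchronizing) sequence if applying it from every state drives the machine into one single state. Verifying a candidate word is straightforward, as the sketch below shows; the hardness results discussed above concern finding a shortest such word. The sketch assumes a deterministic FSM given as a Python transition dictionary, and all names are illustrative rather than taken from the thesis.

    # Minimal sketch (illustrative names, not from the thesis): verifying that an
    # input word is a reset (synchronizing) sequence of a deterministic FSM, i.e.
    # applying it from every state brings the machine to one single known state.
    def apply_word(trans, state, word):
        """Run `word` from `state` using trans[(state, input)] -> next state."""
        for x in word:
            state = trans[(state, x)]
        return state

    def is_reset_sequence(trans, states, word):
        """True iff `word` drives all of `states` into one and the same state."""
        return len({apply_word(trans, s, word) for s in states}) == 1

    # Classic 4-state example where 'a' cycles the states and 'b' merges 3 into 0;
    # the word 'baaabaaab' synchronizes every state into state 0.
    trans = {('0', 'a'): '1', ('0', 'b'): '0',
             ('1', 'a'): '2', ('1', 'b'): '1',
             ('2', 'a'): '3', ('2', 'b'): '2',
             ('3', 'a'): '0', ('3', 'b'): '0'}
    print(is_reset_sequence(trans, ['0', '1', '2', '3'], list('baaabaaab')))  # True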

    Aplicación de técnicas de pruebas automáticas basadas en propiedades a los diferentes niveles de prueba del software

    Testing is one of the key activities in software development, since it helps to detect defects that would otherwise go unnoticed until the software is deployed. However, unlike other stages of the software life cycle, such as analysis, design or implementation, for which there are well-defined methodologies and techniques that are widely accepted in the computing community, together with tools that support those tasks, there is no uniformity regarding the methodologies, techniques or tools to be used to carry out software testing efficiently and effectively. As a result, testing is often omitted or not performed with all the necessary rigour. This thesis presents a property-based, purely functional approach to software testing that attempts to alleviate these problems. To that end, testing methodologies and techniques are defined, integrated into the software development process, that can be applied at the different levels of software testing. Thus, they can be used to carry out unit and component tests, which check that each individual component behaves as expected; integration tests, which check the interactions of the components that form part of a system; and system tests, which check different aspects of the system as a whole. In addition, a common test specification language, the functional programming language Erlang, is used in all the approaches developed, and the methodologies are defined independently of the structure of the particular software under test and of the programming language in which it is implemented. Finally, the use of these testing methodologies and techniques is illustrated with an industrial example, namely the VoDKATV system. This system provides access to multimedia services (television channels, video on demand, applications and games, among others) through different types of devices, such as televisions, computers, tablets or mobile phones. Regarding its architecture, the VoDKATV system is composed of multiple components implemented with different technologies (Java, Erlang, C, etc.) that are integrated with one another. The complexity of this system makes it possible to illustrate each of the testing methodologies and techniques developed with a real example.
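    The thesis develops its property-based approach with Erlang tooling. Purely as an illustration of the underlying idea, the sketch below states a round-trip property in Python with the Hypothesis library (an assumption for illustration, not the tooling used in the thesis) and lets the framework generate many inputs that try to falsify it.

    # Illustrative property-based test in Python/Hypothesis (the thesis itself uses
    # Erlang-based tooling): instead of fixed cases, we state a property and let
    # the framework generate many inputs that try to falsify it.
    from hypothesis import given, strategies as st

    def encode(items):
        """Toy run-length encoder used only for this example."""
        result = []
        for item in items:
            if result and result[-1][0] == item:
                result[-1] = (item, result[-1][1] + 1)
            else:
                result.append((item, 1))
        return result

    def decode(pairs):
        return [item for item, count in pairs for _ in range(count)]

    @given(st.lists(st.integers()))
    def test_decode_inverts_encode(items):
        # Round-trip property: decoding an encoding yields the original list.
        assert decode(encode(items)) == items

    test_decode_inverts_encode()  # runs the property against many generated lists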

    Model Checking and Model-Based Testing: Improving Their Feasibility by Lazy Techniques, Parallelization, and Other Optimizations

    This thesis focuses on the lightweight formal method of model-based testing for checking safety properties, and derives a new and more feasible approach. For liveness properties, dynamic testing is impossible, so feasibility is increased by specializing in an important class of properties, livelock freedom, and deriving a more feasible model checking algorithm for it. All mentioned improvements are substantiated by experiments.
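    As a rough illustration of what checking livelock freedom can amount to on an explicit state graph, the sketch below searches for a reachable cycle consisting only of internal ("tau") transitions. It is a simplification under assumed names, not the optimized algorithm derived in the thesis.

    # Illustrative sketch, not the thesis's optimized algorithm: a system is
    # livelock-free here if no reachable cycle consists solely of internal ("tau")
    # transitions. Transitions are (source, action, target) triples.
    def has_livelock(initial, transitions):
        # Successor maps: all actions, and internal ("tau") actions only.
        succ, tau_succ = {}, {}
        for src, action, dst in transitions:
            succ.setdefault(src, []).append(dst)
            if action == 'tau':
                tau_succ.setdefault(src, []).append(dst)

        # States reachable from the initial state via any actions (iterative DFS).
        reachable, stack = set(), [initial]
        while stack:
            s = stack.pop()
            if s not in reachable:
                reachable.add(s)
                stack.extend(succ.get(s, []))

        # Cycle detection over tau edges among reachable states (colour DFS).
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {s: WHITE for s in reachable}

        def dfs(s):
            colour[s] = GREY
            for t in tau_succ.get(s, []):
                if t not in reachable:
                    continue
                if colour[t] == GREY or (colour[t] == WHITE and dfs(t)):
                    return True
            colour[s] = BLACK
            return False

        return any(colour[s] == WHITE and dfs(s) for s in reachable)

    # Example: s1 -tau-> s2 -tau-> s1 is a reachable internal cycle, i.e. a livelock.
    system = [('s0', 'a', 's1'), ('s1', 'tau', 's2'), ('s2', 'tau', 's1')]
    print(has_livelock('s0', system))  # True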