
    A Survey of Constrained Combinatorial Testing

    Combinatorial Testing (CT) is a potentially powerful testing technique, but its failure-revealing ability can be dramatically reduced if it fails to handle constraints adequately and efficiently. To ensure the wider applicability of CT in constrained problem domains, large and diverse efforts have been invested in the techniques and applications of constrained combinatorial testing. In this paper, we provide a comprehensive survey of the representations, influences, and techniques that pertain to constraints in CT, covering 129 papers published between 1987 and 2018. This survey not only categorises the various constraint handling techniques, but also reviews the comparatively less well-studied, yet potentially important, constraint identification and maintenance techniques. Since real-world programs are usually constrained, this survey can be of interest to researchers and practitioners who are looking to use and study constrained combinatorial testing techniques.
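    The core issue the survey discusses can be illustrated with a minimal sketch (the parameter model and constraint below are hypothetical, not taken from the survey): a constraint removes configurations, so the set of parameter-value pairs a test suite must cover shrinks to those achievable in at least one valid configuration, and test generation must search over valid configurations only.

    ```python
    from itertools import product, combinations

    # Hypothetical parameter model for illustration.
    params = {
        "os": ["linux", "windows"],
        "browser": ["firefox", "edge"],
        "db": ["sqlite", "postgres"],
    }

    def valid(cfg):
        # Example constraint: edge runs only on windows.
        return not (cfg["browser"] == "edge" and cfg["os"] == "linux")

    names = list(params)
    all_cfgs = [dict(zip(names, vs)) for vs in product(*params.values())]
    valid_cfgs = [c for c in all_cfgs if valid(c)]

    def pairs(cfg):
        return {((a, cfg[a]), (b, cfg[b])) for a, b in combinations(names, 2)}

    # Only pairs realizable under the constraint need covering.
    target = set().union(*(pairs(c) for c in valid_cfgs))

    # Greedy pairwise covering restricted to valid configurations.
    suite, uncovered = [], set(target)
    while uncovered:
        best = max(valid_cfgs, key=lambda c: len(pairs(c) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    ```

    The resulting suite covers every achievable pair while never containing a configuration the constraint forbids, which is the "adequate and efficient" handling the survey is about.
    
    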

    Feature-Trace : an approach to generate operational profile and to support regression testing from BDD features

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2020. Operational Profiles provide quantitative information about how the software will be used, which supports highlighting the software components most sensitive to reliability based on their usage profile. However, generating an Operational Profile usually requires considerable team effort to link the requirements specification to its realization in the expected software artifacts. In this sense, the ability to seamlessly and efficiently trace requirements to code becomes paramount in the software life cycle, embracing the testing process as a means to ensure that the requirements are satisfactorily covered and addressed. In this work, we propose the Feature-Trace approach, which merges the advantages of the Operational Profile with the benefits of the requirements-to-code traceability present in the BDD (Behavior-Driven Development) approach. The primary goal of our work is to use the BDD approach as an information source for the semi-automated generation of the Operational Profile, but it also aims to extract several other metrics related to the prioritization and selection of test cases, such as the Program Spectrum and Code Complexity metrics. The proposed approach was evaluated on Diaspora, an open-source software project on GitHub with 68 BDD features specified in 267 scenarios, ≈ 72 KLOC, and more than 2,900 forks.
    The case study revealed that the Feature-Trace approach is capable of extracting the operational profile seamlessly from Diaspora's specified BDD features, as well as obtaining and presenting vital information to guide the test-case prioritization process. The approach was also assessed based on feedback from 18 developers who had access to the approach and the tool proposed in this work, making evident the usefulness of Feature-Trace for the activities of "Prioritization and Selection of Test Cases", "Evaluation of the Quality of Test Cases", and "Maintenance and Software Evolution".
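    The underlying idea can be sketched in a few lines (hypothetical data, not the actual Feature-Trace tool): weight each BDD feature by its share of scenarios to approximate an operational profile, then rank features so the most-used ones are tested first.

    ```python
    # Hypothetical feature -> scenario-title mapping, e.g. as parsed
    # from Gherkin .feature files.
    features = {
        "posting": ["publish a post", "delete a post", "edit a post"],
        "search": ["search by tag"],
    }

    # Crude operational profile: a feature's usage weight is its share
    # of all specified scenarios.
    total_scenarios = sum(len(s) for s in features.values())
    profile = {name: len(s) / total_scenarios for name, s in features.items()}

    # Features with higher estimated usage are prioritized for regression testing.
    priority = sorted(profile, key=profile.get, reverse=True)
    ```

    A real implementation would refine these weights with the Program Spectrum and code-complexity metrics the abstract mentions; this sketch only shows the profile-from-features step.
    
    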

    Combining Error-Correcting Codes and Decision Diagrams for the Design of Fault-Tolerant Logic

    In modern logic circuits, fault tolerance is increasingly important, since even atomic-scale imperfections can result in circuit failures as component sizes shrink. Therefore, in addition to existing techniques for providing fault tolerance in logic circuits, it is important to develop new techniques for detecting and correcting possible errors resulting from faults in the circuitry. Error-correcting codes are typically used in data transmission for error detection and correction. Their theory is well developed, and linear codes, in particular, have many useful properties and fast decoding algorithms. The existing fault-tolerance techniques utilizing error-correcting codes require less redundancy than other error detection and correction schemes, and such techniques are usually implemented using special decoding circuits. Decision diagrams are an efficient graphical representation of logic functions which, depending on the technology, directly determines the complexity and layout of the circuit; they are therefore easy to implement. In this thesis, error-correcting codes are combined with decision diagrams to obtain a new method for providing fault tolerance in logic circuits. The resulting method of designing fault-tolerant logic, namely error-correcting decision diagrams, introduces redundancy already in the representations of logic functions, and as a consequence no additional checker circuits are needed in the circuit layouts obtained with the new method. The purpose of the thesis is to introduce this original concept and provide a fault-tolerance analysis of the obtained decision diagrams. The fault-tolerance analysis of error-correcting decision diagrams carried out in this thesis shows that the obtained robust diagrams have a significantly reduced probability of an incorrect output compared with non-redundant diagrams.
    However, such useful properties do not come without cost: adding redundancy also adds complexity, and consequently better error-correcting properties result in increased complexity in the circuit layout.
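    The redundancy-versus-reliability trade-off can be illustrated with the simplest error-correcting code, a 3-repetition code with majority voting (this is only an analogy for the redundancy idea, not the thesis's actual error-correcting decision diagrams): a logic function is evaluated on three redundant paths, so a single faulty path is masked at the cost of tripling the evaluation.

    ```python
    def f(a, b, c):
        # Example logic function: (a AND b) OR c.
        return (a and b) or c

    def faulty_f(a, b, c):
        # A replica whose output is stuck at 0, modeling a circuit fault.
        return 0

    def robust_eval(inputs, replicas):
        # Evaluate all redundant replicas and take a 2-of-3 majority vote,
        # which corrects any single erroneous replica.
        votes = [int(r(*inputs)) for r in replicas]
        return int(sum(votes) >= 2)

    # One faulty replica out of three: the vote still yields the correct output.
    ok = robust_eval((1, 1, 0), [f, f, faulty_f])
    ```

    The same principle scales to stronger linear codes with less overhead per corrected error, which is why the thesis builds on error-correcting codes rather than plain replication.
    
    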

    Reasoning about LTL Synthesis over finite and infinite games

    In the last few years, research on formal methods for the analysis and verification of system properties has increased greatly. A meaningful contribution in this area has been made by algorithmic methods developed in the context of synthesis. The basic idea is simple and appealing: instead of developing a system and verifying that it satisfies its specification, we look for an automated procedure that, given the specification, returns a system that is correct by construction. Synthesis of reactive systems is one of the most popular variants of this problem, in which we want to synthesize a system characterized by an ongoing interaction with the environment. In this setting, a large effort has been devoted to analyzing specifications given as formulas of linear temporal logic, i.e., LTL synthesis. Traditional approaches to LTL synthesis rely on transforming the LTL specification into deterministic parity automata, and then into parity games, for which a so-called winning region is computed. Computing such an automaton is, in the worst case, double-exponential in the size of the LTL formula, and this becomes a computational bottleneck when using synthesis in practice. The first part of this thesis is devoted to improving the solution of parity games as they are used in solving LTL synthesis, aiming at techniques that are efficient in terms of running time and space consumption. We start with the study and implementation of an automata-theoretic technique for solving parity games. More precisely, we consider an algorithm introduced by Kupferman and Vardi that solves a parity game by solving the emptiness problem of a corresponding alternating parity automaton. Our empirical evaluation demonstrates that this algorithm outperforms other algorithms when the game has a small number of priorities relative to the size of the game.
In many concrete applications, we do indeed end up with parity games where the number of priorities is relatively small. This makes the new algorithm quite useful in practice. We then provide a broad investigation of the symbolic approach for solving parity games. Specifically, we implement in a fresh tool, called SPGSolver, four symbolic algorithms to solve parity games and compare their performances to the corresponding explicit versions for different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random benchmarks, as the common practice has been. LTL synthesis has been largely investigated also in artificial intelligence, and specifically in automated planning. Indeed, LTL synthesis corresponds to fully observable nondeterministic planning in which the domain is given compactly and the goal is an LTL formula, that in turn is related to two-player games with LTL goals. Finding a strategy for these games means to synthesize a plan for the planning problem. The last part of this thesis is then dedicated to investigate LTL synthesis under this different view. In particular, we study a generalized form of planning under partial observability, in which we have multiple, possibly infinitely many, planning domains with the same actions and observations, and goals expressed over observations, which are possibly temporally extended. By building on work on two-player games with imperfect information in the Formal Methods literature, we devise a general technique, generalizing the belief-state construction, to remove partial observability. 
    This reduces the planning problem to a game of perfect information with a tight correspondence between plans and strategies. We then instantiate the technique and solve some generalized planning problems.
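    The building block shared by essentially all the parity-game algorithms discussed above is the attractor computation: the set of nodes from which a player can force the play into a target set. A minimal sketch (the graph encoding is a hypothetical example, not taken from the thesis's tools):

    ```python
    def attractor(nodes, owner, edges, player, target):
        """Nodes from which `player` can force a visit to `target`.

        A node owned by `player` joins the attractor if SOME successor is in it;
        a node owned by the opponent joins if ALL its successors are in it.
        """
        succ = {v: set() for v in nodes}
        for u, w in edges:
            succ[u].add(w)
        attr = set(target)
        changed = True
        while changed:
            changed = False
            for v in nodes - attr:
                can_enter = bool(succ[v] & attr)
                must_enter = bool(succ[v]) and succ[v] <= attr
                if (owner[v] == player and can_enter) or \
                   (owner[v] != player and must_enter):
                    attr.add(v)
                    changed = True
        return attr

    # Player 0 can force reaching node 2 from everywhere except node 3,
    # whose owner (player 1) can stay in the self-loop at 3 forever.
    game_nodes = {0, 1, 2, 3}
    game_owner = {0: 0, 1: 1, 2: 0, 3: 1}
    game_edges = [(0, 2), (1, 0), (1, 2), (2, 2), (3, 2), (3, 3)]
    attr0 = attractor(game_nodes, game_owner, game_edges, 0, {2})
    ```

    Zielonka-style solvers repeatedly remove attractors of the highest-priority region; the symbolic algorithms mentioned above perform the same fixpoint over BDD-encoded node sets instead of explicit ones.
    
    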

    Dagstuhl News January - December 2002

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented in a short abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic.

    Logic synthesis and testing techniques for switching nano-crossbar arrays

    Beyond CMOS, new technologies are emerging to extend electronic systems with features unavailable to silicon-based devices. Emerging technologies provide new logic and interconnection structures for computation, storage, and communication that may require new design paradigms and therefore trigger the development of a new generation of design automation tools. In the last decade, several emerging technologies have been proposed, and the time has come to study new ad-hoc techniques and tools for logic synthesis, physical design, and testing. The main goal of this project is to develop a complete synthesis and optimization methodology for switching nano-crossbar arrays that leads to the design and construction of an emerging nanocomputer. New models for diode-, FET-, and four-terminal-switch-based nanoarrays are developed. The proposed methodology implements logic, arithmetic, and memory elements while considering performance parameters such as area, delay, power dissipation, and reliability. By combining logic, arithmetic, and memory elements, a synchronous state machine (SSM), a representation of a computer, is realized. The proposed methodology targets a variety of emerging technologies, including nanowire/nanotube crossbar arrays, magnetic switch-based structures, and crossbar memories. The results of this project will form a foundation for nano-crossbar-based circuit design techniques and greatly contribute to the construction of emerging computers beyond CMOS. The topic of this project falls under the research area of "Emerging Computing Models" or "Computational Nanoelectronics", more specifically the design, modeling, and simulation of new nanoscale switches beyond CMOS.
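    A common abstraction for the four-terminal switch arrays mentioned above (this sketch is an illustrative model, not the project's own tools) is a lattice in which each cell is ON or OFF depending on an input literal, and the array outputs 1 exactly when the ON cells form a connected path from the top row to the bottom row.

    ```python
    def lattice_output(grid):
        """Return 1 iff ON cells (value 1) connect the top row to the
        bottom row via 4-neighbour adjacency, modeling a four-terminal
        switch lattice evaluating its function."""
        rows, cols = len(grid), len(grid[0])
        stack = [(0, c) for c in range(cols) if grid[0][c]]
        seen = set(stack)
        while stack:
            r, c = stack.pop()
            if r == rows - 1:
                return 1  # an ON path reached the bottom row
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return 0

    # A left-column path exists: output 1. Diagonal-only cells: output 0.
    connected = lattice_output([[1, 0], [1, 0]])
    disconnected = lattice_output([[1, 0], [0, 1]])
    ```

    Logic synthesis for such arrays then amounts to assigning literals to cells so that top-to-bottom connectivity reproduces the target Boolean function for every input assignment.
    
    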

    Proactive defense strategies against net load redistribution attacks in cyber-physical smart grids

    Doctor of Philosophy, Department of Electrical and Computer Engineering, Hongyu Wu. Recent advances in the cyber-physical smart grid (CPSG) have enabled a broad range of new devices based on information and communication technology (ICT). An open network environment in CPSG provides frequent interaction between information and physical components. However, this interaction also exposes the ICT-enabled devices to a growing threat of cyberattacks. Such threats have been highlighted by recent cybersecurity incidents, and these security issues have strongly restricted the development of CPSG. Among various CPS cybersecurity incidents, cyber data attacks invade the cyber layer to destroy data integrity. By elaborately manipulating the transferred measurement data, such attacks can mislead state estimation (SE) while remaining stealthy to conventional bad data detection (BDD). Because SE is a critical function of CPSG control, cyber data attacks may cause massive economic loss, power system instability, or even cascading failures. Therefore, this dissertation focuses on the detection of stealthy data integrity attacks. This dissertation first performs a thorough review of the state of the art in cyber-physical security of the smart grid. Focusing on the physical layer of the CPSG, this work provides an abstracted and unified state-space model in which cyber-physical attack and defense models can be effectively generalized. The existing cyber-physical attacks are categorized in terms of their target components. In addition, this work discusses several operational and informational defense approaches that represent the current state of the art in the field, including moving target defense (MTD), watermarking, and data-driven strategies. The challenges and future opportunities associated with smart grid cyber-physical security are also discussed.
    Further, a real-time digital simulator, namely Typhoon HIL, is utilized to visualize random MTD against false data injection (FDI) attacks. With the review section as background, a hidden, coordinated net load redistribution attack (NLRA) in an AC distribution system is proposed. The attacker's goal is to create violations in nodal voltage magnitude estimation. An attacker can implement the NLRA strategy using the local information of an attack region and power flow enhanced deep learning (PFEDL) state estimators. The NLRA is modeled as the attacker's modified AC optimal power flow problem to maximize the attack impact. Case study results indicate that the PFEDL-based SE can provide the attacker with accurate system states in a low-observability distribution system where conventional least-squares-based SE cannot converge. The stealthiness of the hidden NLRA is validated in multiple attack cases. The influence of the NLRA on the distribution system is assessed, and the impacts of attack regions, attack timing, and attack area size are also revealed. Next, this dissertation highlights that current MTD strategies myopically perturb the reactance of D-FACTS lines without considering system voltage stability. Voltage instability induced by MTDs is illustrated in a three-bus system and in two more complicated systems with real-world load profiles. Further, a novel MTD framework that explicitly considers system voltage stability, using continuation power flow and voltage stability indices, is proposed to avoid MTD-induced voltage instability. In addition, this dissertation mathematically derives the sensitivity matrix of the voltage stability index with respect to line impedance, from which an optimization problem for maximizing the voltage stability index is formulated. This framework is tested on the IEEE 14-bus and IEEE 118-bus transmission systems, in which sophisticated attackers launch NLRAs.
    The simulation results show the effectiveness of the proposed framework in circumventing voltage instability while maintaining the detection effectiveness of MTD. Case studies are conducted with and without the proposed framework under different MTD planning and operational methods. The impacts of the two proposed methods on attack detection effectiveness and system economic metrics are also revealed. Finally, this dissertation proposes utilizing smart inverters to implement a novel meter encoding scheme in distribution systems. The proposed meter encoding scheme is a software-based active detection method that neither requires additional hardware devices nor causes system instability, in contrast to MTD and watermarking. By elaborately constructing the encoding vector, the proposed smart-inverter-based meter encoding can mislead the attacker's SE while remaining hidden from alert attackers. In addition, by exploiting the topology of radial distribution systems, the proposed encoding scheme encodes fewer meters than current schemes when protecting the same number of buses, which decreases the encoding cost. Simulation results from the IEEE 69-bus distribution system demonstrate that the proposed meter encoding scheme can mislead the attacker's state estimation on all downstream buses of an encoded bus without arousing the attacker's suspicion. FDI attacks constructed from the misled estimated states are highly likely to trigger the defender's BDD alarm.
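    The stealthiness property the dissertation defends against can be seen in the textbook DC state-estimation setting (the 3-measurement, 2-state Jacobian below is illustrative, not the dissertation's NLRA formulation): any attack vector lying in the column space of the measurement Jacobian shifts the estimated state without changing the residual, so a residual-threshold BDD cannot detect it.

    ```python
    import numpy as np

    # Toy DC state-estimation model: z = H x (noise omitted for clarity).
    H = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, -1.0]])        # measurement Jacobian (illustrative)
    x_true = np.array([0.1, -0.05])    # true states, e.g. voltage angles
    z = H @ x_true                     # measurements

    def estimate(z):
        # Least-squares state estimate.
        return np.linalg.lstsq(H, z, rcond=None)[0]

    def residual_norm(z):
        # Residual checked by a conventional bad data detector.
        return np.linalg.norm(z - H @ estimate(z))

    c = np.array([0.02, 0.0])          # attacker's intended state bias
    a = H @ c                          # stealthy attack: a lies in col(H)
    z_attacked = z + a

    # The residual is unchanged, so the BDD raises no alarm,
    # yet the state estimate is shifted by exactly c.
    ```

    Active defenses such as MTD and the meter encoding scheme above work by secretly changing the effective H (or the encoding of z), so that an attack crafted against the attacker's outdated model no longer lies in the true column space and becomes detectable.
    
    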