
    REDUNET: reducing test suites by integrating set cover and network-based optimization

    The availability of effective test suites is critical for the development and maintenance of reliable software systems. To increase test effectiveness, software developers tend to employ larger and larger test suites. The recent availability of software tools for automatic test generation makes building large test suites affordable, thereby accelerating this trend. However, large test suites, though more effective, are resource- and time-consuming and therefore cannot be executed frequently. Reducing them without decreasing code coverage is a necessary compromise between test efficiency and effectiveness, enabling more regular checks of the software under development. We propose a novel approach, REDUNET, to reduce a test suite while keeping the same code coverage. We integrate this approach into a complete framework for the automatic generation of efficient and effective test suites, which includes test suite generation, code coverage analysis, and test suite reduction. Our approach formulates test suite reduction as a set cover problem and applies integer linear programming together with a network-based optimisation that takes advantage of the properties of the control flow graph. We find the optimal set of test cases that keeps the same code coverage in fractions of a second on real software projects and on test suites generated automatically by Randoop. The results on ten real software systems show that the proposed approach finds the optimal minimisation, achieving up to 90% reduction and more than 50% reduction on all systems under analysis. On the largest project our reduction algorithm runs more than three times faster than both integer linear programming alone and the state-of-the-art Harrold-Gupta-Soffa heuristic.
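
    As a minimal sketch of the set-cover formulation described above (written with the PuLP ILP library; the test names and coverage map are illustrative, not taken from the paper), each binary variable decides whether a test case is kept, and every covered branch must remain covered by at least one kept test:

        # Set-cover ILP sketch for test suite reduction (illustrative only).
        import pulp

        tests = ["t1", "t2", "t3", "t4"]
        coverage = {                      # branch -> tests that cover it
            "b1": ["t1", "t2"],
            "b2": ["t2", "t3"],
            "b3": ["t3", "t4"],
        }

        prob = pulp.LpProblem("test_suite_reduction", pulp.LpMinimize)
        keep = {t: pulp.LpVariable(f"keep_{t}", cat="Binary") for t in tests}

        # Objective: keep as few test cases as possible.
        prob += pulp.lpSum(keep.values())

        # Constraint: every branch stays covered by at least one selected test.
        for branch, covering in coverage.items():
            prob += pulp.lpSum(keep[t] for t in covering) >= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        reduced_suite = [t for t in tests if keep[t].value() == 1]
        print(reduced_suite)   # e.g. ['t2', 't3'] -- same coverage, half the tests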

    Trace-level reuse

    Trace-level reuse is based on the observation that some traces (dynamic sequences of instructions) are frequently repeated during the execution of a program, and in many cases the instructions that make up such traces have the same source operand values. The execution of such traces will obviously produce the same outcome, and thus their execution can be skipped if the processor records the outcome of previous executions. This paper presents an analysis of the performance potential of trace-level reuse and discusses a preliminary realistic implementation. Like instruction-level reuse, trace-level reuse can improve performance by decreasing resource contention and the latency of some instructions. However, we show that trace-level reuse is more effective than instruction-level reuse because the former can avoid fetching the instructions of reused traces. This has two important benefits: it reduces the fetch bandwidth requirements, and it increases the effective instruction window size, since these instructions do not occupy window entries. Moreover, trace-level reuse can compute all at once the result of a chain of dependent instructions, which may allow the processor to avoid the serialization caused by data dependences and thus potentially exceed the dataflow limit.
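
    As a rough illustration of the reuse mechanism (a toy model, not the implementation discussed in the paper), a trace-reuse table can be viewed as a memo keyed by the trace's starting PC and its live-in operand values; on a hit, the recorded live-out values are written back and the trace's instructions are neither fetched nor executed:

        # Toy model of a trace-reuse table (illustrative; real hardware is far more involved).
        from typing import Dict, Tuple

        # Key: (start PC, live-in register values); value: live-out register values.
        ReuseKey = Tuple[int, Tuple[int, ...]]
        reuse_table: Dict[ReuseKey, Tuple[int, ...]] = {}

        def execute_trace(start_pc: int, live_in: Tuple[int, ...]) -> Tuple[int, ...]:
            """Placeholder for actually executing the trace's instructions."""
            return tuple(v + 1 for v in live_in)   # dummy computation

        def run_trace(start_pc: int, live_in: Tuple[int, ...]) -> Tuple[int, ...]:
            key = (start_pc, live_in)
            if key in reuse_table:                 # hit: skip fetch and execution entirely
                return reuse_table[key]
            live_out = execute_trace(start_pc, live_in)
            reuse_table[key] = live_out            # record outcome for future reuse
            return live_out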

    Enhanced sharing analysis techniques: a comprehensive evaluation

    Sharing, an abstract domain developed by D. Jacobs and A. Langen for the analysis of logic programs, derives useful aliasing information. It is well known that a commonly used core of techniques, such as the integration of Sharing with freeness and linearity information, can significantly improve the precision of the analysis. However, a number of other proposals for refined domain combinations have been circulating for years. One feature common to these proposals is that they do not seem to have undergone a thorough experimental evaluation, even with respect to the expected precision gains. In this paper we experimentally evaluate: helping Sharing with the definitely ground variables found using Pos, the domain of positive Boolean formulas; the incorporation of explicit structural information; a full implementation of the reduced product of Sharing and Pos; the issue of reordering the bindings in the computation of the abstract mgu; an original proposal for the addition of a new mode recording the set of variables that are deemed to be ground or free; a refined way of using linearity to improve the analysis; and the recovery of hidden information in the combination of Sharing with freeness information. Finally, we discuss whether tracking compoundness allows the computation of more sharing information.
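
    For readers unfamiliar with the domain: a Sharing element is a set of sharing groups (sets of program variables), and the abstract unification step relies on closing a set of groups under pairwise union (the "star union"). The following is a minimal sketch of that operation following the standard textbook definition, not code from any of the analysers evaluated here:

        # Star union: close a set of sharing groups under pairwise (hence arbitrary) union.
        from typing import FrozenSet, Set

        SharingGroup = FrozenSet[str]     # a set of program variables
        Sharing = Set[SharingGroup]       # an abstract Sharing element

        def star_union(groups: Sharing) -> Sharing:
            closure = set(groups)
            changed = True
            while changed:                # iterate to a fixpoint
                changed = False
                for g1 in list(closure):
                    for g2 in list(closure):
                        merged = g1 | g2
                        if merged not in closure:
                            closure.add(merged)
                            changed = True
            return closure

        # Example: {{X}, {Y}} closed under union gains the group {X, Y}.
        print(star_union({frozenset({"X"}), frozenset({"Y"})}))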

    Inductive Logic Programming for Compiler Tuning

    Flexible compiler-managed L0 buffers for clustered VLIW processors

    Wire delays are a major concern for current and forthcoming processors. One approach to attack this problem is to divide the processor into semi-independent units referred to as clusters. A cluster usually consists of a local register file and a subset of the functional units, while the data cache remains centralized. However, as technology evolves, the latency of such a centralized cache increases, leading to a significant performance impact. In this paper, we propose to include flexible low-latency buffers in each cluster in order to reduce the performance impact of higher cache latencies. The reduced number of entries in each buffer permits the design of flexible ways to map data from L1 to these buffers. The proposed L0 buffers are managed by the compiler, which is responsible for deciding which memory instructions make use of them. Effective instruction scheduling techniques are proposed to generate code that exploits these buffers. Results for the Mediabench benchmark suite show that the performance of a clustered VLIW processor with a unified L1 data cache is improved by 16% when such buffers are used. In addition, the proposed architecture also shows significant advantages over both MultiVLIW processors and clustered processors with a word-interleaved cache, two state-of-the-art designs with a distributed L1 data cache.
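
    As a loose illustration of compiler-managed placement (a hypothetical greedy heuristic, not the instruction scheduling technique proposed in the paper), the most latency-critical loads in each cluster could be granted the limited L0 entries first:

        # Hypothetical greedy assignment of load instructions to per-cluster L0 buffers.
        from dataclasses import dataclass

        @dataclass
        class Load:
            name: str
            cluster: int
            slack: int                     # scheduling slack; 0 means on the critical path

        L0_ENTRIES_PER_CLUSTER = 8         # assumed buffer size, for illustration only

        def assign_l0(loads: list[Load]) -> set[str]:
            """Return the names of loads that the compiler maps to an L0 buffer."""
            free: dict[int, int] = {}      # remaining entries per cluster
            chosen: set[str] = set()
            # Most critical loads (least slack) get the low-latency buffer first.
            for ld in sorted(loads, key=lambda l: l.slack):
                left = free.setdefault(ld.cluster, L0_ENTRIES_PER_CLUSTER)
                if left > 0:
                    chosen.add(ld.name)
                    free[ld.cluster] = left - 1
            return chosen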

    Strategic approaches to restoring ecosystems can triple conservation gains and halve costs.

    International commitments for ecosystem restoration add up to one-quarter of the world's arable land. Fulfilling them would ease global challenges such as climate change and biodiversity decline but could displace food production and impose financial costs on farmers. Here, we present a restoration prioritization approach capable of revealing these synergies and trade-offs, incorporating ecological and economic efficiencies of scale and modelling specific policy options. Using an actual large-scale restoration target of the Atlantic Forest hotspot, we show that our approach can deliver an eightfold increase in cost-effectiveness for biodiversity conservation compared with a baseline of non-systematic restoration. A compromise solution avoids 26% of the biome's current extinction debt of 2,864 plant and animal species (an increase of 257% compared with the baseline). Moreover, this solution sequesters 1 billion tonnes of CO2-equivalent (a 105% increase) while reducing costs by US$28 billion (a 57% decrease). Seizing similar opportunities elsewhere would offer substantial contributions to some of the greatest challenges for humankind.

    Comprehensive review on the state-of-the-arts and solutions to the test redundancy reduction problem with taxonomy

    The process of software testing is of utmost importance and requires a major allocation of resources. It has a substantial influence on the quality and dependability of software products. Nevertheless, as the quantity of test cases escalates, the feasibility of executing all of them diminishes, and the accompanying expenses related to preparation, execution time, and upkeep grow prohibitively high. The objective of Test Redundancy Reduction (TRR) is to mitigate this issue by determining a minimal subset of the test suite that satisfies all the requirements of the primary test suite while lowering the number of test cases. In order to attain this objective, multiple methodologies have been suggested, encompassing heuristics, meta-heuristics, exact algorithms, hybrid approaches, and machine-learning techniques. This work provides a thorough examination of prior research on TRR, addressing deficiencies and making a valuable contribution to the current scholarly understanding. The literature study encompasses a comprehensive examination of the complete chronology of TRR, incorporating all pertinent scholarly articles and practitioner-authored research papers published in English. This study aims to provide managers with valuable insights into the strengths and shortcomings of different TRR methodologies, enabling them to make well-informed decisions regarding the most appropriate approach for their specific needs. The primary objective of this study is to offer a comprehensive analysis of TRR and its consequential impact on mitigating expenses related to software testing. This study contributes to the extant literature by elucidating the present state of the art and delineating potential avenues for future research.
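
    Alongside exact formulations such as the ILP sketched earlier, the heuristic family covered by this review can be illustrated by the textbook greedy set-cover procedure below (a generic sketch, not any specific algorithm from the surveyed papers): repeatedly keep the test that satisfies the most still-unsatisfied requirements.

        # Textbook greedy set-cover heuristic for test redundancy reduction (illustrative).
        def greedy_reduce(coverage: dict[str, set[str]]) -> list[str]:
            """coverage maps each test case to the set of requirements it satisfies."""
            uncovered = set().union(*coverage.values())
            reduced = []
            while uncovered:
                # Pick the test covering the most still-uncovered requirements.
                best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
                if not coverage[best] & uncovered:
                    break                  # remaining requirements cannot be covered
                reduced.append(best)
                uncovered -= coverage[best]
            return reduced

        suite = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r3"}}
        print(greedy_reduce(suite))        # e.g. ['t1', 't2']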

    Enforcing Predictability of Many-cores with DCFNoC

    The ever-growing need for higher performance forces industry to include technology based on multi-processor systems on chip (MPSoCs) in their safety-critical embedded systems. MPSoCs include a network-on-chip (NoC) to interconnect the cores with one another and with memory and the rest of the shared resources. Unfortunately, the inclusion of NoCs makes it harder to guarantee time predictability, as network-level conflicts may occur. To overcome this problem, in this paper we propose DCFNoC, a new time-predictable NoC design paradigm in which conflicts within the network are eliminated by design. The paradigm builds on the Channel Dependency Graph (CDG) to deterministically avoid network conflicts. The network guarantees predictability to applications and is able to naturally inject messages using a TDM period equal to the optimal theoretical bound, without the need for a computationally demanding offline process. DCFNoC is integrated in a tile-based many-core system and adapted to its memory hierarchy. Our results show that DCFNoC guarantees time predictability while avoiding network interference among multiple running applications. DCFNoC always guarantees performance and improves wormhole performance in a 4 × 4 setting by a factor of 3.7× when interference traffic is injected; for an 8 × 8 network the differences are even larger. In addition, DCFNoC obtains a total area saving of 10.79% over a standard wormhole implementation.
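
    As a deliberately simplified illustration of TDM-based conflict avoidance (not DCFNoC's actual CDG-derived schedule, and ignoring its optimal-period result), each node of a mesh can be given a fixed injection slot within the TDM period so that no two nodes contend for the network in the same cycle:

        # Simplified TDM injection table: one slot per node within the period (illustrative).
        MESH_W, MESH_H = 4, 4
        TDM_PERIOD = MESH_W * MESH_H       # naive period; DCFNoC derives a tighter bound

        def injection_slot(x: int, y: int) -> int:
            """Slot in which node (x, y) is allowed to inject a message."""
            return y * MESH_W + x

        def may_inject(x: int, y: int, cycle: int) -> bool:
            return cycle % TDM_PERIOD == injection_slot(x, y)

        # Node (2, 1) may inject in cycles 6, 22, 38, ... of a 16-cycle period.
        print([c for c in range(48) if may_inject(2, 1, c)])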