
    Data-Mining Synthesised Schedulers for Hard Real-Time Systems

    The analysis of hard real-time systems, traditionally performed using RMA/PCP or simulation, is nowadays also studied as a scheduler synthesis problem, where one automatically constructs a scheduler which can guarantee avoidance of deadlock and deadline-miss system states. Even though this approach has the potential for finer control of a hard real-time system, using fewer resources and easily adapting to further quality aspects (memory/energy consumption, jitter minimisation, etc.), synthesised schedulers are usually extremely large and difficult to understand. Their large size is a consequence of their inherent precision, since they attempt to describe exactly the frontier between the safe and unsafe system states. It nevertheless hinders their application in practice, since it is extremely difficult to validate them or to use them to better understand the behaviour of the system. In this paper, we show how one can adapt data-mining techniques to decrease the size of a synthesised scheduler and force its inherent structure to appear, thus giving the system designer a wealth of additional information for understanding and optimising the scheduler and the underlying system. We show, in particular, how this can be used to obtain hints for a good distribution of tasks to different processing units, to optimise the scheduler itself (sometimes even removing it altogether in a safe manner), and to obtain both per-task and per-system views of the schedulability of the system.
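
    To make the idea concrete, here is a minimal sketch (our illustration, not the paper's actual tooling) of how data mining can compress a scheduler's exact state/decision table into a readable form: treat the table as a labelled dataset and learn a shallow decision tree from it, so the frontier between safe and unsafe states becomes a handful of readable rules. The state encoding and the use of scikit-learn are assumptions made for this example.

        # Learn a compact, readable summary of a synthesised scheduler's
        # decision table. The table below is hypothetical: each row encodes a
        # system state as (task1_ready, task2_ready, resource_free), and the
        # label says whether the scheduler lets task 1 run (1) or blocks it (0).
        from sklearn.tree import DecisionTreeClassifier, export_text

        states = [
            (1, 0, 1), (1, 1, 1), (1, 1, 0), (0, 1, 1),
            (1, 0, 0), (0, 0, 1), (0, 1, 0), (0, 0, 0),
        ]
        decisions = [1, 1, 0, 0, 0, 0, 0, 0]

        # A shallow tree stands in for the exact but enormous scheduler: it
        # approximates the frontier between safe and unsafe decisions with a
        # few human-readable tests.
        tree = DecisionTreeClassifier(max_depth=3).fit(states, decisions)
        print(export_text(tree, feature_names=["t1_ready", "t2_ready", "res_free"]))

    On this toy table the learned tree should collapse the eight exact entries into the single rule "run task 1 iff it is ready and the resource is free", which is the kind of structure the paper aims to expose.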

    Energy and precious fuels requirements of fuel alcohol production. Volume 4: Appendices G and H, methanol from coal

    Coal mine location, mining technology, energy consumption in mining, coal transport, and the potential availability of coal are discussed. Methanol production from coal is also discussed.

    Do the Fix Ingredients Already Exist? An Empirical Inquiry into the Redundancy Assumptions of Program Repair Approaches

    Much initial research on automatic program repair has focused on experimental results to probe its potential to find patches and reduce development effort. Relatively less effort has been put into understanding the hows and whys of such approaches. For example, a critical assumption of the GenProg technique is that certain bugs can be fixed by copying and re-arranging existing code. In other words, GenProg assumes that the fix ingredients already exist elsewhere in the code. In this paper, we formalize these assumptions around the concept of "temporal redundancy". A temporally redundant commit is composed only of what has already existed in previous commits. Our experiments show that a large proportion of commits that add existing code are temporally redundant. This validates the fundamental redundancy assumption of GenProg.

    Comment: ICSE - 36th IEEE International Conference on Software Engineering (2014)
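
    As a concrete (hypothetical) illustration of the paper's notion, the sketch below checks line-level temporal redundancy for a single commit using plain git commands: the commit is flagged redundant if every line it adds already occurs somewhere in the repository at the parent revision. The command choices, the line granularity, and the helper names are our assumptions, not the paper's methodology.

        # Rough line-granularity temporal-redundancy check over a git history.
        import subprocess

        def added_lines(repo, commit):
            """Return the non-empty lines added by `commit` (diff lines starting '+')."""
            out = subprocess.run(
                ["git", "-C", repo, "show", "--unified=0", "--format=", commit],
                capture_output=True, text=True, check=True).stdout
            return [l[1:].strip() for l in out.splitlines()
                    if l.startswith("+") and not l.startswith("+++") and l[1:].strip()]

        def line_exists_before(repo, commit, line):
            """Check whether `line` already occurs anywhere at the parent revision."""
            res = subprocess.run(
                ["git", "-C", repo, "grep", "-qF", "-e", line, commit + "^"],
                capture_output=True)
            return res.returncode == 0

        def is_temporally_redundant(repo, commit):
            # Pure deletions (no added lines) are conservatively not counted
            # as redundant in this sketch.
            lines = added_lines(repo, commit)
            return bool(lines) and all(line_exists_before(repo, commit, l)
                                       for l in lines)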

    Low-Effort Specification Debugging and Analysis

    Reactive synthesis deals with the automated construction of implementations of reactive systems from their specifications. To make the approach feasible in practice, systems engineers need effective and efficient means of debugging these specifications. In this paper, we provide techniques for report-based specification debugging, wherein salient properties of a specification are analyzed and the result is presented to the user in the form of a report. This provides a low-effort way to debug specifications, complementing high-effort techniques such as the simulation of synthesized implementations. We demonstrate the usefulness of our report-based specification debugging toolkit by providing examples in the context of generalized reactivity(1) synthesis.

    Comment: In Proceedings SYNT 2014, arXiv:1407.493
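
    To illustrate the flavour of report-based debugging in a far simpler setting than GR(1), the toy sketch below performs one salient check and reports on it: whether a specification's environment assumptions are jointly satisfiable at all, since unsatisfiable assumptions make every guarantee hold vacuously. The spec encoding and the single check are our assumptions for this example, not the paper's toolkit.

        # Toy report: flag environment assumptions that are jointly
        # unsatisfiable (a classic source of vacuously "correct" synthesis).
        from itertools import product

        def report(variables, assumptions):
            """Brute-force propositional check over Boolean `variables`.

            `assumptions` is a list of (name, predicate) pairs, each predicate
            a function over a dict {variable: bool}."""
            satisfying = [
                dict(zip(variables, bits))
                for bits in product([False, True], repeat=len(variables))
                if all(pred(dict(zip(variables, bits))) for _, pred in assumptions)
            ]
            if not satisfying:
                print("WARNING: assumptions unsatisfiable; guarantees hold vacuously.")
            else:
                print(f"Assumptions satisfiable in {len(satisfying)} of "
                      f"{2 ** len(variables)} states.")

        # Example: two contradictory assumptions about the same input signal.
        report(["req", "ack"],
               [("req_on", lambda s: s["req"]),
                ("req_off", lambda s: not s["req"])])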

    New Method of Measuring TCP Performance of IP Network using Bio-computing

    The performance of an Internet Protocol (IP) network can be measured via the Transmission Control Protocol (TCP), because TCP guarantees that data sent from one end of a connection actually reaches the other end, in the same order in which it was sent; otherwise an error is reported. Several methods have been used to measure TCP performance, among them genetic algorithms, neural networks and data mining, but all of them have weaknesses and cannot arrive at a correct measure of TCP performance. This paper proposes a new method of measuring TCP performance for a real-time IP network using bio-computing, in particular molecular computation, because it provides reliable results and can exploit all the facilities of phylogenetic analysis. The new method is applied in real time to a Biological Kurdish Messenger (BIOKM) model designed to measure TCP performance under two protocols, the File Transfer Protocol (FTP) and the Internet Relay Chat Daemon (IRCD). The application gives TCP performance results very close to those obtained from Little's law on the same model (BIOKM): the difference in utilization (busy/traffic intensity) and idle time between the new bio-computing method and Little's law was (nearly) 0.13%.

    Keywords: Bio-computing, TCP performance, Phylogenetic tree, Hybridized Model (Normalized), FTP, IRCD

    Comment: 17 Pages, 10 Figures, 5 Tables
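
    For reference, the Little's-law baseline that the abstract compares against can be computed in a few lines. The arrival and service rates below are made-up illustration values, not the paper's measurements.

        # Worked example of the queueing baseline. Little's law: L = lambda * W
        # (mean number in system = arrival rate times mean time in system).
        # For a single-server queue, utilization is rho = lambda / mu and the
        # idle fraction is 1 - rho.
        arrival_rate = 8.0    # messages per second entering the server (lambda)
        service_rate = 10.0   # messages per second the server can handle (mu)

        rho = arrival_rate / service_rate    # fraction of time the server is busy
        idle = 1.0 - rho                     # fraction of time the server is idle

        mean_time_in_system = 1.0 / (service_rate - arrival_rate)  # W for M/M/1
        mean_in_system = arrival_rate * mean_time_in_system        # L = lambda * W

        print(f"utilization rho = {rho:.2f}, idle fraction = {idle:.2f}")
        print(f"Little's law: L = {mean_in_system:.2f} messages in system")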

    Efficient mining of discriminative molecular fragments

    Frequent pattern discovery in structured data is receiving increasing attention in many application areas of science. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimate is available for task workloads, which show a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute's HIV-screening dataset.
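
    As a sketch of the dynamic load-balancing pattern that such a highly irregular, tree-structured search requires (a generic work pool, not the paper's Grid framework), the example below lets workers pull subtree-expansion tasks from a shared queue and push newly generated subtasks back, so no a-priori workload estimate is needed; expand() is a hypothetical stand-in for one subgraph-extension step.

        # Work-pool pattern for an irregular tree-structured search.
        import multiprocessing as mp
        import queue

        def expand(node):
            # Stand-in for one subgraph extension plus support counting; here
            # it just grows a small binary tree of toy "patterns".
            depth, label = node
            children = [(depth + 1, 2 * label + i) for i in (0, 1)] if depth < 3 else []
            return label, children

        def worker(tasks, results):
            while True:
                node = tasks.get()
                if node is None:              # poison pill: shut down
                    break
                pattern, children = expand(node)
                results.put(pattern)
                for child in children:        # dynamic load balancing: new
                    tasks.put(child)          # subtasks re-enter the pool
                tasks.task_done()

        if __name__ == "__main__":
            tasks, results = mp.JoinableQueue(), mp.Queue()
            tasks.put((0, 1))                 # root of the toy search tree
            workers = [mp.Process(target=worker, args=(tasks, results))
                       for _ in range(4)]
            for w in workers:
                w.start()
            tasks.join()                      # returns once the tree is exhausted
            for w in workers:
                tasks.put(None)
            for w in workers:
                w.join()
            mined = []
            try:
                while True:
                    mined.append(results.get_nowait())
            except queue.Empty:
                pass
            print(f"mined {len(mined)} toy patterns")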