Data-Mining Synthesised Schedulers for Hard Real-Time Systems
The analysis of hard real-time systems, traditionally performed using RMA/PCP or simulation, is nowadays also studied as a scheduler synthesis problem, where one automatically constructs a scheduler which can guarantee avoidance of deadlock and deadline-miss system states. Even though this approach has the potential for finer control of a hard real-time system, using fewer resources and easily adapting to further quality aspects (memory/energy consumption, jitter minimisation, etc.), synthesised schedulers are usually extremely large and difficult to understand. Their large size is a consequence of their inherent precision, since they attempt to describe exactly the frontier between the safe and unsafe system states. It nevertheless hinders their application in practice, since it is extremely difficult to validate them or to use them to better understand the behaviour of the system. In this paper, we show how one can adapt data-mining techniques to decrease the size of a synthesised scheduler and make its inherent structure apparent, thus giving the system designer a wealth of additional information for understanding and optimising the scheduler and the underlying system. We present, in particular, how this can be used to obtain hints for a good distribution of tasks to different processing units, to optimise the scheduler itself (sometimes even removing it altogether in a safe manner), and to obtain both per-task and per-system views of the schedulability of the system.
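The paper's specific technique is not reproduced here, but the following minimal sketch, in the same spirit, shows how an off-the-shelf decision-tree learner can compress an explicit state-to-decision scheduler table into a few readable rules; the state features, the toy table, and the use of scikit-learn are all assumptions for illustration.

    # A minimal sketch (not the paper's algorithm): compress an explicit
    # synthesised scheduler -- a table mapping system states to an
    # allow/block decision -- with a decision-tree learner, so that its
    # inherent structure becomes visible as a handful of rules.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each toy state: (task1_remaining, task2_remaining, resource_held);
    # the label is the synthesised scheduler's decision for that state.
    states = [
        (3, 5, 0), (2, 5, 0), (1, 5, 1), (3, 1, 1),
        (0, 4, 0), (2, 2, 1), (1, 1, 0), (0, 0, 0),
    ]
    decisions = [1, 1, 0, 0, 1, 0, 1, 1]  # 1 = dispatch allowed, 0 = blocked

    tree = DecisionTreeClassifier(max_depth=3).fit(states, decisions)
    print(export_text(tree, feature_names=[
        "task1_remaining", "task2_remaining", "resource_held"]))

On this toy table the learner collapses eight explicit states into a single rule on resource_held, which is exactly the kind of structure a designer can inspect and validate.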
Energy and precious fuels requirements of fuel alcohol production. Volume 4: Appendices G and H, methanol from coal
Coal mine location, mining technology, energy consumption in mining, coal transport, and potential availability of coal are discussed. Methanol from coal is also discussed.
Do the Fix Ingredients Already Exist? An Empirical Inquiry into the Redundancy Assumptions of Program Repair Approaches
Much initial research on automatic program repair has focused on experimental results to probe their potential to find patches and reduce development effort. Relatively less effort has been put into understanding the hows and whys of such approaches. For example, a critical assumption of the GenProg technique is that certain bugs can be fixed by copying and re-arranging existing code. In other words, GenProg assumes that the fix ingredients already exist elsewhere in the code. In this paper, we formalize these assumptions around the concept of "temporal redundancy". A temporally redundant commit is only composed of what has already existed in previous commits. Our experiments show that a large proportion of commits that add existing code are temporally redundant. This validates the fundamental redundancy assumption of GenProg.
Comment: ICSE - 36th IEEE International Conference on Software Engineering (2014)
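As a hedged illustration of the definition above (not the paper's tooling), the sketch below checks line-granularity temporal redundancy: a commit counts as temporally redundant if every line it adds already occurs somewhere in earlier commits. The function name and the toy history are hypothetical.

    # Hypothetical, line-granularity sketch of the "temporal redundancy"
    # check: a commit is temporally redundant if every line it adds
    # already occurred in previous commits.
    def is_temporally_redundant(added_lines, history_lines):
        """added_lines: lines introduced by the commit under test.
        history_lines: set of all lines seen in previous commits."""
        return all(line in history_lines for line in added_lines)

    # Toy usage: this commit only re-arranges lines that already exist.
    history = {"x = open(path)", "if x is None:", "    return -1"}
    commit = ["if x is None:", "    return -1"]
    print(is_temporally_redundant(commit, history))  # True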
Low-Effort Specification Debugging and Analysis
Reactive synthesis deals with the automated construction of implementations of reactive systems from their specifications. To make the approach feasible in practice, systems engineers need effective and efficient means of debugging these specifications.
In this paper, we provide techniques for report-based specification debugging, wherein salient properties of a specification are analyzed, and the result presented to the user in the form of a report. This provides a low-effort way to debug specifications, complementing high-effort techniques including the simulation of synthesized implementations.
We demonstrate the usefulness of our report-based specification debugging toolkit by providing examples in the context of generalized reactivity(1) synthesis.
Comment: In Proceedings SYNT 2014, arXiv:1407.493
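As a purely illustrative sketch of the report-based idea (not the paper's toolkit, which analyses GR(1) specifications in far greater depth), the code below scans a toy specification for one salient property, variables never referenced by any assumption or guarantee, and prints a small report; the spec format and the check are invented.

    # Invented illustration of report-based specification debugging:
    # flag input/output variables that no property mentions.
    import re

    spec = {
        "inputs":  ["request", "cancel"],
        "outputs": ["grant", "busy"],
        "assumptions": ["G F !request"],
        "guarantees":  ["G (request -> F grant)", "G !(grant & busy)"],
    }

    def report(spec):
        mentioned = set()
        for prop in spec["assumptions"] + spec["guarantees"]:
            mentioned |= set(re.findall(r"[a-z_]+", prop))
        for var in spec["inputs"] + spec["outputs"]:
            if var not in mentioned:
                print(f"warning: variable '{var}' is never referenced")
        print(f"{len(spec['assumptions'])} assumption(s), "
              f"{len(spec['guarantees'])} guarantee(s) analyzed")

    report(spec)  # warns that 'cancel' is never referenced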
Segmenting Publics
This research synthesis was commissioned by the National Co-ordinating Centre for Public Engagement (NCCPE) and the Economic and Social Research Council (ESRC) to examine audience segmentation methods and tools in the area of public engagement. It provides resources for assessing the ways in which segmentation tools might be used to enhance the various activities through which models of public engagement in higher education are implemented. Understanding the opinions, values, and motivations of members of the public is a crucial feature of successful engagement. Segmentation methods can offer potential resources to help understand the complex set of interests and attitudes that the public have towards higher education.
Key findings:
A number of existing segmentations address many of the areas of activity found in universities and HEIs. These include segmentations which inform strategic planning of communications; segmentations which inform the design of collaborative engagement activities by museums, galleries, and libraries; and segmentations that are used to identify under-represented users and consumers.
Segmentation is, on its own, only a tool, used in different ways in different contexts. The broader strategic rationale shaping the application and design of segmentation methods is a crucial factor in determining the utility of segmentation tools.
Four issues emerged as particularly important:
1. Segmentation exercises are costly and technically complex. Undertaking segmentations therefore requires a significant commitment of financial and professional resources by HEIs; the appropriate interpretation, analysis, and application of segmentation exercises also require high levels of professional capacity and expertise.
2. Undertaking a segmentation exercise has implications for the internal organisational operations of HEIs, not only for how they engage with external publics and stakeholders.
3. Segmentation tools are adopted to inform interventions of various sorts, and specifically to differentiate and sometimes discriminate between how groups of people are addressed and engaged.
4. For HEIs, the ethical issues and reputational risks which have been identified in this Research Synthesis as endemic to the application of segmentation methods for public purposes are particularly relevant.
New Method of Measuring TCP Performance of IP Network using Bio-computing
The performance of an Internet Protocol (IP) network can be measured via the Transmission Control Protocol (TCP), because TCP guarantees that data sent from one end of a connection actually reaches the other end, in the same order in which it was sent; otherwise an error is reported. Several methods have been used to measure TCP performance, among them genetic algorithms, neural networks, and data mining, but all of these methods have weaknesses and cannot arrive at an accurate measure of TCP performance. This paper proposes a new method of measuring TCP performance for a real-time IP network using bio-computing, in particular molecular computation, because it provides sound results and can exploit all the facilities of phylogenetic analysis. The new method is applied in real time to a Biological Kurdish Messenger (BIOKM) model designed to measure TCP performance under two protocols: File Transfer Protocol (FTP) and Internet Relay Chat Daemon (IRCD). This application gives results very close to the TCP performance obtained from Little's law using the same model (BIOKM); i.e., the difference between the percentage of utilization (busy time, or traffic intensity) and the idle time obtained from the new bio-computing-based method and those obtained from Little's law was (nearly) 0.13%.
KEYWORDS: Bio-computing, TCP performance, Phylogenetic tree, Hybridized Model (Normalized), FTP, IRCD
Comment: 17 Pages, 10 Figures, 5 Tables
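For reference, the Little's-law baseline the paper compares against can be sketched as follows: L = λW relates the mean number of packets in the system to the arrival rate and the mean time in the system, and for a single server the utilization (busy fraction) is the arrival rate times the mean service time. The traffic figures below are invented for illustration.

    # Minimal sketch of the Little's-law baseline (toy numbers).
    arrival_rate = 120.0        # packets per second (lambda)
    mean_service_time = 0.006   # seconds per packet (E[S])

    utilization = arrival_rate * mean_service_time  # rho = lambda * E[S]
    idle_time = 1.0 - utilization                   # fraction of time idle

    mean_wait = 0.05                                # time in system W, assumed
    packets_in_system = arrival_rate * mean_wait    # Little's law: L = lambda * W

    print(f"utilization: {utilization:.2%}, idle: {idle_time:.2%}")
    print(f"mean packets in system (L): {packets_in_system:.1f}")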
Efficient mining of discriminative molecular fragments
Frequent pattern discovery in structured data is receiving increasing attention in many application areas of science. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimation is available for task workloads, which show a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute's HIV-screening dataset.
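As a hedged illustration of the dynamic load-balancing idea (not the paper's Grid implementation), the sketch below keeps irregular tree-structured search tasks in a shared queue; idle workers pull tasks and push newly expanded subtasks back, so no a-priori workload estimate is needed. The expansion function is a stand-in for growing a molecular fragment.

    # Shared-queue dynamic scheduling for an irregular tree-shaped search.
    import queue, threading

    tasks = queue.Queue()
    results = []
    lock = threading.Lock()

    def expand(depth):
        """Stand-in for extending a fragment; returns child subtasks."""
        return [depth + 1] * (3 - depth) if depth < 3 else []

    def worker():
        while True:
            try:
                depth = tasks.get(timeout=0.2)  # idle workers time out and exit
            except queue.Empty:
                return
            with lock:
                results.append(depth)           # "count" this fragment
            for child in expand(depth):
                tasks.put(child)                # feed new work back to the pool

    tasks.put(0)  # root of the search tree
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"processed {len(results)} search nodes with 4 workers")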