Improving the efficiency of ILP systems
Inductive Logic Programming (ILP) is a promising technology for knowledge extraction applications. ILP has produced intelligible solutions for a wide variety of domains where it has been applied. ILP's lack of efficiency is, however, a major impediment to its scalability to applications requiring large amounts of data. In this paper we propose a set of techniques that improve the efficiency of ILP systems and make them more likely to scale up to knowledge extraction from large datasets. We propose and evaluate the lazy evaluation of examples to improve the efficiency of ILP systems. Lazy evaluation is essentially a way to avoid or postpone the evaluation of generated hypotheses (coverage tests). The techniques were evaluated using the IndLog system on ILP datasets referenced in the literature. The proposals lead to substantial efficiency improvements and are generally applicable to any ILP system.
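The core idea of lazy coverage evaluation can be sketched in a few lines: defer the expensive coverage test until a hypothesis's score is actually consulted. This is a minimal illustration, not IndLog's implementation; the `LazyHypothesis` class and its names are hypothetical.

```python
class LazyHypothesis:
    """Defers the expensive coverage test until the score is actually needed."""

    def __init__(self, clause, examples):
        self.clause = clause          # stand-in for a real hypothesis/theorem-prover call
        self.examples = examples
        self._coverage = None         # coverage test not run yet

    @property
    def coverage(self):
        # Evaluate lazily, and only once: hypotheses that are pruned before
        # their score is consulted never pay for a coverage test.
        if self._coverage is None:
            self._coverage = sum(1 for e in self.examples if self.clause(e))
        return self._coverage


# Constructing the hypothesis costs nothing; the test fires on first access.
h = LazyHypothesis(lambda e: e % 2 == 0, list(range(10)))
print(h.coverage)  # counts the even numbers in 0..9 → 5
```

In a real ILP system the predicate call would be a (potentially costly) proof attempt per example, which is exactly why avoiding or postponing it pays off.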
Improving the efficiency of ILP systems using an incremental language level search
We propose and evaluate a technique to improve the efficiency of an ILP system. The technique avoids the generation of useless hypotheses. It defines a language bias coupled with a search strategy and is called Incremental Language Level Search (ILLS). The technique has been encoded in the ILP system IndLog. The proposal leads to substantial efficiency improvements on a set of ILP datasets referenced in the literature.
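The search strategy can be sketched as a loop that starts with the most restrictive language level and widens only when the search fails at the current level. This is an illustrative sketch; using maximum clause length as the "level" is an assumption, not ILLS's exact bias definition.

```python
def search_at_level(level, candidates):
    """Search only among hypotheses expressible at the current language level."""
    admissible = [c for c in candidates if c["length"] <= level]
    good = [c for c in admissible if c["accuracy"] >= 0.9]
    return max(good, key=lambda c: c["accuracy"]) if good else None


def incremental_language_level_search(candidates, max_level=4):
    # Widen the language only on failure, so hypotheses outside the current
    # level (likely useless for now) are never generated or tested.
    for level in range(1, max_level + 1):
        best = search_at_level(level, candidates)
        if best is not None:
            return level, best
    return None


candidates = [
    {"length": 3, "accuracy": 0.95},
    {"length": 1, "accuracy": 0.70},
    {"length": 2, "accuracy": 0.92},
]
print(incremental_language_level_search(candidates))  # succeeds at level 2
```

The point of the coupling between bias and strategy is visible here: the level-3 candidate is never even considered, because a satisfactory hypothesis already exists in the smaller level-2 language.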
Efficient resources assignment schemes for clustered multithreaded processors
New feature sizes provide a larger number of transistors per chip that architects could use to further exploit instruction-level parallelism. However, these technologies also bring new challenges that complicate conventional monolithic processor designs. On the one hand, exploiting instruction-level parallelism is yielding diminishing returns, so exploiting other sources of parallelism, such as thread-level parallelism, is needed to keep raising performance with reasonable hardware complexity. On the other hand, clustered architectures have been widely studied as a way to reduce the inherent complexity of current monolithic processors. This paper studies the synergies and trade-offs between two concepts, clustering and simultaneous multithreading (SMT), in order to understand why conventional SMT resource assignment schemes are not as effective in clustered processors. These trade-offs are used to propose a novel resource assignment scheme that achieves an average speedup of 17.6% over ICOUNT while improving fairness by 24%.
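For context, the ICOUNT baseline the abstract compares against is simple to sketch: each cycle, fetch from the thread with the fewest instructions in flight, so no single thread monopolizes pipeline resources. This toy model is only the selection rule, not a processor simulator.

```python
def icount_pick(inflight):
    """ICOUNT fetch policy: pick the thread with the fewest in-flight instructions.

    inflight: dict mapping thread id -> instructions currently in the pipeline.
    """
    return min(inflight, key=inflight.get)


inflight = {"T0": 12, "T1": 4, "T2": 9}
print(icount_pick(inflight))  # T1 has the fewest in-flight instructions
```

In a clustered processor, a global in-flight count like this can mislead the fetch stage, since it says nothing about how a thread's instructions are distributed across clusters; that mismatch is the kind of trade-off the paper's proposed scheme is designed to address.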
A Comparative Study of Scheduling Techniques for Multimedia Applications on SIMD Pipelines
Parallel architectures are essential in order to take advantage of the
parallelism inherent in streaming applications. One particular branch of these
employ hardware SIMD pipelines. In this paper, we analyse several scheduling
techniques, namely ad hoc overlapped execution, modulo scheduling and modulo
scheduling with unrolling, all of which aim to efficiently utilize the special
architecture design. Our investigation focuses on improving throughput while
analysing other metrics that are important for streaming applications, such as
register pressure, buffer sizes and code size. Through experiments conducted on
several media benchmarks, we present and discuss trade-offs involved when
selecting any one of these scheduling techniques.
Comment: Presented at DATE Friday Workshop on Heterogeneous Architectures and Design Methods for Embedded Image Systems (HIS 2015) (arXiv:1502.07241)
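Modulo scheduling, mentioned above, begins by computing a minimum initiation interval (MII) as the larger of a resource-constrained bound and a recurrence-constrained bound. The sketch below shows that standard computation with illustrative numbers; the resource classes and dependence cycles are assumptions, not figures from the paper.

```python
import math


def res_mii(op_counts, units):
    # Resource bound: for each unit class, ceil(uses per iteration / units);
    # the busiest class limits how often a new iteration can start.
    return max(math.ceil(op_counts[r] / units[r]) for r in op_counts)


def rec_mii(cycles):
    # Recurrence bound: ceil(total latency / iteration distance) over each
    # dependence cycle in the loop body.
    return max(math.ceil(lat / dist) for lat, dist in cycles)


op_counts = {"alu": 6, "mem": 3}   # operations per loop iteration (illustrative)
units = {"alu": 2, "mem": 1}       # pipeline resources per class (illustrative)
cycles = [(4, 1), (6, 2)]          # (cycle latency, iteration distance)

mii = max(res_mii(op_counts, units), rec_mii(cycles))
print(mii)  # → 4: the recurrence (4 cycles, distance 1) dominates
```

The scheduler then tries to place the loop body at this interval, increasing it on failure; a tighter interval means higher throughput, at the cost of the register pressure and buffer-size effects the study measures.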
Optimization in Knowledge-Intensive Crowdsourcing
We present SmartCrowd, a framework for optimizing collaborative
knowledge-intensive crowdsourcing. SmartCrowd distinguishes itself by
accounting for human factors in the process of assigning tasks to workers.
Human factors designate workers' expertise in different skills, their expected
minimum wage, and their availability. In SmartCrowd, we formulate task
assignment as an optimization problem, and rely on pre-indexing workers and
maintaining the indexes adaptively, in such a way that the task assignment
process gets optimized both qualitatively, and computation time-wise. We
present rigorous theoretical analyses of the optimization problem and propose
optimal and approximation algorithms. We finally perform extensive performance
and quality experiments using real and synthetic data to demonstrate that
adaptive indexing in SmartCrowd is necessary to achieve efficient high quality
task assignment.
Comment: 12 pages
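The role of pre-indexing in task assignment can be sketched as follows: workers are indexed by skill up front, so assigning a task consults only the relevant index entry instead of scanning all workers. The index structure, scoring, and worker fields here are illustrative assumptions, not SmartCrowd's actual algorithms.

```python
from collections import defaultdict

workers = [
    {"id": "w1", "skills": {"nlp": 0.9}, "wage": 5, "available": True},
    {"id": "w2", "skills": {"nlp": 0.6, "vision": 0.8}, "wage": 3, "available": True},
    {"id": "w3", "skills": {"vision": 0.95}, "wage": 8, "available": False},
]

# Pre-index workers by skill; in an adaptive setting this index would be
# maintained as workers' availability and wages change.
index = defaultdict(list)
for w in workers:
    for skill, level in w["skills"].items():
        index[skill].append((level, w))


def assign(task_skill, budget):
    """Pick the most skilled available worker within budget for one task."""
    candidates = [(lvl, w) for lvl, w in index[task_skill]
                  if w["available"] and w["wage"] <= budget]
    if not candidates:
        return None
    return max(candidates, key=lambda t: t[0])[1]["id"]


print(assign("nlp", budget=6))     # w1: highest nlp skill within budget
print(assign("vision", budget=6))  # w3 is unavailable, so w2 is chosen
```

This greedy per-task rule only illustrates why indexing by human factors (skill, wage, availability) speeds up assignment; the paper formulates the full problem as a joint optimization with optimal and approximation algorithms.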