Automatic Music Composition using Answer Set Programming
Music composition used to be a pen-and-paper activity. These days, music
is often composed with the aid of computer software, even to the point where
the computer composes parts of the score autonomously. The composition of most
styles of music is governed by rules. We show that by approaching the
automation, analysis and verification of composition as a knowledge
representation task and formalising these rules in a suitable logical language,
powerful and expressive intelligent composition tools can be easily built. This
application paper describes the use of answer set programming to construct an
automated system, named ANTON, that can compose melodic, harmonic and rhythmic
music, diagnose errors in human compositions and serve as a computer-aided
composition tool. The combination of harmonic, rhythmic and melodic composition
in a single framework makes ANTON unique in the growing area of algorithmic
composition. With near real-time composition, ANTON reaches the point where it
can not only be used as a component in an interactive composition tool but also
has the potential for live performances and concerts or automatically generated
background music in a variety of applications. With the use of a fully
declarative language and an "off-the-shelf" reasoning engine, ANTON provides
the human composer a tool which is significantly simpler, more compact and more
versatile than other existing systems. This paper has been accepted for
publication in Theory and Practice of Logic Programming (TPLP).
Comment: 31 pages, 10 figures. Extended version of our ICLP2008 paper.
Formatted following TPLP guidelines.
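The abstract above casts composition as rule-based constraint solving. A minimal pure-Python generate-and-test sketch of that idea (the scale, the melodic rules, and all names here are invented for illustration; ANTON's actual encoding is written in answer set programming, not Python):

```python
from itertools import product

# Hypothetical, simplified illustration of rule-based composition
# (not ANTON's actual ASP encoding): treat a four-note melody as a
# constraint-satisfaction problem and enumerate the solutions, much
# as an answer-set solver enumerates stable models.

SCALE = [0, 2, 4, 5, 7, 9, 11]  # C major, as semitone offsets

def satisfies_rules(melody):
    """Toy melodic rules: start and end on the tonic, and never
    leap by more than a fifth (7 semitones) between notes."""
    if melody[0] != 0 or melody[-1] != 0:
        return False
    return all(abs(b - a) <= 7 for a, b in zip(melody, melody[1:]))

def compose(length=4):
    """Enumerate all melodies over the scale that satisfy the rules."""
    return [m for m in product(SCALE, repeat=length) if satisfies_rules(m)]

melodies = compose()
print(len(melodies))   # 25 rule-satisfying four-note melodies
print(melodies[0])     # (0, 0, 0, 0)
```

A real ASP encoding would state the same rules declaratively as integrity constraints, leaving the search entirely to the solver rather than to explicit enumeration.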
Workflow Partitioning and Deployment on the Cloud using Orchestra
Orchestrating service-oriented workflows is typically based on a design model
that routes both data and control through a single point - the centralised
workflow engine. This causes scalability problems that include the unnecessary
consumption of the network bandwidth, high latency in transmitting data between
the services, and performance bottlenecks. These problems are highly prominent
when orchestrating workflows that are composed from services dispersed across
distant geographical locations. This paper presents a novel workflow
partitioning approach, which attempts to improve the scalability of
orchestrating large-scale workflows. It permits the workflow computation to be
moved towards the services providing the data in order to garner optimal
performance results. This is achieved by decomposing the workflow into smaller
sub-workflows for parallel execution, and determining the most appropriate
network locations to which these sub-workflows are transmitted and subsequently
executed. This paper demonstrates the efficiency of our approach using a set of
experimental workflows that are orchestrated over Amazon EC2 and across several
geographic network regions.
Comment: To appear in Proceedings of the IEEE/ACM 7th International Conference
on Utility and Cloud Computing (UCC 2014).
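The partitioning idea can be sketched as grouping workflow tasks by the network region of the service that executes them, so that only the remaining cross-region edges require wide-area data transfer. This is a hypothetical illustration (task names, region names, and the grouping criterion are invented, not Orchestra's actual algorithm):

```python
from collections import defaultdict

# Hypothetical sketch (not the paper's Orchestra implementation):
# partition a service workflow DAG into sub-workflows by grouping
# tasks that run in the same network region, so each sub-workflow
# executes near the services holding its data.

def partition_by_region(tasks, region_of):
    """tasks: dict task -> list of successor tasks (a DAG).
    region_of: dict task -> region name.
    Returns {region: {'tasks': [...], 'cross_edges': [...]}} where
    cross_edges are the only transfers that must leave the region."""
    parts = defaultdict(lambda: {"tasks": [], "cross_edges": []})
    for t in tasks:
        parts[region_of[t]]["tasks"].append(t)
    for src, succs in tasks.items():
        for dst in succs:
            if region_of[src] != region_of[dst]:
                parts[region_of[src]]["cross_edges"].append((src, dst))
    return dict(parts)

# Example: four tasks spread over two cloud regions.
workflow = {"fetch": ["clean"], "clean": ["merge"],
            "scan": ["merge"], "merge": []}
regions = {"fetch": "us-east", "clean": "us-east",
           "scan": "eu-west", "merge": "eu-west"}
parts = partition_by_region(workflow, regions)
print(parts["us-east"]["cross_edges"])  # [('clean', 'merge')]
```

Only one edge crosses regions here; every other transfer stays local to the sub-workflow that produced the data.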
On-Line Instruction-checking in Pipelined Microprocessors
Microprocessor performance has increased by more than five orders of magnitude
over the last three decades. As technology scales down, these components become
inherently unreliable, posing major design and test challenges. This paper
proposes an instruction-checking architecture to detect erroneous instruction
executions caused by both permanent and transient errors in the internal logic
of a microprocessor. Monitoring the correct activation sequence of a set of
predefined microprocessor control/status signals allows distinguishing between
correctly and incorrectly executed instructions.
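The monitoring principle can be sketched as a table lookup: each instruction class has a known activation sequence of control/status signals, and any observed trace that deviates flags a faulty execution. The signal names and sequences below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch of on-line instruction checking: compare the
# observed control-signal activation sequence of each executed
# instruction against a predefined golden sequence for its class.

EXPECTED = {
    "load":  ["fetch", "decode", "mem_read", "reg_write"],
    "store": ["fetch", "decode", "mem_write"],
    "alu":   ["fetch", "decode", "execute", "reg_write"],
}

def check_instruction(opcode, observed_signals):
    """Return True iff the observed control-signal sequence matches
    the predefined activation sequence for this instruction class."""
    return EXPECTED.get(opcode) == list(observed_signals)

# A transient fault that skips the register write-back is detected:
print(check_instruction("alu", ["fetch", "decode", "execute"]))  # False
print(check_instruction("load",
                        ["fetch", "decode", "mem_read", "reg_write"]))  # True
```

In hardware this comparison would be a small finite-state machine running alongside the pipeline, not a dictionary lookup, but the detection criterion is the same.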
Enhancing community detection using a network weighting strategy
A community within a network is a group of vertices densely connected to each
other but less connected to the vertices outside. The problem of detecting
communities in large networks plays a key role in a wide range of research
areas, e.g. Computer Science, Biology and Sociology. Most existing
algorithms for finding communities rely on the topological features of the
network and often do not scale well to large, real-life instances.
In this article we propose a strategy to enhance existing community detection
algorithms by adding a pre-processing step in which edges are weighted
according to their centrality w.r.t. the network topology. In our approach, the
centrality of an edge reflects its contribution to making arbitrary graph
traversals, i.e., the spreading of messages over the network, as short as
possible. Our strategy effectively complements information about network
topology and it can be used as an additional tool to enhance community
detection. The computation of edge centralities is carried out by performing
multiple random walks of bounded length on the network. Our method makes the
computation of edge centralities feasible also on large-scale networks. It has
been tested in conjunction with three state-of-the-art community detection
algorithms, namely the Louvain method, COPRA and OSLOM. Experimental results
show that our method raises the accuracy of existing algorithms both on
synthetic and real-life datasets.
Comment: 28 pages, 2 figures.
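The pre-processing step can be sketched as follows: estimate an edge's centrality by counting how often it is traversed by many short random walks, then hand the counts to a community detection algorithm as edge weights. The walk count, walk length, and example graph below are invented parameters, not the paper's:

```python
import random
from collections import Counter

# Hypothetical sketch of edge-centrality estimation via multiple
# random walks of bounded length (parameter choices are invented):
# edges traversed often by short walks receive higher weight.

def edge_centrality(adj, walks=2000, length=5, seed=0):
    """adj: dict node -> list of neighbours (undirected graph).
    Returns a Counter mapping each undirected edge (u, v), u < v,
    to its traversal count over bounded-length random walks."""
    rng = random.Random(seed)
    counts = Counter()
    nodes = list(adj)
    for _ in range(walks):
        u = rng.choice(nodes)
        for _ in range(length):
            if not adj[u]:
                break
            v = rng.choice(adj[u])
            counts[tuple(sorted((u, v)))] += 1
            u = v
    return counts

# Two triangles joined by a single bridge edge (2, 3): walks that
# cross between the two communities must traverse the bridge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
weights = edge_centrality(adj)
print(weights[(2, 3)])  # traversal count of the bridge edge
```

Because each walk has bounded length, the total work is linear in the number of walks, which is what makes the computation feasible on large-scale networks.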
Knowledge revision in systems based on an informed tree search strategy : application to cartographic generalisation
Many real-world problems can be expressed as optimisation problems. Solving
such a problem means finding, among all possible solutions, the one that
maximises an evaluation function. One approach to solving this kind of problem
is to use an informed search strategy. The principle of such a strategy is
to use problem-specific knowledge, beyond the definition of the problem itself,
to find solutions more efficiently than with an uninformed strategy. This
approach requires defining problem-specific knowledge (heuristics), and the
efficiency and effectiveness of systems based on it depend directly on the
quality of the knowledge used. Unfortunately, acquiring and maintaining such
knowledge can be tedious. The objective of the work presented in this paper is to
propose an automatic knowledge revision approach for systems based on an
informed tree search strategy. Our approach consists in analysing the system
execution logs and revising knowledge based on these logs by modelling the
revision problem as a knowledge space exploration problem. We present an
experiment we carried out in an application domain where informed search
strategies are often used: cartographic generalisation.
Comment: Keywords: Knowledge Revision; Problem Solving; Informed Tree Search
Strategy; Cartographic Generalisation. Paris, France (2008).
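The revision loop can be sketched as exploring a small knowledge space of heuristic parameters and keeping the candidate that makes the informed search cheapest when replayed on logged instances. Everything below is a hypothetical toy (a weighted best-first search on a 1-D line); the paper's revision operators and execution logs are far richer:

```python
import heapq

# Hypothetical sketch of knowledge revision for an informed tree
# search: the heuristic's weight is the "knowledge", and revision
# explores candidate weights against logged problem instances.

def informed_search(start, goal, weight):
    """Weighted best-first search on the integer line; the heuristic
    is weight * distance-to-goal. Returns the number of expanded
    nodes, a proxy for search cost."""
    frontier = [(weight * abs(goal - start), 0, start)]
    seen, expanded = set(), 0
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node in seen:
            continue
        seen.add(node)
        expanded += 1
        if node == goal:
            return expanded
        for nxt in (node - 1, node + 1):
            heapq.heappush(frontier,
                           (g + 1 + weight * abs(goal - nxt), g + 1, nxt))
    return expanded

def revise_knowledge(logged_instances, candidates):
    """Knowledge-space exploration: keep the heuristic weight that
    minimises total expansions when replaying the logged instances."""
    return min(candidates,
               key=lambda w: sum(informed_search(s, g, w)
                                 for s, g in logged_instances))

logs = [(0, 10), (3, -5), (2, 20)]   # logged (start, goal) instances
best = revise_knowledge(logs, candidates=[0.0, 0.5, 1.0, 2.0])
print(best)
```

With weight 0 the search degenerates into an uninformed (uniform-cost) strategy and expands far more nodes, so the revision step settles on an informative weight.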
Passages in Graphs
Directed graphs can be partitioned into so-called passages. A passage P is a
set of edges such that any two edges sharing the same initial vertex or sharing
the same terminal vertex are both inside or are both outside of P. Passages
were first identified in the context of process mining where they are used to
successfully decompose process discovery and conformance checking problems. In
this article, we examine the properties of passages. We will show that passages
are closed under set operators such as union, intersection and difference.
Moreover, any passage is composed of so-called minimal passages. These
properties can be exploited when decomposing graph-based analysis and
computation problems.
Comment: 8 pages.
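The defining property quoted above is directly checkable. A small sketch (graph and edge names invented; the notation is simplified from the paper's): an edge set P is a passage iff any two edges sharing an initial vertex, or sharing a terminal vertex, are either both inside or both outside P.

```python
# Hypothetical sketch of the passage property for a directed graph.

def is_passage(edges, P):
    """edges: set of (u, v) pairs; P: candidate subset of edges.
    For every edge in P, any edge sharing its initial or terminal
    vertex must also be in P (pairs entirely outside P are fine)."""
    P = set(P)
    for e in P:
        for f in edges:
            shares = (e[0] == f[0]) or (e[1] == f[1])
            if shares and f not in P:
                return False
    return True

E = {("a", "x"), ("a", "y"), ("b", "y"), ("c", "z")}
# {(a,x),(a,y),(b,y)} is closed under shared endpoints, so it is a
# passage; {(a,x)} alone is not, since (a,y) shares the vertex 'a'.
print(is_passage(E, {("a", "x"), ("a", "y"), ("b", "y")}))  # True
print(is_passage(E, {("a", "x")}))                          # False
```

The closure properties stated in the abstract (union, intersection, difference of passages) can be verified with the same predicate on small examples.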
On Characterizing the Data Access Complexity of Programs
Technology trends will cause data movement to account for the majority of
energy expenditure and execution time on emerging computers. Therefore,
computational complexity will no longer be a sufficient metric for comparing
algorithms, and a fundamental characterization of data access complexity will
be increasingly important. The problem of developing lower bounds for data
access complexity has been modeled using the formalism of Hong & Kung's
red/blue pebble game for computational directed acyclic graphs (CDAGs).
However, previously developed approaches to lower-bound analysis for the
red/blue pebble game are of very limited effectiveness when applied to CDAGs of
real programs, whose computations comprise multiple sub-computations with
differing DAG structure. We address this problem by developing an approach for
effectively composing lower bounds based on graph decomposition. We also
develop a static analysis algorithm to derive the asymptotic data-access lower
bounds of programs, as a function of the problem size and cache size.
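The distinction between computational and data-access complexity can be illustrated with a tiny two-level memory simulation (this is an invented example, not the paper's lower-bound machinery): two address traces performing the same accesses incur very different data movement depending on access order, which is exactly what a pebble-game analysis bounds.

```python
from collections import OrderedDict

# Hypothetical sketch: count cache misses (data movement between the
# two memory levels) under a small fully-associative LRU cache, for
# two traces with identical computational cost.

def lru_misses(trace, cache_size):
    """Return the number of misses for an address trace under LRU."""
    cache, misses = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)
        else:
            misses += 1
            if len(cache) == cache_size:
                cache.popitem(last=False)  # evict least recently used
        cache[addr] = True
    return misses

# Two sweeps over 16 addresses with an 8-slot cache: the plain scan
# reuses nothing, while a blocked (tiled) order halves the misses.
seq = list(range(16)) * 2
blocked = list(range(8)) * 2 + list(range(8, 16)) * 2
print(lru_misses(seq, 8))      # 32: every access misses
print(lru_misses(blocked, 8))  # 16: tiling halves the data movement
```

Both traces issue 32 accesses, so their computational complexity is identical; only their data-access complexity differs, which is why the latter needs its own characterization.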