Knowledge Representation Concepts for Automated SLA Management
Outsourcing of complex IT infrastructure to IT service providers has
increased substantially in recent years. IT service providers must be
able to fulfil their service-quality commitments based upon predefined Service
Level Agreements (SLAs) with the service customer. They need to manage, execute
and maintain thousands of SLAs for different customers and different types of
services, which requires new levels of flexibility and automation not available
with current technology. The complexity of contractual logic in SLAs
requires new forms of knowledge representation to automatically draw inferences
and execute contractual agreements. A logic-based approach provides several
advantages including automated rule chaining allowing for compact knowledge
representation as well as flexibility to adapt to rapidly changing business
requirements. We suggest adequate logical formalisms for representation and
enforcement of SLA rules and describe a proof-of-concept implementation. The
article describes selected formalisms of the ContractLog KR and their adequacy
for automated SLA management and presents results of experiments to demonstrate
flexibility and scalability of the approach.
Comment: Paschke, A. and Bichler, M.: Knowledge Representation Concepts for
Automated SLA Management, Int. Journal of Decision Support Systems (DSS),
submitted 19th March 200
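The automated rule chaining the abstract refers to can be illustrated with a minimal forward-chaining sketch. This is a hypothetical Python illustration, not the ContractLog KR itself; the rule and fact names are invented for the example.

```python
# Minimal forward-chaining sketch for SLA-style rules (hypothetical,
# not the ContractLog KR): each rule derives its conclusion once all
# of its premises are known, repeating until a fixed point is reached.
def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example SLA rules (invented): excessive downtime breaches the SLA,
# and a breach triggers the penalty clause.
rules = [
    (["downtime_over_threshold"], "sla_breached"),
    (["sla_breached"], "penalty_due"),
]
derived = forward_chain(["downtime_over_threshold"], rules)
```

Chaining the two rules lets a compact rule base cover many contract situations without enumerating every consequence explicitly, which is the compactness advantage the abstract claims for a logic-based approach.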
An automated ETL for online datasets
While using online datasets for machine learning is commonplace today, the quality of these datasets impacts the performance
of prediction algorithms. One method for improving the semantics of new data sources is to map these sources to a common
data model or ontology. While semantic and structural heterogeneities must still be resolved, this provides a well-established
approach to producing clean datasets suitable for machine learning and analysis. However, when there is a requirement for
close to real-time usage of online data, a method for dynamic Extract-Transform-Load (ETL) of new data sources must be developed.
In this work, we present a framework for integrating online and enterprise data sources, in close to real time, to provide
datasets for machine learning and predictive algorithms. An exhaustive evaluation compares a human-built data transformation
process with our system's machine-generated ETL process, with very favourable results, illustrating the value and impact of
an automated approach.
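A machine-generated ETL mapping of the kind the abstract describes can be sketched as a table of field renames and type conversions that align a source record with a common data model. The field names and mapping shape below are assumptions for illustration, not the paper's actual system.

```python
# Hypothetical sketch of a generated ETL mapping: the Transform step is
# represented as target_field -> (source_field, conversion), aligning an
# online source's records with a common data model.
from datetime import datetime

mapping = {
    "temperature_c": ("temp", float),
    "observed_at": ("ts", datetime.fromisoformat),
    "station_id": ("id", str),
}

def transform(record, mapping):
    """Apply the mapping to one extracted source record."""
    return {tgt: fn(record[src]) for tgt, (src, fn) in mapping.items()}

row = transform({"temp": "21.5", "ts": "2020-01-01T12:00:00", "id": 7}, mapping)
```

Representing the transformation as data rather than code is what makes it feasible for a system to generate the mapping automatically and apply it in close to real time as new sources appear.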
Logic Programming Applications: What Are the Abstractions and Implementations?
This article presents an overview of applications of logic programming,
classifying them based on the abstractions and implementations of logic
languages that support the applications. The three key abstractions are join,
recursion, and constraint. Their essential implementations are for-loops, fixed
points, and backtracking, respectively. The corresponding kinds of applications
are database queries, inductive analysis, and combinatorial search,
respectively. We also discuss language extensions and programming paradigms,
summarize example application problems by application areas, and touch on
example systems that support variants of the abstractions with different
implementations.
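The pairing of the recursion abstraction with a fixed-point implementation can be shown concretely. The example below is an illustration in Python, not code from the article: transitive closure of a relation, the classic recursive query, computed by iterating to a fixed point.

```python
# Illustrative sketch (not from the article): the "recursion" abstraction
# implemented as a fixed-point computation. Transitive closure of a graph
# given as a set of edges: keep joining the relation with itself until no
# new pairs appear.
def transitive_closure(edges):
    closure = set(edges)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure
               if b == c and (a, d) not in closure}
        if not new:
            return closure
        closure |= new

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

The set comprehension inside the loop is the join abstraction (implemented here as nested for-loops), and the loop itself is the fixed point, so the sketch exercises two of the three abstractions the article classifies.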
Reducing the Number of Annotations in a Verification-oriented Imperative Language
Automated software verification is a very active field of research that has
made enormous progress in both theoretical and practical aspects. Recently, a
substantial amount of research effort has been put into applying these techniques
on top of mainstream programming languages. These languages typically provide
powerful features such as reflection, aliasing and polymorphism, which are handy
for practitioners but, in contrast, make verification a real challenge. In this
work we present Pest, a simple experimental, while-style, multiprocedural,
imperative programming language which was conceived with verifiability as one
of its main goals. This language forces developers to concurrently think about
both the statements needed to implement an algorithm and the assertions
required to prove its correctness. In order to aid programmers, we propose
several techniques to reduce the number and complexity of annotations required
to successfully verify their programs. In particular, we show that high-level
iteration constructs may alleviate the need for providing complex loop
annotations.
Comment: 15 pages, 8 figures
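The claim that high-level iteration constructs reduce annotation burden can be illustrated outside Pest. The sketch below uses Python (Pest itself is not shown here): an explicit while-loop needs a loop invariant for a verifier to prove it correct, whereas a high-level construct carries its specification implicitly.

```python
# Hypothetical illustration (not Pest code): summing a list two ways.
def sum_while(xs):
    total, i = 0, 0
    while i < len(xs):
        # A verifier would need the loop invariant: total == sum(xs[:i]),
        # supplied as an annotation by the programmer.
        total += xs[i]
        i += 1
    return total

def sum_highlevel(xs):
    # The built-in aggregate expresses the same computation with no
    # loop annotation for the programmer to write.
    return sum(xs)
```

The reasoning the invariant encodes does not disappear, but with a high-level construct it is established once for the construct itself rather than re-stated at every use site, which is the annotation saving the abstract points to.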