Online Reinforcement Learning for Dynamic Multimedia Systems
In our previous work, we proposed a systematic cross-layer framework for
dynamic multimedia systems, which allows each layer to make autonomous and
foresighted decisions that maximize the system's long-term performance, while
meeting the application's real-time delay constraints. The proposed solution
solved the cross-layer optimization offline, under the assumption that the
multimedia system's probabilistic dynamics were known a priori. In practice,
however, these dynamics are unknown a priori and therefore must be learned
online. In this paper, we address this problem by allowing the multimedia
system layers to learn, through repeated interactions with each other, to
autonomously optimize the system's long-term performance at run-time. We
propose two reinforcement learning algorithms for optimizing the system under
different design constraints: the first algorithm solves the cross-layer
optimization in a centralized manner, and the second solves it in a
decentralized manner. We analyze both algorithms in terms of their required
computation, memory, and inter-layer communication overheads. After noting that
the proposed reinforcement learning algorithms learn too slowly, we introduce a
complementary accelerated learning algorithm that exploits partial knowledge
about the system's dynamics in order to dramatically improve the system's
performance. In our experiments, we demonstrate that decentralized learning can
perform as well as centralized learning, while enabling the layers to act
autonomously. Additionally, we show that existing application-independent
reinforcement learning algorithms, and existing myopic learning algorithms
deployed in multimedia systems, perform significantly worse than our proposed
application-aware and foresighted learning methods.
Comment: 35 pages, 11 figures, 10 tables
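The centralized learner described above can be illustrated with a generic tabular Q-learning sketch. The states, actions, rewards, and transition dynamics below are toy stand-ins, not the paper's actual cross-layer formulation; only the update rule (a foresighted, discounted value update) is the standard technique the abstract refers to.

```python
import random

# Hypothetical stand-ins for the cross-layer state/action spaces.
STATES = range(4)    # e.g. quantized buffer/channel conditions
ACTIONS = range(2)   # e.g. coding-parameter choices
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

def reward(s, a):
    # Toy reward: action matching the state's parity is "good" (illustration only).
    return 1.0 if a == s % 2 else 0.0

def q_learning(steps=5000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    s = 0
    for _ in range(steps):
        # Epsilon-greedy action selection: explore sometimes, else exploit.
        if rng.random() < EPSILON:
            a = rng.choice(list(ACTIONS))
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        r = reward(s, a)
        s_next = rng.choice(list(STATES))  # toy transition dynamics
        # Foresighted Q-learning update: discounts the best future value.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next
    return Q

Q = q_learning()
# Greedy policy extracted from the learned value table.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

With enough interactions the greedy policy recovers the rewarding action in each state, which is the run-time learning behaviour the abstract describes (here in a deliberately trivial environment).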
Development of a Web-based land evaluation system and its application to population carrying capacity assessment using .Net technology
The multi-disciplinary approach used in this study combines state-of-the-art information technology with an elaborate land evaluation methodology, resulting in a Web-based land evaluation system (WLES). The WLES is designed so that the system operates both as a Web Application and as a Web Service. Implemented on top of the .NET platform, the WLES has a loosely coupled multi-layer structure which seamlessly integrates the domain knowledge of land evaluation with the soil database. The Web Service feature enables the WLES to act as a building block of a larger system, such as one for population carrying capacity (PCC) assessment. As a reference application, a framework is developed to assess the PCC on the basis of the production potential calculations that are available through the WLES Web Service interface.
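The loosely coupled layering described above can be sketched as a domain layer wrapped by a thin service facade, the shape that lets one implementation back both a Web Application and a Web Service. All class names, soil attributes, and rating rules below are illustrative stand-ins, not the actual WLES API.

```python
import json

class LandEvaluator:
    """Domain layer: toy suitability rating from soil attributes (hypothetical rules)."""
    def evaluate(self, soil):
        score = 0
        score += 1 if soil.get("ph", 0) >= 5.5 else 0
        score += 1 if soil.get("drainage") == "good" else 0
        score += 1 if soil.get("depth_cm", 0) >= 50 else 0
        return ["unsuitable", "marginal", "suitable", "highly suitable"][score]

class EvaluationService:
    """Service layer: serializes domain results, as a Web Service facade would."""
    def __init__(self, evaluator):
        self.evaluator = evaluator

    def handle(self, request_json):
        # Decode the request, delegate to the domain layer, encode the response.
        soil = json.loads(request_json)
        return json.dumps({"suitability": self.evaluator.evaluate(soil)})

service = EvaluationService(LandEvaluator())
result = service.handle('{"ph": 6.1, "drainage": "good", "depth_cm": 80}')
```

Because the facade only translates between wire format and domain calls, a larger system (such as a PCC assessment) can consume the same evaluator through the service interface without touching the domain code.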
A deliberative model for self-adaptation middleware using architectural dependency
A crucial prerequisite to externalized adaptation is an understanding of how components are interconnected, or more particularly how and why they depend on one another. Such dependencies can be used to provide an architectural model, which serves as a reference point for externalized adaptation. In this paper, we describe how dependencies are used as a basis for a system's self-understanding and subsequent architectural reconfigurations. The approach is based on the combination of instrumentation services, a dependency meta-model, and a system controller. In particular, the latter uses self-healing repair rules (or conflict resolution strategies), based on an extensible beliefs, desires, and intentions (EBDI) model, to reflect reconfiguration changes back to a target application under examination.
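The role of a dependency model in driving repair can be sketched as follows: a controller consults the dependency graph to find every component transitively affected by a failure, then applies a repair rule to that set. The component names and the single "restart" rule are hypothetical illustrations, not the paper's meta-model or EBDI machinery.

```python
class DependencyModel:
    """Records which components each component requires."""
    def __init__(self):
        self.depends_on = {}  # component -> set of components it requires

    def add(self, component, requirement):
        self.depends_on.setdefault(component, set()).add(requirement)

    def affected_by(self, failed):
        """All components that transitively depend on `failed`."""
        hit, frontier = set(), {failed}
        while frontier:
            frontier = {c for c, reqs in self.depends_on.items()
                        if reqs & frontier and c not in hit}
            hit |= frontier
        return hit

class Controller:
    """Toy self-healing rule: restart the failed component, then its dependants."""
    def __init__(self, model):
        self.model = model

    def repair(self, failed):
        dependants = sorted(self.model.affected_by(failed))
        return ["restart:" + c for c in [failed] + dependants]

model = DependencyModel()
model.add("web_ui", "app_server")
model.add("app_server", "database")
controller = Controller(model)
plan = controller.repair("database")
```

The point of the sketch is the division of labour: the dependency model answers "who is affected?", while the controller's rules decide "what to do about it", mirroring the meta-model/controller split described in the abstract.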
Incorporating prediction models in the SelfLet framework: a plugin approach
A complex pervasive system is typically composed of many cooperating
\emph{nodes}, running on machines with different capabilities, and pervasively
distributed across the environment. These systems pose several new challenges
such as the need for the nodes to manage themselves autonomously and dynamically in order to adapt to changes detected in the environment. To address this issue, a number of autonomic frameworks have been proposed. These usually offer either
predefined self-management policies or programmatic mechanisms for creating new
policies at design time. From a more theoretical perspective, some works
propose the adoption of prediction models as a way to anticipate the evolution
of the system and to make timely decisions. In this context, our aim is to
experiment with the integration of prediction models within a specific
autonomic framework in order to assess the feasibility of such integration in a
setting where the characteristics of dynamicity, decentralization, and
cooperation among nodes are important. We extend an existing infrastructure
called \emph{SelfLets} in order to make it ready to host various prediction
models that can be dynamically plugged into and unplugged from the various component nodes, thus enabling a wide range of predictions to be performed. Also, we show
in a simple example how the system works when adopting a specific prediction
model from the literature.
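The plugin mechanism described above can be sketched as a minimal contract that any prediction model implements, with nodes able to plug and unplug models at run time. The interface names and the moving-average model are illustrative assumptions, not the actual SelfLets extension.

```python
class PredictionModel:
    """Hypothetical plugin contract: consume observations, forecast the next value."""
    def observe(self, value):
        raise NotImplementedError

    def predict(self):
        raise NotImplementedError

class MovingAverageModel(PredictionModel):
    """One concrete plugin: forecast as the mean of the last `window` observations."""
    def __init__(self, window=3):
        self.window, self.history = window, []

    def observe(self, value):
        self.history.append(value)

    def predict(self):
        recent = self.history[-self.window:]
        return sum(recent) / len(recent)

class Node:
    """A node that hosts at most one prediction model, swappable at run time."""
    def __init__(self):
        self.model = None

    def plug(self, model):
        self.model = model

    def unplug(self):
        self.model = None

    def on_measurement(self, value):
        if self.model:
            self.model.observe(value)

    def forecast(self):
        return self.model.predict() if self.model else None

node = Node()
node.plug(MovingAverageModel(window=3))
for load in [10, 20, 30, 40]:
    node.on_measurement(load)
forecast = node.forecast()
```

Because nodes depend only on the abstract contract, any model from the literature can be wrapped as a plugin and swapped in without changing node code, which is the flexibility the abstract aims at.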
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of
computers for performing large scale experiments. Traditionally, these needs
have been addressed by using high-performance computing solutions and installed
facilities such as clusters and supercomputers, which are difficult to set up,
maintain, and operate. Cloud computing provides scientists with a completely
new model of utilizing the computing infrastructure. Compute resources, storage
resources, as well as applications, can be dynamically provisioned (and
integrated within the existing infrastructure) on a pay per use basis. These
resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing
solution, harnesses the power of compute resources by relying on private and
public Clouds and delivers the desired QoS to users. Its flexible, service-based infrastructure supports multiple programming paradigms, allowing Aneka to address a variety of scenarios, from finance applications to computational science. As examples of scientific computing in the Cloud, we
present a preliminary case study on using Aneka for the classification of gene
expression data and the execution of an fMRI brain imaging workflow.
Comment: 13 pages, 9 figures, conference paper
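The pay-per-use elasticity described above (provision on demand, release when done) can be sketched with a stub provider. The `CloudStub` API and the scaling rule are hypothetical illustrations, not Aneka's actual interface.

```python
class CloudStub:
    """Stand-in for a Cloud provider: tracks provisioned nodes and accrued cost."""
    def __init__(self, cost_per_node_step=1.0):
        self.nodes, self.cost = 0, 0.0
        self.rate = cost_per_node_step

    def provision(self, n):
        self.nodes += n

    def release(self, n):
        self.nodes = max(0, self.nodes - n)

    def tick(self):
        # Pay-per-use: cost accrues only for currently held nodes.
        self.cost += self.nodes * self.rate

def run_workload(tasks, throughput_per_node=2, max_nodes=4):
    """Scale out while tasks remain; release everything when the workload ends."""
    cloud = CloudStub()
    remaining, steps = tasks, 0
    while remaining > 0:
        # Simple elasticity rule: enough nodes to finish this step, capped.
        wanted = min(max_nodes, -(-remaining // throughput_per_node))  # ceil division
        if wanted > cloud.nodes:
            cloud.provision(wanted - cloud.nodes)
        elif wanted < cloud.nodes:
            cloud.release(cloud.nodes - wanted)
        cloud.tick()
        remaining -= cloud.nodes * throughput_per_node
        steps += 1
    cloud.release(cloud.nodes)  # give all resources back when done
    return steps, cloud.cost, cloud.nodes
```

The contrast with installed facilities is visible in the stub: resources (and their cost) exist only while the workload needs them, instead of being maintained permanently.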