2,720 research outputs found
State-of-the-art on evolution and reactivity
This report starts by, in Chapter 1, outlining aspects of querying and updating resources on
the Web and on the Semantic Web, including the development of query and update languages
to be carried out within the Rewerse project.
From this outline, it becomes clear that several existing research areas and topics are of
interest for this work in Rewerse. In the remainder of this report we present state-of-the-art
surveys of a selection of such areas and topics. More precisely: in Chapter 2 we give an
overview of logics for reasoning about state change and updates; Chapter 3 briefly describes
existing update languages for the Web, as well as languages for updating logic programs;
Chapter 4 surveys event-condition-action rules, both in the context of active database systems
and in the context of semistructured data; and in Chapter 5 we give an overview of some
relevant rule-based agent frameworks.
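The event-condition-action rules surveyed in Chapter 4 share a simple operational scheme: when an event occurs, if a condition holds over it, execute an action. A minimal sketch of this evaluation loop follows; all names and the event shape are illustrative, not taken from any of the surveyed languages:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    """On an event of `event_type`, if `condition(event)` holds, run `action(event)`."""
    event_type: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

class RuleEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule: ECARule):
        self.rules.append(rule)

    def dispatch(self, event: dict):
        # Fire every rule whose event type matches and whose condition holds.
        for rule in self.rules:
            if rule.event_type == event.get("type") and rule.condition(event):
                rule.action(event)

log = []
engine = RuleEngine()
engine.register(ECARule(
    event_type="update",
    condition=lambda e: e["resource"].endswith(".rdf"),
    action=lambda e: log.append(f"revalidate {e['resource']}"),
))
engine.dispatch({"type": "update", "resource": "catalog.rdf"})  # condition holds
engine.dispatch({"type": "update", "resource": "index.html"})   # filtered out
```

Real ECA languages for the Web add, on top of this loop, composite event detection and declarative condition and action sublanguages.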
Confidentiality-Preserving Publish/Subscribe: A Survey
Publish/subscribe (pub/sub) is an attractive communication paradigm for
large-scale distributed applications running across multiple administrative
domains. Pub/sub allows event-based information dissemination based on
constraints on the nature of the data rather than on pre-established
communication channels. It is a natural fit for deployment in untrusted
environments such as public clouds linking applications across multiple sites.
However, pub/sub in untrusted environments leads to major confidentiality
concerns stemming from the content-centric nature of the communications. This
survey classifies and analyzes different approaches to confidentiality
preservation for pub/sub, from applications of trust and access control models
to novel encryption techniques. It provides an overview of the current
challenges posed by confidentiality concerns and points to future research
directions in this promising field.
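The content-centric dissemination described above can be pictured with a minimal broker that matches events against subscriber constraints rather than named channels. This is an illustrative sketch, not any of the surveyed systems; in the confidentiality-preserving variants the survey covers, the broker must perform this matching without seeing the plaintext event:

```python
class Broker:
    def __init__(self):
        self.subscriptions = []  # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        # Subscribers express interest as constraints on event content,
        # not as pre-established communication channels.
        self.subscriptions.append((predicate, callback))

    def publish(self, event: dict):
        # Deliver the event to every subscriber whose constraint it satisfies.
        for predicate, callback in self.subscriptions:
            if predicate(event):
                callback(event)

received = []
broker = Broker()
broker.subscribe(lambda e: e.get("temp", 0) > 30, received.append)
broker.publish({"sensor": "s1", "temp": 35})  # matches the constraint
broker.publish({"sensor": "s2", "temp": 12})  # filtered out by the broker
```

The confidentiality problem is visible in the sketch: the broker evaluates predicates over event content, so an untrusted broker learns both subscriptions and events unless both are protected.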
Remote M2M healthcare: applications and algorithms
Master's thesis. Integrated Master in Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201
Self-management for large-scale distributed systems
Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management.
In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving
self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottleneck, and scale. We present results of our research
on addressing the above challenges. Niche implements the autonomic computing architecture, proposed by IBM, in a fully decentralized way. Niche supports a network-transparent view of the system architecture simplifying
the design of distributed self-management. Niche provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed
by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include partitioning of management functions and orchestration of multiple autonomic
managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic
and thus makes its development time consuming and error prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a
management element using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels. The store enables a trade-off between high availability and data consistency. Using majorities avoids the potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck.
In the third part, we investigate self-management for Cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost, and we outline the steps in designing such a controller. We conclude by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
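The majority-based consistency control mentioned in the abstract rests on quorum intersection: a write completes once a majority of replicas acknowledge it, and a read that contacts any majority is guaranteed to overlap the write quorum and see the latest version. A simplified, single-process sketch of this idea (the replica set, versioning scheme, and class names are assumptions for illustration, not the thesis's algorithm):

```python
import random

class Replica:
    def __init__(self):
        self.store = {}  # key -> (version, value)

    def write(self, key, version, value):
        # Accept only writes newer than what this replica already holds.
        cur = self.store.get(key, (0, None))
        if version > cur[0]:
            self.store[key] = (version, value)

    def read(self, key):
        return self.store.get(key, (0, None))

class MajorityStore:
    def __init__(self, n=5):
        self.replicas = [Replica() for _ in range(n)]
        self.majority = n // 2 + 1
        self.version = 0

    def put(self, key, value):
        self.version += 1
        # Write to a random majority; any two majorities intersect,
        # so a later read quorum is guaranteed to see this write.
        for r in random.sample(self.replicas, self.majority):
            r.write(key, self.version, value)

    def get(self, key):
        # Return the highest-versioned value observed by a read majority.
        answers = [r.read(key) for r in random.sample(self.replicas, self.majority)]
        return max(answers)[1]

store = MajorityStore()
store.put("x", "a")
store.put("x", "b")
print(store.get("x"))  # "b": quorum intersection guarantees the latest write is seen
```

Weaker consistency levels correspond to contacting fewer replicas on reads, trading the intersection guarantee for availability and latency, which is the trade-off the abstract refers to.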
Dynamic adaptation of interaction models for stateful web services
Dissertation submitted for the Master's degree in Informatics Engineering (Engenharia Informática).
Wireless Sensor Networks (WSNs) are accepted as one of the fundamental technologies
for current and future science in all domains, where WSNs formed from either static
or mobile sensor devices allow a low cost high-resolution sensing of the environment.
This opens the possibility of developing new kinds of crucial applications, or of providing
more accurate data to more traditional ones. Examples range from large-scale WSNs deployed on oceans, contributing to weather prediction simulations; to large numbers of diverse sensor devices deployed over a geographical area at different heights from the ground, collecting more accurate data for cyclic wildfire spread simulations; to networks of mobile phone devices contributing to urban traffic management via Participatory Sensing applications.
In order to simplify data access, network parameterisation, and WSN aggregation,
WSNs have been integrated in Web environments, namely through high-level standard interfaces such as Web services. However, typical interface access usually supports only a restricted number of interaction models, and the available mechanisms for their run-time adaptation are still scarce. Applications nevertheless demand richer and more flexible control over interface accesses: such accesses may, for instance, depend on contextual information and consequently evolve over time.
Additionally, Web services have become increasingly popular in recent years, and
their usage has led to the need to aggregate and coordinate them, and also to represent
state between Web service invocations. Current standard composition languages for
Web services (WS-BPEL, WSCI, BPML) deal with the traditional forms of service aggregation
and coordination, while the WS-Resource Framework (WSRF) deals with access to services that have state concerns (relating both to executing applications and to the runtime environment).
Underlying the notion of service coordination is the need to capture dependencies among services (through the workflow concept, for instance), to reuse common interaction models, e.g. embodied in common behavioural patterns like Client/Server, Publish/Subscribe, and Stream, and to respond to dynamic events in the system (novel user requests, service failures, etc.). Dynamic adaptation, in particular, is a pressing requirement for current service-based systems due to the increasing trend towards XaaS ("everything as a service"),
which promises to reduce the costs of application development and infrastructure
support, as is already apparent in the Cloud computing domain.
Self-adaptive (or dynamic/adaptive) systems therefore present themselves as a solution to the above concerns. However, since they comprise a vast area, this thesis focuses only on self-adaptive software. Concretely, we propose a novel model for dynamic interactions, in particular with Stateful Web Services, i.e. services interfacing continued activities. The solution consists of a middleware prototype, based on pattern abstractions,
able to provide (novel) richer interaction models and a few structured
dynamic adaptation mechanisms, which are captured in the context of a "Session"
abstraction.
The middleware was implemented on top of a pre-existing framework supporting
Web-enabled access to WSNs, and some evaluation scenarios were tested in this setting.
This area was chosen as the application domain contextualizing this work, as it contributes to the development of increasingly important applications needing high-resolution, low-cost sensing of the environment. The result is a novel way to specify richer and dynamic modes of accessing and acquiring data generated by WSNs.
This work was partially funded by the Centro de Informática e Tecnologias da
Informação (CITI) and by the Fundação para a Ciência e a Tecnologia (FCT/MCTES) through
research projects.
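The "Session" abstraction described above can be pictured as a handle that binds a client to a service under a given interaction model and allows that model to be swapped at run time, e.g. in response to contextual information. The classes below are an illustrative sketch under that assumption, not the thesis's middleware API:

```python
class InteractionModel:
    def deliver(self, session, data):
        raise NotImplementedError

class RequestReply(InteractionModel):
    # Pull style: data is buffered until the client explicitly requests it.
    def deliver(self, session, data):
        session.buffer.append(data)

class Stream(InteractionModel):
    # Push style: data goes straight to the client's callback.
    def deliver(self, session, data):
        session.callback(data)

class Session:
    def __init__(self, model, callback):
        self.model = model
        self.callback = callback
        self.buffer = []

    def on_data(self, data):
        self.model.deliver(self, data)

    def adapt(self, new_model):
        # Dynamic adaptation: switch the interaction model mid-session,
        # replaying buffered data under the new model.
        self.model = new_model
        pending, self.buffer = self.buffer, []
        for d in pending:
            self.model.deliver(self, d)

pushed = []
s = Session(RequestReply(), pushed.append)
s.on_data("reading-1")   # buffered under the request/reply model
s.adapt(Stream())        # e.g. triggered by a change in context
s.on_data("reading-2")   # now pushed immediately to the callback
```

The point of the sketch is that the session, not the service interface, owns the interaction model, which is what makes the model adaptable at run time without changing the service.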
Intelligent monitoring and fault diagnosis for ATLAS TDAQ: a complex event processing solution
Effective monitoring and analysis tools are fundamental in modern IT
infrastructures to get insights on the overall system behavior and to deal
promptly and effectively with failures. In recent years, Complex Event
Processing (CEP) technologies have emerged as effective solutions for
information processing from the most disparate fields: from wireless sensor
networks to financial analysis. This thesis proposes an innovative approach to
monitor and operate complex and distributed computing systems, in particular
referring to the ATLAS Trigger and Data Acquisition (TDAQ) system currently
in use at the European Organization for Nuclear Research (CERN). The
result of this research, the AAL project, is currently used to provide ATLAS
data acquisition operators with automated error detection and intelligent
system analysis.
The thesis begins by describing the TDAQ system and the controlling
architecture, with a focus on the monitoring infrastructure and the expert
system used for error detection and automated recovery. It then discusses
the limitations of the current approach and how it can be improved to
maximize the ATLAS TDAQ operational efficiency.
Event processing methodologies are then laid out, with a focus on CEP
techniques for stream processing and pattern recognition. The open-source
Esper engine, the CEP solution adopted by the project, is subsequently
analyzed and discussed.
Next, the AAL project is introduced as the automated and intelligent
monitoring solution developed as the result of this research. AAL
requirements and governing factors are listed, with a focus on how stream
processing functionalities can enhance the TDAQ monitoring experience. The
AAL processing model is then introduced and the architectural choices are
justified. Finally, real applications on TDAQ error detection are presented. The main conclusion from this work is that CEP techniques can be
successfully applied to detect error conditions and system misbehavior.
Moreover, the AAL project demonstrates a real application of CEP concepts
for intelligent monitoring in the demanding TDAQ scenario. The adoption of
AAL by several TDAQ communities shows that automation and intelligent
system analysis were not properly addressed in the previous infrastructure.
The results of this thesis will benefit researchers evaluating intelligent
monitoring techniques on large-scale distributed computing systems.
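The CEP-style error detection described above, matching patterns over event streams, can be illustrated with a sliding-window rule that raises an alert when error events exceed a threshold within a time window. The event shape, window, and threshold here are illustrative only and unrelated to the actual AAL/Esper rules:

```python
from collections import deque

class WindowRule:
    """Alert when more than `threshold` errors occur within `window` seconds."""
    def __init__(self, window=60.0, threshold=3):
        self.window = window
        self.threshold = threshold
        self.errors = deque()  # timestamps of recent error events

    def on_event(self, timestamp, severity):
        if severity == "error":
            self.errors.append(timestamp)
        # Evict events that have fallen out of the sliding window.
        while self.errors and timestamp - self.errors[0] > self.window:
            self.errors.popleft()
        return len(self.errors) > self.threshold

rule = WindowRule(window=60.0, threshold=3)
# Four errors within 60 s trigger an alert; by t=100 the window has drained.
alerts = [rule.on_event(t, "error") for t in (0, 10, 20, 30, 100)]
print(alerts)  # [False, False, False, True, False]
```

An engine like Esper expresses such rules declaratively in a query language over streams, with the engine managing the windows and state; the sketch only shows the underlying windowed-counting idea.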
- …