Security in a Distributed Processing Environment
Distribution plays a key role in telecommunication and computing systems today. It
has become a necessity as a result of deregulation and anti-trust legislation, which have
forced businesses to move from centralised, monolithic systems to distributed systems
that separate applications from provisioning technologies, such as the service and
transport layers in the Internet. The need for reliability and recovery requires systems
to use replication and secondary backup systems, such as those used in e-commerce.
Distribution has consequences. It results in systems being implemented in
heterogeneous environments; it requires systems to be scalable; and it results in some
loss of control, which contributes to the increased security issues that arise from
distribution. Each of these issues has to be dealt with. A distributed processing
environment (DPE) is middleware that allows heterogeneous environments to operate
in a homogeneous manner. Scalability can be addressed by using object-oriented
technology to distribute functionality. Security is more difficult to address because it
requires the creation of a distributed trusted environment.
The problem with security in a DPE today is that it is treated as an adjunct service,
i.e. an after-thought that is the last thing added to the system. As a result, it is not
pervasive and is therefore unable to fully support the other DPE services. DPE
security needs to provide the five basic security services (authentication, access
control, integrity, confidentiality and non-repudiation) in a distributed environment,
while ensuring simple and usable administration.
The research detailed in this thesis starts by highlighting the inadequacies of the
existing DPE and its services. It then introduces a new management structure that
provides greater flexibility and configurability, while promoting mechanism and
service independence, and a new secure interoperability framework that provides the
ability to negotiate common mechanism and service-level configurations. New
facilities are also added to the non-repudiation and audit services.
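The negotiation of common mechanism and service-level configurations can be illustrated with a minimal, hypothetical sketch; the mechanism names and the simple preference-order policy below are illustrative only and are not the framework defined in the thesis. Each party advertises the security mechanisms it supports in preference order, and the negotiation selects the first mutually supported one.

```java
import java.util.List;
import java.util.Optional;

/** Hypothetical sketch of mechanism negotiation between two DPE parties:
 *  pick the client's most-preferred mechanism that the server also supports. */
public class MechanismNegotiation {
    static Optional<String> negotiate(List<String> clientPrefs, List<String> serverSupported) {
        return clientPrefs.stream().filter(serverSupported::contains).findFirst();
    }

    public static void main(String[] args) {
        List<String> client = List.of("Kerberos", "SPKM", "password");
        List<String> server = List.of("SPKM", "password");
        // Prints "SPKM": the first client preference the server also offers.
        System.out.println(negotiate(client, server).orElse("no common mechanism"));
    }
}
```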
The research has shown that all services should be security-aware, and therefore able
to interact with the Enhanced Security Service in order to provide a more secure
environment within a DPE. As a proof of concept, the Trader service was selected: its
security limitations were examined, new security behaviour policies were proposed,
and it was then implemented as a Security-aware Trader that counteracts the existing
security limitations.
IONA TECHNOLOGIES PLC & ORANG
Patterns for Providing Real-Time Guarantees in DOC Middleware - Doctoral Dissertation, May 2002
The advent of open and widely adopted standards such as Common Object Request Broker Architecture (CORBA) [47] has simplified and standardized the development of distributed applications. For applications with real-time constraints, including avionics, manufacturing, and defense systems, these standards are evolving to include Quality-of-Service (QoS) specifications. Operating systems such as Real-time Linux [60] have responded with interfaces and algorithms to guarantee real-time response; similarly, languages such as Real-time Java [59] include mechanisms for specifying real-time properties for threads. However, the middleware upon which large distributed applications are based has not yet addressed end-to-end guarantees of QoS specifications. Unless this challenge can be met, developers must resort to ad hoc solutions that may not scale or migrate well among different platforms. This thesis provides two contributions to the study of real-time Distributed Object Computing (DOC) middleware. First, it identifies potential bottlenecks and problems with respect to guaranteeing real-time performance in contemporary middleware. Experimental results illustrate how these problems lead to incorrect real-time behavior in contemporary middleware platforms. Second, this thesis presents designs and techniques for providing real-time QoS guarantees in DOC middleware in the context of TAO [6], an open-source and widely adopted implementation of real-time CORBA. Architectural solutions presented here are coupled with empirical evaluations of end-to-end real-time behavior. Analysis of the problems, forces, solutions, and consequences is presented in terms of patterns and frameworks, so that solutions obtained for TAO can be appropriately applied to other real-time systems.
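The thread-level mechanisms cited above can be sketched briefly. The fragment below is only an illustration of per-thread priority specification under the Real-time Specification for Java, assuming an RTSJ-capable JVM that provides javax.realtime; it is not the TAO/Real-time CORBA machinery the dissertation develops, whose point is that such guarantees must also be carried end to end through the middleware.

```java
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class RtThreadSketch {
    public static void main(String[] args) {
        // Ask the priority scheduler for its highest priority level.
        int maxPriority = PriorityScheduler.instance().getMaxPriority();
        // A real-time thread whose scheduling parameters carry that priority;
        // the ORB remains unaware of it, which is the gap the dissertation addresses.
        RealtimeThread worker = new RealtimeThread(new PriorityParameters(maxPriority)) {
            @Override
            public void run() {
                System.out.println("running at RTSJ priority " + maxPriority);
            }
        };
        worker.start();
    }
}
```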
Component-based software engineering
To address the problems associated with current software development methodologies, component-based software engineering has recently attracted many researchers' attention. In component-based software engineering, a software system is considered as a set of software components assembled together, rather than as a set of functions from the traditional perspective. Software components can be bought from third-party vendors as off-the-shelf components and assembled together.
Component-based software engineering, though very promising, needs to solve several core issues before it becomes a mature software development strategy. The goal of this dissertation is to establish an infrastructure for component-based software development. The author identifies and studies some of the core issues such as component planning, component building, component assembling, component representation, and component retrieval.
A software development process model is developed in this dissertation to emphasize the reuse of existing software components. The software development process model addresses how a software system should be planned and built to maximize the reuse of software components. It conducts domain engineering and application engineering simultaneously to map a software system to a set of existing components in such a way that the development of a software system can reuse the existing software components to the full extent. Besides the planning of software development based on component technology, the migration and integration of legacy systems, most of which are non-component-based systems, to the component-based software systems are studied. A framework and several methodologies are developed to serve as the guidelines of adopting component technology in legacy systems.
Component retrieval is also studied in this dissertation. One of the most important issues in component-based software engineering is how to find a software component quickly and accurately in a component repository. A component representation framework is developed in this dissertation to represent software components. Based on the component representation framework, an efficient searching method that combines neural network, information retrieval, and Bayesian inference technology is developed. Finally, a prototype component retrieval system is implemented to demonstrate the correctness and feasibility of the proposed method.
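As a rough illustration of retrieval scoring of the kind described, one might rank components by mixing keyword relevance with a prior reflecting past reuse. The dissertation's actual method combines a neural network, information retrieval and Bayesian inference; the weights, names and the simple overlap measure below are invented for the sketch.

```java
import java.util.Set;

/** Hypothetical illustration: score repository components for a query by combining
 *  a keyword-overlap relevance (IR-style) with a reuse prior (Bayes-style). */
public class ComponentScorer {
    static double score(Set<String> queryTerms, Set<String> componentTerms, double reusePrior) {
        long overlap = queryTerms.stream().filter(componentTerms::contains).count();
        double relevance = queryTerms.isEmpty() ? 0.0 : (double) overlap / queryTerms.size();
        // Fixed weighted combination; a trained model could learn this combination instead.
        return 0.7 * relevance + 0.3 * reusePrior;
    }

    public static void main(String[] args) {
        Set<String> query = Set.of("xml", "parser");
        Set<String> compA = Set.of("xml", "parser", "validation");
        Set<String> compB = Set.of("json", "parser");
        System.out.printf("A=%.2f B=%.2f%n", score(query, compA, 0.6), score(query, compB, 0.4));
    }
}
```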
Organization based multiagent architecture for distributed environments
Distributed environments represent a complex field in which applied solutions should be flexible and include significant adaptation capabilities. These environments are related to problems where multiple users and devices may interact, and where simple, local solutions may produce good results but may not be effective in terms of use and interaction.
There are many techniques that can be employed to address this kind of problem, from CORBA to multi-agent systems, by way of web services and SOA, among others. All of these methodologies have advantages and disadvantages, which are analyzed in this document before the new architecture is presented as a solution for distributed environment problems.
The new architecture presented here for solving complex problems in distributed environments is called OBaMADE: Organization Based Multiagent Architecture for Distributed Environments. It is a multiagent architecture based on the organizations-of-agents paradigm, where the agents in the architecture are structured into organizations to improve their organizational capabilities.
The reasoning power of the architecture is based on the Case-Based Reasoning methodology, implemented in an internal organization whose agents create services that respond to external requests made by users.
The OBaMADE architecture has been successfully applied to two different case studies in which its prediction capabilities have been verified. Those case studies have shown promising results and, being complex systems, have demonstrated the abstraction and generalization capabilities of the architecture.
Nevertheless, OBaMADE is intended to be able to solve many other kinds of problems in distributed environment scenarios. It should be applied to a wider variety of situations and to other knowledge fields to fully develop its potential.
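A minimal sketch of the case-based reasoning cycle that OBaMADE's internal reasoning organization relies on is shown below. The case representation and similarity measure are invented for the illustration, the revise and retain steps are omitted, and the agent and service layers of the architecture are not shown.

```java
import java.util.Comparator;
import java.util.List;

/** Hypothetical CBR sketch: retrieve the most similar past case and reuse its solution. */
public class CbrSketch {
    record Case(double[] features, String solution) {}

    // Similarity as negative Euclidean distance between feature vectors.
    static double similarity(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return -Math.sqrt(sum);
    }

    static String solve(List<Case> caseBase, double[] query) {
        // Retrieve: pick the stored case closest to the query.
        Case best = caseBase.stream()
                .max(Comparator.comparingDouble(c -> similarity(c.features(), query)))
                .orElseThrow();
        // Reuse: the past solution is returned unchanged; revise/retain are omitted.
        return best.solution();
    }

    public static void main(String[] args) {
        List<Case> cases = List.of(
                new Case(new double[]{1.0, 0.2}, "prediction A"),
                new Case(new double[]{0.1, 0.9}, "prediction B"));
        System.out.println(solve(cases, new double[]{0.2, 0.8})); // prints "prediction B"
    }
}
```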
Efficient service discovery in wide area networks
In an increasingly networked world, with an abundance of services
available to consumers, the consumer electronics market is enjoying
a boom. The average consumer in the developed world may
own several networked devices such as games consoles, mobile phones,
PDAs, laptops and desktops, wireless picture frames and printers to
name but a few. With this growing number of networked devices comes
a growing demand for services, defined here as functions requested
by a client and provided by a networked node. For example, a client
may wish to download and share music or pictures, find and use
printer services, or lookup information (e.g. train times, cinema
bookings).
It is notable that a significant proportion of networked devices are
now mobile. Mobile devices introduce new constraints on service
discovery, such as limited battery and processing power and more
expensive bandwidth. Device owners expect to access services
not only in their immediate proximity, but further afield (e.g. in
their homes and offices). Solving these problems is the focus of
this research.
This Thesis offers two alternative approaches to service discovery
in Wide Area Networks (WANs). Firstly, a unique combination of the
Session Initiation Protocol (SIP) and the OSGi middleware technology
is presented to provide both mobility and service discovery
capability in WANs. Through experimentation, this technique is shown
to be successful where the number of operating domains is small, but
it does not scale well.
To address the issue of scalability, this Thesis proposes the use of
Peer-to-Peer (P2P) service overlays as a medium for service
discovery in WANs. To confirm that P2P overlays can in fact support
service discovery, a technique to utilise the Distributed Hash Table
(DHT) functionality of distributed systems is used to store and
retrieve service advertisements. Through simulation, this is shown
to be both a scalable and a flexible service discovery technique.
However, the problems associated with P2P networks with respect to
efficiency are well documented.
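A minimal sketch of DHT-based service advertisement and lookup follows. The in-memory map stands in for a real structured overlay (for example one based on Pastry or Chord), and the key derivation and advertisement format are invented for the illustration.

```java
import java.util.*;

/** Hypothetical DHT-backed service directory: keys are hashes of service types,
 *  values are lists of advertisements (here, plain endpoint strings). */
public class DhtServiceDirectory {
    // Stand-in for a real DHT overlay's put/get operations.
    private final Map<String, List<String>> dht = new HashMap<>();

    private static String keyFor(String serviceType) {
        // A real system would use a consistent hash (e.g. SHA-1) of the type name.
        return Integer.toHexString(serviceType.toLowerCase().hashCode());
    }

    public void advertise(String serviceType, String endpoint) {
        dht.computeIfAbsent(keyFor(serviceType), k -> new ArrayList<>()).add(endpoint);
    }

    public List<String> discover(String serviceType) {
        return dht.getOrDefault(keyFor(serviceType), List.of());
    }

    public static void main(String[] args) {
        DhtServiceDirectory dir = new DhtServiceDirectory();
        dir.advertise("printer", "sip:printer@home.example.org");
        System.out.println(dir.discover("printer")); // [sip:printer@home.example.org]
    }
}
```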
In a novel approach to reduce messaging costs in P2P networks,
multi-destination multicast is used. Two well known P2P overlays are
extended using the Explicit Multi-Unicast (XCAST) protocol. The
resulting analysis of this extension provides a strong argument for
multiple P2P maintenance algorithms co-existing in a single P2P
overlay to provide adaptable performance. A novel multi-tier P2P
overlay system is presented, which is tailored for service-rich
mobile devices and which provides an efficient platform for service
discovery.
Modelling grid architecture.
This thesis evaluates software engineering methods, especially event modelling of distributed systems architecture, by applying them to specific data-grid projects. Other methods evaluated include requirements analysis, formal architectural definition and discrete event simulation. A novel technique for matching architectural styles to requirements is introduced. Data-grids are a new class of networked information systems arising from e-science, itself an emergent method for computer-based collaborative research in the physical sciences. The tools used in general grid systems, which federate distributed resources, are reviewed, showing that they do not clearly guide architecture. The data-grid projects, which specifically join heterogeneous data stores, put required qualities at risk. Such risk of failure is mitigated in the designs of the EGSO and AstroGrid solar physics data-grid projects by modelling. Design errors are trapped by rapidly encoding and evaluating informal concepts, architecture, component interaction and objects. The success of software engineering modelling techniques depends on the models' accuracy, their ability to demonstrate the required properties, and their clarity (so that project managers and developers can act on findings). The formal event modelling language chosen, FSP, meets these criteria at the diverse early lifecycle stages (unlike some techniques trialled). Models permit very early testing, finding hidden complexity, gaps in designed protocols and risks of unreliability. However, simulation is shown to be more suitable for evaluating qualities like scalability, which emerge when there are many component instances. Design patterns (which may be reused in other data-grids to resolve commonly encountered challenges) are exposed in these models. A method for generating useful models rapidly, introducing the strength of iterative lifecycles to sequential projects, also arises. Despite reported resistance to innovation in industry, the software engineering techniques demonstrated may benefit commercial information systems too.
Adaptive Caching of Distributed Components
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand therefore tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to changed access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself to an appropriate configuration. Speculative prefetching of data that will probably be queried in the immediate future complements the presented approach.
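A minimal sketch of the kind of query-result caching the thesis factors out into a middleware service is given below. The generic proxy and the remote-call stand-in are hypothetical, and the configurable cacheability adaptation and speculative prefetching described in the abstract are not shown: repeat queries are simply answered from a local store instead of going to the remote component.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Hypothetical caching proxy for a remote component: serve repeated queries locally. */
public class CachingProxy<K, V> {
    private final Map<K, V> localStore = new HashMap<>();
    private final Function<K, V> remoteComponent;   // stands in for the actual remote call

    public CachingProxy(Function<K, V> remoteComponent) {
        this.remoteComponent = remoteComponent;
    }

    public V query(K key) {
        // On a hit, the remote round trip is avoided entirely.
        return localStore.computeIfAbsent(key, remoteComponent);
    }

    public static void main(String[] args) {
        CachingProxy<String, String> proxy = new CachingProxy<>(id -> {
            System.out.println("remote fetch for " + id);
            return "value-of-" + id;
        });
        proxy.query("42");   // triggers the remote fetch
        proxy.query("42");   // served from the local store
    }
}
```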
A language and toolkit for the specification, execution and monitoring of dependable distributed applications
This thesis addresses the problem of specifying the composition of distributed applications
out of existing applications, possibly legacy ones. With the automation of business processes
on the increase, more and more applications of this kind are being constructed. The resulting
applications can be quite complex, usually long-lived and are executed in a heterogeneous
environment. In a distributed environment, long-lived activities need support for fault tolerance
and dynamic reconfiguration. Indeed, it is likely that the environment where they are run will
change (nodes may fail, services may be moved elsewhere or withdrawn) during their
execution and the specification will have to be modified. There is also a need for modularity,
scalability and openness. However, most of the existing systems only consider part of these
requirements. A new area of research, called workflow management has been trying to address
these issues.
This work first looks at what needs to be addressed to support the specification and
execution of these new applications in a heterogeneous, distributed environment. A co-
ordination language (scripting language) is developed that fulfils the requirements of specifying
the composition and inter-dependencies of distributed applications with the properties of
dynamic reconfiguration, fault tolerance, modularity, scalability and openness. The architecture
of the overall workflow system and its implementation are then presented. The system has been
implemented as a set of CORBA services and the execution environment is built using a
transactional workflow management system. Next, the thesis describes the design of a toolkit
to specify, execute and monitor distributed applications. The design of the co-ordination
language and the toolkit represents the main contribution of the thesis.
UK Engineering and Physical Sciences Research Council,
CaberNet,
Northern Telecom (Nortel)
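As a hypothetical illustration of the kind of composition and inter-dependency that such a co-ordination language expresses, the sketch below runs tasks only once the tasks they depend on have completed. The task representation is invented for the example; the thesis's actual scripting notation, fault-tolerance support and CORBA-based transactional execution environment are not reproduced here.

```java
import java.util.*;

/** Hypothetical sketch of dependency-ordered execution of composed application tasks. */
public class WorkflowSketch {
    record Task(String name, List<String> dependsOn, Runnable action) {}

    static void runAll(List<Task> tasks) {
        Set<String> done = new HashSet<>();
        List<Task> pending = new ArrayList<>(tasks);
        while (!pending.isEmpty()) {
            Task ready = pending.stream()
                    .filter(t -> done.containsAll(t.dependsOn()))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("cyclic or unsatisfiable dependencies"));
            ready.action().run();          // a real system would invoke a distributed service here
            done.add(ready.name());
            pending.remove(ready);
        }
    }

    public static void main(String[] args) {
        runAll(List.of(
                new Task("ship", List.of("pay"), () -> System.out.println("ship")),
                new Task("order", List.of(), () -> System.out.println("order")),
                new Task("pay", List.of("order"), () -> System.out.println("pay"))));
        // prints: order, pay, ship
    }
}
```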