Security in a Distributed Processing Environment
Distribution plays a key role in telecommunication and computing systems today. It has become a necessity as a result of deregulation and anti-trust legislation, which have forced businesses to move from centralised, monolithic systems to distributed systems with a separation between applications and provisioning technologies, such as the service and transport layers in the Internet. The need for reliability and recovery requires systems to use replication and secondary backup systems, such as those used in e-commerce.
Distribution has consequences. It results in systems being implemented in heterogeneous environments; it requires systems to be scalable; and it entails some loss of control, which contributes to the increased security issues that distribution brings. Each of these issues has to be dealt with. A distributed processing environment (DPE) is middleware that allows heterogeneous environments to operate in a homogeneous manner. Scalability can be addressed by using object-oriented technology to distribute functionality. Security is more difficult to address because it requires the creation of a distributed trusted environment.
The problem with security in a DPE today is that it is treated as an adjunct service, i.e. an afterthought that is the last thing added to the system. As a result, it is not pervasive and is therefore unable to fully support the other DPE services. DPE security needs to provide the five basic security services (authentication, access control, integrity, confidentiality and non-repudiation) in a distributed environment, while ensuring simple and usable administration.
The research detailed in this thesis starts by highlighting the inadequacies of the existing DPE and its services. It introduces a new management structure that provides greater flexibility and configurability, while promoting mechanism and service independence. A new secure interoperability framework is introduced, which provides the ability to negotiate common mechanism and service-level configurations. New facilities are added to the non-repudiation and audit services.
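The essence of such a secure interoperability negotiation can be sketched in a few lines: each side advertises the security mechanism and protection-level configurations it supports, in preference order, and the two sides settle on the first configuration the client prefers that the server also supports. The mechanism names and the client-wins preference rule below are illustrative assumptions, not the framework's actual protocol.

```python
# A minimal sketch of negotiating a common security configuration.
# Each configuration is a (mechanism, protection level) pair; the lists
# are ordered by preference. Names here are purely illustrative.

def negotiate(client_prefs, server_prefs):
    """Return the first client-preferred configuration the server also
    supports, or None if no common configuration exists."""
    server_set = set(server_prefs)
    for config in client_prefs:
        if config in server_set:
            return config
    return None

client = [("kerberos", "confidentiality"), ("ssl", "integrity")]
server = [("ssl", "integrity"), ("kerberos", "confidentiality")]
print(negotiate(client, server))  # the client's first choice wins
```

A real framework would also authenticate the negotiation itself, since an attacker who can tamper with the advertised lists could force both sides down to their weakest common mechanism.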
The research has shown that all services should be security-aware, and would therefore be able to interact with the Enhanced Security Service in order to provide a more secure environment within a DPE. As a proof of concept, the Trader service was selected: its security limitations were examined, new security behaviour policies were proposed, and it was then implemented as a Security-aware Trader that counteracts the existing security limitations.
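The core idea of a security-aware Trader can be illustrated with a small sketch: before returning matching service offers, the trader consults a security behaviour policy and withholds offers the caller is not cleared to see. The numeric clearance model and all names below are illustrative assumptions, not the thesis design.

```python
# A minimal sketch of a "security-aware" trader query: offers carry a
# minimum clearance, and the trader filters results against the caller's
# clearance before returning them. The clearance scheme is hypothetical.

def query(offers, service_type, caller_clearance):
    """Return offers of the requested type that the caller may see."""
    return [
        offer for offer in offers
        if offer["type"] == service_type
        and offer["min_clearance"] <= caller_clearance
    ]

offers = [
    {"name": "printer-a", "type": "printer", "min_clearance": 0},
    {"name": "printer-b", "type": "printer", "min_clearance": 2},
]
print(query(offers, "printer", caller_clearance=1))  # only printer-a is visible
```

The point of making the filtering happen inside the trader, rather than at each service, is that an uncleared caller never even learns that the restricted offer exists.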
Integrating legacy mainframe systems: architectural issues and solutions
For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies such as the Internet are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes for the longest time to do their business, and as a result it is they that feel these pressures the most.
In recent years there have been various solutions for enabling a re-engineering of these legacy systems. It quickly became clear that completely rewriting them was not feasible, so various integration strategies emerged.
Out of these new integration strategies, the CORBA standard by the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment.
However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions to the problem of meeting these requirements, have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications.
The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications inter-operate with newer object-oriented technologies.
HIPPO -- an adaptive open hypertext system
The hypertext paradigm offers a powerful way of modelling complex knowledge structures. Information can be arranged into networks and connected using hypertext links. This has led to the development of more open hypertext designs, which allow hypertext services to be integrated seamlessly into the user's environment. Recent research has also seen the emergence of adaptive hypertext, which uses feedback from the user to modify objects in the hypertext. The research presented in this thesis describes the HIPPO hypertext model, which combines many of the ideas in open hypertext research with existing work on adaptive hypertext systems.
The idea of fuzzy anchors is introduced, which allows authors to express the uncertainty and vagueness inherent in a hypertext anchor. Fuzzy anchors use partial truth values, allowing authors to define a "degree of membership" for anchors. Anchors no longer have fixed, discrete boundaries, but have more in common with the contour lines used in map design. These fuzzy anchors are used as the basis for an adaptive model, so that anchors can be modified in response to user actions. The HIPPO linking model introduces linkbase trees, which combine link collections into inheritance hierarchies. These are used to construct reusable inheritance trees, which allow authors to reuse and build on existing link collections. An adaptive model is also presented to modify these linkbase hierarchies. Finally, the HIPPO system is re-implemented using a widely distributed architecture. This distributed model implements a hypertext system as a collection of lightweight, distributed services. The benefits of this distributed hypertext model are discussed, and an adaptive model is then suggested.
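The fuzzy-anchor idea can be sketched concretely: an anchor is no longer a fixed character span but a membership function over document positions, and a crisp extent is obtained by cutting that function at a chosen threshold, much like reading a single contour line off a map. The class name and the triangular membership shape below are illustrative assumptions, not the actual HIPPO implementation.

```python
# A minimal sketch of a fuzzy anchor: full membership inside a core span,
# decaying linearly to zero over a "fringe" on either side.

class FuzzyAnchor:
    """An anchor whose extent is a fuzzy set over character offsets."""

    def __init__(self, core_start, core_end, fringe):
        # Positions inside [core_start, core_end] belong fully (degree 1.0);
        # membership decays linearly to 0.0 over `fringe` characters each side.
        self.core_start = core_start
        self.core_end = core_end
        self.fringe = fringe

    def membership(self, pos):
        """Degree of membership of a character position, in [0.0, 1.0]."""
        if self.core_start <= pos <= self.core_end:
            return 1.0
        if pos < self.core_start:
            dist = self.core_start - pos
        else:
            dist = pos - self.core_end
        return max(0.0, 1.0 - dist / self.fringe)

    def extent_at(self, alpha):
        """The crisp span at threshold `alpha` -- one 'contour line'."""
        slack = int(self.fringe * (1.0 - alpha))
        return (self.core_start - slack, self.core_end + slack)


anchor = FuzzyAnchor(core_start=100, core_end=120, fringe=10)
print(anchor.membership(110))  # inside the core: full membership
print(anchor.membership(125))  # partway down the fringe
print(anchor.extent_at(0.5))   # a wider span at a lower contour
```

An adaptive model could then adjust `fringe` or the core span in response to user actions, widening anchors that users keep selecting just outside the current boundary.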
Optimizing decomposition of software architecture for local recovery
The increasing size and complexity of software systems has led to a growing number of potential failures, and as such makes it harder to ensure software reliability. Since it is usually hard to prevent all failures, fault tolerance techniques have become more important. An essential element of fault tolerance is recovery from failures. Local recovery is an effective approach whereby only the erroneous parts of the system are recovered while the other parts remain available. To achieve local recovery, the architecture needs to be decomposed into separate units that can be recovered in isolation. Usually, there are many different alternative ways to decompose the system into recoverable units, and each of these decomposition alternatives performs differently with respect to availability and performance metrics. We propose a systematic approach dedicated to optimizing the decomposition of software architecture for local recovery. The approach provides systematic guidelines to depict the design space of the possible decomposition alternatives, to reduce the design space with respect to domain and stakeholder constraints, and to balance the feasible alternatives with respect to availability and performance. The approach is supported by an integrated set of tools and illustrated for the open-source MPlayer software.
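The search over decomposition alternatives described above can be sketched as enumerating the set partitions of the system's modules and scoring each partition against an availability/performance trade-off. The scoring formula and the module names below are illustrative assumptions, not the paper's actual analytical model.

```python
# A minimal sketch of optimizing a decomposition for local recovery:
# enumerate every partition of the modules into recovery units, score
# each alternative, and keep the best. The toy score rewards isolation
# (less of the system is lost per failure) and penalizes extra units
# (inter-unit communication overhead).

def partitions(items):
    """Yield every way to split `items` into non-empty recovery units."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # `first` joins an existing unit, or forms a unit of its own
        for i, unit in enumerate(smaller):
            yield smaller[:i] + [[first] + unit] + smaller[i + 1:]
        yield [[first]] + smaller


def score(decomposition, failure_rate=0.1, overhead_per_unit=0.02):
    """Toy availability/performance trade-off for one decomposition."""
    n_modules = sum(len(unit) for unit in decomposition)
    # When one module fails its whole unit is restarted, so the expected
    # number of modules lost grows with the square of the unit sizes.
    expected_loss = sum(len(unit) ** 2 for unit in decomposition) / n_modules
    availability = 1.0 - failure_rate * expected_loss
    cost = overhead_per_unit * len(decomposition)
    return availability - cost


# Hypothetical MPlayer-like modules; the real study uses a richer model.
modules = ["gui", "core", "demuxer"]
best = max(partitions(modules), key=score)
print(best)
```

Even this toy version shows why the design space needs reducing: the number of partitions (the Bell number) grows super-exponentially with the number of modules, so domain and stakeholder constraints are what keep the search tractable.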
An architectural comparison of distributed object technologies
Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. 115-117). By Jay Ongg.