A Generative Programming Framework for Adaptive Middleware
Historically, many distributed real-time and embedded (DRE) systems were developed manually from scratch, leading to stove-piped solutions that, while correct in both functional and QoS properties, were very expensive to develop and difficult to maintain and extend. First-generation middleware technologies such as CORBA 2.x [1], XML [2], and SOAP [3] served to shield application developers from low-level platform details, thus raising the level of abstraction at which distributed systems are developed and supporting reuse of infrastructure to amortize development costs over the lifetime of a system. However, interdependencies between services and object interfaces resulting from these programming models significantly limited the degree of reuse that could be achieved in practice. Component middleware technologies such as the CORBA Component Model (CCM) [4], J2EE [5], and .NET [6] were developed to address many of these limitations. In CCM, for example, standardization of component containers, ports, and homes offered a framework that facilitated reuse of server as well as client infrastructure. Component-oriented middleware has addressed a wide range of application domains, but unfortunately for DRE systems the focus of these technologies has been primarily on functional rather than QoS properties. For example, although CCM supports configuration of functional component attributes such as their interconnections, key QoS attributes for DRE systems, such as execution times and invocation rates, are inadequately configurable through conventional CCM [7]. Research on QoS-aware component models such as the CIAO project [8, 7] is showing significant promise in making QoS configuration a first-class part of the component programming model, thus further reducing accidental complexities of building DRE systems.
However, it is important to note a fundamental difference between configuration of functional and QoS properties even within such a unified component model: the dominant decomposition of functional properties is essentially object-oriented, while the dominant decomposition of QoS properties is essentially aspect-oriented. That is, functional properties tend to be stable with respect to component boundaries and configuration lifecycle stages, while QoS properties tend to cross-cut component boundaries and may be revised as more information becomes known in later configuration stages [7]. In this paper, we describe how a focus on aspect frameworks for configuring QoS properties both complements and extends QoS-aware component models. This paper makes three main contributions to the state of the art in DRE systems middleware. First, it describes a simple but representative problem for configuring QoS aspects that cross-cut both architectural layers and system lifecycle boundaries, which motivates our focus on aspect frameworks. Second, it provides a formalization of that problem using first-order logic, which both guides the design of aspect configuration infrastructure and offers a way to connect these techniques with model-integrated computing [9] approaches to further reduce the programming burden on DRE system developers. Third, it describes alternative mechanisms to ensure correct configuration of the aspects involved, and notes the phases of the DRE system lifecycle at which each such configuration mechanism is most appropriate.
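The cross-cutting nature of QoS configuration can be illustrated with a small sketch (plain Python, not the CCM or CIAO API; the rate-limit policy, `qos_config`, and the component operations are all invented for illustration): a shared QoS "aspect" caps the invocation rate of several components at once, so the policy can be revised at a later lifecycle stage without touching component code.

```python
import time

# Hypothetical cross-cutting QoS configuration: revisable late in the
# deployment lifecycle, separately from any individual component.
qos_config = {"max_rate_hz": 2.0}

def rate_limited(func):
    """Wrap a component operation so its invocation rate is capped by the
    shared QoS configuration rather than by the component itself."""
    last_call = [0.0]
    def wrapper(*args, **kwargs):
        min_interval = 1.0 / qos_config["max_rate_hz"]
        wait = last_call[0] + min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        last_call[0] = time.monotonic()
        return func(*args, **kwargs)
    return wrapper

@rate_limited
def sensor_read():          # illustrative component operation
    return "reading"

@rate_limited
def actuator_write():       # a second component, governed by the same aspect
    return "written"
```

Because the policy lives in `qos_config` rather than in either component, changing `max_rate_hz` at deployment time reconfigures every decorated operation, which is the aspect-oriented decomposition the abstract contrasts with object-oriented functional configuration.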
The use of agents and objects to integrate virtual enterprises
The manufacturing complex for the Department of Energy (DOE) is distributed across design laboratories, manufacturing facilities, and industrial partners. Designers must have a concurrent engineering environment to support all aspects of the cradle-to-grave product realization process across the distributed sites. Engineers must be able to analyze and simulate processes, retrieve and process heterogeneous information, both archived and current, and access multiple databases. Manufacturers must be able to coordinate activities of various manufacturing centers, which may involve a negotiation process. Furthermore, Sandia must be able to export manufacturing capabilities, such as on-machine acceptance, to outside suppliers. A key element to making this a reality is a flexible information architecture. The DOE information architecture must support a wide-area virtual enterprise with distributed intelligent software components. The architecture must provide for asynchronous communication; multiple programming languages and operating systems; incorporation of geographically distributed manufacturing services; various hardware platforms; and heterogeneous workstations, PCs, machine tool controllers, and special-purpose compute engines. Further, it is critical that manufacturing facilities are not isolated from design, planning, and other business activities, and that information flows easily and bidirectionally between these activities. To accomplish this seamlessly, heterogeneous knowledge must be exchanged across both domain and organizational boundaries. Distributed object and software agent technologies are two methods for connecting such engineering and manufacturing systems. The two technologies have overlapping goals, interoperability and architectural support for integrating software components, though to date little or no integration of the two has been achieved.
Use of A Network Enabled Server System for a Sparse Linear Algebra Grid Application
Solving sparse systems of linear equations is one of the key operations in linear algebra. Many different algorithms are available for that purpose, and they require very accurate tuning to minimise runtime and memory consumption. The TLSE project provides, on the one hand, a scenario-driven expert site to help users choose the right algorithm for their problem and tune that algorithm accurately, and, on the other hand, a test-bed that lets experts compare algorithms and define scenarios for the expert site. Both features require running the available solvers a large number of times with many different values of the control parameters (and possibly on many different architectures). Currently, only the grid can provide enough computing power for this kind of application. The DIET middleware is the grid backbone for TLSE: it manages the solver services and their scheduling in a scalable way.
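The parameter-sweep workload described above can be sketched in miniature (pure Python; the `jacobi` solver, the test matrix, and the candidate `omega` values are illustrative stand-ins, not part of TLSE or DIET): each candidate value of a control parameter corresponds to one more solver run that the middleware would schedule, and the best-performing setting is kept.

```python
def jacobi(A, b, omega, iters=500, tol=1e-10):
    """Weighted Jacobi iteration; `omega` is the control parameter being tuned.
    Returns the solution and the number of iterations used."""
    n = len(b)
    x = [0.0] * n
    for k in range(iters):
        x_new = [
            (1 - omega) * x[i]
            + omega * (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, k + 1
        x = x_new
    return x, iters

# A small diagonally dominant system (exact solution: [1, 1, 1]).
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]

# Each candidate omega means one full solver run; on a grid these runs
# would be dispatched in parallel across available servers.
results = {omega: jacobi(A, b, omega)[1] for omega in (0.5, 0.8, 1.0)}
best = min(results, key=results.get)   # parameter with fewest iterations
```

The expert site performs essentially this loop at scale: many solvers, many control parameters, many architectures, with the middleware handling dispatch and scheduling.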
Models of higher-order, type-safe, distributed computation over autonomous persistent object stores
A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and nowadays enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity, but also partly a consequence of more than 20 years of research in transparent distribution that has failed to deliver systems that meet the expectations of real-world application programmers.
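The core RPC idea can be shown with Python's standard library (an illustrative, non-persistent sketch; the `shout` procedure and the loopback address are invented for the example): the procedure executes in the server's address space, yet the client invokes it with ordinary call syntax.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure and serve it in a background thread.
# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda s: s.upper(), "shout")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes a remote procedure look like a local call.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.shout("rpc")   # → "RPC", computed in the server's address space
server.shutdown()
```

This is exactly the simplicity the abstract credits RPC with; what the thesis adds on top is higher-order values, persistence, and tolerance of partial failure, none of which this plain sketch provides.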
During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution.
In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic, in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications.
In order to demonstrate the validity of these claims, I propose and have implemented three
models for distributed higher-order computation over autonomous persistent stores. Each
model has successively exposed new problems which have then been overcome by the next
model. Together, the three models provide a general yet simple higher-order persistent
RPC that is able to operate in realistic environments with partial failures.
The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications. Furthermore, a distributed persistent application has been built using these models which would not have been feasible with a traditional (non-persistent) programming language.
Adaptive object management for distributed systems
This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's Objectbroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment.
Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
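The binding model the thesis calls for can be sketched as a runtime registry (hypothetical names, plain Python, not the thesis's actual tools): components are installed by name and bindings are resolved only when used, so no static dependency ties a client to a particular implementation.

```python
class Registry:
    """A minimal general-purpose binding model: names map to component
    factories, and bindings are resolved at use time, not build time."""

    def __init__(self):
        self._components = {}

    def install(self, name, factory):
        # Plug a component in (or replace one) with no static link to its users.
        self._components[name] = factory

    def bind(self, name):
        # Resolve a binding only when the component is actually used.
        return self._components[name]()

# Two independently produced, interchangeable components.
class FileLogger:
    def log(self, msg):
        return f"file: {msg}"

class ConsoleLogger:
    def log(self, msg):
        return f"console: {msg}"

registry = Registry()
registry.install("logger", FileLogger)        # deployment-time choice
message = registry.bind("logger").log("started")

# Re-configuring swaps the implementation without rebuilding the client.
registry.install("logger", ConsoleLogger)
message2 = registry.bind("logger").log("started")
```

Moving the binding decision out of component implementations and into a configuration model like this is what lets a management layer relocate objects or change policies without touching component code, as the abstract proposes.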