Object replication in a distributed system
PhD Thesis

A number of techniques have been proposed for the construction of fault-tolerant
applications. One of these techniques is to replicate vital system resources so that if one
copy fails sufficient copies may still remain operational to allow the application to
continue to function. Interactions with replicated resources are inherently more complex
than non-replicated interactions, and hence some form of replication transparency is
necessary. This may be achieved by employing replica consistency protocols to mask replica
failures and maintain consistency of state between functioning replicas.
To achieve consistency between replicas it is necessary to ensure that all replicas
receive the same set of messages in the same order, despite failures at the senders and
receivers. This can be accomplished by making use of order preserving reliable
communication protocols. However, we shall show how it can be more efficient to use
unordered reliable communication and to impose ordering at the application level, by
making use of syntactic knowledge of the application.
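The idea above can be made concrete with a minimal sketch (illustrative only, not the thesis's actual protocol): a replica receives operations over an unordered but reliable channel, applies operations that are syntactically known to commute (here, reads) immediately, and holds back conflicting operations until their sequence number comes up.

```python
# Sketch: application-level ordering over unordered reliable delivery.
# Reads commute with each other, so they need no ordering; writes conflict,
# so each carries a sequence number and is held back until its turn.

class Replica:
    def __init__(self):
        self.state = {}
        self.next_seq = 0      # next conflicting (write) op we may apply
        self.held = {}         # seq -> op, waiting for earlier ops to arrive

    def receive(self, op):
        if op["kind"] == "read":           # commutes: apply immediately
            return self.state.get(op["key"])
        self.held[op["seq"]] = op          # conflicting: order by seq
        while self.next_seq in self.held:  # drain any now-ready ops
            pending = self.held.pop(self.next_seq)
            self.state[pending["key"]] = pending["value"]
            self.next_seq += 1

r = Replica()
r.receive({"kind": "write", "seq": 1, "key": "x", "value": 2})  # arrives early
r.receive({"kind": "write", "seq": 0, "key": "x", "value": 1})
# both writes are applied in sequence order, so x ends up 2
```

The point is that only the conflicting operations pay the ordering cost; commuting operations are delivered as soon as they arrive, which is where the efficiency gain over a fully order-preserving protocol comes from.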
This thesis develops techniques for replicating objects: in general this is harder than
replicating data, as objects (which can contain data) can contain calls on other objects.
Handling replicated objects is essentially the same as handling replicated computations,
and presents more problems than simply replicating data. We shall use the concept of the
object to provide transparent replication to users: a user will interact with only a single
object interface which hides the fact that the object is actually replicated.
The main aspects of the replication scheme presented in this thesis have been fully
implemented and tested. This includes the design and implementation of a replicated
object invocation protocol and the algorithms which ensure that (replicated) atomic
actions can manipulate replicated objects.

Research Studentship, Science and Engineering Research Council.
Esprit Project 2267 (Integrated Systems Architecture)
Multidimensional catalogs for systematic exploration of component-based design spaces
Most component-based approaches to elaborating software require complete and consistent descriptions of components, but in practical settings component information is incomplete, imprecise and changing, and requirements may be likewise. More realistically deployable are approaches that combine exploration of candidate architectures with their evaluation vis-à-vis requirements, and deal with the fuzziness of available component information. This article presents an approach to systematic generation, evaluation and re-generation of component assemblies, using potentially incomplete, imprecise, unreliable and changing descriptions of requirements and components. The key ideas are the representation of NFRs as architectural policies, the systematic reification of policies into mechanisms and components that implement them, multidimensional characterizations of these three levels, and catalogs of them. The Azimut framework embodies these ideas and enables traceability of architecture by supporting architecture-level reasoning, and allows architects to engage in systematic exploration of design spaces.
A detailed example illustrates the approach.

1st International Workshop on Advanced Software Engineering: Expanding the Frontiers of Software Technology - Session 1: Software Architecture. Red de Universidades con Carreras en Informática (RedUNCI).
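The three-level characterization (NFR, policy, mechanism/component) can be pictured as a nested catalog. The following sketch is illustrative only (names and the `candidates` helper are invented for this example, not the Azimut API):

```python
# Sketch of a multidimensional catalog: each non-functional requirement maps
# to architectural policies, each policy to mechanisms, each mechanism to
# candidate components that implement it.

CATALOG = {
    "availability": {                                  # NFR dimension
        "replication": {                               # architectural policy
            "primary-backup": ["drbd", "keepalived"],  # mechanism -> components
            "active-replication": ["spread", "jgroups"],
        },
    },
    "performance": {
        "caching": {
            "in-memory-cache": ["memcached", "redis"],
        },
    },
}

def candidates(nfr):
    """Enumerate (policy, mechanism, component) triples satisfying an NFR."""
    return [(policy, mech, comp)
            for policy, mechs in CATALOG.get(nfr, {}).items()
            for mech, comps in mechs.items()
            for comp in comps]
```

Exploring a design space then amounts to enumerating and scoring such triples against the (possibly incomplete) requirement descriptions, and re-generating assemblies as the catalog or the requirements change.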
Unification of Transactions and Replication in Three-Tier Architectures Based on CORBA
In this paper, we describe a software infrastructure that unifies transactions and replication in three-tier architectures and provides data consistency and high availability for enterprise applications. The infrastructure uses transactions based on the CORBA object transaction service to protect the application data in databases on stable storage, using a roll-backward recovery strategy, and replication based on the fault tolerant CORBA standard to protect the middle-tier servers, using a roll-forward recovery strategy. The infrastructure replicates the middle-tier servers to protect the application business logic processing. In addition, it replicates the transaction coordinator, which renders the two-phase commit protocol nonblocking and, thus, avoids potentially long service disruptions caused by failure of the coordinator. The infrastructure handles the interactions between the replicated middle-tier servers and the database servers through replicated gateways that prevent duplicate requests from reaching the database servers. It implements automatic client-side failover mechanisms, which guarantee that clients know the outcome of the requests that they have made, and retries aborted transactions automatically on behalf of the clients.
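The duplicate-suppression role of the gateway can be sketched as follows (a minimal illustration, not the paper's CORBA gateway): the gateway remembers the reply for each request identifier, so when several middle-tier replicas forward the same logical request, the database is reached exactly once.

```python
# Sketch: a gateway between replicated middle-tier servers and the database
# that suppresses duplicate requests by caching the reply per request id.

class DedupGateway:
    def __init__(self, database):
        self.database = database
        self.replies = {}                 # request id -> cached reply

    def forward(self, request_id, query):
        if request_id in self.replies:    # duplicate from another replica
            return self.replies[request_id]
        reply = self.database(query)      # reaches the database exactly once
        self.replies[request_id] = reply
        return reply
```

Returning the cached reply (rather than an error) also gives every replica the same view of the request's outcome, which is what the client-side failover mechanism relies on.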
Practical issues for the implementation of survivability and recovery techniques in optical networks
On Utilization of Contributory Storage in Desktop Grids
The availability of desktop grids and shared computing platforms has popularized the use of contributory resources,
such as desktops, as computing substrates for a variety of applications. However, addressing the exponentially growing
storage demands of applications, especially in a contributory environment, remains a challenging research problem. In
this report, we propose a transparent distributed storage system that harnesses the storage contributed by grid participants
arranged in a peer-to-peer network to yield a scalable, robust, and self-organizing system. The novelty of our work
lies in (i) design simplicity to facilitate actual use; (ii) support for easy integration with grid platforms; (iii) ingenious
use of striping and error coding techniques to support very large data files; and (iv) the use of multicast techniques
for data replication. Experimental results through simulations and an actual implementation show that our system can
provide reliable and efficient storage with large file support for desktop grid applications.
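The combination of striping and error coding mentioned in (iii) can be illustrated with the simplest possible code, a single XOR parity stripe (an illustrative sketch, not the report's actual scheme, which supports stronger erasure codes): a file is split across peers, and any one lost stripe can be rebuilt from the survivors.

```python
# Sketch: striping a file across peers with one XOR parity stripe, so the
# loss of any single stripe (e.g. a departed peer) is recoverable.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def stripe(data, n):
    """Split data into n equal-size stripes plus one parity stripe."""
    size = -(-len(data) // n)                     # ceiling division
    stripes = [data[i * size:(i + 1) * size].ljust(size, b"\0")
               for i in range(n)]
    parity = stripes[0]
    for s in stripes[1:]:
        parity = xor_blocks(parity, s)
    return stripes, parity

def recover(stripes, parity, lost):
    """Rebuild the stripe at index `lost` from the survivors and the parity."""
    rebuilt = parity
    for i, s in enumerate(stripes):
        if i != lost:
            rebuilt = xor_blocks(rebuilt, s)
    return rebuilt
```

In a contributory setting the stripes would live on different peers; tolerating more than one simultaneous departure requires a proper erasure code (e.g. Reed-Solomon) rather than single parity, at the cost of more redundant storage.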
Digital light processing stereolithography of hydroxyapatite scaffolds with bone-like architecture, permeability, and mechanical properties
This work deals with the additive manufacturing and characterization of hydroxyapatite scaffolds mimicking the trabecular architecture of cancellous bone. A novel approach was proposed relying on stereolithographic technology, which builds foam‐like ceramic scaffolds by using three‐dimensional (3D) micro‐tomographic reconstructions of polymeric sponges as virtual templates for the manufacturing process. The layer‐by‐layer fabrication process involves the selective polymerization of a photocurable resin in which hydroxyapatite particles are homogeneously dispersed. Irradiation is performed by a dynamic mask that projects blue light onto the slurry. After sintering, highly‐porous hydroxyapatite scaffolds (total porosity ~0.80, pore size 100‐800 µm) replicating the 3D open‐cell architecture of the polymeric template as well as spongy bone were obtained. Intrinsic permeability of scaffolds was determined by measuring laminar airflow alternating pressure wave drops and was found to be within 0.75‐1.74 × 10−9 m2, which is comparable to the range of human cancellous bone. Compressive tests were also carried out in order to determine the strength (~1.60 MPa), elastic modulus (~513 MPa) and Weibull modulus (m = 2.2) of the scaffolds. Overall, the fabrication strategy used to print hydroxyapatite scaffolds (tomographic imaging combined with digital mirror device [DMD]‐based stereolithography) shows great promise for the development of porous bioceramics with bone‐like architecture and mass transport properties.
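For reference, the Weibull modulus m quoted above comes from the standard two-parameter Weibull form for the failure probability of a brittle specimen at stress σ (the general textbook formula, not a relation derived in this paper):

```latex
P_f(\sigma) = 1 - \exp\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]
```

where $\sigma_0$ is the characteristic strength and $m$ the Weibull modulus; a low value such as $m = 2.2$ indicates a wide scatter in strength, as is typical of highly porous ceramics.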