Component technologies: Java Beans, COM, CORBA, RMI, EJB and the CORBA component model
This one-day tutorial is aimed at software engineering practitioners and researchers who are familiar with object-oriented analysis, design and programming and want to obtain an overview of the technologies that are enabling component-based development. We introduce the idea of component-based development by defining the concept and providing its economic rationale. We describe how object-oriented programming evolved into local component models, such as Java Beans, and distributed object technologies, such as the Common Object Request Broker Architecture (CORBA), Java Remote Method Invocation (RMI) and the Component Object Model (COM). We then address how these technologies matured into distributed component models, in particular Enterprise Java Beans (EJB) and the CORBA Component Model (CCM). We give an assessment of the maturity of each of these technologies and sketch how they are used to build distributed architectures.
Reliable Actors with Retry Orchestration
Enterprise cloud developers have to build applications that are resilient to
failures and interruptions. We advocate for, formalize, implement, and evaluate
a simple, albeit effective, fault-tolerant programming model for the cloud
based on actors, reliable message delivery, and retry orchestration. Our model
guarantees that (1) failed actor invocations are retried until success, (2) in
a distributed chain of invocations only the last one may be retried, (3)
pending synchronous invocations with a failed caller are automatically
cancelled. These guarantees make it possible to productively develop
fault-tolerant distributed applications ranging from classic problems of
concurrency theory to complex enterprise applications. Built as a service mesh,
our runtime system can interface application components written in any
programming language and scale with the application. We measure overhead
relative to reliable message queues. Using an application inspired by a typical
enterprise scenario, we assess fault tolerance and the impact of fault recovery
on application performance. Comment: 14 pages, 6 figures
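The first two guarantees above can be illustrated with a small toy sketch (my own code, not the paper's service-mesh runtime; cancellation of pending invocations, guarantee 3, is omitted, and a real system would add backoff rather than retry in a tight loop):

```python
class RetryOrchestrator:
    """Toy model of retry orchestration: 'actors' are plain callables
    and a failed invocation raises an exception."""

    def invoke(self, actor, *args, retries_enabled=True):
        while True:
            try:
                return actor(*args)
            except Exception:
                if not retries_enabled:
                    raise  # propagate: this call is not the retry point
                # guarantee (1): a failed invocation is retried until success

    def invoke_chain(self, chain, value):
        # guarantee (2): in a chain of invocations only the last one
        # may be retried; intermediate failures propagate instead
        for actor in chain[:-1]:
            value = self.invoke(actor, value, retries_enabled=False)
        return self.invoke(chain[-1], value)
```

For example, a chain ending in an actor that fails twice before succeeding still completes, because only that tail invocation is retried.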
Infrastructure for distributed enterprise simulation
Traditional discrete-event simulations employ an inherently sequential algorithm and are run on a single computer. However, the demands of many real-world problems exceed the capabilities of sequential simulation systems. Often the capacity of a computer's primary memory limits the size of the models that can be handled, and in some cases parallel execution on multiple processors could significantly reduce the simulation time. This paper describes the development of an Infrastructure for Distributed Enterprise Simulation (IDES) - a large-scale portable parallel simulation framework developed to support Sandia National Laboratories' mission in stockpile stewardship. IDES is based on the Breathing-Time-Buckets synchronization protocol, and maps a message-based model of distributed computing onto an object-oriented programming model. IDES is portable across heterogeneous computing architectures, including single-processor systems, networks of workstations and multi-processor computers with shared or distributed memory. The system provides a simple and sufficient application programming interface that can be used by scientists to quickly model large-scale, complex enterprise systems. In the background and without involving the user, IDES is capable of making dynamic use of idle processing power available throughout the enterprise network. 16 refs., 14 figs.
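The Breathing-Time-Buckets idea mentioned above can be caricatured in a few lines (a single-process toy of my own, not the IDES implementation: each logical process optimistically handles pending events, newly generated messages are buffered, and only messages at or below the global event horizon are released; real rollback machinery is omitted):

```python
import heapq

def btb_cycle(queues, handler):
    """One simplified Breathing-Time-Buckets cycle over per-LP event
    heaps of (timestamp, payload). `handler(lp, ts, payload)` returns
    a list of generated (timestamp, dest_lp, payload) messages."""
    buffered = []
    horizon = float("inf")
    for lp, q in enumerate(queues):
        local_horizon = float("inf")
        # optimistically process events, stopping once the next event
        # would lie beyond the earliest message this LP has generated
        while q and q[0][0] < local_horizon:
            ts, payload = heapq.heappop(q)
            for msg in handler(lp, ts, payload):
                local_horizon = min(local_horizon, msg[0])
                buffered.append(msg)
        horizon = min(horizon, local_horizon)
    # release only messages at or below the global event horizon;
    # a real implementation rolls back work past the horizon
    held = []
    for ts, dest, payload in buffered:
        if ts <= horizon:
            heapq.heappush(queues[dest], (ts, payload))
        else:
            held.append((ts, dest, payload))
    return horizon, held
```

The key property the sketch preserves is that no message crosses the event horizon within a bucket, which is what lets each bucket run without inter-processor synchronization.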
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of
computers for performing large scale experiments. Traditionally, these needs
have been addressed by using high-performance computing solutions and installed
facilities such as clusters and supercomputers, which are difficult to set up,
maintain, and operate. Cloud computing provides scientists with a completely
new model of utilizing the computing infrastructure. Compute resources, storage
resources, as well as applications, can be dynamically provisioned (and
integrated within the existing infrastructure) on a pay-per-use basis. These
resources can be released when they are no longer needed. Such services are often
offered within the context of a Service Level Agreement (SLA), which ensures the
desired Quality of Service (QoS). Aneka, an enterprise Cloud computing
solution, harnesses the power of compute resources by relying on private and
public Clouds and delivers to users the desired QoS. Its flexible and service
based infrastructure supports multiple programming paradigms that make Aneka
address a variety of different scenarios: from finance applications to
computational science. As examples of scientific computing in the Cloud, we
present a preliminary case study on using Aneka for the classification of gene
expression data and the execution of an fMRI brain imaging workflow. Comment: 13 pages, 9 figures, conference paper
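The pay-per-use provisioning model described above invites a back-of-envelope sizing calculation (my own illustrative helper, not part of Aneka's API, assuming perfectly parallel, identical tasks):

```python
import math

def nodes_for_deadline(num_tasks, task_minutes, deadline_minutes):
    """Number of identical nodes to provision so a batch of independent
    tasks finishes within an SLA deadline; release them all afterwards
    so pay-per-use charges stop accruing."""
    total_work = num_tasks * task_minutes
    return math.ceil(total_work / deadline_minutes)

# e.g. 500 three-minute classification tasks under a 60-minute deadline:
nodes_for_deadline(500, 3, 60)  # → 25
```

This is the economic appeal of the model: capacity is sized to the deadline of one experiment rather than to a facility's peak load.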
A MODEL OF HETEROGENEOUS DISTRIBUTED SYSTEM FOR FOREIGN EXCHANGE PORTFOLIO ANALYSIS
The paper investigates the design of a heterogeneous distributed system for foreign exchange portfolio analysis. The proposed model comprises a few separate, dislocated but connected parts linked through distribution mechanisms. Making the system distributed brings new perspectives on performance boosting, in which a software-based load balancer plays a very important role. The desired system should spread over multiple, heterogeneous platforms in order to fulfil the open-platform goal. Building such a model incorporates different patterns, from GoF design patterns, business patterns, J2EE patterns, integration patterns, enterprise patterns and distributed design patterns to Web services patterns. The authors try to find as many appropriate patterns as possible for the planned tasks in order to capture best modelling and programming practices.
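A software load balancer over heterogeneous platforms, as the abstract envisages, is often a weighted dispatch cycle; a minimal sketch (worker names and capacity weights are illustrative assumptions, not from the paper):

```python
import itertools

def weighted_round_robin(workers):
    """Toy load balancer for heterogeneous nodes: each worker appears
    in the dispatch cycle in proportion to its capacity weight, so
    faster machines receive proportionally more requests."""
    ring = [name for name, weight in workers for _ in range(weight)]
    return itertools.cycle(ring)

rr = weighted_round_robin([("fast-node", 2), ("slow-node", 1)])
[next(rr) for _ in range(6)]
# → ['fast-node', 'fast-node', 'slow-node', 'fast-node', 'fast-node', 'slow-node']
```

Production balancers smooth this further (e.g. interleaving rather than grouping a node's slots), but the proportional-capacity idea is the same.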