
    Distributing Real Time Data From a Multi-Node Large Scale Contact Center Using Corba

    This thesis researches and evaluates the current technologies available for developing a system to propagate Real-Time Data from a large-scale Enterprise Server to large numbers of registered clients on the network. The large-scale Enterprise Server being implemented is a Contact Centre Server, which can be a standalone system or part of a multi-node system. This paper makes three contributions to the study of scalable real-time notification services. Firstly, it surveys the different technologies for distributed objects and their implementation in today's computing landscape. Secondly, it explains how we addressed key design challenges faced when implementing a Notification Service for TAO, our CORBA-compliant real-time Object Request Broker (ORB), and shows how to integrate and configure CORBA features to provide real-time event communication. Finally, it analyzes the results of the implementation and how it compares to existing technologies used for the propagation of Real-Time Data.
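
    The push-style event delivery the abstract refers to can be illustrated conceptually: an event channel fans a supplier's structured events out to every registered consumer. The sketch below is a minimal illustration of that pattern only; the class and method names (EventChannel, subscribe, push) are illustrative assumptions and are not the actual TAO/CORBA Notification Service API.

```python
# Minimal sketch of push-style event fan-out, as used conceptually by a
# notification service: suppliers push events to a channel, which forwards
# them to all registered consumers. Names are illustrative, not the TAO API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class StructuredEvent:
    domain: str          # e.g. "ContactCentre"
    type_name: str       # e.g. "AgentStateChanged"
    payload: dict = field(default_factory=dict)


class EventChannel:
    def __init__(self) -> None:
        self._consumers: Dict[str, List[Callable[[StructuredEvent], None]]] = {}

    def subscribe(self, type_name: str, consumer: Callable[[StructuredEvent], None]) -> None:
        """Register a consumer callback for a given event type."""
        self._consumers.setdefault(type_name, []).append(consumer)

    def push(self, event: StructuredEvent) -> None:
        """Supplier-side push: forward the event to every matching consumer."""
        for consumer in self._consumers.get(event.type_name, []):
            consumer(event)


if __name__ == "__main__":
    channel = EventChannel()
    channel.subscribe("AgentStateChanged",
                      lambda ev: print(f"client received: {ev.payload}"))
    channel.push(StructuredEvent("ContactCentre", "AgentStateChanged",
                                 {"agent": 42, "state": "Ready"}))
```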

    A web services based framework for efficient monitoring and event reporting.

    Network and Service Management (NSM) is a research discipline with significant research contributions over the last 25 years. Despite the numerous standardised solutions that have been proposed for NSM, the quest for an "all-encompassing technology" still continues. A technology recently introduced to address NSM problems is Web Services (WS). Despite the research effort put into WS and their potential for addressing NSM objectives, issues of efficiency, interoperability and others need to be solved before WS can be used for NSM. This thesis looks at two techniques to increase the efficiency of WS management applications so that they can be used for efficient monitoring and event reporting. The first is a query tool we built for efficient retrieval of management state data close to the devices where the data are hosted. The second is a set of policies used to delegate tasks from a manager to an agent, making WS-based event reporting systems more efficient. We tested the performance of these mechanisms by incorporating them in a custom monitoring and event reporting framework and supporting systems we built, comparing them against similar mechanisms (XPath) proposed for the same tasks, as well as earlier technologies such as SNMP. These tests show that the mechanisms allow WS to be used efficiently in various monitoring and event reporting scenarios. Having shown the potential of our techniques, we also present the design and implementation challenges of building a GUI tool to support and enhance the above systems with extra capabilities. In summary, we expect that the remaining problems WS face will be solved in the near future, making WS a capable platform for NSM.
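
    The abstract compares its query tool against XPath-style selection over management state data. A minimal sketch of that XPath-style baseline might look like the following; the XML layout (interface elements with a status attribute and octet counters) is an assumption made purely for illustration, not a real management data model.

```python
# Hedged sketch: selecting management state data with an XPath-style query,
# the kind of mechanism the thesis compares its own query tool against.
# The XML layout below is an illustrative assumption, not a real MIB/model.
import xml.etree.ElementTree as ET

DEVICE_STATE = """
<device name="edge-router-1">
  <interface name="eth0" status="up"><inOctets>91234</inOctets></interface>
  <interface name="eth1" status="down"><inOctets>0</inOctets></interface>
</device>
"""

root = ET.fromstring(DEVICE_STATE)

# ElementTree supports a limited XPath subset: fetch only the interfaces
# that are administratively up, close to where the data is hosted.
for iface in root.findall(".//interface[@status='up']"):
    print(iface.get("name"), iface.findtext("inOctets"))
```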

    EbbRT: a framework for building per-application library operating systems

    Efficient use of high-speed hardware requires that operating system components be customized to the application workload. Our general-purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable high performance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates that memcached, run within a VM, can outperform memcached run on unvirtualized Linux. The prototype evaluation also demonstrates a 14% performance improvement on a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th-percentile latency compared to running on Linux.

    RESTful Service Composition

    The Service-Oriented Architecture (SOA) has become one of the most popular approaches to building large-scale network applications. Web service technologies are the de facto default implementation for SOA, and the Simple Object Access Protocol (SOAP) is their key and fundamental technology. Service composition is a way to deliver complex services based on existing partner services. Service orchestration, supported by the Web Services Business Process Execution Language (WSBPEL), is the dominant approach to web service composition. WSBPEL-based service orchestration inherited the issue of interoperability from SOAP, and is further challenged on performance, scalability, reliability and modifiability. In this thesis I present an architectural approach to service composition that addresses these challenges. An architectural solution is generic enough to be applied to a large spectrum of problems. I name the architectural style RESTful Service Composition (RSC), because many of its elements and constraints are derived from Representational State Transfer (REST), the architectural style developed to describe the Web. The Web has demonstrated outstanding interoperability, performance, scalability, reliability and modifiability. RSC is designed for service composition on the Internet. The RSC style is composed of specific element types, including the RESTful service composition client, RESTful partner proxy, composite resource, resource client, functional computation and relaying service. A service composition is partitioned into stages; each stage is represented as a computation that has a uniform identifier and a set of uniform access methods; and the transitions between stages are driven by computational batons. RSC is supplemented by a programming model that emphasizes on-demand functions, map-reduce and continuation passing. An RSC-style composition does not depend on either a central conductor service or a common choreography specification, which distinguishes it from service orchestration and service choreography. Four scenarios are used to evaluate the performance, scalability, reliability and modifiability improvements of the RSC approach compared to orchestration, with an RSC-style solution and an orchestration solution compared side by side in every scenario. The first scenario evaluates the performance improvement of the X-Ray Diffraction (XRD) application in ScienceStudio; the second evaluates the scalability improvement of the Process Variable (PV) snapshot application; the third evaluates the reliability improvement of a notification application by simulation; and the fourth evaluates the modifiability improvement of the XRD application against emerging requirements. The results show that the RSC approach outperforms the orchestration approach in every aspect.
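
    The staged, continuation-passing composition described above can be sketched roughly as follows: each stage is addressed by a uniform identifier, performs its computation, and hands the client a "baton" naming the next stage to invoke. The stage names, URIs and payloads below are illustrative assumptions, not the actual ScienceStudio or RSC interfaces.

```python
# Rough sketch of continuation-passing composition: each stage returns a
# "baton" that points the client at the next stage's uniform identifier.
# Stage names and payloads are illustrative assumptions only.
from typing import Callable, Dict, Optional, Tuple

Baton = Tuple[Optional[str], dict]          # (next stage URI, state to carry)
Stage = Callable[[dict], Baton]

def collect_stage(state: dict) -> Baton:
    state["frames"] = ["frame-001", "frame-002"]
    return "/stages/reduce", state          # baton: go to the reduce stage next

def reduce_stage(state: dict) -> Baton:
    state["pattern"] = f"reduced({len(state['frames'])} frames)"
    return None, state                      # no next stage: composition is done

STAGES: Dict[str, Stage] = {
    "/stages/collect": collect_stage,
    "/stages/reduce": reduce_stage,
}

def run_composition(start: str, state: dict) -> dict:
    """Client drives the composition by following batons stage to stage."""
    uri: Optional[str] = start
    while uri is not None:
        uri, state = STAGES[uri](state)
    return state

if __name__ == "__main__":
    print(run_composition("/stages/collect", {"sample": "quartz"}))
```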

    A Policy-Based Resource Brokering Environment for Computational Grids

    With the advances in networking infrastructure in general, and the Internet in particular, we can build grid environments that allow users to utilize a diverse set of distributed and heterogeneous resources. Since the focus of such environments is the efficient usage of the underlying resources, a critical component is the resource brokering environment that mediates the discovery, access and usage of these resources. Given the consumer's constraints, the provider's rules, distributed heterogeneous resources and the large number of scheduling choices, the resource brokering environment needs to decide where to place the user's jobs and when to start their execution in a way that yields the best performance for the user and the best utilization for the resource provider. As brokering and scheduling are very complicated tasks, most current resource brokering environments are either specific to a particular grid environment or have limited features, making them unsuitable for large applications with heterogeneous requirements. In addition, most of these resource brokering environments lack flexibility: policies at the resource, application and system levels cannot be specified and enforced to provide commitment to a guaranteed level of allocation, which would help attract grid users and contribute to establishing credibility for existing grid environments. In this thesis, we propose and prototype a flexible and extensible Policy-based Resource Brokering Environment (PROBE) that can be utilized by various grid systems. In designing PROBE, we follow a policy-based approach that gives PROBE the intelligence not only to match the user's request with the right set of resources, but also to assure the guaranteed level of allocation. PROBE treats task allocation as a Service Level Agreement (SLA) that needs to be enforced between the resource provider and the resource consumer. The policy-based framework is useful in a typical grid environment where resources, most of the time, are not dedicated. In implementing PROBE, we have utilized a layered architecture and façade design patterns. These, along with a well-defined API, make the framework independent of any architecture and allow for the incorporation of different types of scheduling algorithms, applications and platform adaptors as the underlying environment requires. We have utilized XML as the base for all specification needs, which provides a flexible mechanism to specify heterogeneous resources and users' requests along with their allocation constraints. We have developed XML-based specifications by which the high-level internal structures of resources, jobs and policies can be specified. This provides interoperability, allowing a grid system to utilize PROBE to discover and use resources controlled by other grid systems. We have implemented a prototype of PROBE to demonstrate its feasibility. We also describe a test bed environment and the evaluation experiments that we have conducted to demonstrate the usefulness and effectiveness of our approach.
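
    The abstract describes XML-based specifications of resources, jobs and policies. A minimal sketch of that idea, assuming an invented schema (jobRequest, resource, policy elements) that does not reflect PROBE's actual specification language, might look like this:

```python
# Hedged sketch of the kind of XML-based specification the abstract describes:
# a job request with allocation constraints, parsed into a Python structure.
# The element and attribute names are illustrative assumptions, not PROBE's
# actual schema.
import xml.etree.ElementTree as ET

JOB_REQUEST = """
<jobRequest id="job-17">
  <resource type="compute" cpus="8" memoryGB="16"/>
  <policy name="deadline" value="2024-06-01T12:00:00Z"/>
  <policy name="maxCostPerHour" value="0.50"/>
</jobRequest>
"""

def parse_job_request(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    resource = root.find("resource")
    return {
        "id": root.get("id"),
        "resource": dict(resource.attrib) if resource is not None else {},
        "policies": {p.get("name"): p.get("value") for p in root.findall("policy")},
    }

if __name__ == "__main__":
    print(parse_job_request(JOB_REQUEST))
```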

    Quality of service management for non-guaranteed networks

    The increasing dominance of multimedia communication has posed new requirements for the underlying systems. Multimedia data, formally called continuous media, has time constraints that impose real-time limitations on its transmission. Certain levels of service, called Quality of Service (QoS), need to be maintained when handling continuous media. The present work applies QoS concepts to networks that do not have inherent QoS support. The thesis aims to verify the possibility of QoS-controlled communication over non-guaranteed networks. A basic QoS architecture is designed in which existing QoS concepts are adapted to work with non-guaranteed networks. The architecture provides facilities for QoS specification, mapping, admission, maintenance, monitoring and notification. In addition, a new concept of predictive QoS admission is introduced. The proposed architecture was verified using a prototype system. The results showed an increased percentage of continuous media arriving on time at their receivers (goodput) under higher network loads. The increased goodput came at the expense of higher network overhead.
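
    One way to picture predictive QoS admission, as a rough sketch rather than the thesis's actual algorithm, is to estimate near-future load from recent observations and admit a new stream only if the predicted load plus the requested bandwidth still fits within capacity. The moving-average predictor, capacity figure and units below are all illustrative assumptions.

```python
# Illustrative sketch (not the thesis's actual algorithm) of predictive
# admission control: predict near-future load from recent samples and admit
# a new stream only if predicted load plus the request fits the capacity.
from collections import deque

class PredictiveAdmission:
    def __init__(self, capacity_kbps: float, window: int = 5):
        self.capacity = capacity_kbps
        self.samples = deque(maxlen=window)   # recent load measurements (kbps)

    def observe(self, load_kbps: float) -> None:
        self.samples.append(load_kbps)

    def predicted_load(self) -> float:
        """Simple moving-average predictor over the recent window."""
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def admit(self, requested_kbps: float) -> bool:
        return self.predicted_load() + requested_kbps <= self.capacity

if __name__ == "__main__":
    qos = PredictiveAdmission(capacity_kbps=1000)
    for load in (300, 350, 400):
        qos.observe(load)
    print(qos.admit(500))   # True: predicted 350 + 500 <= 1000
    print(qos.admit(700))   # False: predicted 350 + 700 > 1000
```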

    Communication in Microkernel-Based Operating Systems

    Communication in microkernel-based systems is much more frequent than the system calls known from monolithic kernels. This can be attributed to the placement of system services in their own protection domains. Communication has to be fast to avoid unnecessary overhead. Also, communication channels in microkernel-based systems are used for more than just remote procedure calls. In distributed systems, which also have a componentized design, it is state of the art to use tools to generate stubs for the communication between components, with the communication interfaces of components described in an interface definition language (IDL). In contrast to distributed systems, the components of a microkernel-based system run on the same architecture, and message delivery is guaranteed. In this thesis, I explore the different kinds of communication that can be used in microkernel-based systems, as well as their possible representation in IDL. Specifically, I introduce syntax to describe kernel objects in IDL. I discuss the complexity of IDL compilers and its relation to the complexity of the IDL. Furthermore, I evaluate the performance of the communication stubs generated by different IDL compilers and discuss techniques to minimize performance overhead in generated stubs. I validated these techniques by implementing the Drops IDL Compiler - Dice. Finally, this thesis presents a mechanism to measure the frequency and performance of invocations of generated communication code. I used this technique to conduct measurements in highly complex systems while introducing the least possible overhead.
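
    For readers unfamiliar with IDL-generated stubs, the sketch below shows conceptually what a generated client stub does: marshal the call's arguments into a message, invoke the kernel's IPC primitive, and unmarshal the reply. It is not output of Dice, and the interface, opcodes and message layout are assumptions made only for illustration.

```python
# Conceptual sketch of what an IDL-generated client stub does: marshal the
# arguments into a message, invoke the kernel's IPC primitive, and unmarshal
# the reply. This is not Dice output; names and formats are assumptions.
import struct

def ipc_call(dest: int, message: bytes) -> bytes:
    """Stand-in for the kernel IPC primitive (here: a fake echo server)."""
    opcode, a, b = struct.unpack("<III", message)
    return struct.pack("<I", a + b)          # pretend the server computed a + b

class CalcStub:
    """Client stub for a hypothetical IDL interface: int add(int a, int b)."""
    OP_ADD = 1

    def __init__(self, service_id: int):
        self.service_id = service_id

    def add(self, a: int, b: int) -> int:
        request = struct.pack("<III", self.OP_ADD, a, b)   # marshal arguments
        reply = ipc_call(self.service_id, request)         # send/receive IPC
        (result,) = struct.unpack("<I", reply)             # unmarshal result
        return result

if __name__ == "__main__":
    print(CalcStub(service_id=7).add(2, 3))   # -> 5
```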
