
    A Peer-to-Peer Middleware Framework for Resilient Persistent Programming

    The persistent programming systems of the 1980s offered a programming model that integrated computation and long-term storage. In these systems, reliable applications could be engineered without requiring the programmer to write translation code to manage the transfer of data to and from non-volatile storage. More importantly, it simplified the programmer's conceptual model of an application, and avoided the many coherency problems that result from multiple cached copies of the same information. Although technically innovative, persistent languages were not widely adopted, perhaps due in part to their closed-world model. Each persistent store was located on a single host, and there were no flexible mechanisms for communication or transfer of data between separate stores. Here we re-open the work on persistence and combine it with modern peer-to-peer techniques in order to provide support for orthogonal persistence in resilient and potentially long-running distributed applications. Our vision is of an infrastructure within which an application can be developed and distributed with minimal modification, whereupon the application becomes resilient to certain failure modes. If a node, or the connection to it, fails during execution of the application, the objects are re-instantiated from distributed replicas, without their reference holders being aware of the failure. Furthermore, we believe that this can be achieved within a spectrum of application programmer intervention, ranging from minimal to totally prescriptive, as desired. The same mechanisms encompass an orthogonally persistent programming model. We outline our approach to implementing this vision, and describe current progress. Comment: Submitted to EuroSys 200
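
    The failure-transparency idea described above can be pictured as a reference that silently re-binds itself to a replica when the node holding the primary copy becomes unreachable. The following Java sketch is illustrative only: the RemoteRef and ResilientRef types are hypothetical stand-ins, not the framework's actual API.

```java
// Hypothetical sketch: callers keep using a ResilientRef while it quietly
// fails over to another replica after a node or connection failure.
import java.util.List;
import java.util.function.Function;

class RemoteFailure extends Exception {}

interface RemoteRef<T> {
    <R> R call(Function<T, R> method) throws RemoteFailure;  // invoke on the remote object
}

class ResilientRef<T> implements RemoteRef<T> {
    private final List<RemoteRef<T>> replicas;  // primary first, then replicas
    private int current = 0;

    ResilientRef(List<RemoteRef<T>> replicas) { this.replicas = replicas; }

    @Override
    public <R> R call(Function<T, R> method) throws RemoteFailure {
        // Try the current binding; on failure, re-bind to the next replica so the
        // reference holder never observes the fault (unless all replicas fail).
        for (int attempts = 0; attempts < replicas.size(); attempts++) {
            try {
                return replicas.get(current).call(method);
            } catch (RemoteFailure f) {
                current = (current + 1) % replicas.size();  // fail over to a replica
            }
        }
        throw new RemoteFailure();  // no replica reachable
    }
}
```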

    OpenPING: A Reflective Middleware for the Construction of Adaptive Networked Game Applications

    The emergence of distributed Virtual Reality (VR) applications that run over the Internet has presented networked game application designers with new challenges. In an environment where the public Internet streams multimedia data and is constantly under pressure to deliver over widely heterogeneous user platforms, there is a growing need for distributed VR applications to be aware of, and adapt to, frequent variations in their context of execution. In this paper, we argue that, in contrast to research efforts targeted at improving scalability, persistence and responsiveness, far fewer attempts have been made to address the flexibility, maintainability and extensibility requirements of contemporary distributed VR platforms. We propose the use of structural reflection as an approach that not only addresses these requirements but also offers added value, providing a framework for scalability, persistence and responsiveness that is itself flexible, maintainable and extensible. We also present an adaptive middleware platform implementation, called OpenPING, that supports our proposal in addressing these requirements.
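
    OpenPING's actual interfaces are not given in the abstract; the sketch below is only a generic Java illustration of the structural-reflection idea it relies on: application code calls a base-level component, while a separate meta-level operation lets the middleware swap the underlying implementation at run time as the execution context changes. The Transport interface and method names are hypothetical.

```java
// Generic structural-reflection sketch: the meta level exposes part of a
// component's structure (here, a replaceable transport strategy) so adaptive
// middleware can change it while the game application keeps running.
import java.util.concurrent.atomic.AtomicReference;

interface Transport { void send(byte[] payload); }

class ReflectiveComponent {
    private final AtomicReference<Transport> impl;

    ReflectiveComponent(Transport initial) { this.impl = new AtomicReference<>(initial); }

    // Base level: what the application calls.
    void send(byte[] payload) { impl.get().send(payload); }

    // Meta level: what the middleware calls when the context of execution
    // changes (bandwidth drops, a weaker client platform joins, etc.).
    void replaceTransport(Transport newImpl) { impl.set(newImpl); }
}
```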

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".

    Middleware Design Framework for Mobile Computing

    Mobile computing is one of the rapidly growing fields in wireless networking. Recent standardization efforts in Web services, with their XML-based formats for registration/discovery (UDDI), service description (WSDL), and service access (SOAP), represent an interesting first step towards open service composition, which MA supports for mobile computing are expected to integrate within their frameworks soon. A middleware that continues to work even when network parameters change is a better foundation for successful mobile computing. This paper proposes such a middleware for handling the existing problems in distributed environments. Middleware is about the integration and interoperability of applications and services running on heterogeneous computing and communication devices. The services it provides - including identification, authentication, authorization, soft-switching, certification and security - are used in a vast range of global appliances and systems, from smart cards and wireless devices to mobile services and e-Commerce.

    Software engineering and middleware: a roadmap (Invited talk)

    The construction of a large class of distributed systems can be simplified by leveraging middleware, which is layered between network operating systems and application components. Middleware resolves heterogeneity and facilitates communication and coordination of distributed components. Existing middleware products enable software engineers to build systems that are distributed across a local-area network. State-of-the-art middleware research aims to push this boundary towards Internet-scale distribution, adaptive and reconfigurable middleware, and middleware for dependable and wireless systems. The challenge for software engineering research is to devise notations, techniques, methods and tools for distributed system construction that systematically build on and exploit the capabilities that middleware delivers.

    Improving Reuse of Distributed Transaction Software with Transaction-Aware Aspects

    Implementing crosscutting concerns for transactions is difficult, even using Aspect-Oriented Programming Languages (AOPLs) such as AspectJ. Many of these challenges arise because the context of a transaction-related crosscutting concern consists of loosely-coupled abstractions like dynamically-generated identifiers, timestamps, and tentative value sets of distributed resources. Current AOPLs do not provide joinpoints and pointcuts for weaving advice into high-level abstractions or contexts, like transaction contexts. Other challenges stem from the essential complexity in the nature of the data, the operations on the data, or the volume of data, while accidental complexity comes from the way the problem is being solved, even using common transaction frameworks. This dissertation describes an extension to AspectJ, called TransJ, with which developers can implement transaction-related crosscutting concerns in cohesive and loosely-coupled aspects. It also presents a preliminary experiment that provides evidence of improvement in reusability without sacrificing the performance of applications requiring essential transactions. The empirical study uses an extended quality model for transactional applications to define measurements on the transaction software systems. This quality model defines three goals: the first relates to code quality (in terms of its reusability); the second to software performance; and the third to software development efficiency. Results from the study show that TransJ can improve reusability while maintaining the performance of applications requiring transactions, for all eight areas addressed by the hypotheses: better encapsulation and separation of concerns; looser coupling, higher cohesion and less tangling; improved obliviousness; preserved software efficiency; improved extensibility; and a faster development process.
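
    TransJ's own joinpoints and pointcuts are not detailed in the abstract. As a rough point of comparison, the sketch below uses plain AspectJ (annotation style) to advise a commit() on a hypothetical TransactionManager type; it shows the kind of method-level crosscutting advice ordinary AspectJ supports, which TransJ extends with pointcuts over whole transaction contexts.

```java
// Plain-AspectJ sketch (annotation style): timing advice woven around a
// hypothetical TransactionManager.commit(). This is standard AspectJ, not
// TransJ's extended joinpoint model.
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class CommitTimingAspect {

    // Matches any commit() on a type named TransactionManager (hypothetical).
    @Around("execution(* TransactionManager.commit(..))")
    public Object timeCommit(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();  // run the original commit
        } finally {
            long elapsed = System.nanoTime() - start;
            System.out.println("commit took " + elapsed + " ns");
        }
    }
}
```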

    Revisiting Actor Programming in C++

    The actor model of computation has gained significant popularity over the last decade. Its high level of abstraction makes it appealing for concurrent applications in parallel and distributed systems. However, designing a real-world actor framework that subsumes full scalability, strong reliability, and high resource efficiency requires many conceptual and algorithmic additions to the original model. In this paper, we report on designing and building CAF, the "C++ Actor Framework". CAF aims to provide a concurrent and distributed native environment for scaling up to very large, high-performance applications, and equally well down to small constrained systems. We present the key specifications and design concepts (in particular a message-transparent architecture, type-safe message interfaces, and pattern matching facilities) that make native actors a viable approach for many robust, elastic, and highly distributed developments. We demonstrate the feasibility of CAF in three scenarios: first for elastic, upscaling environments, second for including heterogeneous hardware like GPGPUs, and third for distributed runtime systems. Extensive performance evaluations indicate ideal runtime behaviour for up to 64 cores at very low memory footprint, or in the presence of GPUs. In these tests, CAF consistently outperforms the competing actor environments Erlang, Charm++, SalsaLite, Scala, ActorFoundry, and even OpenMPI. Comment: 33 page
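
    CAF's C++ API is not reproduced here. As a language-neutral illustration of the actor model the paper builds on, the following minimal Java sketch gives each actor a private mailbox and a single processing thread, so its state is touched only by its owner and all interaction happens through asynchronous messages.

```java
// Minimal actor sketch (not CAF's API): one mailbox and one worker thread per
// actor; state is confined to the actor and communication is message passing.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

class Actor<M> {
    private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker;

    Actor(Consumer<M> behavior) {
        worker = new Thread(() -> {
            try {
                while (true) {
                    behavior.accept(mailbox.take());  // process one message at a time
                }
            } catch (InterruptedException stop) {
                // interrupted: the actor shuts down
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    void send(M message) { mailbox.add(message); }  // asynchronous, non-blocking send

    void stop() { worker.interrupt(); }
}
```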

    Concurrent High-performance Persistent Hash Table In Java

    Current trading systems must handle both high volumes of trading and large amounts of trading data. One crucial module in high-performance trading is fast storage and retrieval of large volumes of data simultaneously accessed by multiple computer traders. To speed up access, a high-performance in-memory software cache stores the dynamic working set of trades during a trading day. To utilize memory efficiently, it is beneficial to provide a single shared cache for multiple trading applications. Much of the cache access is read-only, as information is gathered before a transaction to determine its value. Hence, extremely fast lookup is essential to support quick information gathering for assessment. This thesis presents a software cache, called MapHash, that is a high-performance hash table for use in Java.
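
    MapHash's interface is not given in the abstract. The sketch below only shows the read-mostly shared-cache access pattern it targets, using java.util.concurrent.ConcurrentHashMap as a stand-in so many trader threads can look up trades without blocking each other; the Trade record and method names are hypothetical.

```java
// Illustrative stand-in for a shared in-memory trade cache: lookups are
// non-blocking reads on a ConcurrentHashMap, while the comparatively rare
// writers insert or replace entries. MapHash itself is a custom hash table;
// this only demonstrates the read-mostly usage pattern described above.
import java.util.concurrent.ConcurrentHashMap;

class TradeCache {
    // Hypothetical trade data standing in for real trading records.
    record Trade(String symbol, long quantity, double price) {}

    private final ConcurrentHashMap<String, Trade> trades = new ConcurrentHashMap<>();

    // Hot path: many reader threads gathering information before a transaction.
    Trade lookup(String tradeId) {
        return trades.get(tradeId);
    }

    // Cold path: occasional writers recording or amending a trade.
    void put(String tradeId, Trade trade) {
        trades.put(tradeId, trade);
    }
}
```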