
    The future of enterprise groupware applications

    This paper provides a review of groupware technology and products. The purpose of this review is to investigate the appropriateness of current groupware technology as the basis for future enterprise systems and to evaluate its role in realising the currently emerging Virtual Enterprise model of business organisation. It also identifies the ways in which current technological developments will transform groupware technology and drive the development of the enterprise systems of the future.

    A Collaborative Visualization Framework Using JINI™ Technology

    It is difficult to achieve mutual understanding of complex information between individuals who are geographically separated. Two techniques commonly used to deal with this difficulty are collaboration and information visualization. This thesis develops a generic, flexible framework that supports both collaboration and information visualization. It introduces the Collaborative Visualization Environment (COVE) framework, which simplifies the development of real-time synchronous multi-user applications by decoupling the elements of collaboration from the application. This allows developers to focus on building applications and leave the difficulties of collaboration (i.e., concurrency controls, user awareness, session management, etc.) to the framework. The framework uses an object sharing approach to share information and views between participants in a collaborative session. This approach takes advantage of several Java technologies (i.e., JavaBeans™, Jini™, and JavaSpaces™). JavaBeans™ establish a well-known standard for applications to operate within the framework. Jini™ services provide framework stability and enable code sharing across the network. Objects are shared between remote clients through the JavaSpaces™ service.
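    The abstract stops short of code. As a minimal sketch of the object-sharing approach it describes, the Java fragment below defines a shared view entry and exchanges it through a JavaSpace; the SharedView class and the already-looked-up space proxy are assumptions for illustration, not COVE's actual API.

        import net.jini.core.entry.Entry;
        import net.jini.core.lease.Lease;
        import net.jini.space.JavaSpace;

        // Hypothetical shared state; JavaSpaces entries are public classes
        // with public object-typed fields and a no-argument constructor.
        public class SharedView implements Entry {
            public String sessionId;
            public Integer zoomLevel;
            public SharedView() {}
            public SharedView(String sessionId, Integer zoomLevel) {
                this.sessionId = sessionId;
                this.zoomLevel = zoomLevel;
            }
        }

        // One participant publishes an updated view...
        //   space.write(new SharedView("demo", 3), null, Lease.FOREVER);
        // ...and another blocks until a matching view appears:
        //   SharedView v = (SharedView) space.take(new SharedView("demo", null),
        //                                          null, Long.MAX_VALUE);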

    Achieving High Performance and High Productivity in Next Generational Parallel Programming Languages

    Processor design has turned toward parallel and heterogeneous cores to achieve performance and energy efficiency. Developers find high-level languages attractive because they use abstraction to offer productivity and portability over hardware complexities. To achieve performance, some modern implementations of high-level languages use work-stealing scheduling for load balancing of dynamically created tasks. Work-stealing is a promising approach for effectively exploiting software parallelism on parallel hardware. A programmer who uses work-stealing explicitly identifies potential parallelism, and the runtime then schedules the work, keeping otherwise idle hardware busy while relieving overloaded hardware of its burden. However, work-stealing comes with substantial overheads. These overheads arise as a necessary side effect of the implementation and hamper parallel performance. In addition to runtime-imposed overheads, there is a substantial cognitive load associated with ensuring that parallel code is data-race free. This dissertation explores the overheads associated with achieving high-performance parallelism in modern high-level languages. My thesis is that, by exploiting existing underlying mechanisms of managed runtimes and by extending existing language design, high-level languages will be able to deliver productivity and parallel performance at the levels necessary for widespread uptake. The key contributions of my thesis are: 1) a detailed analysis of the key sources of overhead associated with a work-stealing runtime, namely sequential and dynamic overheads; 2) novel techniques that reduce these overheads using rich features of managed runtimes such as the yieldpoint mechanism, on-stack replacement, dynamic code-patching, exception handling support, and return barriers; 3) a comprehensive analysis of the resulting benefits, demonstrating that work-stealing overheads can be significantly reduced, leading to substantial performance improvements; and 4) a small set of language extensions that achieve both high performance and high productivity with minimal programmer effort. A managed runtime forms the backbone of any modern implementation of a high-level language. Managed runtimes enjoy the benefits of a long history of research, and their implementations are highly optimized. My thesis demonstrates that combining these highly optimized features with the expressiveness of high-level languages gives further hope for achieving high performance and high productivity on modern parallel hardware.
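    The dissertation's own runtime is not excerpted here; as a small illustration of the work-stealing model it analyzes, the JDK ForkJoinPool sketch below shows where the sequential overhead comes from: every fork() allocates and enqueues a stealable task object.

        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.RecursiveTask;

        // Each worker owns a deque of tasks and steals from other workers
        // when idle. The per-fork task allocation is exactly the kind of
        // sequential overhead the dissertation measures and reduces.
        class Sum extends RecursiveTask<Long> {
            private final long[] a; private final int lo, hi;
            Sum(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }
            @Override protected Long compute() {
                if (hi - lo <= 1_000) {             // sequential cutoff
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += a[i];
                    return s;
                }
                int mid = (lo + hi) >>> 1;
                Sum left = new Sum(a, lo, mid);
                left.fork();                         // make the left half stealable
                long right = new Sum(a, mid, hi).compute();
                return right + left.join();
            }
        }
        // Usage: long total = new ForkJoinPool().invoke(new Sum(data, 0, data.length));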

    A Flexible Framework for Collaborative Visualization Applications Using Java Spaces

    The complexity of modern tasks is rising along with the level of technology. Two techniques commonly used to deal with complexity are collaboration and information visualization. Recently, computer networks have arisen as a powerful means of collaboration, and many new technologies are being developed to better utilize them. Among the newer, more promising of these technologies is Sun Microsystems' JavaSpaces™, a high-level network programming API. This thesis describes a tool for developing collaborative visualization software using JavaSpaces: an application framework and accompanying toolkit. In addition to a detailed description of the framework, the thesis also describes an application implemented using the framework, discusses the benefits of development under the framework, evaluates the performance of JavaSpaces in the context of the framework, and addresses the issue of network bandwidth limitations, which are a concern when developing visualizations that deal with large data sets.
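    As a hedged sketch of the kind of performance question the thesis evaluates, the fragment below times one write/take round trip through a JavaSpace, reusing the hypothetical SharedView entry from the earlier sketch; the JavaSpace proxy is again assumed to have been obtained through lookup.

        import net.jini.core.lease.Lease;
        import net.jini.space.JavaSpace;

        class SpaceBench {
            // Measures one write/take round trip: a crude probe of the
            // network cost that limits large-data-set visualizations.
            static long roundTripMicros(JavaSpace space) throws Exception {
                long t0 = System.nanoTime();
                space.write(new SharedView("bench", 1), null, Lease.FOREVER);
                space.take(new SharedView("bench", null), null, Long.MAX_VALUE);
                return (System.nanoTime() - t0) / 1_000;
            }
        }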

    Scheduling Macro-DataFlow Programs on Task-Parallel Runtime Systems

    Though multicore systems are ubiquitous, parallel programming models for these systems are generally not accessible to a wide programmer community. The macro-dataflow model is an attractive stepping stone to implicit parallelism for domain experts who are not the target audience of explicit parallel programming models. We use Intel's Concurrent Collections (CnC) programming model as a concrete exemplar of the macro-dataflow model in this work. CnC is a high-level coordination language that can be implemented on top of lower-level task-parallel frameworks. In this thesis, we study an implementation of CnC based on Habanero-Java as the underlying task-parallel runtime system. A unique feature of CnC, first-class decoupling of data and control dependences, allows us to experiment with schedulers that take these data and control dependences into account for better scheduling decisions. Our observations led to the proposal and implementation of a new task-parallel synchronization construct for Habanero-Java, namely Data-Driven Futures. We obtained two kinds of experimental results from our implementation. First, we compare the effectiveness of task scheduling policies for CnC programs. Second, we show that Data-Driven Futures not only reduce execution time but also shrink the memory footprint. In summary, this thesis shows that a macro-dataflow programming model can deliver productivity and performance on modern multicore processors.
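    Habanero-Java's actual Data-Driven Future API is not reproduced in the abstract; as a rough standard-Java analogue of the construct's behaviour, the sketch below registers a consumer on a data dependence so it is scheduled only once its inputs arrive, never blocking a thread while it waits.

        import java.util.concurrent.CompletableFuture;

        // Producers "put" values; the consumer runs automatically once
        // both inputs are available, mirroring a data-driven future.
        class DdfAnalogue {
            public static void main(String[] args) {
                CompletableFuture<Integer> x = new CompletableFuture<>();
                CompletableFuture<Integer> y = new CompletableFuture<>();
                x.thenCombine(y, Integer::sum)
                 .thenAccept(s -> System.out.println("x + y = " + s));
                x.complete(2);   // producer task 1
                y.complete(3);   // producer task 2
            }
        }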

    A reconfigurable component-based problem solving environment

    ©2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE.
    Problem solving environments are an attractive approach to the integration of calculation and management tools for various scientific and engineering applications. These applications often require high performance computing components in order to be computationally feasible. It is therefore a challenge to construct integration technology, suitable for problem solving environments, that allows both flexibility and the embedding of parallel and high performance computing systems. Our DISCWorld system is designed to meet these needs and provides Java-based middleware to integrate component applications across wide-area networks. Key features of our design are the abilities to: access remotely stored data; compose complex processing requests either graphically or through a scripting language; execute components on heterogeneous and remote platforms; and reconfigure task sub-graphs to run across multiple servers. Operators in task graphs can be slow (but portable) “pure Java” implementations or wrappers to fast (platform-specific) supercomputer implementations.
    K. Hawick, H. James, P. Coddington
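    DISCWorld's real interfaces are not shown in the abstract; the hypothetical Java shapes below only illustrate the idea that a portable “pure Java” operator and a wrapper around a platform-specific implementation can occupy the same node type in a task graph.

        import java.util.function.Function;

        // Hypothetical: one operator interface, two implementations the
        // scheduler could choose between when placing a task-graph node.
        interface Operator<I, O> extends Function<I, O> {}

        class PureJavaMean implements Operator<double[], Double> {
            public Double apply(double[] a) {        // slow but portable
                double s = 0;
                for (double v : a) s += v;
                return s / a.length;
            }
        }

        class NativeMean implements Operator<double[], Double> {
            public Double apply(double[] a) {        // fast, platform specific
                return nativeMean(a);                // assumed JNI entry point
            }
            private static native double nativeMean(double[] a);
        }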

    Efficient and Reasonable Object-Oriented Concurrency

    Making threaded programs safe and easy to reason about is one of the chief difficulties in modern programming. This work provides an efficient execution model for SCOOP, a concurrency approach that provides not only data race freedom but also pre/postcondition reasoning guarantees between threads. The extensions we propose influence the underlying semantics to increase the amount of concurrent execution that is possible, exclude certain classes of deadlocks, and enable greater performance. These extensions are used as the basis of an efficient runtime and an optimization pass that together improve performance by 15x over a baseline implementation. This new implementation of SCOOP is also 2x faster than other well-known safe concurrent languages. The measurements are based on both coordination-intensive and data-manipulation-intensive benchmarks designed to offer a mixture of workloads.
    Comment: Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE '15). ACM, 2015
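    SCOOP is an Eiffel mechanism, and the paper's runtime is not excerpted here; as a loose Java analogue of its central guarantee, the sketch below confines an object to a single handler thread (its "processor"), so every access is serialized and data-race free by construction.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        class Counter {
            // The object's handler: state is touched only on this thread,
            // so increment() and report() can never race with each other.
            private final ExecutorService handler = Executors.newSingleThreadExecutor();
            private int value;

            void increment() { handler.execute(() -> value++); }
            void report()    { handler.execute(() -> System.out.println(value)); }
        }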

    Coupling Memory and Computation for Locality Management

    We articulate the need for managing (data) locality automatically rather than leaving it to the programmer, especially in parallel programming systems. To this end, we propose techniques for tightly coupling the computation (including the thread scheduler) and the memory manager so that data and computation can be positioned closely in hardware. Such tight coupling of computation and memory management is in sharp contrast with the prevailing practice of considering each in isolation. For example, memory-management techniques usually abstract the computation as an unknown "mutator", which is treated as a "black box". As an example of the approach, in this paper we consider a specific class of parallel computations: nested-parallel computations. Such computations dynamically create a nesting of parallel tasks. We propose a method for organizing memory as a tree of heaps reflecting the structure of the nesting. More specifically, our approach creates a heap for a task if it is separately scheduled on a processor. This allows us to couple garbage collection with the structure of the computation and the way in which it is dynamically scheduled on the processors. This coupling enables taking advantage of locality in the program by mapping it to the locality of the hardware. For example, for improved locality, a heap can be garbage collected immediately after its task finishes, when the heap contents are likely still in cache.
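    No implementation appears in the abstract; the hypothetical sketch below only captures the heap-tree shape described: one heap per separately scheduled task, mirroring the task nesting, with survivors promoted to the parent when the task finishes.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical illustration, not the paper's data structure.
        class TaskHeap {
            final TaskHeap parent;
            final List<TaskHeap> children = new ArrayList<>();
            final List<Object> objects = new ArrayList<>();  // the task's allocations

            TaskHeap(TaskHeap parent) {
                this.parent = parent;
                if (parent != null) parent.children.add(this);
            }

            // Collect this heap as soon as its task finishes, while its
            // contents are likely still in cache; survivors move up.
            void finish(List<Object> survivors) {
                if (parent != null) {
                    parent.objects.addAll(survivors);
                    parent.children.remove(this);
                }
            }
        }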