
    Concurrent collaboration in research and development

    Integration is the essence of current research and development (R&D) activity in many organizations. Integration can be established in various ways, depending on the type, size and intricacy of organizational functions and products. R&D has become an indispensable function in most manufacturing companies, which must develop their own product niches to survive in today's highly competitive market environment. R&D functions are fundamental drivers of value creation in technology-based enterprises. To create and maintain a vibrant R&D environment, organizations, individually or collectively, need to incorporate virtual R&D teams. A virtual R&D team can introduce a new product with less lead time than conventional R&D working, so increasing the likelihood of successful R&D is a critical issue for enterprises. This paper examines current approaches to collaboration in R&D from the perspective of their impact on virtual R&D teams in enterprises, and compares the findings with other concepts of concurrent collaboration. Reviewing the literature and relevant theories, the paper first presents the definition and characteristics of virtual R&D teams. A comparison of different types of virtual R&D teams, along with the strengths and limitations of the preceding studies in this area, is also presented. It is observed that most of the research activities encourage and support virtual R&D teams applicable to enterprises. The distinctive benefits of establishing virtual R&D teams are enumerated, and areas that demand future attention are indicated.

    Design and Implementation of a Distributed Middleware for Parallel Execution of Legacy Enterprise Applications

    A typical enterprise uses a local area network of computers to perform its business. During off-working hours, the computational capacity of these networked computers is underused or unused. To utilize this capacity, an application normally has to be recoded to exploit the concurrency inherent in its computation, which is clearly not possible for legacy applications without source code. This thesis presents the design and implementation of a distributed middleware that can automatically execute a legacy application on multiple networked computers by parallelizing it. The middleware runs multiple copies of the binary executable in parallel on different hosts in the network. It wraps the binary executable of the legacy application in order to capture kernel-level data-access system calls and perform them across multiple computers in a safe and conflict-free manner. The middleware also incorporates a dynamic scheduling technique that executes the target application in minimum time by scavenging the available CPU cycles of the hosts in the network; the scheduler allows the CPU availability of the hosts to change over time and reschedules the replicas performing the computation to minimize execution time. A prototype implementation of this middleware has been developed as a proof of concept of the design, and has been evaluated with a few typical case studies; the test results confirm that the middleware works as expected.
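    The availability-aware scheduling idea can be illustrated with a short sketch. The snippet below is not the thesis's actual algorithm; it is a minimal greedy heuristic, assuming each host's speed is approximated by its currently idle CPU fraction, that assigns work chunks so the estimated finish times stay balanced.

```python
import heapq

def assign_chunks(chunk_costs, idle_fraction):
    """Greedy sketch of availability-weighted scheduling: each chunk goes to
    the host expected to finish it earliest, where a host's speed is taken to
    be its currently idle CPU fraction (a simplifying assumption)."""
    # Min-heap of (estimated busy time so far, host name).
    heap = [(0.0, host) for host in idle_fraction]
    heapq.heapify(heap)
    plan = {host: [] for host in idle_fraction}
    for cost in chunk_costs:
        busy, host = heapq.heappop(heap)
        plan[host].append(cost)
        busy += cost / idle_fraction[host]   # busier hosts take longer per chunk
        heapq.heappush(heap, (busy, host))
    return plan

# Three hosts with different amounts of scavengeable CPU, ten equal chunks.
print(assign_chunks([1.0] * 10, {"hostA": 0.9, "hostB": 0.5, "hostC": 0.2}))
```

    In a real deployment the idle fractions would be re-probed periodically and the replicas rescheduled when they change, which is the dynamic behaviour the thesis describes.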

    RELEASE: A High-level Paradigm for Reliable Large-scale Server Software

    Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project and describes progress in its first six months. The project aim is to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on massively parallel machines. Erlang currently has inherently scalable computation and reliability models, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene.
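    The concurrency model the project builds on consists of isolated processes that share nothing and communicate only by message passing. The sketch below is a loose Python illustration of that style (not SD Erlang, and not the RELEASE tooling), using operating-system processes and queues in place of Erlang's lightweight processes.

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue):
    """An isolated worker that communicates only by message passing,
    loosely mirroring actor-style processes."""
    for msg in iter(inbox.get, None):      # None is the stop sentinel
        outbox.put(msg * msg)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(20):
        inbox.put(n)
    results = [outbox.get() for _ in range(20)]
    for _ in procs:
        inbox.put(None)                    # tell every worker to stop
    for p in procs:
        p.join()
    print(sorted(results))
```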

    A front-end system to support cloud-based manufacturing of customised products

    In today’s global market, customized products are an important means of addressing diverse customer demand and achieving a unique competitive advantage. Key enablers of this approach are existing product configuration systems and supporting IT-based manufacturing systems. As a proposed advancement, it is considered that a front-end system with a deeper level of integration into a cloud-based manufacturing infrastructure can better support the specification and on-demand manufacture of customized products. In this paper, a new paradigm of a Manufacturing-as-a-Service (MaaS) environment is introduced and current research challenges in the configuration of customizable products are highlighted. Furthermore, the latest development of the front-end system is reported, with a view towards further work in the research.
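    As a rough illustration of the front-end's role, the sketch below validates a customized product configuration against an option catalogue and serialises it into an order payload. The catalogue, field names and payload shape are hypothetical, not the paper's actual data model.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical option catalogue; a real front-end would load this from the
# product-configuration system the paper refers to.
CATALOGUE = {"material": {"ABS", "PLA", "aluminium"}, "colour": {"red", "black"}}

@dataclass
class ProductConfiguration:
    product_id: str
    material: str
    colour: str
    quantity: int

    def validate(self):
        if self.material not in CATALOGUE["material"]:
            raise ValueError(f"unsupported material: {self.material}")
        if self.colour not in CATALOGUE["colour"]:
            raise ValueError(f"unsupported colour: {self.colour}")
        if self.quantity < 1:
            raise ValueError("quantity must be positive")

def to_manufacturing_order(cfg: ProductConfiguration) -> str:
    """Serialise a validated configuration into an order payload that a
    cloud-based manufacturing service could accept."""
    cfg.validate()
    return json.dumps({"order": asdict(cfg)}, indent=2)

print(to_manufacturing_order(ProductConfiguration("bracket-42", "PLA", "black", 10)))
```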

    A new approach to collaborative frameworks using shared objects

    Multi-user graphical applications currently require the creation of a set of interface objects to maintain each participating display. The concept of shared objects allows a single object instance to be used in multiple contexts concurrently. This provides a novel way of reducing collaborative overheads by requiring the maintenance of only a single set of interface objects. The paper presents the concept of a shared-object collaborative framework and illustrates how the concept can be incorporated into an existing object-oriented toolkit.
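    The shared-object idea can be sketched as a single model instance that several display contexts observe, so that only one set of interface state needs to be maintained. The class and method names below are illustrative, not the toolkit's actual API.

```python
class SharedObject:
    """A single instance whose state is rendered concurrently in several
    display contexts; each display registers as an observer instead of
    keeping its own copy of the interface object."""
    def __init__(self, value):
        self._value = value
        self._displays = []

    def attach(self, display):
        self._displays.append(display)
        display.render(self._value)

    def set_value(self, value):
        self._value = value
        for display in self._displays:   # every participant sees the same object
            display.render(value)

class Display:
    def __init__(self, name):
        self.name = name
    def render(self, value):
        print(f"[{self.name}] showing {value}")

shared = SharedObject("draft 1")
shared.attach(Display("alice"))
shared.attach(Display("bob"))
shared.set_value("draft 2")   # one update refreshes both participants' views
```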

    Continuous client-side query evaluation over dynamic linked data

    Existing solutions to query dynamic Linked Data sources extend the SPARQL language and require continuous server processing for each query. Traditional SPARQL endpoints already accept highly expressive queries, so extending these endpoints for time-sensitive queries increases the server cost even further. To make continuous querying over dynamic Linked Data more affordable, we extend the low-cost Triple Pattern Fragments (TPF) interface with support for time-sensitive queries. In this paper, we introduce the TPF Query Streamer, which allows clients to evaluate SPARQL queries with continuously updating results. Our experiments indicate that this extension significantly lowers the server complexity, at the expense of an increase in the execution time per query. We show that by moving the complexity of continuously evaluating queries over dynamic Linked Data to the clients, and thus increasing bandwidth usage, the cost at the server side is significantly reduced. Our results show that this solution makes real-time querying more scalable for a large number of concurrent clients when compared to the alternatives.
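    The client-side evaluation loop can be sketched roughly as follows: the client repeatedly fetches a triple pattern fragment, re-evaluates its (here, single-pattern) query locally, and emits results only when the source reports fresher data. The in-memory fragment source and the versioning scheme below are stand-ins for a live TPF endpoint, not the TPF Query Streamer's actual interface.

```python
import time

class ToyFragmentSource:
    """In-memory stand-in for a Triple Pattern Fragments endpoint: returns the
    triples matching one pattern plus a version number indicating freshness."""
    def __init__(self):
        self.triples = [("ex:sensor1", "ex:value", '"21"')]
        self.version = 0

    def fragment(self, s=None, p=None, o=None):
        matches = [t for t in self.triples
                   if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]
        return matches, self.version

    def update(self, triples):
        self.triples = triples
        self.version += 1            # a dynamic source: data changes over time

def poll_query(source, rounds=3, interval=0.1):
    """Continuously re-evaluate a single-pattern query on the client side,
    printing results only when the fragment version has changed."""
    last_version = None
    for _ in range(rounds):
        results, version = source.fragment(p="ex:value")
        if version != last_version:
            print(f"version {version}:", results)
            last_version = version
        time.sleep(interval)

src = ToyFragmentSource()
poll_query(src, rounds=1)
src.update([("ex:sensor1", "ex:value", '"22"')])
poll_query(src, rounds=1)
```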

    DiPerF: an automated DIstributed PERformance testing Framework

    We present DiPerF, a distributed performance testing framework aimed at simplifying and automating service performance evaluation. DiPerF coordinates a pool of machines that test a target service, collects and aggregates performance metrics, and generates performance statistics. The aggregate data collected provide information on service throughput, on service "fairness" when serving multiple clients concurrently, and on the impact of network latency on service performance. Furthermore, using this data, it is possible to build predictive models that estimate service performance given the service load. We have tested DiPerF on 100+ machines on two testbeds, Grid3 and PlanetLab, and explored the performance of job submission services (pre-WS GRAM and WS GRAM) included with Globus Toolkit 3.2.
    Comment: 8 pages, 8 figures, will appear in IEEE/ACM Grid2004, November 200
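    A simplified flavour of the metric aggregation can be sketched as below: per-client response-time samples are combined into throughput, mean latency, and a crude fairness indicator. The field names, and the assumption that each client issues requests back-to-back, are ours rather than DiPerF's actual implementation.

```python
from statistics import mean, pstdev

def aggregate(client_samples):
    """Combine per-client response times (seconds) into summary statistics:
    overall throughput, mean latency, and the spread of per-client mean
    latencies as a rough fairness indicator."""
    all_samples = [t for samples in client_samples.values() for t in samples]
    per_client_mean = [mean(samples) for samples in client_samples.values()]
    # Assume each client issues requests back-to-back, so its rate is
    # n / total time; the rates of concurrent clients add up.
    throughput = sum(len(s) / sum(s) for s in client_samples.values())
    return {
        "requests": len(all_samples),
        "mean_latency_s": round(mean(all_samples), 4),
        "throughput_rps": round(throughput, 2),
        "fairness_spread_s": round(pstdev(per_client_mean), 4),
    }

print(aggregate({"client1": [0.12, 0.15, 0.11], "client2": [0.30, 0.28, 0.33]}))
```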