
    Progress in large-shared projects : method for forecasting and optimizing project duration in a distributed project

    In large-shared projects, progress remains difficult to measure because realization is distributed among departments of a company or among companies across the world. The project management and operations research literature is reviewed to identify applicable techniques. Widely used tools for progress measurement and forecasting, such as Earned Value Analysis, progress plots, milestone and resource slip charts, and concurrent engineering, can be employed. This paper is based on a problem from the pharmaceutical industry in which the effectiveness of a medical treatment is examined on patients in a number of countries. The number of variables involved increases the complexity of this problem. The main objective is to analyze the effectiveness of a solution in different situations during the project such that a better project duration and a lower cost can be achieved. Our findings suggest that reallocation of patients among countries produces better results in terms of progress.
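    The abstract names Earned Value Analysis among the forecasting tools it draws on. As background, the standard EVA indices and the simple estimate-at-completion they yield can be sketched as follows (the figures are illustrative, not taken from the study):

    ```python
    # Standard Earned Value Analysis (EVA) indices; input figures are illustrative.
    # BAC: budget at completion; PV: planned value; EV: earned value; AC: actual cost.

    def eva_forecast(bac, pv, ev, ac):
        """Return (SPI, CPI, EAC), assuming the current CPI persists to completion."""
        spi = ev / pv      # Schedule Performance Index (< 1: behind schedule)
        cpi = ev / ac      # Cost Performance Index    (< 1: over budget)
        eac = bac / cpi    # Estimate At Completion
        return spi, cpi, eac

    spi, cpi, eac = eva_forecast(bac=1000.0, pv=400.0, ev=360.0, ac=450.0)
    # spi = 0.9 (behind schedule), cpi = 0.8 (over budget), eac = 1250.0
    ```

    The EAC formula above is the simplest of several common variants; it extrapolates the cost overrun observed so far across the remaining budget.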

    Improving CE with PDM

    The concept of Concurrent Engineering (CE) centers around the management of information so that the right information will be at the right place at the right time and in the right format. Product Data Management (PDM) aims to support a CE way of working in product development processes. In specific situations, however, it is hard to estimate the contribution of a particular PDM package to CE. This paper presents a method to assess the contribution to CE of a PDM package in a specific situation. The method uses the concept of information quality to identify the gap between current information quality and CE information quality requirements, and then estimates the contribution of PDM to bridging this gap. Decisions on improvement actions are supported, both to improve readiness for PDM and to improve CE. The method has been tested in a real-life situation.

    Integrated engineering environments for large complex products

    An introduction is given to the Engineering Design Centre at the University of Newcastle upon Tyne, along with a brief explanation of its main focus on large made-to-order products. Three key areas of research at the Centre, which have evolved as a result of collaboration with industrial partners from various sectors of industry, are identified as (1) decision support and optimisation, (2) design for lifecycle, and (3) design integration and co-ordination. A summary of the unique features of large made-to-order products is then presented, which includes the need for integration and co-ordination technologies. Thus, an overview of the existing integration and co-ordination technologies is presented, followed by a brief explanation of research in these areas at the Engineering Design Centre. A more detailed description is then presented regarding the co-ordination aspect of research being conducted at the Engineering Design Centre, in collaboration with the CAD Centre at the University of Strathclyde. Concurrent Engineering is acknowledged as a strategy for improving the design process; however, design co-ordination is viewed as a principal requirement for its successful implementation. That is, design co-ordination is proposed as being the key to a mechanism that is able to maximise and realise any potential opportunity for concurrency. Thus, an agent-oriented approach to co-ordination is presented, which incorporates various types of agents responsible for managing their respective activities. The co-ordinated approach, which is implemented within the Design Co-ordination System, includes features such as resource management and monitoring, dynamic scheduling, activity direction, task enactment, and information management.
An application of the Design Co-ordination System, in conjunction with a robust concept exploration tool, shows that the computational design analysis involved in evaluating many design concepts can be performed more efficiently through a co-ordinated approach.

    A system for co-ordinating concurrent engineering

    Design of large made-to-order products invariably involves design activities which are increasingly being distributed globally in order to reduce costs, gain competitive advantage and utilise external expertise and resources. Designers specialise within their domain, producing solutions to design problems using the tools and techniques with which they are familiar. They possess a relatively local perception of where their expertise and actions are consumed within the design process. This is further compounded when design activities are geographically distributed, resulting in increased disassociation between an individual designer's activities and the overall design process. The tools and techniques used by designers rarely facilitate concurrency, producing solutions within a particular discipline without using or sharing information from other disciplines, and seldom considering stages within the product's life-cycle other than conceptual, embodiment or detail [1, 2]. Conventional management and maintenance of consistency throughout the product model can subsequently become difficult to achieve, since many factors need to be considered simultaneously whilst making a change to the product model.

    Modelling iteration in engineering design

    This paper examines design iteration and its modelling in the simulation of New Product Development (NPD) processes. A framework comprising six perspectives of iteration is proposed, and it is argued that the importance of each perspective depends upon domain-specific factors. Key challenges of modelling iteration in process simulation frameworks such as the Design Structure Matrix are discussed, and we argue that no single model or framework can fully capture the iterative dynamics of an NPD process. To conclude, we propose that consideration of iteration and its representation could help identify the most appropriate modelling framework for a given process and modelling objective, thereby improving the fidelity of design process simulation models and increasing their utility.
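    As an illustration of the kind of iteration modelling the abstract discusses (not the paper's own framework), a minimal Monte Carlo simulation over a Design Structure Matrix might look like the sketch below, where `rework[i][j]` is an assumed probability that completing task `j` triggers rework of an earlier task `i`; the durations and probabilities are invented:

    ```python
    import random

    # Monte Carlo sketch of iteration driven by a Design Structure Matrix (DSM).
    # rework[i][j] with i < j is an assumed feedback-dependency probability:
    # completing task j may force rework of earlier task i.

    def simulate(durations, rework, rng, max_passes=50):
        pending = list(range(len(durations)))  # first pass: every task runs
        total = 0.0
        for _ in range(max_passes):
            if not pending:
                break
            redo = set()
            for j in pending:
                total += durations[j]
                for i in range(j):                 # feedback dependencies only
                    if rng.random() < rework[i][j]:
                        redo.add(i)
            pending = sorted(redo)                 # next pass reworks these tasks
        return total

    rng = random.Random(1)
    durations = [5.0, 3.0, 4.0]
    rework = [[0.0, 0.3, 0.1],
              [0.0, 0.0, 0.2],
              [0.0, 0.0, 0.0]]
    runs = [simulate(durations, rework, rng) for _ in range(1000)]
    mean_duration = sum(runs) / len(runs)  # exceeds the 12.0 no-iteration baseline
    ```

    Even this toy model shows the point the paper makes: the simulated duration depends strongly on how iteration is represented (here, as probabilistic feedback rework), so the choice of representation matters for model fidelity.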

    Operational design co-ordination : an agent based approach

    Operational design co-ordination has been identified as the basis for an approach to engineering design management that is more comprehensive than those that currently exist. As such, an integrated and holistic approach to operational design co-ordination has been developed that enables design to be managed in a coherent, appropriate and timely manner. Furthermore, the approach has been implemented within an agent-based software system, called the Design Co-ordination System, which has been applied to an industrial case study involving the computational design analysis of turbine blades. This application demonstrates that managing and adjusting in real time in an operationally co-ordinated manner enables the time taken to complete the turbine blade design process to be reduced.

    OpenCL Actors - Adding Data Parallelism to Actor-based Programming with CAF

    The actor model of computation has been designed for seamless support of concurrency and distribution. However, it remains unspecific about data-parallel program flows, while the available processing power of modern many-core hardware such as graphics processing units (GPUs) or coprocessors increases the relevance of data parallelism for general-purpose computation. In this work, we introduce OpenCL-enabled actors to the C++ Actor Framework (CAF). This offers a high-level interface for accessing any OpenCL device without leaving the actor paradigm. The new type of actor is integrated into the runtime environment of CAF and gives rise to transparent message passing in distributed systems on heterogeneous hardware. Following the actor logic in CAF, OpenCL kernels can be composed while encapsulated in C++ actors, and hence operate in a multi-stage fashion on data resident at the GPU. Developers are thus enabled to build complex data-parallel programs from primitives without leaving the actor paradigm or sacrificing performance. Our evaluations on commodity GPUs, an Nvidia TESLA, and an Intel PHI reveal the expected linear scaling behavior when offloading larger workloads. For sub-second duties, the efficiency of offloading was found to differ largely between devices. Moreover, our findings indicate a negligible overhead over programming with the native OpenCL API.
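    CAF's actual OpenCL binding is a C++ API and is not reproduced here. The following Python sketch only illustrates the underlying pattern the abstract describes, an actor whose behaviour is a data-parallel kernel so that stages compose by message passing; a plain function stands in for an OpenCL kernel, and all names are hypothetical:

    ```python
    import queue
    import threading

    # Pattern sketch only: an actor whose behaviour is a data-parallel "kernel",
    # so multi-stage pipelines compose by message passing between actors.

    class Collector:
        """Terminal stage that just stores incoming results."""
        def __init__(self):
            self.box = queue.Queue()
        def send(self, data):
            self.box.put(data)

    class KernelActor:
        """Applies `kernel` element-wise to each message, then forwards it."""
        def __init__(self, kernel, downstream):
            self.kernel = kernel
            self.downstream = downstream
            self.mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, data):
            self.mailbox.put(data)

        def _run(self):
            while True:
                data = self.mailbox.get()
                self.downstream.send([self.kernel(x) for x in data])

    # Compose two stages: square, then add one.
    out = Collector()
    stage2 = KernelActor(lambda x: x + 1, out)
    stage1 = KernelActor(lambda x: x * x, stage2)
    stage1.send([1, 2, 3])
    result = out.box.get(timeout=5)  # -> [2, 5, 10]
    ```

    In the paper's setting the element-wise step would run as an OpenCL kernel on a GPU, with intermediate data staying device-resident between stages; the composition-by-message-passing structure is the part this sketch captures.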

    Architecture independent environment for developing engineering software on MIMD computers

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream, Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than those of serial computers. Developing large-scale software for a variety of MIMD computers is difficult and expensive, so there is a need for tools that facilitate programming these machines. First, the issues that must be considered in developing such tools are examined. The two main areas of concern are architecture independence and data management. Architecture-independent software facilitates portability and improves the longevity and utility of the software product, providing some insurance for the time and effort invested in developing it. The management of data is a crucial aspect of solving large engineering problems and must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture-independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture-independent software; identifying and exploiting concurrency within the application program; data coherence; and engineering database and memory management.
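    As a toy illustration of the architecture-independence idea (the interface and names below are invented for this sketch, not taken from the environment the paper describes), an application can be written against an abstract parallel-map operation whose backend is swapped without touching the application code:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Toy sketch of architecture independence: the application calls a
    # parallel-map interface; the backend (serial, or a thread pool standing
    # in for MIMD worker processors) is chosen without changing application code.

    def par_map_serial(fn, chunks):
        return [fn(c) for c in chunks]

    def par_map_parallel(fn, chunks):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(fn, chunks))

    def subdomain_work(chunk):
        # stand-in for a per-subdomain engineering computation
        return sum(x * x for x in chunk)

    chunks = [[1, 2], [3, 4], [5, 6]]
    serial = par_map_serial(subdomain_work, chunks)
    parallel = par_map_parallel(subdomain_work, chunks)
    # both backends yield [5, 25, 61]; the application code is unchanged
    ```

    The point mirrored here is the paper's: isolating the machine-dependent part behind a stable interface protects the investment in application code as hardware organizations change.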

    New Product Development: Impact of Project Characteristics and Development Practices on Performance

    Concurrent product development process and integrated product development teams have emerged as the two dominant new product development (NPD) “best practices” in the literature. Yet empirical evidence of their impact on product development success…