
    From critical success factors to critical success processes

    After myriad studies into the main causes of project failure, almost every project manager can list the main factors that distinguish project failure from project success. These factors are usually called Critical Success Factors (CSF). However, although CSF are well known, the rate of failed projects remains very high. This may be because current CSF are too general and do not contain specific enough know-how to support project managers' decision-making. This paper analyses the impact of 16 specific planning processes on project success and identifies the Critical Success Processes (CSP) to which project success is most vulnerable. Results are based on a field study involving 282 project managers. It was found that the most critical planning processes, those with the greatest impact on project success, are "definition of activities to be performed in the project", "schedule development", "organizational planning", "staff acquisition", "communications planning" and "developing a project plan". It was also found that project managers usually do not divide their time effectively among the different processes according to their influence on project success.

    VR-PMS: a new approach for performance measurement and management of industrial systems

    A new performance measurement and management framework based on value and risk is proposed. The framework is applied to the a priori performance evaluation of manufacturing processes and to deciding among their alternatives. To this end, it consistently integrates concepts of objectives, activity, and risk in a single framework comprising a conceptual value/risk model, and it conceptualises value- and risk-based performance management in a process context. In addition, a methodological framework is developed to provide guidelines for decision-makers or performance evaluators of the processes. To facilitate the performance measurement and management process, this latter framework is organized in four phases: context establishment, performance modelling, performance assessment, and decision-making. Each phase is instrumented with state-of-the-art quantitative analysis tools and methods. For process design and evaluation, the deliverable of the value- and risk-based performance measurement and management system (VR-PMS) is a set of ranked solutions (i.e. alternative business processes) evaluated against the developed value and risk indicators. The proposed VR-PMS is illustrated with a case study from discrete parts manufacturing but is applicable to a wide range of processes and systems.
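    A minimal sketch of the kind of ranking such a framework produces: alternative processes scored against aggregated value and risk indicators. The indicator names, weights, and scores below are illustrative assumptions, not the VR-PMS model itself.

```python
# Sketch: rank alternative processes by aggregated value minus aggregated risk.
# All indicator names, weights, and scores are hypothetical placeholders.

alternatives = {
    "process_A": {"value": {"throughput": 0.8, "quality": 0.7},
                  "risk": {"delay": 0.3, "defect": 0.2}},
    "process_B": {"value": {"throughput": 0.6, "quality": 0.9},
                  "risk": {"delay": 0.1, "defect": 0.4}},
}

value_weights = {"throughput": 0.5, "quality": 0.5}  # assumed equal weighting
risk_weights = {"delay": 0.5, "defect": 0.5}

def net_score(alt):
    """Weighted value minus weighted risk gives a simple net-benefit score."""
    value = sum(value_weights[k] * v for k, v in alt["value"].items())
    risk = sum(risk_weights[k] * v for k, v in alt["risk"].items())
    return value - risk

ranked = sorted(alternatives, key=lambda name: net_score(alternatives[name]), reverse=True)
for name in ranked:
    print(f"{name}: net score {net_score(alternatives[name]):.2f}")
```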

    ESTIMATING THE IMPACT OF BUILDING INFORMATION MODELING (BIM) UTILIZATION ON BUILDING PROJECT PERFORMANCE

    Many benefits of utilizing Building Information Modeling (BIM) technology have been recognized and reported in the Architectural, Engineering and Construction (AEC) industry literature. However, the construction industry still hesitates to fully adopt BIM. As some researchers suggest, the root cause may be a lack of understanding of whether and how BIM improves project outcomes. This research aims to shed some light on this matter by studying the impact of BIM utilization on building project performance. It follows a model-based approach rather than statistically analyzing project outcomes with and without BIM. The construction project supply chain is modeled at the design and construction activity level to represent project behavior in terms of cost over time. Because traditional project management tools and statistical methods cannot capture the dynamic nature of projects, such as feedback, time delays and non-linear relationships, this research uses system dynamics methodology to model the project supply chain. The supply chain model is calibrated with two sets of projects: with BIM and without BIM. The two calibrated models, Non-BIM and BIM-utilized, are used to estimate the outcomes of a hypothetical set of projects, and the outcomes are compared in terms of project performance indexes to analyze the impact of BIM on project performance. Since relatively few projects that utilized BIM were found, this research employs an expert elicitation (EE) technique to capture the required knowledge from industry and estimate the parameters of the BIM-utilized model. The EE is used to build a causal model capturing the impact of BIM utilization on the Non-BIM project model parameters in the absence of sufficient BIM-utilized project data.
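    To illustrate the system dynamics idea mentioned above, here is a toy stock-and-flow sketch of a project with a rework feedback loop. The structure (error generation, delayed rework discovery) is a generic SD pattern; the parameter values and the assumed effect of BIM on the error rate are illustrative, not the calibrated Non-BIM / BIM-utilized models from the study.

```python
# Toy system dynamics model: work flows from a backlog to completion, a fraction
# of it is flawed and returns to the backlog after a discovery delay.
# Parameter values are illustrative assumptions only.

def simulate(error_rate, productivity=10.0, scope=1000.0, dt=1.0, horizon=2000):
    work_to_do, work_done, undiscovered_rework = scope, 0.0, 0.0
    cost, t = 0.0, 0.0
    while work_to_do + undiscovered_rework > 1.0 and t < horizon:
        progress = min(productivity * dt, work_to_do)
        flawed = progress * error_rate                 # flawed work becomes hidden rework
        work_to_do -= progress
        work_done += progress - flawed
        undiscovered_rework += flawed
        discovered = undiscovered_rework * 0.1 * dt    # rework discovery delay
        undiscovered_rework -= discovered
        work_to_do += discovered                       # discovered rework re-enters backlog
        cost += progress                               # unit cost per task attempted
        t += dt
    return t, cost

# Assume (illustratively) that BIM lowers the design error rate.
for label, err in [("Non-BIM", 0.20), ("BIM-utilized", 0.10)]:
    duration, cost = simulate(err)
    print(f"{label}: duration {duration:.0f}, cost {cost:.0f}")
```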

    Comparisons of some large scientific computers

    In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer with a sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; the CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.

    The Empirical Reality of IT Project Cost Overruns: Discovering A Power-Law Distribution

    If managers assume a normal or near-normal distribution of Information Technology (IT) project cost overruns, as is common, and cost overruns can be shown to follow a power-law distribution, managers may be unwittingly exposing their organizations to extreme risk by severely underestimating the probability of large cost overruns. In this research, we collect and analyze a large sample of 5,392 IT projects to empirically examine the probability distribution of IT project cost overruns. Further, we propose and examine a mechanism that can explain such a distribution. Our results reveal that IT projects are far riskier in terms of cost than normally assumed by decision makers and scholars. Specifically, we found that IT project cost overruns follow a power-law distribution in which there are a large number of projects with relatively small overruns and a fat tail that includes a smaller number of projects with extreme overruns. A possible generative mechanism for the identified power-law distribution is found in interdependencies among technological components in IT systems. We propose and demonstrate, through computer simulation, that a problem in a single technological component can lead to chain reactions in which other interdependent components are affected, causing substantial overruns. What the power law tells us is that extreme IT project cost overruns will occur and that their prevalence will be grossly underestimated if managers assume that overruns follow a normal or near-normal distribution. This underscores the importance of realistically assessing and mitigating the cost risk of new IT projects up front.
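    A minimal sketch of the kind of interdependency cascade described above: a problem in one component propagates through a random dependency graph, and the number of affected components drives the overrun. The network size, link probability, propagation chance, and cost per affected component are illustrative assumptions, not the authors' calibrated simulation; near-critical propagation of this kind tends to produce heavy-tailed cascade sizes.

```python
# Sketch: propagate a single component failure through a random dependency
# graph and measure the resulting "overrun". All parameters are hypothetical.
import random

def cascade_overrun(n_components=200, dependency_prob=0.02, cost_per_hit=0.01):
    # edges[i] lists the components that depend on component i
    edges = {i: [j for j in range(n_components)
                 if j != i and random.random() < dependency_prob]
             for i in range(n_components)}
    seed = random.randrange(n_components)
    affected, frontier = {seed}, [seed]
    while frontier:
        current = frontier.pop()
        for dependent in edges[current]:
            if dependent not in affected and random.random() < 0.25:  # propagation chance
                affected.add(dependent)
                frontier.append(dependent)
    return len(affected) * cost_per_hit   # overrun as a fraction of budget

random.seed(0)
overruns = sorted(cascade_overrun() for _ in range(2000))
print("median overrun:", overruns[len(overruns) // 2])
print("95th percentile:", overruns[int(0.95 * len(overruns))])
print("max overrun:", overruns[-1])
```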

    METHODOLOGY FOR MODELING COST AND SCHEDULE RISK ASSOCIATED WITH RESOURCE DECISIONS INVOLVING THE U.S. ARMY'S MODERNIZATION EFFORTS FOR 2035

    Prioritization decisions using the Army Modernization and Analysis (AMA)-developed Trade-Space Decision Exploration System (TRADES) do not address programmatic variance related to cost and schedule growth. This study offers an improved methodology for modeling cost risk by employing sound cost estimation principles, distribution fitting, Monte Carlo simulation, and cost/benefit analysis to assist strategic decision makers and the acquisitions community. To that end, the approach follows a five-step methodology that (1) collects and screens cost data from the Cost Assessment Database Enterprise (CADE), (2) determines normalized cost growth factors, (3) identifies and constructs the appropriate distributions for modeling, (4) simulates cost variance across the entire program portfolio, and (5) recommends the contingency cash reserve associated with a decision maker's confidence level. The result is a credible, repeatable, and effective cost estimating methodology that promotes commodity-based models for predicting cost growth and measuring risk. Approved for public release. Distribution is unlimited.
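    A hedged sketch of steps (3) to (5) of such a methodology: assume a cost growth factor distribution, run a Monte Carlo simulation over a program portfolio, and read the contingency reserve at a chosen confidence level. The baseline costs and lognormal parameters are illustrative assumptions, not CADE-derived values from the study.

```python
# Sketch: Monte Carlo contingency reserve at a chosen confidence level.
# Baseline costs and distribution parameters are hypothetical.
import random

baseline_costs = {"program_A": 120.0, "program_B": 85.0, "program_C": 240.0}  # $M, assumed

def simulate_portfolio(n_trials=10_000, mu=0.05, sigma=0.25):
    """Draw a lognormal cost growth factor per program per trial; total the portfolio."""
    totals = []
    for _ in range(n_trials):
        total = sum(cost * random.lognormvariate(mu, sigma)
                    for cost in baseline_costs.values())
        totals.append(total)
    return sorted(totals)

random.seed(1)
totals = simulate_portfolio()
baseline = sum(baseline_costs.values())
confidence = 0.80  # decision maker's chosen confidence level
cost_at_confidence = totals[int(confidence * len(totals)) - 1]
print(f"baseline: {baseline:.1f}  cost at 80% confidence: {cost_at_confidence:.1f}")
print(f"recommended contingency reserve: {cost_at_confidence - baseline:.1f}")
```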

    Innovation Activities Abroad and the Effects of Liability of Foreignness: Where it Hurts

    The innovation activities of foreign subsidiaries have been identified as an important source of competitive advantage for multinational corporations. The success of these engagements depends heavily on tapping host country pools of localized expertise. To achieve this, foreign subsidiaries have to overcome cultural and social barriers (liability of foreignness). We derive potential stumbling blocks in the innovation process theoretically and argue that these materialize as neglected projects, cancellations or budget overruns. We test these hypotheses empirically for more than 1,000 firms from various sectors with innovation activities in Germany. We find that foreign-controlled firms are not challenged by liability of foreignness at the project mobilization stage. The lack of local embeddedness becomes more binding as projects have to be prioritized and managed, which manifests as more frequent mistakes and delays. We argue that this is the result of shared practices within the multinational firm that do not readily fit the local context. Finally, we derive management recommendations on how foreign innovation engagements can achieve levels of effectiveness and efficiency similar to those of host country competitors. Keywords: liability of foreignness, offshoring R&D, internationalization, innovation management.

    A dynamic systems approach to risk assessment in megaprojects

    Purpose: Megaprojects are large, complex, and expensive projects that often involve social, technical, economic, environmental and political (STEEP) challenges. Despite these challenges, project owners and financiers continue to invest large sums of money in megaprojects that run high risks of being over schedule and over budget. While cost, schedule and quality risks are considered to some degree during planning, the challenge of dynamically modelling how risks interact and affect project performance remains. Lessons learnt from past experience indicate a lack of dynamic tools for managing such risks effectively in megaproject construction. To help address these problems, this research puts forward an innovative dynamic systems approach to risk assessment in megaproject construction, called SDANP. Design/methodology/approach: The research has developed the SDANP method, which integrates system dynamics (SD) and the analytic network process (ANP) for risk assessment. The SDANP model presented in the thesis was tested using data and information collected through a questionnaire survey and interviews with supply-side stakeholders involved in Phase One of the construction stage of the Edinburgh Tram Network (ETN) project. The SDANP method is a case-study-driven risk assessment process and can be used against STEEP challenges in megaprojects. Findings: The case study results show that the SDANP method is an effective risk assessment tool for supporting supply-side stakeholders' decision-making in construction planning. The SDANP model demonstrated its efficiency through the case study and convinced construction practitioners of its innovation and usefulness. Research limitations/implications: Although the SDANP model has been developed for generic use in risk assessment, the data and information used to run the simulation were based on the ETN project in Edinburgh, Scotland. Using the SDANP model in other megaprojects requires further data and information from the local context. Practical implications: The SDANP method provides, for the first time, a comprehensive dynamic risk assessment of STEEP issues at the construction planning stage of megaprojects. It gives developers an interactive quantitative way to prioritise and simulate potential risks across the project supply network and to understand and predict in advance the consequences of STEEP risks for project performance at the construction stage. Originality/value: The research makes an original contribution to quantitative risk assessment, responding to the need for methodological innovation in research and for a powerful, sophisticated tool in practice. SDANP has shown advantages over existing tools such as the program evaluation and review technique (PERT) and the risk assessment matrix (RAM).
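    A minimal sketch of the ANP ingredient of such an approach: deriving priority weights for STEEP risk categories from a pairwise comparison matrix via the principal eigenvector, approximated here by power iteration. The comparison values are illustrative assumptions, not the ETN case study data, and a full SDANP model would additionally capture inter-category dependence and feedback over time.

```python
# Sketch: priority weights from a pairwise comparison matrix (power iteration).
# Comparison values on the Saaty scale are hypothetical.
categories = ["social", "technical", "economic", "environmental", "political"]

# pairwise[i][j] = judged importance of category i relative to category j
pairwise = [
    [1,   1/2, 1/3, 2,   1  ],
    [2,   1,   1/2, 3,   2  ],
    [3,   2,   1,   4,   3  ],
    [1/2, 1/3, 1/4, 1,   1/2],
    [1,   1/2, 1/3, 2,   1  ],
]

def priority_weights(matrix, iterations=100):
    """Approximate the normalized principal eigenvector of the comparison matrix."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_next = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_next)
        w = [x / total for x in w_next]
    return w

for name, weight in zip(categories, priority_weights(pairwise)):
    print(f"{name}: {weight:.3f}")
```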

    LAI Whitepaper Series: “Lean Product Development for Practitioners”: Program Management for Large Scale Engineering Programs

    The whitepaper begins by introducing the challenges of programs in section 4, proceeds to define program management in section 5, and then gives an overview of existing program management frameworks in section 6. In section 7, we introduce a new program management framework tailored to describing the early program management phases, up to the start of production. This framework is used in section 8 to summarize the relevant LAI research.

    Program Management for Large Scale Engineering Programs

    The goal of this whitepaper is to summarize the LAI research that applies to program management. The context of most of the research discussed in this whitepaper is large-scale engineering programs, particularly in the aerospace & defense sector. The main objective is to make a large number of LAI publications (around 120) accessible to industry practitioners by grouping them along major program management activities. Our goal is to provide starting points for program managers, program management staff and systems engineers to explore the knowledge accumulated by LAI and discover new thoughts and practical guidance for their everyday challenges. The whitepaper begins by introducing the challenges of programs in section 4, proceeds to define program management in section 5, and then gives an overview of existing program management frameworks in section 6. In section 7, we introduce a new program management framework tailored to describing the early program management phases, up to the start of production. This framework is used in section 8 to summarize the relevant LAI research.