
    Origins and probabilities of MEP and structural design clashes within a federated BIM model

    Design clashes encountered during the development of a large multi-storey educational building, awarded under a Joint Contracts Tribunal (JCT) Design and Build contract, are reported upon. The building was developed in Birmingham, UK and the contract value was circa £36 million (UK Sterling, 2015). Members of the project management team (PMT) produced designs that were subsequently integrated by the main contractor into a federated building information modelling (BIM) model; at this stage 404 clashes were evident between the positions of the mechanical, electrical and plumbing (MEP) designer’s and structural designer’s building compartments. The contractor deemed that these particular clashes were ‘mission critical’ as previous experience suggested that project costs could spiral uncontrollably if left unabated. Participatory action research was employed to acquire a deeper understanding and knowledge of the clash incidents. Clash data accrued (in mm) were subsequently quantitatively modelled using the probability density function (PDF) and the cumulative distribution function (CDF). The two models produced were the Log-Logistic Three Parameter (3P) distribution (using all data, including outliers) and the Generalized Gamma distribution (excluding outliers). Both models satisfied Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests at the α = 0.01 and 0.02 levels of significance. Model parameters could be used to forecast similar clashes occurring on future projects and will prove invaluable to PMT members when accurately estimating the time and resources needed to integrate BIM designs. The predictive modelling revealed that 92.98% of clashes reside within the 30-299 mm range, while the most probable clash overlap resides in a discrete category ranging from 100-199 mm. Further qualitative investigation is also conducted to understand why these clashes occurred and to propagate ideas about how they may be mitigated. The research concludes with two important points, namely: i) BIM is not a panacea for design-related construction project rework, and innovative 21st century digital technologies are hampered by 20th century management practices; and ii) improvements in clash and error mitigation lie in a better understanding of the tolerances specified to alleviate the erroneous task of resolving unnecessary clashes. Future research is proposed that seeks to: automate the clash detection, management, analysis and resolution process; conduct further investigative analysis of the organizational and human resource management influences impacting upon design clash propagation; and devise and validate new procedural methods to mitigate clash occurrence using a real-life project.
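    As a rough illustration of the probabilistic modelling described above (not the authors' own analysis), the short Python sketch below fits a three-parameter log-logistic (Fisk) distribution to clash-overlap data, checks the fit with a Kolmogorov-Smirnov test, and reads interval probabilities from the fitted CDF. The data are simulated placeholders; only the workflow mirrors the abstract.

```python
# Minimal sketch: fit a Log-Logistic 3P (Fisk) model to clash-overlap data (mm)
# and read interval probabilities from the fitted CDF. `clashes_mm` is
# hypothetical illustrative data, not the project's clash register.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clashes_mm = stats.fisk.rvs(c=2.5, loc=20, scale=110, size=404, random_state=rng)

# Fit the three parameters (shape, location, scale) and freeze the model.
c, loc, scale = stats.fisk.fit(clashes_mm)
model = stats.fisk(c, loc=loc, scale=scale)

# Kolmogorov-Smirnov goodness of fit against the fitted CDF.
ks_stat, ks_p = stats.kstest(clashes_mm, model.cdf)

# Probability that a clash overlap falls within a given range, from the CDF.
p_30_299 = model.cdf(299) - model.cdf(30)      # cf. the reported 92.98%
p_100_199 = model.cdf(199) - model.cdf(100)    # the most probable 100-199 mm bin

print(f"KS stat={ks_stat:.3f} p={ks_p:.3f}")
print(f"P(30-299 mm)={p_30_299:.2%}, P(100-199 mm)={p_100_199:.2%}")
```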

    Dynamic Modeling of Workforce Planning for Infrastructure Projects

    Workforce planning (WP) for infrastructure projects provides a readily available pool of skilled labor that can deliver a nation’s societal needs. However, achieving a robust and reliable workforce prediction is a constant challenge, as a variety of variables and factors must be considered. Despite various forecasting techniques and approaches being developed, governments worldwide continue to produce inaccurate forecasts and consequently fail to maintain the balanced workforce required to deliver infrastructure projects. To address this problem, a system dynamics (SD) model for the construction and civil engineering industry is developed, as traditional WP modeling approaches are static and unable to accommodate the changing complex dynamics that influence workforce supply and demand. The SD model is tested and used to formulate training policies that ensure workforce equilibrium and, in turn, nurture sustainable infrastructure development.
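    A system dynamics workforce model is built from stocks, flows and feedback loops; the sketch below is a minimal, hypothetical stock-and-flow loop (workforce stock, training pipeline, attrition, and a policy that recruits against the demand gap), not the paper's calibrated SD model.

```python
# Minimal stock-and-flow sketch of a workforce system dynamics loop.
# All parameter values are illustrative assumptions.
DT = 0.25                 # time step (years)
YEARS = 15
steps = int(YEARS / DT)

workforce = 100_000.0     # stock: skilled workers
in_training = 10_000.0    # stock: trainees in the pipeline
attrition_rate = 0.08     # fraction leaving per year
training_time = 3.0       # years to become productive
adjustment_time = 2.0     # years over which the demand gap is closed

for t in range(steps):
    demand = 100_000 + 4_000 * (t * DT)          # hypothetical rising demand
    gap = demand - workforce
    # Policy: start enough trainees to cover attrition plus a share of the gap.
    recruitment = max(0.0, workforce * attrition_rate + gap / adjustment_time)
    completions = in_training / training_time
    attrition = workforce * attrition_rate

    in_training += (recruitment - completions) * DT
    workforce += (completions - attrition) * DT

print(f"Workforce after {YEARS} years: {workforce:,.0f} vs demand {demand:,.0f}")
```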

    System information modelling in practice: Analysis of tender documentation quality in a mining mega-project

    The quality of information contained in tender documentation produced using Computer-Aided Design (CAD) and provided in hard-copy format to an electrical engineering contractor for a port expansion facility, which formed an integral part of an Iron Ore mega-project, is analyzed. A System Information Model (SIM), which is an object-oriented approach, was retrospectively constructed from the documentation provided to assist the contractor with their tender bid preparation. During the creation of the SIM, a total of 426 errors and omissions were found to be contained within the 77 tender ‘drawing’ documents supplied to the contractor by an Engineering, Procurement and Construction Management (EPCM) organization. Surprisingly, 70 drawings referenced in the tender documentation, along with the Input/Output lists and Cause/Effect drawings, were not provided. Yet, the electrical contractor was required by the EPCM organization to provide a lump-sum bid and also guarantee that the proposed schedule would be met; the financial risks were too high and as a result the contractor decided not to submit a bid. It is suggested that if the original tender documentation had been prepared using a SIM rather than CAD, the quality of information presented to the contractor would have enabled them to submit a competitive bid for the works. The research concludes that the economic performance and productivity of mining projects can be significantly improved by using a SIM to engineer and document electrical instrumentation and control (EIC) systems.
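    To illustrate why an object-oriented SIM can surface documentation errors that hard-copy CAD drawings cannot, the sketch below uses hypothetical classes for devices and connections and a simple validator that flags dangling references; it is illustrative only and does not reproduce any actual SIM schema.

```python
# Hypothetical object-oriented sketch of a system information model (SIM):
# devices and connections are objects, so unknown tags or terminals can be
# detected automatically instead of surfacing as tender-drawing errors.
from dataclasses import dataclass, field

@dataclass
class Device:
    tag: str                      # e.g. an instrument or motor tag number
    terminals: set[str] = field(default_factory=set)

@dataclass
class Connection:
    from_tag: str
    from_terminal: str
    to_tag: str
    to_terminal: str

def validate(devices: dict[str, Device], connections: list[Connection]) -> list[str]:
    """Return human-readable issues: unknown devices or terminals."""
    issues = []
    for c in connections:
        for tag, term in ((c.from_tag, c.from_terminal), (c.to_tag, c.to_terminal)):
            if tag not in devices:
                issues.append(f"Connection references unknown device {tag}")
            elif term not in devices[tag].terminals:
                issues.append(f"Device {tag} has no terminal {term}")
    return issues

devices = {"P-101": Device("P-101", {"U", "V", "W"}), "MCC-1": Device("MCC-1", {"1", "2", "3"})}
connections = [Connection("MCC-1", "1", "P-101", "U"), Connection("MCC-1", "4", "P-102", "U")]
print(validate(devices, connections))   # flags terminal 4 and unknown device P-102
```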

    VfM assessment of transport PPPs: Implications for future improvement

    Public-Private Partnerships (PPPs) have been adopted to deliver transport infrastructure assets worldwide, largely attributable to governments’ increasingly limited fiscal budgets. However, the key issue of whether PPPs can provide taxpayers with better value for money (VfM) remains controversial. As an integral component of the infrastructure procurement process, VfM assessment acts as an essential stage of governments’ ex-ante evaluation for the use of PPPs. Despite this, relevant empirical research, especially within the context of transport infrastructure, has received limited attention. Against this backdrop, a total of three case studies of Australian transport PPP projects have been undertaken and a SWOT analysis was conducted to examine current practice in VfM assessment. The empirical evidence indicates that the approach being used by the Australian state governments for assessing VfM fails to address a critical perspective: the improved service quality to be generated by private-sector entities. Based on this finding, a novel VfM-oriented decision-making model underpinned by the Dynamic Discrete Choice Model (DDCM) has been proposed. This paper contributes to the body of knowledge on PPP VfM assessment and is significant for improving industry practice in procurement selection decision making for transport assets.
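    The abstract does not detail the DDCM, so the sketch below only illustrates the underlying discrete-choice idea with a generic static multinomial logit over procurement options; the attributes and weights are hypothetical, and it should not be read as the authors' model.

```python
# Generic (static) multinomial-logit sketch of procurement choice.
# Illustration of discrete-choice probabilities only, not the DDCM itself.
import math

def choice_probabilities(utilities: dict[str, float]) -> dict[str, float]:
    """Logit choice probabilities: P(i) = exp(U_i) / sum_j exp(U_j)."""
    z = sum(math.exp(u) for u in utilities.values())
    return {k: math.exp(u) / z for k, u in utilities.items()}

# Utility = weighted sum of (negative) cost, risk transfer and service quality.
weights = {"cost": -0.8, "risk_transfer": 0.5, "service_quality": 0.6}
options = {
    "PPP":         {"cost": 1.2, "risk_transfer": 0.9, "service_quality": 0.8},
    "Traditional": {"cost": 1.0, "risk_transfer": 0.2, "service_quality": 0.5},
}
utilities = {name: sum(weights[a] * v for a, v in attrs.items())
             for name, attrs in options.items()}
print(choice_probabilities(utilities))
```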

    Cost Profiling of Water Infrastructure Projects

    © 2018 American Society of Civil Engineers. The expected final budgeted costs of infrastructure assets are often exceeded during project delivery. Being able to determine the likelihood of changes to the final budget can enable clients to implement strategies to manage and control costs during construction. To understand the changing nature of costs, this paper examines the cost profiles of 1,093 water infrastructure projects that were delivered by a water utility company. Cost overruns were experienced in 656 projects. Only 1 project was delivered on budget; the remaining 436 were completed under final budget approval. Mean cost overruns and underruns of +19.97% and -32%, respectively, are found. The best-fit distributions for cost overruns and underruns, used to determine their probability of occurrence, are calculated. The research moves beyond examining the cost performance of the heterogeneous data sets that have dominated previous studies to the use of a homogeneous sample, which enables more reliable contingency forecasts to be determined.
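    The sketch below illustrates the cost-profiling workflow on simulated data: compute percentage cost growth per project, separate overruns from underruns, and fit a candidate distribution to estimate exceedance probabilities for contingency setting. The figures it produces are placeholders, not the paper's results.

```python
# Minimal cost-profiling sketch on hypothetical data (not the utility's records).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
budget = rng.uniform(0.5, 20.0, size=1_093)                # $M, hypothetical
growth = rng.normal(loc=0.05, scale=0.30, size=1_093)      # fractional cost change
final = budget * (1 + growth)

cost_growth_pct = (final - budget) / budget * 100
overruns = cost_growth_pct[cost_growth_pct > 0]
underruns = cost_growth_pct[cost_growth_pct < 0]
print(f"mean overrun {overruns.mean():+.2f}%, mean underrun {underruns.mean():+.2f}%")

# Fit a candidate distribution to overruns and estimate P(overrun > 20%),
# i.e. the chance a 20% contingency would be exhausted.
shape, loc, scale = stats.lognorm.fit(overruns)
p_exceed_20 = stats.lognorm.sf(20, shape, loc=loc, scale=scale)
print(f"P(overrun > 20%) ~ {p_exceed_20:.2%}")
```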

    Assessing the carcinogenic potential of low-dose exposures to chemical mixtures in the environment: the challenge ahead.

    Lifestyle factors are responsible for a considerable portion of cancer incidence worldwide, but credible estimates from the World Health Organization and the International Agency for Research on Cancer (IARC) suggest that the fraction of cancers attributable to toxic environmental exposures is between 7% and 19%. To explore the hypothesis that low-dose exposures to mixtures of chemicals in the environment may be combining to contribute to environmental carcinogenesis, we reviewed 11 hallmark phenotypes of cancer, multiple priority target sites for disruption in each area, and prototypical chemical disruptors for all targets; this included dose-response characterizations, evidence of low-dose effects and cross-hallmark effects for all targets and chemicals. In total, 85 examples of chemicals were reviewed for actions on key pathways/mechanisms related to carcinogenesis. Only 15% (13/85) were found to have evidence of a dose-response threshold, whereas 59% (50/85) exerted low-dose effects. No dose-response information was found for the remaining 26% (22/85). Our analysis suggests that the cumulative effects of individual (non-carcinogenic) chemicals acting on different pathways, and a variety of related systems, organs, tissues and cells, could plausibly conspire to produce carcinogenic synergies. Additional basic research on carcinogenesis and research focused on low-dose effects of chemical mixtures needs to be rigorously pursued before the merits of this hypothesis can be further advanced. However, the structure of the World Health Organization International Programme on Chemical Safety 'Mode of Action' framework should be revisited, as it has inherent weaknesses that are not fully aligned with our current understanding of cancer biology.

    Observing Exoplanets with the James Webb Space Telescope

    The census of exoplanets has revealed an enormous variety of planets orbiting stars of all ages and spectral types: planets in orbits of less than a day to frigid worlds in orbits over 100 AU; planets with masses 10 times that of Jupiter to planets with masses less than that of Earth; searingly hot planets to temperate planets in the Habitable Zone. The challenge of the coming decade is to move from demography to physical characterization. The James Webb Space Telescope (JWST) is poised to open a revolutionary new phase in our understanding of exoplanets with transit spectroscopy of relatively short-period planets and coronagraphic imaging of ones with wide separations from their host stars. This article discusses the wide variety of exoplanet opportunities enabled by JWST's sensitivity and stability, its high angular resolution, and its suite of powerful instruments. These capabilities will advance our understanding of planet formation, brown dwarfs, and the atmospheres of young to mature planets.

    Evaluation of a combined index of optic nerve structure and function for glaucoma diagnosis

    Background: The definitive diagnosis of glaucoma is currently based on congruent damage to both optic nerve structure and function. Given widespread quantitative assessment of both structure (imaging) and function (automated perimetry) in glaucoma, it should be possible to combine these quantitative data to diagnose disease. We have therefore defined and tested a new approach to glaucoma diagnosis by combining imaging and visual field data, using the anatomical organization of retinal ganglion cells. Methods: Data from 1499 eyes of glaucoma suspects and 895 eyes with glaucoma were identified at a single glaucoma center. Each underwent Heidelberg Retinal Tomograph (HRT) imaging and standard automated perimetry. A new measure combining these two tests, the structure function index (SFI), was defined in 3 steps: 1) calculate the probability that each visual field point is abnormal, 2) calculate the probability of abnormality for each of the six HRT optic disc sectors, and 3) combine those probabilities with the probability that a field point and disc sector are linked by ganglion cell anatomy. The SFI was compared to the HRT and visual field using receiver operating characteristic (ROC) analysis. Results: The SFI produced an area under the ROC curve (0.78) that was similar to that for both visual field mean deviation (0.78) and pattern standard deviation (0.80) and larger than that for a normalized measure of HRT rim area (0.66). The cases classified as glaucoma by the various tests were significantly non-overlapping. Based on the distribution of test values in the population with mild disease, the SFI may be better able to stratify this group while still clearly identifying those with severe disease. Conclusions: The SFI reflects the traditional clinical diagnosis of glaucoma by combining optic nerve structure and function. In doing so, it identifies a different subset of patients than either visual field testing or optic nerve head imaging alone. Analysis of prospective data will allow us to determine whether the combined index of structure and function can provide an improved standard for glaucoma diagnosis.
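    The three-step construction of the SFI lends itself to a compact numerical sketch. The example below follows the steps listed in the abstract but uses random placeholder probabilities and an assumed anatomical weighting and combination rule, so it is only a schematic of the idea, not the published formulation.

```python
# Schematic sketch of a structure-function index (SFI): the map, weights and
# combination rule here are illustrative assumptions.
import numpy as np

n_points, n_sectors = 52, 6          # visual field points, HRT disc sectors
rng = np.random.default_rng(2)

# Step 1: probability each visual field point is abnormal (placeholders).
p_field = rng.uniform(0, 1, n_points)

# Step 2: probability each of the six optic disc sectors is abnormal.
p_disc = rng.uniform(0, 1, n_sectors)

# Step 3: probability that field point i is anatomically linked to sector j
# (rows sum to 1), e.g. derived from a retinal ganglion cell axon map.
link = rng.dirichlet(np.ones(n_sectors), size=n_points)

# Combine: evidence that a point and its linked sector are jointly abnormal,
# summed over the anatomical map, then averaged into a single index.
joint = p_field[:, None] * p_disc[None, :] * link
sfi = joint.sum(axis=1).mean()
print(f"SFI = {sfi:.3f}")
```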

    A chemical survey of exoplanets with ARIEL

    Thousands of exoplanets have now been discovered with a huge range of masses, sizes and orbits: from rocky Earth-like planets to large gas giants grazing the surface of their host star. However, the essential nature of these exoplanets remains largely mysterious: there is no known, discernible pattern linking the presence, size, or orbital parameters of a planet to the nature of its parent star. We have little idea whether the chemistry of a planet is linked to its formation environment, or whether the type of host star drives the physics and chemistry of the planet’s birth and evolution. ARIEL was conceived to observe a large number (~1000) of transiting planets for statistical understanding, including gas giants, Neptunes, super-Earths and Earth-size planets around a range of host star types, using transit spectroscopy in the 1.25–7.8 μm spectral range and multiple narrow-band photometry in the optical. ARIEL will focus on warm and hot planets to take advantage of their well-mixed atmospheres, which should show minimal condensation and sequestration of high-Z materials compared to their colder Solar System siblings. These warm and hot atmospheres are expected to be more representative of the planetary bulk composition. Observations of these warm/hot exoplanets, and in particular of their elemental composition (especially C, O, N, S, Si), will allow the understanding of the early stages of planetary and atmospheric formation during the nebular phase and the following few million years. ARIEL will thus provide a representative picture of the chemical nature of the exoplanets and relate this directly to the type and chemical environment of the host star. ARIEL is designed as a dedicated survey mission for combined-light spectroscopy, capable of observing a large and well-defined planet sample within its 4-year mission lifetime. Transit, eclipse and phase-curve spectroscopy methods, whereby the signals from the star and planet are differentiated using knowledge of the planetary ephemerides, allow us to measure atmospheric signals from the planet at levels of 10–100 parts per million (ppm) relative to the star and, given the bright nature of the targets, also allow more sophisticated techniques, such as eclipse mapping, to give a deeper insight into the nature of the atmosphere. These types of observations require a stable payload and satellite platform with broad, instantaneous wavelength coverage to detect many molecular species, probe the thermal structure, identify clouds and monitor the stellar activity. The wavelength range proposed covers all the expected major atmospheric gases, e.g. H2O, CO2, CH4, NH3, HCN and H2S, through to more exotic metallic compounds, such as TiO, VO, and condensed species. Simulations of ARIEL performance in conducting exoplanet surveys have been performed – using conservative estimates of mission performance and a full model of all significant noise sources in the measurement – using a list of potential ARIEL targets that incorporates the latest available exoplanet statistics. The conclusion at the end of the Phase A study is that ARIEL – in line with the stated mission objectives – will be able to observe about 1000 exoplanets depending on the details of the adopted survey strategy, thus confirming the feasibility of the main science objectives.
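    As a back-of-the-envelope check on the quoted 10–100 ppm atmospheric signal levels, the sketch below applies the standard scale-height argument to a hypothetical warm sub-Neptune; the planet and star parameters are illustrative assumptions, not ARIEL targets or performance figures.

```python
# Order-of-magnitude sketch of transit-spectroscopy signal sizes; all planet
# and star parameters are hypothetical, not ARIEL mission figures.
k_B = 1.380649e-23                    # Boltzmann constant, J/K
amu = 1.6605e-27                      # atomic mass unit, kg
R_earth, R_sun = 6.371e6, 6.957e8     # radii, m

def atmospheric_signal_ppm(R_p, R_s, T_eq, mu, g, n_scale_heights=3):
    """Approximate spectral modulation 2 * n * H * R_p / R_s**2, in ppm."""
    H = k_B * T_eq / (mu * amu * g)   # atmospheric scale height, m
    return 2 * n_scale_heights * H * R_p / R_s**2 * 1e6

# Hypothetical warm sub-Neptune (1.8 R_earth) transiting a 0.6 R_sun star.
R_p, R_s = 1.8 * R_earth, 0.6 * R_sun
transit_depth_ppm = (R_p / R_s) ** 2 * 1e6
print(f"transit depth ~ {transit_depth_ppm:.0f} ppm")
print(f"atmospheric signal ~ "
      f"{atmospheric_signal_ppm(R_p, R_s, T_eq=600, mu=2.3, g=12.0):.0f} ppm")
```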