
    Construction dispute reduction through an improved contracting process in the Canadian context

    This thesis presents a new approach to construction contracting in North America, referred to as the New Canadian Contracting Method (NCCM). It was developed from research into the existing contracting process used in North America generally and in Canada specifically. The NCCM addresses four main issues identified in that research: confrontational construction; dispute resolution problems and costs; the project execution team selection process; and completion of contracts. It addresses these issues without being prescriptive and without advancing one party's agenda over another, since both approaches have been common to previous, unsuccessful attempts to resolve these issues. The new contracting method proposes four elements. First, the designer and contractor are selected on a qualification basis; they may be brought onto the project team at a time when the contractor can still contribute to constructability by having input into production of the working drawings. Second, a commercial risk evaluation process is introduced as part of the negotiation or tendering stage; this innovative step allows both the owner and the contractors to have input into the identification and allocation of risk in the contract. Third, the administration of the contract involves a Proactive Mediation Process designed to reduce the incidence of conflict and to lower or eliminate conflict resolution costs. Fourth, the close-out of contracts is formalized with a process for realigning the completion of the contract by reassigning outstanding obligations to the best advantage of all parties. The draft process was tested for validity, and the consensus was that, with some modifications (included in the thesis), the NCCM could be useful to the Canadian construction industry.

    Beyond IT interoperability assessment: Complexity analysis of the project context

    IT people do best what they are trained to do: examine interoperability issues through a technical lens. It may be unfair to ask them to systematically and comprehensively analyze the non-IT concerns of an interoperability project, such as business strategy, constraints and governance. Yet to fully understand the feasibility of an interoperability project, IT people need to examine the non-IT factors that can make or break these complex, expensive and time-consuming projects. This paper presents a model that emerged from a research project on understanding the nature of IT projects: the Complexity-Based Project Classification Framework, which can be used to assess the feasibility of a business interoperability project. A three-round international Delphi study with a sample of 23 acknowledged experts identified and prioritized the non-technical project attributes that need to be analyzed when assessing IT project feasibility, and the Complexity-Based Project Classification Framework emerged from this work. The Framework is composed of three parts: preconditions, contextual complexity attributes and project effort attributes. Once the preconditions are in place (e.g. the organization supports using this model for assessing the feasibility of business interoperability), the project team can assess the interoperability project by considering its project effort attributes (e.g. technology) and its contextual attributes (e.g. relative project size). It is suggested that practitioners who use this Framework will have an improved understanding of IT interoperability project feasibility.

    The Progression Towards Project Management Competence

    The purpose of this research was to investigate the soft competencies, by project phase, that IT project managers, hybrid and technical team members require for project success. The authors conducted qualitative interviews to collect data from a sample of 22 IT project managers and business leaders located in Calgary, Canada, and identified the key competencies for the three types of job roles. The research participants offered their opinions on the most important competencies from the following competence categories: Personal Attributes (e.g. eye for detail), Communication (e.g. effective questioning), Leadership (e.g. creating an effective project environment), Negotiations (e.g. consensus building), Professionalism (e.g. lifelong learning), Social Skills (e.g. charisma) and Project Management Competencies (e.g. managing expectations). The authors discuss the progression of competence through these job roles, and identify and discuss, from a neuroscience perspective, the interplay between a change in job role and the competencies required for IT project success.

    The Delphi Method for Graduate Research

    Introduction
    It continues to be an exciting time to be a researcher in the information systems discipline; there seems to be a plethora of interesting and pressing research topics suitable for research at the master's or PhD level. Researchers may want to look forward to see what the key information systems issues will be in a wireless world, the ethical dilemmas in social network analysis, and the lessons early adopters learn. Practitioners may be interested in what others think about the strengths and weaknesses of an existing information system, or the effectiveness of a newly implemented information system. The Delphi method can help to uncover data in these research directions.
    The Delphi method is an iterative process used to collect and distill the judgments of experts using a series of questionnaires interspersed with feedback. The questionnaires are designed to focus on problems, opportunities, solutions, or forecasts. Each subsequent questionnaire is developed based on the results of the previous questionnaire. The process stops when the research question is answered: for example, when consensus is reached, theoretical saturation is achieved, or sufficient information has been exchanged. The Delphi method has its origins in the American business community and has since been widely accepted throughout the world in many industry sectors, including health care, defense, business, education, information technology, transportation and engineering. The Delphi method's flexibility is evident in how it has been used. It is a method for structuring a group communication process to facilitate group problem solving and to structure models (Linstone & Turoff, 1975). The method can also be used as a judgment, decision-aiding or forecasting tool (Rowe & Wright, 1999), and can be applied to program planning and administration (Delbecq, Van de Ven, & Gustafson, 1975). The Delphi method can be used when there is incomplete knowledge about a problem or phenomenon (Adler & Ziglio, 1996; Delbecq et al., 1975). It can be applied to problems that do not lend themselves to precise analytical techniques but that could benefit from the subjective judgments of individuals on a collective basis (Adler & Ziglio, 1996), focusing their collective human intelligence on the problem at hand (Linstone & Turoff, 1975). The Delphi method is also used to investigate what does not yet exist (Czinkota & Ronkainen, 1997; Halal, Kull, & Leffmann, 1997; Skulmoski & Hartman, 2002). It is a mature and very adaptable research method used in many research arenas by researchers across the globe. To better understand its diversity in application, one needs to consider the origins of the Delphi method.
    The Classical Delphi
    The original Delphi method was developed by Norman Dalkey of the RAND Corporation in the 1950s for a U.S.-sponsored military project. Dalkey states that the goal of the project was to solicit expert opinion to the selection, from the point of view of a Soviet strategic planner, of an optimal U.S. industrial target system and to the estimation of the number of A-bombs required to reduce the munitions output by a prescribed amount (Dalkey & Helmer, 1963, p. 458). Rowe and Wright (1999) characterize the classical Delphi method by four key features:
    1. Anonymity of Delphi participants: allows the participants to freely express their opinions without undue social pressure to conform from others in the group. Decisions are evaluated on their merit, rather than on who proposed the idea.
    2. Iteration: allows the participants to refine their views in light of the progress of the group's work from round to round.
    3. Controlled feedback: informs the participants of the other participants' perspectives, and provides the opportunity for Delphi participants to clarify or change their views.
    4. Statistical aggregation of group response: allows for a quantitative analysis and interpretation of data.
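
    The aggregation-and-feedback loop described above can be sketched in a few lines of Python. This is a minimal illustration rather than anything prescribed by the paper: the interquartile-range consensus rule, the three-round cap, and the collect_ratings callback are assumptions about one common way a numeric-rating Delphi is operationalized.

        import statistics

        def aggregate_round(ratings):
            """Controlled feedback for one item: the group median and interquartile range."""
            q1, median, q3 = statistics.quantiles(ratings, n=4)
            return {"median": median, "iqr": q3 - q1}

        def run_delphi(collect_ratings, consensus_iqr=1.0, max_rounds=3):
            """Iterate questionnaire rounds until the spread of ratings signals consensus."""
            feedback = None
            for round_no in range(1, max_rounds + 1):
                ratings = collect_ratings(round_no, feedback)  # questionnaire goes out, anonymous ratings come back
                feedback = aggregate_round(ratings)            # statistical aggregation of the group response
                if feedback["iqr"] <= consensus_iqr:           # stopping rule: consensus reached
                    break
            return round_no, feedback

    A researcher would supply collect_ratings to administer each questionnaire (for example, a 1-7 rating per item) and return the panel's responses; anonymity is preserved because only the aggregated feedback, not individual answers, is circulated between rounds.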

    Jimmy Swaggart's Secular Confession

    This is the author's accepted manuscript; the published version is available from http://dx.doi.org/10.1080/02773940902766748. Following the exposure of televangelist Jimmy Swaggart’s illicit rendezvous with a New Orleans prostitute, the Assemblies of God simultaneously orchestrated a massive attempt to silence those who would discuss the tryst and arranged the most widely publicized confession in American history theretofore. The coincidence of a “silence campaign” with the vast distribution of a public confession invites us to reconsider the nature of the public confession. For what place has a public confession, the discourse of disclosure par excellence, in a silence campaign? This question is best answered, I argue, if we understand public confession not as a stable ahistorical form, but as a practice that is informed by multiple, competing traditions. I argue that by situating Swaggart’s performance in a philosophically modern and secular tradition of public confession we can understand both its complicity in a silence campaign and, more generally, the political logic of the modern public confession.

    An Essential Role for Katanin p80 and Microtubule Severing in Male Gamete Production

    Katanin is an evolutionarily conserved microtubule-severing complex implicated in multiple aspects of microtubule dynamics. Katanin consists of a p60 severing enzyme and a p80 regulatory subunit. The p80 subunit is thought to regulate complex targeting and severing activity, but its precise role remains elusive. In lower-order species, the katanin complex has been shown to modulate mitotic and female meiotic spindle dynamics and flagella development. The in vivo function of katanin p80 in mammals is unknown. Here we show that katanin p80 is essential for male fertility. Specifically, through an analysis of a mouse loss-of-function allele (the Taily line), we demonstrate that katanin p80, most likely in association with p60, has an essential role in male meiotic spindle assembly and dissolution, in the removal of midbody microtubules and, thus, in cytokinesis. Katanin p80 also controls the formation, function and dissolution of the manchette, a microtubule structure intimately involved in defining sperm head shape and sperm tail formation, and plays a role in the formation of axoneme microtubules. Perturbed katanin p80 function, as evidenced in the Taily mouse, results in male sterility characterized by decreased sperm production, sperm with abnormal head shape, and a virtual absence of progressive motility. Collectively, these data demonstrate that katanin p80 serves an essential and evolutionarily conserved role in several aspects of male germ cell development.

    Variability of the blazar 4C 38.41 (B3 1633+382) from GHz frequencies to GeV energies

    The quasar-type blazar 4C 38.41 (B3 1633+382) experienced a large outburst in 2011, which was detected throughout the entire electromagnetic spectrum. We present the results of low-energy multifrequency monitoring by the GASP project of the WEBT consortium and collaborators, as well as those of spectropolarimetric/spectrophotometric monitoring at the Steward Observatory. We also analyse high-energy observations of the Swift and Fermi satellites. In the optical-UV band, several results indicate that there is a contribution from a QSO-like emission component, in addition to both variable and polarised jet emission. The unpolarised emission component is likely thermal radiation from the accretion disc that dilutes the jet polarisation. We estimate its brightness to be R(QSO) ~ 17.85-18 and derive the intrinsic jet polarisation degree. We find no clear correlation between the optical and radio light curves, while the correlation between the optical and γ-ray flux apparently fades in time, likely because of an increasing optical to γ-ray flux ratio. As suggested for other blazars, the long-term variability of 4C 38.41 can be interpreted in terms of an inhomogeneous bent jet, where different emitting regions can change their alignment with respect to the line of sight, leading to variations in the Doppler factor δ. Under the hypothesis that in the period 2008-2011 all the γ-ray and optical variability on a one-week timescale was due to changes in δ, this would range between ~7 and ~21. If the variability were caused by changes in the viewing angle θ only, then θ would go from ~2.6 deg to ~5 deg. Variations in the viewing angle would also account for the dependence of the polarisation degree on the source brightness in the framework of a shock-in-jet model. Comment: 19 pages, 23 figures, in press for Astronomy and Astrophysics.
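
    The interpretation above rests on the standard relativistic Doppler factor, δ = 1/[Γ(1 - β cos θ)], for a jet with bulk Lorentz factor Γ viewed at angle θ. The short Python sketch below simply evaluates this relation over the quoted range of viewing angles; the Lorentz factor used is purely illustrative and is not a value taken from the abstract.

        import numpy as np

        def doppler_factor(gamma, theta_deg):
            """Relativistic Doppler factor: delta = 1 / [Gamma * (1 - beta * cos(theta))]."""
            beta = np.sqrt(1.0 - 1.0 / gamma**2)
            theta = np.radians(theta_deg)
            return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

        # Illustrative only: a bulk Lorentz factor of 15 is assumed here, not quoted from the paper.
        for theta_deg in (2.6, 3.5, 5.0):
            print(f"theta = {theta_deg:3.1f} deg -> delta = {doppler_factor(15.0, theta_deg):4.1f}")

    Even over this narrow range of angles the Doppler factor changes by roughly a factor of two, which illustrates why small changes in jet alignment can drive large variations in the observed flux.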

    Combination of the W boson polarization measurements in top quark decays using ATLAS and CMS data at root s=8 TeV

    The combination of measurements of the W boson polarization in top quark decays performed by the ATLAS and CMS collaborations is presented. The measurements are based on proton-proton collision data produced at the LHC at a centre-of-mass energy of 8 TeV, and corresponding to an integrated luminosity of about 20 fb⁻¹ for each experiment. The measurements used events containing one lepton and having different jet multiplicities in the final state. The results are quoted as fractions of W bosons with longitudinal (F_0), left-handed (F_L), or right-handed (F_R) polarizations. The resulting combined measurements of the polarization fractions are F_0 = 0.693 ± 0.014 and F_L = 0.315 ± 0.011. The fraction F_R is calculated from the unitarity constraint to be F_R = -0.008 ± 0.007. These results are in agreement with the standard model predictions at next-to-next-to-leading order in perturbative quantum chromodynamics and represent an improvement in precision of 25% (29%) for F_0 (F_L) with respect to the most precise single measurement. A limit on anomalous right-handed vector (V_R), and left- and right-handed tensor (g_L, g_R) tWb couplings is set while fixing all others to their standard model values. The allowed regions are [-0.11, 0.16] for V_R, [-0.08, 0.05] for g_L, and [-0.04, 0.02] for g_R, at 95% confidence level. Limits on the corresponding Wilson coefficients are also derived. Peer reviewed.
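
    As a check on the arithmetic, the right-handed fraction quoted above follows directly from the unitarity of the three helicity fractions; in LaTeX notation:

        \[
          F_0 + F_L + F_R = 1
          \quad\Longrightarrow\quad
          F_R = 1 - F_0 - F_L = 1 - 0.693 - 0.315 = -0.008 .
        \]

    The quoted uncertainty of 0.007 is smaller than a naive quadrature sum of the F_0 and F_L uncertainties, which suggests the combination accounts for the anticorrelation between the two measured fractions.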

    Measurement of hadronic event shapes in high-p T multijet final states at √s = 13 TeV with the ATLAS detector

    A measurement of event-shape variables in proton-proton collisions at large momentum transfer is presented, using data collected at √s = 13 TeV with the ATLAS detector at the Large Hadron Collider. Six event-shape variables calculated using hadronic jets are studied in inclusive multijet events using data corresponding to an integrated luminosity of 139 fb⁻¹. Measurements are performed in bins of jet multiplicity and in different ranges of the scalar sum of the transverse momenta of the two leading jets, reaching scales beyond 2 TeV. These measurements are compared with predictions from Monte Carlo event generators containing leading-order or next-to-leading-order matrix elements matched to parton showers simulated to leading-logarithm accuracy. At low jet multiplicities, shape discrepancies between the measurements and the Monte Carlo predictions are observed. At high jet multiplicities, the shapes are better described but discrepancies in the normalisation are observed.
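
    For concreteness, one widely used hadronic event-shape variable built from jet transverse momenta is the transverse thrust; the abstract does not name the six variables measured, so the Python sketch below is illustrative only. It also computes the scalar sum of the two leading jets' transverse momenta, the binning variable mentioned above.

        import numpy as np

        def transverse_thrust(jet_pt, jet_phi, n_axes=360):
            """Approximate the transverse thrust by scanning candidate thrust axes in the transverse plane."""
            px, py = jet_pt * np.cos(jet_phi), jet_pt * np.sin(jet_phi)
            best = 0.0
            for phi_axis in np.linspace(0.0, np.pi, n_axes, endpoint=False):
                best = max(best, np.sum(np.abs(px * np.cos(phi_axis) + py * np.sin(phi_axis))))
            return best / np.sum(jet_pt)

        def leading_dijet_ht(jet_pt):
            """Scalar sum of the two leading jets' transverse momenta (the binning variable above)."""
            return float(np.sum(np.sort(jet_pt)[::-1][:2]))

        # Toy three-jet event (pT in GeV, phi in radians); the values are invented for illustration.
        pt, phi = np.array([850.0, 620.0, 240.0]), np.array([0.1, 3.0, -2.0])
        print(transverse_thrust(pt, phi), leading_dijet_ht(pt))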

    A search for the dimuon decay of the Standard Model Higgs boson with the ATLAS detector

    A search for the dimuon decay of the Standard Model (SM) Higgs boson is performed using data corresponding to an integrated luminosity of 139 fb⁻¹ collected with the ATLAS detector in Run 2 pp collisions at √s = 13 TeV at the Large Hadron Collider. The observed (expected) significance over the background-only hypothesis for a Higgs boson with a mass of 125.09 GeV is 2.0σ (1.7σ). The observed upper limit on the cross section times branching ratio for pp → H → μμ is 2.2 times the SM prediction at 95% confidence level, while the expected limit on an H → μμ signal assuming the absence (presence) of a SM signal is 1.1 (2.0). The best-fit value of the signal strength parameter, defined as the ratio of the observed signal yield to the one expected in the SM, is μ = 1.2 ± 0.6.
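
    Restating the signal-strength definition used above as a formula (the notation is generic, not quoted from the abstract):

        \[
          \mu \;=\; \frac{N^{\,\text{obs}}_{H\to\mu\mu}}{N^{\,\text{SM}}_{H\to\mu\mu}} \;=\; 1.2 \pm 0.6 ,
        \]

    so a value of 1 corresponds exactly to the Standard Model expectation, and the measured 1.2 ± 0.6 is compatible with it.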