
    An empirical study evaluating depth of inheritance on the maintainability of object-oriented software

    This empirical research was undertaken as part of a multi-method programme of research to investigate unsupported claims made of object-oriented technology. A series of subject-based laboratory experiments, including an internal replication, tested the effect of inheritance depth on the maintainability of object-oriented software. Subjects were timed performing identical maintenance tasks on object-oriented software with a hierarchy of three levels of inheritance depth and on equivalent object-based software with no inheritance. This was then replicated with more experienced subjects. In a second experiment of similar design, subjects were timed performing identical maintenance tasks on object-oriented software with a hierarchy of five levels of inheritance depth and on the equivalent object-based software. The collected data showed that subjects maintaining object-oriented software with three levels of inheritance depth performed the maintenance tasks significantly faster than those maintaining equivalent object-based software with no inheritance. In contrast, subjects maintaining the object-oriented software with five levels of inheritance depth took longer, on average, than the subjects maintaining the equivalent object-based software (although statistical significance was not obtained). Subjects' source code solutions and debriefing questionnaires provided some evidence suggesting that subjects began to experience difficulties with the deeper inheritance hierarchy. It is therefore not at all obvious that object-oriented software will be more maintainable in the long run. These findings are sufficiently important that attempts to verify the results should be made by independent researchers.
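    The contrast under study can be illustrated with a minimal sketch. The classes below are hypothetical, not the study's actual experimental materials: a three-level inheritance hierarchy next to an equivalent "object-based" design with no inheritance, where all behaviour lives in one class.

    ```python
    # Illustrative sketch (hypothetical classes, not the study's materials):
    # a depth-3 inheritance hierarchy vs. an equivalent flat design.

    class Account:                                 # depth 1
        def __init__(self, balance=0.0):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

    class SavingsAccount(Account):                 # depth 2
        def add_interest(self, rate):
            self.deposit(self.balance * rate)

    class StudentSavingsAccount(SavingsAccount):   # depth 3
        def add_interest(self, rate):
            # A maintainer must trace behaviour up through two superclasses.
            super().add_interest(rate + 0.01)

    class FlatStudentAccount:
        """Object-based equivalent: all behaviour defined in one class."""
        def __init__(self, balance=0.0):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def add_interest(self, rate):
            self.deposit(self.balance * (rate + 0.01))

    # Inheritance depth can be read off the method resolution order
    # (excluding the implicit `object` base).
    depth = len(StudentSavingsAccount.__mro__) - 1  # 3 for this hierarchy
    ```

    Both designs compute the same result; the experiments asked whether the indirection of the deep hierarchy helps or hinders the maintainer.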

    Dry Friction due to Adsorbed Molecules

    Using an adiabatic approximation method that searches for Tomlinson-model-like instabilities in a simple but still realistic model of two crystalline surfaces in the extremely light contact limit, sliding relative to each other with mobile molecules present at the interface, we are able to account for the virtually universal occurrence of "dry friction." The model makes important predictions for the dependence of friction on the strength of the interaction of each surface with the mobile molecules. Comment: four pages of LaTeX, figure provided

    Direct Determinations of the Redshift Behavior of the Pressure, Energy Density, and Equation of State of the Dark Energy and the Acceleration of the Universe

    One of the goals of current cosmological studies is the determination of the expansion and acceleration rates of the universe as functions of redshift, and the determination of the properties of the dark energy that can explain these observations. Here the expansion and acceleration rates are determined directly from the data, without the need for the specification of a theory of gravity, and without adopting an a priori parameterization of the form or redshift evolution of the dark energy. We use the latest set of distances to SN standard candles from Riess et al. (2004), supplemented by data on radio galaxy standard ruler sizes, as described by Daly and Djorgovski (2003, 2004). We find that the universe transitions from acceleration to deceleration at a redshift of about 0.4. The standard "concordance model" provides a reasonably good fit to the dimensionless expansion rate as a function of redshift, though it fits the dimensionless acceleration rate as a function of redshift less well. The expansion and acceleration rates are then combined with a theory of gravity to determine the pressure, energy density, and equation of state of the dark energy as functions of redshift. Adopting General Relativity as the correct theory of gravity, the redshift trends for the pressure, energy density, and equation of state of the dark energy out to redshifts of about one are determined, and are found to be generally consistent with the concordance model. Comment: 8 pages, 5 figures. Invited presentation at Coral Gables 200
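    The acceleration-deceleration transition mentioned above occurs where the deceleration parameter q(z) changes sign. As a hedged sketch (not the paper's model-free method), the standard flat LambdaCDM expression for q(z) shows how a transition redshift follows from assumed density parameters; the values Om = 0.3, OL = 0.7 are illustrative assumptions.

    ```python
    # Hedged sketch: deceleration parameter q(z) for a flat LambdaCDM
    # ("concordance") model and the redshift where q(z) = 0.
    # Om and OL are assumed illustrative values, not fitted results.

    Om, OL = 0.3, 0.7  # assumed matter / dark-energy density parameters

    def q(z):
        """q(z) = (0.5*Om*(1+z)^3 - OL) / (Om*(1+z)^3 + OL) for flat LCDM."""
        m = Om * (1.0 + z) ** 3
        return (0.5 * m - OL) / (m + OL)

    # q = 0 when Om*(1+z)^3 = 2*OL, so the transition redshift is:
    z_t = (2.0 * OL / Om) ** (1.0 / 3.0) - 1.0
    ```

    For these assumed parameters the concordance transition falls near z ~ 0.67; the direct, model-independent determination described in the abstract finds a lower value, about 0.4, which is one way the data constrain the model.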

    Spectroscopic characterization and detection of Ethyl Mercaptan in Orion

    New laboratory data on ethyl mercaptan, CH3CH2SH, in the millimeter and submillimeter-wave domains (up to 880 GHz) provided very precise values of the spectroscopic constants, which allowed the detection of gauche-CH3CH2SH towards Orion KL. 77 unblended or slightly blended lines, with no missing transitions in the range 80-280 GHz, support this identification. A detection of methyl mercaptan, CH3SH, in the spectral survey of Orion KL is reported as well. Our column density results indicate that methyl mercaptan is roughly 5 times more abundant than ethyl mercaptan in the hot core of Orion KL. Comment: Accepted for publication in ApJL (30 January 2014); submitted 8 January 2014

    Clean technology in tourist accommodation: a best practice manual


    Crossing Statistic: Bayesian interpretation, model selection and resolving dark energy parametrization problem

    By introducing Crossing functions and hyper-parameters, I show that the Bayesian interpretation of the Crossing Statistic [1] can be used straightforwardly for model selection among cosmological models. In this approach, falsifying a cosmological model requires no comparison with other models and no assumed parametrization for cosmological quantities such as the luminosity distance, the Hubble parameter, or the equation of state of dark energy. Instead, the hyper-parameters of the Crossing functions act as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without placing priors on the underlying actual model of the universe and its parameters, so the issue of dark energy parametrization is resolved. It is also shown that the method's sensitivity to the intrinsic dispersion of the data is small, another important characteristic when testing cosmological models against data with high uncertainties. Comment: 14 pages, 4 figures, discussions extended, 1 figure and two references added, main results unchanged, matches the final version to be published in JCA

    The importance of clinical leadership in the hospital setting

    © 2014 Daly et al. In many areas of the developed world, contemporary hospital care is confronted by workforce challenges, changing consumer expectations and demands, fiscal constraints, increasing demands for access to care, a mandate to improve patient-centered care, and issues concerning the quality and safety of health care. Effective governance is crucial to efforts to maximize effective management of care in the hospital setting. Emerging from this complex literature is the role of leadership in the clinical setting. The importance of effective clinical leadership in ensuring a high-quality health care system that consistently provides safe and efficient care has been reiterated in the scholarly literature and in various government reports. Recent inquiries, commissions, and reports have promoted clinician engagement and clinical leadership as critical to achieving and sustaining improvements to care quality and patient safety. In this discursive paper, we discuss clinical leadership in health care, consider published definitions of clinical leadership, synthesize the literature to describe the characteristics, qualities, or attributes required to be an effective clinical leader, consider clinical leadership in relation to hospital care, and discuss the facilitators of and barriers to effective clinical leadership in the hospital sector. Despite widespread recognition of the importance of effective clinical leadership to patient outcomes, there are considerable barriers to participation in clinical leadership. Future strategies should aim to address these barriers so as to enhance the quality of clinical leadership in hospital care.

    A case study objectively assessing female physical activity levels within the National Curriculum for Physical Education

    The purpose of this study was to assess the impact of the National Curriculum for Physical Education (NCPE) lesson themes and contexts on the profile of moderate to vigorous physical activity (MVPA). Fifteen Year 9 PE lessons were assessed within the lesson themes of Outwitting Opponents (OO) (delivered through field hockey and netball) and Accurate Replication (AR) (delivered through gymnastics) using the System for Observing the Teaching of Games in Physical Education. Accelerometry identified MVPA within PE lessons (Actigraph-GTM1, 10-second epochs, MVPA ≥2296 counts/min). Among 112 females, MVPA averaged 20.8% of available learning time. Significantly more MVPA was facilitated during OO than AR (22.7 vs. 15.9%, p<0.001, d=0.88). Within both lesson themes, warm-up was the most active lesson context, while pre- and post-lesson general management were the least active. Contrary to expectations, neither small-sided nor modified games increased MVPA within OO relative to full-sided games. During AR, technical and applied skill practice resulted in low MVPA. This objective evidence justifies concerns about female adolescent MVPA within PE. At current levels, an additional 17.5 minutes of MVPA per 60-minute PE lesson would be needed to meet the minimum 50% guideline.
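    The classification step behind these figures can be sketched directly from the stated parameters: 10-second epochs are flagged as MVPA when their count rate reaches the 2296 counts/min cut-point, and the MVPA share is the fraction of flagged epochs. The epoch values below are fabricated for illustration; only the cut-point and epoch length come from the study.

    ```python
    # Hedged sketch of accelerometer epoch classification. The cut-point
    # (2296 counts/min) and 10-s epoch length are from the study; the
    # example epoch data are made up.

    CUTPOINT_CPM = 2296          # MVPA threshold, counts per minute
    EPOCH_SECONDS = 10
    EPOCHS_PER_MIN = 60 // EPOCH_SECONDS

    def pct_mvpa(epoch_counts):
        """Percent of epochs at or above the MVPA cut-point."""
        threshold = CUTPOINT_CPM / EPOCHS_PER_MIN  # counts per 10-s epoch
        active = sum(1 for c in epoch_counts if c >= threshold)
        return 100.0 * active / len(epoch_counts)

    # A 60-minute lesson has 360 ten-second epochs; suppose 75 are active.
    lesson = [500] * 75 + [100] * 285   # fabricated example data
    share = pct_mvpa(lesson)            # 75/360 of the lesson, i.e. ~20.8%
    ```

    At the reported 20.8% average, a 60-minute lesson yields about 12.5 minutes of MVPA; reaching the 50% guideline (30 minutes) therefore requires roughly the 17.5 additional minutes the abstract cites.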