
    Responding to Unusual Behaviors Associated with Dementia

    Dementia is an insidious disease process that prevents an individual from making sense of environmental circumstances. Cognitively impaired patients are at increased risk for falls, skin integrity issues, accidents, and wandering behaviors. Yet, as the understanding of this disease process and the behaviors exhibited by the dementia patient grows, there is a new focus on individualizing care and attempting to manage adverse behaviors in a holistic fashion, utilizing mainly nonpharmacological interventions. It has been shown that the evidence-based best practices for mitigating adventitious behaviors in the geriatric population diagnosed with dementia are associated with nonpharmacological interventions. Utilization of technology in the form of computers to implement music therapy, reminiscence therapy, and occupational recreational therapy was the selected evidence-based best practice to implement. The aim of this work is to present nonpharmacological interventions as an inseparable method of working with people with dementia, complementary to pharmacological treatment. (JNNN 2018;7(1):40–45)

    Senates, Unions, and the Flow of Power in American Higher Education

    This article draws on longitudinal survey research to examine the influence profiles of senates standing alone on college campuses in the United States, as well as the influence profiles of coexisting senates and faculty unions. The article discusses the forces prompting a flow of power away from faculty deliberative bodies and speculates on the future of faculty senates as hard times come to American higher education.

    AN EMPIRICAL VALIDATION OF SOFTWARE COST ESTIMATION MODELS

    Practitioners have expressed concern over their inability to accurately estimate costs associated with software development. This concern has become even more pressing as these costs continue to increase. As a result, considerable research attention is now directed at gaining a better understanding of the software development process, as well as constructing and evaluating software cost estimating tools. This paper evaluates four of the most popular algorithmic models used to estimate software costs (SLIM, COCOMO, FUNCTION POINTS, and ESTIMACS). Specifically, this paper addresses the following questions: 1) Are these models accurate outside their original environments and can they be easily calibrated? 2) Two of the models use source lines of code (SLOC) as an input, and two use inputs that are easier to estimate early in the project life cycle. Can the latter models be as accurate as the SLOC models, thus eliminating the need to attempt to estimate lines of code early in the project? 3) Two of the models are proprietary and two are not. Are the models that are in the open literature as accurate as the proprietary models, thus eliminating the need to purchase estimating software? The methodology for evaluating these models was to gather data on completed software development projects and compare the actual costs with the ex post estimates obtained from the four models. Two tests were used to assess the accuracy of these models. The first was Conte's Magnitude of Relative Error (MRE) test, which divides the difference between the estimate and the actual effort by the actual effort, then takes the absolute value to eliminate problems with averaging positive and negative variances. The second test was to run simple regressions with the estimate as the independent variable and the actual effort as the dependent variable. The latter test was used for calibration and to judge the relative goodness of fit of the resulting linear models.
The source of the project data was a national computer consulting and services firm specializing in the design and development of data processing systems. The fifteen projects collected for this study covered a range of profit and not-for-profit applications. The majority of projects were written in COBOL, with an average size of approximately 200,000 source lines of code. Analysis of the data yielded several practical results. First, models developed in different environments did not perform very well uncalibrated, as might be expected. Average error rates calculated using the MRE formula ranged from 85% to 772%, with many falling in the 500–600% range. Therefore, organizations that wish to use algorithmic estimating tools need to collect historical data on their projects in order to calibrate the models for local conditions. After allowing for calibration, the best of the models explain 88% of the behavior of the actual man-month effort in this data set. The second estimation question concerned the relative efficacy of SLOC models versus non-SLOC models. In terms of the MRE results, the non-SLOC models (ESTIMACS and FUNCTION POINTS) did better, although this is likely due to their development in business data processing environments similar to that of the data source. In terms of the regression results, both COCOMO and SLIM had higher correlations than either ESTIMACS or FUNCTION POINTS. However, this conclusion must be made with reservation because the SLOC counts were obtained ex post, and are therefore likely to be much more accurate than SLOC counts obtained before a project begins. The final research question on the relative accuracy of the proprietary and the nonproprietary models was not answered conclusively by this research.
The proprietary SLIM model outperformed (in its regression coefficient of determination) the nonproprietary COCOMO model, while the nonproprietary FUNCTION POINTS model outperformed the proprietary ESTIMACS model for this data set. This research has provided several important results regarding software metrics and models. First, Albrecht's model for estimating man-months of effort from the FUNCTION POINTS metric has been validated on an independent dataset. This is particularly significant in that FUNCTION POINTS have been proposed by IBM as a general productivity measure, and because prior to this there was only limited evidence for their utility from non-IBM sources. Second, algorithmic models, while an improvement over their basic inputs as predictors of effort, do not model the productivity process very well. While improving estimation techniques within the industry is a worthwhile goal, the ultimate question must concern how the productivity of software developers could be improved. These questions are related in that the estimation models contain descriptions of what factors their developers believe affect productivity. The results of this study show that the models researched do not seem to capture the productivity factors very well. Further research needs to be done to isolate and measure these factors affecting systems professionals' productivity if the profession is to meet future challenges.
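The two accuracy tests described in this abstract can be sketched in a few lines of Python. The sketch below assumes nothing beyond the abstract's own definitions: Conte's MRE (absolute difference between estimate and actual effort, divided by actual effort) and a simple least-squares regression of actual effort on the model's estimate for local calibration. The effort figures at the bottom are hypothetical illustrations, not data from the paper's fifteen-project set.

```python
def magnitude_of_relative_error(estimate, actual):
    """Conte's MRE: |estimate - actual| / actual, so positive and
    negative variances do not cancel when averaged."""
    return abs(estimate - actual) / actual


def mean_mre(projects):
    """Average MRE over a list of (estimate, actual) effort pairs."""
    return sum(magnitude_of_relative_error(e, a) for e, a in projects) / len(projects)


def calibrate(estimates, actuals):
    """Least-squares fit of actual = intercept + slope * estimate,
    the regression form the study used to calibrate each model
    for local conditions and judge goodness of fit."""
    n = len(estimates)
    mean_x = sum(estimates) / n
    mean_y = sum(actuals) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(estimates, actuals))
    var = sum((x - mean_x) ** 2 for x in estimates)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope


# Hypothetical man-month figures, for illustration only.
projects = [(150.0, 100.0), (300.0, 120.0), (90.0, 60.0)]
uncalibrated_error = mean_mre(projects)
intercept, slope = calibrate([e for e, _ in projects], [a for _, a in projects])
```

Taking the absolute value before averaging is what lets MRE summarize accuracy across projects: a model that overestimates half the time and underestimates the other half would otherwise look spuriously accurate.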

    A Longitudinal Analysis of Software Maintenance Patterns


    Toward a Detailed Classification Scheme for Software Maintenance Activities

    Concern for Y2K compliance emphasizes the need for understanding and improved management of software maintenance activities. Relatively little empirical research has examined the type and extent of activities taking place during software maintenance. Our research represents a first attempt at developing a detailed taxonomy that describes the type and distribution of activities within software maintenance. We illustrate our taxonomy using maintenance data from an actual application system.

    FACTORS AFFECTING SOFTWARE MAINTENANCE PRODUCTIVITY: AN EXPLORATORY STUDY

    Systems developers and researchers have long been interested in the factors that affect software development productivity. Identification of factors as either aiding or hindering productivity enables management to take steps to encourage the positive influences and to eliminate the negative ones. This research has explored the possibility of developing an estimable model of software development productivity using a frontier estimation method. The approach taken is based upon output metrics for the entire project life cycle, and includes project quality metrics. A large number of factors potentially affecting software maintenance productivity were included in this initial investigation. The empirical analysis of a pilot data set indicated that high project quality did not necessarily reduce project productivity. Significant factors in explaining positive variations in productivity included project team capability and good system response (turnaround) time. Factors significantly associated with negative variations in productivity included lack of team application experience and high project staff loading. The use of a new structured analysis and design methodology also resulted in lower short-term productivity. These preliminary results have suggested a number of new research directions and have prompted the data site to begin a full-scale data collection effort in order to validate a model of software maintenance productivity.

    Software Volatility: A System-Level Measure

    With change as our only constant, information systems researchers appreciate the need to measure and understand change processes occurring in software systems (i.e., software volatility). In this study we define a system-level, multi-dimensional measure of software volatility. This measure can be used both quantitatively and qualitatively to analyze system behavior. We describe the lifecycle volatility of three application systems. We also discuss use of software volatility as a qualitative measure to interpret system behavior for software portfolio management.