
    Keeping an eye on the truth: Pupil size, recognition memory and malingering

    Background: Estimates of the incidence of malingering in patient populations vary from 1 to 12%, rising to ∼25% in patients seeking financial compensation. Malingering is particularly difficult to detect when patients feign poor performance on neuropsychological tests (see Hutchinson, 2001). One strategy to detect malingering has been to identify psychophysiological markers associated with deception. Tardif, Barry, Fox and Johnstone (2000) used electroencephalogram (EEG) recording to measure event-related potentials (ERPs) during a standard recognition memory test. Previous research has documented an ERP “old/new effect” – late positive parietal ERPs are larger when participants view old, learned words compared to new words during recognition. Tardif et al. reasoned that if this effect is not under conscious control, then it should be equally detectable in people feigning amnesia as in participants performing to the best of their ability. As predicted, they found no difference in the magnitude or topography of the old/new ERP effect between participants asked to feign amnesia whilst performing the test and those asked to perform to the best of their ability. Whilst this approach shows some promise, EEG is comparatively time consuming and expensive. Previous research has shown that during recognition memory tests, participants' pupils dilate more when they view old items compared to new items (Otero, Weeks, and Hutton, 2006; Vo et al., 2008). This pupil “old/new effect” may present a simpler means by which to establish whether participants are feigning amnesia. Method: We used video-based oculography to compare changes in pupil size during a recognition memory test when participants were given standard recognition memory instructions, instructions to feign amnesia, and instructions to report all items as new. Because pupil size fluctuates constantly over time and varies between individuals, a pupil dilation ratio (PDR) was calculated, representing the maximum pupil size during the trial as a proportion of the maximum during baseline. Results: Participants' pupils dilated more to old items than to new items under all three instruction conditions (F(1, 25) = 47.02, MSE < 0.001, p < .001, ηp² = .65). There were no significant differences in baseline pupil size across conditions (F(1.63, 40.76) = 1.90, p = .17). Conclusions: The finding that, under standard recognition memory instructions, participants' relative increase in pupil size is greater when they view old items than when they view new items replicates previous research documenting the pupil old/new effect. That the effect persists even when participants give erroneous responses during recognition suggests that the pupil old/new effect is not under conscious control and may therefore have potential use in clinical settings as a simple means of detecting whether patients are feigning amnesia.
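    The PDR described above is a simple ratio of maxima. A minimal sketch of the computation, assuming hypothetical 1-D arrays of pupil-diameter samples (the abstract does not specify units or sampling details):

        import numpy as np

        def pupil_dilation_ratio(baseline, trial):
            # Maximum pupil size during the trial as a proportion of the
            # maximum during the baseline period (the PDR defined above).
            return np.max(trial) / np.max(baseline)

        # Toy example: pupil peaks at 4.2 mm during the trial vs 4.0 mm
        # at baseline, giving a PDR of about 1.05 (relative dilation).
        pdr = pupil_dilation_ratio(np.array([3.8, 4.0, 3.9]),
                                   np.array([4.0, 4.2, 4.1]))
        print(pdr)  # ~1.05

    Larger PDRs for old than for new items would reflect the pupil old/new effect the study measures.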

    Using blind analysis for software engineering experiments

    Context: In recent years there has been growing concern about conflicting experimental results in empirical software engineering. This has been paralleled by awareness of how bias can impact research results. Objective: To explore the practicalities of blind analysis of experimental results to reduce bias. Method: We apply blind analysis to a real software engineering experiment that compares three feature weighting approaches with a naïve benchmark (the sample mean) on the Finnish software effort data set. We use this experiment as an example to explore blind analysis as a method to reduce researcher bias. Results: Our experience shows that blinding can be a relatively straightforward procedure. We also highlight various statistical analysis decisions that ought not to be guided by the hunt for statistical significance, and show that results can be inverted merely through a seemingly inconsequential statistical nicety (i.e., the degree of trimming). Conclusion: Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy-to-implement method that supports more objective analysis of experimental results. We therefore argue that blind analysis should be the norm for analysing software engineering experiments.
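    One straightforward way to implement the blinding step (a sketch under assumed details, not necessarily the authors' exact procedure) is to replace the treatment labels with neutral codes before the analyst sees the data, unblinding only after all analysis decisions are fixed:

        import random

        def blind_labels(conditions, seed=None):
            # Map real condition names to neutral codes A, B, C, ... so the
            # analyst cannot (even unconsciously) favour any approach while
            # making decisions such as the degree of trimming.
            rng = random.Random(seed)
            codes = [chr(ord("A") + i) for i in range(len(conditions))]
            rng.shuffle(codes)
            return dict(zip(conditions, codes))

        key = blind_labels(["weighting_1", "weighting_2", "weighting_3",
                            "sample_mean_benchmark"])
        # A third party holds `key`; the analyst works on data relabelled
        # with the codes, and the mapping is revealed only at the end.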

    Advanced control concepts

    The selection of a trim solution that provides the space shuttle with the highest level of performance and dynamic control in the presence of wind disturbances and bias torques due to misalignment of the rocket engines is described. It was determined that engine gimballing alone is insufficient to trim the vehicle against headwind and sidewind disturbances, and that aerodynamic surfaces must be used in conjunction with engine gimballing to achieve trim. The algebraic equations for computing the trim solution were derived from the differential equations describing the motion of the vehicle by substituting the desired trim conditions. The general problem of showing how the trim equations are derived from the equations of motion, and the mathematical forms of the performance criterion, is discussed in detail, along with the general equations for studying the dynamic response of the trim solution.
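    In general terms (notation assumed here, not taken from the report), the derivation substitutes the trim conditions into the equations of motion, turning the differential system into an algebraic one:

        \dot{x} = f(x, u, d) \;\xrightarrow{\;\dot{x} = 0,\ x = x_{\mathrm{trim}}\;}\; 0 = f(x_{\mathrm{trim}}, u, d)

        u^{*} = \arg\min_{u} J(u) \quad \text{subject to} \quad 0 = f(x_{\mathrm{trim}}, u, d)

    where x is the vehicle state, u collects the engine gimbal angles and aerodynamic surface deflections, d represents the wind and engine-misalignment disturbances, and J is the performance criterion discussed in the report.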

    Quasi-optimum design of a six degree of freedom moving base simulator control system

    The design of a washout control system for a moving-base simulator is treated by a quasi-optimum control technique. The broad objective of the design is to reproduce the sensed motion of a six-degree-of-freedom simulator as accurately as possible without causing the simulator excursions to exceed specified limits. A performance criterion is established that weights magnitude and direction errors in specific force and in angular velocity, and that attempts to maintain the excursions within set limits by penalizing excessive excursions. A FORTRAN routine for realizing the washout law was developed, and typical time histories using the washout routine were simulated for a range of parameters in the penalty and weighting functions. These time histories and the listing of the routine are included in the report.
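    A performance criterion of the kind described above can be written (in assumed notation, not taken from the report) as an integral cost that weights specific-force and angular-velocity errors and adds penalty terms that grow sharply near the excursion limits:

        J = \int_0^T \Big[ (f_s - f_c)^\top W_f (f_s - f_c) + (\omega_s - \omega_c)^\top W_\omega (\omega_s - \omega_c) + \sum_i P_i(x_i) \Big] \, dt

    where f_s and \omega_s are the specific force and angular velocity sensed in the simulator, f_c and \omega_c are those computed for the simulated vehicle, W_f and W_\omega are weighting matrices, and each P_i(x_i) is near zero inside the allowed range of excursion x_i and rises rapidly as the limit is approached.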

    Professionalism and Job Satisfaction of Registered Nurses in the Commonwealth of Virginia

    The purpose of this study was to investigate current views of professionalism and job satisfaction among registered nurses practicing in a variety of health care settings in the state of Virginia. Five research questions were investigated in this analytical-descriptive study. A two percent stratified random sample of 427 registered nurses, female and actively employed, represented all nurses from five regions of the state. The demographic findings indicated that the majority of nurses were diploma graduates, staff nurses, employed in hospital settings, and working full-time. A descriptive analysis of the Stone and Knopke Health Care Professional Attitude Inventory items, as modified by Lawler, indicated that registered nurses have professional status according to Dumont's model of professionalism. Nurses identified consumer control, indifference to credentials, compassion, and impatience with the rate of change as important elements of professionalism. However, there was no significant relationship between nurses' professionalism and their highest level of education in nursing, current job position, or major job setting. Job satisfaction findings using Atwood and Hinshaw's Work Satisfaction Scale indicated that nurses were generally satisfied in their work setting, although they were concerned about compensation, opportunities to advance, and control of nursing practice. A significant relationship was found between nurses' work setting and job satisfaction: hospital nurses exhibited greater job satisfaction than nurses in other health care settings. A small relationship was revealed using job satisfaction as a predictor of professionalism.

    A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

    Background: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools. Discussion: We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial, and we focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone is unlikely to lead to widespread adoption. As with many technologies, it is important that reviewers see “others” in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews address classification problems; therefore, the evidence that these tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared with a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the documentation challenges peculiar to reproducible software experiments. Conclusion: We discuss adoption barriers with the goal of guiding tool developers in designing and reporting such evaluations, and end users in assessing their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.
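    Framing a screening tool's output as a classification against the human reviewer's judgements, the diagnostic-test-style metrics mentioned above reduce to a few counts. A minimal sketch (the helper and record IDs are hypothetical):

        def precision_recall(tool_includes, human_includes):
            # Treat the human reviewer's decisions as the reference standard.
            tp = len(tool_includes & human_includes)   # both say include
            fp = len(tool_includes - human_includes)   # tool-only inclusions
            fn = len(human_includes - tool_includes)   # records the tool missed
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            return precision, recall

        p, r = precision_recall({"r1", "r2", "r3"}, {"r2", "r3", "r4"})
        # p = r = 2/3 in this toy example; for screening, high recall is
        # usually the key non-inferiority requirement, since studies missed
        # at screening cannot be recovered later in the review.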

    A Decision Support Tool for Seed Mixture Calculations

    Grassland species are normally seeded in mixtures rather than monocultures. In theory, the seeding rate for a mixture is simply the sum of the amount of pure live seed (PLS) of each seed lot in the mix, an amount sufficient to ensure establishment and survival of each species. Mixtures can be complex because of the number of species used (especially in conservation and reclamation programs) and variations in seed purity and seed size. Soil limitations and seeding equipment settings also need to be considered, and in Canada a metric conversion may be required. All these conditions make by-hand calculation of mixtures containing more than three species tedious and complicated. Thus, in practice, agronomists and growers use simple rules to set rates. The easiest rule is to estimate the mixture's components as a percentage by weight of a standardized total weight of seed (e.g. 10% of 10 kg/ha). The resulting errors can be observed in the predominance of thin stands, the unexpected dominance of small-seeded species, and the added costs of interseeding to compete with weeds and of fertilizer to increase yield. The objective of this project was to develop a decision support tool, a seed mixture calculator, to simplify conversion and improve the estimates of seed required for individual seeding projects.
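    The underlying arithmetic is the standard pure-live-seed conversion: the PLS fraction of a lot is purity times germination, and the bulk weight needed is the target PLS rate divided by that fraction, summed over the lots in the mix. A sketch of that calculation (an illustration, not the calculator itself):

        def bulk_rate_kg_per_ha(pls_rate, purity, germination):
            # Bulk seed needed for one lot to deliver `pls_rate` kg/ha of
            # pure live seed; purity and germination are proportions.
            return pls_rate / (purity * germination)

        # Two-species mix: (target PLS kg/ha, purity, germination) per lot.
        mix = [(4.0, 0.95, 0.85),
               (2.0, 0.90, 0.80)]
        total_bulk = sum(bulk_rate_kg_per_ha(*lot) for lot in mix)
        print(round(total_bulk, 2))  # ~7.73 kg/ha of bulk seed to drill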

    The Effects of Individual Differences on Cued Antisaccade Performance

    In the antisaccade task, pre-cueing the location of a correct response has the paradoxical effect of increasing errors. It has been suggested that this effect occurs because participants adopt an "antisaccade task set" and treat the cue as if it were a target, directing attention away from the pre-cue and towards the location of the impending target. This hypothesis was tested using a mixed pro-/antisaccade task. In addition, the effects of individual differences in working memory capacity and schizotypal personality traits on performance were examined. Whilst we observed some modest relationships between these individual differences and antisaccade performance, the strongest predictor of antisaccade error rate was uncued prosaccade latency.

    Executive function in first-episode schizophrenia

    BACKGROUND: We tested the hypothesis that schizophrenia is primarily a frontostriatal disorder by examining executive function in first-episode patients. Previous studies have shown either equal decrements across many cognitive domains or specific deficits in memory, but such studies have grouped test results or used few executive measures, possibly losing information. We therefore measured a range of executive abilities with tests known to be sensitive to frontal lobe function. METHODS: Thirty first-episode schizophrenic patients and 30 normal volunteers, matched for age and NART IQ, were tested on computerized tests of planning, spatial working memory and attentional set shifting from the Cambridge Neuropsychological Test Automated Battery (CANTAB). Computerized and traditional tests of memory were also administered for comparison. RESULTS: Patients were worse on all tests, but the profile was non-uniform. A componential analysis indicated that the patients were characterized by a poor ability to think ahead and organize responses but an intact ability to switch attention and inhibit prepotent responses. Patients also demonstrated poor memory, especially for free recall of a story and associate learning of unrelated word pairs. CONCLUSIONS: In contradistinction to previous studies, schizophrenic patients do have profound executive impairments at the beginning of the illness. However, these concern planning and strategy use rather than attentional set shifting, which is generally unimpaired. Previous findings of severe attentional set-shifting impairment in more chronic patients suggest that executive cognitive deficits are progressive over the course of schizophrenia. The finding of severe mnemonic impairment at first episode suggests that cognitive deficits are not restricted to one cognitive domain.

    Duration of untreated psychosis and social function: 1-year follow-up study of first-episode schizophrenia.

    BACKGROUND: In first-episode schizophrenia, longer duration of untreated psychosis (DUP) predicts poorer outcomes. AIMS: To address whether the relationship between DUP and outcome is a direct causal one or the result of associations between symptoms and/or cognitive functioning and social functioning measured at the same time point. METHOD: Symptoms, social function and cognitive function were assessed in 98 patients with first-episode schizophrenia at presentation and 1 year later. RESULTS: There was no significant clinical difference between participants with short and long DUP at presentation. Linear regression analyses revealed that longer DUP significantly predicted more severe positive and negative symptoms and poorer social function at 1 year, independent of scores at presentation. Path analyses revealed independent direct relationships between DUP and social function, core negative symptoms and positive symptoms. There was no significant association between DUP and cognition. CONCLUSIONS: Longer DUP predicts poor social function independently of symptoms. The findings underline the importance of taking account of the phenomenological overlap between measures of negative symptoms and social function when investigating the effects of DUP.
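    The regression logic described above (outcome at 1 year regressed on DUP while controlling for the score at presentation) can be sketched as follows; the file and column names are assumed for illustration, not taken from the study:

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("first_episode_cohort.csv")  # hypothetical data set
        model = smf.ols(
            "social_function_1yr ~ dup_weeks + social_function_baseline",
            data=df,
        ).fit()
        print(model.summary())
        # A significant coefficient on dup_weeks would indicate that longer
        # DUP predicts poorer 1-year social function independently of the
        # level of social function at presentation.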