
    Predictive feedback control and Fitts' law

    Fitts’ law is a well-established empirical formula, known for encapsulating the “speed-accuracy trade-off”. For discrete, manual movements from a starting location to a target, Fitts’ law relates movement duration to the distance moved and the target size. The widespread empirical success of the formula is suggestive of underlying principles of human movement control. There have been previous attempts to relate Fitts’ law to engineering-type control hypotheses, and it has been shown that the law is exactly consistent with the closed-loop step response of a time-delayed, first-order system. Assuming only the operation of closed-loop feedback, either continuous or intermittent, this paper asks whether such feedback must be predictive or non-predictive to be consistent with Fitts’ law. Since Fitts’ law is equivalent to a time delay separated from a first-order system, known control theory implies that the controller must be predictive. A predictive controller moves the time delay outside the feedback loop such that the closed-loop response can be separated into a time delay and a rational function, whereas a non-predictive controller retains a state delay within the feedback loop, which is not consistent with Fitts’ law. Using sufficient parameters, a high-order non-predictive controller could approximately reproduce Fitts’ law. However, such high-order, “non-parametric” controllers are essentially empirical in nature, without physical meaning, and are therefore conceptually inferior to the predictive controller. It is a new insight that, using closed-loop feedback, prediction is required to physically explain Fitts’ law. The implication is that prediction is an inherent part of the “speed-accuracy trade-off”.
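
    The equivalence noted above can be made concrete. If the closed-loop step response is a pure dead time tau followed by a first-order lag with time constant T, the position after the delay is x(t) = D*(1 - exp(-(t - tau)/T)), and the remaining error falls below half the target width, W/2, at t = tau + T*ln(2D/W), i.e. Fitts' law with intercept tau and slope T*ln(2). The short Python sketch below illustrates this; the parameter values tau = 0.1 s and T = 0.15 s are illustrative assumptions, not estimates from the paper.

        import numpy as np

        def movement_time(D, W, tau=0.1, T=0.15):
            """Settling time of a delayed first-order step response.

            After the dead time tau the response is x(t) = D*(1 - exp(-(t - tau)/T)),
            so the remaining error D*exp(-(t - tau)/T) falls below W/2 at
            t = tau + T*ln(2D/W): Fitts' law MT = a + b*log2(2D/W) with
            a = tau and b = T*ln(2). Parameter values here are illustrative only.
            """
            return tau + T * np.log(2.0 * D / W)

        for D, W in [(0.10, 0.02), (0.20, 0.02), (0.40, 0.01)]:
            ID = np.log2(2.0 * D / W)  # index of difficulty (bits)
            print(f"D={D:.2f} m  W={W:.3f} m  ID={ID:.1f} bits  MT={movement_time(D, W):.3f} s")

    If the delay instead remains inside the loop (the non-predictive case), the response cannot be separated into a pure delay followed by a rational transfer function, which is the distinction the paper draws.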

    Cognitive dysfunction in naturally occurring canine idiopathic epilepsy

    Globally, epilepsy is a common serious brain disorder. In addition to seizure activity, epilepsy is associated with cognitive impairments, including static cognitive impairments present at onset, progressive seizure-induced impairments and co-morbid dementia. Epilepsy occurs naturally in domestic dogs, but its impact on canine cognition has yet to be studied, despite canine cognitive dysfunction (CCD) being recognised as a spontaneous model of dementia. Here we use data from a psychometrically validated tool, the canine cognitive dysfunction rating (CCDR) scale, to compare cognitive dysfunction in dogs diagnosed with idiopathic epilepsy (IE) with controls while accounting for age. An online cross-sectional study resulted in a sample of 4051 dogs, of which n = 286 had been diagnosed with IE. Four factors were significantly associated with a diagnosis of CCD (above the diagnostic cut-off of CCDR ≥ 50): (i) epilepsy diagnosis: dogs with epilepsy were at higher risk; (ii) age: older dogs were at higher risk; (iii) weight: lighter dogs (kg) were at higher risk; (iv) training history: dogs with more exposure to training activities were at lower risk. Impairments in memory were most common in dogs with IE, but progression of impairments was not observed compared to controls. A significant interaction between epilepsy and age was identified, with IE dogs exhibiting a higher risk of CCD at a young age, while control dogs followed the expected pattern of low risk throughout middle age, with risk increasing exponentially in geriatric years. Within the IE sub-population, dogs with a history of cluster seizures and high seizure frequency had higher CCDR scores. The age of onset, nature and progression of cognitive impairment in the current IE dogs appear divergent from those classically seen in CCD. Longitudinal monitoring of cognitive function from seizure onset is required to further characterise these impairments.
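
    The risk-factor analysis summarised above corresponds to a logistic regression of CCD status (CCDR ≥ 50) on epilepsy diagnosis, age, body weight and training history, with an epilepsy-by-age interaction. A minimal sketch of such a model follows; the data file and column names are hypothetical placeholders rather than the study's actual variables.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical file and column names, used purely for illustration.
        df = pd.read_csv("ccdr_survey.csv")               # one row per dog
        df["ccd"] = (df["ccdr_score"] >= 50).astype(int)  # CCDR cut-off from the abstract

        # CCD risk with an epilepsy x age interaction, adjusting for body
        # weight (kg) and exposure to training activities.
        model = smf.logit("ccd ~ epilepsy * age + weight_kg + training", data=df)
        print(model.fit().summary())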

    Effects of automated alerts on unnecessarily repeated serology tests in a cardiovascular surgery department: a time series analysis

    Background: Laboratory testing is frequently unnecessary, particularly repetitive testing. Among the interventions proposed to reduce unnecessary testing, Computerized Decision Support Systems (CDSS) have been shown to be effective, but their impact depends on their technical characteristics. The objective of the study was to evaluate the impact of a Serology-CDSS providing point-of-care reminders of previous existing serology results, embedded in a Computerized Physician Order Entry system at a university teaching hospital in Paris, France.
    Methods: A CDSS was implemented in the Cardiovascular Surgery department of the hospital in order to decrease inappropriate repetitions of viral serology tests (HBV). A time series analysis was performed to assess the impact of the alert on physicians' practices. The study took place between January 2004 and December 2007. The primary outcome was the proportion of unnecessarily repeated HBs antigen tests over the periods of the study. A test was considered unnecessary when it was ordered within 90 days after a previous test for the same patient. A secondary outcome was the proportion of potentially unnecessary HBs antigen test orders cancelled after an alert display.
    Results: In the pre-intervention period, 3,480 viral serology tests were ordered, of which 538 (15.5%) were unnecessarily repeated. During the intervention period, of the 2,095 HBs antigen tests performed, 330 unnecessary repetitions (15.8%) were observed. Before the intervention, the mean proportion of unnecessarily repeated HBs antigen tests increased by 0.4% per month (absolute increase, 95% CI 0.2% to 0.6%, p < 0.001). After the intervention, a significant trend change occurred, with a monthly difference estimated at -0.4% (95% CI -0.7% to -0.1%, p = 0.02), resulting in a stable proportion of unnecessarily repeated HBs antigen tests. A total of 380 unnecessary tests were ordered among 500 alerts displayed (compliance rate 24%).
    Conclusions: The proportion of unnecessarily repeated tests immediately dropped after CDSS implementation and remained stable, contrasting with the significant continuous increase observed before. The compliance rate confirmed the effect of the alerts. It is necessary to continue experimentation with dedicated systems in order to improve understanding of the diversity of CDSS and their impact on clinical practice.
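
    The trend estimates reported above (a pre-intervention slope of +0.4% per month and a post-intervention slope change of -0.4% per month) are the kind of quantities a segmented regression on the monthly proportions would produce. A minimal sketch under that assumption follows; the file and column names are placeholders, not the study's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical monthly series (one row per month, Jan 2004 to Dec 2007):
        # prop_repeated = proportion of unnecessarily repeated tests, post = 0/1 alert flag.
        df = pd.read_csv("monthly_repeats.csv")
        df["time"] = np.arange(len(df))                  # months since the series start
        t0 = int(df.loc[df["post"] == 1, "time"].min())  # first month with alerts active
        df["time_after"] = np.where(df["post"] == 1, df["time"] - t0, 0)

        # Segmented regression: baseline trend, level change and trend change after the alert.
        fit = smf.ols("prop_repeated ~ time + post + time_after", data=df).fit()
        print(fit.params)  # 'time' ~ pre-intervention slope, 'time_after' ~ slope change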

    A randomised clinical trial on cardiotocography plus fetal blood sampling versus cardiotocography plus ST-analysis of the fetal electrocardiogram (STAN®) for intrapartum monitoring

    Background: Cardiotocography (CTG) is used worldwide for fetal surveillance during labour. However, CTG alone yields many false positive results and, without fetal blood sampling (FBS), it leads to an increase in operative deliveries without improving fetal outcome. FBS requires additional expertise, is invasive and often has to be repeated during labour. Two clinical trials have shown that a combination of CTG and ST-analysis of the fetal electrocardiogram (ECG) reduces the rates of metabolic acidosis and instrumental delivery. However, in both trials FBS was still performed in the ST-analysis arm, and it is therefore still unknown whether the observed results were indeed due to the ST-analysis or to the use of FBS in combination with ST-analysis.
    Methods/Design: We aim to evaluate the effectiveness of non-invasive monitoring (CTG + ST-analysis) as compared to normal care (CTG + FBS) in a multicentre randomised clinical trial setting. Secondary aims are: 1) to judge whether ST-analysis of the fetal electrocardiogram can significantly decrease the frequency of FBS or even replace it; 2) to perform a cost analysis to establish the economic impact of the two treatment options. Women in labour with a gestational age ≥ 36 weeks and an indication for CTG monitoring can be included in the trial. Eligible women will be randomised to fetal surveillance with CTG and, if necessary, FBS, or to CTG combined with ST-analysis of the fetal ECG. The primary outcome of the study is the incidence of serious metabolic acidosis (defined as pH < 7.05 and BDecf > 12 mmol/L in the umbilical cord artery). Secondary outcome measures are: instrumental delivery, neonatal outcome (Apgar score, admission to a neonatal ward), incidence of FBS in both arms and cost-effectiveness of both monitoring strategies across hospitals. The analysis will follow the intention-to-treat principle. The incidence of metabolic acidosis will be compared across both groups. Assuming a reduction in metabolic acidosis from 3.5% to 2.1%, using a two-sided test with an alpha of 0.05 and a power of 0.80 in favour of CTG plus ST-analysis, about 5100 women have to be randomised. Furthermore, the cost-effectiveness of CTG and ST-analysis as compared to CTG and FBS will be studied.
    Discussion: This study will provide data about the use of intrapartum ST-analysis with a strict protocol for performance of FBS to limit its incidence. We aim to clarify to what extent intrapartum ST-analysis can be used without the performance of FBS and in which cases FBS is still needed.
    Trial Registration Number: ISRCTN95732366
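
    The stated sample size follows from a standard two-proportion power calculation (3.5% versus 2.1%, two-sided alpha of 0.05, power of 0.80). A minimal sketch is shown below; it ignores protocol allowances such as attrition or interim analyses, so it yields a somewhat smaller figure than the roughly 5100 women quoted above.

        from statsmodels.stats.power import NormalIndPower
        from statsmodels.stats.proportion import proportion_effectsize

        # Two-sided comparison of two proportions with the figures quoted above;
        # allowances made in the actual protocol (e.g. attrition) are not modelled.
        h = proportion_effectsize(0.035, 0.021)  # Cohen's h for 3.5% vs 2.1%
        n_per_arm = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                                 power=0.80, alternative="two-sided")
        print(f"about {round(2 * n_per_arm):,} women in total ({round(n_per_arm):,} per arm)")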

    Estimates of CO2 from fires in the United States: implications for carbon management

    Background: Fires emit significant amounts of CO2 to the atmosphere. These emissions, however, are highly variable in both space and time. Additionally, CO2 emission estimates from fires are very uncertain. The combination of high spatial and temporal variability and substantial uncertainty associated with fire CO2 emissions can be problematic for efforts to develop remote sensing, monitoring, and inverse modeling techniques to quantify carbon fluxes at the continental scale. Policy and carbon management decisions based on atmospheric sampling/modeling techniques must account for the impact of fire CO2 emissions, a task that may prove very difficult for the foreseeable future. This paper addresses the variability of CO2 emissions from fires across the US, how these emissions compare to anthropogenic emissions of CO2 and to Net Primary Productivity, and the potential implications for monitoring programs and policy development.
    Results: Average annual CO2 emissions from fires in the lower 48 (LOWER48) states from 2002–2006 are estimated to be 213 (± 50 std. dev.) Tg CO2 per year, and emissions from fires in Alaska are estimated to be 80 (± 89 std. dev.) Tg CO2 per year. These estimates have significant interannual and spatial variability. Needleleaf forests in the Southeastern US and the Western US are the dominant source regions for US fire CO2 emissions. Very high emission years typically coincide with droughts, and climatic variability is a major driver of the high interannual and spatial variation in fire emissions. The amount of CO2 emitted from fires in the US is equivalent to 4–6% of anthropogenic emissions at the continental scale and, at the state level, fire emissions of CO2 can, in some cases, exceed annual emissions of CO2 from fossil fuel usage.
    Conclusion: The CO2 released from fires, overall, is a small fraction of the estimated average annual Net Primary Productivity and, unlike fossil fuel CO2 emissions, the pulsed emissions of CO2 during fires are partially counterbalanced by uptake of CO2 by regrowing vegetation in the decades following fire. Changes in fire severity and frequency can, however, lead to net changes in atmospheric CO2, and the short-term impacts of fire emissions on monitoring, modeling, and carbon management policy are substantial.
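
    For scale, the 4–6% comparison above can be reproduced approximately by setting the combined average fire emissions (lower 48 plus Alaska) against US fossil-fuel CO2 emissions of roughly 5,900 Tg CO2 per year in the mid-2000s; that anthropogenic figure is an outside assumption, not a value from the paper.

        # Rough consistency check of the 4-6% figure quoted above; the anthropogenic
        # total is an assumed mid-2000s US fossil-fuel value, not taken from the paper.
        fire_lower48_tg = 213   # mean annual fire CO2, lower 48 states (Tg per year)
        fire_alaska_tg = 80     # mean annual fire CO2, Alaska (Tg per year)
        anthro_us_tg = 5_900    # assumed US fossil-fuel CO2 emissions (Tg per year)

        share = (fire_lower48_tg + fire_alaska_tg) / anthro_us_tg
        print(f"fire emissions are roughly {share:.1%} of anthropogenic CO2")  # ~5%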