
    Explicit motor sequence learning with the paretic arm after stroke

    PURPOSE: Motor sequence learning is important for stroke recovery, but experimental tasks require dexterous movements, which are impossible for many people with upper limb impairment. This makes it difficult to draw conclusions about the impact of stroke on learning motor sequences. We aimed to test a paradigm requiring gross arm movements to determine whether stroke survivors with upper limb impairment were capable of learning a movement sequence as effectively as age-matched controls. MATERIALS AND METHODS: In this case-control study, 12 stroke survivors (10-138 months post-stroke, mean age 64 years) attempted the task once using their affected arm. Ten healthy controls (mean age 66 years) used their non-dominant arm. A sequence of 10 movements was repeated 25 times. The outcome variables were: time from target illumination until the cursor left the central square (onset time; OT), accuracy (path length), and movement speed. RESULTS: OT reduced with training (p  0.1). We quantified learning as the difference in OT between the end of training and a random sequence; this difference was smaller for stroke survivors than controls (p = 0.015). CONCLUSIONS: Stroke survivors can learn a movement sequence with their paretic arm, but demonstrate impairments in sequence-specific learning. Implications for Rehabilitation: Motor sequence learning is important for recovery of movement after stroke. Stroke survivors were found to be capable of learning a movement sequence with their paretic arm, supporting the concept of repetitive task training for recovery of movement. Stroke survivors showed impaired sequence-specific learning in comparison with age-matched controls, indicating that they may need more repetitions of a sequence in order to re-learn movements. Further research is required into the effect of lesion location, time since stroke, hand dominance and gender on learning of motor sequences after stroke.

    Factors associated with time delay to carotid stenting in patients with a symptomatic carotid artery stenosis

    Treatment of a symptomatic carotid stenosis is known to be most beneficial within 14 days of the presenting event, but this frequently cannot be achieved in daily practice. The aim of this study was to assess the factors responsible for this delay to treatment. A retrospective analysis of a prospective two-center CAS database was carried out to investigate the potential factors that influence delayed CAS treatment. Of 374 patients with a symptomatic carotid stenosis, 59.1% were treated ≥14 days after the presenting event. A retinal TIA (OR = 3.59, 95% CI 1.47–8.74, p < 0.01) was found to be a predictor of delayed treatment, whereas the year of the intervention (OR = 0.32, 95% CI 0.20–0.50, p < 0.01) and a contralateral carotid occlusion (OR = 0.42, 95% CI 0.21–0.86, p = 0.02) were predictive of early treatment. Similarly, within the subgroup of patients with transient symptoms, the year of the intervention (OR = 0.28, 95% CI 0.14–0.59, p < 0.01) was associated with early treatment, whereas a retinal TIA as the qualifying event (OR = 6.96, 95% CI 2.37–20.47, p < 0.01) was associated with delayed treatment. Treatment delay was most pronounced in patients with amaurosis fugax, whereas a contralateral carotid occlusion led to early intervention. Although CAS has been performed increasingly quickly in recent years, there is still scope for an even more accelerated treatment strategy, which might prevent recurrent strokes occurring before treatment.
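    Odds ratios with Wald 95% confidence intervals of the kind reported above can be computed from a 2×2 table. A minimal sketch, using made-up counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & delayed,   b = exposed & early,
    c = unexposed & delayed, d = unexposed & early."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) from the four cell counts
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts for illustration only:
# 30 retinal-TIA patients treated late, 10 early;
# 150 other patients treated late, 184 early.
or_, (lo, hi) = odds_ratio_ci(30, 10, 150, 184)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The study's ORs would come from a multivariable regression model adjusting for the other predictors; this univariable version only shows the arithmetic behind a single OR and its confidence interval.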

    The clinical efficacy of first-generation carcinoembryonic antigen (CEACAM5)-specific CAR T cells is limited by poor persistence and transient pre-conditioning-dependent respiratory toxicity

    The primary aim of this clinical trial was to determine the feasibility of delivering first-generation CAR T cell therapy to patients with advanced, CEACAM5(+) malignancy. Secondary aims were to assess clinical efficacy, immune effector function and the optimal dose of CAR T cells. Three cohorts of patients received increasing doses of CEACAM5-specific CAR T cells after fludarabine pre-conditioning plus systemic IL2 support post T cell infusion. Patients in cohort 4 received increased-intensity pre-conditioning (cyclophosphamide and fludarabine), systemic IL2 support and CAR T cells. No objective clinical responses were observed. CAR T cell engraftment in patients within cohort 4 was significantly higher. However, engraftment was short-lived, with a rapid decline of systemic CAR T cells within 14 days. Patients in cohort 4 had transient, acute respiratory toxicity which, in combination with the lack of prolonged CAR T cell persistence, resulted in the premature closure of the trial. Elevated levels of systemic IFNγ and IL-6 implied that the CEACAM5-specific T cells had undergone immune activation in vivo, but only in patients receiving high-intensity pre-conditioning. Expression of CEACAM5 on lung epithelium may have resulted in this transient toxicity. Raised levels of serum cytokines including IL-6 in these patients implicate cytokine release as one of several potential factors exacerbating the observed respiratory toxicity. Whilst improved CAR designs and T cell production methods could improve systemic persistence and activity, methods to control CAR T cell 'on-target, off-tissue' toxicity are required to enable a clinical impact of this approach in solid malignancies.

    Multivariable risk prediction can greatly enhance the statistical power of clinical trial subgroup analysis

    BACKGROUND: When subgroup analyses of a positive clinical trial are unrevealing, such findings are commonly used to argue that the treatment's benefits apply to the entire study population; however, such analyses are often limited by poor statistical power. Multivariable risk-stratified analysis has been proposed as an important advance in investigating heterogeneity in treatment benefits, yet no one has conducted a systematic statistical examination of the circumstances influencing the relative merits of this approach vs. conventional subgroup analysis. METHODS: Using simulated clinical trials in which the probability of outcomes in individual patients was stochastically determined by the presence of risk factors and the effects of treatment, we examined the relative merits of a conventional vs. a "risk-stratified" subgroup analysis under a variety of circumstances in which there is a small amount of uniformly distributed treatment-related harm. The statistical power to detect treatment-effect heterogeneity was calculated for risk-stratified and conventional subgroup analysis while varying: 1) the number, prevalence and odds ratios of individual risk factors determining risk in the absence of treatment, 2) the predictiveness of the multivariable risk model (including the accuracy of its weights), 3) the degree of treatment-related harm, and 4) the average untreated risk of the study population. RESULTS: Conventional subgroup analysis (in which single patient attributes are evaluated "one-at-a-time") had at best moderate statistical power (30% to 45%) to detect variation in a treatment's net relative risk reduction resulting from treatment-related harm, even under optimal circumstances (overall statistical power of the study was good and treatment-effect heterogeneity was evaluated across a major risk factor [OR = 3]). In some instances a multivariable risk-stratified approach also had low to moderate statistical power (especially when the multivariable risk prediction tool had low discrimination). However, a multivariable risk-stratified approach can have excellent statistical power to detect heterogeneity in net treatment benefit under a wide variety of circumstances, including those under which conventional subgroup analysis has poor statistical power. CONCLUSION: These results suggest that, under many likely scenarios, a multivariable risk-stratified approach will have substantially greater statistical power than conventional subgroup analysis for detecting heterogeneity in treatment benefits and safety related to previously unidentified treatment-related harm. Subgroup analyses must always be well justified and interpreted with care, and conventional subgroup analyses can be useful under some circumstances; however, clinical trial reporting should include a multivariable risk-stratified analysis when an adequate externally developed risk prediction tool is available.
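    The simulation logic described in this abstract can be illustrated with a minimal, self-contained sketch. All specifics below are assumptions for illustration, not values from the paper: three binary risk factors with odds ratio 3 each, a 30% relative risk reduction plus a fixed 2% absolute harm, strata split either on a single factor or at a 10% untreated risk, and a z-test on the difference of stratum-specific risk differences as the heterogeneity test.

```python
import math
import random

def simulate_trial(n, rr_reduction=0.3, harm=0.02, seed=None):
    """One simulated trial: three binary risk factors (OR = 3 each)
    drive untreated risk; treatment removes 30% of that risk but adds
    a uniform 2% absolute harm (all values are illustrative)."""
    rng = random.Random(seed)
    patients = []
    for _ in range(n):
        factors = [rng.random() < 0.2 for _ in range(3)]
        logit = -3.0 + math.log(3) * sum(factors)
        base_risk = 1 / (1 + math.exp(-logit))
        treated = rng.random() < 0.5
        risk = (base_risk * (1 - rr_reduction) + harm) if treated else base_risk
        event = rng.random() < risk
        patients.append((factors, base_risk, treated, event))
    return patients

def interaction_p(patients, in_high):
    """Two-sided z-test for a difference in treatment risk differences
    between the strata defined by the predicate in_high(patient)."""
    cells = {}  # (stratum, treated) -> (events, n)
    for p in patients:
        e, n = cells.get((in_high(p), p[2]), (0, 0))
        cells[(in_high(p), p[2])] = (e + p[3], n + 1)
    rd, var = {}, {}
    for s in (True, False):
        e1, n1 = cells.get((s, True), (0, 1))
        e0, n0 = cells.get((s, False), (0, 1))
        p1, p0 = e1 / n1, e0 / n0
        rd[s] = p1 - p0
        var[s] = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0
    z = (rd[True] - rd[False]) / math.sqrt(var[True] + var[False] + 1e-12)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def power(stratify, n_sims=200, n=3000, alpha=0.05):
    """Fraction of simulated trials with a significant interaction."""
    hits = sum(interaction_p(simulate_trial(n, seed=i), stratify) < alpha
               for i in range(n_sims))
    return hits / n_sims

# Conventional: stratify on a single risk factor ("one-at-a-time").
# Risk-stratified: stratify on the (here, known) untreated risk.
p_conv = power(lambda p: p[0][0])
p_strat = power(lambda p: p[1] > 0.1)
print(f"power: one-at-a-time={p_conv:.2f}, risk-stratified={p_strat:.2f}")
```

In this toy setting the risk-based split aligns with the true risk model, so it detects the harm-induced heterogeneity far more often than a split on any single factor; a real analysis would use an externally developed risk score rather than the generating model.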

    Centre selection for clinical trials and the generalisability of results: a mixed methods study.

    BACKGROUND: The rationale for centre selection in randomised controlled trials (RCTs) is often unclear but may have important implications for the generalisability of trial results. The aims of this study were to evaluate the factors which currently influence centre selection in RCTs and to consider how generalisability considerations inform current and optimal practice. METHODS AND FINDINGS: We used a mixed methods approach consisting of a systematic review and meta-summary of centre selection criteria reported in RCT protocols funded by the UK National Institute of Health Research (NIHR) and initiated between January 2005 and January 2012, and an online survey on current and optimal centre selection, distributed to professionals in the 48 UK Clinical Trials Units and 10 NIHR Research Design Services. The survey design was informed by the systematic review and by two focus groups conducted with trialists at the Birmingham Centre for Clinical Trials. 129 trial protocols were included in the systematic review, with a total target sample size in excess of 317,000 participants. The meta-summary identified 53 unique centre selection criteria. 78 protocols (60%) provided at least one criterion for centre selection, but only 31 (24%) protocols explicitly acknowledged generalisability. This is consistent with the survey findings (n = 70), where fewer than a third of participants reported generalisability as a key driver of centre selection in current practice. This contrasts with trialists' views on optimal practice, where generalisability in terms of clinical practice, population characteristics and economic results were prime considerations for 60% (n = 42), 57% (n = 40) and 46% (n = 32) of respondents, respectively. CONCLUSIONS: Centres are rarely enrolled in RCTs with an explicit view to external validity, although trialists acknowledge that generalisability should ideally be more prominent in centre selection. There is a need to operationalise 'generalisability' and incorporate it at the design stage of RCTs so that results are readily transferable to 'real world' practice.

    Brain state and polarity dependent modulation of brain networks by transcranial direct current stimulation

    Despite its widespread use in cognitive studies, there is still limited understanding of whether and how transcranial direct current stimulation (tDCS) modulates brain network function. To clarify its physiological effects, we assessed brain network function using functional magnetic resonance imaging (fMRI) acquired simultaneously during tDCS. Cognitive state was manipulated by having subjects perform a Choice Reaction Task or remain at "rest." A novel factorial design was used to assess the effects of brain state and stimulation polarity. Anodal and cathodal tDCS were applied to the right inferior frontal gyrus (rIFG), a region involved in controlling the activity of large-scale intrinsic connectivity networks during switches of cognitive state. tDCS produced widespread modulation of brain activity in a polarity- and brain state-dependent manner. In the absence of task, the main effect of tDCS was to accentuate default mode network (DMN) activation and salience network (SN) deactivation. In contrast, during task performance, tDCS increased SN activation. In the absence of task, the main effect of anodal tDCS was more pronounced, whereas cathodal tDCS had a greater effect during task performance. Cathodal tDCS also accentuated the within-DMN connectivity associated with task performance. There were minimal main effects of stimulation on network connectivity. These results demonstrate that rIFG tDCS can modulate the activity and functional connectivity of large-scale brain networks involved in cognitive function, in a brain state- and polarity-dependent manner. This study provides important insight into the mechanisms by which tDCS may modulate cognitive function, and also has implications for the design of future stimulation studies.

    Reviewer agreement trends from four years of electronic submissions of conference abstracts

    BACKGROUND: The purpose of this study was to determine the inter-rater agreement between reviewers on the quality of abstract submissions to an annual national scientific meeting (Canadian Association of Emergency Physicians; CAEP) and to identify factors associated with low agreement. METHODS: All abstracts were submitted using an online system and assessed by three volunteer CAEP reviewers blinded to the abstracts' source. Reviewers used an online form specific to each type of study design to score abstracts on nine criteria, each contributing from two to six points toward the total (maximum 24). The final score was the mean of the three reviewers' scores, and inter-rater agreement was assessed using the intraclass correlation coefficient (ICC). RESULTS: 495 abstracts were received electronically during the four-year period 2001–2004, increasing from 94 abstracts in 2001 to 165 in 2004. The mean score for submitted abstracts over the four years was 14.4 (95% CI: 14.1–14.6). While there was no significant difference between mean total scores over the four years (p = 0.23), the ICC increased from fair (0.36; 95% CI: 0.24–0.49) to moderate (0.59; 95% CI: 0.50–0.68). Reviewers agreed less on individual criteria than on the total score in general, and less on subjective than objective criteria. CONCLUSION: The correlation between reviewers' total scores suggests general recognition of "high quality" and "low quality" abstracts. Criteria based on the presence or absence of objective methodological parameters (e.g., blinding in a controlled clinical trial) resulted in higher inter-rater agreement than the more subjective, opinion-based criteria. In future abstract competitions, defining criteria more objectively so that reviewers can base their responses on empirical evidence may lead to more consistent scoring and, presumably, greater fairness to submitters.
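    Intraclass correlation coefficients of the kind quoted above can be computed directly from per-abstract reviewer scores. A minimal one-way random-effects ICC(1,1) sketch with hypothetical scores (the published analysis may have used a different ICC variant):

```python
from statistics import mean

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for a list of per-abstract
    score tuples, one score per rater (k raters per abstract)."""
    n = len(scores)
    k = len(scores[0])
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    # between-abstract and within-abstract mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(scores, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical scores from three reviewers for five abstracts (max 24):
scores = [
    (14, 15, 13),
    (20, 21, 19),
    (9, 11, 10),
    (17, 16, 18),
    (12, 12, 14),
]
print(f"ICC(1,1) = {icc_oneway(scores):.2f}")
```

High values indicate that most score variance lies between abstracts rather than between reviewers of the same abstract, which is the sense in which the study's ICC measures agreement.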

    Recurrent stroke risk and cerebral microbleed burden in ischemic stroke and TIA: a meta-analysis

    OBJECTIVE: To determine the association of cerebral microbleed (CMB) burden with the risk of recurrent ischemic stroke (IS) and intracerebral hemorrhage (ICH) after IS or TIA. METHODS: We identified prospective studies of patients with IS or TIA that investigated CMBs and stroke (ICH and IS) risk during ≥3 months of follow-up. Authors provided aggregate summary-level data on stroke outcomes, with CMBs categorized according to burden (single, 2–4, and ≥5 CMBs) and distribution. We calculated absolute event rates and pooled risk ratios (RR) using random-effects meta-analysis. RESULTS: We included 5,068 patients from 15 studies. There were 115/1,284 (9.6%) recurrent IS events in patients with CMBs vs 212/3,781 (5.6%) in patients without CMBs (pooled RR 1.8 for CMBs vs no CMBs; 95% confidence interval [CI] 1.4–2.5). There were 49/1,142 (4.3%) ICH events in those with CMBs vs 17/2,912 (0.58%) in those without CMBs (pooled RR 6.3 for CMBs vs no CMBs; 95% CI 3.5–11.4). Increasing CMB burden increased the risk of IS (pooled RR [95% CI] 1.8 [1.0–3.1], 2.4 [1.3–4.4], and 2.7 [1.5–4.9] for 1 CMB, 2–4 CMBs, and ≥5 CMBs, respectively) and of ICH (pooled RR [95% CI] 4.6 [1.9–10.7], 5.6 [2.4–13.3], and 14.1 [6.9–29.0] for 1 CMB, 2–4 CMBs, and ≥5 CMBs, respectively). CONCLUSIONS: CMBs are associated with increased stroke risk after IS or TIA. With increasing CMB burden (compared to no CMBs), the risk of ICH increases more steeply than that of IS. However, IS absolute event rates remain higher than ICH absolute event rates in all CMB burden categories.
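    Pooled risk ratios of the kind reported here are commonly obtained with the DerSimonian-Laird random-effects estimator. A minimal sketch with hypothetical per-study counts (not the meta-analysis data; the authors' exact method may differ):

```python
import math

def pooled_rr_dl(studies):
    """DerSimonian-Laird random-effects pooled risk ratio.
    Each study is (events_exposed, n_exposed, events_unexposed, n_unexposed)."""
    logs, variances = [], []
    for e1, n1, e0, n0 in studies:
        logs.append(math.log((e1 / n1) / (e0 / n0)))
        # variance of log risk ratio
        variances.append(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)
    w = [1 / v for v in variances]
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    df = len(studies) - 1
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    rr = math.exp(pooled)
    return rr, (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Hypothetical per-study counts (events with CMBs, n with CMBs,
# events without CMBs, n without CMBs):
studies = [(20, 200, 15, 400), (20, 300, 40, 600), (12, 150, 25, 800)]
rr, (lo, hi) = pooled_rr_dl(studies)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The between-study variance tau^2 widens the confidence interval when the study-level risk ratios are heterogeneous, which is why random-effects intervals such as those reported above are wider than fixed-effect ones.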

    β-catenin negatively regulates expression of the prostaglandin transporter PGT in the normal intestinal epithelium and colorectal tumour cells: A role in the chemopreventive efficacy of aspirin

    Background: Levels of the pro-tumorigenic prostaglandin PGE2 are increased in colorectal cancer, previously attributed to increased synthesis through COX-2 upregulation and, more recently, to decreased catabolism. The functionally linked genes 15-prostaglandin dehydrogenase (15-PGDH) and the prostaglandin transporter PGT co-operate in prostaglandin degradation and are downregulated in colorectal cancer. We previously reported repression of 15-PGDH expression by the Wnt/β-catenin pathway, which is commonly deregulated during early colorectal neoplasia. Here we asked whether β-catenin also regulates PGT expression. Methods: The effect of β-catenin deletion in vivo was addressed by PGT immunostaining of β-catenin/lox-villin-cre-ERT2 mouse tissue. The effect of siRNA-mediated β-catenin knockdown and dnTCF4 induction in vitro was addressed by semi-quantitative and quantitative real-time RT-PCR and immunoblotting. Results: This study shows for the first time that deletion of β-catenin in murine intestinal epithelium in vivo upregulates PGT protein, especially in the crypt epithelium. Furthermore, β-catenin knockdown in vitro increases PGT expression in both colorectal adenoma- and carcinoma-derived cell lines, as does dnTCF4 induction in LS174T cells. Conclusions: These data suggest that β-catenin employs a two-pronged approach to inhibiting prostaglandin turnover during colorectal neoplasia, repressing PGT expression in addition to 15-PGDH. Furthermore, our data highlight a potential mechanism that may contribute to the chemopreventive efficacy of the non-selective NSAID aspirin. © 2012 Cancer Research UK. All rights reserved.

    The challenges faced in the design, conduct and analysis of surgical randomised controlled trials

    Randomised evaluations of surgical interventions are rare; some interventions have been widely adopted without rigorous evaluation. Unlike in other medical areas, the randomised controlled trial (RCT) design has not become the default study design for the evaluation of surgical interventions. Surgical trials are difficult to undertake successfully and pose particular practical and methodological challenges. However, RCTs have played a role in the assessment of surgical innovations, and there is scope and need for greater use. This article considers the design, conduct and analysis of an RCT of a surgical intervention. The issues are reviewed under three headings: the timing of the evaluation, defining the research question, and trial design issues. Recommendations on the conduct of future surgical RCTs are made. Collaboration between the research and surgical communities is needed to address the distinct issues raised by the assessment of surgical interventions and to enable the conduct of appropriate and well-designed trials. The Health Services Research Unit is funded by the Scottish Government Health Directorates.