
    Excess pressure as an analogue of blood flow velocity

    INTRODUCTION: Derivation of blood flow velocity from a blood pressure waveform is a novel technique with potential clinical importance. Excess pressure, calculated from the blood pressure waveform via the reservoir-excess pressure model, is purported to be an analogue of blood flow velocity, but this claim has never been examined in detail; doing so was the aim of this study. METHODS: Intra-arterial blood pressure was measured sequentially at the brachial and radial arteries via fluid-filled catheter, simultaneously with blood flow velocity waveforms recorded via Doppler ultrasound on the contralateral arm (n = 98, aged 61 ± 10 years, 72% men). Excess pressure was derived from intra-arterial blood pressure waveforms using pressure-only reservoir-excess pressure analysis. RESULTS: Brachial and radial blood flow velocity waveform morphology was closely approximated by excess pressure derived from the respective sites of measurement (median cross-correlation coefficient r = 0.96 and r = 0.95 for brachial and radial comparisons, respectively). In frequency analyses, coherence between blood flow velocity and excess pressure was similar for the brachial and radial artery comparisons (median coherence = 0.93 and 0.92, respectively). Brachial and radial blood flow velocity pulse heights were correlated with their respective excess pressure pulse heights (r = 0.53, P < 0.001 and r = 0.43, P < 0.001, respectively). CONCLUSION: Excess pressure is an analogue of blood flow velocity, affording the opportunity to derive potentially important information related to arterial blood flow using only the blood pressure waveform.
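    As a rough illustration of the waveform comparison described above, the following sketch computes the peak of a normalised cross-correlation between two one-beat signals. This is not the study's analysis pipeline; the sampling rate and the synthetic pressure and velocity waveforms are assumptions for demonstration only.

```python
# Minimal sketch: peak normalised cross-correlation between two waveforms.
# The sampling rate and synthetic one-beat signals below are assumed, not study data.
import numpy as np

fs = 200                                    # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)               # one cardiac cycle

# Synthetic stand-ins for one beat of excess pressure and Doppler flow velocity
excess_pressure = np.exp(-((t - 0.25) / 0.08) ** 2)
flow_velocity = (np.exp(-((t - 0.27) / 0.09) ** 2)
                 + 0.02 * np.random.default_rng(1).standard_normal(t.size))

def peak_norm_xcorr(x, y):
    """Peak of the normalised cross-correlation of two equal-length signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return (np.correlate(x, y, mode="full") / len(x)).max()

r = peak_norm_xcorr(excess_pressure, flow_velocity)
print(f"peak cross-correlation r = {r:.2f}")   # close to 1 for similar morphologies
```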

    Recruitment to randomised trials: Strategies for Trial Enrolment and Participation Study. The STEPS study

    Objectives: To identify factors associated with good and poor recruitment to multicentre trials. Data sources: Part A: a database, held by the Medical Research Council and Health Technology Assessment programmes, of trials that started in or after 1994 and were due to end before 2003. Part B: interviews with people playing a wide range of roles within four trials that their funders identified as ‘exemplars’. Part C: a large multicentre trial (the CRASH trial) of treatment for head injury. Review methods: The study used a number of different perspectives (‘multiple lenses’) and three components. Part A: an epidemiological review of a cohort of trials. Part B: case studies of trials that appeared to have particularly interesting lessons for recruitment. Part C: a single, in-depth case study to examine the feasibility of applying a business-orientated analytical framework as a reference model in future trials. Results: Of the 114 trials found in Part A, fewer than one-third recruited their original target within the time originally specified, and around one-third had extensions. Factors observed more often in trials that recruited successfully were: having a dedicated trial manager, being a cancer or drug trial, and having interventions only available inside the trial. The most commonly reported strategies to improve recruitment were newsletters and mailshots, but it was not possible to assess whether they were causally linked to changes in recruitment. The analyses in Part B suggested that successful trials were those addressing clinically important questions at a timely point. The investigators were held in high esteem by the interviewees, and the trials were firmly grounded in existing clinical practices, so that the trial processes were not alien to clinical collaborators and the results could readily be applied to future practice. The interviewees considered that the needs of patients were well served by participation in the trials. Clinical collaborators particularly appreciated clear delineation of roles, which released them from much of the workload associated with trial participation. There was a strong feeling among interviewees that they were proud to be part of a successful team, and this pride fed into further success. Good groundwork and excellent communication across many levels of complex trial structures were considered to be extremely important, including training components for learning about trial interventions and processes, and team building. All four trials had faced recruitment problems, and the strategies invoked to address them afforded extra insights into the working of trials. The process of the case study in Part C drew attention to a body of research and practice in a different discipline (academic business studies). It generated a reference model derived from a combination of business theory and work within CRASH. This enabled identification of weaker managerial components within CRASH, and initiatives to strengthen them. Although it is not clear, even within CRASH, whether the initiatives that follow from developing and applying the model will be effective in increasing recruitment or other aspects of the success of the trial, the reference model could provide a template, with potential for those managing other trials to use or adapt it, especially at foundation stages. The model derived from this project could also be used as a diagnostic tool if trials have difficulties and hence as a basis for deciding what type of remedial action to take.
It may also be useful for auditing the progress of trials, such as during external review. Conclusions: While not producing sufficiently definitive results to make strong recommendations, the work here suggests that future trials should consider the different needs at different phases in the life of a trial and place greater emphasis on ‘conduct’ (the process of actually doing trials). This implies learning lessons from successful trialists and trial managers, with better training for issues relating to trial conduct. The complexity of large trials means that unanticipated difficulties are highly likely at some time in every trial. Part B suggested that successful trials were those flexible and robust enough to adapt to unexpected issues. Arguably, trialists should also expect agility from funders as part of a proactive approach to monitoring ongoing trials. Further research into different recruitment patterns (including ‘failures’) may help to clarify whether the patterns seen in the ‘exemplar’ trials are distinctive or typical. The reference model from Part C needs to be considered further in other similar and different trials to assess its robustness. These and other strategies aimed at increasing recruitment and making trials more successful need to be formally evaluated for their effectiveness in a range of trials.

    ALS monocyte-derived microglia-like cells reveal cytoplasmic TDP-43 accumulation, DNA damage, and cell-specific impairment of phagocytosis associated with disease progression

    Background: Amyotrophic lateral sclerosis (ALS) is a multifactorial neurodegenerative disease characterised by the loss of upper and lower motor neurons. Increasing evidence indicates that neuroinflammation mediated by microglia contributes to ALS pathogenesis. This microglial activation is evident in post-mortem brain tissues and neuroimaging data from patients with ALS. However, the role of microglia in the pathogenesis and progression of ALS remains unclear, partly due to the lack of a model system able to faithfully recapitulate the clinical pathology of the disease. To address this shortcoming, we describe an approach that generates monocyte-derived microglia-like cells expressing molecular markers and functional characteristics similar to those of in vivo human brain microglia. Methods: In this study, we established monocyte-derived microglia-like cells from 30 patients with sporadic ALS, including 15 patients with slow disease progression, 6 with intermediate progression, and 9 with rapid progression, together with 20 non-affected healthy controls. Results: We demonstrate that patient monocyte-derived microglia-like cells recapitulate canonical pathological features of ALS, including non-phosphorylated and phosphorylated TDP-43-positive inclusions. Moreover, ALS microglia-like cells showed significantly impaired phagocytosis, altered cytokine profiles, and abnormal morphologies consistent with a neuroinflammatory phenotype. Interestingly, all ALS microglia-like cells showed abnormal phagocytosis consistent with the progression of the disease. In-depth analysis of ALS microglia-like cells from the rapid disease progression cohort revealed significantly altered cell-specific variation in phagocytic function. In addition, DNA damage and NOD-, leucine-rich repeat- and pyrin domain-containing protein 3 (NLRP3) inflammasome activity were elevated in ALS patient monocyte-derived microglia-like cells, indicating a potential new pathway involved in driving disease progression. Conclusions: Taken together, our work demonstrates that the monocyte-derived microglia-like cell model recapitulates disease-specific hallmarks and characteristics that substantiate patient heterogeneity associated with disease subgroups. Thus, monocyte-derived microglia-like cells are highly applicable for monitoring disease progression and can be applied as a functional readout in clinical trials of anti-neuroinflammatory agents, providing a basis for personalised treatment for patients with ALS.

    Recognition memory, self-other source memory, and theory-of-mind in children with autism spectrum disorder.

    This study investigated semantic and episodic memory in autism spectrum disorder (ASD), using a task which assessed recognition and self-other source memory. Children with ASD showed undiminished recognition memory but significantly diminished source memory, relative to age- and verbal ability-matched comparison children. Children both with and without ASD showed an “enactment effect”, demonstrating significantly better recognition and source memory for self-performed actions than for other-person-performed actions. Within the comparison group, theory-of-mind (ToM) task performance was significantly correlated with source memory, specifically for other-person-performed actions (after statistically controlling for verbal ability). Within the ASD group, ToM task performance was not significantly correlated with source memory (after controlling for verbal ability). Possible explanations for these relations between source memory and ToM are considered.
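    The correlations reported after controlling for verbal ability correspond to partial correlations. The sketch below shows one standard way to compute such a statistic, by regressing the covariate out of both variables and correlating the residuals; the scores are randomly generated placeholders, not the study's data.

```python
# Minimal sketch of a partial correlation (controlling for one covariate).
# All scores below are randomly generated placeholders, not study data.
import numpy as np

rng = np.random.default_rng(42)
n = 20                                        # hypothetical group size
verbal = rng.normal(100, 15, n)               # verbal ability scores
tom = 0.03 * verbal + rng.normal(0, 1, n)     # theory-of-mind task scores
source = 0.02 * verbal + 0.5 * tom + rng.normal(0, 1, n)   # source memory scores

def residuals(y, x):
    """Residuals of y after least-squares regression on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_partial = np.corrcoef(residuals(source, verbal), residuals(tom, verbal))[0, 1]
print(f"partial r (controlling for verbal ability) = {r_partial:.2f}")
```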

    Bayesian Point Set Registration

    Point set registration involves identifying a smooth invertible transformation between corresponding points in two point sets, one of which may be smaller than the other and possibly corrupted by observation noise. This problem is traditionally decomposed into two separate optimization problems: (i) assignment or correspondence, and (ii) identification of the optimal transformation between the ordered point sets. In this work, we propose an approach that solves both problems simultaneously. In particular, a coherent Bayesian formulation of the problem results in a marginal posterior distribution on the transformation, which is explored within a Markov chain Monte Carlo scheme. Motivated by Atom Probe Tomography (APT), in the context of structure inference for high entropy alloys (HEA), we focus on the registration of noisy sparse observations of rigid transformations of a known reference configuration. Lastly, we test our method on synthetic data sets.
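    The sketch below illustrates the general idea of exploring a posterior over a rigid transformation with Markov chain Monte Carlo, using a random-walk Metropolis sampler on a 2D rotation and translation. For brevity it assumes known correspondences and synthetic data, whereas the paper's formulation also handles the assignment problem and sparse, noisy observations.

```python
# Minimal sketch: random-walk Metropolis over a 2D rigid transformation
# (rotation theta, translation t), assuming known correspondences and
# synthetic data; illustrative only, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Known reference configuration (illustrative)
ref = rng.uniform(-1.0, 1.0, size=(30, 2))

# Synthetic observations: rotate, translate, add Gaussian noise
theta_true, t_true, sigma = 0.4, np.array([0.5, -0.2]), 0.05
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
obs = ref @ R_true.T + t_true + sigma * rng.standard_normal(ref.shape)

def log_post(theta, t):
    """Gaussian likelihood of the observations; flat priors on (theta, t)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    resid = obs - (ref @ R.T + t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta, t = 0.0, np.zeros(2)
cur = log_post(theta, t)
samples = []
for _ in range(20000):
    th_prop = theta + 0.02 * rng.standard_normal()
    t_prop = t + 0.02 * rng.standard_normal(2)
    lp = log_post(th_prop, t_prop)
    if np.log(rng.uniform()) < lp - cur:        # Metropolis accept/reject
        theta, t, cur = th_prop, t_prop, lp
    samples.append((theta, *t))

post = np.array(samples[5000:])                 # discard burn-in
print("posterior mean:", post.mean(axis=0), "| truth:", theta_true, t_true)
```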

    Fluorescence characterization of clinically-important bacteria

    Healthcare-associated infections (HCAI/HAI) represent a substantial threat to patient health during hospitalization and incur billions of dollars in additional costs for subsequent treatment. One promising method for detecting bacterial contamination in a clinical setting before an HAI outbreak occurs is to exploit the native fluorescence of cellular molecules with a hand-held, rapid-sweep surveillance instrument. Previous studies have shown fluorescence-based detection to be sensitive and effective for food-borne and environmental microorganisms, and even able to distinguish between cell types, but this powerful technique has not yet been deployed on the macroscale for primary surveillance of contamination in healthcare facilities to prevent HAI. Here we report experimental data for the specification and design of such a fluorescence-based detection instrument. We have characterized the complete fluorescence response of eleven clinically relevant bacteria by generating excitation-emission matrices (EEMs) over broad wavelength ranges. Furthermore, a number of surfaces and items of equipment commonly present on a ward, and potentially responsible for pathogen transfer, have been analyzed for potential issues of background fluorescence masking the signal from contaminant bacteria. These include bedside handrails, nurse call buttons, blood pressure cuffs, and ward computer keyboards, as well as disinfectant cleaning products and microfiber cloths. All examined bacterial strains exhibited a distinctive double-peak fluorescence feature associated with tryptophan, with no other cellular fluorophore detected. Thus, this fluorescence survey found that an emission peak at 340 nm, from an excitation source at 280 nm, was the cellular fluorescence signal to target for detection of bacterial contamination. The majority of materials analyzed offer a spectral window through which bacterial contamination could indeed be detected. A few instances were found of potential problems of background fluorescence masking that of bacteria, but in the case of the microfiber cleaning cloth, imaging techniques could morphologically distinguish between stray strands and bacterial contamination.
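    To illustrate how an excitation-emission matrix pinpoints the tryptophan signature described above, the sketch below locates the dominant excitation/emission pair in a synthetic EEM. The wavelength grids, peak position, and noise level are assumptions, not measured data.

```python
# Minimal sketch: locate the dominant excitation/emission pair in an EEM.
# The EEM here is synthetic, with an assumed peak near 280 nm / 340 nm.
import numpy as np

ex = np.arange(250, 451, 5)                 # excitation wavelengths, nm
em = np.arange(300, 501, 5)                 # emission wavelengths, nm

EX, EM = np.meshgrid(ex, em, indexing="ij")
eem = np.exp(-(((EX - 280) / 15) ** 2 + ((EM - 340) / 25) ** 2))   # tryptophan-like peak
eem += 0.02 * np.random.default_rng(0).standard_normal(eem.shape)  # measurement noise

i, j = np.unravel_index(np.argmax(eem), eem.shape)
print(f"peak: excitation {ex[i]} nm, emission {em[j]} nm")   # -> 280 nm / 340 nm
```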

    Population-Based Precision Cancer Screening: A Symposium on Evidence, Epidemiology, and Next Steps

    Precision medicine, an emerging approach to disease treatment that takes into account individual variability in genes, environment, and lifestyle, is under consideration for preventive interventions, including cancer screening. On September 29, 2015, the National Cancer Institute sponsored a symposium entitled “Precision Cancer Screening in the General Population: Evidence, Epidemiology, and Next Steps”. The goal was twofold: to share current information on the evidence, practices, and challenges surrounding precision screening for breast, cervical, colorectal, lung, and prostate cancers, and to allow for in-depth discussion among experts in relevant fields regarding how epidemiology and other population sciences can be used to generate evidence to inform precision screening strategies. Attendees concluded that the strength of evidence for the efficacy and effectiveness of precision strategies varies by cancer site, that no single research strategy or methodology would be adequate or appropriate to address the many knowledge gaps in precision screening, and that issues surrounding implementation must be researched as well. Additional discussion is needed to identify the high-priority research areas in precision cancer screening for pertinent organs and to gather the necessary evidence to determine whether further implementation of precision cancer screening strategies in the general population would be feasible and beneficial.

    Practical computational toolkits for dendrimers and dendrons structure design

    Dendrimers and dendrons offer an excellent platform for developing novel drug delivery systems and medicines. The rational design and further development of these repetitively branched systems are restricted by difficulties in scalable synthesis and structural determination, which can be overcome by judicious use of molecular modelling and molecular simulations. A major difficulty in using in silico studies to design dendrimers lies in the laborious generation of their structures. Current modelling tools rely on automated assembly of simpler dendrimers or on inefficient manual assembly of monomer precursors to generate more complicated dendrimer structures. Herein we describe two novel graphical user interface (GUI) toolkits, written in Python, that provide an improved degree of automation for rapid assembly of dendrimers and generation of their 2D and 3D structures. Our first toolkit uses the RDKit library, the SMILES nomenclature of monomers, and SMARTS reaction nomenclature to generate SMILES and mol files of dendrimers without 3D coordinates. These files are used for simple graphical representations and for storing dendrimer structures in databases. The second toolkit assembles dendrimers of complex topology from monomers, constructing 3D structures to be used as starting points for simulation with existing and widely available software and force fields. Both tools were validated for ease of use in prototyping dendrimer structures, and the second toolkit was especially relevant for dendrimers of high complexity and size.
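    The sketch below shows the kind of RDKit workflow the abstract describes: SMILES monomers combined through a SMARTS reaction, followed by embedding of a rough 3D structure. It is an illustrative assumption rather than the published toolkits; the core, branching monomer, and amide-coupling SMARTS are hypothetical choices.

```python
# Minimal sketch (not the authors' toolkits): couple a branching monomer onto a
# core with a SMARTS reaction, then embed a rough 3D structure for simulation.
from rdkit import Chem
from rdkit.Chem import AllChem

core = Chem.MolFromSmiles("NCCN")                        # hypothetical diamine core
branch = Chem.MolFromSmiles("OC(=O)CC(CC(=O)O)C(=O)O")   # hypothetical tri-acid monomer

# Amide bond formation: carboxylic acid + amine bearing at least one H
amide_rxn = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])-[OD1].[N!H0:3]>>[C:1](=[O:2])[N:3]"
)

# One coupling step; a full generator would iterate this over every reactive site
products = amide_rxn.RunReactants((branch, core))
g1 = products[0][0]
Chem.SanitizeMol(g1)
print(Chem.MolToSmiles(g1))                  # SMILES output, as in the first toolkit

# Rough 3D coordinates as a simulation starting point, as in the second toolkit
g1_h = Chem.AddHs(g1)
AllChem.EmbedMolecule(g1_h, randomSeed=42)
AllChem.MMFFOptimizeMolecule(g1_h)
print(Chem.MolToMolBlock(g1_h).splitlines()[0:4])   # header of the mol block
```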