
    Validation of low-dose lung cancer PET-CT protocol and PET image improvement using machine learning

    PURPOSE: To conduct a simplified lesion-detection task of a low-dose (LD) PET-CT protocol for frequent lung screening using 30% of the effective PET-CT dose, and to investigate the feasibility of increasing the clinical value of low-statistics scans using machine learning. METHODS: We acquired 33 standard-dose (SD) PET images, of which 13 had actual LD (ALD) PET, and simulated LD (SLD) PET images at seven different count levels from the SD PET scans. We employed image quality transfer (IQT), a machine learning algorithm that performs patch regression to map parameters from low-quality to high-quality images. At each count level, patches extracted from 23 pairs of SD/SLD PET images were used to train three IQT models - global linear, single tree, and random forest regressions with cubic patch sizes of 3 and 5 voxels. The models were then used to estimate SD images from LD images at each count level for 10 unseen subjects. A lesion-detection task was carried out on matched lesion-present and lesion-absent images. RESULTS: The LD PET-CT protocol yielded lesion detectability with a sensitivity of 0.98 and a specificity of 1. The random forest algorithm with a cubic patch size of 5 allowed a further 11.7% reduction in the effective PET-CT dose without compromising lesion detectability, but underestimated SUV by 30%. CONCLUSION: The LD PET-CT protocol was validated for lesion detection using ALD PET scans. Substantial image quality improvement or additional dose reduction while preserving clinical value can be achieved using machine learning methods, though SUV quantification may be biased and adjustment of our research protocol is required for clinical use.
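    As a rough, hypothetical illustration of the patch-regression idea behind IQT (not the authors' implementation), the sketch below trains a random forest on a toy paired volume to map cubic low-dose patches to the corresponding standard-dose centre voxels; the array shapes, noise model and hyperparameters are all assumptions made for the example.

```python
# Minimal sketch of patch-based image quality transfer: a random forest maps
# 5x5x5 patches from a noisy "low-dose" volume to the matching "standard-dose"
# centre voxel. Toy data only; not the study's code or data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(volume, patch_size=5):
    """Return (n_voxels, patch_size**3) patch matrix and the centre-voxel indices."""
    r = patch_size // 2
    patches, centres = [], []
    for i in range(r, volume.shape[0] - r):
        for j in range(r, volume.shape[1] - r):
            for k in range(r, volume.shape[2] - r):
                patches.append(volume[i-r:i+r+1, j-r:j+r+1, k-r:k+r+1].ravel())
                centres.append((i, j, k))
    return np.asarray(patches), centres

rng = np.random.default_rng(0)
sd = rng.gamma(shape=2.0, scale=1.0, size=(20, 20, 20))    # stand-in "standard dose"
ld = sd + rng.normal(scale=0.5, size=sd.shape)             # noisier "low dose"

X_train, centres = extract_patches(ld)
y_train = np.array([sd[c] for c in centres])               # SD centre voxels as targets

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Estimate an SD-like volume from an LD scan (here reusing the toy volume).
estimate = np.zeros_like(sd)
for (i, j, k), value in zip(centres, model.predict(X_train)):
    estimate[i, j, k] = value
```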

    Pathophysiology of fecal incontinence differs between men and women: a case-matched study in 200 patients

    CHK is partially funded by a grant from the National Institute for Health Research.

    Improved intensive care unit survival for critically ill allogeneic haematopoietic stem cell transplant recipients following reduced intensity conditioning.

    The use of allogeneic haematopoietic stem cell transplantation (Allo-HSCT) is a standard treatment option for many patients with haematological malignancies. Historically, patients requiring intensive care unit (ICU) admission for transplant-related toxicities have fared extremely poorly, with high ICU mortality rates. Little is known about the impact of reduced intensity Allo-HSCT conditioning regimens in older patients on the ICU and subsequent long-term outcomes. A retrospective analysis was performed of data collected from 164 consecutive Allo-HSCT recipients admitted to ICU for a total of 213 admissions at a single centre over an 11.5-year study period. Follow-up was recorded until 31 March 2011. Autologous HSCT recipients were excluded. In this study we report favourable ICU survival following Allo-HSCT and, for the first time, demonstrate significantly better survival for patients who underwent Allo-HSCT with reduced intensity conditioning compared to those treated with myeloablative conditioning regimens. In addition, we identified the need for ventilation (invasive or non-invasive) as an independently significant adverse factor affecting short-term ICU outcome. For patients surviving ICU admission, subsequent long-term overall survival was excellent: 61% and 51% at 1 and 5 years, respectively. Reduced intensity Allo-HSCT patients admitted to ICU with critical illness have improved survival compared to myeloablative Allo-HSCT recipients.

    Antidepressant use and risk of epilepsy and seizures in people aged 20 to 64 years: cohort study using a primary care database

    Background: Epilepsy is a serious condition which can profoundly affect an individual’s life. While there is some evidence to suggest an association between antidepressant use and epilepsy and seizures, it is conflicting and not conclusive. Antidepressant prescribing is rising in the UK, so it is important to quantify absolute risks with individual antidepressants to enable shared decision making with patients. In this study we assess and quantify the association between antidepressant treatment and the risk of epilepsy and seizures in a large cohort of patients diagnosed with depression aged between 20 and 64 years. Methods: Data on 238,963 patients with a diagnosis of depression aged 20 to 64 from 687 UK practices were extracted from the QResearch primary care database. We used Cox proportional hazards models to analyse the time to the first recorded diagnosis of epilepsy/seizures, excluding patients with a prior history, and estimated hazard ratios for antidepressant exposure adjusting for potential confounding variables. Results: In the first 5 years of follow-up, 878 (0.37%) patients had a first diagnosis of epilepsy/seizures, with the hazard ratio (HR) significantly increased (P < 0.01) for all antidepressant drug classes and for 8 of the 11 most commonly prescribed drugs. The highest risks (in the first 5 years) compared with no treatment were for trazodone (HR 5.41, 95% confidence interval (CI) 3.05 to 9.61, number needed to harm (NNH) 65), lofepramine (HR 3.09, 95% CI 1.73 to 5.50, NNH 138), venlafaxine (HR 2.84, 95% CI 1.97 to 4.08, NNH 156) and combined antidepressant treatment (HR 2.73, 95% CI 1.52 to 4.91, NNH 166). Conclusions: Risk of epilepsy/seizures is significantly increased for all classes of antidepressant. There is a need for individual risk-benefit assessments in patients being considered for antidepressant treatment, especially those with ongoing mild depression or with additional risk factors. Residual confounding and indication bias may influence our results, so confirmation may be required from additional studies.
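    As a hedged back-of-the-envelope check (not the paper's adjusted calculation), the reported numbers needed to harm can be roughly reproduced from the overall 5-year risk and the hazard ratios; using the crude 0.37% risk as the untreated baseline is an assumption made here for illustration only.

```python
# Approximate NNH from a small baseline risk and a hazard ratio:
# risk under exposure ~ 1 - (1 - baseline)**HR, NNH = 1 / (risk difference).
baseline_risk = 878 / 238963   # crude 5-year risk from the abstract (~0.37%)

def nnh_from_hr(hr, baseline):
    risk_exposed = 1 - (1 - baseline) ** hr
    return 1 / (risk_exposed - baseline)

for drug, hr in [("trazodone", 5.41), ("lofepramine", 3.09), ("venlafaxine", 2.84)]:
    print(drug, round(nnh_from_hr(hr, baseline_risk)))
# Prints roughly 62, 131 and 149, in the same range as the reported 65, 138 and 156.
```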

    Exploring the equity of GP practice prescribing rates for selected coronary heart disease drugs: a multiple regression analysis with proxies of healthcare need

    Background There is a small but growing body of literature highlighting inequities in GP practice prescribing rates for many drug therapies. The aim of this paper is to further explore the equity of prescribing for five major CHD drug groups and to determine how much of the variation in GP practice prescribing rates can be explained by a range of healthcare needs indicators (HCNIs). Methods The study involved a cross-sectional secondary analysis in four primary care trusts (PCTs 1–4) in the North West of England, including 132 GP practices. Prescribing rates (average daily quantities per registered patient aged over 35 years) and HCNIs were developed for all GP practices. Analysis was undertaken using multiple linear regression. Results Between 22% and 25% of the variation in prescribing rates for statins, beta-blockers and bendrofluazide was explained in the multiple regression models. Slightly more variation was explained for ACE inhibitors (31.6%) and considerably more for aspirin (51.2%). Prescribing rates were positively associated with CHD hospital diagnoses and procedures for all drug groups other than ACE inhibitors. The proportion of patients aged 55–74 years was positively related to all prescribing rates other than aspirin, which was instead positively related to the proportion of patients aged >75 years. However, prescribing rates for statins and ACE inhibitors were negatively associated with the proportion of patients aged >75 years, as well as with the proportion of patients from minority ethnic groups. Prescribing rates for aspirin, bendrofluazide and all CHD drugs combined were negatively associated with deprivation. Conclusion Although around 25–50% of the variation in prescribing rates was explained by HCNIs, this varied markedly between PCTs and drug groups. Prescribing rates were generally characterised by both positive and negative associations with HCNIs, suggesting possible inequities in prescribing rates on the basis of ethnicity, deprivation and the proportion of patients aged over 75 years (for statins and ACE inhibitors, but not for aspirin).
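    The sketch below illustrates, with synthetic data, the general form of the analysis described: practice-level prescribing rates regressed on healthcare-need indicators with ordinary least squares, reading off the R² and coefficient signs. Column names and effect sizes are hypothetical, not taken from the study's dataset.

```python
# Hypothetical practice-level regression of a prescribing rate on HCNIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 132  # number of GP practices in the study
practices = pd.DataFrame({
    "chd_admission_rate": rng.normal(10, 2, n),   # CHD diagnoses/procedures proxy
    "pct_aged_55_74":     rng.normal(20, 4, n),
    "pct_aged_over_75":   rng.normal(8, 2, n),
    "deprivation_score":  rng.normal(25, 10, n),
})
# Synthetic outcome with a positive need gradient plus noise.
practices["statin_adq_rate"] = (
    0.5 * practices["chd_admission_rate"]
    + 0.2 * practices["pct_aged_55_74"]
    + rng.normal(0, 2, n)
)

X = sm.add_constant(practices.drop(columns="statin_adq_rate"))
fit = sm.OLS(practices["statin_adq_rate"], X).fit()
print(fit.rsquared)   # share of variation explained (cf. 22-52% in the paper)
print(fit.params)     # sign and size of each HCNI association
```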

    Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock: 2016.

    OBJECTIVE: To provide an update to "Surviving Sepsis Campaign Guidelines for Management of Sepsis and Septic Shock: 2012." DESIGN: A consensus committee of 55 international experts representing 25 international organizations was convened. Nominal groups were assembled at key international meetings (for those committee members attending the conference). A formal conflict-of-interest (COI) policy was developed at the onset of the process and enforced throughout. A stand-alone meeting was held for all panel members in December 2015. Teleconferences and electronic-based discussion among subgroups and among the entire committee served as an integral part of the development. METHODS: The panel consisted of five sections: hemodynamics, infection, adjunctive therapies, metabolic, and ventilation. Population, intervention, comparison, and outcomes (PICO) questions were reviewed and updated as needed, and evidence profiles were generated. Each subgroup generated a list of questions, searched for the best available evidence, and then followed the principles of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system to assess the quality of evidence from high to very low, and to formulate recommendations as strong or weak, or as best-practice statements when applicable. RESULTS: The Surviving Sepsis Guideline panel provided 93 statements on early management and resuscitation of patients with sepsis or septic shock. Overall, 32 were strong recommendations, 39 were weak recommendations, and 18 were best-practice statements. No recommendation was provided for four questions. CONCLUSIONS: Substantial agreement exists among a large cohort of international experts regarding many strong recommendations for the best care of patients with sepsis. Although a significant number of aspects of care have relatively weak support, evidence-based recommendations regarding the acute management of sepsis and septic shock are the foundation of improved outcomes for these critically ill patients with high mortality.

    Reproductive health and quality of life of young Burmese refugees in Thailand

    BACKGROUND: Of the 140,000 Burmese* refugees living in camps in Thailand, 30% are youths aged 15-24. Health services in these camps do not specifically target young people, and their problems and needs are poorly understood. This study aimed to assess their reproductive health issues and quality of life, and to identify appropriate service needs. METHODS: We used a stratified two-stage random sample questionnaire survey of 397 young people aged 15-24 years from 5,183 households, and 19 semi-structured qualitative interviews, to assess and explore health and quality of life issues. RESULTS: The young people in the camps had very limited knowledge of reproductive health issues; only about one in five correctly answered at least one question on reproductive health. They were clear that they wanted more reproductive health education and services, to be provided by health workers rather than by parents or teachers, who were not able to give them the information they needed. Marital status was associated with sexual health knowledge; relevant knowledge of reproductive health was up to six times higher in married than in unmarried youth, after adjusting for socio-economic and demographic factors. Although condom use was considered important, in practice a large proportion of respondents felt too embarrassed to use them. There was a contradiction between moral views and actual behaviour: more than half believed they should remain virgins until marriage, while over half of the youth had experienced sex before marriage. Two thirds of women were married before the age of 18, but two thirds felt they did not marry at the right age. Forced sex was considered acceptable by one in three youth. The youth considered their quality of life to be poor and limited owing to confinement in the camps, the limited work opportunities, the aid dependency, the unclear future, and the boredom and unhappiness they face. CONCLUSIONS: The long conflict in Myanmar and the resultant decades-long stay in refugee camps affect the wellbeing of these young people. Lack of sexual health education and relevant services, and their concerns for their future, are particular problems which need to be addressed. Issues of education, vocational training and job possibilities also need to be considered. (*Burmese is used for all ethnic groups.)

    The impact of workplace risk factors on the occurrence of neck and upper limb pain: a general population study

    BACKGROUND: Work-related neck and upper limb pain has mainly been studied in specific occupational groups, and little is known about its impact in the general population. The objectives of this study were to estimate the prevalence and population impact of work-related neck and upper limb pain. METHODS: A cross-sectional survey was conducted of 10 000 adults in North Staffordshire, UK, where there is a common local manual industry. The primary outcome measure was presence or absence of neck and upper limb pain. Participants were asked to give details of up to five recent jobs, and to report exposure to six work activities involving the neck or upper limbs. Psychosocial measures included job control, demand and support. Odds ratios (ORs) and population attributable fractions were calculated for these risk factors. RESULTS: The age-standardized one-month period prevalence of neck and upper limb pain was 44%. There were significant independent associations between neck and upper limb pain and: repeated lifting of heavy objects (OR = 1.4); prolonged bending of the neck (OR = 2.0); working with arms at/above shoulder height (OR = 1.3); little job control (OR = 1.6); and little supervisor support (OR = 1.3). The population attributable fractions were 0.24 (24%) for exposure to work activities and 0.12 (12%) for exposure to psychosocial factors. CONCLUSION: Neck and upper limb pain is associated with both physical and psychosocial factors in the work environment. Inferences of cause and effect from cross-sectional studies must be made with caution; nonetheless, our findings suggest that modification of the work environment might prevent up to one in three cases of neck and upper limb pain in the general population, depending on current exposures to occupational risk.
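    For readers unfamiliar with the population attributable fraction, a minimal sketch using Levin's formula is shown below; the exposure prevalence is invented for illustration, since the abstract does not report it, and the odds ratio is treated as an approximation to the relative risk.

```python
# Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1)), with OR ~ RR.
def paf(prevalence, odds_ratio):
    excess = prevalence * (odds_ratio - 1)
    return excess / (1 + excess)

# e.g. prolonged neck bending (OR = 2.0) at an assumed 30% exposure prevalence
print(paf(0.30, 2.0))   # ~0.23, i.e. about 23% of cases attributable
```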

    A simple rule governs the evolution and development of hominin tooth size

    The variation in molar tooth size in humans and our closest relatives (hominins) has strongly influenced our view of human evolution. The reduction in overall size and disproportionate decrease in third molar size have been noted for over a century, and have been attributed to reduced selection for large dentitions owing to changes in diet or the acquisition of cooking [1, 2]. The systematic pattern of size variation along the tooth row has been described as a ‘morphogenetic gradient’ in mammalian, and more specifically hominin, teeth since Butler [3] and Dahlberg [4]. However, the underlying controls of tooth size have not been well understood, with hypotheses ranging from morphogenetic fields [3] to the clone theory [5]. In this study we address the following question: are there rules that govern how hominin tooth size evolves? Here we propose that the inhibitory cascade, an activator–inhibitor mechanism that affects relative tooth size in mammals [6], produces the default pattern of tooth sizes for all lower primary postcanine teeth (deciduous premolars and permanent molars) in hominins. This configuration is also equivalent to a morphogenetic gradient, finally pointing to a mechanism that can generate this gradient. The pattern of tooth size remains constant with absolute size in australopiths (including Ardipithecus, Australopithecus and Paranthropus). However, in species of Homo, including modern humans, there is a tight link between tooth proportions and absolute size, such that a single developmental parameter can explain both the relative and absolute sizes of primary postcanine teeth. On the basis of the relationship of inhibitory cascade patterning with size, we can use the size at one tooth position to predict the sizes of the remaining four primary postcanine teeth in the row for hominins. Our study provides a development-based expectation with which to examine the evolution of the unique proportions of human teeth.
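    One way to picture the ‘simple rule’ is the linear size gradient that the inhibitory cascade predicts along the postcanine row: fixing the size at one tooth position, plus a single gradient parameter, determines the rest. The sketch below only illustrates that idea with made-up numbers; it is not the model fitted in the paper.

```python
# Illustrative linear gradient along the lower postcanine row
# (positions 1-5: dp3, dp4, M1, M2, M3). Parameter values are invented.
def predict_row(known_position, known_size, gradient, n_teeth=5):
    """Predict tooth sizes at positions 1..n_teeth from one known size."""
    return [known_size + gradient * (pos - known_position)
            for pos in range(1, n_teeth + 1)]

# e.g. if M1 (position 3) has area 120 mm^2 and sizes increase by 5 mm^2 per position
print(predict_row(known_position=3, known_size=120.0, gradient=5.0))
```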

    Food effects on statolith composition of the common cuttlefish (Sepia officinalis)

    The concentration of trace elements within cephalopod statoliths can provide a record of the environmental characteristics at the time of calcification. To reconstruct these environmental characteristics accurately, it is important to understand the influence of as many factors as possible. To test the hypothesis that the elemental composition of cuttlefish statoliths could be influenced by diet, juvenile Sepia officinalis were fed either shrimp Crangon sp. or fish Clupea harengus under equal temperature and salinity regimes in laboratory experiments. Element concentrations in different regions of the statoliths (core, lateral dome, rostrum) were determined using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The Sr/Ca, Ba/Ca, Mn/Ca and Y/Ca ratios in the lateral dome of the statoliths of shrimp-fed cuttlefish were significantly higher than those of fish-fed cuttlefish. Moreover, significant differences between statolith regions were found for all analysed elements. The fact that diet adds considerable variation, especially to Sr/Ca and Ba/Ca, must be taken into account in future micro-chemical statolith studies targeting cephalopod life history.
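    As a loose sketch of the kind of between-group comparison reported here (and not the study's actual statistical design, which spans several elements and statolith regions), the snippet below compares a synthetic Sr/Ca ratio in the lateral dome between shrimp-fed and fish-fed animals with a two-sample t-test.

```python
# Synthetic comparison of an element/Ca ratio between diet groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sr_ca_shrimp_fed = rng.normal(loc=8.0, scale=0.5, size=12)   # invented values
sr_ca_fish_fed   = rng.normal(loc=7.2, scale=0.5, size=12)

t_stat, p_value = stats.ttest_ind(sr_ca_shrimp_fed, sr_ca_fish_fed)
print(t_stat, p_value)
```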