    Transcriptomic analysis of crustacean neuropeptide signaling during the moult cycle in the green shore crab, Carcinus maenas

    Background: Ecdysis is an innate behaviour programme by which all arthropods moult their exoskeletons. The complex suite of interacting neuropeptides that orchestrates ecdysis is well studied in insects, but details of the crustacean ecdysis cassette are fragmented and our understanding of this process is comparatively crude, preventing a meaningful evolutionary comparison. To begin to address this issue, we identified transcripts coding for neuropeptides and their putative receptors in the central nervous system (CNS) and Y-organs (YO) of the crab Carcinus maenas, and mapped their expression profiles across accurately defined stages of the moult cycle using RNA-sequencing. We also studied gene expression within the epidermally derived YO, whose only defined role is the synthesis of ecdysteroid moulting hormones, to identify peptides and G protein-coupled receptors (GPCRs) that might function in ecdysis.
    Results: Mining of the CNS transcriptome yielded transcripts representing 47 neuropeptide families and 66 putative GPCRs. Neuropeptide transcripts that were differentially expressed across the moult cycle included carcikinin, crustacean hyperglycemic hormone-2, and crustacean cardioactive peptide, whereas only a single putative neuropeptide receptor, proctolin R1, was differentially expressed. Carcikinin mRNA in particular increased dramatically in pre-moult, suggesting a role in ecdysis regulation. Crustacean hyperglycemic hormone-2 mRNA was elevated post- and pre-moult, whilst that for crustacean cardioactive peptide, which regulates insect ecdysis and plays a role in stereotyped motor activity during crustacean ecdysis, was elevated in pre-moult. In the YO, several putative neuropeptide receptor transcripts were differentially expressed across the moult cycle, as was mRNA for the neuropeptide neuroparsin-1. Whilst differential expression of putative neuropeptide receptors was expected, the discovery and differential expression of neuropeptide transcripts in the YO was surprising. Comparison of GPCR transcript expression between the YO and epidermis revealed 11 transcripts upregulated in the YO; these are now candidates for peptide control of ecdysis.
    Conclusions: These data represent a comprehensive survey of the deduced C. maenas neuropeptidome and putative GPCRs. Importantly, we have described the differential expression profiles of these transcripts across accurately staged moult cycles in tissues key to the ecdysis programme. This study provides important avenues for future exploration of the functionality of receptor-ligand pairs in crustaceans.
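
    As a rough illustration of the kind of stage-wise screen described above, the sketch below (not the study's pipeline, which would use a dedicated RNA-seq framework such as DESeq2 or edgeR) applies a per-transcript Kruskal-Wallis test to a hypothetical normalised count matrix whose columns are accurately staged samples; all file and column names are assumptions.

```python
# Illustrative sketch only: screen each transcript for differential expression
# across moult-cycle stages using a non-parametric test on normalised counts.
# Assumes counts has transcripts as rows and sample IDs as columns, and that
# sample_stages.csv maps the same sample IDs to a stage label.
import pandas as pd
from scipy.stats import kruskal

counts = pd.read_csv("normalised_counts.csv", index_col=0)            # hypothetical file
stages = pd.read_csv("sample_stages.csv", index_col=0)["stage"]       # e.g. inter-, pre-, post-moult

results = []
for transcript, row in counts.iterrows():
    # group this transcript's values by moult stage, aligning on sample IDs
    groups = [row[stages.index[stages == s]].values for s in stages.unique()]
    stat, p = kruskal(*groups)
    results.append((transcript, stat, p))

table = pd.DataFrame(results, columns=["transcript", "H", "p"]).sort_values("p")
print(table.head(10))   # strongest stage-dependent candidates (before multiple-testing correction)
```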

    Raising the Bar: Improving Methodological Rigour in Cognitive Alcohol Research

    Background and Aims: A range of experimental paradigms claim to measure the cognitive processes underpinning alcohol use, suggesting that heightened attentional bias, greater approach tendencies, and reduced cue-specific inhibitory control are important drivers of consumption. This paper identifies methodological shortcomings within this broad domain of research and exemplifies them in studies focused specifically on alcohol-related attentional bias.
    Argument and analysis: We highlight five main methodological issues: (i) the use of inappropriately matched control stimuli; (ii) opacity of stimulus selection and validation procedures; (iii) undue credence in noisy measures; (iv) reliance on unreliable tasks; and (v) variability in design and analysis. This is evidenced through a review of alcohol-related attentional bias research (64 empirical articles, 68 tasks), which revealed the following: only 53% of tasks used appropriately matched control stimuli; as few as 38% reported their stimulus selection procedures and 19% their validation procedures; fewer than 28% used indices capable of disambiguating attentional processes; only 22% assessed reliability; and under 2% of studies were pre-registered.
    Conclusions: Well-matched and validated experimental stimuli, the development of reliable cognitive tasks and explicit assessment of their psychometric properties, and careful consideration of behavioural indices and their analysis will improve the methodological rigour of cognitive alcohol research. Open science principles can facilitate replication and reproducibility in alcohol research.
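
    One of the recommendations above, explicit assessment of task reliability, can be sketched as a permutation-based split-half estimate for an attentional-bias index such as a dot-probe bias score. The data layout and column names below are hypothetical; this is an illustration of the general technique, not the procedure used in the reviewed studies.

```python
# Split-half reliability of an attentional-bias score with Spearman-Brown correction.
# Assumes one row per trial with columns: 'participant', 'condition'
# ('congruent' or 'incongruent'), and 'rt' (reaction time in ms).
import numpy as np
import pandas as pd

def split_half_reliability(trials: pd.DataFrame, n_splits: int = 1000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(n_splits):
        half = rng.integers(0, 2, size=len(trials))                 # random half assignment per trial
        means = (trials.assign(half=half)
                       .groupby(["participant", "half", "condition"])["rt"].mean()
                       .unstack("condition"))
        bias = means["incongruent"] - means["congruent"]            # bias score per participant per half
        halves = bias.unstack("half")
        r = halves[0].corr(halves[1])                               # correlation between halves
        corrs.append(2 * r / (1 + r))                               # Spearman-Brown correction
    return float(np.mean(corrs))
```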

    Quality-of-life outcomes in older patients with early-stage rectal cancer receiving organ-preserving treatment with hypofractionated short-course radiotherapy followed by transanal endoscopic microsurgery (TREC): non-randomised registry of patients unsuitable for total mesorectal excision

    Background: Older patients with early-stage rectal cancer are under-represented in clinical trials and, therefore, little high-quality data are available to guide treatment in this patient population. The TREC trial was a randomised, open-label feasibility study conducted at 21 centres across the UK that compared organ preservation through short-course radiotherapy (SCRT; 25 Gy in five fractions) plus transanal endoscopic microsurgery (TEM) with standard total mesorectal excision in adults with stage T1–2 rectal adenocarcinoma (maximum diameter ≤30 mm) and no lymph node involvement or metastasis. TREC incorporated a non-randomised registry offering organ preservation to patients who were considered unsuitable for total mesorectal excision by the local colorectal cancer multidisciplinary team. Organ preservation was achieved in 56 (92%) of 61 non-randomised registry patients, with local recurrence-free survival of 91% (95% CI 84–99) at 3 years. Here, we report acute and long-term patient-reported outcomes from this non-randomised registry group.
    Methods: Patients considered by the local colorectal cancer multidisciplinary team to be at high risk of complications from total mesorectal excision on the basis of frailty, comorbidities, and older age were included in a non-randomised registry to receive organ-preserving treatment. These patients were invited to complete questionnaires on patient-reported outcomes (the European Organisation for Research and Treatment of Cancer Quality of Life [EORTC-QLQ] questionnaire core module [QLQ-C30] and colorectal cancer module [QLQ-CR29], the Colorectal Functional Outcome [COREFO] questionnaire, and EuroQol-5 Dimensions-3 Level [EQ-5D-3L]) at baseline and at 3, 6, 12, 24, and 36 months postoperatively. To aid interpretation, data from patients in the non-randomised registry were compared with data from patients in the TREC trial who had been randomly assigned to organ-preserving therapy, and with an additional reference cohort of age-matched controls from the UK general population. This study is registered with the ISRCTN registry, ISRCTN14422743, and is closed.
    Findings: Between July 21, 2011, and July 15, 2015, 88 patients were enrolled onto the TREC study to undergo organ preservation, of whom 27 (31%) were randomly allocated to organ-preserving therapy and 61 (69%) were added to the non-randomised registry for organ-preserving therapy. Non-randomised patients were older than randomised patients (median age 74 years [IQR 67–80] vs 65 years [61–71]). Organ-preserving treatment was well tolerated among patients in the non-randomised registry, with mild worsening of fatigue; quality of life; physical, social, and role functioning; and bowel function at 3 months postoperatively compared with baseline values. By 6–12 months, most scores had returned to baseline values and were indistinguishable from those of the reference cohort. Only mild symptoms of faecal incontinence and urgency, equivalent to less than one episode per week, persisted at 36 months in both groups.
    Interpretation: The SCRT and TEM organ-preservation approach was well tolerated in older and frailer patients, achieved good rates of organ preservation, and was associated with low rates of acute and long-term toxicity, with minimal effects on quality of life and functional status. Our findings support the adoption of this approach for patients considered to be at high risk from radical surgery.
    Funding: Cancer Research UK.
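
    For context on how the questionnaire scores behind these quality-of-life outcomes are typically derived, the sketch below shows the general linear transformation used for EORTC QLQ-C30-style scales, in which raw item means are rescaled to 0–100 and functional scales are reversed so that higher always means better. The item values here are placeholders; the trial's actual scoring follows the EORTC scoring manual.

```python
# Generic 0-100 scale scoring for EORTC QLQ-C30-style questionnaires (sketch).
def scale_score(item_responses, item_range, functional):
    """item_responses: answered item scores (e.g. 1-4, or 1-7 for global QoL items);
    item_range: max minus min possible response; functional: True for functioning scales."""
    rs = sum(item_responses) / len(item_responses)        # raw score = mean of answered items
    if functional:
        return (1 - (rs - 1) / item_range) * 100          # higher = better functioning
    return ((rs - 1) / item_range) * 100                  # higher = worse symptoms

# Example: a functioning scale with items answered 1, 1, 2, 1, 1 on a 1-4 response scale
print(scale_score([1, 1, 2, 1, 1], item_range=3, functional=True))   # ~93.3
```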

    Hepatic alterations are accompanied by changes to bile acid transporter-expressing neurons in the hypothalamus after traumatic brain injury

    Annually, there are over 2 million incidents of traumatic brain injury (TBI), and treatment options are non-existent. While many TBI studies have focused on the brain, peripheral contributions involving the digestive and immune systems are emerging as factors in the varied symptomatology associated with TBI. We hypothesized that TBI would alter hepatic function, including bile acid system machinery in the liver and brain. The results show activation of the hepatic acute phase response by 2 hours after TBI, hepatic inflammation by 6 hours after TBI, and a decrease in the hepatic transcription factors Gli1, Gli2, and Gli3 at 2 and 24 hours after TBI. Bile acid receptors and transporters were decreased from as early as 2 hours until at least 24 hours after TBI. Quantification of neurons expressing the bile acid transporter ASBT in the hypothalamus revealed a significant decrease following TBI. These results are the first to show such changes following TBI and are compatible with previous studies of the bile acid system in stroke models. The data support the emerging idea of a systemic influence on neurological disorders and point to the need for future studies to better define specific mechanisms of action.

    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder characterised by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is an accumulation of deficits that confers an increased risk of adverse outcomes. We set out to determine how severity of frailty, measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom.
    Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded.
    Results: The overall prevalence of delirium was 16.3% (n = 483). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium increased with increasing frailty [OR 2.9 (1.8–4.6) for CFS 4 vs 1–3; OR 12.4 (6.2–24.5) for CFS 8 vs 1–3]. Higher CFS was associated with reduced recognition of delirium [OR 0.7 (0.3–1.9) for CFS 4 compared with 0.2 (0.1–0.7) for CFS 8]. Both associations were independent of age and dementia.
    Conclusion: We have demonstrated an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the frailest patients are the least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
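
    The adjusted odds ratios reported here are the kind of estimate that a logistic regression of delirium status on frailty band, age, and dementia would produce. The sketch below is illustrative only, with a hypothetical data file and column names; it is not the study's analysis code.

```python
# Odds ratios for delirium by CFS band, adjusted for age and dementia (sketch).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical columns: delirium (0/1), cfs_band ("1-3", "4", ..., "8"), age, dementia (0/1)
df = pd.read_csv("audit_data.csv")

model = smf.logit(
    "delirium ~ C(cfs_band, Treatment(reference='1-3')) + age + dementia", data=df
).fit()

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),                 # exponentiated coefficients
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
})
print(odds_ratios)   # e.g. the row for CFS 8 vs 1-3, adjusted for age and dementia
```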

    Does sleep affect alcohol-related attention bias?


    Reducing human error in the quality control checking of fresh produce labels

    Human error in the quality control checking of fresh produce labels results in financial loss, reputational damage, and a significant carbon footprint. This chapter reviews a research project aimed at understanding the reasons for such human error. In the course of the project, observations were made in a packaging facility, historical error records were studied, key operatives were interviewed, and laboratory-based work was conducted. The in situ observations highlighted the dynamic environment in which label checking took place. The interviews revealed that no explicit training was given in label checking; respondents also identified a range of cognitive and situational factors likely to contribute to increased human error. Laboratory-based work, using an eye tracker to record eye movements during simulated label-checking tasks, showed that different quality control professionals adopted varying strategies. A systematic approach, in which one piece of information was checked at a time, was associated with more accurate performance. Several cognitive abilities were found to predict accurate label-checking performance in both quality control professionals and university students. Implications for personnel selection, training, human performance, and task design are identified. The understanding of human quality control checking gained from this project can be used to reduce human error, and thus waste, across different manufacturing domains.
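
    The link reported between a systematic, one-field-at-a-time checking strategy and accuracy could be quantified along the lines sketched below, which derives a simple "systematicity" index from ordered fixation data and correlates it with error-detection accuracy. The field coding, file names, and column names are assumptions for illustration, not the project's analysis.

```python
# Relate a simple eye-movement systematicity index to label-checking accuracy (sketch).
import pandas as pd

def systematicity(fixated_fields):
    """fixated_fields: ordered list of label-field IDs fixated during one check;
    returns the proportion of transitions that move to the next field in order."""
    if len(fixated_fields) < 2:
        return 0.0
    forward_steps = sum(b == a + 1 for a, b in zip(fixated_fields, fixated_fields[1:]))
    return forward_steps / (len(fixated_fields) - 1)

# hypothetical inputs: fixations.csv has participant and field_id in fixation order;
# accuracy.csv has one proportion of errors detected per participant
fixations = pd.read_csv("fixations.csv")
accuracy = pd.read_csv("accuracy.csv", index_col="participant")["prop_errors_detected"]

index = (fixations.groupby("participant")["field_id"]
                  .apply(lambda seq: systematicity(list(seq))))
print(index.corr(accuracy))   # a positive correlation would echo the observed strategy effect
```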