
    Determining stakeholder priorities and core components for school-based identification of mental health difficulties: A Delphi study

    Only approximately half of children and young people (CYP) with mental health difficulties access mental health services in England, with under-identification of need as a contributing factor. Schools may be an ideal setting for identifying mental health difficulties in CYP, but uncertainty remains about the processes by which these needs can best be identified and addressed. In this study, we conducted a two-round, three-panel Delphi study with parents, school staff, mental health practitioners, and researchers to inform the development of a program to identify mental health difficulties in primary schools. We aimed to assess and build consensus regarding (a) the aims of such a program, (b) identification model preferences, (c) key features of the identification model, and (d) key features of the implementation model. A total of 54 and 42 participants completed the Round 1 and Round 2 questionnaires, respectively. In general, responses indicated that all three panels supported the idea of school-based identification of mental health difficulties. Overall, 53 of a possible 99 items met the criteria for inclusion as program core components. Five main priorities emerged: (a) the program should identify children experiencing mental health difficulties across the continuum of severity, as well as children exposed to adversity, who are at greater risk of mental health difficulties; (b) the program should train staff and educate pupils about mental health in parallel; (c) parental consent should be obtained on an opt-out basis; (d) the program must include clear mechanisms for connecting identified pupils to care and support; and (e) to maximize implementation success, the program needs to be embedded in a school culture that values mental health and wellbeing. In highlighting these priorities, our study provides needed stakeholder consensus to guide further development and evaluation of mental health interventions within schools.

    Stratification of adolescents across mental phenomena emphasizes the importance of transdiagnostic distress: a replication in two general population cohorts

    Characterizing patterns of mental phenomena in epidemiological studies of adolescents can provide insight into the latent organization of psychiatric disorders. This avoids the biases of chronicity and selection inherent in clinical samples, guides models of shared aetiology within psychiatric disorders, and informs the development and implementation of interventions. We applied Gaussian mixture modelling to measures of mental phenomena from two general population cohorts: the Avon Longitudinal Study of Parents and Children (ALSPAC, n=3,018) and the Neuroscience in Psychiatry Network (NSPN, n=2,023). We defined classes according to their patterns of both positive (e.g. wellbeing and self-esteem) and negative (e.g. depression, anxiety, psychotic experiences) phenomena. Subsequently, we characterized classes by considering the distribution of diagnoses and the sex split across classes. Four well-separated classes were identified within each cohort. Classes primarily differed by overall severity of transdiagnostic distress rather than particular patterns of phenomena akin to diagnoses. Further, as overall severity of distress increased, so did within-class variability and the proportion of individuals with operational psychiatric diagnoses. These results suggest that classes of mental phenomena in the general population of adolescents may not be the same as those found in clinical samples. Classes differentiated only by overall severity support the existence of a general, transdiagnostic mental distress factor and have important implications for intervention.
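    The class-definition step described above lends itself to a brief illustration. The sketch below is not the authors' ALSPAC/NSPN analysis code: the data are simulated around a single latent distress factor, and BIC-based selection of the number of classes is one common convention, assumed here for concreteness.

```python
# Minimal sketch of Gaussian-mixture class definition on questionnaire-style
# scores. Simulated data and BIC model selection are illustrative stand-ins,
# not the published analysis.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# one latent "transdiagnostic distress" factor drives five phenomena scores,
# echoing the finding that classes differ mainly in overall severity
severity = rng.normal(size=(3000, 1))
scores = severity @ np.ones((1, 5)) + 0.5 * rng.normal(size=(3000, 5))

X = StandardScaler().fit_transform(scores)

# fit candidate mixtures with 1..8 classes and keep the BIC-optimal model
models = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
          for k in range(1, 9)]
best = min(models, key=lambda m: m.bic(X))

labels = best.predict(X)
print("selected number of classes:", best.n_components)
print("class sizes:", np.bincount(labels))
```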

    Leveraging Administrative Data to Better Understand and Address Child Maltreatment: A Scoping Review of Data Linkage Studies

    Background This scoping review aimed to provide an overview of studies that used administrative data linkage in the context of child maltreatment, to improve our understanding of the value that data linkage may confer for policy, practice, and research. Methods We searched MEDLINE, Embase, PsycINFO, CINAHL, and ERIC electronic databases in June 2019 and May 2020 for studies that linked two or more datasets (at least one of which was administrative in nature) to study child maltreatment. We report findings using numerical and narrative summaries. Results We included 121 studies, mainly from the United States or Australia and published in the past decade. Data came primarily from the social services and health sectors, and linkage processes and data quality were often not described in sufficient detail to align with current reporting guidelines. Most studies were descriptive in nature, and the research questions addressed fell under eight themes: descriptive epidemiology, risk factors, outcomes, intergenerational transmission, predictive modelling, intervention/service evaluation, multi-sector involvement, and methodological considerations/advancements. Conclusions Included studies demonstrated the wide variety of ways in which data linkage can contribute to the public health response to child maltreatment. However, how research using linked data can be translated into effective service development and monitoring, or the targeting of interventions, remains underexplored in terms of privacy protection, ethics and governance, data quality, and evidence of effectiveness.

    Optical dipole traps and atomic waveguides based on Bessel light beams

    We theoretically investigate the use of Bessel light beams generated using axicons for creating optical dipole traps for cold atoms and for atomic waveguiding. Zeroth-order Bessel beams can be used to produce highly elongated dipole traps, allowing for the study of one-dimensional trapped gases and the realization of a Tonks gas of impenetrable bosons. First-order Bessel beams are shown to produce tightly confined atomic waveguides over centimeter distances.
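    For orientation, the standard ideal-Bessel-beam expressions behind this argument can be sketched as follows (textbook forms, not quoted from the paper). An axicon fixes the radial wavenumber k_r; the zeroth-order beam then has an on-axis intensity maximum suited to a red-detuned elongated trap, while the first-order beam has an on-axis node surrounded by a bright ring, suited to waveguiding.

```latex
% Textbook forms, assumed here for illustration: ideal n-th order Bessel
% beam and the resulting optical dipole potential.
\[
  E_n(r,\phi,z) \;\propto\; J_n(k_r r)\, e^{i n \phi}\, e^{i k_z z},
  \qquad k_r^2 + k_z^2 = k^2 ,
\]
\[
  I_n(r) \;\propto\; J_n^2(k_r r), \qquad
  U_{\mathrm{dip}}(r) \;=\; -\,\frac{\operatorname{Re}(\alpha)}{2 \varepsilon_0 c}\, I(r),
\]
% so that I_0 peaks on axis (a trap for red detuning, Re(alpha) > 0) while
% I_1 vanishes on axis (a hollow guide).
```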

    Design and validation of Segment - freely available software for cardiovascular image analysis

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Conclusions Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
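    The test-script approach the authors describe can be illustrated with a short regression-testing sketch. This is not Segment's actual script (the package itself is MATLAB-based); it is a generic Python rendering of the pattern, and the metric names and reference values are hypothetical.

```python
# Illustrative regression-test pattern: re-run an analysis on reference data
# and check derived measurements against stored expected values within a
# tolerance. Metric names and values are hypothetical.
import math

# hypothetical reference results recorded from a validated release
EXPECTED = {
    "lv_ejection_fraction_pct": 58.2,
    "lv_mass_g": 131.5,
    "mri_flow_ml_per_beat": 72.4,
}

def validate(results, expected, rel_tol=0.01):
    """Return failure messages for metrics missing or outside tolerance."""
    failures = []
    for name, ref in expected.items():
        got = results.get(name)
        if got is None or not math.isclose(got, ref, rel_tol=rel_tol):
            failures.append(f"{name}: expected {ref}, got {got}")
    return failures

# mock output standing in for a real analysis run; lv_mass_g should fail
mock_results = {
    "lv_ejection_fraction_pct": 58.3,
    "lv_mass_g": 140.0,
    "mri_flow_ml_per_beat": 72.4,
}
for failure in validate(mock_results, EXPECTED):
    print("FAIL:", failure)
```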

    Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.

    BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
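    The central point, that confounding between batch and group inflates cross-validation estimates, can be demonstrated in a few lines. The sketch below is a simplified stand-in for the study design: it uses plain Gaussian simulated features rather than resampled real expression data, and a single classifier without the nested tuning and feature-selection scheme.

```python
# Simplified demonstration of batch-effect confounding: when a batch effect
# is aligned with the group label, cross-validated accuracy is optimistic
# relative to accuracy on independent, batch-free data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 200, 500

def simulate(batch_shift):
    y = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, p))
    X[y == 1, :10] += 0.3           # weak true group signal
    X[y == 1, :50] += batch_shift   # batch effect confounded with group
    return X, y

X_conf, y_conf = simulate(batch_shift=1.0)    # training data, confounded
X_indep, y_indep = simulate(batch_shift=0.0)  # independent, batch-free data

clf = LogisticRegression(max_iter=5000)
cv_acc = cross_val_score(clf, X_conf, y_conf, cv=5).mean()
indep_acc = clf.fit(X_conf, y_conf).score(X_indep, y_indep)
print(f"cross-validated accuracy:  {cv_acc:.2f}")     # optimistic
print(f"independent-data accuracy: {indep_acc:.2f}")  # substantially lower
```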

    Integrative analysis of gene expression and copy number alterations using canonical correlation analysis

    Supplementary Figure 1. Representation of the samples from the tuning set by their coordinates in the first two pairs of features (extracted from the tuning set) using regularized dual CCA, with regularization parameters tx = 0.9, ty = 0.3 (left panel), and PCA+CCA (right panel). We show the representations with respect to both the copy number features and the gene expression features in a superimposed way, where each sample is represented by two markers. The filled markers represent the coordinates in the features extracted from the copy number variables, and the open markers represent coordinates in the features extracted from the gene expression variables. Samples with different leukemia subtypes are shown in different colors. The first feature pair distinguishes the HD50 group from the rest, while the second feature pair represents the characteristics of the samples from the E2A/PBX1 subtype. The high canonical correlation obtained for the tuning samples with regularized dual CCA is apparent in the left panel, where the two points for each sample coincide. Nevertheless, the extracted features have a high generalization ability, as can be seen in the left panel of Figure 5, showing the representation of the validation samples.

    Supplementary Figure 2. Representation of the samples from the tuning set by their coordinates in the first two pairs of features (extracted from the tuning set) using regularized dual CCA, with regularization parameters tx = 0, ty = 0 (left panel), and tx = 1, ty = 1 (right panel). We show the representations with respect to both the copy number features and the gene expression features in a superimposed way, where each sample is represented by two markers.
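    As a minimal illustration of the integrative step behind these figures, the sketch below extracts correlated feature pairs from paired matrices with scikit-learn's plain (unregularized) CCA, standing in for the regularized dual CCA of the paper; the two simulated views are not the leukemia data.

```python
# Toy two-view CCA: paired "copy number" and "gene expression" matrices
# sharing latent structure. Plain CCA stands in for regularized dual CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n = 100
shared = rng.normal(size=(n, 2))  # latent structure common to both views

X_cn = shared @ rng.normal(size=(2, 40)) + 0.5 * rng.normal(size=(n, 40))
Y_ge = shared @ rng.normal(size=(2, 60)) + 0.5 * rng.normal(size=(n, 60))

cca = CCA(n_components=2)
U, V = cca.fit_transform(X_cn, Y_ge)  # per-view sample coordinates

# canonical correlation of each extracted feature pair
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"feature pair {k + 1}: canonical correlation = {r:.2f}")
```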

    Relative Abundance of Transcripts (RATs): Identifying differential isoform abundance from RNA-seq [version 1; referees: 1 approved, 2 approved with reservations]

    The biological importance of changes in RNA expression is reflected by the wide variety of tools available to characterise these changes from RNA-seq data. Several tools exist for detecting differential transcript isoform usage (DTU) from aligned or assembled RNA-seq data, but few exist for DTU detection from alignment-free RNA-seq quantifications. We present RATs, an R package that identifies DTU transcriptome-wide directly from transcript abundance estimates. RATs is unique in applying bootstrapping to estimate the reliability of detected DTU events and shows good performance at all replication levels (median false positive fraction < 0.05). We compare RATs to two existing DTU tools, DRIM-Seq and SUPPA2, using two publicly available simulated RNA-seq datasets and a published human RNA-seq dataset in which 248 genes have been previously identified as displaying significant DTU. RATs with default threshold values on the simulated human data has a sensitivity of 0.55, a Matthews correlation coefficient of 0.71 and a false discovery rate (FDR) of 0.04, outperforming both other tools. Applying the same thresholds to SUPPA2 results in a higher sensitivity (0.61) but poorer FDR performance (0.33). RATs and DRIM-Seq use different methods for measuring DTU effect sizes, which complicates the comparison of results between these tools; however, for a likelihood-ratio threshold of 30, DRIM-Seq has similar FDR performance to RATs (0.06) but worse sensitivity (0.47). These differences persist for the simulated Drosophila dataset. On the published human RNA-seq dataset, the greatest agreement between the tools tested is 53%, observed between RATs and SUPPA2. The bootstrapping quality filter in RATs is responsible for removing the majority of DTU events called by SUPPA2 that are not reported by RATs. All methods, including the previously published qRT-PCR validation of three of the 248 detected DTU events, were found to be sensitive to annotation differences between Ensembl v60 and v87.
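    The bootstrap reliability filter that sets RATs apart can be sketched conceptually. The snippet below is not the package's implementation: a chi-square test of isoform proportions stands in for its statistics, Poisson draws stand in for salmon/kallisto bootstrap estimates, and the 95% reproducibility threshold is illustrative.

```python
# Conceptual sketch of bootstrap-filtered DTU calling for one gene with two
# isoforms: call DTU only if the proportion test is significant in a high
# fraction of bootstrapped abundance estimates.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)

def dtu_significant(counts_a, counts_b, alpha=0.05):
    """Chi-square test: do isoform proportions differ between conditions?"""
    return chi2_contingency(np.vstack([counts_a, counts_b]))[1] < alpha

# stand-ins for per-bootstrap isoform abundance estimates in two conditions
n_boot = 100
boot_a = rng.poisson([300, 100], size=(n_boot, 2))  # condition A: 75%/25%
boot_b = rng.poisson([220, 180], size=(n_boot, 2))  # condition B: 55%/45%

calls = [dtu_significant(a, b) for a, b in zip(boot_a, boot_b)]
reproducibility = float(np.mean(calls))
print(f"DTU called in {reproducibility:.0%} of bootstrap iterations")
print("reported as DTU:", reproducibility >= 0.95)  # illustrative cutoff
```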