
    Phylogenetic tree for <i>HARS</i> suggests multiple duplication events in <i>HARS</i> evolutionary history.

A maximum likelihood tree constructed using the mRNA sequences of <i>HARS</i> and <i>HARS2</i> (see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0185317#sec002" target="_blank">Materials</a> for RefSeq numbers) shows the relatedness of <i>HARS</i> transcripts among representative species. Stars represent putative duplication events; numbers at each node are percent bootstrap support out of 1000 replicates.
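As an aside on the method, the sketch below shows how per-node bootstrap support over 1000 replicates can be annotated onto a tree. It assumes Biopython; because Biopython's built-in tree constructors are distance-based, neighbor joining stands in here for the paper's maximum likelihood inference, and the alignment file name is a hypothetical placeholder.

```python
# Minimal sketch of bootstrap support estimation (1000 replicates), assuming
# Biopython. Neighbor joining is used purely for illustration; the paper's
# trees were built by maximum likelihood. "hars_mrna.aln" is hypothetical.
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, get_support

alignment = AlignIO.read("hars_mrna.aln", "clustal")

constructor = DistanceTreeConstructor(DistanceCalculator("identity"), "nj")
reference = constructor.build_tree(alignment)

# Resample alignment columns 1000 times and rebuild one tree per replicate.
replicates = list(bootstrap_trees(alignment, 1000, constructor))

# Annotate each internal node of the reference tree with the percentage of
# replicates in which the same clade is recovered.
supported = get_support(reference, replicates)
for clade in supported.get_nonterminals():
    print(clade.name, clade.confidence)
```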

    Percent identity between the amino acid sequences of human and <i>Danio rerio</i> cytoplasmic aminoacyl tRNA synthetase homologues.

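For context on how such values are typically derived, here is a minimal sketch of a pairwise percent-identity calculation, assuming Biopython; the two sequence fragments and gap penalties are hypothetical placeholders, not the RefSeq protein sequences the paper used.

```python
# Sketch of a percent-identity calculation between a human protein and its
# zebrafish homologue, assuming Biopython. The sequences below are
# hypothetical stand-ins for full-length RefSeq proteins.
from Bio.Align import PairwiseAligner, substitution_matrices

human = "MAERAALEELVKLQGERVRGLKQQKAS"      # placeholder fragment
zebrafish = "MADRAALEELVRLQGERVRGLKSQKAS"  # placeholder fragment

aligner = PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

alignment = aligner.align(human, zebrafish)[0]

# Count aligned columns with identical residues, then divide by the
# alignment length to get percent identity.
a, b = alignment[0], alignment[1]
matches = sum(x == y for x, y in zip(a, b))
print(f"{100.0 * matches / len(a):.1f}% identity")
```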

    RefSeq ID numbers of HARS and HARS2 mRNA sequences used for molecular phylogenetic analysis of the HARS gene family in animals.


    Synteny of <i>HARS</i> genes suggests that <i>HARS</i> has been duplicated multiple times in the animal kingdom.

The chromosomal positioning of <i>HARS</i> genes reveals clade-specific patterns, with the exception of amphibians. For example, mammalian <i>HARS</i> and <i>HARS2</i> are arranged back-to-back, while bird <i>HARS</i> genes lie side by side in the same orientation.

    Predicted localization of <i>Danio rerio</i> genes by four subcellular localization predictor programs.


    <i>Danio rerio hars</i> generates two transcripts that are predicted to code for distinct proteins.

(A) Diagram showing the alternative splicing of <i>hars</i> pre-mRNA into the two transcript variants and the domain structures of the resulting protein products. Solid arrows indicate translation start sites; dashed arrows indicate the primer binding sites used in (B). Unk, unknown; ABD, anticodon binding domain. (B) As shown in lane 2, PCR for <i>hars</i> from <i>D. rerio</i> cDNA using the primers noted by the dashed arrows in (A) generates two products that differ in size by the length of exon 2 (300 bp). Lane 3 is a PCR product for <i>ef1α</i>, which was used as a positive control.

    Hars-001 and Hars-002 are differentially localized within cells.

(A, D) FLAG immunohistochemistry of COS7 cells transfected with C-terminally FLAG-tagged Hars variants. (B, E) Cells were co-transfected with a mitochondrially targeted red fluorescent protein (mito-dsRed) to identify mitochondria. (C, F) Merged images show that Hars-001-FLAG co-localizes with the mitochondrial marker, while Hars-002-FLAG appears more cytoplasmic. Scale bar in (F) is 50 μm and applies to all panels.

    Data_Sheet_1_Machine-learning assisted swallowing assessment: a deep learning-based quality improvement tool to screen for post-stroke dysphagia.docx

Introduction: Post-stroke dysphagia is common and associated with significant morbidity and mortality, rendering bedside screening of significant clinical importance. Using voice as a biomarker coupled with deep learning has the potential to improve patient access to screening and mitigate the subjectivity associated with detecting voice change, a component of several validated screening protocols.

Methods: In this single-center study, we developed a proof-of-concept model for automated dysphagia screening and evaluated its performance on training and testing cohorts. Patients admitted to a comprehensive stroke center who were primary English speakers and could follow commands without significant aphasia were enrolled on a rolling basis. The primary outcome was classification as a pass or fail equivalent, using a dysphagia screening test as the label. Voice data were recorded from patients speaking a standardized set of vowels, words, and sentences from the National Institutes of Health Stroke Scale. Seventy patients were recruited and 68 were included in the analysis, with 40 in the training cohort and 28 in the testing cohort. Speech was segmented into 1,579 audio clips, from which 6,655 Mel-spectrogram images were computed and used as inputs for deep-learning models (DenseNet and ConvNext, separately and together). Clip-level and participant-level swallowing status predictions were obtained through a voting method.

Results: The models demonstrated clip-level dysphagia screening sensitivity of 71% and specificity of 77% (F1 = 0.73, AUC = 0.80 [95% CI: 0.78–0.82]). At the participant level, sensitivity and specificity were 89% and 79%, respectively (F1 = 0.81, AUC = 0.91 [95% CI: 0.77–1.05]).

Discussion: This study is the first to demonstrate the feasibility of applying deep learning to classify vocalizations for detection of post-stroke dysphagia. Our findings suggest potential for enhancing dysphagia screening in clinical settings. Code: https://github.com/UofTNeurology/masa-opensource
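To make the pipeline concrete, the sketch below illustrates the two steps the abstract names: converting an audio clip to a log-Mel spectrogram and aggregating per-clip predictions into a participant-level result by voting. It assumes librosa and NumPy; the sample rate, Mel-band count, decision threshold, and file handling are illustrative assumptions, and model inference is stubbed out (the authors' actual code is at the linked GitHub repository).

```python
# Sketch of the preprocessing and voting scheme described in the abstract,
# assuming librosa and NumPy. File paths, the 16 kHz sample rate, 128 Mel
# bands, and the 0.5 threshold are assumptions, not details from the paper.
import numpy as np
import librosa

def clip_to_mel(path, sr=16000):
    """Load an audio clip and convert it to a log-Mel spectrogram image."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    return librosa.power_to_db(mel, ref=np.max)

def participant_prediction(clip_probs, threshold=0.5):
    """Aggregate per-clip fail probabilities into one participant-level
    screening result by majority vote, as the abstract describes."""
    votes = [p >= threshold for p in clip_probs]
    return "fail" if sum(votes) > len(votes) / 2 else "pass"

# Hypothetical usage: in the study, probabilities would come from
# DenseNet/ConvNext classifiers run on the Mel-spectrogram images.
print(participant_prediction([0.9, 0.4, 0.7, 0.8]))  # -> "fail"
```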