16 research outputs found

    Gaze to social scenes by individuals with autism (Liang & Wilkinson, 2018)

    No full text
    Purpose: A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of a sharing activity between the figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support.

    Method: Eye-tracking technology recorded point of gaze while participants viewed 32 photographs in which either two or three human figures were depicted; a sharing activity between the figures was either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified.

    Results: Overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when three people were depicted (as compared with two). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome.

    Conclusion: The current study supports the inclusion of social content with varying numbers of human figures and sharing activities between figures in visual scene displays, regardless of the population served. Study design and reporting practices in the eye-tracking literature on autism and Down syndrome are discussed.

    Supplemental Material S1. Operational definitions and calculation methods for the dependent variables.

    Liang, J., & Wilkinson, K. (2018). Gaze toward naturalistic social scenes by individuals with intellectual and developmental disabilities: Implications for augmentative and alternative communication designs. Journal of Speech, Language, and Hearing Research, 61, 1157–1170. https://doi.org/10.1044/2018_JSLHR-L-17-0331
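    As a concrete illustration of the gaze measures named in the Method section, the sketch below computes latency to first fixation and total fixation time from 60 Hz samples. It is a minimal sketch assuming a per-sample boolean flag for whether gaze fell inside an area of interest (AOI); the flag is simulated here, and a real analysis would use the eye tracker's AOI hit-testing plus a proper fixation-detection algorithm rather than raw sample counts.

        import numpy as np

        HZ = 60                                  # sampling rate from the Method section
        rng = np.random.default_rng(0)

        # Hypothetical per-sample flags: True when gaze lands inside the AOI
        # (e.g., the human figures). Real data would come from the eye
        # tracker's AOI hit-testing, not from simulation.
        in_aoi = rng.random(HZ * 5) < 0.3        # 5 s of viewing

        first = int(np.argmax(in_aoi)) if in_aoi.any() else None
        latency_s = first / HZ if first is not None else float("nan")  # latency to first view
        dwell_s = in_aoi.sum() / HZ                                    # time spent fixating

        print(f"latency to fixate: {latency_s:.3f} s, dwell time: {dwell_s:.3f} s")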

    DataSheet1_An efficient five-lncRNA signature for lung adenocarcinoma prognosis, with AL606489.1 showing sexual dimorphism.docx

    No full text
    Background: Lung adenocarcinoma (LUAD) is a sex-biased and easily metastatic malignant disease. A signature based on five long non-coding RNAs (lncRNAs) was established to improve the prediction of overall survival (OS) in LUAD.

    Methods: The RNA expression profiles of LUAD patients were obtained from The Cancer Genome Atlas. OS-associated lncRNAs were identified through differential expression analysis between LUAD and normal samples, followed by survival analysis and univariate and multivariate Cox proportional hazards regression analyses. The OS-associated lncRNA with sexual dimorphism was identified by comparing expression between males and females. Functional enrichment analysis of Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways was performed to explore possible mechanisms of the five-lncRNA signature.

    Results: A five-lncRNA signature (composed of AC068228.1, SATB2-AS1, LINC01843, AC026355.1, and AL606489.1) was effective in predicting high-risk LUAD patients and was applicable to both the female and male subgroups.

    Conclusion: Our five-lncRNA signature could efficaciously predict the OS of LUAD patients. AL606489.1 demonstrated sexual dimorphism, which provides a new direction for mechanistic studies of sexual dimorphism.
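    To make the signature construction concrete, here is a minimal sketch of the multivariate Cox step using the lifelines library on synthetic stand-in data. The column names OS_time and OS_event, the simulated expression matrix, and the median-split rule for high- versus low-risk groups are illustrative assumptions, not the paper's exact pipeline.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        lncrnas = ["AC068228.1", "SATB2-AS1", "LINC01843", "AC026355.1", "AL606489.1"]

        # Synthetic stand-in for TCGA-LUAD expression and follow-up data.
        n = 200
        df = pd.DataFrame(rng.standard_normal((n, 5)), columns=lncrnas)
        df["OS_time"] = rng.exponential(1000, n)   # follow-up in days (hypothetical)
        df["OS_event"] = rng.integers(0, 2, n)     # 1 = death observed

        # Multivariate Cox proportional hazards model over the five lncRNAs.
        cph = CoxPHFitter()
        cph.fit(df, duration_col="OS_time", event_col="OS_event")

        # Risk score = sum of expression * Cox coefficient; a median split
        # then defines the high- and low-risk groups.
        risk = (df[lncrnas] * cph.params_[lncrnas]).sum(axis=1)
        df["high_risk"] = risk > risk.median()
        print(df["high_risk"].value_counts())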

    Early Development of Emotional Competence (EDEC) assessment tool (Na et al., 2017)

    No full text
    Purpose: This article introduces and provides initial data supporting "The Early Development of Emotional Competence (EDEC): A tool for children with complex communication needs (CCNs)." The EDEC was developed to raise awareness of the relation between language and emotional competence and to maximize the likelihood that intervention includes language for discussing emotions in ways that are consistent with the values and goals of the family.

    Method: First, the theoretical and clinical foundations of the EDEC's development were discussed. Then, a description of preferred translation practices was provided, with examples of Korean and Mandarin Chinese translations. Finally, initial data from a pilot study with two sociocultural communities (10 American and 10 Korean mothers of typically developing children) were presented to demonstrate the potential of the tool.

    Results: The pilot test offered preliminary support for the sensitivity of the EDEC. The tool elicited responses reflecting cultural differences between American and Korean mothers' perceptions of a child's emotional skills and of mother-child conversation about emotions, as predicted by many cross-cultural studies of emotion.

    Conclusions: The information elicited by the EDEC shows promise for enabling culturally natural conversation about emotions, with appropriate vocabulary and phrases in children's augmentative and alternative communication systems.

    Supplemental Material S1. English version of the Early Development of Emotional Competence (EDEC).

    Supplemental Material S2. Korean version of the Early Development of Emotional Competence (EDEC).

    Supplemental Material S3. Mandarin Chinese version of the Early Development of Emotional Competence (EDEC).

    Supplemental Material S4. Codebook for the Early Development of Emotional Competence (EDEC).

    Na, J. Y., Wilkinson, K., & Liang, J. (2017). Early Development of Emotional Competence (EDEC) assessment tool for children with complex communication needs: Development and evidence. American Journal of Speech-Language Pathology, 27, 24–36. https://doi.org/10.1044/2017_AJSLP-16-0058

    A visual guide to optimal selection of the window length (m) and tolerance (r) parameters for SampEn estimation of fMRI time series of length 128.

    No full text
    (a) The median relative error of SampEn is shown in pseudocolor. (b) Its variation with m and r is shown as a color ribbon map.
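    A numeric analogue of this guide can be sketched as a grid sweep: estimate SampEn on many length-128 surrogate series for each (m, r) pair and report a relative-error criterion. The white-noise surrogates and the SD/mean definition of relative error are assumptions for illustration (one common precision criterion); the figure's exact error definition for fMRI series may differ.

        import numpy as np

        def sampen(x, m, r):
            # Sample entropy: -ln(A/B), where B counts template pairs that
            # match at length m and A those still matching at length m + 1
            # (Chebyshev distance <= r, self-matches excluded).
            x = np.asarray(x, float)
            n = len(x)
            def matches(dim):
                v = np.array([x[i:i + dim] for i in range(n - m)])
                d = np.max(np.abs(v[:, None, :] - v[None, :, :]), axis=2)
                return np.sum(d <= r) - len(v)   # drop self-matches
            a, b = matches(m + 1), matches(m)
            return -np.log(a / b) if a > 0 and b > 0 else np.nan

        rng = np.random.default_rng(0)
        N, reps = 128, 20                        # series length from the figure title
        for m in (1, 2, 3):
            for rf in (0.10, 0.15, 0.20, 0.25, 0.30):   # r as a fraction of the SD
                vals = np.array([
                    sampen(x, m, rf * x.std())
                    for x in (rng.standard_normal(N) for _ in range(reps))
                ])
                rel = np.nanstd(vals) / np.nanmean(vals)
                print(f"m={m}, r={rf:.2f}*SD: median SampEn={np.nanmedian(vals):.3f}, "
                      f"relative error={rel:.3f}")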

    The activated clusters detected by (a) SampEn, in which the statistical T+ map reveals an activation change between the two experimental paradigms (neutral-blank to threat-neutral); (b) SPM12, in which the statistical T map reveals activations during processing of neutral-blank (hot orange) and threat-neutral (winter blue).

    No full text

    Illustration of the SampEn algorithm with embedding dimension m = 2.

    No full text
    The colored bands show the tolerance region r. (a) The green arrow denotes the template vector u(10) = [x(10), x(11)]. (b) Only the vectors u(20) = [x(20), x(21)] and u(35) = [x(35), x(36)] (red arrows) that fall within these bands are counted as matches for the template vector u(10) = [x(10), x(11)].
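    The counting step shown in the panels can be written out directly. This is a toy sketch (the figure's actual series is not given here): form the template u(10), then count every other length-m vector whose Chebyshev distance from it stays within the tolerance band r, exactly as the red arrows indicate for u(20) and u(35).

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.standard_normal(60)        # toy series standing in for the figure's data
        m = 2                              # embedding dimension, as in the caption
        r = 0.2 * np.std(x)                # tolerance band

        template = x[10:10 + m]            # u(10) = [x(10), x(11)]
        matches = []
        for i in range(len(x) - m + 1):
            if i == 10:                    # exclude the self-match
                continue
            # Chebyshev distance: every component must stay inside the +/- r band.
            if np.max(np.abs(x[i:i + m] - template)) <= r:
                matches.append(i)

        print("vectors matching u(10):", matches)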

    Segmentation results for four slices from four patients.

    No full text
    The first row shows the original images; the second row shows the segmentation results of applying the Dirichlet process (DP) model to each image individually; the third row shows the results of the proposed hierarchical DP (HDP) model; the last row shows the segmentation results of the random walk method.
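    For the per-image DP step in the second row, a close off-the-shelf analogue is scikit-learn's truncated variational Dirichlet process Gaussian mixture over pixel intensities. The sketch below is only that analogue on a toy image: it approximates a DP mixture for one slice and does not implement the paper's joint HDP model, which shares clusters across slices.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        # Toy "slice": two intensity regions plus noise stand in for an image.
        img = np.concatenate([rng.normal(0.3, 0.05, (64, 32)),
                              rng.normal(0.7, 0.05, (64, 32))], axis=1)

        # Truncated variational approximation of a DP Gaussian mixture: with a
        # dirichlet_process weight prior, unused components are pruned, so the
        # number of segments need not be fixed in advance.
        dpgmm = BayesianGaussianMixture(
            n_components=8,               # truncation level, not the segment count
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        ).fit(img.reshape(-1, 1))

        labels = dpgmm.predict(img.reshape(-1, 1)).reshape(img.shape)
        print("components actually used:", len(np.unique(labels)))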

    Segmentation results for slices from four patients.

    No full text
    The first row shows the original images; the second row shows the segmentation results of applying the DP model to each image individually; the third row shows the results of the proposed model; the last row shows the segmentation results of the random walk method.

    Inference time for the three presented methods when segmenting the four slices from the same patient.

    No full text
    The computation times reported for DP and random walk are the combined times for segmenting all four slices.
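    The "combined time" bookkeeping can be made explicit. In this minimal sketch the segmentation functions are trivial placeholders (thresholding stands in for the per-slice DP/random-walk runs and for the joint HDP pass); only the timing pattern, summing four per-slice runs versus one joint run, reflects the caption.

        import time
        import numpy as np

        rng = np.random.default_rng(0)
        slices = [rng.random((64, 64)) for _ in range(4)]   # stand-ins for the four slices

        def segment_one(img):    # placeholder for a per-image method (DP, random walk)
            return img > img.mean()

        def segment_all(imgs):   # placeholder for the joint method (HDP)
            pooled_mean = np.mean([im.mean() for im in imgs])
            return [im > pooled_mean for im in imgs]

        t0 = time.perf_counter()
        per_image = [segment_one(s) for s in slices]
        per_image_total = time.perf_counter() - t0          # combined time, as in the caption

        t0 = time.perf_counter()
        joint = segment_all(slices)
        joint_total = time.perf_counter() - t0

        print(f"per-image combined: {per_image_total:.4f} s, joint: {joint_total:.4f} s")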

    Segmentation results for four slices from the same patient.

    No full text
    The first row shows the original images; the second row shows the segmentation results of applying the Dirichlet process (DP) model to each image individually; the third row shows the results of the proposed hierarchical DP (HDP) model; the last row shows the segmentation results of the random walk method.
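    For the random-walk baseline in the last row, scikit-image ships a standard implementation. A minimal sketch on a toy image follows; the seed placement and beta value are illustrative assumptions, and real use would seed from annotations on the patient slices.

        import numpy as np
        from skimage.segmentation import random_walker

        rng = np.random.default_rng(0)
        # Toy image: a bright blob on a dark background stands in for a slice.
        img = rng.normal(0.2, 0.05, (64, 64))
        img[20:40, 20:40] += 0.5

        # Seeds: 1 = object, 2 = background, 0 = unlabeled pixels whose labels
        # are resolved from the random-walk probabilities.
        markers = np.zeros_like(img, dtype=np.uint8)
        markers[30, 30] = 1
        markers[5, 5] = 2

        labels = random_walker(img, markers, beta=130)
        print("object pixels:", int((labels == 1).sum()))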