
    Expression and Functional Studies on the Noncoding RNA, PRINS.

    PRINS, a noncoding RNA identified earlier by our research group, contributes to psoriasis susceptibility and the cellular stress response. We have now studied the cellular and histological distribution of PRINS by in situ hybridization and demonstrated variable expression in different human tissues, with a consistent staining pattern in epidermal keratinocytes and in vitro cultured keratinocytes. To identify the cellular function(s) of PRINS, we searched for direct interacting partner(s) of this stress-induced molecule. In HaCaT and NHEK cell lysates, nucleophosmin (NPM) was identified as a potential physical interactor with PRINS. Immunohistochemical experiments revealed elevated expression of NPM in the dividing cells of the basal layers of psoriatic involved skin samples as compared with healthy and psoriatic uninvolved samples. Others have previously shown that NPM is a ubiquitously expressed nucleolar phosphoprotein which shuttles to the nucleoplasm after UV-B irradiation in fibroblasts and cancer cells. We detected a similar translocation of NPM in UV-B-irradiated cultured keratinocytes. Gene-specific silencing of PRINS resulted in the retention of NPM in the nucleolus of UV-B-irradiated keratinocytes, suggesting that PRINS may play a role in the NPM-mediated cellular stress response in the skin.

    Towards Indonesian Speech-Emotion Automatic Recognition (I-SpEAR)

    Although speech-emotion recognition (SER) has been receiving much attention as a research topic, there is still dispute about which vocal features identify particular emotions. Emotion expression is also known to differ according to cultural background, which makes it important to study SER specific to the culture in which the language is spoken. Furthermore, only a few studies address SER in Indonesian, which is what this study attempts to explore. In this study, we extract simple features from 3420 voice recordings gathered from 38 participants. The features are compared by means of a linear mixed-effects model, which shows that emotional and non-emotional speech can be differentiated by speech duration. Using an SVM with speech duration as the input feature, we achieve 76.84% average accuracy in classifying emotional and non-emotional speech.
    Comment: 4 pages, 3 tables, published in the 4th International Conference on New Media (Conmedia), 8-10 Nov. 2017 (http://conmedia.umn.ac.id/) [in print as of Sept. 17, 2017]
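
    As a rough illustration of the classification step described in the abstract, the following sketch (not the authors' code) trains an SVM on a single speech-duration feature to separate emotional from non-emotional utterances. The synthetic durations and labels are placeholders standing in for the 3420 Indonesian recordings used in the study, and the assumed duration gap between the two classes is illustrative only.

    ```python
    # Hypothetical sketch: SVM on speech duration alone, mirroring the setup
    # described in the abstract. Data here are synthetic placeholders, not the
    # study's 3420 Indonesian recordings.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Assumption for illustration: emotional speech tends to run longer.
    emotional_dur = rng.normal(loc=4.5, scale=1.0, size=500)  # seconds
    neutral_dur = rng.normal(loc=3.5, scale=1.0, size=500)    # seconds

    X = np.concatenate([emotional_dur, neutral_dur]).reshape(-1, 1)
    y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = emotional

    # Scale the single feature and fit an RBF-kernel SVM, scoring by
    # cross-validated accuracy (the paper reports ~76.84% on its own data).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean CV accuracy: {scores.mean():.3f}")
    ```

    The point of the sketch is only that a one-dimensional duration feature fed to a standard SVM suffices to separate the two classes when their duration distributions differ, which is the effect the linear mixed-effects analysis in the paper reports.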