
    Service Failure Complaints Identification in Social Media: A Text Classification Approach

    The emergence of social media has brought up plenty of platforms where dissatisfied customers can share their service encounter experiences. Such customer feedback has been widely recognized as a valuable information source for improving service quality. Because complaints are sparsely distributed in social media and non-complaint posts cover diverse topics, manually identifying complaints is time-consuming and inefficient. In this study, a supervised learning approach comprising sample enlargement and classifier construction was proposed. Using a small set of labeled samples for training, reliable complaint and non-complaint samples were identified from the unlabeled dataset during the sample enlargement step. Combining the enlarged samples with the labeled samples, SVM and KNN algorithms were employed to construct the classifiers. Empirical results show that the proposed approach can efficiently distinguish complaints from non-complaints in social media, especially when the number of labeled samples is very small.
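    As a rough illustration of the approach summarized above, the sketch below (assuming scikit-learn and hypothetical array names such as X_labeled and X_unlabeled) enlarges a small labeled set with confidently predicted unlabeled posts and then trains SVM and KNN classifiers on the combined data; it is a minimal reading of the abstract, not the authors' implementation.

```python
# Minimal sketch of the sample-enlargement + SVM/KNN idea; names and thresholds are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def enlarge_samples(X_labeled, y_labeled, X_unlabeled, threshold=0.9):
    """Add unlabeled posts whose predicted class probability exceeds a threshold."""
    seed = SVC(probability=True).fit(X_labeled, y_labeled)
    proba = seed.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) >= threshold
    pseudo_labels = seed.classes_[proba[confident].argmax(axis=1)]
    X_enlarged = np.vstack([X_labeled, X_unlabeled[confident]])
    y_enlarged = np.concatenate([y_labeled, pseudo_labels])
    return X_enlarged, y_enlarged

def train_classifiers(X, y):
    """Train the two classifiers named in the abstract on the enlarged sample set."""
    svm = SVC().fit(X, y)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    return svm, knn
```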

    Wisdom of Experts and Crowds: Different Impacts of Analyst Recommendation and Online Search on the Stock Market

    Sell-side analysts are professional experts, while crowds are usually unsophisticated individual investors in the stock market. Understanding the different roles of experts and crowds in the stock market is a fundamental issue for both academia and industry. This empirical study investigates their influence on the stock market by addressing two questions: (1) Do experts and crowds have different impacts on stock prices? (2) Do experts and crowds affect stock trading volumes differently? Adopting a fixed-effect model with panel data from Sogou and CSMAR, we find that experts and crowds have different impacts on the stock market. The wisdom of experts (i.e., analyst recommendations) has a more durable effect on stock prices but a smaller impact on stock turnover than the wisdom of crowds (i.e., the abnormal search volume index).
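    A hedged sketch of the kind of fixed-effect panel regression described above is given below; it assumes the linearmodels package and hypothetical column names (stock, date, ret, analyst_rec, abn_search), and illustrates the estimation setup rather than the study's actual specification.

```python
# Illustrative fixed-effects panel regression; column names and controls are assumptions.
import pandas as pd
from linearmodels.panel import PanelOLS

def fit_fixed_effects(df: pd.DataFrame):
    """Entity (stock) fixed effects absorb time-invariant firm characteristics;
    standard errors are clustered by stock."""
    panel = df.set_index(["stock", "date"])  # PanelOLS expects an (entity, time) MultiIndex
    model = PanelOLS.from_formula(
        "ret ~ 1 + analyst_rec + abn_search + EntityEffects", data=panel
    )
    return model.fit(cov_type="clustered", cluster_entity=True)
```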

    Environment-Centric Safety Requirements for Autonomous Unmanned Systems

    Autonomous unmanned systems (AUS) are emerging to take the place of human operators in harsh or dangerous environments. However, such environments are typically dynamic and uncertain, causing unanticipated accidents when autonomous behaviours are no longer safe. Although safe autonomy has been considered in the literature, little has been done to address the environmental safety requirements of AUS systematically. In this work, we propose a taxonomy of environment-centric safety requirements for AUS and analyse the neglected issues to suggest several new research directions towards the vision of environment-centric safe autonomy.

    Myosin II Light Chain Phosphorylation Regulates Membrane Localization and Apoptotic Signaling of Tumor Necrosis Factor Receptor-1

    Activation of myosin II by myosin light chain kinase (MLCK) produces the force for many cellular processes, including muscle contraction, mitosis, migration, and other changes in cell shape. The results of this study show that inhibition or potentiation of myosin II activation, via over-expression of a dominant-negative or wild-type MLCK, can delay or accelerate tumor necrosis factor-α (TNF)-induced apoptotic cell death. Changes in the activation of caspase-8 that parallel changes in regulatory light chain phosphorylation levels reveal that myosin II motor activity regulates TNF receptor-1 (TNFR-1) signaling at an early step in the TNF death signaling pathway. Treatment of cells with either ionomycin or endotoxin (lipopolysaccharide) leads to activation of myosin II and increased translocation of TNFR-1 to the plasma membrane independent of TNF signaling. These studies establish a new role for myosin II motor activity in regulating TNFR-1-mediated apoptosis through the translocation of TNFR-1 to or within the plasma membrane.

    RNA editing of nuclear transcripts in Arabidopsis thaliana

    Background: RNA editing is a transcript-based layer of gene regulation. To date, no systematic study on RNA editing of plant nuclear genes has been reported. Here, a transcriptome-wide search for editing sites in nuclear transcripts of Arabidopsis (Arabidopsis thaliana) was performed.
    Results: MPSS (massively parallel signature sequencing) and PARE (parallel analysis of RNA ends) data retrieved from public databases were utilized, focusing on one-base-conversion editing. Besides cytidine (C)-to-uridine (U) editing in mitochondrial transcripts, many nuclear transcripts were found to be diversely edited. Interestingly, a sizable portion of these nuclear genes are involved in chloroplast- or mitochondrion-related functions, and many editing events are tissue-specific. Some editing sites, such as adenosine (A)-to-U editing loci, were found to be surrounded by peculiar elements. The editing events of some nuclear transcripts are highly enriched around the borders between coding sequences (CDSs) and 3′ untranslated regions (UTRs), suggesting site-specific editing. Furthermore, RNA editing is potentially implicated in the generation of new start or stop codons and may affect alternative splicing of certain protein-coding transcripts. RNA editing in the precursor microRNAs (pre-miRNAs) of the ath-miR854 family, resulting in secondary structure transformation, implies a potential role in microRNA (miRNA) maturation.
    Conclusions: To our knowledge, the results provide the first global view of RNA editing in plant nuclear transcripts.

    Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape

    The creation of lifelike speech-driven 3D facial animation requires natural and precise synchronization between audio input and facial expressions. However, existing works still fail to render shapes with flexible head poses and natural facial details (e.g., wrinkles). This limitation stems mainly from two aspects: 1) Collecting a training set with detailed 3D facial shapes is highly expensive, and this scarcity of detailed shape annotations hinders the training of models with expressive facial animation. 2) Compared to mouth movement, head pose is much less correlated with speech content, so modeling mouth movement and head pose jointly limits the controllability of facial movement. To address these challenges, we introduce VividTalker, a new framework designed to facilitate speech-driven 3D facial animation characterized by flexible head pose and natural facial details. Specifically, we explicitly disentangle facial animation into head pose and mouth movement and encode them separately into discrete latent spaces. These attributes are then generated through an autoregressive process leveraging a window-based Transformer architecture. To enrich the 3D facial animation, we construct a new 3D dataset with detailed shapes and learn to synthesize facial details in line with speech content. Extensive quantitative and qualitative experiments demonstrate that VividTalker outperforms state-of-the-art methods, producing vivid and realistic speech-driven 3D facial animation.
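    To make the autoregressive generation step above more concrete, the sketch below (PyTorch; all module names, dimensions, and the simplified attention are assumptions, not the VividTalker implementation) shows one causally masked Transformer predictor over a discrete latent stream conditioned on audio features; in the spirit of the paper's disentanglement, one such predictor would be instantiated for head-pose codes and another for mouth-movement codes.

```python
# Illustrative autoregressive predictor over discrete latent codes, conditioned on audio.
# Hypothetical sizes and names; the windowed attention of the paper is not reproduced here.
import torch
import torch.nn as nn

class CodePredictor(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, audio_dim=128, n_layers=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, codes, audio_feats):
        # codes: (B, T) discrete latent indices; audio_feats: (B, T, audio_dim)
        x = self.token_emb(codes)
        memory = self.audio_proj(audio_feats)
        causal = nn.Transformer.generate_square_subsequent_mask(codes.size(1)).to(codes.device)
        h = self.decoder(tgt=x, memory=memory, tgt_mask=causal)
        return self.head(h)  # logits over the next code at each position
```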