
    Activation of CD147 with Cyclophilin A Induces the Expression of IFITM1 through ERK and PI3K in THP-1 Cells

    CD147, a receptor for cyclophilins, is a multifunctional transmembrane glycoprotein. To identify genes induced by activation of CD147, THP-1 cells were stimulated with cyclophilin A and differentially expressed genes were detected using PCR-based analysis. Interferon-induced transmembrane protein 1 (IFITM1) was found to be induced, and its induction was confirmed by RT-PCR and Western blot analysis. CD147-induced expression of IFITM1 was blocked by inhibitors of ERK, PI3K, or NF-κB, but not by inhibitors of p38, JNK, or PKC. IFITM1 appears to mediate inflammatory activation of THP-1 cells, since cross-linking of IFITM1 with a specific monoclonal antibody induced the expression of proinflammatory mediators such as IL-8 and MMP-9. These data indicate that IFITM1 is one of the proinflammatory mediators induced by signaling downstream of CD147 activation in macrophages, and that activation of ERK, PI3K, and NF-κB is required for its expression.

    Hybrid bounds for twisted L-functions

    The aim of this paper is to derive bounds on the critical line $\Re s = 1/2$ for $L$-functions attached to twists $f \otimes \chi$ of a primitive cusp form $f$ of level $N$ and a primitive character $\chi$ modulo $q$ that break convexity simultaneously in the $s$ and $q$ aspects. If $f$ has trivial nebentypus, it is shown that $L(f \otimes \chi, s) \ll (N|s|q)^{\varepsilon}\, N^{4/5}\, (|s|q)^{1/2 - 1/40}$, where the implied constant depends only on $\varepsilon > 0$ and the archimedean parameter of $f$. To this end, two independent methods are employed to show $L(f \otimes \chi, s) \ll (N|s|q)^{\varepsilon}\, N^{1/2} |s|^{1/2} q^{3/8}$ and $L(g, s) \ll D^{2/3} |s|^{5/12}$ for any primitive cusp form $g$ of level $D$ and arbitrary nebentypus (not necessarily a twist $f \otimes \chi$ of level $D \mid Nq^2$).
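    To make "breaking convexity" concrete, here is a hedged comparison, assuming $(q, N) = 1$ so that $f \otimes \chi$ has conductor $Nq^2$ and analytic conductor roughly $Nq^2(1 + |s|)^2$ on the critical line:

    % Convexity bound vs. the theorem above (sketch under the assumption (q, N) = 1)
    \[
      \underbrace{L(f \otimes \chi, s)
        \ll_{\varepsilon} \bigl(Nq^2|s|^2\bigr)^{1/4+\varepsilon}
        \approx N^{1/4}\,(|s|q)^{1/2+\varepsilon}}_{\text{convexity bound}}
      \quad\longrightarrow\quad
      \underbrace{N^{4/5}\,(|s|q)^{1/2-1/40}}_{\text{bound above}}
    \]

    The exponent of the combined $|s|q$ aspect drops below $1/2$, which is what "simultaneously in the $s$ and $q$ aspects" means; the price is a worse exponent in the level $N$.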

    Capturing scattered discriminative information using a deep architecture in acoustic scene classification

    Acoustic scene classification (ASC) contains frequently misclassified pairs of classes that share many common acoustic properties. To distinguish such pairs, subtle details scattered throughout the data can be vital clues. However, these details are less noticeable and are easily removed by conventional non-linear activations (e.g. ReLU). Furthermore, design choices that emphasize such details can easily lead to overfitting if the system does not generalize sufficiently. In this study, based on an analysis of the characteristics of the ASC task, we investigate various methods to capture discriminative information while mitigating overfitting. We adopt the max feature map method to replace conventional non-linear activations in a deep neural network, thereby applying an element-wise comparison between different filters of a convolution layer's output. Two data augmentation methods and two deep architecture modules are further explored to reduce overfitting while sustaining the system's discriminative power. Various experiments are conducted on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 task 1-a dataset to validate the proposed methods. Our results show that the proposed system consistently outperforms the baseline; the single best-performing system achieves an accuracy of 70.4%, compared to 65.1% for the baseline.
    Comment: Submitted to DCASE 2020 workshop
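    The max feature map mentioned above is easy to sketch: instead of ReLU, it splits a convolution's output channels into two halves and keeps the element-wise maximum, so small but informative responses compete against other filters rather than being zeroed out. A minimal PyTorch sketch (module and shapes are illustrative, not the authors' code):

    import torch
    import torch.nn as nn

    class MaxFeatureMap(nn.Module):
        """Element-wise max between the two halves of the channel axis.
        Replaces ReLU; the preceding conv needs an even channel count,
        and this layer halves it (2C -> C)."""
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            a, b = torch.chunk(x, 2, dim=1)  # split channels into two halves
            return torch.max(a, b)           # competitive, non-zeroing activation

    # Usage: a conv block where MFM plays the role ReLU usually does.
    block = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=3, padding=1),  # 64 channels in
        MaxFeatureMap(),                             # -> 32 channels out
        nn.BatchNorm2d(32),
    )
    spec = torch.randn(8, 1, 128, 100)  # (batch, ch, mel bins, frames)
    print(block(spec).shape)            # torch.Size([8, 32, 128, 100])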

    How Many Presentations Are Published as Full Papers?

    Background: The publication rate of presentations at international medical meetings ranges from 11% to 78%, with an average of 45%. To date, there have been no studies of the final publication rate at scientific meetings associated with plastic surgery in Korea. The present authors investigated the publication rate of presentations at meetings associated with plastic surgery.
    Methods: The titles and authors of abstracts from oral and poster presentations were collected from the program books of the Congress of the Korean Society of Plastic and Reconstructive Surgeons (CKSPRS) from 2005 to 2007 (58th-63rd). All presented abstracts were searched for in PubMed, KoreaMed, KMbase, and Google Scholar. The titles, key words from the titles, and the authors' names were then entered into database programs. The parameters reviewed included the publication rate, the type of presentation (including running time), affiliation, subspecialty, time to publication, and publication journal.
    Results: A total of 1,176 abstracts presented at the CKSPRS from 2005 to 2007 were evaluated. Overall, 38.7% of the abstracts were published as full papers (41.0% of oral presentations and 34.8% of poster presentations). The mean time to publication was 15.04 months. Among the journals of publication, the Journal of the Korean Society of Plastic and Reconstructive Surgeons was the most frequent.
    Conclusions: Brilliant ideas and innovative approaches are discussed at the CKSPRS. The 38.7% publication rate found in this research is somewhat lower than the average rate for medical meetings. If these valuable presentations are not made available as full papers, the underlying research risks being a waste of time and effort.

    Convolution channel separation and frequency sub-bands aggregation for music genre classification

    In music, short-term features such as pitch and tempo constitute long-term semantic features such as melody and narrative. A music genre classification (MGC) system should be able to analyze both kinds of features. In this research, we propose a novel framework that can extract and aggregate short- and long-term features hierarchically. Our framework is based on ECAPA-TDNN, in which all layers that extract short-term features are affected by the layers that extract long-term features because of back-propagation during training. To prevent the distortion of short-term features, we devised a convolution channel separation technique that separates short-term features from the long-term feature extraction paths. To extract more diverse features from our framework, we incorporated a frequency sub-bands aggregation method, which divides the input spectrogram along frequency bandwidths and processes each segment separately. We evaluated our framework on the Melon Playlist dataset, a large-scale dataset containing 600 times more data than GTZAN, the dataset most widely used in MGC studies. As a result, our framework achieved 70.4% accuracy, an improvement of 16.9% over a conventional framework.
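    A rough PyTorch sketch of the two ideas follows (a hypothetical module, not the authors' ECAPA-TDNN-based code; the long-term path is reduced to a single conv for brevity). The spectrogram is split along the mel axis so each sub-band gets its own encoder, and only half of the resulting channels feed the long-term path, so gradients from the long-term layers cannot distort the other half:

    import torch
    import torch.nn as nn

    class SubBandFrontEnd(nn.Module):
        """Sketch: per-band short-term encoders plus channel separation."""
        def __init__(self, n_mels=128, n_bands=4, channels=32):
            super().__init__()
            assert n_mels % n_bands == 0
            self.n_bands = n_bands
            # one lightweight short-term encoder per frequency sub-band
            self.band_encoders = nn.ModuleList(
                nn.Conv2d(1, channels, kernel_size=3, padding=1)
                for _ in range(n_bands)
            )
            # the long-term path sees only half of the short-term channels
            self.long_term = nn.Conv2d(channels // 2, channels,
                                       kernel_size=3, padding=1)

        def forward(self, spec):                            # spec: (B, 1, n_mels, T)
            bands = torch.chunk(spec, self.n_bands, dim=2)  # split the mel axis
            feats = [enc(b) for enc, b in zip(self.band_encoders, bands)]
            x = torch.cat(feats, dim=2)                     # re-assemble frequency axis
            short, to_long = torch.chunk(x, 2, dim=1)       # separate channel groups
            deep = self.long_term(to_long)                  # only this half is refined
            return torch.cat([short, deep], dim=1)          # aggregate both views

    spec = torch.randn(4, 1, 128, 200)
    print(SubBandFrontEnd()(spec).shape)  # torch.Size([4, 48, 128, 200])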

    Integrated Parameter-Efficient Tuning for General-Purpose Audio Models

    The advent of hyper-scale, general-purpose pre-trained models is shifting the paradigm away from building task-specific models for target tasks. In the field of audio research, task-agnostic pre-trained models with high transferability and adaptability have achieved state-of-the-art performance through fine-tuning on downstream tasks. Nevertheless, re-training all the parameters of these massive models entails an enormous amount of time and cost, along with a huge carbon footprint. To overcome these limitations, the present study explores and applies efficient transfer learning methods in the audio domain. We also propose an integrated parameter-efficient tuning (IPET) framework that aggregates the embedding prompt (a prompt-based learning approach) and the adapter (an effective transfer learning method). We demonstrate the efficacy of the proposed framework using two backbone pre-trained audio models with different characteristics: the audio spectrogram transformer and wav2vec 2.0. The proposed IPET framework exhibits remarkable performance compared to fine-tuning, with fewer trainable parameters, on four downstream tasks: sound event classification, music genre classification, keyword spotting, and speaker verification. Furthermore, the authors identify and analyze the shortcomings of the IPET framework, providing lessons and research directions for parameter-efficient tuning in the audio domain.
    Comment: 5 pages, 3 figures, submitted to ICASSP 2023
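    The two ingredients that IPET aggregates are both standard: learnable prompt embeddings prepended to the token sequence of a frozen backbone, and a small bottleneck adapter that holds the only trainable parameters. A minimal PyTorch sketch (class names, dimensions, and the adapter's placement on the backbone output are illustrative assumptions, not the paper's exact architecture):

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
        def __init__(self, dim=768, bottleneck=64):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)
            self.act = nn.GELU()

        def forward(self, x):
            # residual add keeps the frozen representation intact
            return x + self.up(self.act(self.down(x)))

    class PromptedFrozenEncoder(nn.Module):
        """Prepend learnable prompt tokens to a frozen backbone's input
        and refine its output with a small trainable adapter."""
        def __init__(self, backbone, dim=768, n_prompts=8):
            super().__init__()
            self.backbone = backbone
            for p in self.backbone.parameters():
                p.requires_grad = False           # backbone stays frozen
            self.prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
            self.adapter = Adapter(dim)

        def forward(self, tokens):                # tokens: (B, T, dim)
            prompts = self.prompts.expand(tokens.size(0), -1, -1)
            x = torch.cat([prompts, tokens], dim=1)  # embedding prompts first
            return self.adapter(self.backbone(x))    # adapter on frozen output

    # Usage with a stand-in backbone (a real system would wrap AST or wav2vec 2.0):
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True),
        num_layers=2,
    )
    model = PromptedFrozenEncoder(backbone)
    out = model(torch.randn(2, 100, 768))
    print(out.shape)  # torch.Size([2, 108, 768]) -- 8 prompt tokens + 100 inputs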