    Extracting scientific trends by mining topics from Call for Papers

    © 2019, Emerald Publishing Limited. Purpose: The purpose of this paper is to present a novel approach for mining scientific trends using topics from Calls for Papers (CFP). The work contributes valuable input for researchers, academics, funding institutes and research administration departments by sharing trends that can set the direction of research paths. Design/methodology/approach: The authors procure an innovative CFP data set to analyse the scientific evolution and prestige of conferences that set scientific trends, using scientific publications indexed in DBLP. Using the Field of Research code 804 from the Australian Research Council, the authors classify 146 conferences (from 2006 to 2015) into different thematic areas by matching the terms extracted from publication titles with the Association for Computing Machinery Computing Classification System. Furthermore, the authors enrich the vocabulary of terms from the WordNet dictionary and the Growbag data set. To measure the significance of terms, the authors adopt the following weighting schemas: probabilistic, gram, relative, accumulative and hierarchical. Findings: The results indicate the rise of "big data analytics" among CFP topics in the last few years. Whereas topics related to "privacy and security" show an exponential increase, topics related to "semantic web" show a downfall in recent years. While analysing publication output in DBLP that matches CFPs indexed in ERA Core A* to C rank conferences, the authors identified that A* and A tier conferences do not solely set publication trends, since B and C tier conferences target similar CFP topics. Originality/value: Overall, the analyses presented in this research are useful for the scientific community and research administrators to study research trends and improve the data management of digital libraries pertaining to the scientific literature.
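    The relative weighting schema mentioned above can be illustrated with a minimal sketch (hypothetical data and function names, not the authors' implementation): a term's weight in a given year is its share of all term occurrences that year, so a rising share across years suggests a trend.

    ```python
    from collections import Counter

    def relative_weights(cfp_terms_by_year):
        """For each year, score each term by its relative frequency:
        occurrences of the term divided by all term occurrences that year."""
        weights = {}
        for year, terms in cfp_terms_by_year.items():
            counts = Counter(terms)
            total = sum(counts.values())
            weights[year] = {term: count / total for term, count in counts.items()}
        return weights

    # Hypothetical toy data: terms extracted from CFP topics per year.
    cfp_terms_by_year = {
        2014: ["semantic web", "big data", "semantic web"],
        2015: ["big data", "big data", "privacy"],
    }
    w = relative_weights(cfp_terms_by_year)
    # "big data" rises from 1/3 of 2014's terms to 2/3 of 2015's terms.
    ```

    Comparing a term's weight year over year, as in the toy data, is what lets a downfall (e.g. "semantic web") or a rise (e.g. "big data") be read off the CFP corpus.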

    Suprasellar pilocytic astrocytoma in an adult with hemorrhage and leptomeningeal dissemination: case report and review of literature

    Pilocytic astrocytoma (PA) is a low-grade tumor. It has an excellent prognosis after total resection. Leptomeningeal dissemination and hemorrhage are rarely associated with PA and lead to an unfavorable prognosis. A 35-year-old man was diagnosed with a hemorrhagic suprasellar PA in 2006. Subsequent examination in 2007 revealed another large subdural hemorrhagic lesion in the sacral region, which proved to be PA on histopathologic assessment. Other leptomeningeal foci were discovered, mainly at the craniocervical junction. The patient underwent subtotal resection and received chemotherapy with disease control for 7 years. Progression of the disseminated disease has recently occurred; however, the patient is still alive with stable disease after radiotherapy. The radiological features, management, and relevant literature are also presented. Our report heightens awareness of PA in the adult population and the importance of close surveillance for leptomeningeal spread, especially for sellar region tumors.

    Transforming Language Translation: A Deep Learning Approach to Urdu–English Translation

    Machine translation has revolutionized the field of language translation in the last decade. Initially dominated by statistical models, the rise of deep learning techniques has led to neural networks, particularly Transformer models, taking the lead. These models have demonstrated exceptional performance in natural language processing tasks, surpassing traditional sequence-to-sequence models such as RNN, GRU, and LSTM. With advantages like better handling of long-range dependencies and shorter training time, the NLP community has shifted towards using Transformers for sequence-to-sequence tasks. In this work, we leverage the sequence-to-sequence Transformer model to translate Urdu (a low-resource language) to English. Our model is based on a variant of the Transformer with some modifications, namely activation dropout, attention dropout and final layer normalization. We have used four different datasets (UMC005, Tanzil, The Wire, and PIB) from two categories (religious and news) to train our model. The achieved results demonstrate that the model's performance and quality of translation varied depending on the dataset used for fine-tuning. Our designed model has outperformed the baseline models with 23.9 BLEU, 0.46 chrF, 0.44 METEOR and 60.75 TER scores. The enhanced performance is attributable to meticulous parameter tuning, encompassing modifications in architecture and optimization techniques. Comprehensive parametric details regarding model configurations and optimizations are provided to elucidate the distinctiveness of our approach and how it surpasses prior works. We provide source code via GitHub for future studies. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.
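    Of the metrics reported above, BLEU is the most widely used. A minimal sentence-level sketch (simplified: clipped n-gram precisions with add-one smoothing and a brevity penalty; not the evaluation toolkit used in the paper, whose corpus-level scores will differ) looks like:

    ```python
    import math
    from collections import Counter

    def ngrams(tokens, n):
        """All contiguous n-grams of a token list."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def bleu(candidate, reference, max_n=4):
        """Simplified sentence-level BLEU: clipped 1..max_n-gram precisions
        (add-one smoothed) combined by geometric mean, times a brevity
        penalty that punishes candidates shorter than the reference."""
        precisions = []
        for n in range(1, max_n + 1):
            cand = Counter(ngrams(candidate, n))
            ref = Counter(ngrams(reference, n))
            # Clip each candidate n-gram count by its count in the reference.
            overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
            total = max(sum(cand.values()), 1)
            precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
        geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
        if len(candidate) >= len(reference):
            brevity_penalty = 1.0
        else:
            brevity_penalty = math.exp(1 - len(reference) / len(candidate))
        return brevity_penalty * geo_mean

    # Hypothetical toy example (not from the paper's data).
    cand = "the cat sat on the mat".split()
    ref = "the cat is on the mat".split()
    score = bleu(cand, ref)
    ```

    An identical candidate and reference score 1.0; partial n-gram overlap, as in the toy pair above, yields an intermediate score. Production evaluations use smoothed corpus-level implementations (e.g. sacreBLEU), which is why reported scores such as the 23.9 BLEU above are only comparable under a fixed toolkit and tokenization.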