
    A Topic Modeling Guided Approach for Semantic Knowledge Discovery in e-Commerce

    The task of mining large unstructured text archives, extracting useful patterns, and organizing them into a knowledge base has attracted great attention owing to its vast array of immediate business applications. Businesses therefore demand new and efficient algorithms for leveraging potentially useful patterns from heterogeneous data sources that produce huge volumes of unstructured data. Owing to their ability to bring out hidden themes from large text repositories, topic modeling algorithms have attracted significant attention in the recent past. This paper proposes an efficient and scalable method, guided by topic modeling, for extracting concepts and relationships from e-commerce product descriptions and organizing them into a knowledge base. Semantic graphs can be generated from such a knowledge base, on which a meaning-aware product discovery experience can be built for potential buyers. Extensive experiments with the proposed unsupervised algorithms on e-commerce product descriptions collected from the open web show that our method outperforms some existing methods of extracting concepts and relationships, enabling efficient knowledge base construction.
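To make the concept-and-relationship extraction idea concrete, here is a minimal, stdlib-only Python sketch. It is not the paper's topic-modeling-guided algorithm: it simply treats frequent terms in product descriptions as candidate concepts and co-occurrence within a description as a relationship, then stores the edges as a toy knowledge base. All identifiers and the tiny corpus are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

# Tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"a", "the", "with", "and", "for", "in", "of"}

def extract_concepts(descriptions, min_freq=2):
    """Treat frequent non-stopword terms as candidate concepts."""
    counts = Counter(
        tok
        for desc in descriptions
        for tok in desc.lower().split()
        if tok not in STOPWORDS
    )
    return {term for term, n in counts.items() if n >= min_freq}

def build_knowledge_base(descriptions, concepts):
    """Link concepts that co-occur in the same product description,
    weighting each edge by its co-occurrence count."""
    kb = Counter()
    for desc in descriptions:
        present = {t for t in desc.lower().split() if t in concepts}
        for a, b in combinations(sorted(present), 2):
            kb[(a, b)] += 1
    return kb

# Hypothetical product descriptions standing in for a crawled corpus.
docs = [
    "leather wallet with card slots",
    "leather belt with steel buckle",
    "card holder wallet slim leather",
]
concepts = extract_concepts(docs)
kb = build_knowledge_base(docs, concepts)
```

The weighted edge list in `kb` is the raw material for the semantic graph the abstract describes; the paper's contribution is using topic models, rather than raw frequency, to decide which terms and links are meaningful.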

    SpEL: Structured Prediction for Entity Linking

    Entity linking is a prominent thread of research focused on structured data creation by linking spans of text to an ontology or knowledge source. We revisit the use of structured prediction for entity linking, which classifies each individual input token as an entity and aggregates the token predictions. Our system, called SpEL (Structured prediction for Entity Linking), is a state-of-the-art entity linking system that applies several new ideas to the task: two refined fine-tuning steps, a context-sensitive prediction aggregation strategy, a reduction of the size of the model's output vocabulary, and a fix for a common problem in entity-linking systems where tokenization differs between training and inference. Our experiments show that we outperform the state of the art on the commonly used AIDA benchmark dataset for entity linking to Wikipedia. Our method is also very compute-efficient in terms of number of parameters and inference speed.
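The aggregation step the abstract mentions can be sketched as follows: given per-token entity predictions, merge contiguous runs of the same entity label into linked spans. This is a deliberately simplified stand-in for SpEL's context-sensitive aggregation strategy, and the token sequence and entity IDs (`Grace_Hopper`, `New_York_City`) are invented for illustration.

```python
def aggregate_token_predictions(tokens, labels):
    """Merge contiguous runs of identical non-"O" entity labels into
    (surface span, entity) pairs -- a simplified form of the aggregation
    an entity linker performs over token-level predictions."""
    spans = []
    i, n = 0, len(tokens)
    while i < n:
        lab = labels[i]
        if lab == "O":          # "O" marks tokens outside any entity
            i += 1
            continue
        j = i
        while j < n and labels[j] == lab:
            j += 1              # extend the run of identical labels
        spans.append((" ".join(tokens[i:j]), lab))
        i = j
    return spans

# Illustrative token-level predictions (entity IDs are hypothetical).
tokens = ["Grace", "Hopper", "visited", "New", "York"]
labels = ["Grace_Hopper", "Grace_Hopper", "O", "New_York_City", "New_York_City"]
spans = aggregate_token_predictions(tokens, labels)
```

SpEL's actual strategy additionally uses surrounding context to resolve conflicting token predictions; the sketch above only shows the basic token-to-span reduction.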

    ClimateNLP: Analyzing Public Sentiment Towards Climate Change Using Natural Language Processing

    Climate change's impact on human health poses unprecedented and diverse challenges. Unless proactive measures based on solid evidence are implemented, these threats will likely escalate and continue to endanger human well-being. Escalating advancements in information and communication technologies have made social media platforms widely available and widely used. Individuals turn to platforms such as Twitter and Facebook to express their opinions, thoughts, and critiques on diverse subjects, including the pressing issue of climate change. The proliferation of climate change-related content on social media necessitates comprehensive analysis to glean meaningful insights. This paper employs natural language processing (NLP) techniques to analyze climate change discourse and quantify the sentiment of climate change-related tweets. We use ClimateBERT, a pretrained model fine-tuned specifically for the climate change domain. The objective is to discern the sentiment individuals express and uncover patterns in public opinion concerning climate change. Analyzing tweet sentiments allows a deeper comprehension of public perceptions, concerns, and emotions about this critical global challenge. The findings from this experiment yield valuable insights into public sentiment and the entities associated with climate change discourse. Policymakers, researchers, and organizations can leverage such analyses to understand public perceptions, identify influential actors, and devise informed strategies to address climate change challenges.
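The pipeline shape here is: collect tweets, score each one, then aggregate the labels. A fine-tuned transformer such as ClimateBERT does the scoring in the paper; the stdlib-only sketch below substitutes a toy lexicon scorer purely to show the pipeline, and the word lists and example tweets are invented assumptions, not ClimateBERT's behavior.

```python
# Toy lexicon-based sentiment scorer -- a stand-in for fine-tuned
# ClimateBERT inference, which would require the transformers library.
POS = {"hope", "progress", "solutions", "renewable"}
NEG = {"crisis", "disaster", "threat", "denial"}

def sentiment(tweet):
    """Label a tweet by counting positive vs. negative lexicon hits."""
    toks = tweet.lower().split()
    score = sum(t in POS for t in toks) - sum(t in NEG for t in toks)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Hypothetical climate-related tweets.
tweets = [
    "renewable energy brings hope and progress",
    "climate crisis is a growing threat",
    "the summit ended today",
]
labels = [sentiment(t) for t in tweets]
```

In the paper's setup, the per-tweet labels would come from the model's classification head instead, but the downstream aggregation into public-opinion statistics works the same way.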

    Exploring the Perceptions of Generations X, Y and Z about Online Platforms and Digital Marketing Activities – A Focus-Group Discussion Based Study

    Purpose: This study analyzes the perceptions and attitudes of GenX, GenY, and GenZ towards online platforms and digital marketing activities.   Theoretical framework: The study is qualitative in nature; data were collected from three separate focus group discussions, one for each of Generations X, Y, and Z. Secondary data sources such as previous research articles, internet sources, and books were also consulted.   Design/methodology/approach: The article seeks insights into online platform and digital marketing consumption patterns in order to understand the perceptions and attitudes of GenX, GenY, and GenZ towards various online platforms. A group discussion was conducted with each generation using pre-planned questions prepared by the researcher; participants were drawn from the researcher's personal and professional network. Transcripts were prepared, information regarding perceptions of digital marketing and online platforms was extracted, and thematic analysis was performed using NVivo. Ten themes and sub-themes were identified from the chart produced in NVivo.   Findings: The perceptions of the three generations regarding online platforms and digital marketing activities differ significantly: GenX are digital migrants, GenY are digital natives, and GenZ are mobile natives.   Research, Practical & Social implications: The emergence of the internet and digitalization has forced companies to concentrate more on online platforms and digital marketing avenues. Different generations' interests, traits, perceptions, habits, etc. differ, so businesses need to analyze and understand the perceptions of different generational cohorts to develop an effective digital marketing strategy. This study paves the way for further research that would benefit both academia and industry.
Originality/value: This study helps businesses understand the perceptions of different generational cohorts in order to develop an effective digital marketing strategy.

    Optimized Screening of Glaucoma using Fundus Images and Deep Learning

    Diabetic retinopathy, glaucoma, and age-related macular degeneration are among the leading causes of visual loss worldwide. Early detection and diagnosis of these conditions are crucial to reduce vision loss and improve patient outcomes. In recent years, deep learning algorithms have shown great potential for automating the diagnosis and categorization of eye disorders from medical images. For this purpose, a deep learning strategy based on the ResNet-50 architecture is employed. The approach fine-tunes a pre-trained ResNet-50 model on over 5,000 retinal images from the ODIR dataset, covering ten different ocular diseases. To enhance the model's generalization performance and avoid overfitting, various data augmentation techniques are applied to the training data. The model successfully distinguishes glaucoma from related categories, including cataract, diabetic retinopathy, and healthy eyes. Performance evaluation using accuracy, precision, recall, and F1-score shows that the model achieved 92.60% accuracy, 93.54% precision, 91.60% recall, and an F1-score of 91.68%. These results indicate that the proposed strategy outperforms many state-of-the-art approaches in the detection and categorization of eye disorders, underscoring the potential of deep learning-based methods for automated ocular disease identification and facilitating early diagnosis and timely treatment to ultimately improve patient outcomes.
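The four metrics the abstract reports are standard functions of the confusion counts. As a reference, here is a minimal stdlib-only Python implementation of per-class accuracy, precision, recall, and F1; the toy labels below are invented for illustration and are unrelated to the ODIR results.

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy plus binary precision/recall/F1 for one target class
    (e.g. "glaucoma"), computed from true/false positive/negative counts."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, precision, recall, f1

# Hypothetical predictions on four fundus images.
y_true = ["glaucoma", "glaucoma", "healthy", "healthy"]
y_pred = ["glaucoma", "healthy", "healthy", "healthy"]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred, "glaucoma")
```

For a ten-class problem like ODIR, the paper's single reported precision/recall/F1 figures are presumably averages of such per-class values across the disease categories.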