    Social Network Analysis of Ontology Edit Logs

    This paper presents an approach for applying social network analysis to collaborative edit log data, with the Semantic Web Wiki and FAO ontologies as case studies. Users editing the same ontology or the same pages can be viewed as a social network of people interacting via the ontology. We propose to represent the edit log files as a graph, either of users connected if they edit the same ontology concepts, or of concepts connected if they are edited by the same users. We apply social network analysis to such graphs in order to provide insight into the activity of the wiki/ontology editors. Finally, we developed a plugin that provides a convenient GUI for some of the analysis techniques used, so that people interested in monitoring editing activity can perform the analysis and visualization on their own.
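
    The co-editing graph can be sketched briefly; the following is a minimal illustration, assuming the edit log has already been parsed into (user, concept) pairs, and is not the paper's actual plugin code.

        # Build a user-user graph where an edge means two users edited the
        # same ontology concept; edge weights count the shared concepts.
        import itertools
        from collections import defaultdict

        import networkx as nx

        edit_log = [                     # illustrative (user, concept) pairs
            ("alice", "WaterQuality"), ("bob", "WaterQuality"),
            ("bob", "FishSpecies"), ("carol", "FishSpecies"),
        ]

        editors_of = defaultdict(set)
        for user, concept in edit_log:
            editors_of[concept].add(user)

        G = nx.Graph()
        for concept, users in editors_of.items():
            for u, v in itertools.combinations(sorted(users), 2):
                weight = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
                G.add_edge(u, v, weight=weight)

        # Standard social network measures, e.g. degree centrality of editors.
        print(nx.degree_centrality(G))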

    Profiling the news spreading barriers using news headlines

    News headlines can be a good data source for detecting news spreading barriers in news media, which may be useful in many real-world applications. In this paper, we utilize semantic knowledge extracted with the inference-based model COMET, together with the sentiments of news headlines, for barrier classification. We consider five barriers (cultural, economic, political, linguistic, and geographical) and different categories of news headlines, including health, sports, science, recreation, games, homes, society, shopping, computers, and business. To that end, we collect the news headlines and label them for the barriers automatically using the metadata of news publishers. Then, we use the extracted commonsense inferences and sentiments as features to detect the news spreading barriers. We compare our approach to classical text classification methods, deep learning, and transformer-based methods. The results show that the proposed approach using inference-based semantic knowledge and sentiment outperforms the usual methods for classifying news-spreading barriers: the average F1-score over the ten categories improves from 0.41, 0.39, 0.59, and 0.59 to 0.47, 0.55, 0.70, and 0.76 for the cultural, economic, political, and geographical barriers, respectively. (arXiv admin note: substantial text overlap with arXiv:2304.0816.)
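
    As a rough illustration of the feature construction, the sketch below assumes the COMET inferences are already available as plain text for each headline; the example data, feature layout, and classifier choice are ours, not the paper's.

        # Combine TF-IDF features over the commonsense inferences with a VADER
        # sentiment score per headline, then train a simple classifier.
        import numpy as np
        from scipy.sparse import csr_matrix, hstack
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

        headlines = [
            "New vaccine rollout delayed by budget dispute",
            "Parliament split over emergency health funding",
        ]
        inferences = [                       # hypothetical COMET-style output
            "PersonX wants to save money",
            "PersonX wants to influence policy",
        ]
        labels = ["economic", "political"]   # barriers from publisher metadata

        sia = SentimentIntensityAnalyzer()
        sentiment = np.array([[sia.polarity_scores(h)["compound"]]
                              for h in headlines])

        inference_feats = TfidfVectorizer().fit_transform(inferences)
        X = hstack([inference_feats, csr_matrix(sentiment)])

        clf = LogisticRegression(max_iter=1000).fit(X, labels)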

    Political and Economic Patterns in COVID-19 News: From Lockdown to Vaccination

    The purpose of this study is to analyse COVID-19 related news published in different geographical places in order to gain insight into reporting differences. The COVID-19 pandemic had a major outbreak in January 2020 and was followed by various preventive measures, lockdowns, and finally the process of vaccination. To date, more comprehensive analyses of news related to the COVID-19 pandemic are missing, especially analyses explaining which aspects of the pandemic are reported by newspapers embedded in different economies and with different political alignments. LDA tends to be less coherent when applied to news articles published across the world about a single event and queried for specific aspects, because the content is semantically heterogeneous. To address this challenge, we pooled news articles via information retrieval using TF-IDF scores in a data processing step and performed topic modeling using LDA with combinations of 1- to 6-grams. We used the VADER sentiment analyzer to analyze differences in sentiment across news articles reported in different geographical places. The novelty of this study is to examine how the COVID-19 pandemic was reported by the media, providing a comparison among countries in different political and economic contexts. Our findings suggest that the political alignment of a newspaper is reflected in the content it reports, and that the economic issues a newspaper covers depend on the economy of the place where it is based.
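
    A compressed sketch of that pipeline is given below with invented example documents; the pooling step here simply keeps the articles most similar to a query by TF-IDF cosine score, which approximates the retrieval-based pooling rather than reproducing the exact setup.

        from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
        from sklearn.metrics.pairwise import cosine_similarity
        from sklearn.decomposition import LatentDirichletAllocation
        from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

        articles = [
            "Government announces national lockdown to curb the outbreak",
            "Vaccination campaign begins for elderly citizens",
            "Stock markets tumble as lockdown extends into spring",
        ]
        query = "lockdown measures"

        # 1) Pool articles by TF-IDF relevance to the query.
        tfidf = TfidfVectorizer()
        doc_vecs = tfidf.fit_transform(articles)
        scores = cosine_similarity(tfidf.transform([query]), doc_vecs).ravel()
        pooled = [a for a, s in zip(articles, scores) if s > 0]

        # 2) Topic modeling with LDA over 1- to 6-gram counts.
        counts = CountVectorizer(ngram_range=(1, 6)).fit_transform(pooled)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

        # 3) Sentiment of each pooled article with VADER.
        sia = SentimentIntensityAnalyzer()
        sentiments = {a: sia.polarity_scores(a)["compound"] for a in pooled}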

    Correcting the Hub Occurrence Prediction Bias in Many Dimensions

    Data reduction is a common pre-processing step for k-nearest neighbor classification (kNN). Existing prototype selection methods implement different criteria for selecting the relevant points to use in classification, which constitutes a selection bias. This study examines the nature of the instance selection bias in intrinsically high-dimensional data. In high-dimensional feature spaces, hubs are known to emerge as centers of influence in kNN classification. These points dominate most kNN sets and are often detrimental to classification performance. Our experiments reveal that different instance selection strategies bias the predictions of the behavior of hub-points in high-dimensional data in different ways. We propose introducing an intermediate unbiasing step when training the neighbor occurrence models, and we demonstrate promising improvements in various hubness-aware classification methods on a wide selection of high-dimensional synthetic and real-world datasets.
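
    The hubness phenomenon itself is easy to quantify; the sketch below computes k-occurrence counts on toy data, where a hub is a point that appears in unusually many k-nearest-neighbor lists. The intermediate unbiasing step proposed in the paper is not reproduced here.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 100))   # intrinsically high-dimensional toy data

        k = 10
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)          # first column is the point itself
        occurrences = np.bincount(idx[:, 1:].ravel(), minlength=len(X))

        # Points whose occurrence count far exceeds the expected value k are hubs.
        hubs = np.argsort(occurrences)[::-1][:10]
        print(occurrences[hubs])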

    Analyzing Tag Semantics Across Collaborative Tagging Systems

    The objective of our group was to exploit state-of-the-art Information Retrieval methods for finding associations and dependencies between tags, capturing and representing differences in tagging behavior and vocabulary across folksonomies, with the overall aim of better understanding the semantics of tags and the tagging process. To this end, we analyze the semantic content of tags in the Flickr and Delicious folksonomies. We find that: tag context similarity leads to meaningful results in Flickr, despite its narrow folksonomy character; the comparison of tags across Flickr and Delicious shows little semantic overlap, with tags in Flickr associated more with visual aspects and tags in Delicious more with technological ones; there are regions of high density in the tag-tag space equipped with the cosine similarity metric; and the order of tags inside a post has semantic relevance.
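
    As a small illustration of tag context similarity, the sketch below represents each tag by its co-occurrence vector over posts and compares tags with cosine similarity; the data is invented, whereas the actual analysis runs over Flickr and Delicious dumps.

        import numpy as np
        from sklearn.metrics.pairwise import cosine_similarity

        posts = [
            {"sunset", "beach", "photo"},
            {"sunset", "photo", "nikon"},
            {"python", "programming", "web"},
        ]
        tags = sorted(set().union(*posts))

        # Tag-by-post incidence matrix; rows are the tag context vectors.
        M = np.array([[1 if t in p else 0 for p in posts] for t in tags])
        sim = cosine_similarity(M)

        i, j = tags.index("sunset"), tags.index("photo")
        print(f"sim(sunset, photo) = {sim[i, j]:.2f}")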

    Predicting Operators Fatigue in a Human in the AI Loop for Defect Detection in Manufacturing

    Quality inspection, typically performed manually by workers in the past, is now rapidly switching to automated, artificial intelligence (AI)-driven solutions. This elevates the job function of the quality inspection team from physical inspection tasks to tasks related to managing workflows in synergy with AI agents, for example, interpreting inspection outcomes or labeling inspection image data for the AI models. In this context, we have studied how defect inspection can be enhanced by providing defect hints to the operator to ease defect identification. Furthermore, we developed machine learning models to recognize and predict operators' fatigue. By doing so, we can proactively take mitigation actions to enhance the workers' well-being and ensure the highest defect inspection quality standards. We consider such processes to empower human and non-human actors in manufacturing and the sociotechnical production system. The paper first outlines the conceptual approach for integrating the operator in the AI-driven quality inspection process while implementing a fatigue monitoring system to enhance work conditions. Furthermore, it describes how this was implemented by leveraging data and experiments performed for a real-world manufacturing use case.
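
    A fatigue prediction model of this kind might be sketched as below, but the features used here (time on shift, inspection pace, recent error rate), the synthetic labels, and the model choice are purely assumptions for illustration and not the variables or models of the study.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 200
        hours_on_shift = rng.uniform(0, 8, n)       # assumed feature
        items_per_minute = rng.uniform(2, 10, n)    # assumed feature
        recent_error_rate = rng.uniform(0, 0.2, n)  # assumed feature

        X = np.column_stack([hours_on_shift, items_per_minute, recent_error_rate])
        # Synthetic label: fatigue more likely late in the shift with rising errors.
        y = (hours_on_shift / 8 + recent_error_rate * 5 + rng.normal(0, 0.3, n)) > 1.0

        model = RandomForestClassifier(random_state=0)
        print(cross_val_score(model, X, y, cv=5).mean())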

    Using Machine Learning on Sensor Data

    Extracting useful information from raw sensor data requires specific methods and algorithms. We describe a vertical system integration of a sensor node and a toolkit of machine learning algorithms for predicting the number of persons located in a closed space. The dataset used as input for the learning algorithms is composed of automatically collected sensor data and additional manually introduced data. We analyze the dataset and evaluate the performance of two types of machine learning algorithms on it: classification and regression. With our system settings, the experiments show that augmenting sensor data with suitable additional information improves prediction results, and that the classification algorithms performed better than the regression algorithms.
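
    The classification-versus-regression comparison can be sketched as follows, assuming the sensor readings have been collected into a feature matrix with the manually recorded number of persons as the target; the feature names and the synthetic data are invented for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n = 300
        co2 = rng.uniform(400, 1200, n)          # ppm, automatically collected
        temperature = rng.uniform(19, 26, n)     # degrees Celsius
        hour_of_day = rng.integers(0, 24, n)     # manually added context feature
        X = np.column_stack([co2, temperature, hour_of_day])
        persons = np.clip(((co2 - 400) / 200).astype(int), 0, 5)  # synthetic target

        clf_pred = cross_val_predict(RandomForestClassifier(random_state=0),
                                     X, persons, cv=5)
        reg_pred = cross_val_predict(RandomForestRegressor(random_state=0),
                                     X, persons, cv=5)

        print("classification MAE:", mean_absolute_error(persons, clf_pred))
        print("regression MAE:", mean_absolute_error(persons, np.rint(reg_pred)))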

    A System for Publishing Sensor Data on the Semantic Web

    The development of sensor technologies in recent years offers support for new Internet of Things applications. The availability of data from sensor networks deployed in different environments is important for a number of advanced services, such as traffic flow prediction and power consumption monitoring. We propose a system for publishing sensor data following the linked data principles, thereby providing integration with the Semantic Web. The main components are the Semantic Enrichment component and the Data Publishing component, while sensor data are stored in a relational database.
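
    A minimal sketch of the semantic enrichment step is shown below using rdflib and the W3C SOSA vocabulary; the namespaces and property choices are common practice for sensor observations, but the system's actual vocabulary and URIs are not specified in the abstract.

        from rdflib import Graph, Literal, Namespace, RDF, URIRef
        from rdflib.namespace import XSD

        SOSA = Namespace("http://www.w3.org/ns/sosa/")
        EX = Namespace("http://example.org/sensors/")   # placeholder base URI

        g = Graph()
        g.bind("sosa", SOSA)

        # Represent one stored sensor reading as a SOSA observation.
        obs = URIRef(EX["observation/42"])
        g.add((obs, RDF.type, SOSA.Observation))
        g.add((obs, SOSA.madeBySensor, EX["sensor/temp-1"]))
        g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.decimal)))
        g.add((obs, SOSA.resultTime,
               Literal("2014-05-01T12:00:00Z", datatype=XSD.dateTime)))

        print(g.serialize(format="turtle"))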