A knowledge graph-supported information fusion approach for multi-faceted conceptual modelling
It has become progressively more evident that a single data source cannot comprehensively capture the variability of a multi-faceted concept, such as product design, driving behaviour or human trust, which has diverse semantic orientations. Multi-faceted conceptual modelling is therefore often conducted on multi-sourced data covering the indispensable aspects, and information fusion is frequently applied to cope with the high dimensionality and data heterogeneity. The consideration of intra-facet relationships is also indispensable. In this context, a knowledge graph (KG), which can aggregate the relationships of multiple aspects through semantic associations, is exploited to facilitate multi-faceted conceptual modelling based on heterogeneous, semantically rich data. First, rules of fault mechanisms are extracted from the existing domain knowledge repository, and node attributes are extracted from multi-sourced data. Through abstraction and tokenisation of the existing knowledge repository and the concept-centric data, the fault-mechanism rules are symbolised and integrated with the node attributes, which serve as the entities of the concept-centric knowledge graph (CKG). Subsequently, the process data are transformed into a stack of temporal graphs under the CKG backbone. Lastly, a graph convolutional network (GCN) is applied to extract temporal and attribute correlation features from the graphs, and a temporal convolutional network (TCN) is built for conceptual modelling using these features. The effectiveness of the proposed approach and the close synergy between the KG-supported approach and multi-faceted conceptual modelling are demonstrated and substantiated in a case study using real-world data.
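The graph-convolution step this pipeline relies on can be sketched in plain Python. Everything below is illustrative, not the paper's implementation: the toy CKG (one fault-rule entity linked to two sensor-attribute nodes), the names `gcn_step` and `temporal_stack`, and the use of a single unweighted averaging propagation in place of a trained GCN layer are all assumptions.

```python
# Minimal sketch of one graph-convolution step over a stack of temporal
# graphs built on a fixed concept-centric KG backbone (toy data).

def gcn_step(adj, feats):
    """One propagation step: each node receives the mean of its own and
    its neighbours' attribute vectors (D^-1 (A + I) X with no weights).

    adj   -- adjacency as {node: set of neighbour nodes}
    feats -- {node: list of attribute values}
    """
    out = {}
    for node, fv in feats.items():
        neigh = adj.get(node, set()) | {node}   # add self-loop
        acc = [0.0] * len(fv)
        for n in neigh:
            for i, v in enumerate(feats[n]):
                acc[i] += v
        out[node] = [a / len(neigh) for a in acc]
    return out

# Hypothetical CKG: a symbolised fault rule linked to sensor attributes.
adj = {"rule_overheat": {"temp", "vibration"},
       "temp": {"rule_overheat"},
       "vibration": {"rule_overheat"}}

# A "stack" of temporal graphs: same backbone, attributes vary per step.
temporal_stack = [
    {"rule_overheat": [0.0], "temp": [0.9], "vibration": [0.3]},
    {"rule_overheat": [0.0], "temp": [1.1], "vibration": [0.4]},
]

smoothed = [gcn_step(adj, feats) for feats in temporal_stack]
```

In the paper's setting, the per-timestep outputs of this kind of propagation would then be fed to a TCN as a temporal feature sequence.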
The Use of Clustering Methods in Memory-Based Collaborative Filtering for Ranking-Based Recommendation Systems
This research explores the application of clustering techniques and frequency normalization in collaborative filtering to enhance the performance of ranking-based recommendation systems. Collaborative filtering is a popular approach in recommendation systems that relies on user-item interaction data. In ranking-based recommendation systems, the goal is to provide users with a personalized list of items, sorted by their predicted relevance. In this study, we propose a novel approach that combines clustering and frequency normalization techniques. Clustering, in the context of data analysis, is a technique used to organize and group users or items that share similar characteristics or features. This method proves beneficial in enhancing recommendation accuracy by uncovering hidden patterns within the data. Additionally, frequency normalization is utilized to mitigate potential biases in user-item interaction data, ensuring fair and unbiased recommendations. The research methodology involves data preprocessing, clustering algorithm selection, frequency normalization techniques, and evaluation metrics. Experimental results demonstrate that the proposed method outperforms traditional collaborative filtering approaches in terms of ranking accuracy and recommendation quality. This approach has the potential to enhance recommendation systems across various domains, including e-commerce, content recommendation, and personalized advertising.
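The two ingredients this abstract combines can be sketched as follows. This is a generic memory-based illustration, not the paper's method: the toy ratings, the sum-to-one normalization, and the simplified "clustering" (grouping the target user with the single most similar peer) are all assumptions.

```python
# Sketch: frequency normalization of user-item interactions, then a
# memory-based ranking of the most similar user's unseen items.
import math

def normalize(counts):
    """Scale a user's interaction counts to sum to 1, damping the
    influence of highly active users."""
    total = sum(counts.values())
    return {item: c / total for item, c in counts.items()}

def cosine(u, v):
    """Cosine similarity between two sparse item->value dicts."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

ratings = {
    "alice": {"a": 4, "b": 1},
    "bob":   {"a": 5, "b": 2, "c": 3},
    "carol": {"c": 4, "d": 5},
}
norm = {u: normalize(r) for u, r in ratings.items()}

# "Cluster" alice with her most similar user, then rank that peer's
# items that alice has not interacted with yet.
target = "alice"
peer = max((u for u in norm if u != target),
           key=lambda u: cosine(norm[target], norm[u]))
ranked = sorted((i for i in norm[peer] if i not in norm[target]),
                key=lambda i: norm[peer][i], reverse=True)
```

A full system would cluster all users (e.g. k-means over the normalized vectors) and aggregate over many neighbours rather than one.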
Deep sequential pattern mining for readability enhancement of Indonesian summarization
In text summarization research, readability is a major issue that must be addressed. Our hypothesis is that readability can be achieved by using text representations that keep the meaning of the documents intact. This study therefore combines sequential pattern mining (SPM), which produces word sequences as the text representation, with unsupervised deep learning to produce Indonesian text summaries; we call the resulting method DeepSPM. It uses PrefixSpan as the SPM algorithm and a deep belief network (DBN) as the unsupervised deep learning method, applied to 18,774 Indonesian news texts from IndoSum. The readability aspect is evaluated by recall-oriented understudy for gisting evaluation (ROUGE) as a co-selection-based analysis; the Dwiyanto Djoko Pranowo metrics, Gunning fog index (GFI), and Flesch-Kincaid grade level (FKGL) as content-based analyses; and a human readability evaluation with two experts. The experiments show that DeepSPM outperforms DBN alone, with the ROUGE-1 F-measure improved to 0.462, ROUGE-2 to 0.37, and ROUGE-L to 0.41. The significance of the ROUGE results was also tested with a t-test. The content-based analysis and human readability evaluation agree with the co-selection-based findings that the generated summaries are only partially readable, i.e. have a medium level of readability.
AGI-P: A Gender Identification Framework for Authorship Analysis Using Customized Fine-Tuning of Multilingual Language Model
In this investigation, we propose a solution for the author’s gender identification task called AGI-P. This task has several real-world applications across different fields, such as marketing and advertising, forensic linguistics, sociology, recommendation systems, language processing, historical analysis, education, and language learning. We created a new dataset to evaluate our proposed method. The dataset is balanced in terms of gender using a random sampling method and consists of 1944 samples in total. We use accuracy as an evaluation measure and compare the performance of the proposed solution (AGI-P) against state-of-the-art machine learning classifiers and fine-tuned pre-trained multilingual language models such as DistilBERT, mBERT, XLM-RoBERTa, and Multilingual DEBERTa. In this regard, we also propose a customized fine-tuning strategy that improves the accuracy of the pre-trained language models for the author gender identification task. Our extensive experimental studies reveal that our solution (AGI-P) outperforms the well-known machine learning classifiers and fine-tuned pre-trained multilingual language models with an accuracy level of 92.03%. Moreover, the pre-trained multilingual language models fine-tuned with the proposed customized strategy outperform the same models fine-tuned with an out-of-the-box strategy. The codebase and corpus can be accessed on our GitHub page at: https://github.com/mumairhassan/AGI-
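The "balanced in terms of gender using a random sampling method" step can be sketched as downsampling every class to the size of the smallest one. The field names (`text`, `gender`) and the function below are illustrative assumptions, not the paper's actual preprocessing code.

```python
# Sketch: balance a labelled corpus by random downsampling per class.
import random

def balance(samples, label_key="gender", seed=0):
    """Return a subset with equally many samples per label value."""
    by_label = {}
    for s in samples:
        by_label.setdefault(s[label_key], []).append(s)
    n = min(len(group) for group in by_label.values())  # smallest class
    rng = random.Random(seed)                           # reproducible
    out = []
    for group in by_label.values():
        out.extend(rng.sample(group, n))
    return out

# Toy imbalanced corpus: 8 "F" samples, 4 "M" samples.
data = [{"text": f"t{i}", "gender": "F"} for i in range(8)] + \
       [{"text": f"t{i}", "gender": "M"} for i in range(4)]
balanced = balance(data)   # 4 of each class
```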
Understanding feeling-of-knowing in information search: an EEG study
The realisation and the variability of information needs (IN) with respect to a searcher’s gap in knowledge are driven by the perceived Anomalous State of Knowledge (ASK). The concept of Feeling-of-Knowing (FOK), as the introspective feeling of knowledge awareness, shares the characteristics of an ASK state. From an IR perspective, FOK as a premise to trigger IN is unexplored. Motivated by neuroimaging studies in IR, we investigate the neurophysiological drivers associated with FOK to provide evidence validating FOK as a distinctive state in IN realisation. We employ electroencephalography (EEG) to capture the brain activity of 24 healthy participants performing a textual question-answering IR scenario. We analyse the evoked neural patterns corresponding to three states of knowledge: (1) “I know”, (2) “FOK”, and (3) “I do not know”. Our findings show distinct neurophysiological signatures (N1, P2, N400, P6) in response to information segments processed in the context of these three levels. They further reveal that the brain manifestation associated with “FOK” does not significantly differ from the one associated with “I do not know”, indicating that both reflect the recognition of a gap in knowledge and as such could further inform IN formation at different levels of knowing.
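The component analysis behind signatures like N1, P2 and N400 rests on event-related potentials (ERPs): averaging many time-locked EEG epochs per condition so condition-specific deflections survive while uncorrelated noise cancels. The sketch below uses tiny synthetic traces; the condition names and numbers are illustrative only.

```python
# Sketch: compute a per-condition ERP by sample-wise averaging of
# time-locked epochs (synthetic two-trial, three-sample traces).

def erp(epochs):
    """Average a list of equal-length voltage traces sample by sample."""
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(len(epochs[0]))]

know = [[0.0, 1.2, -0.4], [0.0, 0.8, -0.6]]   # "I know" trials
fok  = [[0.0, 0.5, -1.9], [0.0, 0.3, -2.1]]   # "FOK" trials

erp_know, erp_fok = erp(know), erp(fok)
```

In a real analysis the averages would be compared across participants within fixed latency windows (e.g. an N400 window) using statistical tests, as the study does.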
Computational Argumentation-based Chatbots: a Survey
The article archived on this institutional repository is a preprint; it has not been certified by peer review. Chatbots are conversational software applications designed to interact dialectically with users for a plethora of different purposes. Surprisingly, these colloquial agents have only recently been coupled with computational models of argument (i.e. computational argumentation), whose aim is to formalise, in a machine-readable format, the ordinary exchange of information that characterises human communication. Chatbots may employ argumentation to different degrees and in a variety of manners. The present survey sifts through the literature to review papers concerning this kind of argumentation-based bot, drawing conclusions about the benefits and drawbacks that this approach entails in comparison with standard chatbots, while also envisaging possible future developments and integration with Transformer-based architectures and state-of-the-art Large Language Models.
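The computational models of argument the survey refers to are commonly Dung-style abstract argumentation frameworks: a set of arguments plus an attack relation, evaluated under a semantics. Below is a minimal, generic sketch of computing the grounded extension (the skeptically acceptable arguments); it is an illustration of the formalism, not code from any surveyed chatbot.

```python
# Sketch: grounded-extension computation for an abstract argumentation
# framework (args, attacks), where attacks is a set of (attacker, target).

def grounded_extension(args, attacks):
    """Fixpoint: accept arguments all of whose attackers are defeated;
    anything an accepted argument attacks becomes defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:          # all attackers defeated
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c
grounded = grounded_extension(args, attacks)   # a defeats b, reinstating c
```

An argumentation-based bot would use such extensions to decide which claims it can defensibly assert in a dialogue.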
AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0
Industry 5.0 is a new phase of industrialization that places the worker at the center of the production process and uses new technologies to increase prosperity beyond jobs and growth; advancements in human-centered artificial intelligence (HCAI) systems are central to it. HCAI presents new objectives that were unreachable by either humans or machines alone, but it also brings a new set of challenges. Our proposed method addresses them through the knowlEdge architecture, which enables human operators to implement AI solutions using a zero-touch framework. It relies on containerized AI model training and execution, supported by a robust data pipeline and rounded off with human feedback and evaluation interfaces. The result is a platform built from a number of components spanning all major areas of the AI lifecycle. We outline both the architectural concepts and implementation guidelines and explain how they advance HCAI systems and Industry 5.0. In this article, we also address the problems we encountered while implementing these ideas within the edge-to-cloud continuum. Further improvements to our approach may broaden the use of AI in Industry 5.0 and strengthen trust in AI systems.
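A zero-touch lifecycle with a human-feedback gate can be sketched as a stage pipeline that advances automatically and only consults the operator at evaluation time. The stage names, return values and the `feedback_ok` gate below are hypothetical, not the knowlEdge architecture's actual components.

```python
# Sketch: automated AI-lifecycle loop with one human-in-the-loop gate.

def run_lifecycle(stages, feedback_ok):
    """Run (name, callable) stages in order; after 'evaluate', ask the
    human-feedback gate whether deployment may proceed."""
    log = []
    for name, stage in stages:
        result = stage()
        log.append((name, result))
        if name == "evaluate" and not feedback_ok(result):
            log.append(("rollback", None))   # operator rejected the model
            break
    return log

stages = [
    ("ingest", lambda: "dataset-v1"),
    ("train", lambda: "model-v1"),        # containerized training job
    ("evaluate", lambda: 0.93),           # synthetic accuracy score
    ("deploy", lambda: "edge-node-7"),    # push to the edge
]
trace = run_lifecycle(stages, feedback_ok=lambda acc: acc >= 0.9)
```

In a real edge-to-cloud deployment each stage would be a container orchestrated across the continuum rather than an in-process callable.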
Information retrieval and machine learning methods for academic expert finding
In the context of academic expert finding, this paper investigates and compares the performance of information retrieval (IR) and machine learning (ML) methods, including deep learning, to approach the problem of identifying academic figures who are experts in different domains when a potential user requests their expertise. IR-based methods construct multifaceted textual profiles for each expert by clustering information from their scientific publications. Several methods fully tailored for this problem are presented in this paper. In contrast, ML-based methods treat expert finding as a classification task, training automatic text classifiers using publications authored by experts. By comparing these approaches, we contribute to a deeper understanding of academic-expert-finding techniques and their applicability in knowledge discovery. These methods are tested with two large datasets from the biomedical field: PMSC-UGR and CORD-19. The results show how IR techniques were, in general, more robust with both datasets and more suitable than the ML-based ones, with some exceptions showing good performance. Funding: Agencia Estatal de Investigación (Ref. PID2019-106758GB-C31; Ref. PID2020-113230RB-C22); FEDER/Junta de Andalucía (Ref. A-TIC-146-UGR2)
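The IR-style side of this comparison can be sketched as building a textual profile per expert from their publications and ranking experts for a query by a simple term-frequency match. The real systems use clustering of publications and richer weighting schemes; the toy corpus and function names below are illustrative assumptions.

```python
# Sketch: profile-based expert retrieval from publication text.
from collections import Counter

def build_profiles(pubs_by_expert):
    """One bag-of-words profile per expert, pooled over publications."""
    return {e: Counter(w for doc in docs for w in doc.lower().split())
            for e, docs in pubs_by_expert.items()}

def rank_experts(profiles, query):
    """Rank experts by total frequency of the query terms in a profile."""
    terms = query.lower().split()
    scores = {e: sum(prof[t] for t in terms) for e, prof in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

pubs = {
    "dr_lee":  ["deep learning for protein folding",
                "protein structure prediction"],
    "dr_ruiz": ["information retrieval evaluation",
                "query expansion for retrieval"],
}
profiles = build_profiles(pubs)
order = rank_experts(profiles, "protein folding")
```

The ML alternative the paper compares against would instead train a classifier whose classes are the experts, scoring the query as a document to classify.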
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.