50 research outputs found

    A Survey on Semantic Processing Techniques

    Full text link
    Semantic processing is a fundamental research domain in computational linguistics. In the era of powerful pre-trained language models and large language models, the advancement of research in this domain appears to be decelerating. However, the study of semantics is multi-dimensional in linguistics, and the depth and breadth of computational semantic processing can be greatly improved with new technologies. In this survey, we analyze five semantic processing tasks: word sense disambiguation, anaphora resolution, named entity recognition, concept extraction, and subjectivity detection. We study relevant theoretical research in these fields, advanced methods, and downstream applications. We connect the surveyed tasks with downstream applications because this may inspire future scholars to fuse these low-level semantic processing tasks with high-level natural language processing tasks. The review of theoretical research may also inspire new tasks and technologies in the semantic processing domain. Finally, we compare the different semantic processing techniques and summarize their technical trends, application trends, and future directions. Comment: Published in Information Fusion, Volume 101, 2024, 101988, ISSN 1566-2535. The equal-contribution mark is missing in the published version due to publication policies; please contact Prof. Erik Cambria for details
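    As a minimal illustration of one of the surveyed tasks, word sense disambiguation, the sketch below implements the simplified Lesk algorithm: pick the WordNet sense whose gloss and examples share the most words with the context. This is a classic baseline, not a method from the survey; it assumes NLTK with the wordnet corpus downloaded.

```python
# Simplified Lesk word sense disambiguation: a minimal baseline sketch,
# not the survey's method. Requires: pip install nltk; nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def simple_lesk(word, context_sentence):
    """Pick the WordNet sense whose signature overlaps most with the context."""
    context = set(context_sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense in wn.synsets(word):
        # The gloss plus example sentences form the sense signature.
        signature = set(sense.definition().lower().split())
        for example in sense.examples():
            signature |= set(example.lower().split())
        overlap = len(signature & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sense = simple_lesk("bank", "I deposited cash at the bank down the street")
print(sense, "-", sense.definition() if sense else "no sense found")
```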

    Hybrid fuzzy multi-objective particle swarm optimization for taxonomy extraction

    Get PDF
    Ontology learning refers to the automatic extraction of an ontology to produce the ontology learning layer cake, which consists of five kinds of output: terms, concepts, taxonomy relations, non-taxonomy relations and axioms. Term extraction, the automatic mining of complete terms from the input document, is a prerequisite for all aspects of ontology learning. Another important part of an ontology is the taxonomy, or hierarchy of concepts, which presents a tree view of the ontology and shows the inheritance between subconcepts and superconcepts. In this research, two methods were proposed for improving the performance of the extraction results. The first method uses particle swarm optimization to optimize the weights of features; its advantage is that it can calculate and adjust the weight of each feature toward an appropriate value, and here it is used to improve the performance of term and taxonomy extraction. The second method is a hybrid technique combining multi-objective particle swarm optimization with fuzzy systems, ensuring that the membership functions and fuzzy rule sets are optimized; the advantage of a fuzzy system is that imprecise and uncertain feature-weight values can be tolerated during the extraction process. This method is used to improve the performance of taxonomy extraction. In the term extraction experiment, five features were extracted for each term in the document, represented by feature vectors consisting of domain relevance, domain consensus, term cohesion, first occurrence and length of noun phrase. For taxonomy extraction, matches of Hearst lexico-syntactic patterns in documents and on the web, together with hypernym information from WordNet, were used as the features representing each pair of terms from the texts. The two proposed methods are evaluated on a dataset of documents about tourism. For term extraction, the proposed method is compared with benchmark algorithms such as Term Frequency-Inverse Document Frequency, Weirdness, Glossary Extraction and Term Extractor, using precision as the evaluation measure. For taxonomy extraction, the proposed methods are compared with the Feature-based and SVM-weighted benchmark methods using f-measure, precision and recall. For the first method, the experiments showed that using particle swarm optimization to optimize the feature weights in term and taxonomy extraction improves the accuracy of the extraction results compared to the benchmark algorithms. For the second method, the results showed that the hybrid of multi-objective particle swarm optimization and fuzzy systems improves taxonomy extraction compared to the benchmark methods, while adjusting the fuzzy membership functions and keeping the number of fuzzy rules to a minimum with a high degree of accuracy.
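    The following is a minimal sketch of the core idea in the first method, using PSO to tune the weights of the five term features so that a weighted-sum ranking maximizes extraction precision. It is an illustration under assumed data, not the thesis's implementation: term_features and gold_labels are hypothetical placeholders for the extracted feature vectors and gold-standard term annotations.

```python
# Minimal PSO sketch for tuning term-extraction feature weights; an
# illustration under assumed data, not the thesis's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows = candidate terms, columns = the five features
# (domain relevance, domain consensus, term cohesion, first occurrence,
# noun-phrase length); gold_labels marks the true domain terms.
term_features = rng.random((200, 5))
gold_labels = rng.random(200) < 0.3

def fitness(weights):
    """Precision of the top-50 terms ranked by weighted feature sum."""
    scores = term_features @ weights
    top = np.argsort(scores)[::-1][:50]
    return gold_labels[top].mean()

n_particles, dim, iters = 20, 5, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
pos = rng.random((n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)         # keep weights in [0, 1]
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best weights:", np.round(gbest, 3), "precision:", pbest_fit.max())
```

    The multi-objective fuzzy variant in the second method replaces this single precision objective with several objectives and optimizes fuzzy membership functions and rule sets rather than raw weights; the swarm-update step stays essentially the same.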

    LiDom builder: Automatising the construction of multilingual domain modules

    Get PDF
    This work presents the analysis, design and evaluation of the LiDOM Builder tool. LiDOM Builder enables the automatic extraction of Multilingual Domain Modules for technology-supported learning tools from electronic textbooks. For knowledge acquisition it combines Natural Language Processing and Machine Learning techniques with several multilingual resources, among them Wikipedia and WordNet. On the path from monolingual to multilingual Domain Modules, LiDOM Builder can be regarded as the evolution of the DOM-Sortze environment (Larrañaga, 2012; Larrañaga et al., 2014); to that end, it provides a mechanism for representing the domain from a multilingual perspective. A Multilingual Domain Module gathers knowledge at two levels: the Learning Domain Ontology (IDO), which contains the topics labelled in the different languages together with the pedagogical relationships among them, and the Learning Objects (IO), i.e., the collection of didactic resources annotated with metadata in those languages. LiDOM Builder allows the topics of the domain to be expressed in all supported languages, each topic being linked to its equivalent label in each language, and it uses enriched metadata describing the IOs to link didactic resources that are counterparts in different languages. In LiDOM Builder, the domain module is first extracted from a document written in a single language, and the multilingual resources are then used to obtain both the topics and the IOs in the other languages. In this work, books written in English are the main information source for both the tuning and the evaluation processes; specifically, the textbooks Principles of Object Oriented Programming (Wong and Nguyen, 2010), Introduction to Astronomy (Morison, 2008) and Introduction to Molecular Biology (Raineri, 2010) were used. As multilingual resources, Wikipedia, WordNet and several other knowledge bases derived from Wikipedia were used. To build Multilingual Domain Modules from textbooks, LiDOM Builder relies on three main modules: LiTeWi and LiReWi build the multilingual IDO, while LiLoWi builds the multilingual IOs. Each module is described in more detail below.
    - LiTeWi (Conde et al., 2015): starting from a textbook on any learning domain, this module identifies the multilingual terminology of an Educational Ontology using techniques such as TF-IDF, KP-Miner, CValue and Shallow Parsing Grammar, relying on unsupervised extraction techniques and on Wikipedia. Topic extraction in LiTeWi takes three steps: first, candidate terms are extracted; second, the obtained terms are combined and refined into the final term list; and finally, the listed terms are mapped to the other languages using Wikipedia.
    - LiReWi (Conde et al., accepted for publication): this module enriches the Educational Ontology with pedagogical relationships, again using the textbook as the starting point. It extracts four kinds of pedagogical relationships (isA, partOf, prerequisite and pedagogicallyClose) by combining several techniques and knowledge bases, among them Wikipedia, WordNet, WikiTaxonomy, WibiTaxonomy and WikiRelations. LiReWi also proceeds in three steps: first, it maps the topics of the ontology to the knowledge bases used for relationship extraction; then it concurrently runs several relationship extractors, each based on a different technique, to gather candidate relationships; and finally it combines and filters all the results into the final set of pedagogical relationships. Moreover, in the transition from DOM-Sortze to LiDOM Builder, this thesis improves the isA and partOf relationships extracted from document indexes by using Wikipedia as an additional resource (Conde et al., 2014).
    - LiLoWi: this module extracts the IOs, some of them multilingual, not only from the source textbook but also from knowledge bases such as Wikipedia and WordNet. After mapping each topic of the IDO to Wikipedia and WordNet, LiLoWi extracts didactic resources by means of several IO extractors. In the IO extraction process, on the way from DOM-Sortze to LiDOM Builder and before incorporating Wikipedia and WordNet, English was added as a language and evaluated (Conde et al., 2012).
    Regarding evaluation, each module of LiDOM Builder has been tested and evaluated separately, using both gold-standard techniques and expert evaluation, and the improvement that integrating the Wikipedia and WordNet knowledge bases brings to IO extraction has also been assessed; the results obtained are very good in all cases. In summary, LiDOM Builder makes four main contributions to the field of Multilingual Domain Modules:
    - A suitable mechanism for representing Multilingual Domain Modules.
    - The development of LiTeWi, which extracts multilingual terminology for Educational Ontologies from textbooks; the term extractor for English and Spanish is available at https://github.com/Neuw84/LiTe.
    - The development of LiReWi, which extracts pedagogical relationships for Educational Ontologies from textbooks; the Wikipedia/WordNet mapper it uses is available at https://github.com/Neuw84/Wikipedia2WordNet.
    - The development of LiLoWi, which extracts multilingual IOs using the textbook together with the Wikipedia and WordNet knowledge bases
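    The sketch below illustrates LiTeWi's final step, mapping English ontology terms to another language via Wikipedia inter-language links, using the public MediaWiki API. It is an illustration of the idea, not the thesis's implementation; the response field names assume the API's formatversion=2 JSON layout.

```python
# Minimal sketch of cross-lingual term mapping via Wikipedia langlinks,
# illustrating LiTeWi's last step; not the thesis's implementation.
import requests

API = "https://en.wikipedia.org/w/api.php"

def map_term(term, target_lang="es"):
    """Return the target-language Wikipedia title for an English term, if any."""
    params = {
        "action": "query",
        "titles": term,
        "prop": "langlinks",
        "lllang": target_lang,
        "redirects": "1",
        "format": "json",
        "formatversion": "2",
    }
    pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
    links = pages[0].get("langlinks", [])   # empty if no inter-language link
    return links[0]["title"] if links else None

for term in ["Inheritance (object-oriented programming)", "Astronomy"]:
    print(term, "->", map_term(term))
```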

    Distributed Document Clustering and Cluster Summarization in Peer-to-Peer Environments

    Get PDF
    This thesis addresses difficult challenges in distributed document clustering and cluster summarization. Mining large document collections poses many challenges, one of which is the extraction of topics or summaries from documents for the purpose of interpreting clustering results. Another important challenge, driven by new trends in distributed repositories and peer-to-peer computing, is that document data is becoming more distributed. We introduce a solution for interpreting document clusters using keyphrase extraction from multiple documents simultaneously. We also introduce two solutions for the problem of distributed document clustering in peer-to-peer environments, each satisfying a different goal: maximizing local clustering quality through collaboration, and maximizing global clustering quality through cooperation. The keyphrase extraction algorithm, called CorePhrase, efficiently extracts and scores candidate keyphrases from a document cluster. It models document collections as a graph upon which graph mining can extract frequent and significant phrases, which are used to label the clusters. Results show that CorePhrase can extract keyphrases relevant to documents in a cluster with very high accuracy. Although this algorithm can be used to summarize centralized clusters, it is specifically employed within distributed clustering both to boost distributed clustering accuracy and to provide summaries for distributed clusters. The first method for distributed document clustering is called collaborative peer-to-peer document clustering, which models nodes in a peer-to-peer network as collaborative nodes with the goal of improving the quality of individual local clustering solutions. This is achieved through the exchange of local cluster summaries between peers, followed by recommendation of documents to be merged into remote clusters. Results on large sets of distributed document collections show that: (i) this collaboration technique achieves significant improvement in the final clustering of individual nodes; (ii) networks with a larger number of nodes generally achieve greater improvements in clustering after collaboration relative to the initial clustering before collaboration, while tending to achieve lower absolute clustering quality than networks with fewer nodes; and (iii) as more overlap of the data is introduced across the nodes, collaboration tends to have little effect on improving clustering quality. The second method for distributed document clustering is called hierarchically-distributed document clustering. Unlike the collaborative model, this model aims at producing one clustering solution across the whole network. It specifically addresses scalability of network size, and consequently the complexity of distributed clustering, by modeling the distributed clustering problem as a hierarchy of node neighborhoods. Summarization of the global distributed clusters is achieved through a distributed version of the CorePhrase algorithm.
Results on large document sets show that: (i) distributed clustering accuracy is not affected by increasing the number of nodes in single-level networks; (ii) a decent speedup can be achieved by making the hierarchy taller, but at the expense of clustering quality, which degrades higher up the hierarchy; (iii) in networks that grow arbitrarily, data becomes more fragmented across neighborhoods, causing poor centroid generation, which suggests the number of nodes should not be increased beyond a certain level without increasing the dataset size; and (iv) distributed cluster summarization can produce accurate summaries similar to those produced by centralized summarization. The proposed algorithms offer a high degree of flexibility, scalability, and interpretability for large distributed document collections. Achieving the same results using current methodologies requires centralizing the data first, which is sometimes not feasible.
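    To make the cluster-labelling idea concrete, here is a toy sketch of labelling a cluster by phrases shared across its documents, scored by phrase length and frequency. It is a simplified stand-in for CorePhrase, which builds a document-index graph rather than enumerating n-grams as done here; the sample cluster is hypothetical.

```python
# Toy sketch of cluster labelling via shared phrases: a simplified stand-in
# for CorePhrase (which mines a document-index graph, not n-gram sets).
from collections import Counter

def ngrams(text, n):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def label_cluster(docs, max_n=3, top_k=3):
    """Score candidate phrases by frequency across documents, weighted by length."""
    scores = Counter()
    for n in range(1, max_n + 1):
        for doc in docs:
            for phrase in ngrams(doc, n):
                scores[phrase] += n          # longer phrases weigh more
    # Keep only phrases shared by at least two documents in the cluster.
    shared = {p: s for p, s in scores.items()
              if sum(p in ngrams(d, len(p.split())) for d in docs) >= 2}
    return sorted(shared, key=shared.get, reverse=True)[:top_k]

cluster = [
    "distributed document clustering in peer to peer networks",
    "peer to peer document clustering with cluster summaries",
    "summarizing distributed document clustering results",
]
print(label_cluster(cluster))   # e.g. ['document clustering', 'peer to peer', ...]
```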

    Open Data

    Get PDF
    Open data is freely usable, reusable, or redistributable by anybody, provided there are safeguards in place that protect the data’s integrity and transparency. This book describes how data retrieved from public open data repositories can improve the learning qualities of digital networking, particularly performance and reliability. Chapters address such topics as knowledge extraction, Open Government Data (OGD), public dashboards, intrusion detection, and artificial intelligence in healthcare

    Artificial intelligence : A powerful paradigm for scientific research

    Get PDF
    Artificial intelligence (AI), coupled with promising machine learning (ML) techniques well known from computer science, is broadly affecting many aspects of various fields including science and technology, industry, and even our day-to-day life. ML techniques have been developed to analyze high-throughput data with a view to obtaining useful insights, categorizing, predicting, and making evidence-based decisions in novel ways, which will promote the growth of novel applications and fuel the sustained growth of AI. This paper undertakes a comprehensive survey of the development and application of AI in different aspects of fundamental sciences, including information science, mathematics, medical science, materials science, geoscience, life science, physics, and chemistry. The challenges that each discipline of science faces, and the potential of AI techniques to handle these challenges, are discussed in detail. Moreover, we shed light on new research trends entailing the integration of AI into each scientific discipline. The aim of this paper is to provide a broad research guideline on fundamental sciences with potential infusion of AI, to help motivate researchers to deeply understand the state-of-the-art applications of AI-based fundamental sciences, and thereby to help promote the continuous development of these fundamental sciences. Peer reviewed

    A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4

    Full text link
    Large language models (LLMs) are a special class of pretrained language models obtained by scaling model size, pretraining corpus and computation. Because of their large size and pretraining on large volumes of text data, LLMs exhibit special abilities that allow them to achieve remarkable performance without any task-specific training in many natural language processing tasks. The era of LLMs started with the OpenAI GPT-3 model, and the popularity of LLMs has increased exponentially since the introduction of models like ChatGPT and GPT-4. We refer to GPT-3 and its successor OpenAI models, including ChatGPT and GPT-4, as GPT-3 family large language models (GLLMs). With the ever-rising popularity of GLLMs, especially in the research community, there is a strong need for a comprehensive survey that summarizes the recent research progress in multiple dimensions and can guide the research community with insightful future research directions. We start the survey with foundation concepts like transformers, transfer learning, self-supervised learning, pretrained language models and large language models. We then present a brief overview of GLLMs and discuss their performance in various downstream tasks, specific domains and multiple languages. We also discuss the data labelling and data augmentation abilities of GLLMs, their robustness, their effectiveness as evaluators, and finally conclude with multiple insightful future research directions. In summary, this comprehensive survey will serve as a good resource for both academics and industry practitioners to stay updated with the latest research on GPT-3 family large language models. Comment: Preprint under review, 58 pages
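    The survey opens with foundation concepts such as transformers; as a reminder of the core transformer operation, here is a minimal scaled dot-product attention in plain numpy. This is a generic textbook illustration, not code from the survey, with randomly generated query, key and value matrices.

```python
# Scaled dot-product attention, the core transformer operation covered in
# the survey's foundation section; a minimal numpy illustration.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)             # (seq_q, seq_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # weighted mix of value vectors

rng = np.random.default_rng(0)
seq, d_k, d_v = 4, 8, 8
Q = rng.normal(size=(seq, d_k))
K = rng.normal(size=(seq, d_k))
V = rng.normal(size=(seq, d_v))
print(attention(Q, K, V).shape)  # (4, 8)
```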

    A WEB PERSONALIZATION ARTIFACT FOR UTILITY-SENSITIVE REVIEW ANALYSIS

    Get PDF
    Online customer reviews are web content voluntarily posted by the users of a product (e.g. a camera) or service (e.g. a hotel) to express their opinions about it. Online reviews are important resources for businesses and consumers. This dissertation focuses on the important consumer concern of review utility, i.e., the helpfulness or usefulness of online reviews in informing consumer purchase decisions. Review utility concerns consumers because not all online reviews are useful, and the quantity of online reviews for a product or service tends to be very large; manual assessment of review utility is not only time-consuming but also subject to information overload. To address this issue, review helpfulness research (RHR) has become a very active research stream dedicated to studying utility-sensitive review analysis (USRA) techniques for automating review utility assessment. Unfortunately, prior RHR solutions are inadequate, and RHR researchers have called for more suitable USRA approaches. Our research responds to this call by addressing the research problem: What is an adequate USRA approach? We address this problem by offering novel Design Science (DS) artifacts for personalized USRA (PUSRA). Our proposed solution extends not only RHR but also web personalization research (WPR), which studies web-based solutions for personalized web provision. We have evaluated the proposed solution by applying three evaluation methods: analytical, descriptive, and experimental. The evaluations corroborate the practical efficacy of the proposed solution. This research contributes what we believe to be (1) the first DS artifacts in the knowledge body of RHR and WPR, and (2) the first PUSRA contribution to USRA practice. Moreover, we consider our evaluations of the proposed solution the first comprehensive assessment of USRA solutions. In addition, this research contributes to the advancement of decision support research and practice: the proposed solution is a web-based decision support artifact with the capability to substantially improve accurate personalized webpage provision. Website designers can also apply our solution to fundamentally transform their work, adding substantial value to businesses.
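    As a minimal sketch of automated review-utility assessment as the RHR literature frames it, the example below trains a text classifier to predict review helpfulness. The reviews and labels are hypothetical toy data, and a bag-of-words logistic regression is one simple baseline, not the dissertation's Design Science artifact.

```python
# Minimal review-helpfulness baseline: logistic regression over TF-IDF
# features. Hypothetical toy data; not the dissertation's PUSRA artifact.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great camera, sharp lens, and the battery lasts two days of heavy use.",
    "bad",
    "The hotel staff were friendly and check-in took under five minutes.",
    "do not buy!!!",
]
helpful = [1, 0, 1, 0]   # hypothetical labels, e.g. derived from helpfulness votes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, helpful)

# Probability that a new review is helpful.
print(model.predict_proba(["Zoom is noisy above 200mm but photos are crisp."])[:, 1])
```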

    Robust Dialog Management Through A Context-centric Architecture

    Get PDF
    This dissertation presents and evaluates a method of managing spoken dialog interactions with robust attention to fulfilling the human user's goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine's ability to communicate may be hindered by poor reception of utterances, caused by a user's inadequate command of a language and/or faults in the speech recognition facilities. Since speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user's assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
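    A toy sketch of the context-centric idea follows: the active context determines which interpretations of the next utterance are admissible, with a fallback that keeps the conversation going when recognition fails. The contexts and rules are hypothetical; this is not the dissertation's Context-Based Reasoning engine.

```python
# Toy context-centric dialog manager: the active context gates how the next
# utterance is interpreted. Hypothetical contexts; not the dissertation's
# Context-Based Reasoning implementation.
CONTEXTS = {
    "greeting": {
        "keywords": {"hello", "hi"},
        "reply": "Hello! What do you need help with?",
        "next": "task",
    },
    "task": {
        "keywords": {"schedule", "appointment"},
        "reply": "Sure, what day works for you?",
        "next": "task",
    },
}

def respond(utterance, active):
    words = set(utterance.lower().split())
    ctx = CONTEXTS[active]
    if words & ctx["keywords"]:
        return ctx["reply"], ctx["next"]
    # Fallback keeps initiative mixed: ask to rephrase, stay in this context.
    return "Sorry, I did not catch that. Could you rephrase?", active

state = "greeting"
for utt in ["hi there", "I want to schedule an appointment"]:
    reply, state = respond(utt, state)
    print(f"user: {utt}\nagent: {reply}")
```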

    The Future of Information Sciences : INFuture2009 : Digital Resources and Knowledge Sharing

    Get PDF