
    Contextual Understanding in Neural Dialog Systems: the Integration of External Knowledge Graphs for Generating Coherent and Knowledge-rich Conversations

    The integration of external knowledge graphs has emerged as a powerful approach to enriching conversational AI systems, enabling coherent and knowledge-rich conversations. This paper provides an overview of the integration process and highlights its benefits. Knowledge graphs serve as structured representations of information, capturing the relationships between entities through nodes and edges. They offer an organized and efficient means of representing factual knowledge. External knowledge graphs, such as DBpedia, Wikidata, Freebase, and Google's Knowledge Graph, are pre-existing repositories that encompass a wide range of information across various domains. These knowledge graphs are compiled by aggregating data from diverse sources, including online encyclopedias, databases, and structured repositories. To integrate an external knowledge graph into a conversational AI system, a connection needs to be established between the system and the knowledge graph. This can be achieved through APIs or by importing a copy of the knowledge graph into the AI system's internal storage. Once integrated, the conversational AI system can query the knowledge graph to retrieve relevant information when a user poses a question or makes a statement. When analyzing user inputs, the conversational AI system identifies entities or concepts that require additional knowledge. It then formulates queries to retrieve relevant information from the integrated knowledge graph. These queries may involve searching for specific entities, retrieving related entities, or accessing properties and attributes associated with the entities. The obtained information is used to generate coherent and knowledge-rich responses. By integrating external knowledge graphs, conversational AI systems can augment their internal knowledge base and provide more accurate and up-to-date responses. The retrieved information allows the system to extract relevant facts, provide detailed explanations, or offer additional context. This integration empowers AI systems to deliver comprehensive and insightful responses that enhance user experience. As external knowledge graphs are regularly updated with new information and improvements, conversational AI systems should ensure their integrated knowledge graphs remain current. This can be achieved through periodic updates, either by synchronizing the system's internal representation with the external knowledge graph or by querying the external knowledge graph in real time.
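    As a concrete illustration of the querying step described above, the sketch below retrieves a few facts about an entity from Wikidata's public SPARQL endpoint. The endpoint URL and query pattern follow Wikidata's documented conventions, but the choice of entity (Q937, Albert Einstein) is an assumption made for the example, not taken from the paper.

```python
# Minimal sketch: querying an external knowledge graph (Wikidata) over
# its public SPARQL endpoint. The entity Q937 is an illustrative choice.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?wdLabel ?valueLabel WHERE {
  wd:Q937 ?p ?value .
  ?wd wikibase:directClaim ?p .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
} LIMIT 10
"""

def fetch_facts() -> list[tuple[str, str]]:
    """Return (property label, value label) pairs for the entity."""
    resp = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "kg-dialog-demo/0.1"},  # Wikidata asks for one
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return [(r["wdLabel"]["value"], r["valueLabel"]["value"]) for r in rows]

# A dialog system would call this after spotting the entity in user input:
for prop, value in fetch_facts():
    print(f"{prop}: {value}")
```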

    A comparative study of Chinese and European Internet companies' privacy policy based on knowledge graph

    Privacy policies are not only a means of industry self-regulation, but also a way for users to protect their online privacy. The European Union (EU) promulgated the General Data Protection Regulation (GDPR) on May 25th, 2018, while China has no explicit personal data protection law. Based on knowledge graphs, this thesis makes a comparative analysis of Chinese and European Internet companies’ privacy policies and, drawing on the relevant provisions of the GDPR, puts forward suggestions on the privacy policies of Internet companies, so as to address the problem of personal information protection to a certain extent. First, the thesis sets out the process and methods of knowledge graph construction and analysis: data preprocessing, entity extraction, storage in a graph database, and querying. Data preprocessing includes word segmentation and part-of-speech tagging, as well as text format adjustment. Entity extraction is the core of knowledge graph construction in this thesis; based on the principle of Conditional Random Fields (CRF), the CRF++ toolkit is used for entity extraction. Subsequently, the extracted entities are exported in “.csv” format and imported into the graph database Neo4j, generating the knowledge graph. Cypher query statements can then be used to query information in the graph database. The next part compares and analyzes the privacy policies of Internet companies in China and Europe. After sampling, the overall characteristics of the privacy policies of Chinese and European Internet companies are compared. Following the construction process described above, the “collected information” and “contact us” sections of the privacy policies are used to construct the knowledge graphs. Finally, combined with the relevant content of the GDPR, the results of the comparative analysis are further discussed and suggestions are proposed. Although Chinese Internet companies’ privacy policies have some merits, they fall far short of those of European Internet companies, and China needs to enact a personal data protection law suited to its national conditions. This thesis applies knowledge graphs to privacy policy research, analyzes Internet companies’ privacy policies from a comparative perspective, discusses the comparative results against the GDPR, puts forward suggestions, and provides a reference for the formulation of China's personal information protection law.
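    As a concrete illustration of the storage-and-query step, the following sketch loads extracted entities into Neo4j and queries them with Cypher via the official Python driver. The node labels, property names, relationship type, and connection details are assumptions for the example, not taken from the thesis.

```python
# Hedged sketch of the thesis's storage-and-query step: extracted
# (company, collected-information) pairs go into Neo4j, then Cypher
# queries them back. Labels and credentials are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_entity(tx, company: str, info_type: str):
    # One (Company)-[:COLLECTS]->(Information) edge per extracted pair.
    tx.run(
        "MERGE (c:Company {name: $company}) "
        "MERGE (i:Information {type: $info_type}) "
        "MERGE (c)-[:COLLECTS]->(i)",
        company=company, info_type=info_type,
    )

def collected_info(tx, company: str) -> list[str]:
    # Cypher query: what information does this company's policy collect?
    result = tx.run(
        "MATCH (c:Company {name: $company})-[:COLLECTS]->(i:Information) "
        "RETURN i.type AS t",
        company=company,
    )
    return [record["t"] for record in result]

with driver.session() as session:
    session.execute_write(load_entity, "ExampleCorp", "device identifiers")
    print(session.execute_read(collected_info, "ExampleCorp"))
driver.close()
```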

    AD-AutoGPT: An Autonomous GPT for Alzheimer's Disease Infodemiology

    In this pioneering study, inspired by AutoGPT, the state-of-the-art open-source application based on the GPT-4 large language model, we develop a novel tool called AD-AutoGPT which can conduct data collection, processing, and analysis of complex health narratives about Alzheimer's Disease in an autonomous manner via users' textual prompts. We collated comprehensive data from a variety of news sources, including the Alzheimer's Association, BBC, Mayo Clinic, and the National Institute on Aging, since June 2022, enabling the autonomous execution of robust trend analyses, visualization of intertopic distance maps, and identification of salient terms pertinent to Alzheimer's Disease. This approach has yielded not only a quantifiable metric of relevant discourse but also valuable insights into public focus on Alzheimer's Disease. This application of AD-AutoGPT in public health signifies the transformative potential of AI in facilitating a data-rich understanding of complex health narratives like Alzheimer's Disease in an autonomous manner, setting the groundwork for future AI-driven investigations in global health landscapes.
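    The abstract does not specify the analysis code; as a hedged sketch of one step such a pipeline could automate, the snippet below surfaces salient terms from a handful of collected news texts with TF-IDF. Both the method (scikit-learn's TfidfVectorizer) and the toy documents are assumptions for illustration.

```python
# Hedged sketch of salient-term identification over collected news
# texts. TF-IDF is an assumed method, not prescribed by the paper;
# the three documents below are invented stand-ins for scraped news.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "New biomarker research advances early Alzheimer's disease detection.",
    "Caregiver support programs expand for dementia and Alzheimer's patients.",
    "Regulators review an anti-amyloid drug for early-stage Alzheimer's disease.",
]

vectorizer = TfidfVectorizer(stop_words="english", max_features=1000)
tfidf = vectorizer.fit_transform(docs)

# Rank terms by their summed TF-IDF weight across the corpus.
weights = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
salient = sorted(zip(terms, weights), key=lambda tw: tw[1], reverse=True)[:5]
for term, weight in salient:
    print(f"{term}: {weight:.3f}")
```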

    Automatic User Profile Construction for a Personalized News Recommender System Using Twitter

    Modern society has grown accustomed to reading news online, but the huge corpus of information available poses a challenge to users trying to find relevant articles. A hybrid system, “Personalized News Recommender Using Twitter”, has been developed to recommend articles to a user based on both the popularity of the articles and the profile of the user. The hybrid system is a fusion of a collaborative recommender system built from tweets on the Twitter public timeline and a content recommender system based on the user's past interests, summarized in their conceptual user profile. In previous work, a user's profile was built manually by asking the user to explicitly rate his/her interest in each category by entering a score for it. This is not a reliable approach, as the user may not be able to accurately specify their interest in a category with a number. In this work, an automatic profile builder was developed that uses an implicit approach to build the user's profile. The specificity of the user profile was also increased to incorporate fifteen categories versus seven in the previous system. We conclude with an experiment studying the impact of the automatic profile builder and the increased set of categories on the accuracy of the hybrid news recommender system.
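    As a minimal sketch of the hybrid idea, the snippet below blends an article's popularity on Twitter with the user's implicit category profile. The weighting scheme, category names, and scores are assumptions for illustration, not the system's actual parameters.

```python
# Hedged sketch of hybrid scoring: blend article popularity
# (collaborative signal from tweet counts) with the user's implicit
# category profile (content signal). All numbers below are invented.
ALPHA = 0.6  # assumed weight on the profile score; 1-ALPHA on popularity

def hybrid_score(article, profile, max_tweets):
    popularity = article["tweet_count"] / max_tweets   # normalized to [0, 1]
    interest = profile.get(article["category"], 0.0)   # implicit interest
    return ALPHA * interest + (1 - ALPHA) * popularity

# Implicit profile: interest accumulated from reading behavior, over
# fifteen categories in the improved system (three shown here).
profile = {"technology": 0.8, "sports": 0.1, "politics": 0.4}

articles = [
    {"title": "New phone released", "category": "technology", "tweet_count": 120},
    {"title": "Cup final recap", "category": "sports", "tweet_count": 300},
]
max_tweets = max(a["tweet_count"] for a in articles)
ranked = sorted(articles, key=lambda a: hybrid_score(a, profile, max_tweets),
                reverse=True)
print([a["title"] for a in ranked])
```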

    Learning Ontology Relations by Combining Corpus-Based Techniques and Reasoning on Data from Semantic Web Sources

    The manual construction of formal domain conceptualizations (ontologies) is labor-intensive. Ontology learning, by contrast, provides (semi-)automatic ontology generation from input data such as domain text. This thesis proposes a novel approach for learning labels of non-taxonomic ontology relations. It combines corpus-based techniques with reasoning on Semantic Web data. Corpus-based methods apply vector space similarity of verbs co-occurring with labeled and unlabeled relations to calculate relation label suggestions from a set of candidates. A meta ontology in combination with Semantic Web sources such as DBpedia and OpenCyc allows reasoning to improve the suggested labels. An extensive formal evaluation demonstrates the superior accuracy of the presented hybrid approach
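    A minimal sketch of the corpus-based step, with invented verb counts: each relation is represented by a vector of verbs that co-occur with it, and an unlabeled relation receives the label of its most similar labeled neighbor. The verbs, relation names, and counts are assumptions, not data from the thesis.

```python
# Hedged sketch: suggest a label for an unlabeled ontology relation by
# vector-space similarity of co-occurring verbs. Counts are invented.
import numpy as np

VERBS = ["employs", "produces", "acquires", "supplies"]

# Verb co-occurrence vectors for relations whose labels are known.
labeled = {
    "hasEmployee":  np.array([9.0, 0.0, 1.0, 0.0]),
    "manufactures": np.array([0.0, 8.0, 0.0, 2.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def suggest_labels(unlabeled_vec):
    """Rank candidate labels by similarity to the unlabeled relation."""
    scores = {label: cosine(unlabeled_vec, vec) for label, vec in labeled.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# An unlabeled relation that mostly co-occurs with "produces"/"supplies":
print(suggest_labels(np.array([0.0, 6.0, 1.0, 3.0])))
```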

    Applying Wikipedia to Interactive Information Retrieval

    There are many opportunities to improve the interactivity of information retrieval systems beyond the ubiquitous search box. One idea is to use knowledge bases—e.g. controlled vocabularies, classification schemes, thesauri and ontologies—to organize, describe and navigate the information space. These resources are popular in libraries and specialist collections, but have proven too expensive and narrow to be applied to everyday web-scale search. Wikipedia has the potential to bring structured knowledge into more widespread use. This online, collaboratively generated encyclopaedia is one of the largest and most consulted reference works in existence. It is broader, deeper and more agile than the knowledge bases put forward to assist retrieval in the past. Rendering this resource machine-readable is a challenging task that has captured the interest of many researchers. Many see it as a key step required to break the knowledge acquisition bottleneck that crippled previous efforts. This thesis claims that the roadblock can be sidestepped: Wikipedia can be applied effectively to open-domain information retrieval with minimal natural language processing or information extraction. The key is to focus on gathering and applying human-readable rather than machine-readable knowledge. To demonstrate this claim, the thesis tackles three separate problems: extracting knowledge from Wikipedia; connecting it to textual documents; and applying it to the retrieval process. First, we demonstrate that a large thesaurus-like structure can be obtained directly from Wikipedia, and that accurate measures of semantic relatedness can be efficiently mined from it. Second, we show that Wikipedia provides the necessary features and training data for existing data mining techniques to accurately detect and disambiguate topics when they are mentioned in plain text. Third, we provide two systems and user studies that demonstrate the utility of the Wikipedia-derived knowledge base for interactive information retrieval.
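    As a sketch of the kind of relatedness measure involved, the snippet below scores two articles by the overlap of their incoming links, in the style of the Normalized Google Distance adapted to Wikipedia's link graph. The link sets and article count are invented, and the thesis's exact formulation may differ.

```python
# Hedged sketch of link-based semantic relatedness over Wikipedia:
# two articles are related to the extent that the same articles link
# to both. The tiny in-link sets below are invented for illustration.
import math

def relatedness(inlinks_a: set, inlinks_b: set, total_articles: int) -> float:
    """Return relatedness in [0, 1]; higher means more related."""
    a, b, shared = len(inlinks_a), len(inlinks_b), len(inlinks_a & inlinks_b)
    if shared == 0:
        return 0.0
    distance = (math.log(max(a, b)) - math.log(shared)) / \
               (math.log(total_articles) - math.log(min(a, b)))
    return max(0.0, 1.0 - distance)

# Articles linking to "Dog", "Wolf", and "Spreadsheet" (invented).
dog = {"Pet", "Canine", "Mammal", "Domestication"}
wolf = {"Canine", "Mammal", "Predator"}
spreadsheet = {"Software", "Table"}

print(relatedness(dog, wolf, total_articles=6_000_000))        # high
print(relatedness(dog, spreadsheet, total_articles=6_000_000)) # 0.0
```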

    Semantic Web for Everyone: Exploring Semantic Web Knowledge Bases via Contextual Tag Clouds and Linguistic Interpretations

    The amount of Semantic Web data is huge and still keeps growing rapidly today. However, most users are still unable to use a Semantic Web Knowledge Base (KB) effectively as desired, due to the lack of background knowledge. Furthermore, the data is usually heterogeneous, incomplete, and even contains errors, which further impairs understanding of the dataset. How to quickly familiarize users with the ontology and data in a KB is an important research challenge for the Semantic Web community.

    The core part of our proposed resolution to the problem is the contextual tag cloud system: a novel application that helps users explore a large-scale RDF (Resource Description Framework) dataset. The tags in our system are ontological terms (classes and properties), and a user can construct a context with a set of tags that defines a subset of instances. In the contextual tag cloud, the font size of each tag depends on the number of instances that are associated with both that tag and all tags in the context. Each contextual tag cloud serves as a summary of the distribution of relevant data, and by changing the context, the user can quickly gain an understanding of patterns in the data. Furthermore, the user can choose to include different RDFS entailment regimes in the calculations of tag sizes, thereby understanding the impact of semantics on the data. To resolve the key challenge of scalability, we combine a scalable preprocessing approach with a specially constructed inverted index and co-occurrence matrix, use three approaches to prune unnecessary counts for faster online computations, and design a paging and streaming interface. Via experimentation, we show how much our design choices benefit the responsiveness of our system. We conducted a preliminary user study on this system and found that novice participants felt the system provided a good means to investigate the data and were able to complete assigned tasks more easily than with a baseline interface.

    We then extend the definition of tags to more general categories, particularly including property values, chained property values, or functions on these values. In a different scenario and with these more general tags, we find the system can be used to discover interesting value space patterns. To adapt to the different dataset, we modify the infrastructure with a new indexing data structure and propose two strategies for online queries, chosen per request, in order to maintain the responsiveness of the system.

    In addition, we consider other approaches to help users locate classes via natural language inputs. Using an external lexicon, Word Sense Disambiguation (WSD) on the label words of classes is one way to understand these classes. We propose a novel WSD approach with a probability model, decompose the problem formula into small computable pieces, and propose ways to estimate the values of these pieces. For the other approach, instead of relying on external sources, we investigate how to retrieve query-relevant classes using the annotations of instances associated with classes in the knowledge base. We propose a general framework for this approach, which consists of two phases: the keyword query is first used to locate relevant instances; then we induce the classes given this list of weighted matched instances.

    Following the description of the accomplished work, I propose some important future work for extending the current system, and finally conclude the dissertation.
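    A minimal sketch of the core computation, assuming a toy inverted index from tags to instance sets: a tag's size under a context is the number of instances carrying both that tag and every tag in the context. The tags and instance IDs below are invented; the dissertation's actual index and pruning strategies are far more elaborate.

```python
# Hedged sketch of contextual tag cloud sizing. An inverted index maps
# each tag (class/property) to its instance set; context tags restrict
# the instance set via intersection. All data here is invented.
index = {
    "Person":     {1, 2, 3, 4, 5},
    "Athlete":    {2, 3, 5},
    "birthPlace": {1, 2, 3},
    "team":       {2, 5},
}

def contextual_sizes(context: set[str]) -> dict[str, int]:
    # Instances matching the whole context (all instances if empty).
    matched = set.intersection(*(index[t] for t in context)) if context \
              else set.union(*index.values())
    # Each tag's font size is driven by its co-occurrence count.
    return {tag: len(insts & matched) for tag, insts in index.items()}

print(contextual_sizes(set()))        # global tag cloud
print(contextual_sizes({"Athlete"}))  # cloud within the Athlete context
```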

    Improvements to GeoQA, a Question Answering system for Geospatial Questions

    We study the recently proposed GeoQA system, the first template-based question answering system for linked geospatial data. We improve it by exploiting the schema information of the knowledge bases it uses, adding templates for more complex questions, and improving the natural language processing module so that it recognizes more question patterns. This work is also an attempt to collect, study, and compare other question answering systems, such as QUINT, TEMPO, and NEQA, the Qanary methodology, and the Frankenstein framework for question answering systems.
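    As an illustration of template-based geospatial question answering (not GeoQA's actual code), the sketch below matches one question pattern with a regular expression and fills a GeoSPARQL ASK template. The question pattern, prefixes, and property paths are assumptions for the example.

```python
# Hedged sketch of template-based QA over linked geospatial data:
# a regex maps an "Is X in Y?" question to a GeoSPARQL ASK template.
# The pattern and property paths are illustrative, not GeoQA's own.
import re

TEMPLATE = """\
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
ASK {{
  ?a rdfs:label "{name}" ; geo:hasGeometry/geo:asWKT ?wktA .
  ?b rdfs:label "{other}" ; geo:hasGeometry/geo:asWKT ?wktB .
  FILTER(geof:sfWithin(?wktA, ?wktB))
}}"""

def question_to_query(question: str) -> str | None:
    """Match 'Is X in Y?' questions and instantiate the ASK template."""
    m = re.match(r"Is (?P<name>[\w ]+) in (?P<other>[\w ]+)\?$", question)
    if m is None:
        return None  # no template matched this question pattern
    return TEMPLATE.format(name=m["name"], other=m["other"])

print(question_to_query("Is Oxford in England?"))
```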