
    Towards semantic-aware and ontology-based e-Government service integration - an applicative case study of Saudi Arabia's King Abdullah Scholarship program

    Service integration plays a key role in e-government development: it improves the quality of e-government services by enabling access to services across different government agencies through a single portal. This paper proposes a conceptual framework for ontology-based e-government service integration, using Saudi Arabia's King Abdullah Scholarship Program (SAKASP) as a case study. SAKASP is a multi-domain program in which students must collect information from various Ministries to complete applications, and the administering authority must verify the information supplied by the Ministries. The current implementation of SAKASP is clumsy because it is a mixture of online submission and manual collection and verification of information; its time-consuming and tedious procedures are inconvenient for applicants and inefficient for administrators. The proposed framework provides an integrated service by employing semantic web services (SWS) and ontology, improving the current implementation of SAKASP by automatically collecting and processing the information related to a given application. The article includes a typical scenario that demonstrates the workflow of the framework. This framework is applicable to other multi-domain e-government services. © Springer-Verlag Berlin Heidelberg 2010
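
    The integration idea, exposing facts held by different ministries through one shared ontology so that an application can be checked in a single query, can be pictured with a small RDF sketch. The sketch below is an illustration only, not the paper's framework: the namespace, class, and property names (sakasp, Applicant, hasDegree, hasValidPassport) are invented for the example, which uses the rdflib library.

        # Illustrative sketch only: hypothetical ministry facts merged under one
        # shared vocabulary and verified with a single SPARQL query (rdflib).
        from rdflib import Graph, Literal, Namespace, RDF

        SCH = Namespace("http://example.org/sakasp#")  # hypothetical shared vocabulary

        g = Graph()
        applicant = SCH["applicant/123"]
        g.add((applicant, RDF.type, SCH.Applicant))
        g.add((applicant, SCH.hasDegree, Literal("BSc")))        # from Ministry of Education
        g.add((applicant, SCH.hasValidPassport, Literal(True)))  # from Ministry of Interior

        verified = g.query("""
            PREFIX sch: <http://example.org/sakasp#>
            SELECT ?a WHERE {
                ?a a sch:Applicant ;
                   sch:hasDegree ?degree ;
                   sch:hasValidPassport true .
            }""")
        print(len(verified))  # 1 when every required fact is present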

    Cross-lingual knowledge linking across wiki knowledge bases

    Wikipedia has become one of the largest knowledge bases on the Web, attracting 513 million page views per day in January 2012. However, one critical issue for Wikipedia is that articles in different languages are very unbalanced. For example, the number of English Wikipedia articles has reached 3.8 million, while the number of Chinese articles is still less than half a million, and there are only 217 thousand cross-lingual links between articles of the two languages. On the other hand, there are more than 3.9 million Chinese wiki articles on Baidu Baike and Hudong.com, two popular encyclopedias in Chinese. One important question is how to link the knowledge entries distributed across different knowledge bases. This would immensely enrich the information in online knowledge bases and benefit many applications. In this paper, we study the problem of cross-lingual knowledge linking and present a linkage factor graph model. Features are defined according to some interesting observations. Experiments on the Wikipedia data set show that our approach achieves a high precision of 85.8% with a recall of 88.1%. The approach found 202,141 new cross-lingual links between English Wikipedia and Baidu Baike.
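
    As a rough illustration of the kind of feature such a linkage model can exploit, the sketch below scores a candidate English-Chinese article pair by the overlap of their already-linked neighbours. It is not the paper's factor graph model; the function name and data structures are assumptions made for the example.

        # Simplified sketch of one linking feature: how many of an English article's
        # outlinks have known Chinese counterparts that also appear among the
        # Chinese article's outlinks. Not the paper's linkage factor graph model.
        def neighbour_overlap(en_article, zh_article, en_links, zh_links, known_pairs):
            """en_links/zh_links: article -> set of outlinked articles;
            known_pairs: existing English -> Chinese cross-lingual links."""
            mapped = {known_pairs[e] for e in en_links.get(en_article, set())
                      if e in known_pairs}
            if not mapped:
                return 0.0
            return len(mapped & zh_links.get(zh_article, set())) / len(mapped)

        # A joint model would combine several such features over all candidate
        # pairs rather than thresholding a single score per pair.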

    Interactive Contrastive Learning for Self-supervised Entity Alignment

    Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments. The current SOTA self-supervised EA method draws inspiration from contrastive learning, originally designed in computer vision based on instance discrimination and contrastive loss, and suffers from two shortcomings. First, it puts unidirectional emphasis on pushing sampled negative entities far away rather than pulling positively aligned pairs close, as is done in well-established supervised EA. Second, KGs contain rich side information (e.g., entity descriptions), and how to effectively leverage that information has not been adequately investigated in self-supervised EA. In this paper, we propose an interactive contrastive learning model for self-supervised EA. The model not only encodes the structures and semantics of entities (including entity name, entity description, and entity neighborhood), but also conducts cross-KG contrastive learning by building pseudo-aligned entity pairs. Experimental results show that our approach outperforms the previous best self-supervised results by a large margin (over 9% average improvement) and performs on par with previous SOTA supervised counterparts, demonstrating the effectiveness of interactive contrastive learning for self-supervised EA. Comment: Accepted by CIKM 202
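
    The bidirectional pull described above can be illustrated with a symmetric InfoNCE-style loss over embeddings of pseudo-aligned pairs. This is a generic sketch of such a loss, not the paper's full model; the encoder producing the embeddings and the temperature value are assumptions.

        # Minimal sketch of a symmetric contrastive loss over pseudo-aligned pairs.
        # emb_kg1[i] and emb_kg2[i] are embeddings of the i-th pseudo-aligned pair,
        # produced by any KG encoder (structure plus name/description features).
        import torch
        import torch.nn.functional as F

        def bidirectional_contrastive_loss(emb_kg1, emb_kg2, temperature=0.05):
            z1 = F.normalize(emb_kg1, dim=-1)
            z2 = F.normalize(emb_kg2, dim=-1)
            logits = z1 @ z2.t() / temperature                    # cosine similarities
            targets = torch.arange(z1.size(0), device=z1.device)  # i-th pair matches itself
            # Pull aligned pairs together and push other in-batch entities apart,
            # in both directions: KG1 -> KG2 and KG2 -> KG1.
            return 0.5 * (F.cross_entropy(logits, targets)
                          + F.cross_entropy(logits.t(), targets))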

    Ventricular Fibrillation and Tachycardia Detection Using Features Derived from Topological Data Analysis

    A rapid and accurate detection of ventricular arrhythmias is essential to take appropriate therapeutic actions when cardiac arrhythmias occur. Furthermore, accurate discrimination between arrhythmias is also important, given that the required shocking therapy is not the same. In this work, the main novelty is the use of the mathematical method known as Topological Data Analysis (TDA) to generate new types of features which can contribute to improving the detection and classification performance for cardiac arrhythmias such as Ventricular Fibrillation (VF) and Ventricular Tachycardia (VT). The electrocardiographic (ECG) signals used for this evaluation were obtained from the standard MIT-BIH and AHA databases. Two types of input data to the classifier are evaluated: TDA features and the Persistence Diagram Image (PDI). Using the reduced set of TDA-obtained features, a high average accuracy near 99% was observed when discriminating four types of rhythms (98.68% for VF, 99.05% for VT, 98.76% for normal sinus, and 99.09% for other rhythms), with specificity values higher than 97.16% in all cases. In addition, a higher accuracy of 99.51% was obtained when discriminating between shockable (VT/VF) and non-shockable rhythms (99.03% sensitivity and 99.67% specificity). These results show that the use of TDA-derived geometric features, combined in this case with the k-Nearest Neighbor (kNN) classifier, raises the classification performance above the results of previous works. Considering that these results were achieved without preselection of ECG episodes, it can be concluded that these features may be successfully introduced in Automated External Defibrillation (AED) and Implantable Cardioverter Defibrillation (ICD) therapies.
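
    A minimal sketch of this kind of pipeline is shown below: persistence diagrams (as produced by any TDA library from an ECG segment) are reduced to a few summary statistics and fed to a kNN classifier. The feature definitions are assumptions for illustration, not the feature set used in the paper.

        # Minimal sketch: summarise a persistence diagram (array of (birth, death)
        # points) into a few statistics and classify rhythms with k-nearest
        # neighbours. Feature choices here are illustrative only.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def diagram_features(diagram):
            if len(diagram) == 0:
                return np.zeros(4)
            lifetimes = diagram[:, 1] - diagram[:, 0]
            return np.array([lifetimes.max(), lifetimes.sum(),
                             lifetimes.mean(), float(len(lifetimes))])

        def train_knn(diagrams, labels, k=5):
            """diagrams: per-segment persistence diagrams; labels: rhythm classes."""
            X = np.stack([diagram_features(d) for d in diagrams])
            return KNeighborsClassifier(n_neighbors=k).fit(X, labels)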

    Early Alert of At-Risk Students: An Ontology-Driven Framework

    As higher education continues to adapt to the constantly shifting conditions that society places on institutions, the enigma of student attrition continues to trouble universities. Early alerts for students who are academically at risk have been introduced as a method for addressing student attrition at these institutions. Early alert systems are designed to provide academically at-risk students with a prompt indication so that they may correct their performance and make progress towards successful semester completion. Many early alert systems have been introduced and implemented at various institutions with varying levels of success. Currently, early alert systems employ different techniques for identifying students who may be at risk. These techniques range from using machine learning algorithms to predict students who may become at risk, to more manual methods where professors are responsible for assigning at-risk tags to students in order to notify them. This thesis introduces an ontology-driven framework for early alert reporting of at-risk students. More precisely, we determine early alerts for at-risk students with an ontology-driven framework employing situational awareness. Ontology-driven frameworks allow us to formalize situations in a way that is similar to the human interpretation of situational awareness. The ontology presented is constructed using OWL, the Web Ontology Language. The use of this language facilitates the description of and reasoning about situations, as it is a widely supported language with computable semantics. In this work we consider factors such as advisor notes, learning management system interaction, and other factors related to student attrition to assign at-risk tags to students who may be at academic risk.
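
    The sort of OWL vocabulary such a framework rests on can be sketched in a few triples. The class and property names below (AtRiskSituation, involvesStudent, and so on) are illustrative assumptions rather than the thesis ontology; the example uses rdflib.

        # Illustrative sketch of a tiny OWL vocabulary for at-risk situations.
        from rdflib import OWL, RDF, RDFS, Graph, Literal, Namespace

        EX = Namespace("http://example.org/earlyalert#")  # hypothetical namespace
        g = Graph()
        g.bind("ex", EX)

        # Class and property declarations
        for cls in (EX.Student, EX.AtRiskSituation, EX.AdvisorNote, EX.LmsActivity):
            g.add((cls, RDF.type, OWL.Class))
        g.add((EX.involvesStudent, RDF.type, OWL.ObjectProperty))
        g.add((EX.involvesStudent, RDFS.domain, EX.AtRiskSituation))
        g.add((EX.involvesStudent, RDFS.range, EX.Student))

        # One asserted situation, e.g. flagged from low LMS interaction
        g.add((EX.situation1, RDF.type, EX.AtRiskSituation))
        g.add((EX.situation1, EX.involvesStudent, EX.student42))
        g.add((EX.situation1, RDFS.comment, Literal("Low LMS activity in week 3")))

        print(g.serialize(format="turtle"))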

    Matcher Composition Methods for Automatic Schema Matching

    We address the problem of automating the process of deciding whether two data schema elements match (that is, refer to the same actual object or concept), and propose several methods for combining evidence computed by multiple basic matchers. One class of methods uses Bayesian networks to account for the conditional dependency between the similarity values produced by individual matchers that use the same or similar information, so as to avoid overconfidence in match probability estimates and improve the accuracy of matching. Another class of methods relies on optimization switches that mitigate this dependency in a domain-independent manner. Experimental results under several testing protocols suggest that the matching accuracy of the Bayesian composite matchers can significantly exceed that of the individual component matchers, and the careful selection of optimization switches can improve matching accuracy even further.
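
    To make the composition idea concrete, the sketch below combines per-matcher match probabilities under a conditional-independence assumption, which is precisely the assumption the Bayesian-network composition is designed to relax; it should therefore be read as a baseline illustration, not the paper's method. The function names and prior value are assumptions.

        # Baseline sketch: combine per-matcher posteriors P(match | matcher score)
        # as if the matchers were conditionally independent. The paper's Bayesian
        # networks exist precisely to model the dependencies this ignores.
        import numpy as np

        def _logit(q):
            return np.log(q / (1 - q))

        def combine_posteriors(posteriors, prior=0.1):
            """posteriors: one P(match | evidence) per matcher, all sharing `prior`."""
            p = np.clip(np.asarray(posteriors, dtype=float), 1e-6, 1 - 1e-6)
            log_odds = _logit(p).sum() - (len(p) - 1) * _logit(prior)
            return 1.0 / (1.0 + np.exp(-log_odds))

        # e.g. name-similarity, datatype, and structural matchers on one element pair
        print(combine_posteriors([0.9, 0.7, 0.6]))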

    A Gauss function based approach for unbalanced ontology matching

    Ontology matching, aiming to obtain semantic correspondences between two ontologies, has played a key role in data exchange, data integration and metadata management. Among numerous matching scenarios, especially applications across multiple domains, we observe an important problem, denoted unbalanced ontology matching, which requires finding matches between an ontology describing local domain knowledge and another ontology covering information over multiple domains; this problem is not well studied in the community. In this paper, we propose a novel Gauss function based ontology matching approach to deal with this unbalanced ontology matching issue. Given a relatively lightweight ontology which represents the local domain knowledge, we extract a "similar" sub-ontology from the corresponding heavyweight ontology and then carry out the matching procedure between this lightweight ontology and the newly generated sub-ontology. The sub-ontology generation is based on the influences between concepts in the heavyweight ontology, and we propose a Gauss function based method to properly calculate the influence values between concepts. In addition, we perform an extensive experiment to verify the effectiveness and efficiency of our proposed approach using the OAEI 2007 tasks. Experimental results clearly demonstrate that our solution outperforms existing methods in terms of precision, recall and elapsed time.
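
    One way to picture the influence-based sub-ontology extraction is to score heavyweight-ontology concepts by a Gaussian of their graph distance to already-anchored concepts and keep the highest-scoring ones. The sketch below illustrates that idea only; the exact influence formula, the sigma value and the graph representation are assumptions, not the paper's definitions.

        # Illustrative sketch: Gaussian-of-distance influence scores used to carve
        # a sub-ontology out of a heavyweight ontology (networkx graph of concepts).
        import math
        import networkx as nx

        def gaussian_influence(distance, sigma=2.0):
            return math.exp(-(distance ** 2) / (2 * sigma ** 2))

        def extract_sub_ontology(concept_graph, anchors, keep=100, sigma=2.0):
            """anchors: heavyweight concepts already related to the lightweight ontology."""
            scores = {}
            for anchor in anchors:
                lengths = nx.single_source_shortest_path_length(concept_graph, anchor)
                for node, dist in lengths.items():
                    scores[node] = scores.get(node, 0.0) + gaussian_influence(dist, sigma)
            top = sorted(scores, key=scores.get, reverse=True)[:keep]
            return concept_graph.subgraph(top)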