Algorithm for Adapting Cases Represented in a Tractable Description Logic
Case-based reasoning (CBR) based on description logics (DLs) has gained a lot
of attention lately. Adaptation is a basic task in the CBR inference that can
be modeled as the knowledge base revision problem and solved in propositional
logic. However, in DLs it remains a challenging problem, since existing revision
operators only work well for strictly restricted DLs of the \emph{DL-Lite}
family, and it is difficult to design a revision algorithm which is
syntax-independent and fine-grained. In this paper, we present a new method for
adaptation based on the DL . Following the idea of
adaptation as revision, we first extend the logical basis for describing
cases from propositional logic to the DL , and present a
formalism for adaptation based on . Then we present an
adaptation algorithm for this formalism and demonstrate that our algorithm is
syntax-independent and fine-grained. Our work provides a logical basis for
adaptation in CBR systems where cases and domain knowledge are described by the
tractable DL .
Comment: 21 pages. ICCBR 201
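The abstract's starting point, adaptation modeled as knowledge base revision in propositional logic, can be made concrete with a small sketch. The following is a minimal model-based (Dalal-style) revision operator chosen here for illustration; the paper's own algorithm works in a description logic, not in propositional logic, and the variable names are hypothetical:

```python
from itertools import product

def models(formula, variables):
    """Enumerate truth assignments satisfying `formula`.
    An assignment is returned as the frozenset of variables set to True;
    `formula` is a predicate over a dict mapping variable -> bool."""
    out = []
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if formula(assignment):
            out.append(frozenset(v for v in variables if assignment[v]))
    return out

def dalal_revise(kb_models, new_models):
    """Dalal revision: keep the models of the new information that lie at
    minimal Hamming distance from some model of the old knowledge base."""
    def dist(m1, m2):
        return len(m1 ^ m2)  # symmetric difference = number of flipped atoms
    best = min(dist(m, k) for m in new_models for k in kb_models)
    return [m for m in new_models
            if min(dist(m, k) for k in kb_models) == best]
```

For example, if a retrieved case says the car is drivable and not broken, and the target problem states it is broken, revision keeps the closest models of the new information, changing as little of the case as possible.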
Combinatorial Games with a Pass: A dynamical systems approach
By treating combinatorial games as dynamical systems, we are able to address
a longstanding open question in combinatorial game theory, namely, how the
introduction of a "pass" move into a game affects its behavior. We consider two
well known combinatorial games, 3-pile Nim and 3-row Chomp. In the case of Nim,
we observe that the introduction of the pass dramatically alters the game's
underlying structure, rendering it considerably more complex, while for Chomp,
the pass move is found to have relatively minimal impact. We show how these
results can be understood by recasting these games as dynamical systems
describable by dynamical recursion relations. From these recursion relations we
are able to identify underlying structural connections between these "games
with passes" and a recently introduced class of "generic (perturbed) games."
This connection, together with a (non-rigorous) numerical stability analysis,
allows one to understand and predict the effect of a pass on a game.
Comment: 39 pages, 13 figures, published version
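The effect of a pass move can also be observed directly with an ordinary memoized game-tree search. The sketch below is not the paper's dynamical-systems recursion; it implements one plausible convention (a single shared pass, usable once per game and not from the empty position; conventions for "Nim with a pass" vary):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(piles, pass_available):
    """True if the player to move wins this Nim position under normal play
    (the player who cannot move loses). `piles` is a tuple of pile sizes;
    `pass_available` says whether the single shared pass is still unused."""
    moves = []
    for i, p in enumerate(piles):
        for take in range(1, p + 1):  # remove 1..p counters from pile i
            nxt = list(piles)
            nxt[i] -= take
            moves.append((tuple(sorted(nxt)), pass_available))
    if pass_available and any(piles):
        moves.append((piles, False))  # use the pass: position unchanged, pass spent
    if not moves:
        return False  # no legal move: the player to move loses
    return any(not is_win(*m) for m in moves)
```

With this convention, the position (1, 2, 3) is a loss in plain Nim (its XOR is 0) but becomes a win once a pass is available, illustrating how drastically the pass can rearrange the P-positions.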
ColNet: Embedding the Semantics of Web Tables for Column Type Prediction
Automatically annotating column types with knowledge base (KB) concepts is a critical task for gaining a basic understanding of web tables. Current methods rely on either table metadata like column names or entity correspondences of cells in the KB, and may fail to deal with growing web tables with incomplete meta information. In this paper we propose a neural network based column type annotation framework named ColNet, which is able to integrate KB reasoning and lookup with machine learning, and can automatically train Convolutional Neural Networks for prediction. The prediction model not only considers the contextual semantics within a cell using word representation, but also embeds the semantics of a column by learning locality features from multiple cells. The method is evaluated with DBpedia and two different web table datasets, T2Dv2 from the general Web and Limaye from Wikipedia pages, and achieves higher performance than the state-of-the-art approaches
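The KB-lookup component that ColNet integrates with its CNNs can be illustrated by a toy majority-vote baseline. This sketch shows only the lookup-and-vote idea, not the paper's neural model, and the entity-to-type map is invented for the example:

```python
from collections import Counter

def annotate_column(cells, entity_types):
    """Toy KB-lookup baseline for column type annotation: look each cell
    up in an entity -> types map and take a majority vote over the hits.
    ColNet combines such candidate lookups with Convolutional Neural
    Networks over word representations; this shows only the lookup part."""
    votes = Counter(t for cell in cells for t in entity_types.get(cell, []))
    return votes.most_common(1)[0][0] if votes else None
```

Even this baseline hints at why cell-level evidence helps: a single ambiguous cell is outvoted by the rest of the column.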
Learning Semantic Annotations for Tabular Data
The usefulness of tabular data such as web tables critically depends on understanding their semantics. This study focuses on column type prediction for tables without any metadata. Unlike traditional lexical matching-based methods, we propose a deep prediction model that can fully exploit a table's contextual semantics, including table locality features learned by a Hybrid Neural Network (HNN), and inter-column semantics features learned by a knowledge base (KB) lookup and query answering algorithm. It exhibits good performance not only on individual table sets, but also when transferring from one table set to another
OWL2Vec*: Embedding of OWL Ontologies
Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies. In this paper, we propose a language model based ontology embedding method named OWL2Vec*, which encodes the semantics of an ontology by taking into account its graph structure, lexical information and logic constructors. Our empirical evaluation with three real world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments
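The graph-structure side of such an embedding can be sketched as random-walk "sentence" generation, which a word-embedding model would then consume. This is a simplified illustration only: OWL2Vec* additionally exploits lexical information and logic constructors, and the tiny graph below is invented for the example:

```python
import random

def random_walks(edges, walks_per_node=2, walk_length=4, seed=0):
    """Generate random-walk 'sentences' over a directed labeled graph, in
    the spirit of walk-based ontology/KG embedding. `edges` maps a subject
    to a list of (predicate, object) pairs; each walk alternates entity
    and predicate tokens, ready to feed to a word2vec-style model."""
    rng = random.Random(seed)
    sentences = []
    for node in edges:
        for _ in range(walks_per_node):
            walk, cur = [node], node
            for _ in range(walk_length):
                nbrs = edges.get(cur)
                if not nbrs:
                    break  # dead end: stop this walk early
                pred, nxt = rng.choice(nbrs)
                walk += [pred, nxt]
                cur = nxt
            sentences.append(walk)
    return sentences
```

The resulting token sequences treat axioms such as subclass edges as co-occurrence contexts, which is what lets a language model learn vectors for classes and properties.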
Implementing a 48 h EWTD-compliant rota for junior doctors in the UK does not compromise patients' safety: assessor-blind pilot comparison
Background: There are currently no field data about the effect of implementing European Working Time Directive (EWTD)-compliant rotas in a medical setting. Surveys of doctors' subjective opinions on shift work have not provided reliable objective data with which to evaluate its efficacy.
Aim: We therefore studied the effects on patients' safety and doctors' work-sleep patterns of implementing an EWTD-compliant 48 h work week in a single-blind intervention study carried out over a 12-week period at the University Hospitals Coventry & Warwickshire NHS Trust. We hypothesized that medical error rates would be reduced following the new rota.
Methods: Nineteen junior doctors were studied on medical wards: nine while working an intervention schedule of <48 h per week, and 10 while working traditional weeks of <56 h scheduled hours. Work hours and sleep duration were recorded daily. Rates of medical errors (per 1000 patient-days), identified using an established active surveillance methodology, were compared for the Intervention and Traditional wards. Two senior physicians blinded to rota assignment independently rated all suspected errors.
Results: Average scheduled work hours were significantly lower on the intervention schedule [43.2 (SD 7.7) (range 26.0–60.0) vs. 52.4 (11.2) (30.0–77.0) h/week; P < 0.001], and there was a non-significant trend for increased total sleep time per day [7.26 (0.36) vs. 6.75 (0.40) h; P = 0.095]. During a total of 4782 patient-days involving 481 admissions, 32.7% fewer total medical errors occurred during the intervention than during the traditional rota (27.6 vs. 41.0 per 1000 patient-days, P = 0.006), including 82.6% fewer intercepted potential adverse events (1.2 vs. 6.9 per 1000 patient-days, P = 0.002) and 31.4% fewer non-intercepted potential adverse events (16.6 vs. 24.2 per 1000 patient-days, P = 0.067). Doctors reported worse educational opportunities on the intervention rota.
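The percentage reductions quoted in the results follow directly from the per-1000-patient-day rates; a one-line calculation reproduces them:

```python
def pct_reduction(traditional, intervention):
    """Percent reduction from the traditional-rota rate to the
    intervention-rota rate, rounded to one decimal place."""
    return round(100 * (traditional - intervention) / traditional, 1)

# Rates per 1000 patient-days reported in the abstract:
# total errors 41.0 -> 27.6, intercepted 6.9 -> 1.2, non-intercepted 24.2 -> 16.6
```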
Conclusions: Whilst concerns remain regarding reduced educational opportunities, our study supports the hypothesis that a 48 h work week coupled with targeted efforts to improve sleep hygiene improves patient safety
Development and validation of a risk calculator for major mood disorders among the offspring of bipolar parents using information collected in routine clinical practice.
Family history is a significant risk factor for bipolar disorders (BD), but the magnitude of risk varies considerably between individuals within and across families. Accurate risk estimation may increase motivation to reduce modifiable risk exposures and identify individuals appropriate for monitoring over the peak risk period. Our objective was to develop and independently replicate an individual risk calculator for bipolar spectrum disorders among the offspring of BD parents using data collected in routine clinical practice.
Data from the longitudinal Canadian High-Risk Offspring cohort study collected from 1996 to 2020 informed the development of a 5 and 10-year risk calculator using parametric time-to-event models with a cure fraction and a generalized gamma distribution. The calculator was then externally validated using data from the Lausanne-Geneva High-Risk Offspring cohort study collected from 1996 to 2020. A time-varying C-index by age in years was used to estimate the probability that the model correctly classified risk. Bias corrected estimates and 95% confidence limits were derived using a jackknife resampling approach.
The primary outcome was age of onset of a major mood disorder. The risk calculator was most accurate at classifying risk in mid to late adolescence in the Canadian cohort (n = 285), and a similar pattern was replicated in the Swiss cohort (n = 128). Specifically, the time-varying C-index indicated that there was approximately a 70% chance that the model would correctly predict which of two 15-year-olds would be more likely to develop the outcome in the future. External validation within a smaller Swiss cohort showed mixed results.
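The C-index interpretation given above (a ~70% chance of correctly ranking two 15-year-olds) is, at each age, a pairwise concordance. A minimal sketch of that idea over comparable pairs, ignoring the censoring and time-varying details handled in the actual analysis, is:

```python
def concordance(pairs):
    """Harrell-type concordance over comparable pairs. Each pair is
    (risk_case, risk_control), where the 'case' developed the outcome
    first; a pair is concordant if the case received the higher predicted
    risk. Ties count as half, a standard convention."""
    conc = sum(1.0 if rc > rn else 0.5 if rc == rn else 0.0
               for rc, rn in pairs)
    return conc / len(pairs)
```

A value of 0.7, as reported for mid to late adolescence, means the model ranks the eventual case above the control in roughly 70% of such pairs; 0.5 would be chance.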
Findings suggest that this model may be a useful clinical tool in routine practice for improved individualized risk estimation of bipolar spectrum disorders among the adolescent offspring of a BD parent; however, risk estimation in younger high-risk offspring is less accurate, perhaps reflecting the evolving nature of psychopathology in early childhood. Based on external validation with a Swiss cohort, the risk calculator may not be as predictive in more heterogeneous high-risk populations.
The Canadian High-Risk Study has been funded by consecutive operating grants from the Canadian Institutes for Health Research, currently CIHR PJT Grant 152796. The Lausanne-Geneva high-risk study was and is supported by five grants from the Swiss National Foundation (#3200-040,677, #32003B-105,969, #32003B-118,326, #3200-049,746 and #3200-061,974), three grants from the Swiss National Foundation for the National Centres of Competence in Research project "The Synaptic Bases of Mental Diseases" (#125,759, #158,776, and #51NF40 - 185,897), and a grant from GlaxoSmithKline Clinical Genetics
An assertion and alignment correction framework for large scale knowledge bases
Various knowledge bases (KBs) have been constructed via information extraction from encyclopedias, text and tables, as well as by alignment of multiple sources. Their usefulness and usability are often limited by quality issues. One common issue is the presence of erroneous assertions and alignments, often caused by lexical or semantic confusion. We study the problem of correcting such assertions and alignments, and present a general correction framework which combines lexical matching, context-aware sub-KB extraction, semantic embedding, soft constraint mining and semantic consistency checking. The framework is evaluated with one set of literal assertions from DBpedia, one set of entity assertions from an enterprise medical KB, and one set of mapping assertions from a music KB constructed by integrating Wikidata, Discogs and MusicBrainz. It achieves promising results, with a correction rate (i.e., the ratio of the target assertions/alignments that are corrected with right substitutes) of 70.1%, 60.9% and 71.8%, respectively
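The lexical-matching stage of such a pipeline can be illustrated with a toy substitute ranker. This stands in only for the first step; the paper's framework additionally applies sub-KB extraction, semantic embedding, soft constraint mining and consistency checking, and the example values are invented:

```python
from difflib import SequenceMatcher

def rank_substitutes(wrong_value, candidates):
    """Rank candidate substitutes for a suspected-erroneous assertion
    object by simple case-insensitive lexical similarity. Lexical or
    semantic confusion often makes the right substitute a near-spelling
    of the wrong value, which this toy scorer exploits."""
    def score(cand):
        return SequenceMatcher(None, wrong_value.lower(), cand.lower()).ratio()
    return sorted(candidates, key=score, reverse=True)
```

In a full pipeline, the lexically ranked candidates would then be filtered by embedding similarity and by consistency with mined constraints before a substitute is accepted.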