    Web ontology representation and reasoning via fragments of set theory

    In this paper we use results from Computable Set Theory as a means to represent and reason about description logics and rule languages for the Semantic Web. Specifically, we introduce the description logic \mathcal{DL}\langle 4LQS^R\rangle(\D), admitting features such as min/max cardinality constructs on the left-hand/right-hand side of inclusion axioms, role chain axioms, and datatypes, which turns out to be quite expressive compared with \mathcal{SROIQ}(\D), the description logic underpinning the Web Ontology Language OWL. We then show that the consistency problem for \mathcal{DL}\langle 4LQS^R\rangle(\D)-knowledge bases is decidable by reducing it, through a suitable translation process, to the satisfiability problem for the stratified fragment 4LQS^R of set theory, which involves variables of four sorts and a restricted form of quantification. We also prove that, under suitable and not very restrictive constraints, the consistency problem for \mathcal{DL}\langle 4LQS^R\rangle(\D)-knowledge bases is \textbf{NP}-complete. Finally, we provide a 4LQS^R-translation of rules belonging to the Semantic Web Rule Language (SWRL).
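For orientation, inclusion axioms with cardinality constructs on either side of the inclusion look like the following. This is an illustrative sketch, not taken from the paper; the concept and role names are hypothetical:

```latex
% Min-cardinality construct on the left-hand side of an inclusion axiom:
{\geq}2\,\mathsf{hasPart}.\mathsf{Wheel} \sqsubseteq \mathsf{Vehicle}
% Max-cardinality construct on the right-hand side:
\mathsf{Person} \sqsubseteq {\leq}1\,\mathsf{hasBirthPlace}.\mathsf{Place}
```

Allowing cardinality constructs on the left-hand side, as in the first axiom, is one of the features that distinguishes the logic from more standard OWL-style fragments.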

    Elementary construction of Lusztig's canonical basis

    In this largely expository article we present an elementary construction of Lusztig's canonical basis in type ADE. The method, which is essentially Lusztig's original approach, is to use the braid group to reduce to rank-two calculations. Some of the wonderful properties of the canonical basis are already visible: it descends to a basis for every highest weight integrable representation, and it is a crystal basis.
    Comment: 12 pages

    DUDE-Seq: Fast, Flexible, and Robust Denoising for Targeted Amplicon Sequencing

    We consider the correction of errors in nucleotide sequences produced by next-generation targeted amplicon sequencing. Next-generation sequencing (NGS) platforms can provide a great deal of sequencing data thanks to their high throughput, but the associated error rates tend to be high. Denoising in high-throughput sequencing has thus become a crucial process for boosting the reliability of downstream analyses. Our methodology, named DUDE-Seq, is derived from a general setting of reconstructing finite-valued source data corrupted by a discrete memoryless channel, and effectively corrects substitution and homopolymer indel errors, the two major types of sequencing errors in most high-throughput targeted amplicon sequencing platforms. Our experimental studies with real and simulated datasets suggest that DUDE-Seq not only outperforms existing alternatives in terms of error-correction capability and time efficiency, but also boosts the reliability of downstream analyses. Further, the flexibility of DUDE-Seq enables its robust application to different sequencing platforms and analysis pipelines through simple updates of the noise model. DUDE-Seq is available at http://data.snu.ac.kr/pub/dude-seq
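The "discrete memoryless channel" setting mentioned in the abstract is that of the classical DUDE (discrete universal denoiser). The following is a minimal sketch of that underlying two-pass rule for a binary symmetric channel with Hamming loss, not the DUDE-Seq tool itself; the function name and parameters are illustrative:

```python
import numpy as np

def dude_binary(z, delta, k=1):
    """Two-pass DUDE for a binary symmetric channel with flip
    probability delta, Hamming loss, and a two-sided context of
    k symbols on each side of the current position."""
    n = len(z)
    Pi = np.array([[1 - delta, delta], [delta, 1 - delta]])  # channel matrix
    Pi_inv = np.linalg.inv(Pi)
    # Hamming loss: lam[xhat][x] = 1 if the estimate xhat differs from x.
    lam = np.array([[0.0, 1.0], [1.0, 0.0]])
    # Pass 1: count observed center symbols for each two-sided context.
    counts = {}
    for t in range(k, n - k):
        ctx = (tuple(z[t - k:t]), tuple(z[t + 1:t + 1 + k]))
        m = counts.setdefault(ctx, np.zeros(2))
        m[z[t]] += 1
    # Pass 2: denoise each interior symbol with the DUDE decision rule
    #   xhat = argmin_a  m^T Pi^{-1} (lam_a * pi_{z_t}).
    xhat = list(z)
    for t in range(k, n - k):
        ctx = (tuple(z[t - k:t]), tuple(z[t + 1:t + 1 + k]))
        m = counts[ctx]
        scores = [m @ Pi_inv @ (lam[a] * Pi[:, z[t]]) for a in (0, 1)]
        xhat[t] = int(np.argmin(scores))
    return xhat
```

On a long, mostly constant sequence with a few isolated flips, the context statistics overwhelmingly favor the majority symbol, so the rule restores the flipped positions. DUDE-Seq extends this setting to sequencing-specific noise models.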

    Combinatorial descriptions of the crystal structure on certain PBW bases (extended abstract)

    Lusztig's theory of PBW bases gives a way to realize the infinity crystal for any simple complex Lie algebra, where the underlying set consists of Kostant partitions. In fact, there are many different such realizations, one for each reduced expression for the longest element of the Weyl group. There is an algorithm to calculate the actions of the crystal operators, but it can be quite complicated. For ADE types, we give conditions on the reduced expression which ensure that the corresponding crystal operators are given by simple combinatorial bracketing rules. We then give at least one reduced expression satisfying our conditions in every type except E_8, and discuss the resulting combinatorics. Finally, we describe the relationship with more standard tableaux combinatorics in types A and D.
    Comment: Extended abstract to appear in proceedings for FPSAC 201
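A "combinatorial bracketing rule" of the kind mentioned above can be sketched generically. In one common convention (conventions vary across the literature; this sketch is illustrative, not the paper's specific rule), each position contributes a '+' or '-', matched '+'...'-' pairs cancel like brackets, f acts at the leftmost surviving '+', and e acts at the rightmost surviving '-':

```python
def bracket(word):
    """Signature/bracketing rule on a sequence of '+' and '-' signs.
    Cancel each '+' with the next available '-' to its right (like
    matched brackets); return the positions where f (leftmost
    surviving '+') and e (rightmost surviving '-') act, or None."""
    stack, cancelled = [], set()
    for i, s in enumerate(word):
        if s == '+':
            stack.append(i)
        elif s == '-' and stack:
            cancelled.add(stack.pop())
            cancelled.add(i)
    plus = [i for i, s in enumerate(word) if s == '+' and i not in cancelled]
    minus = [i for i, s in enumerate(word) if s == '-' and i not in cancelled]
    f_pos = plus[0] if plus else None
    e_pos = minus[-1] if minus else None
    return f_pos, e_pos
```

After cancellation the surviving signs always read as a block of '-'s followed by a block of '+'s, which is what makes the rule well defined.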

    Discovering Implicational Knowledge in Wikidata

    Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Examples include the proprietary knowledge graphs of companies such as Google, Facebook, IBM, or Microsoft, but also freely available ones such as YAGO, DBpedia, and Wikidata. A distinguishing feature of Wikidata is that the knowledge is collaboratively edited and curated. While this greatly enhances the scope of Wikidata, it also makes it impossible for a single individual to grasp complex connections between properties or understand the global impact of edits in the graph. We apply Formal Concept Analysis (FCA) to efficiently identify comprehensible implications that are implicitly present in the data. Although the complex structure of data modelling in Wikidata is not amenable to a direct approach, we overcome this limitation by extracting contextual representations of parts of Wikidata in a systematic fashion. We demonstrate the practical feasibility of our approach through several experiments and show that the results may lead to the discovery of interesting implicational knowledge. Besides providing a method for obtaining large real-world data sets for FCA, we sketch potential applications in offering semantic assistance for editing and curating Wikidata.
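In FCA terms, an "implication" A -> B over a formal context holds when every object carrying all attributes in A also carries all attributes in B. A minimal sketch of that check, on a tiny hypothetical Wikidata-style context (the items and property names are invented for illustration):

```python
# Toy formal context: objects (items) mapped to their attribute sets
# (Wikidata-style properties). All data here is hypothetical.
context = {
    "Q_Einstein": {"P_occupation", "P_birth_date", "P_doctoral_advisor"},
    "Q_Curie":    {"P_occupation", "P_birth_date", "P_doctoral_advisor"},
    "Q_Everest":  {"P_elevation", "P_continent"},
    "Q_K2":       {"P_elevation", "P_continent"},
}

def holds(premise, conclusion, ctx):
    """An implication premise -> conclusion holds iff every object
    possessing all attributes in the premise also possesses all
    attributes in the conclusion."""
    return all(conclusion <= attrs
               for attrs in ctx.values()
               if premise <= attrs)
```

For example, in this toy context every item with a doctoral advisor also has an occupation, so {P_doctoral_advisor} -> {P_occupation} holds, while {P_elevation} -> {P_occupation} does not. Mining the full set of valid implications (e.g. via the canonical base) is what the paper applies at Wikidata scale.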

    Young tableaux, canonical bases and the Gindikin-Karpelevich formula

    A combinatorial description of the crystal B(∞) for finite-dimensional simple Lie algebras in terms of certain Young tableaux was developed by J. Hong and H. Lee. We establish an explicit bijection between these Young tableaux and canonical bases indexed by Lusztig's parametrization, and obtain a combinatorial rule for expressing the Gindikin-Karpelevich formula as a sum over the set of Young tableaux.
    Comment: 19 pages
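For reference, the product side of the Gindikin-Karpelevich formula, as commonly stated in this literature (this is a reminder of the standard form, not a quotation from the paper), is the expression that gets re-expanded as a sum over tableaux:

```latex
% Product form of the Gindikin-Karpelevich formula, with q the residue
% field cardinality and \Phi^{+} the set of positive roots:
\prod_{\alpha \in \Phi^{+}} \frac{1 - q^{-1} z^{\alpha}}{1 - z^{\alpha}}
```

The paper's contribution is a rule rewriting this product as a sum indexed by the Hong-Lee Young tableaux realizing B(∞).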