11,214 research outputs found

    Context Recognition in TIL

    Get PDF
    The goal of this diploma thesis has been the implementation of an algorithm for recognising the three kinds of context in which constructions of Transparent Intensional Logic (TIL) can occur, to wit extensional, intensional, or hyperintensional occurrence. The algorithm has been implemented in the Prolog programming language and realized for the computational variant of TIL, the TIL-Script functional programming language. Context recognition is a fundamental precondition for the development of the TIL-Script inference machine, because it makes it possible to correctly apply all the extensional logical rules of inference in any context. The first part of the thesis deals with the theoretical foundations of TIL, which in turn serve as the specification of the Prolog implementation; here we also describe the TIL-Script language. In the second part we present the results of the lexical and syntactic analysis of TIL-Script constructions and their transformation into a Prolog knowledge base. The algorithm for context recognition is introduced in the third part. It operates on a base of constructions specified in the form of Prolog clauses, performs their type-theoretical checking, and recognises the context in which a given construction occurs. As a result, the algorithm produces the derivation tree of each construction, specified in the XML language.
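
    The thesis's own algorithm operates on Prolog clauses; as a language-neutral illustration only, the sketch below shows the general shape of such a context-recognition pass: walking a construction tree and propagating an occurrence-context label downwards. The tuple encoding and the single labelling rule used here (anything mentioned by a Trivialization counts as hyperintensional) are simplifying assumptions, not the rules actually used in the thesis.

# Toy sketch: propagate an occurrence-context label down a TIL-like
# construction tree. Constructions are encoded as nested tuples, e.g.
# ('comp', ('triv', '+'), ('triv', 1), ('triv', 1)) for ['0+ '01 '01].

def recognise_context(construction, context="extensional"):
    """Return (subconstruction, context) pairs for every occurrence."""
    occurrences = [(construction, context)]
    if not isinstance(construction, tuple):
        return occurrences
    kind, *args = construction
    children = [a for a in args if isinstance(a, tuple)]
    if kind == "triv":
        # Simplifying rule: a construction mentioned by a Trivialization
        # occurs hyperintensionally.
        for child in children:
            occurrences += recognise_context(child, "hyperintensional")
    else:
        # Compositions, Closures, ...: recurse, keeping the current label.
        for child in children:
            occurrences += recognise_context(child, context)
    return occurrences

# The Composition ['0+ '01 '01] mentioned (not executed) by a Trivialization.
c = ("triv", ("comp", ("triv", "+"), ("triv", 1), ("triv", 1)))
for sub, ctx in recognise_context(c):
    print(ctx, sub)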

    Inferring knowledge from textual data by natural deduction

    Get PDF
    In this paper, we introduce a system for inferring implicit computable knowledge from textual data by natural deduction. Our background system is Transparent Intensional Logic (TIL) with its procedural semantics, which assigns abstract procedures known as TIL constructions to terms of natural language as their context-invariant meanings. The input data for our method are produced by the so-called Normal Translation Algorithm (NTA). The algorithm processes natural-language texts and produces TIL constructions; in this way we have obtained a large corpus of TIL meaning procedures. These procedures are then processed by our algorithms for type checking and context recognition, so that the rules of natural deduction for inferring computable knowledge can afterwards be applied.
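
    The inference step at the end of that pipeline can be illustrated, very roughly, by a forward chainer that applies two extensional natural-deduction rules (conjunction elimination and modus ponens) to a small base of meaning representations. The tuple encoding of facts below is assumed for the example and is not the TIL-Script knowledge-base format.

# Illustrative forward chainer applying two natural-deduction rules,
# conjunction elimination and modus ponens, to a small knowledge base.
# The tuple encoding of "meanings" is assumed for the example only.

def derive(facts):
    """Close the fact set under AND-elimination and modus ponens."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            new = set()
            if isinstance(f, tuple) and f[0] == "and":
                new |= {f[1], f[2]}              # p AND q  |-  p, q
            if isinstance(f, tuple) and f[0] == "implies" and f[1] in derived:
                new.add(f[2])                    # p, p -> q  |-  q
            if not new <= derived:
                derived |= new
                changed = True
    return derived

kb = {
    ("and", "john_runs", "john_is_tall"),
    ("implies", "john_runs", "john_moves"),
}
print(derive(kb) - kb)   # {'john_runs', 'john_is_tall', 'john_moves'}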

    Keystroke Biometrics in Response to Fake News Propagation in a Global Pandemic

    Full text link
    This work proposes and analyzes the use of keystroke biometrics for content de-anonymization. Fake news has become a powerful tool to manipulate public opinion, especially during major events. In particular, the massive spread of fake news during the COVID-19 pandemic has forced governments and companies to fight against misinformation. In this context, the ability to link multiple accounts or profiles that spread such malicious content on the Internet while hiding in anonymity would enable proactive identification and blacklisting. Behavioral biometrics can be a powerful tool in this fight. In this work, we have analyzed how the latest advances in keystroke biometric recognition can help to link behavioral typing patterns, in experiments involving 100,000 users and more than 1 million typed sequences. Our proposed system is based on Recurrent Neural Networks adapted to the context of content de-anonymization. Given the challenge of linking the typed content of a target user to one of a pool of candidate profiles, our results show that keystroke recognition can be used to reduce the list of candidate profiles by more than 90%. In addition, when keystroke data are combined with auxiliary data (such as location), our system achieves a Rank-1 identification performance of 52.6% and 10.9% for background candidate lists of 1K and 100K profiles, respectively.
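
    The reported candidate-list reduction and Rank-1 figures come from ranking enrolled profiles by their distance to an embedding of the target's typing. A minimal numpy sketch of that evaluation step is shown below; it assumes the embeddings have already been produced by some recurrent model, and the array shapes, metric, and synthetic data are illustrative only, not details of the paper's system.

import numpy as np

# Rank candidate profiles for each query typing sequence by embedding
# distance and report Rank-1 accuracy. Assumes some recurrent model has
# already mapped each keystroke sequence to a fixed-length vector.

def rank1_accuracy(query_emb, gallery_emb, labels):
    """query_emb: (n, d); gallery_emb: (m, d) enrolled profiles;
    labels[i] is the index of the true profile for query i."""
    dists = np.linalg.norm(query_emb[:, None, :] - gallery_emb[None, :, :], axis=-1)
    return float((dists.argmin(axis=1) == labels).mean())

rng = np.random.default_rng(0)
gallery = rng.normal(size=(500, 32))                        # enrolled profiles
queries = gallery[:100] + 0.1 * rng.normal(size=(100, 32))  # noisy re-typings
print(rank1_accuracy(queries, gallery, np.arange(100)))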

    The Speech-Language Interface in the Spoken Language Translator

    Full text link
    The Spoken Language Translator is a prototype for practically useful systems capable of translating continuous spoken language within restricted domains. The prototype system translates air travel (ATIS) queries from spoken English into spoken Swedish and into French. It is constructed, with as few modifications as possible, from existing pieces of speech and language processing software. The speech recognizer and language understander are connected by a fairly conventional pipelined N-best interface. This paper focuses on the ways in which the language processor makes intelligent use of the sentence hypotheses delivered by the recognizer. These include (1) producing modified hypotheses to reflect the possible presence of repairs in the uttered word sequence; (2) fast parsing with a version of the grammar automatically specialized to the more frequent constructions in the training corpus; and (3) allowing syntactic and semantic factors to interact with acoustic ones in the choice of a meaning structure for translation, so that the acoustically preferred hypothesis is not always selected even if it is within linguistic coverage.
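
    Point (3) amounts to rescoring the recognizer's N-best list with a combination of acoustic and linguistic evidence, so the top acoustic hypothesis is not automatically chosen. The sketch below illustrates the idea with an assumed linear weighting and invented field names; it is not the scoring scheme actually used in the Spoken Language Translator.

# Rescore an N-best list by combining acoustic and linguistic scores
# (higher is better). Field names, example hypotheses, and the linear
# weighting are assumptions made for illustration only.

def pick_hypothesis(nbest, w_acoustic=1.0, w_linguistic=2.0):
    return max(
        nbest,
        key=lambda h: w_acoustic * h["acoustic"] + w_linguistic * h["linguistic"],
    )

nbest = [
    {"words": "show me flights to boston", "acoustic": -10.2, "linguistic": -3.1},
    {"words": "show me flights to austin", "acoustic": -10.0, "linguistic": -7.5},
]
# The acoustically second-best hypothesis wins once linguistic evidence is added.
print(pick_hypothesis(nbest)["words"])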

    Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

    Get PDF
    Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining, using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use of the TCGA image archives, with insights into the tumor-immune microenvironment.
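
    A schematic version of the spatial step, clustering TIL-positive patches with affinity propagation, is sketched below using scikit-learn. It assumes patch-level TIL probabilities are already available from a classifier; the grid size, threshold, and random data are invented for the example and do not reproduce the paper's analysis.

import numpy as np
from sklearn.cluster import AffinityPropagation

# Cluster TIL-positive image patches by their grid coordinates.
# A CNN is assumed to have assigned each patch a probability of
# containing lymphocytes; random values stand in for that here.

rng = np.random.default_rng(0)
til_prob = rng.random((50, 50))           # 50x50 grid of patch probabilities
coords = np.argwhere(til_prob > 0.8)      # (row, col) of TIL-positive patches

ap = AffinityPropagation(random_state=0).fit(coords)
print("TIL-positive patches:", len(coords))
print("spatial clusters found:", len(ap.cluster_centers_indices_))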

    Blind Sensor Calibration using Approximate Message Passing

    Full text link
    The ubiquity of approximately sparse data has led a variety of communities to take great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them to real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, the sensors are decalibrated and each one introduces a different multiplicative gain to its measurements. Cal-AMP shares the scalability of approximate message passing, allowing large problem instances to be treated, and experimentally exhibits a phase transition between domains of success and failure.
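
    For background, the ideal linear model the paper starts from can be handled by standard AMP with a soft-thresholding denoiser. The numpy sketch below implements that baseline iteration; it is not Cal-AMP itself, and the residual-based threshold schedule is a common heuristic assumed here for illustration.

import numpy as np

# Plain AMP for sparse recovery from y = A @ x + noise with a
# soft-thresholding denoiser: the ideal-calibration baseline, not Cal-AMP.

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, iters=30, alpha=1.5):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                             # pseudo-data estimate
        t = alpha * np.sqrt(np.mean(z ** 2))        # residual-based threshold
        x = soft(r, t)
        z = y - A @ x + (z / m) * np.count_nonzero(x)   # Onsager correction
    return x

rng = np.random.default_rng(0)
m, n, k = 100, 200, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)
print("recovery error:", np.linalg.norm(amp(y, A) - x_true))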

    Adaptive Object Detection Using Adjacency and Zoom Prediction

    Full text link
    State-of-the-art object detection systems rely on an accurate set of region proposals. Several recent methods use a neural network architecture to hypothesize promising object locations. While these approaches are computationally efficient, they rely on fixed image regions as anchors for predictions. In this paper we propose a search strategy that adaptively directs computational resources to sub-regions likely to contain objects. Compared to methods based on fixed anchor locations, our approach naturally adapts to cases where object instances are sparse and small. Our approach is comparable in accuracy to the state-of-the-art Faster R-CNN approach while using two orders of magnitude fewer anchors on average. Code is publicly available.
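
    The adaptive search can be caricatured as a quadtree descent that only zooms into a sub-region when a learned score says a closer look is worthwhile. In the sketch below the scoring function is a stub standing in for the paper's network, so the numbers are meaningless; only the control flow is the point.

# Quadtree-style adaptive region proposal: recurse into a sub-region only
# when a (stubbed) zoom score says a closer look is worthwhile. The score
# function is a placeholder for a learned zoom-indicator network.

def zoom_score(region):
    x0, y0, x1, y1 = region
    return 0.9 if (x1 - x0) > 100 else 0.1     # toy rule: zoom into big regions

def propose_regions(region, threshold=0.5, min_size=32):
    x0, y0, x1, y1 = region
    proposals = [region]                       # always evaluate this region
    if zoom_score(region) < threshold or (x1 - x0) <= min_size:
        return proposals
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2    # split into four quadrants
    for quad in ((x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)):
        proposals += propose_regions(quad, threshold, min_size)
    return proposals

print(len(propose_regions((0, 0, 512, 512))), "regions evaluated")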