28 research outputs found

    Syntactic Verifier as a Filter to Compound Unit Recognizer


    Maia and Mandos: Tools for Integrity Protection on Arbitrary Files

    We present the results of our dissertation research, which focuses on practical means of protecting system data integrity. In particular, we present Maia, a language for describing integrity constraints on arbitrary file types, and Mandos, a Linux Security Module which uses verify-on-close to enforce mandatory integrity guarantees. We also provide details of a Maia-based verifier generator, demonstrate that Maia and Mandos introduce minimal delay in performing their tasks, and include a selection of sample Maia specifications.
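
    The verify-on-close idea can be illustrated with a minimal user-space sketch in Python (hypothetical names and a hypothetical constraint; Mandos itself enforces this inside the kernel as a Linux Security Module): a writer's changes are applied to a working copy, the integrity constraint is checked at close time, and only verified content replaces the protected file.

        import os
        import shutil
        import tempfile

        def verify_on_close(path, check, edit):
            # Sketch of verify-on-close: `edit` modifies a working copy of `path`,
            # `check` runs when the writer is done, and the original file is only
            # replaced if the constraint holds. This mirrors the idea in user space.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
            os.close(fd)
            shutil.copy2(path, tmp)
            try:
                edit(tmp)                      # writer modifies the working copy
                if not check(tmp):             # constraint checked at "close" time
                    raise ValueError("integrity constraint violated; change rejected")
                os.replace(tmp, path)          # atomic commit of the verified content
            finally:
                if os.path.exists(tmp):
                    os.unlink(tmp)

        # A constraint in the spirit of a Maia specification (hypothetical example):
        # every non-empty line must contain exactly seven ':'-separated fields.
        def passwd_like_ok(p):
            with open(p) as f:
                return all(len(line.rstrip("\n").split(":")) == 7 for line in f if line.strip())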

    An overview of artificial intelligence and robotics. Volume 1: Artificial intelligence. Part B: Applications

    Artificial Intelligence (AI) is an emerging technology that has recently attracted considerable attention. Many applications are now under development. This report, Part B of a three-part report on AI, presents overviews of the key application areas: Expert Systems, Computer Vision, Natural Language Processing, Speech Interfaces, and Problem Solving and Planning. The basic approaches to such systems, the state of the art, existing systems, and future trends and expectations are covered.

    Aspects of Coherence for Entity Analysis

    Natural language understanding is an important topic in natural language processing. Given a text, a computer program should, at the very least, be able to understand what the text is about, and ideally also situate it in its extra-textual context and understand what purpose it serves. What exactly it means to understand what a text is about is an open question, but it is generally accepted that, at a minimum, understanding involves being able to answer questions like “Who did what to whom? Where? When? How? And Why?”. Entity analysis, the computational analysis of entities mentioned in a text, aims to support answering the questions “Who?” and “Whom?” by identifying entities mentioned in a text. If the answers to “Where?” and “When?” are specific, named locations and events, entity analysis can also provide these answers. Entity analysis answers these questions by performing entity linking, that is, linking mentions of entities to their corresponding entry in a knowledge base; coreference resolution, that is, identifying all mentions in a text that refer to the same entity; and entity typing, that is, assigning a label such as Person to mentions of entities. In this thesis, we study how different aspects of coherence can be exploited to improve entity analysis. Our main contribution is a method for exploiting knowledge-rich, specific aspects of coherence, namely geographic, temporal, and entity type coherence. Geographic coherence expresses the intuition that entities mentioned in a text tend to be geographically close. Similarly, temporal coherence captures the intuition that entities mentioned in a text tend to be close in the temporal dimension. Entity type coherence is based on the observation that in a text about a certain topic, such as sports, the entities mentioned in it tend to have the same or related entity types, such as sports team or athlete. We show how to integrate features modeling these aspects of coherence into entity linking systems and establish their utility in extensive experiments covering different datasets and systems. Since entity linking often requires computationally expensive joint, global optimization, we propose a simple but effective rule-based approach that enjoys some of the benefits of joint, global approaches while avoiding some of their drawbacks. To enable convenient error analysis for system developers, we introduce a tool for visual analysis of entity linking system output. Investigating another aspect of coherence, namely the coherence between a predicate and its arguments, we devise a distributed model of selectional preferences and assess its impact on a neural coreference resolution system. Our final contribution examines how multilingual entity typing can be improved by incorporating subword information. We train and make publicly available subword embeddings in 275 languages and show their utility in a multilingual entity typing task.
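
    As a rough illustration of the geographic-coherence intuition (a hedged Python sketch, not the feature set or formula used in the thesis), a candidate entity can be scored by its average great-circle distance to the entities already linked in the document; a smaller average distance means a more geographically coherent candidate.

        import math

        def haversine_km(a, b):
            # Great-circle distance in kilometres between two (lat, lon) pairs in degrees.
            lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
            dlat, dlon = lat2 - lat1, lon2 - lon1
            h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
            return 2 * 6371.0 * math.asin(math.sqrt(h))

        def geo_coherence(candidate_coord, linked_coords):
            # Hypothetical coherence feature: squash the mean distance into (0, 1],
            # so geographically close candidates receive higher scores.
            if not linked_coords:
                return 0.0
            mean_km = sum(haversine_km(candidate_coord, c) for c in linked_coords) / len(linked_coords)
            return 1.0 / (1.0 + mean_km)

        # e.g. a "Heidelberg" candidate scored against already-linked Berlin and Munich
        print(geo_coherence((49.40, 8.69), [(52.52, 13.40), (48.14, 11.58)]))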

    Reliable pattern recognition system with novel semi-supervised learning approach

    Over the past decade, there has been considerable progress in the design of statistical machine learning strategies, including Semi-Supervised Learning (SSL) approaches. However, researchers still have difficulties in applying most of these learning strategies when two or more classes overlap, and/or when each class has a bimodal/multimodal distribution. In this thesis, an efficient, robust, and reliable recognition system with a novel SSL scheme has been developed to overcome overlapping problems between two classes and bimodal distribution within each class. This system was based on the nature of category learning and recognition to enhance the system's performance in relevant applications. In the training procedure, besides the supervised learning strategy, the unsupervised learning approach was applied to retrieve the "extra information" that could not be obtained from the images themselves. This approach was very helpful for the classification between two confusing classes. In this SSL scheme, both the training data and the test data were utilized in the final classification. In this thesis, the design of a promising supervised learning model with advanced state-of-the-art technologies is first presented, and a novel rejection measurement for verification of rejected samples, namely Linear Discriminant Analysis Measurement (LDAM), is defined. Experiments on CENPARMI's Hindu-Arabic Handwritten Numeral Database, CENPARMI's Numerals Database, and NIST's Numerals Database were conducted in order to evaluate the efficiency of LDAM. Moreover, multiple verification modules, including a Writing Style Verification (WSV) module, have been developed according to four newly defined error categories. The error categorization was based on the different costs of misclassification. The WSV module has been developed by the unsupervised learning approach to automatically retrieve the person's writing styles so that the rejected samples can be classified and verified accordingly. As a result, errors on CENPARMI's Hindu-Arabic Handwritten Numeral Database (24,784 training samples, 6,199 testing samples) were reduced drastically from 397 to 59, and the final recognition rate of this HAHNR reached 99.05%, a significantly higher rate compared to other experiments on the same database. When the rejection option was applied on this database, the recognition rate, error rate, and reliability were 97.89%, 0.63%, and 99.28%, respectively.
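
    The rejection option described above can be approximated with a margin-based rule (a simplified Python sketch, not the exact LDAM definition from the thesis): fit a linear discriminant classifier, and reject any test sample whose two highest class posteriors are too close to call, so that a downstream verification module such as WSV can handle it.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def classify_with_rejection(X_train, y_train, X_test, margin=0.2):
            # Predict labels and flag low-confidence samples for rejection:
            # a sample is accepted only if the gap between its two largest
            # class posteriors is at least `margin`.
            lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
            proba = lda.predict_proba(X_test)          # class posteriors per sample
            top2 = np.sort(proba, axis=1)[:, -2:]      # two largest posteriors
            accept = (top2[:, 1] - top2[:, 0]) >= margin
            labels = lda.classes_[np.argmax(proba, axis=1)]
            return labels, accept                      # accept[i] is False for rejected samples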

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-Comp and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers

    A number of methodologies for verifying systems, and computer-based tools that assist users in verifying their systems, were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: the STP theorem prover; design verification of SIFT; high-level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.

    Enhancing the reliability of digital signatures as non-repudiation evidence under a holistic threat model

    Traditional sensitive operations, like banking transactions, purchase processes, contract agreements, etc., need to bind the parties involved to the commitments made, preventing a later repudiation of the responsibilities taken. Depending on the context, the commitment is made in one way or another, with handwritten signatures possibly being the most common mechanism ever used. With the shift to digital communications, the same guarantees that exist in real-world transactions are expected from electronic ones as well. Non-repudiation is thus a desired property of current electronic transactions, like those carried out in Internet banking, e-commerce or, in general, any electronic data interchange scenario. Digital evidence is generated, collected, maintained, made available and verified by non-repudiation services in order to resolve disputes about the occurrence of a certain event, protecting the parties involved in a transaction against the other's false denial of such an event. In particular, a digital signature is considered non-repudiation evidence which can be used subsequently, by disputing parties or by an adjudicator, to arbitrate disputes. The reliability of a digital signature should determine its capability to be used as valid evidence. This reliability depends on the trustworthiness of the whole life cycle of the signature, including the generation, transfer, verification and storage phases. Any vulnerability in this life cycle would undermine the reliability of the digital signature, making its applicability as non-repudiation evidence difficult to achieve. Unfortunately, technology is subject to vulnerabilities, always with the risk that security threats will materialize. Despite that, no rigorous mechanism addressing the reliability of digital signature technology has been proposed so far. The main goal of this doctoral thesis is to enhance the reliability of digital signatures in order to enforce their non-repudiation property when acting as evidence. In the first instance, we have determined that current technology does not provide an acceptable level of trustworthiness to produce reliable non-repudiation evidence based on digital signatures: the security threats to which current technology is exposed suffice to prevent the applicability of digital signatures as non-repudiation evidence. This finding is aggravated by the fact that digital signatures are granted legal effectiveness under current legislation, acting as evidence in legal proceedings regarding the commitment made by a signatory in the signed document. In our opinion, the security threats that subvert the reliability of digital signatures had to be formalized and categorized. For that purpose, a holistic taxonomy of potential attacks on digital signatures has been devised, allowing their systematic and rigorous classification. In addition, and assuming a realistic security risk, we have built a new approach, more robust and trustworthy than its predecessors, to enhance the reliability of digital signatures and enforce their non-repudiation property. This new approach is supported by two novel mechanisms presented in this thesis: the signature environment division paradigm and the extended electronic signature policies. Finally, we have designed a new fair exchange protocol that makes use of our proposal, demonstrating its applicability in a concrete scenario.
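
    As a minimal illustration of a digital signature acting as non-repudiation evidence (a generic Ed25519 example using the Python cryptography library, not the signature environment division or policy mechanisms proposed in the thesis), the evidence retained after a transaction lets an adjudicator later check that only the holder of the private key could have produced the signature over the disputed document.

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Signatory side: sign the committed document.
        document = b"I agree to pay 100 EUR for order #4711."
        private_key = Ed25519PrivateKey.generate()
        signature = private_key.sign(document)

        # Evidence kept for dispute resolution: (document, signature, public key).
        public_key = private_key.public_key()

        # Adjudicator side: verification succeeds only for the exact signed bytes,
        # so a later denial of the commitment can be arbitrated.
        try:
            public_key.verify(signature, document)
            print("signature valid: commitment attributable to the key holder")
        except InvalidSignature:
            print("signature invalid: evidence does not support the claim")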