31,819 research outputs found

    Public Key Infrastructure based on Authentication of Media Attestments

    Many users would prefer the privacy of end-to-end encryption in their online communications if it could be had without significant inconvenience. However, because existing key distribution methods cannot be trusted enough for automatic use, key management has remained a user problem. We propose a fundamentally new approach to the key distribution problem by empowering end-users with the capacity to independently verify the authenticity of public keys using an additional media attestment. This permits client software to automatically look up public keys from a keyserver without trusting the keyserver, because any attempted MITM attack can be detected by end-users. Thus, our protocol is designed to enable a new breed of messaging clients with true end-to-end encryption built in that is verifiably secure against MITM attacks, does not require trusting any third party, and spares users the hassle of manually managing public keys.
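    The detection step described in the abstract can be sketched in a few lines: the client fetches a key from the untrusted keyserver and compares its fingerprint against one the user obtained independently from the media attestment. This is only an illustrative sketch; the function names and the SHA-256 fingerprint scheme are assumptions, not the paper's actual protocol.

    ```python
    import hashlib

    def fingerprint(public_key_bytes):
        """Short, human-comparable digest of a public key (illustrative scheme)."""
        return hashlib.sha256(public_key_bytes).hexdigest()[:16]

    def verify_against_attestment(keyserver_key, attested_fingerprint):
        """Detect a MITM: the key served by the untrusted keyserver must match
        the fingerprint the user obtained out-of-band via the media attestment."""
        return fingerprint(keyserver_key) == attested_fingerprint

    # A keyserver under MITM serves a substituted key; the mismatch is detected.
    genuine = b"-----BEGIN PUBLIC KEY----- alice"
    substituted = b"-----BEGIN PUBLIC KEY----- mallory"
    attested = fingerprint(genuine)
    assert verify_against_attestment(genuine, attested)
    assert not verify_against_attestment(substituted, attested)
    ```

    The point of the scheme is that the keyserver never needs to be trusted: a substituted key cannot reproduce the attested fingerprint.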

    Software development: A paradigm for the future

    A new paradigm for software development that treats software development as an experimental activity is presented. It provides built-in mechanisms for learning how to develop software better and for reusing previous experience in the form of knowledge, processes, and products. It uses models and measures to aid in the tasks of characterization, evaluation, and motivation. An organization scheme is proposed for separating the project-specific focus from the organization's learning and reuse focuses of software development. The implications of this approach for corporations, research, and education are discussed, and some research activities currently underway at the University of Maryland that support this approach are presented.

    An Algorithm for Generating Gap-Fill Multiple Choice Questions of an Expert System

    This research aims to propose an artificial intelligence algorithm comprising an ontology-based design, text mining, and natural language processing for automatically generating gap-fill multiple choice questions (MCQs). The simulation in this research demonstrated an application of the algorithm in generating gap-fill MCQs about software testing. The simulation results revealed that, using 103 online documents as inputs, the algorithm could automatically produce more than 16 thousand valid gap-fill MCQs covering a variety of topics in the software testing domain. Finally, in the discussion section of this paper we suggest how the proposed algorithm could be applied to produce gap-fill MCQs for a question pool used by a knowledge expert system.
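    The core of gap-fill MCQ generation can be sketched as: blank out a key term in a source sentence and mix the correct answer with distractors drawn from related domain terms. The sentence, terms, and helper below are invented for illustration; the paper's actual algorithm additionally uses an ontology, text mining, and NLP to pick terms and distractors.

    ```python
    import random

    def make_gap_fill(sentence, key_term, distractor_pool, n_choices=4, seed=0):
        """Blank out key_term in the sentence and build a shuffled choice list
        containing the key term plus distractors from the pool (toy sketch)."""
        stem = sentence.replace(key_term, "_____")
        rng = random.Random(seed)
        distractors = rng.sample(
            [d for d in distractor_pool if d != key_term], n_choices - 1)
        choices = distractors + [key_term]
        rng.shuffle(choices)
        return {"stem": stem, "choices": choices, "answer": key_term}

    # Illustrative software-testing example (not from the paper's corpus).
    q = make_gap_fill(
        "Regression testing re-runs existing tests after a code change.",
        "Regression testing",
        ["Unit testing", "Mutation testing", "Regression testing",
         "Smoke testing", "Fuzzing"])
    ```

    In practice, the quality of such questions hinges on choosing distractors that are plausible but wrong, which is where the ontology of domain concepts comes in.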

    Using CBR for Portuguese question generation

    In this paper, we propose a new architecture for Question Generation for the Portuguese language. This architecture aims at the automatic generation of questions, to be used later, for instance, in automatic question answering by means of predictive question generation. Our approach combines a case-based reasoning (CBR) system and a module for question generation. The question generation module uses manually built rules that are fed to the case-based reasoning engine, which selects the ones that should be used. This is accomplished by comparing the part-of-speech tag sequences of the answer and the sentence. An identical tag sequence on sentences and answers usually implies a similar sequence on the corresponding questions. We discuss the details of this architecture, how it performs, and the results obtained so far.
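    The rule-selection step described above can be sketched as matching an answer's part-of-speech tag sequence against the tag patterns of the manually built rules. The rule format, tag names, and example questions below are illustrative assumptions, not the paper's actual rule base.

    ```python
    def pos_tags(tagged_tokens):
        """Extract the tag sequence from a list of (word, tag) pairs."""
        return [tag for _, tag in tagged_tokens]

    def matching_rules(answer_tags, rules):
        """Select the manually built rules whose answer tag pattern exactly
        matches the candidate answer's POS tag sequence (toy sketch)."""
        return [r for r in rules if r["answer_pattern"] == answer_tags]

    # Hypothetical rules: a proper-noun answer suggests a who-question, etc.
    rules = [
        {"answer_pattern": ["PROPN"], "question_template": "Quem ...?"},
        {"answer_pattern": ["NUM", "NOUN"], "question_template": "Quantos ...?"},
    ]
    answer = [("Lisboa", "PROPN")]
    selected = matching_rules(pos_tags(answer), rules)
    ```

    A real system would of course use similarity rather than exact equality, which is precisely what the CBR engine contributes: retrieving the most similar past case when no pattern matches exactly.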

    Verification and validation of knowledge-based systems with an example from site selection

    In this paper, the verification and validation of Knowledge-Based Systems (KBS) using decision tables (DTs) is one of the central issues. It is illustrated using real-market data taken from industrial site selection problems. One of the main problems with KBS is that many anomalies often remain after the knowledge has been elicited. As a consequence, the quality of the KBS degrades. Evaluation of a KBS consists mainly of two parts: verification and validation (V&V). To distinguish between the two, the following phrase is regularly used: verification deals with 'building the system right', while validation involves 'building the right system'. In the context of DTs, it has been claimed from the early years of DT research onwards that DTs are very well suited for V&V purposes. Therefore, it is explained how V&V of the modelled knowledge can be performed. In this respect, use is made of stated-response modelling design techniques to select decision rules from a DT. Our approach is illustrated using a case study dealing with the locational problem of a (petro)chemical company in a port environment. The KBS developed has been named Matisse, an acronym for Matching Algorithm, a Technique for Industrial Site Selection and Evaluation.
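    A classic reason decision tables suit verification is that anomaly checks become mechanical: every combination of condition entries should be covered by exactly one rule, so gaps (incompleteness) and overlaps (ambiguity) can be enumerated exhaustively. The toy table below, with invented site-selection conditions and actions, sketches that check; it is not the Matisse system's actual table.

    ```python
    from itertools import product

    def check_decision_table(rules, condition_values):
        """Verification-style anomaly check on a decision table: report
        condition combinations covered by no rule (gaps) or by more than
        one rule (ambiguity). A '-' entry means 'don't care'."""
        missing, ambiguous = [], []
        for combo in product(*condition_values):
            hits = [r for r in rules
                    if all(e == "-" or e == v
                           for e, v in zip(r["conds"], combo))]
            if not hits:
                missing.append(combo)
            elif len(hits) > 1:
                ambiguous.append(combo)
        return missing, ambiguous

    # Toy table: conditions are (port access?, land available?), each Y/N.
    rules = [
        {"conds": ("Y", "Y"), "action": "shortlist site"},
        {"conds": ("Y", "N"), "action": "reject"},
        {"conds": ("N", "-"), "action": "reject"},
    ]
    missing, ambiguous = check_decision_table(rules, [("Y", "N"), ("Y", "N")])
    ```

    Here both lists come back empty, so the table is complete and unambiguous; deleting the last rule would make the check report the uncovered ("N", ...) combinations as gaps.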

    Acquiring Correct Knowledge for Natural Language Generation

    Natural language generation (NLG) systems are computer software systems that produce texts in English and other human languages, often from non-linguistic input data. NLG systems, like most AI systems, need substantial amounts of knowledge. However, our experience in two NLG projects suggests that it is difficult to acquire correct knowledge for NLG systems; indeed, every knowledge acquisition (KA) technique we tried had significant problems. In general terms, these problems were due to the complexity, novelty, and poorly understood nature of the tasks our systems attempted, and were worsened by the fact that people write so differently. This meant in particular that corpus-based KA approaches suffered because it was impossible to assemble a sizable corpus of high-quality, consistent, manually written texts in our domains; and structured expert-oriented KA techniques suffered because experts disagreed and because we could not get enough information about special and unusual cases to build robust systems. We believe that such problems are likely to affect many other NLG systems as well. In the long term, we hope that new KA techniques may emerge to help NLG system builders. In the shorter term, we believe that understanding how individual KA techniques can fail, and using a mixture of different KA techniques with different strengths and weaknesses, can help developers acquire NLG knowledge that is mostly correct.