
    Substitution-based approach for linguistic steganography using antonym

    Steganography has long been a part of information technology security. The study of steganography is attracting attention from researchers because it helps strengthen the protection of message content in the era of information technology. In this study, a substitution-based approach for linguistic steganography using antonyms is proposed as an alternative to the existing substitution approach that uses synonyms. The proposed approach hides the message as the existing approach does, but it changes the semantics of the stego text relative to the cover text. A tool was developed to test the proposed approach, and it has been verified and validated. The approach was verified by comparing the character length of the stego text against the cover text, and the bit sizes of the secret text and of the cover text against the stego text. It was also validated using four parameters: precision, recall, f-measure, and accuracy. All the results showed that the proposed approach was very effective and comparable to the existing synonym-based substitution approach.
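
    A minimal sketch of the embedding and extraction steps such an antonym-based scheme implies is given below, in Python; the dictionary and function names are invented for illustration, since the abstract does not describe the tool's internals.

        # One secret bit per carrier word: 0 keeps the cover word, 1 swaps in
        # its antonym, changing the semantics of the stego text as noted above.
        ANTONYMS = {"hot": "cold", "big": "small", "fast": "slow", "open": "closed"}
        REVERSE = {v: k for k, v in ANTONYMS.items()}

        def embed(cover_words, bits):
            out, i = [], 0
            for w in cover_words:
                if i < len(bits) and w in ANTONYMS:
                    out.append(ANTONYMS[w] if bits[i] == "1" else w)
                    i += 1
                else:
                    out.append(w)
            if i < len(bits):
                raise ValueError("cover text has too few carrier words")
            return out

        def extract(stego_words):
            # A dictionary head word reads as bit 0, its antonym as bit 1.
            return "".join("0" if w in ANTONYMS else "1"
                           for w in stego_words if w in ANTONYMS or w in REVERSE)

        cover = "the big dog ran fast through the open door".split()
        stego = embed(cover, "101")
        print(" ".join(stego))  # the small dog ran fast through the closed door
        print(extract(stego))   # 101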

    Domain transfer for deep natural language generation from abstract meaning representations

    Stochastic natural language generation systems that are trained from labelled datasets are often domain-specific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalise across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As the key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory recurrent neural network encoder-decoder, in which one recurrent neural network learns a latent representation of a semantic input and a second recurrent neural network learns to decode it into a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new, unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains, achieving up to 75-100% of the performance of in-domain training, as measured by objective metrics such as BLEU and semantic error rate and by a subjective human rating study. Training a policy with prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.
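
    As a sketch of the architecture described here, the following PyTorch module pairs an encoder LSTM over a linearised abstract meaning representation with a decoder LSTM over words; PyTorch and all vocabulary sizes and dimensions are illustrative stand-ins, not the authors' implementation.

        import torch.nn as nn

        class AMRGenerator(nn.Module):
            def __init__(self, amr_vocab, word_vocab, emb=128, hidden=256):
                super().__init__()
                self.src_emb = nn.Embedding(amr_vocab, emb)
                self.tgt_emb = nn.Embedding(word_vocab, emb)
                self.encoder = nn.LSTM(emb, hidden, batch_first=True)
                self.decoder = nn.LSTM(emb, hidden, batch_first=True)
                self.out = nn.Linear(hidden, word_vocab)

            def forward(self, amr_tokens, target_words):
                # One RNN learns a latent representation of the semantic input...
                _, state = self.encoder(self.src_emb(amr_tokens))
                # ...and a second RNN learns to decode it to a word sequence.
                dec_out, _ = self.decoder(self.tgt_emb(target_words), state)
                return self.out(dec_out)

        # Domain transfer in this setup: pre-train on the source domain, then
        # fine-tune the same parameters on the smaller target-domain dataset.
        model = AMRGenerator(amr_vocab=5000, word_vocab=8000)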

    Code generation using a backtracking LR parser

    Although the parsing phase of the modern compiler has been automated in a machine-independent fashion, the diversity of computer architectures inhibits automating the code generation phase. During code generation, some intermediate representation of a source program is transformed into actual machine instructions. The need for portable compilers has driven research towards the automatic generation of code generators. This research investigates the use of a backtracking LR parser that treats code generation as a series of tree transformations.
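
    The idea of code generation as backtracking tree transformation can be sketched as follows; this toy Python matcher uses explicit recursion with backtracking rather than the thesis's backtracking LR parser, and its patterns and instruction templates are invented.

        # IR trees are tuples such as ("add", ("reg", "r1"), ("const", "4")).
        # Patterns pair a tree shape with an instruction template; "_" is a
        # wildcard. On a failed alternative, emitted code is undone and the
        # next pattern is tried: backtracking.
        PATTERNS = [
            (("add", ("reg", "_"), ("const", "_")), "ADDI {0}, {1}"),
            (("add", ("reg", "_"), ("reg", "_")), "ADD {0}, {1}"),
            (("reg", "_"), None),    # leaves emit no code
            (("const", "_"), None),
        ]

        def match(pattern, tree):
            if pattern == "_":
                return True
            if isinstance(pattern, tuple) and isinstance(tree, tuple):
                return len(pattern) == len(tree) and all(
                    match(p, t) for p, t in zip(pattern, tree))
            return pattern == tree

        def generate(tree, code):
            for pattern, template in PATTERNS:
                if match(pattern, tree):
                    saved = len(code)
                    subs = [s for s in tree[1:] if isinstance(s, tuple)]
                    if all(generate(s, code) for s in subs):
                        if template:
                            code.append(template.format(*(s[1] for s in tree[1:])))
                        return True
                    del code[saved:]   # backtrack: undo and try the next pattern
            return False

        code = []
        generate(("add", ("reg", "r1"), ("const", "4")), code)
        print(code)   # ['ADDI r1, 4']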

    Syntactic Computation as Labelled Deduction: WH a case study

    This paper addresses the question "Why do WH phenomena occur with the particular cluster of properties observed across languages: long-distance dependencies, WH-in-situ, partial movement constructions, reconstruction, crossover, etc.?" These phenomena have been analysed by invoking a number of discrete principles and categories, but have so far resisted a unified treatment. The explanation proposed is set within a model of natural language understanding in context, where the task of understanding is taken to be the incremental building of a structure over which the semantic content is defined. The formal model is a composite of a labelled type-deduction system, a modal tree logic, and a set of rules describing the process of interpreting the string as a set of transition states. A dynamic concept of syntax results, in which, in addition to an output structure associated with each string (analogous to the level of LF), there is an explicit meta-level description of the process whereby this incremental building takes place. This paper argues that WH-related phenomena can be unified by adopting this dynamic perspective. The main focus of the paper is on WH-initial structures, WH-in-situ structures, partial movement phenomena, and crossover phenomena. In each case, an analysis is proposed which emerges from the general characterisation of WH structures without construction-specific stipulation.
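
    To make the "transition states" idea concrete, here is a deliberately tiny Python sketch (not the paper's labelled deduction system; the lexicon and labels are invented) in which a fronted WH item is held as an unfixed node and discharged where an argument is missing, so that WH-initial and WH-in-situ strings yield the same predicate-argument structure.

        SUBCAT = {"saw": 2, "slept": 1}   # toy lexicon: verb -> argument count

        def parse(words):
            # Each word triggers a transition; the WH filler is an unfixed
            # node carried along until a gap can host it.
            filler, verb, need, args = None, None, 0, []
            for w in words:
                if w in ("who", "what"):
                    filler = w
                elif w in SUBCAT:
                    verb, need = w, SUBCAT[w]
                else:
                    args.append(w)
            if filler is not None and len(args) < need:
                args.append(filler)   # discharge the long-distance dependency
            return verb, args

        print(parse("what john saw".split()))   # ('saw', ['john', 'what'])
        print(parse("john saw what".split()))   # same structure, WH in situ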

    Code Generation = A* + BURS

    A system called BURS, which is based on term rewrite systems, and the search algorithm A* are combined to produce a code generator that generates optimal code. The theory underlying BURS is re-developed, formalised, and explained in this work. The search algorithm uses a cost heuristic derived from the term rewrite system to direct the search. The advantage of using a search algorithm is that we need to compute only those costs that may be part of an optimal rewrite sequence.
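
    A compact Python sketch of A*-directed search over rewrite sequences follows; the rule set, costs, and heuristic are invented for illustration and are not the BURS tables of the paper.

        import heapq
        from itertools import count

        # Each rule rewrites a matching subtree to "reg" at a given cost.
        RULES = [
            ("const", 1),                  # load an immediate into a register
            (("add", "reg", "reg"), 1),    # ADD
            (("add", "reg", "const"), 1),  # ADDI folds the constant in
            (("mul", "reg", "reg"), 3),    # MUL is expensive
        ]
        MIN_COST = min(c for _, c in RULES)

        def rewrites(tree):
            # All single rule applications, at the root or inside a subtree.
            for pat, cost in RULES:
                if pat == tree:
                    yield "reg", cost
            if isinstance(tree, tuple):
                for i, sub in enumerate(tree):
                    for new, cost in rewrites(sub):
                        yield tree[:i] + (new,) + tree[i + 1:], cost

        def h(tree):
            # Admissible heuristic derived from the rule costs: every remaining
            # operator node needs at least one (cheapest) rule application.
            if not isinstance(tree, tuple):
                return 0
            return MIN_COST + sum(h(sub) for sub in tree)

        def astar(tree):
            tie = count()
            frontier = [(h(tree), 0, next(tie), tree)]
            seen = set()
            while frontier:
                _, g, _, t = heapq.heappop(frontier)
                if t == "reg":
                    return g          # cost of an optimal rewrite sequence
                if t in seen:
                    continue
                seen.add(t)
                for nt, c in rewrites(t):
                    heapq.heappush(frontier, (g + c + h(nt), g + c, next(tie), nt))

        print(astar(("add", ("mul", "reg", "reg"), "const")))   # 4: MUL then ADDI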

    Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    The design and implementation of the Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. To enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method that distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
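
    As an illustration of the kind of bookkeeping such a data management method implies, the Python sketch below tracks a row-wise distribution of an image across hypercube nodes so that data can be located and shared; all names are invented, and CIPE itself was of course not written in Python.

        class DataManager:
            def __init__(self, num_nodes):
                self.num_nodes = num_nodes
                self.catalog = {}   # dataset name -> {node: (first_row, last_row + 1)}

            def distribute(self, name, num_rows):
                # Split an image row-wise across the nodes and record the
                # layout, so later redistribution can be planned node-to-node.
                rows = num_rows // self.num_nodes
                self.catalog[name] = {
                    n: (n * rows,
                        num_rows if n == self.num_nodes - 1 else (n + 1) * rows)
                    for n in range(self.num_nodes)}

            def locate(self, name, row):
                # Which node holds a given row; lets applications share a data set.
                for node, (lo, hi) in self.catalog[name].items():
                    if lo <= row < hi:
                        return node
                raise KeyError(row)

        mgr = DataManager(num_nodes=8)
        mgr.distribute("spectral_cube", num_rows=1024)
        print(mgr.locate("spectral_cube", row=700))   # node 5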