
    Automated user documentation generation based on the Eclipse application model

    An application's user documentation, also referred to as the user manual, is one of the core elements required when distributing an application. While many tools aid an application's developer in creating and maintaining documentation on and for the code itself, there are no tools that complement code development with user documentation for modern graphical applications. Approaches like literate programming are not applicable to this scenario, as it is not a library but a full application that is to be documented for an end user. Until now, documentation generation for applications was only partially feasible due to the gap between the code and its semantics. The new generation of applications developed on the Eclipse rich client platform is based on an application model, closing a broad semantic gap between code and visible interface. We use this application model to provide a semantic description for the contained elements. Combined with the internal relationships of the application model, these semantic descriptions are aggregated into well-structured user documentation that complies with ISO/IEC 26514. This paper reports on the Ecrit research project, in which the potentials and limitations of user documentation generation based on the Eclipse application model were investigated. Comment: 9 pages, 9 figures
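As a rough illustration of the idea, the sketch below walks an e4 application model file (conventionally named Application.e4xmi, an XML/XMI document) and turns its labelled windows, parts and menu items into a nested documentation outline. The file path and the schema details are assumptions; the paper's actual generator is far richer than this toy walker.

```python
# Hedged sketch: walk an Eclipse e4 application model (Application.e4xmi,
# an XML/XMI file) and emit a nested documentation outline from the
# labelled UI elements. The 'label' attribute convention on windows,
# parts and menu items is an assumption about the model schema.
import xml.etree.ElementTree as ET

def emit_outline(element, depth=0, out=None):
    """Recursively collect labelled model elements as outline entries."""
    if out is None:
        out = []
    label = element.get("label")
    if label:
        # strip '&' mnemonic markers commonly used in menu labels
        out.append("  " * depth + "- " + label.replace("&", ""))
        depth += 1
    for child in element:
        emit_outline(child, depth, out)
    return out

if __name__ == "__main__":
    tree = ET.parse("Application.e4xmi")  # path is an assumption
    for line in emit_outline(tree.getroot()):
        print(line)
```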

    Controlled generation in example-based machine translation

    The theme of controlled translation is currently in vogue in the area of MT. Recent research (Schäler et al., 2003; Carl, 2003) hypothesises that EBMT systems are perhaps best suited to this challenging task. In this paper, we present an EBMT system where the generation of the target string is filtered by data written according to controlled language specifications. As far as we are aware, this is the only research available on this topic. In the field of controlled language applications, it is more usual to constrain the source language in this way rather than the target. We translate a small corpus of controlled English into French using the on-line MT system Logomedia, and seed the memories of our EBMT system with a set of automatically induced lexical resources, using the Marker Hypothesis as a segmentation tool. We test our system on a large set of sentences extracted from a Sun Translation Memory, and provide both an automatic and a human evaluation. For comparative purposes, we also provide results for Logomedia itself.
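The Marker Hypothesis segmentation mentioned above splits a sentence at closed-class "marker" words (determiners, prepositions, conjunctions, pronouns), requiring each chunk to contain at least one content word. A minimal sketch, with a small illustrative marker set rather than the system's actual one:

```python
# Hedged sketch of Marker-Hypothesis segmentation as used to seed EBMT
# memories: split at closed-class marker words, and close a chunk only
# once it holds at least one content (non-marker) word. The marker list
# is a tiny illustrative subset, not the paper's actual inventory.
MARKERS = {
    "the", "a", "an",                 # determiners
    "in", "on", "of", "to", "with",   # prepositions
    "and", "or", "but",               # conjunctions
    "he", "she", "it", "they",        # pronouns
}

def marker_chunks(sentence):
    chunks, current = [], []
    for word in sentence.lower().split():
        if word in MARKERS and any(w not in MARKERS for w in current):
            chunks.append(current)    # close the chunk before a new marker
            current = []
        current.append(word)
    if current:
        chunks.append(current)
    return [" ".join(c) for c in chunks]

print(marker_chunks("The user saves the file to a new folder"))
# -> ['the user saves', 'the file', 'to a new folder']
```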

    The logic and linguistic model for automatic extraction of collocation similarity

    The article discusses the process of automatic identification of collocation similarity. Semantic analysis is one of the most advanced as well as the most difficult NLP tasks. The main problem of semantic processing is the determination of polysemy and synonymy of linguistic units, and the task becomes more complicated in the case of word collocations. The paper suggests a logical and linguistic model for automatically determining semantic similarity between collocations in the Ukrainian and English languages. The proposed model formalizes the semantic equivalence of collocations by means of the semantic and grammatical characteristics of the collocates. The basic idea of this approach is that the morphological, syntactic and semantic characteristics of lexical units are to be taken into account for the identification of collocation similarity. The basic mathematical means of our model are logical-algebraic equations of the algebra of finite predicates. The model covers verb-noun and noun-adjective collocations in Ukrainian and English, which consist of words belonging to the main parts of speech. The model allows extracting semantically equivalent collocations from semi-structured and unstructured texts. Implementations of the model will make it possible to automatically recognize semantically equivalent collocations, and its use can increase the effectiveness of natural language processing tasks such as information extraction, ontology generation, sentiment analysis and others.
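A toy illustration of the underlying idea: two collocations count as equivalent when their collocates agree in part of speech (grammatical characteristics) and fall into the same semantic class (semantic characteristics). The hand-coded classes below merely stand in for the paper's finite-predicate formalization:

```python
# Hedged toy illustration: semantic equivalence of collocations reduced
# to matching POS tags plus membership in the same (hand-coded) semantic
# class. The paper's logical-algebraic equations are not reproduced here.
SEM_CLASS = {
    "make": "CREATE", "produce": "CREATE", "build": "CREATE",
    "decision": "CHOICE", "choice": "CHOICE",
}

def equivalent(colloc_a, colloc_b):
    """Each argument is a list of (word, pos) pairs, e.g. [('make','V'), ('decision','N')]."""
    if len(colloc_a) != len(colloc_b):
        return False
    for (wa, pa), (wb, pb) in zip(colloc_a, colloc_b):
        if pa != pb:                                 # grammatical characteristic
            return False
        if SEM_CLASS.get(wa) != SEM_CLASS.get(wb):   # semantic characteristic
            return False                             # (unknown words both map to None,
    return True                                      #  which is lenient for a toy)

print(equivalent([("make", "V"), ("decision", "N")],
                 [("produce", "V"), ("choice", "N")]))  # True
```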

    Industrial-Strength Documentation for ACL2

    The ACL2 theorem prover is a complex system. Its libraries are vast. Industrial verification efforts may extend this base with hundreds of thousands of lines of additional modeling tools, specifications, and proof scripts. High quality documentation is vital for teams that are working together on projects of this scale. We have developed XDOC, a flexible, scalable documentation tool for ACL2 that can incorporate the documentation for ACL2 itself, the Community Books, and an organization's internal formal verification projects, and which has many features that help to keep the resulting manuals up to date. Using this tool, we have produced a comprehensive, publicly available ACL2+Books Manual that brings better documentation to all ACL2 users. We have also developed an extended manual for use within Centaur Technology that extends the public manual to cover Centaur's internal books. We expect that other organizations using ACL2 will wish to develop similarly extended manuals. Comment: In Proceedings ACL2 2014, arXiv:1406.123
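A loose sketch of the aggregation idea, not XDOC's actual pipeline: each topic carries a name, parent links, and text (mirroring the :parents and :short fields of XDOC's defxdoc forms), and topics gathered from several sources are rendered as one tree:

```python
# Hedged sketch of merging XDOC-style topics from several sources into
# one manual tree. The topic fields loosely mirror defxdoc keywords;
# the data and the rendering are invented for illustration.
topics = [
    {"name": "acl2",     "parents": [],       "short": "The theorem prover."},
    {"name": "books",    "parents": ["acl2"], "short": "Community Books."},
    {"name": "internal", "parents": ["acl2"], "short": "In-house proof libraries."},
]

def render(parent, depth=0):
    """Print every topic under the given parent, recursing into children."""
    for t in topics:
        if (parent in t["parents"]) or (parent is None and not t["parents"]):
            print("  " * depth + t["name"] + " -- " + t["short"])
            render(t["name"], depth + 1)

render(None)
```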

    Analytical modelling in Dynamo

    BIM is applied as a modern database for civil engineering. Its recent development allows preserving both the geometrical and the analytical information of a structure. The analytical model described in the paper is derived automatically from the BIM model of a structure, but in most cases it requires manual improvements before being sent to FEM software. The Dynamo visual programming language was used to handle the analytical data. The authors developed a program which corrects the faulty analytical model obtained from the BIM geometry, thus providing better automation for preparing the FEM model. The program logic is explained and test cases are shown.
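One typical correction such a program performs is snapping analytical member ends that are nearly, but not exactly, coincident onto a shared node, so that the FEM model connects properly. A minimal sketch with an invented tolerance and invented coordinates (a real Dynamo script would run similar logic inside a Python node):

```python
# Hedged sketch of node snapping for analytical-model cleanup: member
# end points closer than a tolerance are merged into one representative
# node. Tolerance value and coordinates are invented for illustration.
TOL = 0.01  # metres; assumed snapping tolerance

def snap_nodes(points, tol=TOL):
    """Merge 3D points closer than tol into a single representative node."""
    merged = []
    for p in points:
        for q in merged:
            if sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2:
                break  # p is absorbed by the existing node q
        else:
            merged.append(p)
    return merged

ends = [(0.0, 0.0, 3.0), (0.004, 0.0, 3.0), (5.0, 0.0, 3.0)]
print(snap_nodes(ends))  # the first two ends collapse into one node
```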

    Example-based controlled translation

    The first research on integrating controlled language data in an Example-Based Machine Translation (EBMT) system was published in [Gough & Way, 2003]. We improve on their sub-sentential alignment algorithm to populate the system’s databases with more than six times as many potentially useful fragments. Together with two simple novel improvements, correcting mistranslations in the lexicon and allowing multiple translations in the lexicon, translation quality improves considerably when target language translations are constrained. We also develop the first EBMT system which attempts to filter the source language data using controlled language specifications. We provide detailed automatic and human evaluations of a number of experiments carried out to test the quality of the system. We observe that our system outperforms Logomedia in a number of tests. Finally, despite conflicting results from different automatic evaluation metrics, we observe a preference for controlling the source data rather than the target translations.
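A minimal sketch of the two lexicon improvements, with invented entries: each source fragment may carry several candidate translations, and generation keeps only the candidates whose words belong to a controlled target vocabulary, falling back to the unconstrained candidates when none pass:

```python
# Hedged sketch: an EBMT lexicon with multiple translations per source
# fragment, filtered against a controlled-language target vocabulary.
# Entries and vocabulary are invented examples, not the paper's data.
LEXICON = {
    "press the button": ["appuyez sur le bouton", "pressez le bouton"],
    "to start":         ["pour démarrer", "pour commencer"],
}
CONTROLLED_FR = {"appuyez", "sur", "le", "bouton", "pour", "démarrer"}

def translate(chunks):
    out = []
    for chunk in chunks:
        candidates = LEXICON.get(chunk, [chunk])
        allowed = [c for c in candidates
                   if all(w in CONTROLLED_FR for w in c.split())]
        out.append((allowed or candidates)[0])  # fall back if none pass
    return " ".join(out)

print(translate(["press the button", "to start"]))
# -> 'appuyez sur le bouton pour démarrer'
```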