
    Automated user documentation generation based on the Eclipse application model

    An application's user documentation, also referred to as the user manual, is one of the core elements required for application distribution. While many tools exist to aid an application's developer in creating and maintaining documentation on and for the code itself, there are no tools that complement code development with user documentation for modern graphical applications. Approaches like literate programming do not apply to this scenario, since what must be documented for an end user is not a library but a full application. Until now, generating documentation for applications was only partially feasible because of the gap between the code and its semantics. The new generation of applications developed on the Eclipse Rich Client Platform is based on an application model, which closes much of the semantic gap between the code and the visible interface. We use this application model to provide a semantic description for the contained elements. Combined with the internal relationships of the application model, these semantic descriptions are aggregated into well-structured user documentation that complies with ISO/IEC 26514. This paper reports on the Ecrit research project, in which the potential and limitations of user documentation generation based on the Eclipse application model were investigated.
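
    As a rough illustration of the idea (a self-contained sketch, not the Ecrit implementation), the Java snippet below walks a simplified, hypothetical application model of parts with labels, tooltips and attached semantic descriptions, and aggregates them into documentation sections; the UiPart type is an illustrative stand-in, not the actual Eclipse e4 API.

        import java.util.List;

        // Hypothetical stand-ins for application-model elements; labels and tooltips
        // mirror the kind of information the Eclipse application model exposes.
        record UiPart(String label, String tooltip, String semanticDescription, List<UiPart> children) {}

        public class DocGeneratorSketch {

            // Recursively turn a model element and its children into documentation text.
            static String describe(UiPart part, int depth) {
                String indent = "  ".repeat(depth);
                StringBuilder section = new StringBuilder();
                section.append(indent).append(part.label()).append('\n');
                // Prefer the attached semantic description, fall back to the tooltip.
                section.append(indent)
                       .append(part.semanticDescription() != null ? part.semanticDescription()
                                                                  : part.tooltip())
                       .append('\n');
                for (UiPart child : part.children()) {
                    section.append(describe(child, depth + 1));
                }
                return section.toString();
            }

            public static void main(String[] args) {
                UiPart editor = new UiPart("Text Editor", "Edit files",
                        "The editor shows the contents of the selected file and lets you modify it.",
                        List.of());
                UiPart window = new UiPart("Main Window", "Application window",
                        "The main window hosts all parts of the application.", List.of(editor));
                System.out.print(describe(window, 0));
            }
        }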

    Past, present and future of information and knowledge sharing in the construction industry: Towards semantic service-based e-construction

    The paper reviews product data technology initiatives in the construction sector and provides a synthesis of related ICT industry needs. A comparison between (a) the data-centric characteristics of Product Data Technology (PDT) and (b) ontologies, with their focus on semantics, is given, highlighting the pros and cons of each approach. The paper advocates migrating from data-centric application integration to ontology-based business process support, and proposes inter-enterprise collaboration architectures and frameworks based on semantic services, underpinned by ontology-based knowledge structures. The paper discusses the main reasons behind the low industry take-up of product data technology and proposes a preliminary roadmap for wide industry diffusion of the proposed approach. In this respect, the paper stresses the value of adopting alliance-based modes of operation.
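
    For a flavor of the ontology side of that comparison, the following minimal Apache Jena sketch (my own illustration, not taken from the paper) describes a construction product with explicit, shareable semantics rather than as a record in a closed data schema; the example.org namespace and the fireRating property are illustrative assumptions.

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;
        import org.apache.jena.vocabulary.RDF;
        import org.apache.jena.vocabulary.RDFS;

        public class ConstructionOntologySketch {
            public static void main(String[] args) {
                String ns = "http://example.org/construction#";   // illustrative namespace
                Model model = ModelFactory.createDefaultModel();
                model.setNsPrefix("con", ns);

                Resource doorClass = model.createResource(ns + "Door")
                                          .addProperty(RDF.type, RDFS.Class);
                Property fireRating = model.createProperty(ns, "fireRating");

                // A concrete product instance described with explicit semantics,
                // rather than as an opaque row in a proprietary schema.
                model.createResource(ns + "door-042")
                     .addProperty(RDF.type, doorClass)
                     .addProperty(RDFS.label, "Fire door, level 2, room 2.14")
                     .addProperty(fireRating, "EI30");

                model.write(System.out, "TURTLE");                // serialize for exchange between partners
            }
        }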

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenges have not been fully recognized, and a methodology of its own has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What are the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
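
    As a toy illustration of one of these questions, the mapping of raw data items into a linked knowledge structure, the following sketch (not from the paper; all names and data are illustrative) turns flat records into typed nodes and explicit relations.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // A toy "knowledge space": typed nodes plus labeled links between them.
        public class KnowledgeMappingSketch {

            record Node(String type, String name) {}
            record Link(Node from, String relation, Node to) {}

            public static void main(String[] args) {
                // Raw records of the form (author, paper title), purely illustrative.
                List<String[]> records = List.of(
                        new String[]{"A. Turing", "Computing Machinery and Intelligence"},
                        new String[]{"A. Turing", "On Computable Numbers"});

                Map<String, Node> nodes = new HashMap<>();
                List<Link> links = new ArrayList<>();

                for (String[] r : records) {
                    // Map each raw field to a typed node, de-duplicating by name.
                    Node author = nodes.computeIfAbsent(r[0], n -> new Node("Person", n));
                    Node paper = nodes.computeIfAbsent(r[1], n -> new Node("Paper", n));
                    links.add(new Link(author, "wrote", paper));   // an explicit semantic relation
                }

                links.forEach(l -> System.out.println(
                        l.from().name() + " --" + l.relation() + "--> " + l.to().name()));
            }
        }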

    A Semantic Collaboration Method Based on Uniform Knowledge Graph

    The Semantic Internet of Things is the extension of the Internet of Things with the Semantic Web, which aims to build an interoperable, collaborative system that solves the heterogeneity problems of the Internet of Things. However, the Semantic Internet of Things combines the characteristics of both the Internet of Things and the Semantic Web environment, and the corresponding semantic data exhibits many new features. In this study, we analyze the characteristics of this semantic data and propose the concept of a uniform knowledge graph that can be applied more effectively in the Semantic Internet of Things environment. We then design a semantic collaboration method based on the uniform knowledge graph. It uses the uniform knowledge graph as the form of knowledge organization and representation, and provides a data basis for semantic collaboration by constructing semantic links that complete the semantic relations between different data sets, thereby achieving semantic collaboration in the Semantic Internet of Things. Our experiments show that the proposed method can better analyze and understand the semantics of user requirements and provide more satisfactory outcomes.
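
    To make the idea of constructing semantic links between data sets concrete, the following minimal Apache Jena sketch (my own illustration under assumed names, not the paper's implementation) merges two small descriptions of the same sensor into one uniform graph and adds an owl:sameAs link so the two identifiers are treated as the same entity; all IRIs are illustrative.

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Resource;
        import org.apache.jena.vocabulary.OWL;
        import org.apache.jena.vocabulary.RDFS;

        public class UniformGraphSketch {
            public static void main(String[] args) {
                // Two data sets describing the same physical sensor under different IRIs.
                Model deviceCatalog = ModelFactory.createDefaultModel();
                Resource s1 = deviceCatalog.createResource("http://example.org/catalog/sensor-17")
                                           .addProperty(RDFS.label, "Temperature sensor, room 2.14");

                Model observations = ModelFactory.createDefaultModel();
                Resource s2 = observations.createResource("http://example.org/stream/device-9f3")
                                          .addProperty(RDFS.label, "temp-stream");

                // Build the uniform graph and add a semantic link stating that the two
                // identifiers denote the same entity, so collaboration can span both sets.
                Model uniform = ModelFactory.createDefaultModel()
                                            .add(deviceCatalog)
                                            .add(observations);
                uniform.add(s1, OWL.sameAs, s2);

                uniform.write(System.out, "TURTLE");
            }
        }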

    Putting formal specifications under the magnifying glass: Model-based testing for validation

    A software development process is effectively an abstract form of model transformation, starting from an end-user model of requirements and progressing to a system model from which code can be automatically generated. The success (or failure) of such a transformation depends substantially on obtaining a correct, well-formed initial model that captures user concerns. Model-based testing automates black-box testing based on the model of the system under analysis. This paper proposes and evaluates a novel model-based testing technique that aims to reveal specification- and requirement-related errors by generating test cases from a test model and exercising them on the design model. The case study outlined in the paper shows that a separate test model not only increases the level of objectivity of the requirements, but also supports the validation of the system under test through test case generation. The results obtained from the case study support the hypothesis that there may be discrepancies between the formal specification of the system modeled at the developer's end and the problem to be solved, and that formal verification methods alone may not be sufficient to reveal them. The approach presented in this paper aims to provide means of gaining greater confidence in the design model that is used as the basis for code generation.
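
    As a minimal illustration of the general technique (not the paper's tooling; all names are illustrative), the sketch below uses a tiny state-machine test model as an oracle, takes event sequences derived from it (hardcoded here for brevity), and exercises them against a separately written design-model implementation, flagging any divergence.

        import java.util.List;
        import java.util.Map;

        // Test model: a tiny state machine for a door lock, written independently of the design model.
        public class ModelBasedTestingSketch {

            static final Map<String, String> TEST_MODEL = Map.of(
                    "LOCKED|unlock", "UNLOCKED",
                    "UNLOCKED|lock", "LOCKED",
                    "UNLOCKED|open", "OPEN",
                    "OPEN|close", "UNLOCKED");

            // The design model under validation (it would normally be built or generated elsewhere).
            static class DoorLockDesign {
                String state = "LOCKED";
                void apply(String event) {
                    switch (state + "|" + event) {
                        case "LOCKED|unlock" -> state = "UNLOCKED";
                        case "UNLOCKED|lock" -> state = "LOCKED";
                        case "UNLOCKED|open" -> state = "OPEN";
                        case "OPEN|close" -> state = "UNLOCKED";
                        default -> throw new IllegalStateException("unexpected " + event + " in " + state);
                    }
                }
            }

            public static void main(String[] args) {
                // Test cases: event sequences derived from walks over the test model.
                List<List<String>> testCases = List.of(
                        List.of("unlock", "open", "close", "lock"),
                        List.of("unlock", "lock"));

                for (List<String> events : testCases) {
                    String expected = "LOCKED";                        // initial state in the test model
                    DoorLockDesign design = new DoorLockDesign();
                    for (String e : events) {
                        expected = TEST_MODEL.get(expected + "|" + e); // oracle from the test model
                        design.apply(e);                               // exercise the design model
                    }
                    System.out.println(events + " -> " + (expected.equals(design.state) ? "PASS" : "FAIL"));
                }
            }
        }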