34 research outputs found

    A Model For Collaborative Creation And Ownership Of Digital Products

    This thesis presents Axone, a system that enables decentralized collaborative creation of digital products through interconnected digital content blocks. Axone provides the provenance of a digital product by storing its history since creation in an immutable Directed Acyclic Graph (DAG) data structure. This history comprises the digital content blocks used in its creation, including how they referenced each other in the development of the final digital product. Through referencing, credit attribution is achieved, and royalty fees due to the referenced content block are recorded and enforced. Content creators can concurrently work on a succeeding content block to produce various versions of unique digital products from the same original content block. Axone focuses on written work, enabling different authors to contribute to a book (the digital product) in the form of chapters (digital content blocks) until its completion. Axone uses blockchain technology and web monetization to provide provenance for each chapter and to stream payments to authors.
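The abstract describes an append-only DAG of content blocks with reference-based credit attribution. A minimal sketch of that idea follows; all names (`ContentBlock`, `ProvenanceDAG`, `royalties_due`) and the flat royalty rate are hypothetical illustrations, not Axone's actual design or blockchain implementation.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class ContentBlock:
    """A chapter-like unit; parents are hashes of referenced blocks."""
    author: str
    text: str
    parents: tuple = ()        # hashes of the blocks this one references
    royalty_rate: float = 0.1  # assumed flat share owed per reference

    @property
    def block_hash(self) -> str:
        payload = f"{self.author}|{self.text}|{','.join(self.parents)}"
        return hashlib.sha256(payload.encode()).hexdigest()


class ProvenanceDAG:
    """Append-only store: parents must pre-exist, so no cycles can form."""

    def __init__(self):
        self.blocks = {}

    def append(self, block: ContentBlock) -> str:
        for p in block.parents:
            if p not in self.blocks:
                raise ValueError("unknown parent block")
        h = block.block_hash
        self.blocks[h] = block
        return h

    def lineage(self, h: str) -> list:
        """Full provenance of a block: itself plus all ancestors."""
        seen, stack, order = set(), [h], []
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            order.append(cur)
            stack.extend(self.blocks[cur].parents)
        return order

    def royalties_due(self, h: str) -> dict:
        """Authors owed a share by block h, per its recorded references."""
        b = self.blocks[h]
        return {self.blocks[p].author: b.royalty_rate for p in b.parents}
```

Two authors can append competing successors to the same parent, yielding two versions of the product that share a common provenance prefix, which mirrors the concurrent-versions behaviour the abstract describes.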

    Integrating institutional repositories into the Semantic Web

    The Web has changed the face of scientific communication, and the Semantic Web promises new ways of adding value to research material by making it more accessible to automatic discovery, linking, and analysis. Institutional repositories contain a wealth of information which could benefit from the application of this technology. In this thesis I describe the problems inherent in the informality of traditional repository metadata, and propose a data model based on the Semantic Web which will support more efficient use of this data, with the aim of streamlining scientific communication and promoting efficient use of institutional research output.
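The contrast the abstract draws between informal metadata and Semantic Web data can be illustrated with a small sketch; the namespaces, record identifiers, and helper functions below are hypothetical examples, not the thesis's actual data model. Free-text author strings defeat automatic linking, whereas subject-predicate-object triples with shared URIs make the links machine-discoverable.

```python
# Informal repository metadata: free-text creator strings a machine
# cannot reliably match ("Smith, John" vs "J. Smith").
informal = [
    {"id": "eprint/101", "creator": "Smith, John"},
    {"id": "eprint/102", "creator": "J. Smith"},
]

# Semantic Web style: the same records as triples, with one URI
# identifying the person, so the link is explicit and queryable.
EX = "http://example.org/"        # hypothetical namespace
DC = "http://purl.org/dc/terms/"  # Dublin Core terms vocabulary

triples = {
    (EX + "eprint/101", DC + "creator", EX + "person/jsmith"),
    (EX + "eprint/102", DC + "creator", EX + "person/jsmith"),
    (EX + "person/jsmith", EX + "name", "John Smith"),
}


def works_by(person_uri: str) -> set:
    """Discover every eprint linked to a person URI via dcterms:creator."""
    return {s for (s, p, o) in triples
            if p == DC + "creator" and o == person_uri}
```

The same query over the informal records would require fuzzy string matching; over the triples it is an exact lookup, which is the kind of "automatic discovery, linking, and analysis" the abstract refers to.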

    Supporting requirement analysis through requirement rationale capture and traceability

    Manufacturers of complex engineering systems are increasingly recognising the importance of identifying, understanding and satisfying stakeholders’ needs in order to produce high-quality products. The analysis of these needs into a formal requirement specification is a time-consuming and complex process for which little support is offered to design engineers. This can result in requirements being poorly documented and with little or no traceability to their origins. This dissertation reports an investigation to understand the process of requirement analysis and develop computational support for this important phase of the engineering design process. The key argument of this research is that the existing practice of requirement analysis can be improved by providing better support for requirement rationale capture and enabling greater requirement traceability. The research consisted of three main phases. In the first phase, literature related to requirement analysis was reviewed, leading to the creation of a requirement analysis model. In the second phase, the practices of a global engineering organisation were investigated using document analysis as well as interviews with and shadowing of company engineers. The research found that requirement analysis lacks support for requirement rationale capture and traceability. On the basis of this result, a workflow for requirement analysis was proposed. The workflow involves the use of the Decision Rationale editor tool to capture requirement rationale and enable requirement traceability. In the third phase, four studies were undertaken to validate the workflow.
These studies investigated: 1) application of the workflow to requirements generated through reverse-engineering a low-complexity consumer product; 2) requirements extracted from documents produced by a graduate engineering team during a twelve-week project; 3) the requirement analysis process undertaken by two graduate engineering teams during twelve-week projects; and 4) requirements for a new aircraft engine development programme. The studies showed that the proposed workflow is feasible, practical, and scalable when applied to engineering projects. Requirement rationales were classified into categories, namely product design and use, pre-existing rationale, and project management. In order to fully support requirement traceability, it was found to be important to make four types of requirement transformation traceable: newly introduced, copied, updated, and deleted requirements. The research demonstrated that the proposed workflow is a successful proof-of-concept and can lead to improved quality of requirement documentation and requirement traceability.

    Personal Knowledge Models with Semantic Technologies

    Conceptual Data Structures (CDS) is a unified meta-model for representing knowledge cues in varying degrees of granularity, structuredness, and formality. CDS consists of: (1) a simple, expressive data model; (2) a relation ontology which unifies the relations found in the cognitive models of personal knowledge management tools, e.g., documents, mind-maps, hypertext, or semantic wikis; and (3) an interchange format for structured text. Implemented prototypes have been evaluated.
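A unifying relation ontology of the kind the abstract mentions can be sketched as a small hierarchy in which every specific relation specializes a generic one, so informal links remain queryable at the formal level. The relation names and class below are illustrative assumptions, not CDS's actual ontology or API.

```python
# Hypothetical relation hierarchy: each specific relation specializes
# a generic "relatedTo", so varying degrees of formality coexist.
RELATION_PARENT = {
    "contains": "relatedTo",   # e.g., a document containing a section
    "linksTo": "relatedTo",    # e.g., a hypertext link
    "annotates": "relatedTo",  # e.g., a margin note on an item
}


class KnowledgeModel:
    """Knowledge cues connected by typed statements."""

    def __init__(self):
        self.statements = []  # (source, relation, target)

    def add(self, source: str, relation: str, target: str) -> None:
        self.statements.append((source, relation, target))

    def related(self, source: str, relation: str = "relatedTo") -> list:
        """Targets reachable via a relation or any of its specializations."""
        def generalizes(r):
            while r is not None:
                if r == relation:
                    return True
                r = RELATION_PARENT.get(r)
            return False

        return [t for (s, r, t) in self.statements
                if s == source and generalizes(r)]
```

Querying with the generic relation retrieves links made with any of the specific ones, which is one way a single meta-model can span tools as different as mind-maps and semantic wikis.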

    Lost in the archive: vision, artefact and loss in the evolution of hypertext

    How does one write the history of a technical machine? Can we say that technical machines have their own genealogies, their own evolutionary dynamic? The technical artefact constitutes a series of objects, a lineage or a line. At a cursory level, we can see this in the fact that technical machines come in generations - they adapt and adopt characteristics over time, one suppressing the other as it becomes obsolete. It is argued that technics has its own evolutionary dynamic, and that this dynamic stems neither from biology nor from human societies. Yet 'it is impossible to deny the role of human thought in the creation of technical artefacts' (Guattari 1995, p. 37). Stones do not automatically rise up into a wall - humans 'invent' technical objects. This, then, raises the question of technical memory. Is it humans that remember previous generations of machines and transfer their characteristics to new machines? If so, how and where do they remember them? It is suggested that humans learn techniques from technical artefacts, and transfer these between machines. This theory of technical evolution is then used to understand the genealogy of hypertext. The historical differentiations of hypertext in different technical systems are traced. Hypertext is defined as both a technical artefact and also a set of techniques: both are a part of this third milieu, technics. The difference between technical artefact and technical vision is highlighted, and it is suggested that technique and vision change when they are externalised as material artefact. The primary technique traced is association, the organisational principle behind the hypertext systems explored in the manuscript. In conclusion, invention is shown to be an act of exhumation, the transfer and retroactivation of techniques from the past.
This thesis presents an argument for a new model of technical evolution, a model which claims that technics constitutes its own dynamic, and that this dynamic exceeds human evolution. It traces the genealogy of hypertext as a set of techniques and as a series of material artefacts. To create this genealogy I draw on interviews conducted with Douglas Engelbart, Ted Nelson and Andries van Dam, as well as a wide variety of primary and secondary resources.

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds the potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code. 
Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques to apply critical code methods.