Critique of Architectures for Long-Term Digital Preservation
Evolving technology and fading human memory threaten the long-term intelligibility of many kinds of documents. Furthermore, some records are susceptible to improper alterations that make them untrustworthy. Trusted Digital Repositories (TDRs) and Trustworthy Digital Objects (TDOs) seem to be the only broadly applicable digital preservation methodologies proposed. We argue that the TDR approach has shortfalls as a method for long-term digital preservation of sensitive information. Comparison of TDR and TDO methodologies suggests differentiating near-term preservation measures from what is needed for the long term.
TDO methodology addresses these needs, making digital documents durably intelligible. It uses EDP standards for a few file formats and XML structures for text documents. For other information formats, intelligibility is assured by means of a virtual computer. To protect sensitive information (content whose inappropriate alteration might mislead its readers), the integrity and authenticity of each TDO are made testable by embedded public-key cryptographic message digests and signatures. Key authenticity is protected recursively in a social hierarchy. The proper focus for long-term preservation technology is signed packages, each combining a record collection with its metadata and also binding context: Trustworthy Digital Objects.
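The abstract does not spell out the signing mechanics; purely as a hedged sketch, the snippet below shows one way such an embedded digest-plus-signature check could look in Python, using the third-party cryptography package with Ed25519 keys (both are illustrative stand-ins, not formats mandated by TDO methodology):

    # Hypothetical sketch only: SHA-256, Ed25519 and the JSON packaging are
    # illustrative choices, not the formats prescribed by TDO methodology.
    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def make_package(records, metadata, key):
        """Bundle a record collection with its metadata, then sign its digest."""
        payload = json.dumps({"records": records, "metadata": metadata},
                             sort_keys=True).encode("utf-8")
        digest = hashlib.sha256(payload).digest()
        return {"payload": payload, "digest": digest, "signature": key.sign(digest)}

    def verify_package(pkg, public_key):
        """Recompute the digest and test the embedded signature."""
        if hashlib.sha256(pkg["payload"]).digest() != pkg["digest"]:
            return False
        try:
            public_key.verify(pkg["signature"], pkg["digest"])
            return True
        except InvalidSignature:
            return False

    key = ed25519.Ed25519PrivateKey.generate()
    pkg = make_package({"doc1": "contents"}, {"creator": "archive"}, key)
    assert verify_package(pkg, key.public_key())

Verification needs only the payload, the signature and a trusted public key, which is what lets integrity be tested long after the originating systems are gone.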
Connected Information Management
Society is currently inundated with more information than ever, making efficient information management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands; small notes don't fit into traditional application categories; navigating the data is different for each kind of data; and data is available either on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these forms of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging. The heterogeneity of data is accommodated by offering specialized editors.
The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model but are kept externally. Notes can be quickly captured and annotated with metadata. Generic means of organization and navigation apply to all kinds of data.
Ubiquitous availability of data is ensured via two CoIM implementations: the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. Both applications were used to validate the CoIM ideas.
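The abstract only names the HYENA implementations; as a loose illustration of the central-repository idea, the sketch below uses Python's rdflib (an assumed stand-in, not the thesis' actual stack) to capture a note in one RDF graph, tag it, and navigate it with the same generic means as any other data:

    # Illustrative only: rdflib and the EX namespace stand in for the thesis'
    # REMM/HYENA stack, which the abstract does not detail.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/coim/")
    repo = Graph()  # the single central repository for all kinds of data

    # Quickly capture a note and annotate it with generic tags.
    note = EX["note-1"]
    repo.add((note, RDF.type, EX.Note))
    repo.add((note, EX.text, Literal("Check the REMM forms for the address book")))
    repo.add((note, EX.tag, Literal("todo")))

    # Generic navigation applies uniformly, whether the subject is a note,
    # wiki text stored as RDF, or a record describing an external file.
    for subject in repo.subjects(EX.tag, Literal("todo")):
        print(subject, repo.value(subject, EX.text))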
Experimental Object-Oriented Modelling
This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project through, e.g., technical prototyping and active user involvement. We introduce and examine "experimental object-oriented modelling" as the intersection of these practices.
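As a minimal, invented illustration of what representing domain concepts as classes and objects means in practice (the lending domain below is ours, not the thesis'):

    # Invented example: domain concepts become classes, phenomena become objects.
    from dataclasses import dataclass, field

    @dataclass
    class Book:                      # concept: a book in the collection
        title: str

    @dataclass
    class Borrower:                  # concept: a person who borrows books
        name: str
        loans: list = field(default_factory=list)

        def borrow(self, book):
            """Model the domain phenomenon of borrowing as object behaviour."""
            self.loans.append(book)

    reader = Borrower("Ada")         # object: one concrete phenomenon
    reader.borrow(Book("Pelle the Conqueror"))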
Semantic technologies: from niche to the mainstream of Web 3? A comprehensive framework for web information modelling and semantic annotation
Context: Web information technologies developed and applied in the last decade have considerably changed the way web applications operate and have revolutionised information management and knowledge discovery. Social technologies, user-generated classification schemes and formal semantics have a far-reaching sphere of influence. They promote collective intelligence, support interoperability, enhance sustainability and instigate innovation.
Contribution: The research carried out and the consequent publications follow the various paradigms of semantic technologies, assess each approach, evaluate its efficiency, identify the challenges involved and propose a comprehensive framework for web information modelling and semantic annotation, which is the thesis' original contribution to knowledge. The proposed framework assists web information modelling, facilitates semantic annotation and information retrieval, enables system interoperability and enhances information quality.
Implications: Semantic technologies coupled with social media and end-user involvement can instigate innovative influence with wide organisational implications that can benefit a considerable range of industries. The scalable and sustainable business models of social computing and the collective intelligence of organisational social media can be resourcefully paired with internal research and knowledge from interoperable information repositories, back-end databases and legacy systems. Semantified information assets can free human resources to better serve business development, support innovation and increase productivity.
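The framework's annotation vocabulary is not given in the abstract; as a hedged sketch of machine-readable semantic annotation, the snippet below attaches Dublin Core metadata to a web resource with Python's rdflib (both choices are ours for illustration):

    # Illustration only: Dublin Core terms stand in for the framework's own
    # annotation vocabulary, which the abstract does not name.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    page = URIRef("http://example.org/articles/semantic-web")

    g.add((page, DCTERMS.title, Literal("Semantic technologies on the web")))
    g.add((page, DCTERMS.subject, Literal("semantic annotation")))  # formal term
    g.add((page, DCTERMS.creator, Literal("A. Author")))

    print(g.serialize(format="turtle"))  # interoperable, machine-readable output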
A framework to support semantic interoperability in product design and manufacture
It has been recognised that the ability to communicate the meaning of concepts and their intent within and across system boundaries, for supporting key decisions in product design and manufacture, is impaired by the semantic interoperability issues that are presently encountered. This work contributes to the field of semantic interoperability in product design and manufacture. Attention is given to the understanding and application of relevant concepts from the computer science world, notably ontology-based approaches, to help resolve semantic interoperability problems. A novel ontological approach, the Semantic Manufacturing Interoperability Framework (SMIF), has been proposed following an exploration of the important requirements to be satisfied. The framework, built on top of a Common Logic-based ontological formalism, consists of a manufacturing foundation that captures the semantics of core feature-based design and manufacture concepts, over which the specialisation of domain models can take place. Furthermore, the framework supports mechanisms for the reconciliation of semantics, thereby improving knowledge sharing between heterogeneous domains that need to interoperate and are based on the same manufacturing foundation. This work also analyses a number of test-case scenarios in which the framework has been deployed to foster knowledge representation and the reconciliation of models involving products with standard hole features and their related machining process sequences. The test cases have shown that SMIF provides effective support towards achieving semantic interoperability in product design and manufacture. Proposed extensions to the framework are also identified to give a view of future work.
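SMIF itself is formalised in Common Logic, which is only named above; as a loose, invented illustration of reconciling two domains over a shared manufacturing foundation, the sketch below expresses the idea with rdflib and RDFS/OWL axioms instead (every namespace and class name is a stand-in):

    # Loose stand-in for SMIF's Common Logic formalism: two domains specialise
    # one foundation concept, so their overlapping terms can be reconciled.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDFS

    FOUND = Namespace("http://example.org/foundation#")   # shared foundation
    DESIGN = Namespace("http://example.org/design#")
    MFG = Namespace("http://example.org/machining#")

    g = Graph()
    g.add((DESIGN.CounterboredHole, RDFS.subClassOf, FOUND.HoleFeature))
    g.add((MFG.DrilledHole, RDFS.subClassOf, FOUND.HoleFeature))
    # Explicit reconciliation of semantics between the two domains:
    g.add((DESIGN.ThroughHole, OWL.equivalentClass, MFG.ThroughHole))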
From Physical Aggression to Verbal Behavior: Language Evolution and Self-Domestication Feedback Loop
Towards a crowdsourced solution for the authoring bottleneck in interactive narratives
Interactive Storytelling research has produced a wealth of technologies that can be employed to create personalised narrative experiences, in which the audience takes a participating rather than an observing role. But so far this technology has not led to the production of large-scale playable interactive story experiences that realise the ambitions of the field. One main reason for this state of affairs is the difficulty of authoring interactive stories, a task that requires describing a huge number of story building blocks in a machine-friendly fashion. This is not only technically and conceptually more challenging than traditional narrative authoring but also a scalability problem.
This thesis examines the authoring bottleneck through a case study and a literature survey and advocates a solution based on crowdsourcing. Prior work has already shown that combining a large number of example stories collected from crowd workers with a system that merges these contributions into a single interactive story can be an effective way to reduce the authorial burden. As a refinement of such an approach, this thesis introduces the novel concept of Crowd Task Adaptation. It argues that, in order to maximise the usefulness of the collected stories, a system should dynamically and intelligently analyse the corpus of collected stories and, based on this analysis, modify the tasks handed out to crowd workers.
Two authoring systems, ENIGMA and CROSCAT, which take two radically different approaches to using the Crowd Task Adaptation paradigm, have been implemented and are described in this thesis. While ENIGMA adapts tasks through a real-time dialogue between crowd workers and the system, based on what has been learned from previously collected stories, CROSCAT modifies the backstory given to crowd workers in order to optimise the distribution of branching points in the tree structure that combines all collected stories. Two experimental studies of crowdsourced authoring are also presented. They lead to guidelines on how to employ crowdsourced authoring effectively; more importantly, the results of one of the studies demonstrate the effectiveness of the Crowd Task Adaptation approach.
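The thesis' analysis machinery is not described in the abstract; one naive reading of Crowd Task Adaptation, sketched below with invented data structures, is to analyse the growing corpus and hand the next worker a task targeting its least-covered branch:

    # Invented sketch of Crowd Task Adaptation: inspect the collected stories
    # and pick the branch point with the fewest contributions so far.
    from collections import Counter

    def next_task(collected_stories, branch_points):
        """Assign the next crowd worker the least-covered branch point."""
        coverage = Counter(event for story in collected_stories for event in story)
        return min(branch_points, key=lambda b: coverage[b])

    stories = [["meet_wizard", "steal_map"], ["meet_wizard", "free_dragon"]]
    print(next_task(stories, ["steal_map", "free_dragon", "betray_king"]))
    # -> betray_king: uncovered so far, so the next task is steered towards it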