
    Hybrid semantic-document models

    This thesis presents the concept of hybrid semantic-document models to aid information management when using standards for complex technical domains such as military data communication. These standards are traditionally text-based documents intended for human interpretation, but prose sections can often be ambiguous, leading to discrepancies and subsequent implementation problems. Many organisations produce semantic representations of the material to ensure common understanding and to exploit computer-aided development. In developing these semantic representations, however, no relationship is maintained to the original prose. Maintaining relationships between the original prose and the semantic model has key benefits, including assessing conformance at a semantic level and enabling original content authors to define their intentions explicitly, thus reducing ambiguity and facilitating computer-aided functionality. Through a case study based on the military standard MIL-STD-6016C, a framework of relationships is proposed. These relationships can integrate with common document modelling techniques and provide the functionality needed to map semantic content into document views. The relationships are then generalised for applicability to a wider context. Additionally, the framework is coupled with a templating approach which, for repeating sections, can improve consistency and further enhance quality. Finally, a reflective approach to model-driven web rendering is presented and evaluated. This approach uses self-inspection at runtime to read directly from the model, eliminating the need for generative processes that duplicate data across sources used for different purposes.
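    The prose-to-model linking described above can be sketched minimally as follows. All names (the model element, section number, and field names) are hypothetical illustrations, not content from MIL-STD-6016C; the point is only the mechanism: a document view is rendered by reading the live semantic model at request time, so prose and model cannot drift apart.

```python
# Illustrative sketch (names hypothetical, not from MIL-STD-6016C):
# each prose section of a standard keeps an explicit link to the semantic
# model element it describes, and the document view is produced by reading
# the current model reflectively rather than by a generative copy step.

MODEL = {
    "msg.J3.2": {"name": "Air Track", "fields": ["latitude", "longitude"]},
}

SECTIONS = [
    # (section id, surrounding prose, linked model element)
    ("4.1.2", "The air track message reports position using", "msg.J3.2"),
]

def render(section_id):
    """Reflectively pull current model content into the document view."""
    for sid, prose, ref in SECTIONS:
        if sid == section_id:
            elem = MODEL[ref]
            fields = ", ".join(elem["fields"])
            return f"{sid} {prose} {elem['name']} fields: {fields}."
    raise KeyError(section_id)

print(render("4.1.2"))
```

    Because the view reads from `MODEL` at render time, editing the model changes the generated document on the next request, with no duplicated intermediate artifact.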

    Finding Your Way: Navigating Online News and Opinions

    This study investigates how young people navigate through a set of hyperlinked online news articles on a specific topic and how this affects, and is affected by, their opinions. Navigating through non-linear hypertext forces readers to integrate information from different sources and to make more decisions about what to read, which is more difficult than reading information presented in a linear format but might also promote deeper engagement with the material. The study used a combination of participant observation, think-aloud protocols, and semi-structured interviews to investigate these issues as participants navigated through a curated collection of articles about the Canadian Oil Sands. Findings about how participants engage with the material, and how the pathways they create while navigating shape their opinions, are discussed.

    Towards investigating the validity of measurement of self-regulated learning based on trace data

    Contemporary research that treats self-regulated learning (SRL) as processes of learning events derived from trace data has attracted increasing interest over the past decade. However, limited research has examined the validity of trace-based measurement protocols. To fill this gap in the literature, we propose a novel validation approach that combines theory-driven and data-driven perspectives to increase the validity of interpretations of SRL processes extracted from trace data. The main contribution of this approach consists of three alignments between trace data and think-aloud data to improve measurement validity. In addition, we define the match rate between SRL processes extracted from trace data and think-aloud data as a quantitative indicator, together with three other indicators (sensitivity, specificity, and trace coverage), to evaluate the "degree" of validity. We tested this validation approach in a laboratory study in which 44 learners individually studied the topic of artificial intelligence in education in a technology-enhanced learning environment for 45 minutes. Following this new validation approach, we achieved an improved match rate between SRL processes extracted from trace data and think-aloud data (training set: 54.24%; test set: 55.09%) compared to the match rate before applying the validation approach (training set: 38.97%; test set: 34.54%). By treating think-aloud data as the reference point, this improvement of the match rate quantifies the extent to which validity can be improved by our validation approach. In conclusion, the novel validation approach presented in this study uses both empirical evidence from think-aloud data and the rationale of our theoretical framework of SRL, allowing trace-based SRL measurements to be tested and improved.
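    Two of the indicators named above (match rate and trace coverage) can be computed from aligned label sequences roughly as follows. This is a minimal sketch assuming segment-level alignment has already been done; the function name, label vocabulary, and toy data are illustrative, not the study's actual protocol.

```python
# Hedged sketch: agreement indicators between SRL process labels extracted
# from trace data and from think-aloud data for the same aligned segments.
# None marks a segment where one source detected no SRL process.

def agreement_indicators(trace, think_aloud):
    assert len(trace) == len(think_aloud)
    n = len(trace)
    # Matches: segments where both sources yield the same process label.
    matches = sum(1 for t, a in zip(trace, think_aloud)
                  if t is not None and a is not None and t == a)
    # Match rate: share of think-aloud processes reproduced by trace data,
    # treating think-aloud as the reference point.
    aloud_present = sum(1 for a in think_aloud if a is not None)
    match_rate = matches / aloud_present if aloud_present else 0.0
    # Trace coverage: share of segments where traces yielded any process.
    trace_coverage = sum(1 for t in trace if t is not None) / n
    return {"match_rate": match_rate, "trace_coverage": trace_coverage}

trace = ["orient", "plan", None, "monitor", "elaborate"]
aloud = ["orient", "monitor", "plan", "monitor", None]
print(agreement_indicators(trace, aloud))  # match_rate 0.5, coverage 0.8
```

    Sensitivity and specificity would follow the same pattern per process label, counting true/false positives of trace detection against the think-aloud reference.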

    Information Science Conference 2000 (Conferentie Informatiewetenschap 2000), de Doelen, Utrecht, 5 April 2000


    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications, in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel, there is also an evolution from a Web of documents into a Web of 'knowledge', as a growing number of data owners share their data sources with a growing audience. This creates the potential for new applications of these data sources, including scenarios in which the datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure presents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offers solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but leaves open the issue of semantic interoperability. In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWE methods allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application. Moreover, we implemented a framework called Hydragen, which is able to execute the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined, i.e. the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure.
However, this DM is not Hera-S specific: any Semantic Web source representation can serve as the DM, as long as its content can be queried with the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. To help designers build their Web applications, we introduce a toolset, Hera Studio, which allows the different models to be built graphically. Hera Studio also provides additional functionality such as model checking and deployment of the models in Hydragen. Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore at runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM. Aspect-orientation allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is in reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation. Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extensible storage layer in our database of choice, Sesame.
However, even though such integration is theoretically possible, much of the actual data integration work remains. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to examine it in isolation. We present a framework called Relco, which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaboration techniques, which together provide strong clues as to which concepts are most likely related. To demonstrate the applicability of Relco, we explore five application scenarios in different domains for which data integration is a central aspect. The first is a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available via a map view, a timeline and a graph view. Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee, an electronic TV guide and recommender. TV-guide data was integrated and enriched with semantically structured data from several sources, and recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated, including scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications that would use context-based personalization. This can be done using a context-based user profiling platform that can also be used for user-model data exchange between mobile applications, using technologies like Relco.
The final application scenario comes from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved with a user-modeling component framework in which any application can store user-model information, and which can also be used for the exchange of user-model data.
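    Since the abstract's only interface requirement on a Domain Model is that it be queryable with SPARQL, the kind of query an Application Model unit might issue can be illustrated as below. The `dm:` namespace, class, and properties are hypothetical placeholders, not Hera-S vocabulary.

```sparql
# Minimal sketch (hypothetical vocabulary): an AM unit gathers its content
# by querying the Domain Model with standard SPARQL.
PREFIX dm: <http://example.org/domain#>

SELECT ?painting ?title ?creator
WHERE {
  ?painting a dm:Painting ;
            dm:title   ?title ;
            dm:creator ?creator .
}
```

    Because the engine is stateless and re-issues such queries at every page request, changes to the underlying data are reflected immediately in the rendered application.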

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands, small notes don't fit into traditional application categories, navigating the data is different for each kind of data, and data is available either on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging. The heterogeneity of data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with metadata. Generic means of organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
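    The core idea of a single repository with one generic tagging mechanism across heterogeneous item kinds can be sketched as follows. The class and method names are illustrative assumptions, not HYENA's API.

```python
# Illustrative sketch (names hypothetical): one central repository holds
# heterogeneous items (notes, files, wiki pages), and a single generic
# tagging mechanism spans all of them, instead of per-application islands.

class Repository:
    def __init__(self):
        self.items = {}   # item id -> {"kind": ..., "data": ...}
        self.tags = {}    # tag -> set of item ids

    def add(self, item_id, kind, data, tags=()):
        self.items[item_id] = {"kind": kind, "data": data}
        for tag in tags:
            self.tags.setdefault(tag, set()).add(item_id)

    def find(self, tag):
        """Generic navigation: tag lookup works for every kind of item."""
        return sorted(self.tags.get(tag, set()))

repo = Repository()
repo.add("n1", "note", "call Alice", tags=["todo"])
repo.add("f1", "file", "/home/u/report.pdf", tags=["todo", "project-x"])
repo.add("w1", "wiki", "Project X overview", tags=["project-x"])
print(repo.find("todo"))  # a note and a file surface in one query
```

    Specialized editors would sit on top of the `kind` field, while organization and navigation stay uniform across kinds.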

    Template Based Semantic Integration: From Legacy Archaeological Datasets to Linked Data

    The online dissemination of datasets to accompany site monographs and summary documentation is becoming common practice within the archaeology domain. Since the legacy database schemas involved are often created on a per-site basis, cross-searching or reusing this data remains difficult. Employing an integrating ontology, such as the CIDOC CRM, is one step towards resolving these issues. However, this has tended to require computing specialists with detailed knowledge of the ontologies involved. Results are presented from a collaborative project between computer scientists and archaeologists that provided lightweight tools to make it easier for non-specialists to publish Linked Data. Applications developed for the STELLAR project were applied by archaeologists to major excavation datasets, and the resulting output was published as Linked Data conforming to the CIDOC CRM ontology. The template-based Extract-Transform-Load method is described. Reflections on the experience of using the template-based tools are discussed, together with practical issues including the need for terminology alignment and licensing considerations.
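    The template-based Extract-Transform-Load idea can be sketched as filling a fixed triple template from legacy database rows, so that non-specialists only edit the template, not ontology-mapping code. The URIs and template syntax below are illustrative assumptions, not STELLAR's actual format; the class and property names gesture at CIDOC CRM (E22, P45) but are not a complete mapping.

```python
# Hedged sketch of template-based ETL: each legacy row (a dict) fills a
# fixed Turtle-like triple template. Example URIs and template syntax are
# hypothetical, not the STELLAR tools' real format.

TEMPLATE = """\
<http://example.org/find/{find_id}>
    a crm:E22_Man-Made_Object ;
    rdfs:label "{label}" ;
    crm:P45_consists_of <http://example.org/material/{material}> .
"""

def rows_to_turtle(rows):
    """Transform legacy rows into triples by filling the template."""
    return "\n".join(TEMPLATE.format(**row) for row in rows)

rows = [
    {"find_id": "1001", "label": "Bronze brooch", "material": "bronze"},
    {"find_id": "1002", "label": "Flint blade", "material": "flint"},
]
turtle = rows_to_turtle(rows)
print(turtle)
```

    The design choice is that all ontology knowledge lives in the template: swapping in a different per-site schema means remapping column names to template slots, not rewriting code.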

    Making Representations Matter: Understanding Practitioner Experience in Participatory Sensemaking

    Appropriating new technologies in order to foster collaboration and participatory engagement is a focus for many fields, but there is relatively little research on the experience of practitioners who do so. The role of technology-use mediators is to help make such technologies amenable and of value to the people who interact with them and each other. When the nature of the technology is to provide textual and visual representations of ideas and discussions, issues of form and shaping arise, along with questions of professional ethics. This thesis examines such participatory representational practice, specifically how practitioners make participatory visual representations (pictures, diagrams, knowledge maps) coherent, engaging and useful for groups tackling complex societal and organizational challenges. This thesis develops and applies a method to analyze, characterize, and compare instances of participatory representational practice in such a way as to highlight experiential aspects such as aesthetics, narrative, improvisation, sensemaking, and ethics. It extends taxonomies of such practices found in related research, and contributes to a critique of functionalist or techno-rationalist approaches to studying professional practice. It studies how fourteen practitioners using a visual hypermedia tool engaged participants with the hypermedia representations, and the ways they made the representations matter to the participants. It focuses on the sensemaking challenges that the practitioners encountered in their sessions, and on the ways that the form they gave the visual representations (aesthetics) related to the service they were trying to provide to their participants. Qualitative research methods such as grounded theory are employed to analyze video recordings of the participatory representational sessions. Analytical tools were developed to provide a multi-perspective view on each session. 
Conceptual and normative frameworks are proposed for understanding practitioner experience in participatory representational practice in context, especially in terms of aesthetics, ethics, narrative, sensemaking, and improvisation. The thesis places these concerns in the context of other kinds of facilitative and mediation practices, as well as research on reflective practice, aesthetic experience, critical HCI, and participatory design.