2,265 research outputs found

    A Survey on User Interaction with Linked Data

    Get PDF
    Since the beginning of the Semantic Web and the coining of the term Linked Data in 2006, more than one thousand datasets with over sixteen thousand links have been published to the Linked Open Data Cloud. This rising interest is fuelled by the benefits that semantically annotated and machine-readable information can bring to many systems. Alongside this growth we also observe a rise in humans creating and consuming Linked Data, and the opportunity to study and develop guidelines for tackling the new user interaction problems that arise with it. To gather information on the current solutions for modelling user interaction in these applications, we conducted a study surveying the interaction techniques provided by state-of-the-art Linked Data tools and applications developed for users with no experience with Semantic Web technologies. The 18 tools reviewed are described and compared according to the interaction features provided, the techniques used for visualising one instance and a set of instances, the search solutions implemented, and the methods used to evaluate the proposed interaction solutions. From this review, we can conclude that researchers have started to deviate from more traditional visualisation techniques, like graph visualisations, when developing for lay users. This shows a current effort in developing Semantic Web tools to be used by lay users and motivates the documentation and formalisation of the solutions encountered in the studied tools. Copyright (c) 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)

    Spatial Consistency and Contextual Cues for Incidental Learning in Browser Design

    No full text
    This paper introduces the Backward Highlighting technique for mitigating an identified flaw in directional column-faceted browsers like iTunes. Further, the technique significantly enhances the information that can be learned from the columns and encourages further interaction with facet items that were previously restricted from use. After giving a detailed overview of faceted browsing approaches, the Backward Highlighting technique is described along with possible implementations. Two of these possible implementations are compared to a control condition to statistically prove the value of Backward Highlighting. The analysis produces design recommendations for implementing the Backward Highlighting technique within faceted browsers that choose the directional column approach. The paper concludes with future work on how to further improve on the statistically proven advantages provided by the Backward Highlighting technique
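    The core idea of Backward Highlighting can be illustrated in a few lines: in a directional column-faceted browser (iTunes-style, e.g. Genre → Artist → Album), selecting an item normally only filters the columns to its right, leaving earlier columns inert; Backward Highlighting additionally highlights every item in an earlier column that co-occurs with the selection. The sketch below is illustrative only and is not the paper's implementation; the data, function names, and facet order are assumptions.

    ```python
    # Illustrative sketch of Backward Highlighting in a directional
    # column-faceted browser (Genre -> Artist -> Album). Data and function
    # names are hypothetical, not taken from the paper.

    records = [
        {"genre": "Rock", "artist": "Muse", "album": "Absolution"},
        {"genre": "Rock", "artist": "Blur", "album": "Parklife"},
        {"genre": "Electronic", "artist": "Muse", "album": "Simulation Theory"},
    ]

    def forward_filter(records, facet, value):
        """Standard directional behaviour: restrict the columns to the right."""
        return [r for r in records if r[facet] == value]

    def backward_highlight(records, selected_facet, value, earlier_facet):
        """Backward Highlighting: surface every item in an EARLIER column
        that co-occurs with the current selection, instead of leaving that
        column unchanged and unlearnable."""
        return {r[earlier_facet] for r in records if r[selected_facet] == value}

    # Selecting the artist "Muse" highlights both genres it appears under --
    # information a purely forward-directional browser would not surface.
    print(backward_highlight(records, "artist", "Muse", "genre"))
    ```

    The contrast with `forward_filter` shows the incidental-learning benefit the paper argues for: the user learns, without extra queries, which earlier-column facet values relate to the current selection.
    
    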

    Coherence compilation: applying AIED techniques to the reuse of educational resources

    Get PDF
    The HomeWork project is building an exemplar system to provide individualised experiences for individuals and groups of children aged 6-7 years, their parents, teachers and classmates at school. It employs an existing set of broadcast video media and associated resources that tackle both numeracy and literacy at Key Stage 1. The system employs a learner model and a pedagogical model to identify which resource is best used with an individual child, or collaboratively with a group of children, at a particular learning point and in a particular location. The Coherence Compiler is the component of the system designed to impose an overall narrative coherence on the materials that any particular child is exposed to. This paper presents a high-level vision of the design of the Coherence Compiler and sets its design within the overall framework of the HomeWork project and its learner and pedagogical models

    DiSCmap : digitisation of special collections mapping, assessment, prioritisation. Final project report

    Get PDF
    Traditionally, digitisation has been led by supply rather than demand. While end users are seen as a priority, they are not directly consulted about which collections they would like to have made available digitally or why. This can be seen in a wide range of policy documents throughout the cultural heritage sector, where users are positioned as central but where their preferences are assumed rather than solicited. Post-digitisation consultation with end users is equally rare. How are we to know that digitisation is serving the needs of the Higher Education community and is sustainable in the long term? The 'Digitisation in Special Collections: mapping, assessment and prioritisation' (DiSCmap) project, funded by the Joint Information Systems Committee (JISC) and the Research Information Network (RIN), aimed to:
    - Identify priority collections for potential digitisation housed within UK Higher Education's libraries, archives and museums as well as faculties and departments.
    - Assess users' needs and demand for Special Collections to be digitised across all disciplines.
    - Produce a synthesis of available knowledge about users' needs with regard to usability and format of digitised resources.
    - Provide recommendations for a strategic approach to digitisation within the wider context and activity of leading players in both the public and commercial sectors.
    The project was carried out jointly by the Centre for Digital Library Research (CDLR) and the Centre for Research in Library and Information Management (CERLIM) and took a collaborative approach to the creation of a user-driven digitisation prioritisation framework, encouraging participation and collective engagement between communities. Between September 2008 and March 2009 the DiSCmap project team asked over 1,000 users, including intermediaries (vocational users who take care of collections) and end users (university teachers, researchers and students), a variety of questions about which physical and digital Special Collections they make use of and what criteria they feel must be considered when selecting materials for digitisation. This was achieved through workshops, interviews and two online questionnaires. Although the data gathered from these activities has the limitation of reflecting only a partial view of priorities for digitisation - the view expressed by those institutions who volunteered to take part in the study - DiSCmap was able to develop:
    - a 'long list' of 945 collections nominated for digitisation by both intermediaries and end users from 70 HE institutions (see p. 21);
    - a framework of user-driven prioritisation criteria which could be used to inform current and future digitisation priorities (see p. 45);
    - a set of 'short lists' of collections which exemplify the application of user-driven criteria from the prioritisation framework to the long list (see Appendix X):
      o Collections nominated more than once by various groups of users.
      o Collections related to a specific policy framework, e.g. HEFCE's strategically important and vulnerable subjects for Mathematics, Chemistry and Physics.
      o Collections on specific thematic clusters.
      o Collections with the highest number of reasons for digitisation

    Designing for Cross-Device Interactions

    Get PDF
    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and in the availability of devices and data, many tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways. First, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies were conducted that show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work. These case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that facilitates interaction analysis of spatial measures and video recordings to support such evaluations of cross-device work.
Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for effective evaluation of cross-device systems

    Documenting, Interpreting, Publishing, and Reusing : Linking archaeological reports and excavation archives in the virtual space

    Get PDF
    This PhD thesis examines how the application of 3D visualization and related digital analytical tools is having a transformative impact on archaeological practice via the improvement of visual-spatial thinking and the strengthening of conceptual understanding. However, the deployment of these new digital methods is essentially still at an experimental stage. Therefore, the thesis undertakes a critical evaluation of current progress, identifying both shortcomings and opportunities. It argues that more work is needed to systematically identify and resolve current operational challenges in order to create improved digital frameworks that can strengthen future performance across the wider discipline. The PhD research is based on four "parallel experiments" designed to facilitate mutual enrichment and ongoing refinement. Each individual experiment generated research articles, which investigate how particular 3D and digital methods can be adapted to diverse kinds of archaeological sites and features, each with unique characteristics. The articles demonstrate how particular methods can be deployed to constantly refine and improve documentation procedures, and to review and adjust interpretation during the excavation process. In total, the thesis produced five research articles and three new web-based publishing systems. Overall, the thesis demonstrates that the application, proactive evaluation and constant improvement of new 3D visualization and digital analytical tools will play an increasingly significant role in strengthening and better integrating future archaeological methods and practice. The research also generates original insights and new digital platforms that together underline the importance of applying these new digital tools across the wider archaeological discipline. Finally, the thesis cautions that digital innovation needs to be anchored in an "open science" culture, including strong ethical frameworks and commitment to FAIR principles (i.e. 
Findability, Accessibility, Interoperability, and Reusability) of data archiving as a key component of research design and wider societal engagement

    NeuroProv - A visualisation system to enhance the utility of provenance Data for neuroimaging analysis

    Get PDF
    E-Science platforms such as myGRID and NeuGRID for Users are growing at a remarkable rate. One of the key barriers to their widespread use in practice is the lack of provenance data to support the reasoning and verification of experimental or analysis results. Clinical researchers use workflows to orchestrate the data present in e-science platforms in order to facilitate processing. Even though most systems capture and store provenance data, they rarely make use of it, limiting the exploitation of its true potential. This thesis investigates mechanisms to visualise provenance data for neuroimaging analysis and to provide means to exploit that potential. To achieve this, a visualisation system has been implemented based on use cases designed from requirements elicited for neuroimaging analysis. The prototype system has been tested against the provenance generated by NeuGRID for Users (N4U) as a proof of concept for our research. Different workflows have been visualised to study the efficacy of the proposed solution. Furthermore, evaluation metrics have been defined to determine whether the proposed solution is suitable for the purpose of the research conducted. The results show that the proposed visualisation system enhances the utility of provenance data for neuroimaging analysis, and therefore the proposed research can be used to provide value to provenance data for neuroimaging analyses

    Production of semi real time media-GIS contents using MODIS imagery

    Get PDF
    [Abstract]: Delivering environmental disaster information swiftly, attractively, meaningfully, and accurately to the public is becoming a competitive task among spatial data visualizing experts. Basically, the data visualization process has to follow the basics of spatial data visualization to maintain the academic quality and spatial accuracy of the content. Here, "Media-GIS" can be promoted as one of the latest sub-forms of GIS, which targets mass media. Under Media-GIS, "Present", the first of the three roles of data visualization, takes the major workload compared to the other two, "Analyze" and "Explore". When presenting content, optimizing the main graphical variables, like size, value, texture, hue, orientation, and shape, is vital with regard to the target market (age group, social group) and the medium (print, TV, web, mobile). This study emphasizes the application of freely available MODIS true-colour images to produce near-real-time content on environmental disasters while minimizing production cost. When the first news of a significant environmental disaster breaks, relevant MODIS (250 m) images can be extracted in GeoTIFF and KML (Keyhole Markup Language) formats from the MODIS website. This original KML file can be overlaid on Google Earth to collect more spatial information about the disaster site. Then, from the ArcGIS environment, the GeoTIFF file can be transferred into Photoshop for production of the graphics of the target spot. This media-friendly Photoshop file can be used as independent content without geo-references, or imported into ArcGIS and converted into KML format, which has geo-references. The KML file, a graphically enhanced content item with extra information on the environmental disaster, can be used on TV and the web through Google Earth. Sub-productions can also be directed into print and mobile content. If the data processing can be automated, the system will be able to produce media content faster. 
    A case study on the recent undersea oil spill that occurred in the Gulf of Mexico is included in the report to highlight the main aspects discussed in the methodology
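    The final step of the pipeline the abstract describes, turning a graphically enhanced image back into a georeferenced KML for Google Earth, amounts to wrapping the image in a KML GroundOverlay. The sketch below shows this under stated assumptions: the file names and the bounding-box coordinates are illustrative placeholders, not values from the study, and the template follows the OGC KML 2.2 GroundOverlay structure rather than any tool-specific export.

    ```python
    # Hedged sketch: georeferencing a media-friendly image (e.g. exported
    # from Photoshop) as a KML GroundOverlay for display in Google Earth.
    # File names and coordinates below are illustrative assumptions.

    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <GroundOverlay>
        <name>{name}</name>
        <Icon><href>{image}</href></Icon>
        <LatLonBox>
          <north>{north}</north>
          <south>{south}</south>
          <east>{east}</east>
          <west>{west}</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>"""

    def make_ground_overlay(name, image, north, south, east, west):
        """Return a KML document that drapes the image over a lat/lon box."""
        return KML_TEMPLATE.format(name=name, image=image,
                                   north=north, south=south,
                                   east=east, west=west)

    # Rough, illustrative bounding box over the Gulf of Mexico.
    kml = make_ground_overlay("Oil spill extent", "spill_enhanced.png",
                              30.5, 27.0, -86.0, -91.5)
    print(kml.splitlines()[0])
    ```

    Automating this step, as the abstract suggests, would let each newly processed image be pushed to TV and web outputs without manual georeferencing.
    
    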