GEORDi: Supporting lightweight end-user authoring and exploration of Linked Data
The US and UK governments have recently made much of the data created by their various departments available on the web as data sets (often as CSV files). Known as "open data", these are valuable assets, yet much of this data remains effectively inaccessible to citizens for the following reasons: (1) it is often a tedious, many-step process for citizens simply to find data relevant to a query; once a candidate data set is located, it must often be downloaded and opened in a separate application just to see whether it contains data that may satisfy the query. (2) It is difficult to join related data sets to create richer, integrated information. (3) It is particularly difficult to query a single data set, and even harder to query across related data sets. (4) To date, one has had to be well versed in Semantic Web protocols such as SPARQL, RDF and URI formation to integrate and query such sources as reusable linked data. Our goal has been to develop tools that let regular, non-programmer web citizens make use of this Web of Data. To this end, we present GEORDi, a set of integrated tools and services that lets citizen users identify, explore, query and represent these open data sources over the web via Linked Data mechanisms. In this paper we describe the GEORDi process for authoring new and translating existing open data into a linkable format, GEORDi's lens mechanism for rendering rich, plain-language descriptions and views of resources, and the GEORDi link-sliding paradigm for data exploration. With these tools we demonstrate that it is possible to make the Web of open (and linked) data accessible for ordinary web citizen users
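The cross-data-set join that points (2) and (3) of the abstract describe can be illustrated with a minimal sketch. All URIs, predicates, and values below are invented for illustration and are not from GEORDi; real linked-data integration would use RDF and SPARQL rather than plain Python dictionaries.

```python
# Minimal sketch: joining two open data sets once both use shared
# linked-data identifiers (URIs) for their subjects. Invented sample data.

# Data set 1: population per district, as (subject, predicate, object) triples
population = [
    ("http://ex.org/district/A", "http://ex.org/pop", 12000),
    ("http://ex.org/district/B", "http://ex.org/pop", 34000),
]

# Data set 2: libraries per district, published separately
libraries = [
    ("http://ex.org/district/A", "http://ex.org/libraries", 3),
    ("http://ex.org/district/B", "http://ex.org/libraries", 5),
]

def join_on_subject(triples_a, triples_b):
    """Join two triple sets on their shared subject URI -- the kind of
    integration a SPARQL query over linked data performs."""
    b_index = {s: o for s, _, o in triples_b}
    return {s: (o, b_index[s]) for s, _, o in triples_a if s in b_index}

joined = join_on_subject(population, libraries)
for district, (pop, libs) in sorted(joined.items()):
    print(district, pop, libs)
```

Because both publishers reuse the same district URIs, the join needs no manual reconciliation; this is the payoff of translating open data into a linkable format.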
Rethinking Map Legends with Visualization
This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set of guidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. These are demonstrated in an application context through interactive software prototypes. The guidelines are derived from the cartographic literature and in liaison with EDINA, who provide digital mapping services for UK tertiary education. They enhance approaches to legend design that have evolved for static media by considering, in a visualization setting: selection, layout, symbols, position, dynamism, and design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphic and The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specific needs, to rethink their nature and role, and to contribute to a wider re-evaluation of maps as artifacts of usage rather than statements of fact. As a consequence of this work, EDINA has acquired funding to enhance its client applications with visualization legends that use these concepts. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization
Explanatory debugging: Supporting end-user debugging of machine-learned programs
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a "program" that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug these learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on "explanatory debugging", then empirically evaluated it. Our results contribute methods for exposing a learned program's logic to end users and for eliciting user corrections to improve the program's predictions
Privacy-aware Linked Widgets
The European General Data Protection Regulation (GDPR) brings new challenges for companies, who must demonstrate that their systems and business processes comply with usage constraints specified by data subjects. However, due to the lack of standards, tools, and best practices, many organizations struggle to adapt their infrastructure and processes to ensure and demonstrate that all data processing complies with users' given consent. The SPECIAL EU H2020 project has developed vocabularies that can formally describe data subjects' given consent, as well as methods that use this description to automatically determine whether processing of the data according to a given policy is compliant with that consent. While this makes it possible to determine whether processing was compliant or not, integrating the approach into existing line-of-business applications and performing ex-ante compliance checking remain open challenges. In this short paper, we demonstrate how the SPECIAL consent and compliance framework can be integrated into Linked Widgets, a mashup platform, in order to support privacy-aware ad-hoc integration of personal data. The resulting environment makes it possible to create data integration and processing workflows out of components that inherently respect the usage policies of the data being processed and are able to demonstrate compliance. We provide an overview of the necessary metadata and orchestration towards a privacy-aware linked data mashup platform that automatically respects subjects' given consent. The evaluation results show the potential of our approach for ex-ante usage policy compliance checking within the Linked Widgets platform and beyond
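The ex-ante check the abstract describes can be reduced to a containment test: a planned processing step is compliant only if every fact about it is covered by the data subject's consent. The sketch below uses invented category and purpose names and flat Python sets; SPECIAL's actual vocabularies are RDF-based and support hierarchies of categories and purposes.

```python
# Minimal sketch of ex-ante consent compliance checking in the spirit of
# the SPECIAL framework. Consent and processing requests are reduced to
# sets of (data category, purpose, recipient) facts; names are invented.

consent = {
    ("location", "navigation", "controller"),
    ("email", "newsletter", "controller"),
}

def compliant(request):
    """A processing request (a set of facts) is compliant iff every fact
    it needs is covered by the data subject's given consent."""
    return request <= consent

ok = compliant({("location", "navigation", "controller")})
bad = compliant({("location", "advertising", "third-party")})
print(ok, bad)  # prints: True False
```

Running this check before a widget executes (rather than auditing afterwards) is what makes the compliance checking ex-ante: non-compliant workflow steps can be refused up front.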
Will this work for Susan? Challenges for delivering usable and useful generic linked data browsers
While we witness an explosion of exploration tools for simple datasets on Web 2.0 designed for use by ordinary citizens, the goal of a usable interface for supporting navigation and sense-making over arbitrary linked data has remained elusive. The purpose of this paper is to analyse why: what makes exploring linked data so hard? Through a user-centered use case scenario, we work through the requirements for sense-making with data, extract functional requirements, and compare these against our tools to see what challenges emerge in delivering a useful, usable knowledge-building experience with linked data. We present presentation-layer and heterogeneous data integration challenges and offer practical considerations for moving towards effective linked data sense-making tools
Doctor of Philosophy dissertation
Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, the volumetric datasets used by domain users have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based user interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then automatically generated by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all.
Finally, real-world multivariate volume datasets are also usually large, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations
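The 1D transfer functions the dissertation builds on can be sketched as a piecewise-linear lookup from a scalar value to a color and opacity. The control points below are invented for illustration; the dissertation's multivariate transfer functions operate over higher-dimensional feature spaces, not a single scalar axis.

```python
# Minimal sketch of a one-dimensional transfer function as used in direct
# volume rendering: scalar values are mapped to (R, G, B, A) by linear
# interpolation between user-placed control points. Invented sample points.

# (scalar value, (r, g, b, a)) control points, sorted by scalar value
control_points = [
    (0.0, (0.0, 0.0, 0.0, 0.0)),   # low densities fully transparent
    (0.5, (1.0, 0.5, 0.0, 0.2)),   # mid densities faint orange
    (1.0, (1.0, 1.0, 1.0, 1.0)),   # high densities opaque white
]

def transfer(value):
    """Piecewise-linear RGBA lookup for a scalar value in [0, 1]."""
    for (v0, c0), (v1, c1) in zip(control_points, control_points[1:]):
        if v0 <= value <= v1:
            t = (value - v0) / (v1 - v0)
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return control_points[-1][1]  # clamp values above the last point

print(transfer(0.25))  # halfway between the first two control points
```

Classification quality hinges on where these control points sit, which is why approaches like GuideME try to derive them from user-selected features instead of asking users to place them by hand in an unfamiliar parameter space.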
BlogForever D2.6: Data Extraction Methodology
This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform
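The RSS side of the extraction approach the report discusses can be sketched with the standard library alone. The feed below is an invented sample; the report's actual pipeline also aligns feed items with the blog's HTML representations, which this sketch omits.

```python
# Minimal sketch of RSS-based blog data extraction: parse a feed and pull
# out each post's title and link. The feed content is an invented sample.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def extract_posts(feed_xml):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(extract_posts(SAMPLE_FEED))
```

Because RSS is structured and standardized, it gives the unsupervised HTML-extraction step a reliable anchor: titles and links from the feed can label the corresponding regions of the blog's pages.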
Making 'The Daily Me': Technology, economics and habit in the mainstream assimilation of personalized news
The mechanisms of personalization deployed by news websites are resulting in an increasing number of editorial decisions being taken by computer algorithms (many of which are under the control of external companies) and by end users. Despite its prevalence, personalization has yet to be addressed fully by the journalism studies literature. This study defines personalization as a distinct form of interactivity and classifies its explicit and implicit forms. Using this taxonomy, it surveys the use of personalization at 11 national news websites in the UK and USA. Research interviews bring a qualitative dimension to the analysis, acknowledging the influence that institutional contexts and journalists' attitudes have on the adoption of technology. The study shows how personalization informs debates on news consumption, content diversity, and the economic context for journalism, and how it challenges the continuing relevance of established theories of journalistic gate-keeping
Early evaluation of Unistats: user experiences
This paper sets out the findings of the user evaluation of Unistats.
UK Higher Education Funding Bodies