
    A tool for creating and visualizing semantic annotations on relational tables

    Semantically annotating content from relational tables on the Web is a crucial task towards realizing the vision of the Semantic Web. However, there is a lack of open-source, user-friendly tools to facilitate this. This paper describes an extension of the TableMiner+ system, an open-source Semantic Table Interpretation system that automatically annotates Web tables using Linked Data in an effective and efficient manner. It adds a graphical user interface to TableMiner+ to facilitate the visualization and correction of automatically generated annotations. This makes TableMiner+ an ideal tool for the semi-automatic creation of high-quality semantic annotations on relational tables, which facilitates the publication of Linked Data on the Web.
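
    Purely as an illustration of the kind of annotation task involved (not TableMiner+'s actual algorithm), the sketch below looks up candidate ontology types for the cells of one table column against DBpedia's public SPARQL endpoint and takes a majority vote as the column annotation; the endpoint, the dbo namespace filter and the majority-vote heuristic are assumptions made for the example.

```python
# Minimal sketch (not TableMiner+ itself): look up candidate types for the
# cells of one table column against DBpedia, then pick the most frequent type
# as a crude column-level semantic annotation.
import collections
import requests

SPARQL_ENDPOINT = "https://dbpedia.org/sparql"  # public Linked Data endpoint

def candidate_types(cell_value: str) -> list[str]:
    """Return rdf:type URIs of resources whose English label matches the cell."""
    query = f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT DISTINCT ?type WHERE {{
      ?entity rdfs:label "{cell_value}"@en ;
              a ?type .
      FILTER(STRSTARTS(STR(?type), "http://dbpedia.org/ontology/"))
    }} LIMIT 20
    """
    resp = requests.get(SPARQL_ENDPOINT,
                        params={"query": query, "format": "json"}, timeout=30)
    resp.raise_for_status()
    return [b["type"]["value"] for b in resp.json()["results"]["bindings"]]

def annotate_column(cells: list[str]) -> str | None:
    """Majority vote over per-cell candidate types."""
    counts = collections.Counter(t for cell in cells for t in candidate_types(cell))
    return counts.most_common(1)[0][0] if counts else None

print(annotate_column(["Berlin", "Paris", "Madrid"]))  # expect a dbo:City-like type
```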

    Exploring user and system requirements of linked data visualization through a visual dashboard approach

    One of the open problems in Semantic Web research is which tools should be provided to users to explore linked data. This is even more urgent now that massive amounts of linked data are being released by governments worldwide. The development of single dedicated visualization applications is increasing, but the problem of exploring unknown linked data to gain a good understanding of what it contains is still open. An effective generic solution must take into account the user's point of view, their tasks and interaction, as well as the system's capabilities and the technical constraints the technology imposes. This paper is a first step in understanding the implications of both user and system by evaluating our dashboard-based approach. Though we observe high user acceptance of the dashboard approach, our paper also highlights technical challenges, arising from the complexities of current infrastructure, that need to be addressed when visualising linked data. In light of the findings, guidelines for the development of linked data visualization (and manipulation) are provided.
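
    As a minimal sketch of the kind of "first look" such a dashboard could offer over an unknown dataset, the snippet below counts instances per class at a SPARQL endpoint; the endpoint used is only an example and the query is not taken from the paper.

```python
# Sketch: summarise an unknown SPARQL endpoint by counting instances per class,
# a typical first view a linked-data dashboard might present.
import requests

def class_summary(endpoint: str, limit: int = 10) -> list[tuple[str, int]]:
    query = f"""
    SELECT ?cls (COUNT(?s) AS ?n) WHERE {{ ?s a ?cls }}
    GROUP BY ?cls ORDER BY DESC(?n) LIMIT {limit}
    """
    resp = requests.get(endpoint, params={"query": query, "format": "json"}, timeout=60)
    resp.raise_for_status()
    return [(b["cls"]["value"], int(b["n"]["value"]))
            for b in resp.json()["results"]["bindings"]]

# May be slow on very large public endpoints; shown here only as an example.
for cls, n in class_summary("https://dbpedia.org/sparql"):
    print(f"{n:>10}  {cls}")
```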

    Visualizing semantic table annotations with TableMiner+

    This paper describes an extension of the TableMiner+ system, an open-source Semantic Table Interpretation system that annotates Web tables using Linked Data in an effective and efficient manner. It adds a graphical user interface to TableMiner+ to facilitate the visualization and correction of automatically generated annotations. This makes TableMiner+ an ideal tool for the semi-automatic creation of high-quality semantic annotations on tabular data, which facilitates the publication of Linked Data on the Web.

    Visualizing and animating large-scale spatiotemporal data with ELBAR explorer

    Visual exploration of data enables users and analysts to observe interesting patterns that can trigger new research for further investigation. With the increasing availability of Linked Data, supporting sense-making over the data via visual exploration tools for hypothesis generation is critical. Time and space play important roles in this because of their ability to illustrate dynamicity in a spatial context. Yet, Linked Data visualization approaches have typically not made efficient use of time and space together, apart from rather static multi-visualization approaches and mashups. In this paper we demonstrate the ELBAR explorer, which visualizes a vast amount of scientific observational data about the Brazilian Amazon Rainforest. Our core contribution is a novel mechanism for animating between the different observed values, thus illustrating the observed changes themselves.
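
    As a hedged sketch of the underlying idea, using two synthetic observation grids rather than the Amazon data or ELBAR's implementation: one simple way to animate between observed snapshots is to interpolate linearly between them over the animation frames.

```python
# Illustrative sketch only: smoothly animate between two observed snapshots of
# a spatial variable by linear interpolation.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Two hypothetical observation grids (e.g., an index observed in two years).
obs_2010 = np.random.rand(50, 50)
obs_2020 = np.random.rand(50, 50)

fig, ax = plt.subplots()
img = ax.imshow(obs_2010, vmin=0, vmax=1)
ax.set_title("Interpolated observations")

def update(frame):
    t = frame / 99                       # interpolation weight in [0, 1]
    img.set_data((1 - t) * obs_2010 + t * obs_2020)
    return [img]

anim = FuncAnimation(fig, update, frames=100, interval=50, blit=True)
plt.show()
```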

    User driven information extraction with LODIE

    Information Extraction (IE) is the technique for transforming unstructured or semi-structured data into a structured representation that can be understood by machines. In this paper we use a user-driven Information Extraction technique to wrap entity-centric Web pages. The user can select concepts and properties of interest from available Linked Data. Given a number of websites containing pages about the concepts of interest, the method exploits (i) recurrent structures in the Web pages and (ii) available knowledge in Linked Data to extract the information of interest from the Web pages.
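
    The sketch below illustrates the general idea with made-up HTML, not LODIE's actual extraction method: a value already known from Linked Data is located on one page, the structural path to it is recorded, and that path is reused on other pages from the same site.

```python
# Minimal sketch of combining recurrent page structure with Linked Data seeds:
# find a known value on one page, record its path, reuse the path elsewhere.
from lxml import html

def path_to_value(page: str, known_value: str) -> str | None:
    """Return the tree path of the element whose text equals a known seed value."""
    tree = html.fromstring(page)
    for el in tree.iter():
        if el.text and el.text.strip() == known_value:
            return tree.getroottree().getpath(el)
    return None

page_a = "<html><body><div class='card'><h1>Alan Turing</h1><span>1912</span></div></body></html>"
page_b = "<html><body><div class='card'><h1>Ada Lovelace</h1><span>1815</span></div></body></html>"

# "Alan Turing" stands in for a label already present in Linked Data.
xpath = path_to_value(page_a, "Alan Turing")
print(xpath)                                         # e.g. /html/body/div/h1
print(html.fromstring(page_b).xpath(xpath)[0].text)  # extracts "Ada Lovelace"
```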

    Visual design recommendations for situation awareness in social media

    The use of online Social Media is increasingly popular amongst emergency services to support Situational Awareness (i.e. accurate, complete and real-time information about an event). Whilst many software solutions have been developed to monitor and analyse Social Media, little attention has been paid to how to design visualisations that support Situational Awareness over this large-scale data space. We describe an approach in which levels of Situational Awareness were matched to corresponding visual design recommendations using participatory design techniques with Emergency Responders in the UK. We conclude by presenting visualisation prototypes developed to satisfy the design recommendations, and how they contribute to Emergency Responders' Situational Awareness in an example scenario. We end by highlighting research issues that emerged during the initial evaluation.

    Large scale, long-term, high granularity measurement of active travel using smartphone apps

    Accurate, long-term data are needed in order to determine trends in active travel, to examine the effectiveness of any interventions and to quantify the health, social and economic consequences of active travel. However, most studies of individual travel behaviour have either used self-report (which is limited in detail and open to bias) or provided logging devices for short periods, and so lack the ability to monitor long-term trends. We have developed apps for participants' own smartphones (Android or iOS) that monitor and feed back individual users' physical activity whilst the phone is carried or worn. The nature, time and location of any physical activity are uploaded to a secure survey and allow researchers to identify large-scale behaviour. Pilot data from almost 2000 users have been logged and are reported. This constitutes a natural experiment, collecting long-term physical activity, transport mode and route choice information across a large cross-section of users.
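
    As a hedged sketch of the kind of record such an app might upload, the snippet below defines one activity bout with its nature, time and location; the field names and the commented-out upload URL are illustrative assumptions, not the project's actual schema or endpoint.

```python
# Sketch of a single uploaded activity record (hypothetical schema).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ActivityRecord:
    user_id: str   # pseudonymous participant identifier
    mode: str      # e.g. "walking", "cycling", "vehicle"
    start: str     # ISO 8601 timestamps
    end: str
    lat: float     # coarse location of the activity bout
    lon: float

record = ActivityRecord(
    user_id="p-0042",
    mode="cycling",
    start=datetime(2019, 5, 1, 8, 0, tzinfo=timezone.utc).isoformat(),
    end=datetime(2019, 5, 1, 8, 25, tzinfo=timezone.utc).isoformat(),
    lat=53.38, lon=-1.47,
)
payload = json.dumps(asdict(record))
# requests.post("https://example.org/api/activity", data=payload)  # hypothetical endpoint
print(payload)
```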

    Using machine learning techniques and brain MRI scans for detection of Alzheimer’s disease

    Dementia is a clinical syndrome characterized by cognitive and behavioral impairment; it mostly affects people aged 65 years and over. Dementia results from several diseases, of which Alzheimer's disease (AD) accounts for up to 80% of all dementia diagnoses. Magnetic Resonance Imaging (MRI) is one of the most widely used methods to diagnose AD, but due to the low efficiency of manual analysis, machine learning algorithms have been developed to diagnose AD using medical imaging data. In this study, unsupervised learning strategies were used to cluster the two diagnostic groups, a healthy group called cognitively normal (CN) and AD, using brain structural MRI scans. First, we detected the abnormal regions between CN and AD using two-sample t-tests, and then employed an unsupervised learning neural network to extract features from the brain MRI images. In the final stage, unsupervised learning (clustering) was used to discriminate between CN and AD data based on the extracted features. The approach was tested on 429 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who had baseline brain structural MRI scans: 231 CN and 198 AD. We found that the abnormal regions around the hippocampus were indicated by the two-sample t-test (p<0.0001), and the proposed method using these abnormal regions yielded the following clustering results for CN vs. AD: accuracy=0.8163, specificity=0.7863, sensitivity=0.8436 and precision=0.8411 (mean values over 10 runs).
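
    A schematic sketch of the described pipeline on synthetic data (not the ADNI data or the authors' code; the neural-network feature extractor is omitted and the t-test mask alone stands in for feature selection): select "abnormal" features with a two-sample t-test, cluster without labels, then score the clustering against the held-back diagnoses.

```python
# Sketch: t-test feature selection followed by unsupervised clustering,
# evaluated against held-back group labels.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_cn, n_ad, n_vox = 231, 198, 500           # group sizes mirror the abstract
cn = rng.normal(0.0, 1.0, (n_cn, n_vox))
ad = rng.normal(0.0, 1.0, (n_ad, n_vox))
ad[:, :40] += 0.8                           # synthetic "abnormal regions"

# Stage 1: two-sample t-test per feature, keep those with p < 0.0001.
_, p = ttest_ind(cn, ad, axis=0)
mask = p < 1e-4
X = np.vstack([cn, ad])[:, mask]
y = np.array([0] * n_cn + [1] * n_ad)

# Final stage: cluster into two groups, then align cluster ids to labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
acc = max(accuracy_score(y, labels), accuracy_score(y, 1 - labels))
print(f"selected features: {mask.sum()}, clustering accuracy: {acc:.3f}")
```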

    Citizen science on Twitter: Using data analytics to understand conversations and networks

    This paper presents a long-term study of how the public engage with discussions around citizen science and crowdsourcing topics. With progress in sensor technologies and the IoT, our cities and neighbourhoods are increasingly sensed, measured and observed. While such data are often used to inform citizen science projects, it is still difficult to understand how citizens and communities discuss citizen science activities and engage with citizen science projects. Understanding these engagements in greater depth will provide citizen scientists, project owners, practitioners and the general public with insights into how social media can be used to share citizen science related topics, particularly to help increase visibility, influence change and, more generally, raise awareness of topics. To the knowledge of the authors, this is the first large-scale study of how such information is discussed on Twitter, particularly outside the scope of individual projects. The paper reports on the wide variety of topics (e.g., politics, news, ecological observations) discussed on social media, the variety of network types that emerge, and the varied roles played by users in sharing information on Twitter. Based on these findings, the paper highlights recommendations for stakeholders on engaging with citizen science topics.
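
    As an illustration of the style of network analysis described, the sketch below builds a small mention network from made-up tweets and ranks users by how often others mention them; the data and the choice of in-degree as a rough indicator of user role are assumptions for the example.

```python
# Sketch: a directed mention/retweet graph over citizen-science tweets,
# used to look at who spreads information to whom.
import networkx as nx

tweets = [  # hypothetical (user, mentioned_user) pairs
    ("alice", "cityobservatory"), ("bob", "cityobservatory"),
    ("carol", "alice"), ("dave", "alice"), ("cityobservatory", "alice"),
]

G = nx.DiGraph()
G.add_edges_from(tweets)

# In-degree as a crude indicator of users who are amplified by others.
for user, indeg in sorted(G.in_degree, key=lambda x: x[1], reverse=True):
    print(f"{user:>16}  mentioned by {indeg} users")
```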

    A novel application of deep learning with image cropping: a smart city use case for flood monitoring

    Event monitoring is an essential application of Smart City platforms. Real-time monitoring of gully and drainage blockage is an important part of flood monitoring applications. Building viable IoT sensors for detecting blockage is a complex task due to the limitations of deploying such sensors in situ. Image classification with deep learning is a potential alternative solution. However, there are no image datasets of gullies and drainages. We were faced with these challenges as part of developing a flood monitoring application in a European Union-funded project. To address these issues, we propose a novel image classification approach based on deep learning with an IoT-enabled camera to monitor gullies and drainages. The approach uses deep learning to develop an effective image classification model that classifies blockage images into different class labels based on severity. To handle the complexity of video-derived images, and the resulting poor classification accuracy of the model, we carried out experiments in which image edges were removed by cropping. The cropping in our experiments aims to concentrate only on the regions of interest within images, leaving out a proportion of the image edges. An image dataset of crowd-sourced, publicly accessible images was curated to train and test the proposed model. For validation, model accuracies were compared with and without image cropping. The cropping-based image classification showed an improvement in classification accuracy. This paper outlines the lessons from our experimentation that have a wider impact on many similar use cases involving IoT-based cameras as part of smart city event monitoring platforms.
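
    A sketch of the cropping idea under stated assumptions (the dataset layout, severity class folders and crop fraction are illustrative, not the paper's exact setup): remove a border from each frame so the classifier concentrates on the gully or drain region, then train a standard CNN on the cropped images.

```python
# Sketch: centre-crop frames to drop image edges, then train a CNN classifier
# on severity classes arranged as blockage_images/<severity_label>/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(180),   # ~70% of 256: discards the image edges
    transforms.Resize(224),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("blockage_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:     # one pass, for brevity
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

    Comparing the accuracy of this model against an identical one trained without the CenterCrop step reproduces, in outline, the with/without-cropping validation described above.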