182 research outputs found

    Collecting and Visualizing Real-Time Urban Data through City Dashboards

    Dashboards that collect and display real-time streamed data from a variety of rudimentary sensors positioned in the built environment provide an immediate portal through which decision-makers can get some sense of their city and environment. These devices descend from earlier approaches to the control and management of real-time services in cities, particularly transport, in control-room-like settings, but they are more flexible and do not require massive investment in hardware. At one level they are simply screens linked to some sort of computational device, with displays formatted like web pages. Here we catalogue the experience of building such dashboards for large cities in Great Britain. In particular, we link these to the emergence of open data, reflecting especially the experience of the London Datastore. We then show how such dashboards can be configured in many different ways: as data tables that give a physical presence to such data delivery, as purpose-built dashboards for schools, and as various moveable displays that have artistic as well as informative merit. As real-time streamed data become less of a novelty, we expect these dashboards to merge into more generic portals, but for the moment they represent one very public face of the smart city and its big data.
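    As an illustration of the pattern described above, here is a minimal Python sketch of how such a dashboard might poll a real-time feed and refresh a single reading. The endpoint URL, JSON field names, and refresh interval are hypothetical placeholders (this is not the actual API of the London Datastore or any city dashboard), and the sketch uses the third-party requests library.

        import time
        import requests

        FEED_URL = "https://example.org/api/air-quality/latest"  # hypothetical endpoint

        def poll_feed(url):
            """Fetch the latest sensor reading from an open data feed as JSON."""
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response.json()

        def run_dashboard(refresh_seconds=60):
            """Minimal text 'dashboard': re-fetch and redisplay on a fixed interval."""
            while True:
                reading = poll_feed(FEED_URL)
                # Field names below are assumptions about the feed's schema.
                print(f"{reading.get('timestamp')}: NO2 = {reading.get('no2')} ug/m3")
                time.sleep(refresh_seconds)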

    A Platform for the Analysis of Qualitative and Quantitative Data about the Built Environment and its Users

    There are many scenarios in which it is necessary to collect data from multiple sources in order to evaluate a system, including the collection of both quantitative data - from sensors and smart devices - and qualitative data - such as observations and interview results. However, there are currently very few systems that enable both of these data types to be combined in such a way that they can be analysed side-by-side. This paper describes an end-to-end system for the collection, analysis, storage and visualisation of qualitative and quantitative data, developed using the e-Science Central cloud analytics platform. We describe the experience of developing the system, based on a case study that involved collecting data about the built environment and its users. In this case study, data was collected from older adults living in residential care. Sensors were placed throughout the care home and smart devices were issued to the residents. The sensor data was uploaded to the analytics platform and the processed results were stored in a data warehouse, where they were integrated with qualitative data collected by healthcare and architecture researchers. Visualisations are also presented that were intended to allow the data to be explored and potential correlations between the quantitative and qualitative data to be investigated.
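    The side-by-side analysis described above can be illustrated with a short, hedged sketch: hypothetical timestamped sensor readings are aligned with the most recent qualitative observation for the same room using pandas. All field names and values are invented for illustration and do not reflect the schema of the e-Science Central platform or the study's data warehouse.

        import pandas as pd

        # Hypothetical quantitative stream: timestamped sensor readings per room.
        sensor = pd.DataFrame({
            "time": pd.to_datetime(["2015-03-01 09:05", "2015-03-01 09:40",
                                    "2015-03-01 14:10"]),
            "room": ["lounge", "lounge", "corridor"],
            "motion_events": [12, 3, 7],
        })

        # Hypothetical qualitative records: timestamped researcher observations.
        notes = pd.DataFrame({
            "time": pd.to_datetime(["2015-03-01 09:00", "2015-03-01 14:00"]),
            "room": ["lounge", "corridor"],
            "observation": ["group activity in lounge", "resident walking with carer"],
        })

        # Align each reading with the most recent observation for the same room,
        # so quantitative and qualitative records can be inspected side-by-side.
        merged = pd.merge_asof(sensor.sort_values("time"), notes.sort_values("time"),
                               on="time", by="room", direction="backward")
        print(merged)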

    Use of synthetic health data in prototyping for developing dental implant registry services

    Developing novel applications in healthcare and dentistry can be challenging due to a lack of application requirements, uncertain stakeholders, and no available test data. Such conditions exist in tooth implant dentistry, where innovative services are needed to record and communicate data. This study investigates how manually generated synthetic data can support the development and demonstration of new services for a dental implant registry. Further objectives are to evaluate the usefulness of the developed services and to determine whether the development process has contributed to improving the data model used by a dental implant registry. To address these objectives, we used a design science methodology to develop a high-fidelity dashboard prototype and a synthetic dataset in parallel. The development process was carried out in iterations, involving stakeholders as early as possible. The results indicate that the use of synthetic data to demonstrate possible future services was an essential component of the development process, facilitating early active participation of stakeholders. In particular, data with some realistic qualities were the most valuable in this process. Furthermore, the development process we used resulted in some contributions to the registry's data model, but fewer than expected. In summary, the services we developed were deemed useful by stakeholders. These results suggest that synthetic data generated manually, together with a high-fidelity prototype, may contribute to involving stakeholders early in the development process. This participation may ease the process of identifying application requirements and engaging stakeholders, potentially producing useful features.
    Master's thesis in Software Development in collaboration with HVL (PROG399, MAMN-PRO)
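    To make the approach concrete, below is a minimal Python sketch of manually specified synthetic registry data of the kind the study relied on. Every field name, value range, and rate here is an invented placeholder; the registry's actual data model is not reproduced.

        import csv
        import random

        random.seed(42)  # keep the "manually tuned" dataset reproducible

        BRANDS = ["BrandA", "BrandB", "BrandC"]      # hypothetical implant brands
        POSITIONS = ["11", "21", "36", "46"]         # example FDI tooth positions

        def synthetic_record(patient_id):
            return {
                "patient_id": f"P{patient_id:04d}",
                "implant_brand": random.choice(BRANDS),
                "tooth_position": random.choice(POSITIONS),
                "placement_year": random.randint(2010, 2020),
                "failed": random.random() < 0.05,    # plausible-looking, not clinical
            }

        with open("synthetic_registry.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=synthetic_record(0).keys())
            writer.writeheader()
            writer.writerows(synthetic_record(i) for i in range(200))

    A dataset like this has "some realistic qualities" (plausible identifiers, brands, and dates) without containing any real patient data, which is what made early stakeholder demonstrations possible.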

    Tailored information dashboards: A systematic mapping of the literature

    Information dashboards are extremely useful tools for exploiting knowledge. Dashboards enable users to reach insights and to identify patterns within data at a glance. However, dashboards have a series of characteristics and configurations that may not be optimal for every user, thus requiring the modification or variation of their features to fulfil specific user requirements. This variation process is usually referred to as customization, personalization or adaptation, depending on how it is achieved. Given the great number of users and the exponential growth of data sources, tailoring an information dashboard is not a trivial task, as several solutions and configurations could arise. To analyse and understand the current state of the art regarding tailored information dashboards, a systematic mapping has been performed. The mapping focuses on answering questions regarding how existing dashboard solutions in the literature manage the customization, personalization and/or adaptation of their elements to produce tailored displays.
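    The distinction the mapping draws between customization (user-driven) and personalization or adaptation (system-driven) can be sketched with a small, hypothetical configuration structure in Python; the field names and the adaptation heuristic are illustrative only.

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)
        class DashboardConfig:
            """Hypothetical dashboard description; fields are illustrative."""
            widgets: tuple = ("kpi_summary", "trend_chart", "data_table")
            theme: str = "light"
            refresh_seconds: int = 60

        base = DashboardConfig()

        # Customization: the user changes the configuration explicitly.
        customized = replace(base, theme="dark")

        # Personalization/adaptation: the system derives the change from user
        # data, e.g. reordering widgets by how often this user opens them.
        usage_counts = {"trend_chart": 42, "kpi_summary": 10, "data_table": 3}
        adapted = replace(base, widgets=tuple(
            sorted(base.widgets, key=lambda w: -usage_counts.get(w, 0))))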

    Persuasive technology: A systematic review on the role of computers in awareness study

    This paper reviews empirical research on persuasive technology (PT) with the aims of: (i) examining the results of 25 persuasive technology studies with awareness as the intended outcome, (ii) investigating the effects of persuasive technology use on target users, and (iii) examining the roles computers play in creating awareness among users and the effects of persuasive technology on the domains studied. The main aim of this review is to help researchers develop a reference point for setting future research directions in persuasive technology, particularly in the awareness domain. Results from the review indicate that persuasive technology has the ability to increase user awareness of certain contexts or issues. Most of the studies show that the computer as a medium and as a social actor has more impact on increasing awareness than the computer as a tool. It can be concluded that understanding the appropriate persuasive strategy is important in helping researchers develop effective applications towards the intended outcome. This paper also has implications for designing persuasive systems and serves as a reference for future research.

    A data-driven decision-making model for the third-party logistics industry in Africa

    Third-party logistics (3PL) providers have continued to be key players in the supply chain network and have witnessed a growth in the usage of information technology. This growth has increased the volume of structured and unstructured data that is collected at high velocity and is of rich variety, sometimes described as "Big Data". Leaders in the 3PL industry are constantly seeking to effectively and efficiently mature their abilities to exploit this data to gain business value through data-driven decision-making (DDDM). DDDM helps leaders reduce the reliance they place on observation and intuition when making crucial business decisions in a volatile business environment. The aim of this research was to develop a prescriptive model for DDDM in 3PLs. The model consists of iterative elements that prescribe guidelines to decision-makers in the 3PL industry on how to adopt DDDM. A literature review of existing theoretical frameworks and models for DDDM was conducted to determine the extent to which they contribute towards DDDM for 3PLs. The Design-Science Research Methodology (DSRM) was followed to address the aim of the research and applied to pragmatically and iteratively develop and evaluate the artefact (the model for DDDM) in the real-world context of a 3PL. The literature findings revealed that the challenges with DDDM in organisations fall into three main categories, related to data quality, data management, and vision and capabilities. Once the challenges with DDDM were established, a prescriptive model for DDDM in 3PLs was designed and developed. Qualitative data was collected from semi-structured interviews to gain an understanding of the problems and possible solutions in the real-world context of 3PLs. An as-is analysis in the real-world case 3PL company confirmed the challenges identified in the literature, and showed that data is still used in the company only for descriptive and diagnostic analytics to aid decision-making processes. This highlights that there is still room to mature into using data for predictive and prescriptive analytics, which would, in turn, improve the decision-making process. An improved second version of the model was demonstrated to the participants (the targeted users), who had the opportunity to evaluate it. The findings revealed that the model provided clear guidelines on how to make data-driven decisions, and that the feedback loop and data culture aspects highlighted in the design were among the model's most important features. Some improvements were suggested by participants. A field study of three data analytics tools was conducted to identify the advantages and disadvantages of each and to highlight the status of DDDM at the real-world case 3PL. The limitations of the second version of the model, together with the recommendations from the participants, were used to inform the improved and revised third version of the model.
    Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 202

    An investigation into visualisation and forecasting of real-time electrical consumption based on smart grid data

    The smart grid, and in particular smart meters, is a growing worldwide phenomenon which has made detailed real-time usage data available to users in ways that were not possible in the past. South Africa has been slow to adopt smart meters, but in the past two years this has changed and smart meters are becoming the new standard. This has given rise to the need for software applications that help both the South African consumer and local power utilities get the most out of smart meter data. The purpose of this research is to investigate the possibilities offered by smart grid data obtained from advanced metering infrastructure, with particular emphasis on real-time energy usage visualisation and peak load forecasting. Previously, detailed energy usage data was not available to consumers, hence there has not been much research focusing on utilising this data for direct consumer benefit. The focus of most research has mainly been on the power utilities' supply side, where attention has been on visualising consumers' usage and forecasting consumer demand in order to supply electricity continuously and efficiently. In this dissertation a benchmarking model for developing smart grid data visualisation dashboards is proposed, and this model is used to present and prototype a consumer-side dashboard. The prototype implements real-time data visualisation techniques, as well as a Multiple Linear Regression-based forecasting algorithm for half-hourly peak load forecasting using data collected from the University of the Witwatersrand's advanced metering infrastructure. In this study the Multiple Linear Regression model is built through a comprehensive analysis of two years' worth of energy usage data from the University of the Witwatersrand and three years' worth of hourly temperature data from the South African Weather Services. The prototype's performance is evaluated with reference to the proposed benchmark, together with a user technology-acceptance evaluation conducted by the University's Property and Infrastructure Management division. The dashboard is found to be a useful and acceptable tool for energy monitoring at the University. The forecasting model performs well, with a mean absolute percentage error of 3.69%. The inclusion of forecasting functionality within the energy management dashboard is shown to help the university reduce its electricity bill by enabling it to shave its peak loads. The analysis highlights the importance of better data archiving and smart meter monitoring, ensuring that the meters are always online and that no data goes missing, which is vital for accurate forecasting results.
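    The forecasting approach named above, Multiple Linear Regression over usage and temperature data, can be sketched as follows in Python with scikit-learn. The series here is randomly generated stand-in data with invented coefficients; the dissertation's actual inputs were Witwatersrand meter readings and South African Weather Services temperatures, and its reported MAPE of 3.69% applies to those, not to this toy.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)

        # Synthetic half-hourly series: one year of 48 readings per day.
        n = 48 * 365
        temperature = 15 + 10 * rng.random(n)                      # degrees C
        half_hour = np.tile(np.arange(48) / 2.0, n // 48)          # hour of day
        load = (200 + 8 * temperature
                + 30 * np.sin(np.pi * half_hour / 12)
                + rng.normal(0, 5, n))                             # kW, invented

        # Regressors: temperature plus a simple daily-cycle encoding.
        X = np.column_stack([temperature,
                             np.sin(np.pi * half_hour / 12),
                             np.cos(np.pi * half_hour / 12)])

        model = LinearRegression().fit(X[:-48], load[:-48])        # hold out last day
        forecast = model.predict(X[-48:])
        mape = np.mean(np.abs((load[-48:] - forecast) / load[-48:])) * 100
        print(f"MAPE over the held-out day: {mape:.2f}%")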

    Integration and visualisation of clinical-omics datasets for medical knowledge discovery

    In recent decades, the rise of various omics fields has flooded life sciences with unprecedented amounts of high-throughput data, which have transformed the way biomedical research is conducted. This trend will only intensify in the coming decades, as the cost of data acquisition will continue to decrease. Therefore, there is a pressing need to find novel ways to turn this ocean of raw data into waves of information and finally distil those into drops of translational medical knowledge. This is particularly challenging because of the incredible richness of these datasets, the humbling complexity of biological systems and the growing abundance of clinical metadata, which makes the integration of disparate data sources even more difficult. Data integration has proven to be a promising avenue for knowledge discovery in biomedical research. Multi-omics studies allow us to examine a biological problem through different lenses using more than one analytical platform. These studies not only present tremendous opportunities for the deep and systematic understanding of health and disease, but they also pose new statistical and computational challenges. The work presented in this thesis aims to alleviate this problem with a novel pipeline for omics data integration. Modern omics datasets are extremely feature rich and in multi-omics studies this complexity is compounded by a second or even third dataset. However, many of these features might be completely irrelevant to the studied biological problem or redundant in the context of others. Therefore, in this thesis, clinical metadata driven feature selection is proposed as a viable option for narrowing down the focus of analyses in biomedical research. Our visual cortex has been fine-tuned through millions of years to become an outstanding pattern recognition machine. To leverage this incredible resource of the human brain, we need to develop advanced visualisation software that enables researchers to explore these vast biological datasets through illuminating charts and interactivity. Accordingly, a substantial portion of this PhD was dedicated to implementing truly novel visualisation methods for multi-omics studies.
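    The clinical-metadata-driven feature selection proposed above can be illustrated with a deliberately crude Python sketch: filter omics features by a univariate test against one clinical variable. The data, threshold, and test here are placeholders rather than the thesis's actual pipeline, and a real analysis would at minimum correct for multiple testing.

        import numpy as np
        import pandas as pd
        from scipy import stats

        rng = np.random.default_rng(1)

        # Hypothetical omics matrix: 100 samples x 1000 features.
        omics = pd.DataFrame(rng.normal(size=(100, 1000)),
                             columns=[f"feat_{i}" for i in range(1000)])
        outcome = rng.integers(0, 2, size=100)   # e.g. a case/control label

        # Keep only features whose distributions differ between clinical groups.
        pvals = omics.apply(lambda col: stats.ttest_ind(
            col[outcome == 1], col[outcome == 0]).pvalue)
        selected = pvals[pvals < 0.01].index
        print(f"{len(selected)} of {omics.shape[1]} features retained")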