System Architecture of A European Platform for Health Policy Decision Making: MIDAS
Background: Healthcare data is a rich yet underutilized resource due to its disconnected, heterogeneous nature. A means of connecting healthcare data and integrating it with additional open and social data in a secure way can support the monumental challenge policy-makers face in safely accessing all relevant data to manage the health and wellbeing of all. The goal of this study was to develop a novel health data platform within the MIDAS (Meaningful Integration of Data Analytics and Services) project that harnesses the potential of latent healthcare data, in combination with open and social data, to support evidence-based health policy decision-making in a privacy-preserving manner. Methods: The MIDAS platform was developed iteratively and collaboratively, with close involvement of academia, industry, healthcare staff and policy-makers, to solve tasks including data storage, data harmonization, data analytics and visualization, and open and social data analytics. The platform has been piloted and tested by health departments in four European countries, each focusing on different region-specific health challenges and related data sources. Results: A novel health data platform meeting the needs of public health decision-makers was successfully implemented within the four pilot regions. It connects heterogeneous healthcare datasets and open datasets, turning large amounts of previously isolated data into actionable information and enabling evidence-based health policy-making and risk stratification through the application and visualization of advanced analytics. Conclusions: The MIDAS platform delivers a secure, effective and integrated solution for handling health data, providing support for health policy decision-making, the planning of public health activities and the implementation of the Health in All Policies approach. The platform has proven transferable, sustainable and scalable across policies, data and regions.
Triangulum City Dashboard: An Interactive Data Analytic Platform for Visualizing Smart City Performance
Cities are becoming smarter by incorporating hardware technology, software systems, and network infrastructure that provide Information Technology (IT) systems with real-time awareness of the real world. What makes a "smart city" functional is the combined use of advanced infrastructure technologies to deliver its core services to the public in a remarkably efficient manner. City dashboards have drawn increasing interest from both city operators and citizens. Dashboards can gather, visualize, analyze, and report regional performance to support the sustainable development of smart cities. They provide useful tools for evaluating and facilitating urban infrastructure components and services. This work proposes an interactive web-based data visualization and data analytics toolkit supported by big data aggregation tools. The proposed system is a cloud-based prototype that supports visualization and real-time monitoring of city trends while processing and displaying large data sets on a standard web browser. It is also capable of supporting online analytical processing by answering analytical queries and producing graphics from multiple resources. The aim of this platform is to improve communication between users and urban service providers and to give citizens an overall view of the city's state. The conceptual framework and architecture of the proposed platform are explored, highlighting design challenges and providing insight into the development of smart cities. Moreover, results and the potential statistical analysis of important city services offered by the system are introduced. Finally, we present some challenges and opportunities identified through the development of the city data platform.
IMPRESS: Improving Engagement in Software Engineering Courses through Gamification
Software Engineering courses play an important role in preparing students with the right knowledge and attitude for software development in practice. The implications are far-reaching, as the quality of the software we use ultimately depends on the quality of the people who make it. Teaching Software Engineering, however, is quite challenging: the subject is not considered among the most exciting by students, while teachers often have to deal with exploding student numbers. The EU project IMPRESS seeks to explore the use of gamification in teaching software engineering at the university level to improve students' engagement and hence their appreciation for the taught subjects. This paper presents the project, its objectives, and its current progress.
The Use of Learning Analytics Interactive Dashboards in Serious Games: A Review of the Literature
Learning analytics in serious games is a subject of increasing demand in the educational field. In this context, there is a need to study how the data visualizations found in the literature are adopted for learning analytics in serious games. This paper presents a Systematic Literature Review (SLR) of how studies on the use of interactive learning analytics dashboards in serious games have evolved, investigating the characteristics of dashboards used for viewing educational data. A bibliometric analysis was carried out in which 75 relevant studies were selected from the Scopus, Web of Science, and IEEE Xplore databases. The data analysis showed that the current literature contains only a small number of studies covering all the main actors in the learning process: teachers/instructors, students/participants, game developers/designers, and managers/researchers. In the vast majority of the investigated studies, data visualization algorithms are used with a focus on only some of these actors, such as teachers/instructors and students/participants.
Fraud and Performance Monitoring of Credit Card Tokenization Using Business Intelligence
This project's major objective is to gather all the necessary data to analyze and deliver a sound analytical reporting platform. The product, developed for analysts, is expected to be used extensively for insights into token provisioning and its varied utilization across banks and merchants, and also to monitor emerging fraud patterns and initiate the steps necessary to avoid adversities in the future.
The reports are generated using the principles of descriptive analytics. Supporting the analysis with a range of KPIs, metrics, scorecards and similar instruments has given an advantage for better yield. These analytical dashboards have given analysts deep-dive insights.
The project has been used by many analysts to reach agreement on the different patterns noticed by each individual, and by senior executives to gain a profound understanding of how the widely different tokenizations are used and how they segregate by attribute.
Internship Report - Roaming Data Science (RoaDS) BI Solution in a Vodafone Environment
A telecom company (Vodafone) needed to implement a Business Intelligence solution for roaming data drawn from a wide set of different data sources. Based on the data visualizations of this solution, its key users with decision-making power can perform business analyses and assess needs for infrastructure and software expansion. This document presents the scientific papers produced during the various stages of the solution's development (state of the art, architecture design, and implementation results). The Business Intelligence solution was designed and implemented with OLAP methodologies and technologies in a Data Warehouse composed of Data Marts arranged in a constellation schema; the visualization layer was custom-made in JavaScript (VueJS). To assess the results, a questionnaire was created to be filled in by the solution's key users, from which it was possible to ascertain that user acceptance was satisfactory. The proposed objectives for the implementation of the BI solution, with all its requirements, were achieved, with the infrastructure itself created from scratch on Kubernetes. The BI platform can be expanded using column-oriented storage databases designed specifically for OLAP workloads, removing the need for an OLAP cube layer. Based on Machine Learning algorithms, the platform will be able to perform the predictions needed to make decisions about Vodafone's roaming infrastructure.
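The constellation (galaxy) schema mentioned above, in which multiple fact tables share common dimension tables, can be pictured with a minimal sketch. The table and column names below are illustrative assumptions for a roaming scenario, not the actual RoaDS schema.

```python
import sqlite3

# Minimal constellation-schema sketch: two fact tables (roaming calls
# and roaming data sessions) share the same dimension tables.
# All names are hypothetical, not the RoaDS schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_country (country_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, day  TEXT);

CREATE TABLE fact_calls (
    country_id INTEGER REFERENCES dim_country(country_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    minutes    REAL
);
CREATE TABLE fact_data_sessions (
    country_id INTEGER REFERENCES dim_country(country_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    megabytes  REAL
);
""")
con.executemany("INSERT INTO dim_country VALUES (?, ?)",
                [(1, "Portugal"), (2, "Spain")])
con.executemany("INSERT INTO dim_date VALUES (?, ?)",
                [(1, "2021-01-01"), (2, "2021-01-02")])
con.executemany("INSERT INTO fact_calls VALUES (?, ?, ?)",
                [(1, 1, 12.5), (1, 2, 7.0), (2, 1, 3.0)])
con.executemany("INSERT INTO fact_data_sessions VALUES (?, ?, ?)",
                [(1, 1, 500.0), (2, 2, 250.0)])

# A typical OLAP-style roll-up over one fact table:
# total roaming minutes per country.
rows = con.execute("""
    SELECT c.name, SUM(f.minutes)
    FROM fact_calls f JOIN dim_country c USING (country_id)
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Portugal', 19.5), ('Spain', 3.0)]
```

Because both fact tables reference the same dimensions, the same roll-up pattern applies unchanged to `fact_data_sessions`, which is what makes the constellation arrangement economical for dashboards that mix several measures.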
Towards Intelligent Chatbots for Customer Care - Practice-Based Requirements for a Research Agenda
Chatbots bear great potential to save effort and costs in customer care through service automation. Current results, however, are still at an early stage in functionality and not widely available. Developing a new form of intelligent chatbot remains an open challenge. While there have been numerous proposals for future work, virtually all agenda-setting contributions are based solely on the scientific literature. This is unsatisfactory from both an academic and a practical perspective, as the industrial view on the future of chatbots seems to be neglected. Therefore, this work explores how professional experts see the future of intelligent chatbots for customer care and suggests how practice can guide research, based on an expert panel with 17 industrial partners. Our work identifies research opportunities grounded in the demands and views of key practitioners by pinpointing expected trends. Furthermore, based on the expert opinions, we derive guidelines for organizations stating the key factors that should be considered in the development or adoption of chatbots in customer care.
Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning
Data analysis is a key process to foster knowledge generation in particular domains
or fields of study. With a strong informative foundation derived from the analysis of
collected data, decision-makers can make strategic choices with the aim of obtaining
valuable benefits in their specific areas of action. However, given the steady growth
of data volumes, data analysis needs to rely on powerful tools to enable knowledge
extraction.
Information dashboards offer a software solution to analyze large volumes of
data visually to identify patterns and relations and make decisions according to the
presented information. But decision-makers may have different goals and, consequently,
different needs regarding their dashboards. Moreover, the variety of data sources,
structures, and domains can hamper the design and implementation of these tools.
This Ph.D. Thesis tackles the challenge of improving the development process of
information dashboards and data visualizations while enhancing their quality and
features in terms of personalization, usability, and flexibility, among others.
Several research activities have been carried out to support this thesis. First, a
systematic literature mapping and review was performed to analyze different
methodologies and solutions related to the automatic generation of tailored
information dashboards. The outcomes of the review led to the selection of a model-driven
approach in combination with the software product line paradigm to deal with
the automatic generation of information dashboards.
In this context, a meta-model was developed following a domain engineering
approach. This meta-model represents the skeleton of information dashboards and
data visualizations through the abstraction of their components and features and has
been the backbone of the subsequent generative pipeline of these tools.
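A meta-model of this kind can be pictured as a small set of classes abstracting dashboard components, from which a model-to-code step derives a concrete artifact. The sketch below is a hypothetical, heavily simplified illustration of the idea, not the thesis's actual meta-model; all class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, heavily simplified dashboard meta-model: a dashboard
# is composed of visualization components, each abstracted by its
# type and the data field it displays.
@dataclass
class Visualization:
    kind: str        # e.g. "bar", "line", "table"
    data_field: str  # name of the bound data attribute

@dataclass
class Dashboard:
    title: str
    components: List[Visualization] = field(default_factory=list)

def generate_spec(model: Dashboard) -> dict:
    """Model-to-code step: turn a model instance into a declarative
    spec that a rendering front end could consume."""
    return {
        "title": model.title,
        "widgets": [{"type": v.kind, "field": v.data_field}
                    for v in model.components],
    }

# Instantiating the meta-model for one concrete domain.
model = Dashboard("Employability KPIs",
                  [Visualization("bar", "placements"),
                   Visualization("line", "applications")])
spec = generate_spec(model)
print(spec["widgets"][0])  # {'type': 'bar', 'field': 'placements'}
```

The point of the abstraction is that only the model instance changes between domains; the generative step stays fixed, which is what lets a single pipeline produce tailored dashboards for different contexts.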
The meta-model and generative pipeline have been tested through their
integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully
integrated with another meta-model to support knowledge generation in learning
ecosystems, and as a framework to conceptualize and instantiate information
dashboards in different domains.
In terms of the practical applications, the focus has been put on how to transform
the meta-model into an instance adapted to a specific context, and how to finally
transform this later model into code, i.e., the final, functional product. These practical
scenarios involved the automatic generation of dashboards in the context of a Ph.D.
Programme, the application of Artificial Intelligence algorithms in the process, and
the development of a graphical instantiation platform that combines the meta-model
and the generative pipeline into a visual generation system.
Finally, different case studies have been conducted in the employment and
employability, health, and education domains. The number of applications of the
meta-model in theoretical and practical dimensions and domains is also a result in itself.
Every outcome associated with this thesis is driven by the dashboard meta-model, which
also proves its versatility and flexibility when it comes to conceptualizing, generating,
and capturing knowledge related to dashboards and data visualizations.