A unified view of data-intensive flows in business intelligence systems : a survey
Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today's research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that are still to be addressed and how current solutions can be applied to address them.
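To make the contrast in this abstract concrete, the following is a minimal sketch, in Python, of a batched ETL step feeding a warehouse table versus an operational flow that integrates source data at request time. All names (extract_orders, warehouse, fact_orders) are invented for illustration and are not from the surveyed systems.

```python
# Hypothetical sketch: batch ETL versus an on-demand operational flow.

def extract_orders(source):
    """Extract raw rows from one operational source (duck-typed)."""
    return source.fetch_all("orders")

def transform(rows):
    """Normalise currencies and drop incomplete rows."""
    return [
        {**r, "amount_eur": r["amount"] * r["fx_rate"]}
        for r in rows
        if r.get("amount") is not None
    ]

def load(warehouse, rows):
    """Append analysis-ready rows to a warehouse fact table."""
    warehouse.insert("fact_orders", rows)

def batch_etl(source, warehouse):
    """Traditional batched flow: populate the DW ahead of analysis."""
    load(warehouse, transform(extract_orders(source)))

def operational_flow(sources):
    """Real-time variant: integrate source data at runtime,
    bypassing the persistent warehouse."""
    for source in sources:
        yield from transform(extract_orders(source))
```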
A linked data-driven & service-oriented architecture for sharing educational resources
The two fundamental aims of managing educational resources are to make resources reusable and interoperable and to enable Web-scale sharing of resources across learning communities. Currently, a variety of approaches have been proposed to expose and manage educational resources and their metadata on the Web. These are usually based on heterogeneous metadata standards and schemas, such as IEEE LOM or ADL SCORM, and diverse repository interfaces such as OAI-PMH or SQI. There is also still little use of controlled vocabularies and available data sets that could replace the widespread use of unstructured text for describing resources. On the other hand, the Linked Data approach has proven that it offers a set of successful principles with the potential to alleviate the aforementioned issues. In this paper, we introduce an architecture and prototype which is fundamentally based on (a) Linked Data principles and (b) service orientation to resolve the integration issues for sharing educational resources.
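A minimal sketch of the core idea, using the rdflib library and Dublin Core terms: one educational resource is described with structured, linkable metadata (a controlled-vocabulary subject URI) rather than unstructured text. The URIs and resource name are hypothetical placeholders, not the mEducator data.

```python
# Hypothetical sketch: exposing one educational resource as Linked Data.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/resources/")  # placeholder namespace

g = Graph()
resource = EX["anatomy-101"]
g.add((resource, DCTERMS.title, Literal("Introduction to Anatomy")))
g.add((resource, DCTERMS.language, Literal("en")))
# Link to a shared vocabulary term instead of free-text keywords:
g.add((resource, DCTERMS.subject,
       URIRef("http://dbpedia.org/resource/Human_anatomy")))

print(g.serialize(format="turtle"))
```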
Mediated data integration and transformation for web service-based software architectures
Service-oriented architecture using XML-based web services has been widely accepted by many organisations as the standard infrastructure to integrate heterogeneous and autonomous data sources. As a result, many Web service providers are built on top of the data sources to share the data by supporting provided and required interfaces and methods of data access in a unified manner. In the context of data integration, problems arise when Web services are assembled to deliver an integrated view of data, adaptable to the specific needs of individual clients and providers. Traditional approaches to data integration and transformation are not suitable to automate the construction of connectors dedicated to connecting selected Web services to render integrated and tailored views of data. We propose a declarative approach that addresses the often-neglected data integration and adaptivity aspects of service-oriented architecture.
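A minimal sketch of the declarative idea described above: transformation rules are plain data, and a connector is generated from them rather than hand-coded per service pair. The rule format and field names here are invented for illustration, not the paper's notation.

```python
# Hypothetical sketch: a connector built from declarative transformation rules.

RULES = [
    # (source field, target field, conversion)
    ("studentName", "name",       str.strip),
    ("dob",         "birth_date", lambda v: v.replace("/", "-")),
]

def make_connector(rules):
    """Construct a connector function from a declarative rule list."""
    def connector(record):
        return {target: convert(record[source])
                for source, target, convert in rules}
    return connector

to_client_view = make_connector(RULES)
print(to_client_view({"studentName": " Ada ", "dob": "1815/12/10"}))
# {'name': 'Ada', 'birth_date': '1815-12-10'}
```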
Linked education: interlinking educational resources and the web of data
Research on interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, Web-scale integration of resources has so far not been achieved, mainly due to the lack of take-up of shared principles, datasets and schemas. On the other hand, the Linked Data approach has emerged as the de facto standard for sharing data on the Web and offers large potential to solve interoperability issues in the field of TEL. In this paper, we describe a general approach to exploiting the wealth of already existing TEL data on the Web by exposing it as Linked Data and by applying automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project, where data from a number of open TEL data repositories has been integrated, exposed and enriched by following Linked Data principles.
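A minimal sketch of automated interlinking as described above: when two repositories appear to describe the same thing, an owl:sameAs link is emitted. Real interlinking tools use much richer similarity measures; the naive label match and the URIs below are invented stand-ins.

```python
# Hypothetical sketch: emitting owl:sameAs links between two repositories.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

repo_a = {"http://repoA.example/topic/1": "linked data"}   # uri -> label
repo_b = {"http://repoB.example/subj/42": "linked data"}

g = Graph()
for uri_a, label_a in repo_a.items():
    for uri_b, label_b in repo_b.items():
        if label_a == label_b:  # naive string match as a placeholder heuristic
            g.add((URIRef(uri_a), OWL.sameAs, URIRef(uri_b)))

print(g.serialize(format="nt"))
```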
Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure provenance in model data handling and initialisation, and to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to describe carefully, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural-language-based abstractions, that enables full accounts of provenance, sharing and distribution. Together with this description, a generalised, consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust, repeatable, quick to draft, rigorously verified and consistent with the source data throughout. It interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics.
Comment: 18 pages, 10 figures, 1 table. Submitted for publication and under review.
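This is not Shingle's actual interface, but a generic sketch of the abstract's idea: the discretisation is fully described by an explicit, recordable set of constraints, and the generated mesh is automatically validated against them. All parameter names and thresholds are invented.

```python
# Hypothetical sketch: a declarative discretisation spec plus validation.

spec = {
    "domain": "idealised-ocean-basin",   # placeholder domain name
    "boundary_resolution_m": 5_000,
    "interior_resolution_m": 50_000,
    "anisotropy_ratio_max": 10.0,
}

def validate_mesh(mesh_metrics, spec):
    """Check generated-mesh metrics against the declared constraints."""
    checks = {
        "min_edge_ok": mesh_metrics["min_edge_m"]
                       >= 0.5 * spec["boundary_resolution_m"],
        "anisotropy_ok": mesh_metrics["anisotropy_max"]
                         <= spec["anisotropy_ratio_max"],
    }
    return all(checks.values()), checks

ok, report = validate_mesh({"min_edge_m": 4_000, "anisotropy_max": 8.0}, spec)
print(ok, report)
```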
Data integration through service-based mediation for web-enabled information systems
The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Traditional techniques for data representation and transformation between documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern, complex Web-enabled software and information systems. The difficulty arises from the high degree of complexity of data structures, for example in business and technology applications, and from the constant change of data and its representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity additionally arises. We introduce a specific data integration solution for Web applications such as Web-enabled information systems. Our contribution is an integration technology framework for Web-enabled information systems comprising, firstly, a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration and, secondly, a mediator architecture based on information services and the constructed connectors to handle the integration process.
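A minimal sketch of the mediator side of such a framework: records from several information services pass through per-service connectors and are joined into one integrated view. The service names, fields and join key are illustrative assumptions, not the paper's case study.

```python
# Hypothetical sketch: a mediator composing connectors over services.

def mediator(services, connectors, key="id"):
    """Merge records from several services into one integrated view,
    applying each service's connector before joining on a shared key."""
    view = {}
    for name, fetch in services.items():
        for record in fetch():
            unified = connectors[name](record)
            view.setdefault(unified[key], {}).update(unified)
    return list(view.values())

services = {
    "registry": lambda: [{"ID": "42", "Name": "Widget"}],
    "pricing":  lambda: [{"item": "42", "price": 9.99}],
}
connectors = {
    "registry": lambda r: {"id": r["ID"], "name": r["Name"]},
    "pricing":  lambda r: {"id": r["item"], "price": r["price"]},
}
print(mediator(services, connectors))
# [{'id': '42', 'name': 'Widget', 'price': 9.99}]
```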
Linked Data - the story so far
The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article, the authors present the concept and technical principles of Linked Data and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
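One of the best practices the article covers is dereferenceable HTTP URIs with content negotiation. A minimal sketch: fetch a Linked Data URI and ask for machine-readable RDF. DBpedia is used here only as a well-known example endpoint; any Linked Data URI would do.

```python
# Hypothetical sketch: dereferencing a Linked Data URI via content negotiation.
import requests

uri = "http://dbpedia.org/resource/Linked_data"
response = requests.get(
    uri,
    headers={"Accept": "text/turtle"},  # ask for RDF, not HTML
    allow_redirects=True,               # follow 303 redirects to the data
    timeout=10,
)
print(response.status_code)
print(response.text[:500])  # first few triples describing the resource
```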
Cloud service localisation
The essence of cloud computing is the provision of software and hardware services to a range of users in different locations. The aim of cloud service localisation is to facilitate the internationalisation and localisation of cloud services by allowing their adaptation to different locales. We address lingual localisation by providing service-level language translation techniques to adapt services to different languages, and regulatory localisation by providing standards-based mappings to achieve compliance with regionally varying laws, standards and regulations. The aim is to support and enforce the explicit modelling of aspects particularly relevant to localisation, together with runtime support consisting of tools and middleware services to automate deployment based on models of locales, driven by the two localisation dimensions. We focus here on an ontology-based conceptual information model that integrates locale specification in a coherent way.
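A minimal sketch of locale-driven adaptation along the two dimensions named in the abstract, lingual and regulatory. The locale model, message catalogue and VAT rule below are invented for illustration and are not the paper's ontology.

```python
# Hypothetical sketch: adapting one service response to a locale model.

LOCALES = {
    "de-DE": {"language": "de", "vat_rate": 0.19},
    "en-IE": {"language": "en", "vat_rate": 0.23},
}

MESSAGES = {
    "en": {"total": "Total"},
    "de": {"total": "Gesamt"},
}

def localise_invoice(net_amount, locale_id):
    """Translate the label (lingual localisation) and apply the
    locale's VAT rule (regulatory localisation)."""
    locale = LOCALES[locale_id]
    label = MESSAGES[locale["language"]]["total"]
    gross = round(net_amount * (1 + locale["vat_rate"]), 2)
    return f"{label}: {gross}"

print(localise_invoice(100.0, "de-DE"))  # Gesamt: 119.0
```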
XML for Domain Viewpoints
Within research institutions like CERN (European Organization for Nuclear Research) there are often disparate databases (different in format, type and structure) that users need to access in a domain-specific manner. Users may want to access a simple unit of information without having to understand the detail of the underlying schema, or they may want to access the same information from several different sources. It is neither desirable nor feasible to require users to have knowledge of these schemas. Instead, it would be advantageous if a user could query these sources using his or her own domain models and abstractions of the data. This paper describes the basis of an XML (eXtensible Markup Language) framework that provides this functionality and is currently being developed at CERN. The goal of the first prototype was to explore the possibilities of XML for data integration and model management. It shows how XML can be used to integrate data sources. The framework is not only applicable to CERN data sources but to other environments too.
Comment: 9 pages, 6 figures, conference report from SCI'2001 Multiconference on Systemics & Informatics, Florida.
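A minimal sketch of the domain-viewpoint idea: a query phrased in the user's domain terms is mapped onto paths in an underlying XML schema, so the user never sees that schema. The XML document, tag names and mapping below are invented examples, not CERN's actual schemas.

```python
# Hypothetical sketch: querying an XML source through a domain viewpoint.
import xml.etree.ElementTree as ET

SOURCE = """
<detectors>
  <detector><dn>TRK-01</dn><volt>1450</volt></detector>
  <detector><dn>TRK-02</dn><volt>1500</volt></detector>
</detectors>
"""

# Domain abstraction -> underlying schema element
VIEWPOINT = {"detector_name": "dn", "voltage": "volt"}

def query(xml_text, domain_field):
    """Answer a domain-level query without exposing the raw schema."""
    root = ET.fromstring(xml_text)
    tag = VIEWPOINT[domain_field]
    return [el.findtext(tag) for el in root.iter("detector")]

print(query(SOURCE, "voltage"))  # ['1450', '1500']
```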