Modeling views in the layered view model for XML using UML
In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so supporting views over semi-structured data models proves difficult. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work is three-fold: first, we propose an approach to separate the implementation and conceptual aspects of views, providing a clear separation of concerns and thus allowing the analysis and design of views to be separated from their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Also, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
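As a rough illustration of the transformation step described above, the following sketch (not the paper's actual LVM implementation) shows how a conceptual view definition might be rewritten mechanically into a view query expression in a target query language such as XQuery; the field names `source`, `element`, `where` and `select` are hypothetical.

```python
# A minimal sketch of the idea behind an automated view transformation: a
# conceptual view definition, expressed here as plain Python data, is rewritten
# into a view query expression (an illustrative XQuery FLWOR string).
def view_to_xquery(view: dict) -> str:
    """Translate a toy conceptual view description into an XQuery string."""
    var = "$item"
    lines = [f"for {var} in doc('{view['source']}')//{view['element']}"]
    for predicate in view.get("where", []):
        lines.append(f"where {var}/{predicate}")
    selected = ", ".join(f"{var}/{field}" for field in view["select"])
    lines.append(f"return <{view['name']}>{{ {selected} }}</{view['name']}>")
    return "\n".join(lines)

# Example: a view over customer orders above a threshold.
order_view = {
    "name": "HighValueOrder",
    "source": "orders.xml",
    "element": "order",
    "where": ["total > 1000"],
    "select": ["id", "customer", "total"],
}
print(view_to_xquery(order_view))
```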
A case study on the transformation of context-aware domain data onto XML schemas
In order to accelerate the development of context-aware applications, it would be convenient to have a smooth path between the context models and the automated services that support these models. This paper discusses how MDA technology (metamodelling and the QVT standard) can support the transformation of high-level models of context-aware services onto the implementation of these services using web services. The total transformation process from context-aware services onto web services involves the following aspects: (1) service signatures, which should be translated onto WSDL definitions; (2) context-aware domain data used as input and output data in service operations, which should be translated onto XML schemas; and (3) service behaviours, which should be used to generate the service implementation. This paper concentrates on the modelling and transformation of the context-aware domain data. The results of this paper are generally applicable to the transformation of elements of any domain-specific language expressed in terms of a metamodel onto XML Schema data.
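As a hedged illustration of the second aspect above, the sketch below shows how a simple domain-data class, described against a toy metamodel, could be mapped to an XML Schema complex type; the field names and the type mapping are assumptions made for illustration and do not reflect the paper's actual QVT transformation rules.

```python
# A sketch of metamodel-to-XSD mapping: a domain-data element (a Python dict
# standing in for a model element) becomes an XML Schema complex type.
from xml.etree import ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"
XSD_TYPES = {"string": "xs:string", "int": "xs:integer", "float": "xs:double"}

def class_to_xsd(model_class: dict) -> str:
    ET.register_namespace("xs", XSD)
    schema = ET.Element(f"{{{XSD}}}schema")
    complex_type = ET.SubElement(schema, f"{{{XSD}}}complexType",
                                 name=model_class["name"])
    seq = ET.SubElement(complex_type, f"{{{XSD}}}sequence")
    for attr_name, attr_type in model_class["attributes"].items():
        ET.SubElement(seq, f"{{{XSD}}}element",
                      name=attr_name, type=XSD_TYPES[attr_type])
    return ET.tostring(schema, encoding="unicode")

# Example: a context-aware "Location" domain class becomes an XSD complex type.
print(class_to_xsd({"name": "Location",
                    "attributes": {"latitude": "float",
                                   "longitude": "float",
                                   "placeName": "string"}}))
```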
Programming patterns and development guidelines for Semantic Sensor Grids (SemSorGrid4Env)
The web of Linked Data holds great potential for the creation of semantic applications that can combine self-describing structured data from many sources, including sensor networks. Such applications build upon the success of an earlier generation of 'rapidly developed' applications that utilised RESTful APIs. This deliverable details experience, best practice, and design patterns for developing high-level web-based APIs in support of semantic web applications and mashups for sensor grids. Its main contributions are a proposal for combining Linked Data with RESTful application development, summarised through a set of design principles; and the application of these design principles to Semantic Sensor Grids through the development of a High-Level API for Observations. These are supported by implementations of the High-Level API for Observations in software, and example semantic mashups that utilise the API.
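A minimal sketch of the pattern the deliverable advocates, assuming nothing about its actual High-Level API: a RESTful observation resource whose JSON payload carries dereferenceable Linked Data URIs. All paths, property names and the example.org URIs below are illustrative.

```python
# A toy RESTful observations endpoint whose responses link out to other
# resources by URI, in the Linked Data + REST style; not the project's API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

OBSERVATIONS = {
    "obs-42": {
        "@id": "http://sensors.example.org/observations/obs-42",
        "observedProperty": "http://sensors.example.org/properties/seaTemperature",
        "madeBySensor": "http://sensors.example.org/sensors/buoy-7",
        "result": {"value": 18.4, "unit": "Cel"},
    }
}

class ObservationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.rstrip("/").split("/")[-1]
        obs = OBSERVATIONS.get(key)
        self.send_response(200 if obs else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(obs or {"error": "not found"}).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ObservationHandler).serve_forever()
```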
Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, ensure provenance in model data handling and initialisation, and conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully and fully describe the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural-language-based abstractions, that enables full accounts of provenance, sharing and distribution. Together with this description, a generalised, consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust, repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics.
Comment: 18 pages, 10 figures, 1 table. Submitted for publication and under review.
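As an illustration of the kind of self-documenting domain description the paper argues for (and not Shingle's actual specification format), the sketch below records meshing constraints declaratively and derives a compact identifier from them for provenance purposes; all field names and values are hypothetical.

```python
# Every constraint that determines how the domain is meshed is recorded in one
# declarative structure, so the same record can drive mesh generation and also
# serve as a reproducibility/provenance document.
import json, hashlib

domain_description = {
    "name": "north-atlantic-tidal-study",
    "boundary_source": "GSHHS coastline, full resolution",
    "projection": "longitude-latitude",
    "region_of_interest": {"lat": [30.0, 70.0], "lon": [-80.0, 10.0]},
    "resolution": {
        "background_m": 50000.0,
        "refinements": [
            {"where": "distance_to_coast < 10 km", "element_size_m": 2000.0},
        ],
    },
}

# A content hash gives a compact, verifiable identifier for the exact set of
# constraints used, supporting reproducibility and model intercomparison.
record = json.dumps(domain_description, sort_keys=True)
print(hashlib.sha256(record.encode()).hexdigest()[:12], "->", record)
```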
Encoding models for scholarly literature
We examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. We will begin by looking at the traditional workflow of journal editing and publication, and how these practices have made the transition into the online domain. We will examine the range of different file formats in which electronic articles are currently stored and published. We will argue strongly that, despite the prevalence of binary and proprietary formats such as PDF and MS Word, XML is a far superior encoding choice for journal articles. Next, we look at the range of XML document structures (DTDs, Schemas) which are in common use for encoding journal articles, and consider some of their strengths and weaknesses. We will suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as NLM), and more broadly-used publication-oriented schemas such as DocBook, there are strong arguments in favour of developing a subset or customization of the Text Encoding Initiative (TEI) schema for the purpose of journal-article encoding; TEI is already in use in a number of journal publication projects, and the scale and precision of the TEI tagset make it particularly appropriate for encoding scholarly articles. We will outline the document structure of a TEI-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles.
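To make the suggested approach concrete, the sketch below builds the bare outline of a TEI-encoded journal article with the Python standard library; a real encoding would use a much richer, project-customised subset of the TEI tagset, and the author and section content here are placeholders.

```python
# Bare outline of a TEI document for a journal article: a teiHeader with the
# bibliographic title statement, and a text body with one section.
from xml.etree import ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

def tei(tag, parent=None, text=None, **attrs):
    """Create a TEI element, optionally as a child of `parent`."""
    name = f"{{{TEI_NS}}}{tag}"
    el = ET.Element(name, attrs) if parent is None else ET.SubElement(parent, name, attrs)
    el.text = text
    return el

root = tei("TEI")
header = tei("teiHeader", root)
title_stmt = tei("titleStmt", tei("fileDesc", header))
tei("title", title_stmt, text="Encoding models for scholarly literature")
tei("author", title_stmt, text="A. N. Author")  # placeholder author
body = tei("body", tei("text", root))
section = tei("div", body, type="section")
tei("head", section, text="Introduction")
tei("p", section, text="Born-digital journal articles benefit from XML encoding.")

print(ET.tostring(root, encoding="unicode"))
```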
Shape Expressions Schemas
We present Shape Expressions (ShEx), an expressive schema language for RDF designed to provide a high-level, user-friendly syntax with intuitive semantics. ShEx allows one to describe the vocabulary and the structure of an RDF graph, and to constrain the allowed values for the properties of a node. It includes an algebraic grouping operator, a choice operator, cardinality constraints for the number of allowed occurrences of a property, and negation. We define the semantics of the language and illustrate it with examples. We then present a validation algorithm that, given a node in an RDF graph and a constraint defined by the ShEx schema, checks whether the node satisfies that constraint. The algorithm outputs a proof that contains trivially verifiable associations of nodes and the constraints that they satisfy. This structure can be used for complex post-processing tasks, such as transforming the RDF graph into other graph or tree structures, verifying more complex constraints, or debugging (w.r.t. the schema). We also show the inherent difficulty of error identification for ShEx.
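As a much-simplified illustration of the constraint-checking idea (not the validation algorithm presented in the paper), the sketch below checks cardinality constraints for the properties of a focus node in a toy triple set and returns a small proof of the constraints it satisfies; the shape and data are hypothetical.

```python
# For a given focus node, each property must occur a number of times within
# its declared cardinality bounds; satisfied constraints are collected as a
# simple, easily checkable proof.
from collections import Counter

# Toy RDF graph as (subject, predicate, object) triples.
triples = [
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:knows", "ex:carol"),
]

# A toy "shape": predicate -> (min occurrences, max occurrences or None).
person_shape = {
    "foaf:name": (1, 1),      # exactly one name
    "foaf:knows": (0, None),  # any number of acquaintances
}

def conforms(node, shape, graph):
    """Return (ok, proof) where proof lists each satisfied constraint."""
    counts = Counter(p for s, p, _ in graph if s == node)
    proof = []
    for predicate, (lo, hi) in shape.items():
        n = counts.get(predicate, 0)
        if n < lo or (hi is not None and n > hi):
            return False, proof
        proof.append((node, predicate, n))
    return True, proof

print(conforms("ex:alice", person_shape, triples))
```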
Potentially Polluting Marine Sites GeoDB: An S-100 Geospatial Database as an Effective Contribution to the Protection of the Marine Environment
Potentially Polluting Marine Sites (PPMS) are objects on, or areas of, the seabed that may release pollution in the future. A rationale for, and design of, a geospatial database to inventory and manipulate PPMS is presented. Built as an S-100 Product Specification, it is specified through human-readable UML diagrams and implemented through machine-readable GML files, and includes auxiliary information such as pollution-control resources and potentially vulnerable sites in order to support analyses of the core data. The design and some aspects of implementation are presented, along with metadata requirements and structure, and a perspective on potential uses of the database.
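Purely as an illustration of what a single PPMS feature record might look like when serialised in a GML-style encoding (the element names are hypothetical and not drawn from the S-100 Product Specification described in the paper):

```python
# A toy PPMS feature with a GML point geometry and links to auxiliary
# information such as nearby vulnerable sites.
from xml.etree import ElementTree as ET

GML = "http://www.opengis.net/gml/3.2"
ET.register_namespace("gml", GML)

site = ET.Element("PotentiallyPollutingSite", id="ppms-0001")
ET.SubElement(site, "siteName").text = "Example wreck"
ET.SubElement(site, "pollutantType").text = "fuel oil"
point = ET.SubElement(site, f"{{{GML}}}Point", srsName="EPSG:4326")
ET.SubElement(point, f"{{{GML}}}pos").text = "59.05 10.72"  # illustrative coordinates
ET.SubElement(site, "vulnerableSiteNearby").text = "true"

print(ET.tostring(site, encoding="unicode"))
```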