SFDL: MVC Applied to Workflow Design
Process management based on workflow systems is a growing trend in collaborative environments. One of the most notable areas for improvement is user interfaces, especially since business process definition languages do not efficiently address the point of contact between workflow engines and human interactions. With that focus, we propose applying the MVC design pattern to workflow systems. To accomplish this, we have designed a new dynamic view definition language called SFDL, oriented towards easy interoperability with the different workflow definition languages, while remaining flexible enough to be represented in different formats and adaptable to several environments. To validate our approach, we carried out an implementation in a real banking scenario, which provided continuous feedback and enabled us to refine the proposal. The work is fully based on widely accepted and used web standards (XML, YAML, JSON, Atom and REST). Some guidelines are given to facilitate the adoption of our solution.
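The abstract stresses that the same view definition should be representable in several formats (XML, JSON, etc.). As a minimal sketch of that idea, the following serializes one hypothetical SFDL-style view definition both as JSON and as XML; the field names and widget types are invented for illustration and are not taken from the actual SFDL specification.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical SFDL-style view definition; element and attribute names
# are assumptions, not the real SFDL vocabulary.
view = {
    "view": "loan-approval-form",
    "fields": [
        {"name": "amount", "widget": "number", "required": True},
        {"name": "comments", "widget": "textarea", "required": False},
    ],
}

# JSON serialization, e.g. for a REST endpoint consumed by the view layer.
as_json = json.dumps(view)

# Equivalent XML serialization of the same definition.
root = ET.Element("view", name=view["view"])
for f in view["fields"]:
    ET.SubElement(root, "field", name=f["name"], widget=f["widget"],
                  required=str(f["required"]).lower())
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```

Keeping the definition format-agnostic is what lets a single view layer sit in front of different workflow engines.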
Using SVG and XSLT for graphic representation
In this paper we present an XML-based framework that can be used to produce graphical visualisations of scientific data. Rather than producing ordinary histogram and function diagram graphs, the approach tries to represent the information in a more graphically appealing and easy-to-understand way. For example, the approach gives the ability to represent a temperature as the level of coloured fluid in a thermometer.
The proposed framework keeps the values of the data strictly separated from the visual form of their representation (positions of elements, colours, visual representation, etc.).
By defining appropriate data structures and expressing them using XML, the framework gives the user the ability to create graphic representations using standard SVG and XSLT.
Since XML can be used to describe complex data, we represent every level of the graphic representation with an XML structure.
To describe our architecture we defined the following XML dialects, each with different markup tags reflecting the semantic values of the elements.
Data definition level. Used to define the values of the data that can be used in the graphic representation.
Data representation level. Used to define the graphic representation; it defines how the values expressed by the data definition level are represented.
Both the data representation and data definition files are based on a DTD that imposes the constraints.
The data representation level is the core of the system and defines a powerful representation language.
Source primitives. Used to define the source of the graphic elements, for example a static file or SVG code.
Modification primitives. Used to define the modifications that can affect a graphic element, for example rotation, scaling or repetition.
Disposition primitives. Used to define the possible dispositions along the x, y and z axes, for example to impose an order in the representation of elements.
Action primitives. Used to define the possible actions that can be activated by graphic elements for different user behaviours. For example, a mouse action can activate a link to a different resource, change the value of any of the other primitives of the data structure (such as image source or disposition), or show a tooltip.
XSLT is used to output an SVG file derived from the two files describing the graphic representation.
Our aim is to provide an abstract language that can represent the same concept in different ways. In fact, we can link a data definition file with different data representation levels, providing different kinds and levels of complexity for the same concept. An example use could be the temperature representation described before, where the temperature itself could be represented either as the level of mercury in a thermometer, or as the rotation of an arrow in a gauge.
The transformation process turns an XML source tree into an XML result tree, using XPath to define patterns. The XSLT transformation process is based on templates, which define actions (such as adding, removing or sorting elements) to be performed when a part of the document matches a template.
To implement some of the complex graphic operations we use XSLT extensions that allow mathematical operations to be performed.
These XSLT extensions are not yet standard and require a specific compliant processor, such as Apache Xalan, which allows the developer to interface with Java classes in order to extend XSLT's areas of application, from simple node transformations to quite complex operations.
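The paper's pipeline applies an XSLT stylesheet to the two XML files to produce SVG. As a rough sketch of what one such template rule computes, the following reproduces the thermometer example in Python's standard library (which has no XSLT processor): it matches a measure in an illustrative data-definition document and emits an SVG rectangle whose height is proportional to the value. The element and attribute names are assumptions, not the paper's actual dialects.

```python
import xml.etree.ElementTree as ET

# Illustrative data-definition document; names are assumptions.
data_xml = """<data><measure name="temperature" value="30" max="50"/></data>"""

def to_svg(doc: str) -> str:
    """Mimic the XSLT step: match the <measure> element and emit an SVG
    'thermometer' whose fluid level is proportional to the value."""
    measure = ET.fromstring(doc).find("measure")
    value = float(measure.get("value"))
    maximum = float(measure.get("max"))
    level = 100 * value / maximum          # fluid height in SVG units

    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width="40", height="120")
    # Tube outline
    ET.SubElement(svg, "rect", x="10", y="10", width="20", height="100",
                  fill="white", stroke="black")
    # Coloured fluid, anchored at the bottom of the tube
    ET.SubElement(svg, "rect", x="10", y=str(110 - level), width="20",
                  height=str(level), fill="red")
    return ET.tostring(svg, encoding="unicode")

svg_out = to_svg(data_xml)
print(svg_out)
```

In the real framework this logic lives in an XSLT template, so swapping the data representation file changes the rendering without touching the data.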
Shuttle-Data-Tape XML Translator
JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate it into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
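The abstract does not show JSDTImport's actual configuration schema, but the general technique of XML-driven record parsing can be sketched as follows: a hypothetical layout file declares the fixed-width fields of each record, the parser slices the ASCII lines accordingly, and the result is an XML database queried in XPath style. All tag and attribute names here are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical layout configuration in the spirit of JSDTImport's XML
# config file; tag and attribute names are assumptions.
config_xml = """
<layout record="measurement">
  <field name="msid"  start="0" length="8"/>
  <field name="value" start="8" length="6"/>
</layout>
"""

def parse_records(config: str, lines: list[str]) -> ET.Element:
    """Slice each fixed-width ASCII line according to the XML layout
    and build an XML database of record elements."""
    layout = ET.fromstring(config)
    db = ET.Element("database")
    for line in lines:
        rec = ET.SubElement(db, layout.get("record"))
        for f in layout.findall("field"):
            start = int(f.get("start"))
            end = start + int(f.get("length"))
            ET.SubElement(rec, f.get("name")).text = line[start:end].strip()
    return db

db = parse_records(config_xml, ["V74X1234  27.3", "V74X5678  19.8"])
# XPath-style query over the resulting tree
values = [m.findtext("value") for m in db.findall("measurement")]
print(values)
```

Because the layout lives in data rather than code, changing the record format means editing the XML file, not the parser.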
XML schemas for parallel corpora
Parallel corpora are resources used in Natural Language Processing and Computational Linguistics. They are defined as sets of texts, in different languages, that are translations of each other. Note that these translations need not cover the full document, as some sentences may be translated into only some of the languages. When it comes to sharing resources, recent years have seen a strong bet on XML formats, and parallel corpora are no exception. Visiting the different projects on the web that release parallel corpora for download, we can find at least three different formats; in fact, this abundance of formats has led some projects to adopt all three. This article discusses these three main formats: the XML Corpus Encoding Standard, the Translation Memory Exchange format and the Text Encoding Initiative. We compare their formal definitions and their XML schemas.
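To make the comparison concrete, here is a minimal fragment in one of the three formats, Translation Memory Exchange (TMX), where each translation unit (`tu`) holds one variant (`tuv`) per language, and a short stdlib parse extracting the aligned pairs. The fragment is reduced to the structural core; a real TMX file carries a fuller header.

```python
import xml.etree.ElementTree as ET

# A minimal TMX fragment: each <tu> (translation unit) holds one
# <tuv> variant per language, identified by xml:lang.
tmx = """
<tmx version="1.4">
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Good morning</seg></tuv>
      <tuv xml:lang="pt"><seg>Bom dia</seg></tuv>
    </tu>
  </body>
</tmx>
"""

# ElementTree expands the reserved xml: prefix to this namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def aligned_pairs(doc: str) -> list[dict]:
    """Extract each translation unit as a language -> sentence mapping."""
    units = []
    for tu in ET.fromstring(doc).iter("tu"):
        units.append({tuv.get(XML_LANG): tuv.findtext("seg")
                      for tuv in tu.findall("tuv")})
    return units

pairs = aligned_pairs(tmx)
print(pairs)
```

Note how the alignment is implicit in the nesting: variants are parallel simply because they share a `tu` parent, with no cross-references needed.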
A Method for Mapping XML DTD to Relational Schemas In The Presence Of Functional Dependencies
The eXtensible Markup Language (XML) has recently emerged as a standard for data representation and interchange on the web. With so much XML data on the web, the pressure is now on managing that data efficiently. Given that relational databases are the most widely used technology for managing and storing XML, XML needs to be mapped to relations, and this process is one that occurs frequently. There are many different ways to map, and many approaches exist in the literature, especially considering the flexible nesting structures that XML allows. This gives rise to the following important problem: are some mappings 'better' than others? To approach this problem, we refer to classical relational database design through normalization, a technique based on the well-known concept of functional dependency. This concept is used to specify the constraints that may exist in the relations and to guide the design while removing semantic data redundancies, leading to a well-normalized relational schema without data redundancy. To achieve such a schema for XML, the concept of functional dependency in relations needs to be extended to XML and used as guidance for the design. Although functional dependency definitions for XML do exist, these definitions are not yet standard and still have several limitations: in particular, they cannot specify constraints in the presence of shared and local elements in an XML document. In this study a new definition of functional dependency constraints for XML is proposed that is general enough to specify constraints and to discover semantic redundancies in XML documents.
The focus of this study is on how to produce an optimal mapping approach in the presence of XML functional dependencies (XFDs), keys and Document Type Definition (DTD) constraints, as guidance for generating a good relational schema. To approach the mapping problem, three different components are explored: the mapping algorithm, functional dependency for XML, and the implication process. The study of XML implication is important for inferring which other dependencies are guaranteed to hold in a relational representation of XML, given that a set of functional dependencies holds in the XML document. This leads to the need to derive a set of inference rules for the implication process: in the presence of a DTD and user-defined XFDs, further sets of XFDs that are guaranteed to hold in the XML document can be generated using these inference rules. The mapping algorithm has been implemented in a tool called XtoR. The quality of the mapping approach has been analyzed, and the results show that XtoR significantly improves the generation of a good relational schema for XML with respect to reducing data and relation redundancy, removing dangling relations and removing association problems. The findings suggest that if one wants to use an RDBMS to manage XML data, the mapping from XML documents to relations must be based on functional dependency constraints.
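The thesis's formal XFD definition is not reproduced in the abstract, but the basic idea, that equal values on one path must imply equal values on another within a given element type, can be sketched with a simple checker. The document, element names and the dependencies below are made up for the example.

```python
import xml.etree.ElementTree as ET

# Illustrative document: names and dependencies are invented,
# not taken from the thesis.
doc = ET.fromstring("""
<bib>
  <book><isbn>111</isbn><title>XML Basics</title><year>2001</year></book>
  <book><isbn>222</isbn><title>Advanced XML</title><year>2003</year></book>
  <book><isbn>111</isbn><title>XML Basics</title><year>2004</year></book>
</bib>
""")

def xfd_holds(root, target, lhs, rhs) -> bool:
    """Check a simple XML functional dependency of the form
    target: lhs -> rhs, i.e. within all <target> elements, equal
    lhs values must imply equal rhs values."""
    seen = {}
    for node in root.iter(target):
        key, val = node.findtext(lhs), node.findtext(rhs)
        if key in seen and seen[key] != val:
            return False        # same lhs value, different rhs: violation
        seen[key] = val
    return True

print(xfd_holds(doc, "book", "isbn", "title"))  # holds
print(xfd_holds(doc, "book", "isbn", "year"))   # violated
```

A dependency that holds, such as isbn determining title here, signals a redundancy (the title is stored twice) that a normalizing mapping to relations would factor out into its own table.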
Multilists applied to an XML parser implementation in Objective C for iOS for XPDL 2.2 standard
With the creation of BPMN (Business Process Modeling Notation), the standard language used to represent business processes, XPDL (XML Process Definition Language) was introduced, which describes the data-flow information of a process using an XML (Extensible Markup Language) schema. This document shows the implementation of data structures in the development of a parser that allows the interpretation of XPDL files in version 2.2. By using multi-lists together with the XPDL meta-model, it aims to solve the problem of interpreting the XML schema, allowing correct storage of the elements. As an additional contribution, the working parser is implemented in Objective-C for iOS, with the aim of innovating in the mobile-platform field.
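The abstract does not detail the multi-list layout, but the general technique, representing each XPDL element as a list whose head is the element and whose tail is one sublist per child, can be sketched as follows. The fragment is a stripped-down illustration; real XPDL 2.2 documents are namespaced and far richer, and the paper's parser is written in Objective-C rather than Python.

```python
import xml.etree.ElementTree as ET

# Minimal XPDL-like fragment; element names follow the XPDL meta-model
# but namespaces and most attributes are omitted for illustration.
xpdl = """
<Package>
  <WorkflowProcesses>
    <WorkflowProcess Id="p1">
      <Activities>
        <Activity Id="a1"/>
        <Activity Id="a2"/>
      </Activities>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
"""

def to_multilist(node) -> list:
    """Represent each element as a multi-list node: a list whose head
    is the tag (paired with its Id, if any) and whose tail holds one
    sublist per child, mirroring the nesting of the XPDL meta-model."""
    head = node.tag if "Id" not in node.attrib else (node.tag, node.get("Id"))
    return [head] + [to_multilist(child) for child in node]

tree = to_multilist(ET.fromstring(xpdl))
print(tree)
```

Storing the document this way keeps the parent-child structure of the schema intact, so a workflow engine can walk processes and activities without re-querying the XML.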
EquiX---A Search and Query Language for XML
EquiX is a search language for XML that combines the power of querying with
the simplicity of searching. Requirements for such languages are discussed and
it is shown that EquiX meets the necessary criteria. Both a graphical abstract
syntax and a formal concrete syntax are presented for EquiX queries. In
addition, the semantics is defined and an evaluation algorithm is presented.
The evaluation algorithm is polynomial under combined complexity.
EquiX combines pattern matching, quantification and logical expressions to
query both the data and meta-data of XML documents. The result of a query in
EquiX is a set of XML documents. A DTD describing the result documents is
derived automatically from the query. Comment: technical report of the Hebrew University of Jerusalem, Israel.
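EquiX's concrete syntax is not shown in the abstract, but its flavour, pattern matching over both data and meta-data with a set of XML documents as the result, can be sketched with an invented pattern form (a tag-to-text mapping); the real language additionally supports quantification and logical expressions.

```python
import xml.etree.ElementTree as ET

# A tiny "collection" of XML documents to search over.
docs = {
    "d1": "<article><author>Cohen</author><topic>XML</topic></article>",
    "d2": "<article><author>Levy</author><topic>databases</topic></article>",
}

def matches(doc: str, pattern: dict) -> bool:
    """Pattern matching over data and meta-data: every tag named in the
    pattern must occur somewhere in the document with the given text.
    This illustrates only the search-like core of EquiX-style queries."""
    root = ET.fromstring(doc)
    return all(any(el.text == want for el in root.iter(tag))
               for tag, want in pattern.items())

# As in EquiX, the result of a query is a set of XML documents.
result = {name for name, doc in docs.items()
          if matches(doc, {"topic": "XML"})}
print(result)
```

The set-of-documents result model is what makes it natural to derive a DTD for the answers: the result documents share the structure the pattern constrained.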
Modeling views in the layered view model for XML using UML
In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, the Extensible Markup Language (XML) has been fast emerging as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so supporting views for semi-structured data models proves to be a difficult task. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work here is three-fold. First, we propose an approach that separates the implementation and conceptual aspects of the views, providing a clear separation of concerns and thus allowing the analysis and design of views to be separated from their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Also, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML, at varying levels of abstraction.
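The core move, specify a view conceptually, then transform it automatically into a query expression, can be sketched as follows. The view-spec format and element names below are invented for illustration; the LVM itself expresses views in UML-based notations and targets richer query languages than this XPath fragment.

```python
import xml.etree.ElementTree as ET

# Hypothetical conceptual view definition: select one element type and
# project a subset of its sub-elements (hiding the rest).
view_spec = {"source": "patient", "project": ["name", "ward"]}

def to_view_query(spec: dict) -> str:
    """Transform the conceptual view into a query expression, here a
    simple XPath locating the viewed elements."""
    return ".//" + spec["source"]

def materialize(doc: str, spec: dict) -> list[dict]:
    """Evaluate the generated view query and project the requested
    sub-elements, yielding the view's contents."""
    root = ET.fromstring(doc)
    return [{field: el.findtext(field) for field in spec["project"]}
            for el in root.findall(to_view_query(spec))]

hospital = """
<hospital>
  <patient><name>Ana</name><ward>B2</ward><ssn>secret</ssn></patient>
</hospital>
"""
view = materialize(hospital, view_spec)
print(view)
```

Because the spec never mentions XPath, the same conceptual view could be re-targeted to another query language, which is the separation of concerns the layered model argues for.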