
    Application of Extensible Markup Language (XML) for shipping companies


    SWI-Prolog and the Web

    Where Prolog is commonly seen as a component in a Web application that is either embedded or communicates using a proprietary protocol, we propose an architecture where Prolog communicates with other components in a Web application using the standard HTTP protocol. By avoiding embedding in external Web servers, development and deployment become much easier. To support this architecture, in addition to the transfer protocol, we must also support parsing, representing and generating the key Web document types such as HTML, XML and RDF. This paper motivates the design decisions in the libraries and extensions to Prolog for handling Web documents and protocols. The design has been guided by the requirement to handle large documents efficiently. The described libraries support a wide range of Web applications ranging from HTML and XML documents to Semantic Web RDF processing. To appear in Theory and Practice of Logic Programming (TPLP). 31 pages, 24 figures and 2 tables.
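
    To make the proposed architecture concrete, here is a minimal client-side sketch: because the Prolog component speaks plain HTTP, any other component can consume its output with a stock HTTP library. The endpoint URL and the /rdf path are illustrative assumptions, not details from the paper.

        # Minimal sketch of a client talking to a (hypothetical) SWI-Prolog
        # HTTP server; the base URL and the /rdf path are placeholders.
        import urllib.request

        def fetch_rdf(base_url="http://localhost:8000"):
            """Fetch an RDF document, using an Accept header for content negotiation."""
            req = urllib.request.Request(
                f"{base_url}/rdf",
                headers={"Accept": "application/rdf+xml"},
            )
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode("utf-8")

        if __name__ == "__main__":
            print(fetch_rdf()[:200])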

    AsterixDB: A Scalable, Open Source BDMS

    AsterixDB is a new, full-function BDMS (Big Data Management System) with a feature set that distinguishes it from other platforms in today's open source Big Data ecosystem. Its features make it well-suited to applications like web data warehousing, social data storage and analysis, and other use cases related to Big Data. AsterixDB has a flexible NoSQL-style data model; a query language that supports a wide range of queries; a scalable runtime; partitioned, LSM-based data storage and indexing (including B+-tree, R-tree, and text indexes); support for external as well as natively stored data; a rich set of built-in types; support for fuzzy, spatial, and temporal types and queries; a built-in notion of data feeds for ingestion of data; and transaction support akin to that of a NoSQL store. Development of AsterixDB began in 2009 and led to a mid-2013 initial open source release. This paper is the first complete description of the resulting open source AsterixDB system. Covered herein are the system's data model, its query language, and its software architecture. Also included are a summary of the current status of the project and a first glimpse into how AsterixDB performs when compared to alternative technologies, including a parallel relational DBMS, a popular NoSQL store, and a popular Hadoop-based SQL data analytics platform, on tasks that all of the compared technologies can perform. Also included is a brief description of some initial trials that the system has undergone and the lessons learned (and plans laid) based on those early "customer" engagements.
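
    As a rough illustration of how such a system is typically driven, the sketch below posts a statement to an AsterixDB-style HTTP query service (recent releases expose /query/service on port 19002). The dataset name is invented and the SQL++-flavoured statement is illustrative only; this is not code from the paper.

        # Hedged sketch: issue a query to a local AsterixDB instance over HTTP.
        import json
        import urllib.parse
        import urllib.request

        def run_query(statement, host="http://localhost:19002"):
            data = urllib.parse.urlencode({"statement": statement}).encode("utf-8")
            with urllib.request.urlopen(f"{host}/query/service", data=data) as resp:
                return json.loads(resp.read().decode("utf-8"))

        # "Users" is a hypothetical dataset used only for illustration.
        result = run_query("SELECT VALUE u FROM Users u LIMIT 5;")
        print(result["results"])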

    On the performance of markup language compression

    Data compression is used in our everyday life to improve computer interaction or simply for storage purposes. Lossless data compression refers to those techniques that are able to compress a file in such a way that the decompressed output is a replica of the original. These techniques, which differ from lossy data compression, are necessary and heavily used in order to reduce resource usage and improve storage and transmission speeds. Prior research led to huge improvements in compression performance and efficiency for general-purpose tools, which are mainly based on statistical and dictionary encoding techniques. Extensible Markup Language (XML) is based on redundant data which is parsed as normal text by general-purpose compressors. Several tools for compressing XML data have been developed, resulting in improvements in compression size and speed using different compression techniques. These tools are mostly based on algorithms that rely on variable-length encoding. XML Schema is a language used to define the structure and data types of an XML document. As a result, it provides XML compression tools with additional information that can be used to improve compression efficiency. In addition, XML Schema is also used for validating XML data. For document compression there is a need to generate the schema dynamically for each XML file. This solution can be applied to improve the efficiency of XML compressors. This research investigates a dynamic approach to compressing XML data using a hybrid compression tool. This model allows the compression of XML data using variable- and fixed-length encoding techniques when their best use cases are triggered. The aim of this research is to investigate the use of fixed-length encoding techniques to support general-purpose XML compressors. The results demonstrate the possibility of improving compression size when a fixed-length encoder is used to compress most XML data types.
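
    The following toy sketch (not the paper's tool) illustrates why schema knowledge helps: if a schema guarantees that an element holds 32-bit integers, the values can be packed with a fixed-length encoding instead of being compressed as text, and an entropy coder can then be layered on top, mirroring the hybrid approach described above.

        import struct
        import zlib

        values = list(range(1000))
        xml_text = "".join(f"<n>{v}</n>" for v in values).encode("ascii")

        general = zlib.compress(xml_text)                 # general-purpose, text-based
        fixed = struct.pack(f"<{len(values)}i", *values)  # fixed length: 4 bytes per value
        hybrid = zlib.compress(fixed)                     # fixed-length encoding + entropy coder

        print(len(xml_text), len(general), len(fixed), len(hybrid))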

    Web application of physiological data based on FHIR

    This paper works toward implementing a prototype demonstrating some of the capabilities of the FHIR specification. The specification requires a clear understanding of its different components in order to be implemented successfully, therefore the primary concern of this work is to understand and analyse FHIR’s concepts. The research conducted in this work revealed that FHIR is a well-designed specification, based on a powerful data model and technologies. It should therefore help solve the interoperability issues of the healthcare ecosystem. It has also been pointed out that, since FHIR is a recent standard, many of its uses and benefits are still to be discovered. Moreover, FHIR integrates well into the current health information technology context since it can be used alongside existing standards.
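
    For readers unfamiliar with the specification, the sketch below shows the flavour of FHIR's RESTful exchange, which such a prototype builds on: resources are plain JSON (or XML) documents retrieved over HTTP. The server base URL and patient id are placeholders, not details from the paper.

        import json
        import urllib.request

        def read_patient(base, patient_id):
            # GET [base]/Patient/{id}, asking for the JSON representation.
            req = urllib.request.Request(
                f"{base}/Patient/{patient_id}",
                headers={"Accept": "application/fhir+json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read().decode("utf-8"))

        patient = read_patient("http://fhir.example.org/baseR4", "123")
        print(patient["resourceType"], patient.get("name"))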

    The encapsulation of legacy binaries using an XML-based approach with applications in ocean forecasting

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 85-87). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. This thesis presents an XML-based approach for the encapsulation of legacy binaries. A method that utilizes XML documents to describe the various parameters and settings for the compilation and execution of an encapsulated binary is discussed. The binary is treated as a black-box component, and the XML description for that binary contains relevant restrictions, such as input and output files and runtime parameters read in from the standard input stream. The proposed XML schema design constrains the aforementioned XML descriptions of binaries. The usage parameters for the binaries are expressed by such XML documents. A prototype system is then able to take any of these schema-conforming XML descriptions and display the relevant user controls in a graphical user interface (GUI). Instead of editing obscure script files, the user can make changes to build-time and runtime parameters for a binary using the presented system interface. After validating the user inputs, the system generates the required script files automatically and proceeds to compile and/or execute the binary. The Primitive Equation Model binary of the Harvard Ocean Prediction System (HOPS) was successfully encapsulated using the presented approach. The customization and control of the binary's compilation and execution through a GUI was achieved. By Robert C. Chang. M.Eng.
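
    A toy sketch of the idea (the element and attribute names are invented, not the thesis's actual schema): an XML description of a black-box binary is parsed and turned into the command line the user would otherwise assemble by hand in a script.

        import xml.etree.ElementTree as ET

        # Hypothetical description of a binary's runtime parameters.
        DESCRIPTION = """
        <binary name="pe_model" path="./pe_model">
          <param name="grid_file" flag="-g" value="grid.dat"/>
          <param name="timestep"  flag="-t" value="60"/>
        </binary>
        """

        root = ET.fromstring(DESCRIPTION)
        cmd = [root.get("path")]
        for p in root.findall("param"):
            cmd += [p.get("flag"), p.get("value")]

        print(" ".join(cmd))  # -> ./pe_model -g grid.dat -t 60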

    Formalization of neuro-biological models for spiking neurons

    When modelling cortical neuronal maps (here, spiking neuronal networks) within the scope of the FACETS project, researchers in neuroscience and computer science use NeuroML, an XML language, to specify biological neuronal networks. These networks can be simulated using either analogue or event-based techniques. Specifications include:
    - parametric model specification
    - symbolic definition of model equations
    - formalization of related semantic aspects (paradigms, ..)
    and they are used by "non-computer-scientists". In this context XML is used to specify data structures, not documents. The first version of NeuroML uses Java to map the XML biological data, which can later be simulated within GENESIS, NEURON, etc. The second version uses tools for handling XML data, such as XSL, to transform an XML file. To allow NeuroML to be used intensively within the scope of the FACETS project, we analyse the software in its entirety: first we evaluate the software in depth in the Technical Report section, then we propose a prototype for writing NeuroML code easily.
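
    The sketch below mimics the flavour of this XML-as-data-structure usage; the element names are simplified stand-ins, not NeuroML's real schema. A simulator front end would map such parameters onto its own model objects.

        import xml.etree.ElementTree as ET

        # Invented, NeuroML-flavoured description of two neuron populations.
        MODEL = """
        <network>
          <population name="excitatory" size="800" cell="integrate_and_fire"/>
          <population name="inhibitory" size="200" cell="integrate_and_fire"/>
        </network>
        """

        for pop in ET.fromstring(MODEL).findall("population"):
            print(pop.get("name"), int(pop.get("size")), pop.get("cell"))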

    Chapter 59: Web Services

    Web services are a cornerstone of the distributed computing infrastructure that the VO is built upon, yet to the newcomer they can appear to be a black art. This perception is not helped by the miasma of technobabble that pervades the subject and the seemingly impenetrable high priesthood of actual users. In truth, however, there is nothing conceptually difficult about web services (unsurprisingly, any complexities will lie in the implementation details) nor indeed anything particularly new. A web service is a piece of software available over a network with a formal description of how it is called and what it returns that a computer can understand. Note that entities such as web servers, ftp servers and database servers do not generally qualify as they lack the standardized description of their inputs and outputs. There are prior technologies, such as RMI, CORBA, and DCOM, that have employed a similar approach, but the success of web services lies predominantly in their use of standardized XML to provide a language-neutral way of representing data. In fact, the standardization goes further as web services are traditionally (or as traditionally as five years will allow) tied to a specific set of technologies (WSDL and SOAP conveyed using HTTP with an XML serialization). Alternative implementations are becoming increasingly common and we will cover some of these here. One important thing to remember in all of this, though, is that web services are meant for use by computers and not humans (unlike web pages), and this is why so much of it seems incomprehensible gobbledegook. In this chapter, we will start with an overview of the web services currently in the VO and present a short guide on how to use and deploy a web service. We will then review the different approaches to web services, particularly REST and SOAP, and alternatives to XML as a data format. We will consider how web services can be formally described and discuss how advanced features such as security, state and asynchrony can be provided. Note that much of this material is not yet used in the VO but features heavily in IVOA discussions on advanced services and capabilities.
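
    As a rough sketch of the two styles reviewed in the chapter (all endpoint URLs below are placeholders): a SOAP call wraps its payload in an XML envelope that is POSTed to a single service URL, while a REST call encodes the operation in the URL itself. The requests are only constructed, not sent, since the services are fictitious.

        import urllib.request

        SOAP_ENVELOPE = """<?xml version="1.0"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body><getTemperature><city>Boston</city></getTemperature></soap:Body>
        </soap:Envelope>"""

        soap_request = urllib.request.Request(
            "http://example.org/weather",  # placeholder SOAP endpoint
            data=SOAP_ENVELOPE.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8"},
        )

        rest_url = "http://example.org/weather/temperature?city=Boston"

        print(soap_request.get_method())  # POST: the operation is named inside the envelope
        print(rest_url)                   # REST: the operation is named in the URL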

    Automatic visualization and control of arbitrary numerical simulations

    Authors’ preprint version as submitted to ECCOMAS Congress 2016, Minisymposium 505 - Interactive Simulations in Computational Engineering.

    Abstract: Visualization of numerical simulation data has become a cornerstone for many industries and research areas today. There exists a large amount of software support, which is usually tied to specific problem domains or simulation platforms. However, numerical simulations have commonalities in the building blocks of their descriptions (e.g., dimensionality, range constraints, sample frequency). Instead of encoding these descriptions and their meaning into software architectures, we propose to base their interpretation and evaluation on a data-centric model. This approach draws much inspiration from the work of the IEEE Simulation Interoperability Standards Group, as currently applied in distributed (military) training and simulation scenarios, and seeks to extend those ideas. By using an extensible self-describing protocol format, simulation users as well as simulation-code providers would be able to express the meaning of their data even if no access to the underlying source code was available or if new and unforeseen use cases emerge. A protocol definition will allow simulation-domain experts to describe constraints that can be used for automatically creating appropriate visualizations of simulation data and control interfaces. Potentially, this will enable leveraging innovations on both the simulation and visualization sides of the problem continuum. We envision the design and development of algorithms and software tools for the automatic visualization of complex data from numerical simulations executed on a wide variety of platforms (e.g., remote HPC systems, local many-core or GPU-based systems). We also envisage using this automatically gathered information to control (or steer) the simulation while it is running, as well as providing the ability to fine-tune representational aspects of the visualizations produced.
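
    A minimal illustration of the data-centric idea (all field names are invented for this sketch): each stream carries a self-description, and a generic front end chooses a visualization from that metadata rather than from knowledge hard-coded against one simulator.

        # Hypothetical self-describing packet: metadata travels with the samples.
        packet = {
            "meta": {"dimensions": 1, "range": (0.0, 1.0), "sample_hz": 100},
            "samples": [0.10, 0.40, 0.35, 0.90],
        }

        def choose_view(meta):
            # Dispatch purely on the self-description, not on the simulator.
            if meta["dimensions"] == 1:
                return "line plot"
            if meta["dimensions"] == 2:
                return "heat map"
            return "volume rendering"

        print(choose_view(packet["meta"]))  # -> line plot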