
    Automatic visualization and control of arbitrary numerical simulations

    Authors’ preprint version as submitted to ECCOMAS Congress 2016, Minisymposium 505 - Interactive Simulations in Computational Engineering. Abstract: Visualization of numerical simulation data has become a cornerstone for many industries and research areas today. There exists a large amount of software support, which is usually tied to specific problem domains or simulation platforms. However, numerical simulations have commonalities in the building blocks of their descriptions (e.g., dimensionality, range constraints, sample frequency). Instead of encoding these descriptions and their meaning into software architectures, we propose to base their interpretation and evaluation on a data-centric model. This approach draws much inspiration from work of the IEEE Simulation Interoperability Standards Group, as currently applied in distributed (military) training and simulation scenarios, and seeks to extend those ideas. By using an extensible, self-describing protocol format, simulation users as well as simulation-code providers would be able to express the meaning of their data even if no access to the underlying source code was available or if new and unforeseen use cases emerge. A protocol definition will allow simulation-domain experts to describe constraints that can be used for automatically creating appropriate visualizations of simulation data and control interfaces. Potentially, this will enable leveraging innovations on both the simulation and visualization side of the problem continuum. We envision the design and development of algorithms and software tools for the automatic visualization of complex data from numerical simulations executed on a wide variety of platforms (e.g., remote HPC systems, local many-core or GPU-based systems). We also envisage using this automatically gathered information to control (or steer) the simulation while it is running, as well as providing the ability for fine-tuning representational aspects of the visualizations produced.
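
    The abstract leaves the protocol format itself unspecified; as a rough sketch of the idea, the snippet below describes a simulation field in a small, self-describing XML fragment and derives a default visualization and control from the declared dimensionality, range and writability. All element and attribute names are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a self-describing field definition and a rule that maps
# its declared properties (dimensionality, range, writability) to a default
# visualization and control. Names below are invented, not the paper's protocol.
import xml.etree.ElementTree as ET

FIELD_XML = """
<field name="temperature" unit="K">
  <dimensionality>3</dimensionality>
  <range min="250.0" max="400.0"/>
  <sampleFrequency hz="10"/>
  <control writable="true"/>
</field>
"""

def describe(field_xml: str) -> dict:
    """Parse a self-describing field and choose a default presentation for it."""
    field = ET.fromstring(field_xml)
    dims = int(field.findtext("dimensionality"))
    rng = field.find("range")
    return {
        "name": field.get("name"),
        "unit": field.get("unit"),
        "range": (float(rng.get("min")), float(rng.get("max"))),
        # Simple mapping rule: 1D -> line plot, 2D -> heat map, 3D -> volume rendering.
        "visualization": {1: "line plot", 2: "heat map", 3: "volume rendering"}.get(dims, "table"),
        # A writable numeric field can be steered through a slider bounded by its range.
        "control": "slider" if field.find("control").get("writable") == "true" else None,
    }

print(describe(FIELD_XML))
```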

    The DSD Schema Language and its Applications

    XML (eXtensible Markup Language), a linear syntax for trees, has gathered a remarkable amount of interest in industry. The acceptance of XML opens new venues for the application of formal methods such as specification of abstract syntax tree sets and tree transformations. A user domain may be specified as a set of trees. For example, XHTML is a user domain corresponding to the set of XML documents that make sense as HTML. A notation for defining such a set of XML trees is called a schema language. We believe that a useful schema notation must identify most of the syntactic requirements that the documents in the user domain follow; allow efficient parsing; be readable to the user; allow a declarative default notation à la CSS; and be modular and extensible to support evolving classes of XML documents. In the present paper, we give a tutorial introduction to the DSD (Document Structure Description) notation as our bid on how to meet these requirements. The DSD notation was inspired by industrial needs, and we show how DSDs help manage aspects of complex XML software through a case study about interactive voice response systems (automated telephone answering systems, where input is through the telephone keypad or speech recognition). The expressiveness of DSDs goes beyond the DTD schema concept that is already part of XML. We advocate the use of nonterminals in a top-down manner, coupled with boolean logic and regular expressions to describe how constraints on tree nodes depend on their context. We also support a general, declarative mechanism for inserting default elements and attributes that is reminiscent of Cascading Style Sheets (CSS), a way of manipulating formatting instructions in HTML that is built into all modern browsers. Finally, we include a simple technique for evolving DSDs through selective redefinitions. DSDs are in many ways much more expressive than XML Schema (the schema language proposed by the W3C), but their syntactic and semantic definition in English is only 1/8th the size. Also, the DSD notation is self-describable: the syntax of legal DSD documents and all static semantic requirements can be captured in a DSD document, called the meta-DSD.
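
    The DSD notation itself is not reproduced in the abstract, so the following toy sketch only illustrates two of the ideas it mentions: a constraint on a tree node that depends on its context, and CSS-like insertion of default attributes. The document structure and the rule are invented, and real DSDs express such rules declaratively rather than in code.

```python
# Illustrative sketch only: a context-dependent constraint and a CSS-like default,
# in the spirit of DSD's ideas. This is not the DSD notation; the element and
# attribute names and the rule are invented.
import xml.etree.ElementTree as ET

DOC = """
<menu mode="speech">
  <prompt/>
  <prompt text="Press 1 for sales"/>
</menu>
"""

def check_and_apply_defaults(doc_xml: str) -> ET.Element:
    root = ET.fromstring(doc_xml)
    for prompt in root.iter("prompt"):
        # Context-dependent constraint: inside a keypad-mode menu,
        # every prompt must carry a 'text' attribute.
        if root.get("mode") == "keypad" and prompt.get("text") is None:
            raise ValueError("prompt requires 'text' when menu mode is 'keypad'")
        # Declarative default (reminiscent of CSS): a missing attribute is filled in.
        if prompt.get("text") is None:
            prompt.set("text", "Please say your choice")
    return root

print(ET.tostring(check_and_apply_defaults(DOC), encoding="unicode"))
```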

    Extensible metadata repository for information systems

    Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science. Information systems are usually systems with a strong integration component, and some of them rely on integration solutions based on metadata (data that describes data). In that situation, there is a need to deal with metadata as if it were "normal" information. For that matter, a metadata repository that handles integrity, storage and validity, and that eases the processes of information integration in the information system, is a wise choice. There are several metadata repositories available in the market, but none of them is prepared to deal with the needs of information systems, or is generic enough to deal with the multitude of situations/domains of information and with the necessary integration features. In the SESS project (a European Space Agency project), a generic metadata repository was developed, based on XML technologies. This repository provided tools for information integration, validation, storage, sharing and import, as well as system and data integration, but it required the use of fixed syntactic rules stored in the content of the XML files. This causes severe problems when trying to import documents from external data sources that are unaware of these syntactic rules. In this thesis, a metadata repository was developed that provides the same mechanisms of storage, integrity, validity, etc., but is especially focused on easy integration of metadata from any type of external source (in XML format) and provides an environment that simplifies the reuse of already existing types of metadata to build new types of metadata, all without having to modify the documents it stores. The repository stores XML documents (known as Instances), which are instances of a Concept; a Concept defines an XML structure that validates its Instances. To support reuse, a special unit named Fragment allows defining an XML structure (which can itself be created by composing other Fragments) that can be reused by Concepts when defining their own structure. Elements of the repository (Instances, Concepts and Fragments) have an identifier based on (and compatible with) URIs, named Metadata Repository Identifier (MRI). Those identifiers, as well as management information (including relations), are managed by the repository without the need for fixed syntactic rules, easing integration. A set of tests using documents from the SESS project and from the software house ITDS was used to successfully validate the repository against the thesis objectives of easy integration and promotion of reuse.
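
    A minimal sketch of the Instance/Concept/Fragment model described above, assuming invented MRI strings, class names and validation logic; the thesis' actual repository API will differ.

```python
# Minimal sketch of the Instance/Concept/Fragment model with MRI-style identifiers.
# The MRI syntax, class names and validation below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Fragment:
    mri: str                 # e.g. "mri://repo/fragments/address" (invented format)
    xml_structure: str       # reusable XML structure, possibly composed of other Fragments

@dataclass
class Concept:
    mri: str                 # e.g. "mri://repo/concepts/customer"
    fragments: List[Fragment] = field(default_factory=list)

    def validates(self, instance_xml: str) -> bool:
        # Placeholder: a real repository would validate instance_xml against the
        # XML structure defined by the Concept and the Fragments it reuses.
        return instance_xml.strip().startswith("<")

@dataclass
class Instance:
    mri: str                 # e.g. "mri://repo/instances/customer/42"
    concept: Concept
    document: str            # the stored XML document

address = Fragment("mri://repo/fragments/address", "<address><street/><city/></address>")
customer = Concept("mri://repo/concepts/customer", [address])
inst = Instance("mri://repo/instances/customer/42", customer, "<customer>...</customer>")
print(customer.validates(inst.document))   # True
```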

    The Evolution, current status, and future direction of XML

    The Extensible Markup Language (XML) is now established as a multifaceted, open-ended markup language and continues to increase in popularity. The major players that have shaped its development include the United States government, several key corporate entities, and the World Wide Web Consortium (W3C). This paper will examine these influences on XML and will address the emergence, the current status, and the future direction of this language. In addition, it will review best practices and research that have contributed to the continued development and advancement of XML.

    Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
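
    As a sketch of how a study "shipped as a single unit" might be consumed, the snippet below walks a hypothetical study-level XML description; the element names and file layout are assumptions, not the actual ESS schema (see www.eegstudy.org for the real specification).

```python
# Sketch under stated assumptions: reading a study-level XML description with
# hypothetical element names and folder layout; the real ESS schema defines its
# own elements and conventions.
import xml.etree.ElementTree as ET
from pathlib import Path

def list_recordings(study_dir: str):
    """Yield (session_id, data_file) pairs from a hypothetical study_description.xml."""
    root = ET.parse(Path(study_dir) / "study_description.xml").getroot()
    for session in root.iter("session"):
        session_id = session.get("id")
        for rec in session.iter("recording"):
            yield session_id, Path(study_dir) / rec.get("file")

# Usage sketch: feed each raw (Data Level 1) recording into a preprocessing pipeline.
# for sid, f in list_recordings("/data/my_ess_study"):
#     print(sid, f)
```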

    Integrated XML and GML in Geographical Information System

    This project concentrates on the study of the eXtensible Markup Language (XML) and the Geography Markup Language (GML) in Geographical Information Systems (GIS). The objective of the project is to convert spatial data (e.g., coordinates, areas) using XML and GML; the resulting encoding is then integrated and viewed in the web browser using Scalable Vector Graphics (SVG) technology. Basically, this project is done to find a new way to overcome the weaknesses of map digitizing and to take advantage of GML technology in Geographical Information Systems. The project scope concentrates on the usage of XML and GML in GIS. Research is done on the XML technologies that support GML: technology for encoding and data modeling (Document Type Definition, XML Schema), technology for transforming (XSLT) and technology for graphic rendering (SVG). Research on GML is focused on manipulating spatial data and converting it to simple features such as points, lines and polygons. This project combines XML, GML and SVG technologies in order to meet the project objectives. In completing this project, the waterfall model is used as the methodology for system development. The project is developed according to the four phases of system development: planning, analysis, design and implementation. The discussion of this project focuses on GML compatibility and the advantages of using SVG to view the map. The simple map display created shows that GML is suitable for handling geo-spatial data over the Internet. The user is able to view the map, and a zooming feature is provided by SVG.
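
    As a simplified illustration of the GML-to-SVG rendering step mentioned above (the project itself uses XSLT for the transformation), the snippet below turns a GML 2-style coordinate list into an SVG polygon; the coordinates and styling are invented.

```python
# Illustrative sketch: turning a GML 2-style coordinate list ("x,y x,y ...") into
# an SVG polygon. A simplified stand-in for the project's XSLT pipeline.
GML_COORDINATES = "100,100 200,80 240,160 140,200"

def gml_coords_to_svg_polygon(coords: str, fill: str = "#cde") -> str:
    # SVG <polygon> accepts the same "x,y x,y" point list as gml:coordinates.
    return f'<polygon points="{coords.strip()}" fill="{fill}" stroke="black"/>'

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="300" height="250">\n'
    f"  {gml_coords_to_svg_polygon(GML_COORDINATES)}\n"
    "</svg>"
)
print(svg)   # save as .svg and open in a browser; zooming/panning can use the SVG viewBox
```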

    An Overview of the Semantic Web

    The phenomenal growth of content on the Web has made it difficult to locate, organize, and retrieve information. One way to cope is to automate these tasks. But because of the complexity of natural-language processing, we still do not have machines that can understand and analyze the content on the Web as humans do. The main idea behind the Semantic Web is to develop technologies and applications that enable machines to understand information semantically and perform the desired task. This paper discusses some of the technologies, such as URIs, XML, RDF and ontologies, which are the core technologies currently used to develop the Semantic Web.
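
    As a toy illustration of the RDF triple model referred to above: resources are named by URIs, and every statement is a (subject, predicate, object) triple. The example URIs below are invented, and real applications would use an RDF library and published ontologies.

```python
# Toy illustration of RDF triples: explicit (subject, predicate, object) statements
# over URIs. Example subject/topic URIs are invented; dcterms:title is a real
# Dublin Core property.
triples = {
    ("http://example.org/papers/42", "http://purl.org/dc/terms/title",
     "An Overview of the Semantic Web"),
    ("http://example.org/papers/42", "http://purl.org/dc/terms/subject",
     "http://example.org/topics/RDF"),
}

# A machine can only "understand" the data through such explicit relations:
for s, p, o in triples:
    if p == "http://purl.org/dc/terms/title":
        print(f"{s} has title {o!r}")
```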

    The XBabelPhish MAGE-ML and XML Translator

    Background: MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard. First, MAGE-ML files are exceptionally large – too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all, producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. Results: We have developed XBabelPhish – an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 MB) files, which are common in the world of MAGE-ML. Conclusion: XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.
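
    The following sketch only illustrates the variant-translation problem described above; XBabelPhish itself is XQuery-based, whereas this snippet uses Python's ElementTree and an invented, MAGE-ML-like element to show one representation being rewritten into another.

```python
# Simplified illustration of translating between two XML variants that encode the
# same information differently. Not XBabelPhish's implementation; the element and
# attribute names below are invented stand-ins for MAGE-ML structures.
import xml.etree.ElementTree as ET

VARIANT_A = '<BioAssay identifier="BA-1"><Name>heat shock, 30 min</Name></BioAssay>'

def variant_a_to_b(xml_a: str) -> str:
    """Rewrite: the assay name moves from a child element to an attribute."""
    a = ET.fromstring(xml_a)
    b = ET.Element("BioAssay", identifier=a.get("identifier"), name=a.findtext("Name"))
    return ET.tostring(b, encoding="unicode")

print(variant_a_to_b(VARIANT_A))
# -> <BioAssay identifier="BA-1" name="heat shock, 30 min" />
```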