
    Geoscience after IT: Part L. Adjusting the emerging information system to new technology

    Coherent development depends on following widely used standards that respect our vast legacy of existing entries in the geoscience record. Middleware ensures that we see a coherent view from our desktops of diverse sources of information. Developments specific to managing the written word, map content, and structured data come together in shared metadata linking topics and information types

    Geoscience after IT: Part J. Human requirements that shape the evolving geoscience information system

    The geoscience record is constrained by the limitations of human thought and of the technology for handling information. IT can lead us away from the tyranny of older technology, but to find the right path, we need to understand our own limitations. Language, images, data and mathematical models are tools for expressing and recording our ideas. Backed by intuition, they enable us to think in various modes, to build knowledge from information and create models as artificial views of a real world. Markup languages may accommodate more flexible and better-connected records, and the object-oriented approach may help to match IT more closely to our thought processes

    Next generation assisting clinical applications by using semantic-aware electronic health records

    The health care sector is no longer imaginable without electronic health records. However, since the original idea of electronic health records was focused on data storage rather than data processing, many current implementations do not take full advantage of the opportunities provided by computerization. This paper introduces the Patient Summary Ontology for the representation of electronic health records and demonstrates the possibility of creating next-generation assisting clinical applications based on these semantic-aware electronic health records. An architecture for interoperating with electronic health records formatted using other standards is also presented
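The core idea above is that a record field carries a machine-interpretable concept code rather than free text, so applications can reason over the record. A minimal sketch, assuming illustrative field names and a placeholder ontology IRI (this is not the actual Patient Summary Ontology schema, which the abstract does not detail):

```python
# Sketch of a "semantic-aware" record entry: each condition is annotated
# with a concept code from a coding system, so software can test for a
# concept instead of matching strings. Codes shown are SNOMED CT-style
# examples; the ontology context IRI is a placeholder.

def make_condition(label, code, system="http://snomed.info/sct"):
    """Return a condition entry annotated with its coding system."""
    return {"label": label, "code": code, "system": system}

record = {
    "@context": "http://example.org/patient-summary-ontology#",  # placeholder
    "patient_id": "p-0001",
    "conditions": [
        make_condition("Hypertension", "38341003"),
        make_condition("Type 2 diabetes", "44054006"),
    ],
}

# An assisting application reasons over codes, not display labels:
diabetic = any(c["code"] == "44054006" for c in record["conditions"])
```

The point of the design is that renaming the human-readable label does not break the application logic, since matching is done on the code.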

    Spud 1.0: generalising and automating the user interfaces of scientific computer models

    The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad hoc text files, which make using the model in question difficult and error-prone and significantly increase the development cost of the model. In this paper, we present a model-independent system, Spud, which formalises the specification of model input formats in terms of formal grammars. This is combined with an automated graphical user interface, which guides users to create valid model inputs based on the grammar provided, and a generic options-reading module, libspud, which minimises the development cost of adding model options. Together, this provides a user-friendly, well-documented, self-validating user interface which is applicable to a wide range of scientific models and which minimises the developer input required to maintain and extend the model interface
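The key move is declaring model options once in a grammar and validating inputs against it before they reach the model. Spud itself uses RELAX NG schemas; the dictionary-based schema and option paths below are a simplified illustration of the same idea, not the Spud or libspud API:

```python
# Minimal sketch of grammar-driven option validation: options are
# declared once (path, type, required) and every input is checked
# against that declaration. Paths and types here are invented.

SCHEMA = {
    "/timestepping/timestep":    {"type": float, "required": True},
    "/timestepping/finish_time": {"type": float, "required": True},
    "/io/dump_period":           {"type": int,   "required": False},
}

def validate(options):
    """Check an options dict against SCHEMA; return a list of errors."""
    errors = []
    for path, spec in SCHEMA.items():
        if path not in options:
            if spec["required"]:
                errors.append(f"missing required option {path}")
        elif not isinstance(options[path], spec["type"]):
            errors.append(f"{path}: expected {spec['type'].__name__}")
    return errors

errors = validate({"/timestepping/timestep": 0.1,
                   "/timestepping/finish_time": "ten"})
```

Because the schema is data rather than hand-written parsing code, a GUI can be generated from it and a new option costs one schema entry rather than new parser logic.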

    Interactive visual exploration of a large spatio-temporal dataset: Reflections on a geovisualization mashup

    Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made of a mobile directory service. Novel combinations of interaction and visual encoding are developed including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here
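The glue in such a mashup is the exchange format: aggregated records are serialized as KML so Google Earth can render them. A minimal sketch, with invented place names and request counts (the case study's actual aggregation pipeline uses MySQL/PHP, not Python):

```python
# Sketch of encoding aggregated log records as KML placemarks for
# display in a geo-browser. The records below are illustrative.
from xml.sax.saxutils import escape

def placemark(name, lon, lat, count):
    """Render one aggregated record as a KML Placemark element."""
    return (f"<Placemark><name>{escape(name)}</name>"
            f"<description>{count} requests</description>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            f"</Placemark>")

def to_kml(records):
    """Wrap placemarks in a minimal valid KML document."""
    body = "".join(placemark(*r) for r in records)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f"<Document>{body}</Document></kml>")

kml = to_kml([("London", -0.1276, 51.5072, 1400)])
```

Note the KML convention of longitude before latitude in `<coordinates>`, the reverse of the lat/lon order most logs use.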

    Huddl: the Hydrographic Universal Data Description Language

    Since many of the attempts to introduce a universal hydrographic data format have failed or have been only partially successful, a different approach is proposed. Our solution is the Hydrographic Universal Data Description Language (HUDDL), a descriptive XML-based language that permits the creation of a standardized description of (past, present, and future) data formats, and allows for applications like HUDDLER, a compiler that automatically creates drivers for data access and manipulation. HUDDL also represents a powerful solution for archiving data along with their structural description, as well as for cataloguing existing format specifications and their version control. HUDDL is intended to be an open, community-led initiative to simplify the issues involved in hydrographic data access

    HUDDL for description and archive of hydrographic binary data

    Many of the attempts to introduce a universal hydrographic binary data format have failed or have been only partially successful. In essence, this is because such formats either have to simplify the data to such an extent that they only support the lowest common subset of all the formats covered, or they attempt to be a superset of all formats and quickly become cumbersome. Neither choice works well in practice. This paper presents a different approach: a standardized description of (past, present, and future) data formats using the Hydrographic Universal Data Description Language (HUDDL), a descriptive language implemented using the Extensible Markup Language (XML). That is, XML is used to provide a structural and physical description of a data format, rather than the content of a particular file. Done correctly, this opens the possibility of automatically generating both multi-language data parsers and documentation for format specifications based on their HUDDL descriptions, as well as providing easy version control of them. This solution also provides a powerful approach for archiving a structural description of data along with the data, so that binary data will be easy to access in the future. Intending to provide a relatively low-effort solution to index the wide range of existing formats, we suggest the creation of a catalogue of format descriptions, each of them capturing the logical and physical specifications for a given data format (with its subsequent upgrades). A C/C++ parser code generator is used as an example prototype of one of the possible advantages of the adoption of such a hydrographic data format catalogue
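The central idea of both HUDDL abstracts is that the physical layout of a binary record lives in an XML description, and parsers are generated from that description rather than hand-written. A minimal sketch in Python (HUDDL's actual element names and type vocabulary are not shown in the abstracts, so the XML below is illustrative only; HUDDLER generates C/C++):

```python
# Sketch of description-driven parsing: an XML document states the
# fields, types and endianness of a binary record, and a parser is
# built from it at run time. Element names here are invented, not
# real HUDDL syntax.
import struct
import xml.etree.ElementTree as ET

DESCRIPTION = """
<format name="ping" endian="little">
  <field name="beam" type="uint16"/>
  <field name="depth" type="float32"/>
</format>
"""

TYPE_CODES = {"uint16": "H", "int32": "i", "float32": "f"}

def build_parser(xml_text):
    """Turn a format description into a function parsing one record."""
    root = ET.fromstring(xml_text)
    prefix = "<" if root.get("endian") == "little" else ">"
    fields = root.findall("field")
    names = [f.get("name") for f in fields]
    fmt = prefix + "".join(TYPE_CODES[f.get("type")] for f in fields)
    def parse(data):
        return dict(zip(names, struct.unpack(fmt, data)))
    return parse

parse = build_parser(DESCRIPTION)
record = parse(struct.pack("<Hf", 42, 123.5))
```

Archiving `DESCRIPTION` alongside the data is what keeps the binary readable later: a future reader needs only the description, not the original software.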

    Designing Web-enabled services to provide damage estimation maps caused by natural hazards

    The availability of building stock inventory data and demographic information is an important requirement for risk assessment studies when attempting to predict and estimate losses due to natural hazards such as earthquakes, storms, floods or tsunamis. The better this information, the more accurately damage to structures and lifelines can be predicted and the better expected impacts on the population can be estimated. When a disaster strikes, a map is often one of the first requirements for answering questions related to location, casualties and damage zones caused by the event. Maps of appropriate scale that represent relative and absolute damage distributions may be of great importance for rescuing lives and property, and for providing relief. However, such maps are often difficult to obtain during the first hours or even days after a natural disaster occurs. The Open Geospatial Consortium Web Services (OWS) specifications enable access to datasets and services in shared, distributed and interoperable environments through web-enabled services. In view of these advantages, in this paper we propose the use of OWS as a possible solution to the problem of acquiring suitable datasets for risk assessment studies. The design of the web-enabled services was carried out using the municipality of Managua (Nicaragua) and the development of earthquake damage and loss estimation maps as a first case study. Four organizations in different locations are involved in this proposal and connected through web services, each with a specific role
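Concretely, a damage-distribution map published through one of the OWS interfaces can be fetched by any client with a single standardized request. A sketch of a WMS 1.3.0 GetMap call, where the endpoint, layer name and bounding box are hypothetical but the parameter keys are the standard ones:

```python
# Sketch of building an OGC WMS 1.3.0 GetMap request URL for a
# (hypothetical) damage-estimation layer. Endpoint and layer name
# are invented; the query parameters are standard WMS keys.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, width=800, height=600):
    """Build a GetMap URL; bbox is (min, min, max, max) in CRS order."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("https://example.org/ows", "managua:damage_estimate",
                 (12.08, -86.32, 12.18, -86.20))
```

Because every participating organization exposes the same interface, the client code does not change when a dataset moves between the four organizations in the proposal.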

    Standardization of Seismic Tomographic Models and Earthquake Focal Mechanisms Datasets Based on Web Technologies, Visualization with Keyhole Markup Language

    We present two projects in seismology that have been ported to web technologies and that provide their results as Keyhole Markup Language (KML) visualization layers. These use the Google Earth geo-browser as a flexible platform that can substitute for specialized graphical tools in qualitative visual data analyses and comparisons. The Network of Research Infrastructures for European Seismology (NERIES) Tomographic Earth Model Repository contains datasets from over 20 models from the literature. A hierarchical structure of folders representing the set of depths for each model is implemented in KML, which immediately yields an intuitive interface for users to navigate freely and compare tomographic plots. The KML layer for the European-Mediterranean Regional Centroid-Moment Tensor Catalog displays the focal-mechanism solutions of moderate-magnitude earthquakes from 1997 to the present. In both projects, our aim was also to propose standard representations of scientific datasets. Here, the general semantic approach of XML has an important impact that must be explored further, although we find the KML syntax to be shifted towards detailed visualization aspects. We have thus used, and propose the use of, JavaScript Object Notation (JSON), another semantic notation stemming from the web-development community that provides a compact, general-purpose data-exchange format
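The JSON proposal can be illustrated with a single focal-mechanism solution serialized for exchange. The field names and values below are invented for illustration; the RCMT catalogue defines its own schema, which the abstract does not reproduce:

```python
# Sketch of a focal-mechanism solution as a compact JSON exchange
# record, separate from its KML visualization. All field names and
# values are illustrative.
import json

event = {
    "event_id": "rcmt-1997-001",  # hypothetical identifier
    "origin": {"lat": 38.2, "lon": 22.1, "depth_km": 10.0},
    "magnitude": {"type": "Mw", "value": 5.6},
    "moment_tensor": {  # six independent components, in N*m
        "mrr": 1.2e17, "mtt": -0.4e17, "mpp": -0.8e17,
        "mrt": 0.3e17, "mrp": -0.1e17, "mtp": 0.5e17,
    },
}

text = json.dumps(event)       # compact exchange form
roundtrip = json.loads(text)   # any web client can parse this back
```

Keeping the data in a general-purpose notation like this, and generating KML from it only for display, is exactly the separation of content from visualization that the abstract argues for.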