Investigation into Indexing XML Data Techniques
The rapid development of XML technology has improved the WWW, since XML data has many advantages and has become a common technology for transferring data across the Internet. The objective of this research is therefore to investigate and study XML indexing techniques in terms of their structures. The main goal of this investigation is to identify the main limitations of these techniques and any other open issues.
Furthermore, this research considers the most common XML indexing techniques and performs a comparison between them. Subsequently, this work argues out these limitations. To conclude, the main problem shared by all XML indexing techniques is the trade-off between the
size and the efficiency of the indexes: the indexes must grow large in order to perform well, and none of them is suitable for all users’ requirements. Nevertheless, each of these techniques has its own advantages in certain contexts
IVOA Recommendation: Simple Spectral Access Protocol Version 1.1
The Simple Spectral Access (SSA) Protocol (SSAP) defines a uniform interface
to remotely discover and access one dimensional spectra. SSA is a member of an
integrated family of data access interfaces altogether comprising the Data
Access Layer (DAL) of the IVOA. SSA is based on a more general data model
capable of describing most tabular spectrophotometric data, including time
series and spectral energy distributions (SEDs) as well as 1-D spectra; however
the scope of the SSA interface as specified in this document is limited to
simple 1-D spectra, including simple aggregations of 1-D spectra. The form of
the SSA interface is simple: clients first query the global resource registry
to find services of interest and then issue a data discovery query to selected
services to determine what relevant data is available from each service; the
candidate datasets available are described uniformly in a VOTable format
document which is returned in response to the query. Finally, the client may
retrieve selected datasets for analysis. Spectrum datasets returned by an SSA
spectrum service may be either precomputed, archival datasets, or they may be
virtual data which is computed on the fly to respond to a client request.
Spectrum datasets may conform to a standard data model defined by SSA, or may
be native spectra with custom project-defined content. Spectra may be returned
in any of a number of standard data formats. Spectral data is generally stored
externally to the VO in a format specific to each spectral data collection;
currently there is no standard way to represent astronomical spectra, and
virtually every project does it differently. Hence spectra may be actively
mediated to the standard SSA-defined data model at access time by the service,
so that client analysis programs do not have to be familiar with the
idiosyncratic details of each data collection to be accessed
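The discovery step described above can be sketched as a simple HTTP GET request. The following is a minimal illustration of building an SSA queryData URL; the service endpoint is hypothetical, and the parameter names follow the SSA 1.1 conventions summarised in this abstract:

```python
# Sketch of an SSAP data-discovery query. The base URL is hypothetical;
# REQUEST, POS, SIZE and FORMAT are standard SSA 1.1 query parameters.
from urllib.parse import urlencode

def ssa_query_url(base_url, ra, dec, size_deg, fmt="all"):
    """Build an SSA queryData URL for a cone around (ra, dec) in degrees."""
    params = {
        "REQUEST": "queryData",   # mandatory SSA operation
        "POS": f"{ra},{dec}",     # ICRS position, decimal degrees
        "SIZE": str(size_deg),    # search diameter in degrees
        "FORMAT": fmt,            # e.g. "fits", "votable", or "all"
    }
    return base_url + "?" + urlencode(params)

url = ssa_query_url("http://example.org/ssa", 180.0, -30.0, 0.1)
# The service answers with a VOTable describing candidate datasets;
# the client then retrieves chosen datasets via their access URLs.
```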
IVOA Recommendation: Data Model for Astronomical DataSet Characterisation
This document defines the high level metadata necessary to describe the
physical parameter space of observed or simulated astronomical data sets, such
as 2D-images, data cubes, X-ray event lists, IFU data, etc. The
Characterisation data model is an abstraction which can be used to derive a
structured description of any relevant data and thus to facilitate its
discovery and scientific interpretation. The model aims at facilitating the
manipulation of heterogeneous data in any VO framework or portal. A VO
Characterisation instance can include descriptions of the data axes, the range
of coordinates covered by the data, and details of the data sampling and
resolution on each axis. These descriptions should be in terms of physical
variables, independent of instrumental signatures as far as possible.
Implementations of this model have been described in the IVOA Note available
at: http://www.ivoa.net/Documents/latest/ImplementationCharacterisation.html
Utypes derived from this version of the UML model are listed and commented in
the following IVOA Note:
http://www.ivoa.net/Documents/latest/UtypeListCharacterisationDM.html
An XML schema has been built from the UML model and is available at:
http://www.ivoa.net/xml/Characterisation/Characterisation-v1.11.xsd
The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images
The Montage Image Mosaic Engine was designed as a scalable toolkit, written
in C for performance and portability across *nix platforms, that assembles FITS
images into mosaics. The code is freely available and has been widely used in
the astronomy and IT communities for research, product generation and for
developing next-generation cyber-infrastructure. Recently, it has begun to
find applicability in the field of visualization. This has come about
because the toolkit design allows easy integration into scalable systems that
process data for subsequent visualization in a browser or client. It also
includes a visualization tool suitable for automation and for integration into
Python: mViewer creates, with a single command, complex multi-color images
overlaid with coordinate displays, labels, and observation footprints, and
includes an adaptive image histogram equalization method that preserves the
structure of a stretched image over its dynamic range. The Montage toolkit
contains functionality originally developed to support the creation and
management of mosaics but which also offers value to visualization: a
background rectification algorithm that reveals the faint structure in an
image; and tools for creating cutout and down-sampled versions of large images.
Version 5 of Montage offers support for visualizing data written in the HEALPix
sky-tessellation scheme, and functionality for processing and organizing images
to comply with the TOAST sky-tessellation scheme required for consumption by
the World Wide Telescope (WWT). Four online tutorials enable readers to
reproduce and extend all the visualizations presented in this paper.
Comment: 16 pages, 9 figures; accepted for publication in the PASP Special
Focus Issue: Techniques and Methods for Astrophysical Data Visualization
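As a generic illustration of the histogram-stretching idea mentioned in this abstract, the sketch below implements plain (non-adaptive) histogram equalization on an 8-bit grayscale pixel list; it is not Montage's mViewer algorithm, which additionally adapts the stretch to preserve structure:

```python
# Minimal, pure-Python histogram equalization for 8-bit grayscale data.
# This is the textbook method, not the adaptive variant used by mViewer.
def equalize(pixels, levels=256):
    """Remap pixel values so the cumulative histogram becomes ~linear."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first non-empty bin
    n = len(pixels)
    # standard equalization lookup table; constant images map to 0
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

print(equalize([10, 10, 20, 30]))   # spreads values over the full range
```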
Desirable properties for XML update mechanisms
The adoption of XML as the default data interchange format and the standardisation of the XPath and XQuery languages have resulted in significant research in the development and implementation of XML databases capable of processing queries efficiently. The ever-increasing deployment of XML in industry and the real-world requirement to support efficient updates to XML documents have more recently prompted research in dynamic XML labelling schemes. In this paper, we provide an overview of the recent research in dynamic XML labelling schemes. Our motivation is to define a set of properties that represent a more holistic dynamic labelling scheme, and we present our findings through an evaluation matrix for most of the existing schemes that provide update functionality
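The labelling schemes surveyed here all aim to answer structural queries from labels alone. A minimal sketch of the classic Dewey-style (prefix) scheme, which many dynamic proposals extend, shows both the benefit and the update problem that motivates this line of work:

```python
# Sketch of a Dewey-style (prefix) XML labelling scheme. Each label is
# the root-to-node path, so structural relationships are decided by
# label comparison alone, without re-reading the document.
def is_ancestor(a, b):
    """True if the node labelled a is a proper ancestor of b."""
    return len(a) < len(b) and b[:len(a)] == a

def is_parent(a, b):
    """True if a is the direct parent of b."""
    return len(b) == len(a) + 1 and b[:len(a)] == a

# <bib><book><title/><author/></book></bib> labelled top-down:
bib, book, title, author = (1,), (1, 1), (1, 1, 1), (1, 1, 2)
assert is_ancestor(bib, title)
assert is_parent(book, author)
# The drawback motivating dynamic schemes: inserting a new first child
# under <book> forces relabelling of its existing children.
```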
A High Performance XML Querying Architecture
Data exchange on the Internet plays an essential role in electronic business (e-business). A recent trend in e-business is to create distributed databases to facilitate data exchange. In most cases, the distributed databases are developed by integrating existing systems, which may be in different database models and on different hardware and/or software platforms. Such heterogeneity may cause many difficulties. A solution to these difficulties is XML (the Extensible Markup Language), which is becoming the dominant language for exchanging data on the Internet. To develop XML systems for practical applications, developers have to address the performance issues. In this paper, we describe a new XML querying architecture that can be used to build high-performance systems. Experiments indicate that the architecture performs better than Oracle XML DB, one of the most commonly used commercial DBMSs for XML
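The workload such an architecture must serve can be illustrated with a small XPath-style query; the example below uses Python's standard-library ElementTree (not the paper's system or Oracle XML DB) purely to show the kind of navigate-and-aggregate query whose cost a high-performance design tries to minimise:

```python
# A small XPath-style query over exchanged e-business data, using the
# standard library only. Illustrative workload, not the paper's system.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<orders>"
    "<order id='1'><item>book</item><total>12.50</total></order>"
    "<order id='2'><item>pen</item><total>1.20</total></order>"
    "</orders>"
)
# Select every <order> element and read its <total> child:
totals = [float(o.findtext("total")) for o in doc.findall("order")]
print(sum(totals))   # aggregate across the matched elements
```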
GeohashTile: Vector Geographic Data Display Method Based on Geohash
In the development of geographic information-based applications for mobile devices, achieving better access speed and visual effects is the main research aim. In this paper, we propose a new geographic data display method based on Geohash, namely GeohashTile, to improve the performance of traditional geographic data display methods in data indexing, data compression, and the projection of different granularities. First, we use the Geohash encoding system to represent coordinates, as well as to partition and index large-scale geographic data; data compression and tile encoding are accomplished by Geohash. Second, to realize a direct conversion between Geohash and screen-pixel coordinates, we adopt the relative position projection method. Finally, we improve the calculation and rendering efficiency by using the intermediate result caching method. To evaluate the GeohashTile method, we have implemented the client and the server of the GeohashTile system, which is also evaluated in a real-world environment. The results show that Geohash encoding can accurately represent latitude and longitude coordinates in vector maps, while the GeohashTile framework has clear advantages in requested data volume and average load time compared to the state-of-the-art GeoTile system
An MPEG-7 scheme for semantic content modelling and filtering of digital video
Abstract Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format multimedia content models should conform to in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, front-end systems used for content modelling and filtering and experiences with a number of users
Adaptive content mapping for internet navigation
The Internet, the biggest human library ever assembled, keeps on growing. Although all kinds of information carriers (e.g. audio/video/hybrid file formats) are available, text-based documents dominate. It is estimated that about 80% of all information worldwide stored electronically exists in (or can be converted into) text form. More and more, all kinds of documents are generated by means of a text processing system and are therefore available electronically. Nowadays, many printed journals are also published online and may even cease to appear in print tomorrow. This development has many convincing advantages: the documents are available faster (cf. prepress services) and cheaper, they can be searched more easily, the physical storage only needs a fraction of the space previously necessary, and the medium will not age. For most people, fast and easy access is the most interesting feature of the new age; computer-aided search for specific documents or Web pages becomes the basic tool for information-oriented work. But this tool has problems. The current keyword-based search engines available on the Internet are not really appropriate for such a task: either far too many documents matching the specified keywords are presented, or none at all. The problem lies in the fact that it is often very difficult to choose appropriate terms describing the desired topic in the first place. This contribution discusses the current state-of-the-art techniques in content-based searching (along with common visualization/browsing approaches) and proposes a particular adaptive solution for intuitive Internet document navigation, which not only enables the user to provide full texts instead of manually selected keywords (if available), but also allows him/her to explore the whole database
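The core idea of querying with a full text rather than hand-picked keywords can be sketched with plain term-frequency vectors and cosine similarity; this is a generic illustration of content-based matching, not the adaptive mapping method the paper proposes:

```python
# Content-based matching by cosine similarity over term-frequency
# vectors: a full text serves as the query instead of chosen keywords.
# Generic sketch, not the paper's adaptive content-mapping approach.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "d1": "xml indexing techniques for web data",
    "d2": "cooking recipes for winter evenings",
}
# The "query" is itself a document, not a keyword list:
query = Counter("efficient indexing of xml data".split())
ranked = sorted(docs, reverse=True,
                key=lambda d: cosine(query, Counter(docs[d].split())))
print(ranked[0])   # the on-topic document ranks first
```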