
    Developing efficient web-based GIS applications

    The number of web-based GIS applications has increased in recent years. This paper surveys the mapping technologies, database standards, and web application development standards relevant to building web-based GIS applications. Different mapping technologies for displaying geo-referenced data are available and suit different situations. The paper also explains why Oracle is the system of choice for geospatial applications that must handle large amounts of data. Wireframing and design patterns have been shown to help make GIS web applications efficient, scalable, and usable, and should be an important part of every web-based GIS application. A range of development technologies is available, and their use in different operating environments is discussed here in some detail.

    Design of a framework for database indexes

    Database system performance depends greatly on the performance of the indexes used to look up and update the database, so efficient indexes are essential. Specialized application indexes developed by experts have "specialized source code" for each kind of database application. The time and cost of developing an index specific to a given application can be very high, making such indexes unaffordable or even unavailable in many cases. Object-oriented framework technology has been used to produce index frameworks from which concrete indexes can be developed, reducing development cost. An index framework can adapt to different key/data types, different queries, and different access methods. In this thesis, we focus on balanced tree indexes and develop a framework in the style of the STL. We concentrate on the early stages of analysis, architecture, and design in an object-oriented methodology. The result is a modular framework with decoupled modules for data, data references, containers, indexes, iterators, and algorithms. New applications can then be developed by replacing some of these modules and propagating the changes from one model to the next, without affecting the other modules. This yields an easier development process with a gentler learning curve, and produces applications that have their own "specialized" architecture, design, and source code.
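    The decoupled organization the abstract describes — separate pieces for data, key extraction, the backing container, search algorithms, and iteration — can be illustrated with a minimal sketch. This is my own toy code, not the thesis's framework: the class name, the `key` parameter, and the sorted-list backing store are all assumptions, and a real balanced-tree index would replace the list.

    ```python
    import bisect

    class SortedIndex:
        """Toy index with decoupled pieces: the key extractor, the backing
        container, and the search/iteration algorithms are separate, so any
        one can be swapped without touching the others. A sorted list stands
        in for the balanced tree a real index framework would use."""

        def __init__(self, key=lambda record: record):
            self._key = key          # pluggable key/data adaptation
            self._keys = []          # sorted keys (stand-in for tree nodes)
            self._records = []       # data references, kept parallel to keys

        def insert(self, record):
            k = self._key(record)
            i = bisect.bisect_left(self._keys, k)
            self._keys.insert(i, k)
            self._records.insert(i, record)

        def find(self, k):
            """Return all records whose key equals k."""
            lo = bisect.bisect_left(self._keys, k)
            hi = bisect.bisect_right(self._keys, k)
            return self._records[lo:hi]

        def __iter__(self):          # iterator piece: in-key-order traversal
            return iter(self._records)

    # Usage: index tuples by their first field.
    idx = SortedIndex(key=lambda r: r[0])
    for rec in [(2, "b"), (1, "a"), (2, "c")]:
        idx.insert(rec)
    ```

    Swapping the `key` callable adapts the same container and algorithms to a different record type, which is the kind of module replacement the abstract has in mind.
    
    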

    Simple Algorithm to Maintain Dynamic Suffix Array for Text Indexes

    A dynamic suffix array is a suffix data structure that reflects changes to a mutable string. It is convenient for performing substring search queries over database indexes that are frequently modified. We introduce an O(n log² n) algorithm that builds a suffix array for any string, and show how to implement a dynamic suffix array using this algorithm under certain constraints. We propose that this algorithm could be useful in real-life database applications.
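    The classic O(n log² n) construction is prefix doubling: rank every suffix by its first 2^k characters, then combine pairs of ranks to sort by 2^(k+1) characters. A sketch of that static build in Python follows — this is the standard technique, not necessarily the paper's exact algorithm, and the dynamic-update machinery is not reproduced here.

    ```python
    def suffix_array(s):
        """Build the suffix array of s by prefix doubling: O(log n) rounds,
        each an O(n log n) comparison sort, for O(n log^2 n) total."""
        n = len(s)
        sa = list(range(n))
        rank = [ord(c) for c in s]   # round 0: rank = first character
        k = 1
        while True:
            # Sort key: (rank of first k chars, rank of next k chars; -1 past end)
            key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
            sa.sort(key=key)
            # Re-rank: equal keys share a rank, otherwise increment.
            new_rank = [0] * n
            for j in range(1, n):
                new_rank[sa[j]] = new_rank[sa[j - 1]] + (key(sa[j]) != key(sa[j - 1]))
            rank = new_rank
            if rank[sa[-1]] == n - 1:  # all ranks distinct: order is final
                break
            k *= 2
        return sa

    # suffix_array("banana") orders the suffixes
    # a, ana, anana, banana, na, nana by starting position.
    ```
    
    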

    Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
    Comment: 21 pages, 3 figures, accepted in JATI

    Kitap İndeksleri (Book Indexes)

    This article reviews book indexes, which are in fact the oldest indexes used in the world, from a variety of viewpoints. They differ from journal indexes and database indexes, which are ongoing projects; book indexes are unique in their own frameworks, as each one is a completed and finished unit. The article describes the construction of book indexes; the types of indexes (by subject headings and by proper names); synthetic and analytic methods; and index formats (indented and run-in). A list of important conventions relating to book indexes appears at the end of the article.

    The Aerospace Database data element dictionary with issues and recommendations from the meetings of July 24-25, August 13-14, and September 24-25, 1991

    The present volume contains descriptions of the individual fields (data elements) that comprise the bibliographic records of the Aerospace Database. Indexes by field name and field mnemonic are provided. In addition, the issues and recommendations defined by the NASA STI Database Upgrade Working Group are included as annotations to the individual field descriptions and are listed at the end of the volume. The activities of the Working Group were initiated by the NASA STI Program Coordinating Council as part of an effort to improve overall database quality.

    Index to Legal Periodicals Retrospective: 1908-1981


    Generating adaptive hypertext content from the semantic web

    Accessing and extracting knowledge from online documents is crucial for the realisation of the Semantic Web and the provision of advanced knowledge services. The Artequakt project is an ongoing investigation tackling these issues to facilitate the creation of tailored biographies from information harvested from the web. In this paper we present the methods we currently use to model, consolidate, and store knowledge extracted from the web so that it can be re-purposed as adaptive content. We look at how Semantic Web technology could be used within this process, and also how such techniques might be used to provide content to be published via the Semantic Web.