1,058 research outputs found

    Designing a resource-efficient data structure for mobile data systems

    Designing data structures for use in mobile devices requires attention to optimising data volumes, with associated benefits for data transmission, storage space and battery use. For semi-structured data, tree summarisation techniques can reduce the volume of structured elements, while dictionary compression can efficiently handle value-based predicates. This project seeks to investigate and evaluate an integration of the two approaches. The key strength of this technique is that both structural and value predicates could be resolved within one graph, while further allowing for compression of the resulting data structure. As the trend is towards working with ever larger semi-structured data sets, this work would allow much larger data sets to be used while reducing bandwidth requirements and minimising the memory needed both for storing and for querying the data.
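
    To make the dictionary-compression half of this idea concrete, here is a minimal Python sketch (names are illustrative, not from the project): distinct leaf values are mapped to small integer codes, so a value predicate becomes a cheap comparison over codes rather than repeated string matching.

```python
# Minimal sketch (all names hypothetical): dictionary-compress the leaf
# values of a semi-structured tree so value predicates can be evaluated
# on small integer codes instead of full strings.

def build_dictionary(values):
    """Map each distinct value to a compact integer code."""
    codes = {}
    for v in values:
        if v not in codes:
            codes[v] = len(codes)
    return codes

def compress(values, codes):
    return [codes[v] for v in values]

values = ["London", "Paris", "London", "Oslo", "Paris"]
codes = build_dictionary(values)   # {'London': 0, 'Paris': 1, 'Oslo': 2}
column = compress(values, codes)   # [0, 1, 0, 2, 1]

# A value predicate such as city == "Paris" becomes a cheap integer scan:
matches = [i for i, c in enumerate(column) if c == codes["Paris"]]
print(matches)  # [1, 4]
```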

    Cost Simulation and Performance Optimization of Web-based Applications on Mobile Channels

    When considering the addition of a mobile presentation channel to an existing web-based application, a key question that has to be answered even before development begins is how the mobile channel's characteristics will affect the user experience and the cost of using the application. If either of these factors is outside acceptable limits, economic considerations may rule out adding the channel, even if it would be feasible from a purely technical perspective. Both factors depend considerably on two metrics: the time required to transmit data over the mobile network, and the volume transmitted. The PETTICOAT method presented in this paper uses the dialog flow model and web server log files of an existing application to identify typical interaction sequences and to compile volume statistics, which are then run through a tool that simulates the volume and time that would be incurred by executing the interaction sequences on a mobile channel. From the simulated volume and time data, we can then calculate the cost of accessing the application on a mobile channel and derive suitable approaches for optimizing cost and response times.
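
    As an illustration of the kind of calculation such a simulation tool performs, the following Python sketch estimates time and cost for one interaction sequence; the channel parameters and tariff are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the core cost calculation: estimate the transfer time
# and monetary cost of replaying a logged interaction sequence over a
# mobile channel. All parameter values below are illustrative.

def simulate(sequence, bandwidth_kbps, latency_s, cost_per_kb):
    """Estimate transfer time and cost for one interaction sequence.

    sequence: list of (request_kb, response_kb) pairs from log analysis.
    """
    total_kb = sum(req + resp for req, resp in sequence)
    # One round trip of latency per request, plus serialization time.
    time_s = len(sequence) * latency_s + (total_kb * 8) / bandwidth_kbps
    return time_s, total_kb * cost_per_kb

# A typical sequence mined from web server logs (sizes in KB):
seq = [(1.2, 40.0), (0.8, 15.5), (1.0, 22.3)]
time_s, cost = simulate(seq, bandwidth_kbps=384, latency_s=0.5,
                        cost_per_kb=0.002)
print(f"{time_s:.1f} s, {cost:.3f} currency units")
```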

    GeohashTile: Vector Geographic Data Display Method Based on Geohash

    © 2020 MDPI AG. All rights reserved. In the development of geographic information-based applications for mobile devices, achieving better access speed and visual effects is the main research aim. In this paper, we propose a new geographic data display method based on Geohash, namely GeohashTile, to improve on traditional geographic data display methods in data indexing, data compression, and projection at different granularities. First, we use the Geohash encoding system to represent coordinates, as well as to partition and index large-scale geographic data; data compression and tile encoding are likewise accomplished with Geohash. Second, to realize a direct conversion between Geohash and screen-pixel coordinates, we adopt a relative position projection method. Finally, we improve calculation and rendering efficiency by caching intermediate results. To evaluate the GeohashTile method, we have implemented both the client and the server of the GeohashTile system and evaluated them in a real-world environment. The results show that Geohash encoding can accurately represent latitude and longitude coordinates in vector maps, while the GeohashTile framework has clear advantages in requested data volume and average load time compared to the state-of-the-art GeoTile system.
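
    For readers unfamiliar with Geohash, the sketch below shows the standard encoding that GeohashTile builds on: alternating bisections of longitude and latitude are interleaved into bits and packed into base-32 characters, so a prefix of the string identifies a rectangular cell.

```python
# Standard Geohash encoding (simplified reference implementation): each
# bit halves the remaining longitude or latitude interval; five bits are
# packed into one base-32 character.

BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, bit_count, result, even = 0, 0, [], True
    while len(result) < precision:
        if even:                      # even bit: bisect longitude
            mid = (lon_lo + lon_hi) / 2
            bits = bits * 2 + (lon >= mid)
            if lon >= mid: lon_lo = mid
            else: lon_hi = mid
        else:                         # odd bit: bisect latitude
            mid = (lat_lo + lat_hi) / 2
            bits = bits * 2 + (lat >= mid)
            if lat >= mid: lat_lo = mid
            else: lat_hi = mid
        even = not even
        bit_count += 1
        if bit_count == 5:            # 5 bits per base-32 character
            result.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(result)

print(geohash_encode(39.92, 116.46))  # a cell covering central Beijing
```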

    Efficient I/O for Computational Grid Applications

    High-performance computing increasingly occurs on computational grids composed of heterogeneous and geographically distributed systems of computers, networks, and storage devices that collectively act as a single virtual computer. A key challenge in this environment is to provide efficient access to data distributed across remote data servers. This dissertation explores some of the issues associated with I/O for wide-area distributed computing and describes an I/O system, called Armada, with the following features: a framework to allow application and dataset providers to flexibly compose graphs of processing modules that describe the distribution, application interfaces, and processing required of the dataset before or after computation; an algorithm to restructure application graphs to increase parallelism and to improve network performance in a wide-area network; and a hierarchical graph-partitioning scheme that deploys components of the application graph in a way that is both beneficial to the application and sensitive to the administrative policies of the different administrative domains. Experiments show that applications using Armada perform well in both low- and high-bandwidth environments, and that our approach does an exceptional job of hiding the network latency inherent in grid computing.
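
    A rough Python sketch of the graph-of-modules idea (module names and composition API are hypothetical, not Armada's): processing modules are composed into a chain that records flow through before reaching the application; a restructuring algorithm would then reorder or replicate such modules for parallelism.

```python
# Illustrative sketch only: dataset providers compose processing modules
# (decompress, filter, ...) that application data flows through before
# delivery. Armada's actual API and module set differ.

class Module:
    def __init__(self, name, fn):
        self.name, self.fn, self.next = name, fn, None

    def then(self, other):
        """Chain another module after this one; return it for fluency."""
        self.next = other
        return other

    def push(self, record):
        out = self.fn(record)
        if out is not None and self.next:
            self.next.push(out)

# Compose: decompress -> select subset -> deliver to the application.
sink = Module("app", lambda r: print("got", r))
head = Module("decompress", lambda r: r.lower())
head.then(Module("filter", lambda r: r if "grid" in r else None)).then(sink)

for rec in ["GRID-DATA-1", "OTHER", "Grid-Data-2"]:
    head.push(rec)   # prints only the two grid records
```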

    RESTful Wireless Sensor Networks

    Sensor networks have diverse structures and generally employ proprietary protocols to gather useful information about the physical world. This diversity makes interacting with these sensors difficult, since custom APIs are needed that are tedious, error-prone, and have a steep learning curve. In this thesis, I present RESThing, a lightweight REST framework for wireless sensor networks that eases the process of interacting with these sensors by making them accessible over the Web. I evaluate the system and show that it is feasible to support widely used, standard Web protocols in wireless sensor networks. By integrating these tiny devices seamlessly into the global information medium, we can achieve the Web of Things.
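
    The following minimal Python sketch conveys the idea of exposing a sensor reading as a web resource reachable with a plain GET; the endpoint path and JSON payload are assumptions for illustration, not the actual RESThing API.

```python
# Sketch of the REST idea: a sensor reading becomes an addressable web
# resource. Path and payload shape are illustrative assumptions.

import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sensors/temperature":
            # A real node would read its ADC; we fake a plausible value.
            body = json.dumps({"celsius": round(random.uniform(18, 25), 1)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Try: curl http://localhost:8000/sensors/temperature
    HTTPServer(("", 8000), SensorHandler).serve_forever()
```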

    Dynamic Hilbert clustering based on convex set for web services aggregation

    In recent years, web services run by big corporations and various application-specific data centers have been embraced by companies worldwide. Web services provide several benefits compared to other communication technologies. However, they still suffer from congestion and bottlenecks, as well as significant delay, due to the tremendous load caused by large numbers of web service requests from end users. Clustering similar web service messages and then aggregating each cluster into one compressed message can reduce network traffic. This paper proposes dynamic Hilbert clustering, a new model for clustering web services based on convex-set similarity. Mathematically, the suggested models compute the degree of similarity between simple object access protocol (SOAP) messages and then cluster them into groups with high similarity. Next, each cluster is aggregated into a compact message that is finally encoded with fixed-length or Huffman coding. The experimental results show that the suggested model outperforms conventional clustering techniques in terms of compression ratio, achieving ratios of up to 15 with fixed-length encoding and up to 20 with Huffman coding.
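
    The sketch below illustrates the clustering-then-aggregation step using a simple Jaccard similarity over message tag sets; this is a simplified stand-in for the paper's convex-set similarity and Hilbert-based model, not its actual mathematics.

```python
# Simplified stand-in for the clustering step: group SOAP messages whose
# tag sets overlap strongly; each group would then be aggregated and
# encoded (fixed-length or Huffman) as one compact message.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster(messages, threshold=0.6):
    clusters = []
    for msg in messages:
        for c in clusters:
            if jaccard(msg["tags"], c[0]["tags"]) >= threshold:
                c.append(msg)
                break
        else:                       # no similar cluster: start a new one
            clusters.append([msg])
    return clusters

msgs = [
    {"tags": ["Envelope", "Body", "GetQuote", "Symbol"]},
    {"tags": ["Envelope", "Body", "GetQuote", "Symbol", "Currency"]},
    {"tags": ["Envelope", "Body", "Login", "User"]},
]
groups = cluster(msgs)
print(len(groups), "clusters")  # 2: the GetQuote messages share a cluster
```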

    Web Content Delivery Optimization

    Milliseconds matter, when they're counted. If we compress the life of the universe into a single year, then on 31 December at 11:59:59.5 PM, "speed" was transportation's concern; 500 milliseconds later it is the web's, and no one knows whose concern it will be in the coming milliseconds. At this very moment, this thesis proposes an optimization method, mainly for content delivery over slow connections. The method utilizes a proxy as a middle box to fetch the content requested by a client from one or more web servers, and bundles all of the fetched image content that fits the bundling policy inside a JavaScript file in Base64 format. This method reduces the number of HTTP requests between the client and multiple web servers through its bundling solution, and at the same time improves HTTP compression efficiency through its aggregative compression of textual content. Page loading time results for the test web pages, which were specially designed and developed to capture the optimum benefits of the proposed method, showed up to 81% faster page loading across all connection types. However, other tests in non-optimal situations, such as web pages that use "lazy loading" techniques, showed only 35% to 50% improvement, achievable only on 2G and 3G connections (0.2 Mbps – 15 Mbps downlink) and not on faster connections.
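
    The bundling step can be pictured with the following Python sketch, which packs several images into one JavaScript file as Base64 data URIs so the client issues a single HTTP request instead of one per image; the file names and variable name are illustrative, not from the thesis.

```python
# Hedged sketch of the bundling idea: serialize images into one JS file
# as Base64 data URIs. A page script would assign each entry to an
# <img>.src on load. Names below are illustrative.

import base64
import json
from pathlib import Path

def bundle_images(paths, out_js="bundle.js"):
    bundle = {}
    for p in map(Path, paths):
        mime = "image/png" if p.suffix == ".png" else "image/jpeg"
        data = base64.b64encode(p.read_bytes()).decode("ascii")
        bundle[p.name] = f"data:{mime};base64,{data}"
    # One JS file replaces N separate image requests.
    Path(out_js).write_text("var imageBundle = " + json.dumps(bundle) + ";")

bundle_images(["logo.png", "banner.jpg"])
```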

    Design-time performance testing

    Software designers make decisions between alternative approaches early in the development of a software application, and these decisions can be difficult to change later. Designers make these decisions based on estimates of how the alternatives affect software qualities. One software quality that can be difficult to predict is performance, that is, the efficient use of resources in the system. It is particularly challenging to estimate the performance of large, interconnected software systems composed of components. With the proliferation of class libraries, middleware systems, web services, and third-party components, many software projects rely on third-party services to meet their requirements. Choosing between services often involves considering both their functionality and their performance. To help software developers compare their designs and third-party services, I propose using performance prototypes of alternatives, together with test suites, to estimate performance trade-offs early in the development cycle, a process called Design-Time Performance Testing (DTPT). Providing software designers with performance evidence based on prototypes will allow them to make informed decisions regarding performance trade-offs. This thesis shows how DTPT can inform real design decisions, presenting in particular: a process for DTPT, a framework implementation written in Java, and experiments to verify and validate the process and implementation. The implemented framework assists in designing, running, and documenting performance test suites, allowing designers to make accurate comparisons between alternative approaches. Performance metrics are captured by instrumenting and running prototypes. This thesis describes the process and framework for gathering software performance estimates at design time using prototypes and test suites.
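
    Although the thesis's framework is written in Java, the measurement idea behind DTPT can be sketched in a few lines of Python: run the same workload against two prototype alternatives and compare timings. Everything below is a schematic stand-in, not the framework itself.

```python
# Schematic stand-in for design-time performance testing: time two
# hypothetical design alternatives on an identical workload.

import time

def measure(prototype, workload, repeats=5):
    """Return the best-of-N wall-clock time for one prototype."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for item in workload:
            prototype(item)
        best = min(best, time.perf_counter() - start)
    return best

# Two hypothetical design alternatives for the same operation:
alt_a = lambda n: sorted(range(n))
alt_b = lambda n: list(range(n))

workload = [10_000] * 20
for name, proto in [("alt_a", alt_a), ("alt_b", alt_b)]:
    print(name, f"{measure(proto, workload):.4f} s")
```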

    XML Integrated Environment for Service-Oriented Data Management

    XML, as a family of related standards including a markup language (XML), formatting semantics (XSL style sheets), a linking syntax (XLink), and associated data schema standards, has emerged as a de facto standard for encoding and sharing data between various applications. XML is designed to be simple, easily parsed and self-describing. XML is based on, and supports, the idea of separation of concerns: information content is separated from information rendering, and relationships between data elements are expressed via simple nesting and references. As XML content grows, the ability to handle schemaless XML documents becomes more critical, since most XML documents have no schema or Document Type Definition (DTD). In addition, XML content and XML tools often need to be combined in effective ways for better performance and higher flexibility. In this research, we propose the XML Integrated Environment (XIE), a general-purpose service-oriented architecture for processing XML documents in a scalable and efficient fashion. XIE supports a new software service model that provides a proper abstraction to describe a service and divide it into four components: structure, connection, interface and logic. We also propose and implement the XIE Service Language (XIESL), which captures the creation and maintenance of XML processes and the data flow specified by the user, and then orchestrates the interactions between different XIE services. Moreover, XIESL manages the complexity of XML processing by implementing an XML processing pipeline that enables better management, control, interpretation and presentation of XML data, even for non-professional users. The XML Integrated Environment is envisioned to revolutionize the way non-professional programmers see, work with and manage their XML assets. It offers them powerful tools and constructs to fully utilize the XML processing power embedded in its unified framework and service-oriented architecture.
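
    A toy Python sketch of the pipeline idea behind XIESL (stage names and chaining style are hypothetical): each stage consumes and produces an XML document, and stages are chained so a user composes processing without writing parser code at every step.

```python
# Illustrative XML processing pipeline: parse -> transform -> render,
# with each stage a plain function. XIESL's actual constructs differ;
# this only shows the chained-stages idea.

import xml.etree.ElementTree as ET

def parse(text):
    return ET.fromstring(text)

def strip_empty(root):
    """Transform stage: drop child elements with no text and no children."""
    for child in list(root):
        if not (child.text and child.text.strip()) and len(child) == 0:
            root.remove(child)
    return root

def render(root):
    return ET.tostring(root, encoding="unicode")

pipeline = [parse, strip_empty, render]

doc = "<order><id>42</id><note/><item>book</item></order>"
result = doc
for stage in pipeline:
    result = stage(result)
print(result)  # <order><id>42</id><item>book</item></order>
```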

    Vectorwise: Beyond Column Stores

    This paper tells the story of Vectorwise, a high-performance analytical database system, from multiple perspectives: its history from academic project to commercial product, the evolution of its technical architecture, customer reactions to the product, and its future research and development roadmap. One take-away from this story is that the novelty in Vectorwise is much more than just column storage: it boasts many query processing innovations in its vectorized execution model, and an adaptive mixed row/column data storage model with indexing support tailored to analytical workloads. Another is that there is a long road from research prototype to commercial product, though database research continues to exert a strong innovative influence on product development.
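
    The vectorized execution model can be pictured with the schematic Python sketch below: operators work on blocks of values and pass selection vectors between primitives, rather than interpreting one tuple at a time. Vectorwise itself runs compiled primitives over cache-resident vectors; this is only an illustration of the control flow.

```python
# Schematic illustration of vectorized execution: primitives operate on
# whole blocks and exchange selection vectors, amortizing interpretation
# overhead across many values.

from array import array

def select_greater(col, threshold):
    """Selection primitive: positions where col[i] > threshold."""
    return array("i", (i for i, v in enumerate(col) if v > threshold))

def sum_selected(col, sel):
    """Aggregation primitive: sum only the selected positions."""
    return sum(col[i] for i in sel)

price = array("d", [9.5, 120.0, 37.25, 240.0, 15.0])
qty = array("d", [3, 1, 2, 5, 4])

sel = select_greater(price, 30.0)   # selection vector over the block
print(sum_selected(qty, sel))       # 8.0: quantities of items over 30.0
```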