19 research outputs found

    Window Query Processing with Proxy Cache

    A location-dependent query (LDQ) result set is valid only in a specific region called the validity region (VR). Besides limiting the validity of a particular result set to a given area, the VR can also be used in caching implementations to determine whether cached results satisfy semantically equivalent queries. Existing LDQ caching schemes rely on the database servers to provide the VR, at the cost of high computational overhead. Alternatively, an LDQ proxy cache that approximates the VR can be employed, freeing the database servers from the high cost of calculating the VR. An LDQ proxy cache architecture is proposed that computes an estimated validity region (EVR) based on the querying history observed at the proxy server. We present an algorithm, Window_EVR, with which the LDQ proxy computes the EVR for a window query result set. Simulation results show that LDQ proxy caching using the Window_EVR algorithm significantly reduces both the window query response time and the workload at the database servers while maintaining query result set accuracy.
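    A minimal, hypothetical sketch of the cache-lookup idea described in this abstract: the proxy answers a window query from its cache when the client's location falls inside a cached entry's estimated validity region, and forwards the query to the database server otherwise. The class and method names (Rect, CacheEntry, LDQProxyCache, lookup) are illustrative assumptions, and the Window_EVR estimation step itself is not reproduced here.

        from dataclasses import dataclass

        # Illustrative sketch only: answer a window query from cache when the
        # querying client lies inside a cached result's estimated validity
        # region (EVR). Names are invented; only the idea follows the abstract.

        @dataclass
        class Rect:
            """Axis-aligned rectangle used for both query windows and EVRs."""
            x_min: float
            y_min: float
            x_max: float
            y_max: float

            def contains_point(self, x: float, y: float) -> bool:
                return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

        @dataclass
        class CacheEntry:
            evr: Rect          # estimated validity region learned from query history
            result_set: list   # cached window-query result

        class LDQProxyCache:
            def __init__(self) -> None:
                self.entries: list = []

            def lookup(self, client_x: float, client_y: float):
                """Return a cached result set if the client lies inside some
                entry's EVR, otherwise None (query goes to the database server)."""
                for entry in self.entries:
                    if entry.evr.contains_point(client_x, client_y):
                        return entry.result_set
                return None

        # Example use: a cache hit for a client at (3, 4).
        cache = LDQProxyCache()
        cache.entries.append(CacheEntry(Rect(0, 0, 10, 10), ["poi_1", "poi_2"]))
        print(cache.lookup(3, 4))   # ['poi_1', 'poi_2']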

    Adaptive schemes for location update generation in execution of location-dependent continuous queries

    An important feature expected of today's mobile computing systems is the ability to process location-dependent continuous queries on moving objects. The result of a location-dependent query depends on the current location of the mobile client that issued the query as well as on the locations of the moving objects over which the query is posed. When a location-dependent query is specified to be continuous, its result can change continuously. In order to provide accurate and timely query results to a client, the location of the client as well as the locations of the moving objects in the system have to be closely monitored. Most of the location update generation methods proposed in the literature aim to optimize utilization of the limited wireless bandwidth; the correctness and timeliness of the query results reported to clients have been largely ignored. In this paper, we propose an adaptive monitoring method (AMM) and a deadline-driven method (DDM) for managing the locations of moving objects. The aim of our methods is to generate location updates that maintain the correctness of query evaluation results without increasing the location update workload. Extensive simulation experiments have been conducted to investigate the performance of the proposed methods compared with a well-known location update generation method, plain dead-reckoning (pdr). © 2005 Elsevier Inc. All rights reserved.
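    The sketch below illustrates, under assumed names and a simplified linear-motion model, the threshold-based update generation that plain dead-reckoning (pdr) is built on: the client reports its location only when its actual position deviates from the server-side prediction by more than a threshold. The AMM and DDM policies from the paper are not reproduced; a comment only marks where an adaptive or deadline-driven scheme would adjust the threshold.

        import math

        # Conceptual sketch of dead-reckoning-style location update generation.
        # Function names and the motion model are assumptions for illustration.

        def predicted_position(last_report, elapsed):
            """Server-side prediction: last reported position plus velocity * time."""
            x0, y0, vx, vy = last_report
            return x0 + vx * elapsed, y0 + vy * elapsed

        def should_send_update(last_report, actual_xy, elapsed, threshold):
            """Client-side check: send an update when deviation exceeds the threshold."""
            px, py = predicted_position(last_report, elapsed)
            ax, ay = actual_xy
            deviation = math.hypot(ax - px, ay - py)
            # An adaptive or deadline-driven scheme would tighten `threshold`
            # for objects near a continuous query's result boundary (or whose
            # evaluation deadline is close), instead of keeping it fixed.
            return deviation > threshold

        # Example: last report at (0, 0) with velocity (1, 0); after 5 time units
        # the client is actually at (5, 2.5); with threshold 2 an update is sent.
        print(should_send_update((0.0, 0.0, 1.0, 0.0), (5.0, 2.5), 5.0, 2.0))  # True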

    Data Retrieval for Location-Dependent Queries in a Multi-Cell Wireless Environment

    Information Dissemination via Wireless Broadcast

    The advent of sensor, wireless and portable device technologies will soon enable us to embed computing technologies transparently in the environment to provide uninterrupted services for our daily life. With temperature and location sensors and wireless access points embedded in a

    Multithreading Aware Hardware Prefetching for Chip Multiprocessors

    To take advantage of the processing power in the Chip Multiprocessor design, applications must be divided into semi-independent processes that can run concurrently on multiple cores within a system. Programmers must therefore insert thread synchronization semantics (i.e., locks, barriers, and condition variables) to synchronize data access between processes. In practice, threads spend a long time waiting to acquire the lock of a critical section, and a processor has to stall execution while waiting for load data accesses to complete. Furthermore, there are often independent instructions, including load instructions beyond the synchronization semantics, that could be executed in parallel while a thread waits on the synchronization semantics. The conveniences of cache memories come with extra cost in Chip Multiprocessors. Cache Coherence mechanisms address the Memory Consistency problem, but they add considerable overhead to memory accesses. An aggressive prefetcher on each core of a Chip Multiprocessor can lead to significant system performance degradation when running multi-threaded applications. This is the result of prefetch-demand interference: when a prefetcher in one core pulls shared data from a producing core before it has been written, the cache block ends up transitioning back and forth between the cores, resulting in useless prefetches, saturating the memory bandwidth, and substantially increasing the latency to critical shared data. We present a hardware prefetcher that enables large performance improvements from prefetching in Chip Multiprocessors by significantly reducing prefetch-demand interference. Furthermore, it utilizes the time that a thread spends waiting on synchronization semantics to run ahead of the critical section, speculating on and prefetching the data of independent load instructions beyond the synchronization semantics.
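    As a rough software model of the interference-avoidance idea in this abstract (not the actual hardware design), the sketch below filters stride-prefetch candidates against a coherence directory so that blocks currently held in Modified state by another core are skipped. All names (CoherenceDirectory, MultithreadingAwarePrefetcher, issue_prefetches) are invented for illustration; real hardware would implement this in the cache controller.

        # Toy model: suppress prefetches of blocks another core is still producing,
        # so the prefetcher does not yank shared data away from its producer.

        MODIFIED, SHARED, INVALID = "M", "S", "I"

        class CoherenceDirectory:
            def __init__(self):
                self.state = {}          # block address -> (state, owner core id)

            def lookup(self, addr):
                return self.state.get(addr, (INVALID, None))

        class MultithreadingAwarePrefetcher:
            def __init__(self, core_id, directory, degree=4, stride=1):
                self.core_id = core_id
                self.directory = directory
                self.degree = degree     # how many blocks ahead to prefetch
                self.stride = stride     # simple stride predictor

            def candidate_blocks(self, miss_addr):
                return [miss_addr + i * self.stride for i in range(1, self.degree + 1)]

            def issue_prefetches(self, miss_addr):
                """Return the blocks actually prefetched after filtering out
                shared data that another core holds in Modified state."""
                issued = []
                for addr in self.candidate_blocks(miss_addr):
                    state, owner = self.directory.lookup(addr)
                    if state == MODIFIED and owner != self.core_id:
                        continue         # avoid prefetch-demand interference
                    issued.append(addr)
                return issued

        # Example: core 1 is still producing block 103, so core 0 skips it.
        directory = CoherenceDirectory()
        directory.state[103] = (MODIFIED, 1)
        prefetcher = MultithreadingAwarePrefetcher(core_id=0, directory=directory)
        print(prefetcher.issue_prefetches(100))   # [101, 102, 104]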

    Location-Dependent Query Processing Under Soft Real-Time Constraints

    Storing and querying evolving knowledge graphs on the web
