1,244 research outputs found

    Durability of Wireless Charging Systems Embedded Into Concrete Pavements for Electric Vehicles

    Point clouds are widely used in applications such as 3D modeling, geospatial analysis, and robotics. A key advantage of 3D point cloud data is that, unlike formats such as textures, it is independent of viewing angle, surface type, and parameterization. Because each point is independent of the others, point clouds are well suited to tasks such as object recognition, scene segmentation, and reconstruction. However, point clouds are complex and verbose: they carry numerous attributes, many of which are not needed for rendering, which makes retrieving and parsing them expensive. As sensors become more precise and widespread, streaming, processing, and rendering the data efficiently becomes more challenging. In a hierarchical continuous LOD system, data previously fetched and rendered for a region may no longer be available when that region is revisited. To address this, we use a non-persistent cache based on a hash map that stores the parsed point attributes. This still has limitations: if the tab or browser is closed and reopened, the dataset must be refetched and reprocessed, a problem that persistent caching can address. On the web, persistent caching typically means storing data in server memory or in an intermediate caching server such as Redis. This is unsuitable for point cloud data, where large volumes of parsed and processed points must be stored, so point cloud visualization has relied on non-persistent caching alone. This thesis aims to improve the performance and suitability of point cloud rendering on the web by reducing the number of read requests to the remote file. We achieve this with a client-side LRU Cache combined with Private File Open Space, providing both persistent and non-persistent caching of data. We use a cloud-optimized data format, which is better suited to the web and to streaming hierarchical data structures. Our focus is to improve rendering performance using WebGPU by reducing access time and minimizing the amount of data loaded onto the GPU. Preliminary results indicate that our approach significantly improves rendering performance and reduces network requests compared with traditional caching methods using WebGPU.
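
    The two-level lookup the abstract describes (a non-persistent LRU cache in front of a persistent store, with the remote file as the last resort) can be sketched in a few lines. The thesis targets the browser, where the persistent layer would be private, origin-scoped file storage; the sketch below uses a local directory instead, and the class names, file-per-key layout, and `fetch_remote` callable are illustrative assumptions, not the thesis's implementation.

```python
from collections import OrderedDict
from pathlib import Path

class LRUCache:
    """In-memory, non-persistent cache with least-recently-used eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str):
        if key not in self._items:
            return None
        self._items.move_to_end(key)          # mark as most recently used
        return self._items[key]

    def put(self, key: str, value: bytes) -> None:
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict least recently used

class CachedPointLoader:
    """Two-level cache: LRU in memory, files on disk, remote fetch last."""

    def __init__(self, capacity: int, cache_dir: str, fetch_remote):
        self.lru = LRUCache(capacity)
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        self.fetch_remote = fetch_remote      # callable: key -> bytes

    def load(self, key: str) -> bytes:
        data = self.lru.get(key)              # 1. non-persistent cache
        if data is not None:
            return data
        path = self.cache_dir / key
        if path.exists():                     # 2. persistent cache
            data = path.read_bytes()
        else:                                 # 3. read request to remote file
            data = self.fetch_remote(key)
            path.write_bytes(data)            # persist for future sessions
        self.lru.put(key, data)
        return data
```

    The point is the ordering of the lookups: only a miss in both cache levels triggers a network read, which is exactly the quantity the thesis tries to minimize.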

    On the classification and evaluation of prefetching schemes

    Abstract available: p. [2]

    Active caching for recommender systems

    Web users are often overwhelmed by the amount of information available while browsing and searching. Recommender systems substantially reduce this information overload by suggesting a list of similar documents that users might find interesting. However, generating these ranked lists requires an enormous amount of resources, which often results in access latency. Caching frequently accessed data has been a useful technique for reducing stress on limited resources and improving response time, but traditional passive caching techniques, which answer queries based on temporal locality or popularity, achieve only a limited performance gain. In this dissertation, we propose an 'active caching' technique for recommender systems as an extension of the caching model. In this approach, estimation is used to generate an answer for queries whose results are not explicitly cached, where the estimation makes use of the partial order lists cached for related queries. By answering non-cached queries along with cached queries, the active caching system acts as a form of query processor and offers substantial improvement over traditional caching methodologies. Test results for several data sets and recommendation techniques show substantial improvement in cache hit rate, byte hit rate, and CPU cost, while achieving reasonable recall. To improve the performance of the proposed active caching solution, a shared-neighbor similarity measure is introduced, which improves recall by eliminating the dependence on monotonicity in the partial order lists. Finally, a greedy balancing cache selection policy is proposed to select the most appropriate data objects for the cache, further improving cache hit rate and recall.
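
    The core idea (answering a non-cached query by estimating from the partial order lists cached for related queries) can be sketched roughly as follows. The similarity threshold and the rank-weighted merging scheme are illustrative assumptions for the sketch, not the dissertation's actual estimator.

```python
from collections import defaultdict

def estimate_ranking(query, cache, similarity, k=10, min_sim=0.3):
    """Estimate a top-k list for a non-cached query from cached neighbors.

    cache: dict mapping query -> ranked list of item ids (partial order lists)
    similarity: callable (query, query) -> float in [0, 1]
    """
    if query in cache:                        # passive hit: serve directly
        return cache[query][:k]

    scores = defaultdict(float)
    for cached_query, ranked_items in cache.items():
        sim = similarity(query, cached_query)
        if sim < min_sim:                     # ignore unrelated cached queries
            continue
        for rank, item in enumerate(ranked_items):
            # Weight each item by neighbor similarity and its cached rank.
            scores[item] += sim / (rank + 1)

    if not scores:
        return None                           # fall through to the recommender
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

    The payoff is that a query can be answered from the cache even when its exact results were never stored, trading a small amount of estimation work for an avoided full recommendation computation.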

    From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey

    Context data is in greater demand than ever with the rapid growth of context-aware Internet of Things applications. Research in context and context-awareness is being conducted to broaden its applicability in light of many practical and technical challenges. One of these challenges is maintaining performance when responding to a large number of context queries. Context Management Platforms that infer and deliver context to applications measure this problem using Quality of Service (QoS) parameters. Although caching is a proven way to improve QoS, the transiency of context, together with the variability and heterogeneity of context queries, poses an additional real-time cost management problem. This paper presents a critical survey of the state of the art in adaptive data caching, with the objective of developing a body of knowledge on cost- and performance-efficient adaptive caching strategies. We comprehensively survey a large number of research publications and evaluate, compare, and contrast the different techniques, policies, approaches, and schemes in adaptive caching. Our critical analysis is motivated by the focus on adaptively caching context as a core research problem. A formal definition of adaptive context caching is then proposed, followed by the identified features and requirements of a well-designed, objective-optimal adaptive context caching strategy. Comment: This paper was under review with the ACM Computing Surveys journal at the time of publication on arXiv.
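
    The survey frames caching context as a runtime cost/benefit decision complicated by transiency: a cached context value may go stale while it is still being served. As a toy illustration of that trade-off only (not a policy taken from the paper), a cache might adapt each entry's time-to-live to the observed volatility of the underlying context:

```python
import time

class AdaptiveContextCache:
    """Toy adaptive cache: each entry's TTL shrinks as its context proves
    volatile. The adaptation rule (TTL smoothed toward the observed interval
    between value changes) is an illustrative heuristic, not from the survey."""

    def __init__(self, default_ttl=60.0, alpha=0.5):
        self.alpha = alpha            # smoothing factor for the TTL estimate
        self.default_ttl = default_ttl
        self._entries = {}            # key -> (value, stored_at, ttl, last_change)

    def get(self, key, refresh):
        """Return the cached value, refreshing it when its adaptive TTL expires."""
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry:
            value, stored_at, ttl, last_change = entry
            if now - stored_at < ttl:
                return value                  # fresh enough: cache hit
            new_value = refresh(key)          # expired: re-derive the context
            if new_value != value:            # value changed -> more volatile
                interval = now - last_change
                ttl = self.alpha * ttl + (1 - self.alpha) * interval
                last_change = now
            self._entries[key] = (new_value, now, ttl, last_change)
            return new_value
        value = refresh(key)
        self._entries[key] = (value, now, self.default_ttl, now)
        return value
```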

    A trigger-based middleware cache for ORMs

    ACM/IFIP/USENIX 12th International Middleware Conference, Lisbon, Portugal, December 12-16, 2011. Proceedings.
    Caching is an important technique in scaling storage for high-traffic web applications. Usually, building caching mechanisms involves significant effort from the application developer to maintain and invalidate data in the cache. In this work we present CacheGenie, a caching middleware which makes it easy for web application developers to use caching mechanisms in their applications. CacheGenie provides high-level caching abstractions for common query patterns in web applications based on Object-Relational Mapping (ORM) frameworks. Using these abstractions, the developer does not have to worry about managing the cache (e.g., insertion and deletion) or maintaining consistency (e.g., invalidation or updates) when writing application code. We design and implement CacheGenie in the popular Django web application framework, with PostgreSQL as the database backend and memcached as the caching layer. To automatically invalidate or update cached data, we use triggers inside the database. CacheGenie requires no modifications to PostgreSQL or memcached. To evaluate our prototype, we port several Pinax web applications to use our caching abstractions. Our results show that it takes little effort for application developers to use CacheGenie, and that CacheGenie improves throughput by 2-2.5× for read-mostly workloads in Pinax.
    Quanta Computer (Firm)
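
    The key mechanism (database triggers that keep memcached consistent when underlying rows change) can be approximated in a small sketch. CacheGenie invalidates from inside the database; the sketch below instead relays trigger notifications over PostgreSQL's NOTIFY to a small Python listener, which is a simplification. The channel name, `books` table, and `author:<id>:books` key scheme are assumptions for illustration.

```python
import json
import select
import psycopg2
from pymemcache.client.base import Client

# Trigger: whenever a book row is inserted or updated, publish the affected
# author id on the 'cache_invalidation' channel.
TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION notify_book_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('cache_invalidation',
                      json_build_object('author_id', NEW.author_id)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER book_cache_trigger
AFTER INSERT OR UPDATE ON books
FOR EACH ROW EXECUTE FUNCTION notify_book_change();
"""

def listen_and_invalidate(dsn: str):
    """Listen for database notifications and evict the matching cache keys."""
    memcache = Client(("localhost", 11211))
    conn = psycopg2.connect(dsn)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute(TRIGGER_SQL)
    cur.execute("LISTEN cache_invalidation;")
    while True:
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                          # timeout: poll again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            payload = json.loads(note.payload)
            memcache.delete(f"author:{payload['author_id']}:books")
```

    Because the invalidation originates from the database rather than the application, any write path, including ones that bypass the ORM, keeps the cache consistent; this is the property the abstract attributes to the trigger-based design.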

    Prefetching techniques for client server object-oriented database systems

    The performance of many object-oriented database applications suffers from page fetch latency, which is determined by the expense of disk access. In this work we suggest several prefetching techniques to avoid, or at least reduce, page fetch latency. In practice no prediction technique is perfect, and no prefetching technique can entirely eliminate the delay due to page fetch latency. We are therefore interested in the trade-off between the level of accuracy required to obtain good results in terms of elapsed-time reduction and the processing overhead needed to achieve that accuracy. If prefetching accuracy is high, the total elapsed time of an application can be reduced significantly; if prefetching accuracy is low, many incorrect pages are prefetched, and the extra load on the client, network, server, and disks decreases overall system performance. Access patterns of object-oriented databases are often complex and usually hard to predict accurately. The …
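
    As one concrete instance of the accuracy/overhead trade-off described above, consider a simple history-based predictor: learn which page tends to follow which, and fetch the predicted successor in the background only when confidence is high enough to justify the extra load. This is a sketch under assumed interfaces, not one of the paper's techniques.

```python
import threading
from collections import Counter, defaultdict

class MarkovPrefetcher:
    """First-order Markov predictor: prefetch the page that most often
    followed the current page in the observed access history."""

    def __init__(self, fetch_page, min_confidence=0.5):
        self.fetch_page = fetch_page             # callable: page_id -> bytes
        self.min_confidence = min_confidence     # skip low-accuracy predictions
        self.transitions = defaultdict(Counter)  # page -> Counter of next pages
        self.cache = {}                          # prefetched pages
        self.prev_page = None

    def access(self, page_id):
        """Record the access, serve the page, and prefetch a likely successor."""
        if self.prev_page is not None:
            self.transitions[self.prev_page][page_id] += 1
        self.prev_page = page_id

        data = self.cache.pop(page_id, None)     # prefetch hit?
        if data is None:
            data = self.fetch_page(page_id)      # demand fetch (full latency)

        followers = self.transitions[page_id]
        if followers:
            next_page, count = followers.most_common(1)[0]
            confidence = count / sum(followers.values())
            # Only prefetch when the predictor has been right often enough:
            # a wrong prefetch wastes client, network, server, and disk work.
            if confidence >= self.min_confidence and next_page not in self.cache:
                threading.Thread(
                    target=lambda: self.cache.setdefault(
                        next_page, self.fetch_page(next_page)),
                    daemon=True,
                ).start()
        return data
```

    The `min_confidence` knob makes the trade-off explicit: raising it reduces wasted prefetches at the cost of fewer latency savings, which is precisely the balance the abstract discusses.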