77,800 research outputs found

    Z39.50 broadcast searching and Z-server response times: perspectives from CC-interop

    This paper begins by briefly outlining the evolution of Z39.50 and current trends, including the work of the JISC CC-interop project. The research crux of the paper is an investigation into Z39.50 server (Z-server) response times in a broadcast (parallel) searching environment. Customised software was configured to broadcast a search to all test Z-servers once an hour for eleven weeks, and the results were logged for analysis. Most Z-servers responded rapidly. 'Network congestion' and local OPAC usage were not found to influence Z-server performance significantly. Response-time issues encountered by implementers may instead be the result of non-response by the Z-server and of how Z-client software deals with this. The influence of 'quick and dirty' Z39.50 implementations is also identified as a potential cause of slow broadcast searching. The paper indicates various areas for further research, including setting shorter time-outs and conducting further end-user behavioural research to ascertain user requirements in this area. The influence that more complex searches, such as Boolean queries, have on response times, and the effect of suboptimal Z39.50 implementations, are also highlighted for further study. This paper informs the LIS research community and has practical implications for those establishing Z39.50-based distributed systems, as well as those in the Web Services community. The paper challenges the popular LIS opinion that Z39.50 is inherently sluggish and thus unsuitable for the demands of the modern user.
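
    A minimal sketch of the kind of hourly broadcast probe the study describes, assuming hypothetical server hostnames and timing only a TCP connection in place of a full Z39.50 init/search/present exchange (the study's customised software is not public, so this is illustrative only):

```python
# Hedged sketch: hourly, parallel response-time probe of several Z-servers.
# Host names are placeholders; probe() only times a TCP connection, standing
# in for the full Z39.50 search used in the study.
import csv
import socket
import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timezone

Z_SERVERS = [("opac.example-lib-a.ac.uk", 210),   # 210 is the usual Z39.50 port
             ("opac.example-lib-b.ac.uk", 210)]
TIMEOUT_S = 30   # generous time-out; the paper suggests studying shorter ones

def probe(target):
    """Return (host, elapsed seconds) or (host, None) on non-response."""
    host, port = target
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_S):
            return host, time.monotonic() - start
    except OSError:
        return host, None   # non-response is logged rather than retried

def broadcast_once(writer):
    """Send the probe to all servers in parallel and log one row per server."""
    stamp = datetime.now(timezone.utc).isoformat()
    with ThreadPoolExecutor(max_workers=len(Z_SERVERS)) as pool:
        for host, elapsed in pool.map(probe, Z_SERVERS):
            writer.writerow([stamp, host,
                             "" if elapsed is None else f"{elapsed:.3f}"])

if __name__ == "__main__":
    with open("zserver_times.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:                 # once an hour, as in the study's design
            broadcast_once(writer)
            f.flush()
            time.sleep(3600)
```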

    Dynamic load-balancing system for web servers

    The article considers the problem of load distribution in a server cluster and proposes a dynamic balancing system that is based on a dispatcher and balances the incoming load on the web servers. Growing Internet usage and frequent access to large amounts of multimedia data increase network traffic. Server performance and high availability are therefore important factors in addressing this problem with cluster-based systems: several low-cost servers are connected to a high-speed network and a load-balancing technique is applied between them, offering high computing power and high availability. The overall increase in traffic on the World Wide Web is raising user-perceived response times at popular websites, especially during special events. A distributed web-server system can provide the scalability and flexibility needed to cope with growing client demands. The obvious approach to improving web-server response time is to use multiple servers; the efficiency of such a replicated system then depends on how incoming requests are distributed among the replicas. A distributed web-server architecture schedules client requests among the server nodes in a user-transparent way, which affects scalability and availability. The aim of this paper is the development of load-balancing techniques for distributed web-server systems.
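
    As an illustration of dispatcher-based balancing, the sketch below implements one plausible policy (least connections) over hypothetical replica names; it is not the paper's actual system:

```python
# Hedged sketch: a dispatcher that routes each incoming request to the
# web-server replica with the fewest active connections (one plausible
# dynamic-balancing policy). Replica names are illustrative.
import heapq
import itertools

class Dispatcher:
    def __init__(self, servers):
        self._tie = itertools.count()                     # stable tie-breaker
        self._heap = [(0, next(self._tie), s) for s in servers]
        heapq.heapify(self._heap)

    def assign(self):
        """Route the next request to the least-loaded replica."""
        active, _, server = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (active + 1, next(self._tie), server))
        return server

    def release(self, server):
        """Decrease a replica's load count when it finishes a request."""
        for i, (active, tie, name) in enumerate(self._heap):
            if name == server:
                self._heap[i] = (max(active - 1, 0), tie, name)
                heapq.heapify(self._heap)
                return

if __name__ == "__main__":
    d = Dispatcher(["web1", "web2", "web3"])
    print([d.assign() for _ in range(6)])   # requests alternate across replicas
```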

    Method-based caching in multi-tiered server applications

    In recent years, application server technology has become very popular for building complex but mission-critical systems such as Web-based e-commerce applications. However, the resulting solutions tend to suffer from serious performance and scalability bottlenecks because of their distributed nature and their various software layers. This paper addresses the problem with an approach for transparently caching the results of a service interface's read-only methods on the client side. Cache consistency is provided by a descriptive cache invalidation model which may be specified by an application programmer. As the cache layer is transparent to the server as well as to the client code, it can be integrated with relatively low effort even into systems that have already been implemented. Experimental results show that the approach is very effective in improving a server's response times and its transactional throughput. Roughly speaking, the overhead of cache maintenance is small compared to the cost of method invocations on the server side, and the cache's performance improvements are dominated by the fraction of read method invocations and the cache hit rate. Our experiments are based on a realistic e-commerce Web site scenario in which site user behaviour is emulated in an authentic way. By inserting our cache, the maximum user request throughput of the web application could be more than doubled while its response time (as perceived by a web client) was kept at a very low level. Moreover, the cache can be smoothly integrated with traditional caching strategies acting on other system tiers (e.g. caching of dynamic Web pages on a Web server). The presented approach and the related implementation are not restricted to application server scenarios but may be applied to any kind of interface-based software layer.
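
    A hedged sketch of the general idea: read-only methods of a service interface are cached on the client side, and a descriptive invalidation rule names the read methods each write method stales. The CartService class, method names, and invalidation map are illustrative, not the paper's implementation:

```python
# Hedged sketch: client-side caching of a service interface's read-only methods,
# with a descriptive invalidation model (write method -> read methods whose
# cached results it stales). CartService is a local stand-in for a remote
# application-server facade; all names are illustrative.
import functools

INVALIDATES = {"add_to_cart": {"get_cart"}}   # programmer-supplied rule

class CachingProxy:
    def __init__(self, service, read_methods):
        self._service = service
        self._reads = set(read_methods)
        self._cache = {}                       # (method name, args) -> result

    def __getattr__(self, name):
        target = getattr(self._service, name)

        @functools.wraps(target)
        def wrapper(*args):
            key = (name, args)
            if name in self._reads:            # read-only: answer from the cache
                if key not in self._cache:
                    self._cache[key] = target(*args)
                return self._cache[key]
            result = target(*args)             # write: call through, then
            for stale in INVALIDATES.get(name, ()):     # drop stale entries
                self._cache = {k: v for k, v in self._cache.items()
                               if k[0] != stale}
            return result

        return wrapper

class CartService:                             # stand-in for the remote service
    def __init__(self):
        self._items = []

    def get_cart(self):
        print("server hit")
        return list(self._items)

    def add_to_cart(self, item):
        self._items.append(item)

proxy = CachingProxy(CartService(), read_methods={"get_cart"})
proxy.get_cart()            # server hit
proxy.get_cart()            # served from the client-side cache
proxy.add_to_cart("book")   # write invalidates the cached get_cart result
proxy.get_cart()            # server hit again
```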

    Hypermedia-based discovery for source selection using low-cost linked data interfaces

    Evaluating federated Linked Data queries requires consulting multiple sources on the Web. Before a client can execute queries, it must discover data sources and determine which ones are relevant. Federated query execution research focuses on the actual execution, while data source discovery is often only marginally discussed, even though it has a strong impact on selecting sources that contribute to the query results. Therefore, the authors introduce a discovery approach for Linked Data interfaces based on hypermedia links and controls, and apply it to federated query execution with Triple Pattern Fragments. In addition, the authors identify quantitative metrics to evaluate this discovery approach. This article describes generic evaluation measures and results for their concrete approach. With low-cost data summaries as seed, interfaces to eight large real-world datasets can discover each other within 7 minutes. Hypermedia-based client-side querying shows a promising gain of up to 50% in execution time, but demands algorithms that visit a larger number of interfaces to improve result completeness.
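
    To illustrate hypermedia-based discovery, the sketch below runs a breadth-first traversal over an in-memory link graph that stands in for interfaces advertising each other via hypermedia controls; the URLs are placeholders, and a real client would dereference each interface and read its controls and data summaries instead:

```python
# Hedged sketch: breadth-first discovery of Linked Data interfaces by following
# hypermedia links from a single seed. The link graph is an in-memory
# placeholder; a real client would dereference each interface URL.
from collections import deque

LINKS = {   # illustrative: interface URL -> interfaces it links to
    "http://example.org/dbpedia":  ["http://example.org/geonames"],
    "http://example.org/geonames": ["http://example.org/viaf",
                                    "http://example.org/dbpedia"],
    "http://example.org/viaf":     [],
}

def discover(seed, follow=lambda url: LINKS.get(url, [])):
    """Return all interfaces reachable from the seed via hypermedia links."""
    found, queue = {seed}, deque([seed])
    while queue:
        url = queue.popleft()
        for linked in follow(url):
            if linked not in found:
                found.add(linked)
                queue.append(linked)
    return found

print(sorted(discover("http://example.org/dbpedia")))   # all three interfaces
```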

    Dynamic Prefetching of Data Tiles for Interactive Visualization

    In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset. We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.
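
    A small sketch of the movement-based half of such prefetching only, assuming a simulated tile grid and a synchronous stand-in for the DBMS fetch; the real middleware also uses data characteristics and adapts its prediction strategy over time:

```python
# Hedged sketch of movement-based prediction: after each tile request, assume
# the user keeps panning in the same direction and fetch that neighbour ahead
# of time. fetch_tile() is a slow stand-in for the back-end DBMS query.
import time

def fetch_tile(tile):
    time.sleep(0.05)                 # simulated DBMS latency
    return f"data for tile {tile}"

class PrefetchingMiddleware:
    def __init__(self):
        self.cache = {}
        self.last_tile = None

    def request(self, tile):
        """Serve a tile, then prefetch the tile predicted from the last move."""
        data = self.cache.pop(tile, None)
        hit = data is not None
        if data is None:
            data = fetch_tile(tile)
        if self.last_tile is not None:
            dx = tile[0] - self.last_tile[0]     # momentum prediction: keep
            dy = tile[1] - self.last_tile[1]     # moving in the same direction
            predicted = (tile[0] + dx, tile[1] + dy)
            self.cache[predicted] = fetch_tile(predicted)  # async in practice
        self.last_tile = tile
        return data, hit

mw = PrefetchingMiddleware()
for step in [(0, 0), (1, 0), (2, 0), (3, 0)]:    # user pans steadily east
    _, hit = mw.request(step)
    print(step, "cache hit" if hit else "fetched on demand")
```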