Performance impact of web services on Internet servers
While traditional Internet servers mainly served static and later also dynamic content, the popularity of Web services is increasing rapidly. Web services incur additional overhead compared to traditional web interaction. This overhead increases the demand on Internet servers, which is of particular importance when the request rate to the server is high. We conduct experiments showing that the overhead imposed by Web services is non-negligible during server overload: in our experiments the response time for Web services is more than 30% higher, and the server throughput more than 25% lower, compared to traditional web interaction using dynamically created HTML pages.
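
Where the extra cost comes from can be illustrated by encoding the same result both ways: a SOAP-style Web service wraps the payload in an XML envelope that has to be generated and parsed, while a dynamically created HTML page returns the markup directly. The sketch below is purely illustrative and not taken from the paper's experimental setup; the record and element names are invented.

// Illustrative sketch only: contrasts a SOAP-style response with a plain
// dynamic-HTML response for the same record, to show where the extra
// serialisation and parsing work of Web services comes from.
// The record contents and element names are made up for the example.
public class PayloadComparison {
    public static void main(String[] args) {
        String html =
            "<html><body><p>Order 42: 3 items, total 17.50 EUR</p></body></html>";

        String soap =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
          + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Body>"
          + "<getOrderResponse>"
          + "<orderId>42</orderId><items>3</items><total currency=\"EUR\">17.50</total>"
          + "</getOrderResponse>"
          + "</soap:Body></soap:Envelope>";

        // The SOAP variant is larger and must additionally be parsed as XML on
        // both ends, which is one source of the overhead the experiments quantify.
        System.out.println("HTML bytes: " + html.getBytes().length);
        System.out.println("SOAP bytes: " + soap.getBytes().length);
    }
}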
Design Architecture-Based on Web Server and Application Cluster in Cloud Environment
The cloud has become a computational and storage solution for many data-centric organizations. The problem those organizations now face is searching cloud data efficiently: a framework is required to distribute the work of searching and fetching across thousands of computers, since data in HDFS is scattered and takes a long time to retrieve. The central idea is to design a web server for the map phase using the Jetty web server, giving a fast and efficient way of searching data in the MapReduce paradigm. For real-time processing on Hadoop, a searchable mechanism is implemented in HDFS by creating a multilevel index in the web server with multi-level index keys. The web server is used to handle the traffic throughput, and web clustering technology improves application performance. To keep manual work down, the load balancer should automatically distribute load to newly added nodes in the server cluster.
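
As a rough, hypothetical illustration of the map-side server idea (assuming the Jetty 9.x embedded API; the index layout, handler path and values below are invented, and the real system builds its multilevel index over data stored in HDFS):

// Minimal sketch, assuming the Jetty 9.x embedded API: a map-side web server
// that answers key lookups against a two-level in-memory index. The index
// keys, query parameter and stored offsets are hypothetical stand-ins for the
// multilevel index over HDFS described in the abstract.
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Map;
import java.util.TreeMap;

public class MapSideSearchServer {
    // Top level: key prefix -> second-level index (full key -> HDFS offset).
    static final Map<String, TreeMap<String, Long>> INDEX = new TreeMap<>();

    public static void main(String[] args) throws Exception {
        INDEX.computeIfAbsent("or", p -> new TreeMap<>()).put("order-42", 1024L);

        Server server = new Server(8080);
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws java.io.IOException {
                String key = request.getParameter("key");
                Long offset = null;
                if (key != null && key.length() >= 2) {
                    TreeMap<String, Long> second = INDEX.get(key.substring(0, 2));
                    if (second != null) offset = second.get(key);
                }
                response.setContentType("text/plain");
                response.getWriter().println(
                        offset == null ? "not found" : "HDFS offset: " + offset);
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }
}

In the architecture described, one such server would run in the map phase on each node, with the load balancer spreading query traffic across nodes as they are added to the cluster.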
Hierarchy of protein loop-lock structures: a new server for the decomposition of a protein structure into a set of closed loops
HoPLLS (Hierarchy of protein loop-lock structures) (http://leah.haifa.ac.il/~skogan/Apache/mydata1/main.html) is a web server that identifies closed loops, a structural basis for protein domain hierarchy. The server is based on the loop-and-lock theory for the structural organisation of natural proteins. We describe this web server, the algorithms for the decomposition of a 3D protein into loops, and the results of scientific investigations into a structural "alphabet" of loops and locks.
CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps, and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex
Browser-based Analysis of Web Framework Applications
Although web applications have evolved into mature solutions providing a sophisticated user experience, they have also become complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as they are aggregated from multiple sources and there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages while treating the server-side process and path complexity as a black box. However, these tests do not detect the cause of an error, which instead has to be located manually. This paper proposes generating metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on the user interface components of web frameworks.
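
One possible way to realise such metadata (the paper does not prescribe this particular encoding; the names and marker format below are illustrative assumptions) is for the framework to wrap each UI component's rendered output with markers naming the component and its server-side processing path, so that a browser-based test which finds a faulty fragment can report the responsible component directly:

// Hypothetical sketch of the metadata idea: the framework wraps the output of
// each UI component in HTML comments naming the component and the server-side
// processing path, so a browser-based test can trace a faulty fragment back to
// its origin. Component names and the comment format are invented here.
public class ComponentTracer {

    // Wraps a component's rendered markup with origin metadata.
    static String renderWithMetadata(String componentId, String processingPath,
                                     String renderedHtml) {
        return "<!-- begin component=" + componentId
             + " path=" + processingPath + " -->\n"
             + renderedHtml + "\n"
             + "<!-- end component=" + componentId + " -->";
    }

    public static void main(String[] args) {
        String fragment = renderWithMetadata(
                "LoginForm", "/pages/login?step=credentials",
                "<form action=\"/login\"><input name=\"user\"/></form>");
        System.out.println(fragment);
        // A browser-based test that detects an error inside this fragment can
        // report "LoginForm" and its processing path instead of leaving the
        // developer to locate the cause manually.
    }
}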
Transparent and scalable client-side server selection using netlets
Replication of web content in the Internet has been found to improve the service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes is found to affect the service response time perceived by clients, in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach for client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server based on its built-in intelligence, supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault-transparent.
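
The behaviour of a service decision point can be pictured with a small, assumed sketch: probe each replica server, time the probe, and direct subsequent client requests to the currently fastest server. The server list, probe path and timeouts below are invented for illustration; the actual system deploys this logic as Netlets reachable via an anycast address.

// Illustrative sketch of a service decision point: probe each replica server,
// time the probe, and pick the fastest one for the next client request.
// The URLs, probe path and timeouts are made-up values for the example; the
// real system hosts this logic in Netlets identified by an anycast address.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;

public class ServerSelector {

    static final List<String> REPLICAS = List.of(
            "http://replica1.example.org", "http://replica2.example.org");

    // Returns the probe round-trip time in milliseconds, or Long.MAX_VALUE on failure.
    static long probe(String base) {
        long start = System.nanoTime();
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(base + "/probe").openConnection();
            conn.setConnectTimeout(500);
            conn.setReadTimeout(500);
            conn.getResponseCode();          // forces the request to complete
            return (System.nanoTime() - start) / 1_000_000;
        } catch (Exception e) {
            return Long.MAX_VALUE;           // treat unreachable servers as worst case
        }
    }

    // Picks the currently best-performing replica; client requests would then
    // be transparently redirected to it.
    static String selectServer() {
        return REPLICAS.stream()
                .min(java.util.Comparator.comparingLong(ServerSelector::probe))
                .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println("Selected server: " + selectServer());
    }
}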
