KASR: A Reliable and Practical Approach to Attack Surface Reduction of Commodity OS Kernels
Commodity OS kernels have broad attack surfaces due to the large code base
and the numerous features such as device drivers. For a real-world use case
(e.g., an Apache Server), many kernel services are unused and only a small
amount of kernel code is used. Within the used code, a certain part is invoked only at runtime, while the rest is executed only during the startup and/or shutdown phases of the kernel's lifetime. In this paper, we propose a reliable and
practical system, named KASR, which transparently reduces attack surfaces of
commodity OS kernels at runtime without requiring their source code. The KASR
system, residing in a trusted hypervisor, achieves the attack surface reduction
through a two-step approach: (1) reliably depriving unused code of executable
permissions, and (2) transparently segmenting used code and selectively activating the segments. We implement a prototype of KASR on the Xen-4.8.2 hypervisor and
evaluate its security effectiveness on Linux kernel-4.4.0-87-generic. Our
evaluation shows that KASR reduces the kernel attack surface by 64% and trims
off 40% of CVE vulnerabilities. Besides, KASR successfully detects and blocks
all 6 real-world kernel rootkits. We measure its performance overhead with
three benchmark tools (i.e., SPECINT, httperf and bonnie++). The experimental
results indicate that KASR imposes less than 1% performance overhead (compared
to an unmodified Xen hypervisor) on all the benchmarks.
Comment: The work has been accepted at the 21st International Symposium on Research in Attacks, Intrusions, and Defenses, 2018.
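The two-step idea can be illustrated with a small sketch. The Python below is not KASR's hypervisor code; it only models the concept under stated assumptions: an offline profiling pass records which kernel code pages a workload actually executes, and an enforcement step then refuses execute access to every other page. The page size, addresses and the ExecutePermissionDenied exception are all hypothetical.

```python
# Minimal model of execute-permission reduction: pages seen during offline
# profiling stay executable; all other kernel code pages are locked down.
PAGE_SIZE = 4096

class ExecutePermissionDenied(Exception):
    """Raised when code outside the profiled page set is executed (hypothetical)."""

def profile(executed_addresses):
    """Offline phase: record which kernel code pages a workload actually uses."""
    return {addr // PAGE_SIZE for addr in executed_addresses}

class PermissionEnforcer:
    """Runtime phase: only pages seen during profiling keep execute permission."""

    def __init__(self, used_pages):
        self.used_pages = used_pages

    def execute(self, addr):
        page = addr // PAGE_SIZE
        if page not in self.used_pages:
            # In KASR this corresponds to a permission violation caught by the
            # hypervisor; here we simply raise so the attempt can be blocked.
            raise ExecutePermissionDenied(hex(addr))
        return True

# Toy usage: profiling covers two pages; a later jump into an unprofiled page
# (e.g. injected rootkit code) is detected and refused.
enforcer = PermissionEnforcer(profile([0x1000, 0x1234, 0x2000]))
enforcer.execute(0x1100)        # allowed: same page as a profiled address
try:
    enforcer.execute(0x9000)    # blocked: page never used by the workload
except ExecutePermissionDenied as denied:
    print("blocked execution at", denied)
```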
Software Aging Analysis of Web Server Using Neural Networks
Software aging is a phenomenon that refers to progressive performance
degradation or transient failures or even crashes in long running software
systems such as web servers. It mainly occurs due to the deterioration of operating system resources, fragmentation and the accumulation of numerical errors. The primary method to counter software aging is software rejuvenation.
Software rejuvenation is a proactive fault management technique aimed at
cleaning up the system internal state to prevent the occurrence of more severe
crash failures in the future. It involves occasionally stopping the running
software, cleaning its internal state and restarting it. An optimized schedule for performing software rejuvenation has to be derived in advance, because a long-running application cannot simply be taken down at arbitrary times without incurring unnecessary cost. This paper proposes a method to derive an accurate and optimized
schedule for rejuvenation of a web server (Apache) by using Radial Basis
Function (RBF) based Feed Forward Neural Network, a variant of Artificial
Neural Networks (ANN). Aging indicators are obtained through an experimental setup involving an Apache web server and clients; these indicators serve as inputs to the neural network model. This method improves on existing ones because the RBF network offers better accuracy and faster convergence.
Comment: 11 pages, 8 figures, 1 table; International Journal of Artificial Intelligence & Applications (IJAIA), Vol.3, No.3, May 2012.
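As a rough illustration of the modelling step, the sketch below fits a small Gaussian-RBF network to a hypothetical aging indicator (free memory over time) with plain NumPy and uses the fitted curve to pick a rejuvenation point before a safety threshold is crossed. It is a minimal stand-in for the paper's RBF feed-forward network, not the authors' implementation; the indicator trace, threshold and network size are assumptions.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations of the 1-D inputs x against the given centers."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def fit_rbf_network(x, y, n_centers=12):
    """Fit the output weights of an RBF network by linear least squares."""
    centers = np.linspace(x.min(), x.max(), n_centers)
    width = (x.max() - x.min()) / n_centers
    phi = rbf_features(x, centers, width)
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, width, weights

def predict(x, centers, width, weights):
    return rbf_features(x, centers, width) @ weights

# Hypothetical aging indicator: free memory (MB) sampled hourly, drifting down.
rng = np.random.default_rng(1)
hours = np.arange(200, dtype=float)
free_mem = 2048 - 10.0 * hours + 30 * np.sin(hours / 12) + rng.normal(0, 5, hours.size)

centers, width, weights = fit_rbf_network(hours, free_mem)
smoothed = predict(hours, centers, width, weights)

# Schedule rejuvenation before the smoothed indicator crosses a safety floor.
THRESHOLD_MB = 256
critical = hours[smoothed < THRESHOLD_MB]
if critical.size:
    print(f"schedule rejuvenation before hour {critical[0]:.0f}")
```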
Analysis of Software Aging in a Web Server
A number of recent studies have reported the phenomenon of "software aging", characterized by progressive performance degradation and/or an increased occurrence rate of hang/crash failures of a software system due to the exhaustion of operating system resources or the accumulation of errors. To counteract this phenomenon, a proactive technique called 'software rejuvenation' has been proposed. It essentially involves stopping the running software, cleaning its internal state and/or its environment and then restarting it. Software rejuvenation, being preventive in nature, begs the question as to when to schedule it. Periodic rejuvenation, while straightforward to implement, may not yield the best results, because the rate at which software ages is not constant, but depends on the time-varying system workload. Software rejuvenation should therefore be planned and initiated in the face of the actual system behavior. This requires the measurement, analysis and prediction of system resource usage. In this paper, we study the development of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource usage and activity parameters. Non-parametric statistical methods are then applied for detecting and estimating trends in the data sets. Finally, we fit time series models to the data collected. Unlike the models used previously in the research on software aging, these time series models allow for seasonal patterns, and we show how the exploitation of the seasonal variation can help in adequately predicting the future resource usage. Based on the models employed here, proactive management techniques like software rejuvenation triggered by actual measurements can be built.
Keywords: software aging, software rejuvenation, Linux, Apache, web server, performance monitoring, prediction of resource utilization, non-parametric trend analysis, time series analysis
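To make the seasonal-model idea concrete, the sketch below fits a simple trend-plus-seasonal-dummies model by least squares to a hypothetical hourly swap-usage series and extrapolates it two days ahead. It is only a simplified stand-in for the seasonal time series models used in the study; the data, period and horizon are assumptions.

```python
import numpy as np

def design_matrix(t, period):
    """Columns: intercept, linear trend, and sum-to-zero seasonal dummies."""
    season = np.eye(period)[t % period]
    seasonal = season[:, :-1] - season[:, -1:]
    return np.column_stack([np.ones(t.size), t.astype(float), seasonal])

def fit(t, y, period):
    coef, *_ = np.linalg.lstsq(design_matrix(t, period), y, rcond=None)
    return coef

def forecast(coef, t, period):
    return design_matrix(t, period) @ coef

# Hypothetical hourly swap-space usage (MB): an upward trend plus a daily cycle.
rng = np.random.default_rng(0)
t = np.arange(24 * 14)                        # two weeks of hourly samples
usage = 100 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

coef = fit(t, usage, period=24)
horizon = np.arange(t.size, t.size + 48)      # predict the next two days
print("predicted usage 48 h ahead (MB):", round(forecast(coef, horizon, period=24)[-1], 1))
```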
Adaptive Multimedia Content Delivery for Scalable Web Servers
The phenomenal growth in the use of the World Wide Web often places a heavy load on networks and servers, threatening to increase Web server response time and raising scalability issues for both the network and the server. With the advances in the field of optical networking and the increasing use of broadband technologies like cable modems and DSL, the server, and not the network, is more likely to be the bottleneck. Many clients are willing to receive a degraded, less resource-intensive version of the requested content as an alternative to connection failures. In this thesis, we present an adaptive content delivery system that transparently switches content depending on the load on the server in order to serve more clients. Our system is designed to work for dynamic Web pages and streaming multimedia traffic, which are not currently supported by other adaptive content approaches. We have designed a system which is capable of quantifying the load on the server and then performing the necessary adaptation. We designed a streaming MPEG server and client which can react to the server load by scaling the quality of frames transmitted. The main benefits of our approach include: transparent content switching for content adaptation, alleviating server load by a graceful degradation of server performance, and no requirement of modification to existing server software, browsers or the HTTP protocol. We experimentally evaluate our adaptive server system and compare it with an unadaptive server. We find that adaptive content delivery can support as many as 25% more static requests, 15% more dynamic requests and twice as many multimedia requests as a non-adaptive server. Our client-side experiments performed on the Internet show that the response-time savings from our system are quite significant.
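A minimal sketch of the load-triggered switching idea follows, assuming hypothetical load metrics, thresholds and content variants; the actual system quantifies load and scales MPEG frame quality rather than using fixed cut-offs like these.

```python
from dataclasses import dataclass

@dataclass
class ServerLoad:
    cpu_util: float          # fraction of CPU in use, 0.0 .. 1.0
    pending_requests: int    # requests currently queued at the server

# Content variants for one resource, ordered from richest to most degraded.
VARIANTS = ["video_full.mpg", "video_low_bitrate.mpg", "video_stills.jpg"]

def degradation_level(load: ServerLoad) -> int:
    """Quantify the load as an index into VARIANTS (0 means no degradation)."""
    if load.cpu_util > 0.90 or load.pending_requests > 500:
        return 2
    if load.cpu_util > 0.70 or load.pending_requests > 200:
        return 1
    return 0

def select_content(load: ServerLoad) -> str:
    """Transparently substitute a less resource-intensive variant under load."""
    return VARIANTS[degradation_level(load)]

print(select_content(ServerLoad(cpu_util=0.85, pending_requests=150)))
```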
Transparent and scalable client-side server selection using netlets
Replication of web content in the Internet has been found to improve service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes is found to affect the service response time perceived by clients, in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach for client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server based on its built-in intelligence, supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault-transparent.
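The decision-point behaviour can be sketched as follows: probe each replica, then direct the request to the fastest responder. The hostnames are placeholders and the probe is a bare TCP connect time; a real Netlet decision point would combine such measurements with its built-in selection intelligence.

```python
import socket
import time

# Placeholder replica servers; a deployed decision point would learn these.
REPLICAS = [("replica1.example.org", 80), ("replica2.example.org", 80)]

def probe(host, port, timeout=1.0):
    """Return the TCP connect time to (host, port), or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def select_server(replicas):
    """Pick the replica with the lowest measured connect time."""
    timings = {r: probe(*r) for r in replicas}
    reachable = {r: t for r, t in timings.items() if t is not None}
    return min(reachable, key=reachable.get) if reachable else None

print("redirecting client to", select_server(REPLICAS))
```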
Quantifying Resource Sharing, Resource Isolation and Agility for Web Applications with Virtual Machines
Resource sharing between applications can significantly reduce the total resources required, which can lower cost and improve performance. Isolating resources, on the other hand, can also be beneficial, since a failure or a significant load in one application then does not affect another. There is a delicate balance between resource sharing and resource isolation. Virtual machines may be a solution to this problem, with the added benefit of enabling more dynamic load balancing, but this solution may come at a significant cost in performance. This thesis compares three different configurations for machines running application servers. It looks at the speed at which a new application server can be started up, and at resource sharing and resource isolation between applications, in an attempt to quantify the trade-offs of each type of configuration.
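One of the measurements behind such a comparison, startup agility, can be sketched as timing how long a newly launched application server takes to accept connections. The launcher script and port below are placeholders; repeating the measurement in each of the three configurations yields the startup numbers being compared.

```python
import socket
import subprocess
import time

def wait_for_port(host, port, timeout=120.0):
    """Poll until the server accepts a TCP connection, returning the elapsed time."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return time.monotonic() - start
        except OSError:
            time.sleep(0.2)
    raise TimeoutError(f"{host}:{port} did not come up within {timeout}s")

# Placeholder launch command for the application server under test.
server = subprocess.Popen(["./start-appserver.sh"])
print(f"startup took {wait_for_port('localhost', 8080):.1f}s")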
Performance Test Automation with Distributed Database Systems
Our previous research paper, 'A Focus on Testing Issues in Distributed Database Systems', led us to the conclusion that Distributed Database Systems support many good engineering practices but still leave room for refinement. A Distributed Database (DDB) is formed by a collection of multiple databases logically inter-related over a computer network. Apart from managing a plethora of complicated tasks, database management systems also need to be efficient in terms of concurrency, reliability, fault-tolerance and performance. As there has been a paradigm shift from centralized databases to distributed databases, a testing process for a DDB covers a series of stages in the construction of a DDB project from scratch and is typically employed in homogeneous systems. In this paper, an attempt is made to describe the establishment of performance testing for DDB systems. It focuses on the need for maintaining performance and on some techniques to achieve it in DDB systems. Three sample web-based systems are tested using TestMaker, an open-source tool, in order to highlight the helpful role of performance testing. The strengths and weaknesses of the chosen performance testing tools, viz. TestMaker, OpenSTA, and httperf, are discussed.
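As an illustration of driving one of these tools from a test harness, the sketch below launches httperf against a placeholder web system and extracts the headline metrics from its report; the host, URI, load parameters and output parsing are assumptions and may need adjusting for a particular httperf version.

```python
import subprocess

def run_httperf(server, uri, num_conns=1000, rate=50):
    """Run one httperf load test and return its summary lines."""
    cmd = [
        "httperf",
        "--server", server,
        "--uri", uri,
        "--num-conns", str(num_conns),
        "--rate", str(rate),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # Keep only the headline metrics from httperf's report.
    return [line for line in result.stdout.splitlines()
            if line.startswith(("Request rate", "Reply time", "Errors"))]

for line in run_httperf("testserver.example.org", "/index.html"):
    print(line)
```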