
    Evaluating performance of web services in cloud computing environment with high availability

    This paper presents a methodology for attaining high availability in response to the demands of web clients. To improve the response time of web services during peak hours, this research work uses dynamic allocation of host nodes. Web users are demanding: they expect web services to be quickly accessible from anywhere in the world, 24/7. A fast response time keeps a web service highly available, while a slow response time degrades its performance. The internet has become a part of everyday life; people use it for their studies, business, shopping and much more. To achieve this objective the LAMP platform is used, namely Linux, Apache, MySQL and PHP; LAMP improves product quality through the use of open source software. The proposed strategy works as a middle layer and provides high availability to the web clients.
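    The dynamic-allocation idea can be illustrated with a small sketch. The snippet below is not the paper's implementation; it assumes a hypothetical pool of standby host nodes and a simple response-time threshold, and shows how a middle layer might activate additional nodes during peak hours and release them again afterwards.

```python
# Hypothetical sketch of a middle layer that grows or shrinks a pool of
# LAMP host nodes based on observed response time. Thresholds and node
# names are illustrative assumptions, not values from the paper.

RESPONSE_TIME_HIGH = 0.8   # seconds: above this, activate another node
RESPONSE_TIME_LOW = 0.2    # seconds: below this, release a node

class NodePool:
    def __init__(self, active, standby):
        self.active = list(active)    # nodes currently serving clients
        self.standby = list(standby)  # nodes that can be allocated on demand

    def rebalance(self, avg_response_time):
        """Allocate or release host nodes for the web service."""
        if avg_response_time > RESPONSE_TIME_HIGH and self.standby:
            node = self.standby.pop(0)
            self.active.append(node)
            print(f"peak load: activating {node}")
        elif avg_response_time < RESPONSE_TIME_LOW and len(self.active) > 1:
            node = self.active.pop()
            self.standby.append(node)
            print(f"low load: releasing {node}")

pool = NodePool(active=["web1"], standby=["web2", "web3"])
pool.rebalance(avg_response_time=1.2)   # peak hour -> web2 joins the pool
pool.rebalance(avg_response_time=0.1)   # quiet period -> web2 is released
```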

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).

    Currently, there are no standard definitions available for network virtualisation or its associated architectures. This deliverable therefore proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.

    Another important aspect when defining a new architecture is the user requirements: it is crucial that the resulting architecture fits the demands users may have. Since this deliverable has been produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
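    As a rough illustration of the slice concept described above, and not part of the deliverable itself, the following sketch models a slice as a named set of virtual resources, each mapped onto a partition of a physical substrate element; all identifiers, kinds and capacities are hypothetical.

```python
# Minimal, hypothetical model of a FEDERICA-style slice: the user sees a
# virtual network, while each virtual resource maps to a share of a
# physical substrate element. Names and shares are illustrative only.

from dataclasses import dataclass, field

@dataclass
class VirtualResource:
    name: str            # e.g. a virtual link, virtual router or VM
    kind: str            # "link", "node" or "vm"
    physical_host: str   # substrate element it is carved out of
    share: float         # fraction of the physical resource allocated

@dataclass
class Slice:
    owner: str
    resources: list = field(default_factory=list)

    def add(self, resource: VirtualResource):
        self.resources.append(resource)

    def footprint(self):
        """Physical substrate elements this slice is mapped onto."""
        return sorted({r.physical_host for r in self.resources})

s = Slice(owner="researcher-A")
s.add(VirtualResource("vlink-1", "link", "substrate-router-1", 0.10))
s.add(VirtualResource("vrouter-1", "vm", "substrate-server-2", 0.25))
print(s.footprint())   # ['substrate-router-1', 'substrate-server-2']
```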

    An examination of the Asus WL-HDD 2.5 as a nepenthes malware collector

    The Linksys WRT54G has long been used as a host for network forensics tools such as Snort. Whilst large corporations already utilise network forensic tools, this paper demonstrates that it is quite feasible for a non-security specialist to track and capture malicious network traffic. This paper introduces the Asus WL-HDD 2.5 wireless hard disk as a replacement for the popular Linksys WRT54G. Firstly, the Linksys router is introduced, detailing some of the research undertaken on the device over the years by the security community. The paper then briefly discusses malicious software and the impact it may have on a home user. It then outlines the trivial steps in setting up Nepenthes 0.1.7 (a malware collector) on the Asus WL-HDD 2.5 according to the Nepenthes documentation, and tests the feasibility of running the malware collector on the selected device. The paper concludes by discussing the limitations of the device when attempting to execute Nepenthes.
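    The paper's setup steps are specific to the device and to Nepenthes 0.1.7 and are not reproduced here. As a loosely related sketch of what a non-specialist might run alongside such a collector, the snippet below watches a directory into which a collector might drop captured binaries and prints their SHA-256 hashes; the directory path and polling interval are assumptions, not details from the paper.

```python
# Hypothetical helper for a malware-collector host: periodically hash any
# new files that appear in the collector's capture directory. The path is
# an assumption; adjust it to wherever the collector stores its samples.

import hashlib
import time
from pathlib import Path

CAPTURE_DIR = Path("/var/nepenthes/binaries")   # assumed location
POLL_SECONDS = 60

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def watch(directory: Path):
    seen = set()
    while True:
        for sample in directory.glob("*"):
            if sample.is_file() and sample not in seen:
                seen.add(sample)
                print(f"{sha256(sample)}  {sample.name}")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch(CAPTURE_DIR)
```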

    Partition repositories for partition cloning—OS independent software maintenance in large clusters of PCs

    As a novel approach to software maintenance in large clusters of PCs requiring multiple OS installations, we implemented partition cloning and partition repositories, as well as a set of OS-independent tools for software maintenance using entire partitions, thus providing a clean abstraction of all operating system configuration state. We identify the evolution of software installations (different releases) and the customization of installed systems (different machines) as two orthogonal axes. Using this analysis, we devise partition repositories as an efficient, incremental storage scheme to maintain all necessary partition images for versatile, large clusters of PCs. We evaluate our approach with a release history of sample images used in the Patagonia multi-purpose clusters at ETH Zürich, including several Linux, Windows NT and Oberon images. The study includes quantitative data showing the viability of the OS-independent approach of working with entire partitions and investigates some relevant tradeoffs, e.g. between difference granularity and compression block size. For a 2 GByte Windows NT partition, our repository system enables the storage of nearly a dozen generational images, or several dozens of customized images, within a storage budget of twice the image size. The partitions can be replicated and transferred to a large number of PCs with our Dolly cloning tool. At present, our system is a modular university prototype based entirely on open source software, and most parts of it are in daily use to maintain our CoPs and Patagonia clusters at ETH.
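    To make the incremental-storage idea concrete, here is a small sketch, not the authors' tooling, of block-wise differencing between two partition images: only the blocks of a new image that differ from a base image are stored, which is the kind of difference-granularity tradeoff the study quantifies. The 4 KiB block size is an illustrative assumption.

```python
# Hypothetical block-level differencing between two partition images:
# keep only the blocks of the new image that differ from the base image.
# The block size is an assumption chosen for illustration.

BLOCK_SIZE = 4096

def read_blocks(path):
    """Yield fixed-size blocks of a partition image file."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield block

def diff_image(base_path, new_path):
    """Return {block_index: block_bytes} for blocks that differ from the base."""
    delta = {}
    base = read_blocks(base_path)
    for index, new_block in enumerate(read_blocks(new_path)):
        if new_block != next(base, b""):
            delta[index] = new_block
    return delta

def apply_delta(base_path, delta, out_path):
    """Rebuild the new image from the base image plus the stored delta."""
    base = list(read_blocks(base_path))
    total = max(len(base), max(delta, default=-1) + 1)
    with open(out_path, "wb") as out:
        for index in range(total):
            if index in delta:
                out.write(delta[index])
            elif index < len(base):
                out.write(base[index])
```

    Generational images would then be kept as a chain of such deltas against a common base, which is one plausible way to stay within a storage budget of roughly twice the image size.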

    Virtual Machine Image Management for Elastic Resource Usage in Grid Computing

    Grid computing has evolved from an academic concept into a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions were developed that allow the execution of computational tasks on distributed computing resources, and Grid computing has recently attracted many commercial customers. To enable commercial customers to process sensitive data in the Grid, strong security mechanisms must be put in place to protect that data. In contrast, the development of Cloud computing, which entered the scene in 2006, was driven by industry and was designed with security in mind from the beginning. Virtualization technology is used to separate users, e.g. by placing each user of a system inside a virtual machine, which prevents them from accessing other users' data. The use of virtualization in the context of Grid computing was examined early on and found to be a promising approach to counter the security threats that appeared with commercial customers. One main part of the work presented in this thesis is the Image Creation Station (ICS), a component that allows users to administer their virtual execution environments (virtual machines) themselves and that is responsible for managing and distributing the virtual machines in the entire system. In contrast to Cloud computing, which was designed to let even inexperienced users execute their computational tasks easily, Grid computing is much more complex to use. The ICS makes the Grid easier to use by overcoming traditional limitations such as having to install the required software on the compute nodes on which users execute their computational tasks. This allows users to bring commercial software to the Grid for the first time, without local administrators having to install the software on computing nodes that are accessible to all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users, or to experienced software providers, enabling the provision of individually tailored virtual machines to each user. The ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system. A second aspect of the presented solution focuses on the elasticity of the system by automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets at runtime without restarting the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use them on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources may be used to execute a particular job. This is useful when a job processes sensitive data, e.g. data that is not allowed to leave the company. To obtain comparable functionality in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically when more than one set of resources could be used.
    In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g. the level of trust or the computational costs) and tries to schedule the job to the resources with the highest priority first. Notably, the priority often mirrors the physical distance between the resources and the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than the costs of using Cloud resources. This scheduling strategy therefore minimizes the costs of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is reduced. Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g. Cloud) resources together with existing locally available resources or Grid sites, and that provides individually tailored virtual execution environments to the system's users.
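    The priority-based scheduling described above can be summarised in a short sketch. It is not the thesis' scheduler; the resource sets, trust levels, cost figures and weighting below are invented for illustration.

```python
# Hypothetical priority scheduling over resource sets: a job may restrict
# which sets are eligible, and among those the set with the highest
# priority (here derived from trust and cost) is chosen first.

from dataclasses import dataclass

@dataclass
class ResourceSet:
    name: str
    trust: float   # 0..1, higher means more trusted (e.g. the local cluster)
    cost: float    # relative cost per CPU hour

    def priority(self):
        # Illustrative weighting: prefer trusted, cheap resources.
        return self.trust - 0.5 * self.cost

RESOURCE_SETS = [
    ResourceSet("local-cluster", trust=1.0, cost=0.1),
    ResourceSet("partner-grid-site", trust=0.7, cost=0.3),
    ResourceSet("public-cloud", trust=0.4, cost=1.0),
]

def schedule(job_allowed):
    """Pick the highest-priority resource set the job is allowed to use."""
    eligible = [r for r in RESOURCE_SETS if r.name in job_allowed]
    if not eligible:
        raise RuntimeError("no eligible resource set for this job")
    return max(eligible, key=ResourceSet.priority)

# A job with sensitive data may exclude the public cloud entirely:
print(schedule({"local-cluster", "partner-grid-site"}).name)  # local-cluster
```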

    Raspberry Pi Technology
