
    Infrastructural Security for Virtualized Grid Computing

    The goal of the grid computing paradigm is to make computer power as easy to access as an electrical power grid. Unlike the power grid, the computer grid uses remote resources located at a service provider. Malicious users can abuse the provided resources, which not only affects their own systems but also those of the provider and others. Resources are utilized in an environment where sensitive programs and data from competitors are processed on shared resources, which again creates the potential for misuse. This is one of the main security issues, since in a business environment competitors distrust each other and the fear of industrial espionage is always present. Currently, human trust is the strategy used to deal with these threats. The relationship between grid users and resource providers ranges from highly trusted to highly untrusted. This wide range of trust relationships exists because grid computing itself has changed from a research topic with few users to a widely deployed product with early commercial adoption. Traditional open research communities have very low security requirements, while business customers often operate on sensitive data that represents intellectual property; thus, their security demands are very high. In traditional grid computing, most users share the same resources concurrently. Consequently, information about other users and their jobs can usually be acquired quite easily; for example, a user can see which processes are running on another user's system. For business users this is unacceptable, since even the meta-data of their jobs is classified. As a consequence, most commercial customers are not convinced that their intellectual property in the form of software and data is protected in the grid. This thesis proposes a novel infrastructural security solution that advances the concept of virtualized grid computing. The work started in 2007 and led to the development of the XGE, a virtual grid management software. The XGE uses operating system virtualization to provide a virtualized landscape. Users' jobs are no longer executed in a shared manner; they are executed within special sandboxed environments. To satisfy the requirements of a traditional grid setup, the solution can be coupled with a scheduler and grid middleware installed on the grid head node. To protect the prominent grid head node, a novel dual-laned demilitarized zone is introduced to make attacks more difficult. In a traditional grid setup, the head node and the computing nodes are installed in the same network, so a successful attack could also endanger the user's software and data. While the zone complicates attacks, it is, like all security solutions, not perfect. Therefore, a network intrusion detection system is enhanced with grid-specific signatures. A novel software component called Fence is introduced that supports end-to-end encryption, meaning all data remains encrypted until it reaches its final destination. It transfers data securely between the user's computer, the head node, and the nodes within the shielded, internal network. A lightweight kernel rootkit detection system ensures that only trusted kernel modules can be loaded; it is no longer possible to load untrusted modules such as kernel rootkits. Furthermore, a malware scanner for virtualized grids scans for signs of malware in all running virtual machines. Using virtual machine introspection, the scanner remains invisible to most types of malware and has full access to all system calls on the monitored system. To speed up detection, the load is distributed to multiple detection engines simultaneously. To enable multi-site, service-oriented grid applications, the novel concept of public virtual nodes is presented: a virtualized grid node with a public IP address, shielded by a set of dynamic firewalls. It is possible to create a set of connected public nodes, present on one or more remote grid sites. A special web service allows users to modify their own rule set in both directions in a controlled manner. The main contribution of this thesis is the presentation of solutions that improve the security of grid computing infrastructures. This includes the XGE, a software that transforms a traditional grid into a virtualized grid. Design and implementation details, including experimental evaluations, are given for all approaches. Nearly all parts of the software are available as open source software. A summary of the contributions and an outlook on future work conclude this thesis.
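    The rootkit detector itself works inside the kernel and is only summarized in the abstract. As a rough, minimal Python sketch of the underlying whitelisting idea only, the following fragment compares the modules currently loaded into a Linux kernel (as listed in /proc/modules) against a trusted list; the whitelist path and file format are illustrative assumptions, not part of the XGE.

        # Minimal sketch of module whitelisting (illustrative; not the XGE's
        # in-kernel detector). Compares currently loaded Linux kernel modules
        # against a trusted list and reports anything unexpected.

        def load_whitelist(path="/etc/trusted_modules.list"):
            # Hypothetical file: one trusted module name per line.
            with open(path) as f:
                return {line.strip() for line in f if line.strip()}

        def loaded_modules():
            # /proc/modules lists loaded modules; the first field is the name.
            with open("/proc/modules") as f:
                return {line.split()[0] for line in f}

        if __name__ == "__main__":
            for name in sorted(loaded_modules() - load_whitelist()):
                print(f"untrusted kernel module loaded: {name}")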

    A Grid-Enabled Infrastructure for Resource Sharing, E-Learning, Searching and Distributed Repository Among Universities

    In recent years, service-based approaches for sharing data among repositories and for online learning have risen to prominence because of their potential to meet requirements in the area of high performance computing. Developing education-oriented grid services while assuring high availability, reliability, and scalability is demanding in web service architectures. On the other hand, grid computing provides flexibility in aggregating distributed CPU, memory, storage, and data, and supports sharing among a large number of distributed resources, giving educational applications the potential to share knowledge beyond what is attainable on any single system. However, the literature shows that the potential of grid resources for educational purposes is not yet being utilized. In this paper, an education-oriented grid framework architecture is developed that provides a promising platform for sharing geographically dispersed learning content among universities. It allows students, faculty, and researchers to share and gain knowledge in their area of interest by using e-learning, searching, and distributed repository services among universities from anywhere, at any time. Globus Toolkit 5.2.5 (GTK) is used as the grid middleware, providing resource access, discovery and management, data movement, security, and so forth. Furthermore, this work uses OGSA-DAI, which provides database access and operations. The resulting infrastructure enables users to discover education services and interact with them through the grid portal.

    A highly-available and scalable microservice architecture for access management

    Access management is a key aspect of providing secure services and applications in information technology. Ensuring secure access is particularly challenging in a cloud environment wherein resources are scaled dynamically. In fact, keeping track of dynamic cloud instances and administering access to them requires careful coordination and mechanisms to ensure reliable operations. PrivX is a commercial offering from SSH Communications Security Oyj that automatically scans and keeps track of cloud instances and manages access to them. PrivX is currently built on the microservices approach, wherein the application is structured as a collection of loosely coupled services. However, PrivX requires external modules with specific capabilities to ensure high availability. Moreover, complex scripts are required to monitor the whole system. The goal of this thesis is to make PrivX highly available and scalable by using a container orchestration framework. To this end, we first conduct a detailed study of the most widely used container orchestration frameworks: Kubernetes, Docker Swarm, and Nomad. We then select Kubernetes based on a feature evaluation relevant to the considered scenario. We package the individual components of PrivX, including its database, into Docker containers and deploy them on a Kubernetes cluster. We also build a prototype system to demonstrate how microservices can be managed on a Kubernetes cluster. Additionally, an auto-scaling tool is created to scale specific services based on predefined rules. Finally, we evaluate the service recovery time for each of the services in PrivX, both in the RPM deployment model and in the prototype Kubernetes deployment model. We find that there is no significant difference in service recovery time between the two models. However, Kubernetes ensured high availability of the services. We find that Kubernetes is the preferred mode for deploying PrivX, making PrivX highly available and scalable.
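    The thesis's auto-scaling tool is not described in detail here. As a minimal sketch of rule-based scaling on Kubernetes, assuming the official Python client (pip install kubernetes) and an externally supplied load metric, one might adjust a deployment's replica count against fixed thresholds; the service name, namespace, and thresholds below are hypothetical, not PrivX internals.

        # Rule-based scaling sketch using the official Kubernetes Python client.
        # Deployment name, namespace, metric source, and thresholds are
        # illustrative assumptions.
        from kubernetes import client, config

        def scale_by_rule(name, namespace, load, high=0.8, low=0.2):
            apps = client.AppsV1Api()
            scale = apps.read_namespaced_deployment_scale(name, namespace)
            replicas = scale.spec.replicas
            if load > high:
                replicas += 1                    # scale out above the upper bound
            elif load < low and replicas > 1:
                replicas -= 1                    # scale in, but keep one replica
            apps.patch_namespaced_deployment_scale(
                name, namespace, {"spec": {"replicas": replicas}})

        if __name__ == "__main__":
            config.load_kube_config()            # local kubeconfig credentials
            scale_by_rule("privx-api", "default", load=0.9)  # load from monitoring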

    Securing an operational fintech web platform

    Financial technology (Fintech) describes new technology that seeks to improve and automate the delivery and use of financial services. The Fintech industry is highly vulnerable to security attacks. Protecting sensitive data is a critical matter for enterprises whose services to clients depend on sensitive private data. Today, many enterprises provide Fintech solutions, and the clients of these companies allow their data to be stored and processed by them. Security in these applications is essential.

    ENHANCING THE PERFORMANCE AND SECURITY OF ANONYMOUS COMMUNICATION NETWORKS

    With the increasing importance of the Internet in our daily lives, the private information of millions of users is exposed to more security risks. Users' data are collected either for commercial purposes and sold by service providers to marketers, or for political purposes and used by governments to track people, or even for personal purposes by hackers. Protecting online users' privacy has become a more pressing matter over the years, and anonymous communication networks were developed to serve this purpose. Tor's anonymity network is one of the most widely used anonymity networks online; it consists of thousands of routers run by volunteers. Tor preserves the anonymity of its users by relaying the traffic through a number of routers (called onion routers) forming a circuit. Tor was mainly developed as a low-latency network to support interactive applications such as web browsing and messaging. However, due to some deficiencies in the original design of Tor's network, performance is affected to the point that interactive applications cannot tolerate it. In this thesis, we attempt to address a number of the performance-limiting issues in Tor's network design. Several research efforts have proposed changes in the transport design to eliminate the effect of these problems and improve the performance of Tor's network. In our work, we propose "QuicTor," an improvement to the transport layer of Tor's network that uses Google's protocol "QUIC" instead of TCP. QUIC was mainly developed to eliminate TCP's latency introduced by handshaking delays and the head-of-line blocking problem. We provide an empirical evaluation of our proposed design and compare it to two other proposed designs, IMUX and PCTCP. We show that QuicTor significantly enhances the performance of Tor's network. Furthermore, a considerable percentage of Tor traffic is consumed by bandwidth-acquisitive applications such as BitTorrent. This results in an unfair allocation of the available bandwidth and significant degradation in the quality of service (QoS) delivered to users. In this thesis, we present a QoS-aware deep reinforcement learning approach for Tor's circuit scheduling (QDRL). We propose a design that coalesces the two scheduling levels originally present in Tor and addresses them as a single resource-allocation problem. We use the QoS requirements of different applications to set the weight of active circuits passing through a relay. Furthermore, we propose a set of approaches to achieve the optimal trade-off between system fairness and efficiency. We designed and implemented a reinforcement-learning-based scheduling approach (TRLS), a convex-optimization-based scheduling approach (CVX-OPT), and an average-rate-based proportionally fair heuristic (AR-PF). We also compared the proposed approaches with basic heuristics and with the scheduler implemented in Tor. We show that our reinforcement-learning-based approach (TRLS) achieved the highest QoS-aware fairness level, with performance resilient to the changes in an environment as dynamic as the Tor network.
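    The abstract describes setting circuit weights from application QoS requirements; the exact QDRL design is not given here. As an illustrative Python sketch only, a relay-side scheduler might pick the next backlogged circuit with probability proportional to its QoS weight; the classes and weights below are assumptions, and QDRL itself learns the weighting with deep reinforcement learning rather than using a fixed table.

        # Sketch of QoS-weighted circuit selection at a relay (illustrative).
        import random

        QOS_WEIGHTS = {"interactive": 4.0, "streaming": 2.0, "bulk": 1.0}  # assumed

        class Circuit:
            def __init__(self, cid, qos_class):
                self.cid = cid
                self.weight = QOS_WEIGHTS[qos_class]
                self.queue = []                  # pending cells

        def pick_next(circuits):
            # Weighted random choice over circuits that have queued cells.
            backlogged = [c for c in circuits if c.queue]
            if not backlogged:
                return None
            r = random.uniform(0, sum(c.weight for c in backlogged))
            for c in backlogged:
                r -= c.weight
                if r <= 0:
                    return c
            return backlogged[-1]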

    Evolution of network computing paradigms: applications of mobile agents in wired and wireless networks

    The World Wide Web (or Web for short) is the largest client-server computing system commonly available. It is used through its widely accepted universal client (the Web browser), which uses a standard communication protocol known as the HyperText Transfer Protocol (HTTP) to display information described in the HyperText Markup Language (HTML). The current Web computing model allows the execution of server-side applications such as Servlets and client-side applications such as Applets. However, it offers limited support for another model of network computing in which users would be able to use remote, and perhaps more powerful, machines for their computing needs. The client-server model enables anyone with a Web-enabled device, ranging from desktop computers to cellular telephones, to retrieve information from the Web. In today's information society, however, users are overwhelmed by the information with which they are confronted on a daily basis. For subscribers of mobile wireless data services, this may present a problem. Wireless handheld devices such as cellular telephones are connected via wireless networks that suffer from low bandwidth and a greater tendency for network errors. In addition, wireless connections can be lost or degraded by mobility. Therefore, there is a need for entities that act on behalf of users to simplify the tasks of discovering and managing network computing resources. It has been said that software agents are a solution in search of a problem. Mobile agents, however, are inherently distributed in nature and therefore represent a natural view of a distributed system. They provide an ideal mechanism for implementing complex systems, and they are well suited for applications that are communications-centric, such as Web-based network computing. Another attractive area for mobile agents is processing data over unreliable networks (such as wireless networks). In such an environment, the low-reliability network can be used to transfer agents rather than a chunk of data. The agent can travel to the nodes of the network, collect or process information without the risk of network disconnection, and then return home. The publications of this doctorate by published works report on research undertaken in the area of distributed systems, with emphasis on network computing paradigms, Web-based distributed computing, and the applications of mobile agents in Web-based distributed computing and wireless computing. The contributions of this collection of related papers can be summarized in four points. First, I have shown how to extend the Web to include computing resources; to illustrate the feasibility of my approach, I have constructed a proof-of-concept implementation. Second, a mobile agent-based approach to Web-based distributed computing, which harnesses the power of the Web as a computing resource, has been proposed and a system has been prototyped. This means that users will be able to use remote machines to execute their code, but it introduces a security risk: malicious users must be prevented from harming the remote system. For this, a security policy design pattern for mobile Java code has been developed. Third, a mediator-based approach to wireless client/server computing has been proposed and guidelines for implementing it have been published. This approach allows access to Internet services and distributed object systems from resource-constrained handheld wireless devices such as cellular telephones. Fourth and finally, a mobile agent-based approach to the Wireless Internet has been designed and implemented. In this approach, remote mobile agents can be accessed and used from wireless handheld devices. Handheld wireless devices benefit greatly from this approach, since it overcomes wireless network limitations such as low bandwidth and disconnection, and it enhances the functionality of services by allowing them to operate without constant user input.
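    As a toy Python illustration of the mobile-agent pattern described above (not code from the published papers), an agent can carry its own state, execute locally at each node it visits, and return home with the collected results. Real systems ship the agent's code and state across the network; the "nodes" here are in-process stand-ins.

        # Toy mobile-agent pattern: the agent visits each node, computes at the
        # node (no per-datum network round-trips), then returns home with the
        # aggregated results. In-process stubs replace real network transport.
        class Node:
            def __init__(self, name, data):
                self.name, self.data = name, data
            def host(self, agent):
                agent.visit(self)                # node lets the agent run locally

        class Agent:
            def __init__(self, itinerary):
                self.itinerary = itinerary
                self.results = {}
            def visit(self, node):
                self.results[node.name] = sum(node.data)  # work done at the node
            def run(self):
                for node in self.itinerary:
                    node.host(self)
                return self.results              # "return home" with the results

        print(Agent([Node("a", [1, 2]), Node("b", [3, 4])]).run())  # {'a': 3, 'b': 7}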

    Services in pervasive computing environments : from design to delivery

    The work presented in this thesis is based on the assumption that modern computer technologies are already potentially pervasive: CPUs are embedded in every sort of device; the RAM and storage memory of a modern PDA are comparable to those of a Unix workstation from ten years ago; Wi-Fi, GPRS, and UMTS are leveraging the development of the wireless Internet. Nevertheless, computing is not pervasive, because we do not have a clear conceptual model of the pervasive computer, and we lack the tools, methodologies, and middleware to write services once and seamlessly deliver them over a multitude of heterogeneous devices and different delivery contexts. Our thesis addresses these issues starting from an analysis of the forces in a pervasive computing environment: user mobility, user profile, user position, and device profile. The conceptual model, or metaphor, that drives our work is to consider the environment as surrounded by a multitude of services and objects, with devices as the communicating gates between the real world and the virtual dimension of pervasive computing around us. Our thesis is thus built upon three main “pillars”. The first pillar is a domain-object-driven methodology which allows the developer to abstract from low-level details of the final delivery platform and provides the user with the ability to access services in a multi-channel way. The rationale is that domain objects are self-contained pieces of software able to represent data and to compute functions and procedures. Our approach fills the gap between users and domain objects by building an appropriate user interface which is adapted both to the domain object and to the end user device. As an example, we present how to design, implement, and deliver an electronic mail application over various platforms. The second pillar of this thesis analyzes in more detail the forces that make direct object manipulation inadequate in a pervasive context. These forces are the user profile, the device profile, the context of use, and the combinatorial explosion of domain objects. From the analysis of the electronic mail application presented as an example, we notice that, according to the end user device or to particular circumstances during access to the service (for instance, if the user accesses the service through the interactive TV while having breakfast), some functionalities are not compulsory and do not fit an adequate task sequence. So we decided to make task models explicit in the design of a service and to integrate the capability to automatically generate user interfaces for domain objects with the formal definition of task models adapted to the final delivery context. Finally, the third pillar of our thesis concerns the lifecycle of services in a pervasive computing environment. Our solutions build upon an existing framework, the Jini connection technology, and enrich it with new services and architectures for the deployment and discovery of services, for user session management, and for the management of offline agents.
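    As a small, hypothetical Python sketch of the domain-object-driven idea (the thesis's actual framework is richer), a user interface can be derived from a domain object's operations and filtered by a device profile, echoing the e-mail example in which not every functionality fits every delivery context; the operations, profiles, and cost labels below are invented for illustration.

        # Sketch: derive a device-adapted UI from a domain object's operations.
        # Operations, device profiles, and "cost" labels are illustrative.
        MAIL_OPERATIONS = {"read": "low", "search": "medium", "compose": "high"}
        DEVICE_PROFILES = {"phone": "low", "tv": "medium", "desktop": "high"}
        COST_ORDER = ["low", "medium", "high"]

        def ui_for(device):
            # Keep only the operations the device profile can afford to render.
            limit = COST_ORDER.index(DEVICE_PROFILES[device])
            return [op for op, cost in MAIL_OPERATIONS.items()
                    if COST_ORDER.index(cost) <= limit]

        print(ui_for("phone"))    # ['read']
        print(ui_for("desktop"))  # ['read', 'search', 'compose']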

    NETWORK SERVICE DELIVERY AND THROUGHPUT OPTIMIZATION VIA SOFTWARE DEFINED NETWORKING

    In today's world, transmitting data across large bandwidth-delay product (BDP) networks requires special configuration on end users' machines in order to be done efficiently. This added level of complexity creates extra cost and is usually overlooked by users unfamiliar with the issues. This is one example of a problem which can be ameliorated with the emerging software defined networking (SDN) paradigm. In an SDN, packet forwarding is controlled via software controllers. In an OpenFlow SDN, a controller can control the forwarding, rewriting, and dropping of packets based on their header attributes. The ability to handle packets in customizable ways in software has significant implications for both users and operators of the network. Via SDN, network providers can easily provide services that enhance users' experience of the network. Steroid OpenFlow Service (SOS) is presented as a solution for seamlessly enhancing TCP data transfer throughput over large BDP networks without any modification to the software and configurations on users' machines. SOS utilizes OpenFlow to redirect application-specific traffic to application-specific service agents. SOS uses service agents on both ends of the connection to seamlessly terminate a user's TCP connection, launch a set of parallel TCP connections, and leverage multiple paths when available to maximize throughput.
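    SOS's wire protocol is not specified in the abstract. As a hedged Python sketch of the core idea, a sender-side agent can stripe a byte stream across several parallel TCP connections so that throughput on a high-BDP path is less limited by a single connection's congestion window; the host, ports, and framing below are invented for illustration and are not the SOS format.

        # Sketch of sender-side striping across parallel TCP connections.
        import socket

        def send_striped(data, host="agent.example.org",
                         ports=(9001, 9002, 9003), chunk=64 * 1024):
            socks = [socket.create_connection((host, p)) for p in ports]
            try:
                for seq, i in enumerate(range(0, len(data), chunk)):
                    piece = data[i:i + chunk]
                    s = socks[seq % len(socks)]          # round-robin striping
                    # length prefix + sequence number lets the far agent reorder
                    s.sendall(len(piece).to_bytes(4, "big")
                              + seq.to_bytes(4, "big") + piece)
            finally:
                for s in socks:
                    s.close()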

    The design and development of multi-agent based RFID middleware system for data and devices management

    Thesis (D. Tech. (Electrical Engineering)) - Central University of Technology, Free State, 2012
    Radio frequency identification (RFID) technology has emerged as a key technology for automatic identification and promises to revolutionize business processes. While RFID adoption is improving rapidly, reliable and widespread deployment of this technology still faces many significant challenges. The key deployment challenges include how to use the simple, unreliable raw data generated by RFID deployments to make business decisions, and how to manage a large number of deployed RFID devices. In this thesis, a multi-agent based RFID middleware which addresses some of the RFID data and device management challenges was developed. The middleware abstracts the auto-identification applications from physical RFID device-specific details and provides necessary services such as device management, data cleaning, event generation, query capabilities, and event persistence. The use of software agent technology offers a more scalable and distributed system architecture for the proposed middleware. As part of the multi-agent system, an application-independent domain ontology for RFID devices was developed. This ontology can be used or extended by any application concerned with the RFID domain. To address the event processing tasks within the proposed middleware, a temporal RFID data model was developed that incorporates both applications' temporal and spatial granules for efficient event processing. The developed data model extends the conventional Entity-Relationship constructs by adding a time attribute to the model. By maintaining the history of events and state changes, the data model captures the fundamental RFID application logic within the data model itself; hence, it supports efficient generation of application-level events and the updating, querying, and analysis of both recent and historical events. As part of the RFID middleware, an adaptive sliding-window based data cleaning scheme for reducing missed readings from RFID data streams (called WSTD) was also developed. The WSTD scheme models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and it exploits techniques grounded in sampling theory to drive its cleaning processes. The scheme copes efficiently with both environmental variations and tag dynamics by automatically and continuously adapting its cleaning window size based on observed readings.
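    The abstract summarizes WSTD's sampling-theoretic view without formulas. As an illustrative Python sketch of how such schemes size a smoothing window (the constants and the exact rule are assumptions, not the WSTD algorithm), one can estimate a tag's per-epoch read rate p and choose the smallest window w with (1 - p)^w <= delta, so that a tag which is actually present is read at least once with probability at least 1 - delta.

        # Sketch of sampling-based window sizing for RFID data cleaning
        # (illustrative; WSTD's actual adaptation rule may differ).
        import math

        def window_size(p_read, delta=0.05):
            # Smallest w with (1 - p)^w <= delta, i.e.
            # w >= log(delta) / log(1 - p).
            if p_read <= 0.0:
                raise ValueError("tag is never read; cleaning cannot help")
            if p_read >= 1.0:
                return 1
            return math.ceil(math.log(delta) / math.log(1.0 - p_read))

        def estimate_p(observed_reads, epochs):
            # Per-epoch read rate estimated from a recent history window.
            return observed_reads / max(epochs, 1)

        p = estimate_p(observed_reads=6, epochs=20)   # noisy reader: p = 0.3
        print(window_size(p))                         # 9 epochs for delta = 0.05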