
    OnionBots: Subverting Privacy Infrastructure for Cyber Attacks

    Over the last decade, botnets have survived by adopting a sequence of increasingly sophisticated strategies to evade detection and takeovers, and to monetize their infrastructure. At the same time, the success of privacy infrastructures such as Tor has opened the door to illegal activities, including botnets, ransomware, and marketplaces for drugs and contraband. We contend that the next waves of botnets will extensively subvert privacy infrastructure and cryptographic mechanisms. In this work we propose to preemptively investigate the design and mitigation of such botnets. We first introduce OnionBots, which we believe will be the next generation of resilient, stealthy botnets. OnionBots use privacy infrastructures for cyber attacks by completely decoupling their operation from the infected host's IP address and by carrying traffic that does not leak information about its source, destination, and nature. Such bots live symbiotically within the privacy infrastructures to evade detection, measurement, scale estimation, observation, and in general all current IP-based mitigation techniques. Furthermore, we show that with an adequate self-healing network maintenance scheme that is simple to implement, OnionBots achieve a low diameter and a low degree and are robust to partitioning under node deletions. We developed a mitigation technique, called SOAP, that neutralizes the nodes of the basic OnionBots. We also outline and discuss a set of techniques that can enable subsequent waves of Super OnionBots. In light of the potential of such botnets, we believe that the research community should proactively develop detection and mitigation methods to thwart OnionBots, potentially making adjustments to privacy infrastructure. (Comment: 12 pages, 8 figures)
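
    The self-healing maintenance scheme credited above with keeping the overlay's diameter and degree low is not spelled out in the abstract. The sketch below, in Python, shows one plausible neighbor-repair rule under those stated goals; the Peer class, the MAX_DEGREE cap, and the chaining policy are illustrative assumptions, not the authors' actual protocol.

        import random

        MAX_DEGREE = 8  # hypothetical degree cap; the paper only claims "low degree"

        class Peer:
            def __init__(self, pid):
                self.pid = pid
                self.neighbors = set()   # ids of overlay neighbors

        def connect(a, b):
            a.neighbors.add(b.pid)
            b.neighbors.add(a.pid)

        def heal_after_failure(peers, failed_id):
            """When a peer disappears, chain its orphaned neighbors together so
            the overlay stays connected without letting any degree grow large."""
            failed = peers.pop(failed_id)
            orphans = [peers[n] for n in failed.neighbors if n in peers]
            for p in orphans:
                p.neighbors.discard(failed_id)
            random.shuffle(orphans)
            for a, b in zip(orphans, orphans[1:]):
                if len(a.neighbors) < MAX_DEGREE and len(b.neighbors) < MAX_DEGREE:
                    connect(a, b)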

    Dynamic Load Balancing for Massively Multiplayer Online Games

    In recent years, there has been an important growth of online gaming. Today's Massively Multiplayer Online Games (MMOGs) can contain millions of synchronous players scattered across the world and participating with each other within a single shared game. Traditional Client/Server architectures for MMOGs exhibit various problems in scalability, reliability, and latency, as well as the cost of adding new servers when demand is too high. A P2P architecture provides considerable support for the scalability of MMOGs. It also achieves good response times by supporting direct connections between players. This thesis proposes a novel hybrid Peer-to-Peer architecture for MMOGs and a new dynamic load-balancing scheme for MMOGs based on this hybrid Peer-to-Peer architecture. We have divided the game world space into several regions. Each region in the game world space is controlled and managed by both a super-peer and a clone-super-peer. The region's super-peer is responsible for distributing game updates among the players inside the region, as well as managing the game communications between the players. The clone-super-peer is responsible for controlling the players' migration from one region to another, in addition to becoming the region's super-peer when the super-peer leaves the game. In this thesis, we have designed and simulated static and dynamic Area of Interest Management (AoIM) for MMOGs based on both the hybrid P2P and client-server architectures, with the possibility for players to move from one region to another. We have also designed and evaluated static and dynamic load balancing for MMOGs based on the hybrid P2P architecture. We have used OPNET Modeler 18.0 to simulate and evaluate the proposed system, in particular standard applications, custom applications, TDMA, and RX Group. Our dynamic load balancer is responsible for distributing the load among the regions in the game world space and is located between the game server and the regions. The results, following extensive experiments, show that low delay and higher communication traffic can be achieved using the hybrid P2P architecture together with static and dynamic AoIM and dynamic load balancing.
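
    The abstract above does not detail the balancing rule itself; the sketch below illustrates one simple way a load balancer placed between the game server and the regions could shift players from an overloaded region to its least-loaded neighbouring region. The LOAD_THRESHOLD value, the data structures, and the migrate callback are illustrative assumptions rather than the thesis's actual algorithm.

        LOAD_THRESHOLD = 100  # hypothetical maximum number of players per region

        def rebalance(regions, neighbors, migrate):
            """regions:   dict region_id -> list of player ids
               neighbors: dict region_id -> list of adjacent region ids
               migrate(player, src, dst): hands the player over to dst's super-peer."""
            for rid, players in regions.items():
                while len(players) > LOAD_THRESHOLD:
                    if not neighbors[rid]:
                        break  # isolated region; nothing to offload to
                    # move one player to the least-loaded adjacent region
                    dst = min(neighbors[rid], key=lambda n: len(regions[n]))
                    if len(regions[dst]) >= LOAD_THRESHOLD:
                        break  # all neighbours are also full; leave the region as is
                    player = players.pop()
                    regions[dst].append(player)
                    migrate(player, rid, dst)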

    An Efficient Holistic Data Distribution and Storage Solution for Online Social Networks

    In the past few years, Online Social Networks (OSNs) have spread dramatically over the world. Facebook [4], one of the largest worldwide OSNs, has 1.35 billion users, 82.2% of whom are outside the US [36]. The browsing and posting interactions (text content) between OSN users lead to user data reads (visits) and writes (updates) in OSN datacenters, and Facebook now serves a billion reads and tens of millions of writes per second [37]. Besides that, Facebook has become one of the top Internet traffic sources [36] by sharing a tremendous number of large multimedia files, including photos and videos. The servers in datacenters have limited resources (e.g., bandwidth) to supply latency-efficient service for multimedia file sharing among the rapidly growing number of users worldwide. Most online applications operate under soft real-time constraints (e.g., ≤ 300 ms latency) for good user experience, and an application's income drops as its service latency grows. Thus, service latency is a very important Quality of Service (QoS) requirement for the OSN as a web service, since it affects the OSN's revenue and user experience. Also, to increase OSN revenue, OSN service providers need to constrain capital investment, operation costs, and resource (bandwidth) usage costs. Therefore, it is critical for the OSN to supply a guaranteed QoS for both text and multimedia contents to users while minimizing its costs. To achieve this goal, in this dissertation, we address three problems: i) data distribution among datacenters: how to allocate data (text contents) among data servers with low service latency and minimized inter-datacenter network load; ii) efficient multimedia file sharing: how to facilitate the servers in datacenters to efficiently share multimedia files among users; and iii) cost-minimized data allocation among cloud storages: how to save the infrastructure (datacenter) capital investment and operation costs by leveraging commercial cloud storage services.
Data distribution among datacenters. To serve the text content, the new OSN model, which deploys datacenters globally, helps reduce service latency to worldwide distributed users and relieves the load on the existing datacenters. However, it causes a higher inter-datacenter communication load. In this new model, each datacenter has a full copy of all data, and the master datacenter updates all other datacenters, generating tremendous load. Distributed data storage, which stores a user's data only in his/her geographically closest datacenters, is a simple way to mitigate the problem. However, frequent interactions between distant users then lead to frequent inter-datacenter communication and hence long service latencies. Therefore, the OSNs need a data allocation algorithm among datacenters with minimized network load and low service latency.
Efficient multimedia file sharing. To serve multimedia file sharing with a rapidly growing user population, the file distribution method should be scalable and cost efficient, e.g., by minimizing the bandwidth usage of the centralized servers. P2P networks have been widely used for file sharing among large numbers of users [58, 131] and meet both the scalability and cost-efficiency requirements. However, without fully utilizing the altruism and trust among friends in the OSNs, current P2P-assisted file sharing systems depend on strangers or anonymous users to distribute files, which degrades their performance due to selfish and malicious user behaviors. Therefore, the OSNs need a cost-efficient and trustworthy P2P-assisted file sharing system to serve multimedia content distribution.
Cost-minimized data allocation among cloud storages. The new trend of OSNs requires building worldwide datacenters, which introduces a large amount of capital investment and maintenance costs. In order to save the capital expenditure of building and maintaining the hardware infrastructures, the OSNs can leverage the storage services of multiple Cloud Service Providers (CSPs) with existing worldwide distributed datacenters [30, 125, 126]. These datacenters provide different Get/Put latencies and unit prices for resource utilization and reservation. Thus, when selecting different CSPs' datacenters, an OSN as a cloud customer of a globally distributed application faces two challenges: i) how to allocate data to worldwide datacenters to satisfy application SLA (service level agreement) requirements, including both data retrieval latency and availability, and ii) how to allocate data and reserve resources in datacenters belonging to different CSPs to minimize the payment cost. Therefore, the OSNs need a data allocation system that distributes data among CSPs' datacenters with cost minimization and an SLA guarantee.
In all, the OSN needs an efficient holistic data distribution and storage solution to minimize its network load and cost while supplying a guaranteed QoS for both text and multimedia contents. In this dissertation, we propose methods to solve each of the aforementioned challenges. Firstly, we verify the benefits of the new trend of OSNs and present typical OSN properties that lay the basis of our design. We then propose the Selective Data replication mechanism in Distributed Datacenters (SD3) to allocate user data among geographically distributed datacenters. In SD3, a datacenter jointly considers update rate and visit rate to select user data for replication, and further atomizes a user's different types of data (e.g., status updates, friend posts) for replication, making sure that a replica always reduces inter-datacenter communication. Secondly, we analyze a BitTorrent file sharing trace, which proves the necessity of proximity- and interest-aware clustering. Based on the trace study and OSN properties, to address the second problem, we propose a SoCial Network integrated P2P file sharing system for enhanced Efficiency and Trustworthiness (SOCNET) to fully and cooperatively leverage the common-interest, geographically-close, and trust properties of OSN friends. SOCNET uses a hierarchical distributed hash table (DHT) to cluster common-interest nodes, further clusters geographically close nodes into a subcluster, and connects the nodes in a subcluster with social links. Thus, when queries travel along trustable social links, they also gain a higher probability of being successfully resolved by proximity-close nodes, simultaneously enhancing efficiency and trustworthiness. Thirdly, to handle the third problem, we model the cost minimization problem under the SLA constraints using integer programming. According to the system model, we propose an Economical and SLA-guaranteed cloud Storage Service (ES3), which finds a data allocation and resource reservation schedule with cost minimization and an SLA guarantee.
ES3 incorporates (1) a data allocation and reservation algorithm, which allocates each data item to a datacenter and determines the reservation amount on datacenters by leveraging all the pricing policies; (2) a genetic-algorithm-based data allocation adjustment approach, which makes data Get/Put rates stable in each datacenter to maximize the reservation benefit; and (3) a dynamic request redirection algorithm, which, when the request rate varies greatly, redirects a data request from an over-utilized datacenter to an under-utilized datacenter with sufficient reserved resources to further reduce the payment. Finally, we conducted trace-driven experiments on a distributed testbed, PlanetLab, and on real commercial cloud storage (Amazon S3, Windows Azure Storage, and Google Cloud Storage) to demonstrate the efficiency and effectiveness of our proposed systems in comparison with other systems. The results show that our systems outperform others in network savings and data distribution efficiency.
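
    The SD3 replication rule described above (replicate a user's data type only when the replica reduces inter-datacenter communication) can be illustrated with a small cost comparison between the remote-read traffic a local replica saves and the update traffic needed to keep it consistent. The sketch below is a minimal interpretation with hypothetical rates and sizes; the dissertation's actual model may weigh additional factors.

        def should_replicate(visit_rate, update_rate, read_size, update_size):
            """Replicate a data type at the visiting datacenter only if the
            inter-datacenter traffic saved by serving reads locally exceeds
            the extra traffic needed to propagate updates to the replica."""
            saved_traffic = visit_rate * read_size      # remote reads avoided
            added_traffic = update_rate * update_size   # extra update propagation
            return saved_traffic > added_traffic

        # Example: a frequently visited but rarely updated data type is replicated
        # (rates per hour, sizes in KB).
        print(should_replicate(visit_rate=500, update_rate=20,
                               read_size=2, update_size=2))   # True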

    A Design Approach to IoT Endpoint Security for Production Machinery Monitoring

    The Internet of Things (IoT) has significant potential for upgrading legacy production machinery with monitoring capabilities that unlock new functionality and bring economic benefits. However, the introduction of IoT at the shop floor layer exposes it to additional security risks with potentially significant adverse operational impact. This article addresses such fundamental new risks at their root by introducing a novel endpoint security-by-design approach. The approach is implemented on a widely applicable production-machinery-monitoring application by introducing real-time adaptation features for IoT device security through subsystem isolation and a dedicated lightweight authentication protocol. The article establishes a novel viewpoint for the understanding of IoT endpoint security risks and relevant mitigation strategies, and opens a new space of risk-averse designs that enable IoT benefits while shielding operational integrity in industrial environments.
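
    The abstract mentions a dedicated lightweight authentication protocol for the monitoring endpoint but does not describe it; below is a minimal HMAC-based challenge-response sketch of the kind of scheme such a constrained endpoint could use, assuming a pre-shared key provisioned per device. The key handling, nonce size, and message flow are illustrative assumptions, not the article's actual protocol.

        import hmac, hashlib, os

        PSK = b"per-device pre-shared key"   # provisioned out of band (assumption)

        def make_challenge():
            return os.urandom(16)            # gateway-side nonce

        def device_response(psk, challenge):
            # the endpoint proves knowledge of the key without transmitting it
            return hmac.new(psk, challenge, hashlib.sha256).digest()

        def verify(psk, challenge, response):
            expected = hmac.new(psk, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        # A gateway authenticates a sensor endpoint before accepting its readings.
        challenge = make_challenge()
        response = device_response(PSK, challenge)
        assert verify(PSK, challenge, response)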

    Solving key design issues for massively multiplayer online games on peer-to-peer architectures

    Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale on the Internet and are predominantly implemented by Client/Server architectures. While such a classical approach to distributed system design offers many benefits, it suffers from significant technical and commercial drawbacks, primarily reliability and scalability costs. This realisation has sparked recent research interest in adapting MMOGs to Peer-to-Peer (P2P) architectures. This thesis identifies six key design issues to be addressed by P2P MMOGs, namely interest management, event dissemination, task sharing, state persistency, cheating mitigation, and incentive mechanisms. Design alternatives for each issue are systematically compared, and their interrelationships discussed. How well representative P2P MMOG architectures fulfil the design criteria is also evaluated. It is argued that although P2P MMOG architectures are developing rapidly, their support for task sharing and incentive mechanisms still needs to be improved. The design of a novel framework for P2P MMOGs, Mediator, is presented. It employs a self-organising super-peer network over a P2P overlay infrastructure, and addresses the six design issues in an integrated system. The Mediator framework is extensible, as it supports flexible policy plug-ins and can accommodate the introduction of new super-peer roles. Key components of this framework have been implemented and evaluated with a simulated P2P MMOG. As the Mediator framework relies on super-peers for computational and administrative tasks, membership management is crucial, e.g. to allow the system to recover from super-peer failures. A new technology for this, namely Membership-Aware Multicast with Bushiness Optimisation (MAMBO), has been designed, implemented and evaluated. It reuses the communication structure of a tree-based application-level multicast to track group membership efficiently. Evaluation of a demonstration application shows that MAMBO is able to quickly detect and handle peers joining and leaving. Compared to a conventional supervision architecture, MAMBO is more scalable, yet incurs lower communication overheads. Besides MMOGs, MAMBO is suitable for other P2P applications, such as collaborative computing and multimedia streaming. This thesis also presents the design, implementation and evaluation of a novel task mapping infrastructure for heterogeneous P2P environments, Deadline-Driven Auctions (DDA). DDA is primarily designed to support NPC host allocation in P2P MMOGs, and specifically in the Mediator framework. However, it can also support the sharing of computational and interactive tasks with various deadlines in general P2P applications. Experimental and analytical results demonstrate that DDA efficiently allocates computing resources for large numbers of real-time NPC tasks in a simulated P2P MMOG with approximately 1000 players. Furthermore, DDA supports gaming interactivity by keeping the communication latency among NPC hosts and ordinary players low. It also supports flexible matchmaking policies, and can motivate application participants to contribute resources to the system.
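
    The Deadline-Driven Auctions (DDA) mechanism summarised above is not detailed in the abstract; the sketch below shows one simple way a real-time NPC task with a deadline could be auctioned to peer hosts that bid their estimated completion times, with the earliest feasible bid winning. The Bid structure and the selection rule are illustrative assumptions, not the thesis's actual auction protocol.

        from dataclasses import dataclass

        @dataclass
        class Bid:
            host: str
            estimated_completion: float   # seconds from now

        def allocate_task(deadline, bids):
            """Assign the task to the bidder that can finish soonest while still
            meeting the deadline; return None if no bid is feasible."""
            feasible = [b for b in bids if b.estimated_completion <= deadline]
            if not feasible:
                return None
            return min(feasible, key=lambda b: b.estimated_completion).host

        # An NPC task that must complete within 0.5 s goes to the fastest feasible host.
        bids = [Bid("peerA", 0.8), Bid("peerB", 0.3), Bid("peerC", 0.45)]
        print(allocate_task(0.5, bids))   # peerB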

    Towards Collaborative Scientific Workflow Management System

    The big data explosion phenomenon has impacted several domains in recent years, from research areas to diverse business models. As this intensive amount of data opens up the possibility of several interesting knowledge discoveries, over the past few years diverse research domains have shifted towards analyzing these massive amounts of data. Scientific Workflow Management Systems (SWfMSs) have gained much popularity in recent years in accelerating such data-intensive analyses, visualizations, and discoveries of important information. Data-intensive tasks are often significantly time-consuming and complex in nature, and hence SWfMSs are designed to efficiently support the specification, modification, execution, failure handling, and monitoring of the tasks in a scientific workflow. As far as the complexity, dimension, and volume of data are concerned, their effective analysis or management often becomes challenging for an individual and instead requires the collaboration of multiple scientists. Hence, the notion of 'Collaborative SWfMS' was coined, which has gained significant interest among researchers in recent years, as none of the existing SWfMSs directly supports real-time collaboration among scientists. In collaborative SWfMSs, consistency management in the face of conflicting concurrent operations by collaborators is a major challenge because of the highly interconnected document structure among the computational modules, where a minor change in one part of the workflow can strongly affect other parts of the collaborative workflow through the data-link relations among them. In addition to consistency management, studies show several other challenges that need to be addressed towards a successful design of collaborative SWfMSs, such as sub-workflow composition and execution by different sub-groups, the relationship between scientific workflows and collaboration models, sub-workflow monitoring, and seamless integration and access control of the workflow components among collaborators. In this thesis, we propose a locking scheme to facilitate consistency management in collaborative SWfMSs. The proposed method works by locking workflow components at a granular attribute level, in addition to supporting locks on a targeted part of the collaborative workflow. We conducted several experiments to analyze the performance of the proposed method in comparison to related existing methods. Our studies show that the proposed method can reduce the average waiting time of a collaborator by up to 36% while increasing the average workflow update rate by up to 15% in comparison to existing descendant modular-level locking techniques for collaborative SWfMSs. We also propose a role-based access control technique for the management of collaborative SWfMSs. We leverage the Collaborative Interactive Application Methodology (CIAM) for the investigation of role-based access control in the context of collaborative SWfMSs. We present our proposed method with a use case from the Plant Phenotyping and Genotyping research domain. Recent studies show that collaborative SWfMSs present different sets of opportunities and challenges. From our investigation of existing research on collaborative SWfMSs and the findings of our prior two studies, we propose an architecture for collaborative SWfMSs, and we propose SciWorCS, a Collaborative Scientific Workflow Management System, as a proof of concept of the proposed architecture; to the best of our knowledge, it is the first of its kind.
We present several real-world use cases of scientific workflows using SciWorCS. Finally, we conduct several user studies using SciWorCS, comprising different real-world scientific workflows (i.e., from myExperiment), to understand user behavior and styles of work in the context of collaborative SWfMSs. In addition to evaluating SciWorCS, the user studies reveal several interesting facts that can significantly contribute to the research domain, as none of the existing methods considered such empirical studies and instead relied only on computer-generated simulation studies for evaluation.
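
    The attribute-level locking described above is only summarised here; the sketch below illustrates the general idea of granting locks at (component, attribute) granularity so that two collaborators can edit different attributes of the same workflow module concurrently. The class and method names are illustrative assumptions, not the thesis's actual implementation.

        class AttributeLockManager:
            """Grants locks at (component, attribute) granularity so that edits to
            different attributes of the same workflow module never block each other."""

            def __init__(self):
                self._locks = {}   # (component_id, attribute) -> user_id

            def acquire(self, user, component_id, attribute):
                key = (component_id, attribute)
                holder = self._locks.get(key)
                if holder is None or holder == user:
                    self._locks[key] = user
                    return True
                return False   # another collaborator holds this attribute

            def release(self, user, component_id, attribute):
                if self._locks.get((component_id, attribute)) == user:
                    del self._locks[(component_id, attribute)]

        mgr = AttributeLockManager()
        assert mgr.acquire("alice", "module-7", "parameters")
        assert mgr.acquire("bob", "module-7", "datalink")        # different attribute: allowed
        assert not mgr.acquire("bob", "module-7", "parameters")  # conflict: blocked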

    Use of latent semantic indexing for content based searching and routing of mobile agents on P2P network

    A peer-to-peer (P2P) system consists of a number of nodes connected to each other in an unstructured or structured overlay network. One of the most important problems in a P2P system is locating the resources shared by the various nodes. Techniques such as Flooding and Distributed Hash Tables (DHTs) have been proposed to locate these shared resources. Flooding suffers from saturation as the number of nodes increases, while DHTs cannot handle multiple keys to define and search for a resource. Various further research efforts, including multi-agent systems (MAS), take unstructured or structured networks as a backbone and hence inherently suffer from the same problems. We present a more efficient and effective solution for discovering shared resources on a network, guided by the content shared by the nodes. Our solution uses multiple agents that manage the shared information on a node, and a mobile agent called the Reconnaissance Agent (RA) that is responsible for querying the various nodes. To reduce the search load on nodes that hold unrelated content, an efficient migration route is proposed for the RA, based on the cosine similarity between the content shared by nodes and the user query. Results show a reduction in search load and communication traffic, and an increase in recall when locating resources defined by multiple keys that are logically similar to the user query. Furthermore, the results indicate that our technique yields more relevant search results while generating minimal traffic/communication and requiring few hops by the RA.
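
    The RA migration rule described above (route the agent towards nodes whose shared content is most similar to the user query) can be illustrated with a small cosine-similarity sketch over bag-of-words term vectors; the greedy next-hop choice and the plain term representation are simplifying assumptions, since the paper builds its vectors with latent semantic indexing.

        import math
        from collections import Counter

        def cosine(a, b):
            """Cosine similarity between two bag-of-words Counters."""
            dot = sum(a[t] * b[t] for t in set(a) & set(b))
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def next_hop(query_terms, candidate_nodes):
            """Send the Reconnaissance Agent to the candidate node whose shared
            content is most similar to the user query."""
            q = Counter(query_terms)
            return max(candidate_nodes,
                       key=lambda node: cosine(q, Counter(node["content_terms"])))

        nodes = [{"id": "n1", "content_terms": ["music", "jazz", "album"]},
                 {"id": "n2", "content_terms": ["genome", "sequencing", "dna"]}]
        print(next_hop(["dna", "sequencing"], nodes)["id"])   # n2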

    Context-aware collaborative storage and programming for mobile users

    Since people generate and access most digital content from mobile devices, novel mobile apps and services are possible. Most people are interested in sharing this content with communities defined by friendship, similar interests, or geography in exchange for valuable services from these innovative apps. At the same time, they want to own and control their content. Collaborative mobile computing is an ideal choice for this situation. However, due to the distributed nature of this computing environment and the limited resources on mobile devices, maintaining content availability and storage fairness as well as providing efficient programming frameworks are challenging. This dissertation explores several techniques to address these shortcomings of collaborative mobile computing platforms. First, it combines three techniques into one system, MobiStore, that offers content availability in mobile peer-to-peer networks: topology maintenance with robust connectivity, structural reorientation based on the current state of the network, and gossip-based hierarchical updates. Experimental results showed that MobiStore outperforms a state-of-the-art comparison system in terms of content availability and resource usage fairness. Next, the dissertation explores the use of social relationship properties (i.e., network centrality) to improve the fairness of resource allocation for collaborative computing in peer-to-peer online social networks. The challenge is how to provide fairness in content replication for P2P OSNs, given that the peers in these networks exchange information only with one-hop neighbors. The proposed solution, Philia, provides fairness by selecting the peers that replicate content based on their potential to introduce storage skewness, which is determined from their structural properties in the network; it achieves higher content availability and storage fairness than several comparison systems. The dissertation concludes with a high-level distributed programming model, which efficiently uses computing resources on a cloud-assisted, collaborative mobile computing platform. This platform pairs mobile devices with virtual machines (VMs) in the cloud for increased execution performance and availability. On such a platform, two important challenges arise: first, pairing the two computing entities into a seamless computation, communication, and storage unit; and second, using the computing resources in a cost-effective way. This dissertation proposes Moitree, a distributed programming model and middleware that translates high-level programming constructs into events and provides the illusion of a single computing entity over the mobile-VM pairs. From the programmers' viewpoint, the Moitree API models user collaborations as dynamic groups formed over location, time, or social hierarchies. Experimental results from a prototype implementation show that Moitree is scalable, suitable for real-time apps, and can improve the performance of collaborating apps in terms of latency and energy consumption.
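
    Philia's replica selection, as summarised above, picks one-hop neighbors whose structural properties suggest they will not skew storage; the sketch below shows one plausible interpretation that scores candidates by degree centrality and current storage load. The scoring rule and field names are assumptions for illustration, not the dissertation's exact algorithm.

        def choose_replica_peers(neighbors, k):
            """Pick k one-hop neighbors to hold replicas, preferring peers whose
            higher degree centrality and lower current load suggest they can take
            a replica without skewing storage across the neighborhood."""
            def score(peer):
                used_fraction = peer["used"] / peer["capacity"]
                return peer["degree"] * (1.0 - used_fraction)
            return sorted(neighbors, key=score, reverse=True)[:k]

        neighbors = [{"id": "p1", "degree": 9, "used": 70, "capacity": 100},
                     {"id": "p2", "degree": 4, "used": 10, "capacity": 100},
                     {"id": "p3", "degree": 8, "used": 20, "capacity": 100}]
        print([p["id"] for p in choose_replica_peers(neighbors, 2)])   # ['p3', 'p2']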

    A Survey of DeFi Security: Challenges and Opportunities

    DeFi, or Decentralized Finance, is built on distributed ledger (blockchain) technology. Using blockchain, DeFi can customize the execution of predetermined operations between parties, and DeFi systems use blockchain technology to execute user transactions such as lending and exchanging. The total value locked in DeFi decreased from $200 billion in April 2022 to $80 billion in July 2022, indicating that security in this area remained problematic. In this paper, we address the deficiency in DeFi security studies. To the best of our knowledge, our paper is the first to provide a systematic analysis of DeFi security. First, we summarize the DeFi-related vulnerabilities in each blockchain layer; application-level vulnerabilities are also analyzed. Then we classify and analyze real-world DeFi attacks based on the principles that correspond to these vulnerabilities. In addition, we collect optimization strategies from the data, network, consensus, smart contract, and application layers, and describe the weaknesses and technical approaches they address. On the basis of this comprehensive analysis, we summarize several challenges and possible future directions in DeFi to offer ideas for further research.