27 research outputs found

    Autonomous migration of virtual machines for maximizing resource utilization

    Virtualization of computing resources enables multiple virtual machines to run on a single physical machine. When many virtual machines are deployed on a cluster of PCs, some physical machines inevitably become overloaded while others are under-utilized as computational demands vary over time. This imbalance across the cluster undermines the very purpose of virtualization: maximizing resource utilization. To address it, virtual machine migration has been introduced, in which a virtual machine on a heavily loaded physical machine is selected and moved to a lightly loaded one. Conventionally, the selection of the source virtual machine and the destination physical machine is based on a single fixed threshold value. The key to such threshold-based VM migration is deciding when to move which VM to which physical machine, since wrong or inadequate decisions can cause unnecessary migrations that adversely affect overall performance. A fixed threshold does not necessarily suit different computing infrastructures, so finding the optimal threshold is critical. This research presents a virtual machine migration framework that autonomously finds and adjusts variable thresholds at runtime for different computing requirements, so as to maximize the utilization of computing resources. Central to this approach is the history of past migrations and their effects, measured as the standard deviation of utilization before and after each migration. To broaden the approach, a proactive learning methodology is introduced that not only accumulates the history of computing patterns and the resulting migration decisions but, more importantly, searches all possibilities for the most suitable decisions. The proposed framework is set up on clusters of 8 and 16 PCs, each of which hosts multiple User-Mode Linux (UML)-based virtual machines, and an extensive set of benchmark programs is deployed to closely resemble a real-world computing environment. Experimental results indicate that the proposed framework autonomously finds thresholds close to the optimal ones for different computing scenarios, that such varying thresholds yield an optimal number of VM migrations, and that the framework balances the load across the cluster through autonomous VM migration, improving the overall performance of the dynamically changing computing environment.
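    To make the idea concrete, the following is a minimal sketch of threshold-based migration with history-driven threshold adjustment. The class and function names, the heaviest-VM selection heuristic, and the fixed adjustment step are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch: threshold-based VM migration whose threshold adapts from the
# observed effect of the last migration on the cluster's load imbalance,
# measured as the standard deviation of host utilization.
from statistics import pstdev

class Host:
    def __init__(self, name, vm_loads):
        self.name = name
        self.vm_loads = vm_loads          # CPU load contributed by each VM

    @property
    def utilization(self):
        return sum(self.vm_loads)

def migrate_once(hosts, threshold):
    """Move one VM from the most loaded host to the least loaded one,
    but only if the source exceeds the current threshold."""
    src = max(hosts, key=lambda h: h.utilization)
    dst = min(hosts, key=lambda h: h.utilization)
    if src is dst or src.utilization <= threshold:
        return False
    vm = max(src.vm_loads)                # pick the heaviest VM (one heuristic)
    src.vm_loads.remove(vm)
    dst.vm_loads.append(vm)
    return True

def adjust_threshold(threshold, stdev_before, stdev_after, step=0.05):
    """Adapt the threshold from the last migration's effect: become more
    eager when a migration reduced imbalance, stricter otherwise."""
    if stdev_after < stdev_before:
        return max(threshold - step, 0.0)
    return threshold + step

hosts = [Host("pm1", [0.5, 0.4]), Host("pm2", [0.1]), Host("pm3", [0.2, 0.1])]
threshold = 0.7
before = pstdev(h.utilization for h in hosts)
if migrate_once(hosts, threshold):
    after = pstdev(h.utilization for h in hosts)
    threshold = adjust_threshold(threshold, before, after)
```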

    Authentication and Data Protection under Strong Adversarial Model

    We are interested in addressing a series of existing and plausible threats to cybersecurity in which the adversary possesses unconventional attack capabilities. Such unconventionality includes, in our exploration but not limited to, crowd-sourcing, physical/juridical coercion, substantial (but bounded) computational resources, and malicious insiders. Our studies show that unconventional adversaries can be counteracted with a special anchor of trust and/or a paradigm shift on a case-specific basis. Complementing cryptography, hardware security primitives are the last line of defense against co-located (physical) and privileged (software) adversaries, and hence serve as that special trust anchor. Examples of hardware primitives are architecture-shipped features (e.g., in CPUs or chipsets), security chips or tokens, and certain features of peripheral/storage devices. We also propose paradigm shifts in conjunction with hardware primitives, such as containing attacks instead of counteracting them, pretended compliance, and immunization instead of detection/prevention. In this thesis, we demonstrate how this philosophy is applied to cope with several exemplary scenarios of unconventional threats and elaborate on the prototype systems we have implemented. Specifically, Gracewipe is designed for stealthy and verifiable secure deletion of on-disk user secrets under coercion; Hypnoguard protects in-RAM data while a computer is asleep (ACPI S3) against various memory/guessing attacks; Uvauth mitigates large-scale human-assisted guessing attacks by serving all login attempts indistinguishably, i.e., correct credentials yield a legitimate session and incorrect ones a plausible fake session; Inuksuk protects user files against ransomware or other authorized tampering, augmenting the hardware access control of self-encrypting drives with trusted execution to achieve data immunization. We have also extended the Gracewipe scenario to a network-based enterprise environment, aiming to address slightly different threats, e.g., malicious insiders. We believe the high-level methodology of these research topics can contribute to advancing security research under strong adversarial assumptions and to promoting software-hardware orchestration in protecting execution integrity.
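    The Uvauth idea described above lends itself to a short sketch: every login attempt is accepted, but wrong credentials land in a plausible fake session. The names (verify, login, the decoy flag) and the hashing scheme are illustrative assumptions, not the thesis's implementation.

```python
# Sketch: indistinguishable login handling in the spirit of Uvauth.
import hashlib, hmac, os

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def verify(user, password):
    stored = USERS.get(user, "")
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Constant-time comparison so timing does not leak which branch is taken.
    return hmac.compare_digest(stored, supplied)

def login(user, password):
    # Both branches return a session with the same shape, so an online
    # guessing attacker cannot distinguish success from failure.
    token = os.urandom(16).hex()
    if verify(user, password):
        return {"token": token, "decoy": False}   # real session, real data
    return {"token": token, "decoy": True}        # fake session, decoy data

session = login("alice", "wrong guess")
# The caller serves decoy content when session["decoy"] is True; only the
# legitimate user, who knows the real credential, reaches genuine data.
```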

    An analysis of key generation efficiency of RSA cryptosystem in distributed environments

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2005. Includes bibliographical references (leaves: 68). Text in English; abstract in Turkish and English. ix, 74 leaves.
    As the volume of communication over networks, and especially over the Internet, grew, a huge need arose for securing these connections. Symmetric and asymmetric cryptosystems form a complementary approach to providing this security. While asymmetric cryptosystems are a perfect fit for distributing the keys used by the communicating parties, they are very slow for the actual encryption and decryption of the data flowing between them. Symmetric cryptosystems fill this gap and are used for encryption and decryption once the session keys have been exchanged securely. Parallelism is a hot research topic in many fields and is used to tackle problems whose solutions take a considerable amount of time. Cryptography is no exception: computer scientists have discovered that parallelism can make the algorithms of asymmetric cryptosystems run faster, and experimental results have shown good promise so far. This thesis is based on the parallelization of a famous public-key algorithm, namely RSA.
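    The dominant cost in RSA key generation is the search for large primes, which parallelizes naturally. Below is a minimal standard-library sketch of that idea; it is not the thesis's distributed design, and a real implementation would draw candidates from a cryptographically secure source (e.g., the secrets module) rather than random.

```python
# Sketch: two worker processes each hunt for a large probable prime; the
# two primes then form the RSA modulus n = p * q.
import random
from concurrent.futures import ProcessPoolExecutor

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full size
        if is_probable_prime(cand):
            return cand

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        p, q = pool.map(random_prime, [1024, 1024])
    n = p * q
    e = 65537
    phi = (p - 1) * (q - 1)
    # Modular inverse (Python 3.8+); retry with new primes in the rare case
    # that e divides phi (omitted here for brevity).
    d = pow(e, -1, phi)
    print(n.bit_length())   # ~2048-bit modulus
```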

    Reliable Software for Unreliable Hardware - A Cross-Layer Approach

    This thesis proposes a novel cross-layer reliability analysis, modeling, and optimization approach that leverages multiple layers of the system design abstraction (i.e., hardware, compiler, system software, and application program) to exploit the reliability-enhancing potential available at each system layer and to exchange this information across layers.

    A commodity trusted computing module

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. By Victor Marius Costan. Includes bibliographical references (p. 107-110).
    The Trusted Execution Module (TEM) is a high-level specification for a commodity chip that can execute user-supplied procedures in a trusted environment. The TEM draws inspiration from the Trusted Platform Module (TPM), the first security-related hardware to gain massive adoption in the PC market. However, the TEM can securely execute procedures expressing arbitrary computation, originating from a potentially untrusted party, whereas the TPM is limited to a set of cryptographic functions fixed at design time. Despite its greater flexibility, the TEM design was implemented on the same inexpensive off-the-shelf hardware as the TPM, and it does not require any export-restricted technology. Furthermore, the TEM removes the expensive requirement of a secure binding to its host computer. This makes the TEM a strong candidate for the next-generation TPM. Moreover, the TEM's guarantees of secure execution enable exciting applications that were far beyond the reach of TPM-powered systems, including but not limited to mobile agents, peer-to-peer multiplayer online games, and anonymous offline payments.
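    A toy sketch of the lifecycle the abstract describes: a party submits a procedure, and the module runs it only inside its own trusted boundary. The HMAC-based sealing and the eval-based executor are illustrative stand-ins and do not reflect the TEM's actual protocol or isolation mechanism.

```python
# Sketch: seal a user-supplied procedure to a module key, then execute it
# only after the binding to this module verifies.
import hmac, hashlib, json

MODULE_KEY = b"device-unique secret"   # in a real TEM this never leaves the chip

def seal(procedure_src, params):
    blob = json.dumps({"src": procedure_src, "params": params}).encode()
    tag = hmac.new(MODULE_KEY, blob, hashlib.sha256).hexdigest()
    return blob, tag

def execute_sealed(blob, tag):
    # Refuse to run anything whose binding to this module does not verify.
    expected = hmac.new(MODULE_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise PermissionError("procedure not bound to this module")
    payload = json.loads(blob)
    # The real TEM interprets a closure in an isolated environment; eval with
    # empty builtins is only a toy approximation of that isolation.
    return eval(payload["src"], {"__builtins__": {}}, dict(payload["params"]))

blob, tag = seal("x * x + 1", {"x": 7})
print(execute_sealed(blob, tag))   # 50
```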

    Managing application software suppliers in information system development projects

    Information system development (ISD) projects have been associated with the "software crisis" for over three decades. A set of common "root causes" has often been cited in the literature, along with corresponding "solutions". Yet the overall project success rate has remained low, resulting in a paradox of many solutions and little progress over the years. This study examines the management of application software acquisition from external suppliers in ISD projects. Three case studies are documented based on participant observation with complete membership roles. After within-case analyses highlighting issues in individual cases, cross-case analyses are conducted, first to identify a pattern of ISD project challenges and then to search for their explanations. Concepts from agency theory, contract theory, and the product development literature are used in diagnosing the root causes behind the observations. The proposed explanation is that the Traditional Systems Development Framework (TSDF), characterized by competitive bidding followed by monopolized development, underlies the identified root causes. Accordingly, competitive development is suggested as an alternative approach. Following the "Inference to the Best Explanation" (IBE) analytical strategy, the suggested approach is subjected to two contrastive analyses, first with prepackaged software development and then with the construction industry, to demonstrate that the suggestion is a "warranted inference". Further analogical analyses illustrate the feasibility of development competition for software product development. A Performance-Based Systems Development Framework (PBSDF) is outlined as a tentative implementation of the suggested competitive development approach for ISD projects, supported by risk-sharing supplier contracts and a relative product evaluation approach. After summarizing the research contributions, a number of implications for future research are described.

    Remote support technology for small business

    Small businesses need a more efficient solution for managing their Information Technology support needs. Because small businesses require custom solutions, IT service providers must dedicate highly skilled personnel to client business sites, incurring high overhead costs and restricting their ability to apply their employee base to multiple clients. This restriction in cost and flexibility places a high cost burden on small business clients, straining already limited budgets. The use of remote IT support technology may provide the basis for a solution to these problems. By applying remote technology, an IT provider could centralize its workforce, managing clients from a single location rather than dedicating manpower to client sites. If the technology were available to support such a model, this change in methodology could result in a more manageable solution. Small businesses had the highest propensity to outsource IT support for the management of their hardware, software, web hosting, server/host management, networking, and security requirements. Many remote tools currently exist to support these needs, offering solutions for access, alerts, system monitoring, diagnosis, and reporting of a client's IT infrastructure. Using these tools, a remote solution showed the greatest ability to manage the software, server/host management, and networking needs of small business organizations. Web hosting service requirements were strongly supported as well, although remote solutions would change the current overall structure of web hosting support, making the solution more difficult to implement. In the areas of hardware and security, although many of the primary support needs were strongly addressed, flaws were discovered that made the methodology less than ideal. The primary flaws of remote support were the inability to manage hardware device failure, the inability to manage the network medium, and security issues arising from the ability to separate a system administrator from the designated system through denial-of-service-type attacks. Although each of these flaws represents a significant issue with a remote IT management solution, the risk of each could be limited through redundancy, offering a feasible workaround. From both a business and a technological perspective, remote solutions proved to be a viable alternative to on-site support for the management of small business IT needs. The total cost of a remote solution is comparable to the average yearly salary of an IT employee, typically offering the same potential for supporting a client's IT infrastructure as a one-time investment. In addition, remote solutions offer significant savings to the provider by reducing administrative overhead and increasing the potential for business expansion, allowing significant cost savings to be passed on to the client. Although remote technology does not offer a perfect solution for supporting small business, the functionality that is readily available has strong potential to increase the efficiency of current small business IT support methods and to offer more cost-effective solutions to small business organizations.

    Design of a reference architecture for an IoT sensor network
