
    Distributed PC Based Routers: Bottleneck Analysis and Architecture Proposal

    Recent research in the different functional areas of modern routers has produced proposals that can greatly increase the efficiency of these machines. Most of these proposals can be implemented quickly, and often efficiently, in software. We wish to use personal computers as forwarders in a network to take advantage of these research advances. We therefore examine the ability of a personal computer to act as a router. We analyze the performance of a single general-purpose computer and show that I/O is the primary bottleneck. We then study the performance of a distributed router composed of multiple general-purpose computers. We study a star topology and show through experimental results that although its performance is good, it lacks flexibility in its design. We compare it with a multistage architecture. We conclude with a proposal for an architecture that provides a forwarder that is both flexible and scalable. © IEE

    MMP

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 129-135). Reliability and security are quickly becoming users' biggest concerns due to the increasing reliance on computers in all areas of society. Hardware-enforced, fine-grained memory protection can increase the reliability and security of computer systems, but will be adopted only if the protection mechanism does not compromise performance, and if the hardware mechanism can be used easily by existing software. Mondriaan memory protection (MMP) provides fine-grained memory protection for a linear address space, while supporting an efficient hardware implementation. MMP's use of linear addressing makes it compatible with current software programming models and program binaries, and it is also backwards compatible with current operating systems and instruction sets. MMP can be implemented efficiently because it separates protection information from program data, allowing protection information to be compressed and cached efficiently. This organization is similar to paging hardware, where the translation information for a page of data bytes is compressed to a single translation value and cached in the TLB. MMP stores protection information in tables in protected system memory, just as paging hardware stores translation information in page tables. MMP is well suited to improving the robustness of modern software. Modern software development favors modules (or plugins) as a way to structure and provide extensibility for large systems, like operating systems, web servers and web clients. Protection between modules written in unsafe languages is currently provided only by programmer convention, reducing system stability. Device drivers, which are implemented as loadable modules, are now the most frequent source of operating system crashes (e.g., 85% of Windows XP crashes in one study [SBL03]).
MMP provides a mechanism to enforce module boundaries, increasing system robustness by isolating modules from each other and making all memory sharing explicit. We implement the MMP hardware in a simulator and modify a version of the Linux 2.4.19 operating system to use it. Linux loads its device drivers as kernel module extensions, and MMP enforces the module boundaries, allowing the device drivers access only to the memory they need to function. The memory isolation provided by MMP increases Linux's resistance to programmer error, and exposed two kernel bugs in common, heavily tested drivers. Experiments with several benchmarks where MMP was used extensively indicate that the space taken by the MMP data structures is less than 11% of the memory used by the kernel, and that the kernel's runtime, according to a simple performance model, increases by less than 12% (relative to an unmodified kernel). by Emmett Jethro Witchel. Ph.D.
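The abstract's analogy between protection tables and page tables can be illustrated with a small model: permissions live in a table keyed by address block, and recent lookups are cached, much as a TLB caches translations. This is only a sketch of the idea; the block size, permission encoding, and class names below are illustrative assumptions, not MMP's actual hardware design.

```python
# Toy model of table-based, fine-grained memory protection (illustrative
# only). Protection info is kept separate from data, in a table indexed by
# address block, with a small cache of recent entries standing in for the
# hardware's protection lookaside buffer.

BLOCK_SIZE = 32  # hypothetical fine-grained granularity: one entry per 32 bytes

NONE, READ, READ_WRITE = 0, 1, 2  # simplified permission levels

class ProtectionDomain:
    def __init__(self):
        self.table = {}  # block index -> permission (backing store in memory)
        self.plb = {}    # cache of recently checked blocks ("PLB")

    def grant(self, addr, length, perm):
        """Set the permission for every block overlapping [addr, addr+length)."""
        first = addr // BLOCK_SIZE
        last = (addr + length - 1) // BLOCK_SIZE
        for block in range(first, last + 1):
            self.table[block] = perm
        self.plb.clear()  # keep the cache consistent after a permission change

    def check(self, addr, write=False):
        """Return True if the access is permitted in this domain."""
        block = addr // BLOCK_SIZE
        perm = self.plb.get(block)
        if perm is None:                       # cache miss: walk the table
            perm = self.table.get(block, NONE)
            self.plb[block] = perm
        required = READ_WRITE if write else READ
        return perm >= required
```

In this model, a driver module's domain would be granted only the buffers it needs, so a stray write outside them fails the `check`, which is the isolation property the thesis exploits to catch driver bugs.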

    Concurrency Platforms for Real-Time and Cyber-Physical Systems

    Parallel processing is an important way to satisfy the increasingly demanding computational needs of modern real-time and cyber-physical systems, but existing parallel computing technologies primarily emphasize high throughput and average-case performance metrics, which are largely unsuitable for direct application to real-time, safety-critical contexts. This work contrasts two concurrency platforms designed to achieve predictable worst-case parallel performance for soft real-time workloads with millisecond periods and longer. One of these platforms then serves as the basis for the CyberMech platform, which enables parallel real-time computing for a novel yet representative application called Real-Time Hybrid Simulation (RTHS). RTHS combines demanding parallel real-time computation with real-time simulation and control in an earthquake engineering laboratory environment, and the results concerning RTHS constitute a reasonably comprehensive survey of parallel real-time computing in the static context, where the size, shape, timing constraints, and computational requirements of workloads are fixed prior to system runtime. Collectively, these contributions constitute the first published implementations and evaluations of general-purpose concurrency platforms for real-time and cyber-physical systems, explore two fundamentally different design spaces for such systems, and demonstrate the utility and tradeoffs of parallel computing for statically determined real-time and cyber-physical systems.

    The Open Source Way of Working: a New Paradigm for the Division of Labour in Software Development?

    The interest that the Open Source Software Development Model has recently raised amongst social scientists has resulted in an accumulation of relevant research concerned with explaining and describing the motivations of Open Source developers and the advantages the Open Source methodology has over traditional proprietary software development models. However, the existing literature has often examined the Open Source phenomenon from an excessively abstract and idealised perspective of the common interests of open source developers, thereby neglecting the very important organisational and institutional aspects of communities of individuals that may, in fact, have diverse interests and motivations. It is the aim of this paper to begin remedying this shortcoming by analysing the sources of authority in Open Source projects and the hierarchical structures according to which this authority is organised and distributed inside them. In order to do so, a theoretical framework based on empirical evidence extracted from a variety of projects is built, its main concerns being the description and explanation of the recruitment, enculturation, promotion and conflict resolution dynamics present in Open Source projects. The paper argues that 'distributed authority' is a principal means employed by such communities to increase stability, diminish the severity and scope of conflicts over technical direction, and ease the problems of assessing the quality of contributions. The paper also argues that distributed authority is principally derived from interpersonal interaction and the construction of trust between individuals drawn to the project by diverse interests that are mediated and moderated through participants' common interest in the project's successful outcome. The paper presents several conclusions concerning the governance of open source communities and priorities for future research. Keywords: open source software, hierarchies, trust, teams, co-operation.

    Government Preferences for Promoting Open-Source Software: A Solution in Search of a Problem

    Governments around the world are making or considering efforts to promote open-source software (typically produced by cooperatives of individuals) at the expense of proprietary software (generally sold by for-profit software developers). This article examines the economic basis for these kinds of government interventions in the market. It first provides some background on the software industry. The article discusses the industrial organization and performance of the proprietary software business and describes how the open-source movement produces and distributes software. It then surveys current government proposals and initiatives to support open-source software and examines whether there is a significant market failure that would justify such intervention in the software industry. The article concludes that the software industry has performed remarkably well over the past 20 years in the absence of government intervention. There is no evidence of any significant market failures in the provision of commercial software, and no evidence that the establishment of policy preferences in favor of open-source software on the part of governments would increase consumer welfare.

    Using program behaviour to exploit heterogeneous multi-core processors

    Multi-core CPU architectures have become prevalent in recent years. A number of multi-core CPUs consist of not only multiple processing cores, but multiple different types of processing cores, each with different capabilities and specialisations. These heterogeneous multi-core architectures (HMAs) can deliver exceptional performance; however, they are notoriously difficult to program effectively. This dissertation investigates the feasibility of ameliorating many of the difficulties encountered in application development on HMA processors, by employing a behaviour aware runtime system. This runtime system provides applications with the illusion of executing on a homogeneous architecture, by presenting a homogeneous virtual machine interface. The runtime system uses knowledge of a program's execution behaviour, gained through explicit code annotations, static analysis or runtime monitoring, to inform its resource allocation and scheduling decisions, such that the application makes best use of the HMA's heterogeneous processing cores. The goal of this runtime system is to enable non-specialist application developers to write applications that can exploit an HMA, without the developer requiring in-depth knowledge of the HMA's design. This dissertation describes the development of a Java runtime system, called Hera-JVM, aimed at investigating this premise. Hera-JVM supports the execution of unmodified Java applications on both processing core types of the heterogeneous IBM Cell processor. An application's threads of execution can be transparently migrated between the Cell's different core types by Hera-JVM, without requiring the application's involvement. A number of real-world Java benchmarks are executed across both of the Cell's core types, to evaluate the efficacy of abstracting a heterogeneous architecture behind a homogeneous virtual machine. 
By characterising the performance of each of the Cell processor's core types under different program behaviours, a set of influential program behaviour characteristics is uncovered. A set of code annotations is presented that enables program code to be tagged with these behaviour characteristics, allowing a runtime system to track a program's behaviour throughout its execution. This information is fed into a cost function, which Hera-JVM uses to automatically estimate whether the executing program's threads of execution would benefit from being migrated to a different core type, given their current behaviour characteristics. The use of history, hysteresis and trend tracking by this cost function is explored as a means of increasing its stability and limiting detrimental thread migrations. The effectiveness of a number of different migration strategies is also investigated under real-world Java benchmarks, with the most effective found to be a strategy that targets code, such that a thread is migrated whenever it executes that code. This dissertation also investigates the use of runtime monitoring to enable a runtime system to automatically infer a program's behaviour characteristics, without the need for explicit code annotations. A lightweight runtime behaviour monitoring system is developed, and its effectiveness at choosing the most appropriate core type on which to execute a set of real-world Java benchmarks is examined. Combining explicit behaviour characteristic annotations with those characteristics monitored at runtime is also explored. Finally, an initial investigation is performed into the use of behaviour characteristics to improve application performance under a different type of heterogeneous architecture, specifically a non-uniform memory access (NUMA) architecture. Thread teams are proposed as a method of automatically clustering communicating threads onto the same NUMA node, thereby reducing data access overheads.
Evaluation of this approach shows that it is effective at improving application performance, provided the application's threads can be partitioned across the available NUMA nodes of a system. The findings of this work demonstrate that a runtime system with a homogeneous virtual machine interface can reduce the challenge of application development for HMA processors, whilst still being able to exploit such a processor by taking program behaviour into account.
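The cost-function-with-hysteresis idea described in this abstract can be sketched in a few lines. This is not Hera-JVM's implementation: the behaviour characteristics, the per-core weights, and the cost constants are invented for illustration (only the PPE/SPE core names come from the Cell processor itself).

```python
# Illustrative migration decision for a heterogeneous two-core-type system.
# A thread migrates only when the estimated benefit of the other core type
# exceeds the migration cost plus a hysteresis margin, which damps
# oscillating (detrimental) migrations. All constants are assumptions.

MIGRATION_COST = 0.15   # hypothetical fixed cost of one migration
HYSTERESIS = 0.10       # safety margin against thrashing between core types

# Assumed weights: how well each core type handles each behaviour
# characteristic (e.g. floating-point intensity, memory-access locality).
WEIGHTS = {
    "PPE": {"fp": 0.4, "locality": 0.9},
    "SPE": {"fp": 1.0, "locality": 0.3},
}

def score(core, behaviour):
    """Estimate the relative throughput of `behaviour` on `core`."""
    return sum(WEIGHTS[core][k] * v for k, v in behaviour.items())

def should_migrate(current_core, behaviour):
    """True if the thread's current behaviour favours the other core type
    by more than the migration cost plus the hysteresis margin."""
    other = "SPE" if current_core == "PPE" else "PPE"
    benefit = score(other, behaviour) - score(current_core, behaviour)
    return benefit > MIGRATION_COST + HYSTERESIS
```

Feeding a history or trend of recent behaviour samples into `behaviour`, rather than an instantaneous reading, is the kind of smoothing the dissertation explores to further stabilise this decision.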

    Reverse Engineering: WiMAX and IEEE 802.16e

    Wireless communication is part of everyday life. As it is incorporated into new products and services, it brings additional security risks and requirements. A thorough understanding of wireless protocols is necessary for network administrators and manufacturers. Though most wireless protocols have strict standards, many parts of a hardware implementation may deviate from the standard and be proprietary. In these situations reverse engineering must be conducted to fully understand the strengths and vulnerabilities of the communication medium. New 4G broadband wireless access protocols, including IEEE 802.16e and WiMAX, offer higher data rates and wider coverage than earlier 3G technologies. Many security vulnerabilities, including various Denial of Service (DoS) attacks, have been discovered in 3G protocols and the original IEEE 802.16 standard. Many of these vulnerabilities persist, and new security flaws exist, in the revised standard IEEE 802.16e. Most of the vulnerabilities already discovered allow DoS attacks to be carried out on WiMAX networks. This study examines and analyzes a new DoS attack on the IEEE 802.16e standard. We investigate how system parameters for the WiMAX Bandwidth Contention Resolution (BCR) process affect network vulnerability to DoS attacks. As this investigation developed and transitioned into analyzing hardware implementations, reverse engineering was needed to locate and modify the BCR system parameters. Controlling the BCR system parameters in hardware is not a normal task: the protocol allows only the BS to set them. The BS gives one setting of the BCR system parameters to all WiMAX clients on the network, and every client is supposed to follow these settings. Our study looks at what happens if a set of users, the attackers, do not follow the BS's settings and set their BCR system parameters independently.
We hypothesize and analyze different techniques to do this in hardware, with the goal of replicating previous software simulations that examined this behavior. This document details our approaches to reverse engineering IEEE 802.16e and WiMAX. Additionally, we look at network security analysis and how to design experiments to reduce time and cost; factorial experiment design and ANOVA analysis are the solution. Using these approaches, one can test multiple factors in parallel, producing robust, repeatable and statistically significant results. By treating all other parameters as noise when testing first-order effects, second- and third-order effects can be analyzed with less significance. The details of this type of experimental design are given, along with NS-2 simulations and hardware experiments that analyze the BCR system parameters. The purpose of this paper is to serve as a guide for reverse engineering network protocols and conducting network experiments. As wireless communication and network security become ubiquitous, the methods and techniques detailed in this study become increasingly important. This document can serve as a guide to reduce the time and effort needed when reverse engineering other communication protocols and conducting network experiments.
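The effect of attackers ignoring the BS-assigned contention parameters can be illustrated with a toy simulation. This is a simplified sketch, not the standard's actual contention state machine: each station draws a random deferral slot from a window of size 2^exponent, and the station with the strictly smallest deferral is treated as winning the contention opportunity. The function and parameter names are assumptions for illustration.

```python
# Toy model of contention resolution with per-station backoff windows.
# Compliant stations use the BS-assigned window exponent; attackers choose
# a smaller one, so they systematically draw earlier deferral slots and
# crowd compliant stations out of bandwidth-request opportunities.
import random

def simulate(n_honest, n_attackers, honest_exp, attacker_exp, rounds, seed=0):
    """Return the fraction of rounds in which an attacker outright wins
    the contention slot (strictly smallest deferral; ties go to honest
    stations here, which only understates the attack)."""
    rng = random.Random(seed)
    attacker_wins = 0
    for _ in range(rounds):
        best_honest = min(rng.randrange(2 ** honest_exp)
                          for _ in range(n_honest))
        best_attacker = min(rng.randrange(2 ** attacker_exp)
                            for _ in range(n_attackers))
        if best_attacker < best_honest:
            attacker_wins += 1
    return attacker_wins / rounds
```

With equal exponents the two attackers win roughly in proportion to their numbers; shrinking `attacker_exp` lets a small minority dominate the contention slots, which is the starvation-style DoS behaviour the study's simulations and hardware experiments probe.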