137 research outputs found

    Deterministic 1-k routing on meshes with applications to worm-hole routing

    In 1-k routing, each of the $n^2$ processing units of an $n \times n$ mesh-connected computer initially holds one packet, which must be routed such that any processor is the destination of at most $k$ packets. This problem reflects the practical desire for routing schemes more general than the popular routing of permutations. 1-k routing also has implications for hot-potato worm-hole routing, which is of great importance for real-world systems. We present a near-optimal deterministic algorithm running in $\sqrt{k} \cdot n / 2 + \mathcal{O}(n)$ steps. We give a second algorithm with slightly worse routing time but working queue size three. Applying this algorithm considerably reduces the routing time of hot-potato worm-hole routing. Non-trivial extensions are given to the general l-k routing problem and to routing on higher-dimensional meshes. Finally, we show that k-k routing can be performed in $\mathcal{O}(k \cdot n)$ steps with working queue size four. Hereby, the hot-potato worm-hole routing problem can be solved in $\mathcal{O}(k^{3/2} \cdot n)$ steps.
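
    As a small worked instance of the first bound above (plain arithmetic on the formula stated in the abstract, nothing new added): taking $k = 4$, i.e. at most four packets per destination,

    \[
      \frac{\sqrt{k}\,n}{2} + \mathcal{O}(n) \;=\; \frac{\sqrt{4}\,n}{2} + \mathcal{O}(n) \;=\; n + \mathcal{O}(n) \text{ steps},
    \]

    so the dominant term grows only with $\sqrt{k}$, not with $k$ itself.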

    Achieving parallel performance in scientific computations


    Statistical learning in network architecture

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]).

    The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsic dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods as well as develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams. Our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential for using learning methods to address several distinct problems on the Internet and second to illuminate design principles in building such intelligent systems in network architecture.

    by Robert Edward Beverly, IV. Ph.D.
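
    To make the "ensembles of weak classifiers" idea above concrete, here is a minimal Python sketch of majority voting over a few weak signals about a packet's source address. The feature names, thresholds, and the three toy classifiers are illustrative assumptions, not the thesis's actual features or implementation.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        # Hypothetical weak signals about a received packet (illustrative only).
        src_prefix_seen_before: bool      # source prefix previously observed here?
        ttl_consistent: bool              # arriving TTL consistent with history?
        arrival_interface_expected: bool  # arrived on the expected interface?

    # Each weak classifier votes +1 (looks legitimate) or -1 (looks forged).
    WEAK_CLASSIFIERS = [
        lambda p: 1 if p.src_prefix_seen_before else -1,
        lambda p: 1 if p.ttl_consistent else -1,
        lambda p: 1 if p.arrival_interface_expected else -1,
    ]

    def looks_forged(packet: Packet, threshold: int = 0) -> bool:
        # Simple ensemble: sum the votes; a score below the threshold is flagged.
        score = sum(clf(packet) for clf in WEAK_CLASSIFIERS)
        return score < threshold

    suspicious = Packet(src_prefix_seen_before=False, ttl_consistent=False,
                        arrival_interface_expected=True)
    print(looks_forged(suspicious))  # True: two of the three weak votes say "forged"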

    Techniques for Processing TCP/IP Flow Content in Network Switches at Gigabit Line Rates

    The growth of the Internet has enabled it to become a critical component used by businesses, governments and individuals. While most of the traffic on the Internet is legitimate, a proportion of the traffic includes worms, computer viruses, network intrusions, computer espionage, security breaches and illegal behavior. This rogue traffic causes computer and network outages, reduces network throughput, and costs governments and companies billions of dollars each year. This dissertation investigates the problems associated with TCP stream processing in high-speed networks. It describes an architecture that simplifies the processing of TCP data streams in these environments and presents a hardware circuit capable of TCP stream processing on multi-gigabit networks for millions of simultaneous network connections. Live Internet traffic is analyzed using this new TCP processing circuit.
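
    As a rough software analogue of per-connection TCP stream processing (the dissertation describes a hardware circuit, so the class and field names below are illustrative assumptions, not its design), a flow table keyed by the TCP/IP 5-tuple can track the next expected sequence number per connection and release only in-order payload for content inspection:

    from collections import namedtuple

    # Connections are identified by the classic 5-tuple.
    FlowKey = namedtuple("FlowKey", "src_ip src_port dst_ip dst_port proto")

    class FlowTable:
        def __init__(self):
            self.flows = {}  # FlowKey -> next expected TCP sequence number

        def process_segment(self, key: FlowKey, seq: int, payload: bytes) -> bytes:
            # Return in-order payload for inspection; this simplified version
            # silently ignores out-of-order data instead of buffering it.
            expected = self.flows.get(key, seq)
            if seq == expected:
                self.flows[key] = seq + len(payload)
                return payload
            return b""

    table = FlowTable()
    key = FlowKey("10.0.0.1", 1234, "10.0.0.2", 80, "tcp")
    print(table.process_segment(key, 1000, b"GET / HTTP/1.1\r\n"))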

    Progress Report: 1991-1994


    Towards Simulation and Emulation of Large-Scale Computer Networks

    Developing analytical models that can accurately describe behaviors of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet-scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems. First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers and the user facing concerns of configuring and interacting with large-scale network models. Second, this work deals with reducing memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments. Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real-time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
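
    The memory-reduction idea of exploiting structural duplication can be sketched as simple structure sharing: structurally identical sub-network templates are stored once and referenced by many instances, with only per-instance state kept separately. The Python below is a hypothetical illustration of the general principle, not the simulator's actual mechanism.

    class SubnetTemplate:
        # Structural description shared by all subnets with identical layout.
        def __init__(self, n_hosts: int, link_bw_mbps: float):
            self.n_hosts = n_hosts
            self.link_bw_mbps = link_bw_mbps

    _template_cache = {}

    def get_template(n_hosts: int, link_bw_mbps: float) -> SubnetTemplate:
        # Structurally identical subnets reuse one shared template object.
        key = (n_hosts, link_bw_mbps)
        if key not in _template_cache:
            _template_cache[key] = SubnetTemplate(n_hosts, link_bw_mbps)
        return _template_cache[key]

    # 10,000 campus networks with the same internal structure share one template;
    # only per-instance state (here, the address block) is stored separately.
    campuses = [{"template": get_template(254, 100.0),
                 "net": f"10.{i // 256}.{i % 256}.0/24"}
                for i in range(10_000)]
    print(len(_template_cache))  # 1: a single structural object backs all instances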

    Secure Integrated Routing and Localization in Wireless Optical Sensor Networks

    Wireless ad hoc and sensor networks are envisioned to be self-organizing and autonomous networks, that may be randomly deployed where no fixed infrastructure is either feasible or cost-effective. The successful commercialization of such networks depends on the feasible implementation of network services to support security-aware applications. Recently, free space optical (FSO) communication has emerged as a viable technology for broadband distributed wireless optical sensor network (WOSN) applications. The challenges of employing FSO include its susceptibility to adverse weather conditions and the line of sight requirement between two communicating nodes. In addition, it is necessary to consider security at the initial design phase of any network and routing protocol. This dissertation addresses the feasibility of randomly deployed WOSNs employing broad beam FSO with regard to the network layer, in which two important problems are specifically investigated. First, we address the parameter assignment problem which considers the relationship amongst the physical layer parameters of node density, transmission radius and beam divergence of the FSO signal in order to yield probabilistic guarantees on network connectivity. We analyze the node isolation property of WOSNs, and its relation to the connectivity of the network. Theoretical analysis and experimental investigation were conducted to assess the effects of hierarchical clustering as well as fading due to atmospheric turbulence on connectivity, thereby demonstrating the design choices necessary to make the random deployment of the WOSN feasible. Second, we propose a novel light-weight circuit-based, secure and integrated routing and localization paradigm within the WOSN, that leverages the resources of the base station. Our scheme exploits the hierarchical cluster-based organization of the network, and the directionality of links to deliver enhanced security performance including per hop and broadcast authentication, confidentiality, integrity and freshness of routing signals. We perform security and attack analysis and synthesis to characterize the protocol’s performance, compared to existing schemes, and demonstrate its superior performance for WOSNs. Through the investigation of this dissertation, we demonstrate the fundamental tradeoff between security and connectivity in WOSNs, and illustrate how the transmission radius may be used as a high sensitivity tuning parameter to balance these two metrics of network performance. We also present WOSNs as a field of study that opens up several directions for novel research, and encompasses problems such as connectivity analysis, secure routing and localization, intrusion detection, topology control, secure data aggregation and novel attack scenarios.
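
    As a rough illustration of why the transmission radius acts as such a sensitive tuning parameter, one can use the generic Poisson-deployment sketch (a standard textbook model, not the dissertation's exact analysis): with node density $\rho$, transmission radius $r$ and beam divergence angle $\theta$, a node finds no neighbor inside its beam sector with probability

    \[
      P_{\text{no in-beam neighbor}} \;=\; e^{-\rho A}, \qquad A \;=\; \tfrac{1}{2}\,\theta\, r^{2},
    \]

    so the isolation probability falls off exponentially in $r^{2}$, and small changes in $r$ move the network sharply between disconnected and connected regimes.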

    The Public and the Private at the United States Border with Cyberspace

    In the twenty-first century, a state can come to know more about each of its citizens via surveillance than ever before in human history. Some states are beginning to exercise this ability. Much of this additional surveillance ability derives from enhanced access to digital information. This digital information comes in the form of bits of data that flow through both rivers and oceans of data. These rivers are full of information that passes by a given point, or series of points, in a network and can be intercepted; these oceans are also stocked with information that can be searched after the fact. These data are held in private hands as well as public. The most effective (or invasive, depending upon your vantage point) new forms of surveillance often involve a search of data held in a combination of private and public hands. Both private and public entities are increasingly encouraged to retain more data as a result of legal and market pressures. There are essentially no Fourth Amendment protections for U.S. citizens whose data is collected by a private third party and turned over to the state. Nor are there such constitutional protections for the re-use of privately collected data by state actors. The few statutory provisions that protect citizens in these contexts are out-of-date and riddled with loopholes. This inquiry prompts hard questions about the need to redefine the public and the private in a digital age. The meaning of the public and the private is changing, in material ways, both from the perspective of the observer and the observed. We need to rethink legal protections for citizens from state surveillance in a digital age as a result of this third-party data problem.

    Internet Daemons: Digital Communications Possessed

    We’re used to talking about how tech giants like Google, Facebook, and Amazon rule the internet, but what about daemons? Ubiquitous programs that have colonized the Net’s infrastructure—as well as the devices we use to access it—daemons are little known. Fenwick McKelvey weaves together history, theory, and policy to give a full account of where daemons come from and how they influence our lives—including their role in hot-button issues like network neutrality. Going back to Victorian times and the popular thought experiment Maxwell’s Demon, McKelvey charts how daemons evolved from concept to reality, eventually blossoming into the pandaemonium of code-based creatures that today orchestrates our internet. Digging into real-life examples like sluggish connection speeds, Comcast’s efforts to control peer-to-peer networking, and Pirate Bay’s attempts to elude daemonic control (and skirt copyright), McKelvey shows how daemons have been central to the internet, greatly influencing everyday users. Internet Daemons asks important questions about how much control is being handed over to these automated, autonomous programs, and the consequences for transparency and oversight.

    Table of Contents:
    Abbreviations and Technical Terms
    Introduction
    1. The Devil We Know: Maxwell’s Demon, Cyborg Sciences, and Flow Control
    2. Possessing Infrastructure: Nonsynchronous Communication, IMPs, and Optimization
    3. IMPs, OLIVERs, and Gateways: Internetworking before the Internet
    4. Pandaemonium: The Internet as Daemons
    5. Suffering from Buffering? Affects of Flow Control
    6. The Disoptimized: The Ambiguous Tactics of the Pirate Bay
    7. A Crescendo of Online Interactive Debugging? Gamers, Publics and Daemons
    Conclusion
    Acknowledgments
    Appendix: Internet Measurement and Mediators
    Notes
    Bibliography
    Index

    Reviews:
    Beneath social media, beneath search, Internet Daemons reveals another layer of algorithms: deeper, burrowed into information networks. Fenwick McKelvey is the best kind of intellectual spelunker, taking us deep into the infrastructure and shining his light on these obscure but vital mechanisms. What he has delivered is a precise and provocative rethinking of how to conceive of power in and among networks. —Tarleton Gillespie, author of Custodians of the Internet
    Internet Daemons is an original and important contribution to the field of digital media studies. Fenwick McKelvey extensively maps and analyzes how daemons influence data exchanges across Internet infrastructures. This study insightfully demonstrates how daemons are transformative entities that enable particular ways of transferring information and connecting up communication, with significant social and political consequences. —Jennifer Gabrys, author of Program Earth

    NoC-based Architectures for Real-Time Applications: Performance Analysis and Design Space Exploration

    Monoprocessor architectures have reached their limits with regard to the computing power they offer versus the needs of modern systems. Although multicore architectures partially mitigate this limitation and are commonly used nowadays, they usually rely on intrinsically non-scalable buses to interconnect the cores. The manycore paradigm was proposed to tackle the scalability issue of bus-based multicore processors. It can scale up to hundreds of processing elements (PEs) on a single chip, by organizing them into computing tiles (holding one or several PEs). Intercore communication is usually done using a Network-on-Chip (NoC) that consists of interconnected on-chip routers allowing communication between tiles. However, manycore architectures raise numerous challenges, particularly for real-time applications. First, NoC-based communication tends to generate complex blocking patterns when congestion occurs, which complicates the analysis, since computing accurate worst-case delays becomes difficult. Second, running many applications on large Systems-on-Chip such as manycore architectures makes system design particularly crucial and complex. On one hand, it complicates Design Space Exploration, as it multiplies the implementation alternatives that will guarantee the desired functionalities. On the other hand, once a hardware architecture is chosen, mapping the tasks of all applications on the platform is a hard problem, and finding an optimal solution in a reasonable amount of time is not always possible. Therefore, our first contributions address the need for computing tight worst-case delay bounds in wormhole NoCs. We first propose a buffer-aware worst-case timing analysis (BATA) to derive upper bounds on the worst-case end-to-end delays of constant-bit rate data flows transmitted over a NoC on a manycore architecture. We then extend BATA to cover a wider range of traffic types, including bursty traffic flows, and heterogeneous architectures. The introduced method is called G-BATA for Graph-based BATA. In addition to covering a wider range of assumptions, G-BATA improves the computation time, thus increasing the scalability of the method. In the second part, we develop a method addressing design and mapping for applications with real-time constraints on manycore platforms. It combines model-based engineering tools (TTool) and simulation with our analytical verification technique (G-BATA) and tools (WoPANets) to provide an efficient design space exploration framework. Finally, we validate our contributions through (a) a series of experiments on a physical platform and (b) two case studies taken from the real world: an autonomous vehicle control application and a 5G signal decoder application.
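
    To illustrate the flavor of a worst-case delay bound on a wormhole NoC (this toy sketch is not BATA or G-BATA; the flows, routes and timings are made-up assumptions), one crude bound charges a flow, at every link it traverses, one packet time of each directly interfering flow in addition to its own:

    from typing import Dict, List

    # Made-up flows, routes (as lists of directed links) and per-packet times.
    routes: Dict[str, List[str]] = {
        "f1": ["r0->r1", "r1->r2", "r2->r3"],
        "f2": ["r4->r1", "r1->r2"],   # shares link r1->r2 with f1
        "f3": ["r5->r2", "r2->r3"],   # shares link r2->r3 with f1
    }
    packet_time = {"f1": 4, "f2": 6, "f3": 3}  # cycles to forward one packet

    def naive_delay_bound(flow: str) -> int:
        # At every link of the flow, charge its own packet time plus one packet
        # time of every other flow that also uses that link.
        total = 0
        for link in routes[flow]:
            interferers = [g for g, r in routes.items() if g != flow and link in r]
            total += packet_time[flow] + sum(packet_time[g] for g in interferers)
        return total

    print(naive_delay_bound("f1"))  # 21 cycles under these toy assumptions

    The thesis's buffer-aware analyses aim to replace such crude per-hop charging with much tighter bounds that account for buffering and indirect blocking.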
