242 research outputs found

    The end of the Intel age

    Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 108-111).
    Executive Summary - The End of the Intel Era. Today, Intel is nearly synonymous with computers. In the past thirty years, nearly all personal computers and the great majority of servers have shipped with a processor based on the x86 architecture, of which Intel is the dominant vendor. Yet the past few years have seen a subtle yet remarkable convergence of industry trends that may well topple the semiconductor giant. For the past three decades, computers have largely assumed the same shape and form regardless of their task: laptops, desktops, and servers have all been based on the same open modular architecture established by IBM. This is not likely to be the case going forward. The past decade has seen the rise of embedded computing, perhaps best epitomized by smartphones and tablet computers. Instead of the standard PC architecture, where individual components can be easily exchanged, embedded devices are typically modular designs with highly integrated physical components: independent functional units, designed by independent companies, are integrated onto the same piece of silicon to meet system cost and performance targets. Instead of a standard x86 processor, each device category is likely to have a chip optimized for its specific application. At the same time that the form of computing is changing, we are witnessing a redistribution of where computing power resides, toward cloud computing and data centers. These have ordinarily been the province of Intel-based machines, but data centers have moved from standard off-the-shelf PCs to custom-designed motherboards. Again, we are seeing a shift from the modular personal computer architecture to one customized for the task at hand. Another concern for Intel is that the standard metrics by which products compete are in flux. For both embedded systems and data centers, operational costs and constraints are starting to outweigh initial outlay costs; one example is the industry shift in emphasis from overall performance to system power efficiency. Intel has been a relentless driver of processor performance, and this is a significant change of focus for its R&D divisions. Of all Intel's competitors, ARM best represents the magnitude of these challenges and is well positioned to take advantage of all these trends. Its business model of licensing its designs is well suited to a world of customized architectures, and its extensive experience in low-power embedded devices has given it an advantage over Intel in processor power efficiency. Intel is heavily invested in its existing vision of the market: it has always maintained a manufacturing process advantage through tremendous investments in new foundries, and it has long championed the open PC modular architecture. Time will ultimately show whether Intel is capable of meeting these growing challenges. Yet it is clear that in order to do so, it must make radical changes to itself; one may ask whether it would even be the same company that emerges.
    by Robert Swope Fleming. S.M. in Engineering and Management.

    Understanding Digital Technology’s Evolution and the Path of Measured Productivity Growth: Present and Future in the Mirror of the Past

    Three styles of explanation have been advanced by economists seeking to account for the so-called 'productivity paradox'. The coincidence of a persisting slowdown in the growth of measured total factor productivity (TFP) in the US since the mid-1970s with the wave of information technology (IT) innovations is said by some to be an illusion due to the mismeasurement of real output growth; by others to expose mistaken expectations about the benefits of computerization; and by still others to reflect the volume of intangible investments in 'learning' and the time required for ancillary innovations that allow the new digital technologies to be applied in ways that are reflected in measured productivity growth. This paper argues that, rather than viewing these as competing hypotheses, the dynamics of the transition to a new technological and economic regime based upon a general purpose technology (GPT) should be understood as likely to give rise to all three 'effects'. It more fully articulates and supports this thesis, which was first advanced in the 'computer and dynamo' papers by David (1990, 1991). The relevance of that historical experience is re-asserted and supported by further evidence rebutting skeptics who have argued that the diffusion of electrification and computerization have little in common. New evidence is produced about the links between IT use, mass customization, and the upward bias of output price deflators arising from the method used to 'chain in' new product prices. The measurement bias due to the exclusion of intangible investments from the scope of the official national product accounts is also examined. Further, it is argued that the development of the general-purpose PC delayed the re-organization of businesses along lines that would have more directly raised task productivity, even though the technologies yielded positive 'revenue productivity' gains for large companies. The paper concludes by indicating the emerging technical and organizational developments that are likely to deliver a sustained surge of measured TFP growth during the decades that lie immediately ahead.
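    The 'chaining in' bias mentioned above can be made concrete with a stylized formula. The sketch below is an illustrative reconstruction, not the paper's own derivation: it assumes a chained Laspeyres-type output deflator in which a newly introduced product is linked into the index only after a lag, so the steep price declines typical of a product's first periods on the market never register.

    % Stylized chained Laspeyres-type output deflator; the notation is invented for this
    % illustration and is not taken from the paper.
    % p_{i,t}, q_{i,t}: price and quantity of product i in period t; s_i: period in which
    % product i is introduced; L >= 1: lag with which new products are linked into the index.
    \[
      P_{0,T} \;=\; \prod_{t=1}^{T}
        \frac{\sum_{i \in N_t} p_{i,t}\, q_{i,t-1}}
             {\sum_{i \in N_t} p_{i,t-1}\, q_{i,t-1}},
      \qquad
      N_t \;=\; \{\, i : s_i \le t - 1 - L \,\}.
    \]
    % Price changes during a product's first L periods on the market never enter any link of
    % the chain, so when new (for example, mass-customized IT) products fall steeply in price
    % right after introduction, P_{0,T} overstates inflation, and real output and TFP growth
    % are correspondingly understated.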

    FPGA acceleration of sequence analysis tools in bioinformatics

    Thesis (Ph.D.)--Boston University. With advances in biotechnology and computing power, biological data are being produced at an exceptional rate. The purpose of this study is to analyze the application of FPGAs to accelerate high-impact production biosequence analysis tools. Compared with other alternatives, FPGAs offer huge compute power, lower power consumption, and reasonable flexibility. BLAST has become the de facto standard in bioinformatic approximate string matching, so its acceleration is of fundamental importance. It is a complex, highly optimized system, consisting of tens of thousands of lines of code and a large number of heuristics. Our idea is to emulate the main phases of its algorithm on the FPGA. Utilizing our FPGA engine, we quickly reduce the database to a small fraction of its original size, and then use the original code to process the query against that reduced set. Using a standard FPGA-based system, we achieved a 12x speedup over a highly optimized multithreaded reference code. Multiple Sequence Alignment (MSA)--the extension of pairwise sequence alignment to multiple sequences--is critical to solving many biological problems. Previous attempts to accelerate Clustal-W, the most commonly used MSA code, have directly mapped a portion of the code to the FPGA. We use a new approach: we apply prefiltering of the kind commonly used in BLAST to perform the initial all-pairs alignments. This results in a speedup of 80x to 190x over the CPU code (8 cores). The quality is comparable to the original according to a commonly used benchmark suite evaluated with respect to multiple distance metrics. The challenge in FPGA-based acceleration is finding a suitable application mapping; unfortunately, many software heuristics do not map directly to hardware, so other methods must be applied. One is restructuring, in which an entirely new algorithm is applied. Another is to analyze application utilization and develop accuracy/performance tradeoffs. Using our prefiltering approach and novel FPGA programming models we have achieved significant speedup over the reference programs. We have applied approximation, seeding, and filtering to this end. The bulk of this study is to introduce the pros and cons of these acceleration models for biosequence analysis tools.
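    The prefiltering strategy described above, shrinking the database to a small candidate set before handing the query to the unmodified aligner, can be sketched in software. The snippet below is an illustrative stand-in, not the thesis's FPGA implementation: it assumes a simple exact k-mer seed match (BLAST-style word matching) as the filter criterion, and the function names and parameters are invented for the example.

    # Illustrative k-mer prefilter (software stand-in for the FPGA engine described above).
    # Sequences sharing no exact k-mer seed with the query are discarded; survivors are
    # passed to the original, unmodified aligner (e.g., BLAST) for full scoring.

    def kmers(seq: str, k: int) -> set[str]:
        """Return the set of all length-k substrings (seeds) of seq."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def prefilter(query: str, database: list[str], k: int = 11) -> list[str]:
        """Keep only database sequences that share at least one k-mer seed with the query."""
        query_seeds = kmers(query, k)
        return [s for s in database if not query_seeds.isdisjoint(kmers(s, k))]

    if __name__ == "__main__":
        db = ["ACGTACGTGGTACCATGCAT", "TTTTTTTTTTTTTTTTTTTT", "GGTACCATGCATACGT"]
        query = "CCGGTACCATGCATTA"
        survivors = prefilter(query, db, k=8)
        # The reduced set 'survivors' would then be handed to the full alignment code.
        print(f"{len(survivors)} of {len(db)} sequences pass the seed filter")

    In the thesis, the equivalent filtering work is done by the FPGA engine; only the surviving fraction of the database is then processed by the original code.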

    Culture and Code: The Evolution of Digital Architecture and the Formation of Networked Publics

    Culture and Code traces the construction of the modern idea of the Internet and offers a potential glimpse of how that idea may change in the near future. Developed through a theoretical framework that links Sheila Jasanoff and Sang-Hyun Kim’s theory of the sociotechnical imaginary to broader theories on publics and counterpublics, Culture and Code offers a way to reframe the evolution of Internet technology and its culture as an enmeshed part of larger socio-political shifts within society. In traveling the history of the modern Internet as detailed in its technical documentation, legal documents, user-created content, and popular media, this dissertation positions the construction of the idea of the Internet and its technology as the result of an ongoing series of intersections and collisions between the sociotechnical imaginaries of three different publics: Implementors, Vendors, and Users. These publics were identified as the primary audiences of the 1989 Internet Engineering Task Force specification of the four-layer TCP/IP model that became a core part of our modern infrastructure. Using that model as a continued metaphor throughout the work, Culture and Code shows how each public’s sociotechnical imaginary developed, how they influenced and shaped one another, and the inevitable conflicts that arose, leading to a coalescing sociotechnical imaginary that is centered around vendor control while continuing to project the ideal of the empowered user.

    Tiled microprocessors

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 251-258).
    Current-day microprocessors have reached the point of diminishing returns due to inherent scalability limitations. This thesis examines the tiled microprocessor, a class of microprocessor which is physically scalable but inherits many of the desirable properties of conventional microprocessors. Tiled microprocessors are composed of an array of replicated tiles connected by a special class of network, the Scalar Operand Network (SON), which is optimized for low-latency, low-occupancy communication between remote ALUs on different tiles. Tiled microprocessors can be constructed to scale to hundreds or thousands of functional units. This thesis identifies seven key criteria for achieving physical scalability in tiled microprocessors. It employs an archetypal tiled microprocessor to examine the challenges in achieving these criteria and to explore the properties of Scalar Operand Networks. The thesis develops the field of SONs in three major ways: it introduces the 5-tuple performance metric, it describes a complete, high-frequency SON implementation, and it proposes a taxonomy, called AsTrO, for categorizing them. To develop these ideas, the thesis details the design, implementation, and analysis of a tiled microprocessor prototype, the Raw Microprocessor, which was implemented at MIT in 180 nm technology. Overall, compared to Raw, recent commercial processors with half the transistors required 30x as many lines of code, occupied 100x as many designers, contained 50x as many pre-tapeout bugs, and resulted in 33x as many post-tapeout bugs. At the same time, the Raw microprocessor proves to be more versatile in exploiting ILP, stream, and server-farm workloads with modest to large amounts of parallelism.
    by Michael Bedford Taylor. Ph.D.
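    The 5-tuple performance metric mentioned above characterizes a Scalar Operand Network by the per-operand cost components of moving a value between ALUs on different tiles. The sketch below is a minimal illustrative model, assuming the five components are send occupancy, send latency, per-hop network latency, receive latency, and receive occupancy; the class, field names, and additive cost model are invented for the example rather than taken from the thesis.

    # Illustrative model of a Scalar Operand Network (SON) 5-tuple cost metric.
    # Assumed components: <send occupancy, send latency, network hop latency,
    # receive latency, receive occupancy>; end-to-end cost is modeled additively.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SonFiveTuple:
        send_occupancy: int     # cycles the sending ALU is busy injecting the operand
        send_latency: int       # cycles before the operand enters the network
        hop_latency: int        # cycles per router/tile hop traversed
        receive_latency: int    # cycles from arrival until the operand is usable
        receive_occupancy: int  # cycles the receiving ALU spends pulling the operand in

        def operand_cost(self, hops: int) -> int:
            """Cycles to move one operand between ALUs separated by `hops` tiles."""
            return (self.send_occupancy + self.send_latency
                    + hops * self.hop_latency
                    + self.receive_latency + self.receive_occupancy)

    if __name__ == "__main__":
        # Hypothetical low-occupancy SON: cheap injection/extraction, 1 cycle per hop.
        son = SonFiveTuple(0, 1, 1, 1, 0)
        print(son.operand_cost(hops=3))  # -> 5 cycles for a 3-hop transfer

    In this model, a low-occupancy SON is one in which the first and last components are near zero, so the ALUs themselves are not stalled injecting or extracting operands.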

    Spartan Daily, March 12, 1982

    Volume 78, Issue 25

    Theology & technology: An Exploration of their relationship with special reference to the work of Albert Borgmann and intelligent transportation systems

    This thesis summarizes a large body of literature concerning the sociology, philosophy, and history of technology and the specific set of technologies concerned with Intelligent Transportation Systems (ITS). It considers technologies of various kinds within the Old and New Testaments and how technology has been understood and occasionally discussed by contemporary theologians. Intelligent Transportation Systems are a very prominent area of modern technologies that will shape the future of society in profound ways. The overall field of ITS is described, and then a specific case study concerning a set of automated highway systems applications within three states and two large national parks within the United States is presented. The case study then provides a backdrop to explore specific ways in which theology might engage in a conversation with intelligent transportation systems specifically and technology more generally. Since theologians have written relatively little about technology, we draw upon the work of a leading philosopher of technology who is informed by his Christian commitments, Albert Borgmann. Borgmann's extensive philosophy of technology and the character of contemporary life is described. Various considerations about how to create, foster, and maintain a sustained dialogue between disparate intellectual traditions and disciplines are suggested. This includes attention to goals for dialogue, the respective strengths that various parties bring to the conversation, and the willingness to hear and learn from the other. A framework to categorize interactions between theology and technology is introduced. Borgmann's ideas, coupled with those of other theologians and philosophers, are then applied to the case study. The worth of this approach is then assessed in light of what theologians might contribute to discussion and decision-making about technological systems and devices facing toward the future. Consideration is also given to what technology might contribute to the theological enterprise. The investigation demonstrates the importance of such dialogues and the viability of initiating them.

    Visual object-oriented development of parallel applications

    PhD Thesis. Developing software for parallel architectures is a notoriously difficult task, compounded further by the range of available parallel architectures. There has been little research effort invested in how to engineer parallel applications for more general problem domains than the traditional numerically intensive domain. This thesis addresses these issues. An object-oriented paradigm for the development of general-purpose parallel applications, with full lifecycle support, is proposed and investigated, and a visual programming language to support that paradigm is developed. This thesis presents experiences and results from experiments with this new model for parallel application development. Engineering and Physical Sciences Research Council.

    Design Automation of Low Power Circuits in Nano-Scale CMOS and Beyond-CMOS Technologies.

    Today’s integrated systems on chip (SoCs) usually consist of billions of transistors spanning both digital and analog blocks. Integrating such massive blocks on a single chip involves several challenges, especially when transferring analog blocks from an older technology to newer ones. Furthermore, the exponential growth of IoT devices necessitates small and low-power circuits. Hence, new devices and architectures must be investigated to meet the power and area constraints of wireless sensor networks (WSNs). In such cases, design automation becomes an essential tool to reduce the time to market of these circuits. This dissertation focuses on automating the design process of analog designs in advanced CMOS technology nodes, as well as reciprocal quantum logic (RQL) superconducting circuits. For CMOS analog circuits, our design automation technique employs digital automatic placement and routing tools to synthesize and lay out analog blocks along with digital blocks in a cell-based design approach. This technique was demonstrated in the design of a digital-to-analog converter. In the domain of RQL circuits, the automated design of several functional units of a commercial processor is presented. These automation techniques enable the design of VLSI-scale circuits in this technology. In addition to the investigation of new technologies, several new baseband signal processor architectures are presented in this dissertation. These architectures are suitable for low-power mm³-scale WSNs and enable high-frequency transceivers to operate within the power constraints of standalone IoT nodes.
    Ph.D. Electrical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133177/1/elnaz_1.pd
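    The cell-based approach described above can be illustrated at a high level: the analog block is characterized once and wrapped so that, to a digital place-and-route tool, it looks like just another library cell. The snippet below is a schematic sketch under that assumption; the Liberty-style stub, cell names, pin attributes, and bit width are invented for the example and are not taken from the dissertation.

    # Schematic illustration of a cell-based analog design flow: a hypothetical analog unit
    # cell (here, a DAC current-steering element) is wrapped as a standard-cell-like library
    # entry so digital place-and-route tools can instantiate and place it like any other cell.

    LIBERTY_STUB = """\
    library (analog_as_cells) {
      cell (DAC_UNIT_CELL) {          /* hypothetical analog unit cell */
        area : 12.5;
        pin (D)   { direction : input;  capacitance : 0.002; }
        pin (EN)  { direction : input;  capacitance : 0.002; }
        pin (OUT) { direction : output; }
      }
    }
    """

    def build_dac_netlist(n_bits: int) -> list[str]:
        """List the cell instances of a unary (thermometer-coded) DAC of n_bits resolution."""
        cells = [f"DAC_UNIT_CELL u_dac_{i}" for i in range(2 ** n_bits - 1)]
        cells.append("THERMO_DECODER u_dec")  # placeholder for the separately synthesized digital logic
        return cells

    if __name__ == "__main__":
        netlist = build_dac_netlist(n_bits=4)
        print(LIBERTY_STUB)  # library view of the analog cell handed to the digital tools
        print(f"{len(netlist)} cell instances for the digital place-and-route flow")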