    Microprocessors: the engines of the digital age

    The microprocessor—a computer central processing unit integrated onto a single microchip—has come to dominate computing at every scale, from the tiniest consumer appliance to the largest supercomputer. This dominance took decades to achieve, but an irresistible logic made the ultimate outcome inevitable. The objectives of this Perspective paper are to offer a brief history of the development of the microprocessor and to answer questions such as: where did the microprocessor come from, where is it now, and where might it go in the future?

    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting businesses' most critical data. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability, highlighting how both are provided at different levels of the hardware and software stack to satisfy the needs of large IT organizations.
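    As a rough illustration of the arithmetic behind redundancy at any level of such a stack (a standard back-of-the-envelope calculation, not an example from the book; the per-node availability figure is assumed):

```python
# Availability of n redundant components that fail independently:
# at least one is up with probability 1 - (1 - a)**n, where `a` is
# the availability of a single component (an assumed figure here).
def combined_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

# Two independently failing 99.9%-available nodes already reach "six nines".
print(f"{combined_availability(0.999, 2):.6f}")  # -> 0.999999
```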

    Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training, focusing on the next generation of space simulation systems that will be used to train personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer architecture for the Space Station Training Facility (SSTF) rather than a centralized, mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve its established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems available today offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for computing power.

    MicroComputer and Local Government

    In 1976, two young Californians named Steve Jobs and Steve Wozniak started a revolution. It was a quiet revolution... no shots were fired... no demonstrations occurred... there were no casualties, but it was a revolution nevertheless... a revolution that will have long-lasting results. That year, these two young men developed the Apple, the first commercially successful microcomputer.

    Database machines in support of very large databases

    Software database management systems were developed in response to the needs of early data processing applications. Database machine research grew out of certain performance deficiencies of these software systems. This thesis discusses the history of database machines designed to improve the performance of database processing and focuses primarily on the Teradata DBC/1012, the only successfully marketed database machine that supports very large databases today. Also reviewed is IBM's response to the performance needs of its database customers, a response that has taken the form of improvements in both software and hardware support for database processing. In conclusion, the future of database machines, and of the DBC/1012 in particular, is analyzed in light of recent IBM enhancements and IBM's immense customer base.

    A Survey of the Economic Role of Software Platforms in Computer-Based Industries

    Software platforms are a critical component of the computer systems underpinning leading-edge products ranging from third-generation mobile phones to video games. After describing some key economic features of computer systems and software platforms, the paper presents case studies of personal computers, video games, personal digital assistants, smart mobile phones, and digital content devices. It then compares several economic aspects of these businesses, including their industry evolution, pricing structures, and degrees of integration.

    Keywords: software platforms, hardware platforms, network effects, bundling, multi-sided markets

    Optimal use of computing equipment in an automated industrial inspection context

    This thesis deals with automatic defect detection. The objective was to develop the techniques required by a small manufacturing business to make cost-efficient use of inspection technology. In our work on inspection techniques we discuss image acquisition and the choice between custom and general-purpose processing hardware. We examine the classes of general-purpose computer available and study popular operating systems in detail. We highlight the advantages of a hybrid system interconnected via a local area network and develop a sophisticated suite of image-processing software based on it. We quantitatively study the performance of elements of the TCP/IP networking protocol suite and comment on appropriate protocol selection for parallel distributed applications. We implement our own distributed application based on these findings. In our work on inspection algorithms we investigate the potential uses of iterated function systems and Fourier transform operators when preprocessing images of defects in aluminium plate acquired using a linescan camera. We employ a multi-layer perceptron neural network trained by backpropagation as a classifier. We examine the effect of the number of nodes in the hidden layer on the training process and on the network's ability to identify faults in images of aluminium plate. We investigate techniques for introducing positional independence into the network's behaviour. We analyse the pattern of weights induced in the network after training in order to gain insight into the logic of its internal representation. We conclude that the backpropagation training process is computationally intensive enough to present a real barrier to further development of practical neural network techniques, and we seek ways to achieve a speed-up. We consider the training process as a search problem and arrive at a scheme involving multiple, parallel search "vectors" and aspects of genetic algorithms. We implement the system as the distributed application mentioned above and comment on its performance. A minimal sketch of the kind of classifier described appears below.
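    The following is a minimal illustration of a multi-layer perceptron trained by backpropagation, not the thesis's own code; the hidden-layer size, learning rate, and toy data are all assumptions chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "defect / no defect" feature vectors (assumed data).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

n_hidden = 16   # hidden-layer size: the quantity whose effect the thesis studies
lr = 0.5        # learning rate (assumed)
W1 = rng.normal(scale=0.5, size=(8, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # output probabilities
    # Backward pass: mean-squared-error gradients through the sigmoids.
    d_out = (p - y) * p * (1 - p)         # delta at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)  # delta at the hidden layer
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / len(X); b1 -= lr * d_hid.mean(axis=0)

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```

    The cost of repeating this weight-update loop over many epochs is the computational barrier the abstract refers to, and it is what motivates the parallel-search and genetic-algorithm speed-ups described there.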

    Understanding Digital Technology’s Evolution and the Path of Measured Productivity Growth: Present and Future in the Mirror of the Past

    Three styles of explanation have been advanced by economists seeking to account for the so-called 'productivity paradox': the coincidence of a persisting slowdown in the growth of measured total factor productivity (TFP) in the US since the mid-1970s with the wave of information technology (IT) innovations. The slowdown is said by some to be an illusion due to the mismeasurement of real output growth; by others to expose mistaken expectations about the benefits of computerization; and by still others to reflect the volume of intangible investments in 'learning' and the time required for the ancillary innovations that allow new digital technologies to be applied in ways that show up in measured productivity growth. This paper shows that, rather than viewing these as competing hypotheses, the dynamics of the transition to a new technological and economic regime based upon a general purpose technology (GPT) should be understood as likely to give rise to all three 'effects'. It more fully articulates and supports this thesis, which was first advanced in the 'computer and dynamo' papers by David (1990, 1991). The relevance of that historical experience is re-asserted and supported by further evidence rebutting skeptics who have argued that the diffusion of electrification and that of computerization have little in common. New evidence is produced about the links between IT use, mass customization, and the upward bias of output price deflators arising from the method used to 'chain in' the prices of new products. The measurement bias due to the exclusion of intangible investments from the scope of the official national product accounts is also examined. Further, it is argued that the development of the general-purpose PC delayed the re-organization of businesses along lines that would have more directly raised task productivity, even though the technologies yielded positive 'revenue productivity' gains for large companies. The paper concludes by indicating the emerging technical and organizational developments that are likely to deliver a sustained surge of measured TFP growth during the decades that lie immediately ahead.

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by the periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research into alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose with tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.
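    For a sense of scale, the sketch below shows what periodic doubling over fifty years compounds to, assuming (hypothetically) one doubling every two years:

```python
# Compound growth from periodic density doubling. The two-year period is an
# assumption; historical estimates range from roughly 18 to 24 months.
doubling_period_years = 2
years = 50
doublings = years / doubling_period_years
print(f"{doublings:.0f} doublings -> density grows by a factor of {2**doublings:,.0f}")
# -> 25 doublings -> density grows by a factor of 33,554,432
```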