
    About time

    Time has historically been a measure of the progress of recurrent physical processes. Coordination of future actions, prediction of future events, and assignment of order to events are three practical reasons for implementing clocks and signalling mechanisms. In large networks of computers, these needs lead to the problem of synchronizing the clocks throughout the network. Recent methods can do this in large networks with a precision of about 1 millisecond, despite mean message exchange times near 5 milliseconds. These methods are discussed.
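    The column's specific protocols are not reproduced here; the following is a minimal, simulated sketch of the general idea behind such methods: a Cristian-style round-trip offset estimate whose samples are combined so that the error falls well below the mean message delay. The delay distribution and the helper names are illustrative assumptions, not details from the article.

```python
import random
import statistics

def exchange(server_clock_offset, mean_delay=0.005):
    """Simulate one request/reply exchange with a time server.

    Returns (t0, server_time, t1) as observed on the client's clock.
    One-way delays are drawn around mean_delay (5 ms) in each direction.
    """
    t0 = 0.0                                   # arbitrary client epoch for the simulation
    d_out = random.expovariate(1 / mean_delay)
    d_back = random.expovariate(1 / mean_delay)
    server_time = t0 + d_out + server_clock_offset   # server stamps its own clock on arrival
    t1 = t0 + d_out + d_back                          # reply arrives at the client
    return t0, server_time, t1

def estimate_offset(samples):
    """Cristian-style estimate: assume the reply arrived halfway through the round trip."""
    estimates = []
    for t0, ts, t1 in samples:
        rtt = t1 - t0
        estimates.append(ts + rtt / 2 - t1)
    return statistics.median(estimates)        # the median discards unusually slow exchanges

true_offset = 0.0123                            # server clock is 12.3 ms ahead (simulation input)
samples = [exchange(true_offset) for _ in range(64)]
print(f"estimated offset: {estimate_offset(samples) * 1000:.2f} ms "
      f"(true {true_offset * 1000:.2f} ms)")
```

    Each sample's error is half the difference of the two one-way delays, so averaging or taking the median over many exchanges yields a precision well below the 5-millisecond message time, which is the effect the column describes.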

    Massive parallelism in the future of science

    Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of processing agents, one for each data item in the result, will be designed; these agents can collectively solve problems thousands of times faster than current supercomputers. In the domain of distributed parallelism, computations comprising large numbers of resources attached to the world network will be designed; the network will support computations far beyond the power of any one machine. In the domain of people parallelism, collaborations will be designed among large groups of scientists around the world who participate in projects that endure well past the sojourns of individuals within them. Computing and telecommunications technology will support the large, long projects that will characterize big science by the turn of the century. Scientists must become masters of these three domains during the coming decade.
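    As a small, hedged illustration of the data-parallelism idea (conceptually one agent per data item), the Python sketch below approximates it with a process pool on a conventional machine; the grid and the per-cell computation are placeholders invented for this example, not taken from the column.

```python
from multiprocessing import Pool

def update_cell(temperature):
    """Toy per-item computation: one relaxation step on a single grid cell."""
    return 0.25 * temperature + 75.0

if __name__ == "__main__":
    grid = [20.0] * 100_000            # one data item per grid cell
    # A true data-parallel machine would devote one processing agent to each
    # cell; a process pool merely approximates that collective action here.
    with Pool() as pool:
        new_grid = pool.map(update_cell, grid, chunksize=5_000)
    print(new_grid[:3])
```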

    Memory protection

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine the accesses of a program to its own segments. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that the cost of recovering from memory-interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing the instruction-execution rate.
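    The segment descriptors and access bits the column refers to are hardware mechanisms and are not shown here; as a rough, high-level illustration of a read-only segment, the following Python sketch maps a file read-only and shows that writes through the mapping are refused before they can corrupt it. The temporary file is a stand-in for a protected segment.

```python
import mmap
import tempfile

with tempfile.TemporaryFile() as f:
    f.write(b"data that other code must not overwrite")
    f.flush()
    # Map the segment read-only: reads succeed, writes are refused.
    segment = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    print(segment[:4])                 # b'data' -- reading is permitted
    try:
        segment[0] = 0                 # attempted overwrite of the segment
    except TypeError as err:
        print("write rejected:", err)
    segment.close()
```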

    Modeling reality

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models applied to human situations. In these domains, computers are better used not for prediction and planning but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.

    Blindness in designing intelligent systems

    New investigations of the foundations of artificial intelligence are challenging the hypothesis that problem solving is the cornerstone of intelligence. New distinctions among three domains of concern for humans--description, action, and commitment--have revealed that the design process for programmable machines, such as expert systems, is based on descriptions of actions and induces blindness to nonanalytic action and commitment. Design processes focused on the domain of description are likely to yield programs like bureaucracies: rigid, obtuse, impersonal, and unable to adapt to changing circumstances. Systems that learn from their past actions, and systems that organize information for interpretation by human experts, are more likely to succeed in areas where expert systems have failed.

    The internet worm

    In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.

    Saving all the bits

    The scientific tradition of saving all the data from experiments for independent validation and for further investigation is under profound challenge by modern satellite data collectors and by supercomputers. The volume of data is beyond our capacity to store, transmit, and comprehend it. A promising line of study is discovery machines that study the data at the collection site and transmit statistical summaries of the patterns observed. Examples of discovery machines are the Autoclass system and the genetic memory system of NASA-Ames, and the proposal for knowbots by Kahn and Cerf.
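    As a hedged sketch of the "summarize at the collection site" idea (not the Autoclass or genetic-memory systems themselves), the single-pass Python fragment below reduces a simulated instrument stream to a few statistics that could be transmitted instead of the raw bits; the stream and its parameters are invented for illustration.

```python
import math
import random

def instrument_stream(n=100_000):
    """Placeholder for an instrument producing far more data than we can ship."""
    for _ in range(n):
        yield random.gauss(mu=3.2, sigma=0.7)

count, mean, m2 = 0, 0.0, 0.0
for x in instrument_stream():
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)        # Welford update: numerically stable running variance

summary = {"count": count, "mean": mean, "stddev": math.sqrt(m2 / (count - 1))}
print(summary)                      # only this small record leaves the collection site
```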

    Worldnet

    The expanding use of powerful workstations coupled to ubiquitous networks is transforming scientific and engineering research and the ways organizations around the world do business. By the year 2000, few enterprises will be able to succeed without mastery of this technology, which will be embodied in an information infrastructure based on a worldwide network. A recurring theme in all the discussions of what might be possible within the emerging Worldnet is people and machines working together in new ways across distance and time. A review is presented of the basic concepts on which the architecture of Worldnet must be built: coordination of action, authentication, privacy, and naming. Worldnet must provide additional functions to support the ongoing processes of suppliers and consumers: help services, aids for designing and producing subsystems, spinning off new machines, and resistance to attack. This discussion begins to reveal the constituent elements of a theory for Worldnet, a theory focused on what people will do with computers rather than on what computers do.
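    Of the basic concepts listed (coordination of action, authentication, privacy, naming), authentication is the easiest to sketch concretely. The Python fragment below shows a shared-key message authentication code; the key handling, message format, and function names are assumptions made for this illustration, not details from the column.

```python
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(32)          # assumed to be exchanged out of band

def sign(message: bytes) -> bytes:
    """Attach proof that the message came from a holder of the shared key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

request = b"coordinate: reserve telescope time 2001-06-01T02:00Z"
tag = sign(request)
print(verify(request, tag))                   # True: sender holds the shared key
print(verify(request + b"!", tag))            # False: tampering is detected
```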

    The ARPANET after twenty years

    The ARPANET began operations in 1969 with four nodes as an experiment in resource sharing among computers. It has evolved into a worldwide research network of over 60,000 nodes, influencing the design of other networks in business, education, and government. It demonstrated the speed and reliability of packet-switching networks. Its protocols have served as models for international standards. And yet the significance of the ARPANET lies not in its technology, but in the profound alterations networking has produced in human practices. Network designers must now turn their attention to the discourses of scientific technology, business, education, and government that are being mixed together in the milieux of networking, and in particular to the conflicts and misunderstandings that arise from the different world views of these discourses.

    Information technologies for astrophysics circa 2001

    It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is also easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.