
    Memory protection

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that the cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing the instruction-execution rate.
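
    The standard POSIX calls mmap and mprotect expose this capability to ordinary programs. The sketch below (the data and flags are illustrative, not from the article) maps one page read/write, fills it, and then makes it read-only, so that a later stray store into it is trapped by the hardware instead of silently corrupting the data.

```c
/* Minimal sketch of user-visible memory protection on a POSIX system. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Ask the OS for one private, anonymous page with read/write access. */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "settled data");          /* allowed: page is writable    */

    /* Revoke write permission; the page is now read-only. */
    if (mprotect(buf, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("%s\n", buf);                  /* reads are still permitted    */
    /* buf[0] = 'X'; */                   /* this store would now fault   */

    munmap(buf, page);
    return 0;
}
```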

    Virtual memory

    Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little-noticed feature of computers today.
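
    As one illustration of the mechanism behind relocation and protection (the page size, table size, and structure names below are assumptions, not details from the article), a page table maps each virtual page number to a physical frame and its access rights; translation splits the virtual address into page number and offset, and a missing or unwritable entry becomes the fault that lets the operating system relocate, fetch, or protect the page.

```c
/* Toy model of virtual-to-physical address translation. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12u                    /* assumed 4 KiB pages          */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 16u                    /* assumed tiny address space   */

typedef struct {
    uint32_t frame;                      /* physical frame number        */
    int present;                         /* page mapped in main memory?  */
    int writable;                        /* protection bit               */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address; returns 0 and sets *phys on success. */
int translate(uint32_t vaddr, int want_write, uint32_t *phys) {
    uint32_t vpn    = vaddr >> PAGE_BITS;       /* virtual page number   */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return -1;                              /* page fault            */
    if (want_write && !page_table[vpn].writable)
        return -2;                              /* protection fault      */

    *phys = (page_table[vpn].frame << PAGE_BITS) | offset;
    return 0;
}

int main(void) {
    /* Map virtual page 3 to physical frame 7, read-only. */
    page_table[3] = (pte_t){ .frame = 7, .present = 1, .writable = 0 };

    uint32_t phys;
    uint32_t vaddr = 3 * PAGE_SIZE + 0x2A;
    if (translate(vaddr, 0, &phys) == 0)
        printf("virtual 0x%x -> physical 0x%x\n",
               (unsigned)vaddr, (unsigned)phys);
    return 0;
}
```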

    Will machines ever think?

    Artificial Intelligence research has come under fire for failing to fulfill its promises. A growing number of AI researchers are reexamining the bases of AI research and are challenging the assumption that intelligent behavior can be fully explained as manipulation of symbols by algorithms. Three recent books -- Mind over Machine (H. Dreyfus and S. Dreyfus), Understanding Computers and Cognition (T. Winograd and F. Flores), and Brains, Behavior, and Robots (J. Albus) -- explore alternatives and open the door to new architectures that may be able to learn skills.

    Modeling reality

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.

    Blindness in designing intelligent systems

    New investigations of the foundations of artificial intelligence are challenging the hypothesis that problem solving is the cornerstone of intelligence. New distinctions among three domains of concern for humans--description, action, and commitment--have revealed that the design process for programmable machines, such as expert systems, is based on descriptions of actions and induces blindness to nonanalytic action and commitment. Design processes that focus on the domain of description are likely to yield programs like bureaucracies: rigid, obtuse, impersonal, and unable to adapt to changing circumstances. Systems that learn from their past actions, and systems that organize information for interpretation by human experts, are more likely to be successful in areas where expert systems have failed.

    The internet worm

    In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.
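
    One of the worm's attacks was guessing weak passwords, so one concrete form of the strengthening called for here is proactive checking of new passwords before they are accepted. The sketch below is only an illustration of that idea; the word list and length threshold are hypothetical, not taken from the article.

```c
/* Illustrative proactive password check: reject short or guessable words. */
#include <stdio.h>
#include <string.h>

static const char *common_words[] = {    /* illustrative entries only    */
    "password", "letmein", "wizard", "guest", "12345678"
};

int password_is_weak(const char *pw) {
    if (strlen(pw) < 8)
        return 1;                        /* too short to resist guessing */
    for (size_t i = 0; i < sizeof common_words / sizeof common_words[0]; i++)
        if (strcmp(pw, common_words[i]) == 0)
            return 1;                    /* found in the guessing list   */
    return 0;
}

int main(void) {
    const char *candidate = "wizard";
    printf("\"%s\" %s\n", candidate,
           password_is_weak(candidate) ? "would be rejected" : "is accepted");
    return 0;
}
```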

    The ARPANET after twenty years

    The ARPANET began operations in 1969 with four nodes as an experiment in resource sharing among computers. It has evolved into a worldwide research network of over 60,000 nodes, influencing the design of other networks in business, education, and government. It demonstrated the speed and reliability of packet-switching networks. Its protocols have served as the models for international standards. And yet the significance of the ARPANET lies not in its technology, but in the profound alterations networking has produced in human practices. Network designers must now turn their attention to the discourses of scientific technology, business, education, and government that are being mixed together in the milieux of networking, and in particular the conflicts and misunderstandings that arise from the different world views of these discourses.