    Requirements for a Research-oriented IC Design System

    Computer-aided design techniques for integrated circuits have grown in an incremental way, responding to various perceived needs, so that today there are a number of useful programs for logic generation, simulation at various levels, test preparation, artwork generation and analysis (including design rule checking), and interactive graphical editing. While the design of many circuits has benefited from these programs, when industry wants to produce a high-volume part, the design and layout are done manually, followed by digitizing and perhaps some graphic editing before the layout is converted to pattern-generation format, leading to the often-heard statement that computer-aided design of integrated circuits doesn't work. If progress is to be made, it seems clear that the entire design process has to be thought through in basic terms, and much more attention must be paid to the way in which computational techniques can complement the designer's abilities. Currently, it is appropriate to try to characterize the design process in abstract terms, so that implementation and technological biases don't cloud the view of a desired system. In this paper, we briefly describe the conversion of algorithms to masks at a very general level, and then describe several projects at MIT which aim to contribute to an integrated design system. It is emphasized that no complete system design exists now at MIT, and that we believe general design considerations must constantly be tested by building (and rebuilding) the various subcomponents, the structure of which is guided by our view of the overall design process.

    Distribution Constraints: The Chase for Distributed Data

    This paper introduces a declarative framework to specify and reason about distributions of data over computing nodes in a distributed setting. More specifically, it proposes distribution constraints, which are tuple- and equality-generating dependencies (tgds and egds) extended with node variables ranging over computing nodes. In particular, they can express co-partitioning constraints and constraints about range-based data distributions by using comparison atoms. The main technical contribution is a study of the implication problem for distribution constraints. While implication is undecidable in general, relevant fragments of so-called data-full constraints are exhibited for which the corresponding implication problems are complete for EXPTIME, PSPACE, and NP. These results yield bounds on deciding parallel-correctness for conjunctive queries in the presence of distribution constraints.
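
    As a rough illustration of the kind of condition such constraints can capture, the sketch below checks a co-partitioning requirement over a toy distribution. The relation names R and S, the Python encoding, and the exact constraint form are illustrative assumptions, not the paper's formal syntax.

        # A minimal sketch, assuming a distribution that maps each computing
        # node to the tuples it stores, keyed by relation name. The condition
        # checked is tgd-like with a node variable: whenever R(x, y) resides
        # on node n and S(x, z) exists anywhere, S(x, z) must also reside on n.
        from collections import defaultdict

        distribution = {
            "node1": {"R": {(1, "a")}, "S": {(1, "x")}},
            "node2": {"R": {(2, "b")}, "S": {(2, "y")}},
        }

        def satisfies_copartitioning(dist, join_pos=0):
            # Group all S-tuples globally by their join key.
            s_by_key = defaultdict(set)
            for rels in dist.values():
                for t in rels.get("S", set()):
                    s_by_key[t[join_pos]].add(t)
            # Every node holding R(x, _) must also hold every S(x, _).
            for rels in dist.values():
                for r in rels.get("R", set()):
                    if not s_by_key[r[join_pos]] <= rels.get("S", set()):
                        return False
            return True

        print(satisfies_copartitioning(distribution))  # True for the toy data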

    Peripheral Constraint Versus On-line Programming in Rapid Aimed Sequential Movements

    The purpose of this investigation was to examine how the programming and control of a rapid aiming sequence shift with increased complexity. One objective was to determine if a preprogramming/peripheral constraint explanation is adequate to characterize control of an increasingly complex rapid aiming sequence, and if not, at what point on-line programming better accounts for the data. A second objective was to examine when on-line programming occurs. Three experiments were conducted in which complexity was manipulated by increasing the number of targets from 1 to 11. Initiation- and execution-timing patterns, probe reaction time, and movement kinematics were measured. Results supported the peripheral constraint/preprogramming explanation for sequences of up to 7 targets if they were executed in a blocked fashion. For sequences executed in a random fashion (one length followed by a different length), preprogramming did not increase with complexity, and on-line programming occurred without time cost. Across all sequences, there was evidence that the later targets created a peripheral constraint on movements to previous targets. We suggest that programming is influenced by two factors: the overall spatial trajectory, which is consistent with Sidaway’s subtended angle hypothesis (1991), and average velocity, with the latter set by the number of targets in the sequence. As the number of targets increases, average velocity decreases, which controls the variability of error in the extent of each movement segment. Overall, the data support a continuous model of processing, one in which programming and execution co-occur, and can do so without time cost.

    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advancements in computer systems have prompted a move from centralized computing, based on timesharing a large mainframe computer, to distributed computing, based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift from centralized to distributed computing has led to challenges concerning the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are presented which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need a shared resource (the mainframe) to perform their functions.
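
    To make the residency question concrete, here is a minimal sketch of a scoring heuristic of the kind such criteria might inform. The specific attributes, weights, and threshold are illustrative assumptions, not the criteria developed in the research.

        # A toy residency decision: applications dominated by shared data or
        # heavy batch computation lean toward the mainframe host; interactive,
        # graphics-heavy work leans toward the local workstation. All weights
        # below are assumed for illustration only.
        def residency(app):
            score = 0
            score += 2 if app["needs_shared_resource"] else 0
            score += 1 if app["batch_cpu_hours"] > 10 else 0
            score -= 2 if app["interactive"] else 0
            score -= 1 if app["graphics_intensive"] else 0
            return "host" if score > 0 else "workstation"

        print(residency({"needs_shared_resource": True, "batch_cpu_hours": 20,
                         "interactive": False, "graphics_intensive": False}))  # host
        print(residency({"needs_shared_resource": False, "batch_cpu_hours": 1,
                         "interactive": True, "graphics_intensive": True}))    # workstation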

    State-based scheduling: An architecture for telescope observation scheduling

    The applicability of constraint-based scheduling, a methodology previously developed and validated in the domain of factory scheduling, is extended to problem domains that require attention to a wider range of state-dependent constraints. The focus of interest is the problem of constructing and maintaining a short-term observation schedule for the Hubble Space Telescope (HST), which typifies this type of domain. The nature of the constraints encountered in the HST domain is examined, system requirements are discussed with respect to utilizing a constraint-based scheduling methodology in such domains, and a general framework for state-based scheduling is presented.
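
    As a rough sketch of what "state-based" means here, the toy scheduler below tracks one piece of telescope state (pointing) and charges a state-dependent setup time when placing each observation. The observation attributes, the single state variable, and the greedy policy are illustrative assumptions, not the HST system's actual architecture.

        SLEW_COST = 1.0  # time units per unit of pointing change (assumed)

        def schedule(observations, horizon):
            # Greedily place observations on a timeline, tracking telescope
            # pointing so that state-dependent slew times are honored.
            t, pointing, plan = 0.0, 0.0, []
            for obs in sorted(observations, key=lambda o: o["deadline"]):
                # Slewing to the new target costs time proportional to the
                # change in pointing from the current state.
                setup = SLEW_COST * abs(obs["pointing"] - pointing)
                start = max(t + setup, obs["earliest"])
                if start + obs["duration"] <= min(obs["deadline"], horizon):
                    plan.append((obs["name"], start))
                    t, pointing = start + obs["duration"], obs["pointing"]
            return plan

        obs = [
            {"name": "A", "pointing": 10.0, "earliest": 0, "deadline": 40, "duration": 5},
            {"name": "B", "pointing": 12.0, "earliest": 0, "deadline": 60, "duration": 5},
        ]
        print(schedule(obs, horizon=100))  # [('A', 10.0), ('B', 17.0)]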