
    MARS: A RISC-based architecture for Lisp

    A RISC-based chip set architecture for Lisp is presented in this paper. This architecture contains an instruction fetch unit (IFU) and three processing units: an integer processing unit (IPU), a floating-point processing unit (FPU), and a list processing unit (LPU). The IFU feeds instructions to the processing units and supports fast procedure call/return and branch, the IPU and FPU execute operations on different data types, and the LPU handles the Lisp runtime environment, dynamic type checking, and fast list access. In this architecture, the critical path of complex register file access and ALU operation is distributed between the LPU and IPU, and the tracing of a list can be done quickly by the non-delayed car or cdr instructions of the LPU. Performance simulation shows that this architecture would be about 6.2 times faster than SPUR and about 2.2 times faster than MIPS-X.
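    The LPU's non-delayed car/cdr instructions and hardware type checking can be pictured in software terms as tag checks folded into list access. The following Python sketch is purely illustrative and not taken from the paper (the tag names and cell layout are assumptions); it models tagged cons cells whose car/cdr operations validate the tag before access, the check MARS performs in hardware in parallel with the access itself.

    # Illustrative model of tagged cons cells with dynamic type checking,
    # loosely analogous to what the MARS LPU does in hardware.
    class TaggedValue:
        def __init__(self, tag, payload):
            self.tag = tag          # assumed tag names: "cons", "int", "nil"
            self.payload = payload

    NIL = TaggedValue("nil", None)

    def cons(head, tail):
        return TaggedValue("cons", (head, tail))

    def car(x):
        # MARS runs this tag check in parallel with the access; here it is sequential.
        if x.tag != "cons":
            raise TypeError("car applied to a non-cons value")
        return x.payload[0]

    def cdr(x):
        if x.tag != "cons":
            raise TypeError("cdr applied to a non-cons value")
        return x.payload[1]

    # Tracing a list by repeated cdr, the access pattern the LPU accelerates.
    lst = cons(TaggedValue("int", 1), cons(TaggedValue("int", 2), NIL))
    node = lst
    while node is not NIL:
        print(car(node).payload)
        node = cdr(node)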

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
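    The survey's closing point about data parallelism can be illustrated with a toy production-system match step, where one rule condition is tested against many working-memory elements at once. The sketch below is an assumption-laden illustration (the rule, the working-memory format, and the use of a Python process pool are mine, not the survey's); it only shows where data parallelism enters the match phase of a match-resolve-act cycle.

    # Toy data-parallel match step for a production system.
    from multiprocessing import Pool

    working_memory = [
        {"type": "sensor", "name": "thermal", "value": 82},
        {"type": "sensor", "name": "thermal", "value": 41},
        {"type": "sensor", "name": "nav", "value": 97},
    ]

    def matches_overheat_rule(wme):
        # Hypothetical rule: fire when a thermal reading exceeds 80.
        return wme["type"] == "sensor" and wme["name"] == "thermal" and wme["value"] > 80

    if __name__ == "__main__":
        with Pool() as pool:
            hits = pool.map(matches_overheat_rule, working_memory)
        conflict_set = [wme for wme, hit in zip(working_memory, hits) if hit]
        print(conflict_set)   # matched elements feed conflict resolution and rule firing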

    Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, the NASA, industry, and academic communities are provided with a preliminary set of advanced mission computational processing requirements of automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. A high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they are met within typical programmatic constraints. Here, appendixes are provided.

    A distributed agent architecture for real-time knowledge-based systems: Real-time expert systems project, phase 1

    We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is rate monotonic theory, which can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks, as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration, while Lisp-based technologies make it difficult, if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
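    For the reactive agents, rate monotonic theory provides an analytical schedulability check. A minimal sketch using the classic Liu-Layland utilization bound follows; the task periods and execution times are made-up numbers, not figures from the project.

    # Rate monotonic schedulability via the Liu-Layland utilization bound:
    # n periodic tasks are schedulable if sum(C_i / T_i) <= n * (2**(1/n) - 1).
    def rm_schedulable(tasks):
        """tasks: list of (worst_case_execution_time, period) pairs."""
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1.0 / n) - 1)
        return utilization <= bound

    # Hypothetical reactive-agent task set, times in milliseconds.
    tasks = [(5, 50), (10, 100), (20, 200)]
    print(rm_schedulable(tasks))   # True: utilization 0.3 is under the 3-task bound of about 0.78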

    Hypermedia = hypercommunication

    New hardware and software technology has given application designers the freedom to bring new realism to human-computer interaction. High-quality images, motion video, stereo sound and music, speech, touch, and gesture provide richer data channels between the person and the machine. Ultimately, this will lead to richer communication between people, with the computer as an intermediary. The whole point of hyper-books, hyper-newspapers, and virtual worlds is to transfer the concepts and relationships, the 'data structure', from the mind of the creator to that of the user. Some of the characteristics of this rich information channel are discussed, and some examples are presented.

    Embedded Automation in Human-Agent Environment


    Comparing mark-and-sweep and stop-and-copy garbage collection

    Stop-and-copy garbage collection has been preferred to mark-and-sweep collection in the last decade because its collection time is proportional to the size of reachable data and not to the memory size. This paper compares the CPU overhead and the memory requirements of the two collection algorithms extended with generations, and finds that mark-and-sweep collection requires at most a small amount of additional CPU overhead (3-6%) but requires an average of 20% (and up to 40%) less memory to achieve the same page fault rate. The comparison is based on results obtained using trace-driven simulation with large Common Lisp programs.
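    The trade-off measured in the paper can be sketched on a toy heap. The code below is only a schematic of the two collectors' core loops, with an object representation assumed for illustration: mark-and-sweep reclaims garbage in place, while stop-and-copy evacuates live objects into a second semispace, which is the source of the extra memory the paper quantifies.

    # Schematic of the two collectors on a toy heap of objects with child pointers.
    class Obj:
        def __init__(self, name, children=()):
            self.name, self.children, self.marked = name, list(children), False

    def mark_and_sweep(heap, roots):
        # Mark: trace everything reachable from the roots.
        stack = list(roots)
        while stack:
            obj = stack.pop()
            if not obj.marked:
                obj.marked = True
                stack.extend(obj.children)
        # Sweep: keep marked objects in place and reclaim the rest.
        live = [o for o in heap if o.marked]
        for o in live:
            o.marked = False
        return live

    def stop_and_copy(roots):
        # Copy only reachable objects into a fresh to-space; work is proportional
        # to live data, but a second semispace must be kept available.
        to_space, forwarded = [], {}
        def copy(obj):
            if id(obj) not in forwarded:
                clone = Obj(obj.name)
                forwarded[id(obj)] = clone
                to_space.append(clone)
                clone.children = [copy(c) for c in obj.children]
            return forwarded[id(obj)]
        for r in roots:
            copy(r)
        return to_space

    a, b, c = Obj("a"), Obj("b"), Obj("c")
    a.children = [b]                                          # c is unreachable garbage
    print([o.name for o in mark_and_sweep([a, b, c], [a])])   # ['a', 'b']
    print([o.name for o in stop_and_copy([a])])               # ['a', 'b']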