    Representation of Medical Guidelines Using a Classification-Based System

    Medical guidelines play an increasing role in selecting diagnostic and therapeutic steps with respect to effectiveness, invasiveness, and costs. To work directly on patient data already available in electronic form, they should be integrated into a medical information system. In order to develop a "medical guideline module" (MGM) that manages and applies guidelines to patients, a "knowledge level" representation of guidelines is necessary which reflects the structure of medical knowledge and matches medical processes. Furthermore, a direct transformation to the "symbol level" is needed. We use a nested, frame-like structure on the knowledge level and show that a classification-based knowledge representation system (CBKRS) is in principle well suited for the symbol level. To facilitate usage and to remain independent of a particular CBKRS, we introduce an intermediate level called the "intelligent object system" (IOS). It is developed by augmenting a simple data model for describing complex objects with prototypes and implications as a means to classify objects and to draw inferences based on this classification. Finally, the transformation of guidelines to prototypes and implications is described.
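    The abstract gives no concrete syntax for the IOS, so the following is only a minimal sketch of the idea under assumed names: prototypes classify complex objects, and implications draw inferences from the resulting classification. Prototype, Implication, classify, and the diabetes-style example are all illustrative, not taken from the paper.

        # Hypothetical sketch of an "intelligent object system" (IOS):
        # objects are attribute maps, prototypes classify them, and
        # implications attach inferences to the classes found.
        from dataclasses import dataclass
        from typing import Callable

        Obj = dict  # a complex object as a (possibly nested) attribute map

        @dataclass
        class Prototype:
            name: str
            condition: Callable[[Obj], bool]   # membership test for the class

        @dataclass
        class Implication:
            trigger: str                       # class name that fires the rule
            conclude: Callable[[Obj], None]    # inference drawn on a match

        def classify(obj, prototypes, implications):
            """Return every prototype class the object belongs to and
            apply the implications attached to those classes."""
            classes = {p.name for p in prototypes if p.condition(obj)}
            for imp in implications:
                if imp.trigger in classes:
                    imp.conclude(obj)
            return classes

        # A guideline step expressed as one prototype plus one implication.
        patient = {"hba1c": 8.2, "on_metformin": True}
        protos = [Prototype("poorly_controlled_t2dm",
                            lambda o: o.get("hba1c", 0) > 7.0)]
        imps = [Implication("poorly_controlled_t2dm",
                            lambda o: o.update(recommend="intensify therapy"))]
        print(classify(patient, protos, imps), patient["recommend"])

    A guideline's decision steps would then compile to such prototype/implication pairs, which is the transformation the paper describes in its final part.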

    Some Notes on the Past and Future of Lisp-Stat

    Lisp-Stat was originally developed as a framework for experimenting with dynamic graphics in statistics. To support this use, it evolved into a platform for more general statistical computing. The choice of the Lisp language as the basis of the system was in part coincidence and in part a very deliberate decision. This paper describes the background behind the choice of Lisp, as well as the advantages and disadvantages of this choice. The paper then discusses some lessons that can be drawn from experience with Lisp-Stat and with the R language to guide future development of Lisp-Stat, R, and similar systems.

    Software Engineering Laboratory Series: Proceedings of the Twenty-Second Annual Software Engineering Workshop

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors cannot guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. This paper surveys state-of-the-art research on the parallel execution of expert systems, covering multiprocessors for expert systems, parallel languages for symbolic computations, and the mapping of expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are that (1) the body of knowledge applicable in any given situation and the amount of computation executed per rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. To obtain greater speedups, data parallelism and application parallelism must be exploited.
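    As a hedged illustration of the data parallelism mentioned above (this is not code from the survey), the sketch below parallelizes the match phase of a toy production system: each rule's condition is checked against the whole working memory in its own worker process. Every name in it is made up for the example.

        # Toy data-parallel match phase: one worker per rule condition.
        from concurrent.futures import ProcessPoolExecutor

        # Working memory: simple fact tuples.
        FACTS = [("temp", "radiator-3", 412),
                 ("temp", "radiator-7", 371),
                 ("valve", "radiator-3", "open")]

        # Rule predicates must be top-level functions so they can be
        # pickled and shipped to worker processes.
        def overheating(facts):
            return any(f[0] == "temp" and f[2] > 400 for f in facts)

        def valve_stuck(facts):
            return overheating(facts) and any(
                f[0] == "valve" and f[2] == "open" for f in facts)

        RULES = [("overheating", overheating), ("valve_stuck", valve_stuck)]

        def match(rule):
            name, predicate = rule
            return name if predicate(FACTS) else None

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                conflict_set = [r for r in pool.map(match, RULES) if r]
            print("conflict set:", conflict_set)

    The per-rule work here is tiny, which mirrors finding (1): unless the rule set or the working memory is large, process startup and communication dominate, and the achievable speedup stays small.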

    Proceedings of the CUNY Games Conference 5.0

    The CUNY Games Network is an organization dedicated to encouraging research, scholarship, and teaching in the developing field of game-based learning. We connect educators from every campus and discipline at CUNY and beyond who are interested in digital and non-digital games, simulations, and other forms of interactive teaching and inquiry-based learning. The CUNY Games Conference distills its best cutting-edge interactive presentations into a two-day event to promote and discuss game-based pedagogies in higher education, focusing particularly on non-digital learning activities that faculty can use in the classroom every day. The conference will include workshops led by CUNY Games organizers on how to modify existing games for the classroom and how to incorporate elements of play into simulations and critical-thinking activities, as well as poster sessions, play testing, and game play. For the digitally minded, we will also offer a workshop on creating computer games in Unity.

    Model Transformation Languages with Modular Information Hiding

    Model transformations, together with models, form the principal artifacts in model-driven software development. Industrial practitioners report that transformations on larger models quickly become large and complex themselves. To alleviate the resulting maintenance effort, this thesis presents a modularity concept with explicit interfaces, complemented by software visualization and clustering techniques. All three approaches are tailored to the specific needs of the transformation domain.
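    The thesis's actual notation is not reproduced here; purely as an assumed sketch of what explicit interfaces for transformations can mean, a module might declare which model elements it may read and write while its rules stay internal, with the engine rejecting any write outside that declaration. TransformationModule and the class2table example are hypothetical.

        # Hypothetical transformation module with an explicit interface.
        class TransformationModule:
            def __init__(self, name, reads, writes, rule):
                self.name = name
                self.reads = set(reads)    # model elements the module may see
                self.writes = set(writes)  # model elements it may produce
                self._rule = rule          # internal detail, hidden from clients

            def apply(self, model):
                visible = {k: v for k, v in model.items() if k in self.reads}
                produced = self._rule(visible)
                illegal = set(produced) - self.writes
                if illegal:
                    raise PermissionError(
                        f"{self.name} wrote outside its interface: {illegal}")
                model.update(produced)

        # A module that derives table names from class names, touching nothing else.
        class2table = TransformationModule(
            "class2table",
            reads={"classes"},
            writes={"tables"},
            rule=lambda m: {"tables": [c.lower() + "s" for c in m["classes"]]},
        )

        model = {"classes": ["Order", "Customer"], "columns": []}
        class2table.apply(model)
        print(model["tables"])  # ['orders', 'customers']

    Making the interface explicit is what would let maintainers change a module's internal rules, or visualize and cluster modules by their interfaces, without re-reading every transformation that uses them.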