
    On the integration of object-oriented and process-oriented computation in persistent environments

    Persistent programming is concerned with the construction of large and long-lived systems of data [1,2]. Such systems have traditionally required concurrent access for two reasons. The first is speed, whether access speed for multiple users or execution speed for parallel activities. The second is to control the complexity of large systems by decomposing them into parallel activities. This process-oriented approach to system construction has much in common with the object-oriented approach. In this paper we demonstrate the facilities of the language Napier [17] that allow the two methodologies to be integrated with a persistent environment, providing concurrently accessed object-oriented databases.
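    The abstract itself contains no code; purely as a hypothetical illustration of the idea it describes (parallel activities sharing a persistent, concurrently accessed object store), the following Scala sketch uses plain JVM threads and an in-memory map as a stand-in for a real persistent environment such as Napier's. All names (Account, store) are invented for the example, not taken from the paper.

        // Hypothetical sketch: parallel activities ("processes") share one object store
        // and update the same object concurrently. A real persistent environment would
        // keep the store in stable storage across program runs; here it is in memory.
        import java.util.concurrent.ConcurrentHashMap

        final class Account(var balance: Long)   // a toy persistent object

        object PersistentStoreDemo {
          // Shared store of named objects, accessed by several parallel activities.
          private val store = new ConcurrentHashMap[String, Account]()

          def main(args: Array[String]): Unit = {
            store.put("acct-1", new Account(0L))

            // Two parallel activities acting on the same stored object.
            val workers = (1 to 2).map { _ =>
              new Thread(() => {
                val acct = store.get("acct-1")
                for (_ <- 1 to 1000) acct.synchronized { acct.balance += 1 }
              })
            }
            workers.foreach(_.start())
            workers.foreach(_.join())

            println(s"final balance = ${store.get("acct-1").balance}")  // expected: 2000
          }
        }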

    Strategic Directions in Object-Oriented Programming

    This paper has provided an overview of the field of object-oriented programming. After presenting a historical perspective and some major achievements in the field, four research directions were introduced: technologies integration, software components, distributed programming, and new paradigms. In general there is a need to continue research in traditional areas: (1) as computer systems become more and more complex, there is a need to further develop the work on architecture and design; (2) to support the development of complex systems, there is a need for better languages, environments, and tools; (3) foundations in the form of the conceptual framework and other theories must be extended to enhance the means for modeling and formal analysis, as well as for understanding future computer systems.

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology; this helps evaluate their applicability for solving similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We also hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research. Comment: 46 pages, 16 figures, Technical Report
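    As a hedged, simplified illustration (not the paper's actual categories), taxonomy dimensions of the kind described above can be encoded as types and concrete systems mapped onto them; the dimension values and system names in this Scala sketch are invented for the example.

        // Hypothetical sketch: encode two taxonomy dimensions as sealed types and
        // map systems onto them; a miniature "gap analysis" finds uncovered values.
        object DataGridTaxonomy {
          sealed trait Replication
          case object Centralised extends Replication
          case object Decentralised extends Replication

          sealed trait Scheduling
          case object DataAware extends Scheduling
          case object ComputeOnly extends Scheduling

          // Mapping a concrete system to one value per taxonomy dimension.
          final case class SystemProfile(name: String, replication: Replication, scheduling: Scheduling)

          def main(args: Array[String]): Unit = {
            val profiles = List(
              SystemProfile("ExampleGridA", Decentralised, DataAware),
              SystemProfile("ExampleGridB", Centralised, DataAware)
            )
            // Which scheduling categories are not covered by any surveyed system?
            val covered = profiles.map(_.scheduling).toSet
            val all: Set[Scheduling] = Set(DataAware, ComputeOnly)
            println(s"uncovered scheduling categories: ${all -- covered}")
          }
        }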

    Challenging the Computational Metaphor: Implications for How We Think

    This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes to replace the conventional metaphor of computation as a sequence of steps with the notion of a community of interacting entities, and examines the ramifications of such a shift for these ways in which we think.

    Isabelle/PIDE as Platform for Educational Tools

    The Isabelle/PIDE platform addresses the question whether proof assistants of the LCF family are suitable as a technological basis for educational tools. The traditionally strong logical foundations of systems like HOL, Coq, or Isabelle have so far been counterbalanced by somewhat inaccessible interaction via the TTY (or minor variations like the well-known Proof General / Emacs interface). Thus the fundamental question of math-education tools with fully formal background theories has often been answered negatively, due to accidental weaknesses of existing proof engines. The idea of "PIDE" (which means "Prover IDE") is to integrate existing provers like Isabelle into a larger environment that facilitates access by end-users and other tools. We use Scala to expose the proof engine in ML to the JVM world, where many user interfaces, editor frameworks, and educational tools already exist. This shall ultimately lead to combined mathematical assistants, where the logical engine is in the background, without obstructing the view on applications of formal methods, formalized mathematics, and math education in particular. Comment: In Proceedings THedu'11, arXiv:1202.453
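    The following Scala sketch is not the actual PIDE API; it only illustrates, under assumed names (ProverSession, StubSession), the general shape of a JVM-side facade over a prover back end, where editors and educational tools submit source text and react to asynchronous prover feedback.

        // Hypothetical sketch: a JVM-side interface to a prover running elsewhere.
        trait ProverSession {
          def submit(theoryText: String): Unit          // send (edited) source to the prover
          def onReport(handler: String => Unit): Unit   // register for asynchronous reports
        }

        // Stub standing in for the real ML back end; it just echoes a summary.
        final class StubSession extends ProverSession {
          private var handlers: List[String => Unit] = Nil
          def onReport(handler: String => Unit): Unit = handlers ::= handler
          def submit(theoryText: String): Unit =
            handlers.foreach(_(s"processed ${theoryText.length} characters, no errors"))
        }

        object PideStyleDemo {
          def main(args: Array[String]): Unit = {
            val session: ProverSession = new StubSession
            session.onReport(msg => println(s"[prover] $msg"))   // a tool would render this
            session.submit("theory Demo imports Main begin end")
          }
        }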

    Some Notes on the Past and Future of Lisp-Stat

    Lisp-Stat was originally developed as a framework for experimenting with dynamic graphics in statistics. To support this use, it evolved into a platform for more general statistical computing. The choice of the Lisp language as the basis of the system was in part coincidence and in part a very deliberate decision. This paper describes the background behind the choice of Lisp, as well as the advantages and disadvantages of this choice. The paper then discusses some lessons that can be drawn from experience with Lisp-Stat and with the R language to guide future development of Lisp-Stat, R, and similar systems.