
    An interactive semantics of logic programming

    We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the coordination mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we have chosen for presenting our results is tile logic, which has the advantage of allowing a uniform treatment of goals and observations and of applying abstract categorical tools for proving the results. As main contributions, we mention the finitary presentation of abstract unification, and a concurrent and coordinated abstract semantics consistent with the most common semantics of logic programming. Moreover, the compositionality of the tile semantics is guaranteed by standard results, as it reduces to checking that the tile systems associated with logic programs enjoy the tile decomposition property. An extension of the approach for handling constraint systems is also discussed.
    Comment: 42 pages, 24 figures, 3 tables; to appear in the CUP journal Theory and Practice of Logic Programming
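
    As a point of reference for the "abstract unification" mentioned above, here is a minimal sketch of ordinary syntactic unification of first-order terms in Python. It is not the paper's tile-based presentation; the term representation and the example terms are illustrative, and the occurs check is omitted.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Var:
            name: str

        @dataclass(frozen=True)
        class Term:                 # compound term: a functor applied to arguments
            functor: str
            args: tuple = ()

        def walk(t, subst):
            # Follow variable bindings until an unbound variable or a term is reached.
            while isinstance(t, Var) and t in subst:
                t = subst[t]
            return t

        def unify(a, b, subst=None):
            # Return a most general unifier extending `subst`, or None on a clash.
            # (Occurs check omitted for brevity.)
            subst = dict(subst or {})
            stack = [(a, b)]
            while stack:
                t1, t2 = stack.pop()
                x, y = walk(t1, subst), walk(t2, subst)
                if x == y:
                    continue
                if isinstance(x, Var):
                    subst[x] = y
                elif isinstance(y, Var):
                    subst[y] = x
                elif (isinstance(x, Term) and isinstance(y, Term)
                      and x.functor == y.functor and len(x.args) == len(y.args)):
                    stack.extend(zip(x.args, y.args))
                else:
                    return None     # functor or arity clash: no unifier exists
            return subst

        # Example: unify p(X, f(a)) with p(b, f(Y))  ==>  {X: b, Y: a}
        X, Y = Var("X"), Var("Y")
        mgu = unify(Term("p", (X, Term("f", (Term("a"),)))),
                    Term("p", (Term("b"), Term("f", (Y,)))))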

    The Minimal Levels of Abstraction in the History of Modern Computing

    From the advent of general-purpose, Turing-complete machines, the relation of operators, programmers, and users to computers can be seen in terms of interconnected informational organisms (inforgs), here analysed with the method of levels of abstraction (LoAs) that arose within the Philosophy of Information (PI). In this paper, the epistemological levellism proposed by L. Floridi in the PI to deal with LoAs will be formalised in constructive terms using category theory, so that information itself is treated as structure-preserving functions instead of Cartesian products. The milestones in the history of modern computing are then analysed via constructive levellism to show how the growth of system complexity led to more and more information hiding.
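
    As a small illustration of the constructive reading sketched above, the hypothetical Python fragment below models a level of abstraction as a set of observables and an abstraction map as a function between levels; composing maps hides progressively more detail. The levels and observables are invented for illustration and are not taken from the paper.

        def gate_level(raw_bits):
            # Concrete level: the raw register bits of a machine.
            return {"accumulator_bits": raw_bits}

        def isa_level(gate_state):
            # Hides the bit layout, exposes an integer register.
            return {"accumulator": int(gate_state["accumulator_bits"], 2)}

        def language_level(isa_state):
            # Hides the register file, exposes a named program variable.
            return {"x": isa_state["accumulator"]}

        # Composing the abstraction maps: each step discards (hides) information.
        observed = language_level(isa_level(gate_level("00101010")))   # {'x': 42}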

    User Story Software Estimation: A Simplification of Software Estimation Model with Distributed Extreme Programming Estimation Technique

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. The purpose of measuring software through estimation is to understand the complexity of the software, to estimate the required human resources, and to gain better visibility of the execution and process model. Many estimation techniques work well in certain conditions or at certain steps of software engineering, for example lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP Estimation provides a basic technique for teams using the eXtreme Programming method in onsite or distributed development. To the writer's knowledge, this is the first estimation technique applied to an agile method, namely eXtreme Programming.
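
    To make the kind of input such a technique works from concrete, here is a generic user-story estimation sketch in Python; the story points, velocity and arithmetic below are illustrative assumptions and are not the DXP Estimation formula defined in the paper.

        # Each (possibly distributed) sub-team estimates its user stories in points;
        # a shared velocity converts accumulated points into iterations of effort.
        stories = {
            "team_onsite": [3, 5, 8],          # story points per user story (assumed)
            "team_remote": [2, 3, 5, 13],
        }
        velocity_per_iteration = 12            # points completed per iteration (assumed)

        total_points = sum(sum(points) for points in stories.values())
        iterations_needed = -(-total_points // velocity_per_iteration)   # ceiling division

        print(f"{total_points} points -> about {iterations_needed} iterations")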

    Dynamically typed languages

    Dynamically typed languages such as Python and Ruby have experienced rapid growth in popularity in recent times. However, there is much confusion as to what makes these languages interesting relative to statically typed languages, and little knowledge of their rich history. In this chapter I explore the general topic of dynamically typed languages, how they differ from statically typed languages, their history, and their defining features.
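
    The core difference the chapter examines can be shown in a few lines of Python: types belong to values rather than to variables, and suitability is only checked at run time ("duck typing").

        def describe(value):
            # No declared parameter type; any object supporting len() is acceptable.
            return f"{value!r} has length {len(value)}"

        thing = "hello"              # bound to a str
        print(describe(thing))
        thing = [1, 2, 3]            # rebound to a list; a static checker would flag this
        print(describe(thing))

        try:
            describe(42)             # ints have no len(); the error surfaces only at run time
        except TypeError as err:
            print("runtime type error:", err)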

    Semantic Component Composition

    Building complex software systems necessitates the use of component-based architectures. In theory, of the set of components needed for a design, only some small portion of them are "custom"; the rest are reused or refactored existing pieces of software. Unfortunately, this is an idealized situation. Just because two components should work together does not mean that they will work together. The "glue" that holds components together is not just technology. The contracts that bind complex systems together implicitly define more than their explicit type. These "conceptual contracts" describe essential aspects of extra-system semantics: e.g., object models, type systems, data representation, interface action semantics, legal and contractual obligations, and more. Designers and developers spend inordinate amounts of time technologically duct-taping systems to fulfill these conceptual contracts because system-wide semantics have not been rigorously characterized or codified. This paper describes a formal characterization of the problem and discusses an initial implementation of the resulting theoretical system.
    Comment: 9 pages, submitted to GCSE/SAIG '0
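
    A minimal sketch of such a conceptual contract, using invented components: both sides agree on the explicit type (a float temperature) yet disagree on its meaning (units), so the semantic assumption has to be encoded in glue code.

        def sensor_read() -> float:
            return 98.6                          # component A reports degrees Fahrenheit

        def alarm_triggered(temp_c: float) -> bool:
            return temp_c > 40.0                 # component B assumes degrees Celsius

        def fahrenheit_to_celsius(temp_f: float) -> float:
            # The "glue" making the conceptual contract explicit; types alone do not capture it.
            return (temp_f - 32.0) * 5.0 / 9.0

        # Wiring A to B directly would type-check but misfire (98.6 read as Celsius -> alarm).
        print(alarm_triggered(fahrenheit_to_celsius(sensor_read())))   # False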

    Using Java for distributed computing in the Gaia satellite data processing

    In recent years Java has matured to a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. It has been running successfully since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.
    Comment: Experimental Astronomy, August 201
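
    As a language-neutral illustration of the "embarrassingly parallel" pattern the abstract refers to (the Gaia pipeline itself is written in Java; the function and data below are placeholders, not Gaia code):

        from multiprocessing import Pool

        def reduce_observation(obs):
            # Stand-in for an independent, CPU-heavy per-object computation.
            return obs * obs

        if __name__ == "__main__":
            observations = range(1_000_000)       # each item can be processed independently
            with Pool() as pool:                  # one worker process per available core
                results = pool.map(reduce_observation, observations, chunksize=10_000)
            print(len(results))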

    A general framework for positioning, evaluating and selecting the new generation of development tools.

    This paper focuses on the evaluation and positioning of a new generation of development tools containing subtools (report generators, browsers, debuggers, GUI-builders, ...) and programming languages that are designed to work together, share a common graphical user interface and are therefore called environments. Several trends in IT have led to a pluriform range of development tools that can be classified in numerous categories. Examples are: object-oriented tools, GUI-tools, upper- and lower-CASE tools, client/server tools and 4GL environments. This classification does not sufficiently cover the tools that are the subject of this paper, for the simple reason that only one criterion is used to distinguish them. Modern visual development environments often fit in several categories because, to a certain extent, several criteria can be applied to evaluate them. In this study, we will offer a broad classification scheme with which tools can be positioned and which can be refined through further research.
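
    One simple way to turn such a positioning scheme into a selection aid is weighted-criteria scoring; the criteria, weights and scores in the Python sketch below are invented for illustration and are not taken from the paper's classification scheme.

        criteria_weights = {"GUI builder": 0.2, "debugger": 0.2,
                            "4GL support": 0.3, "client/server": 0.3}

        tool_scores = {                            # raw scores on a 1-5 scale (assumed)
            "Environment A": {"GUI builder": 5, "debugger": 3, "4GL support": 4, "client/server": 2},
            "Environment B": {"GUI builder": 3, "debugger": 4, "4GL support": 3, "client/server": 5},
        }

        for tool, scores in tool_scores.items():
            weighted = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
            print(f"{tool}: {weighted:.2f}")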

    On Some Integrated Approaches to Inference

    We present arguments for the formulation of a unified approach to different standard continuous inference methods from partial information. It is claimed that an explicit partition of information into a priori information (prior knowledge) and a posteriori information (data) is an important way of standardizing inference approaches so that they can be compared on a normative scale, and so that notions of optimal algorithms become farther-reaching. The inference methods considered include neural network approaches, information-based complexity, and Monte Carlo, spline, and regularization methods. The model is an extension of currently used continuous complexity models, with a class of algorithms in the form of optimization methods, in which an optimization functional (involving the data) is minimized. This extends the family of current approaches in continuous complexity theory, which include the use of interpolatory algorithms in worst and average case settings.
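
    A standard instance of such an optimization functional (a textbook regularization example, not a formula taken from the paper) makes the a priori / a posteriori split explicit: the data-misfit term carries the a posteriori information, while the penalty encodes the a priori assumption of smoothness,

        \min_{f} \; \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2 \;+\; \lambda \int \bigl( f''(x) \bigr)^2 \, dx , \qquad \lambda > 0 .

    With this reading, spline smoothing and many regularization methods are recovered as particular choices of the penalty term and of the trade-off parameter \lambda.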