
    Life of occam-Pi

    This paper considers some questions prompted by a brief review of the history of computing. Why is programming so hard? Why is concurrency considered an “advanced” subject? What’s the matter with Objects? Where did all the Maths go? In searching for answers, the paper looks at some concerns over fundamental ideas within object orientation (as represented by modern programming languages), before focussing on the concurrency model of communicating processes and its particular expression in the occam family of languages. In that focus, it looks at the history of occam, its underlying philosophy (Ockham’s Razor), its semantic foundation on Hoare’s CSP, its principles of process oriented design and its development over almost three decades into occam-π (which blends in the concurrency dynamics of Milner’s π-calculus). Also presented will be an urgent need for rationalisation – occam-π is an experiment that has demonstrated significant results, but now needs time to be spent on careful review and implementing the conclusions of that review. Finally, the future is considered. In particular, is there a future
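
    A small sketch may help picture the communicating-process model the paper centres on. The Python fragment below is an approximation, not occam: independent processes share no state and interact only over channels, with queue.Queue standing in for occam's (synchronous, rendezvous-style) channels; the process names and values are purely illustrative.

        # Minimal sketch of CSP-style communicating processes in Python.
        # Channels are modelled as queues; occam channels are synchronous
        # rendezvous, so this is an approximation for illustration only.
        import queue
        import threading

        def producer(out_chan: queue.Queue) -> None:
            """Send a small stream of values, then a sentinel marking the end."""
            for value in range(5):
                out_chan.put(value)
            out_chan.put(None)

        def doubler(in_chan: queue.Queue, out_chan: queue.Queue) -> None:
            """Read values from one channel, transform them, forward to another."""
            while True:
                value = in_chan.get()
                if value is None:
                    out_chan.put(None)
                    break
                out_chan.put(value * 2)

        def consumer(in_chan: queue.Queue) -> None:
            """Drain the channel until the sentinel arrives."""
            while True:
                value = in_chan.get()
                if value is None:
                    break
                print("received", value)

        if __name__ == "__main__":
            a, b = queue.Queue(), queue.Queue()
            processes = [
                threading.Thread(target=producer, args=(a,)),
                threading.Thread(target=doubler, args=(a, b)),
                threading.Thread(target=consumer, args=(b,)),
            ]
            for p in processes:
                p.start()
            for p in processes:
                p.join()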

    Reusable Knowledge-based Components for Building Software Applications: A Knowledge Modelling Approach

    In computer science, different types of reusable components for building software applications were proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components for building applications depends on factors such as the flexibility of their combination or the ease of their selection in centralised or distributed environments such as the internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.

    Designing Software Architectures As a Composition of Specializations of Knowledge Domains

    This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experiences in object-oriented software development, we made the following assumption: ‘A software architecture should be a composition of specializations of knowledge domains’. To verify this assumption we carried out three pilot projects. In addition to applying some popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper we describe our approach, the pilot projects, the problems we experienced and the solutions we adopted for realizing the software architectures. We conclude the paper with the lessons that we learned from this experience.

    On the Verified-by-Construction Approach


    Rational physical agent reasoning beyond logic

    The paper addresses the problem of defining a theoretical physical agent framework that satisfies practical requirements of programmability by non-programmer engineers while permitting fast real-time operation of agents on digital computer networks. The objective of the new framework is to enable the satisfaction of performance requirements on autonomous vehicles and robots in space exploration, deep underwater exploration, defense reconnaissance, automated manufacturing and household automation.

    Logic-Based Specification Languages for Intelligent Software Agents

    The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies and toolkits for modeling, verifying, validating and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each executable language, the logic foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations of the advantages of using logic-based languages in MAS modeling and prototyping. Comment: 67 pages, 1 table, 1 figure. Accepted for publication by the Journal "Theory and Practice of Logic Programming", volume 4, Maurice Bruynooghe Editor-in-Chief

    Separating Agent-Functioning and Inter-Agent Coordination by Activated Modules: The DECOMAS Architecture

    The embedding of self-organizing inter-agent processes in distributed software applications enables the decentralized coordination of system elements, based solely on concerted, localized interactions. The separation and encapsulation of the activities that are conceptually related to coordination is a crucial concern for systematic development practices, in order to enable the reuse and systematic integration of coordination processes in software systems. Here, we discuss a programming model that is based on the externalization of process prescriptions and their embedding in Multi-Agent Systems (MAS). One fundamental design concern for a corresponding execution middleware is the minimally invasive augmentation of the activities that affect coordination. This design challenge is approached by the activation of agent modules. Modules are converted to software elements that reason about and modify their host agent. We discuss and formalize this extension within the context of a generic coordination architecture and exemplify the proposed programming model with the decentralized management of (web) service infrastructures.
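
    The "activated module" idea can be sketched concretely. The Python fragment below is a loose illustration of the abstract's description, not the DECOMAS API: every class and method name (Agent, CoordinationModule, activate, step) is invented, and the simple load-balancing rule merely stands in for a coordination process that observes and modifies its host agent.

        # Loose illustration of coordination logic packaged as a module that is
        # activated on a host agent; names and the balancing rule are invented.
        from dataclasses import dataclass, field

        @dataclass
        class Agent:
            name: str
            load: float = 0.0                      # application-level state
            neighbours: list = field(default_factory=list)

        class CoordinationModule:
            """Encapsulates inter-agent coordination, kept apart from agent functioning."""

            def activate(self, host: Agent) -> None:
                self.host = host                   # the module gains access to its host agent

            def step(self) -> None:
                # Localized, decentralized rule: shed load towards less-loaded neighbours.
                for other in self.host.neighbours:
                    if self.host.load - other.load > 1.0:
                        transfer = (self.host.load - other.load) / 2
                        self.host.load -= transfer
                        other.load += transfer

        if __name__ == "__main__":
            a, b = Agent("a", load=4.0), Agent("b")
            a.neighbours, b.neighbours = [b], [a]
            module = CoordinationModule()
            module.activate(a)
            module.step()
            print(a.load, b.load)                  # 2.0 2.0: load has been balanced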

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, and to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for its valuable collaboration.

    Building Program Vector Representations for Deep Learning

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs, since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are a premise of deep learning for program analysis. Our representation learning approach directly makes deep learning a reality in this new field. We evaluate the learned vector representations both qualitatively and quantitatively. We conclude, based on the experiments, that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks, and achieve higher accuracy in the program classification task than "shallow" methods such as logistic regression and the support vector machine. This result confirms the feasibility of deep learning for analyzing programs and gives preliminary evidence of its success in this new field. We believe deep learning will become an outstanding technique for program analysis in the near future. Comment: This paper was submitted to ICSE'1
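
    Since the abstract only names the "coding criterion", the numpy sketch below shows one plausible reading of it: learn a vector per AST node type such that a parent node's vector is approximately a squashed linear combination of its children's vectors. The toy node types, corpus, loss and hyper-parameters are invented for illustration and do not reproduce the paper's exact formulation.

        # Toy numpy sketch of a coding-criterion-style objective: a parent AST
        # node's vector should be reconstructable from its children's vectors.
        # Node types, the corpus and hyper-parameters are invented toy data.
        import numpy as np

        rng = np.random.default_rng(0)
        node_types = ["FuncDef", "If", "Assign", "BinOp", "Name", "Const"]
        dim = 8
        emb = {t: rng.normal(scale=0.1, size=dim) for t in node_types}  # one vector per node type
        W = rng.normal(scale=0.1, size=(dim, dim))                      # shared composition matrix
        b = np.zeros(dim)

        # Toy "corpus": (parent type, child types) pairs as they would occur in ASTs.
        corpus = [
            ("FuncDef", ["If", "Assign"]),
            ("If", ["BinOp", "Assign"]),
            ("Assign", ["Name", "Const"]),
            ("BinOp", ["Name", "Const"]),
        ]

        lr = 0.05
        for _ in range(200):
            for parent, children in corpus:
                child_mean = np.mean([emb[c] for c in children], axis=0)
                pred = np.tanh(W @ child_mean + b)       # predicted parent vector
                err = pred - emb[parent]                 # squared-error objective: ||err||^2 / 2
                grad_pre = err * (1 - pred ** 2)         # back-prop through tanh
                grad_child = W.T @ grad_pre / len(children)
                W -= lr * np.outer(grad_pre, child_mean)
                b -= lr * grad_pre
                emb[parent] += lr * err                  # pull the parent vector towards the prediction
                for c in children:
                    emb[c] -= lr * grad_child

        # The learned vectors would then be fed to a downstream (deep) classifier.
        print({t: np.round(v, 2) for t, v in emb.items()})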
