42 research outputs found

    An optimizing Prolog front-end to a relational query system

    Design of testbed and emulation tools

    The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.

    Combining neural networks with symbolic approaches to perform complex event processing on non-symbolic data

    This thesis presents three approaches to detecting situations of interest from non-symbolic data inputs such as images, audio and video through the use of Complex Event Processing (CEP) in an agile, reliable and efficient manner. These approaches must be agile, meaning that they must allow the implementation of solutions for a range of situations. We want them to be reliable, meaning that they must correctly detect the situations we are interested in. Finally, we want them to be efficient in terms of time and training data requirements. First, we present ProbCEP, an approach that combines proxy models with symbolic programming to perform CEP. We consider proxy models consisting of pre-trained neural networks, which allow the system to use non-symbolic inputs. However, the data used to train such proxy models is not necessarily related to the situations of interest (called complex events) we want to detect. Logic rules are used to define under which conditions these complex events occur, based on the output of the proxy models. We also show how the speed of the system can be significantly increased using specific optimization techniques. Then, we show two neuro-symbolic approaches, DeepProbCEP and Neuroplex. These approaches are designed to train a system to identify complex events from an input stream using small amounts of training data. Thanks to the injection of human knowledge into the system, these approaches require significantly less data than neural-only approaches. Following that, we explore the reliability of DeepProbCEP against several adversarial attacks that poison the training data. We also demonstrate that DeepProbCEP is an agile approach, allowing users to change the behaviour of the system to adapt to many situations. Finally, we discuss the potential of the research advances presented in this thesis for real-world applications, as well as possible areas for future research.
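
    The pipeline described above, in which a pre-trained proxy model labels raw inputs and symbolic rules decide when a complex event has occurred, can be sketched roughly as follows. This is a minimal illustration, not the ProbCEP implementation: the frame classifier, the "person in three consecutive frames" rule, and all names are hypothetical stand-ins.

```python
# Illustrative sketch of the proxy-model + symbolic-rules idea (not the actual
# ProbCEP code): a pre-trained classifier labels raw inputs, and hand-written
# rules over those labels decide when a complex event has occurred.
from collections import deque

def proxy_classify(frame):
    """Stand-in for a pre-trained neural network.

    In a real system this would be, e.g., an image classifier returning a
    label (and possibly a probability) for each non-symbolic input.
    """
    return frame["label"]  # hypothetical: frames here already carry a label

def complex_event_rule(window):
    """Symbolic rule: the hypothetical complex event occurs when a 'person'
    is detected in at least 3 consecutive frames."""
    return len(window) >= 3 and all(lbl == "person" for lbl in window)

def detect(stream, window_size=3):
    """Run the pipeline over a stream of frames, yielding event timestamps."""
    window = deque(maxlen=window_size)
    for t, frame in enumerate(stream):
        window.append(proxy_classify(frame))
        if complex_event_rule(window):
            yield t  # complex event detected at time t

# Example usage on a toy stream of pre-labelled frames.
frames = [{"label": l} for l in ["car", "person", "person", "person", "car"]]
print(list(detect(frames)))  # -> [3]
```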

    Type-Based Publish/Subscribe

    This paper presents type-based publish/subscribe, a new variant of the publish/subscribe paradigm. Producers publish message objects on a communication bus, and consumers subscribe to the bus by specifying the types of the objects they are interested in. Message objects are considered as first-class citizens and are classified by their types, instead of arbitrarily fixed topics. By reusing the type scheme of the language to classify message objects, type-based publish/subscribe avoids any unnatural subscription scheme and provides for a seamless integration of a publish/subscribe middleware with the programming language. Type-based publish/subscribe has several quantifiable advantages over other publish/subscribe variants. In particular, the knowledge of the type of message objects enables performance optimizations when combined with dynamic filters for content-based subscription. Our type-based publish/subscribe prototype is based on Distributed Asynchronous Collections (DACs), programming abstractions for publish/subscribe interaction. They are implemented using GJ, an extended Java compiler adding genericity to the Java language, and enable the expression of safely typed distributed interaction without requiring any generation of typed proxies.
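
    As a rough illustration of subscribing by type rather than by topic string, the toy bus below dispatches messages on their runtime type. It is a sketch only, not the DAC/GJ prototype described in the paper; the EventBus and StockQuote names are hypothetical.

```python
# Toy illustration of type-based publish/subscribe: consumers subscribe by
# declaring the class of message objects they want, and the bus dispatches
# on the runtime type of each published object.
from dataclasses import dataclass

class EventBus:
    def __init__(self):
        self._subscribers = []  # list of (type, callback) pairs

    def subscribe(self, message_type, callback):
        """Register interest in all messages that are instances of message_type."""
        self._subscribers.append((message_type, callback))

    def publish(self, message):
        """Deliver the message object to every subscriber whose declared type matches."""
        for message_type, callback in self._subscribers:
            if isinstance(message, message_type):
                callback(message)

@dataclass
class StockQuote:          # hypothetical message type
    symbol: str
    price: float

bus = EventBus()
bus.subscribe(StockQuote, lambda q: print(f"{q.symbol}: {q.price}"))
bus.publish(StockQuote("ACME", 42.0))   # delivered: matches StockQuote
bus.publish("unrelated string")         # ignored: no subscriber for str
```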

    Application Migration Effort in the Cloud

    Over the last years, the utilization of cloud resources has been steadily rising and an increasing number of enterprises are moving applications to the cloud. A leading trend is the adoption of Platform as a Service (PaaS) to support rapid application deployment. By providing a managed environment, cloud platforms take away much of the complex configuration effort required to build scalable applications. However, application migrations to and between clouds cost development effort and open up new risks of vendor lock-in. This is problematic because frequent migrations may be necessary in the dynamic and fast-changing cloud market. So far, the effort of application migration in PaaS environments and the typical issues experienced in this task are poorly understood. To improve this situation, we present a cloud-to-cloud migration of a real-world application to seven representative cloud platforms. In this case study, we analyze the feasibility of the migrations in terms of portability and the effort of the migrations. We present a Docker-based deployment system that enables isolated and reproducible measurements of deployments to platform vendors, thus allowing platforms to be compared for a particular application. Using this system, the study identifies key problems during migrations and quantifies the differences between platforms using distinct metrics.
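
    The measurement idea, deploying the same application to each platform from a reproducible environment and recording comparable metrics, might be sketched along these lines. The platform names, commands, and the wall-clock metric are hypothetical placeholders, not the study's actual Docker-based tooling.

```python
# Rough sketch of timing repeated deployments of one application to several
# platforms; the commands and platform names below are placeholders for the
# real deployment CLIs, not the study's actual tooling.
import subprocess
import sys
import time

# Hypothetical mapping from platform name to its deployment command.
PLATFORMS = {
    "platform-a": [sys.executable, "-c", "print('deploying to platform A')"],
    "platform-b": [sys.executable, "-c", "print('deploying to platform B')"],
}

def measure_deployment(command, runs=3):
    """Run the deployment command several times and return the mean wall-clock time."""
    durations = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(command, check=True, capture_output=True)
        durations.append(time.monotonic() - start)
    return sum(durations) / len(durations)

for name, command in PLATFORMS.items():
    print(f"{name}: {measure_deployment(command):.2f}s mean deployment time")
```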

    IPAD 2: Advances in Distributed Data Base Management for CAD/CAM

    The Integrated Programs for Aerospace-Vehicle Design (IPAD) Project objective is to improve engineering productivity through better use of computer-aided design and manufacturing (CAD/CAM) technology. The focus is on development of technology and associated software for integrated company-wide management of engineering information. The objectives of this conference are as follows: to provide a greater awareness of the critical need by U.S. industry for advancements in distributed CAD/CAM data management capability; to present industry experiences and current and planned research in distributed data base management; and to summarize IPAD data management contributions and their impact on U.S. industry and computer hardware and software vendors.

    Translating expert system rules into Ada code with validation and verification

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code are discussed: the rules are converted into Ada code modules and then linked with an Activation Framework based run-time environment to form an executable load module. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software used to test the resultant code is also discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.
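
    The automated testing step, drawing random inputs and checking the translated system's behaviour against a constraint-based description of the required performance, could look roughly like the sketch below. The rule, thresholds, and input ranges are hypothetical; this is not the project's actual test generation software.

```python
# Illustrative Monte Carlo test harness (not the project's actual tools):
# random inputs are drawn from declared ranges and the translated module's
# output is checked against a constraint describing the required behaviour.
import random

def rule_module(temperature, pressure):
    """Hypothetical stand-in for a rule translated into a code module:
    raise an alarm when temperature and pressure are both high."""
    return temperature > 80.0 and pressure > 200.0

def required_behaviour(temperature, pressure):
    """Constraint-based description of required performance: the alarm must be
    raised exactly when both readings exceed their thresholds."""
    return temperature > 80.0 and pressure > 200.0

def monte_carlo_test(trials=10_000, seed=0):
    """Draw random inputs from the declared ranges and count mismatches."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        temperature = rng.uniform(0.0, 120.0)   # declared input range
        pressure = rng.uniform(0.0, 300.0)      # declared input range
        if rule_module(temperature, pressure) != required_behaviour(temperature, pressure):
            failures += 1
    return failures

print(f"failures: {monte_carlo_test()} / 10000")
```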