
    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (as opposed to general-purpose) software design. General issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    Safe Automated Refactoring for Intelligent Parallelization of Java 8 Streams

    Streaming APIs are becoming more pervasive in mainstream object-oriented programming languages and platforms. For example, the Stream API introduced in Java 8 allows for functional-like, MapReduce-style operations in processing both finite data structures, e.g., collections, and infinite data structures. However, using this API efficiently involves subtle considerations, such as determining when it is best for stream operations to run in parallel, when running operations in parallel can be less efficient, and whether it is safe to run in parallel given possible lambda expression side effects. Also, streams may not run all operations in parallel depending on the particular collectors used in reductions. In this paper, we present an automated refactoring approach that assists developers in writing efficient stream code in a semantics-preserving fashion. The approach, based on a novel data ordering and typestate analysis, consists of preconditions and transformations for automatically determining when it is safe and possibly advantageous to convert sequential streams to parallel, unorder or de-parallelize already parallel streams, and optimize streams involving complex reductions. The approach was implemented as a plug-in to the popular Eclipse IDE, uses the WALA and SAFE analysis frameworks, and was evaluated on 11 Java projects consisting of ∼642K lines of code. We found that 57 of 157 candidate streams (36.31%) were refactorable, and an average speedup of 3.49 on performance tests was observed. The results indicate that the approach is useful in optimizing stream code to its full potential.
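    As a minimal illustration (not taken from the paper), the core decision the refactoring automates looks like this in plain Java 8: a pipeline with stateless, side-effect-free lambdas is a candidate for running in parallel, while one whose lambda mutates shared state is not. The data and class names below are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of sequential-to-parallel stream decisions
// (hypothetical example, not the paper's tool).
public class StreamParallelizationSketch {

    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma", "delta");

        // Candidate for parallelization: the lambdas are stateless and
        // side-effect free, so running the pipeline in parallel preserves
        // the sequential semantics.
        List<Integer> lengths = words.parallelStream()
                                     .map(String::length)
                                     .collect(Collectors.toList());

        // NOT safe to parallelize as written: the terminal operation mutates
        // shared, non-thread-safe state (an ArrayList), so a parallel run
        // could lose or reorder elements.
        List<Integer> unsafe = new ArrayList<>();
        words.stream()                    // keep sequential, or rewrite side-effect free
             .map(String::length)
             .forEach(unsafe::add);

        System.out.println(lengths);
        System.out.println(unsafe);
    }
}
```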

    An Asynchronous Simulation Framework for Multi-User Interactive Collaboration: Application to Robot-Assisted Surgery

    The field of surgery is continually evolving, as there is always room for improvement in the post-operative health of the patient as well as the comfort of the Operating Room (OR) team. While the success of surgery is contingent upon the skills of the surgeon and the OR team, the use of specialized robots has been shown to improve surgery-related outcomes in some cases. These outcomes are currently measured using a wide variety of metrics that include patient pain and recovery, the surgeon’s comfort, the duration of the operation and the cost of the procedure. Additional research is needed to better understand the optimal criteria for benchmarking surgical performance. Presently, surgeons are trained to perform robot-assisted surgeries using interactive simulators. However, in the absence of well-defined performance standards, these simulators focus primarily on the simulation of the operative scene and not on the complexities associated with the multiple inputs to a real-world surgical procedure. Because interactive simulators are typically designed for specific robots that perform a small number of tasks controlled by a single user, they are inflexible in terms of their portability to different robots and the inclusion of multiple operators (e.g., nurses, medical assistants). Additionally, while most simulators provide high-quality visuals, simplification techniques are often employed to avoid stability issues in the computation of physics, contact dynamics and multi-manual interaction. This study addresses the limitations of existing simulators by outlining the specifications required to develop techniques that mimic real-world interactions and collaboration. Moreover, it focuses on the inclusion of distributed control, shared task allocation and assistive feedback -- through machine learning, secondary and tertiary operators -- alongside the primary human operator.

    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. Frequently, however, the required component abstractions are not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition and composes programs and other software artifacts at the level of syntax trees. A unifying fragment component model is therefore related to the context-free grammar of a language in order to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that composition always yields results that are valid with respect to the underlying context-free grammar.

    However, given only a language’s context-free grammar, the composition result may still be incorrect. Context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled and/or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundreds of such steps. To tackle this problem, this thesis proposes well-formed ISC—an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps.

    Developing ISC systems for complex languages such as programming languages is a complex undertaking. Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine, and these specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages; hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC—a variant of ISC that uses island component models to define component models for partially specified languages while still supporting the whole language. Additionally, a scalable workflow for agile composition-system development is proposed which supports developing ISC systems in small increments using modular extensions.

    All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT, which supports “classic”, well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications, such as a library of parallel algorithmic skeletons.
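    A minimal sketch of the core ISC idea, using hypothetical types rather than SkAT's actual API: fragments are syntax trees containing named variation points ("slots"), and composition replaces a slot with another fragment's tree. A well-formed composition system would additionally check context-sensitive constraints (such as types) before accepting the result.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fragment-composition sketch: syntax-tree fragments with
// named variation points, composed by slot substitution.
public class FragmentCompositionSketch {

    static class Node {
        final String label;                       // e.g. a grammar symbol or token
        final List<Node> children = new ArrayList<>();
        Node(String label) { this.label = label; }
        Node add(Node child) { children.add(child); return this; }
    }

    /** Recursively replaces every slot named slotName with the given fragment. */
    static Node bind(Node tree, String slotName, Node fragment) {
        if (tree.label.equals("slot:" + slotName)) {
            return fragment;
        }
        Node copy = new Node(tree.label);
        for (Node child : tree.children) {
            copy.add(bind(child, slotName, fragment));
        }
        return copy;
    }

    static String print(Node n) {
        if (n.children.isEmpty()) return n.label;
        StringBuilder sb = new StringBuilder(n.label + "(");
        for (int i = 0; i < n.children.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append(print(n.children.get(i)));
        }
        return sb.append(")").toString();
    }

    public static void main(String[] args) {
        // A method-body template with one variation point to fill.
        Node template = new Node("method")
                .add(new Node("name:compute"))
                .add(new Node("slot:body"));
        Node body = new Node("return").add(new Node("literal:42"));

        // Composition fills the slot; printing shows the resulting tree.
        System.out.println(print(bind(template, "body", body)));
    }
}
```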

    Deploying ontologies in software design

    In this thesis we will be concerned with the relation between ontologies and software design. Ontologies are studied in the artificial intelligence community as a means to explicitly represent standardised domain knowledge in order to enable knowledge sharing and reuse. We deploy ontologies in software design with emphasis on a traditional software engineering theme: error detection. In particular, we identify a type of error that is often difficult to detect: conceptual errors. These are related to the description of the domain in which the system will operate, and subjective knowledge about correct forms of domain description is required to detect them. Ontologies provide these forms of domain description, and we are interested in applying them and verifying their correctness (chapter 1). After presenting an in-depth analysis of the field of ontologies and software testing as conceived and implemented by the software engineering and artificial intelligence communities (chapter 2), we discuss an approach which enabled us to deploy ontologies in the early phases of software development (i.e., specifications) in order to detect conceptual errors (chapter 3). This is based on the provision of ontological axioms which are used to verify conformance of specification constructs to the underpinning ontology. To facilitate the integration of an ontology with applications that adopt it, we developed an architecture and built tools to implement this form of conceptual error checking (chapter 4). We apply and evaluate the architecture in a variety of contexts to identify potential uses (chapter 5). An implication of this method for deploying ontologies to reason about the correctness of applications is to raise our trust in the given ontologies; however, when the ontologies themselves are erroneous we might fail to reveal pernicious discrepancies. To cope with this problem we extended the architecture to a multi-layer form (chapter 4), which gives us the ability to check the ontologies themselves for correctness. We apply this multi-layer architecture to capture errors found in a complex lattice of ontologies (chapter 6). We further elaborate on the weaknesses in ontology evaluation methods and employ a technique stemming from software engineering, that of experience management, to facilitate ontology testing and deployment (chapter 7). The work presented in this thesis aims to improve practice in ontology use and to identify areas in which ontologies could be of benefit beyond the advocated ones of knowledge sharing and reuse (chapter 8).
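    As a rough illustration of checking a specification against ontological axioms (hypothetical code, not the thesis' architecture): a toy ontology declares two classes disjoint, and a specification that asserts the same individual into both classes is flagged as a conceptual error.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy conceptual-error check: disjointness axioms from an ontology are used
// to flag contradictory class assertions in a specification.
public class OntologyConformanceSketch {

    // individual -> classes asserted for it in the specification
    static Map<String, Set<String>> assertedClasses = new HashMap<>();
    // disjointness axioms from the ontology, stored as unordered pairs
    static Set<Set<String>> disjointAxioms = new HashSet<>();

    static void assertClass(String individual, String cls) {
        assertedClasses.computeIfAbsent(individual, k -> new HashSet<>()).add(cls);
    }

    static void declareDisjoint(String a, String b) {
        disjointAxioms.add(Set.of(a, b));
    }

    /** Reports every individual whose asserted classes violate a disjointness axiom. */
    static void check() {
        for (Map.Entry<String, Set<String>> entry : assertedClasses.entrySet()) {
            for (String a : entry.getValue()) {
                for (String b : entry.getValue()) {
                    if (a.compareTo(b) < 0 && disjointAxioms.contains(Set.of(a, b))) {
                        System.out.println("Conceptual error: " + entry.getKey()
                                + " is asserted as both " + a + " and " + b);
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        declareDisjoint("Hardware", "Software");   // ontology axiom
        assertClass("scheduler", "Software");      // specification statement
        assertClass("scheduler", "Hardware");      // conflicting statement -> flagged
        check();
    }
}
```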

    Mining behavioral specifications of distributed systems

    Ph.D. thesis (Doctor of Philosophy)

    Abstraction-Based Program Specialization

    Abstraction-based program specialization (ABPS) was investigated so that it could be applied to Java and make automated improvements to help with finite-state verification. Research was conducted on partial evaluation and abstract interpretation. A prototype for abstraction-based program specialization was constructed by Hatcliff, Dwyer, and Laubach; this work scaled the prototype to a subset of Java and made some general improvements. Today's software is large and complex. Because of this complexity, traditional validation and program testing techniques are hard to apply. One method in use is finite-state verification (FSV). FSV requires a program to be modeled as a finite-state transition system. Currently, the modeling is done by hand, an error-prone process. Also, the state space of a non-trivial program is extremely large (potentially infinite). This thesis created an ABPS that uses partial evaluation and abstract interpretation to reduce a program model's state space. Partial evaluation performs symbolic execution; it specializes programs by folding constants and pruning infeasible branches from the computation tree. The abstract interpretation component replaces program data types with small sets of abstract tokens that capture information relevant to the properties being verified. This can dramatically reduce a program's state space. Abstraction-based program specialization is a viable option for improving code and automating the use of finite-state verifiers. Much work still needs to be done to completely scale abstraction-based program specialization to include all of Java and to make the process more automatic. Finally, several examples illustrate how ABPS can be applied to automatically create models of simple software systems.
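    A small sketch of the abstract interpretation side of ABPS, assuming a classic sign abstraction rather than the thesis' actual abstractions: concrete integers are replaced by three abstract tokens, so a finite-state verifier only has to explore a handful of abstract states per variable instead of billions of concrete values.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sign abstraction (hypothetical, not the thesis' prototype):
// concrete int values are mapped to the tokens {NEG, ZERO, POS}, and the
// transfer function for "x + 1" is computed over those tokens.
public class SignAbstractionSketch {

    enum Sign { NEG, ZERO, POS }

    /** Abstraction function: maps a concrete value to its abstract token. */
    static Sign alpha(int x) {
        return x < 0 ? Sign.NEG : (x == 0 ? Sign.ZERO : Sign.POS);
    }

    /** Sound abstract transfer function for x + 1 over the sign domain. */
    static Set<Sign> incr(Sign s) {
        switch (s) {
            case NEG:  return EnumSet.of(Sign.NEG, Sign.ZERO); // -1 + 1 == 0, so both are possible
            case ZERO: return EnumSet.of(Sign.POS);
            default:   return EnumSet.of(Sign.POS);
        }
    }

    public static void main(String[] args) {
        // Start from the abstraction of a concrete input and iterate the
        // transfer function to enumerate the reachable abstract states.
        Set<Sign> states = EnumSet.of(alpha(-5));
        for (int step = 0; step < 3; step++) {
            System.out.println("abstract states: " + states);
            Set<Sign> next = EnumSet.noneOf(Sign.class);
            for (Sign s : states) next.addAll(incr(s));
            states = next;
        }
    }
}
```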

    A Graphical Environment Supporting the Algebraic Specification of Abstract Data Types

    Abstract Data Types (ADTs) are a powerful conceptual and practical device for building high-quality software because of the way they can describe objects whilst hiding the details of how they are represented within a computer. In order to implement ADTs correctly, it is first necessary to precisely describe their properties and behaviour, typically within a mathematical framework such as algebraic specification. These techniques are no longer merely research topics but are now tools used by software practitioners. Unfortunately, the high level of mathematical sophistication required to exploit these methods has made them unattractive to a large portion of their intended audience. This thesis investigates the use of computer graphics as a way of making the formal specification of ADTs more palatable.

    Computer graphics technology has recently been explored as a way of making computer programs more understandable by revealing aspects of their structure and run-time behaviour that are usually hidden in textual representations. These graphical techniques can also be used to create and edit programs. Although such visualisation techniques have been incorporated into tools supporting several phases of software development, a survey of existing systems presented in this thesis reveals that their application to supporting the formal specification of ADTs has so far been ignored.

    This thesis describes the development of a prototype tool (called VISAGE) for visualising and visually programming formally-specified ADTs. VISAGE uses a synchronised combination of textual and graphical views to illustrate the various facets of an ADT's structure and behaviour. The graphical views use both static and dynamic representations developed specifically for this domain. VISAGE's visual programming facility has powerful mechanisms for creating and manipulating entire structures (as well as their components) that make it at least comparable with textual methods. In recognition of the importance of examples as a way of illustrating abstract concepts, VISAGE provides a dedicated tool (called the PLAYPEN) that allows the creation of example data by the user. These data can then be transformed by the operations belonging to the ADT, with the result shown by means of a dynamic, graphical display.

    An evaluation of VISAGE was conducted in order to detect any improvement in subjects' performance, confidence and understanding of ADT specifications. The subjects were asked to perform a set of simple specification tasks, with some using VISAGE and the others using manual techniques to act as a control. An analysis of the results shows a distinct positive reaction from the VISAGE group that was completely absent in the control group, thereby supporting the thesis that the algebraic specification of ADTs can be made more accessible and palatable through the use of computer graphics techniques.
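    For readers unfamiliar with algebraic specification, the following sketch shows the kind of axioms such a specification of a Stack ADT typically states, for example top(push(s, x)) = x and pop(push(s, x)) = s, written here as executable checks against a simple reference implementation. All names are illustrative and unrelated to VISAGE.

```java
import java.util.ArrayList;
import java.util.List;

// Classic Stack ADT axioms, checked against a small value-style implementation
// (hypothetical example, not part of VISAGE).
public class StackAxiomsSketch {

    /** A value-style stack so whole stacks can be compared in the axioms. */
    static class Stack {
        private final List<Integer> elems;
        Stack(List<Integer> elems) { this.elems = elems; }
        static Stack empty() { return new Stack(List.of()); }
        Stack push(int x) {
            List<Integer> copy = new ArrayList<>(elems);
            copy.add(x);
            return new Stack(copy);
        }
        Stack pop() { return new Stack(elems.subList(0, elems.size() - 1)); }
        int top() { return elems.get(elems.size() - 1); }
        boolean sameAs(Stack other) { return elems.equals(other.elems); }
    }

    public static void main(String[] args) {
        Stack s = Stack.empty().push(1).push(2);
        int x = 7;
        // Axiom 1: top(push(s, x)) = x
        System.out.println("top(push(s,x)) == x : " + (s.push(x).top() == x));
        // Axiom 2: pop(push(s, x)) = s
        System.out.println("pop(push(s,x)) == s : " + s.push(x).pop().sameAs(s));
    }
}
```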