
    The Design and Implementation of Bloqqi - A Feature-Based Diagram Programming Language

    This dissertation presents the design and implementation of a new block diagram programming language, Bloqqi, for building control systems with a focus on variability. The language has been developed in collaboration with industry with the goal of reducing engineering time and improving reuse of functionality. When building a control system for a plant, there are typically different variants of the same base functionality. A plant may have several variants of a tank, for example, one variant with heating and another without. This dissertation presents novel language mechanisms for describing this kind of variability, based on diagram inheritance. For instance, Bloqqi supports specifying what features, like heating, the base functionality can have. These specifications are then used to automatically derive smart-editing support in the form of a feature-based wizard. In this wizard, the user can select what features the base functionality should have, and code is generated corresponding to these features. The new language mechanisms allow feature-based libraries to be created and extended in a modular way. This dissertation also presents techniques for implementing rich graphical editors with smart-editing support based on semantic analysis. A prototype compiler and graphical editor have been implemented for the language, using the semantic formalism of reference attribute grammars (RAGs). RAGs allow tools to share semantic specifications, which makes it possible to modularly extend the compiler with support for advanced semantic feedback to the user of the graphical editor.
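    The feature-wizard mechanism lends itself to a short illustration. The sketch below mimics, in Python, the workflow the abstract describes: a base functionality declares its optional features, and selecting a subset of them derives a concrete variant. All names here (TankControl, add_heating, derive_variant) are hypothetical stand-ins for exposition; Bloqqi itself is a graphical diagram language with its own syntax.

```python
# Hypothetical sketch (not Bloqqi syntax): a base functionality registers
# optional features; a wizard-style step derives a variant from a feature
# selection, mirroring the feature-based code generation described above.

class Block:
    """A named block in a diagram; connections are omitted for brevity."""
    def __init__(self, name):
        self.name = name

class TankControl:
    """Base diagram: every variant gets a level sensor and an outlet valve."""
    features = {}  # registry of optional features, keyed by feature name

    def __init__(self):
        self.blocks = [Block("level_sensor"), Block("outlet_valve")]

    @classmethod
    def feature(cls, name):
        """Register an optional feature (a function that adds blocks)."""
        def register(fn):
            cls.features[name] = fn
            return fn
        return register

@TankControl.feature("heating")
def add_heating(diagram):
    diagram.blocks.append(Block("heater"))
    diagram.blocks.append(Block("temperature_controller"))

def derive_variant(selected):
    """Wizard step: build a variant from the selected feature names."""
    diagram = TankControl()
    for name in selected:
        TankControl.features[name](diagram)
    return diagram

heated_tank = derive_variant(["heating"])
print([b.name for b in heated_tank.blocks])
# ['level_sensor', 'outlet_valve', 'heater', 'temperature_controller']
```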

    Reflecting on the Physics of Notations applied to a visualisation case study

    This paper presents a critical reflection on the concept of the 'physics of notations' proposed by Moody. It is based on the post hoc application of the concept in the analysis of a visualisation tool developed for a commonplace mathematics tool. Although this is not the intended design and development approach presumed or preferred by the physics of notations, there are benefits to analysing an extant visualisation. In particular, our analysis benefits from the visualisation having been developed and refined by employing graphic design professionals and extensive formative user feedback. Hence the rationale for specific visualisation features is to some extent traceable. This reflective analysis shines a light on features of both the visualisation and the domain visualised, illustrating that it could have been analysed more thoroughly at design time. However, the same analysis raises a variety of interesting questions about the viability of scoping practical visualisation design in the framework proposed by the physics of notations.

    Developments in Dataflow Programming

    Dataflow has historically been motivated by parallelism, programmability, or some combination of the two. This work, rather than being directed primarily at parallelism or programmability, is instead aimed at maximising the overall utility to the programmer of the system at large. This means that it aims to result in a system in which it is easy to create well-constructed, flexible programs that comply with the principles of software engineering and architecture, but also that the proposed system should be capable of performing practical real-life tasks and should be as widely applicable as possible. With those aims in mind, this project has four goals:

    * to argue for a unified global dataflow coordination system, extensible to accommodate components of any form that may exist now or in the future;
    * to establish a link between the design of such a system and the principles of software engineering and architecture;
    * to design a dataflow coordination system based on those principles, aiming where possible to embed them in the design so that they become easy or even automatic for programmers to apply; and
    * to implement and test components of the proposed system, using it to build a set of three sample algorithms.

    Taking the best ideas that have been proposed in dataflow programming in the past --- those that most effectively embed the principles of software engineering --- and extending them with new proposals where necessary, a collection of interactions and functionalities is proposed, including a novel way of using partial evaluation of functions and data dimensionality to represent iteration in an acyclic graph. The proposed design was implemented as far as necessary to construct three test algorithms: calculating a factorial, generating terms of the Fibonacci sequence, and performing a merge sort. The implementation was successful in representing iteration in acyclic dataflow, and the test algorithms generated correct results, limited only by the numerical representation capabilities of the underlying language. Testing and working with the implemented system revealed the importance to usability of the system being visual, interactive and, in a distributed environment, always available. Proposed further work falls into three categories: writing a full specification (in particular, defining the interfaces by which components will interact); developing new features to extend the functionality; and further developing the test implementation. The conclusion summarises the vision of a unified global dataflow coordination system and makes an appeal for cooperation on its development as an open, non-profit dataflow system run for the good of its community, rather than allowing a proliferation of competing systems run for commercial gain.
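    The idea of expressing iteration in an acyclic graph can be made concrete with a small sketch. The Python below assumes one plausible reading of the abstract: a loop is re-expressed by spreading its steps along a data dimension and folding over it, so no edge ever feeds a node's output back into its input. The Node, constant, and dimension names are illustrative, not the thesis's actual interfaces.

```python
# Minimal illustration: factorial as an acyclic dataflow graph. Instead of
# a cyclic accumulator, the scalar n is expanded into the dimension
# [1..n], which a fold node then consumes.
from functools import reduce, partial

class Node:
    """A dataflow node: a pure function plus its upstream dependencies."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def evaluate(self):
        return self.fn(*(i.evaluate() for i in self.inputs))

def constant(value):
    return Node(lambda: value)

def dimension(n):
    """Expand a scalar into the data dimension [1..n]."""
    return list(range(1, n + 1))

# The fold is a partially evaluated reduce: the binary function is fixed
# now, while the dimension it consumes arrives later through the graph.
factorial = Node(partial(reduce, lambda acc, k: acc * k),
                 Node(dimension, constant(5)))
print(factorial.evaluate())  # 120
```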

    Versatile interaction specification of tools and agents

    Vista is a software infrastructure addressing the vexing problem of software tool interaction, especially how to get egocentric tools to work well together. Vista neither assumes nor requires that tools or tool-mediating agents understand a cooperative messaging protocol, only that they share some common means of interprocess communication. Most IPC mechanisms are too ad hoc and low-level for use by non-programmers (or non-expert programmers). Vista helps by encapsulating such mechanisms in abstract data types obeying high-level protocols. This software framework cleanly integrates a visual language editor, a compiler, libraries, specification analysis tools, and a process control executive into a unified whole.
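    The abstract does not show Vista's interfaces, but the encapsulation idea can be sketched generically: wrap a low-level IPC primitive (here, an OS pipe) in an abstract data type that enforces a simple length-prefixed message protocol, so users exchange whole messages instead of raw bytes. The MessageChannel type and its protocol are assumptions for the example, not Vista's actual design.

```python
# Generic sketch: a raw file-descriptor pair hidden behind a high-level
# send/receive protocol, so callers never handle byte framing themselves.
import os
import struct

class MessageChannel:
    """An abstract data type speaking a length-prefixed message protocol."""
    def __init__(self):
        self._read_fd, self._write_fd = os.pipe()

    def send(self, text):
        payload = text.encode("utf-8")
        # 4-byte big-endian length header, then the payload itself.
        os.write(self._write_fd, struct.pack("!I", len(payload)) + payload)

    def receive(self):
        (length,) = struct.unpack("!I", os.read(self._read_fd, 4))
        return os.read(self._read_fd, length).decode("utf-8")

channel = MessageChannel()
channel.send("tool A -> tool B: analysis complete")
print(channel.receive())
```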

    Runtime Adaptation of Scientific Service Workflows

    Software landscapes are subject to change rather than being complete once they have been built. Changes may be caused by modified customer behavior, the shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise: new architectural models have to be designed and implemented, existing software has to be integrated, and, finally, the new software has to be deployed, monitored, and, where appropriate, optimized at runtime under realistic usage scenarios. All of these situations often demand manual intervention, which makes them error-prone. This thesis addresses these types of runtime adaptation. Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is provided that aims at an easy-to-use approach by separating the role of the workflow expert from the role of the domain expert. After the development of workflows, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the deployment of the necessary middleware tools is automated. The use of a distributed infrastructure can lead to communication problems. In order to keep workflows robust, these exceptional cases need to be treated; handled naively, however, the process logic of a workflow becomes mixed up and bloated with infrastructural details, which increases its complexity. In this work, a module is presented that deals with infrastructural faults automatically and thereby keeps these two layers separate. When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Although techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior and structure, these methods still require modifying the configuration or the implementation of each individual service. Aspect-oriented programming, on the other hand, allows functionality to be woven into existing code even without having its source. Since the functionality has to be woven into the code, however, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide new, SOA-compliant methods for weaving functionality into the communication layer of web services. The main contributions of this thesis are the following:

    * Shifting towards a service-oriented architecture: the generic and extensible Legacy Code Description Language and the corresponding framework allow existing software to be wrapped, e.g., as web services, which can afterwards be composed into a workflow with SimpleBPEL without overburdening the domain expert with technical details, which are instead handled by a workflow expert.
    * Runtime adaptation: based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and is able to provision new machines automatically in case a scale-out becomes necessary. If the resources' load drops, e.g., because of fewer workflow executions, a scale-in is performed automatically as well. The scheduling algorithm takes the data transfer between the services into account in order to prevent allocations that would increase the workflow's makespan through unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm can additionally consider cost, so that a user can state her own preferences, trading off optimized workflow execution times against minimized costs. Possible communication errors are detected automatically and, within certain constraints, corrected.
    * Adaptation of communication: the presented request/response aspects allow functionality to be woven into the communication of web services, as sketched below. By defining a pointcut language that relies only on the exchanged documents, the implementation of the services must neither be known nor be available. The weaving process itself is modeled using web services. In this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
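    To give a rough feel for the request/response-aspect concept, the Python sketch below weaves advice into a message-sending path using a pointcut that inspects only the exchanged message, never a service implementation. The function names and the dict-based message format are assumptions for this example; the thesis operates on the documents exchanged by web services.

```python
# Hedged sketch: a pointcut matches on the exchanged document alone, and
# weaving wraps the transport-level send function with the advice.
def pointcut(message):
    """Match requests for the hypothetical 'transferData' operation."""
    return message.get("operation") == "transferData"

def logging_advice(message):
    """Functionality woven into the communication layer."""
    print("intercepted request:", message["operation"])
    return message

def weave(send, cut, advice):
    """Produce a send function with the aspect applied."""
    def woven(message):
        if cut(message):
            message = advice(message)
        return send(message)
    return woven

plain_send = lambda msg: f"delivered {msg['operation']}"
send = weave(plain_send, pointcut, logging_advice)
print(send({"operation": "transferData", "payload": "..."}))
```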

    SAVCBS 2004 Specification and Verification of Component-Based Systems: Workshop Proceedings

    This is the proceedings of the 2004 SAVCBS workshop. The workshop is concerned with how formal (i.e., mathematical) techniques can or should be used to establish a suitable foundation for the specification and verification of component-based systems. Component-based systems are a growing concern for the software engineering community. Specification and reasoning techniques are urgently needed to permit composition of systems from components. Component-based specification and verification are also vital for scaling advanced verification techniques, such as extended static analysis and model checking, to the size of real systems. The workshop considers formalization of both functional and non-functional behavior, such as performance or reliability.

    Applications Development for the Computational Grid


    Security-Pattern Recognition and Validation

    The increasing and diverse number of technologies connected to the Internet, such as distributed enterprise systems or small electronic devices like smartphones, brings the topic of IT security to the foreground. We interact with these technologies daily and place a great deal of trust in a well-established software development process. However, security vulnerabilities appear in software on all kinds of PC(-like) platforms, and more and more vulnerabilities are published that compromise systems and their users. Software thus also has to be modified due to changing requirements, bugs, and security flaws, and software engineers must increasingly face security issues during software design; in particular, maintenance programmers must deal with such issues after the software has been released. In the domain of software development, design patterns have been proposed as the best-known solutions for recurring problems in software design. Analogously, security patterns are best practices aiming at ensuring security. This thesis develops a deeper understanding of the nature of security patterns. It focuses on their validation and detection in support of review and maintenance activities. The landscape of security patterns is diverse; thus, published security patterns are collected and organized to identify the software-related ones. The descriptions of the selected software-security patterns are assessed, and the patterns are compared against the common design patterns described by Gamma et al. to identify differences and issues that may influence the detection of security patterns. Based on these insights and a manual detection approach, we illustrate an automatic detection method for security patterns. The approach is implemented in a tool and evaluated in a case study with 25 real-world Android applications from Google Play.
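    The abstract does not detail the detection method, but the underlying idea of matching structural signatures against code can be shown in miniature. The thesis targets Android/Java applications; the toy check below instead flags Python classes whose shape resembles a Singleton-style single access point, a structure that some security patterns, such as Check Point, build on. The looks_like_singleton heuristic is entirely hypothetical.

```python
# Toy structural detector: a class "matches" if it has a shared instance
# slot and a global accessor -- the Singleton-like shape, much simplified.
def looks_like_singleton(cls):
    has_accessor = callable(getattr(cls, "get_instance", None))
    has_shared_slot = "_instance" in vars(cls)
    return has_accessor and has_shared_slot

class CheckPoint:
    """Shaped like a single access point for security decisions."""
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class PlainHelper:
    def run(self):
        pass

print(looks_like_singleton(CheckPoint))   # True
print(looks_like_singleton(PlainHelper))  # False
```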

    Provenance-Aware CXXR

    A provenance-aware computer system is one that records information about the operations it performs on data, enabling it to provide an account of the process that led to a particular item of data. These systems allow users to ask questions of data, such as “What was the sequence of steps involved in its creation?”, “What other items of data were used to create it?”, or “What items of data used it during their creation?”. This work presents a study of how, and the extent to which, the CXXR statistical programming software can be made aware of the provenance of the data on which it operates. CXXR is a variant of the R programming language and environment, which is an open-source implementation of S. Interestingly, S is notable for being an early pioneer of provenance-aware computing in 1988. Examples of adapting software such as CXXR for provenance-awareness are few and far between, and the idiosyncrasies of an interpreter such as CXXR (and indeed the R language itself) present interesting challenges to provenance-awareness, such as receiving input from a variety of sources and complex evaluation mechanisms. Herein presented are designs for capturing and querying provenance information in such an environment, along with serialisation facilities to preserve data together with its provenance so that they may be distributed and/or subsequently restored to a CXXR session. Also presented is a method for enabling this serialised provenance information to interoperate with other provenance-aware software. This work also looks at the movement towards making research reproducible, and argues that provenance-aware systems, and provenance-aware CXXR in particular, are well positioned to further the goal of making computational research reproducible.
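    The recording-and-querying idea admits a compact sketch. In the Python below, every derived value remembers the expression that produced it and the values it was derived from, so a lineage query can walk backwards through the derivation, answering the "sequence of steps" question from the abstract. The Tracked, apply_op, and lineage names are illustrative; CXXR records this information inside the R interpreter rather than in user code.

```python
# Minimal provenance tracking: values carry their producing expression
# and their parents, forming a queryable derivation graph.
class Tracked:
    def __init__(self, value, expression="<input>", parents=()):
        self.value, self.expression, self.parents = value, expression, parents

def apply_op(expression, fn, *args):
    """Run fn on tracked values and record which of them fed the result."""
    return Tracked(fn(*(a.value for a in args)), expression, args)

def lineage(node, depth=0):
    """Answer: what sequence of steps led to this item of data?"""
    print("  " * depth + f"{node.expression} = {node.value}")
    for parent in node.parents:
        lineage(parent, depth + 1)

x = Tracked(3, "x <- 3")
y = Tracked(4, "y <- 4")
z = apply_op("z <- x + y", lambda a, b: a + b, x, y)
lineage(z)
# z <- x + y = 7
#   x <- 3 = 3
#   y <- 4 = 4
```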

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.