
    Context Oriented Software Middleware

    Our middleware approach, Context-Oriented Software Middleware (COSM), supports context-dependent software with self-adaptability and dependability in mobile computing environments. The COSM middleware is a generic, platform-independent adaptation engine that performs runtime composition of a software system's context-dependent behaviours based on the execution contexts. Our middleware distinguishes between the context-dependent and context-independent functionality of software systems, which enables it to adapt the application behaviour by composing a set of context-oriented components that implement the context-dependent functionality of the software. Dependability is achieved by considering the functionality of the COSM middleware together with the adaptation impact and costs. The COSM middleware uses a dynamic policy-based engine to evaluate the adaptation outputs and verify their fitness with the application's objectives, goals, and architectural quality attributes. These capabilities are demonstrated through an empirical evaluation of a case study implementation.
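
    The following sketch illustrates the kind of runtime composition the abstract describes: an engine selects context-oriented components matching the current execution context and filters the result through a policy check. It is a minimal Python rendering under assumed names (Context, ContextOrientedComponent, AdaptationEngine), not COSM's actual API.

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Context:
            location: str
            battery: float  # remaining charge, 0.0 to 1.0

        class ContextOrientedComponent:
            """One unit of context-dependent functionality."""
            def __init__(self, name: str, applies_to: Callable[[Context], bool]):
                self.name = name
                self.applies_to = applies_to
            def run(self) -> str:
                return f"executing {self.name}"

        class AdaptationEngine:
            """Composes matching behaviours at runtime, then checks each
            adaptation output against a policy (fitness with the app's goals)."""
            def __init__(self, components: List[ContextOrientedComponent],
                         policy: Callable[[str, Context], bool]):
                self.components = components
                self.policy = policy
            def adapt(self, ctx: Context) -> List[str]:
                selected = [c for c in self.components if c.applies_to(ctx)]
                return [c.run() for c in selected if self.policy(c.name, ctx)]

        engine = AdaptationEngine(
            [ContextOrientedComponent("low-power-sync", lambda c: c.battery < 0.2),
             ContextOrientedComponent("full-sync", lambda c: c.battery >= 0.2)],
            policy=lambda name, ctx: not (name == "full-sync" and ctx.battery < 0.3),
        )
        print(engine.adapt(Context(location="office", battery=0.15)))  # ['executing low-power-sync']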

    A Framework for Agile Development of Component-Based Applications

    Agile development processes and component-based software architectures are two software engineering approaches that enable the rapid building and evolution of applications. Nevertheless, few approaches combine agile and component-based development in a framework that allows an application to be tested throughout the entire development cycle. To address this problem, we have built CALICO, a model-based framework that allows applications to be developed safely in an iterative and incremental manner. The CALICO approach relies on the synchronization of a model view, which specifies the application properties, and a runtime view, which contains the application in its execution context. Tests on the application specification that require values known only at runtime are automatically integrated by CALICO into the running application, and the needed values, captured and reified at execution time, resume the tests and inform the architect of potential problems. Any modification at the model level that does not introduce new errors is automatically propagated to the running system, allowing the safe evolution of the application. In this paper, we illustrate the CALICO development process with a concrete example and provide information on the current implementation of our framework.
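
    A minimal sketch of the synchronisation idea, assuming invented classes (ModelView, RuntimeView) and a far simpler error model than CALICO's: model changes propagate to the runtime only when they introduce no new static errors, while checks needing runtime values are deferred and resumed as values are observed.

        class ModelView:
            """Model view: specifies the application's components and properties."""
            def __init__(self):
                self.components = {}  # name -> {"requires": other component or None}
            def static_errors(self):
                # Errors detectable without running the application (dangling bindings).
                return [n for n, p in self.components.items()
                        if p.get("requires") and p["requires"] not in self.components]

        class RuntimeView:
            """Runtime view: the application in its execution context."""
            def __init__(self):
                self.deployed = {}
                self.deferred_checks = []  # tests needing values known only at runtime
            def observe(self, prop, value):
                # A captured runtime value resumes any deferred test waiting on it.
                for check in self.deferred_checks:
                    if not check(prop, value):
                        print(f"warn architect: check failed for {prop}={value}")

        def synchronise(model, runtime):
            # Only changes that introduce no new static errors reach the running system.
            if model.static_errors():
                return False
            runtime.deployed = dict(model.components)
            return True

        model, runtime = ModelView(), RuntimeView()
        model.components["gui"] = {"requires": "sensor"}   # dangling: "sensor" missing
        print(synchronise(model, runtime))                 # False, change rejected
        model.components["sensor"] = {"requires": None}
        print(synchronise(model, runtime))                 # True, propagated safely
        runtime.deferred_checks.append(lambda p, v: not (p == "latency_ms" and v > 100))
        runtime.observe("latency_ms", 250)                 # resumes the test, warns architect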

    Software engineering perspectives on physiological computing

    Physiological computing is an interesting and promising concept for widening the communication channel between (human) users and computers, thus increasing software systems' contextual awareness and rendering them smarter than they are today. Using physiological inputs in pervasive computing systems allows re-balancing the information asymmetry between the human user and the computer system: while pervasive computing systems are well able to flood the user with information and sensory input (such as sounds, lights, and visual animations), users have only a very narrow input channel to computing systems, most of the time restricted to keyboards, mice, touchscreens, accelerometers, and GPS receivers (e.g., through smartphone usage). Interestingly, this information asymmetry often forces users to submit to the quirks of the computing system to achieve their goals; for example, users may have to provide information the software system demands through a narrow, time-consuming input mode even though the system could sense it implicitly from the human body. Physiological computing is a way to circumvent these limitations; however, systematic means for developing and moulding physiological computing applications into software are still unknown.

    This thesis proposes a methodological approach to the creation of physiological computing applications that makes use of component-based software engineering. Components help impose a clear structure on software systems in general and can thus be used for physiological computing systems as well. As an additional benefit, using components allows physiological computing systems to leverage reconfigurations as a means to control and adapt their own behaviour. This adaptation can be used to adjust the behaviour both to the human and to the available computing environment in terms of resources and available devices, an activity that is crucial for complex physiological computing systems. With the help of components and reconfigurations, it is possible to structure the functionality of physiological computing applications in a way that makes them manageable and extensible, thus allowing a stepwise and systematic extension of a system's intelligence.

    Using reconfigurations entails a larger issue, however. Understanding and fully capturing the behaviour of a system under reconfiguration is challenging, as the system may change its structure in ways that are difficult to predict. Therefore, this thesis also introduces a means for the formal verification of reconfigurations based on assume-guarantee contracts. With the proposed assume-guarantee contract framework, it is possible to prove that a given system design (including component behaviours and reconfiguration specifications) satisfies real-time properties expressed as assume-guarantee contracts, using a variant of real-time linear temporal logic introduced in this thesis: metric interval temporal logic for reconfigurable systems. Finally, this thesis embeds both the practical approach to the realisation of physiological computing systems and the formal verification of reconfigurations into Scrum, a modern and agile software development methodology. The surrounding methodological approach is intended to provide a frame for the systematic development of physiological computing systems, from first psychological findings to a working software system with both satisfactory functionality and software quality aspects.
    By integrating practical and theoretical aspects of software engineering into a self-contained development methodology, this thesis proposes a roadmap and guidelines for the creation of new physiological computing applications.
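
    The assume-guarantee idea can be illustrated with a toy trace-based rendering in Python. The encoding below (timed event traces, plain predicates) is an assumption for exposition and is far simpler than the metric interval temporal logic the thesis introduces.

        from typing import Callable, List, Tuple

        Trace = List[Tuple[float, str]]  # (timestamp in seconds, event label)

        class Contract:
            def __init__(self, assume: Callable[[Trace], bool],
                         guarantee: Callable[[Trace], bool]):
                self.assume = assume
                self.guarantee = guarantee
            def satisfied_by(self, trace: Trace) -> bool:
                # A trace violating the assumption satisfies the contract vacuously.
                return (not self.assume(trace)) or self.guarantee(trace)

        # Assumption: physiological readings arrive at most 2 s apart.
        def fresh_input(trace: Trace) -> bool:
            times = [t for t, e in trace if e == "reading"]
            return all(b - a <= 2.0 for a, b in zip(times, times[1:]))

        # Guarantee: every "stress" event is followed by an "adapt" within 5 s.
        def timely_adaptation(trace: Trace) -> bool:
            return all(any(e2 == "adapt" and 0.0 <= t2 - t1 <= 5.0 for t2, e2 in trace)
                       for t1, e1 in trace if e1 == "stress")

        c = Contract(fresh_input, timely_adaptation)
        print(c.satisfied_by([(0.0, "reading"), (1.5, "reading"),
                              (1.6, "stress"), (4.0, "adapt")]))  # True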

    A component-based framework for certification of components in a cloud of HPC services

    HPC Shelf is a proposal of a cloud computing platform to provide component-oriented services for High Performance Computing (HPC) applications. This paper presents a Verification-as-a-Service (VaaS) framework for component certification on HPC Shelf. Certification is aimed at providing higher confidence that components of the parallel computing systems of HPC Shelf behave as expected according to one or more requirements expressed in their contracts. To this end, new abstractions are introduced, starting with certifier components. They are designed to inspect other components and verify them for different types of functional, non-functional and behavioral requirements. The certification framework is naturally based on parallel computing techniques to speed up verification tasks. (NORTE-01-0145-FEDER-000037)
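
    As an illustration of the certifier-component idea, the hedged sketch below has a certifier run one verification task per contract requirement in parallel; the Component and Certifier interfaces are invented for exposition and are not HPC Shelf's.

        from concurrent.futures import ThreadPoolExecutor

        class Component:
            """A component whose contract lists named requirements with checkers."""
            def __init__(self, name, contract):
                self.name = name
                self.contract = contract  # requirement name -> checker callable

        class Certifier:
            """A certifier component: inspects another component and verifies each
            contract requirement; tasks run in parallel to speed up certification."""
            def certify(self, component):
                with ThreadPoolExecutor() as pool:
                    futures = {req: pool.submit(check, component)
                               for req, check in component.contract.items()}
                    return {req: f.result() for req, f in futures.items()}

        solver = Component("parallel-solver", {
            "terminates-on-empty-input": lambda c: True,  # stand-in functional check
            "respects-memory-bound":     lambda c: True,  # stand-in non-functional check
        })
        print(Certifier().certify(solver))  # each requirement mapped to its verdict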

    Comprehensive Monitor-Oriented Compensation Programming

    Compensation programming is typically used in the programming of web service compositions whose correct implementation is crucial due to their handling of security-critical activities such as financial transactions. While traditional exception handling depends on the state of the system at the moment of failure, compensation programming is significantly more challenging and dynamic because it depends on the runtime execution flow, with the history of behaviour of the system at the moment of failure affecting how compensation is applied. To address this dynamic element, we propose the use of runtime monitors to facilitate compensation programming, with monitors enabling the modeller to implicitly reason in terms of the runtime control flow, thus separating the concerns of system building and compensation modelling. Our approach is instantiated into an architecture and shown to be applicable to a case study.
    Comment: In Proceedings FESCA 2014, arXiv:1404.043
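
    The core idea, that compensation depends on the observed execution history rather than only on the failure state, can be sketched as a monitor that records each completed action with its compensation and replays the history in reverse on failure. Names and structure below are illustrative assumptions, not the paper's architecture.

        class CompensationMonitor:
            """Records the runtime execution flow so that compensation can be
            derived from what actually ran, not just from the current state."""
            def __init__(self):
                self._history = []  # (action name, compensation callable)

            def execute(self, name, action, compensation):
                result = action()
                self._history.append((name, compensation))
                return result

            def compensate(self):
                # Undo completed actions in reverse order of the observed history.
                while self._history:
                    name, undo = self._history.pop()
                    undo()

        monitor = CompensationMonitor()
        try:
            monitor.execute("reserve", lambda: print("reserve funds"),
                            lambda: print("release funds"))
            monitor.execute("charge", lambda: print("charge card"),
                            lambda: print("refund card"))
            raise RuntimeError("downstream service failed")
        except RuntimeError:
            monitor.compensate()  # refund card, then release funds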

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components; visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state; such policy changes affect the location of objects, their bindings, and the choice of messaging system.
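
    The binding model the thesis argues for, with bindings held in a configuration model outside component implementations, might be sketched as follows; ConfigurationModel and its methods are hypothetical names for illustration only.

        class ConfigurationModel:
            """Holds bindings outside component code, so a management policy
            can rebind providers without changing the components that use them."""
            def __init__(self):
                self.providers = {}  # provider name -> object
                self.bindings = {}   # required interface -> provider name

            def plug(self, name, provider):
                # Install a component dynamically; no static dependency declared.
                self.providers[name] = provider

            def bind(self, interface, provider_name):
                self.bindings[interface] = provider_name

            def resolve(self, interface):
                return self.providers[self.bindings[interface]]

        model = ConfigurationModel()
        model.plug("local-logger", lambda msg: print("local:", msg))
        model.bind("logger", "local-logger")
        model.resolve("logger")("hello")  # local: hello
        # An adaptive policy could later plug a remote provider and rebind
        # "logger" to it; components resolving "logger" are unaffected.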

    Monitoring Networks through Multiparty Session Types

    In large-scale distributed infrastructures, applications are realised through communications among distributed components. The need for methods for assuring safe interactions in such environments is recognised; however, existing frameworks, relying on centralised verification or restricted specification methods, have limited applicability. This paper proposes a new theory of monitored π-calculus with dynamic usage of multiparty session types (MPST), offering a rigorous foundation for safety assurance of distributed components which asynchronously communicate through multiparty sessions. Our theory establishes a framework for semantically precise decentralised run-time enforcement and provides reasoning principles over monitored distributed applications, which complement existing static analysis techniques. We introduce asynchrony by means of explicit routers and global queues, and propose novel equivalences between networks that capture the notion of interface equivalence, i.e. equating networks offering the same services to a user. We illustrate our static-dynamic analysis system with an ATM protocol as a running example and justify our theory with results: satisfaction equivalence, local/global safety and transparency, and session fidelity.
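
    A toy local monitor in the spirit of MPST-based run-time enforcement: each endpoint checks its messages against a local projection of the global type. The list-of-actions encoding and the ATM labels below are simplifying assumptions, not the paper's formal theory.

        class SessionMonitor:
            """Checks an endpoint's messages against its local session type,
            given as an ordered list of (direction, partner, label) actions,
            where "!" means send and "?" means receive."""
            def __init__(self, local_type):
                self.remaining = list(local_type)

            def check(self, direction, partner, label):
                if self.remaining and self.remaining[0] == (direction, partner, label):
                    self.remaining.pop(0)
                    return True   # conforms: the monitor lets the message through
                return False      # violation: the monitor blocks the message

        # The ATM client's local type: send Login to Bank, then receive Balance.
        atm = SessionMonitor([("!", "Bank", "Login"), ("?", "Bank", "Balance")])
        print(atm.check("!", "Bank", "Login"))     # True
        print(atm.check("!", "Bank", "Withdraw"))  # False: not permitted here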