11 research outputs found

    Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

    Full text link
    We present a graphical and dynamic framework for binding and execution of (business) process models. It is tailored to integrate 1) ad hoc processes modeled graphically, 2) third-party services discovered on the (Inter)net, and 3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time, but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is tailored also to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaptation, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
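The core idea of "passing processes as if they were data" can be illustrated with a small sketch. This is a toy model, not the paper's actual framework: processes here are plain Python callables over a context, and a second-order step receives another process through that context and executes it.

```python
# Toy sketch of higher-order processes: process fragments are
# first-class values that a running process can select, modify,
# and pass along like data. Names are illustrative only.
from typing import Callable, Dict

# A "process" maps a context dict to an updated context dict.
Process = Callable[[Dict], Dict]

def seq(*steps: Process) -> Process:
    """Compose process fragments sequentially."""
    def run(ctx: Dict) -> Dict:
        for step in steps:
            ctx = step(ctx)
        return ctx
    return run

def collect_paper(ctx: Dict) -> Dict:
    ctx["papers"] = ctx.get("papers", 0) + 1
    return ctx

def notify(ctx: Dict) -> Dict:
    ctx["notified"] = True
    return ctx

# Second-order step: finds a process in the context and executes it.
def run_plugged(ctx: Dict) -> Dict:
    plugged: Process = ctx["review_step"]   # a process passed as data
    return plugged(ctx)

workflow = seq(collect_paper, run_plugged, notify)
result = workflow({"review_step": collect_paper})
# collect_paper runs twice (once directly, once as the plugged step),
# so result["papers"] == 2 and result["notified"] is True.
```

Because the plugged-in step is chosen at execution time, swapping `ctx["review_step"]` for another process changes the workflow without changing the workflow definition itself, which is the runtime-variability point made above.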

    DSL-based Interoperability and Integration in the Smart Manufacturing Digital Thread

    Get PDF
    In the Industry 4.0 ecosystem, a Digital Thread connects the data and processes for smarter manufacturing. It provides end-to-end integration of the various digital entities, thus fostering interoperability, with the aim of designing and delivering complex, heterogeneous, interconnected systems. We develop a service-oriented, domain-specific Digital Thread platform in a Smart Manufacturing research and prototyping context. We address the principles, architecture and individual aspects of a growing Digital Thread platform. It conforms to the best practices of coordination languages, integration and interoperability of external services from various platforms, and provides orchestration in a formal-methods-based, low-code, graphical, model-driven fashion. We chose the Cinco products DIME and Pyrus as the underlying IT platforms for our Digital Thread solution to serve the needs of the applications addressed: manufacturing analytics and predictive maintenance are in fact core capabilities for the success of smart manufacturing operations. In this regard, we extend the capabilities of these two platforms in the vertical domains of data persistence, IoT connectivity and analytics, to support the basic operations of smart manufacturing. External native DSLs provide the data and capability integrations through families of SIBs. The small examples constitute blueprints for the methodology, addressing the knowledge, terminology and concerns of domain stakeholders. Over time, we expect reuse to increase, reducing the new integration and development effort to a progressively smaller portion of the models and code needed for at least the most standard applications.

    Aligned and collaborative language-driven engineering

    Get PDF
    Today's software development is increasingly performed with the help of low-code and no-code platforms that follow model-driven principles and use domain-specific languages (DSLs). DSLs support the different aspects of the development and the user's mindset through a tailored and intuitive language. By combining specific languages with real-time collaboration, development environments can be provided whose users no longer need to be programmers. This way, domain experts can develop their solutions independently, without the need for a programmer's translation and the associated semantic gap. However, the development and distribution of collaborative mindset-supporting IDEs (mIDEs) is enormously costly. Besides the basic challenge of language development, a specialized IDE has to be provided that works equally well on all common platforms and on individual heterogeneous system setups. This dissertation describes the conception and realization of the web-based, unified environment CINCO Cloud, in which DSLs can be collaboratively developed, used, transformed and executed. By providing full support at all of these steps, the philosophy of language-driven engineering is enabled and realized for the first time. As a foundation for the unified environment, the infrastructure of cloud development IDEs is analyzed and extended so that new languages can be distributed on the fly. Subsequently, concepts for language specialization, refinement and concretization are developed and described to realize the language-driven engineering approach in dynamic, cluster-based environments. In addition, synchronization mechanisms and authorization structures are designed to enable collaboration between the users of the environment. Finally, the central aligned processes within the CINCO Cloud for developing, using, transforming and executing a DSL are illustrated to clarify how the dynamic system behaves.

    Automating the referral pathways for Multiple Myeloma through a Web Application and XMDD

    Get PDF
    Multiple Myeloma (MM), a type of bone marrow cancer, is diagnosed by measuring monoclonal proteins, paraproteins (PP), and serum-free light chains (SFLC) in the blood. These proteins can also be detected in healthy individuals at lower levels; this condition is called Monoclonal Gammopathy of Uncertain Significance (MGUS). MGUS is associated with a risk of progression to MM at a rate of 1-2% per year. Early diagnosis of MM correlates with improved overall survival for patients, so early referral of suspect cases is important. Two risk factors determine the risk of progression: a high PP level (>15 g/l) and an abnormal SFLC ratio. This risk stratification process enables General Practitioners (essentially, the family doctors) to manage patients with low-risk MGUS, and provides clear referral pathways for intermediate- and high-risk MGUS patients. A reference algorithm and a scoring system exist for the referral of patients with possible Multiple Myeloma; in current practice, both are processed manually by trained healthcare staff. In collaboration with the Haematology experts at the University Hospital Limerick and the SCCE group in Computer Science, we designed and implemented a software application that improves and streamlines the current process. This (online) application is developed with modern XMDD technology, using the DIME low-code application development tool. The application faithfully maps the reference algorithm in an automated way and applies it to a consultation data set. The novelty lies in the adopted technologies, which improve the early validation and correctness of the software, and ease both human understanding and the modification turnaround of the application.
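The two-factor stratification described above can be sketched as follows. Only the two risk factors (PP > 15 g/l, abnormal SFLC ratio) come from the abstract; the SFLC normal range and the mapping from risk band to referral pathway are hypothetical stand-ins for illustration, not the hospital's actual reference algorithm.

```python
# Illustrative sketch only. The SFLC normal range and the
# pathway mapping below are assumptions, not clinical guidance.
SFLC_NORMAL_RANGE = (0.26, 1.65)  # assumed kappa/lambda ratio range

def mgus_risk(pp_g_per_l: float, sflc_ratio: float) -> str:
    """Count the risk factors present and map them to a risk band."""
    factors = 0
    if pp_g_per_l > 15:               # risk factor 1: high PP level
        factors += 1
    lo, hi = SFLC_NORMAL_RANGE
    if not (lo <= sflc_ratio <= hi):  # risk factor 2: abnormal SFLC ratio
        factors += 1
    return {0: "low", 1: "intermediate", 2: "high"}[factors]

def referral_pathway(risk: str) -> str:
    """Hypothetical pathway mapping, for illustration only."""
    return {
        "low": "manage in primary care (GP monitoring)",
        "intermediate": "routine haematology referral",
        "high": "urgent haematology referral",
    }[risk]

risk = mgus_risk(pp_g_per_l=18.0, sflc_ratio=3.2)
print(risk, "->", referral_pathway(risk))  # high -> urgent haematology referral
```

Encoding the decision logic in one place like this is what makes the automated version easier to validate and modify than a manually applied paper algorithm.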

    Evolution of ecosystems for Language-Driven Engineering

    Get PDF
    Language-Driven Engineering (LDE) is an approach to model-driven software development that creates Integrated Modeling Environments (IMEs) with Domain/Purpose-Specific Languages (PSLs), each tailored towards a specific aspect of the respective system to be modeled, thereby taking the specific needs of developers and other stakeholders into account. Combined with the powerful potential of full code generation, these IMEs can generate complete executable software applications from descriptive models. As these products may themselves again be IMEs, this approach leads to LDE ecosystems of modeling environments with meta-level dependencies. This thesis describes new challenges emerging from changes that affect single components, multiple parts, or even the whole LDE ecosystem. From a top-down perspective, this thesis discusses the support required from language definition technology to ensure that the corresponding IMEs can be validated, generated and tested on demand. From a bottom-up perspective, the formulation of change requests, and their upwards propagation and generalization, is presented. Finally, the resulting cross-project knowledge sharing and transfer is motivated, fostering interdisciplinary teamwork and cooperation. Based on multifaceted contributions to full-blown projects on different meta-levels of an exemplary LDE ecosystem, this thesis presents specific challenges in creating and continuously evolving LDE ecosystems. It deduces a concept of PUTD effects to systematically address the various dynamics and the appropriate actions needed to manage both product-level requests that propagate upwards in the meta-level hierarchy and the downward propagation of changes, ensuring product quality and adequate migration of modeled artifacts along the dependency paths.
Finally, the effect of language-driven modeling on the increasingly blurred line between building and using software applications is illustrated, emphasizing that the distinction between programming and modeling becomes a mere matter of perspective.

    Synthesis of Scientific Workflows: Theory and Practice of an Instance-Aware Approach

    Get PDF
    The last two decades brought an explosion of computational tools and processes in many scientific domains (e.g., life-, social- and geo-science). Scientific workflows, i.e., computational pipelines, accompanied by workflow management systems, were soon adopted as a de-facto standard among non-computer scientists for orchestrating such computational processes. The goal of this dissertation is to provide a framework that would automate the orchestration of such computational pipelines in practice. We refer to such problems as scientific workflow synthesis problems. This dissertation introduces the temporal logic SLTLx, and presents a novel SLTLx-based synthesis approach that overcomes limitations in handling data object dependencies present in existing synthesis approaches. The new approach uses transducers and temporal goals, which keep track of the data objects in the synthesised workflow. The proposed SLTLx-based synthesis includes a bounded and a dynamic variant, which are shown in Chapter 3 to be NP-complete and PSPACE-complete, respectively. Chapter 4 introduces a transformation algorithm that translates the bounded SLTLx-based synthesis problem into propositional logic. The transformation is implemented as part of the APE (Automated Pipeline Explorer) framework, presented in Chapter 5. It relies on highly efficient SAT solving techniques, using an off-the-shelf SAT solver to synthesise a solution for the given propositional encoding. The framework provides an API (application programming interface), a CLI (command line interface), and a web-based GUI (graphical user interface). The development of APE was accompanied by four concrete application scenarios as case studies for automated workflow composition. The studies were conducted in collaboration with domain experts and presented in Chapter 6. Each of the case studies is used to assess and illustrate specific features of the SLTLx-based synthesis approach. 
(1) A case study on cartographic map generation demonstrates the ability to distinguish data objects as a key feature of the framework. It illustrates the process of annotating a new domain, and presents the iterative workflow synthesis approach, where the user narrows down the desired specification of the problem in a few intuitive steps. (2) A case study on geo-analytical question answering as part of the QuAnGIS project shows the benefits of using data-flow dependencies to describe a synthesis problem. (3) A proteomics case study demonstrates the usability of APE as an “off-the-shelf” synthesiser, providing direct integration with existing semantic domain annotations. In addition, a manual evaluation of the synthesised results shows promising performance even on large real-life domains, such as the EDAM ontology and the complete bio.tools registry. (4) A geo-event question-answering study demonstrates the usability of APE within a larger question-answering system. This dissertation achieves the goals it set out to address. It provides a formal framework, accompanied by a lightweight library, which can solve real-life scientific workflow synthesis problems. Finally, the development of the library motivated an upcoming collaborative project in the life sciences domain. The aim of the project is to develop a platform which would automatically compose (using APE) and benchmark workflows in computational proteomics.
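The essence of *bounded* workflow synthesis can be sketched with a brute-force search over tool sequences of bounded length. APE instead encodes this bounded search space into propositional logic and delegates it to a SAT solver, but the space explored is the same; the tool names and data types below are invented for illustration.

```python
# Toy bounded workflow synthesis: find a sequence of tools
# (length <= bound) that turns the available data types into the
# requested goal types. Brute force stands in for the SAT encoding.
from itertools import product

# Each tool: (input types required, output types produced).
TOOLS = {
    "parse":  ({"raw"}, {"table"}),
    "filter": ({"table"}, {"table"}),
    "plot":   ({"table"}, {"figure"}),
}

def synthesize(available, goal, bound):
    """Return the first tool sequence (up to `bound` steps) producing `goal`."""
    for length in range(1, bound + 1):
        for seq in product(TOOLS, repeat=length):
            have = set(available)
            feasible = True
            for tool in seq:
                ins, outs = TOOLS[tool]
                if not ins <= have:   # tool's inputs not yet available
                    feasible = False
                    break
                have |= outs          # tool's outputs become available
            if feasible and goal <= have:
                return list(seq)
    return None                       # no workflow within the bound

print(synthesize({"raw"}, {"figure"}, bound=3))  # ['parse', 'plot']
```

Iterating the bound upward, as the search does here, mirrors the bounded variant of the synthesis problem; the NP-completeness result above reflects the combinatorial size of exactly this space.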

    Architectures and technologies for quality of service provisioning in next generation networks

    Get PDF
    An NGN is a telecommunication network that differs from classical dedicated networks in its capability to provide voice, video, data and cellular services on the same infrastructure (Quadruple-Play). The ITU-T standardization body has defined the NGN architecture in three distinct and well-defined strata: the transport stratum, which takes care of maintaining end-to-end connectivity; the service stratum, which is responsible for enabling the creation and delivery of services; and finally the application stratum, where applications can be created and executed. The most important separation in this architecture is that between the transport and service strata. The aim is to enable the flexibility to add, maintain and remove services without any impact on the transport layer; to enable the flexibility to add, maintain and remove transport technologies without any impact on access to services, applications, content and information; and finally the efficient coexistence of multiple terminals, access technologies and core transport technologies. The Service Oriented Architecture (SOA) is a paradigm often used in systems deployment and integration for organizing and utilizing distributed capabilities under the control of different ownership domains. In this thesis, SOA technologies in network architectures are surveyed following the NGN functional architecture as defined by the ITU-T. Within each stratum, the main logical functions that have been the subject of investigation according to a service-oriented approach are highlighted. Moreover, a new definition of the NGN transport stratum functionalities according to the SOA paradigm is proposed; an implementation of the relevant service interfaces is used to analyze this approach, and experimental results give some insight into the potential of the proposed strategy.
Within the NGN architecture research topic, especially in IP-based network architectures, Traffic Engineering (TE) refers to a set of policies and algorithms aimed at balancing network traffic load so as to improve network resource utilization and guarantee service-specific end-to-end QoS. DS-TE technology extends TE functionalities to a per-class implementation by introducing a higher level of traffic classification which associates with each class type (CT) a constraint on bandwidth utilization. These constraints are set by defining and configuring a bandwidth constraint (BC) model which drives resource utilization towards higher load balancing, higher QoS performance and a lower call-blocking rate. Default TE implementations rely on a centralized approach to bandwidth and routing management, which requires external management entities that periodically collect network status information and issue management actions. However, due to increasing network complexity, it is desirable that nodes automatically discover their environment, self-configure, and update themselves to adapt to changes. In this thesis, the bandwidth management problem is approached in an autonomic and distributed manner. Each node has a self-management module, which monitors the unreserved bandwidth in adjacent nodes and adjusts the local bandwidth constraints so as to reduce the differences in the unreserved bandwidth of neighbor nodes. With this distributed and autonomic algorithm, BCs are dynamically modified to drive routing decisions toward balanced traffic while respecting the QoS constraints of each class-type's traffic requests. Finally, Video on Demand (VoD) is a service that provides a video whenever the customer requests it.
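A single adjustment round of the distributed idea described above can be sketched as follows. This is a minimal illustration under assumptions of our own (the update rule, the `step` gain), not the thesis's actual algorithm: each node compares its unreserved bandwidth with its neighbours' average and nudges its local bandwidth constraint to shrink the difference.

```python
# Illustrative sketch of one autonomic BC adjustment round on one node.
# The proportional update rule and `step` gain are assumptions.

def adjust_bc(local_bc: float, local_unreserved: float,
              neighbor_unreserved: list, step: float = 0.1) -> float:
    """If neighbours have more unreserved bandwidth on average, loosen
    the local constraint (attract more traffic locally); otherwise
    tighten it. `step` controls how aggressively the node reacts."""
    avg_neighbor = sum(neighbor_unreserved) / len(neighbor_unreserved)
    delta = avg_neighbor - local_unreserved
    return max(0.0, local_bc + step * delta)  # BC can never go negative

# A node more loaded than its neighbours loosens its constraint:
new_bc = adjust_bc(local_bc=50.0, local_unreserved=10.0,
                   neighbor_unreserved=[30.0, 40.0])
print(new_bc)  # 52.5
```

Run at every node against only local neighbour measurements, a rule of this shape needs no central management entity, which is the autonomic point made above.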
Realizing a VoD system over the Internet requires architectures tailored to video features such as guaranteed bandwidth and constrained transmission delays: these are hard to provide in the traditional Internet architecture, which is not designed to deliver adequate quality of service (QoS) and quality of experience (QoE) to the final user. Typical VoD solutions can be grouped into four categories: centralized, proxy-based, Content Delivery Network (CDN) and hybrid architectures. Hybrid architectures combine the employment of a centralized server with that of a peer-to-peer (P2P) network. This approach can effectively reduce the server load and avoid network congestion close to the server site, because the peers support the delivery of the video to other peers using a cache-and-relay strategy that makes use of their upload bandwidth. However, in a peer-to-peer network each peer is free to join and leave the network without notice, giving rise to the phenomenon of peer churn. These dynamics are dangerous for VoD architectures, affecting the integrity and retainability of the service. In this thesis, a study that evaluates the impact of peer churn on system performance is proposed. Starting from important relationships between system parameters, such as playback buffer length, peer request rate, peer average lifetime and server upload rate, four different analytic models are proposed.
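One way to see how two of those parameters interact is a back-of-the-envelope calculation (this is our own toy illustration, not one of the thesis's four analytic models): if peer lifetimes are exponentially distributed with a given mean, the probability that the serving peer departs before a playback buffer of a given length drains follows directly from the exponential distribution.

```python
# Toy churn illustration: probability that the serving peer leaves
# within the buffered playback window, assuming exponentially
# distributed peer lifetimes (an assumption for illustration).
import math

def departure_probability(buffer_s: float, mean_lifetime_s: float) -> float:
    """P(peer lifetime < buffer length) for an exponential lifetime."""
    return 1.0 - math.exp(-buffer_s / mean_lifetime_s)

# A longer buffer gives the system more time to find a replacement
# peer before playback stalls:
for buf in (10, 30, 60):
    p = departure_probability(buf, mean_lifetime_s=300)
    print(f"buffer {buf:>2}s: departure within window = {p:.3f}")
```

Even this crude model shows why buffer length and average peer lifetime appear together in the analytic models: the stall risk per window is governed by their ratio.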

    Engineering Sustainability for the Future

    Get PDF
    The 38th International Manufacturing Conference, IMC38, showcases current research in the field of manufacturing engineering undertaken in Ireland by postgraduate students and experienced researchers. Indicative topics, in line with the contents of these proceedings, include: sustainable and energy-efficient manufacturing; additive manufacturing; Industry 4.0 and digital manufacturing; machine tool, automation and manufacturing system design; surface engineering; and forming and joining process research. The IMC community is also involved in research aimed at improving the learning experience of undergraduate and graduate engineers and at developing high-level skills for the manufacturing engineer of the future. The theme for this year's conference is Sustainable Manufacturing, with a particular emphasis on a) digitalisation of manufacturing and its impact on sustainability, and b) addressing sustainability in engineering education, industrial training and CPD. Science Foundation Ireland

    Generator-Composition for Aspect-Oriented Domain-Specific Languages

    Get PDF
    Software systems are complex, as they must cover a diverse set of requirements describing functionality and the environment. Software engineering addresses this complexity with Model-Driven Engineering (MDE). MDE utilizes different models and metamodels to specify views and aspects of a software system. Subsequently, these models must be transformed into code and other artifacts, which is performed by generators. Information systems and embedded systems are often used over decades. Over time, they must be modified and extended to fulfill new and changed requirements. These alterations can be triggered by the modeling domain and by technology changes in both the platform and the programming languages. In MDE, these alterations result in changes to the syntax and semantics of metamodels, and subsequently to generator implementations. In MDE, generators can become complex software applications. Their complexity depends on the semantics of the source and target metamodels, and on the number of involved metamodels. Changes to metamodels and their semantics require generator modifications and can cause architecture and code degradation. This can result in errors in the generator, which have a negative effect on development costs and time. Furthermore, these errors can reduce quality and increase costs in projects utilizing the generator. Therefore, we propose the generator construction and evolution approach GECO, which supports the decoupling of generator components and their modularization. GECO comprises three contributions: (a) a method for metamodel partitioning into views, aspects, and base models, together with partitioning along semantic boundaries; (b) a generator composition approach utilizing megamodel patterns for generator fragments, which are generators depending on only one source and one target metamodel; (c) an approach to modularize fragments along metamodel semantics and fragment functionality.
All three contributions together support the modularization and evolvability of generators.
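The fragment idea in contribution (b) can be sketched schematically. All names here are hypothetical, not GECO's API: each fragment depends on exactly one source and one target metamodel, and a composed generator chains fragments whose metamodels line up, failing fast when they do not.

```python
# Schematic sketch of generator fragments and their composition.
# Fragment and metamodel names are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Fragment:
    source: str          # name of the single source metamodel
    target: str          # name of the single target metamodel
    transform: Callable  # model -> model

def compose(*fragments: Fragment) -> Fragment:
    """Chain fragments, checking that metamodels match at each joint."""
    for a, b in zip(fragments, fragments[1:]):
        if a.target != b.source:
            raise TypeError(f"cannot chain {a.target} -> {b.source}")
    def run(model):
        for f in fragments:
            model = f.transform(model)
        return model
    return Fragment(fragments[0].source, fragments[-1].target, run)

# Two toy fragments: a state-machine model -> intermediate IR -> Java text.
to_ir = Fragment("statemachine", "ir", lambda m: {"states": m["states"]})
to_java = Fragment("ir", "java", lambda m: f"// {len(m['states'])} states")

gen = compose(to_ir, to_java)
print(gen.transform({"states": ["on", "off"]}))  # // 2 states
```

Because each fragment touches only one source and one target metamodel, a metamodel change localizes to the fragments on its boundary, which is the evolvability argument made above.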