
    Execution/Simulation of Context/Constraint-aware Composite Services using GIPSY

    For fulfilling a complex requirement comprising several sub-tasks, a composition of simple web services, each dedicated to performing a specific sub-task, proves to be a more capable solution than an equivalent atomic web service. Owing to advantages such as re-usability of components, broader options for composition requesters and the liberty to specialize for component providers, composite services have been extensively researched for over two decades and perfected in many aspects. Yet most of the studies undertaken in this field fail to acknowledge that every web service has a limited context in which it can successfully perform its tasks, the boundaries of which are defined by the internal constraints placed on the service by its providers. When used as part of a composition, the restricted context-spaces of all such component services together define the contextual boundaries of the composite service as a unit, which makes internal constraints an influential factor in composite service functionality. However, owing to the limited attention they have received, no systems have yet been proposed to cater to the specific verification of the internal constraints imposed on the components of a composite service. To address this gap in service composition research, in this thesis we propose a multi-faceted solution capable not only of automatically constructing context-aware composite web services with their internal constraints positioned for optimum resource utilization, but also of validating the generated compositions using the General Intensional Programming SYstem (GIPSY) as a time- and cost-efficient simulation/execution environment.
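    For illustration, the following is a minimal sketch (with hypothetical service names, attributes and constraint ranges, not taken from the thesis) of the core idea that the internal constraints of the component services jointly bound the context of the composite service:

```python
# Minimal sketch (hypothetical names): each component service declares an
# internal constraint on an input attribute; a composition is only usable for
# requests that satisfy the intersection of all component constraints.

from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    # internal constraints: attribute -> (min, max) range accepted by the provider
    constraints: dict = field(default_factory=dict)

    def accepts(self, request: dict) -> bool:
        return all(lo <= request.get(attr, lo) <= hi
                   for attr, (lo, hi) in self.constraints.items())

def composite_accepts(components: list[Service], request: dict) -> bool:
    # The composite's context is bounded by every component's internal constraints.
    return all(svc.accepts(request) for svc in components)

# Example: a shipping composition whose context is narrowed by its components.
book = Service("BookOrder", {"weight_kg": (0, 50)})
ship = Service("ShipParcel", {"weight_kg": (0, 30), "distance_km": (0, 2000)})

print(composite_accepts([book, ship], {"weight_kg": 20, "distance_km": 500}))  # True
print(composite_accepts([book, ship], {"weight_kg": 40, "distance_km": 500}))  # False: violates ShipParcel
```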

    Scalable Automatic Service Composition using Genetic Algorithms

    A composition of simple web services, each dedicated to performing a specific sub-task, proves to be a more competitive solution than an equivalent atomic web service for a complex requirement comprised of several sub-tasks. Composite services have been extensively researched and perfected in many aspects for over two decades, owing to benefits such as component re-usability, broader options for composition requesters, and the liberty to specialize for component providers. However, most studies in this field fail to acknowledge that each web service has a limited context in which it can successfully perform its tasks, with boundaries defined by the internal constraints imposed on the service by its providers. When used in a composition, the restricted context-spaces of all such component services define the contextual boundaries of the composite service as a whole, making internal constraints an essential factor in composite service functionality. Due to their limited exposure, no systems have yet been proposed to cater to the specific verification, over a large-scale solution repository, of the internal constraints imposed on the components of a composite service. In this thesis, we propose a scalable automatic service composition approach capable of not only automatically constructing context-aware composite web services with internal constraints positioned for optimal resource utilization, but also validating the generated compositions on a large-scale solution repository using the General Intensional Programming System (GIPSY) as a time- and cost-efficient simulation/execution environment.
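    The thesis's exact chromosome encoding and fitness function are not reproduced here; the sketch below is only an illustration of how a genetic algorithm can search for a composition, picking one candidate service per sub-task and rewarding internal-constraint satisfaction over cost (all names and weights are assumptions):

```python
# Illustrative GA-based service composition sketch (not the thesis's encoding):
# a chromosome selects one candidate service per sub-task; fitness rewards
# satisfied internal constraints and penalises total cost.

import random

SUBTASKS = 4
CANDIDATES = 10                      # candidate services per sub-task
random.seed(1)

# cost[i][j] / ok[i][j]: cost and constraint-satisfaction of candidate j for sub-task i
cost = [[random.uniform(1, 10) for _ in range(CANDIDATES)] for _ in range(SUBTASKS)]
ok   = [[random.random() > 0.3 for _ in range(CANDIDATES)] for _ in range(SUBTASKS)]

def fitness(chrom):
    satisfied = sum(ok[i][g] for i, g in enumerate(chrom))
    total_cost = sum(cost[i][g] for i, g in enumerate(chrom))
    return satisfied * 100 - total_cost          # constraint satisfaction dominates cost

def evolve(pop_size=30, generations=50, mutation=0.1):
    pop = [[random.randrange(CANDIDATES) for _ in range(SUBTASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SUBTASKS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:       # point mutation
                child[random.randrange(SUBTASKS)] = random.randrange(CANDIDATES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best composition:", best, "fitness:", round(fitness(best), 2))
```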

    Applications of Blockchain in Business Processes: A Comprehensive Review

    Blockchain (BC), as an emerging technology, is revolutionizing Business Process Management (BPM) in multiple ways. Its main adoption is to serve as a trusted infrastructure that guarantees trustworthy collaboration among multiple partners in trustless environments. In particular, BC enables trust in information by using Distributed Ledger Technology (DLT). With the power of smart contracts, BC enforces the obligations of counterparties that transact in a business process (BP) by programming those obligations as contract transactions. This paper studies the state of the art of BC technologies by (1) exploring its applications in BPM, with a focus on how BC provides trust for BPs across their lifecycles; (2) identifying the relation between BPM as the need and BC as the solution, assessed against BPM characteristics; (3) discussing the up-to-date progress of critical BC technologies in BPM; and (4) identifying the challenges and research directions for future advancement in the domain. The main conclusions of our comprehensive review are: (1) the study of adopting BC in BPM has attracted a great deal of attention, as evidenced by a rapidly growing number of relevant articles; (2) the paradigms of BPM over the Internet of Things (IoT) have shifted from persistent to transient, from static to dynamic, and from centralized to decentralized, and new enabling technologies are in high demand to fulfill emerging functional requirements (FRs) at the design, configuration, diagnosis, and evaluation stages of BP lifecycles; (3) BC has been intensively studied and proven to be a promising solution for assuring trust in both business processes and their executions in decentralized BPM; and (4) most of the reported BC applications are at an early stage, and future research efforts are needed to meet the technical challenges of interoperation, determination of trusted entities, confirmation of time-sensitive execution, and support for irreversibility.
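    As a toy illustration of the tamper-evidence that DLT brings to recorded process steps (deliberately omitting what a real BC platform adds: consensus, signatures and smart-contract execution), consider a hash-chained ledger of business-process events:

```python
# Toy sketch of DLT tamper-evidence: each block of business-process events
# carries the hash of its predecessor, so altering any recorded step
# invalidates every later block. Illustrative only.

import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, events):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "timestamp": time.time(), "events": events})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
append_block(ledger, [{"step": "order_placed", "by": "buyer"}])
append_block(ledger, [{"step": "goods_shipped", "by": "supplier"}])

print(verify(ledger))                          # True
ledger[0]["events"][0]["by"] = "attacker"      # tamper with the first recorded step
print(verify(ledger))                          # False: the chain no longer verifies
```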

    Model Transformation Testing and Debugging: A Survey

    Model transformations are the key technique in Model-Driven Engineering (MDE) to manipulate and construct models. As a consequence, the correctness of software systems built with MDE approaches relies mainly on the correctness of model transformations, and thus detecting and locating bugs in model transformations have been popular research topics in recent years. This surge of work has led to a vast literature on model transformation testing and debugging, which makes it challenging to gain a comprehensive view of the current state of the art. This is an obstacle for newcomers to the topic and for MDE practitioners who wish to apply these approaches. This paper presents a survey on testing and debugging model transformations based on the analysis of the collected papers on these topics. We explore the trends, advances, and evolution over the years, bringing together previously disparate streams of work and providing a comprehensive view of these thriving areas. In addition, we present a conceptual framework to understand and categorise the different proposals. Finally, we identify several open research challenges and propose specific action points for the model transformation community. This work is partially supported by the European Commission (FEDER) and Junta de Andalucia under projects APOLO (US-1264651) and EKIPMENT-PLUS (P18-FR-2895), by the Spanish Government (FEDER/Ministerio de Ciencia e Innovación – Agencia Estatal de Investigación) under projects HORATIO (RTI2018-101204-B-C21), COSCA (PGC2018-094905-B-I00) and LOCOSS (PID2020-114615RB-I00), by the Austrian Science Fund (P 28519-N31, P 30525-N31), and by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology and Development (CDG).
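    As a toy illustration of what testing a model transformation involves (not an example drawn from the survey; names and rules are hypothetical), the sketch below pairs a simple class-to-table transformation with a contract-style oracle:

```python
# Toy model transformation test: a "class model to relational schema"
# transformation plus a contract-style oracle stating that every persistent
# class must yield exactly one table whose columns match its attributes.

def classes_to_tables(class_model):
    """Transform: each persistent class becomes a table; attributes become columns."""
    return [
        {"name": cls["name"].lower(), "columns": [a["name"] for a in cls["attributes"]]}
        for cls in class_model if cls.get("persistent", True)
    ]

def check_contract(class_model, schema):
    """Oracle: one table per persistent class, with one column per attribute."""
    persistent = [c for c in class_model if c.get("persistent", True)]
    if len(persistent) != len(schema):
        return False
    by_name = {t["name"]: t for t in schema}
    return all(
        c["name"].lower() in by_name
        and set(by_name[c["name"].lower()]["columns"]) == {a["name"] for a in c["attributes"]}
        for c in persistent
    )

model = [
    {"name": "Customer", "persistent": True,
     "attributes": [{"name": "id"}, {"name": "email"}]},
    {"name": "Session", "persistent": False, "attributes": [{"name": "token"}]},
]

schema = classes_to_tables(model)
print(schema)
print(check_contract(model, schema))   # True: the transformation satisfies the contract
```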

    The 3rd Reactive Synthesis Competition (SYNTCOMP 2016): Benchmarks, Participants & Results

    We report on the benchmarks, participants and results of the third reactive synthesis competition (SYNTCOMP 2016). The benchmark library of SYNTCOMP 2016 has been extended with benchmarks in the new LTL-based temporal logic synthesis format (TLSF) and with two new sets of benchmarks for the existing AIGER-based format for safety specifications. The participants of SYNTCOMP 2016 can be separated according to these two classes of specifications, and we give an overview of the six tools that entered the competition in the AIGER-based track and the three participants that entered the TLSF-based track. We briefly describe the benchmark selection, evaluation scheme and experimental setup of SYNTCOMP 2016. Finally, we present and analyze the results of our experimental evaluation, including a comparison to participants of previous competitions and a legacy tool. Comment: In Proceedings SYNT 2016, arXiv:1611.0717

    Control of colocated geostationary satellites

    Control of the inter-satellite distances within a cluster of colocated satellites occupying the same GEO window is examined with regard to close approaches between pairs of satellites. First, the orbital evolution and station-keeping control of a single GEO satellite is examined, and a new IBM PC-based software program capable of performing both these functions autonomously from initial values of the orbital position and date is detailed and validated. Cluster design ideas are then examined in detail, and the propagation software is used to generate data for a cluster of four satellites. Two test cases are examined to quantify the frequency of close approaches between individual satellite pairs, each test case using a different orbital element separation strategy but the same station-keeping control scheme. The results of the study are then compared with previous research, and the advantages of each method are discussed. Finally, a cluster geometry correction manoeuvre, based on Hill's equations of relative motion, is presented which requires only those thrusters used for typical station keeping. This manoeuvre is integrated into the computer software, the two test cases noted previously are again propagated, and the close approach results are analysed to demonstrate the reduction in the number of close approaches below 5 km.
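    For reference, the unforced Clohessy-Wiltshire (Hill) equations of relative motion in the local orbital frame are shown below, with n the mean motion, x radial, y along-track and z cross-track; this is the common textbook form, and the thesis may use a different axis or sign convention:

```latex
% Unforced Clohessy-Wiltshire (Hill) equations of relative motion
\[
\begin{aligned}
  \ddot{x} - 2n\dot{y} - 3n^{2}x &= 0,\\
  \ddot{y} + 2n\dot{x}           &= 0,\\
  \ddot{z} + n^{2}z              &= 0.
\end{aligned}
\]
```

    The geometry correction manoeuvre described in the abstract builds on the forced form of these equations, which is what allows it to be realised with the thrusters already used for station keeping.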

    A Survey of Challenges for Runtime Verification from Advanced Application Domains (Beyond Software)

    Runtime verification is an area of formal methods that studies the dynamic analysis of execution traces against formal specifications. Typically, the two main activities in runtime verification efforts are the process of creating monitors from specifications and the algorithms for evaluating traces against the generated monitors. Other activities involve instrumenting the system to generate the trace and the communication between the system under analysis and the monitor. Most applications of runtime verification have focused on the dynamic analysis of software, even though there are many more potential applications to other computational devices and target systems. In this paper, we present a collection of challenges for runtime verification extracted from concrete application domains, focusing on the difficulties that must be overcome to tackle these specific challenges. The computational models that characterize these domains require new techniques to be devised beyond the current state of the art in runtime verification.
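    As a minimal illustration of the monitor-and-trace idea (the property and event names are hypothetical, and the monitor is hand-coded rather than synthesised from a specification), the sketch below evaluates a finite trace against a simple resource-usage property:

```python
# Minimal runtime-monitor sketch: a small state machine for the property
# "a resource must be acquired before use and released before being acquired
# again", evaluated event by event over a finite trace.

def make_monitor():
    state = {"held": False, "verdict": True}

    def step(event):
        if not state["verdict"]:
            return False                           # a violation verdict is final
        if event == "acquire":
            state["verdict"] = not state["held"]   # double acquire violates
            state["held"] = True
        elif event == "use":
            state["verdict"] = state["held"]       # use without holding violates
        elif event == "release":
            state["verdict"] = state["held"]       # release without holding violates
            state["held"] = False
        return state["verdict"]

    return step

monitor = make_monitor()
trace = ["acquire", "use", "release", "use"]       # the last event violates the property
print([monitor(e) for e in trace])                 # [True, True, True, False]
```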