    Service orchestration with priority constraints

    Business process management is an operational management approach that focuses on improving business processes. Business processes, i.e., collections of important activities in an organization, are represented in the form of a workflow, an orchestrated and repeatable pattern of activities amenable to automated analysis and control. Priority is an important concept in modeling workflows; we need it to model cancelable and compensable tasks within transactional business processes. We use the Reo coordination language to model and formally analyze workflows. In this paper, we propose a constraint-based approach to formalize priority in Reo. We introduce special channels to propagate and block priority flows, define their semantics as constraints, and model priority propagation as a constraint satisfaction problem.
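    The abstract above does not include the constraint encoding itself. As a rough, hypothetical sketch of the general idea, the following Python fragment encodes a toy priority-propagation problem over three points as a boolean constraint satisfaction problem solved by brute force; the variable names, channel behaviours and constraints are invented for illustration and do not reproduce the paper's Reo semantics.

    # Hypothetical sketch: priority propagation as constraint satisfaction.
    # Variables and constraints are invented; they do not encode Reo channels.
    from itertools import product

    variables = ["p_src", "p_mid", "p_sink"]   # priority flags at three points

    def satisfied(assign):
        c1 = (not assign["p_src"]) or assign["p_mid"]   # priority-propagating channel
        c2 = not assign["p_sink"]                       # priority-blocking channel
        c3 = assign["p_src"]                            # cancelable task raises priority
        return c1 and c2 and c3

    solutions = [
        dict(zip(variables, values))
        for values in product([False, True], repeat=len(variables))
        if satisfied(dict(zip(variables, values)))
    ]
    print(solutions)   # -> [{'p_src': True, 'p_mid': True, 'p_sink': False}]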

    Specification and verification of network algorithms using temporal logic

    In software engineering, formal methods are mathematically based techniques used in the specification, development and verification of algorithms and programs in order to provide reliability and robustness of systems. One of the most difficult challenges for software engineering is to tackle the complexity of algorithms and software found in concurrent systems. Networked systems have come to prominence in many aspects of modern life, and therefore software engineering techniques for treating concurrency in such systems have acquired particular importance. Algorithms in the software of concurrent systems are used to accomplish certain tasks which need to comply with the properties required of the system as a whole. These properties can be broadly subdivided into 'safety properties', where the requirement is that 'nothing bad will happen', and 'liveness properties', where the requirement is that 'something good will happen'. As such, specifying network algorithms and their safety and liveness properties through formal methods is the aim of the research presented in this thesis. Since temporal logic has proved to be a successful technique in formal methods, with various practical applications due to the availability of powerful model-checking tools such as the NuSMV model checker, we investigate the specification and verification of network algorithms using temporal logic and model checking. In the first part of the thesis, we specify and verify safety properties for network algorithms. We use temporal logic to prove the safety property of data consistency, or serializability, for a model of the execution of an unbounded number of concurrent transactions over time, which could represent software schedulers for an unknown number of transactions present in a network. In the second part of the thesis, we specify and verify liveness properties of networked flooding algorithms. Considering the above in more detail, the first part of this thesis specifies a model of the execution of an unbounded number of concurrent transactions over time in propositional Linear Temporal Logic (LTL) in order to prove serializability. This is made possible by assuming that data items are ordered and that the transactions accessing these data items respect this order, as then there is a bound on the number of transactions that need to be considered to prove serializability. In particular, we make use of recent work which places such bounds on the number of transactions needed when data items are accessed in order but not necessarily contiguously, i.e., there may be 'gaps' in the data items accessed by individual transactions. Our aim is to specify the concurrent modification of data held on routers in a network as a transactional model. The correctness of the routing protocol, and thus its safety and reliability, then corresponds to the serializability of the transactions. We specify an example of routing in a network and the corresponding serializability condition in LTL. This is then coded up in the NuSMV model checker and proofs are performed. The novelty of this part is that no previous research has provided a method for detecting serializability and cycles for an unlimited number of transactions accessing the data on routers when those transactions may access the data items with gaps. In addition, linear temporal logic has not previously been used in this scenario to prove the correctness of the network system.
    This part is particularly useful for network administration protocols, where it is critical to maintain the correctness of the system. This safety property can be maintained using the presented work, in which cycles among transactions accessing the data items can be detected by checking only a limited number of cycles rather than all possible cycles that could be caused by the network transactions. The second part of the thesis offers two contributions. Firstly, we specify the basic synchronous network flooding algorithm, for any fixed size of network, in LTL. The specification can be customized to any single network topology or class of topologies. A specification for the termination problem is formulated and used to compare different topologies with regard to earlier termination. We give a worked example of one topology resulting in earlier termination than another, for which we perform a formal verification using the NuSMV model checker. The novelty of the second part lies in using linear temporal logic and the NuSMV model checker to specify and verify the liveness property of the flooding algorithm. The presented work addresses a difficult scenario in which the network nodes are memoryless; this makes detecting the termination of network flooding complicated, especially for networks with complex topologies. In the literature, researchers have focused on testing and simulation to detect flooding termination. In this work, we use a robust and rigorous method to specify and verify the synchronous flooding algorithm and its termination. We also show that linear temporal logic and the NuSMV model checker can be used to compare synchronous flooding termination between topologies. Adding to the novelty of the second contribution, in addition to the synchronous form of the network flooding algorithm, we provide a formal model of bounded asynchronous network flooding by extending the synchronous flooding model to allow a sent message, non-deterministically, either to be received instantaneously or to enter a transit phase prior to being received. A generalization of 'rounds' from synchronous flooding to the asynchronous case is used as a unit of time, providing a measure of time to termination, as the number of rounds taken, for a run of an asynchronous system. The model is encoded into temporal logic and a proof obligation is given for comparing the termination times of asynchronous and synchronous systems. Worked examples are formally verified using the NuSMV model checker. This work offers a constraint-based methodology for the verification of liveness properties of software algorithms distributed across the nodes in a network.
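    As a hedged illustration of the algorithm whose termination is specified and verified above, the short Python simulation below runs synchronous flooding on a small invented topology, assuming the usual rule that a memoryless node forwards the message to every neighbour except those it received it from in the previous round; it is a sketch only, not the thesis's LTL/NuSMV encoding.

    # Hypothetical sketch: synchronous flooding on an invented 4-node cycle.
    adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

    def synchronous_flooding(adjacency, initiator=0):
        # 'sending' maps a node to the set of neighbours it received the message
        # from in the previous round; it forwards to every other neighbour.
        sending = {initiator: set()}       # the initiator starts the flood
        rounds = 0
        while sending:
            in_transit = {}                # receiver -> senders in this round
            for node, got_from in sending.items():
                for nbr in adjacency[node]:
                    if nbr not in got_from:
                        in_transit.setdefault(nbr, set()).add(node)
            if in_transit:
                rounds += 1
            sending = in_transit
        return rounds                      # rounds in which a message was in transit

    print(synchronous_flooding(adjacency))   # -> 2 for this topology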

    Correct-by-Construction Web Service Architecture

    Service-Oriented Computing aims to facilitate development of large-scale applications out of loosely coupled services. The service architecture sets the framework for achieving coherence and interoperability despite service autonomy and the heterogeneity in data representation and protocols. Service-Oriented Architectures are based on standardized service contracts, in order to infuse characteristic properties (stateless interactions, atomicity, etc.). However, contracts cannot ensure correctness of services if essential operational details are overlooked, as is usually the case. We introduce a modeling framework for the specification of Web Service architectures, in terms of formal operational semantics. Our approach aims to enable rigorous design of Web Services, based on the Behaviour Interaction Priorities (BIP) component framework and the principles of correctness-by-construction. We provide executable BIP models for SOAP-based and RESTful Web Services and for a service architecture with session replication. The architectures are treated as reusable design artifacts that may be composed, such that their characteristic properties are preserved.
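    As a loose, hypothetical illustration of the BIP flavour referred to above (components, interactions, priorities), the Python sketch below composes two invented state-machine components through synchronizing interactions and resolves conflicts with a priority order; it is not the paper's executable Web Service model.

    # Hypothetical sketch: components as state machines, interactions as
    # synchronized port sets, a priority order picking among enabled interactions.
    class Component:
        def __init__(self, name, transitions, state):
            self.name, self.transitions, self.state = name, transitions, state
        def enabled(self, port):
            return (self.state, port) in self.transitions
        def fire(self, port):
            self.state = self.transitions[(self.state, port)]

    client = Component("client", {("idle", "send"): "waiting",
                                  ("waiting", "recv"): "idle"}, "idle")
    server = Component("server", {("ready", "send"): "busy",
                                  ("busy", "recv"): "ready",
                                  ("ready", "log"): "ready"}, "ready")

    interactions = {                      # each interaction synchronizes these ports
        "request":  [(client, "send"), (server, "send")],
        "response": [(client, "recv"), (server, "recv")],
        "logging":  [(server, "log")],
    }
    priority = ["response", "request", "logging"]   # earlier entries win

    def step():
        for name in priority:             # priority filters the enabled interactions
            parts = interactions[name]
            if all(comp.enabled(port) for comp, port in parts):
                for comp, port in parts:
                    comp.fire(port)
                return name
        return None

    print([step() for _ in range(4)])     # -> ['request', 'response', 'request', 'response']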

    Timing verification in transaction modeling

    Hardware/Software (Hw/Sw) systems are likely to become essential in all aspects of everyday life. The increasing penetration of Hw/Sw systems in products and services creates a need for their efficient development. However, the productive design of these systems is limited by several factors, among them the increasing complexity of applications, the increasing degree of integration, the heterogeneous nature of products and services, and shrinking time-to-market delays.
    The Transaction Level Modeling (TLM) paradigm is considered one of the most promising simulation paradigms for breaking down design complexity by allowing the exploration and validation of design alternatives at high levels of abstraction. This research proposes a timing expression methodology in TLM based on temporal constraint analysis. We propose to use a combination of two paradigms to accelerate the design process: TLM on the one hand and a methodology to express timing between different transactions on the other. Using a timing specification model and the underlying timing constraint verification algorithms can decrease the time needed for verification by simulation, and combining simulation and analytical design exploration methods in one framework can improve the analytical power of design verification and validation. We have proposed a new timing verification algorithm based on a linearization procedure for min/max constraints, together with an optimization technique to improve its efficiency. We have completed the mathematical representation of all constraint types discussed in the literature, creating in this way a unified timing specification methodology that can express a wider class of applications than previously presented ones. We have developed methods for communication structure exploration and refinement that allowed us to apply the timing verification algorithms in system exploration at different TLM levels. As there are many definitions of TLM and many development environments offering TLM in their design cycle, each with its pros and cons, in the context of our research we define a hardware/software (Hw/Sw) specification and simulation methodology which supports TLM in such a way that several modeling concepts can be considered separately. Relying on modern software engineering technologies such as XML, XSLT, XSD and object-oriented programming, supported by the .Net Framework, the proposed approach makes intermediate design model reuse possible in order to cope with the time-to-market constraint. The proposed TLM design methodology provides a general approach to system modeling that separates from the system specification various design aspects, such as the models of computation used to describe the system at multiple levels of abstraction. As a result, in the system model we can clearly identify the system functionality without details related to the development platform, thereby leading to better "portability" of the application model.
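    The abstract does not reproduce the verification algorithm itself. As a minimal, hypothetical sketch of the kind of min/max timing constraints involved, the Python fragment below checks a recorded transaction-level event trace against invented separation bounds; the thesis's linearization-based algorithm, which reasons about the constraints symbolically rather than over a single trace, is not shown.

    # Hypothetical sketch: checking min/max separations between transaction events.
    trace = {"req_sent": 0, "req_received": 4, "resp_sent": 9, "resp_received": 12}

    # Each constraint bounds the separation between two events: (earlier, later, min, max).
    constraints = [
        ("req_sent", "req_received", 1, 5),     # bus latency between 1 and 5 time units
        ("req_received", "resp_sent", 0, 10),   # processing delay at the target
        ("req_sent", "resp_received", 0, 15),   # end-to-end deadline
    ]

    def violations(trace, constraints):
        bad = []
        for earlier, later, lo, hi in constraints:
            sep = trace[later] - trace[earlier]
            if not (lo <= sep <= hi):
                bad.append((earlier, later, sep, lo, hi))
        return bad

    print(violations(trace, constraints))   # -> [] means all constraints hold on this trace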

    Comparative Study Of Implementing The On-Premises and Cloud Business Intelligence On Business Problems In a Multi-National Software Development Company

    Internship Report presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Nowadays every enterprise wants to be competitive, and in the last decade data volumes have increased dramatically. As the data available in the market grows each year, the ability to extract, analyze and manage it becomes a fundamental condition for an organization to remain competitive. In this context, organizations need to adapt their technologies to the new business reality and provide new solutions that meet new requests. Business Intelligence is, by its main definition, the ability to extract, analyze and manage data through which an organization gains a competitive advantage. Before adopting this approach, it is important to decide which computing system it will be based on, considering the volume of data, the business context of the organization and the technology requirements of the market. In the last 10 years the popularity of cloud computing has increased, dividing computing systems into on-premises and cloud. The benefits of the cloud lie in scalability, availability and lower costs; on the other hand, traditional on-premises systems provide independence of software configuration, control over data and high security. The final decision as to which computing paradigm to follow in the organization is not an easy task and depends on the business context of the organization and on how the current on-premises systems perform in its business processes. Business Intelligence functions therefore require in-depth analysis in order to understand whether cloud computing technologies could perform better in those processes than traditional systems. The objective of this internship is to conduct a comparative study between the two computing systems in routine Business Intelligence functions. The study compares On-Premises Business Intelligence based on Oracle Architecture with Cloud Business Intelligence based on Google Cloud Services. The comparative study is conducted over 12 months through participation in activities and projects in the Business Intelligence department of a company that develops digital software solutions for the telecommunications market, as an internship student in the 2nd year of a master's degree in Information Management, with a specialization in Knowledge Management and Business Intelligence at Nova Information Management School (NOVA IMS).

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    A patient agent controlled customized blockchain based framework for internet of things

    Although Blockchain implementations have emerged as revolutionary technologies for various industrial applications including cryptocurrencies, they have not been widely deployed to store data streaming from sensors to remote servers in architectures known as the Internet of Things. New Blockchain for the Internet of Things models promise secure solutions for eHealth, smart cities, and other applications. These models pave the way for continuous monitoring of patients' physiological signs with wearable sensors to augment traditional medical practice without recourse to storing data with a trusted authority. However, existing Blockchain algorithms cannot accommodate the huge volumes, security, and privacy requirements of health data. In this thesis, our first contribution is an end-to-end secure eHealth architecture that introduces an intelligent Patient Centric Agent. The Patient Centric Agent, executing on dedicated hardware, manages the storage and access of streams of sensor-generated health data in a customized Blockchain and other less secure repositories. As IoT devices cannot host Blockchain technology due to their limited memory, power, and computational resources, the Patient Centric Agent coordinates and communicates with a private customized Blockchain on behalf of the wearable devices. While the adoption of a Patient Centric Agent offers solutions for continuous monitoring of patients' health and for dealing with storage, data privacy and network security issues, the architecture is vulnerable to Denial of Service (DoS) and single-point-of-failure attacks. To address this issue, we advance a second contribution: a decentralised eHealth system in which the Patient Centric Agent is replicated at three levels: the Sensing Layer, the NEAR Processing Layer and the FAR Processing Layer. The functionalities of the Patient Centric Agent are customized to manage the tasks of the three levels. Simulations confirm protection of the architecture against DoS attacks. Few patients require all their health data to be stored in Blockchain repositories; instead, an appropriate storage medium needs to be selected for each chunk of data by matching personal needs and preferences with the features of candidate storage mediums. Motivated by this context, we advance a third contribution: a recommendation model for health data storage that can accommodate patient preferences and make storage decisions rapidly, in real time, even with streamed data. The mapping between health data features and the characteristics of each repository is learned using machine learning. The Blockchain's capacity to make transactions and store records without central oversight enables its application to IoT networks beyond health, such as underwater IoT networks, where the unattended nature of the nodes threatens their security and privacy. However, underwater IoT differs from ground IoT in that acoustic signals are the communication medium, leading to high propagation delays and high error rates exacerbated by turbulent water currents. Our fourth contribution is a customized Blockchain-leveraged framework, with the Patient Centric Agent model renamed the Smart Agent, for securely monitoring underwater IoT. Finally, the Smart Agent has been investigated in developing an IoT smart home and smart city monitoring framework. The key algorithms underpinning each contribution have been implemented and analysed using simulators.
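    As a rough, hypothetical sketch of the recommendation idea described above (learning a mapping from health-data features to a storage medium), the Python fragment below trains a small decision tree on invented feature vectors and labels; the thesis's actual model, feature set and repositories are not reproduced, and scikit-learn is assumed to be available.

    # Hypothetical sketch: mapping data-chunk features to a storage medium.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: (sensitivity 0-1, size in KB, accesses per day) -- invented values.
    X = [
        [0.9,    2,  1],   # highly sensitive, tiny, rarely accessed
        [0.9,    4,  2],
        [0.2, 5000, 50],   # low sensitivity, large, frequently accessed
        [0.1, 8000, 40],
        [0.5,  100, 10],
        [0.4,  200, 12],
    ]
    y = ["blockchain", "blockchain", "cloud", "cloud", "local_gateway", "local_gateway"]

    model = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(model.predict([[0.8, 3, 1]]))   # likely -> ['blockchain']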

    Embedded System Design

    A unique feature of this open access textbook is to provide a comprehensive introduction to the fundamental knowledge in embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution of single-core processors to multi-core processors, and the increased importance of energy efficiency and thermal issues.

    Implementation and performance aspects of Kahn process networks : an investigation of problem modeling, implementation techniques, and scheduling strategies

    To save power and reduce heating, modern processors run at lower clock frequencies than their predecessors. Manufacturers compensate for the loss of performance by packing several cores into one chip, which can then run several programs simultaneously. Although multi-core processors have greater total computing power than earlier processors, most existing programs nevertheless run more slowly than on the older processors. This happens because most programs are written in a way that allows them to exploit only one of the cores. For a program to be able to exploit several cores, it must be rewritten almost from scratch, which is time-consuming and expensive; not least, developers must learn an entirely new way of thinking. In this work, carried out in the period 2005-2009 at Institutt for informatikk and Simula, we have investigated how to make it easier to develop parallel programs that use several cores. We took as our starting point the mathematical framework of Kahn process networks, which dates from the 1970s, and implemented a library that allows existing programs to be easily extended to use several cores. Using our library, programs can automatically use all available cores in a computer, without any changes. Our experiments have also shown that adapting existing programs to our library requires minimal changes to existing code.
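    As a minimal, hypothetical sketch of the Kahn process network programming model described above, the Python fragment below connects three processes (threads) through FIFO channels with blocking reads; it only illustrates the model and is not the thesis's library.

    # Hypothetical sketch: a tiny Kahn process network with threads and FIFO queues.
    import threading, queue

    def producer(out_ch, n):
        for i in range(n):
            out_ch.put(i)          # write a token to the channel
        out_ch.put(None)           # end-of-stream marker

    def doubler(in_ch, out_ch):
        while True:
            token = in_ch.get()    # blocking read, as in Kahn's model
            if token is None:
                out_ch.put(None)
                return
            out_ch.put(token * 2)

    def consumer(in_ch):
        while True:
            token = in_ch.get()
            if token is None:
                return
            print(token)

    a, b = queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=producer, args=(a, 5)),
        threading.Thread(target=doubler, args=(a, b)),
        threading.Thread(target=consumer, args=(b,)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                   # prints 0, 2, 4, 6, 8 (one per line)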