
    A Visual Language for Modeling Business Process Compliance Rules

    A fundamental challenge for enterprises is to ensure that their business processes comply with rules stemming from various sources, e.g., corporate guidelines, best practices, standards, and laws. In general, a compliance rule may refer to multiple process perspectives, including control flow, time, data, resources, and interactions with business partners. On the one hand, compliance rules should be comprehensible for the domain experts who must define, verify, and apply them. On the other hand, these rules should have a precise semantics to avoid ambiguities and to enable automated processing. A visual language is advantageous in this context, as it hides formal details and offers an intuitive way of modeling compliance rules. However, existing visual languages for compliance rule modeling have focused on the control-flow perspective and lack proper support for the other process perspectives. To remedy this drawback, this paper introduces the extended Compliance Rule Graph language, which enables the visual modeling of compliance rules with support for multiple perspectives. Overall, this language will foster the modeling and verification of compliance rules in practice.
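    The abstract stops short of a concrete rule, so the following sketch may help make the multi-perspective idea tangible. It is a hypothetical Python encoding (the rule, the event fields, and the function name are all invented for illustration and are not part of the eCRG language) of a rule touching the control-flow, data, resource, and time perspectives: every payment over 10,000 must be approved by a manager within 3 days.

    from datetime import datetime, timedelta

    # Hypothetical event trace: each event has an activity name (control flow),
    # a payload (data), a performer (resource), and a timestamp (time).
    trace = [
        {"activity": "pay", "amount": 12_000, "by": "clerk",   "at": datetime(2015, 5, 1)},
        {"activity": "approve",               "by": "manager", "at": datetime(2015, 5, 3)},
    ]

    def rule_holds(trace):
        """Every 'pay' over 10,000 must be followed by an 'approve' performed
        by a manager within 3 days (illustrative rule, not from the paper)."""
        for i, e in enumerate(trace):
            if e["activity"] == "pay" and e["amount"] > 10_000:
                ok = any(
                    f["activity"] == "approve"
                    and f["by"] == "manager"
                    and f["at"] - e["at"] <= timedelta(days=3)
                    for f in trace[i + 1:]
                )
                if not ok:
                    return False
        return True

    print(rule_holds(trace))  # True for the sample trace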

    Update Consistency for Wait-free Concurrent Objects

    In large-scale systems such as the Internet, replicating data is an essential feature for providing availability and fault tolerance. Attiya and Welch proved that using strong consistency criteria such as atomicity is costly, as each operation may need an execution time linear in the latency of the communication network. Weaker consistency criteria like causal consistency and PRAM consistency do not ensure convergence: the different replicas are not guaranteed to converge towards a unique state. Eventual consistency guarantees that all replicas eventually converge when the participants stop updating. However, it fails to fully specify the semantics of the operations on shared objects and requires additional non-intuitive and error-prone distributed specification techniques. This paper introduces and formalizes a new consistency criterion, called update consistency, which requires the state of a replicated object to be consistent with a linearization of all the updates. In other words, whereas atomicity imposes a linearization of all of the operations, this criterion imposes it only on updates. Consequently, some read operations may return outdated values. Update consistency is stronger than eventual consistency, so we can replace eventually consistent objects with update consistent ones in any program. Finally, we prove that update consistency is universal, in the sense that any object can be implemented under this criterion in a distributed system where any number of nodes may crash. Comment: appears in the International Parallel and Distributed Processing Symposium, May 2015, Hyderabad, India.
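    To make the criterion concrete, here is a minimal sketch of an update-consistent read/write register, assuming the kind of construction the abstract hints at: updates carry (Lamport clock, replica id) tags, and every replica's state is determined by the total order on those tags. Once all updates are delivered, every replica computes the state of the same linearization of updates; reads in the meantime may return outdated values. All class and method names are invented for illustration.

    class Replica:
        def __init__(self, rid):
            self.rid = rid
            self.clock = 0
            self.updates = set()          # set of (clock, rid, value) tags

        def write(self, value):
            self.clock += 1
            u = (self.clock, self.rid, value)
            self.updates.add(u)
            return u                      # to be broadcast to the other replicas

        def deliver(self, update):
            clock, _, _ = update
            self.clock = max(self.clock, clock)   # advance past delivered update
            self.updates.add(update)

        def read(self):
            if not self.updates:
                return None
            # State = last write in the total order over update tags.
            return max(self.updates)[2]

    a, b = Replica("a"), Replica("b")
    ua = a.write(1)
    ub = b.write(2)
    print(a.read(), b.read())  # 1 2  (divergent reads are allowed)
    a.deliver(ub); b.deliver(ua)
    print(a.read(), b.read())  # 2 2  (both converge to the same linearization)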

    Model checking: Algorithmic verification and debugging

    Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis, working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model (representing, for example, a hardware or software design) satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. (Moshe Y. Vardi, Editor-in-Chief)
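    As a concrete illustration of the core idea (not of the authors' tools), the following toy Python checker exhaustively explores the reachable states of a deliberately buggy two-process mutual-exclusion model and returns a counterexample path when the safety invariant fails; the model, names, and property are invented for illustration.

    from collections import deque

    def successors(state):
        # state = (pc0, pc1), each pc in {"idle", "trying", "critical"}
        for i in (0, 1):
            pcs = list(state)
            if pcs[i] == "idle":
                pcs[i] = "trying"
            elif pcs[i] == "trying":
                pcs[i] = "critical"   # bug: never checks the other process
            else:
                pcs[i] = "idle"
            yield tuple(pcs)

    def invariant(state):
        return state != ("critical", "critical")  # mutual exclusion

    def model_check(init):
        seen, queue = {init: None}, deque([init])
        while queue:
            s = queue.popleft()
            if not invariant(s):
                path = []                     # reconstruct counterexample
                while s is not None:
                    path.append(s)
                    s = seen[s]
                return list(reversed(path))
            for t in successors(s):
                if t not in seen:
                    seen[t] = s
                    queue.append(t)
        return None  # invariant holds in all reachable states

    print(model_check(("idle", "idle")))
    # [('idle', 'idle'), ('trying', 'idle'), ('critical', 'idle'),
    #  ('critical', 'trying'), ('critical', 'critical')]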

    A unified approach for static and runtime verification: framework and applications

    Static verification of software is becoming ever more effective and efficient. Still, static techniques either offer high precision, in which case powerful judgements are hard to achieve automatically, or they rely on abstractions that support increased automation but may lose important aspects of the concrete system. Runtime verification has complementary strengths and weaknesses: it combines full precision of the model (including the real deployment environment) with full automation, but it cannot judge future or alternative runs. Another drawback of runtime verification is the computational overhead of monitoring the running system, which, although typically not very high, can still be prohibitive in certain settings. In this paper we propose a framework that combines static analysis techniques and runtime verification with the aim of getting the best of both. In particular, we discuss an instantiation of our framework for the deductive theorem prover KeY and the runtime verification tool Larva. Apart from combining static and dynamic verification, this approach also combines the data-centric analysis of KeY with the control-centric analysis of Larva. An advantage of the approach is that, through the use of a single specification usable by both analysis techniques, expensive parts of the analysis can be moved to the static phase, allowing the runtime monitor to make significant assumptions and drop parts of expensive checks at runtime. We also discuss specific applications of our approach.
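    To illustrate the control-centric, automaton-based monitoring attributed to Larva above, here is a minimal Python sketch (the names and the property are invented; this is not Larva's actual API): a property automaton consumes events emitted by the running system and flags a violation as soon as the observed trace leaves the allowed language.

    VIOLATION = "violation"

    # Property: a file must be opened before it is read, and never read after close.
    TRANSITIONS = {
        ("closed", "open"):  "opened",
        ("opened", "read"):  "opened",
        ("opened", "close"): "closed",
    }

    class Monitor:
        def __init__(self, initial="closed"):
            self.state = initial

        def observe(self, event):
            # Any (state, event) pair outside the table is a violation.
            self.state = TRANSITIONS.get((self.state, event), VIOLATION)
            return self.state

    mon = Monitor()
    for event in ["open", "read", "close", "read"]:   # buggy trace
        if mon.observe(event) == VIOLATION:
            print(f"property violated at event {event!r}")
            break

    In the combined approach described in the abstract, a static prover such as KeY could discharge some transitions ahead of time, so the deployed monitor would only need to check the residual ones.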

    The Nature of Physical Knowledge


    Cryptology in the Crowd

    Accidents happen: you may misplace the key to your home, or the PIN to your home security system may have been written on an ill-placed post-it note. And so they end up in the hands of a bad actor, who is then granted the power to wreak all kinds of havoc in your life: the security of your home grants no guarantees when keys are stolen and PINs are leaked. Nonetheless your neighbour, whose key-and-PIN routines leave comparatively little to be desired, should feel safe in the knowledge that even though you cannot keep your house safe from intruders, their home remains secure. It is likewise with cryptography, whose security also relies on the secrecy of key material: intuitively, the ability to recover the secret keys of other users should not help an adversary break into an uncompromised system. Yet formalizing this intuition has turned out to be tricky, and several competing notions of security of varying strength have emerged. This raises the question: when modelling a real-world scenario with many users, some of whom may be compromised, which formalization is the right one? Or: how do we build cryptology in a crowd? Paper I embarks on the quest to answer these questions by studying how various notions of multi-user IND-CCA security compare to each other, with and without the ability to adaptively compromise users. We partly answer the question by showing that, without compromise, some notions of security really are preferable to others. The situation is left largely open, however, when compromise is accounted for. Paper II takes a detour to a related set of security notions in which, rather than attacking a single user, an adversary seeks to break the security of many at once. One imagines an unusually powerful adversary, for example a state-sponsored actor, for whom brute-forcing a single system is not a problem. The goal then shifts from securing every user to making mass surveillance as difficult as possible, so that the vast majority of uncompromised users can remain secure. Paper III picks up where Paper I left off by comparing and systematizing the same security notions against a wider array of notions that aim to capture the same (or similar) scenarios. These notions appear under the names of Selective Opening Attacks (SOA) and Non-Committing Encryption (NCE), and are typically significantly stronger than the notions of IND-CCA security studied in Paper I. With a systematization in place, we identify and highlight a number of gaps in the literature, some of which we close, and many of which are posed as open problems.
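    To make the setting concrete, the following schematic Python sketch renders one variant of a multi-user IND-CCA game with adaptive compromise. The exact oracle set and winning condition are precisely where the competing notions compared in the thesis diverge; all names are invented for illustration, and a toy XOR "scheme" stands in for a real encryption scheme.

    import secrets

    class ToyScheme:
        def keygen(self):
            k = secrets.token_bytes(16)
            return k, k                          # (public, secret); toy only

        def enc(self, pk, m):
            return bytes(a ^ b for a, b in zip(m, pk))

        def dec(self, sk, c):
            return bytes(a ^ b for a, b in zip(c, sk))

    class MultiUserINDCCA:
        def __init__(self, scheme, n_users):
            self.scheme = scheme
            self.b = secrets.randbelow(2)        # single hidden challenge bit
            self.keys = [scheme.keygen() for _ in range(n_users)]
            self.challenged, self.corrupted = set(), set()

        def challenge(self, i, m0, m1):
            assert len(m0) == len(m1) and i not in self.corrupted
            self.challenged.add(i)
            return self.scheme.enc(self.keys[i][0], (m0, m1)[self.b])

        def corrupt(self, i):
            # Adaptive compromise: reveals user i's secret key. Whether this
            # is allowed for challenged users is one point where notions diverge.
            assert i not in self.challenged
            self.corrupted.add(i)
            return self.keys[i][1]

        def decrypt(self, i, c):
            # CCA decryption oracle; real games forbid challenge ciphertexts.
            return self.scheme.dec(self.keys[i][1], c)

        def finalize(self, guess):
            return guess == self.b               # adversary wins by guessing b

    game = MultiUserINDCCA(ToyScheme(), n_users=3)
    sk1 = game.corrupt(1)                        # compromise one user
    c = game.challenge(0, b"attack at dawn!!", b"retreat at dusk!")
    print(game.finalize(0))                      # blind guess: wins half the time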

    Empiricism: An Environment for Humanist Mathematics

    Get PDF
    Humanists have extended some links between mathematics and the physical world, but most mathematicians still believe they operate in an immaterial realm of the mind, with unquestionable logic and abstract thought. By rehabilitating the empiricism of John Stuart Mill and combining it with growing knowledge of the character of the human mind, we can escape from the indefinable Platonic universe of immaterial consciousness and abandon the futile quest for certainty that has plagued philosophy since the time of the Greeks.