    Constrained Polymorphic Types for a Calculus with Name Variables

    We extend the simply-typed lambda-calculus with a mechanism for dynamic rebinding of code based on parametric nominal interfaces. That is, we introduce values which represent single fragments, or families of named fragments, of open code, where free variables are associated with names which do not obey alpha-equivalence. In this way, code fragments can be passed as function arguments and manipulated, through their nominal interface, by operators such as rebinding, overriding and renaming. Moreover, by using name variables, it is possible to write terms which are parametric in their nominal interface and/or in the way it is adapted, greatly enhancing expressivity. However, in order to prevent conflicts when instantiating name variables, the name-polymorphic types of such terms need to be equipped with simple inequality constraints. We show soundness of the type system.
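
    A minimal sketch of the idea in Python (illustrative only; the `Frag` class, its methods, and the clash check are inventions, not the paper's calculus): an open-code fragment carries the set of names it leaves free, rebinding supplies a value for one name, and renaming adapts the interface, with a conflict check that mirrors the role of the inequality constraints.

```python
# Toy model (hypothetical) of open-code fragments with nominal interfaces.
# Free variables are tagged with plain-string names that do not obey
# alpha-equivalence; fragments are first-class values manipulated by
# rebinding and renaming.

class Frag:
    def __init__(self, free_names, body):
        self.free_names = frozenset(free_names)  # the nominal interface
        self.body = body                         # callable: env dict -> value

    def rebind(self, name, value):
        # Supply a binding for one interface name, shrinking the interface.
        if name not in self.free_names:
            raise KeyError(f"name {name!r} is not in the interface")
        return Frag(self.free_names - {name},
                    lambda env: self.body({**env, name: value}))

    def rename(self, old, new):
        # Adapt the interface without touching the code; the clash check
        # plays the role of the paper's inequality constraints.
        if new in self.free_names:
            raise KeyError(f"name {new!r} would conflict")
        return Frag((self.free_names - {old}) | {new},
                    lambda env: self.body({**env, old: env[new]}))

    def run(self):
        if self.free_names:
            raise RuntimeError(f"still open on {sorted(self.free_names)}")
        return self.body({})

f = Frag({"x", "y"}, lambda env: env["x"] + env["y"])   # open on x and y
print(f.rebind("x", 2).rebind("y", 3).run())            # 5
print(f.rename("y", "z").rebind("x", 1).rebind("z", 4).run())  # 5
```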

    Autonomous Architectural Assembly And Adaptation

    An increasingly common solution for systems deployed in unpredictable or dangerous environments is to provide the system with an autonomous or self-managing capability. This capability permits the software of the system to adapt to the environmental conditions encountered at runtime by deciding what changes need to be made to the system’s behaviour in order to continue meeting the requirements imposed by the designer. The chief advantage of this approach comes from a reduced reliance on the brittle assumptions made at design time. In this work, we describe mechanisms for adapting the software architecture of a system using a declarative expression of the functional requirements (derived from goals), structural constraints and preferences over the space of non-functional properties possessed by the components of the system. The declarative approach places this work in contrast to existing schemes which require more fine-grained, often procedural, specifications of how to perform adaptations. Our algorithm for assembling and re-assembling configurations chooses between solutions that meet both the functional requirements and the structural constraints by comparing the non-functional properties of the selected components against the designer’s preferences between, for example, a high-performance and a highly reliable solution. In addition to the centralised algorithm, we show how the approach can be applied to a distributed system with no central or master node that is aware of the full space of solutions. We use a gossip protocol as a mechanism by which peer nodes can propose what they think the component configuration is (or should be). Gossip ensures that the nodes will reach agreement on a solution, and will do so in a logarithmic number of steps. This latter property ensures the approach can scale to very large systems. Finally, the work is validated on a number of case studies.
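
    Both steps can be sketched in miniature (a hypothetical toy with invented components, weights, and constraints; not the paper's algorithm): each configuration satisfying the structural constraint is scored against the designer's preferences, and nodes then converge on one proposal through pairwise gossip exchanges in which the preferred configuration always wins.

```python
import random

# Candidate configurations with non-functional properties (invented numbers).
candidates = [
    {"name": "A", "reliability": 0.99, "throughput": 120},
    {"name": "B", "reliability": 0.90, "throughput": 400},
]

def satisfies_constraints(cfg):
    return cfg["reliability"] >= 0.85          # stand-in structural constraint

def preference(cfg):
    # Designer's trade-off, e.g. reliability weighted over raw performance.
    return 0.7 * cfg["reliability"] + 0.3 * cfg["throughput"] / 400

best = max(filter(satisfies_constraints, candidates), key=preference)
print("centralised choice:", best["name"])

# Decentralised variant: nodes gossip proposals; the preferred one spreads.
# With all nodes gossiping in parallel each round, agreement takes O(log n)
# rounds; sequential pairwise exchanges keep this sketch short.
nodes = [random.choice(candidates) for _ in range(16)]
exchanges = 0
while len({n["name"] for n in nodes}) > 1:
    i, j = random.sample(range(len(nodes)), 2)
    nodes[i] = nodes[j] = max(nodes[i], nodes[j], key=preference)
    exchanges += 1
print("agreed on", nodes[0]["name"], "after", exchanges, "exchanges")
```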

    Towards data-aware cost-driven adaptation for service orchestrations.

    Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties, we focus on those related to execution cost and resource usage, in a broad sense, as they can be linked to QoS characteristics. In order to attain more accuracy, we formulate execution cost / resource usage as functions of input data (or appropriate abstractions thereof) and show how these functions can be used to make better, more informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on the one hand, synthesizing these functions automatically from the definitions of the different orchestrations taking part in a system and, on the other hand, effectively using them to reduce the overall costs of non-trivial service-based systems that are sensitive to input data and subject to failure. We validate our approach by means of simulations of scenarios requiring runtime selection of services and adaptation due to service failure. A number of rebinding strategies, including the use of cost functions, are compared.
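
    The core idea can be illustrated with a small sketch (the services, cost models, and retry assumption are invented for illustration): because cost is a function of the input, the cheapest binding changes with the request, and the expected cost can fold in the possibility of failure.

```python
# Cost models as functions of an abstraction of the input (its size n);
# all numbers are hypothetical.
services = {
    "fast_small": {"cost": lambda n: 2.0 + 0.50 * n, "success": 0.99},
    "bulk_cheap": {"cost": lambda n: 10.0 + 0.05 * n, "success": 0.95},
}

def expected_cost(name, n):
    # Crude failure model: a failed call must be redone, so the expected
    # cost scales with 1 / success probability.
    svc = services[name]
    return svc["cost"](n) / svc["success"]

def rebind(n):
    """Pick the binding with the lowest expected cost for this input."""
    return min(services, key=lambda name: expected_cost(name, n))

for n in (5, 200):
    print(n, "->", rebind(n))   # small inputs -> fast_small, large -> bulk_cheap
```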

    An Examination of Various Dimensions Associated with Nonprofit Board Member Diversity: The Significance of Organizational Factors

    The sarco/endoplasmic reticulum Ca2+-ATPase (SERCA) sequesters Ca2+ into the endoplasmic reticulum of cells to establish a reservoir for Ca2+ signaling. In the heart, the activity of this transporter is tightly controlled via direct interactions with two competing regulatory micropeptides: phospholamban (PLB) and dwarf open reading frame (DWORF). PLB inhibits SERCA, while DWORF activates SERCA. These competing interactions determine cardiac performance by modulating the Ca2+ signals that drive the contraction/relaxation cycle. Previous studies indicated these SERCA-micropeptide interactions are Ca2+-sensitive; SERCA binds PLB more avidly at low cytoplasmic [Ca2+] but binds DWORF better when [Ca2+] is high. Here, FRET microscopy demonstrated that this opposing Ca2+-sensitivity drives dynamic shifts in SERCA-micropeptide binding during cellular Ca2+ elevations. Evaluating the rates of these equilibrium shifts revealed that PLB monomers freed from SERCA during Ca2+ elevations rapidly oligomerize into PLB pentamers. These stable oligomers unbind slowly, delaying the rebinding of inhibitory PLB monomers to SERCA after Ca2+ elevations. In contrast, DWORF is exchanged rapidly on and off SERCA with respect to the rise and fall of transient Ca2+ signals. Computational modeling revealed that the slow unbinding of PLB pentamers causes PLB monomers to accumulate in these complexes during accelerated cardiac pacing. We propose that this accumulation of PLB pentamers decreases the availability of inhibitory PLB monomers to bind SERCA and contributes to an increase in the contractile force of cardiac muscle at faster heart rates. Moreover, we demonstrated that a mutation of PLB, Arginine 14 deletion, which is associated with lethal dilated cardiomyopathy, further stabilizes PLB pentamers and blunts these dynamic adjustments to Ca2+ handling. It was also determined that the reciprocal Ca2+ sensitivity of PLB and DWORF results from their preference for binding different intermediate conformations that SERCA samples during Ca2+ transport. Specifically, PLB had the highest affinity for the ATP-bound state of SERCA, which prevails at low [Ca2+]. This result led us to hypothesize that tight binding of PLB to the ATP-bound state of SERCA may relate to its inhibitory effect on SERCA, decreasing the pump’s apparent Ca2+ affinity. Using a 2-color SERCA biosensor to report changes in SERCA conformation during Ca2+ binding by changes in intramolecular FRET, we tested whether PLB reduces SERCA Ca2+ affinity in the presence and absence of nucleotide. The results suggest that PLB inhibits SERCA through reversing an allosteric activation of the pump by ATP.
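
    The pacing-dependent trapping of PLB in pentamers can be caricatured with a toy kinetic model (all rate constants, the transient duration, and the duty-cycle assumption are invented; this is not the study's model): monomers oligomerize quickly while Ca2+ is elevated and SERCA releases them, pentamers dissociate slowly, so the larger the fraction of each beat spent in the transient, the more PLB stays locked away.

```python
# Toy two-pool kinetics: normalized PLB split between free monomers and
# pentamers. Hypothetical rates; simple Euler integration for brevity.
K_ON = 1.0        # monomer -> pentamer association while Ca2+ is high (1/s)
K_OFF = 0.2       # slow pentamer dissociation (1/s)
TRANSIENT = 0.2   # assumed fixed Ca2+ transient duration (s)

def pentamer_fraction(pacing_hz, beats=300, dt=0.001):
    monomer, pentamer = 1.0, 0.0
    period = 1.0 / pacing_hz
    t = 0.0
    while t < beats * period:
        # Faster pacing shortens the beat but not the transient, so the
        # fraction of time monomers are free to oligomerize grows.
        in_transient = (t % period) < min(TRANSIENT, 0.9 * period)
        assoc = K_ON * monomer if in_transient else 0.0
        dissoc = K_OFF * pentamer
        monomer += (dissoc - assoc) * dt
        pentamer += (assoc - dissoc) * dt
        t += dt
    return pentamer

for hz in (1, 2, 3):
    print(f"{hz} Hz pacing -> ~{pentamer_fraction(hz):.2f} of PLB in pentamers")
```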

    Damage function for historic paper. Part II: Wear and tear

    Background: As a result of use of library and archival documents, defined as reading with handling in the context of general access, mechanical degradation (wear and tear) accumulates. In contrast to chemical degradation of paper, the accumulation of wear and tear is less well studied. Previous work explored the threshold of mechanical degradation at which a paper document is no longer considered to be fit for the purpose of use by a reader, while in this paper we explore the rate of accumulation of such damage in the context of object handling. Results: The degree of polymerisation (DP) of historic paper of European origin from the mid-19th to mid-20th century was shown to affect the rate of accumulation of wear and tear. While at DP > 800 this accumulation no longer depends on the number of handlings (the process is random), a wear-out function could be developed for documents with DP between 300 and 800. For objects with DP < 300, one large missing piece (i.e. one that contains text) developed on average with each instance of handling, which is why we propose this DP value as a threshold value for safe handling. Conclusions: The developed model of the accumulation of large missing pieces per handling of a document, depending on DP, enables us to calculate the time required for an object to become unfit for use by readers in the context of general access. In the context of the average frequency of document use at The UK National Archives (Kew), this period is 60 years for the category of papers with DP 300, and 450 years for papers with DP 500. At higher DP values, this period of time increases beyond the long-term planning horizon of 500 years, leading to the conclusion that for such papers, accumulation of wear and tear is not a significant collection management concern.
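
    The kind of arithmetic behind those figures can be sketched as follows (the wear-out function, the unfitness threshold, and the handling rate below are hypothetical stand-ins chosen so the output lands near the reported 60 and 450 years; the paper's fitted function will differ):

```python
HANDLINGS_PER_YEAR = 0.33   # assumed average retrieval rate per document
UNFIT_THRESHOLD = 20        # assumed count of large missing pieces => unfit

def pieces_per_handling(dp):
    # Hypothetical wear-out function: 1 large missing piece per handling
    # at DP <= 300, decaying steeply for stronger paper up to DP 800.
    if dp <= 300:
        return 1.0
    return max(0.0, (800 - dp) / 500) ** 4

def years_until_unfit(dp):
    rate = pieces_per_handling(dp)
    if rate == 0.0:
        return None   # accumulation is random, not driven by handling
    return UNFIT_THRESHOLD / (rate * HANDLINGS_PER_YEAR)

for dp in (300, 500, 800):
    years = years_until_unfit(dp)
    label = "beyond any planning horizon" if years is None else f"~{years:.0f} years"
    print(f"DP {dp}: {label}")   # DP 300 -> ~61 years, DP 500 -> ~468 years
```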

    Run-time Variability with First-class Contexts

    Software must be regularly updated to keep up with changing requirements. Unfortunately, to install an update, the system must usually be restarted, which is inconvenient and costly. In this dissertation, we aim to overcome the need for restarts by enabling run-time changes at the programming language level. We argue that the best way to achieve this goal is to improve the support for encapsulation, information hiding and late binding by contextualizing behavior. In our approach, behavioral variations are encapsulated into context objects that alter the behavior of other objects locally. We present three contextual language features that demonstrate our approach. First, we present a feature to evolve software by scoping variations to threads. This way, arbitrary objects can be substituted over time without compromising safety. Second, we present a variant of dynamic proxies that operate by delegation instead of forwarding. The proxies can be used as building blocks to implement contextualization mechanisms from within the language. Third, we contextualize the behavior of objects to intercept exchanges of references between objects. This approach scales information hiding from objects to aggregates. The three language features are supported by formalizations and case studies showing their soundness and practicality. With these three complementary language features, developers can easily design applications that can accommodate run-time changes.
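
    The first feature, thread-scoped behavioral variations, can be approximated in a mainstream language (a rough Python analogue with invented names; the dissertation's mechanism is a language-level design, not this library trick): behavior is looked up late through a per-thread context object, so a variation activated in one thread leaves all other threads running the old version.

```python
import threading

class Context(threading.local):
    """Per-thread holder of the active variation of `greet`."""
    def __init__(self):
        self.greet = lambda name: f"Hello, {name}"   # default behavior

_ctx = Context()

class WithVariation:
    """Scope a behavioral variation to the current thread only."""
    def __init__(self, greet):
        self.new = greet
    def __enter__(self):
        self.saved, _ctx.greet = _ctx.greet, self.new
    def __exit__(self, *exc):
        _ctx.greet = self.saved

def greet(name):
    return _ctx.greet(name)   # late-bound through the context

def worker():
    with WithVariation(lambda name: f"Hi, {name}!"):   # updated, this thread only
        print("updated thread:", greet("world"))

t = threading.Thread(target=worker)
t.start()
t.join()
print("other threads: ", greet("world"))   # still the old behavior
```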

    Glycoprotein IIb/IIIa Antagonists in Acute Coronary Syndromes Undergoing PCI: A Long Way to Select Optimal Agent and Route

    Antiplatelet treatment in patients with an acute coronary syndrome (ACS), with or without ST-segment elevation myocardial infarction (STEMI), requires keeping the balance between potential threats and optimal clinical advantages. Apart from clopidogrel, glycoprotein (GP) IIb/IIIa inhibitors (abciximab and 2 small molecules, tirofiban and eptifibatide) have come to the clinical scene. Recent evidence (2009–2011) is reviewed, pointing to pharmacoeconomic considerations of concern in times of budget restrictions worldwide. In ACS, when clopidogrel plus aspirin are already on board, there might be no advantage in adding small molecules. Whereas in STEMI patients treated by primary PCI all 3 GP IIb/IIIa antagonists might be superimposable, when only ACS is present and PCI is elective, a definite distinction among the 3 agents, both pharmacoeconomically and pharmacodynamically, might be invoked. There are still points open to debate. Among these, the route of administration (upstream versus downstream) remains a matter of uncertainty. Moreover, theoretically, there might be differences not only between abciximab and the small molecules (mostly superimposable) but also between tirofiban and eptifibatide (the former being potentially more potent). Thus, there is a long way to go before a prominent agent among GP IIb/IIIa inhibitors can be selected. The game is still open, and a role will soon be played by new agents.

    Code Injection Vulnerabilities in Web Applications: The Example of Cross-site Scripting

    The majority of security problems in today's Web applications are caused by string-based code injection, with Cross-site Scripting (XSS) being the dominant representative of this vulnerability class. This thesis discusses XSS and suggests defense mechanisms. We do so in three stages: First, we conduct a thorough analysis of JavaScript's capabilities and explain how these capabilities are utilized in XSS attacks. We subsequently design a systematic, hierarchical classification of XSS payloads. In addition, we present a comprehensive survey of publicly documented XSS payloads which is structured according to our proposed classification scheme. Secondly, we explore defensive mechanisms which dynamically prevent the execution of some payload types without eliminating the actual vulnerability. More specifically, we discuss the design and implementation of countermeasures against the XSS payloads "Session Hijacking", "Cross-site Request Forgery", and attacks that target intranet resources. We build upon this and introduce a general methodology for developing such countermeasures: We determine a necessary set of basic capabilities an adversary needs for successfully executing an attack through an analysis of the targeted payload type. The resulting countermeasure relies on revoking one of these capabilities, which in turn renders the payload infeasible. Finally, we present two language-based approaches that prevent XSS and related vulnerabilities: We identify the implicit mixing of data and code during string-based syntax assembly as the root cause of string-based code injection attacks. Consequently, we explore data/code separation in web applications. For this purpose, we propose a novel methodology for token-level data/code partitioning of a computer language's syntactical elements. This forms the basis for our two distinct techniques: For one, we present an approach to detect data/code confusion at run-time and demonstrate how this can be used for attack prevention. Furthermore, we show how vulnerabilities can be avoided through altering the underlying programming language. We introduce a dedicated datatype for syntax assembly instead of using string datatypes themselves for this purpose. We develop a formal, type-theoretical model of the proposed datatype and prove that it provides reliable separation between data and code, hence preventing code injection vulnerabilities. We verify our approach's applicability using a practical implementation for the J2EE application server. Cross-site Scripting (XSS) is one of the most common vulnerability types in web applications. This dissertation treats the XSS problem holistically: based on a systematic analysis of the causes and potential consequences of XSS, together with a comprehensive classification of documented attack types, a methodology is first presented that enables the design of dynamic countermeasures for attack containment. Using this methodology, the design and evaluation of three countermeasures for the attack subclasses "Session Hijacking", "Cross-site Request Forgery", and "attacks on the intranet" are presented. Furthermore, to tackle the underlying problem fundamentally, a type-based approach to the secure programming of web applications is described that guarantees reliable protection against XSS vulnerabilities.
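
    The "dedicated datatype for syntax assembly" idea can be sketched briefly (an invented API, not the thesis' formal model): markup may only be constructed from programmer-written literals, runtime data is escaped the moment it enters, and the type forbids mixing raw strings in, so attacker-controlled data can never be parsed as code.

```python
import html

class Markup:
    """HTML code as a distinct type: data is escaped on entry."""
    def __init__(self, data):
        self._code = html.escape(data)      # data path: always escaped

    @classmethod
    def literal(cls, code):
        # Code path: reserved for trusted, programmer-written fragments.
        m = cls.__new__(cls)
        m._code = code
        return m

    def __add__(self, other):
        if not isinstance(other, Markup):
            raise TypeError("cannot mix raw strings into markup")
        return Markup.literal(self._code + other._code)

    def __str__(self):
        return self._code

user_input = "<script>alert(1)</script>"            # attacker-controlled
page = (Markup.literal("<p>Hello, ")
        + Markup(user_input)                        # enters as data
        + Markup.literal("</p>"))
print(page)   # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```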