    Lazy Repairing Backtracking for Dynamic Constraint Satisfaction Problems

    Extended Partial Dynamic Backtracking (EPDB) is a repair algorithm based on PDB. It handles Dynamic CSPs using ordering heuristics together with retroactive data structures, safety conditions, and nogoods saved during the search process. In this paper, we show that the drawback of both EPDB and PDB is the exhaustive verification of the orders between variables saved in safety conditions and nogoods. This verification markedly increases search time, especially since orders can often only be deduced indirectly. We therefore propose a new approach for dynamically changing environments, Lazy Repairing Backtracking (LRB), a faster version of EPDB that deduces orders directly from the ordering heuristic in use. We evaluate LRB on various kinds of problems, comparing it, on the one hand, with EPDB to show its effectiveness and, on the other hand, with MAC-2001 to determine the perturbation rate from which re-solving a DCSP with an efficient approach becomes more advantageous than repair.
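
    To make the contrast concrete, the sketch below (Python, all names hypothetical) compares the two ways of checking whether one variable precedes another: the EPDB/PDB-style transitive search through saved order conditions versus the LRB-style direct lookup in the ordering heuristic.

        # Hypothetical sketch: two ways to decide whether variable x must
        # precede variable y when validating a saved nogood.
        def order_via_conditions(x, y, safety_conditions):
            """EPDB/PDB-style exhaustive check: orders are often only
            indirectly deducible, so search the saved (before, after)
            pairs transitively."""
            seen, frontier = set(), [x]
            while frontier:
                v = frontier.pop()
                if v == y:
                    return True
                seen.add(v)
                frontier.extend(w for (u, w) in safety_conditions
                                if u == v and w not in seen)
            return False

        def order_via_heuristic(x, y, heuristic_rank):
            """LRB-style direct check: a single comparison of the
            positions assigned by the ordering heuristic."""
            return heuristic_rank[x] < heuristic_rank[y]

        conditions = [("x1", "x2"), ("x2", "x3")]            # x1 < x2 < x3
        rank = {"x1": 0, "x2": 1, "x3": 2}
        assert order_via_conditions("x1", "x3", conditions)  # transitive search
        assert order_via_heuristic("x1", "x3", rank)         # one comparison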

    Improving the Policy Specification for Practical Access Control Systems

    Access control systems play a crucial role in protecting the security of information systems by ensuring that only authorized users are granted access to sensitive resources, and the protection is only as good as the access control policies. To enable a security administrator to express her desired policy conveniently, it is paramount that the policy specification be expressive, comprehensible, and free of inconsistencies. In this dissertation, we study the policy specifications of three practical access control systems (obligation systems, firewalls, and Security-Enhanced Linux in Android) and improve their expressiveness, comprehensibility, and consistency. First, we improve the expressiveness of obligation policies for handling different types of obligations. We propose a language for specifying obligations, as well as an architecture for handling access control policies with these obligations, by extending XACML (the de facto standard for specifying access control policies). We also implement our design in a prototype system named ExtXACML to handle various obligations. Second, we improve the comprehensibility of firewall policies, enabling administrators to better understand and manage them. We introduce a tri-modularized design of firewall policies that elevates them from monolithic to modular. To support legacy firewall policies, we also define a five-step process and present algorithms for converting them into their modularized form. Finally, we improve the consistency of Security-Enhanced Linux in Android (SEAndroid) policies to reduce the attack surface in Android systems. We propose a systematic approach, as well as a semiautomatic tool, for uncovering three classes of policy misconfigurations. We also analyze SEAndroid policies from four Android versions and seven Android phone vendors, and in all of them we observe examples of potential policy misconfigurations.
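
    As a rough illustration of the obligation-handling idea, the sketch below models XACML-style obligations attached to an access decision; the classes and the enforce function are invented for this sketch and are not the ExtXACML API.

        from dataclasses import dataclass, field

        @dataclass
        class Obligation:
            action: str                    # e.g. "log-access", "notify-owner"
            fulfill_on: str                # attaches to "Permit" or "Deny"
            attributes: dict = field(default_factory=dict)

        @dataclass
        class Decision:
            effect: str                    # "Permit" or "Deny"
            obligations: list

        def enforce(decision, handlers):
            """Enforcement-point logic: the effect stands only if every
            obligation attached to it can be discharged by a handler."""
            for ob in decision.obligations:
                if ob.fulfill_on != decision.effect:
                    continue
                handler = handlers.get(ob.action)
                if handler is None:
                    return "Indeterminate"     # unfulfillable obligation
                handler(ob.attributes)
            return decision.effect

        decision = Decision("Permit", [Obligation("log-access", "Permit",
                                                  {"target": "record-42"})])
        print(enforce(decision, {"log-access": print}))   # logs, then "Permit"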

    Metareasoning about propagators for constraint satisfaction

    Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to determine the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances with a large runtime difference between algorithms. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional cost of deciding which method to apply is outweighed by the time savings compared to applying the same solving method to all problem instances.
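
    A minimal sketch of the selection idea, using scikit-learn's CART tree as a stand-in for Weka's J48 and invented attribute values: each instance is weighted by the runtime gap between solvers, so mispredicting the costly instances is penalized most.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Invented attributes: [n_variables, constraint density, tightness]
        X = np.array([[100, 0.2, 0.5],
                      [300, 0.6, 0.3],
                      [ 50, 0.9, 0.7]])
        # Runtime (s) of two candidate solvers on each instance.
        runtimes = np.array([[ 1.2, 9.8],
                             [14.0, 2.5],
                             [ 0.4, 0.5]])

        y = runtimes.argmin(axis=1)                 # best solver per instance
        gap = runtimes.max(axis=1) - runtimes.min(axis=1)

        # The runtime gap serves as the misclassification cost, so
        # expensive mistakes dominate training.
        clf = DecisionTreeClassifier().fit(X, y, sample_weight=gap)
        print(clf.predict([[120, 0.3, 0.4]]))       # solver choice for a new instance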

    Decoupling contention management from scheduling

    Many parallel applications exhibit unpredictable communication between threads, leading to contention for shared objects. The choice of contention management strategy strongly impacts the performance and scalability of these applications: spinning provides maximum performance but wastes significant processor resources, while blocking-based approaches conserve processor resources but introduce high overheads on the critical path of computation. Under situations of high or changing load, the operating system complicates matters further with arbitrary scheduling decisions that often preempt lock holders, leading to long serialization delays until the preempted thread resumes execution. We observe that contention management is orthogonal to the problems of scheduling and load management and propose to decouple them so that each may be solved independently and effectively. To this end, we propose a load control mechanism that manages the number of active threads in the system separately from any contention that may exist. By isolating contention management from damaging interactions with the OS scheduler, we combine the efficiency of spinning with the robustness of blocking. The proposed load control mechanism results in stable, high performance for both lightly and heavily loaded systems, requires no special privileges or modifications at the OS level, and can be implemented as a library that benefits existing code.
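
    The following conceptual sketch (not the paper's implementation) separates the two concerns: a semaphore acts as the load controller, capping the number of active threads, while an ordinary spinlock handles contention among the admitted threads.

        import os
        import threading

        active = threading.Semaphore(os.cpu_count() or 1)   # load control

        class SpinLock:
            """Contention management by pure spinning; cheap as long as
            the load controller keeps lock holders running on a CPU."""
            def __init__(self):
                self._lock = threading.Lock()
            def acquire(self):
                while not self._lock.acquire(blocking=False):
                    pass                      # spin instead of blocking
            def release(self):
                self._lock.release()

        lock = SpinLock()
        shared = [0]

        def worker():
            with active:                      # admission: caps active threads
                lock.acquire()
                try:
                    shared[0] += 1            # short critical section
                finally:
                    lock.release()

        threads = [threading.Thread(target=worker) for _ in range(16)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(shared[0])                      # 16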

    Header Parsing Logic in Network Switches Using Fine and Coarse-Grained Dynamic Reconfiguration Strategies

    Current ASIC-only designs that interface with a general-purpose processor are fairly restricted in their ability to be upgraded after fabrication. The primary intent of the research documented in this thesis is to determine whether the inclusion of FPGAs in existing ASIC designs is a viable option for alleviating this constraint, by analyzing the performance of such a framework as a replacement for the parsing logic in a typical network switch. This thesis also covers an ancillary goal of the research, which is to compare the various methods used to reconfigure modern FPGAs, including self-initiated dynamic partial reconfiguration, with regard to the degree to which they interrupt the operation of the device in which the FPGA is embedded. This portion of the research is also conducted in the context of a network switch and focuses on the ability of the switch to reconfigure itself dynamically when presented with a new type of network traffic.
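
    A toy sketch of the reconfiguration logic described above, with all names and bitstream files invented: the switch's control logic loads a partial bitstream for a header parser only when traffic of that type first appears.

        # Invented names throughout; load_partial_bitstream stands in for a
        # vendor reconfiguration call (e.g. writing through an ICAP port).
        KNOWN_PARSERS = {0x0800: "parse_ipv4.bit", 0x86DD: "parse_ipv6.bit"}
        loaded = set()

        def load_partial_bitstream(path):
            print(f"reconfiguring parser region with {path}")
            loaded.add(path)

        def on_frame(ethertype):
            """Parse in hardware if a parser bitstream exists, loading it
            on first use; otherwise punt the frame to the CPU."""
            bitstream = KNOWN_PARSERS.get(ethertype)
            if bitstream is None:
                return "punt-to-cpu"
            if bitstream not in loaded:
                load_partial_bitstream(bitstream)   # self-initiated, fine-grained
            return f"parse-with:{bitstream}"

        print(on_frame(0x0800))   # triggers one reconfiguration, then parses
        print(on_frame(0x0800))   # already loaded: no interruption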

    Change-centric improvement of team collaboration

    In software development, teamwork is essential to the successful delivery of a final product. The software industry has historically built software using development teams that share a workplace. Process models, tools, and methodologies have been enhanced to support the development of software in a collocated setting. However, since the dawn of the 21st century, this scenario has begun to change: an increasing number of software companies are adopting global software development to cut costs and speed up the development process. Global software development introduces several challenges for the creation of quality software, from the adaptation of current methods, tools, techniques, etc., to new challenges imposed by the distributed setting, including physical and cultural distance between teams, communication problems, and coordination breakdowns. A particular challenge for distributed teams is maintaining the level of collaboration naturally present in collocated teams. Collaboration in this situation naturally drops due to low awareness of the activity of the team. Awareness is intrinsic to a collocated team, being obtained through human interaction such as informal conversation or meetings. For a distributed team, however, geographical distance and the subsequent lack of human interaction negatively impact this awareness. This dissertation focuses on the improvement of collaboration, especially within geographically dispersed teams. Our thesis is that by modeling the evolution of a software system in terms of fine-grained changes, we can produce a detailed history that may be leveraged to help developers collaborate. To validate this claim, we first create a model to accurately represent the evolution of a system as sequences of fine-grained changes. We proceed to build a tool infrastructure able to capture and store fine-grained changes for both immediate and later use. Upon this foundation, we devise and evaluate a number of applications for our work with two distinct goals:
    1. To assist developers with real-time information about the activity of the team. These applications aim to improve developers’ awareness of team member activity that can impact their work. We propose visualizations to notify developers of ongoing change activity, as well as a new technique for detecting and informing developers about potential emerging conflicts.
    2. To help developers satisfy their needs for information related to the evolution of the software system. These applications aim to exploit the detailed change history generated by our approach to help developers find answers to questions arising during their work. To this end, we present two new measurements of code expertise and a novel approach to replaying past changes according to user-defined criteria.
    We evaluate the approach and applications by adopting appropriate empirical methods for each case: a total of two case studies, one controlled experiment, and one qualitative user study are reported. The results provide evidence that applications leveraging a fine-grained change history of a software system can effectively help developers collaborate in a distributed setting.
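
    As a rough illustration (not the dissertation's tool), the sketch below models evolution as a stream of fine-grained changes and flags a potential emerging conflict when two developers have uncommitted edits to the same program entity.

        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Change:
            author: str
            entity: str       # a fine-grained unit, e.g. "Foo.bar()"
            kind: str         # "add" | "modify" | "delete"

        history = []                          # full evolution of the system
        editing = defaultdict(set)            # entity -> authors with edits

        def record(change):
            """Store the change, then warn everyone touching the entity."""
            history.append(change)
            others = editing[change.entity] - {change.author}
            editing[change.entity].add(change.author)
            if others:                        # potential emerging conflict
                print(f"{change.author}: {sorted(others)} "
                      f"also editing {change.entity}")

        record(Change("alice", "Foo.bar()", "modify"))
        record(Change("bob", "Foo.bar()", "modify"))   # warns bob about alice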

    An FPGA implementation of an investigative many-core processor, Fynbos : in support of a Fortran autoparallelising software pipeline

    In light of the power, memory, ILP, and utilisation walls facing the computing industry, this work examines the hypothetical many-core approach to finding greater compute performance and efficiency. In order to achieve greater efficiency in an environment in which Moore’s law continues but TDP has been capped, a means of deriving performance from dark and dim silicon is needed. The many-core hypothesis is one approach to exploiting these available transistors efficiently. As understood in this work, it involves trading hardware control complexity for hundreds to thousands of parallel simple processing elements, operating at a clock speed sufficiently low to allow the efficiency gains of near-threshold-voltage operation. Performance is therefore dependent on exploiting a new degree of fine-grained parallelism such as is currently only found in GPGPUs, but in a manner that is not as restrictive in application domain range. While removing the complex control hardware of traditional CPUs provides space for more arithmetic hardware, a basic level of control is still required. For a number of reasons, this work chooses to replace this control largely with static scheduling. This pushes the burden of control primarily to the software, and specifically the compiler, rather than to the programmer or to an application-specific means of control simplification. An existing legacy tool chain capable of autoparallelising sequential Fortran code to the degree of parallelism necessary for many-core exists; this work implements a many-core architecture to match it. By prototyping the design on an FPGA, it is possible to examine the real-world performance of the compiler-architecture system to a greater degree than simulation alone would allow. Comparing theoretical peak performance and real performance in a case study application, the system is found to be more efficient than any other reviewed, but also to significantly underperform relative to current competing architectures. This failing is attributed to taking the need for simple hardware too far, and to an inability to implement tactics that mitigate the drawbacks of static scheduling, due to a lack of support for these in the compiler.
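
    To illustrate the static-scheduling idea in miniature (all structures invented), the sketch below executes a fixed table that assigns, per clock step, an operation to each processing element, leaving no scheduling decisions to the hardware.

        # time step -> {processing element -> (operation, argument)}
        schedule = {
            0: {0: ("load", "a"), 1: ("load", "b")},   # two loads in parallel
            1: {0: ("add", (0, 1))},                   # PE0 sums PE0 and PE1
            2: {0: ("store", "c")},
        }

        def run(schedule, memory):
            regs = {}                                  # one register per PE
            for t in sorted(schedule):                 # lockstep clock
                for pe, (op, arg) in schedule[t].items():
                    if op == "load":
                        regs[pe] = memory[arg]
                    elif op == "add":
                        i, j = arg
                        regs[pe] = regs[i] + regs[j]
                    elif op == "store":
                        memory[arg] = regs[pe]
            return memory

        print(run(schedule, {"a": 2, "b": 3, "c": 0}))   # {'a': 2, 'b': 3, 'c': 5}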