    Practical Fine-grained Privilege Separation in Multithreaded Applications

    An inherent security limitation of the classic multithreaded programming model is that all threads share the same address space and are therefore implicitly assumed to be mutually trusting. This assumption, however, does not hold for many modern multithreaded applications that involve multiple principals which do not fully trust each other. It remains challenging to retrofit the classic multithreaded programming model so that security and privilege separation can be enforced in multi-principal applications. This paper proposes ARBITER, a run-time system and a set of security primitives aimed at fine-grained, data-centric privilege separation in multithreaded applications. While enforcing effective isolation among principals, ARBITER still allows flexible sharing and communication between threads, so the multithreaded programming paradigm is preserved. To realize controlled sharing in a fine-grained manner, we created a novel abstraction named ARBITER Secure Memory Segment (ASMS) together with the corresponding OS support. Programmers express security policies by labeling data and principals via ARBITER's API, following a unified model. We ported a widely used in-memory database application (memcached) to the ARBITER system, changing only around 100 LOC. Experiments indicate that this security-enhanced version incurs an average runtime overhead of only 5.6%.
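    As a rough illustration of the data-centric labeling model described above, the following self-contained C++ sketch mimics the idea in user space: a memory segment carries a label, and a principal may read it only if it holds that label. The type and function names (SecureSegment, read_segment, and so on) are hypothetical illustrations, not ARBITER's actual API or its kernel-backed ASMS mechanism.

```cpp
// Conceptual illustration only: a user-space mock of data-centric labeling.
// A "segment" carries a label; a principal may read it only if its own
// label set contains the segment's label.
#include <cstdio>
#include <set>
#include <string>

using Label = std::string;

struct Principal {
    std::string name;
    std::set<Label> labels;   // labels this principal is trusted with
};

struct SecureSegment {
    Label label;              // data-centric label attached at allocation
    int   payload;            // stand-in for the shared data
};

// Copies the value out only if the principal holds the segment's label.
bool read_segment(const Principal& p, const SecureSegment& s, int& out) {
    if (p.labels.count(s.label) == 0) return false;   // access denied
    out = s.payload;
    return true;
}

int main() {
    SecureSegment billing{"billing", 42};
    Principal frontend{"frontend", {}};            // untrusted principal
    Principal backend{"backend", {"billing"}};     // trusted principal

    int v = 0;
    bool ok = read_segment(backend, billing, v);
    std::printf("backend read:  %s (v=%d)\n", ok ? "ok" : "denied", v);
    std::printf("frontend read: %s\n",
                read_segment(frontend, billing, v) ? "ok" : "denied");
}
```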

    Closed Timelike Curves in Relativistic Computation

    In this paper, we investigate the possibility of using closed timelike curves (CTCs) in relativistic hypercomputation. We introduce a wormhole-based hypercomputation scenario which is free from common worries such as the blueshift problem. We also discuss the physical reasonableness of our scenario, and why we cannot simply ignore the possibility of the existence of spacetimes containing CTCs.

    Aggregating raster polygons derived from large remotely sensed images

    Process, System, Causality, and Quantum Mechanics, A Psychoanalysis of Animal Faith

    We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule and the unitary dynamical law, whose best-known form is the Schrödinger equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ and how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.
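    For readability, the two core laws quoted above can be set in display form, with the notation as given in the abstract (P a projection, D and D' density matrices, T a unitary transformation):

```latex
% The two core laws of quantum mechanics, as stated in the abstract.
\[
  \mathrm{prob}(P) \;=\; \mathrm{trace}(P\,D)   % Born probability rule
\]
\[
  D'\,T \;=\; T\,D                              % unitary dynamical law
\]
```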

    Task-based Augmented Contour Trees with Fibonacci Heaps

    This paper presents a new algorithm for the fast, shared-memory, multi-core computation of augmented contour trees on triangulations. In contrast to most existing parallel algorithms, our technique computes augmented trees, enabling the full extent of contour tree based applications, including data segmentation. Our approach completely revisits the traditional, sequential contour tree algorithm to re-formulate all steps of the computation as a set of independent local tasks. This includes a new procedure based on Fibonacci heaps for computing the join and split trees, the two intermediate data structures used to compute the contour tree, whose constructions are efficiently carried out concurrently thanks to the dynamic scheduling of task parallelism. We also introduce a new parallel algorithm for combining these two trees into the output global contour tree. Overall, this results in superior time performance in practice, both sequentially and in parallel, thanks to the OpenMP task runtime. We report performance numbers that compare our approach to reference sequential and multi-threaded implementations for the computation of augmented merge and contour trees. These experiments demonstrate the run-time efficiency of our approach and its scalability on common workstations. We also demonstrate the utility of our approach in data segmentation applications.
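    The key ingredient named in the abstract is that each step is re-expressed as independent local tasks handed to the OpenMP task runtime for dynamic scheduling. The following minimal, self-contained C++/OpenMP sketch shows only that general task pattern (one task per local computation, here a toy local-minimum test on a 1D field); it is not the authors' contour-tree or Fibonacci-heap code.

```cpp
// Minimal sketch of the OpenMP task pattern: independent per-element
// "local tasks" created by one thread and scheduled dynamically by the
// runtime. Compile with -fopenmp; without it, the code runs serially.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> field = {3.0, 1.0, 4.0, 1.5, 9.0, 2.6, 5.0};
    std::vector<int> is_local_min(field.size(), 0);

    #pragma omp parallel
    #pragma omp single          // one thread creates the tasks ...
    for (std::size_t i = 0; i < field.size(); ++i) {
        #pragma omp task firstprivate(i) shared(field, is_local_min)
        {                       // ... all threads execute them
            bool left_ok  = (i == 0)                || field[i] < field[i - 1];
            bool right_ok = (i + 1 == field.size()) || field[i] < field[i + 1];
            is_local_min[i] = (left_ok && right_ok) ? 1 : 0;
        }
    }                           // barrier at end of single: all tasks done

    for (std::size_t i = 0; i < field.size(); ++i)
        if (is_local_min[i]) std::printf("local minimum at index %zu\n", i);
}
```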

    Storage management in Ada. Three reports. Volume 1: Storage management in Ada as a risk to the development of reliable software. Volume 2: Relevant aspects of language. Volume 3: Requirements of the language versus manifestations of current implementations

    The risk to the development of program reliability derives from the use of a new language and from the potential use of new storage-management techniques. With Ada and its associated support software, there is a lack of established guidelines and procedures, drawn from experience and common usage, that assure reliable behavior. This risk is identified and clarified. To provide a framework for future consideration of dynamic storage management in Ada, a description of the relevant aspects of the language is presented in two sections: program data sources, and declaration and allocation in Ada. Storage-management characteristics of the Ada language and storage-management characteristics of Ada implementations are differentiated, and the terms used are defined in a narrow and precise sense. The storage-management implications of the Ada language are described, as are the storage-management options available to the Ada implementor and the implications of the implementor's choice for the Ada programmer.

    MxTasks: a novel processing model to support data processing on modern hardware

    The hardware landscape has changed rapidly in recent years. Modern servers are characterized by many CPU cores, multiple sockets, and vast amounts of main memory structured in NUMA hierarchies. To benefit from these highly parallel systems, software has to adapt and actively engage with newly available features. However, the processing models forming the foundation of many performance-oriented applications have remained essentially unchanged. Threads, which serve as the central processing abstraction, can be considered a "black box" that allows hardly any transparency between the application and the system underneath. On the one hand, applications hold knowledge that could assist the system in optimizing the execution, such as the data objects they access and their access patterns. On the other hand, the limited opportunities for information exchange force operating systems to make assumptions about applications' intentions in order to optimize their execution, e.g., for local data access. Applications, in turn, implement optimizations tailored to specific situations, such as sophisticated synchronization mechanisms and hardware-conscious data structures. This work presents MxTasking, a task-based runtime environment that assists the design of data structures and applications for contemporary hardware. MxTasking rethinks the interfaces between performance-oriented applications and the execution substrate, streamlining the information exchange between both layers. By breaking with patterns of processing models designed with past generations of hardware in mind, MxTasking creates novel opportunities to manage resources in a hardware- and application-conscious way. Accordingly, we question the granularity of "conventional" threads and show that fine-granular MxTasks are a viable abstraction unit for characterizing and optimizing the execution in a general way. Using various demonstrators in the context of database management systems, we illustrate the practical benefits and explore how challenges like memory access latencies and error-prone synchronization of concurrent operations can be addressed straightforwardly and effectively.
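    The following self-contained C++ sketch illustrates, in a deliberately simplified single-threaded form, the kind of information exchange the abstract argues for: each task is annotated with the data object it will access, and a runtime that groups tasks by that annotation can run each group in isolation, making explicit synchronization unnecessary. The Task and Counter types and the grouping loop are hypothetical illustrations under that assumption, not the MxTasking API.

```cpp
// Conceptual mock, not MxTasking: a task names the data object it touches.
// The mock runtime groups tasks by that hint and runs each group in order,
// so the increments below need no atomics or mutexes. A real runtime could
// additionally place each group on a core near the object's NUMA node.
#include <cstdio>
#include <functional>
#include <map>
#include <vector>

struct Counter { long value = 0; };

struct Task {
    const void*           accessed;   // annotation: object the task will touch
    std::function<void()> body;
};

int main() {
    Counter a, b;
    std::vector<Task> tasks;
    for (int i = 0; i < 1000; ++i) {
        tasks.push_back({&a, [&a] { a.value++; }});
        tasks.push_back({&b, [&b] { b.value += 2; }});
    }

    // Group tasks by annotated object; within a group, execution is sequential.
    std::map<const void*, std::vector<const Task*>> by_object;
    for (const Task& t : tasks) by_object[t.accessed].push_back(&t);

    for (auto& entry : by_object)
        for (const Task* t : entry.second) t->body();

    std::printf("a = %ld, b = %ld\n", a.value, b.value);   // a = 1000, b = 2000
}
```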

    The massive production of iron in the Sahelian belt: Archaeological investigations at Korsimoro (Sanmatenga – Burkina Faso)

    The large smelting site of Korsimoro was investigated during two fieldwork campaigns in 2011 and 2012. Four different technical traditions were identified, each characterized by the spatial organization of the working area, the architecture of the furnace, and the assemblage of wastes. Each technical tradition corresponds to one chronological phase. Phase KRS 1 lasted from 600 to 1000 AD and is characterized by small-scale production. Phases KRS 2 and 3, between 1000 and 1450 AD, showed a very significant increase in production, with an important impact on the organization of society. The industry collapsed at the time of the arrival of the Nakomse conquerors and recovered at a small scale during the 17th century.