531 research outputs found

    Outlook Magazine, Winter 2003

    Get PDF

    Beyond The Internet: The Digital Universal Network

    Full text link
    The realities of the present world, so deeply interconnected, technified, and hypercomplex, can only be understood by means of an interdisciplinary and systemic approach.

    Advancing Operating Systems via Aspect-Oriented Programming

    Get PDF
    Operating system kernels are among the most complex pieces of software in existence today. Maintaining the kernel code and developing new functionality is increasingly complicated, since the number of required features has risen significantly, leading to side effects that can be introduced inadvertently by changing a piece of code that belongs to a completely different context. Software developers try to modularize their code base into separate functional units. Some of the functionality or “concerns” required in a kernel, however, does not fit into the given modularization structure; this code may then be spread over the code base and its implementation tangled with code implementing different concerns. These so-called “crosscutting concerns” are especially difficult to handle, since a change in a crosscutting concern implies that all relevant locations spread throughout the code base have to be modified. Aspect-Oriented Software Development (AOSD) is an approach to handling crosscutting concerns by factoring them out into separate modules. The “advice” code contained in these modules is woven into the original code base according to a pointcut description, a set of interaction points (joinpoints) with the code base. To be used in operating systems, AOSD requires tool support for the prevalent procedural programming style as well as support for weaving aspects. Many interactions in kernel code are dynamic, so in order to implement non-static behavior and improve performance, a dynamic weaver that deploys and undeploys aspects at system runtime is required. This thesis presents an extension of the “C” programming language to support AOSD. Based on this, two dynamic weaving toolkits, TOSKANA and TOSKANA-VM, are presented to permit dynamic aspect weaving in the monolithic NetBSD kernel as well as in a virtual-machine and microkernel-based Linux kernel running on top of L4. Based on TOSKANA, applications for this dynamic aspect technology are discussed and evaluated. The thesis closes with a view on an aspect-oriented kernel structure that maintains coherency and handles crosscutting concerns using dynamic aspects while enhancing development methods through the use of domain-specific programming languages.
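
    As a concrete illustration of the advice/joinpoint idea described above, the following minimal C sketch emulates dynamic weaving with function-pointer indirection. The joinpoint binding and the weave/unweave calls are hypothetical illustrations of the concept, not the TOSKANA API.

        /* Minimal sketch of advice woven at a joinpoint, emulated in
         * plain C with function-pointer indirection. All names here
         * (read_joinpoint, weave/unweave) are hypothetical. */
        #include <stdio.h>

        /* The "base code": a kernel-like operation reached through a
         * pointer, so advice can be deployed/undeployed at runtime. */
        static int do_read(int fd) {
            printf("base: read(fd=%d)\n", fd);
            return 0;
        }

        /* Current binding of the joinpoint "read". */
        static int (*read_joinpoint)(int) = do_read;

        /* "Before" advice wrapping the original function: the
         * crosscutting concern (tracing) lives in its own module. */
        static int traced_read(int fd) {
            printf("advice: entering read, fd=%d\n", fd); /* before advice */
            return do_read(fd);                           /* proceed() */
        }

        /* Dynamic weaving: rebind the joinpoint at system runtime. */
        static void weave_trace(void)   { read_joinpoint = traced_read; }
        static void unweave_trace(void) { read_joinpoint = do_read; }

        int main(void) {
            read_joinpoint(3);   /* plain call, no aspect deployed */
            weave_trace();
            read_joinpoint(3);   /* same call site, advice now runs */
            unweave_trace();
            read_joinpoint(3);   /* aspect undeployed again */
            return 0;
        }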

    Toward least-privilege isolation for software

    Get PDF
    Hackers leverage software vulnerabilities to disclose, tamper with, or destroy sensitive data. To protect sensitive data, programmers can adhere to the principle of least privilege, which entails giving software the minimal privilege it needs to operate, ensuring that sensitive data is available to software components only on a strictly need-to-know basis. Unfortunately, applying this principle in practice is difficult, as current operating systems tend to provide coarse-grained mechanisms for limiting privilege. Thus, most applications today run with greater-than-necessary privileges. We propose sthreads, a set of operating system primitives that allows fine-grained isolation of software to approximate the least-privilege ideal. sthreads enforce a default-deny model, where software components have no privileges by default, so all privileges must be explicitly granted by the programmer. Experience introducing sthreads into previously monolithic applications (thus partitioning them) reveals that enumerating privileges for sthreads is difficult in practice. To ease the introduction of sthreads into existing code, we include Crowbar, a tool that can be used to learn the privileges required by a compartment. We show that only a few changes to existing code are necessary to partition applications with sthreads, and that Crowbar can guide the programmer through these changes. We show that applying sthreads to applications successfully narrows the attack surface by reducing the amount of code that can access sensitive data. Finally, we show that applications using sthreads pay only a small performance overhead. We applied sthreads to a range of applications, most notably an SSL web server, where we show that sthreads are powerful enough to protect sensitive data even against a strong adversary that can act as a man-in-the-middle in the network and also exploit most code in the web server, a threat model not addressed to date.
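
    Since the sthreads interface itself is not quoted in the abstract, the following C sketch only illustrates what a default-deny compartment API might look like; every type and call below is a hypothetical stand-in for the real primitives, showing that privileges start at zero and must be granted explicitly.

        /* Hypothetical sketch of a default-deny compartment API in the
         * spirit of the sthreads described above; none of these calls
         * are a real interface. */
        #include <stddef.h>
        #include <stdio.h>

        typedef struct { const char *name; } sthread_t;  /* stand-in type */

        /* In a default-deny model the compartment starts with no
         * privileges; every capability is granted explicitly. */
        static void sthread_create(sthread_t *t, const char *name) {
            t->name = name;
            printf("compartment '%s': created with NO privileges\n", name);
        }

        static void sthread_allow_fd(sthread_t *t, int fd) {
            printf("compartment '%s': granted access to fd %d\n", t->name, fd);
        }

        static void sthread_allow_mem(sthread_t *t, void *p, size_t len) {
            printf("compartment '%s': granted %zu bytes at %p\n",
                   t->name, len, p);
        }

        int main(void) {
            char session_key[32] = {0};
            sthread_t ssl_worker;

            sthread_create(&ssl_worker, "ssl-worker");
            /* Only the SSL worker may touch the key and its socket; all
             * other compartments are denied by default (need-to-know). */
            sthread_allow_mem(&ssl_worker, session_key, sizeof session_key);
            sthread_allow_fd(&ssl_worker, 4 /* client socket */);
            return 0;
        }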

    Spartan Daily, February 22, 1971

    Get PDF
    Volume 58, Issue 70

    Locality Awareness for Task Parallel Computation

    Get PDF
    The task parallel programming model allows programmers to express concurrency at a high level of abstraction and places the burden of scheduling parallel execution on the runtime system. Efficient scheduling of tasks on multi-socket multicore shared memory systems requires careful consideration of an increasingly complex memory hierarchy, including shared caches and non-uniform memory access (NUMA) characteristics. In this dissertation, we study the performance impact of these issues and other performance factors that limit parallel speedup in task parallel program executions, and we propose new scheduling strategies to improve performance. Our performance model characterizes lost efficiency in terms of overhead time, idle time, and work time inflation due to increased data access costs. We introduce a hierarchical runtime scheduler that combines the benefits of work-stealing and parallel depth-first schedulers. Matching the scheduler design to the memory hierarchy of multicore NUMA systems limits costly remote data accesses while maintaining load balance and exploiting constructive data sharing among threads that share a cache. We also propose a locality-based scheduling framework based on locality domains, comprising an API for programmers to specify application locality and a scheduler that honors those specifications. Implementations of the hierarchical and locality-based schedulers in our OpenMP runtime system exhibit performance improvements on several task parallel benchmark applications over existing scheduling strategies and production OpenMP runtime systems. Doctor of Philosophy.
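
    The dissertation's locality-domain API is not quoted in the abstract; as a rough stand-in, the sketch below uses plain OpenMP tasks with the standard OpenMP 5.0 affinity clause, which similarly hints that a task should execute near the NUMA node holding its data (assuming a compiler with OpenMP 5.0 support).

        /* Task-parallel sketch with locality hints via the OpenMP 5.0
         * `affinity` clause; a stand-in for the dissertation's
         * locality-domain API, not its actual interface. */
        #include <stdio.h>
        #include <omp.h>

        #define N 1000000

        static double a[N], b[N];

        int main(void) {
            #pragma omp parallel
            #pragma omp single
            {
                /* One task per array; the affinity hint lets the runtime
                 * place each task near the memory it will touch, limiting
                 * remote NUMA accesses. */
                #pragma omp task affinity(a[0:N])
                { for (long i = 0; i < N; i++) a[i] = 2.0 * i; }

                #pragma omp task affinity(b[0:N])
                { for (long i = 0; i < N; i++) b[i] = 3.0 * i; }

                #pragma omp taskwait
            }
            printf("a[42]=%f b[42]=%f\n", a[42], b[42]);
            return 0;
        }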

    A platform for distributed learning and teaching of algorithmic concepts

    Get PDF
    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. 119-121). By Nathan D.T. Boyd. M.Eng.

    A container orchestration development that optimizes the etherpad collaborative editing tool through a novel management system

    Full text link
    The use of collaborative tools has increased notably in recent years. It is common to see distinct users who need to work simultaneously on shared documents. In most cases, large companies provide tools whose implementation has been a very complicated and expensive task. Likewise, deploying such platforms typically demands robust hardware infrastructures, which becomes even more critical when the main targets are scalability and high availability. Therefore, this study aims to design and implement a microservices-based collaborative architecture using assembled containers in the cloud, enabling them to deploy Etherpad instances to guarantee high availability. To ensure such a task, we developed and optimized a central management system that creates Etherpad instances and continuously interacts with other Etherpad tools running on Docker containers. This design moves from monolithic Etherpad instantiation and handling toward a service architecture, where every Etherpad is offered as a microservice. Furthermore, the management system implements the popular Observer, Factory Method, Proxy, and Service Layer design patterns, which allows users to gain more privacy through access validations and shared resources. Our results indicate both correct operation in the automated creation of containers for new users who register in the system and a quantifiable improvement in performance. The funding of this research is provided by the Mobility Regulation of the Universidad de las Fuerzas Armadas ESPE, from Sangolquí, Ecuador.
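
    Of the four patterns mentioned, the Factory Method is the easiest to show compactly; the C sketch below reduces it to a function-pointer factory that "creates" one Etherpad container per user. The struct, the port, and the docker command line are illustrative assumptions, not the paper's management system.

        /* Factory Method reduced to plain C: a factory function type
         * plus one concrete factory per product kind. Names, ports, and
         * the docker invocation are illustrative only. */
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            char name[64];
            int  port;
        } pad_instance;

        /* Factory method type: given a user, produce a configured
         * instance. Swapping factories changes what gets created. */
        typedef pad_instance *(*pad_factory)(const char *user, int port);

        /* Concrete factory: one Etherpad container per registered user. */
        static pad_instance *make_etherpad(const char *user, int port) {
            pad_instance *p = malloc(sizeof *p);
            if (!p) return NULL;
            snprintf(p->name, sizeof p->name, "etherpad-%s", user);
            p->port = port;
            /* A real manager would now start a container, e.g. via the
             * Docker API; here we only print the equivalent CLI. */
            printf("docker run -d --name %s -p %d:9001 etherpad/etherpad\n",
                   p->name, p->port);
            return p;
        }

        int main(void) {
            pad_factory create = make_etherpad;   /* chosen factory */
            pad_instance *p = create("alice", 9001);
            if (p) {
                printf("created %s on port %d\n", p->name, p->port);
                free(p);
            }
            return 0;
        }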

    Repeating the past experimental and empirical methods in system and software security

    Get PDF
    I propose a new method of analyzing intrusions: instead of analyzing evidence and deducing what must have happened, I find the intrusion-causing circumstances by a series of automatic experiments. I first capture a process's system calls, and when an intrusion has been detected, I use these system calls to replay some of the captured processes in order to find the intrusion-causing processes, the cause-effect chain that led to the intrusion. I extend this approach to also find the inputs to those processes that cause the intrusion, the attack signature. Intrusion analysis is a minimization problem: how to find a minimal set of circumstances that makes the intrusion happen. I develop several efficient minimization algorithms and show their theoretical properties, such as worst-case running times, as well as empirical evidence comparing average running times. Our evaluations show that the approach is correct and practical; it finds the 3 processes out of 32 that are responsible for a proof-of-concept attack in about 5 minutes, and it finds the 72 out of 168 processes in a large, complicated, and difficult-to-detect multi-stage attack involving Apache and suidperl in about 2.5 hours. I also extract attack signatures of proof-of-concept attacks in reasonable time. I have also considered the problem of predicting, before deployment, which components in a software system are most likely to contain vulnerabilities. I present empirical evidence that vulnerabilities are connected to a component's imports. In a case study on Mozilla, I correctly predicted one half of all vulnerable components, while more than two thirds of our predictions were correct.
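
    The replay machinery itself cannot be reproduced here, but the minimization step the abstract describes can be sketched: the toy C program below greedily drops one candidate process at a time and re-tests (re-"replays") whether the intrusion still occurs. The oracle is a stub standing in for actual system-call replay, and the greedy loop is a simple 1-minimal variant, not the thesis's algorithms.

        /* Toy minimization of an intrusion-causing process set. The
         * oracle is a stub; in the thesis it would be a replay of the
         * captured system calls. */
        #include <stdio.h>
        #include <stdbool.h>

        #define NPROC 8

        /* Stub oracle: in this toy, the intrusion needs processes
         * 2 and 5 together. */
        static bool intrusion_occurs(const bool keep[NPROC]) {
            return keep[2] && keep[5];
        }

        int main(void) {
            bool keep[NPROC];
            for (int i = 0; i < NPROC; i++) keep[i] = true;

            /* Greedy 1-minimality: remove each process if the intrusion
             * still reproduces without it. O(n) replays for this simple
             * variant. */
            for (int i = 0; i < NPROC; i++) {
                keep[i] = false;
                if (!intrusion_occurs(keep))
                    keep[i] = true;   /* needed for the attack, restore */
            }

            printf("intrusion-causing processes:");
            for (int i = 0; i < NPROC; i++)
                if (keep[i]) printf(" %d", i);
            printf("\n");             /* prints: 2 5 */
            return 0;
        }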

    Improving programmability and performance for scientific applications

    Get PDF
    With modern advancements in hardware and software technology scaling toward new limits, our compute machines are reaching new potentials to tackle more challenging problems. While the size and complexity of both the problems and the solutions increase, the programming methodologies must remain at a level that can be understood by programmers and scientists alike. In our work, this problem is encountered when developing an optimized framework to best exploit the semantic properties of a finite-element solver. To address this problem, we explore programming and runtime models that decouple algorithmic complexity, parallelism concerns, and hardware mapping. We build upon these frameworks to exploit domain-specific semantics using high-level transformations and modifications, obtaining performance through algorithmic and runtime optimizations. We first discuss optimizations performed on a computational mechanics solver using a novel coupling technique for multi-time-scale methods on discrete finite element domains. We exploit domain semantics using a high-level dynamic runtime scheme to reorder and balance workloads, greatly improving runtime performance. The framework presented automatically chooses a near-optimal coupling solution and runs a work-stealing parallel executor to run effectively on multi-core systems. In the latter part of this work, we focus on the parallel programming model Concurrent Collections (CnC) to seamlessly bridge the gap between performance and programmability. Because challenging problems in various domains, not limited to computational mechanics, require both domain expertise and programming prowess, there is a need for ways to separate those concerns. This thesis describes methods and techniques to obtain scalable performance using CnC programming while limiting the burden of programming. These high-level techniques are presented for two high-performance applications corresponding to hydrodynamics and multi-grid solvers.
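
    Concurrent Collections itself ships as a C++ library, so the C fragment below only mimics its central idea under that caveat: the domain expert writes side-effect-free "step" functions, while a separate driver (serial here, potentially parallel) owns all scheduling decisions. All names are illustrative.

        /* Sketch of CnC-style separation of concerns in plain C: domain
         * steps are pure functions; the driver owns execution order. */
        #include <stdio.h>

        /* A step: pure computation from an input item to an output item. */
        typedef double (*step_fn)(double item);

        static double smooth(double x)   { return 0.5 * x; }  /* domain code */
        static double residual(double x) { return x * x; }    /* domain code */

        /* Tuning/scheduling concern, kept apart from the domain code:
         * a trivial serial driver; a parallel runtime could reorder
         * freely since steps are side-effect free. */
        static void run_steps(step_fn steps[], int n, double item) {
            for (int i = 0; i < n; i++) {
                item = steps[i](item);
                printf("step %d -> %f\n", i, item);
            }
        }

        int main(void) {
            step_fn pipeline[] = { smooth, residual };
            run_steps(pipeline, 2, 4.0);  /* domain logic is unchanged if
                                             the driver becomes parallel */
            return 0;
        }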
    • 

    corecore