
    Proceedings of the 1st EICS Workshop on Engineering Interactive Computer Systems with SCXML


    Towards Implicit Parallel Programming for Systems

    Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, such as a cache that reduces disk I/O. Most prevalent parallel programming models do not support a notion of state and require the programmer to synchronize state access manually, i.e., outside the realms of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and forces the programmer to optimize the program by hand. In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state and a compiler that can optimize the program for parallel execution. We define the notion of stateful functions along with their composition and control structures. An example implementation of a highly scalable server shows that stateful functions integrate smoothly with existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime format of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O, and allows for non-blocking live updates.
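    To make the idea concrete, here is a minimal Python sketch of the stateful-function style the abstract describes: each node of the dataflow graph owns private state and communicates only through queues, so a runtime can execute the nodes on separate threads without shared-state locking. The names and the thread-based runtime are illustrative assumptions, not the dissertation's actual language or compiler.

```python
# Minimal sketch (not the dissertation's language): each stage owns private
# state and communicates only via queues, so a runtime can execute the
# stages of the dataflow graph on separate threads without shared locks.
import threading
import queue

class StatefulStage:
    """A node in the dataflow graph: a function closed over private state."""
    def __init__(self, fn, state):
        self.fn = fn          # step function: (state, item) -> output
        self.state = state    # private to this node; never shared
        self.inbox = queue.Queue()

    def run(self, downstream):
        while True:
            item = self.inbox.get()
            if item is None:                    # poison pill ends the stage
                if downstream:
                    downstream.inbox.put(None)
                break
            out = self.fn(self.state, item)
            if downstream:
                downstream.inbox.put(out)

# Example: a cache stage (the kind of state the abstract mentions) feeding a counter.
def cached_lookup(cache, key):
    if key not in cache:
        cache[key] = len(key)                   # stand-in for an expensive disk read
    return cache[key]

def count(state, value):
    state["n"] += 1
    return value

cache_stage = StatefulStage(cached_lookup, {})
count_stage = StatefulStage(count, {"n": 0})

threads = [threading.Thread(target=cache_stage.run, args=(count_stage,)),
           threading.Thread(target=count_stage.run, args=(None,))]
for t in threads:
    t.start()
for k in ["a", "bb", "a", "ccc", None]:
    cache_stage.inbox.put(k)
for t in threads:
    t.join()
print(count_stage.state["n"])                   # 4 items processed
```

    Because state is encapsulated per node and data flows only along the graph's edges, the two stages here run in parallel as a pipeline; in the dissertation's setting, extracting such a graph is the compiler's job rather than the programmer's.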


    3rd Many-core Applications Research Community (MARC) Symposium (KIT Scientific Reports; 7598)

    This manuscript collects recent scientific work on the Intel Single-chip Cloud Computer and describes novel approaches for programming and run-time organization.

    Homology sequence analysis using GPU acceleration

    A number of problems in bioinformatics, systems biology, and computational biology require abstracting physical entities to mathematical or computational models. In such studies, the computational paradigms often involve algorithms that can be solved by the Central Processing Unit (CPU). Historically, those algorithms benefited from advancements in the serial processing capabilities of individual CPU cores. However, that growth has slowed in recent years, as scaling out CPUs has been shown to be both cost-prohibitive and insecure. To overcome this problem, parallel computing approaches that employ the Graphics Processing Unit (GPU) have gained attention as complements to, or replacements for, traditional CPU approaches. The premise of this research is to investigate the applicability of various parallel computing platforms to several problems in the detection and analysis of homology in biological sequences. I hypothesize that, by exploiting the sheer amount of computational power and sequencing data available, it is possible to deduce information from raw sequences without supplying the underlying prior knowledge needed to arrive at an answer. I have developed tools to perform analyses at scales that are traditionally unattainable on general-purpose CPU platforms. I developed a method to accelerate sequence alignment on the GPU, and I used it to investigate whether the Operational Taxonomic Unit (OTU) classification problem can be improved with this computational power. I also developed a method to accelerate pairwise k-mer comparison on the GPU, and I used it to further develop PolyHomology, a framework that scaffolds shared sequence motifs across large numbers of genomes to illuminate the structure of the regulatory network in yeasts. The results suggest that this approach to heterogeneous computing can help answer questions in biology and is a viable path to new discoveries now and in the future.
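    As an illustration of the pairwise k-mer comparison mentioned above, the following Python sketch computes all-vs-all Jaccard similarity over k-mer sets. It is a plain CPU reference, hypothetical and not taken from the thesis or from PolyHomology; a GPU implementation would distribute the independent pair comparisons across thousands of threads, but the per-pair computation is the same.

```python
# CPU reference for pairwise k-mer comparison: an all-vs-all loop whose
# iterations are independent and therefore embarrassingly parallel -- the
# property a GPU kernel exploits. Hypothetical sketch, not thesis code.
from itertools import combinations

def kmers(seq, k=8):
    """All overlapping k-mers of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    """Similarity of two k-mer sets: |intersection| / |union|."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 0.0

def pairwise_kmer_similarity(seqs, k=8):
    """All-vs-all comparison: the loop a GPU kernel would distribute."""
    sets = {name: kmers(s, k) for name, s in seqs.items()}
    return {(x, y): jaccard(sets[x], sets[y])
            for x, y in combinations(sorted(seqs), 2)}

seqs = {"s1": "ACGTACGTACGT", "s2": "ACGTACGTTTTT", "s3": "GGGGCCCCAAAA"}
for pair, sim in pairwise_kmer_similarity(seqs, k=4).items():
    print(pair, round(sim, 3))
```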

    Parallel Processing for VLSI CAD Applications: A Tutorial

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Semiconductor Research Corporation. Author's name appears in front matter as Prithviraj Banerjee.

    Secure large-scale outsourced services founded on trustworthy code executions

    Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2017.
    The Cloud Computing model has incentivized companies to outsource services to third-party providers. Service owners can use third-party computational, storage, and network resources while avoiding the cost of acquiring an IT infrastructure. However, they have to rely on the trustworthiness of the third-party providers, who ultimately need to guarantee that the services run as intended. The fundamental security challenge is how to empower the companies that own and outsource such services, or the clients using them, to check service execution on the remote cloud platform. A promising approach is based on hardware-enforced isolation and attestation of the service execution. Assuming that hardware attacks are infeasible, this protects the service from other malicious software and from untrusted system administrators. It also allows clients to check that the results were produced as intended. While this paradigm is well known, previous work does not scale with large code and data sizes, lacks generality with respect to both hardware (e.g., it uses either Trusted Platform Modules, TPMs, or Intel SGX) and software (e.g., it only supports MapReduce applications), and makes undesirable security tradeoffs (e.g., it resorts to a large Trusted Computing Base, or TCB, to run unmodified services, or to a small TCB with limited functionality). This thesis shows how to secure the execution of large-scale services efficiently and without these compromises. From the perspective of a client that sends a request and receives a response, trust can be established by verifying a small proof of correct execution that is attached to the result. On the remote provider's platform, a small trusted computing base enables the secure execution of generic services composed of a large source code base and/or working on large data sets, using an abstraction layer that is implementable on diverse trusted hardware architectures. Our small TCB implements three orthogonal techniques that are the core contributions of this thesis. The first targets the identification (and execution) of only the part of the code that is necessary to fulfill a client's request. This increases both security and efficiency by leaving any code that is not required to run the service outside the execution environment. The second contribution enables terabyte-scale data processing by means of a secure in-memory data handling mechanism, which allows a service to retrieve data that is validated on access and before use. Notably, data I/O is performed using virtual memory mechanisms that do not require any system call from the trusted execution environment, thereby reducing the attack surface. The third contribution is a novel fully-passive secure replication scheme that is tolerant to software attacks. Fault tolerance delivers availability guarantees to clients, while passive replication allows for computationally efficient processing. Interestingly, all of our techniques are based on the same abstraction layer of the trusted hardware. In addition, our implementation and experimental evaluation demonstrate the practicality of these approaches.
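    To illustrate the "small proof of correct execution" from the client's perspective, here is a hedged Python sketch in which the proof is an HMAC over the request and response, under a key the client is assumed to have bound to the enclave via remote attestation. Real deployments would rely on hardware-signed attestation (e.g., Intel SGX quotes) rather than a pre-shared secret; the key establishment and the enclave itself are outside the scope of this sketch, and all names here are hypothetical.

```python
# Hedged sketch of a proof of correct execution attached to a result: the
# proof is an HMAC binding the response to the exact request, under a key
# assumed to have been established during remote attestation. Not the
# thesis's actual protocol; a minimal illustration only.
import hmac
import hashlib

def enclave_execute(service, request, attested_key):
    """Inside the trusted execution environment: compute the response and
    attach a tag binding it to this exact request."""
    response = service(request)
    msg = request + b"\x00" + response
    proof = hmac.new(attested_key, msg, hashlib.sha256).digest()
    return response, proof

def client_verify(request, response, proof, attested_key):
    """On the client: recompute the tag and compare in constant time."""
    msg = request + b"\x00" + response
    expected = hmac.new(attested_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

key = b"key-established-during-attestation"    # hypothetical shared secret
resp, proof = enclave_execute(lambda r: r.upper(), b"query", key)
assert client_verify(b"query", resp, proof, key)
assert not client_verify(b"query", b"tampered", proof, key)
```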