297 research outputs found

    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    Software defined applications: a DevOps approach to monitoring

    Integrated master's dissertation in Informatics Engineering. DevOps presents a mix of agile methodologies that allow an application's release cycle to be shortened, which translates into faster delivery of value to the stakeholders. However, the value-creation chain does not end with that cycle: the artifacts produced must be monitored at the system level and at the application level to ensure compliance with functional and non-functional requirements. Today there seems to be a clear separation between the monitoring process and the application development process. Just as the development and operations processes have merged in DevOps, this dissertation investigates how to integrate several aspects of monitoring into the regular lifecycle of an application's development. The inclusion of external services further emphasizes the need for an observability component in an infrastructure. The main goal of this dissertation is to develop a solution for deploying an infrastructure using state-of-the-art technologies and frameworks, while also providing observability for the system and the applications running on it. Doing so required investigating the methodologies and concepts underlying the software development lifecycle, focusing on its later stages: deployment and monitoring. These methodologies and concepts were complemented with a study of state-of-the-art technologies and frameworks that aim to ease the burden of setting up an infrastructure quickly, with the tools needed to evolve it after the initial setup and with each new software release. The work also involved researching tools that collect metrics from applications, process that data, and present it in ways useful to operators and stakeholders. In this context, this dissertation provides a solution for deploying MobileID applications at INESC TEC, using the Mobile Driving Licence as the primary case study. The proposed design and its implementation with a container orchestration framework and CI/CD pipelines enable faster development of different MobileID applications while providing continuous monitoring of the deployments. With this implementation, it was possible to assess how container orchestration frameworks give applications greater flexibility, and how observability can be augmented with dedicated monitoring systems.
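    The metrics-collection idea described above can be illustrated with a small sketch. The Python fragment below exposes an application-level counter for scraping by a monitoring system such as Prometheus, using the prometheus_client library; the metric name, label, and port are illustrative assumptions, not details taken from the dissertation.

    # Minimal sketch: exposing an application metric to a monitoring system.
    # Metric name, label values, and port are illustrative assumptions.
    import random
    import time

    from prometheus_client import Counter, start_http_server

    # Hypothetical counter of handled requests, labelled by outcome.
    REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])

    def handle_request():
        # Stand-in for real application work.
        status = "ok" if random.random() > 0.1 else "error"
        REQUESTS.labels(status=status).inc()

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            handle_request()
            time.sleep(1)

    A scraper would then poll each deployment's metrics endpoint on a schedule, which is the kind of continuous, application-level observability the dissertation integrates into the CI/CD lifecycle.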

    Not invented here: Power and politics in public key infrastructure (PKI) institutionalisation at two global organisations.

    This dissertation explores the impact of power and politics on Public Key Infrastructure (PKI) institutionalisation. We argue that this process can be understood in power and politics terms because the infrastructure skews the control of organisational action in favour of dominant individuals and groups. Indeed, as our case studies show, a shift in power balances is not only a desired outcome of PKI deployment; power itself drives institutionalisation. Therefore, despite the rational goals of improving security and reducing the total cost of ownership for IT, the PKIs in our field organisations have actually been catalysts for power and politics. Although current research focuses on external technical interoperation, we believe the emphasis should be on the interaction between the at once restrictive and flexible PKI technical features, organisational structures, the goals of sponsors, and potential user resistance. We use the Circuits of Power (CoP) framework to explain how a PKI conditions and is conditioned by power and politics. Drawing on the concepts of infrastructure and institution, we submit that PKIs are politically explosive in pluralistic, distributed global organisations because, by limiting freedom of action in favour of stability and security, they set the stage for disaffection. Antipathy towards the infrastructure would not be a major concern if public key cryptography, which underpins PKI, had a centralised mechanism for enforcing the user discipline it relies on to work properly. However, since this discipline is not automatic, a PKI bereft of support from existing power arrangements faces considerable institutionalisation challenges. We assess these ideas in two case studies in London and Switzerland. In London, we explain how an oil company used its institutional structures to implement PKI as part of a desktop standard covering 105,000 employees. In Zurich and London, we give a power analysis of attempts by a global financial services firm to roll out PKI to over 70,000 users. Our dissertation makes an important contribution by showing that where PKI supporters engage in a shrewdly orchestrated campaign to knit the infrastructure into the existing institutional order, it becomes an accepted part of organisational life without much ceremony. In sum, we both fill gaps in the information security literature and extend knowledge of the efficacy of the Circuits of Power framework for conducting IS institutionalisation studies.

    Electronic security - risk mitigation in financial transactions: public policy issues

    This paper builds on a previous series of papers (see Claessens, Glaessner, and Klingebiel, 2001, 2002) that identified electronic security as a key component of the delivery of electronic finance benefits. This paper and its technical annexes (available separately at http://www1.worldbank.org/finance/) identify and discuss seven key pillars necessary to fostering a secure electronic environment. Hence, it is intended for those formulating broad policies in the area of electronic security and those working with financial services providers (for example, executives and management). The detailed annexes of this paper are especially relevant for chief information and security officers responsible for establishing layered security. First, this paper provides definitions of electronic finance and electronic security and explains why these issues deserve attention. Next, it presents a picture of the burgeoning global electronic security industry. Then it develops a risk-management framework for understanding the risks and tradeoffs inherent in the electronic security infrastructure. It also provides examples of tradeoffs that may arise with respect to technological innovation, privacy, quality of service, and security in designing an electronic security policy framework. Finally, it outlines issues in seven interrelated areas that often need attention in building an adequate electronic security infrastructure. These are: 1) the legal framework and enforcement; 2) electronic security of payment systems; 3) supervision and prevention challenges; 4) the role of private insurance as an essential monitoring mechanism; 5) certification, standards, and the role of the public and private sectors; 6) improving the accuracy of information on electronic security incidents and creating better arrangements for sharing this information; and 7) improving overall education on these issues as a key to enhancing prevention.

    Final report of design workshop (10.-15.7.2000)

    This document is the final report of the first design workshop of the Nexus research group at the University of Stuttgart, held from 10th to 15th July 2000. It contains a basic description of the Nexus platform, an open, global infrastructure for mobile, spatially aware applications.

    Hardware-Assisted Secure Computation

    The theory community has worked on Secure Multiparty Computation (SMC) for more than two decades, and has produced many protocols for many settings. One common thread in these works is that the protocols cannot use a Trusted Third Party (TTP), even though this is conceptually the simplest and most general solution. Thus, current protocols involve only the direct players; we call such protocols self-reliant. They often use blinded boolean circuits, which have several sources of overhead, some due to the circuit representation and some due to the blinding. However, secure coprocessors like the IBM 4758 have actual security properties similar to ideal TTPs. They also have little RAM and a slow CPU. We call such devices tiny TTPs. The availability of real tiny TTPs opens the door for a different approach to SMC problems. One major challenge with this approach is how to execute large programs on large inputs using the small protected memory of a tiny TTP, while preserving the trust properties that an ideal TTP provides. In this thesis we have investigated the use of real TTPs to help with the solution of SMC problems. We start with the use of such TTPs to solve the Private Information Retrieval (PIR) problem, which is one important instance of SMC. Our implementation utilizes a 4758. The rest of the thesis is targeted at general SMC. Our SMC system, Faerieplay, moves some functionality into a tiny TTP, and thus avoids the blinded-circuit overhead. Faerieplay consists of a compiler from high-level code to an arithmetic circuit with special gates for efficient indirect array access, and a virtual machine to execute this circuit on a tiny TTP while maintaining the typical SMC trust properties. We report on Faerieplay's security properties, the specification of its components, and our implementation and experiments. These include comparisons with the Fairplay circuit-based two-party system, and an implementation of Dijkstra's shortest-path algorithm. We also provide an implementation of an oblivious RAM which supports similar tiny-TTP-based SMC functionality using a standard RAM program. Performance comparisons show Faerieplay's circuit approach to be considerably faster, at the expense of a more constrained programming environment when targeting a circuit.
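    For readers unfamiliar with PIR, the toy Python sketch below shows the classic two-server XOR construction: the client sends each replica a random-looking subset of indices, and the XOR of the two answers recovers the chosen record without either server learning which one was read. This illustrates only the PIR problem the thesis targets, not Faerieplay's coprocessor-based protocol; the database contents are arbitrary.

    # Toy two-server PIR: neither server alone learns which index is wanted.
    # Illustrates the problem statement only, not the thesis's 4758-based scheme.
    import secrets

    DB = [7, 13, 42, 99]  # public database replicated on both servers
    N = len(DB)

    def server_answer(db, subset):
        # Each server XORs together the records named in its query subset.
        ans = 0
        for i in subset:
            ans ^= db[i]
        return ans

    def client_query(want):
        # Random subset for server 1; server 2 gets the same subset with
        # `want` toggled, so each subset alone is uniformly random.
        s1 = {i for i in range(N) if secrets.randbelow(2)}
        s2 = s1 ^ {want}  # symmetric difference
        return s1, s2

    want = 2
    s1, s2 = client_query(want)
    assert server_answer(DB, s1) ^ server_answer(DB, s2) == DB[want]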

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies such as the Internet are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result they feel these pressures the most. In recent years, various solutions have emerged for re-engineering these legacy systems. It quickly became clear that completely rewriting them was not possible, so various integration strategies emerged instead. Among these, the CORBA standard from the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment. However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable, and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions, for meeting these requirements have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications interoperate with newer object-oriented technologies.

    Secure service proxy: a CoAP(s) intermediary for a securer and smarter web of things

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. Performance- and function-enhancing features are difficult to provide in resource-constrained environments, but they will gain importance if the WoT is to scale successfully. For example, scalable, open, standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy whose goal is to improve the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches this goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay, and packet-loss rates for constrained devices. As a result, the SSP helps guarantee the proper operation of constrained networks as they form an ever-expanding Web of Things.
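    The caching behaviour that lets such a proxy shield constrained devices can be sketched in a few lines of Python. The fragment below caches device responses for their freshness lifetime (in the spirit of CoAP's Max-Age option) so that repeated requests are answered at the edge; the interfaces are illustrative assumptions, not the SSP's actual implementation.

    # Sketch of edge-proxy response caching keyed by URI with Max-Age expiry.
    # Interfaces are illustrative; this is not the SSP's implementation.
    import time

    class ResponseCache:
        def __init__(self):
            self._store = {}  # uri -> (payload, expiry timestamp)

        def get(self, uri):
            entry = self._store.get(uri)
            if entry and entry[1] > time.monotonic():
                return entry[0]         # still fresh: serve from cache
            self._store.pop(uri, None)  # stale or absent
            return None

        def put(self, uri, payload, max_age):
            self._store[uri] = (payload, time.monotonic() + max_age)

    def proxy_request(uri, cache, fetch_from_device):
        cached = cache.get(uri)
        if cached is not None:
            return cached  # constrained device and network are not touched
        payload, max_age = fetch_from_device(uri)  # one round trip to the device
        cache.put(uri, payload, max_age)
        return payload

    Every cache hit is a request that never crosses the constrained network, which is where savings of the kind the paper reports in traffic, delay, and packet loss would come from.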

    Evolving a secure grid-enabled, distributed data warehouse: a standards-based perspective

    As digital data collections have increased in scale and number, they have become an important type of resource serving a wide community of researchers. Cross-institutional data-sharing and collaboration offer a suitable way to support research institutions that lack data and the related IT infrastructure. Grid computing has become a widely adopted approach to enabling cross-institutional resource-sharing and collaboration: it integrates a distributed and heterogeneous collection of locally managed users and resources. This project proposes a distributed data warehouse system that uses Grid technology to enable data access, integration, and collaborative operations across multiple distributed institutions in the context of HIV/AIDS research. The study is based on wider research into an OGSA-based Grid services architecture, comprising a data-analysis system that utilizes a data warehouse, data marts, and a near-line operational database hosted by distributed institutions. Within this framework, specific patterns for collaboration, interoperability, resource virtualization, and security are included. The heterogeneous and dynamic nature of the Grid environment introduces a number of security challenges. This study therefore also addresses a set of particular security aspects, including PKI-based authentication, single sign-on, dynamic delegation, and attribute-based authorization. These mechanisms, as supported by the Globus Toolkit's Grid Security Infrastructure, are used to enable interoperability, establish trust relationships between the various security mechanisms and policies of different institutions, manage credentials, and ensure secure interactions.
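    As a rough illustration of the attribute-based authorization mentioned above, the sketch below checks a caller's certificate-derived attributes against a per-operation policy. The attribute and operation names are hypothetical, and a real deployment would obtain the attributes via the Globus Toolkit's Grid Security Infrastructure rather than a Python dictionary.

    # Hedged sketch of attribute-based authorization: an operation is allowed
    # only if the caller presents every attribute the policy requires.
    # Attribute and operation names are hypothetical.
    POLICY = {
        "query_warehouse": {"role:researcher"},
        "load_datamart": {"role:curator", "site:trusted"},
    }

    def authorized(operation, attributes):
        required = POLICY.get(operation, set())
        return required <= set(attributes)  # subset test

    assert authorized("query_warehouse", ["role:researcher", "site:trusted"])
    assert not authorized("load_datamart", ["role:researcher"])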