Keys in the Clouds: Auditable Multi-device Access to Cryptographic Credentials
Personal cryptographic keys are the foundation of many secure services, but
storing these keys securely is a challenge, especially if they are used from
multiple devices. Storing keys in a centralized location, like an
Internet-accessible server, raises serious security concerns (e.g. server
compromise). Hardware-based Trusted Execution Environments (TEEs) are a
well-known solution for protecting sensitive data in untrusted environments,
and are now becoming available on commodity server platforms.
Although the idea of protecting keys using a server-side TEE is
straight-forward, in this paper we validate this approach and show that it
enables new desirable functionality. We describe the design, implementation,
and evaluation of a TEE-based Cloud Key Store (CKS), an online service for
securely generating, storing, and using personal cryptographic keys. Using
remote attestation, users receive strong assurance about the behaviour of the
CKS, and can authenticate themselves using passwords while avoiding typical
risks of password-based authentication like password theft or phishing. In
addition, this design allows users to i) define policy-based access controls
for keys; ii) delegate keys to other CKS users for a specified time and/or a
limited number of uses; and iii) audit all key usages via a secure audit log.
We have implemented a proof of concept CKS using Intel SGX and integrated this
into GnuPG on Linux and OpenKeychain on Android. Our CKS implementation
performs approximately 6,000 signature operations per second on a single
desktop PC. The latency is of the same order of magnitude as using
locally-stored keys, and 20x faster than smart cards.
Comment: Extended version of a paper to appear in the 3rd Workshop on
Security, Privacy, and Identity Management in the Cloud (SECPID) 201
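The access-control features described above (password authentication, delegation with a bounded number of uses, and an audit log) can be sketched as a small in-memory model. All class and method names here are illustrative, not the paper's actual API, and the enclave, attestation, and real signature scheme are deliberately out of scope; only the bookkeeping logic is modeled, with HMAC standing in for a signature.

```python
import hashlib
import hmac
import os
import time


class CloudKeyStore:
    """Minimal in-memory sketch of a CKS: key generation, use-limited
    delegation, and an append-only audit log. The real CKS runs this
    logic inside an SGX enclave behind remote attestation."""

    def __init__(self):
        self._keys = {}       # key_id -> secret key bytes (enclave-only)
        self._grants = {}     # (key_id, user) -> remaining permitted uses
        self.audit_log = []   # append-only record of every key usage

    def generate_key(self, owner, key_id):
        self._keys[key_id] = os.urandom(32)
        self._grants[(key_id, owner)] = float("inf")  # owner: unlimited use

    def delegate(self, key_id, delegatee, max_uses):
        """Grant another user a bounded number of uses of the key."""
        self._grants[(key_id, delegatee)] = max_uses

    def sign(self, user, key_id, message):
        """Use the key on the user's behalf; the key never leaves the store."""
        uses = self._grants.get((key_id, user), 0)
        if uses <= 0:
            raise PermissionError("no remaining uses for this key")
        self._grants[(key_id, user)] = uses - 1
        self.audit_log.append((user, key_id, time.time()))
        return hmac.new(self._keys[key_id], message, hashlib.sha256).hexdigest()
```

For example, an owner can delegate a single use to another user; the second attempt is refused, and the one successful use appears in the audit log.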
An index based load balancer to enhance transactional application servers' QoS
The Web is the preferred interface for the new generation of applications. Web services are a ubiquitous concept for both developers and managers. These Web applications require systems for distributing web requests that support the dynamism of these environments, providing service availability and efficient use of commonly heterogeneous resources. Web services provide an entry point to the Web application's business logic. Therefore, the design of appropriate load balancing strategies, taking into account the dynamic nature of the application servers' activity, is essential. In this work we present a load balancing policy and its integration between the static and dynamic layers of any web application that uses application servers. The strategy collects a status report from each application server and uses it to distribute subsequent web requests. Results showing that the strategy succeeds are presented.
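The core idea, dispatching each request to the server whose reported load index is currently lowest, can be sketched as follows. The class and method names are illustrative (the paper does not publish an API), and the optimistic index bump after dispatch is an assumption about how staleness between status reports might be handled.

```python
class IndexLoadBalancer:
    """Sketch of an index-based dispatcher: application servers
    periodically report a load index, and each incoming web request
    is routed to the server with the lowest current index."""

    def __init__(self):
        self.index = {}  # server name -> last reported load index

    def report(self, server, load):
        """Record the status report sent by an application server."""
        self.index[server] = load

    def dispatch(self):
        """Pick the least-loaded server for the next request."""
        if not self.index:
            raise RuntimeError("no servers registered")
        server = min(self.index, key=self.index.get)
        # Optimistically bump the chosen server's index so bursts of
        # requests spread out between status reports.
        self.index[server] += 1
        return server
```

With servers reporting loads 5 and 2, the next request goes to the second server, whose index then rises to 3.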
Transparent code authentication at the processor level
The authors present a lightweight authentication mechanism that verifies the authenticity of code and thereby addresses the virus and malicious-code problems at the hardware level, eliminating the need for trusted extensions in the operating system. The proposed technique tightly integrates the authentication mechanism into the processor core. The authentication latency is hidden behind the memory access latency, allowing seamless on-the-fly authentication of instructions. In addition, the proposed authentication method supports seamless encryption of code (and static data). Consequently, while providing software users with assurance of the authenticity of programs executing on their hardware, the proposed technique also protects the software manufacturers' intellectual property through encryption. The performance analysis shows that, under mild assumptions, the presented technique introduces negligible overhead even for moderate cache sizes.
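The mechanism can be illustrated with a software model: at install time a MAC is computed over each cache-line-sized block of code with a per-processor secret, and at fetch time the block's MAC is recomputed and checked before the instructions are used. The block size, MAC truncation, and function names below are illustrative assumptions; the paper's hardware does this in the processor pipeline, overlapped with the memory access.

```python
import hashlib
import hmac

BLOCK = 64  # model a 64-byte cache line


def sign_code(secret, code):
    """Install time: compute a truncated MAC over each cache-line-sized
    block of the program, keyed with a per-processor secret."""
    blocks = [code[i:i + BLOCK] for i in range(0, len(code), BLOCK)]
    return [hmac.new(secret, b, hashlib.sha256).digest()[:8] for b in blocks]


def fetch_block(secret, code, macs, i):
    """Fetch time: recompute and compare the block's MAC before the
    instructions are allowed to execute; any tampering is rejected."""
    block = code[i * BLOCK:(i + 1) * BLOCK]
    expected = hmac.new(secret, block, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(expected, macs[i]):
        raise RuntimeError("code authentication failed at block %d" % i)
    return block
```

Flipping a single byte of a signed program makes the corresponding fetch fail, which is the hardware-level virus defense the abstract describes; encrypting the blocks under the same key additionally hides the code itself.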
LightBox: Full-stack Protected Stateful Middlebox at Lightning Speed
Running off-site software middleboxes at third-party service providers has
been a popular practice. However, routing large volumes of raw traffic, which
may carry sensitive information, to a remote site for processing raises severe
security concerns. Prior solutions often abstract away important factors
pertinent to real-world deployment. In particular, they overlook the
significance of metadata protection and stateful processing. Unprotected
traffic metadata like low-level headers, size and count, can be exploited to
learn supposedly encrypted application contents. Meanwhile, tracking the states
of 100,000s of flows concurrently is often indispensable in production-level
middleboxes deployed at real networks.
We present LightBox, the first system that can drive off-site middleboxes at
near-native speed with stateful processing and the most comprehensive
protection to date. Built upon commodity trusted hardware, Intel SGX, LightBox
is the product of our systematic investigation of how to overcome the inherent
limitations of secure enclaves using domain knowledge and customization. First,
we introduce an elegant virtual network interface that allows convenient access
to fully protected packets at line rate without leaving the enclave, as if from
the trusted source network. Second, we provide complete flow state management
for efficient stateful processing, by tailoring a set of data structures and
algorithms optimized for the highly constrained enclave space. Extensive
evaluations demonstrate that LightBox, with all security benefits, can achieve
10Gbps packet I/O, and that with case studies on three stateful middleboxes, it
can operate at near-native speed.
Comment: Accepted at ACM CCS 201
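The flow state management challenge, tracking hundreds of thousands of flows inside the tightly bounded enclave memory, can be sketched as a capacity-capped flow table. The class below uses plain LRU eviction as a stand-in; LightBox's actual enclave-tailored data structures and its encrypted outside store are more elaborate, so treat this as an illustration of the problem shape only.

```python
from collections import OrderedDict


class FlowTable:
    """Sketch of stateful flow tracking under a tight memory cap, as a
    middlebox inside an enclave must do. Eviction here is plain LRU;
    evicted state would be spilled to an encrypted store outside the
    enclave rather than dropped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._flows = OrderedDict()  # flow 5-tuple -> per-flow state

    def lookup(self, five_tuple):
        state = self._flows.get(five_tuple)
        if state is not None:
            self._flows.move_to_end(five_tuple)  # mark recently used
        return state

    def insert(self, five_tuple, state):
        if five_tuple in self._flows:
            self._flows.move_to_end(five_tuple)
        elif len(self._flows) >= self.capacity:
            # Evict the least-recently-used flow to stay within the
            # enclave's memory budget.
            self._flows.popitem(last=False)
        self._flows[five_tuple] = state
```

The design tension the paper addresses is visible even here: every packet needs a lookup plus possible eviction, and all of it must run at line rate without leaving the enclave.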
Intel TDX Demystified: A Top-Down Approach
Intel Trust Domain Extensions (TDX) is a new architectural extension in the
4th Generation Intel Xeon Scalable Processor that supports confidential
computing. TDX allows the deployment of virtual machines in the
Secure-Arbitration Mode (SEAM) with encrypted CPU state and memory, integrity
protection, and remote attestation. TDX aims to enforce hardware-assisted
isolation for virtual machines and minimize the attack surface exposed to host
platforms, which are considered to be untrustworthy or adversarial in
confidential computing's new threat model. TDX can be leveraged by regulated
industries or sensitive data holders to outsource their computations and data
with end-to-end protection in public cloud infrastructure.
This paper aims to provide a comprehensive understanding of TDX to potential
adopters, domain experts, and security researchers looking to leverage the
technology for their own purposes. We adopt a top-down approach, starting with
high-level security principles and moving to low-level technical details of
TDX. Our analysis is based on publicly available documentation and source code,
offering insights from security researchers outside of Intel.
Hardening an Open-Source Governance Risk and Compliance Software: Eramba
Master's thesis, Segurança Informática (Information Security), Universidade de Lisboa, Faculdade de Ciências, 2020.
Historical lessons such as Chernobyl, Fukushima, or the collapse of the Mississippi bridge showcase the vital importance of risk management. In addition to managing risk, companies must develop plans to safeguard against and offer resilience to any threat they may face, from natural disasters and terrorism to cyber-attacks and the spread of viruses. These plans are called business continuity plans. The cruciality of these plans and the introduction of new laws such as the Sarbanes-Oxley Act, European Directive 2006/43/EC VIII, and recently the Data Protection Regulation have generated greater concern and sensitivity in companies, leading them to consolidate all these governance, risk, and compliance (GRC) processes. GRC integrates the implementation of risk management, business continuity plans, compliance with laws, and good external and internal audit practices. Companies need a tool that provides an overall view of Governance, Risk and Compliance. However, such tools are usually expensive, which means that small and medium-sized companies cannot afford them. Consequently, these companies tend to adopt open-source tools such as SimpleRisk, Envelop or Eramba. Despite supporting GRC, applications of this type have several problems, such as lack of maintenance, migration issues, difficulty in scaling, the constant need for updates, and the steep learning curve involved. Ernst & Young, now known as EY, offers Consulting, Assurance, Tax, and Strategy and Transaction services to help solve its clients' most difficult challenges and create value.
To prepare for a future audit, an EY client in the banking sector seeks certification in ISO/IEC 27001 and ISO/IEC 22301, covering Information Security Management Systems (ISMS) and Business Continuity Management Systems (BCMS) respectively. Additionally, the client aims to migrate its on-site infrastructure to the cloud. With all these factors in mind, EY recommended an open-source GRC tool called Eramba. This thesis proposes an in-depth study of the vulnerabilities Eramba may present, as well as a solution to address them through cloud deployment. Following a pentesting methodology called PTES for the vulnerability study, it was possible to identify ten vulnerabilities, almost all of them low severity. The PTES methodology recommends building a threat model in order to understand how processes are correlated, where important data are stored, what the main assets are, and how a request is processed in the application. This modeling followed a methodology proposed by Microsoft named STRIDE, a mnemonic for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Microsoft proposes threat modeling in four steps: model the system through Data Flow Diagrams; find threats and classify them using the STRIDE nomenclature; address threats by mitigating or eliminating them; and validate whether each one has actually been addressed successfully. To carry out these last two steps, and to meet the company's requirement of migrating to the cloud, a solution was developed that packages Eramba as a container and deploys it with the Kubernetes container orchestrator.
As a result, any organization can adapt this GRC solution and host it in the cloud without facing difficulties. This work also made it possible to analyze the long-term viability of Eramba and to assess whether it is secure and scalable.
A Study of the Relative Costs of Network Security Protocols
While the benefits of using IPsec to solve a significant number of network security problems are well known and its adoption is gaining ground, very little is known about the communication overhead it introduces. Quantifying this overhead will make users aware of the price of the added security, and will assist them in making well-informed IPsec deployment decisions. In this paper, we investigate the performance of IPsec using micro- and macro-benchmarks. Our tests explore how the various modes of operation and encryption algorithms affect its performance, and the benefits of using cryptographic hardware to accelerate IPsec processing. Finally, we compare against other secure data transfer mechanisms, such as SSL, scp(1), and sftp(1).
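The micro-benchmarking approach, measuring the per-packet cost of cryptographic processing relative to an insecure baseline, can be sketched as below. This is not the paper's methodology or toolchain: it uses HMAC-SHA256 as a stand-in for per-packet authentication (the Python standard library has no AES), an MTU-sized payload, and a simple wall-clock loop, purely to show how such an overhead ratio is obtained.

```python
import hashlib
import hmac
import os
import time


def throughput(process, payload, rounds=200):
    """Micro-benchmark helper: processed bytes per second for one
    per-packet transform applied `rounds` times to `payload`."""
    start = time.perf_counter()
    for _ in range(rounds):
        process(payload)
    elapsed = time.perf_counter() - start
    return rounds * len(payload) / elapsed


key = os.urandom(32)
packet = os.urandom(1400)  # roughly MTU-sized payload

# Baseline: no security processing at all.
plain = throughput(lambda p: p, packet)
# Stand-in for per-packet cryptographic authentication.
macd = throughput(lambda p: hmac.new(key, p, hashlib.sha256).digest(), packet)

print("relative cost of adding per-packet MACs: %.1fx" % (plain / macd))
```

Real IPsec measurements must additionally account for encapsulation overhead (ESP/AH headers shrink the effective payload), kernel crossings, and the chosen cipher, which is exactly why the paper separates micro-benchmarks from end-to-end macro-benchmarks.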