19 research outputs found

    COSPO/CENDI Industry Day Conference

    Get PDF
    The conference's objective was to provide a forum where government information managers and industry information technology experts could engage in an open exchange, discuss their respective needs, and compare them to available, or soon-to-be-available, solutions. Technical summaries and points of contact are provided for the following sessions: secure products, protocols, and encryption; information providers; electronic document management and publishing; information indexing, discovery, and retrieval (IIDR); automated language translators; IIDR - natural language capabilities; IIDR - advanced technologies; IIDR - distributed heterogeneous and large database support; and communications - speed, bandwidth, and wireless.

    Supporting NAT traversal and secure communications in a protocol implementation framework

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Mestre em Engenharia Electrotécnica e de Computadores. The DOORS framework is a versatile, lightweight message-based framework developed in ANSI C++. It builds upon research experience and knowledge garnered from the use and development of CVOPS and OVOPS, two well-known protocol development frameworks that have obtained widespread acceptance and use in both Finnish industry and academia. It conceptually resides between the operating system and the application, and provides a uniform development environment that shields the developer from operating-system-specific issues. It can be used for developing network services ranging from simple socket-based systems, to protocol implementations, to CORBA-based applications and object-based gateways. Originally, DOORS was conceived as a natural extension of the OVOPS framework to support generic event-based, distributed and client-server network applications. Since then, however, DOORS has evolved into a platform-level middleware solution for researching the provision of converged services to both packet-based and telecommunications networks, enterprise-level integration and interoperability in future networks, and application development, multicasting and service discovery protocols in heterogeneous IPv6 networks. This thesis covers two aspects of development work with DOORS. The first is an investigation of the Network Address Translation (NAT) traversal problem, to allow applications built on the DOORS framework that reside in private IP networks to interwork with those in public IP networks. The first part therefore focuses on the development of a client in the DOORS framework for the Session Traversal Utilities for NAT (STUN) protocol, to be used for IP communications behind a NAT. The second aspect involves secure communications.
Application protocols in communication networks are easily intercepted and need security at various layers. The second part therefore focuses on the investigation and development of a technique in the DOORS framework to support the Transport Layer Security (TLS) protocol, giving application protocols the ability to rely on secure transport-layer services.
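The core of any STUN client is building an RFC 5389 Binding Request and decoding the XOR-MAPPED-ADDRESS attribute from the response. A minimal sketch of that wire format in Python follows; the function names are illustrative, not taken from the DOORS codebase, and error handling is omitted:

```python
import os
import socket
import struct

STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389

def build_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request header with no attributes."""
    transaction_id = os.urandom(12)  # 96-bit random transaction ID
    # message type, message length (0: no attributes), magic cookie, txid
    return struct.pack("!HHI12s", STUN_BINDING_REQUEST, 0,
                       STUN_MAGIC_COOKIE, transaction_id)

def query_mapped_address(server: str, port: int = 3478):
    """Send a Binding Request and parse XOR-MAPPED-ADDRESS (IPv4 only)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(build_binding_request(), (server, port))
    data, _ = sock.recvfrom(2048)
    pos = 20  # attributes follow the 20-byte response header
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            xport = struct.unpack_from("!H", data, pos + 6)[0] ^ (STUN_MAGIC_COOKIE >> 16)
            xip = struct.unpack_from("!I", data, pos + 8)[0] ^ STUN_MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", xip)), xport
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
    return None
```

Calling `query_mapped_address()` against a public STUN server returns the address and port the NAT assigned to the client, which the application can then advertise to peers in the public network.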

    Regulating the technological actor: how governments tried to transform the technology and the market for cryptography and cryptographic services and the implications for the regulation of information and communications technologies

    Get PDF
    The formulation, adoption, and transformation of policy involves the interaction of actors as they negotiate, accept, and reject proposals. Traditional studies of policy discourse focus on social actors. By studying cryptography policy discourses, I argue that considering both social and technological actors in detail enriches our understanding of policy discourse. The case-based research looks at the various cryptography policy strategies employed by the governments of the United States of America and the United Kingdom. The research method is qualitative, using hermeneutics to elucidate the various actors’ interpretations. The research aims to understand policy discourse as a contest of principles involving various government actors advocating multiple regulatory mechanisms to maintain their surveillance capabilities, and the reactions of industry actors, non-governmental organisations, parliamentarians, and epistemic communities. I argue that studying socio-technological discourse helps us to understand the complex dynamics involved in regulation and regulatory change. Interests and alignments may be contingent and unstable. As a result, technologies cannot be regarded as mere representations of social interests and relationships. By capturing the interpretations and articulations of social and technological actors, we may attain a better understanding of the regulatory landscape for information and communications technologies.

    NA

    Get PDF
    United States policy requires that access to and dissemination of classified information be controlled. Separate networks and workstations for each classification level do not meet user requirements, and users also need commercially available office productivity tools. Traditional multilevel systems are costly and are unable to support an evolving suite of Commercial Off-The-Shelf (COTS) applications. This thesis presents a design for a Trusted Computing Base Extension (TCBE) that allows COTS workstations to function securely as part of a multilevel network that uses high-assurance multilevel servers as the backbone. The TCBE will allow COTS workstations to use commercially available software applications while providing a Trusted Path to a high-assurance multilevel server. The research resulted in a design of a TCBE system that can be employed with COTS workstations, allowing them to function as untrusted clients in the context of a secure multilevel network. http://archive.org/details/designoftrustedc1094532753 NA. U.S. Marine Corps (U.S.M.C.) author. Approved for public release; distribution is unlimited.

    Enhancing the security of electronic commerce transactions

    Get PDF
    This thesis looks at the security of electronic commerce transaction processing. It begins with an introduction to the security terminology used in the thesis. Security requirements for card payments via the Internet are then described, as are possible protocols for electronic transaction processing. Currently, the Secure Socket Layer (SSL) protocol, together with its standardised version Transport Layer Security (TLS), appears to be the most widely used means to secure electronic transactions made over the Internet. Therefore, the analysis and discussion in the remainder of the thesis are based on the assumption that this protocol provides a 'baseline' level of security against which any novel security mechanism should be measured. The SSL and TLS protocols are analysed with respect to how well they satisfy the outlined security requirements. As SSL and TLS provide transport-layer security, and some of the security requirements are at the application level, it is not surprising that they do not address all the identified security requirements. As a result, in this thesis we propose four protocols that build upon the security features provided by SSL/TLS. The main goal is to design schemes that enhance the security of electronic transaction processing while imposing minimal overheads on the involved parties. In each case, a description of the new scheme is given, together with its advantages and limitations. In the first protocol, we propose a way to use an EMV card to improve the security of online transactions. The second protocol involves the use of the GSM subscriber authentication service to provide user authentication over the Internet. Thirdly, we propose the use of the GSM data confidentiality service to protect sensitive information as well as to ensure user authentication. Regardless of the protection scheme employed for the transactions, there remain threats to all PCs used to conduct electronic commerce transactions.
These residual threats are examined, and motivate the design of the fourth protocol, proposed specifically to address cookie threats.
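One standard general technique for hardening cookies against tampering is an HMAC tag over the cookie value. The sketch below is illustrative of that idea only, not of the thesis's actual fourth protocol; the key handling and function names are assumptions:

```python
import hashlib
import hmac
import secrets

# Illustrative only: a real server would persist and protect this key
SECRET_KEY = secrets.token_bytes(32)

def sign_cookie(value: str) -> str:
    """Append an HMAC-SHA256 tag so any tampering is detectable."""
    tag = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{tag}"

def verify_cookie(cookie: str):
    """Return the original value if the tag verifies, else None."""
    value, _, tag = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    # constant-time comparison avoids timing side channels
    return value if hmac.compare_digest(tag, expected) else None
```

Note that this provides integrity only, not confidentiality: a signed cookie can still be read in transit unless the connection itself is protected by SSL/TLS.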

    Dynamic infrastructure for federated identity management in open environments

    Get PDF
    Centralized identity management solutions were created to deal with user and data security when the user and the systems they accessed were within the same network or domain of control. However, the decentralization brought about by the integration of the Internet into every aspect of life is leading to an increasing separation between users and the systems they need to access. Identity management has been continually evolving to adapt to these changing systems, posing new challenges. The challenges associated with cross-domain access have given rise to a new approach to identity management, called Federated Identity Management (FIM), because it removes the largest barriers to achieving a common understanding. Due to the importance of the federation paradigm for online identity management, a lot of work has been done so far, resulting in a set of standards and specifications. According to them, under the FIM paradigm a person’s electronic identity stored across multiple distinct domains can be linked, shared and reused. This enables interesting use cases such as Single Sign-On (SSO), which allows users to authenticate at a single service and gain access to multiple services without providing additional information. FIM also provides means for cross-domain user account provisioning, cross-domain entitlement management and cross-domain user attribute exchange. However, for the federated exchange of user information to be possible in a secure way, a trust relationship must exist between the separate domains. The establishment of these trust relationships, where it is addressed in the federation specifications at all, is based on complex agreements and configurations that are usually set up manually by an administrator. For this reason, the “internet-like” scale of identity federations is still limited.
Hence, there is a need to move from static configurations towards more flexible and dynamic federations in which members can join and leave more frequently and trust decisions can be computed dynamically, on the fly. In this thesis, we address this issue. The main goal is to improve the trust layer in FIM in order to achieve dynamic federation. For this purpose, we propose an architecture that extends current federation systems, based on two main pillars: a reputation-based trust computation module and a risk assessment module. Regarding trust, we formalize a model to compute and represent trust as a number, which provides a basis for easy implementation and automation. It captures the features of current FIM systems and introduces new dimensions to add flexibility and richness. The model includes the definition of a trustworthiness metric, detailing the evidence used and how it is combined to obtain a quantitative value. Basically, authentication information is merged with behavior data, i.e., reputation or history of interactions. To include reputation data in the model, we contribute the definition of a generic protocol to exchange reputation information between FIM entities, and its integration with the most widely deployed specification, the Security Assertion Markup Language (SAML). Regarding risk, we define an assessment model that allows entities to calculate how much risk is involved in transacting with another entity according to its configuration, policies, operation rules, cryptographic algorithms, etc. The methodology employed to define the risk model consists of three steps. First, we design a taxonomy to capture the different aspects of a relationship in FIM that may contribute to risk. Second, based on the taxonomy and aiming at a computational model, we propose a set of metrics as a basis for quantifying risk.
Finally, we describe how to combine the metrics into a meaningful risk figure using Multiattribute Utility Theory (MAUT), which has been applied and adapted to define the risk aggregation model. Furthermore, also under MAUT, we propose a fuzzy aggregation system that combines trust and risk into a final value that is the basis for dynamic federation decisions. The ideas above have been formally validated: the risk assessment and decision making are validated analytically to ensure correct behavior, the reputation protocol included in the trust management proposal is tested through simulations, and the architecture is verified through the development of prototypes. In addition, dissemination activities were carried out in projects, journals and conferences. In summary, the contributions here constitute a step towards the realization of dynamic federation, based on making the underlying trust frameworks more flexible.
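The additive aggregation step common to MAUT-based models can be sketched as a normalized weighted sum of per-attribute utilities. The attribute names, weights, and decision rule below are illustrative placeholders, not the thesis's actual taxonomy or its fuzzy aggregation system:

```python
def maut_aggregate(utilities: dict, weights: dict) -> float:
    """Additive MAUT: weighted sum of per-attribute utilities in [0, 1].
    Weights are normalized so the aggregate also stays in [0, 1]."""
    total_w = sum(weights.values())
    return sum(weights[k] * utilities[k] for k in utilities) / total_w

# Illustrative risk attributes (utility 1.0 = worst risk on that dimension)
risk = maut_aggregate(
    utilities={"weak_crypto": 0.8, "lax_policy": 0.4, "short_history": 0.6},
    weights={"weak_crypto": 0.5, "lax_policy": 0.3, "short_history": 0.2},
)

def federation_decision(trust: float, risk: float, threshold: float = 0.5) -> bool:
    """Toy trust/risk combination for a join decision; the thesis instead
    combines the two values through a fuzzy aggregation system."""
    return (trust * (1.0 - risk)) >= threshold
```

With these sample figures, `risk` evaluates to 0.64, and a federation request would be accepted only from a counterpart whose trust score is high enough to offset it.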

    Secure authentication using trusted computing

    Get PDF
    Computer systems security remains a top priority for researchers and industry, as cyber attacks keep improving and cost companies millions of dollars. One recently introduced concept is trusted computing, which is used to guarantee the integrity of the system in use. Its main mechanisms are integrity verification and software and hardware isolation, achieved via hardware and software implementations. The application domain of trusted computing is large, and it is usually deployed in sensitive contexts. One area where security is critical but has no definitive solution is online authentication, in particular for sensitive services such as online banking or email. Currently, online authentication is vulnerable to multiple software and hardware attacks, such as trojans and software and hardware keyloggers. We propose a scheme for online authentication that offers a compromise between security, ease of use and backward compatibility, with the emphasis on security, using trusted computing concepts. We aim to prevent attacks targeting user credentials sent from the user's computer. Another objective is backward compatibility with current authentication methods, which usually consist of a username and password entered on websites. Our goal throughout this dissertation is to implement an application using the mechanisms of trusted computing on an embedded system; this application provides users with a secure authentication mechanism for connecting to websites. We began by studying the different possible approaches in terms of trusted computing and isolated execution.
Then we demonstrated trusted computing on an embedded system and implemented the secure authentication application on top of it, which allowed us to study the viability of trusted computing for solving security problems. Finally, we discussed the possible threat models and the limitations of our solution in order to evaluate it. Our system protects against a range of attacks possible under current authentication schemes, although there is room to make it even more resistant and secure. The concept of trusted computing can also be explored in other security contexts: despite some development difficulties, it offers many advantages.

    Security Enhanced Applications for Information Systems

    Get PDF
    Every day, more users access services and electronically transmit information, which is usually disseminated over insecure networks and processed by websites and databases that lack proper security protection mechanisms and tools. This can undermine both users’ trust and the reputation of the system’s stakeholders, so designing and implementing security-enhanced systems is of vital importance. This book, titled “Security Enhanced Applications for Information Systems”, therefore presents a number of innovative security-enhanced applications across 11 chapters. It is a quality guide for teaching purposes as well as for young researchers, since it presents leading innovative contributions on security-enhanced applications in various information systems, covering cases based on standalone, network and Cloud environments.

    Foresight and flexibility in cryptography and voice over IP policy

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Political Science, 2008. "February 2008." Includes bibliographical references (p. 235-248). The main question in this dissertation is under what conditions government agencies show foresight in formulating strategies for managing emerging technologies; a secondary question is when they are capable of adaptation. Conventional wisdom and most of the organization theory literature suggest that organizations are reactive rather than proactive, reluctant to change, and responsive only to threats to their core mission or autonomy. The technological, economic, social, political, and sometimes security uncertainties that often accompany emerging technologies further complicate decision-making. More generally, organizations must often make decisions under conditions of limited information while guarding against lock-in effects that can constrain future choices. The two cases examined in this dissertation suggest that, contrary to conventional wisdom, organizations can show foresight and flexibility in the management of emerging technologies. Key factors that promote foresight are: an organizational focus on technology, with the emerging technology in question being highly relevant to the organization's mission; technical expertise and a recognition of the limits of that knowledge; and experience dealing with other emerging technologies. The NSA recognized the inevitability of mass-market encryption early on and adopted a sophisticated strategy of weakening, reducing the use of, and slowing the deployment of mass-market encryption in order to preserve its ability to monitor communications easily. The Agency showed considerable tactical adaptation in pursuit of this goal. The FCC adopted a rather unusual policy of forbearance toward VoIP.
The Commission deliberately refrained from regulating VoIP in order to allow the technology to mature, innovation to occur, and uncertainties to resolve, and to avoid potential market distortions due to premature or suboptimally formulated regulation. Eventually, however, pressure from outside interests such as law enforcement forced the Commission to act. By Shirley K. Hung, Ph.D.