3,913 research outputs found

    Enhancing cyber assets visibility for effective attack surface management: Cyber Asset Attack Surface Management based on Knowledge Graph

    Get PDF
    The contemporary digital landscape is filled with challenges, chief among them the management and security of cyber assets, including ever-growing shadow IT. The evolving technology landscape has produced an expansive ecosystem of solutions, making it challenging to select and deploy compatible tools in a structured manner. This thesis explores the critical role of Cyber Asset Attack Surface Management (CAASM) technologies in managing cyber attack surfaces, focusing on the open-source CAASM tool Starbase, by JupiterOne. It starts by underlining the importance of comprehending the cyber assets that need defending, and acknowledges the Cyber Defense Matrix as a methodical and flexible approach to understanding and addressing cyber security challenges. A comprehensive analysis of market trends and business needs validated the necessity of asset security management tools as fundamental components of firms' security journeys. CAASM was selected as a promising solution among various tools due to its capabilities, ease of use, and seamless integration with cloud environments via APIs, addressing shadow IT challenges. A practical use case involving the integration of Starbase with GitHub was developed to demonstrate CAASM's usability and flexibility in managing cyber assets in organizations of varying sizes. The use case enhanced the knowledge graph's aesthetics and usability using Neo4j Desktop and Neo4j Bloom, making it accessible and insightful even for non-technical users. The thesis concludes with practical guidelines, in the appendices and on GitHub, for reproducing the use case.
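    The use case centers on querying the asset knowledge graph that Starbase builds in Neo4j. As a flavor of what such exploration looks like, below is a minimal sketch using the official neo4j Python driver; the connection details, the User/CodeRepo labels, and the OWNS relationship are illustrative assumptions, not Starbase's actual schema.

```python
# Minimal sketch: exploring a Starbase-style asset graph in Neo4j.
# Labels/relationships (User, CodeRepo, OWNS) and the credentials are
# illustrative assumptions, not Starbase's actual schema.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"     # default local Neo4j endpoint
AUTH = ("neo4j", "password")      # replace with real credentials

QUERY = """
MATCH (u:User)-[:OWNS]->(r:CodeRepo)
RETURN u.displayName AS owner, count(r) AS repos
ORDER BY repos DESC
"""

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        for record in session.run(QUERY):
            print(record["owner"], record["repos"])
```

    A query of this shape is the kind a security team might run to spot unowned or shadow repositories once GitHub data has been ingested.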

    Agnostic cloud services with Kubernetes

    Get PDF
    Dissertation for obtaining the Master's degree in Informatics and Computer Engineering. The vendor lock-in concept represents a customer’s dependency on a particular supplier or vendor, eventually becoming unable to easily migrate to a different provider. Cloud computing is frequently associated with vendor lock-in restrictions, motivated by the proprietary technological arrangements of each provider. This work proposes an agnostic cloud-provider model that addresses such challenges, focusing on the establishment of a model for deploying and managing computational services in cloud environments. Concretely, it aims to enable informatics systems to be executed agnostically on multiple cloud platforms and infrastructures, thereby decoupling them from any cloud provider. Moreover, the model intends to automate service deployment by defining and generating the running configurations for the services. Within this context, container technology is deemed an efficient and standard strategy for deploying computational services across cloud providers, promoting the migration of informatics systems between vendors.
Additionally, container orchestration platforms, which are increasingly adopted by organizations, are essential for effectively managing the life-cycle of multi-container informatics systems, monitoring their performance and dynamically controlling their behavior. In particular, the Kubernetes platform, an emerging open standard for cloud services, is proving to be a valuable contribution to achieving service-agnostic deployment, namely through its Cloud Controller Manager mechanism, which helps abstract specific cloud providers. As validation of the proposed approach, the model's adaptability to different services and technologies supplied by heterogeneous organizations is demonstrated through the deployment of containerized applications (informatics systems) on multiple cloud service providers, public or on-premises. For this purpose, the Informatics System of Systems framework is adopted as a validator for structuring and organizing heterogeneous technology artifacts from different suppliers.
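    The abstract's central mechanism is the automated generation of provider-neutral deployment files. A minimal sketch of that idea, assuming nothing about the authors' actual tooling: build a standard Kubernetes Deployment manifest programmatically (Kubernetes accepts JSON as well as YAML), so the same artifact applies to any conformant cluster, public or on-premises. All names, images, and counts below are placeholders.

```python
# Minimal sketch: generate a provider-neutral Kubernetes Deployment
# manifest. Service name, image, and replica count are placeholders.
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Deployment; kubectl accepts JSON manifests too."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

if __name__ == "__main__":
    # The same file deploys unchanged on any conformant cluster:
    #   kubectl apply -f web.json
    with open("web.json", "w") as f:
        json.dump(deployment_manifest("web", "nginx:1.27"), f, indent=2)
```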

    SciTokens: Capability-Based Secure Access to Remote Scientific Data

    Full text link
    The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation, addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems. Comment: 8 pages, 6 figures; PEARC '18: Practice and Experience in Advanced Research Computing, July 22–26, 2018, Pittsburgh, PA, US
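    The key design point is that a SciToken is a signed JWT whose scope claim names the specific operations and paths a workflow is authorized for, rather than identifying a user. The sketch below builds and inspects an unsigned demo token purely to show that claim structure; the issuer URL and scopes are illustrative, and real tokens must be signature-verified against the issuer's published keys.

```python
# Minimal sketch of the claim structure in a SciTokens-style JWT.
# The demo token is UNSIGNED and for illustration only; production
# code must verify the signature against the issuer's keys.
import base64
import json

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object without padding (JWT style)."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {"alg": "none", "typ": "JWT"}
payload = {                                # illustrative claims
    "iss": "https://demo.scitokens.org",
    "exp": 1924992000,
    "scope": "read:/ligo/frames write:/store/user",
}
token = f"{b64url(header)}.{b64url(payload)}."   # empty signature part

# Decode the payload back and enumerate the capabilities it conveys.
part = token.split(".")[1]
claims = json.loads(base64.urlsafe_b64decode(part + "=" * (-len(part) % 4)))
for cap in claims["scope"].split():
    op, _, path = cap.partition(":")
    print(f"token authorizes '{op}' under '{path}'")
```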

    Component-aware Orchestration of Cloud-based Enterprise Applications, from TOSCA to Docker and Kubernetes

    Full text link
    Enterprise IT is currently facing the challenge of coordinating the management of complex, multi-component applications across heterogeneous cloud platforms. Containers and container orchestrators provide a valuable solution for deploying multi-component applications over cloud platforms, by coupling the lifecycle of each application component to that of its hosting container. We hereby propose a solution for going beyond such a coupling, based on the OASIS standard TOSCA and on Docker. Specifically, we propose a novel approach for deploying multi-component applications on top of existing container orchestrators, which makes it possible to manage each component independently of the container used to run it. We also present prototype tools implementing our approach, and we show how we effectively exploited them to carry out a concrete case study.
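    To make the decoupling concrete, the sketch below uses the Docker SDK for Python to start a hosting container once and then drive component-level lifecycle operations inside it, without recreating the container, much as a TOSCA configure/start interface might. The image and commands are illustrative placeholders, not the authors' prototype tools.

```python
# Minimal sketch: manage a component's lifecycle independently of its
# hosting container. Image and commands are illustrative placeholders.
import docker

client = docker.from_env()

# Start the hosting container once...
container = client.containers.run("nginx:1.27", detach=True, name="web-host")

# ...then run component-level operations inside it without touching the
# container itself (roughly a TOSCA configure/start pair).
for op in ("nginx -t",            # configure: validate configuration
           "nginx -s reload"):    # start/restart the component
    exit_code, output = container.exec_run(op)
    print(op, "->", exit_code)

container.remove(force=True)      # container teardown is a separate concern
```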

    ESTABLISHING BLOCKCHAIN-RELATED SECURITY CONTROLS

    Get PDF
    Blockchain technology is a secure and relatively new distributed digital ledger technology based on interlinked blocks of transactions. Adoption of blockchain technology is growing rapidly across solutions, applications, and industries throughout the world, including but not limited to finance, supply chain, digital identity, energy, healthcare, real estate, and government. Blockchain technology offers great benefits such as decentralization, transparency, immutability, and automation. Like any other emerging technology, however, blockchain also carries several risks and threats alongside its expected benefits, which in turn could negatively affect individuals, entities, and countries. This is mainly due to the absence of a solid governance foundation for managing and mitigating such risks and the shortage of published standards governing blockchain technology and its associated applications. In line with the "Dubai Blockchain Strategy 2020" and "Emirates Blockchain Strategy 2021" initiatives, this thesis aims to achieve two goals: first, preserving the confidentiality, integrity, and availability of information and information assets relevant to blockchain applications and solutions implemented across entities; and second, mitigating and reducing the related information security risks and threats. Both are pursued through the establishment of new information security controls specific to blockchain technology that are not covered by the relevant international and national information security standards, namely the ISO 27001:2013 standard and the UAE Information Assurance Standards issued by the Signals Intelligence Agency (formerly known as the National Electronic Security Authority). Finally, risk assessment and risk treatment were performed on five blockchain use cases to determine the risks involved and map them to appropriate security controls. The assessment results showed that the proposed security controls can mitigate the relevant information security risks in blockchain solutions and applications and consequently protect information and information assets from unauthorized disclosure, modification, and destruction.
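    The risk assessment step described typically reduces to scoring each threat by likelihood and impact and deciding whether a control is required. As a toy illustration only, with made-up risks, scores, and threshold rather than the thesis's actual data:

```python
# Toy likelihood-x-impact scoring in the spirit of an ISO 27001-style
# risk assessment. Risks, scores, and the threshold are made up.
RISKS = {
    "private-key compromise":    (4, 5),   # (likelihood 1-5, impact 1-5)
    "smart-contract logic flaw": (3, 5),
    "consensus-level attack":    (2, 4),
}
TREAT_THRESHOLD = 12   # scores at or above this require a control

for name, (likelihood, impact) in sorted(
        RISKS.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    score = likelihood * impact
    action = "treat with control" if score >= TREAT_THRESHOLD else "accept/monitor"
    print(f"{name:28s} score={score:2d} -> {action}")
```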

    Sierra: cooperative request-response for resource management in disasters using semantic web principles

    Get PDF
    Disasters cause widespread harm and disrupt the normal functioning of society, and effective management requires the participation and cooperation of many actors. While advances in information and networking technology have made transmission of data easier than ever before, communication and coordination of activities between actors remain exceptionally difficult. This paper employs semantic web technology and Linked Data principles to create a network of intercommunicating and inter-dependent online sites for managing resources. Each site publishes its available resources openly, and a lightweight open-data protocol is used to issue and respond to requests for resources between sites in the network.
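    The mechanism described, each site openly publishing its available resources as Linked Data for peers to query, can be sketched with rdflib as below; the relief vocabulary is a made-up namespace, not Sierra's actual schema or protocol.

```python
# Minimal sketch, assuming rdflib: publish a site's available resources
# as Linked Data. The ex: vocabulary is made up, not Sierra's schema.
from rdflib import RDF, Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/relief#")
g = Graph()
g.bind("ex", EX)

# One available resource at this site: 200 blankets.
item = EX["blankets-batch-1"]
g.add((item, RDF.type, EX.Resource))
g.add((item, EX.kind, Literal("blanket")))
g.add((item, EX.quantity, Literal(200, datatype=XSD.integer)))

# Serialize so peer sites can fetch, merge, and query the offer.
print(g.serialize(format="turtle"))
```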