Configuration Management of Distributed Systems over Unreliable and Hostile Networks
Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which grants full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
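One of the proposed improvements, expressing idempotent configuration in an imperative general-purpose language, can be illustrated with a minimal sketch. The resource function and file below are hypothetical examples, not the thesis prototype's actual API: the key property is that applying the same resource twice leaves the system unchanged after the first run.

```python
import os
import tempfile

def ensure_line(path: str, line: str) -> bool:
    """Idempotent resource: ensure `path` contains `line`.

    Returns True if a change was made, False if the system was
    already in the desired state (the idempotence check).
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # already converged; applying again changes nothing
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Applying the same resource twice: only the first run changes state.
with tempfile.TemporaryDirectory() as d:
    cfg = os.path.join(d, "sshd_config")
    first = ensure_line(cfg, "PermitRootLogin no")
    second = ensure_line(cfg, "PermitRootLogin no")
    print(first, second)  # True False
```

Because each resource checks the observed state before acting, a whole configuration written as ordinary imperative code can be re-run safely, and standard language features (functions, loops, modules) can abstract groups of resources.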
Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools in high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without stable topology due to the asynchronous nature of the Hidden Master architecture.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that imperative general purpose language can be used for idempotent configuration in real life, for defining new configurations in unexpected situations using the base resources, and abstracting those using standard language features; and that such a system seems easy to learn.
Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
Cloud Forensic: Issues, Challenges and Solution Models
Cloud computing is a web-based utility model that is becoming more popular every day with the emergence of the 4th Industrial Revolution; therefore, cybercrimes that affect web-based systems are also relevant to cloud computing. To conduct a forensic investigation into a cyber-attack, it is necessary to identify and locate the source of the attack as soon as possible. Although significant research has been done in this domain on obstacles and their solutions, research on approaches and strategies is still at an early stage. There are barriers at every stage of cloud forensics; therefore, before we can come up with a comprehensive way to deal with these problems, we must first understand cloud technology and its forensics environment. Although there are articles related to cloud forensics, no paper has yet consolidated the contemporary concerns and solutions related to cloud forensics. Throughout this chapter, we have looked at the cloud environment, as well as the threats and attacks to which it may be subjected. We have also looked at the approaches that cloud forensics may take, as well as the various frameworks and the practical challenges and limitations they may face when dealing with cloud forensic investigations.
Comment: 23 pages; 6 figures; 4 tables. Book chapter of the book titled "A Practical Guide on Security and Privacy in Cyber Physical Systems: Foundations, Applications and Limitations", World Scientific Series in Digital Forensics and Cybersecurity.
Secure storage systems for untrusted cloud environments
The cloud has become established for applications that need to be scalable and highly available. However, moving data to data centers owned and operated by a third party, i.e., the cloud provider, raises security concerns because the cloud provider could easily access and manipulate the data or program flow, preventing the cloud from being used for certain applications, such as medical or financial ones.
Hardware vendors are addressing these concerns by developing Trusted Execution
Environments (TEEs) that make the CPU state and parts of memory inaccessible from
the host software. While TEEs protect the current execution state, they do not provide security guarantees for data that does not fit or reside in the protected memory area, such as data sent over the network or kept in persistent storage.
In this work, we aim to address TEEs’ limitations in three different ways: first, we extend the trust of TEEs to persistent storage; second, we extend the trust to multiple nodes in a network; and third, we propose a compiler-based solution for accessing heterogeneous memory regions. More specifically,
• SPEICHER extends the trust provided by TEEs to persistent storage. SPEICHER
implements a key-value interface. Its design is based on LSM data structures, but
extends them to provide confidentiality, integrity, and freshness for the stored
data. Thus, SPEICHER can prove to the client that the data has not been tampered
with by an attacker.
• AVOCADO is a distributed in-memory key-value store (KVS) that extends the
trust that TEEs provide across the network to multiple nodes, allowing KVSs to
scale beyond the boundaries of a single node. On each node, AVOCADO carefully
divides data between trusted memory and untrusted host memory, to maximize
the amount of data that can be stored on each node. AVOCADO leverages the
fact that we can model network attacks as crash-faults to trust other nodes with
a hardened ABD replication protocol.
• TOAST is based on the observation that modern high-performance systems
often use several different heterogeneous memory regions that are not easily
distinguishable by the programmer. The number of regions is increased by the
fact that TEEs divide memory into trusted and untrusted regions. TOAST is a
compiler-based approach to unify access to different heterogeneous memory
regions and provides programmability and portability. TOAST uses a load/store interface to abstract most library interfaces for different memory regions.
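The confidentiality, integrity, and freshness guarantees that SPEICHER adds on top of an LSM store can be sketched in miniature: each stored value carries a MAC that also binds a monotonically increasing version kept inside the trusted side, so tampered or rolled-back data fails verification on read. This is an illustrative sketch under assumed names, not SPEICHER's actual design:

```python
import hmac
import hashlib

class AuthedKV:
    """Toy key-value store with integrity and freshness checks.

    Untrusted storage holds (value, version, tag); the trusted side
    keeps only the MAC key and the latest version per key.
    """
    def __init__(self, key: bytes):
        self._key = key
        self._trusted_versions = {}   # kept inside the "TEE"
        self._untrusted = {}          # simulates untrusted host storage

    def _tag(self, k: str, v: bytes, ver: int) -> bytes:
        msg = k.encode() + b"\x00" + v + ver.to_bytes(8, "big")
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def put(self, k: str, v: bytes):
        ver = self._trusted_versions.get(k, 0) + 1
        self._trusted_versions[k] = ver
        self._untrusted[k] = (v, ver, self._tag(k, v, ver))

    def get(self, k: str) -> bytes:
        v, ver, tag = self._untrusted[k]
        # Freshness: the stored version must match the trusted counter.
        if ver != self._trusted_versions[k]:
            raise ValueError("stale (rolled-back) value")
        # Integrity: recompute and compare the MAC in constant time.
        if not hmac.compare_digest(tag, self._tag(k, v, ver)):
            raise ValueError("tampered value")
        return v

kv = AuthedKV(b"example-mac-key")
kv.put("user:42", b"balance=100")
print(kv.get("user:42"))  # b'balance=100'
```

An attacker controlling the untrusted storage can replace a value with an older, correctly tagged copy, but the version check detects the rollback; this is the same class of guarantee the thesis's systems provide at scale.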
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attending economic hopes and fears, utopias and dystopias that are associated with the current and future development of artificial intelligence.
Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study
This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and maintaining ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies that alters our perception of the machine from an instrument of human engineering into a thinking peer, to elevate cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine’s ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and influenced by public policy theories, philosophical constructs, and societal initiatives.
Priority-Driven Differentiated Performance for NoSQL Database-As-a-Service
Designing data stores for native Cloud Computing services brings a number of challenges, especially if the Cloud Provider wants to offer database services capable of controlling the response time for specific customers. These requests may come from heterogeneous data-driven applications with conflicting responsiveness requirements. For instance, a batch processing workload does not require the same level of responsiveness as a time-sensitive one. Their coexistence may interfere with the responsiveness of the time-sensitive workload, such as online video gaming, virtual reality, and cloud-based machine learning. This paper presents a modification to the popular MongoDB NoSQL database to enable differentiated per-user/request performance on a priority basis by leveraging CPU scheduling and synchronization mechanisms available within the Operating System. This is achieved with minimally invasive changes to the source code and without affecting the performance and behavior of the database when the new feature is not in use. The proposed extension has been integrated with the access-control model of MongoDB for secure and controlled access to the new capability. Extensive experimentation with realistic workloads demonstrates how the proposed solution is able to reduce the response times for high-priority users/requests, with respect to lower-priority ones, in scenarios with mixed-priority clients accessing the data store
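The core mechanism described above, serving higher-priority requests first while leaving default behavior untouched when the feature is unused, can be sketched with a simple priority-based dispatcher. The class and priority values below are illustrative assumptions, not MongoDB's actual implementation, which works at the level of OS CPU scheduling and synchronization primitives:

```python
import heapq
import itertools

class PriorityDispatcher:
    """Dispatch queued requests strictly by client priority.

    Lower numbers run first; ties are served FIFO via a sequence
    counter, so equal-priority clients are not starved relative
    to each other.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, priority: int, request: str):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_request(self) -> str:
        _, _, request = heapq.heappop(self._heap)
        return request

d = PriorityDispatcher()
d.submit(5, "batch-analytics-scan")   # low-priority batch workload
d.submit(1, "game-state-read")        # high-priority latency-sensitive read
d.submit(5, "batch-report")
print(d.next_request())  # game-state-read
```

A time-sensitive request submitted after a batch one still jumps the queue, which is the per-user/request differentiation the paper targets; in the real system the priority would be derived from the access-control model rather than passed by the client directly.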
A Last-Level Defense for Application Integrity and Confidentiality
Our objective is to protect the integrity and confidentiality of applications
operating in untrusted environments. Trusted Execution Environments (TEEs) are
not a panacea. Hardware TEEs fail to protect applications against Sybil, Fork
and Rollback Attacks and, consequently, fail to preserve the consistency and
integrity of applications. We introduce a novel system, LLD, that enforces the
integrity and consistency of applications in a transparent and scalable
fashion. Our solution augments TEEs with instantiation control and rollback
protection. Instantiation control, enforced with TEE-supported leases,
mitigates Sybil/Fork Attacks without incurring the high costs of solving
crypto-puzzles. Our rollback detection mechanism does not need excessive
replication, nor does it sacrifice durability. We show that implementing these functionalities in the LLD runtime automatically protects applications and services, such as a popular DBMS.
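Instantiation control via leases, which the abstract names as LLD's defense against Sybil/Fork attacks, can be sketched as follows: a lease authority grants a time-bounded lease to at most one instance per identity, so a forked clone cannot obtain a second valid lease while the first is live. The class name and time-based expiry below are illustrative assumptions, not LLD's actual protocol:

```python
import time

class LeaseAuthority:
    """Grant at most one active lease per application identity."""
    def __init__(self, ttl: float):
        self.ttl = ttl
        self._leases = {}  # identity -> expiry timestamp

    def acquire(self, identity: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        expiry = self._leases.get(identity)
        if expiry is not None and expiry > now:
            return False  # a live instance already holds the lease
        self._leases[identity] = now + self.ttl
        return True

auth = LeaseAuthority(ttl=10.0)
print(auth.acquire("app-1", now=0.0))   # True: first instance admitted
print(auth.acquire("app-1", now=1.0))   # False: fork/clone rejected
print(auth.acquire("app-1", now=11.0))  # True: lease expired, renewal
```

The point of the design is that uniqueness is enforced by a cheap lease check rather than by solving crypto-puzzles, which is the cost saving the abstract claims over proof-of-work-style Sybil defenses.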
Serverless Cloud Computing: A Comparative Analysis of Performance, Cost, and Developer Experiences in Container-Level Services
Serverless cloud computing is a subset of cloud computing widely adopted to build modern web applications, in which the underlying server and infrastructure management duties are abstracted from customers to the cloud vendors. In serverless computing, customers must pay for the runtime consumed by their services, but they are exempt from paying for idle time. Prior to serverless containers, customers needed to provision, scale, and manage servers, which was a bottleneck for rapidly growing customer-facing applications where latency and scaling were a concern.
The viability of adopting a serverless platform for a web application regarding performance, cost, and developer experiences is studied in this thesis. Three serverless container-level services from AWS and GCP are employed in this study: GCP Cloud Run, GKE AutoPilot, and AWS EKS with AWS Fargate. Platform as a Service (PaaS) underpins the former, and Container as a Service (CaaS) the remainder. A single-page web application was created to perform incremental and spike load tests on those services to assess the performance differences. Furthermore, the cost differences are compared and analyzed. Lastly, the final element considered while evaluating the developer experiences is the complexity of using the services during the project implementation.
Based on the results of this research, it was determined that PaaS-based solutions are a high-performing, affordable alternative to CaaS-based solutions in circumstances where high levels of traffic are periodically anticipated but sporadic latency is not a concern. Given that this study has limitations, the author recommends additional research to strengthen it.
Intrusion detection system in software-defined networks
Double-degree Master's programme with UTFPR - Universidade Tecnológica Federal do Paraná.
Software-Defined Networking technologies represent a recent cutting-edge paradigm in network management, offering unprecedented flexibility and scalability. As the adoption of SDN continues to grow, so does the urgency of studying methods to enhance its security. Understanding and fortifying SDN security is critically important, given its pivotal role in the modern digital ecosystem. With the ever-evolving threat landscape, research into innovative security measures is essential to ensure the integrity, confidentiality, and availability of network resources in this dynamic and transformative technology, ultimately safeguarding the reliability and functionality of our interconnected world. This research presents a novel approach to enhancing security in Software-Defined Networking through the development of an initial Intrusion Detection System. The IDS offers a scalable solution, facilitating the transmission and storage of network traffic with robust support for failure recovery across multiple nodes. Additionally, an innovative analysis module incorporates artificial intelligence (AI) to predict the nature of network traffic, effectively distinguishing between malicious and benign data. The system integrates a diverse range of technologies and tools, enabling the processing and analysis of network traffic data from PCAP files, thus contributing to the reinforcement of SDN security.
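The AI-based analysis step, classifying traffic extracted from PCAP files as malicious or benign, can be sketched with a minimal feature-based classifier. The flow features, thresholds, and rule-based scoring below are illustrative assumptions standing in for the trained model, not the system described in this work:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A few per-flow features one might extract from a PCAP."""
    packets_per_second: float
    mean_payload_bytes: float
    syn_ratio: float  # fraction of packets with only the SYN flag set

def classify(flow: Flow) -> str:
    """Toy rule-based stand-in for the AI model: score suspicious
    traits and threshold the total."""
    score = 0.0
    if flow.packets_per_second > 1000:  # flood-like packet rate
        score += 1.0
    if flow.mean_payload_bytes < 10:    # near-empty probe packets
        score += 1.0
    if flow.syn_ratio > 0.8:            # SYN-scan signature
        score += 1.0
    return "malicious" if score >= 2.0 else "benign"

print(classify(Flow(5000, 0, 0.95)))  # malicious: SYN-flood profile
print(classify(Flow(40, 800, 0.05)))  # benign: normal bulk transfer
```

A learned model would replace the hand-set thresholds with parameters fitted to labeled traffic, but the pipeline shape (per-flow feature extraction from PCAPs, then a per-flow decision) is the same.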
Digital Innovations for the Circular Economy
Doctoral thesis (PhD) - Nord University, 2023.