
    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The growing dependence of modern societies on essential services provided by Critical Infrastructures increases the importance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not only at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance audit processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing checks whether security measures are in place and compliant with standards and internal policies, while forensics assists the investigation of past security incidents. Since these two areas overlap significantly in terms of data sources, tools, and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. The survey focuses on contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems, in terms of handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes, with adequate performance and reliability. The results include a taxonomy of the FCA field, whose key categories denote the relevant topics in the literature, and a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.
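A core premise of a converged FCA platform is that forensic timelines and compliance checks can share one ingestion step over heterogeneous event data. A minimal sketch of that idea, with field names and the compliance rule invented purely for illustration (they are not from the survey):

```python
# Hypothetical FCA ingestion sketch: normalize heterogeneous events into a
# common schema, then reuse the same records for a forensic timeline and a
# compliance check. All field names and rules are illustrative assumptions.
from datetime import datetime, timezone

def normalize(raw: dict) -> dict:
    """Map a heterogeneous event record onto a minimal common schema."""
    return {
        "ts": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "source": raw.get("device", "unknown"),
        "action": raw.get("event", "").lower(),
    }

def compliant(event: dict, allowed_actions: set) -> bool:
    """Toy compliance rule: only whitelisted actions are permitted."""
    return event["action"] in allowed_actions

raw_events = [
    {"epoch": 1700000000, "device": "plc-01", "event": "WRITE"},
    {"epoch": 1700000060, "device": "hmi-02", "event": "LOGIN"},
]
# Forensic view: a time-ordered timeline of normalized events.
timeline = sorted((normalize(e) for e in raw_events), key=lambda e: e["ts"])
# Compliance view: the same normalized records, filtered against a policy.
violations = [e for e in timeline if not compliant(e, {"read", "login"})]
```

The point of the sketch is the reuse: once events are normalized, both analysis paths consume the same records, which is the overlap the survey argues makes a merged FCA framework viable.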

    Automation for network security configuration: state of the art and research trends

    The size and complexity of modern computer networks are progressively increasing, as a consequence of novel architectural paradigms such as the Internet of Things and network virtualization. Consequently, manual orchestration and configuration of network security functions is no longer feasible, in an environment where cyber attacks can dramatically exploit breaches caused by even minimal configuration errors. A new frontier is therefore the introduction of automation in network security configuration, i.e., automatically designing the architecture of security services and the configurations of network security functions, such as firewalls, VPN gateways, etc. This opportunity has been enabled by modern computer network technologies, such as virtualization. In view of these considerations, the motivations for introducing automation in network security configuration are first presented, alongside the key automation enablers. Then, the current state of the art in this context is surveyed, focusing on both the achieved improvements and the current limitations. Finally, possible future trends in the field are illustrated.
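The automation task the abstract describes, turning a high-level security policy into concrete configurations for functions such as firewalls, can be sketched in a few lines. The policy format and rule syntax below are assumptions for illustration, not any real tool's interface:

```python
# Hypothetical policy-to-rules compiler: high-level reachability policies
# are translated into an ordered firewall rule list with a default-deny
# tail. Policy fields and rule syntax are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    src: str      # source network segment
    dst: str      # destination network segment
    port: int     # TCP destination port
    allow: bool   # permit or block this flow

def to_rules(policies: list[Policy]) -> list[str]:
    """Compile policies into ordered rules, ending with default deny."""
    rules = [
        f"{'ACCEPT' if p.allow else 'DROP'} {p.src} -> {p.dst} tcp/{p.port}"
        for p in policies
    ]
    rules.append("DROP any -> any */*")  # default deny closes the chain
    return rules

rules = to_rules([
    Policy("dmz", "internal", 443, True),
    Policy("guest", "internal", 22, False),
])
```

Real approaches surveyed in this area must additionally verify the generated configuration (e.g., absence of shadowed or conflicting rules), which is where most of the research complexity lies.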

    Modern computing: Vision and challenges

    Over the past six decades, the computing systems field has experienced profound transformations, impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI on edge devices. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    UMSL Bulletin 2022-2023

    The 2022-2023 Bulletin and Course Catalog for the University of Missouri-St. Louis.

    Architectural Vision for Quantum Computing in the Edge-Cloud Continuum

    Quantum processing units (QPUs) are currently available exclusively from cloud vendors. However, with recent advancements, hosting QPUs will soon be possible everywhere. Existing work has yet to draw on research in edge computing to explore systems exploiting mobile QPUs, or to examine how hybrid applications can benefit from distributed heterogeneous resources. Hence, this work presents an architecture for quantum computing in the edge-cloud continuum. We discuss the necessity, challenges, and solution approaches for extending existing work on classical edge computing to integrate QPUs. We describe how warm-starting allows defining workflows that exploit the hierarchical resources spread across the continuum. Then, we introduce a distributed inference engine with hybrid classical-quantum neural networks (QNNs) to aid system designers in accommodating applications with complex requirements that incur the highest degree of heterogeneity. We propose solutions focusing on classical layer partitioning and quantum circuit cutting to demonstrate the potential of utilizing classical and quantum computation across the continuum. To evaluate the importance and feasibility of our vision, we provide a proof of concept that exemplifies how extending a classical partition method to integrate quantum circuits can improve the solution quality. Specifically, we implement a split neural network with optional hybrid QNN predictors. Our results show that extending classical methods with QNNs is viable and promising for future work.
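The layer-partitioning idea behind the split neural network can be illustrated in miniature: run the first layers on the edge device, transmit the intermediate activation, and finish in the cloud, where the remaining head could be a hybrid QNN predictor. The layers and cut point below are stand-ins, not the paper's model:

```python
# Toy sketch of split inference across the edge-cloud continuum.
# The three "layers" are arbitrary arithmetic stand-ins for network layers;
# the cut point is an illustrative assumption.

def make_layers():
    """Return a toy sequential model as a list of layer functions."""
    return [lambda x: x * 2, lambda x: x + 3, lambda x: x * x]

def run_split(x: float, cut: int):
    """Run layers [0, cut) on the edge and layers [cut, end) in the cloud."""
    layers = make_layers()
    for layer in layers[:cut]:      # edge side
        x = layer(x)
    intermediate = x                # activation shipped edge -> cloud
    for layer in layers[cut:]:      # cloud side; could be a hybrid QNN head
        x = layer(x)
    return intermediate, x

inter, out = run_split(1.0, cut=2)  # inter = 5.0, out = 25.0
```

In a real deployment the choice of `cut` trades edge compute against the size of the transmitted activation, which is exactly the optimization a partition method across the continuum must solve.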

    Priority-Driven Differentiated Performance for NoSQL Database-As-a-Service

    Designing data stores for native Cloud Computing services brings a number of challenges, especially if the Cloud Provider wants to offer database services capable of controlling the response time for specific customers. Such services must handle requests from heterogeneous data-driven applications with conflicting responsiveness requirements. For instance, a batch processing workload does not require the same level of responsiveness as a time-sensitive one, and its coexistence may interfere with the responsiveness of time-sensitive workloads such as online video gaming, virtual reality, and cloud-based machine learning. This paper presents a modification to the popular MongoDB NoSQL database to enable differentiated per-user/request performance on a priority basis, by leveraging CPU scheduling and synchronization mechanisms available within the Operating System. This is achieved with minimally invasive changes to the source code and without affecting the performance and behavior of the database when the new feature is not in use. The proposed extension has been integrated with the access-control model of MongoDB for secure and controlled access to the new capability. Extensive experimentation with realistic workloads demonstrates how the proposed solution is able to reduce the response times for high-priority users/requests, with respect to lower-priority ones, in scenarios with mixed-priority clients accessing the data store.
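The scheduling policy at the heart of the idea, serve higher-priority clients' requests before lower-priority ones, can be modeled with a priority queue. This is only a sketch of the policy; the paper's actual mechanism works inside MongoDB via OS-level CPU scheduling and synchronization primitives, not an application-level queue:

```python
# Illustrative model of per-user priority dispatch (not the paper's
# implementation): requests are served strictly by priority, with FIFO
# order preserved among requests of equal priority.
import heapq
import itertools

class PriorityDispatcher:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # FIFO tie-break within a priority

    def submit(self, user_priority: int, request: str):
        # Lower number = higher priority; the counter keeps submission
        # order stable for equal priorities (heapq is not stable alone).
        heapq.heappush(self._heap, (user_priority, next(self._order), request))

    def next_request(self) -> str:
        return heapq.heappop(self._heap)[2]

d = PriorityDispatcher()
d.submit(2, "batch-scan")        # low-priority analytics client
d.submit(0, "realtime-read")     # high-priority time-sensitive client
d.submit(1, "analytics-query")
served = [d.next_request() for _ in range(3)]
```

The tie-break counter matters: without it, equal-priority requests from the same user could be reordered, violating the per-client fairness the differentiated-performance goal implies.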

    Towards a Digital Capability Maturity Framework for Tertiary Institutions

    Background: The Digital Capability (DC) of an Institution is the extent to which the institution's culture, policies, and infrastructure enable and support digital practices (Killen et al., 2017), and maturity is the continuous improvement of those capabilities. As technology continues to evolve, it is likely to give rise to constant changes in teaching and learning, potentially disrupting Tertiary Education Institutions (TEIs) and making existing organisational models less effective. An institution’s ability to adapt to continuously changing technology depends on the change in culture and leadership decisions within the individual institutions. Change without structure leads to inefficiencies, evident across the Nigerian TEI landscape. These inefficiencies can be attributed mainly to a lack of clarity and agreement on a development structure. Objectives: This research aims to design a structure with a pathway to maturity, to support the continuous improvement of DC in TEIs in Nigeria and consequently improve the success of digital education programmes. Methods: I started by conducting a Systematic Literature Review (SLR) investigating the body of knowledge on DC, its composition, the relationship between its elements and their respective impact on the Maturity of TEIs. Findings from the review led me to investigate further the key roles instrumental in developing Digital Capability Maturity in Tertiary Institutions (DCMiTI). The results of these investigations formed the initial ideas and constructs upon which the proposed structure was built. I then explored a combination of quantitative and qualitative methods to substantiate the initial constructs and gain a deeper understanding of the relationships between elements/sub-elements. Next, I used triangulation as a vehicle to expand the validity of the findings by replicating the methods in a case study of TEIs in Nigeria. 
Finally, after using the validated constructs and knowledge base to propose a structure based on CMMI concepts, I conducted an expert panel workshop to test the model’s validity. Results: I consolidated the body of knowledge from the SLR into a universal classification of 10 elements, each comprising sub-elements, and proposed a classification for DCMiTI. The elements/sub-elements in the classification indicate the success factors for digital maturity, which were also found to positively impact the ability to design, deploy, and sustain digital education. These findings were confirmed in a UK university and triangulated in a case study of Northwest Nigeria. The case study confirmed the literature findings on the status of DCMiTI in Nigeria and provided sufficient evidence to suggest that a maturity structure would be a well-suited solution to supporting DCM in the region. I thus scoped, designed, and populated a domain-specific framework for DCMiTI, configured to support the educational landscape in Northwest Nigeria. Conclusion: The proposed DCMiTI framework enables TEIs to assess their maturity level across the various capability elements and reports on DCM as a whole. It provides guidance on the criteria that must be satisfied to achieve higher levels of digital maturity. The framework received expert validation, as domain experts agreed that it was applicable to developing DCMiTI and would be a valuable tool to support TEIs in delivering successful digital education. Recommendations were made to engage in further iterations of testing by deploying the proposed framework in TEIs, to confirm the extent of its generalisability and acceptability.