
    Framework for Security Transparency in Cloud Computing

    The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures put in place to protect assets and meet customer expectations. It establishes trust in the service relationship between cloud service providers and customers; without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Insufficient security transparency is also considered an added level of risk, increasing the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement their security obligations. The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing studies mostly approach security transparency from the cloud providers' perspective, while other works have contributed feasible techniques for comparing and selecting cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users' point of view. In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation from organizational and technical perspectives and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, driven by its scalability and perceived effectiveness, underlines the dire need for research in this area. This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing by proposing the following. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework that integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities, and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes a tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied.
    The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback is collected from stakeholders and analysed against criteria such as ease of use, relevance, and usability. The results of the analysis illustrate the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
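
    To make the continuous-inspection idea concrete, here is a minimal Python sketch of checking provider-supplied evidence against predefined customer requirements and flagging deficiencies for remedial action. All identifiers, requirement structures, and evidence values below are hypothetical illustrations, not the thesis's actual tool.

        from dataclasses import dataclass

        @dataclass
        class Requirement:
            """A predefined customer security requirement (hypothetical shape)."""
            req_id: str
            description: str
            expected: str  # evidence value the customer expects to see

        def assess_evidence(requirements, evidence):
            """Compare provider-supplied evidence against customer requirements.

            Returns a list of deficiencies, each with a suggested remedial action.
            """
            deficiencies = []
            for req in requirements:
                actual = evidence.get(req.req_id)  # None if no evidence supplied
                if actual != req.expected:
                    deficiencies.append({
                        "requirement": req.req_id,
                        "description": req.description,
                        "expected": req.expected,
                        "observed": actual,
                        "remedial_action": f"Request provider remediation of {req.req_id}",
                    })
            return deficiencies

        # Example: evidence collected from a provider, checked on a schedule.
        reqs = [
            Requirement("ENC-01", "Data encrypted at rest", "AES-256"),
            Requirement("LOG-02", "Audit logs retained 12 months", "12 months"),
        ]
        evidence = {"ENC-01": "AES-256", "LOG-02": "3 months"}
        for d in assess_evidence(reqs, evidence):
            print(d["requirement"], "deficient:", d["observed"], "!=", d["expected"])

    Re-running such a check whenever fresh evidence arrives is what turns a one-off audit into the continuous inspection the framework calls for.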

    Semantic hierarchies for extracting, modeling, and connecting compliance requirements in information security control standards

    Companies and government organizations are increasingly compelled, if not required by law, to ensure that their information systems comply with various federal and industry regulatory standards, such as the NIST Special Publication on Security Controls for Federal Information Systems (NIST SP 800-53) or the Common Criteria (ISO 15408-2). Such organizations operate business- or mission-critical systems where a lack of, or lapse in, security protections translates to serious confidentiality, integrity, and availability risks that, if exploited, could result in information disclosure, loss of money, or, at worst, loss of life. To mitigate these risks and ensure that their information systems meet regulatory standards, organizations must be able to (a) contextualize regulatory documents in a way that extracts the relevant technical implications for their systems, (b) formally represent their systems and demonstrate that they meet the extracted requirements following an accreditation process, and (c) ensure that all third-party systems, which may exist outside of the information system enclave as web or cloud services, also implement appropriate security measures consistent with organizational expectations. This paper introduces a step-wise process, based on semantic hierarchies, that systematically extracts relevant security requirements from control standards to build a certification baseline for organizations to use in conjunction with formal methods and service agreements for accreditation. The approach is demonstrated through a case study of all audit-related controls in SP 800-53, ISO 15408-2, and related documents. The accuracy, applicability, consistency, and efficacy of the approach were evaluated using controlled qualitative and quantitative methods in two separate studies.
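
    To illustrate the general idea, the following is a small Python sketch of a semantic hierarchy that links controls to extracted technical requirements and flattens them into a certification baseline. The control identifiers (NIST SP 800-53 AU-2 and AU-9, ISO 15408-2 FAU families) are real, but the hierarchy shape and requirement wordings are illustrative, not the paper's actual output.

        # Illustrative semantic hierarchy for audit-related controls.
        hierarchy = {
            "AU-2 Event Logging": {
                "extracted_requirements": [
                    "System identifies the event types to be logged",
                    "Logged event types are reviewed and updated periodically",
                ],
                "related_controls": ["ISO 15408-2 FAU_GEN.1"],
            },
            "AU-9 Protection of Audit Information": {
                "extracted_requirements": [
                    "Audit records are protected from unauthorized modification",
                ],
                "related_controls": ["ISO 15408-2 FAU_STG.1"],
            },
        }

        def certification_baseline(h):
            """Flatten the hierarchy into a baseline checklist for accreditation."""
            return [(control, req)
                    for control, node in h.items()
                    for req in node["extracted_requirements"]]

        for control, req in certification_baseline(hierarchy):
            print(f"{control}: {req}")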

    Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination

    Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making and a wide range of Artificial Intelligence (AI) services across major corporations such as Google, Walmart, and AirBnb. KGs complement Machine Learning (ML) algorithms by providing data context and semantics, thereby enabling further inference and question-answering capabilities. The integration of KGs with neural learning (e.g., Large Language Models (LLMs)) is currently a topic of active research, commonly referred to as neuro-symbolic AI. Despite the numerous benefits of KG-based AI, its growing ubiquity within online services may result in the loss of self-determination for citizens, a fundamental societal issue. The more we rely on these technologies, which are often centralised, the less citizens will be able to determine their own destinies. To counter this threat, AI regulation, such as the European Union (EU) AI Act, is being proposed in certain regions. Such regulation sets out what technologists need to do, leading to questions such as: How can the output of AI systems be trusted? What is needed to ensure that the data fuelling these artefacts, and their inner workings, are transparent? How can AI be made accountable for its decision-making? This paper conceptualises the foundational topics and research pillars needed to support KG-based AI for self-determination. Drawing upon this conceptual framework, challenges and opportunities for citizen self-determination are illustrated and analysed in a real-world scenario. As a result, we propose a research agenda aimed at accomplishing the recommended objectives.
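
    As a toy illustration of how a KG can supply context and provenance to an AI answer (supporting the trust and accountability questions above), consider the following Python sketch. The triples, entities, and query helper are entirely hypothetical and are not drawn from the paper.

        # Toy knowledge graph as subject-predicate-object triples (invented data).
        kg = [
            ("AliceCorp", "operates_in", "EU"),
            ("AliceCorp", "uses_ai_system", "CreditScorer"),
            ("CreditScorer", "risk_category", "high"),
        ]

        def query(subject, predicate):
            """Retrieve objects for a (subject, predicate) pair, with provenance."""
            return [(o, (subject, predicate, o)) for s, p, o in kg
                    if s == subject and p == predicate]

        # A KG lookup grounds the answer and exposes the supporting triple,
        # so the decision can be audited rather than taken on faith.
        answer, provenance = query("CreditScorer", "risk_category")[0]
        print(f"Risk category: {answer} (supported by triple {provenance})")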

    System Security Assurance: A Systematic Literature Review

    System security assurance provides the confidence that the security features, practices, procedures, and architecture of software systems mediate and enforce the security policy and are resilient against security failures and attacks. Alongside the significant benefits of security assurance, the evolution of new information and communication technology (ICT) introduces new challenges for information protection. Security assurance methods based on traditional tools, techniques, and procedures may fail to account for new challenges because of poor requirement specifications, their static nature, and poor development processes. The Common Criteria (CC), commonly used for the security evaluation and certification process, also comes with many limitations and challenges. In this paper, extensive efforts have been made to study the state of the art, limitations, and future research directions for security assurance of ICT and cyber-physical systems (CPS) across a wide range of domains. We conducted a systematic review of the requirements, processes, and activities involved in system security assurance, including security requirements, security metrics, systems and environments, and assurance methods. We highlighted the challenges and gaps identified in the existing literature on system security assurance, together with the corresponding solutions. Finally, we discussed the limitations of present methods and future research directions.

    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The broadening dependency and reliance that modern societies place on essential services provided by Critical Infrastructures increases the relevance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance audit processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing contributes to checking whether security measures are in place and compliant with standards and internal policies. Forensics assists the investigation of past security incidents. Since these two areas significantly overlap in terms of data sources, tools, and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. This survey focuses on relevant contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems in terms of handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes, with adequate performance and reliability. The achieved results produced a taxonomy in the field of FCA whose key categories denote the relevant topics in the literature. The collected knowledge also resulted in the establishment of a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.

    Hacking SIEMs to Catch Hackers: Decreasing the Mean Time to Respond to Network Security Events with a Novel Threat Ontology in SIEM Software

    Information security is plagued with increasingly sophisticated and persistent threats to communication networks. The development of new threat tools or vulnerability exploits often outpaces advancements in network security detection systems. As a result, detection systems often compensate by over-reporting partial detections of routine network activity to security analysts for further review. Such alarms seldom contain adequate forensic data for analysts to accurately validate alerts to other stakeholders without lengthy investigations. Security analysts consequently ignore the vast majority of network security alarms provided by sensors, resulting in security breaches that might otherwise have been prevented. Security Information and Event Management (SIEM) software has been introduced in an effort to enable data correlation across multiple sensors, with the intent of producing fewer security alerts of little forensic value and more alerts that accurately reflect malicious actions. However, the normalization frameworks found in current SIEM systems do not accurately depict modern threat activities. Recent network security research has therefore introduced the concept of a "kill chain" model designed to represent threat activities based upon patterns of action, known indicators, and methodical intrusion phases. Many researchers hypothesized that such a model would realize the desired goals of SIEM software. The focus of this thesis is the implementation of a "kill chain" framework within SIEM software. A novel kill chain model was developed and implemented within a commercial SIEM system through modifications to the existing SIEM database. These modifications resulted in a new log ontology capable of normalizing security sensor data in accordance with modern threat research. New SIEM correlation rules were developed using the novel log ontology and compared against existing vendor-recommended correlation rules based on the default model. The novel log ontology produced promising results, indicating improved detection rates, more descriptive security alarms, and fewer false positive alarms. These improvements were assessed to provide improved visibility and more efficient investigation processes for security analysts, ultimately reducing the mean time required to detect and escalate security incidents.
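
    As a rough illustration of the approach described (not the thesis's actual ontology or correlation rules), the following Python sketch normalizes sensor signatures into kill-chain phases and escalates only hosts whose activity spans multiple phases. The phase names follow the widely used Lockheed Martin Cyber Kill Chain; the signature mapping and threshold are invented for the example.

        # Kill-chain phases in intrusion order (Lockheed Martin naming).
        PHASES = ["reconnaissance", "weaponization", "delivery", "exploitation",
                  "installation", "command_and_control", "actions_on_objectives"]

        # Illustrative log ontology: raw sensor signatures -> kill-chain phase.
        ONTOLOGY = {
            "port_scan": "reconnaissance",
            "phishing_email": "delivery",
            "shellcode_exec": "exploitation",
            "beacon_traffic": "command_and_control",
        }

        def correlate(events, min_phases=2):
            """Alarm on hosts whose events span multiple kill-chain phases.

            Correlating across phases suppresses isolated, low-value alerts
            while surfacing activity consistent with a methodical intrusion.
            """
            seen = {}
            for host, signature in events:
                phase = ONTOLOGY.get(signature)  # normalize via the ontology
                if phase:
                    seen.setdefault(host, set()).add(phase)
            return {host: sorted(phases, key=PHASES.index)
                    for host, phases in seen.items() if len(phases) >= min_phases}

        events = [("10.0.0.5", "port_scan"), ("10.0.0.5", "shellcode_exec"),
                  ("10.0.0.9", "port_scan")]
        print(correlate(events))  # only 10.0.0.5 spans enough phases to alarm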

    Development and Validation of a Proof-of-Concept Prototype for Analytics-based Malicious Cybersecurity Insider Threat in a Real-Time Identification System

    Insider threat has continued to be one of the most difficult cybersecurity threat vectors to detect with contemporary technologies. Most organizations apply standard technology-based practices to detect unusual network activity. While there have been significant advances in intrusion detection systems (IDS) as well as security incident and event management solutions (SIEM), these technologies fail to take into consideration the human aspects of personality and emotion in computer use and network activity, even though insider threats are human-initiated. External influences affect how an end-user interacts with both colleagues and organizational resources. Taking into consideration external influences, such as personality and changes in organizational policies and structure, along with unusual technical activity analysis, would be an improvement over contemporary detection tools used for identifying at-risk employees. This would allow upper management or other organizational units to intervene before a malicious cybersecurity insider threat event occurs, or to mitigate it quickly once initiated. The main goal of this research study was to design, develop, and validate a proof-of-concept prototype for a malicious cybersecurity insider threat alerting system to assist in the rapid detection and prediction of human-centric precursors to malicious cybersecurity insider threat activity. Disgruntled employees or end-users wishing to cause harm to the organization may do so by abusing the trust given to them through their access to available network and organizational resources. Reports on malicious insider threat actions indicate that insider threat attacks make up roughly 23% of all cybercrime incidents, resulting in $2.9 trillion in employee fraud losses globally. The damage and negative impact that insider threats cause were reported to be higher than those of outsider or other types of cybercrime incidents. Consequently, this study utilized weighted indicators to measure and correlate simulated user activity with possible precursors to malicious cybersecurity insider threat attacks. The study consisted of a mixed-method approach utilizing an expert panel, developmental research, and quantitative data analysis using the developed tool on a simulated data set. To ensure the validity and reliability of the indicators, a panel of subject matter experts (SMEs) reviewed the indicators and indicator categorizations collected from prior literature, following the Delphi technique. The SMEs' responses were incorporated into the development of a proof-of-concept prototype. Once the proof-of-concept prototype was completed and fully tested, an empirical simulation research study was conducted utilizing simulated user activity within a 16-month time frame. The results of the empirical simulation study were analyzed and presented, and recommendations resulting from the study were also provided.
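
    As a simplified illustration of weighted-indicator scoring, a Python sketch might look like the following. The indicators, weights, and alerting threshold are invented for the example and are not the study's validated set.

        # Hypothetical weighted indicators for insider-threat precursors.
        INDICATORS = {
            "after_hours_access": 0.2,
            "bulk_file_download": 0.3,
            "policy_violation": 0.25,
            "negative_sentiment": 0.25,  # human-centric signal, e.g. HR reports
        }

        def threat_score(observations):
            """Weighted sum of observed indicator intensities (each in [0, 1])."""
            return sum(INDICATORS[name] * value
                       for name, value in observations.items()
                       if name in INDICATORS)

        # A user triggering technical indicators plus a human-centric one.
        obs = {"after_hours_access": 1.0, "bulk_file_download": 0.8,
               "negative_sentiment": 0.6}
        score = threat_score(obs)
        if score >= 0.5:  # threshold would be tuned empirically
            print(f"Flag for review: score={score:.2f}")

    Combining human-centric and technical indicators in one score is what lets such a system surface at-risk users before a purely network-based detector would.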

    Resources-Events-Agents Design Theory: A Revolutionary Approach to Enterprise System Design

    Enterprise systems typically include constructs such as ledgers and journals with debit and credit entries as central pillars of the systems' architecture, due in part to accountants and auditors who demand those constructs. At best, structuring systems with such constructs as base objects results in storing the same data at multiple levels of aggregation, which creates inefficiencies in the database. At worst, basing systems on such constructs destroys details that are unnecessary for accounting but that may facilitate decision making by other enterprise functional areas. McCarthy (1982) proposed the resources-events-agents (REA) framework as an alternative structure for a shared data environment more than thirty years ago, and scholars have further developed it such that it is now a robust design theory. Despite this legacy, the broader IS community has not widely researched REA. In this paper, we discuss REA's genesis and primary constructs, provide a history of REA research, discuss REA's impact on practice, and speculate as to what the future may hold for REA-based enterprise systems. We invite IS researchers to consider integrating REA constructs with other theories and various emerging technologies to help advance the future of information systems and business research.
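
    For readers unfamiliar with REA, the following minimal Python sketch contrasts the idea with debit/credit constructs: an economic event is stored at full detail, directly linking a resource to the participating agents, so ledger-style views can be derived by aggregation rather than stored as base objects. The class shapes are illustrative only, not McCarthy's formal specification.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str          # e.g. "Cash", "Inventory"

        @dataclass
        class Agent:
            name: str          # e.g. "Customer", "Salesperson"

        @dataclass
        class Event:
            kind: str          # e.g. "Sale", "Cash Receipt"
            resource: Resource
            amount: float
            provider: Agent    # agent giving up the resource
            receiver: Agent    # agent receiving the resource

        # A sale stored at full detail; journals and ledgers can be derived
        # as aggregations, so no decision-relevant detail is destroyed.
        sale = Event("Sale", Resource("Inventory"), 500.0,
                     Agent("AliceCo"), Agent("Customer #42"))
        print(f"{sale.kind}: {sale.amount} of {sale.resource.name} "
              f"from {sale.provider.name} to {sale.receiver.name}")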