
    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The broadening dependency and reliance that modern societies have on essential services provided by Critical Infrastructures is increasing the relevance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance audit processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing contributes to checking whether security measures are in place and compliant with standards and internal policies. Forensics assists the investigation of past security incidents. Since these two areas overlap significantly in terms of data sources, tools, and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. This survey focuses on relevant contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems, in terms of handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes, with adequate performance and reliability. The results produced a taxonomy of the FCA field whose key categories denote the relevant topics in the literature. The collected knowledge also resulted in the establishment of a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling the downstream management and optimization of the network. Although networks inherently have abundant amounts of monitoring data, accessing and effectively measuring it is another story. The challenges exist in many aspects. First, network monitoring data is often inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, it can be very expensive to carry out effective data collection covering a large-scale network system, considering the growing size of networks, e.g., the number of cells in a radio network and the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in a network element to support the measurement function are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for the aforementioned challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system based on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution). Second, we develop GENDT, an efficient drive testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system with latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.
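    The abstract does not detail how GENDT quantifies model uncertainty, but one standard way to do so, sketched below under that assumption, is ensemble disagreement: predict with several models and spend the limited drive-testing budget where their predictions spread the most. All names and the toy models here are illustrative, not the thesis's actual system.

```python
import numpy as np

def ensemble_predict(models, x):
    """Predict with every ensemble member; the spread (std) across members
    quantifies model uncertainty at each input."""
    preds = np.array([m(x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

def pick_sites_to_measure(models, sites, budget):
    """Spend the measurement budget on the sites where the ensemble
    disagrees most (highest predictive uncertainty)."""
    _, std = ensemble_predict(models, sites)
    return list(np.argsort(std)[::-1][:budget])

# Toy ensemble: three linear "signal strength" models with different slopes,
# so disagreement grows with distance from the origin.
models = [lambda x, a=a: a * x for a in (0.8, 1.0, 1.2)]
sites = np.array([0.0, 1.0, 5.0, 10.0])
chosen = pick_sites_to_measure(models, sites, budget=2)  # indices of the two most uncertain sites
```

    In this sketch the two farthest sites are selected, since the linear members diverge most there; a real system would replace the toy ensemble with trained graph-neural-network predictors.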

    Configuration Management of Distributed Systems over Unreliable and Hostile Networks

    Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems. This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration. Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without a stable topology, due to the asynchronous nature of the Hidden Master architecture. The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life: for defining new configurations in unexpected situations using the base resources, and for abstracting those using standard language features; and that such a system seems easy to learn. Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
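    The core property the abstract relies on, an idempotent resource defined in an imperative general-purpose language, can be illustrated with a minimal sketch: a base resource that checks the current state and acts only when it differs from the desired state. This is an assumption-laden illustration of the general idea, not the thesis prototype's actual API.

```python
import os
import tempfile

def file_resource(path, content):
    """Idempotent base resource: ensure `path` holds exactly `content`.
    Returns True if a change was made, False if the system was already compliant."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # desired state already holds: take no action
    with open(path, "w") as f:
        f.write(content)
    return True

# Applying the same resource twice changes the system at most once.
path = os.path.join(tempfile.mkdtemp(), "motd")
first = file_resource(path, "welcome\n")   # creates the file -> True
second = file_resource(path, "welcome\n")  # already compliant -> False
```

    Because each resource converges on a desired state rather than replaying actions, re-running a whole configuration is safe, which is what makes such resources composable and abstractable with ordinary language features.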

    Conversations on Empathy

    In the aftermath of a global pandemic, amidst new and ongoing wars, genocide, inequality, and staggering ecological collapse, some in the public and political arena have argued that we are in desperate need of greater empathy, be this with our neighbours, refugees, war victims, the vulnerable, or disappearing animal and plant species. This interdisciplinary volume asks the crucial questions: How does a better understanding of empathy contribute, if at all, to our understanding of others? How is it implicated in the ways we perceive, understand, and constitute others as subjects? Conversations on Empathy examines how empathy might be enacted and experienced either as a way to highlight forms of otherness or, instead, to overcome what might otherwise appear to be irreducible differences. It explores the ways in which empathy enables us to understand, imagine, and create sameness and otherness in our everyday intersubjective encounters, focusing on a varied range of "radical others", others who are perceived as being dramatically different from oneself. With a focus on the importance of empathy for understanding difference, the book contends that the role of empathy is critical, now more than ever, for thinking about local and global challenges of interconnectedness, care, and justice.

    Effects of municipal smoke-free ordinances on secondhand smoke exposure in the Republic of Korea

    Objective: To reduce premature deaths due to secondhand smoke (SHS) exposure among non-smokers, the Republic of Korea (ROK) adopted changes to the National Health Promotion Act, which allowed local governments to enact municipal ordinances to strengthen their authority to designate smoke-free areas and levy penalty fines. In this study, we examined national trends in SHS exposure after the introduction of these municipal ordinances at the city level in 2010.
    Methods: We used interrupted time series analysis to assess whether the trends of SHS exposure in the workplace and at home, and the primary cigarette smoking rate, changed following the policy adjustment in the national legislation in the ROK. Population-standardized data for selected variables were retrieved from a nationally representative survey dataset and used to study the policy action's effectiveness.
    Results: Following the change in the legislation, SHS exposure in the workplace reversed course from an increasing trend (18% per year) before the introduction of these smoke-free ordinances to a decreasing trend (−10% per year) after their adoption and enforcement (β2 = 0.18, p-value = 0.07; β3 = −0.10, p-value = 0.02). SHS exposure at home (β2 = 0.10, p-value = 0.09; β3 = −0.03, p-value = 0.14) and the primary cigarette smoking rate (β2 = 0.03, p-value = 0.10; β3 = 0.008, p-value = 0.15) showed no significant changes over the sampled period. Although analyses stratified by sex showed that the allowance of municipal ordinances reduced SHS exposure in the workplace for both males and females, it did not affect the primary cigarette smoking rate as much, especially among females.
    Conclusion: Strengthening the role of local governments by giving them the authority to enact and enforce penalties on SHS exposure violations helped the ROK reduce SHS exposure in the workplace. However, smoking behaviors and related activities seemed to shift to less restrictive areas, such as on the streets and in apartment hallways, negating some of the effects of these ordinances. Future studies should investigate how smoke-free policies beyond public places can further reduce SHS exposure in the ROK.
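    The β2 and β3 coefficients reported above come from the standard segmented-regression form of interrupted time series analysis: β2 captures the level change at the intervention and β3 the change in slope after it. A minimal sketch of that model on synthetic data (the numbers below only mimic the reported pre-trend and slope change; they are not the study's data):

```python
import numpy as np

def fit_its(t, y, t0):
    """Segmented regression for an interrupted time series:
        y = b0 + b1*t + b2*post + b3*(t - t0)*post
    where post = 1 from the intervention time t0 onward, b2 is the level
    change at the intervention, and b3 is the change in slope after it."""
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [b0, b1, b2, b3]

# Synthetic series: a 0.18-per-step rising trend whose slope drops by 0.10
# per step at the intervention time t0 = 5.
t = np.arange(10, dtype=float)
y = 1.0 + 0.18 * t
y[t >= 5] += -0.10 * (t[t >= 5] - 5)
beta = fit_its(t, y, t0=5)  # recovers b1 ≈ 0.18 and b3 ≈ −0.10
```

    In practice the fit would use a statistical package that also reports standard errors and p-values for β2 and β3, as cited in the abstract.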

    Synergi: A Mixed-Initiative System for Scholarly Synthesis and Sensemaking

    Efficiently reviewing scholarly literature and synthesizing prior art are crucial for scientific progress. Yet the growing scale of publications and the burden of knowledge make synthesis of research threads more challenging than ever. While significant research has been devoted to helping scholars interact with individual papers, building research threads scattered across multiple papers remains a challenge. Most top-down synthesis approaches (including LLM-based ones) make it difficult to personalize and iterate on the output, while bottom-up synthesis is costly in time and effort. Here, we explore a new design space of mixed-initiative workflows. In doing so we develop a novel computational pipeline, Synergi, that ties together user input of relevant seed threads with citation graphs and LLMs, to expand and structure them, respectively. Synergi allows scholars to start with an entire threads-and-subthreads structure generated from papers relevant to their interests, and to iterate on and customize it as they wish. In our evaluation, we find that Synergi helps scholars efficiently make sense of relevant threads and broaden their perspectives, and increases their curiosity. We discuss future design implications for thread-based, mixed-initiative scholarly synthesis support tools. Comment: ACM UIST'2
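    The abstract describes a pipeline that expands user-chosen seed threads over a citation graph before an LLM structures the result. As a hedged sketch of just the expansion step (the function name, graph encoding, and hop limit are assumptions for illustration, not Synergi's actual design):

```python
from collections import deque

def expand_seeds(citation_graph, seeds, max_hops=1):
    """Expand user-chosen seed papers along the citation graph (breadth-first),
    collecting candidate papers that a downstream LLM step would then be
    prompted to organize into threads and subthreads."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        paper, hops = frontier.popleft()
        if hops == max_hops:
            continue  # hop budget exhausted along this path
        for cited in citation_graph.get(paper, []):
            if cited not in seen:
                seen.add(cited)
                frontier.append((cited, hops + 1))
    return seen

# Hypothetical toy graph: paper -> papers it cites.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
candidates = expand_seeds(graph, seeds=["A"], max_hops=1)  # {"A", "B", "C"}
```

    Bounding the expansion keeps the candidate set small enough to fit in an LLM prompt while still reaching beyond the papers the user supplied.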

    Making sense of solid for data governance and GDPR

    Solid is a radical new paradigm, based on decentralising control of data from central organisations to individuals, that seeks to empower individuals to have active control over who uses their data and how. In order to realise this vision, the use-cases and implementations of Solid also need to be consistent with the relevant privacy and data protection regulations, such as the GDPR. However, doing so first requires an understanding of all actors, roles, and processes involved in a use-case, which then need to be aligned with the GDPR's concepts to identify relevant obligations, whose compliance can then be investigated. To assist with this process, we describe Solid as a variation of 'cloud technology' and adapt the existing standardised terminologies and paradigms from ISO/IEC standards. We then investigate the applicability of the GDPR's requirements to Solid-based implementations, along with an exploration of how existing issues arising from GDPR enforcement also apply to Solid. Finally, we outline the path forward through specific extensions to Solid's specifications that mitigate known issues and enable the realisation of its benefits.

    University bulletin 2023-2024

    This catalog for the University of South Carolina at Beaufort lists information about the college, the academic calendar, admission policies, degree programs, faculty, and course descriptions.

    Improving patient safety by learning from near misses – insights from safety-critical industries

    Background: Patients are at risk of being harmed by the very processes meant to help them. To improve patient safety, healthcare organisations attempt to identify the factors that contribute to incidents and take action to optimise conditions to minimise repeats. However, improvements in patient safety have not matched those observed in other safety-critical industries. One difference between healthcare and other safety-critical industries may be how they learn from near misses when seeking to make safety improvements. Near misses are incidents that almost happened, but for an interruption in the sequence of events. Management of near misses includes their identification, reporting, and investigation, and the learning that results. Safety theory suggests that acting on near misses will lead to actions that help prevent incidents. However, evidence also suggests that healthcare has yet to embrace the learning potential that patient safety near misses offer. The aims of this research, in support of this thesis, were to explore how healthcare can best learn from patient safety near misses to improve patient safety, and to identify what guidance non-healthcare safety-critical industries, which have implemented effective near-miss management systems, can offer healthcare. As the research progressed, the aims were updated to include consideration of whether healthcare should seek to learn from patient safety near misses at all.
    Methods: This research took a mixed-methods approach augmented by scoping reviews of the healthcare (study 1) and non-healthcare safety-critical industry (study 3) literature. A qualitative case study (study 2) was undertaken to explore the management of patient safety near misses in the English National Health Service. Seventeen interviews were undertaken with patient safety leads across acute hospitals, ambulance trusts, mental health trusts, primary care, and national bodies. A questionnaire was also used to help access the views of frontline staff. A grounded theory study (study 4) was used to develop a set of principles, based on learning from non-healthcare safety-critical industries, around how near misses can best be managed. Thirty-five interviews were undertaken across aviation, maritime, and rail, with nuclear later added as per the theoretical sampling.
    Results: The scoping reviews contributed 125 healthcare and 108 non-healthcare safety-critical industry academic articles, published internationally between 2000 and 2022, to the evidence gained from the qualitative case study and grounded theory. Safety cultures and maturity with safety management processes were found to vary within and across the different industries, and there was a reluctance in healthcare to learn about safety and near misses from other industries. Healthcare has yet to establish effective processes to manage patient safety near misses. There is an absence of evidence that learning has led to improvements in patient safety. The definition of a patient safety near miss varies, and organisations focus their efforts on reporting and investigating incidents, with limited attention to patient safety near misses. In non-healthcare safety-critical industries, near-miss management is more established, but process maturity varies within and across industries. Near misses are often defined specifically for an industry, but there is limited evidence that learning from them has improved safety. Information about near misses is commonly aggregated and may contribute to company and industry safety management systems. Exploration of the definition of a patient safety near miss led to the identification of the features of a near miss; these features have not been previously defined in the manner presented in this thesis. A patient safety near miss is context-specific and complex, involves interruptions, highlights system vulnerabilities, and is delineated from an incident by whether events reach a patient. Across healthcare and non-healthcare safety-critical industries, the impact of learning from near misses is often assumed or extrapolated based on the common cause hypothesis. The hypothesis is regularly cited in the safety literature and is used as the basis for justifying a focus on patient safety near misses. However, the validity of the hypothesis has been questioned, and it has not been validated for different patient safety near miss and incident types.
    Conclusions: The research findings challenge long-held beliefs that learning from patient safety near misses will lead to improvements in patient safety. These beliefs are based on traditional safety theory that is unlikely to remain valid in the complexity of modern-day systems, where incidents are the result of multiple factors and can emerge without apparent warning. Further research is required to understand the relationship between learning from patient safety near misses and patient safety, and whether the common cause hypothesis is valid for different types of healthcare safety event. While there are questions about the value of learning directly from patient safety near misses, the contribution of near misses to safety management systems in non-healthcare safety-critical industries looks to be beneficial for safety improvement. Safety management systems have yet to be implemented in the National Health Service, and future research should look to understand how best this may be achieved and their value. In the meantime, patient safety near misses may help healthcare's understanding of systems and their optimisation to create barriers to incidents and build resilience. This research offers an evidence-based definition of a patient safety near miss and describes principles to support identification, reporting, prioritisation, investigation, aggregation, learning, and action to help improve patient safety.