
    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The growing dependency of modern societies on essential services provided by Critical Infrastructures increases the relevance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance auditing processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing contributes to checking whether security measures are in place and compliant with standards and internal policies, while forensics assists the investigation of past security incidents. Since these two areas overlap significantly in terms of data sources, tools, and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. This survey focuses on contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems, in terms of handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes, with adequate performance and reliability. The results include a taxonomy for the field of FCA, whose key categories denote the relevant topics in the literature, and a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.
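    To illustrate what such a converged FCA platform might look like at the data level, here is a minimal Python sketch, under assumptions of our own: the event schema, rule shape, and query shape are hypothetical, not taken from any surveyed framework. It shows how compliance checks and forensic timeline queries can share a single normalised event store.

```python
# Minimal sketch of the "converged platform" idea: heterogeneous events are
# normalised once, then shared by compliance checks and forensic queries.
# All names and fields here are illustrative, not from the surveyed frameworks.
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Iterable

@dataclass(frozen=True)
class Event:
    timestamp: datetime
    source: str    # e.g. a PLC, historian, or firewall identifier
    kind: str      # e.g. "login", "config_change"
    payload: str   # raw record, kept for evidential purposes

@dataclass
class ComplianceRule:
    name: str
    violates: Callable[[Event], bool]  # True when the event breaks the rule

def audit(events: Iterable[Event], rules: list[ComplianceRule]) -> list[tuple[str, Event]]:
    """Compliance-auditing view: report every rule violation."""
    return [(r.name, e) for e in events for r in rules if r.violates(e)]

def timeline(events: Iterable[Event], start: datetime, end: datetime) -> list[Event]:
    """Forensic view: reconstruct an ordered timeline for an incident window."""
    return sorted((e for e in events if start <= e.timestamp <= end),
                  key=lambda e: e.timestamp)
```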

    DEVELOPING COMPETENCE FOR INNOVATION IN KNOWLEDGE PRACTICE: an exploration of the sustainability science-policy interface

    This thesis is motivated by institutional claims for a “new type of knowledge” in the sustainability science-policy interface. It therefore explores the thinking and practice of experts in the field about the professional competencies necessary to induce the required innovations in their knowledge practice. The thesis proposes a novel conceptual framework, synthesising (1) five key features informing claimed innovations in the knowledge practice of science-policy sustainability ‘boundary organisations’, (2) a set of ten differentiating individual competencies deemed critical to induce such innovations and (3) the approaches required to develop such competencies effectively. In doing so, this thesis suggests an operative framework for engaging with a competence-based approach in response to the need for innovations in knowledge practice within boundary organisations. Under this conceptual framework, the thesis engages in empirical work exploring the thinking and praxis of experts in the field, around three key questions: (1) How do practitioners in the field perceive the need for and the pertinence of such innovations? (2) How do they relate to the notion of individual competence and the need for different types of competencies to induce innovations in their own knowledge practice? (3) How can professionals working in the science-policy interface most effectively learn and develop such a new set of competences, given their specific organisational and institutional contexts? Methodologically, this thesis employed a combination of empirical research instruments, including semi-structured interviews with professionals operating in the sustainability science-policy interface; three focus groups in the Netherlands, Portugal and the UK with actors operating within the remit of sustainability boundary organisations; and participatory observation within the European Environment Agency. The outcomes of this research indicate that, while the need for a new type of knowledge is clearly acknowledged by practitioners in ‘boundary organisations’, notions associated with the required innovations in knowledge practice – such as co-creation, systems thinking, transdisciplinarity, reflexivity and action-orientated knowledge – are still subject to ambiguity and controversy within the institutional contexts in which they operate. While practitioners struggle to engage with the notion of individual competence in this debate, the types of competencies deemed critical to induce the required innovations in their knowledge practice resonate with their own experience. Experts in boundary organisations nevertheless identify a lack of institutional frameworks to support their efforts to generate innovations in knowledge practice. While this research synthesises and presents existing examples of learning programmes and approaches that help develop such competencies, practitioners in the field express scepticism about the extent to which such learning approaches are feasible in their given institutional settings.

    Effect of Polyphenol Supplementation on Memory Functioning in Overweight and Obese Adults: A Systematic Review and Meta-Analysis

    Negative health consequences of obesity include impaired neuronal functioning and cell death, thus bringing the risk of impaired cognitive functioning. Antioxidant properties of polyphenols offer a possible intervention for overweight people, but evidence for their effectiveness in supporting cognitive functioning is mixed. This review examined evidence from randomized controlled trials concerning the effect of polyphenols on tasks requiring either immediate or delayed retrieval of learned information, respectively, thus controlling for differences in cognitive processes and related neural substrates supporting respective task demands. Searches of the PubMed/Medline, PsycInfo, and Scopus databases identified 24 relevant primary studies with N = 2336 participants having a BMI ≄ 25.0 kg/m². The participants’ mean age for the 24 studies exceeded 60 years. Respective meta-analyses produced a significant summary effect for immediate retrieval but not for delayed retrieval. The present findings support a potential positive effect of chronic supplementation with polyphenols, most notably flavonoids, on immediate retrieval in participants aged over 60 years, with obesity being a risk factor for cognitive impairment. We recommend further investigation of this potential positive effect in participants with such risk factors. Future research on all populations should report the phenolic content of the supplementation administered and be specific regarding the cognitive processes tested.
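    As a concrete illustration of how a summary effect like the one reported here is typically computed, below is a hedged Python sketch of a standard random-effects meta-analysis using the DerSimonian-Laird estimator; the effect sizes and variances are made-up placeholders, not data extracted from the review.

```python
# Hedged sketch of a random-effects summary effect, computed with the standard
# DerSimonian-Laird estimator. The effect sizes and variances below are
# made-up placeholders, not data extracted from the review.
import math
import numpy as np

def dersimonian_laird(y, v):
    """y: per-study effect sizes; v: their sampling variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    theta_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - theta_fe) ** 2)            # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    theta = np.sum(w_re * y) / np.sum(w_re)
    se = math.sqrt(1.0 / np.sum(w_re))
    p = math.erfc(abs(theta / se) / math.sqrt(2))  # two-sided normal p-value
    return theta, se, p

# Placeholder standardised mean differences for an "immediate retrieval" outcome.
theta, se, p = dersimonian_laird([0.35, 0.10, 0.42, 0.05], [0.04, 0.02, 0.06, 0.03])
print(f"summary effect = {theta:.2f} (SE {se:.2f}), p = {p:.3f}")
```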

    Optimizing digital archiving: An artificial intelligence approach for OCR error correction

    Project Work presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics.

    This thesis addresses the knowledge gap concerning effective ways to correct OCR errors, and the importance of training datasets of adequate size and quality for efficient OCR recognition of digital documents. The main goal is to examine the trade-offs between input size, performance, and time efficiency when sourcing data, and to propose a new design that includes a machine translation model to automate the correction of errors introduced by OCR scanning. The study implemented various LSTM models, with different thresholds, to recover from errors generated by OCR systems. Although the results did not surpass the performance of existing OCR systems, due to dataset size limitations, a step forward was nevertheless achieved: a relationship between performance and input size was established, providing meaningful insights for the future optimisation of digital archiving systems. This dissertation offers a new approach to dealing with OCR problems, together with implementation considerations, that can be followed to optimise the efficiency and results of digital archiving systems.
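    The correction model described here is, in essence, a character-level sequence-to-sequence network. As a rough illustration only, the following PyTorch sketch shows one plausible encoder-decoder LSTM shape for OCR post-correction; the architecture, sizes, and vocabulary are assumptions, not the thesis's actual configuration.

```python
# Rough PyTorch sketch of a character-level encoder-decoder LSTM for OCR
# post-correction: noisy OCR text in, corrected text out. Sizes, vocabulary
# and teacher forcing are assumptions, not the thesis's exact configuration.
import torch
import torch.nn as nn

class OCRCorrector(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, noisy: torch.Tensor, target_in: torch.Tensor) -> torch.Tensor:
        # Encode the noisy characters; the final encoder state seeds a decoder
        # that is teacher-forced with the gold text shifted right by one.
        _, state = self.encoder(self.embed(noisy))
        dec_out, _ = self.decoder(self.embed(target_in), state)
        return self.out(dec_out)  # per-position logits over the character set

model = OCRCorrector(vocab_size=100)
noisy = torch.randint(0, 100, (8, 40))    # batch of 8 noisy lines, 40 chars each
gold_in = torch.randint(0, 100, (8, 40))  # shifted gold text (placeholder)
logits = model(noisy, gold_in)            # shape: (8, 40, 100)
```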

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Development can be extremely costly, owing to the sheer complexity of the systems and the nuances within them, if not assisted by appropriate tools that provide abstraction and automation. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
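    The property-based testing idea mentioned above, checking a low-level implementation against its purely functional specification on random inputs, can be illustrated outside Cogent as well. Here is a minimal sketch in Python with Hypothesis; the toy functions are stand-ins of ours, not part of Cogent's framework.

```python
# Illustration (in Python with Hypothesis, not in Cogent) of the refinement
# idea behind property-based testing: a low-level implementation should agree
# with its purely functional specification on randomly generated inputs.
from hypothesis import given, strategies as st

def spec_sum(xs: list[int]) -> int:
    """Functional specification: abstract and obviously correct."""
    return sum(xs)

def impl_sum(xs: list[int]) -> int:
    """Imperative 'low-level' implementation whose refinement we test."""
    acc, i = 0, 0
    while i < len(xs):
        acc += xs[i]
        i += 1
    return acc

@given(st.lists(st.integers()))
def test_impl_refines_spec(xs):
    assert impl_sum(xs) == spec_sum(xs)
```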

    Evaluation Methodologies in Software Protection Research

    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and no universally accepted evaluation methodology exists. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and the analyses of them. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.
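    One recurring evaluation aspect in this literature is measuring the cost a protection adds. The toy Python sketch below, entirely illustrative and not taken from the survey, times a function against a variant "protected" by an always-true opaque predicate, the kind of measurement whose methodology the survey scrutinises.

```python
# Toy illustration of one evaluation aspect the survey catalogues: the runtime
# overhead a protection adds. The "protection" here is an always-true opaque
# predicate (i*i + i is always even), a deliberately simple stand-in.
import timeit

def plain(n: int) -> int:
    return sum(i * i for i in range(n))

def obfuscated(n: int) -> int:
    total = 0
    for i in range(n):
        if (i * i + i) % 2 == 0:  # opaque predicate: provably always true
            total += i * i
        else:
            total -= i            # dead code an analyser must prove unreachable
    return total

assert plain(1000) == obfuscated(1000)  # the protection preserves semantics
for f in (plain, obfuscated):
    t = timeit.timeit(lambda: f(1000), number=2000)
    print(f"{f.__name__}: {t:.3f}s")
```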

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
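    The scanning mechanism can be summarised by the leaky-wave relation theta = arcsin(beta/k0): biasing the LC changes its effective permittivity, which shifts the guided phase constant beta and steers the beam at a fixed frequency. The Python sketch below evaluates this idealised relation for a uniformly LC-filled guide; the frequency, guide width, and permittivity range are assumed round numbers, not the paper's design values.

```python
# Back-of-envelope sketch of the scanning mechanism: a leaky-wave antenna
# radiates its main beam at theta = arcsin(beta/k0), and biasing the LC shifts
# its permittivity and hence beta. Frequency, guide width and the LC tuning
# range below are assumed round numbers, not the paper's design values.
import numpy as np

c = 3e8                        # speed of light (m/s)
f = 30e9                       # fixed operating frequency (assumed)
k0 = 2 * np.pi * f / c         # free-space wavenumber
a = 3.33e-3                    # broad-wall width of the guide (assumed)

def scan_angle_deg(eps_r: float) -> float:
    # Idealised TE10-like dispersion for a uniformly LC-filled guide.
    beta = np.sqrt(eps_r * k0**2 - (np.pi / a) ** 2)
    return float(np.degrees(np.arcsin(beta / k0)))

for eps_r in (2.5, 2.7, 2.9, 3.0):  # a plausible nematic-LC tuning range
    print(f"eps_r = {eps_r:.1f} -> main beam at {scan_angle_deg(eps_r):5.1f} deg")
```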

    An empirical investigation of the relationship between integration, dynamic capabilities and performance in supply chains

    This research aimed to develop an empirical understanding of the relationships between integration, dynamic capabilities and performance in the supply chain domain, based on which two conceptual frameworks were constructed to advance the field. The core motivation for the research was that, at the time of writing, the combined relationship between the three concepts had not yet been examined, although their interrelationships had been studied individually. To achieve this aim, deductive and inductive reasoning were used to guide the qualitative study, which was undertaken via multiple case studies investigating lines of enquiry that would address the research questions formulated. This is consistent with the author’s philosophical adoption of the ontology of relativism and the epistemology of constructionism, which was considered appropriate for the research questions. Empirical data and evidence were collected, and various triangulation techniques were employed to ensure their credibility. Key features of grounded theory coding techniques were drawn upon for data coding and analysis, generating two levels of findings. These revealed that whilst integration and dynamic capabilities were crucial in improving performance, performance in turn informed them both, reflecting a cyclical and iterative relationship rather than a purely linear one. Adopting a holistic approach towards the relationship was key to producing complementary strategies that can deliver sustainable supply chain performance. The research makes theoretical, methodological and practical contributions to the field of supply chain management. The theoretical contribution includes the development of two emerging conceptual frameworks, at the micro and macro levels. The former provides greater specificity, as it allows meta-analytic evaluation of the three concepts and their dimensions, giving detailed insight into their correlations. The latter gives a holistic view of their relationships and how they are connected, reflecting a middle-range theory that bridges theory and practice. The methodological contribution lies in presenting models that address gaps associated with the inconsistent use of terminology in philosophical assumptions and the lack of rigour in deploying case-study research methods. In terms of its practical contribution, this research offers insights that practitioners could adopt to enhance their performance. They can do so, without necessarily having to forgo certain desired outcomes, by using targeted integrative strategies and drawing on their dynamic capabilities.

    Fairness Testing: A Comprehensive Survey and Analysis of Trends

    Unfair behaviors of Machine Learning (ML) software have garnered increasing attention and concern among software engineers. To tackle this issue, extensive research has been dedicated to fairness testing of ML software, and this paper offers a comprehensive survey of existing studies in this field. We collect 100 papers and organize them based on the testing workflow (i.e., how to test) and testing components (i.e., what to test). Furthermore, we analyze the research focus, trends, and promising directions in the realm of fairness testing. We also identify widely adopted datasets and open-source tools for fairness testing.
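    As an illustration of what a basic fairness test from this literature looks like, the sketch below checks a classifier's statistical parity across a protected attribute; the synthetic data and the 0.1 threshold are illustrative choices of ours, not taken from the survey.

```python
# Sketch of a basic fairness test from this literature: statistical parity of
# a classifier's positive-prediction rate across a protected attribute. The
# synthetic data and the 0.1 threshold are illustrative, not from the survey.
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)."""
    return float(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)  # protected attribute (0/1)
# A slightly biased classifier: higher positive rate for group 0.
y_pred = (rng.random(1000) < np.where(group == 0, 0.52, 0.48)).astype(int)

spd = statistical_parity_difference(y_pred, group)
print(f"SPD = {spd:+.3f} -> {'PASS' if abs(spd) < 0.1 else 'FAIL'}")
```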
    • 
