375 research outputs found

    Automation for network security configuration: state of the art and research trends

    Get PDF
    The size and complexity of modern computer networks are progressively increasing, as a consequence of novel architectural paradigms such as the Internet of Things and network virtualization. Consequently, manual orchestration and configuration of network security functions is no longer feasible, in an environment where cyber attacks can exploit even the smallest configuration error. A new frontier is therefore the introduction of automation in network security configuration, i.e., automatically designing the architecture of security services and the configurations of network security functions, such as firewalls, VPN gateways, etc. This opportunity has been enabled by modern computer networking technologies, such as virtualization. In view of these considerations, the motivations for the introduction of automation in network security configuration are first introduced, alongside the key automation enablers. Then, the current state of the art in this context is surveyed, focusing on both the achieved improvements and the current limitations. Finally, possible future trends in the field are illustrated.
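
    To make the idea of automated configuration concrete, the following minimal Python sketch compiles a declarative reachability policy into iptables-style firewall rules. The policy schema, field names, and rule syntax are illustrative assumptions for this sketch, not drawn from any specific tool discussed in the survey.

    # A hypothetical declarative policy: each entry states an intent.
    policy = [
        {"action": "allow", "src": "10.0.1.0/24", "dst": "10.0.2.10", "port": 443},
        {"action": "deny",  "src": "0.0.0.0/0",   "dst": "10.0.2.10", "port": 22},
    ]

    def compile_rules(policy):
        """Translate each high-level policy entry into an iptables-style rule."""
        target = {"allow": "ACCEPT", "deny": "DROP"}
        return [
            f"-A FORWARD -s {p['src']} -d {p['dst']} "
            f"-p tcp --dport {p['port']} -j {target[p['action']]}"
            for p in policy
        ]

    for rule in compile_rules(policy):
        print(rule)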

    Current and Future Challenges in Knowledge Representation and Reasoning

    Full text link
    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently, it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022, a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation to other areas, and its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

    On the Feasibility of Specialized Ability Stealing for Large Language Code Models

    Full text link
    Recent progress in large language code models (LLCMs) has led to a dramatic surge in their use for software development. Nevertheless, it is widely known that training a well-performing LLCM requires substantial human effort for data collection and high-quality annotation. Additionally, the training dataset may be proprietary (or only partially open-sourced), and the training process is often conducted on a large-scale cluster of GPUs at high cost. Inspired by the recent success of imitation attacks in stealing computer vision and natural language models, this work launches the first imitation attack on LLCMs: by querying a target LLCM with carefully designed queries and collecting the outputs, the adversary can train an imitation model that closely mimics the behavior of the target LLCM. We systematically investigate the effectiveness of launching imitation attacks under different query schemes and different LLCM tasks. We also design novel methods to polish the LLCM outputs, resulting in an effective imitation training process. We summarize our findings and provide lessons learned in this study that can help better depict the attack surface of LLCMs. Our research contributes to the growing body of knowledge on imitation attacks and defenses in deep neural models, particularly in the domain of code-related tasks.
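
    A minimal sketch of the query-collect-train loop described above follows, assuming a hypothetical query API for the target model; the paper's concrete query schemes and output-polishing methods are more elaborate than the stand-ins used here.

    # Sketch of an imitation attack loop: query the target code model,
    # polish the outputs, and assemble (query, output) pairs for
    # fine-tuning a local imitation model. All names are placeholders.

    def query_target(prompt: str) -> str:
        """Stand-in for the victim LLCM's query API (canned reply here)."""
        return "def solution():\n    pass\n"

    def polish(output: str) -> str:
        """Stand-in for the paper's output-polishing step."""
        return output.strip()

    queries = [
        "# Python: reverse a singly linked list",
        "# Python: parse an ISO-8601 date string",
    ]

    dataset = [(q, polish(query_target(q))) for q in queries]
    # fine_tune(imitation_model, dataset)  # e.g. ordinary supervised fine-tuning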

    Toward a Pentecostal theology of prophetic legitimacy

    Get PDF
    Within the framework of contemporary Pentecostalism, this thesis considers prophetic legitimacy and its elements. Since the 1948 inception of the Latter Rain Movement, prophetic expression and function have proliferated, with the movement’s tributaries carrying its most welcomed and most questioned aspects into the larger Pentecostal tradition. With prophetic activity gaining prominence, the diverse prophetic expressions of the Third Wave/Independent tribes raise important questions, which are illustrated through examples of current prophetic praxis. This thesis borrows the terms prophetic consciousness and prophetic perception from Walter Brueggemann and explores them from theological, psychological, and phenomenological perspectives. In approaching Scripture, two methodologies are employed: a literary-critical approach and a canonical approach, which are used to consider OT and NT prophetic figures with regard to their prophetic function and legitimacy. Given the Pentecostal tradition of Luke-Acts as an entrance into the prophetic conversation, the NT work is based in a Lukan perspective, as is the argument for a Pentecostal theology of prophetic legitimacy. A more contemporary exemplar of prophetic legitimacy is also presented: Violet Kiteley, a Latter Rain adherent and participant from the movement’s inception. A narrative of her prophetic journey, spiritual formation, focus on Latter Rain Restorationism, understanding of the prophetic presbytery, and Latter Rain Pentecostal hermeneutic is detailed. A critique of the Latter Rain Restorationist schema explores its inherent challenges while affirming Kiteley’s place as an exemplar of prophetic legitimacy. This research concludes with a proposed construct for prophetic legitimacy, along with three proposed elements that commend a healthy Pentecostal theology of the same: prophetica discretio, prophetica conscientia, and prophetica praxis. These are examined in relation to prophetic orthodoxy, orthopraxy, and orthopathy, and are considered with regard to a prophetic ethic that grounds all prophetic function and legitimacy.

    Large Language Models for Software Engineering: A Systematic Literature Review

    Full text link
    Large Language Models (LLMs) have significantly impacted numerous domains, notably including Software Engineering (SE). Nevertheless, a well-rounded understanding of the application, effects, and possible limitations of LLMs within SE is still in its early stages. To bridge this gap, our systematic literature review takes a deep dive into the intersection of LLMs and SE, with a particular focus on understanding how LLMs can be exploited in SE to optimize processes and outcomes. Through a comprehensive review approach, we collect and analyze a total of 229 research papers from 2017 to 2023 to answer four key research questions (RQs). In RQ1, we categorize and provide a comparative analysis of the different LLMs that have been employed in SE tasks, laying out their distinctive features and uses. For RQ2, we detail the methods involved in data collection, preprocessing, and application in this realm, shedding light on the critical role of robust, well-curated datasets for successful LLM implementation. RQ3 examines the specific SE tasks where LLMs have shown remarkable success, illuminating their practical contributions to the field. Finally, RQ4 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE, as well as the common techniques related to prompt optimization. Armed with insights drawn from addressing these RQs, we sketch a picture of the current state of the art, pinpointing trends, identifying gaps in existing research, and flagging promising areas for future study.

    The determination of non-pecuniary reparations by regional human rights courts : a cross-regional comparative study

    Get PDF
    Defence date: 9 December 2019
    Examining Board: Professor Martin Scheinin, European University Institute (Supervisor); Professor Deirdre Curtin, European University Institute; Professor Başak Çalı, Hertie School of Governance; Professor Antoine Buyse, Utrecht University
    How do human rights courts determine non-pecuniary reparations? For a long time, the granting of reparations has been considered a special feature of regional human rights courts, governed by their respective conventional provisions. In this light, courts developed dissimilar approaches to reparations. While the European Court of Human Rights (ECtHR) mostly favoured the granting of monetary compensation, the Inter-American Court of Human Rights (IACtHR) produced a broad array of non-pecuniary reparative measures. However, these reparative paths started to cross some years ago, as the ECtHR began occasionally ordering non-pecuniary reparations. Moreover, the African Court on Human and Peoples’ Rights (African Court) has partially adopted this practice. Hence, these courts actually share a common reparative practice which has not been examined comparatively. This dissertation explains how regional human rights courts determine non-pecuniary reparations. Taking an integrated approach, it places the discussion within a single legal system, considering the influence of conventional provisions (lex specialis) and the norms of general international law which have a bearing on reparations notwithstanding their formally non-binding status (lex generalis). Through a comparative examination of the practice of the three regional human rights courts, and occasionally of the Human Rights Committee, this thesis inquires into the legal basis and purposes of reparations. Moreover, the ubiquitous, yet controversial, use of discretion in determining reparations is examined, finding that it can be exercised within the consideration of the principles of restitutio in integrum and equity. Additionally, the dissertation examines the IACtHR’s innovative approach to reparations, noting that non-pecuniary measures are used to achieve far-reaching goals. While this innovative approach challenges the traditional understanding of human rights adjudication, it is recognised that a discretionary use of reparations may be allowed within a permissible framework. Finally, a suitable use of the IACtHR’s innovative approach by other regional courts is examined.

    The Ontology of Biological Attributes (OBA): computational traits for the life sciences

    Get PDF
    Existing phenotype ontologies were originally developed to represent phenotypes that manifest as a character state in relation to a wild-type or other reference. However, these do not include the phenotypic trait or attribute categories required for the annotation of genome-wide association studies (GWAS), Quantitative Trait Loci (QTL) mappings, or other population-focussed measurable trait data. The integration of trait and biological attribute information with an ever-increasing body of chemical, environmental, and biological data greatly facilitates computational analyses and is also highly relevant to biomedical and clinical applications. The Ontology of Biological Attributes (OBA) is a formalised, species-independent collection of interoperable phenotypic trait categories that is intended to fulfil a data-integration role. OBA is a standardised representational framework for observable attributes that are characteristics of biological entities, organisms, or parts of organisms. OBA has a modular design which provides several benefits for users and data integrators, including an automated and meaningful classification of trait terms computed on the basis of logical inferences drawn from domain-specific ontologies for cells, anatomical entities, and other relevant entities. The logical axioms in OBA also provide a previously missing bridge that can computationally link Mendelian phenotypes with GWAS and quantitative traits. The term components in OBA provide semantic links and enable knowledge and data integration across specialised research community boundaries, thereby breaking silos.
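
    As an illustration of the kind of logical definition such a modular design relies on, the following sketch uses the owlready2 library to define a trait class by an equivalent-class axiom combining a quality with the entity it characterises. The class and property names are simplified stand-ins, not actual OBA identifiers.

    # Sketch of an entity-quality style trait definition, as used in
    # trait ontologies; names are illustrative, not real OBA terms.
    from owlready2 import Thing, ObjectProperty, get_ontology

    onto = get_ontology("http://example.org/oba-sketch.owl")

    with onto:
        class Quality(Thing): pass
        class Amount(Quality): pass
        class Blood(Thing): pass

        class characteristic_of(ObjectProperty): pass

        # "blood amount" := an amount that is a characteristic of blood;
        # a reasoner can then classify it under broader trait terms.
        class BloodAmount(Thing):
            equivalent_to = [Amount & characteristic_of.some(Blood)]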

    Syntactic Reasoning with Conditional Probabilities in Deductive Argumentation

    Get PDF
    Evidence from studies, such as in science or medicine, often corresponds to conditional probability statements. Furthermore, evidence can conflict, in particular when coming from multiple studies. Whilst it is natural to make sense of such evidence using arguments, there is a lack of a systematic formalism for representing and reasoning with conditional probability statements in computational argumentation. We address this shortcoming by providing a formalization of conditional probabilistic argumentation based on probabilistic conditional logic. We provide a semantics and a collection of comprehensible inference rules that give different insights into evidence. We show how arguments constructed from proofs, and the attacks between them, can be analyzed as argument graphs using dialectical semantics and via the epistemic approach to probabilistic argumentation. Our approach allows for a transparent and systematic way of handling the uncertainty that often arises in evidence.
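
    To illustrate the kind of statement being formalized, consider two studies reporting the conflicting conditionals (recovery | treatment)[0.8] and (recovery | treatment)[0.3]. The Python sketch below checks, by enumeration over possible worlds, whether a candidate probability distribution satisfies such a conditional; this is a generic textbook construction, not the paper's formalism.

    # A conditional (recovery | treatment)[p] is satisfied by a
    # distribution P over worlds iff P(recovery & treatment) = p * P(treatment).
    from itertools import product

    worlds = list(product([False, True], repeat=2))  # (treatment, recovery)

    def satisfies(P, p):
        p_t = sum(P[w] for w in worlds if w[0])
        p_tr = sum(P[w] for w in worlds if w[0] and w[1])
        return abs(p_tr - p * p_t) < 1e-9

    # One candidate distribution over the four worlds.
    P = {(False, False): 0.20, (False, True): 0.20,
         (True, False): 0.12, (True, True): 0.48}

    print(satisfies(P, 0.8))  # True: consistent with study 1
    print(satisfies(P, 0.3))  # False: conflicts with study 2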

    Efficient Axiomatization of OWL 2 EL Ontologies from Data by Means of Formal Concept Analysis (Extended Version)

    Get PDF
    We present an FCA-based axiomatization method that produces a complete EL TBox (the terminological part of an OWL 2 EL ontology) from a graph dataset in at most exponential time. We describe technical details that allow for an efficient implementation, as well as variations that dispense with the computation of extremely large axioms, thereby rendering the approach practically applicable, albeit at the cost of some completeness. Moreover, we evaluate the prototype on real-world datasets. This is an extended version of an article accepted at AAAI 2024.
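
    The propositional core of FCA that underlies such axiomatization can be sketched in a few lines: derivation operators over a formal context, plus a validity check for attribute implications, the propositional analogue of the EL concept inclusions the method produces. The toy context below is illustrative only.

    # Derivation operators over a toy formal context and a validity
    # check for implications A -> B; implications valid in the context
    # correspond, in the propositional case, to TBox-style inclusions.
    context = {
        "sparrow": {"bird", "flies"},
        "penguin": {"bird", "swims"},
        "trout":   {"swims"},
    }
    attributes = {"bird", "flies", "swims"}

    def extent(attrs):
        """All objects that have every attribute in attrs."""
        return {g for g, m in context.items() if attrs <= m}

    def intent(objs):
        """All attributes shared by every object in objs."""
        return set.intersection(*(context[g] for g in objs)) if objs else set(attributes)

    def holds(premise, conclusion):
        """A -> B holds iff B is contained in the closure of A."""
        return conclusion <= intent(extent(premise))

    print(holds({"flies"}, {"bird"}))  # True: every flying object here is a bird
    print(holds({"bird"}, {"flies"}))  # False: the penguin does not fly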

    MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models

    Full text link
    Language Models (LMs) have shown impressive performance in various natural language tasks. However, when it comes to natural language reasoning, LMs still face challenges such as hallucination, generating incorrect intermediate reasoning steps, and making mathematical errors. Recent research has focused on enhancing LMs through self-improvement using feedback. Nevertheless, existing approaches relying on a single generic feedback source fail to address the diverse error types found in LM-generated reasoning chains. In this work, we propose Multi-Aspect Feedback, an iterative refinement framework that integrates multiple feedback modules, including frozen LMs and external tools, each focusing on a specific error category. Our experimental results demonstrate the efficacy of our approach in addressing several error types in the LM-generated reasoning chain, thus improving the overall performance of an LM on several reasoning tasks. We see a relative improvement of up to 20% in Mathematical Reasoning and up to 18% in Logical Entailment. Accepted at EMNLP 2023 Main Conference.
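
    The iterative loop described above can be sketched as follows; the two checker stubs and the generate callback are placeholders for the frozen LMs and external tools that serve as aspect-specific feedback modules.

    # Sketch of multi-aspect iterative refinement: each module targets
    # one error category and the answer is revised until none objects.

    def math_checker(answer: str):
        """Return feedback on arithmetic errors, or None if fine (stub)."""
        return None

    def logic_checker(answer: str):
        """Return feedback on faulty reasoning steps, or None (stub)."""
        return None

    def refine(question, answer, modules, generate, max_rounds=3):
        for _ in range(max_rounds):
            feedback = [f for m in modules if (f := m(answer)) is not None]
            if not feedback:
                break  # every aspect-specific checker is satisfied
            answer = generate(question, answer, feedback)  # LM revises its answer
        return answer

    # Usage: refine(q, draft, [math_checker, logic_checker], generate=my_lm_call)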