4,368 research outputs found

    State-of-the-art on evolution and reactivity

    This report starts, in Chapter 1, by outlining aspects of querying and updating resources on the Web and on the Semantic Web, including the development of query and update languages to be carried out within the Rewerse project. From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we present state-of-the-art surveys of a selection of these areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 briefly describes existing update languages for the Web, and also for updating logic programs; in Chapter 4 event-condition-action rules are surveyed, both in the context of active database systems and in the context of semistructured data; in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
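The event-condition-action (ECA) rules surveyed in Chapter 4 follow a simple pattern: on an event, check a condition, run an action. The sketch below is a minimal illustrative engine, not taken from any system covered in the report; all names are made up.

```python
# Minimal event-condition-action (ECA) rule engine sketch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event: str                          # event type the rule reacts to
    condition: Callable[[dict], bool]   # guard over the event payload
    action: Callable[[dict], None]      # effect to execute when the guard holds

class EcaEngine:
    def __init__(self) -> None:
        self.rules: list[EcaRule] = []

    def register(self, rule: EcaRule) -> None:
        self.rules.append(rule)

    def dispatch(self, event: str, payload: dict) -> int:
        """Fire every matching rule; return how many actions ran."""
        fired = 0
        for rule in self.rules:
            if rule.event == event and rule.condition(payload):
                rule.action(payload)
                fired += 1
        return fired

# Example: react to a price update only when the condition holds.
log: list[str] = []
engine = EcaEngine()
engine.register(EcaRule(
    event="price_update",
    condition=lambda p: p["price"] < 10,
    action=lambda p: log.append(f"cheap: {p['item']}"),
))
engine.dispatch("price_update", {"item": "widget", "price": 5})
```

Active database systems attach such rules to database operations (insert, update, delete) rather than application-level events, but the event/condition/action decomposition is the same.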

    ATTACK2VEC: Leveraging Temporal Word Embeddings to Understand the Evolution of Cyberattacks

    Despite the fact that cyberattacks are constantly growing in complexity, the research community still lacks effective tools to easily monitor and understand them. In particular, there is a need for techniques that are able to not only track how prominently certain malicious actions, such as the exploitation of specific vulnerabilities, are used in the wild, but also (and more importantly) how these malicious actions factor in as attack steps in more complex cyberattacks. In this paper we present ATTACK2VEC, a system that uses temporal word embeddings to model how attack steps are exploited in the wild, and track how they evolve. We test ATTACK2VEC on a dataset of billions of security events collected from the customers of a commercial Intrusion Prevention System over a period of two years, and show that our approach is effective in monitoring the emergence of new attack strategies in the wild and in flagging which attack steps are often used together by attackers (e.g., vulnerabilities that are frequently exploited together). ATTACK2VEC provides a useful tool for researchers and practitioners to better understand cyberattacks and their evolution, and use this knowledge to improve situational awareness and develop proactive defenses.
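The core idea behind temporal embeddings — embed each attack step per time slice, then measure how its vector drifts between slices — can be sketched as follows. This toy version substitutes simple co-occurrence counts for the learned temporal word embeddings the paper describes, and all event names are invented.

```python
# Toy sketch of temporal-embedding drift: embed each security event per
# time slice via co-occurrence counts, then compare slices with cosine
# similarity. A large drift suggests the event's role in attacks changed.
import math

def embed(sequences: list[list[str]], vocab: list[str]) -> dict[str, list[float]]:
    """Embed each event as its co-occurrence counts with the vocabulary."""
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0.0] * len(vocab) for w in vocab}
    for seq in sequences:
        for a in seq:
            for b in seq:
                if a != b and a in index and b in index:
                    vecs[a][index[b]] += 1.0
    return vecs

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["scan", "exploit_A", "exploit_B", "dropper"]
# Slice 1: exploit_A co-occurs with scanning; slice 2: with a dropper.
slice1 = [["scan", "exploit_A"], ["scan", "exploit_A"], ["exploit_B", "dropper"]]
slice2 = [["exploit_A", "dropper"], ["exploit_A", "dropper"], ["scan", "exploit_B"]]

# Drift of exploit_A's context between the two time slices (0 = unchanged).
drift = 1 - cosine(embed(slice1, vocab)["exploit_A"],
                   embed(slice2, vocab)["exploit_A"])
```

In the paper's setting the embeddings are trained on real event streams and the comparison is done over many consecutive slices, so drift can be tracked as a time series per attack step.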

    TEMPOS: A Platform for Developing Temporal Applications on Top of Object DBMS

    This paper presents TEMPOS: a set of models and languages supporting the manipulation of temporal data on top of object DBMS. The proposed models exploit object-oriented technology to meet some important, yet traditionally neglected, design criteria related to legacy code migration and representation independence. Two complementary ways of accessing temporal data are offered: a query language and a visual browser. The query language, namely TempOQL, is an extension of OQL supporting the manipulation of histories regardless of their representations, through fully composable functional operators. The visual browser offers operators that facilitate several time-related interactive navigation tasks, such as studying a snapshot of a collection of objects at a given instant, or detecting and examining changes within temporal attributes and relationships. The TEMPOS models and languages have been formalized at both the syntactic and semantic levels and have been implemented on top of an object DBMS. The suitability of the proposals with regard to applications' requirements has been validated through concrete case studies.
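The "history" abstraction manipulated by TempOQL operators can be approximated by a small sketch: a temporal attribute as a sequence of (start instant, value) pairs, with a snapshot operator and a change-detection operator similar in spirit to the browser's navigation tasks. The class and method names below are illustrative, not the actual TEMPOS API.

```python
# Illustrative "history" abstraction for a temporal attribute:
# each value holds from its start instant until the next entry.
from bisect import bisect_right

class History:
    def __init__(self, entries: list[tuple[int, object]]) -> None:
        self.entries = sorted(entries)  # sorted by start instant

    def snapshot(self, instant: int):
        """Value of the attribute at a given instant (None if before start)."""
        starts = [t for t, _ in self.entries]
        i = bisect_right(starts, instant)
        return self.entries[i - 1][1] if i else None

    def changes(self) -> list[int]:
        """Instants at which the stored value actually changed."""
        out, prev = [], object()
        for t, v in self.entries:
            if v != prev:
                out.append(t)
                prev = v
        return out

# A salary recorded at three instants; the 2005 entry repeats the value.
salary = History([(2000, 30), (2005, 30), (2010, 40)])
```

Representation independence in TempOQL means queries are written against operators like these rather than against the stored encoding of the history.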

    BCFA: Bespoke Control Flow Analysis for CFA at Scale

    Many data-driven software engineering tasks, such as discovering programming patterns and mining API specifications, perform source code analysis over control flow graphs (CFGs) at scale. Analyzing millions of CFGs can be expensive, and the performance of the analysis heavily depends on the underlying CFG traversal strategy. State-of-the-art analysis frameworks use a fixed traversal strategy. We argue that a single traversal strategy does not fit all kinds of analyses and CFGs, and propose bespoke control flow analysis (BCFA). Given a control flow analysis (CFA) and a large number of CFGs, BCFA selects the most efficient traversal strategy for each CFG. BCFA extracts a set of properties of the CFA by analyzing the code of the CFA, and combines it with properties of the CFG, such as branching factor and cyclicity, to select the optimal traversal strategy. We have implemented BCFA in Boa, and evaluated BCFA using a set of representative static analyses that mainly involve traversing CFGs and two large datasets containing 287 thousand and 162 million CFGs. Our results show that BCFA can speed up the large-scale analyses by 1%-28%. Further, BCFA has low overhead (less than 0.2%) and a low misprediction rate (less than 0.01%).
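The selection step can be sketched as a decision over static properties of the analysis and of the graph. The thresholds and strategy names below are illustrative only, not Boa's actual selection logic.

```python
# Hedged sketch of BCFA-style traversal-strategy selection: combine
# properties of the analysis (CFA) with properties of the graph (CFG).
from dataclasses import dataclass

@dataclass
class CfgProps:
    has_cycle: bool          # is the CFG cyclic?
    branching_factor: float  # average out-degree of CFG nodes

@dataclass
class CfaProps:
    needs_fixpoint: bool     # does the analysis iterate to a fixpoint?

def select_strategy(cfg: CfgProps, cfa: CfaProps) -> str:
    if not cfa.needs_fixpoint:
        return "any-order"          # single pass; visit order is irrelevant
    if not cfg.has_cycle:
        return "reverse-postorder"  # acyclic: one RPO pass converges
    if cfg.branching_factor > 2.0:
        return "worklist"           # dense branching: revisit nodes on demand
    return "iterative-rpo"          # sparse cyclic: repeated RPO passes

# Example: a fixpoint analysis over a dense cyclic CFG.
strategy = select_strategy(CfgProps(has_cycle=True, branching_factor=3.0),
                           CfaProps(needs_fixpoint=True))
```

Because the CFA properties are extracted once per analysis and the CFG properties are cheap to compute per graph, the per-CFG selection overhead stays small, which is consistent with the sub-0.2% overhead the paper reports.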

    Role-Based Access-Control for Databases

    With the constant march towards a paperless business environment, database systems are increasingly being used to hold more and more sensitive information. This makes them an increasingly valuable target for attackers. A mainstream method for information system security is Role-Based Access Control (RBAC), which restricts system access to authorised users. However, the implementation of an RBAC policy remains a human-intensive activity, typically performed at the implementation stage of system development. This makes it difficult to communicate security solutions to the stakeholders earlier, and raises the system development cost, especially if security implementation errors are detected. The use of connection pooling in web applications, where all the application users connect to the database via the web server with the same database connection, violates the principle of least privilege: every connected user has, in principle, access to the same data. This may leave sensitive data vulnerable to SQL injection attacks or bugs in the application. As a solution we propose the application of model-driven development to define the RBAC mechanism for data access at the design stage of system development. The RBAC model, created using the SecureUML approach, is automatically translated into source code which implements the modelled security rules at the database level. Enforcing access control at this low level limits the risk of leaking sensitive data to unauthorised users. In our case study we compared SecureUML with the traditional security model, written as source code mixed with business logic and user-interface statements. The case study showed that model-driven security development results in significantly better quality for the security model. Since the security model is created at the design stage, it has higher semantic completeness and correctness, it is easier to modify and understand, and it facilitates better communication of security solutions to the system stakeholders than a security model created at the implementation stage.
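The model-to-code step can be sketched as generating database-level grants from a role model, so that access control is enforced in the DBMS rather than the application. The model format and the SQL emitted below are illustrative, not the actual generator developed in the thesis.

```python
# Sketch: translate a SecureUML-style role model (role -> table ->
# allowed privileges) into database-level GRANT statements.
def generate_grants(model: dict[str, dict[str, list[str]]]) -> list[str]:
    """Emit one GRANT statement per (role, table) pair, in sorted order."""
    stmts = []
    for role, tables in sorted(model.items()):
        for table, privs in sorted(tables.items()):
            stmts.append(f"GRANT {', '.join(privs)} ON {table} TO {role};")
    return stmts

# Hypothetical role model: a clerk may read and create orders,
# while an auditor may only read orders and payments.
model = {
    "clerk": {"orders": ["SELECT", "INSERT"]},
    "auditor": {"orders": ["SELECT"], "payments": ["SELECT"]},
}
grants = generate_grants(model)
```

With grants enforced per database role, a pooled connection can switch to the authenticated user's role (e.g., via `SET ROLE` in DBMSs that support it) so that each request sees only the data its role permits.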

    Provenance of "after the fact" harmonised community-based demographic and HIV surveillance data from ALPHA cohorts

    Background: Data about data, i.e. metadata, for describing Health and Demographic Surveillance System (HDSS) data have often received insufficient attention. This thesis studied how to develop provenance metadata within the context of HDSS data harmonisation - the network for Analysing Longitudinal Population-based HIV/AIDS data on Africa (ALPHA). Technologies from the data documentation community were customised, among them: a process model - the Generic Longitudinal Business Process Model (GLBPM); two metadata standards - the Data Documentation Initiative (DDI) and the Standard for Data and Metadata eXchange (SDMX); and a data transformation description language - the Structured Data Transform Language (SDTL). Methods: A framework with three complementary facets was used: creating a recipe for annotating primary HDSS data using the GLBPM and DDI; approaches for documenting data transformations - at a business level, prospective and retrospective documentation using GLBPM and DDI, and retrospectively recovering the more granular details using SDMX and SDTL; and a requirements analysis for a user-friendly provenance metadata browser. Results: A recipe for the annotation of HDSS data was created, outlining considerations to guide HDSS sites on metadata entry, staff training and software costs. Regarding data transformations, at a business level a specialised process model for the HDSS domain was created. It has algorithm steps for each data transformation sub-process, with data inputs and outputs. At a lower level, SDMX and SDTL captured about 80% (17/21) of the variable-level transformations. The requirements elicitation study yielded requirements for a provenance metadata browser to guide developers. Conclusions: This is a first attempt at creating detailed metadata for this resource, or any similar resource in this field. HDSS sites can implement these recipes to document their data. This will increase transparency and facilitate reuse, thus potentially bringing down the costs of data management. It will arguably promote the longevity and the wide and accurate use of these data.
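Retrospective transformation provenance in the spirit of SDTL can be sketched as records that link each derived variable to its sources and the operation applied, so lineage can be queried later. The record fields and variable names below are illustrative, not actual SDTL.

```python
# Toy sketch of variable-level transformation provenance: each record
# says which operation derived a target variable from which sources.
def lineage(records: list[dict], target: str) -> list[str]:
    """All upstream source variables of a derived variable."""
    by_target = {r["target"]: r for r in records}
    seen: list[str] = []
    stack = [target]
    while stack:
        var = stack.pop()
        rec = by_target.get(var)
        if rec:
            for src in rec["sources"]:
                if src not in seen:
                    seen.append(src)
                    stack.append(src)   # follow transitive derivations
    return sorted(seen)

# Hypothetical harmonisation steps: derive age, then recode into groups.
provenance = [
    {"target": "age", "op": "datediff", "sources": ["dob", "visit_date"]},
    {"target": "age_group", "op": "recode", "sources": ["age"]},
]
```

A provenance browser of the kind the requirements study calls for would present exactly this kind of transitive lineage, alongside the operation (`op`) applied at each step.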
