
    Two Decades of Maude

    This paper is a tribute to José Meseguer, from the rest of us in the Maude team, reviewing the past, the present, and the future of the language and system with which we have been working for around two decades under his leadership. After reviewing the origins and the language's main features, we present the latest additions to the language and some features currently under development. This paper is not an introduction to Maude, and some familiarity with it and with rewriting logic is indeed assumed.

    An Evolutionary Approach for Learning Attack Specifications in Network Graphs

    This paper presents an evolutionary algorithm that learns attack scenarios, called attack specifications, from a network graph. This learning process aims to find attack specifications that minimise cost and maximise the value that an attacker gets from a successful attack. The attack specifications that the algorithm learns are represented using an approach based on Hoare's CSP (Communicating Sequential Processes). This new approach is able to represent several elements found in attacks, for example synchronisation. These attack specifications can be used by network administrators to find vulnerable scenarios, composed from the basic constructs Sequence, Parallel and Choice, that lead to valuable assets in the network.
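
    To make the representation concrete, here is a minimal, purely illustrative Python sketch of the idea: attack specifications as trees over the Sequence, Parallel and Choice constructs, evolved against a fitness that trades attacker value off against cost. The step catalogue, the cost model for each operator, and all names (`random_spec`, `fitness`, `evolve`) are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch (not the paper's algorithm): evolving attack
# specifications built from Sequence, Parallel and Choice over basic
# attack steps, with fitness trading attacker value against cost.
import random

# Hypothetical basic attack steps: name -> (cost, value)
STEPS = {
    "scan":     (1.0, 0.5),
    "exploit":  (4.0, 6.0),
    "phish":    (2.0, 3.0),
    "escalate": (3.0, 5.0),
}

OPS = ("seq", "par", "choice")

def random_spec(depth=2):
    """Grow a random spec tree: leaves are steps, internal nodes operators."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(list(STEPS))
    return (random.choice(OPS), random_spec(depth - 1), random_spec(depth - 1))

def cost_value(spec):
    """Aggregate (cost, value); Choice is modelled as the cheaper branch."""
    if isinstance(spec, str):
        return STEPS[spec]
    op, left, right = spec
    lc, lv = cost_value(left)
    rc, rv = cost_value(right)
    if op == "choice":            # attacker follows only one branch
        return min((lc, lv), (rc, rv))
    return lc + rc, lv + rv       # seq and par both require both branches

def fitness(spec):
    cost, value = cost_value(spec)
    return value - cost           # maximise value, minimise cost

def mutate(spec):
    """Replace or perturb a subtree at random."""
    if isinstance(spec, str) or random.random() < 0.4:
        return random_spec()
    op, left, right = spec
    return (random.choice(OPS), mutate(left), mutate(right))

def evolve(pop_size=30, generations=50):
    pop = [random_spec() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

print(evolve())
```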

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions in big data systems include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfill its requirements. However, every big data application has variable data characteristics, and thus the corresponding data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
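
    As a quick illustration of how the four data models shape the same information differently, the following sketch uses plain Python structures, with no real database client involved; the example record and the vendor-style labels are assumptions for illustration, not taken from the paper.

```python
# Illustrative only: the same "user follows user" fact shaped under the
# four data models the paper compares. Plain Python structures stand in
# for the respective storage engines.

# Document-oriented (MongoDB-style): self-contained JSON-like documents.
document = {
    "_id": "user:42",
    "name": "Alice",
    "follows": ["user:7", "user:19"],
}

# Key-value (Redis-style): an opaque value behind a single key; the
# store knows nothing about the value's internal structure.
key_value = {"user:42": '{"name": "Alice", "follows": ["user:7", "user:19"]}'}

# Wide-column (Cassandra-style): row key -> column families, with a
# dynamic column per followee.
wide_column = {
    "user:42": {
        "profile": {"name": "Alice"},
        "follows": {"user:7": 1, "user:19": 1},
    }
}

# Graph (Neo4j-style): explicit nodes and typed edges, natural for
# traversals such as "who do Alice's followees follow?".
nodes = {"user:42": {"name": "Alice"}, "user:7": {}, "user:19": {}}
edges = [("user:42", "FOLLOWS", "user:7"), ("user:42", "FOLLOWS", "user:19")]
```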

    End-to-end security in service-oriented architecture

    A service-oriented architecture (SOA)-based application is composed of a number of distributed and loosely coupled web services, which are orchestrated to accomplish a more complex functionality. Any of these web services is able to invoke other web services to offload part of its functionality. The main security challenge in SOA is that we cannot trust the participating web services in a service composition to behave as expected all the time. In addition, the chain of services involved in an end-to-end service invocation may not be visible to the clients. As a result, any violation of the client's policies could remain undetected. To address these challenges in SOA, we proposed the following contributions. First, we devised two composite trust schemes that use graph abstraction to quantitatively maintain the trust levels of different services. The composite trust values are based on feedback from the actual execution of services and on the structure of the SOA application. To maintain the dynamic trust, we designed the trust manager, a trusted-third-party service. Second, we developed an end-to-end inter-service policy monitoring and enforcement framework (PME framework), which dynamically inspects the interactions between services at runtime and reacts to potentially malicious activities according to the client's policies. Third, we designed an intra-service policy monitoring and enforcement framework based on a taint analysis mechanism to monitor the information flow within services and prevent information disclosure incidents. Fourth, we proposed an adaptive and secure service composition engine (ASSC), which takes advantage of an efficient heuristic algorithm to generate optimal service compositions in SOA. The service compositions generated by ASSC maximize the trustworthiness of the selected services while meeting the predefined QoS constraints. Finally, we have extensively studied the correctness and performance of the proposed security measures on a realistic SOA case study. All experimental studies validated the practicality and effectiveness of the presented solutions.
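
    A minimal sketch of the first contribution's flavour, assuming a simple success-rate trust metric and multiplicative composition along an invocation chain; the class name `TrustManager` and both formulas are illustrative guesses, not the dissertation's actual schemes.

```python
# Illustrative sketch (formulas are assumptions, not the paper's):
# per-service trust maintained from execution feedback, and composite
# trust for an end-to-end invocation chain taken as the product of the
# trust values of every service on the chain.

class TrustManager:
    """Toy stand-in for the trusted-third-party trust manager."""

    def __init__(self):
        self.good = {}   # service -> successful interactions observed
        self.total = {}  # service -> all interactions observed

    def feedback(self, service, success):
        """Record one execution outcome for a service."""
        self.good[service] = self.good.get(service, 0) + (1 if success else 0)
        self.total[service] = self.total.get(service, 0) + 1

    def trust(self, service):
        """Laplace-smoothed success rate; 0.5 when nothing is known yet."""
        return (self.good.get(service, 0) + 1) / (self.total.get(service, 0) + 2)

    def composite_trust(self, chain):
        """Trust of a whole invocation chain: every hop must behave."""
        result = 1.0
        for service in chain:
            result *= self.trust(service)
        return result

tm = TrustManager()
tm.feedback("payment", True)
tm.feedback("payment", True)
tm.feedback("shipping", False)
print(tm.composite_trust(["frontend", "payment", "shipping"]))
```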

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing the applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on the application. The rapid expansion of the web and the complexity added to web applications have made crawling very challenging. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face, and different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and automatically capturing the model of a modern web application and extracting data from it is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
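
    For readers unfamiliar with the basic crawl loop that this history builds on, here is a minimal breadth-first crawler sketch (Python standard library only). It is deliberately naive: real crawlers add politeness delays, robots.txt handling, URL canonicalisation, and parallelism, and a fetch-and-parse loop like this cannot capture the AJAX-driven state of modern web applications, which is exactly the open question noted above.

```python
# A minimal breadth-first crawl loop of the kind early crawlers used:
# a frontier queue plus a visited set.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    frontier, visited = deque([seed]), set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):   # unreachable or non-HTTP link
            continue
        parser = LinkExtractor()
        parser.feed(html)
        frontier.extend(urljoin(url, link) for link in parser.links)
    return visited

print(crawl("https://example.com"))
```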

    Self-Healing Computation

    In the problem of reliable multiparty computation (RC), there are $n$ parties, each with an individual input, and the parties want to jointly compute a function $f$ over the $n$ inputs. The problem is complicated by the fact that an omniscient adversary controls a hidden fraction of the parties. We describe a self-healing algorithm for this problem. In particular, for a fixed function $f$, with $n$ parties and $m$ gates, we describe how to perform RC repeatedly as the inputs to $f$ change. Our algorithm maintains the following properties, even when an adversary controls up to $t \leq (\frac{1}{4} - \epsilon) n$ parties, for any constant $\epsilon > 0$. First, our algorithm performs each reliable computation with the following amortized resource costs: $O(m + n \log n)$ messages, $O(m + n \log n)$ computational operations, and $O(\ell)$ latency, where $\ell$ is the depth of the circuit that computes $f$. Second, the expected total number of corruptions is $O(t (\log^{*} m)^2)$, after which the adversarially controlled parties are effectively quarantined so that they cause no more corruptions.
    Comment: 17 pages and 1 figure. Submitted to SSS'1
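
    To get a feel for the stated bounds, a small back-of-the-envelope calculation (constants dropped, numbers purely illustrative):

```python
# Plugging illustrative numbers into the abstract's amortized bounds.
# Constants are dropped, so these are orders of magnitude, not exact costs.
import math

n = 1_000          # parties
m = 50_000         # gates in the circuit computing f
eps = 0.05
t = int((0.25 - eps) * n)          # strongest adversary the bounds tolerate

messages = m + n * math.log2(n)    # O(m + n log n) messages per computation

def log_star(x):
    """Iterated logarithm: how many times log2 is applied before x <= 1."""
    steps = 0
    while x > 1:
        x = math.log2(x)
        steps += 1
    return steps

corruptions = t * log_star(m) ** 2  # O(t (log* m)^2) total corruptions
print(f"t = {t}, ~{messages:.0f} messages per computation, "
      f"~{corruptions} total corruptions before quarantine")
```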