
    Determining Training Needs for Cloud Infrastructure Investigations using I-STRIDE

    As more businesses and users adopt cloud computing services, security vulnerabilities will increasingly be found and exploited. There are many technological and political challenges where the investigation of potentially criminal incidents in the cloud is concerned. Security experts, however, must still be able to acquire and analyze data in a methodical, rigorous, and forensically sound manner. This work applies the STRIDE asset-based risk assessment method to cloud computing infrastructure for the purpose of identifying and assessing an organization's ability to respond to and investigate breaches in cloud computing environments. An extension to the STRIDE risk assessment model is proposed to help organizations respond quickly to incidents while ensuring the acquisition and integrity of the largest possible amount of digital evidence. Further, the proposed model allows organizations to assess the needs and capacity of their incident responders before an incident occurs.
    Comment: 13 pages, 3 figures, 3 tables; 5th International Conference on Digital Forensics and Cyber Crime; Digital Forensics and Cyber Crime, pp. 223-236, 201
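
The abstract describes an asset-based capability assessment rather than a concrete algorithm. As a minimal sketch of how such an I-STRIDE-style readiness check could be structured, the Python below flags asset/threat pairs where evidence-acquisition capability falls short; the asset names, scores, and the 0.5 threshold are hypothetical illustrations, not the paper's actual model.

```python
from dataclasses import dataclass

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service", "Elevation of privilege"]

@dataclass
class AssetAssessment:
    asset: str        # a cloud asset, e.g. a VM image store (hypothetical)
    capability: dict  # STRIDE category -> evidence-acquisition capability in [0, 1]

def training_needs(assessments, threshold=0.5):
    """Flag (asset, threat) pairs where responders likely need training."""
    return [(a.asset, t) for a in assessments for t in STRIDE
            if a.capability.get(t, 0.0) < threshold]

# Example: one asset with weak repudiation-evidence capability.
scores = {t: 0.8 for t in STRIDE}
scores["Repudiation"] = 0.2
vm_store = AssetAssessment("vm-image-store", scores)
print(training_needs([vm_store]))  # -> [('vm-image-store', 'Repudiation')]
```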

    Queensland University of Technology at TREC 2005

    The Information Retrieval and Web Intelligence (IR-WI) research group is a research team at the Faculty of Information Technology, QUT, Brisbane, Australia. The IR-WI group participated in the Terabyte and Robust tracks at TREC 2005, both for the first time. For the Robust track we applied our existing information retrieval system, originally designed for structured (XML) retrieval, to the domain of document retrieval. For the Terabyte track we experimented with an open-source IR system, Zettair, and performed two types of experiments. First, we compared Zettair's performance on a high-powered supercomputer against a distributed system across seven midrange personal computers. Second, we compared Zettair's performance when a standard TREC title query is used, versus a natural language query and a query expanded with synonyms. We compare the systems in terms of both efficiency and retrieval performance. Our results indicate that the distributed system is faster than the supercomputer while slightly decreasing retrieval performance, that natural language queries also slightly decrease retrieval performance, and that our query expansion technique significantly decreases performance.
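
As a rough illustration of the synonym-expansion experiment mentioned above, here is a minimal bag-of-words expansion sketch. The synonym table, function name, and query are hypothetical stand-ins; the abstract does not specify the thesaurus or pipeline the QUT group actually used with Zettair.

```python
# Hypothetical thesaurus; a real system would use WordNet or similar.
SYNONYMS = {
    "car": ["automobile", "vehicle"],
    "fast": ["rapid", "quick"],
}

def expand_query(title_query: str) -> str:
    """Append known synonyms to each query term (bag-of-words expansion)."""
    terms = title_query.lower().split()
    expanded = list(terms)
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return " ".join(expanded)

print(expand_query("fast car"))
# -> "fast car rapid quick automobile vehicle"
```

Naive expansion of this kind is prone to topic drift, which is consistent with the reported finding that expansion significantly decreased retrieval performance.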

    Integrating and Ranking Uncertain Scientific Data

    Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions (though it does not improve prediction of well-known ones), suggesting that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings, suggesting that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently, suggesting that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
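
To make the probabilistic-ranking idea concrete, the sketch below ranks candidate protein-function predictions by the product of edge probabilities along a join path, assuming the edges are independent. The graph, probabilities, and entity names are hypothetical; BioRank's actual uncertainty model may differ.

```python
# edge -> probability that the link is correct (hypothetical values)
EDGES = {
    ("proteinA", "proteinB"): 0.9,
    ("proteinB", "functionX"): 0.7,
    ("proteinA", "proteinC"): 0.6,
    ("proteinC", "functionX"): 0.95,
}

def path_probability(path):
    """P(path) = product of edge probabilities, assuming independence."""
    p = 1.0
    for u, v in zip(path, path[1:]):
        p *= EDGES[(u, v)]
    return p

def rank_predictions(paths):
    """Rank candidate predictions by descending path probability."""
    return sorted(((path_probability(p), p) for p in paths), reverse=True)

paths = [("proteinA", "proteinB", "functionX"),
         ("proteinA", "proteinC", "functionX")]
for prob, path in rank_predictions(paths):
    print(f"{prob:.3f}  {' -> '.join(path)}")
# 0.630  proteinA -> proteinB -> functionX
# 0.570  proteinA -> proteinC -> functionX
```

When several paths support the same prediction, their probabilities could be combined (e.g. by a noisy-OR) before ranking.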

    Mass Customization of Cloud Services - Engineering, Negotiation and Optimization

    Several challenges hinder the entry of mass customization principles into Cloud computing. First, service engineering on the provider side needs to be automated. Second, there has to be a suitable negotiation mechanism that helps provider and consumer find an agreement on Quality-of-Service and price. Third, finding the optimal configuration requires adequate and efficient optimization techniques. The work at hand addresses these challenges through technical and economic contributions.
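
As a toy illustration of the third challenge only, the sketch below picks the consumer-optimal configuration by exhaustive search over a small QoS/price space. The configurations, utility function, budget, and weights are all hypothetical; the paper's negotiation mechanism is not modeled here.

```python
# Hypothetical configuration space and consumer utility function.
configs = [
    {"name": "basic",    "availability": 0.990, "price": 10.0},
    {"name": "standard", "availability": 0.995, "price": 18.0},
    {"name": "premium",  "availability": 0.999, "price": 35.0},
]

def consumer_utility(c, budget=25.0, weight=5000.0):
    """Value availability, penalize price; reject configurations over budget."""
    if c["price"] > budget:
        return float("-inf")
    return weight * c["availability"] - c["price"]

best = max(configs, key=consumer_utility)
print(best["name"])  # -> "standard" under these weights
```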

    Approximating expressive queries on graph-modeled data: The GeX approach

    We present the GeX (Graph-eXplorer) approach for the approximate matching of complex queries on graph-modeled data. GeX generalizes existing approaches and provides a highly expressive graph-based query language that supports queries ranging from keyword-based to fully structured ones. The GeX query answering model gracefully blends label approximation with structural relaxation, under the primary objective of delivering only meaningfully approximated results. GeX implements ad hoc data structures that are exploited by a top-k retrieval algorithm to enhance the approximate matching of complex queries. An extensive experimental evaluation on real-world datasets demonstrates the efficiency of GeX query answering.
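
The following sketch shows, under purely hypothetical scoring choices, how label approximation and structural relaxation might be blended into a single answer score; it is illustrative only and is not GeX's actual model.

```python
import difflib

def label_sim(query_label: str, data_label: str) -> float:
    """Approximate label match in [0, 1] via string similarity."""
    return difflib.SequenceMatcher(None, query_label.lower(),
                                   data_label.lower()).ratio()

def structure_penalty(path_length: int) -> float:
    """Relax a required edge into a path; longer paths score lower."""
    return 1.0 / path_length  # 1 hop = exact structure, no penalty

def answer_score(label_pairs, path_lengths):
    """Combine per-node label similarity with per-edge relaxation penalties."""
    score = 1.0
    for q, d in label_pairs:
        score *= label_sim(q, d)
    for length in path_lengths:
        score *= structure_penalty(length)
    return score

# A query node "author" matched to a data node "authors",
# with one required edge relaxed into a 2-hop path:
print(answer_score([("author", "authors")], [2]))
```

A top-k algorithm would then keep only the k highest-scoring answers, pruning candidates whose best possible score falls below the current k-th.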

    Weighted coverage based reviewer assignment

    Peer reviewing is a standard process for assessing the quality of submissions at academic conferences and journals. A very important task in this process is the assignment of reviewers to papers. However, achieving an appropriate assignment is not easy, because all reviewers should have a similar load and the subjects of the assigned papers should be consistent with the reviewers' expertise. In this paper, we propose a generalized framework for fair reviewer assignment. We first extract domain knowledge from the reviewers' published papers and model this knowledge as a set of topics. Then, we perform a group assignment of reviewers to papers, which is a generalization of the classic Reviewer Assignment Problem (RAP), considering the relevance of the papers to topics as weights. We study a special case of the problem, where reviewers are to be found for just one paper (the Journal Assignment Problem), and propose an exact algorithm that is fast in practice, as opposed to brute-force solutions. For the general case of assigning multiple papers, which is too hard to solve exactly, we propose a greedy algorithm that achieves a 1/2-approximation ratio compared to the exact solution. This is a great improvement over the 1/3-approximation solution proposed in previous work for the simpler coverage-based reviewer assignment problem, where there are no weights on topics. We theoretically prove the approximation bound of our solution and experimentally show that it is superior to the current state of the art.
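
To illustrate the flavor of such a greedy method, the sketch below repeatedly adds the (paper, reviewer) pair with the largest marginal gain in covered topic weight, subject to per-reviewer load limits. The data structures and scoring are illustrative; the paper's exact formulation and its 1/2-approximation guarantee are not reproduced here.

```python
def greedy_assign(papers, reviewers, expertise, per_paper=2, max_load=2):
    """
    papers:    {paper: {topic: weight}}   relevance of each paper to topics
    expertise: {(reviewer, topic): bool}  does the reviewer know the topic?
    Greedily pick the (paper, reviewer) pair with the largest gain in
    covered topic weight until every paper has `per_paper` reviewers.
    """
    assigned = {p: set() for p in papers}
    load = {r: 0 for r in reviewers}

    def gain(p, r):
        # Topic weight newly covered by r, beyond what assigned reviewers cover.
        covered = {t for s in assigned[p] for t in papers[p]
                   if expertise.get((s, t), False)}
        return sum(w for t, w in papers[p].items()
                   if expertise.get((r, t), False) and t not in covered)

    while any(len(assigned[p]) < per_paper for p in papers):
        candidates = [(gain(p, r), p, r)
                      for p in papers if len(assigned[p]) < per_paper
                      for r in reviewers
                      if load[r] < max_load and r not in assigned[p]]
        if not candidates:
            break  # no feasible pair left
        _, p, r = max(candidates)
        assigned[p].add(r)
        load[r] += 1
    return assigned

# Hypothetical example: one paper weighted toward two topics.
papers = {"P1": {"databases": 2.0, "ranking": 1.0}}
reviewers = ["alice", "bob"]
expertise = {("alice", "databases"): True, ("bob", "ranking"): True}
print(greedy_assign(papers, reviewers, expertise))
# -> {'P1': {'alice', 'bob'}} (set order may vary)
```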