
    Representing Network Trust and Using It to Improve Anonymous Communication

    Motivated by the effectiveness of correlation attacks against Tor, the censorship arms race, and observations of malicious relays in Tor, we propose that Tor users capture their trust in network elements using probability distributions over the sets of elements observed by network adversaries. We present a modular system that allows users to efficiently and conveniently create such distributions and use them to improve their security. The major components of this system are (i) an ontology of network-element types that represents the main threats to and vulnerabilities of anonymous communication over Tor, (ii) a formal language that allows users to naturally express trust beliefs about network elements, and (iii) a conversion procedure that takes the ontology, public information about the network, and user beliefs written in the trust language and produces a Bayesian Belief Network representing the probability distribution in a way that is concise and easy to sample. We also present preliminary experimental results showing that the distributions produced by our system can improve security when employed by users; further improvement is seen when the system is employed by both users and services. Comment: 24 pages; talk to be presented at HotPETs 201
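
    To make the conversion step concrete, here is a minimal Python sketch, assuming a toy model in place of the paper's ontology and trust language: a few hypothetical adversary classes with presence and observation probabilities are compiled into a sampleable distribution over the sets of network elements they observe. All names and probabilities below are illustrative assumptions.

    ```python
    # Toy sketch: turning simple trust beliefs into a sampleable model of
    # which network elements an adversary observes. Names and probabilities
    # are illustrative only; the paper's ontology and trust language are richer.
    import random
    from collections import Counter

    # Hypothetical beliefs: probability each adversary class is present,
    # and probability it observes a given element when present.
    adversaries = {
        "malicious_relay_op": {"present": 0.05,
                               "observes": {"relay1": 0.9, "relay2": 0.9}},
        "AS_level_observer":  {"present": 0.20,
                               "observes": {"AS7018": 1.0, "relay1": 0.3}},
    }

    def sample_observed_set(beliefs):
        """One draw from the joint distribution over observed element sets."""
        observed = set()
        for adv in beliefs.values():
            if random.random() < adv["present"]:          # is this adversary active?
                for element, p in adv["observes"].items():
                    if random.random() < p:               # does it see this element?
                        observed.add(element)
        return frozenset(observed)

    # Estimate the distribution over observed sets by Monte Carlo sampling.
    counts = Counter(sample_observed_set(adversaries) for _ in range(10_000))
    for s, c in counts.most_common(5):
        print(sorted(s), c / 10_000)
    ```

    A client could use such samples to weight path selection away from elements that frequently co-occur in the same adversary's view.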

    Pooling stated and revealed preference data in the presence of RP endogeneity

    Pooled discrete choice models combine revealed preference (RP) data and stated preference (SP) data to exploit the advantages of each. SP data is often treated with suspicion because consumers may respond differently in a hypothetical survey context than they do in the marketplace. However, models built on RP data can suffer from endogeneity bias when attributes that drive consumer choices are unobserved by the modeler and correlated with observed variables. Using a synthetic data experiment, we test the performance of pooled RP–SP models in recovering the preference parameters that generated the market data under conditions that choice modelers are likely to face, including (1) when there is potential for endogeneity problems in the RP data, such as omitted variable bias, and (2) when consumer willingness to pay for attributes may differ from the survey context to the market context. We identify situations where pooling RP and SP data does and does not mitigate each data source's respective weaknesses. We also show that the likelihood ratio test, which has been widely used to determine whether pooling is statistically justifiable, (1) can fail to identify the case where SP context preference differences and RP endogeneity bias shift the parameter estimates of both models in the same direction and magnitude and (2) is unreliable when the product attributes are fixed within a small number of choice sets, which is typical of automotive RP data. Our findings offer new insights into when pooling data sources may or may not be advisable for accurately estimating market preference parameters, including consideration of the conditions and context under which the data were generated as well as the relative balance of information between data sources. This work was supported in part by a grant from the Link Foundation, a grant from the National Science Foundation (#1064241), and a grant from Ford Motor Company. The opinions expressed are those of the authors and not necessarily those of the sponsors. Accepted manuscript
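
    For context, the pooling decision discussed above typically rests on a likelihood-ratio test comparing a pooled model against separately estimated RP and SP models. Below is a minimal sketch of that test; the log-likelihood values and degrees of freedom are placeholders rather than results from the paper, and scale-parameter handling is omitted.

    ```python
    # Hedged sketch of the likelihood-ratio test for pooling RP and SP data.
    # The log-likelihood values are placeholders; in practice they come from
    # estimating the same choice model on each data set and on the pool.
    from scipy.stats import chi2

    ll_rp = -1523.4      # log-likelihood, RP-only model (placeholder)
    ll_sp = -2210.7      # log-likelihood, SP-only model (placeholder)
    ll_pooled = -3741.9  # log-likelihood, pooled model (placeholder)
    k_restrictions = 6   # parameters restricted to be equal across data sets

    # Pooling is a restriction, so the pooled log-likelihood can be no higher
    # than the sum of the separate fits.
    lr_stat = -2.0 * (ll_pooled - (ll_rp + ll_sp))
    p_value = chi2.sf(lr_stat, df=k_restrictions)

    print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject parameter equality; pooling not supported by this test.")
    else:
        print("Fail to reject; pooling is statistically justifiable by this test.")
    ```

    The paper's point is that this test can pass or fail for the wrong reasons, for example when endogeneity bias and SP context effects move both sets of estimates in the same direction.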

    Towards a Generic Trace for Rule Based Constraint Reasoning

    CHR is a very versatile programming language that allows programmers to declaratively specify constraint solvers. An important part of the development of such solvers is in their testing and debugging phases. Current CHR implementations support those phases by offering tracing facilities with limited information. In this report, we propose a new trace for CHR which contains enough information to analyze any aspect of CHR∨ execution at some useful abstract level, common to several implementations. This approach is based on the idea of a generic trace. Such a trace is formally defined as an extension of the ω_r^∨ semantics of CHR. We show that it can be derived from the SWI-Prolog CHR trace.
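
    As a rough illustration (not the paper's actual trace format), the kind of information such a generic trace could record per rule application might look like the Python event schema below; the field names and the leq/2 example are assumptions made for the sketch.

    ```python
    # Illustrative schema for per-step trace events of a rule-based solver.
    # Field names are assumptions, not the generic trace defined in the report.
    from dataclasses import dataclass, field

    @dataclass
    class TraceEvent:
        step: int                                      # position in the derivation
        rule: str                                      # name of the CHR rule that fired
        kept: list = field(default_factory=list)       # head constraints kept (propagation)
        removed: list = field(default_factory=list)    # head constraints removed (simplification)
        added: list = field(default_factory=list)      # constraints introduced by the body
        store: list = field(default_factory=list)      # constraint store after the step

    # Example derivation for the classic leq/2 solver on the query
    # leq(A,B), leq(B,C), leq(C,A): transitivity fires, then antisymmetry.
    trace = [
        TraceEvent(1, "transitivity", kept=["leq(A,B)", "leq(B,C)"], added=["leq(A,C)"],
                   store=["leq(A,B)", "leq(B,C)", "leq(C,A)", "leq(A,C)"]),
        TraceEvent(2, "antisymmetry", removed=["leq(A,C)", "leq(C,A)"], added=["A = C"],
                   store=["leq(A,B)", "leq(B,C)"]),
    ]

    for e in trace:
        print(f"step {e.step}: {e.rule} removed={e.removed} added={e.added}")
    ```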

    Optimizing the computation of overriding

    We introduce optimization techniques for reasoning in DL^N, a recently introduced family of nonmonotonic description logics whose characterizing features appear well suited to modeling the applicative examples that naturally arise in biomedical domains and in semantic web access control policies. Such optimizations are validated experimentally on large KBs with more than 30K axioms. Speedups exceed one order of magnitude. For the first time, response times compatible with real-time reasoning are obtained with nonmonotonic KBs of this size.

    Topic Models Conditioned on Arbitrary Features with Dirichlet-multinomial Regression

    Although fully generative models have been successfully used to model the contents of text documents, they are often awkward to apply to combinations of text data and document metadata. In this paper we propose a Dirichlet-multinomial regression (DMR) topic model that includes a log-linear prior on document-topic distributions, where the prior is a function of observed features of the document, such as author, publication venue, references, and dates. We show that by selecting appropriate features, DMR topic models can meet or exceed the performance of several previously published topic models designed for specific data. Comment: Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI 2008).
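
    A minimal numpy sketch of the log-linear prior described above: each document's Dirichlet parameters over topics are an exponentiated linear function of its observed features. The feature matrix and weights here are illustrative placeholders; in the DMR model the weights are learned from data.

    ```python
    # Sketch of the DMR prior construction: alpha_{d,t} = exp(x_d . lambda_t),
    # then theta_d ~ Dirichlet(alpha_d). Shapes and values are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    n_docs, n_features, n_topics = 4, 3, 5
    X = rng.integers(0, 2, size=(n_docs, n_features)).astype(float)  # doc metadata (e.g. author, venue indicators)
    X = np.hstack([X, np.ones((n_docs, 1))])                         # intercept / default feature
    lam = rng.normal(scale=0.5, size=(n_features + 1, n_topics))     # per-topic feature weights (learned in the paper)

    alpha = np.exp(X @ lam)                               # log-linear Dirichlet parameters, one row per document
    theta = np.array([rng.dirichlet(a) for a in alpha])   # document-topic proportions drawn from the prior

    print(alpha.round(2))
    print(theta.round(2))
    ```

    Documents with similar metadata thus share similar priors over topics, which is how the model lets features like venue or date shift topic usage.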