
    Guessing under source uncertainty

    This paper considers the problem of guessing the realization of a finite alphabet source when some side information is provided. The only knowledge the guesser has about the source and the correlated side information is that the joint source is one among a family. A notion of redundancy is first defined, and a new divergence quantity that measures this redundancy is identified. This divergence quantity shares the Pythagorean property with the Kullback-Leibler divergence. Good guessing strategies that minimize the supremum redundancy (over the family) are then identified. The min-sup value measures the richness of the uncertainty set. The min-sup redundancies for two examples - the families of discrete memoryless sources and finite-state arbitrarily varying sources - are then determined. Comment: 27 pages, submitted to IEEE Transactions on Information Theory, March 2006; revised September 2006; contains minor modifications and restructuring based on reviewers' comments
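As background for the redundancy notion above, a minimal sketch (my construction, not the paper's): when the source distribution P is known exactly, guessing candidate values in decreasing order of probability minimizes every moment of the number of guesses (Massey; Arikan); the paper's redundancy measures the penalty for only knowing that P lies in a family.

```python
# Baseline guessing with a fully known distribution P (illustrative values).

def optimal_guess_order(pmf):
    """Return symbols sorted so the most probable is guessed first."""
    return sorted(pmf, key=pmf.get, reverse=True)

def expected_guesses(pmf, order):
    """E[G]: average position of the true symbol in the guessing order."""
    position = {x: i + 1 for i, x in enumerate(order)}
    return sum(p * position[x] for x, p in pmf.items())

pmf = {"a": 0.5, "b": 0.3, "c": 0.2}
best = optimal_guess_order(pmf)            # ['a', 'b', 'c']
print(round(expected_guesses(pmf, best), 6))  # 0.5*1 + 0.3*2 + 0.2*3 = 1.7
```

Any other guessing order for this pmf gives a strictly larger expected number of guesses, which is the sense in which the descending-probability order is optimal.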

    The nature of knowledge reliance in source guessing

    Context matters for complex human information processing. The ability to attribute information to its origin (source) is not only crucial for daily social interactions (e.g., who told me something?) and impression formation about other people, but it is also essential for judging the credibility of sources and the validity of the received information (e.g., where did I read something?). Even more so, failures of source attribution can have far-reaching consequences, for instance in the distribution of fake news in the recent media debate, but also in eyewitness testimony, where misattributions become even more severe (e.g., sentencing innocent individuals). Retrieving the origin of an episode from memory has been termed source monitoring (cf. M. K. Johnson et al., 1993). Alongside memory processes, source attributions can be based on guessing processes affected by general or contingency knowledge and plausibility or metacognitive beliefs when a source's characteristics cannot be retrieved from memory (cf. M. K. Johnson et al., 1993). This thesis specifically focuses on knowledge reliance in source guessing. That is, when people infer the source due to a lack of memory traces, they rely on knowledge acquired prior to or during learning. While memory processes were the prime focus of research in past decades, guessing processes have received less consideration. In this thesis, I address the underlying nature of knowledge reliance in source guessing more thoroughly across four manuscripts, thereby helping to put the missing pieces of the source-guessing puzzle into place. In these manuscripts, I examine knowledge reliance in source guessing with regard to its cognitive dynamics, stability, resource dependence, and generalization to novel stimuli, serving the overarching objective of inferring its underlying nature.
In the first manuscript, I demonstrate the utility of the process-tracing methodology mouse tracking to unpack the influence of knowledge reliance on source-monitoring processes. In the second manuscript, I quantify the extent to which individual differences in knowledge reliance in source guessing are stable across time and knowledge domains. In the third manuscript, I refine the understanding of the underlying automatic or controlled mechanisms of knowledge reliance. In the fourth manuscript, I expand the scope of knowledge reliance in source guessing to novel information contexts. In sum, this thesis provides new insights into the application of knowledge structures in judgmental processes under source uncertainty.

    An equality between entanglement and uncertainty

    Heisenberg's uncertainty principle implies that if one party (Alice) prepares a system and randomly measures one of two incompatible observables, then another party (Bob) cannot perfectly predict the measurement outcomes. This implication assumes that Bob does not possess an additional system that is entangled with the measured one; indeed, the seminal paper of Einstein, Podolsky and Rosen (EPR) showed that maximal entanglement allows Bob to perfectly win this guessing game. Although not in contradiction, the observations made by EPR and Heisenberg illustrate two extreme cases of the interplay between entanglement and uncertainty. On the one hand, no entanglement means that Bob's predictions must display some uncertainty. Yet on the other hand, maximal entanglement means that there is no more uncertainty at all. Here we follow an operational approach and give an exact relation - an equality - between the amount of uncertainty as measured by the guessing probability, and the amount of entanglement as measured by the recoverable entanglement fidelity. From this equality we deduce a simple criterion for witnessing bipartite entanglement and a novel entanglement monogamy equality. Comment: v2: published as "Entanglement-assisted guessing of complementary measurement outcomes", 11 pages, 1 figure

    Entropic Energy-Time Uncertainty Relation

    Energy-time uncertainty plays an important role in quantum foundations and technologies, and it was even discussed by the founders of quantum mechanics. However, standard approaches (e.g., Robertson's uncertainty relation) do not apply to energy-time uncertainty because, in general, there is no Hermitian operator associated with time. Following previous approaches, we quantify time uncertainty by how well one can read off the time from a quantum clock. We then use entropy to quantify the information-theoretic distinguishability of the various time states of the clock. Our main result is an entropic energy-time uncertainty relation for general time-independent Hamiltonians, stated for both the discrete-time and continuous-time cases. Our uncertainty relation is strong, in the sense that it allows for a quantum memory to help reduce the uncertainty, and this formulation leads us to reinterpret it as a bound on the relative entropy of asymmetry. Due to the operational relevance of entropy, we anticipate that our uncertainty relation will have information-processing applications. Comment: 6 + 9 pages, 2 figures

    Children's suggestibility in relation to their understanding about sources of knowledge

    In the experiments reported here, children chose either to maintain their initial belief about an object's identity or to accept the experimenter's contradicting suggestion. Both 3- to 4-year-olds and 4- to 5-year-olds were good at accepting the suggestion only when the experimenter was better informed than they were (implicit source monitoring). They were less accurate at recalling both their own and the experimenter's information access (explicit recall of experience), though they performed well above chance. Children were least accurate at reporting whether their final belief was based on what they were told or on what they experienced directly (explicit source monitoring). Contrasting results emerged when children decided between contradictory suggestions from two differentially informed adults: 3- to 4-year-olds were more accurate at reporting the knowledge source of the adult they believed than at deciding which suggestion was reliable. Decision making in this observation task may require reflective understanding akin to that required for explicit source judgments when the child participates in the task.

    Guessing Revisited: A Large Deviations Approach

    The problem of guessing a random string is revisited. A close relation between guessing and compression is first established. Then it is shown that if the sequence of distributions of the information spectrum satisfies the large deviation property with a certain rate function, then the limiting guessing exponent exists and is a scalar multiple of the Legendre-Fenchel dual of the rate function. Other sufficient conditions related to certain continuity properties of the information spectrum are briefly discussed. This approach highlights the importance of the information spectrum in determining the limiting guessing exponent. All known prior results are then re-derived as example applications of our unifying approach. Comment: 16 pages, to appear in IEEE Transactions on Information Theory
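A numerical sketch of the kind of limiting exponent discussed above (the parameter choices are mine, not the paper's): for an i.i.d. source, Arikan showed that (1/n) log2 E[G(X^n)^rho] converges to rho * H_alpha(P) with alpha = 1/(1+rho), where H_alpha is the Renyi entropy. The finite-n exponent under optimal (descending-probability) guessing approaches this limit from below.

```python
# Compare the finite-n guessing-moment exponent with its Renyi-entropy limit
# for a small binary i.i.d. source (illustrative parameters).
import itertools
import math

def renyi_entropy(probs, alpha):
    """H_alpha(P) in bits, for alpha != 1."""
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

def guessing_moment_exponent(probs, n, rho):
    """(1/n) log2 E[G^rho] under optimal guessing of length-n i.i.d. strings."""
    seq_probs = sorted(
        (math.prod(t) for t in itertools.product(probs, repeat=n)),
        reverse=True)
    moment = sum(p * (i + 1) ** rho for i, p in enumerate(seq_probs))
    return math.log2(moment) / n

P, rho = [0.7, 0.3], 1.0
limit = rho * renyi_entropy(P, 1 / (1 + rho))
finite = guessing_moment_exponent(P, 8, rho)
print(round(limit, 3), round(finite, 3))  # finite-n exponent sits below the limit
```

Increasing n moves `finite` toward `limit`, at the cost of enumerating all 2^n sequences, which is why the large-deviations machinery in the paper is needed for general (non-i.i.d.) sources.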

    Guessing based on length functions

    A guessing wiretapper's performance on a Shannon cipher system is analyzed for a source with memory. Close relationships between guessing functions and length functions are first established. Subsequently, asymptotically optimal encryption and attack strategies are identified and their performances analyzed for sources with memory. The performance metrics are exponents of guessing moments and probability of large deviations. The metrics are then characterized for unifilar sources. Universal asymptotically optimal encryption and attack strategies are also identified for unifilar sources. Guessing in the increasing order of Lempel-Ziv coding lengths is proposed for finite-state sources, and shown to be asymptotically optimal. Finally, competitive optimality properties of guessing in the increasing order of description lengths and Lempel-Ziv coding lengths are demonstrated. Comment: 16 pages, submitted to IEEE Transactions on Information Theory, Special Issue on Information Theoretic Security, simplified proof of Proposition
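An illustrative sketch of guessing by description length (my construction, not the paper's scheme; zlib's DEFLATE, an LZ77 variant, stands in for the LZ78 coding lengths the paper uses): candidate strings are guessed in increasing order of their compressed length, so highly regular strings are tried first.

```python
# Rank candidate strings by compressed length and guess shortest-first.
import itertools
import zlib

def compressed_length(s: bytes) -> int:
    """Description length proxy: zlib output size at maximum compression."""
    return len(zlib.compress(s, 9))

# All binary strings of length 12 over the alphabet {a, b}.
candidates = [bytes(t) for t in itertools.product(b"ab", repeat=12)]

# Guess in increasing order of description length.
order = sorted(candidates, key=compressed_length)
print(order[0])  # a maximally regular string is guessed first
```

The asymptotic optimality claimed in the paper is specific to Lempel-Ziv coding lengths on finite-state sources; this sketch only shows the ordering principle, not that guarantee.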

    Tight Bounds on the Rényi Entropy via Majorization with Applications to Guessing and Compression

    This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with a finite support is derived as a function of the size of the support, and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which is focused on the Shannon entropy, and it strengthens and generalizes the results of that paper to Rényi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and lossless data compression of discrete memoryless sources. Comment: The paper was published in the Entropy journal (special issue on Probabilistic Methods in Information Theory, Hypothesis Testing, and Coding), vol. 20, no. 12, paper no. 896, November 22, 2018. Online available at https://www.mdpi.com/1099-4300/20/12/89
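For readers unfamiliar with the quantity being bounded, a minimal sketch of the standard definition (textbook material, not the paper's bounds): the Rényi entropy of order alpha, which recovers the Shannon entropy in the limit alpha -> 1 and is nonincreasing in alpha.

```python
# Renyi entropy of order alpha for a finite pmf, with the Shannon limit.
import math

def renyi_entropy(probs, alpha):
    """H_alpha(P) = (1/(1-alpha)) * log2(sum_i p_i^alpha), alpha != 1."""
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

def shannon_entropy(probs):
    """H(P) = -sum_i p_i log2 p_i, the alpha -> 1 limit of H_alpha."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

P = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(P))                 # 1.75 bits
print(round(renyi_entropy(P, 0.999), 2))  # 1.75 (approaches Shannon as alpha -> 1)
```

The guessing connection the abstract refers to runs through the order alpha = 1/(1+rho): by Arikan's bounds, H_alpha at that order governs the rho-th guessing moment, which is why entropy bounds of arbitrary positive order translate into non-asymptotic guessing bounds.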