
    Neural signals encoding shifts in beliefs

    Dopamine is implicated in a diverse range of cognitive functions, including cognitive flexibility, task switching, and the signalling of novel or unexpected stimuli as well as advance information. There is also a longstanding line of thought that links dopamine with belief formation and, crucially, aberrant belief formation in psychosis. Integrating these strands of evidence suggests that dopamine plays a central role in belief updating and, more specifically, in encoding the meaningful information content of observations. The precise nature of this relationship has remained unclear. To address this question directly, we developed a paradigm that allowed us to decompose two distinct types of information content: information-theoretic surprise, which reflects the unexpectedness of an observation, and epistemic value, which induces shifts in beliefs or, more formally, Bayesian surprise. Using functional magnetic resonance imaging in humans, we show that dopamine-rich midbrain regions encode shifts in beliefs, whereas surprise is encoded in prefrontal regions, including the pre-supplementary motor area and dorsal cingulate cortex. By linking putative dopaminergic activity to belief updating, these data provide a link to the false belief formation that characterises hyperdopaminergic states associated with idiopathic and drug-induced psychosis.
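    The two quantities this abstract contrasts can be illustrated numerically: information-theoretic (Shannon) surprise is the negative log probability of an observation, while Bayesian surprise is the KL divergence from the prior to the posterior belief. A minimal sketch, with made-up distributions that are not the paper's stimuli:

    ```python
    import numpy as np

    def shannon_surprise(p_obs):
        """Information-theoretic surprise of an observation with probability p_obs (bits)."""
        return -np.log2(p_obs)

    def bayesian_surprise(prior, posterior):
        """Bayesian surprise: KL divergence from prior to posterior belief (bits)."""
        prior = np.asarray(prior, dtype=float)
        posterior = np.asarray(posterior, dtype=float)
        return float(np.sum(posterior * np.log2(posterior / prior)))

    # Hypothetical two-hypothesis belief state: an observation can be unexpected
    # (high Shannon surprise) without moving beliefs much, and vice versa.
    prior = [0.7, 0.3]
    posterior_after_expected = [0.75, 0.25]   # small belief shift
    posterior_after_unexpected = [0.2, 0.8]   # large belief shift

    print(shannon_surprise(0.7))                                # surprise of the likely outcome
    print(bayesian_surprise(prior, posterior_after_expected))   # small shift in beliefs
    print(bayesian_surprise(prior, posterior_after_unexpected)) # large shift in beliefs
    ```

    The point of the decomposition is that the two measures can dissociate, which is what lets the imaging analysis attribute them to different regions.
    
    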

    Communication Theoretic Data Analytics

    Widespread use of the Internet and social networks drives the generation of big data, which is proving useful in a number of applications. To deal with explosively growing amounts of data, data analytics has emerged as a critical technology related to computing, signal processing, and information networking. In this paper, a formalism is considered in which data is modeled as a generalized social network, and communication theory and information theory are thereby extended to data analytics. First, the creation of an equalizer to optimize information transfer between two data variables is considered, and financial data is used to demonstrate the advantages. Then, an information coupling approach based on information geometry is applied for dimensionality reduction, with a pattern recognition example to illustrate its effectiveness. These initial trials suggest the potential of communication theoretic data analytics for a wide range of applications.
    Comment: Published in IEEE Journal on Selected Areas in Communications, Jan. 201
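    As a rough illustration of quantifying "information transfer between two data variables", a histogram-based mutual information estimate can be sketched as below. This is an assumed stand-in for exposition, not the paper's equalizer or information-coupling construction:

    ```python
    import numpy as np

    def mutual_information(x, y, bins=8):
        """Plug-in histogram estimate of mutual information I(X;Y) in bits."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of X
        py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
        mask = pxy > 0
        return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    noise = rng.normal(size=5000)
    coupled = x + 0.1 * noise   # strongly dependent on x
    independent = noise          # unrelated to x

    print(mutual_information(x, coupled))      # high: variables share information
    print(mutual_information(x, independent))  # near zero: no shared information
    ```

    A high estimate indicates that one variable is informative about the other, which is the kind of dependence the equalizer in the paper is designed to exploit.
    
    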

    The Pragmatics of Arabic Religious Posts on Facebook: A Relevance-Theoretic Account

    Despite growing interest in the impact of computer-mediated communication on our lives, linguistic studies of such communication conducted in the Arabic language are scarce. Grounded in Relevance Theory, this paper seeks to fill this void by analysing the linguistic structure of Arabic religious posts on Facebook. First, I discuss communication on Facebook, treating it as a relevance-seeking process of writing or sharing posts, with the functions of ‘Like’ and ‘Share’ seen as cues for communicating propositional attitude. Second, I analyse a corpus of around 80 posts, revealing an interesting use of imperatives, interrogatives and conditionals, which shift the interpretation of such posts between descriptive and interpretive readings. I also argue that a rigorous system of incentives is employed in such posts in order to boost their relevance: positive, negative and challenging incentives link the textual to the visual message in an attempt to generate more cognitive effects for the readers.

    Emergence of social networks via direct and indirect reciprocity

    Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals' degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores ("indirect reciprocity"), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By so doing we are able to simultaneously account for the effects of both direct reciprocity (e.g. "tit-for-tat") and indirect reciprocity (helping strangers in order to increase one's reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks that are dynamic at the individual level but stable at the network level.
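    The decision rule the abstract describes, direct reciprocity towards known partners with a reputation-based fallback for strangers, can be sketched as follows. The agent names, reputation scores, and threshold are hypothetical illustrations, not parameters from the model:

    ```python
    def decide_to_help(agent, partner, history, reputation, threshold=0.5):
        """Direct reciprocity first: mirror the partner's last action towards us
        ("tit-for-tat"). With no shared history, fall back on indirect
        reciprocity: help strangers whose public reputation is good."""
        last = history.get((partner, agent))      # partner's last move towards agent
        if last is not None:
            return last                           # tit-for-tat: repay in kind
        return reputation[partner] >= threshold   # indirect reciprocity

    # Hypothetical population state:
    history = {("bob", "alice"): True}            # bob helped alice last round
    reputation = {"alice": 0.9, "bob": 0.2, "carol": 0.8}

    print(decide_to_help("alice", "bob", history, reputation))   # True: direct reciprocity
    print(decide_to_help("bob", "carol", history, reputation))   # True: good reputation
    print(decide_to_help("carol", "bob", history, reputation))   # False: poor reputation
    ```

    In the paper's bottom-up setting, repeated application of rules like this is what makes the network itself emerge from pairwise interactions rather than being imposed globally.
    
    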

    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
    Comment: 10 page
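    The observation that workflow steps reduce the maximal entropy of a data space (an "alphabet") can be illustrated with a toy binning step. The values and bins below are invented for illustration and do not reproduce the paper's cost-benefit measure:

    ```python
    import math
    from collections import Counter

    def entropy(symbols):
        """Shannon entropy (bits) of the empirical distribution over an alphabet."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # One hypothetical workflow step: raw values are mapped to a coarser
    # two-symbol alphabet, shrinking the entropy of the data space.
    raw = [0.12, 0.37, 0.41, 0.58, 0.63, 0.89, 0.91, 0.97]
    binned = ["low" if v < 0.5 else "high" for v in raw]

    print(entropy(raw))     # 8 distinct values: log2(8) = 3 bits
    print(entropy(binned))  # two-symbol alphabet: at most 1 bit
    ```

    A cost-benefit measure of the kind the paper proposes weighs such entropy reduction (benefit) against the cost of the transformation and any information lost.
    
    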