
    Cognitive Network Modeling as a Basis for Characterizing Human Communication Dynamics and Belief Contagion in Technology Adoption

    Societal-level macro models of social behavior do not sufficiently capture the nuances needed to adequately represent the dynamics of person-to-person interactions. Likewise, individual agent-level micro models have limited scalability: even minute parameter changes can drastically affect a model's response characteristics. This work presents an approach that uses agent-based modeling to represent detailed intra- and inter-personal interactions, together with a system dynamics model that integrates societal-level influences via reciprocating functions. A Cognitive Network Model (CNM) is proposed as a method for quantitatively characterizing cognitive mechanisms at the intra-individual level. To capture the rich dynamics of interpersonal communication in the propagation of beliefs and attitudes, a Socio-Cognitive Network Model (SCNM) is presented. The SCNM uses socio-cognitive tie strength to regulate how agents influence, and are influenced by, one another's beliefs during social interactions. We then present experimental results that support the use of this network-analytical approach, and we discuss its applicability to characterizing and understanding human information processing.
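    A minimal Python sketch of the kind of tie-strength-weighted belief update the SCNM description suggests is given below. The Agent class, the susceptibility parameter, and the specific update rule are illustrative assumptions, not the authors' actual formulation.

```python
class Agent:
    """Illustrative agent holding a single belief value in [0, 1]."""
    def __init__(self, belief):
        self.belief = belief

def interact(a, b, tie_strength, susceptibility=0.1):
    """Nudge each agent's belief toward the other's, scaled by the
    socio-cognitive tie strength between them (0 = no influence)."""
    shift = (b.belief - a.belief) * tie_strength * susceptibility
    a.belief += shift   # a moves toward b
    b.belief -= shift   # b moves toward a by the same amount

# Toy run: two agents with a moderately strong tie converge over repeated contact.
a, b = Agent(0.9), Agent(0.2)
for _ in range(50):
    interact(a, b, tie_strength=0.6)
print(round(a.belief, 3), round(b.belief, 3))
```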

    Shape skeletons and shape similarity

    Judgments of similarity play an integral role in the human cognitive system, as they provide a means for extracting information about how objects in the world relate to each other. This similarity information is applied in various cognitive tasks, such as categorization, recognition, and identification. Previous work suggests that perceived objects are cognitively represented in a psychological space where similarity is preserved, allowing for an internal structured representation of objects in the world (Shepard, 1964). For an internal representation to be formed, information about an object must be extracted. Shape, a highly informative and salient property of an object, is often used. Judgments made about shape similarity reflect how humans functionally represent and utilize shape information from an object. Computational shape representation has been achieved with varying degrees of success (e.g., Blum, 1973; Biederman, 1987). This variability is due, in part, to the complexity of mimicking the seemingly effortless human ability to make judgments about shape despite numerous possible complications, such as sparse information and occlusions. This work presents the use of a Bayesian estimate of a shape's skeleton, the maximum a posteriori (MAP) skeleton (Feldman & Singh, 2006), as part of a generative model of shape that allows for the computation of a probabilistically based similarity metric. This method of shape representation makes it possible to predict similarity judgments reported by human subjects on collections of shapes that exhibit differences in both part structure and metric qualities and that have been generated by an unrelated process. It is argued that the derivation of a similarity metric from this model provides the previously unavailable relationship between shape representation and categorical judgments about shape.
    Ph.D. thesis. Includes bibliographical references (p. 83-91).
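    The following Python sketch illustrates, in a highly simplified form, how a skeleton-based generative model could yield a probabilistic similarity metric. The Gaussian "rib noise" likelihood and the symmetrization used here are placeholder assumptions and are far simpler than the MAP-skeleton model of Feldman and Singh (2006).

```python
import numpy as np

def shape_log_likelihood(contour, skeleton, sigma=1.0):
    """Log-likelihood of contour points under a toy generative model:
    each contour point is assumed to be 'grown' from its nearest
    skeleton point with Gaussian noise of scale sigma."""
    # Distance from every contour point to its nearest skeleton point.
    d = np.min(np.linalg.norm(contour[:, None, :] - skeleton[None, :, :], axis=2), axis=1)
    return np.sum(-0.5 * (d / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

def skeletal_similarity(shape_a, shape_b, sigma=1.0):
    """Symmetrized, probabilistically motivated similarity score:
    how well each shape's contour is explained by the other's skeleton.
    Each shape is a dict with 'contour' and 'skeleton' point arrays (N x 2)."""
    ll_ab = shape_log_likelihood(shape_a["contour"], shape_b["skeleton"], sigma)
    ll_ba = shape_log_likelihood(shape_b["contour"], shape_a["skeleton"], sigma)
    return 0.5 * (ll_ab + ll_ba)
```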

    Conceptual Complexity and the Bias-Variance Tradeoff

    In this paper we propose that the dichotomy between exemplar-based and prototype-based models of concept learning can be regarded as an instance of the tradeoff between complexity and data-fit, often referred to in the statistical learning literature as the bias-variance tradeoff. This continuum reflects differences in models' assumptions about the form of the concepts in their environments: models at one extreme, here exemplified by prototype models, assume a simple conceptual form, entailing high bias; models at the other extreme, exemplified by exemplar models, entertain more complex hypotheses, but tend to overfit the data, with a concomitant loss in generalization performance. To investigate human learners' place on this continuum, we had subjects learn concepts of varying levels of structural complexity. Concepts consisted of mixtures of Gaussian distributions, with the number of mixture components serving as the measure of complexity. We then fit subjects' responses to both a representative exemplar model and a representative prototype model. With moderately complex multimodal categories, the exemplar model generally fit subjects' performance better, due to the prototype models' overly narrow (high-bias) assumption of a unimodal concept. But with high-complexity concepts, the exemplar model's overly flexible (high-variance) assumptions made it overfit concepts relative to subjects, allowing it to outperform subjects on highly complex concepts. We conclude that neither strategy is uniformly optimal as a model of human performance.
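    As a rough sketch of the two model classes being compared, the Python code below implements a prototype model as a single Gaussian per category and an exemplar model as a kernel density over stored items. The bandwidth, the toy data, and the function names are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np
from scipy.stats import multivariate_normal

def prototype_log_density(x, exemplars):
    """Prototype model: one Gaussian fit to all category exemplars (high bias)."""
    mu = exemplars.mean(axis=0)
    cov = np.cov(exemplars, rowvar=False) + 1e-6 * np.eye(exemplars.shape[1])
    return multivariate_normal.logpdf(x, mean=mu, cov=cov)

def exemplar_log_density(x, exemplars, bandwidth=0.3):
    """Exemplar model: Gaussian kernel density over stored exemplars (high variance)."""
    dists = np.linalg.norm(exemplars - x, axis=1)
    kernel = np.exp(-0.5 * (dists / bandwidth) ** 2)
    return np.log(kernel.mean() + 1e-12)

def classify(x, categories, model):
    """Assign x to the category whose model gives it the highest density."""
    return max(categories, key=lambda c: model(x, categories[c]))

# Toy bimodal concept "A" (two mixture components) vs. unimodal concept "B".
rng = np.random.default_rng(0)
cats = {
    "A": np.vstack([rng.normal(m, 0.5, (20, 2)) for m in ([0, 0], [3, 3])]),
    "B": rng.normal([1.5, -2], 0.5, (40, 2)),
}
# Probe near one of A's modes; prototype_log_density can be swapped in to compare.
print(classify(np.array([2.8, 2.9]), cats, exemplar_log_density))
```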

    Towards Automated Personality Identification Using Speech Acts

    The way people communicate, be it verbally, visually, or via text, is indicative of personality traits. In social media, the status update is used by individuals to communicate to their social networks in an always-on fashion. In doing so, individuals employ various kinds of speech acts that, while primarily communicating their content, also leave traces of their personality dimensions behind. We human-coded a set of Facebook status updates from the myPersonality dataset with speech act labels and then experimented with surface-level linguistic features, including lexical, syntactic, and simple sentiment features, to automatically label status updates with their appropriate speech act. We apply supervised learning to the dataset and, using our features, are able to classify with high accuracy the two dominant kinds of acts that have been found to occur in social media. We also used the coded data to perform a regression analysis to determine which speech acts are indicative of certain personality dimensions. The implications of our work allow for automatic large-scale personality identification through social media status updates.
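    A hedged sketch of the classification step might look like the following, using a TF-IDF bag-of-words pipeline as a stand-in for the paper's lexical, syntactic, and sentiment features. The example updates and speech act labels are invented for illustration, since the myPersonality data are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; the actual study used human-coded Facebook status updates.
updates = [
    "So excited for the weekend!!!",            # expressive act
    "Does anyone know a good dentist nearby?",  # question/directive act
    "Just finished my thesis draft.",           # assertive/statement act
    "Can someone give me a ride tomorrow?",     # question/directive act
]
labels = ["expressive", "question", "statement", "question"]

# Word-level n-gram features stand in for the richer feature set in the paper.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(updates, labels)
print(model.predict(["Anyone up for coffee later?"]))
```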

    Finding the Red Balloon

    Erica Briscoe and Ethan Trewhitt of the Georgia Tech Research Institute discuss their recent second-place finish in the DARPA Network Challenge, which used social media and online networks to find 10 red balloons across the U.S. They also discuss how tracking the flow of information and misinformation through social media can be used to gather information and mobilize people.

    Semantic Analysis of Open Source Data for Syndromic Surveillance

    Objective
    The objective of this analysis is to leverage recent advances in natural language processing (NLP) to develop new methods and system capabilities for processing social media (Twitter messages) for situational awareness (SA), syndromic surveillance (SS), and event-based surveillance (EBS). Specifically, we evaluated the use of human-in-the-loop semantic analysis to assist public health (PH) SA stakeholders in SS and EBS using massive amounts of publicly available social media data.
    Introduction
    Social media messages are often short, informal, and ungrammatical. They frequently involve text, images, audio, or video, which makes the identification of useful information difficult. This complexity reduces the efficacy of standard information extraction techniques [1]. However, recent advances in NLP, especially methods tailored to social media [2], have shown promise in improving real-time PH surveillance and emergency response [3]. Surveillance data derived from semantic analysis, combined with traditional surveillance processes, has the potential to improve event detection and characterization. The CDC Office of Public Health Preparedness and Response (OPHPR), Division of Emergency Operations (DEO), and the Georgia Tech Research Institute have collaborated on the advancement of PH SA through the development of new approaches to using semantic analysis for social media.
    Methods
    To understand how computational methods may benefit SS and EBS, we studied an iterative refinement process in which the data user actively cultivated text-based topics ("semantic culling") in a semi-automated SS process. This human-in-the-loop process was critical for creating accurate and efficient extraction functions over large, dynamic volumes of data. The general process involved identifying a set of expert-supplied keywords, which were used to collect an initial set of social media messages. For the purposes of this analysis, researchers applied topic modeling to categorize related messages into clusters. Topic modeling uses statistical techniques to semantically cluster messages and automatically determine salient aggregations. A user then semantically culled messages according to their PH relevance.
    In June 2016, researchers collected 7,489 worldwide English-language Twitter messages (tweets) and compared three sampling methods: a baseline random sample (C1, n=2700), a keyword-based sample (C2, n=2689), and one gathered after semantically culling C2 topics of irrelevant messages (C3, n=2100). Researchers used a software tool, Luminoso Compass [4], to sample and perform topic modeling using its real-time modeling and Twitter integration features. For C2 and C3, researchers sampled tweets that the Luminoso service matched to both clinical and layman definitions of Rash and Gastro-Intestinal syndromes [5] and Zika-like symptoms. Layman terms were derived from clinical definitions using plain-language medical thesauri. ANOVA statistics were calculated using SPSS software. Post-hoc pairwise comparisons were completed using ANOVA with Tukey's honest significant difference (HSD) test.
    Results
    An ANOVA was conducted, finding the following mean relevance values: 3% (+/- 0.01%), 24% (+/- 6.6%), and 27% (+/- 9.4%) for C1, C2, and C3, respectively. Post-hoc pairwise comparison tests showed that the percentages of discovered messages related to the event tweets using the C2 and C3 methods were significantly higher than for the C1 method (random sampling) (p<0.05). This indicates that the human-in-the-loop approach provides benefits in filtering social media data for SS and EBS; notably, this increase is on the basis of a single iteration of semantic culling, and subsequent iterations could be expected to increase the benefits.
    Conclusions
    This work demonstrates the benefits of incorporating non-traditional data sources into SS and EBS. It was shown that an NLP-based extraction method, in combination with human-in-the-loop semantic analysis, may enhance the potential value of social media (Twitter) for SS and EBS. It also supports the claim that advanced analytical tools for processing non-traditional SA, SS, and EBS sources, including social media, have the potential to enhance disease detection, risk assessment, and decision support by reducing the time it takes to identify public health events.
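    The study performed its topic modeling with Luminoso Compass; as a generic illustration of the human-in-the-loop semantic culling loop described above, the sketch below uses scikit-learn's LDA as a stand-in, with the relevant_topics argument standing in for the analyst's relevance judgments. Function names and parameters here are assumptions for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def cull_by_topic(tweets, n_topics=5, relevant_topics=None):
    """Cluster tweets into topics, then keep only tweets whose dominant
    topic was judged relevant by a public-health analyst (the
    'human-in-the-loop' step)."""
    vec = CountVectorizer(stop_words="english", min_df=1)
    X = vec.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(X)      # per-tweet topic proportions
    dominant = doc_topics.argmax(axis=1)   # each tweet's strongest topic
    if relevant_topics is None:            # first pass: analyst reviews topics
        return lda, dominant
    # Second pass: keep only tweets assigned to analyst-approved topics.
    return [t for t, k in zip(tweets, dominant) if k in relevant_topics]
```

    In use, a first call without relevant_topics returns the fitted model and topic assignments for the analyst to review; a second call with, say, relevant_topics={0, 2} returns the semantically culled message set for downstream surveillance analysis.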