
    Auditory startle response predicts introversion: an individual analysis

    We assessed a possible link between the introversion/extraversion spectrum and sensorimotor gating and predicted that self-reported introverts would have more sensitive sensorimotor gating pathways than extraverts at the individual-subject level. Twenty-eight subjects self-identified as introverts or extraverts; individuals who self-identified as both introverted and extraverted were classified as ambiverts. Participants' orbicularis oculi muscles were measured electromyographically while abrupt auditory stimuli ranging from 50 to 100 decibels were played over headphones. As predicted, introverts exhibited greater electromyographic frequencies and magnitudes of response at almost all levels of stimulus intensity. These results indicate that introverts tend to be more sensitive, on a physiological level, to incoming stimuli than extraverts; this finding counters explanations of introversion as a purely social construct. Interestingly, a further, unpredicted pattern of three distinct groups was also observed. These groups are not organized along the lines of introversion/extraversion and may be linked to the concept of neuroticism.

    Heuristics for Broader Assessment of Effectiveness and Usability in Technology-Mediated Technical Communication

    Purpose: To offer additional tools for the assessment of effectiveness and usability in technology-mediated communication, based on established heuristics. Method: An interdisciplinary group of researchers at Rensselaer Polytechnic Institute selected five disparate examples of technology-mediated communication, formally evaluated each using contemporary heuristics, and then engaged in an iterative design process to arrive at an expanded toolkit for in-depth analyses. Results: A set of heuristics and operationalized metrics for the deeper analysis of a broader scope of contemporary technology-mediated communication. Conclusions: The continual evolution of communication, including the emergence of new, interactive media, provides a challenging opportunity to identify effective approaches and techniques. There are benefits to a renewed focus on relationships between people, and between people and information, and we offer additional criteria and metrics to supplement established means of heuristic analysis.

    Amyloid-β accumulation in the CNS in human growth hormone recipients in the UK

    Human-to-human transmission of Creutzfeldt–Jakob disease (CJD) has occurred through medical procedures resulting in iatrogenic CJD (iCJD). One of the commonest causes of iCJD was the use of human pituitary-derived growth hormone (hGH) to treat primary or secondary growth hormone deficiency. As part of a comprehensive tissue-based analysis of the largest cohort yet collected (35 cases) of UK hGH-iCJD cases, we describe the clinicopathological phenotype of hGH-iCJD in the UK. In the 33/35 hGH-iCJD cases with sufficient paraffin-embedded tissue for full pathological examination, we report the accumulation of the amyloid beta (Aβ) protein associated with Alzheimer’s disease (AD) in the brains and cerebral blood vessels in 18/33 hGH-iCJD patients and, for the first time, in 5/12 hGH recipients who died from causes other than CJD. Aβ accumulation was markedly less prevalent in age-matched patients who died from sporadic CJD and variant CJD. These results are consistent with the hypothesis that Aβ, which can accumulate in the pituitary gland, was present in the inoculated hGH preparations and had a seeding effect in the brains of around 50% of all hGH recipients, producing an AD-like neuropathology and cerebral amyloid angiopathy (CAA), regardless of whether CJD neuropathology had occurred. These findings indicate that Aβ seeding can occur independently and in the absence of the abnormal prion protein in the human brain. Our findings provide further evidence for the prion-like seeding properties of Aβ and give insights into the possibility of iatrogenic transmission of AD and CAA.

    Common Genetic Variants, Acting Additively, Are a Major Source of Risk for Autism

    Background: Autism spectrum disorders (ASD) are early-onset neurodevelopmental syndromes typified by impairments in reciprocal social interaction and communication, accompanied by restricted and repetitive behaviors. While rare and especially de novo genetic variation is known to affect liability, whether common genetic polymorphism plays a substantial role is an open question, and the relative contribution of genes and environment is contentious. It is probable that the relative contributions of rare and common variation, as well as environment, differ between ASD families having only a single affected individual (simplex) and multiplex families with two or more affected individuals. Methods: By using quantitative genetics techniques and the contrast of ASD subjects to controls, we estimate what portion of liability can be explained by additive genetic effects, known as narrow-sense heritability. We evaluate relatives of ASD subjects using the same methods to evaluate the assumptions of the additive model and partition families by simplex/multiplex status to determine how heritability changes with status. Results: By analyzing common variation throughout the genome, we show that common genetic polymorphism exerts substantial additive genetic effects on ASD liability and that simplex/multiplex family status has an impact on the identified composition of that risk. As a fraction of the total variation in liability, the estimated narrow-sense heritability exceeds 60% for ASD individuals from multiplex families and is approximately 40% for simplex families. By analyzing parents, unaffected siblings, and alleles not transmitted from parents to their affected children, we conclude that the data for simplex ASD families follow the expectation for additive models closely. The data from multiplex families deviate somewhat from an additive model, possibly due to parental assortative mating.
    Conclusions: Our results, when viewed in the context of results from genome-wide association studies, demonstrate that a myriad of common variants of very small effect impacts ASD liability.
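The abstract's central quantity, narrow-sense heritability (h², the fraction of phenotypic variance explained by additive genetic effects), can be made concrete with a simulation. The sketch below is not the authors' SNP-based method; it uses the classical midparent-offspring regression, whose slope estimates h² under a purely additive model, on entirely invented simulated data.

```python
import numpy as np

# Toy illustration of narrow-sense heritability (h^2) under a purely
# additive model: the regression slope of offspring phenotype on the
# midparent phenotype estimates h^2. All data below are simulated.
rng = np.random.default_rng(0)

n_families = 20_000
n_loci = 100
h2_true = 0.6  # heritability used to generate the data

def breeding_values(n):
    # Additive genetic value: sum of unit effects over 0/1/2 genotypes
    # at independent loci with allele frequency 0.5, then standardized.
    g = rng.binomial(2, 0.5, size=(n, n_loci)).sum(axis=1).astype(float)
    return (g - g.mean()) / g.std()

a_mother = breeding_values(n_families)
a_father = breeding_values(n_families)
# Offspring additive value = midparent value + Mendelian segregation noise
# (segregation variance is half the additive variance).
a_child = 0.5 * (a_mother + a_father) + rng.normal(0, np.sqrt(0.5), n_families)

def phenotype(a):
    # Phenotype = genetic signal + environmental noise, total variance 1.
    return np.sqrt(h2_true) * a + rng.normal(0, np.sqrt(1 - h2_true), len(a))

p_child = phenotype(a_child)
midparent = 0.5 * (phenotype(a_mother) + phenotype(a_father))

# Under the additive model, this slope estimates h^2.
slope = np.cov(midparent, p_child)[0, 1] / np.var(midparent, ddof=1)
print(f"estimated h^2 = {slope:.2f}")
```

With 20,000 simulated families the slope lands close to the generating value of 0.6; studies like the one summarized here must instead estimate additive effects from genome-wide SNP data, where relatives are not required.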

    Differential Attraction of Malaria Mosquitoes to Volatile Blends Produced by Human Skin Bacteria

    The malaria mosquito Anopheles gambiae sensu stricto is mainly guided by human odour components to find its blood host. Skin bacteria play an important role in the production of human body odour and when grown in vitro, skin bacteria produce volatiles that are attractive to A. gambiae. The role of single skin bacterial species in the production of volatiles that mediate the host-seeking behaviour of mosquitoes has remained largely unknown and is the subject of the present study. Headspace samples were taken to identify volatiles that mediate this behaviour. These volatiles could be used as mosquito attractants or repellents. Five commonly occurring species of skin bacteria were tested in an olfactometer for the production of volatiles that attract A. gambiae. Odour blends produced by some bacterial species were more attractive than blends produced by other species. In contrast to odours from the other bacterial species tested, odours produced by Pseudomonas aeruginosa were not attractive to A. gambiae. Headspace analysis of bacterial volatiles in combination with behavioural assays led to the identification of six compounds that elicited a behavioural effect in A. gambiae. Our results provide, to our knowledge, the first evidence for a role of selected bacterial species, common on the human skin, in determining the attractiveness of humans to malaria mosquitoes. This information will be used in the further development of a blend of semiochemicals for the manipulation of mosquito behaviour.

    Irony Detection in Twitter: The Role of Affective Content

    © ACM 2016. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Internet Technology, Vol. 16. http://dx.doi.org/10.1145/2930663. [EN] Irony has been proven to be pervasive in social media, posing a challenge to sentiment analysis systems. It is a creative linguistic phenomenon where affect-related aspects play a key role. In this work, we address the problem of detecting irony in tweets, casting it as a classification problem. We propose a novel model that explores the use of affective features based on a wide range of lexical resources available for English, reflecting different facets of affect. Classification experiments over different corpora show that affective information helps in distinguishing between ironic and non-ironic tweets. Our model outperforms the state of the art in almost all cases. The National Council for Science and Technology (CONACyT Mexico) has funded the research work of Delia Irazu Hernandez Farias (Grant No. 218109/313683 CVU-369616). The work of Viviana Patti was partially carried out at the Universitat Politecnica de Valencia within the framework of a fellowship of the University of Turin cofunded by Fondazione CRT (World Wide Style Program 2). The work of Paolo Rosso has been partially funded by the SomEMBED TIN2015-71147-C2-1-P MINECO research project and by the Generalitat Valenciana under the grant ALMAMATER (PrometeoII/2014/030). Hernandez-Farias, DI.; Patti, V.; Rosso, P. (2016). Irony Detection in Twitter: The Role of Affective Content. ACM Transactions on Internet Technology, 16(3), 19:1-19:24. https://doi.org/10.1145/2930663
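The model summarized above builds classification features from affective lexicons. As a minimal, self-contained sketch of that idea, the snippet below scores a tweet against a tiny hand-made polarity lexicon. The lexicon, the example tweet, and the `contrast` feature are invented for illustration; they stand in for the far richer English affective resources the paper actually draws on.

```python
# Invented mini-lexicon: word -> polarity score in [-1, 1].
AFFECT = {
    "love": 0.9, "great": 0.8, "wonderful": 0.9, "happy": 0.7,
    "hate": -0.9, "terrible": -0.8, "stuck": -0.5, "monday": -0.2,
}

def affective_features(tweet):
    # Tokenize crudely and look each word up in the lexicon.
    words = [w.strip(".,!?#") for w in tweet.lower().split()]
    scores = [AFFECT[w] for w in words if w in AFFECT]
    pos = sum(s for s in scores if s > 0)
    neg = -sum(s for s in scores if s < 0)
    return {
        "pos": pos,                  # total positive affect
        "neg": neg,                  # total negative affect (magnitude)
        "contrast": min(pos, neg),   # strong mixed affect: a surface cue
                                     # often associated with ironic reversal
    }

features = affective_features("I just love being stuck in traffic, wonderful Monday")
print(features)
```

In a full system such features would be fed, alongside many others, into a supervised classifier; the sketch only shows how lexicon lookups turn text into numeric inputs.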

    Manipulating the alpha level cannot cure significance testing

    We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else is not acceptable.
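The abstract's appeal to cumulative evidence can be illustrated with one standard aggregation technique, fixed-effect inverse-variance pooling. This is only an illustration of pooling across studies, not a method the authors endorse, and the five effect estimates and standard errors below are invented numbers.

```python
import math

# Fixed-effect inverse-variance pooling across five hypothetical studies.
# Effect sizes (standardized mean differences) and standard errors are
# invented illustrative values.
effects = [0.30, 0.12, 0.45, 0.22, 0.05]
ses = [0.15, 0.20, 0.25, 0.10, 0.18]

weights = [1.0 / se**2 for se in ses]  # precision weights (1 / SE^2)
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The pooled estimate and interval convey a graded strength of evidence across studies; nothing forces the reader to collapse them into an accept/reject verdict at any alpha.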

    The Epistemic Status of Processing Fluency as Source for Judgments of Truth

    This article combines findings from cognitive psychology on the role of processing fluency in truth judgments with epistemological theory on justification of belief. We first review evidence that repeated exposure to a statement increases the subjective ease with which that statement is processed. This increased processing fluency, in turn, increases the probability that the statement is judged to be true. The basic question discussed here is whether the use of processing fluency as a cue to truth is epistemically justified. In the present analysis, based on Bayes’ Theorem, we adopt the reliable-process account of justification presented by Goldman (1986) and show that fluency is a reliable cue to truth, under the assumption that the majority of statements one has been exposed to are true. In the final section, we broaden the scope of this analysis and discuss how processing fluency as a potentially universal cue to judged truth may contribute to cultural differences in commonsense beliefs.
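The Bayesian reliability argument sketched in the abstract can be made concrete with assumed numbers. Every probability below is an invented illustration, not a figure from the article: a prior reflecting that most encountered statements are true, and likelihoods describing how often true and false statements feel fluent.

```python
# Bayes' Theorem applied to fluency as a cue to truth.
# All probabilities are assumed illustrative values.
p_true = 0.7          # prior: most statements one encounters are true
p_fluent_true = 0.8   # true (often-repeated) statements feel fluent
p_fluent_false = 0.4  # false statements feel fluent less often

# P(fluent) by the law of total probability, then P(true | fluent).
p_fluent = p_fluent_true * p_true + p_fluent_false * (1 - p_true)
posterior = p_fluent_true * p_true / p_fluent
print(f"P(true | fluent) = {posterior:.3f}")
```

With these numbers the posterior (about 0.82) exceeds the 0.7 prior, which is the sense in which fluency is a reliable cue; with these likelihoods the posterior stays above one half only while the prior exceeds one third, matching the article's proviso that the cue's reliability rests on most encountered statements being true.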