91 research outputs found

    Mapping Acoustic and Semantic Dimensions of Auditory Perception

    Auditory categorisation is a function of sensory perception that allows humans to generalise across the many different sounds present in the environment and classify them into behaviourally relevant categories. These categories cover not only the variance in the acoustic properties of the signal but also a wide variety of sound sources. However, it is unclear to what extent the acoustic structure of sound is associated with, and conveys, different facets of semantic category information. It also remains unknown whether people use such information, and what drives their decisions, when both acoustic and semantic information about a sound is available. To answer these questions, we used existing methods broadly practised in linguistics, acoustics and cognitive science, and bridged these domains by delineating their shared space. Firstly, we took a model-free exploratory approach to examine the underlying structure and inherent patterns in our dataset; to this end, we ran principal components, clustering and multidimensional scaling analyses. At the same time, we mapped the topography of the sound labels’ semantic space using corpus-based word embedding vectors. We then built an LDA model predicting class membership and compared the model-free approach and the model predictions with the actual taxonomy. Finally, by conducting a series of web-based behavioural experiments, we investigated whether the acoustic and semantic topographies relate to perceptual judgements. This analysis pipeline showed that natural sound categories can be successfully predicted from acoustic information alone and that the perception of natural sound categories has some acoustic grounding. Results from our studies help to recognise the role of physical sound characteristics and their meaning in the process of sound perception and give invaluable insight into the mechanisms governing machine-based and human classification.
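    Below is a minimal sketch of the kind of pipeline this abstract describes, assuming scikit-learn and reading "LDA" here as linear discriminant analysis; the acoustic feature matrix X and category labels y are hypothetical placeholders, not the study's data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import MDS
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))      # placeholder acoustic descriptors (one row per sound)
    y = rng.integers(0, 5, size=200)    # placeholder sound-category labels

    # Model-free exploration: principal components, clustering, multidimensional scaling.
    pc_coords = PCA(n_components=2).fit_transform(X)
    clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    mds_coords = MDS(n_components=2, random_state=0).fit_transform(X)

    # Model-based step: predict category membership from acoustic information alone.
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
    print("Mean cross-validated accuracy:", scores.mean())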

    Searching Spontaneous Conversational Speech: Proceedings of ACM SIGIR Workshop (SSCS2008)

    Proceedings of the Detection and Classification of Acoustic Scenes and Events 2016 Workshop (DCASE2016)

    Pragmatics & Language Learning, Volume 12

    Pragmatics & Language Learning Volume 12 examines the organization of second language and multilingual speakers’ talk and pragmatic knowledge across a range of naturalistic and experimental activities. Based on data collected on Danish, English, Hawaiʻi Creole, Indonesian, and Japanese as target languages, the contributions explore the nexus of pragmatic knowledge, interaction, and L2 learning outside and inside of educational settings. Pragmatics & Language Learning (“PLL”), a refereed series sponsored by the National Foreign Language Resource Center at the University of Hawaiʻi, publishes selected papers from the biennial Conference on International Pragmatics & Language Learning under the editorship of the conference hosts and the series editor, Gabriele Kasper

    Combining Text Classification and Fact Checking to Detect Fake News

    Fake news is widely spread in social and news media, which makes its detection an emerging research topic attracting attention today. In news media and social media, information spreads at high speed but without accuracy checks, so detection mechanisms must be able to assess news quickly enough to combat the spread of fake news, which has the potential for a negative impact on individuals and society. Detecting fake news is therefore important, and it is also a technically challenging problem. The challenge is to use text classification to combat fake news: this includes determining appropriate text classification methods and evaluating how well these methods distinguish between fake and non-fake news. Machine learning is helpful for building artificial intelligence systems based on tacit knowledge because it can help us solve complex problems based on real-world data. For this reason, I propose that integrating text classification with fact checking of check-worthy statements can help detect fake news. I used text processing and three classifiers, namely Passive Aggressive, Naïve Bayes, and Support Vector Machine, to classify the news data. Text classification mainly focuses on extracting various features from texts and then incorporating these features into the classification. The big challenge in this area is the lack of an efficient method to distinguish fake news from non-fake news, owing to the scarcity of suitable corpora. I applied the three machine learning classifiers to two publicly available datasets, and experimental analysis on these datasets shows very encouraging and improved performance.

    Simple classification alone is not accurate enough to detect fake news because the classification methods are not specialized for it, so I added a system that checks the news in depth, sentence by sentence. Fact checking is a multi-step process that begins with the extraction of check-worthy statements. Identifying check-worthy statements is a subtask of the fact checking process whose automation would reduce the time and effort required to fact check a statement. In this thesis I propose an approach that classifies statements into check-worthy and not check-worthy while also taking into account the context around a statement. This work shows that including context makes a significant contribution to classification, while at the same time using more general features to capture information from sentences. The aim of this challenge is to propose an approach that automatically identifies check-worthy statements for fact checking, including the context around a statement. The results are analyzed by examining which features contribute most to classification and how well the approach performs. For this work, a dataset was created by consulting different fact checking organizations; it contains debates and speeches in the domain of politics, and the capability of the approach is evaluated in this domain. The approach starts by extracting sentence and context features from the sentences and then classifies the sentences based on these features. The feature set and context features were selected after several experiments, based on how well they differentiate check-worthy statements. Fact checking has received increasing attention since the 2016 United States Presidential election, so much so that many efforts have been made to develop a viable automated fact checking system.

    I introduce a web-based approach for fact checking that compares the full news text and headline with known facts such as names, locations, and places. The challenge is to develop an automated application that takes claims directly from mainstream news media websites and fact checks the news after applying the classification and fact checking components. For fact checking, a dataset was constructed that contains 2146 news articles labelled fake, non-fake and unverified. I include forty mainstream news media sources to compare the results, as well as Wikipedia for double verification. This work shows that a combination of text classification and fact checking gives a considerable contribution to the detection of fake news, while also using more general features to capture information from sentences.
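    A minimal sketch of the text-classification stage described above, assuming scikit-learn with TF-IDF features; the texts and labels below are hypothetical placeholders rather than the thesis datasets, and the fact-checking stage is not shown.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import PassiveAggressiveClassifier
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical toy data standing in for a labelled news corpus.
    texts = ["headline and body of article one ...",
             "headline and body of article two ...",
             "headline and body of article three ...",
             "headline and body of article four ..."]
    labels = ["fake", "non-fake", "fake", "non-fake"]

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.5, random_state=0, stratify=labels)

    # Compare the three classifiers named in the abstract on TF-IDF features.
    for clf in (PassiveAggressiveClassifier(max_iter=1000),
                MultinomialNB(),
                LinearSVC()):
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(X_train, y_train)
        print(type(clf).__name__, accuracy_score(y_test, model.predict(X_test)))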

    Unsupervised Recognition of Motion Verbs Metaphoricity in Atypical Political Dialogues

    This thesis deals with the unsupervised recognition of the novel metaphorical use of lexical items in dialogical, naturally occurring political texts, without recourse to task-specific hand-crafted knowledge. The metaphorical analysis focuses on the class of verbs of motion identified by Beth Levin. These lexical items are investigated in the atypical political genre of the White House Press Briefings because of their role in the communication strategies deployed in public and political discourse. The Computational White House press Briefings (CompWHoB) corpus, a large resource developed as one of the main objectives of the present work, is used to extract the press briefings that contain the lexical items under analysis. Metaphor recognition for the motion verbs is addressed with unsupervised techniques whose theoretical foundations lie primarily in the Distributional Hypothesis, i.e. word embeddings and topic models. Three algorithms are developed for the task, combining the Word2Vec and Latent Dirichlet Allocation models and building on two approaches that represent their foundational theoretical framework. The first approach, defined as "local", leverages the syntactic relation between the verb of motion and its direct object to detect metaphoricity. The second, termed "global", moves away from using syntactic knowledge as a feature of the system and relies only on information inferred from the discourse context. The three systems and their corresponding approaches are evaluated against 1220 instances of verbs of motion annotated for metaphoricity by human judges. Results show that the global approach performs poorly compared with the two models implementing the local approach, leading to the conclusion that a syntax-agnostic system is still far from reaching a significant performance. The evaluation of the local approach instead yields promising results, demonstrating the importance of endowing the machine with syntactic knowledge, as also confirmed by a qualitative analysis of the influence of the linguistic properties of metaphorical utterances.
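    As an illustration of the "local" idea only, the sketch below compares a motion verb's embedding with that of its direct object and treats low cosine similarity as a rough cue of potentially non-literal use. It assumes gensim with small pretrained GloVe vectors, deliberately omits the Word2Vec/LDA combination and the evaluation described in the abstract, and uses an arbitrary threshold.

    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")   # small pretrained word embeddings

    def metaphoricity_cue(verb, direct_object, threshold=0.35):
        """Return (similarity, flagged): low verb-object similarity is taken,
        very roughly, as a cue of potentially metaphorical use."""
        sim = float(vectors.similarity(verb, direct_object))
        return sim, sim < threshold

    print(metaphoricity_cue("run", "race"))     # literal verb-object pairing
    print(metaphoricity_cue("run", "country"))  # figurative pairing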

    Play Among Books

    How does coding change the way we think about architecture? Miro Roman and his AI Alice_ch3n81 develop a playful scenario in which they propose coding as the new literacy of information. They convey knowledge in the form of a project model that links the fields of architecture and information through two interwoven narrative strands in an “infinite flow” of real books.

    More playful user interfaces: interfaces that invite social and physical interaction

    Violent urban disturbance in England 1980-81

    This study addresses violent urban disturbances which occurred in England in the early 1980s, with particular reference to the Bristol ‘riots’ of April 1980 and the numerous disorders which followed in July 1981. Revisiting two concepts traditionally utilised to explain the spread of collective violence, namely ‘diffusion’ and ‘contagion’, it argues that the latter offers a more useful model for understanding the above-mentioned events. Diffusion used in this context implies that such disturbances are independent of each other and occur randomly. It is associated with the concept of ‘copycat riots’, which were commonly invoked by the national media as a way of explaining the spread of urban disturbances in July 1981. Contagion by contrast holds that urban disturbances are related to one another and involve a variety of communication processes and rational collective decision-making. This implies that such events can only be fully understood if they are studied in terms of their local dynamics.

    Providing the first comprehensive macro-historical analysis of the disturbances of July 1981, this thesis utilises a range of quantitative techniques to argue that the temporal and spatial spread of the unrest exhibited patterns of contagion. These mini-waves of disorder, located in several conurbations, were precipitated by major disturbances in inner-city multi-ethnic areas. This contradicts more conventional explanations which credit the national media as the sole driver of riotous behaviour.

    The thesis then proceeds to offer a micro analysis of the disturbances in Bristol in April 1980, incorporating both qualitative and quantitative techniques. Exploiting previously unexplored primary sources and recently collected oral histories from participants, it establishes detailed narratives of three related disturbances in the city. The anatomy of the individual incidents and local contagious effects are examined using spatial mapping, social network and ethnographic analyses. The results suggest that previously ignored educational, sub-cultural and ethnographic intra- and inter-community linkages were important factors in the spread of the disorders in Bristol.

    The case studies of the Bristol disorders are then used to illuminate our understanding of the processes at work during the July 1981 disturbances. It is argued that the latter events were essentially characterised by anti-police and anti-racist collective violence, which marked a momentary recomposition of working-class youth across ethnic divides.