
    Deriving and Exploiting Situational Information in Speech: Investigations in a Simulated Search and Rescue Scenario

    The need for automatic recognition and understanding of speech is emerging in tasks involving the processing of large volumes of natural conversations. In application domains such as Search and Rescue, exploiting automated systems for extracting mission-critical information from speech communications has the potential to make a real difference. Spoken language understanding has commonly been approached by identifying units of meaning (such as sentences, named entities, and dialogue acts) to provide a basis for further discourse analysis. However, this fine-grained identification of fundamental units of meaning is sensitive to high error rates in the automatic transcription of noisy speech. This thesis demonstrates that topic segmentation and identification techniques can be employed for information extraction from spoken conversations because they are robust to such errors. Two novel topic-based approaches are presented for extracting situational information within the search and rescue context. The first approach shows that identifying the changes in the context and content of first responders' reports over time can provide an estimation of their location. The second approach presents a speech-based topological map estimation technique that is inspired, in part, by automatic mapping algorithms commonly used in robotics. The proposed approaches are evaluated on a goal-oriented conversational speech corpus, which has been designed and collected based on an abstract communication model between a first responder and a task leader during a search process. Results confirm that a highly imperfect transcription of noisy speech has limited impact on information extraction performance compared with that obtained on the transcription of clean speech data. This thesis also shows that speech recognition accuracy can benefit from rescoring its initial transcription hypotheses based on the derived high-level location information.
A new two-pass speech decoding architecture is presented. In this architecture, the location estimate from a first decoding pass is used to dynamically adapt a general language model, which is then used to rescore the initial recognition hypotheses. This decoding strategy yields a statistically significant gain in the recognition accuracy of spoken conversations in high background noise. It is concluded that the techniques developed in this thesis can be extended to other application domains that deal with large volumes of natural spoken conversations.
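The two-pass idea described above can be illustrated with a toy sketch: a first-pass n-best list is re-ranked under a general language model interpolated with a location-conditioned model. All names, the unigram models, and the interpolation weight below are illustrative assumptions for exposition, not the thesis's actual implementation (which adapts a full language model within a speech decoder).

```python
import math

# Toy general unigram LM used in the first decoding pass (illustrative values).
GENERAL_LM = {"we": 0.2, "are": 0.2, "near": 0.1, "the": 0.2,
              "stairwell": 0.05, "stairway": 0.05, "exit": 0.2}

# Hypothetical location-conditioned unigram LMs; in the thesis, location
# information is derived from topic segmentation/identification.
LOCATION_LMS = {
    "stairwell": {"stairwell": 0.4, "exit": 0.1},
}

def score(hyp, lm, backoff=1e-4):
    """Log-probability of a word sequence under a unigram LM."""
    return sum(math.log(lm.get(w, backoff)) for w in hyp.split())

def adapted_lm(location, lam=0.5):
    """Interpolate the general LM with a location LM (lambda is a tuning assumption)."""
    topic = LOCATION_LMS.get(location, {})
    vocab = set(GENERAL_LM) | set(topic)
    return {w: lam * GENERAL_LM.get(w, 0.0) + (1 - lam) * topic.get(w, 0.0)
            for w in vocab}

def rescore(nbest, location):
    """Second pass: re-rank first-pass hypotheses with the adapted LM."""
    lm = adapted_lm(location)
    return max(nbest, key=lambda h: score(h, lm))

# Under the general LM alone, the two hypotheses tie; the location estimate
# from the first pass breaks the tie in favour of the context-consistent word.
nbest = ["we are near the stairway", "we are near the stairwell"]
print(rescore(nbest, "stairwell"))  # -> "we are near the stairwell"
```

The sketch uses unigram interpolation only to make the rescoring mechanics concrete; a real decoder would adapt an n-gram model and rescore lattices or full n-best lists.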

    How we do things with words: Analyzing text as social and cultural data

    In this article we describe our experiences with computational text analysis. We hope to achieve three primary goals. First, we aim to shed light on thorny issues not always at the forefront of discussions about computational text analysis methods. Second, we hope to provide a set of best practices for working with thick social and cultural concepts. Our guidance is based on our own experiences and is therefore inherently imperfect. Still, given our diversity of disciplinary backgrounds and research practices, we hope to capture a range of ideas and identify commonalities that will resonate with many. And this leads to our final goal: to help promote interdisciplinary collaborations. Interdisciplinary insights and partnerships are essential for realizing the full potential of any computational text analysis that involves social and cultural concepts, and the more we are able to bridge these divides, the more fruitful we believe our work will be.

    Blurring the Line Between Human and Machine: Marketing Artificial Intelligence

    One of the most prominent and potentially transformative trends in society today is machines becoming more human-like, driven by progress in artificial intelligence. How this trend will impact individuals, private and public organizations, and society as a whole is still unknown, and depends largely on how individual consumers choose to adopt and use these technologies. This dissertation focuses on understanding how consumers perceive, adopt, and use technologies that blur the line between human and machine, with two primary goals. First, I build on psychological and philosophical theories of mind perception, anthropomorphism, and dehumanization, and on management research into technology adoption, in order to develop a theoretical understanding of the forces that shape consumer adoption of these technologies. Second, I develop practical marketing interventions that can be used to influence patterns of adoption according to the desired outcome. This dissertation is organized as follows. Essay 1 develops a conceptual framework for understanding what AI is, what it can do, and some of the key antecedents and consequences of its adoption. The subsequent two Essays test various parts of this framework. Essay 2 explores consumers’ willingness to use algorithms to perform tasks normally done by humans, focusing specifically on how the nature of the task for which algorithms are used and the human-likeness of the algorithm itself impact consumers’ use of the algorithm. Essay 3 focuses on the use of social robots in consumption contexts, specifically addressing the role of robots’ physical and mental human-likeness in shaping consumers’ comfort with and perceived usefulness of such robots. Together, these three Essays offer an empirically supported conceptual structure for marketing researchers and practitioners to understand artificial intelligence and influence the processes through which consumers perceive and adopt it.
Artificial intelligence has the potential to create enormous value for consumers, firms, and society, but also poses many profound challenges and risks. A better understanding of how this transformative technology is perceived and used can potentially help to maximize its potential value and minimize its risks.

    The role of tacit knowledge in knowledge intensive project management

    The traditional doctrine of project management, having evolved from operations management, has been dominated by a rationalist approach in terms of planning and control. There is increasing criticism that this prescriptive approach is deficient for the management of dynamically complex projects, a common characteristic of modern-day projects. In response to this and the relative lack of scholarly literature, this study uses an emergent grounded theory design to discover and understand the softer, intangible aspects of project management. With primary data collected from twenty semi-structured personal interviews, this study explores the lived experiences of project practitioners and how they ‘muddle through’ the complex social setting of a knowledge intensive financial services organisation. The model which evolved from the research portrays the project practitioner as being exposed to multiple cues, with multiple meanings, around five causal themes: environmental, organisational, nature of the task, role and knowledge capability. In response to these cues, the practitioner reflects upon their emotions and past experiences in order to make sense of the uncertain situation and determine their necessary course of action. As a coping strategy, the project practitioner takes on the role of bricoleur, making do with combinations of the resources at hand in order to facilitate the successful delivery of their projects.

    The Spoken British National Corpus 2014: design, compilation and analysis

    The ESRC-funded Centre for Corpus Approaches to Social Science at Lancaster University (CASS) and the English Language Teaching group at Cambridge University Press (CUP) have compiled a new, publicly-accessible corpus of spoken British English from the 2010s, known as the Spoken British National Corpus 2014 (Spoken BNC2014). The 11.5 million-word corpus, gathered solely in informal contexts, is the first freely-accessible corpus of its kind since the spoken component of the original British National Corpus (the Spoken BNC1994), which, despite its age, is still used as a proxy for present-day English in research today. This thesis presents a detailed account of each stage of the Spoken BNC2014’s construction, including its conception, design, transcription, processing and dissemination. It also demonstrates the research potential of the corpus by presenting a diachronic analysis of ‘bad language’ in spoken British English, comparing the 1990s to the 2010s. The thesis shows how the research team struck a delicate balance between backwards compatibility with the Spoken BNC1994 and optimal practice in the context of compiling a new corpus. Although comparable with its predecessor, the Spoken BNC2014 is shown to represent innovation in approaches to the compilation of spoken corpora. This thesis makes several useful contributions to the linguistic research community. The Spoken BNC2014 itself should be of use to many researchers, educators and students in the corpus linguistics and English language communities and beyond. In addition, the thesis represents an example of good practice with regards to academic collaboration with a commercial stakeholder. Finally, although not a ‘user guide’, the methodological discussions and analysis presented in this thesis are intended to help the Spoken BNC2014 to be as useful to as many people, and for as many purposes, as possible.