Speech and hand transcribed retrieval
This paper describes the issues and preliminary work involved
in the creation of an information retrieval system that will
manage the retrieval from collections composed of both speech
recognised and ordinary text documents. In previous work, it
has been shown that because of recognition errors, ordinary
documents are generally retrieved in preference to recognised
ones. Means of correcting or eliminating the observed bias are
the subject of this paper. Initial ideas and some preliminary
results are presented.
Examining the contributions of automatic speech transcriptions and metadata sources for searching spontaneous conversational speech
Searching spontaneous speech can be enhanced by combining automatic speech transcriptions with semantically
related metadata. An important question is what can be expected from search of such transcriptions and different
sources of related metadata in terms of retrieval effectiveness. The Cross-Language Speech Retrieval (CL-SR) track at recent CLEF workshops provides a spontaneous speech
test collection with manual and automatically derived metadata fields. Using this collection we investigate the comparative search effectiveness of individual fields comprising automated transcriptions and the available metadata. A further important question is how transcriptions and metadata should be combined for the greatest benefit to search accuracy. We compare simple field merging of individual fields with the extended BM25 model for weighted field combination (BM25F). Results indicate that BM25F can produce improved search accuracy, but that it is currently important to set its parameters suitably using a suitable training set
Spoken content retrieval: A survey of techniques and technologies
Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight on how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
Search of spoken documents retrieves well recognized transcripts
This paper presents a series of analyses and experiments on spoken
document retrieval systems: search engines that retrieve transcripts produced by
speech recognizers. Results show that transcripts that match queries well tend to
be recognized more accurately than transcripts that match a query less well.
This result was described in past literature; however, no study or explanation of
the effect had been provided until now. This paper provides such an analysis,
showing a relationship between word error rate and query length. The paper
expands on past research by increasing the number of recognition systems that
are tested, as well as showing the effect in an operational speech retrieval
system. Potential future lines of enquiry are also described.
The relationship of word error rate to document ranking
This paper describes two experiments that examine the relationship between the Word Error Rate (WER) of
spoken documents returned by a spoken document retrieval system and their rank position. Previous work has demonstrated that
recognition errors do not significantly affect retrieval effectiveness but whether they will adversely affect
relevance judgement remains unclear. A user-based experiment measuring ability to judge relevance from
the recognised text presented in a retrieved result list was conducted. The results indicated that users were
capable of judging relevance accurately despite transcription errors. This led to an examination of the
relationship of WER in retrieved audio documents to their rank position when retrieved for a particular
query. Here it was shown that WER was somewhat lower for top-ranked documents than for
documents retrieved further down the ranking, thereby indicating a possible explanation for the success of
the user experiment.
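WER, the metric at the centre of the two abstracts above, is conventionally computed as the word-level Levenshtein edit distance between a reference transcript and a recogniser hypothesis, divided by the reference length. The following is a standard textbook computation, not the exact scoring pipeline used in either study.

```python
# Word Error Rate (WER) via Levenshtein edit distance over word tokens.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("work" -> "walk") in a four-word reference: WER = 0.25
print(wer("how viruses work inside", "how viruses walk inside"))
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is one reason the relationship between WER and rank position studied above is not a simple bounded correlation.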
Second language learning in the context of MOOCs
Massive Open Online Courses are becoming popular educational vehicles through which universities reach out to non-traditional audiences. Many enrolees hail from other countries and cultures, and struggle to cope with the English language in which these courses are invariably offered. Moreover, most such learners have a strong desire and motivation to extend their knowledge of academic English, particularly in the specific area addressed by the course. Online courses provide a compelling opportunity for domain-specific language learning. They supply a large corpus of interesting linguistic material relevant to a particular area, including supplementary images (slides), audio and video. We contend that this corpus can be automatically analysed, enriched, and transformed into a resource that learners can browse and query in order to extend their ability to understand the language used, and help them express themselves more fluently and eloquently in that domain. To illustrate this idea, an existing online corpus-based language learning tool (FLAX) is applied to a Coursera MOOC entitled Virology 1: How Viruses Work, offered by Columbia University.
Creating a data collection for evaluating rich speech retrieval
We describe the development of a test collection for the investigation of speech retrieval beyond identification of relevant content. This collection focuses on satisfying user information needs for queries associated with specific types of speech acts. The collection is based on an archive of Internet video from the video sharing platform blip.tv, and was provided by the MediaEval benchmarking initiative. A crowdsourcing approach was used to identify segments in the video data which contain speech acts, to create a description of the video containing the act, and to generate search queries designed to refind this speech act. We describe and reflect on our experiences with crowdsourcing this test collection using the Amazon Mechanical Turk platform. We highlight the challenges of constructing this dataset, including the selection of the data source, the design of the crowdsourcing task, and the specification of queries and relevant items.
Overview of the NTCIR-12 SpokenQuery&Doc-2 task
This paper presents an overview of the Spoken Query and
Spoken Document retrieval (SpokenQuery&Doc-2) task at
the NTCIR-12 Workshop. This task included spoken query
driven spoken content retrieval (SQ-SCR) and spoken query
driven spoken term detection (SQ-STD) as its two sub-tasks. The paper describes the details of each sub-task, the
data used, the creation of the speech recognition systems
used to create the transcripts, the design of the retrieval
test collections, the metrics used to evaluate the sub-tasks
and a summary of the results of submissions by the task
participants.