314 research outputs found

    What Is Historic Integrity Worth to the General Public? Evidence from a Proposed Relocation of a West Virginia Agricultural Mill

    While historians believe that preserving a historic building in its original location is important to maintaining its historic integrity, the general public's opinion is unknown. Survey data were gathered from local residents regarding a proposed relocation of a historic mill in rural West Virginia. Only a minority of the sample population supported preserving the mill at its original location. Willingness to pay for preservation was estimated at $8.45 for a one-time donation for the sample and $2.29 after adjusting for non-respondents using characteristics of the local population. Keywords: contingent valuation, historic preservation, Tobit model, willingness to pay, Demand and Price Analysis, Resource/Energy Economics and Policy

    Soundbite Detection in Broadcast News Domain

    In this paper, we present the results of a study designed to identify SOUNDBITES in Broadcast News. We describe a Conditional Random Field-based model for detecting these included speech segments uttered by individuals who are interviewed or who are the subject of a news story. Our goal is to identify direct quotations in spoken corpora that can be attributed to particular individuals, and to associate these soundbites with their speakers. We frame soundbite detection as a binary classification problem in which each turn is categorized as either a soundbite or not. We use lexical, acoustic/prosodic, and structural features at the turn level to train a CRF. In a 10-fold cross-validation experiment, we obtained an accuracy of 67.4% and an F-measure of 0.566, which are 20.9% and 38.6% higher than a chance baseline, respectively. Index Terms: soundbite detection, speaker roles, speech summarization, information extraction
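    The turn-level binary framing above reduces to standard classification scoring. A minimal sketch of how accuracy and positive-class F-measure are computed over turn labels (the label sequences below are hypothetical, not from the paper's corpus):

    ```python
    # Sketch of scoring turn-level binary soundbite classification.
    # Each turn is labeled 1 (soundbite) or 0 (anchor/reporter speech).

    def f_measure(gold, pred):
        """Precision/recall/F1 for the positive (soundbite) class."""
        tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
        fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
        fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    gold = [1, 0, 0, 1, 1, 0]   # hypothetical reference labels per turn
    pred = [1, 0, 1, 1, 0, 0]   # hypothetical classifier output
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    ```

    Because soundbites are a minority class, F-measure on the positive class is the more informative of the two numbers, which is why the abstract reports both.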

    Summarizing Speech Without Text Using Hidden Markov Models

    We present a method for summarizing speech documents without using any type of transcript or text, in a Hidden Markov Model framework. The hidden variables, or states, in the model represent whether a sentence is to be included in a summary or not, and the acoustic/prosodic features are the observation vectors. The model predicts the optimal sequence of segments that best summarizes the document. We evaluate our method by comparing the predicted summary with one generated by a human summarizer. Our results indicate that we can generate 'good' summaries even when using only acoustic/prosodic information, which points toward the possibility of text-independent summarization for spoken documents.
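    The include/exclude decoding described above is standard Viterbi decoding over a two-state HMM. A minimal sketch under assumed parameters — the transition and emission probabilities, and the quantized prosody symbols, are hypothetical placeholders, not the paper's trained model:

    ```python
    import math

    # Two hidden states: "S" (sentence is in the summary) and "N" (it is not).
    # Observations are quantized acoustic/prosodic symbols per sentence.
    # All probabilities below are made-up illustrations.
    states = ("S", "N")
    start = {"S": 0.3, "N": 0.7}
    trans = {("S", "S"): 0.4, ("S", "N"): 0.6,
             ("N", "S"): 0.3, ("N", "N"): 0.7}
    emit = {"S": {"high_pitch": 0.6, "long_pause": 0.3, "flat": 0.1},
            "N": {"high_pitch": 0.2, "long_pause": 0.2, "flat": 0.6}}

    def viterbi(obs):
        """Most likely include/exclude sequence for a spoken document."""
        V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
        back = []
        for o in obs[1:]:
            col, ptr = {}, {}
            for s in states:
                best = max(states, key=lambda p: V[-1][p] + math.log(trans[(p, s)]))
                col[s] = V[-1][best] + math.log(trans[(best, s)]) + math.log(emit[s][o])
                ptr[s] = best
            V.append(col)
            back.append(ptr)
        last = max(states, key=lambda s: V[-1][s])
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))
    ```

    The sentences decoded into state "S" form the extractive summary; working in log space avoids underflow on long documents.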

    A Phrase-Level Machine Translation Approach For Disfluency Detection Using Weighted Finite State Transducers

    We propose a novel algorithm to detect disfluency in speech by reformulating the problem as phrase-level statistical machine translation using weighted finite state transducers. We approach the task as translation of noisy speech to clean speech. We simplify our translation framework such that it does not require fertility and alignment models. We tested our model on the Switchboard disfluency-annotated corpus. Using an optimized decoder that is developed for phrase-based translation at IBM, we are able to detect repeats, repairs and filled pauses for more than a thousand sentences in less than a second with encouraging results. Index Terms: disfluency detection, machine translation, speech-to-speech translation
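    The "translate noisy speech to clean speech" framing can be illustrated without the WFST machinery. The sketch below is not the paper's phrase-based decoder; it is a rule-based approximation that removes two of the same phenomena, filled pauses and immediate repeats (the filled-pause inventory is an assumption):

    ```python
    # Rule-based stand-in for noisy-to-clean "translation": strip filled
    # pauses and immediate token repeats from a disfluent token sequence.
    FILLED_PAUSES = {"uh", "um", "er"}  # hypothetical, corpus-dependent set

    def clean(tokens):
        out = []
        for tok in tokens:
            if tok in FILLED_PAUSES:
                continue              # drop filled pauses
            if out and out[-1] == tok:
                continue              # drop immediate repeats
            out.append(tok)
        return out
    ```

    A real phrase-level system generalizes this to multi-word repeats and repairs by scoring candidate clean phrases against the noisy input, which is what the WFST composition and the phrase-based decoder provide.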

    Improved Name-Recognition with Meta-data Dependent Name Networks

    A transcription system that requires accurate general name transcription faces the problem of covering the large number of names it may encounter. Without any prior knowledge, this requires a large increase in the size and complexity of the system due to the expansion of the lexicon. Furthermore, this increase will adversely affect system performance due to the increased confusability. Here we propose a method that uses meta-data available at runtime to ensure better name coverage without significantly increasing system complexity. We tested this approach on a voicemail transcription task, assuming meta-data to be available in the form of a caller ID string (as it would appear on a caller-ID-enabled phone) and the name of the mailbox owner. Networks representing possible spoken realizations of those names are generated at runtime and included in the decoder network. The decoder network is built at training time using a class-dependent language model, with caller and mailbox name instances modeled as class tokens. The class tokens are replaced at test time with the name networks built from the meta-data. The proposed algorithm showed a 22.1% reduction in the error rate of name tokens.
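    The runtime step — expanding a caller ID string into spoken name variants and splicing them in at a class token — can be sketched as follows. The variant heuristics and the `@CALLER@` token name are illustrative assumptions, not the system's actual rules:

    ```python
    # Sketch of runtime name-network generation from caller-ID meta-data.
    # The real system compiles variants into the decoder's network; here we
    # just enumerate plausible spoken realizations of a name string.

    def name_variants(caller_id):
        """Expand e.g. 'SMITH JOHN' into spoken forms (hypothetical heuristics)."""
        parts = [p.capitalize() for p in caller_id.split()]
        if len(parts) < 2:
            return [" ".join(parts)]
        last, first = parts[0], parts[1]   # caller-ID strings are often LAST FIRST
        return [f"{first} {last}", first, last, f"{first[0]}. {last}"]

    def expand_class_tokens(lm_tokens, caller_id):
        """Replace a @CALLER@ class token with its name network (variant list)."""
        return [name_variants(caller_id) if t == "@CALLER@" else [t]
                for t in lm_tokens]
    ```

    Because the class-dependent language model fixes where name tokens may occur, only the alternatives at those positions change at test time; the rest of the decoder network is untouched.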

    From Text to Speech Summarization

    In this paper, we present approaches used in text summarization, showing how they can be adapted for speech summarization and where they fall short. Informal style and the apparent lack of structure in speech mean that the typical approaches used for text summarization must be extended for use with speech. We illustrate how features derived from speech can help determine summary content within two ongoing summarization projects at Columbia University.

    Use of Exploratory Factor Analysis in Maritime Research

    The purpose of this paper is to discuss the approaches undertaken when applying exploratory factor analysis (EFA) in maritime journals to attain a factor solution that fulfils the criteria of EFA, achieves the research objectives, and is easy to interpret. To achieve this aim, published articles across maritime journals are examined to discuss the use of EFA. This is followed by an example of EFA using an empirical data set, emphasising the decisions that can be made about whether to retain or drop an item from the analysis to attain an interpretable factor solution. The results of this study demonstrate that the majority of maritime studies employing EFA retain a factor solution based on the researchers' subjective judgement. However, the researchers do not provide sufficient information to allow readers to evaluate the analysis: the majority of the reviewed papers fail to report how the final factor structure was acquired, and some fail to justify their decisions, for example for deleting an item or for retaining factors with a single measured variable. The first contribution of this study is an analysis of how studies carried out in the maritime sector have applied EFA. The second is to provide future researchers aiming to use EFA for the first time with an example of a complete EFA process, explaining the different steps that can be undertaken while carrying out EFA.
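    One objective factor-retention rule that such reporting should make explicit is the Kaiser criterion: retain factors whose eigenvalue of the item correlation matrix exceeds 1. A minimal sketch on a synthetic correlation matrix (the matrix below is made up, not from any maritime study):

    ```python
    import numpy as np

    # Hypothetical correlation matrix for four survey items: items 1 and 2
    # are strongly related (r = 0.8); items 3 and 4 are nearly independent.
    R = np.array([[1.0, 0.8, 0.1, 0.0],
                  [0.8, 1.0, 0.1, 0.0],
                  [0.1, 0.1, 1.0, 0.1],
                  [0.0, 0.0, 0.1, 1.0]])

    eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues, descending
    n_retain = int((eigvals > 1.0).sum())   # Kaiser criterion: eigenvalue > 1
    ```

    Reporting the full eigenvalue sequence (or a scree plot) alongside the retained count is exactly the kind of information the review finds missing; it lets readers check the retention decision rather than take it on the authors' judgement.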