
    Identifying semantically similar arabic words using a large vocabulary speech recognition system

    Users search digital libraries for book references using one or more attributes such as keywords, subject and author name. Some book titles contain the keyword that the user specified, so those titles directly qualify as candidate results. Other titles, however, are relevant but do not contain the exact search keyword. A user expects to retrieve all titles that are relevant to a specified keyword. Similarly, when searching for an author name, the system should be able to retrieve the different forms of the name. The library science community developed a mechanism called authority control that allows the user to perform a comprehensive search and retrieve all records relevant to the query keyword. In this paper we propose an approach that allows the user to query an Arabic audio library by voice. We use a combination of class-based language models and robust interpretation to recognize and identify the spoken keywords. The mechanism uses a Large Vocabulary Continuous Speech Recognition (LVCSR) system to implement the functionality of the authority control system. A series of experiments were performed to assess the accuracy and robustness of the proposed approach: restricted-grammar recognition with semantic interpretation, class-based statistical language models (CB-SLM) with robust interpretation, and generalized CB-SLM. The results show that the combination of CB-SLM and robust interpretation provides better accuracy and robustness than traditional grammar-based parsing.
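    The class-based statistical language model (CB-SLM) idea referenced above is to assign probabilities through word classes (for example, author names or subject terms) rather than individual words, so that different forms of a keyword share statistics. Below is a minimal illustrative sketch of a class-based bigram model; the class labels, the OTHER fallback, and the data structures are assumptions for illustration, not the paper's actual system.

```python
# Hedged sketch of a class-based bigram language model (the general CB-SLM idea).
# Class names, example words, and counts are illustrative placeholders.
from collections import defaultdict

class ClassBasedBigramLM:
    def __init__(self, word_to_class):
        self.word_to_class = word_to_class      # e.g. maps name variants to an AUTHOR class
        self.class_bigrams = defaultdict(int)   # counts of (previous class, class)
        self.class_counts = defaultdict(int)    # counts of each previous class
        self.word_in_class = defaultdict(int)   # counts of (class, word)

    def train(self, sentences):
        for sent in sentences:
            classes = [self.word_to_class.get(w, "OTHER") for w in sent]
            prev_classes = ["<s>"] + classes[:-1]
            for w, c, prev_c in zip(sent, classes, prev_classes):
                self.class_bigrams[(prev_c, c)] += 1
                self.class_counts[prev_c] += 1
                self.word_in_class[(c, w)] += 1

    def prob(self, prev_word, word):
        # P(word | prev_word) ~= P(class(word) | class(prev_word)) * P(word | class(word))
        c = self.word_to_class.get(word, "OTHER")
        prev_c = self.word_to_class.get(prev_word, "OTHER")
        p_class = self.class_bigrams[(prev_c, c)] / max(self.class_counts[prev_c], 1)
        class_total = sum(n for (cc, _), n in self.word_in_class.items() if cc == c)
        p_word = self.word_in_class[(c, word)] / max(class_total, 1)
        return p_class * p_word
```

    In a real authority-control setting, P(word | class) would be backed by the authority file itself rather than estimated only from training counts, so that unseen variants of an author name still receive probability mass.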

    Using term clouds to represent segment-level semantic content of podcasts

    Spoken audio, like any time-continuous medium, is notoriously difficult to browse or skim without the support of an interface providing semantically annotated jump points that signal the user where to listen in. Creation of time-aligned metadata by human annotators is prohibitively expensive, motivating the investigation of representations of segment-level semantic content based on transcripts generated by automatic speech recognition (ASR). This paper examines the feasibility of using term clouds to provide users with a structured representation of the semantic content of podcast episodes. Podcast episodes are visualized as a series of sub-episode segments, each represented by a term cloud derived from an ASR-generated transcript. The quality of segment-level term clouds is measured quantitatively, and their utility is investigated in a small-scale user study based on human-labeled segment boundaries. Since the segment-level clouds generated from ASR transcripts prove useful, we examine an adaptation of text tiling techniques to speech in order to generate segments as part of a completely automated indexing and structuring system for browsing spoken audio. Results demonstrate that the generated segments are comparable with human-selected segment boundaries.
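    As a rough illustration of how a segment-level term cloud can be derived from an ASR transcript, the sketch below ranks the terms of each segment by TF-IDF weight across segments; the tokenizer, stop-word list, and weighting scheme are assumptions and do not reproduce the authors' actual pipeline.

```python
# Hedged sketch: per-segment term clouds from ASR transcripts via TF-IDF ranking.
import math
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "it", "for"}

def tokenize(text):
    return [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOPWORDS]

def segment_term_clouds(segments, cloud_size=10):
    """segments: one ASR transcript string per sub-episode segment.
    Returns, for each segment, its top terms with TF-IDF weights
    (term-cloud font sizes could be scaled from these weights)."""
    tokenized = [tokenize(seg) for seg in segments]
    n = len(tokenized)
    df = Counter(term for toks in tokenized for term in set(toks))
    clouds = []
    for toks in tokenized:
        tf = Counter(toks)
        weights = {t: (cnt / max(len(toks), 1)) * math.log(n / df[t])
                   for t, cnt in tf.items()}
        clouds.append(sorted(weights.items(), key=lambda kv: -kv[1])[:cloud_size])
    return clouds
```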

    Detection of semantic errors in Arabic texts

    Detecting semantic errors in a text is still a challenging area of investigation. A lot of research has been done on lexical and syntactic errors, while fewer studies have tackled semantic errors, as they are more difficult to treat. Compared to other languages, Arabic poses a special challenge for this problem: because words are graphically very similar to each other, the risk of semantic errors in Arabic texts is greater. Moreover, there are special cases and unique complexities for this language. This paper deals with the detection of semantic errors in Arabic texts, but the approach we have adopted can also be applied to texts in other languages. It combines four contextual methods (using statistics and linguistic information) to decide on the semantic validity of a word in a sentence. We chose to implement our approach on a distributed architecture, namely a Multi-Agent System (MAS). The implemented system achieved a precision of about 90% and a recall of about 83%.
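    As a rough illustration of how several contextual methods might be combined into one validity decision, and how the quoted precision and recall figures would be computed, the sketch below uses a simple majority vote over per-method scores; the method interface, threshold, and voting rule are hypothetical and are not taken from the paper.

```python
# Hedged sketch: combining contextual validity scores by majority vote, plus the
# standard precision/recall computation. Method names and thresholds are illustrative.
def is_semantically_valid(word, sentence, methods, threshold=0.5):
    """methods: callables returning a score in [0, 1] for the word in context
    (e.g. co-occurrence statistics, collocations, syntactic-semantic constraints,
    distributional similarity). The word is flagged as an error if a majority
    of methods score it below the threshold."""
    rejections = sum(1 for m in methods if m(word, sentence) < threshold)
    return rejections < len(methods) / 2  # valid unless most methods reject it

def precision_recall(flagged, true_errors):
    """flagged, true_errors: sets of (sentence_id, word_position) pairs."""
    tp = len(flagged & true_errors)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(true_errors) if true_errors else 0.0
    return precision, recall
```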

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will cause a lack of annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we make an attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.