2 research outputs found

    FPGA-based implementation of speech recognition for robocar control using MFCC

    This research proposes a simulation of the logic design for FPGA-based speech recognition using MFCC (Mel Frequency Cepstral Coefficients) and Euclidean distance to control the motion of a robotic car. The recognized speech is used as a command to operate the robotic car. MFCC is used for the feature extraction stage, while Euclidean distance is applied in the feature classification stage for each utterance; the result is then forwarded to a decision stage that issues the control logic for the robot's motors. Testing showed that the designed logic is accurate, as verified by measuring the mel frequency warping and power cepstrum stages. The logic design is validated by comparing the Matlab computation against the Xilinx simulation, which makes it straightforward for researchers to continue with its implementation on FPGA hardware.
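    A minimal software sketch of the pipeline the abstract describes: MFCC feature extraction (framing, mel frequency warping, log compression, DCT) followed by Euclidean-distance matching against stored command templates. This is not the authors' fixed-point FPGA logic; the sampling rate, frame sizes, filter count and the template dictionary are illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters spaced evenly on the mel scale (the "mel frequency warping" step).
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb


def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Frame the signal, apply a Hamming window, and take the power spectrum.
    frames = [signal[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies -> log compression ("power cepstrum") -> DCT.
    fb_energies = np.maximum(power @ mel_filterbank(n_filters, n_fft, sample_rate).T, 1e-10)
    return dct(np.log(fb_energies), type=2, axis=1, norm='ortho')[:, :n_ceps]


def classify(features, templates):
    # Euclidean distance between the utterance's mean MFCC vector and each stored
    # command template (a hypothetical dict of command name -> 13-dim vector);
    # the closest template wins and is forwarded to the decision stage.
    v = features.mean(axis=0)
    return min(templates, key=lambda cmd: np.linalg.norm(v - templates[cmd]))
```

    In the paper itself the equivalent arithmetic is realised as logic simulated in Xilinx and cross-checked against a Matlab reference; the sketch above only mirrors the floating-point computation.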

    Multimodal Interaction in Electronic Customer Loyalty Management Systems: An Empirical Investigation

    This thesis investigates the application of multimodal metaphors in electronic Customer Loyalty Management Systems (e-CLMS) in terms of efficiency, effectiveness, user satisfaction, and understandability of the customisation tasks and information communicated. The potential for users to develop loyalty as a result of better usability and user satisfaction is also assessed via questionnaires. The first experiment investigated usability issues and the users' views of an e-commerce platform developed for these experiments, using three conditions with three independent groups: a visual group (VICLMS, n=25) in which information within the platform was communicated using text with graphics, a multimodal group (MICLMS, n=25) that used recorded speech, earcons and auditory icons, and an expressive avatars group (AICLMS, n=25) in which information was predominantly communicated using avatars. The second experiment evaluated three avatar-based multimodal conditions using a dependent group (n=50), assessing user satisfaction, perceived convenience, enjoyment, ease of use and customisation, and successful completion of user tasks. The conditions were avatars with earcons (AEICLMS), avatars with auditory icons (AAICLMS) and avatars with both earcons and auditory icons (AICLMS). The use of expressive avatars in the e-CLMS interface contributed to a positive predisposition of users to develop loyalty, and multimodal metaphors contributed more significantly to complex customisation tasks. A set of empirically derived guidelines and a validation approach are suggested for designing multimodal e-CLMS interfaces.