
    Features For Automated Tongue Image Shape Classification

    Inspection of the tongue is a key component of Traditional Chinese Medicine. Chinese medical practitioners diagnose the health status of a patient based on observation of the color, shape, and texture characteristics of the tongue. The condition of the tongue can objectively reflect the presence of certain diseases and aid in the differentiation of syndromes, the prognosis of disease, and the establishment of treatment methods. Tongue shape is a very important feature in tongue diagnosis: a tongue shape other than an ellipse can indicate the presence of certain pathologies. In this paper, we propose a novel set of features, based on shape geometry and polynomial equations, for automated recognition and classification of the shape of a tongue image using supervised machine learning techniques. We also present a novel method to correct the orientation/deflection of the tongue based on axis-of-symmetry detection. Experimental results obtained on a set of 303 tongue images demonstrate that the proposed method improves on the current state-of-the-art method. © 2012 IEEE
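    The geometric part of such a shape feature set can be sketched in a few lines. The descriptors below (area, aspect ratio, circularity) are illustrative assumptions chosen for this sketch, not the paper's actual feature set:

```python
import math

def shape_features(contour):
    """Geometric descriptors of a closed contour given as (x, y) points.

    Hypothetical feature set: area (shoelace formula), bounding-box
    aspect ratio, and circularity (1.0 for a circle, lower when the
    shape deviates from a circle/ellipse).
    """
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    n = len(contour)
    # Shoelace formula for the enclosed area of the polygonal contour
    area = 0.5 * abs(sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i]
                         for i in range(n)))
    # Perimeter as the sum of successive point-to-point distances
    perimeter = sum(math.dist(contour[i], contour[(i + 1) % n])
                    for i in range(n))
    aspect_ratio = (max(xs) - min(xs)) / (max(ys) - min(ys))
    circularity = 4 * math.pi * area / perimeter ** 2
    return {"area": area, "aspect_ratio": aspect_ratio,
            "circularity": circularity}

# Sanity check on a synthetic circular contour: circularity ~ 1
t = [2 * math.pi * k / 400 for k in range(400)]
circle = [(math.cos(a), math.sin(a)) for a in t]
feats = shape_features(circle)
```

    In practice the descriptor vector would be computed from the segmented tongue contour (after the deflection correction) and fed to a supervised classifier.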

    Computer-aided tongue image diagnosis and analysis

    Title from PDF of title page (University of Missouri--Columbia, viewed on May 14, 2013). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. Ye Duan. Includes bibliographical references. Vita. Ph.D., University of Missouri--Columbia, 2012. "May 2012." This work focuses on computer-aided tongue image analysis, specifically as it relates to Traditional Chinese Medicine (TCM). Computerized tongue diagnosis helps medical practitioners capture quantitative features to improve the reliability and consistency of diagnosis. A complete computer-aided tongue analysis framework consists of tongue detection, tongue segmentation, tongue feature extraction, and tongue classification and analysis, all of which are included in our work. We propose a new hybrid image segmentation algorithm that integrates the region-based method with the boundary-based method. We apply this segmentation algorithm in designing an automatic tongue detection and segmentation framework. We also develop a novel color-space-based feature set for tongue feature extraction to implement an automated ZHENG (TCM syndrome) classification system using machine learning techniques. To further enhance the performance of our classification system, we propose preprocessing the tongue images using the Modified Specular-free technique prior to feature extraction, and explore the extraction of geometry features from the petechiae. Lastly, we propose a new feature set for automated tongue shape classification.
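    A color-space-based feature set of this kind is often built from per-channel statistics over the segmented tongue region. The sketch below, using HSV means and standard deviations, is a minimal illustration under assumed channel and statistic choices, not the thesis's actual feature set:

```python
import colorsys

def color_features(pixels):
    """Summarize a list of (r, g, b) pixels (0-255) as a 6-dimensional
    feature vector: mean and standard deviation of each HSV channel.

    Hypothetical sketch of a color-space feature set for tongue
    classification; real systems would use the segmented region only.
    """
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in pixels]
    feats = []
    for ch in range(3):                      # H, S, V in turn
        vals = [p[ch] for p in hsv]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats.extend([mean, var ** 0.5])     # [mean, std] per channel
    return feats  # [h_mean, h_std, s_mean, s_std, v_mean, v_std]

# A few reddish sample pixels, as might come from a tongue region
pixels = [(200, 80, 90), (210, 90, 100), (190, 70, 85)]
vec = color_features(pixels)
```

    The resulting vector would then be passed, together with geometry features, to the ZHENG classifier.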

    Visualization Techniques for Tongue Analysis in Traditional Chinese Medicine

    Visual inspection of the tongue has been an important diagnostic method of Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various visceral cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but the current practice in TCM is mainly experience-based, and the quality of the visual inspection varies between individuals. The computerized inspection method provides quantitative models to evaluate color, texture, and surface features of the tongue. In this paper, we investigate visualization techniques and processes to allow interactive data analysis, with the aim of merging computerized measurements with human experts' diagnostic variables based on five-scale diagnostic conditions: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P), and Colon Cancer (C).

    Spotting Agreement and Disagreement: A Survey of Nonverbal Audiovisual Cues and Tools

    While detecting and interpreting temporal patterns of non-verbal behavioral cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. Nevertheless, it is an important one to achieve if the goal is to realise naturalistic communication between humans and machines. Machines that are able to sense social attitudes like agreement and disagreement and respond to them in a meaningful way are likely to be welcomed by users due to the more natural, efficient and human-centered interaction they are bound to experience. This paper surveys the nonverbal cues that could be present during agreement and disagreement behavioural displays and lists a number of tools that could be useful in detecting them, as well as a few publicly available databases that could be used to train these tools for analysis of spontaneous, audiovisual instances of agreement and disagreement.

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
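    The core building block of the convolutional networks this survey covers is the discrete 2-D convolution. A minimal sketch of the operation (valid-mode cross-correlation, which is what most deep learning frameworks actually compute under the name "convolution"):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a 2-D image with a kernel.

    At every position, the kernel is laid over the image and the
    elementwise products are summed -- the basic operation a CNN layer
    applies (before adding a bias and a nonlinearity).
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1          # output height
    ow = len(image[0]) - kw + 1       # output width
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)]
            for i in range(oh)]

# A 1x2 edge filter responds only at the 0 -> 1 intensity step,
# illustrating how learned kernels detect local image structure.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edges = conv2d(image, [[-1, 1]])      # each output row: [0, 1, 0]
```

    Real medical-imaging CNNs stack many such layers with learned multi-channel kernels, but the sliding-window computation is the same.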

    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and usage of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax, and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions related to ASLR can be either mouthings, i.e. visual syllables made with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and the relevant techniques that exist within ASLR. Since limited research has been conducted regarding automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potentials of automatic analysis of mouth motion in the context of ASLR.

    Hyperspectral Imaging Technology Used in Tongue Diagnosis
