
    Deep Learning Body Region Classification of MRI and CT examinations

    Standardized body region labelling of individual images provides data that can improve human and computer use of medical images. A CNN-based classifier was developed to identify body regions in CT and MRI. 17 CT (18 MRI) body regions covering the entire human body were defined for the classification task. Three retrospective databases were built for AI model training, validation, and testing, with a balanced distribution of studies per body region. The test databases originated from a different healthcare network. Accuracy, recall, and precision of the classifier were evaluated with respect to patient age, patient gender, institution, scanner manufacturer, contrast, slice thickness, MRI sequence, and CT kernel. The data included a retrospective cohort of 2,934 anonymized CT cases (training: 1,804 studies, validation: 602 studies, test: 528 studies) and 3,185 anonymized MRI cases (training: 1,911 studies, validation: 636 studies, test: 638 studies). 27 institutions, spanning primary care hospitals, community hospitals, and imaging centers, contributed to the test datasets. The data included cases of all genders in equal proportions and subjects aged from a few months to over 90 years. An image-level prediction accuracy of 91.9% (90.2-92.1) for CT and 94.2% (92.0-95.6) for MRI was achieved. The classification results were robust across all body regions and confounding factors. Due to limited data, performance for subjects under 10 years old could not be reliably evaluated. We show that deep learning models can classify CT and MRI images by body region, including lower and upper extremities, with high accuracy. Comment: 21 pages, 2 figures, 4 tables
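
    The abstract is summary-level, so the snippet below is only a minimal sketch of the kind of image-level body-region classifier it describes: a standard CNN backbone fine-tuned on slices grouped into region labels. The backbone, transforms, directory layout, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch of an image-level body-region classifier (not the paper's exact setup).
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

NUM_REGIONS = 17  # CT body regions per the abstract (18 for MRI)

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # CT/MRI slices are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical directory layout: one subfolder per body-region label.
train_set = ImageFolder("data/ct_train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)                       # assumed backbone
model.fc = nn.Linear(model.fc.in_features, NUM_REGIONS)     # region classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```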

    Classification of Humans into Ayurvedic Prakruti Types using Computer Vision

    Ayurveda, a 5,000-year-old Indian medical science, holds that the universe, and hence humans, are made up of five elements: ether, fire, water, earth, and air. The three Doshas (Tridosha) Vata, Pitta, and Kapha originate from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to their 'Prakruti'. Prakruti governs the physiological and psychological tendencies of all living beings as well as the way they interact with the environment. This balance influences physiological features such as the texture and colour of skin, hair, and eyes, the length of the fingers, the shape of the palm, body frame, and strength of digestion, as well as psychological features such as a person's nature (introverted, extroverted, calm, excitable, intense, laid-back) and their reaction to stress and disease. All these features are coded in a person's constitution at the time of their creation and do not change throughout their lifetime. Ayurvedic doctors analyze a person's Prakruti either by assessing physical features manually and/or by examining the nature of the heartbeat (pulse). Based on this analysis, they diagnose, prevent, and cure disease in patients by prescribing precision medicine. This project focuses on identifying a person's Prakruti by analysing facial features such as hair, eyes, nose, lips, and skin colour using facial recognition techniques from computer vision. This is the first research of its kind in this problem area and attempts to bring image processing into the domain of Ayurveda.
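
    The abstract does not detail its pipeline, so the sketch below only illustrates one plausible setup under stated assumptions: detect the face, extract crude colour and shape features, and feed them to an off-the-shelf classifier over the three Dosha-dominant Prakruti types. The feature choices, file paths, and variables are hypothetical.

```python
# Illustrative Prakruti-classification sketch with handcrafted face features (assumed pipeline).
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(image_path):
    """Detect the largest face and return crude colour/shape features."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = img[y:y + h, x:x + w]
    mean_bgr = face.reshape(-1, 3).mean(axis=0)   # average skin colour
    aspect = w / h                                # rough face-shape cue
    return np.concatenate([mean_bgr, [aspect]])

# Hypothetical labelled data: image paths plus expert-assigned Prakruti labels.
# X = np.stack([face_features(p) for p in image_paths])
# clf = RandomForestClassifier().fit(X, prakruti_labels)  # Vata / Pitta / Kapha
```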

    Fashion Conversation Data on Instagram

    The fashion industry is establishing its presence on a number of visual-centric social media platforms such as Instagram. This creates an interesting clash, as fashion brands that have traditionally practiced highly creative and editorialized image marketing now have to engage with people on a platform that epitomizes impromptu, real-time conversation. What kinds of fashion images do brands and individuals share, and what types of visual features attract likes and comments? In this research, we take both quantitative and qualitative approaches to answer these questions. We analyze visual features of fashion posts, first via manual tagging and then by training convolutional neural networks. The classified images were examined across four types of fashion brands: mega couture, small couture, designers, and high street. We find that while product-only images make up the majority of fashion conversation in terms of volume, body snaps and face images that portray fashion items more naturally tend to receive a larger number of likes and comments from the audience. Our findings bring insights into building an automated tool for classifying or generating influential fashion information. We make our novel dataset of 24,752 labeled images on fashion conversations, containing visual and textual cues, available to the research community. Comment: 10 pages, 6 figures; this paper will be presented at ICWSM'1
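
    The engagement comparison described above (likes and comments per image type) boils down to a simple aggregation once per-post labels exist, whether from manual tagging or a CNN. The sketch below is only illustrative; the column names and values are made-up placeholders, not figures from the paper.

```python
# Toy example of comparing engagement across image types (placeholder data).
import pandas as pd

posts = pd.DataFrame({
    "image_type": ["product_only", "body_snap", "face", "product_only"],
    "likes":      [120, 430, 510, 90],
    "comments":   [3, 12, 20, 1],
})

# Mean likes/comments per image type, sorted by likes.
engagement = posts.groupby("image_type")[["likes", "comments"]].mean()
print(engagement.sort_values("likes", ascending=False))
```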

    Person Recognition in Personal Photo Collections

    Recognising persons in everyday photos presents major challenges (occluded faces, different clothing, locations, etc.) for machine vision. We propose a convnet-based person recognition system and provide an in-depth analysis of the informativeness of different body cues, the impact of training data, and the common failure modes of the system. In addition, we discuss the limitations of existing benchmarks and propose more challenging ones. Our method is simple and built on open source and open data, yet it improves the state-of-the-art results on a large dataset of social media photos (PIPA). Comment: Accepted to ICCV 2015, revised
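
    As a hedged sketch of the multi-cue idea, and not the authors' exact architecture, the snippet below extracts convnet features from two body crops (e.g. face and full body), concatenates them, and feeds them to a linear identity classifier. The backbone, crop handling, and identity count are assumptions.

```python
# Sketch of combining convnet features from multiple body cues for person recognition.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # assumed backbone
backbone.fc = nn.Identity()   # keep pooled features, drop the ImageNet head
backbone.eval()

def cue_feature(crop: torch.Tensor) -> torch.Tensor:
    """crop: (3, 224, 224) image tensor for one body cue (face, torso, ...)."""
    with torch.no_grad():
        return backbone(crop.unsqueeze(0)).squeeze(0)   # 512-d feature vector

num_ids = 500                                  # hypothetical number of identities
classifier = nn.Linear(512 * 2, num_ids)       # two cues concatenated

face_crop = torch.rand(3, 224, 224)            # placeholder crops
body_crop = torch.rand(3, 224, 224)
logits = classifier(torch.cat([cue_feature(face_crop), cue_feature(body_crop)]))
```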