9 research outputs found

    Number and distribution of Leydig cells (LC) in the epidermis of the growing axolotl, Ambystoma mexicanum (Amphibia: Urodela)

    No full text
    The epidermal Leydig cells (LC) of larval and paedomorphic Urodela (= Caudata) are highly specialized cells characterized by a complex peripheral cytoskeleton (Langerhans’ net) and numerous inclusions usually termed secretory granules. We studied the number, distribution and development of these cells in larvae up to 100 days after hatching and in some adults of the paedomorphic axolotl (Ambystoma mexicanum). With the exception of a short period after hatching, the relation between age and total length of the larvae was linear. The tail grew with positive allometry, the width of the head with negative allometry. Keeping larvae in groups resulted in somewhat slower growth, in deviations from strict linearity of some morphological parameters, and in a delayed increase in the number of LC, which is interpreted as a crowding effect. LC could already be identified before hatching and developed first in the head, then in the trunk, and finally in the tail. The number of LC increased highly disproportionately during larval growth. Within 6 months, LC differentiated from relatively small cells (20 × 10 μm) with a vacuolated appearance into large round cells (diameter ca. 65 μm) with distinct, stainable granules and a prominent Langerhans’ net, forming several layers within the epidermis. LC neither rested directly upon the basal lamella nor reached the epidermal surface. They showed moderate mitotic activity in all age groups examined. The number of mitoses appeared too small to explain the high number of LC in the epidermis and to guarantee continuous replacement. Mature, most superficially located LC appear to be shed.
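
    The allometry statements above (tail positively, head width negatively allometric) are conventionally read off the exponent b of the allometric equation y = a·x^b, fitted by log-log regression. The following minimal Python sketch illustrates that fit; the measurements are invented placeholders, not data from the study.

        # Hedged sketch: allometric growth is commonly modelled as y = a * x**b,
        # where y is a body-part measure (e.g. tail length) and x is total length.
        # b > 1 indicates positive, b = 1 isometric, and b < 1 negative allometry.
        # The numbers below are hypothetical and NOT measurements from the study.
        import numpy as np

        total_length = np.array([18.0, 25.0, 33.0, 42.0, 55.0, 70.0, 90.0])  # mm, hypothetical
        tail_length  = np.array([ 6.0,  9.5, 13.5, 18.5, 26.0, 35.0, 47.5])  # mm, hypothetical

        # Log-log linear regression: log y = log a + b * log x
        b, log_a = np.polyfit(np.log(total_length), np.log(tail_length), 1)
        print(f"allometric exponent b = {b:.2f} (b > 1 suggests positive allometry)")
        print(f"scaling coefficient a = {np.exp(log_a):.3f}")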

    A cDNA library of the eutardigrade Hypsibius klebelsbergi Mihelčič, 1959 and analysis of the actin gene

    No full text
    A cDNA library was constructed from the glacier-dwelling eutardigrade Hypsibius klebelsbergi, using more than 2000 individuals collected in the Austrian Central Alps. RNA, DNA and proteins were successively isolated by the Trizol® method. From the RNA preparation, a cDNA library was constructed with the cDNA inserted unidirectionally into the phagemid expression vector TriplEx2. The primary gene library had a titre of 10⁷ pfu ml⁻¹ and the final amplified gene library a titre of 6×10⁹ pfu ml⁻¹. The average insert length was about 1.6 kb. The partial sequence of H. klebelsbergi actin (746 bp) showed the highest similarity to GenBank data of Drosophila melanogaster actin at the nucleic acid level (84.9%) and at the amino acid level (98%). Compared with actin fragments of the eutardigrades Ramazzottius oberhaeuseri (450 bp) and Macrobiotus sp. (453 bp), the identities were 85% and 81% at the nucleic acid level and 100% and 98% at the amino acid level, respectively. Identity with actin fragments (359 bp) of Hypsibius dujardini from GenBank was 96% at the nucleic acid and 100% at the amino acid level.
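
    The nucleotide and amino acid identities quoted above are, in essence, the fraction of matching positions over an aligned region of two sequences. A minimal Python sketch of that calculation is given below; the sequences are invented placeholders, not the actual actin fragments.

        # Percent identity between two already-aligned, equal-length sequence fragments.
        # The example sequences are toy placeholders, not real actin data.
        def percent_identity(seq_a: str, seq_b: str) -> float:
            if len(seq_a) != len(seq_b):
                raise ValueError("sequences must be aligned to equal length")
            matches = sum(a == b for a, b in zip(seq_a, seq_b))
            return 100.0 * matches / len(seq_a)

        # Two short nucleotide stretches differing at two of 15 positions -> ~86.7%
        print(percent_identity("ATGGCTGACGAAGAA", "ATGGCAGACGAGGAA"))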

    Superior skin cancer classification by the combination of human and artificial intelligence

    No full text
    Background: In recent studies, convolutional neural networks (CNNs) outperformed dermatologists in distinguishing dermoscopic images of melanoma and nevi. In these studies, dermatologists and artificial intelligence were considered as opponents. However, the combination of classifiers frequently yields superior results, both in machine learning and among humans. In this study, we investigated the potential benefit of combining human and artificial intelligence for skin cancer classification. Methods: Using 11,444 dermoscopic images, which were divided into five diagnostic categories, novel deep learning techniques were used to train a single CNN. Then, both 112 dermatologists from 13 German university hospitals and the trained CNN independently classified a set of 300 biopsy-verified skin lesions into those five classes. Taking the certainty of the decisions into account, the two independently determined diagnoses were combined into a new classifier with the help of a gradient boosting method. The primary end-point of the study was the correct classification of the images into the five designated categories, whereas the secondary end-point was the correct classification of lesions as either benign or malignant (binary classification). Findings: Regarding the multiclass task, the combination of man and machine achieved an accuracy of 82.95%. This was 1.36% higher than the best of the two individual classifiers (81.59%, achieved by the CNN). Owing to the class imbalance in the binary problem, sensitivity, but not accuracy, was examined and demonstrated to be superior (89%) to the best individual classifier (CNN with 86.1%). The specificity of the combined classifier decreased from 89.2% to 84%. However, at an equal sensitivity of 89%, the CNN achieved a specificity of only 81.5%. Interpretation: Our findings indicate that the combination of human and artificial intelligence achieves superior results over the independent results of both of these systems. (C) 2019 The Author(s). Published by Elsevier Ltd.
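
    The fusion step described in the Methods can be pictured as a small meta-classifier that receives the CNN's class probabilities together with the dermatologist's certainty-weighted diagnosis and learns the final decision. The Python sketch below only illustrates this wiring under assumed inputs: all arrays are random placeholders, and scikit-learn's GradientBoostingClassifier merely stands in for the gradient boosting method used in the study.

        # Toy fusion of machine and human predictions via gradient boosting.
        # Every array below is randomly generated placeholder data.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        n_lesions, n_classes = 300, 5

        cnn_probs  = rng.dirichlet(np.ones(n_classes), size=n_lesions)  # CNN softmax outputs
        derm_label = rng.integers(0, n_classes, size=n_lesions)         # dermatologist diagnosis
        derm_conf  = rng.uniform(0.5, 1.0, size=n_lesions)              # stated certainty
        y_true     = rng.integers(0, n_classes, size=n_lesions)         # biopsy-verified class

        # One-hot encode the human diagnosis, weighted by the stated certainty,
        # and concatenate it with the CNN probabilities into one feature vector.
        derm_onehot = np.eye(n_classes)[derm_label] * derm_conf[:, None]
        X = np.hstack([cnn_probs, derm_onehot])

        fusion = GradientBoostingClassifier().fit(X, y_true)
        print("combined accuracy on toy data:", fusion.score(X, y_true))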

    Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks

    No full text
    Background: Recently, convolutional neural networks (CNNs) systematically outperformed dermatologists in distinguishing dermoscopic melanoma and nevi images. However, such a binary classification does not reflect the clinical reality of skin cancer screenings, in which multiple diagnoses need to be taken into account. Methods: Using 11,444 dermoscopic images, which covered dermatologic diagnoses comprising the majority of pigmented skin lesions commonly faced in skin cancer screenings, a CNN was trained through novel deep learning techniques. A test set of 300 biopsy-verified images was used to compare the classifier's performance with that of 112 dermatologists from 13 German university hospitals. The primary end-point was the correct classification of the different lesions into benign and malignant. The secondary end-point was the correct classification of the images into one of the five diagnostic categories. Findings: Sensitivity and specificity of dermatologists for the primary end-point were 74.4% (95% confidence interval [CI]: 67.0-81.8%) and 59.8% (95% CI: 49.8-69.8%), respectively. At equal sensitivity, the algorithm achieved a specificity of 91.3% (95% CI: 85.5-97.1%). For the secondary end-point, the mean sensitivity and specificity of the dermatologists were 56.5% (95% CI: 42.8-70.2%) and 89.2% (95% CI: 85.0-93.3%), respectively. At equal sensitivity, the algorithm achieved a specificity of 98.8%. Two-sided McNemar tests revealed significance for the primary end-point (p < 0.001). For the secondary end-point, outperformance (p < 0.001) was achieved except for basal cell carcinoma (on-par performance). Interpretation: Our findings show that automated classification of dermoscopic melanoma and nevi images is extendable to a multiclass classification problem, thus better reflecting clinical differential diagnoses, while still outperforming dermatologists at a significant level (p < 0.001). (C) 2019 The Author(s). Published by Elsevier Ltd.
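
    The reported comparison rests on sensitivity (true-positive rate), specificity (true-negative rate) and a two-sided McNemar test on the paired predictions of the CNN and the dermatologists for the same lesions. The Python sketch below reproduces only the mechanics of that evaluation on randomly generated placeholder labels, not on the study's data; statsmodels' mcnemar is assumed as the test implementation.

        # Sensitivity/specificity of two classifiers plus a two-sided McNemar test
        # on their paired correctness. All labels below are random toy data.
        import numpy as np
        from statsmodels.stats.contingency_tables import mcnemar

        rng = np.random.default_rng(1)
        y_true   = rng.integers(0, 2, size=300)                          # 1 = malignant, 0 = benign
        pred_cnn = np.where(rng.random(300) < 0.85, y_true, 1 - y_true)  # ~85% correct (toy)
        pred_der = np.where(rng.random(300) < 0.70, y_true, 1 - y_true)  # ~70% correct (toy)

        def sens_spec(y, p):
            return np.mean(p[y == 1] == 1), np.mean(p[y == 0] == 0)      # TPR, TNR

        print("CNN           sens/spec:", sens_spec(y_true, pred_cnn))
        print("dermatologists sens/spec:", sens_spec(y_true, pred_der))

        # 2x2 table of paired correctness: rows = CNN correct?, cols = dermatologists correct?
        cnn_ok, der_ok = pred_cnn == y_true, pred_der == y_true
        table = [[np.sum(cnn_ok & der_ok),  np.sum(cnn_ok & ~der_ok)],
                 [np.sum(~cnn_ok & der_ok), np.sum(~cnn_ok & ~der_ok)]]
        print("two-sided McNemar p-value:", mcnemar(table, exact=False, correction=True).pvalue)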

    Bibliography (Literaturverzeichnis)

    No full text

    List of sources and references (Quellen- und Literaturverzeichnis)

    No full text