
    Artificial Neural Networks for the Diagnosis of Aggressive Periodontitis Trained by Immunologic Parameters

    There is neither a single clinical, microbiological, histopathological or genetic test, nor any combination of them, that discriminates aggressive periodontitis (AgP) from chronic periodontitis (CP) patients. We aimed to estimate probability density functions of clinical and immunologic datasets derived from periodontitis patients and to construct artificial neural networks (ANNs) that correctly classify patients as AgP or CP. The fit of probability distributions to the datasets was tested by the Akaike information criterion (AIC). ANNs were trained on cross entropy (CE) values estimated between the probabilities of showing certain levels of immunologic parameters and a reference mode probability proposed by kernel density estimation (KDE). The weight decay regularization parameter of the ANNs was determined by 10-fold cross-validation. KDE revealed possible evidence for 2 clusters of patients on cross-sectional and longitudinal bone loss measurements. Two to 7 clusters were shown on datasets of the CD4/CD8 ratio, CD3, monocyte, eosinophil, neutrophil and lymphocyte counts, IL-1, IL-2, IL-4, INF-γ and TNF-α levels from monocytes, and antibody levels against A. actinomycetemcomitans (A.a.) and P. gingivalis (P.g.). The ANNs gave 90%–98% accuracy in classifying patients as either AgP or CP. The best overall prediction was given by an ANN with the CE of monocyte, eosinophil and neutrophil counts and the CD4/CD8 ratio as inputs. ANNs can be powerful in classifying periodontitis patients as AgP or CP when fed CE values based on KDE. Therefore, ANNs can be employed for accurate diagnosis of AgP or CP using relatively simple and conveniently obtained parameters, such as leukocyte counts in peripheral blood. This will allow clinicians to better adapt specific treatment protocols for their AgP and CP patients.
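
    A minimal sketch, in Python, of one plausible reading of the cross-entropy (CE) feature construction described above, assuming a univariate Gaussian KDE per immunologic parameter and a single reference mode; the paper's exact formulation may differ, and the function name ce_feature and the sample values are purely illustrative:

        import numpy as np
        from scipy.stats import gaussian_kde

        def ce_feature(values, patient_value):
            """Cross-entropy-style feature for one immunologic parameter.

            `values` holds the parameter measurements across all patients;
            `patient_value` is the measurement for the patient being scored.
            This is an illustrative reading of the paper's CE input, not the
            authors' exact formula.
            """
            kde = gaussian_kde(values)                  # kernel density estimate
            grid = np.linspace(values.min(), values.max(), 512)
            density = kde(grid)
            p_mode = density.max()                      # density at the reference mode
            p_patient = kde([patient_value])[0]         # density at the patient's value
            # Cross entropy between the reference-mode probability and the
            # patient's probability (hypothetical formulation).
            return -p_mode * np.log(p_patient + 1e-12)

        # Example: CD4/CD8 ratios for a cohort (synthetic numbers, not study data).
        rng = np.random.default_rng(0)
        ratios = np.concatenate([rng.normal(1.5, 0.1, 40), rng.normal(1.9, 0.1, 40)])
        print(ce_feature(ratios, 1.55))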

    Characteristics of three artificial neural networks (ANN) built on immunological parameters.

    a CE = cross entropy; feature selection by automatic relevance determination.
    b CD = cluster of differentiation.
    c epoch = iteration.
    d Determined by 10-fold cross-validation.
    e Batch training passes all input data before updating the synaptic weights.
    f The mean values of 10 random configurations of initial weights are reported (mean sensitivity, specificity and overall accuracy of the ANNs against the original clinical diagnosis).
    g IL = interleukin.
    h INF = interferon.
    i TNF = tumor necrosis factor.
    k A.a. = Aggregatibacter actinomycetemcomitans (Y4 antigen).
    l P.g. = Porphyromonas gingivalis (FDC381 antigen).
    m C.o. = Capnocytophaga ochracea.
    n F.n. = Fusobacterium nucleatum.
    o # = number.
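
    Footnotes d and e describe how the networks were trained: full-batch updates, with the weight-decay regularization parameter chosen by 10-fold cross-validation. Below is a minimal sketch of that selection step, assuming scikit-learn's MLPClassifier as a stand-in (its alpha parameter playing the role of the weight-decay term, and the L-BFGS solver giving full-batch training); the feature matrix and labels are synthetic placeholders, not study data:

        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.neural_network import MLPClassifier

        # X: per-patient cross-entropy features (e.g. monocyte, eosinophil and
        # neutrophil counts and the CD4/CD8 ratio); y: 0 = CP, 1 = AgP.
        # Synthetic placeholders, only to make the sketch runnable.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 4))
        y = np.array([0, 1] * 30)

        # One hidden layer of 6 sigmoid units, full-batch L-BFGS training, and the
        # L2 penalty (alpha) standing in for the weight-decay parameter of note d.
        grid = GridSearchCV(
            MLPClassifier(hidden_layer_sizes=(6,), activation="logistic",
                          solver="lbfgs", max_iter=2000),
            param_grid={"alpha": [1e-4, 1e-3, 1e-2, 1e-1, 1.0]},
            cv=10,                       # 10-fold cross-validation
            scoring="accuracy",
        )
        grid.fit(X, y)
        print(grid.best_params_, grid.best_score_)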

    Bivariate kernel density estimation (KDE) for some selected parameters.

    (A) Contour plot for bivariate KDE of longitudinal radiographic bone loss level (sample-1) in relation to age: this topographic-style plot shows a main cluster with 0.2 mm longitudinal bone loss and a small cluster with almost five times greater bone loss. (B) Contour plot for bivariate KDE of the CD4/CD8 ratio by age (sample-2): two clusters appear, although not distinctly separated, with modes at 1.5 and 1.9. (C) Contour plot for bivariate KDE of the CD4/CD8 ratio (sample-2) by disease severity (% of teeth with bone loss ≥ 50% of their root length): two distinct clusters of patients are revealed, with modes at x values of 1.5 and 1.9.
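
    A minimal scipy/matplotlib sketch of how such a bivariate contour plot can be produced; the ratio and severity arrays below are synthetic stand-ins, not the sample-2 measurements:

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.stats import gaussian_kde

        # Synthetic stand-ins for the CD4/CD8 ratio vs. disease-severity data
        # of panel C; the real sample-2 values are not reproduced here.
        rng = np.random.default_rng(2)
        ratio = np.concatenate([rng.normal(1.5, 0.08, 50), rng.normal(1.9, 0.08, 50)])
        severity = np.concatenate([rng.normal(20, 5, 50), rng.normal(55, 10, 50)])

        kde = gaussian_kde(np.vstack([ratio, severity]))   # bivariate KDE

        # Evaluate the density on a grid and draw the topographic-style contours.
        xg, yg = np.meshgrid(np.linspace(1.2, 2.2, 200), np.linspace(0, 80, 200))
        zg = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)

        plt.contour(xg, yg, zg, levels=10)
        plt.xlabel("CD4/CD8 ratio")
        plt.ylabel("% of teeth with bone loss >= 50% of root length")
        plt.show()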

    Univariate kernel density estimation (KDE) graphs.

    Graphs A to C. Univariate KDE for radiographic bone loss measurements: modes (single, bimodal or multimodal) are defined as the values that appear most frequently. Graphs A and B are from sample-1. In graph C (sample-2) we log transformed the confined data so that the density has support on the interval (−∞, +∞) (see text S1: http://www.plosone.org/article/info:doi/10.1371/journal.pone.0089757#pone.0089757.s001). Graphs D to X. Univariate KDE for immunologic data: possible evidence of multimodality for the CD4/CD8 ratio, CD3, lymphocyte, monocyte, eosinophil, basophil and neutrophil counts (sample-2), IgG levels (sample-3), IL-2, IL-4, IL-6, INF-γ, TNF-α, IgG A.a. titers and IgG C.o. titers (sample-4). Mini clusters close to each other are detected for IL-1 and IgG P.g. titers (sample-4).
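
    A small sketch of univariate mode detection on a KDE, including the log transform mentioned for graph C (estimating the density on the log scale so the support is the whole real line, then mapping the modes back); the helper name kde_modes and the sample values are illustrative, not taken from the study:

        import numpy as np
        from scipy.stats import gaussian_kde
        from scipy.signal import argrelextrema

        def kde_modes(values, log_transform=False, grid_size=1000):
            """Return the modes (local density maxima) of a univariate KDE.

            log_transform=True mirrors the caption's treatment of confined,
            strictly positive data: estimate the density on the log scale and
            map the detected modes back to the original scale.
            """
            x = np.log(values) if log_transform else np.asarray(values, float)
            kde = gaussian_kde(x)
            grid = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), grid_size)
            density = kde(grid)
            idx = argrelextrema(density, np.greater)[0]    # indices of local maxima
            modes = grid[idx]
            return np.exp(modes) if log_transform else modes

        # Example with a synthetic bimodal CD4/CD8-like sample.
        rng = np.random.default_rng(3)
        sample = np.concatenate([rng.normal(1.5, 0.1, 60), rng.normal(1.9, 0.1, 60)])
        print(kde_modes(sample))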

    Multilayer perceptron feedforward neural network with error backpropagation.

    The information (cross entropy values of the immunologic parameters for each patient) is fed into the input neurons. At the hidden layer, here with 6 neurons, the information is summed and passed through the sigmoid function to the output layer, where the sigmoid function yields an AgP or CP verdict. Bias neurons have a constant value and help the network learn patterns; they are independent of the other neurons and can shift the sigmoid curve to the left or right. The classification error found at the output layer is backpropagated through the network, and the synaptic weights are adapted accordingly as the network learns from its error and tries to minimize it.
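
    A minimal NumPy sketch of such a network: one hidden layer of 6 sigmoid units, a sigmoid output neuron, bias terms, batch training, error backpropagation on the cross-entropy loss, and a simple weight-decay term. The initialisation, learning rate, epoch count and demo data are illustrative assumptions, not the authors' settings:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_mlp(X, y, n_hidden=6, lr=0.5, epochs=500, decay=1e-3, seed=0):
            """Batch-trained MLP with one hidden sigmoid layer (illustrative only)."""
            rng = np.random.default_rng(seed)
            W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)  # bias terms
            W2 = rng.normal(0, 0.5, (n_hidden, 1));          b2 = np.zeros(1)
            y = y.reshape(-1, 1)
            for _ in range(epochs):                   # one epoch = one full pass (batch training)
                h = sigmoid(X @ W1 + b1)              # hidden layer activations
                out = sigmoid(h @ W2 + b2)            # network's P(AgP) for each patient
                # Backpropagate the cross-entropy error; with a sigmoid output the
                # delta at the output layer simplifies to (out - y).
                d_out = out - y
                d_hid = (d_out @ W2.T) * h * (1 - h)
                W2 -= lr * (h.T @ d_out / len(X) + decay * W2)   # weight decay
                b2 -= lr * d_out.mean(axis=0)
                W1 -= lr * (X.T @ d_hid / len(X) + decay * W1)
                b1 -= lr * d_hid.mean(axis=0)
            return W1, b1, W2, b2

        # Tiny synthetic demo: 4 CE-style inputs, binary AgP/CP labels.
        rng = np.random.default_rng(4)
        X = rng.normal(size=(40, 4)); y = (X[:, 0] + X[:, 1] > 0).astype(float)
        W1, b1, W2, b2 = train_mlp(X, y)
        pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
        print("training accuracy:", (pred.ravel() == y.astype(bool)).mean())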