
    Training Process Reduction Based On Potential Weights Linear Analysis To Accelerate Back Propagation Network

    Learning is a central property of the Back Propagation Network (BPN): suitable weights and thresholds must be found during training so that training time is reduced while high accuracy is achieved. Data pre-processing steps such as dimension reduction of the input values and pre-training are currently the main contributors to efficient techniques that cut training time while preserving accuracy; weight initialization remains an important issue, because random initialization is unpredictable and can lead to low accuracy and long training times. Dimension reduction is an effective pre-processing technique for accelerating BPN classification, but it suffers from the problem of missing data. In this paper, we review current pre-training techniques and propose a new pre-processing technique called Potential Weight Linear Analysis (PWLA), which combines normalization, dimension reduction of the input values and pre-training. In PWLA, the data are first normalized, and a pre-training phase is then applied to the normalized inputs to obtain the potential weights. After these phases, the dimension of the input matrix is reduced using the real potential weights. The technique is evaluated on the XOR problem and on three datasets: SPECT Heart, SPECTF Heart and Liver Disorders (BUPA). Our results show that PWLA turns the BPN into a new Supervised Multi Layer Feed Forward Neural Network (SMFFNN) model that achieves high accuracy in a single epoch, without a training cycle. PWLA also offers a non-linear, supervised and unsupervised dimension reduction capability that can be applied to other supervised multi-layer feed-forward neural network models in future work. Comment: 11 pages, IEEE format, International Journal of Computer Science and Information Security, IJCSIS 2009, ISSN 1947-5500, impact factor 0.42
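    As a rough illustration of the pipeline described above (normalize, pre-train to obtain per-feature potential weights, then reduce the input dimension), the sketch below uses absolute feature-label correlation as a stand-in for the paper's potential-weight computation; the function name and the weighting rule are assumptions, not the authors' PWLA algorithm.

```python
import numpy as np

def pwla_like_preprocess(X, y, keep_ratio=0.5):
    """Hypothetical PWLA-style preprocessing: normalize the inputs, score each
    feature with a 'potential weight' (here: absolute feature-label correlation
    as an illustrative substitute), then keep only the strongest features."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)

    # 1. Normalization: scale each input dimension to [0, 1]
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs - mins == 0, 1, maxs - mins)
    X_norm = (X - mins) / span

    # 2. "Pre-training": estimate one potential weight per feature
    weights = np.array([
        abs(np.corrcoef(X_norm[:, j], y)[0, 1]) if X_norm[:, j].std() > 0 else 0.0
        for j in range(X_norm.shape[1])
    ])

    # 3. Dimension reduction: keep the features with the largest weights
    k = max(1, int(keep_ratio * X_norm.shape[1]))
    kept = np.argsort(weights)[::-1][:k]
    return X_norm[:, kept], weights, kept

# Example on the XOR problem mentioned in the abstract
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
X_reduced, w, idx = pwla_like_preprocess(X, y, keep_ratio=1.0)
print(X_reduced.shape, w, idx)
```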

    Inhalation characteristics of asthma patients, COPD patients and healthy volunteers with the Spiromax® and Turbuhaler® devices: a randomised, cross-over study.

    BACKGROUND: Spiromax® is a novel dry-powder inhaler containing formulations of budesonide plus formoterol (BF). The device is intended to provide dose equivalence with enhanced user-friendliness compared to the BF Turbuhaler® in asthma and chronic obstructive pulmonary disease (COPD). The present study was performed to compare inhalation parameters with empty versions of the two devices, and to investigate the effects of enhanced training designed to encourage faster inhalation. METHODS: This randomised, open-label, cross-over study included children with asthma (n = 23), adolescents with asthma (n = 27), adults with asthma (n = 50), adults with COPD (n = 50) and healthy adult volunteers (n = 50). Inhalation manoeuvres were recorded with each device after training with the patient information leaflet (PIL) and again after enhanced training using an In-Check Dial device. RESULTS: After PIL training, peak inspiratory flow (PIF), maximum change in pressure (∆P) and inhalation volume (IV) were significantly higher with Spiromax than with Turbuhaler (p < 0.05 or better in all patient groups). After enhanced training, numerically or significantly higher values for PIF, ∆P, IV and acceleration remained with Spiromax versus Turbuhaler, except for ∆P in COPD patients. After PIL training, one adult asthma patient and one COPD patient inhaled at <30 L/min through the Spiromax, compared with one adult asthma patient and five COPD patients through the Turbuhaler. All patients achieved PIF values of at least 30 L/min after enhanced training. CONCLUSIONS: The two inhalers have similar resistance, so inhalation flows and pressure changes would be expected to be similar. The higher flow-related values noted for Spiromax versus Turbuhaler after PIL training suggest that Spiromax may have human-factor advantages in real-world use. After enhanced training, the flow-related differences between devices persisted; increased flow rates were achieved with both devices, and all patients achieved the minimal flow required for adequate drug delivery. Enhanced training could be useful, especially in COPD patients.

    Threshold Determination for ARTMAP-FD Familiarity Discrimination

    The ARTMAP-FD neural network performs both identification (placing test patterns in classes encountered during training) and familiarity discrimination (judging whether a test pattern belongs to any of the classes encountered during training). ARTMAP-FD quantifies the familiarity of a test pattern by computing a measure of the degree to which the pattern's components lie within the ranges of values of training patterns grouped in the same cluster. This familiarity measure is compared to a threshold, which can be varied to generate a receiver operating characteristic (ROC) curve. Methods for selecting optimal values for the threshold are evaluated. The performance of validation-set methods is compared with that of methods which track the development of the network's discrimination capability during training. The techniques are applied to databases of simulated radar range profiles. Advanced Research Projects Agency; Office of Naval Research (N00011-95-1-0657, N00011-95-0109, NOOOB-96-0659); National Science Foundation (IRI-94-01659)
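    The threshold-selection idea can be made concrete with a small validation-set sweep; the sketch below maximizes Youden's J (true-positive rate minus false-positive rate) over candidate thresholds, which is one generic criterion and not necessarily the exact rule evaluated in the paper.

```python
import numpy as np

def choose_threshold(familiarity, is_familiar):
    """Pick a familiarity threshold from validation data by sweeping every
    observed score and maximizing Youden's J = TPR - FPR.
    Illustrative only; not the specific selection methods of the paper."""
    familiarity = np.asarray(familiarity, dtype=float)
    is_familiar = np.asarray(is_familiar, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(familiarity):
        predicted_familiar = familiarity >= t
        tpr = (predicted_familiar & is_familiar).sum() / max(is_familiar.sum(), 1)
        fpr = (predicted_familiar & ~is_familiar).sum() / max((~is_familiar).sum(), 1)
        j = tpr - fpr
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Toy validation scores: higher familiarity for patterns from known classes
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 0, 0]
print(choose_threshold(scores, labels))  # -> 0.75
```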

    Fader Networks: Manipulating Images by Sliding Attributes

    This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space. As a result, after training, our model can generate different realistic versions of an input image by varying the attribute values. By using continuous attribute values, we can choose how much a specific attribute is perceivable in the generated image. This property could allow for applications where users modify an image using sliding knobs, like faders on a mixing console, to change the facial expression of a portrait or to update the color of some objects. Compared to the state of the art, which mostly relies on training adversarial networks in pixel space by altering attribute values at train time, our approach results in much simpler training schemes and scales nicely to multiple attributes. We present evidence that our model can significantly change the perceived value of the attributes while preserving the naturalness of the images. Comment: NIPS 2017
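    A minimal sketch of the attribute "slider" at generation time is shown below, assuming a toy PyTorch decoder that takes a latent code plus a continuous attribute value; the architecture, sizes and names are illustrative assumptions and do not reproduce the paper's model.

```python
import torch
import torch.nn as nn

class FaderLikeDecoder(nn.Module):
    """Toy stand-in for a Fader-style decoder: it produces an image from a
    latent code z concatenated with a continuous attribute value."""
    def __init__(self, z_dim=64, img_pixels=32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, img_pixels),
            nn.Sigmoid(),
        )

    def forward(self, z, attribute):
        # attribute is a "slider" value in [0, 1]
        return self.net(torch.cat([z, attribute], dim=1))

decoder = FaderLikeDecoder()
z = torch.randn(1, 64)  # latent code produced by the (omitted) encoder

# Sliding the attribute value, as with a fader on a mixing console,
# yields a family of outputs from the same latent code.
for alpha in torch.linspace(0.0, 1.0, steps=5):
    image = decoder(z, alpha.view(1, 1))
    print(f"alpha={alpha.item():.2f} -> output shape {tuple(image.shape)}")
```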

    Ethics and values in teacher training


    Phase space sampling and operator confidence with generative adversarial networks

    We demonstrate that a generative adversarial network can be trained to produce Ising model configurations in distinct regions of phase space. In training a generative adversarial network, the discriminator neural network becomes very good at distinguishing examples from the training set from examples from the testing set. We demonstrate that this ability can be used as an anomaly detector, producing estimates of operator values along with a confidence in the prediction.
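    A minimal sketch of reading a discriminator's output as a confidence/anomaly score is given below, assuming a toy PyTorch discriminator over flattened spin configurations; it is untrained and only illustrates the scoring interface, not the paper's setup.

```python
import torch
import torch.nn as nn

class SpinDiscriminator(nn.Module):
    """Toy discriminator over flattened Ising-like spin configurations.
    A trained discriminator's output can be read as a confidence score;
    low scores flag configurations unlike the training data."""
    def __init__(self, n_spins=16 * 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_spins, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),  # probability the sample looks like training data
        )

    def forward(self, x):
        return self.net(x)

def anomaly_score(discriminator, configuration):
    """Higher score = less like the training distribution."""
    with torch.no_grad():
        return 1.0 - discriminator(configuration).item()

disc = SpinDiscriminator()
config = torch.randint(0, 2, (1, 16 * 16)).float() * 2 - 1  # spins in {-1, +1}
print("anomaly score:", anomaly_score(disc, config))
```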