142 research outputs found

    Implementation of a Convolutional Neural Network with Inception-V3 for Cataract Detection Using Digital Fundoscopy Images

    Cataract is one of the most serious eye diseases and can lead to blindness. Early detection and treatment can reduce blindness in cataract patients. As technology advances, healthcare services now integrate medical devices with information technology to improve quality and productivity in care delivery. Fundoscopy images, i.e., images of the back interior of the eye (the fundus), can be used to predict cataracts. This study implements a Convolutional Neural Network (CNN) with the Inception-V3 architecture to detect cataracts from digital fundoscopy images. Three types of fundus images are used: normal, cataract, and macular degeneration. The fundus images are preprocessed with histogram equalization and with Contrast Limited Adaptive Histogram Equalization (CLAHE) applied to the green channel. The best result in this study is the CLAHE-preprocessed model with fine-tuning, which achieves an accuracy of 98.33%.
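    The preprocessing step this abstract describes can be illustrated with a short sketch. A minimal example, assuming OpenCV and NumPy are available and a BGR fundus image exists on disk (the file name fundus.png is hypothetical; the CLAHE parameters are common defaults, not necessarily the paper's):

        import cv2

        # Load a fundus photograph (OpenCV reads channels in BGR order);
        # the path is a placeholder for illustration.
        img = cv2.imread("fundus.png")

        # Work on the green channel, which carries most of the visible
        # detail in fundus photographs.
        b, g, r = cv2.split(img)

        # CLAHE: adaptive equalization on local tiles, with a clip limit
        # to avoid over-amplifying noise.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        g_clahe = clahe.apply(g)

        # Plain (global) histogram equalization, the paper's other
        # preprocessing variant, shown for comparison.
        g_he = cv2.equalizeHist(g)

        # Recombine the enhanced green channel with the original B and R
        # channels before feeding the image to the network.
        img_clahe = cv2.merge((b, g_clahe, r))
        cv2.imwrite("fundus_clahe.png", img_clahe)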

    BinaryConnect: Training Deep Neural Networks with binary weights during propagations

    Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in the research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights constrained to only two possible values (e.g., -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations with simple accumulations, as multipliers are the most space- and power-hungry components of digital implementations of neural networks. We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which gradients are accumulated. Like other dropout schemes, BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results with it on permutation-invariant MNIST, CIFAR-10, and SVHN. (Accepted at NIPS 2015; 9 pages, 3 figures.)
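    The core mechanism, binarizing weights only inside the forward and backward passes while the optimizer updates full-precision copies, fits in a few lines. A minimal PyTorch sketch under the assumption of a straight-through gradient estimator; the class name BinaryConnectLinear is chosen here for illustration and is not from the paper's code:

        import torch
        import torch.nn.functional as F

        class BinaryConnectLinear(torch.nn.Linear):
            # Linear layer that uses binary weights {-1, +1} during
            # propagation while the real-valued weights receive updates.
            def forward(self, x):
                # sign() binarizes; the detach() trick routes the gradient
                # straight through to the stored real-valued weights.
                # Note: torch.sign(0) is 0; the paper maps 0 to +1, a
                # corner case this sketch glosses over.
                w_bin = self.weight + (torch.sign(self.weight) - self.weight).detach()
                return F.linear(x, w_bin, self.bias)

        # Usage: train as usual; clip the real-valued weights to [-1, 1]
        # after each step, as the paper does, so they stay in range.
        layer = BinaryConnectLinear(784, 10)
        opt = torch.optim.SGD(layer.parameters(), lr=0.01)
        x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
        loss = F.cross_entropy(layer(x), y)
        loss.backward()
        opt.step()
        with torch.no_grad():
            layer.weight.clamp_(-1.0, 1.0)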

    Do optimization methods in deep learning applications matter?

    With advances in deep learning, exponential data growth, and increasing model complexity, the development of efficient optimization methods is attracting much research attention. Several implementations favor Conjugate Gradient (CG) and Stochastic Gradient Descent (SGD) as practical and elegant solutions for achieving quick convergence; however, these optimization processes also present many limitations across deep learning applications. Recent research explores higher-order optimization functions as better approaches, but these pose very complex computational challenges for practical use. Comparing first- and higher-order optimization functions, our experiments reveal that Levenberg-Marquardt (LM) converges significantly better but suffers from very long processing times, increasing the training complexity of both classification and reinforcement learning problems. Our experiments compare off-the-shelf optimization functions (CG, SGD, LM, and L-BFGS) on standard CIFAR, MNIST, CartPole, and FlappyBird experiments. The paper presents arguments on which optimization functions to use and, further, which functions would benefit from parallelization efforts to improve pretraining time and learning-rate convergence.
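    The comparison the abstract describes can be reproduced in miniature. A minimal sketch, assuming PyTorch and a toy classification task in place of the paper's benchmarks; note that torch.optim ships SGD and L-BFGS but not CG or LM, so those two would need third-party implementations:

        import time
        import torch
        import torch.nn.functional as F

        def train(optimizer_name, steps=50):
            torch.manual_seed(0)
            model = torch.nn.Sequential(
                torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
            x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
            if optimizer_name == "sgd":
                opt = torch.optim.SGD(model.parameters(), lr=0.1)
            else:
                # L-BFGS is a (limited-memory) quasi-Newton method and
                # requires a closure that re-evaluates the loss.
                opt = torch.optim.LBFGS(model.parameters(), lr=0.5)

            def closure():
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                return loss

            t0 = time.time()
            for _ in range(steps):
                loss = opt.step(closure)
            return float(loss), time.time() - t0

        # Compare final loss and wall-clock time per optimizer, mirroring
        # the convergence-versus-processing-time trade-off in the paper.
        for name in ("sgd", "lbfgs"):
            final_loss, secs = train(name)
            print(f"{name}: loss={final_loss:.4f} time={secs:.2f}s")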