29 research outputs found

    The Non-Epsilon-Delta Integral on (A,B)

    Integration theory is usually developed by means of epsilon-delta arguments. Consequently, studying integral theory always involves a symmetry about a certain point.

    Nearest-Integer Representation of the Partial Sums of the Harmonic Series

    This paper studies the nearest-integer representation of the partial sums of the harmonic series.

    Transfer learning with multiple pre-trained network for fundus classification

    Transfer learning (TL) is a technique for reusing and modifying a pre-trained network. It reuses the feature-extraction layers of the pre-trained network, so the target domain obtains feature knowledge from the source domain, while the classification layers are modified so that the target domain can perform a new task. In this article, the target domain is fundus image classification with two classes, normal and neovascularization. The data consist of 100 patches, split randomly into training and validation sets at a 70:30 ratio. The steps of TL are: load a pre-trained network, replace the final layers, train the network, and assess network accuracy. First, the pre-trained network is a layer configuration of a convolutional neural network architecture; the pre-trained networks used are AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogLeNet, Inception-V3, InceptionResNetV2, and SqueezeNet. Second, replacing the final layers means replacing the last three layers: the fully connected layer, the softmax layer, and the output layer. They are replaced with a fully connected layer sized to the number of classes, followed by a softmax and an output layer that match the target domain. Third, the networks are trained to produce optimal accuracy; a gradient descent optimization algorithm is used. Fourth, network accuracy is assessed. The experimental results show a testing accuracy between 80% and 100%
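    The four TL steps above can be sketched in PyTorch. This is a minimal illustration, not the paper's code: ResNet18 stands in for the nine architectures listed, the data and labels are random stand-ins for the fundus patches, and weights=None skips the pretrained-weight download that a real TL run would perform.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Step 1: load a pre-trained network (weights=None here; in practice
    # pretrained source-domain weights would be loaded).
    model = models.resnet18(weights=None)

    # Freeze the feature-extraction layers (source-domain knowledge).
    for p in model.parameters():
        p.requires_grad = False

    # Step 2: replace the final fully connected layer so it classifies
    # the two target classes (normal vs. neovascularization).
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Step 3: train only the new head with gradient descent.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()  # applies softmax internally

    x = torch.randn(4, 3, 224, 224)    # stand-in for 4 fundus patches
    y = torch.tensor([0, 1, 0, 1])     # hypothetical labels
    for _ in range(3):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    # Step 4: assess -- here only the shape of the new head's output.
    print(model(x).shape)
    ```

    Freezing the backbone and optimizing only `model.fc` is what makes this transfer learning rather than training from scratch: gradients flow only into the replaced classification layer.
    
    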

    Classification of neovascularization using convolutional neural network model

    Neovascularization is the growth of new vessels in the retina apart from the normal arteries and veins. It can appear on the optic disc and anywhere on the surface of the retina. A retina is categorized as Proliferative Diabetic Retinopathy (PDR), the severe stage of Diabetic Retinopathy (DR), if it has neovascularization. An image classification system that distinguishes normal from neovascularization patches is presented here. The classification uses a Convolutional Neural Network (CNN) model together with classification methods such as Support Vector Machine, k-Nearest Neighbor, Naïve Bayes classifier, Discriminant Analysis, and Decision Tree. So far, no patch dataset of neovascularization has been available for the classification task. The data consist of normal patches, New Vessels on the Disc (NVD), and New Vessels Elsewhere (NVE). Images are taken from two databases, MESSIDOR and Retina Image Bank. The patches are cropped manually from regions of the images that experts have marked as neovascularization. The dataset consists of 100 patches. Tests using three scenarios achieved a classification accuracy of 90%-100% with a cross-validation loss of 0%-26.67%. The tests were performed on a single Graphics Processing Unit (GPU)
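    The classification stage can be sketched with scikit-learn. This is an illustrative pipeline only: synthetic feature vectors stand in for the CNN-derived features of the 100 fundus patches, which are not available here, and the five classical classifiers named in the abstract are compared by cross-validation.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.tree import DecisionTreeClassifier

    # 100 samples with 64 features, standing in for CNN activations
    # of the normal / neovascularization patches.
    X, y = make_classification(n_samples=100, n_features=64,
                               n_informative=10, random_state=0)

    classifiers = {
        "SVM": SVC(kernel="linear"),
        "k-NN": KNeighborsClassifier(n_neighbors=3),
        "Naive Bayes": GaussianNB(),
        "Discriminant Analysis": LinearDiscriminantAnalysis(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
    }

    # 5-fold cross-validated accuracy for each classifier.
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")
    ```

    On the real data, the feature matrix X would come from a CNN's penultimate layer rather than `make_classification`.
    
    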

    On Commutative Characterization of Graph Operation with Respect to Metric Dimension

    Let G be a connected graph with vertex set V(G) and W = {w1, w2, ..., wm} ⊆ V(G). The representation of a vertex v ∈ V(G) with respect to W is the ordered m-tuple r(v|W) = (d(v,w1), d(v,w2), ..., d(v,wm)), where d(v,w) is the distance between vertices v and w. The set W is called a resolving set for G if every vertex of G has a distinct representation with respect to W. A resolving set containing a minimum number of vertices is called a basis for G. The metric dimension of G, denoted by dim(G), is the number of vertices in a basis of G. In general, the comb product and the corona product are non-commutative operations on graphs. However, these operations can be commutative with respect to the metric dimension for some graphs under certain conditions. In this paper, we determine the metric dimension of the generalized comb and corona products of graphs, as well as the necessary and sufficient conditions on the graphs for the comb and corona products to be commutative operations with respect to the metric dimension

    Fractional Local Metric Dimension of Comb Product Graphs

    The local resolving neighborhood R_l{u,v} of a pair of adjacent vertices u and v in a connected graph G with vertex set V(G) and edge set E(G) is the set of vertices whose distance to u differs from their distance to v, that is, R_l{u,v} = {x ∈ V(G) : d(x,u) ≠ d(x,v)}. A local resolving function f of G is a real-valued function f : V(G) → [0,1] such that f(R_l{u,v}) ≥ 1 for every pair of adjacent vertices u and v, where f(S) denotes the sum of the values of f over the vertices of S. The local fractional metric dimension of the graph G, denoted by dim_lf(G), is defined by dim_lf(G) = min{|f| : f is a local resolving function of G}, where |f| is the sum of the values of f over all vertices of G. One of the operations on graphs is the comb product; the comb product of graphs G and H is denoted by G ▷ H. The aim of this research is to determine the local fractional metric dimension of the comb product G ▷ H, where G is a connected graph and H is a complete graph
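    The definition above is a linear program: minimize the total weight of f subject to f(R_l{u,v}) ≥ 1 for every edge uv, with 0 ≤ f(v) ≤ 1. The sketch below (an illustration, not the paper's analytical method) solves that LP with SciPy for an arbitrary small graph given as an adjacency list.

    ```python
    import numpy as np
    from scipy.optimize import linprog
    from collections import deque

    def bfs(adj, s):
        """BFS distances from s to every vertex."""
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def local_fractional_metric_dimension(adj):
        verts = sorted(adj)
        d = {v: bfs(adj, v) for v in verts}
        edges = {(u, v) for u in adj for v in adj[u] if u < v}
        # One constraint per edge uv: sum of f over R_l{u,v} >= 1,
        # written as -sum <= -1 for linprog's A_ub form.
        A = [[-1.0 if d[x][u] != d[x][v] else 0.0 for x in verts]
             for (u, v) in edges]
        res = linprog(c=np.ones(len(verts)), A_ub=A,
                      b_ub=-np.ones(len(A)), bounds=[(0, 1)] * len(verts))
        return res.fun

    # Complete graph K4: every R_l{u,v} = {u, v}, so dim_lf(K4) = 4/2 = 2.
    k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
    print(local_fractional_metric_dimension(k4))  # 2.0
    ```
    
    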

    BOUNDEDNESS OF THE FRACTIONAL INTEGRAL OPERATOR ON WEIGHTED INHOMOGENEOUS QUASI-METRIC SPACES

    In this study, sufficient conditions are established for the boundedness of the fractional integral operator on weighted Morrey spaces and weighted generalized Morrey spaces over quasi-metric spaces, which differ from previous results. The proof uses Hölder's inequality. Keywords— fractional integral operator, weighted Morrey space, weighted generalized Morrey space, inhomogeneous quasi-metric space
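    The key tool cited above is Hölder's inequality. In its standard measure-space form (a textbook statement, not taken from the paper, which applies it on a quasi-metric measure space) it reads, for exponents 1 < p < ∞ with 1/p + 1/p' = 1:

    ```latex
    \int_X |f(x)\, g(x)| \, d\mu(x)
      \le \left( \int_X |f(x)|^{p} \, d\mu(x) \right)^{1/p}
          \left( \int_X |g(x)|^{p'} \, d\mu(x) \right)^{1/p'}
    ```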

    Support Vector Machine optimization with fractional gradient descent for data classification

    Data classification faces several problems, one of which is that large amounts of data increase computing time. SVM is a reliable linear classifier for linear or non-linear data, but for large-scale data it runs into computational time constraints. The fractional gradient descent method is an unconstrained optimization algorithm for training support vector machine classifiers, whose training problem is convex. Compared to the classic integer-order model, a model built with fractional calculus has a significant advantage in accelerating computing time. This research investigates the current state of this new optimization method, based on fractional derivatives, as implemented in the classifier algorithm. The SVM classifier with fractional gradient descent optimization reaches a convergence point in approximately 50 fewer iterations than SVM-SGD. The model update step is smaller in the fractional case because the multiplier value is less than 1, i.e., a fraction. The SVM-Fractional SGD algorithm is shown to be an effective method for rainfall forecasting decisions
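    A minimal sketch of SVM training by stochastic (sub)gradient descent on synthetic 2-D data follows. The fractional behaviour described above is only mimicked here by a multiplier below 1 that shrinks each update; the paper's actual fractional-derivative update rule is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Two well-separated Gaussian clusters with labels -1 and +1.
    X = np.vstack([rng.normal(-2, 1, (n // 2, 2)),
                   rng.normal(+2, 1, (n // 2, 2))])
    y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

    def svm_sgd(X, y, lr=0.01, lam=0.01, epochs=20, multiplier=1.0):
        """Hinge-loss SGD for a linear SVM. multiplier < 1 mimics the
        smaller fractional update steps; multiplier = 1 is classic SVM-SGD."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for i in rng.permutation(len(y)):
                margin = y[i] * (X[i] @ w + b)
                if margin < 1:                       # hinge subgradient active
                    w += multiplier * lr * (y[i] * X[i] - lam * w)
                    b += multiplier * lr * y[i]
                else:                                # only the L2 penalty term
                    w -= multiplier * lr * lam * w
        return w, b

    w, b = svm_sgd(X, y, multiplier=0.8)
    acc = np.mean(np.sign(X @ w + b) == y)
    print(f"accuracy: {acc:.2f}")
    ```

    Setting multiplier=1.0 recovers the baseline SVM-SGD, so the same function can be used to compare convergence of the two variants.
    
    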