13 research outputs found

    Identification of cotton and corn plant areas by employing deep transformer encoder approach and different time series satellite images: A case study in Diyarbakir, Turkey

    Determining the crops grown in agricultural fields quickly and accurately is very important. Satellite images obtained from remote sensing sensors provide information on many subjects, such as the detection and development of agricultural products and annual yield forecasting. This study aims to automatically detect agricultural crops (corn and cotton) from Sentinel-1 and Landsat-8 satellite image indexes via a new deep learning approach (Deep Transformer Encoder). The work was carried out in several stages. In the first stage, a pilot area was determined from which the Sentinel-1 and Landsat-8 satellite images of the agricultural crops were obtained. In the second stage, the coordinates of 100 sample points in this pilot area were recorded with GPS and transferred to the Sentinel-1 and Landsat-8 satellite images. In the next step, reflectance and backscattering values were extracted from the image pixels corresponding to these sample points. When creating the satellite image datasets, the months of June, July, August and September for the years 2016–2021 were preferred, since the development and harvesting times of the crops are close to each other in this period. The image dataset consists of 434 Sentinel-1 images and 693 Landsat-8 images in total. In the last stage, the datasets obtained from the different satellite images were evaluated in three categories for crop identification with the aid of the Deep Transformer Encoder approach: (1) crop identification with only the Sentinel-1 dataset, (2) crop identification with only the Landsat-8 dataset, and (3) crop identification with both the Sentinel-1 and Landsat-8 datasets.
    The results showed that accuracy values of 85%, 95% and 87.5% were obtained from the band parameters of the Sentinel-1 dataset, the Landsat-8 dataset and the combined Sentinel-1 & Landsat-8 datasets, respectively.
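The core of the approach above is a transformer encoder over a per-point time series of band values. The abstract does not give architectural details, so the following is only a minimal numpy sketch of single-head self-attention over one sample point's monthly band features; the dimensions, weights and two-class head are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a time series X of shape (T, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # scaled dot-product scores
    return softmax(scores, axis=-1) @ V      # contextualised features

# Toy setup: T monthly observations (e.g. June-September), d band features
# per month, 2 classes (corn vs cotton). All values here are random stand-ins.
T, d, n_classes = 4, 6, 2
X = rng.normal(size=(T, d))                  # one sample point's time series

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W_out = rng.normal(size=(d, n_classes))

H = self_attention(X, Wq, Wk, Wv)            # per-month encoded features
pooled = H.mean(axis=0)                      # pool over time steps
probs = softmax(pooled @ W_out)              # class probabilities
print(probs)
```

A real implementation would stack several such layers with feed-forward blocks and train the weights; this sketch only shows how a band time series flows through one attention layer into a class distribution.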

    Soil Moisture Estimation over Vegetated Agricultural Areas: Tigris Basin, Turkey from Radarsat-2 Data by Polarimetric Decomposition Models and a Generalized Regression Neural Network

    Determining the soil moisture in agricultural fields is a significant parameter for using irrigation systems efficiently. In contrast to standard soil moisture measurements, good results can be acquired in a shorter time over large areas with remote sensing tools. To estimate the soil moisture over vegetated agricultural areas, a relationship between Radarsat-2 data and measured ground soil moisture was established by polarimetric decomposition models and a generalized regression neural network (GRNN). The experiments were executed over two agricultural sites in the Tigris Basin, Turkey. The study consists of four phases. In the first phase, Radarsat-2 data were acquired on different dates and in situ measurements were taken simultaneously. In the second phase, the Radarsat-2 data were pre-processed and the GPS coordinates of the soil sample points were imported into the data. Then the standard sigma backscattering coefficients, together with the Freeman–Durden and H/A/α polarimetric decomposition models, were employed for feature extraction, and a feature vector with four sigma backscattering coefficients (σhh, σhv, σvh, and σvv) and six polarimetric decomposition parameters (entropy, anisotropy, alpha angle, volume scattering, odd bounce, and double bounce) was generated for each pattern. In the last phase, a GRNN was used to estimate the regional soil moisture with the aid of the feature vectors. The results indicated that radar is a strong remote sensing tool for soil moisture estimation, with mean absolute errors around 2.31 vol %, 2.11 vol %, and 2.10 vol % for Datasets 1–3, respectively; and 2.46 vol %, 2.70 vol %, 7.09 vol %, and 5.70 vol % on Datasets 1 & 2, 2 & 3, 1 & 3, and 1 & 2 & 3, respectively
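A GRNN is essentially Nadaraya–Watson kernel regression with one pattern node per training sample, which is why it fits a radar-feature-to-moisture mapping without iterative training. A minimal sketch, assuming a 10-dimensional feature vector (the four sigma coefficients plus six decomposition parameters) and synthetic moisture targets; the smoothing parameter sigma and all data here are illustrative:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """Generalized regression neural network prediction: a Gaussian-kernel
    weighted average of training targets (summation layer ratio)."""
    d2 = ((X_train - x) ** 2).sum(axis=1)    # squared distances to patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer activations
    return float((w @ y_train) / w.sum())    # weighted mean of targets

# Hypothetical data: 20 feature vectors mapped to soil moisture in vol %.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(20, 10))
y_train = 10 + 5 * rng.random(20)            # moisture in [10, 15] vol %

x_new = X_train[0] + 0.01 * rng.normal(size=10)   # probe near a known sample
pred = grnn_predict(X_train, y_train, x_new)
print(pred)                                   # close to y_train[0]
```

Because the prediction is a convex combination of training targets, it always stays inside the observed moisture range; in practice sigma is tuned (e.g. by cross-validation) to trade smoothness against locality.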

    Palmprint recognition system based on deep region of interest features with the aid of hybrid approach

    Palmprint recognition is a biometric technology that promises high precision, and it has started to attract the attention of researchers, especially with the emergence of deep learning techniques in recent years. In this study, a hybrid deep learning and machine learning approach is proposed to recognize palmprint images automatically via region of interest (ROI) features. The proposed work consists of several stages. In the first stage, raw images were collected from the PolyU database and preprocessing operations were applied in order to determine the ROI areas. In the second stage, deep ROI features were extracted from the preprocessed images with the aid of a deep learning technique. In the last stage, the obtained deep features were classified by a hybrid of deep convolutional neural network and support vector machine models. The overall accuracy of the proposed hybrid system reached a very successful 99.72%, and the whole process executed in a very low time of 0.10 s
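The pipeline above is ROI extraction, feature extraction, then classification. As a minimal sketch, the snippet below crops a centred ROI (the paper derives the ROI by preprocessing, which is not specified here), uses the raw ROI pixels as a stand-in for the CNN's deep features, and classifies with a nearest-centroid rule standing in for the SVM stage; the images, subjects and all sizes are synthetic assumptions.

```python
import numpy as np

def extract_roi(img, size=32):
    """Crop a centred square ROI from a 2-D palm image array."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

rng = np.random.default_rng(2)

# Hypothetical gallery: two "subjects", each seen as noisy variants of a base image.
base = [rng.normal(size=(64, 64)) for _ in range(2)]
train = [(extract_roi(b + 0.1 * rng.normal(size=(64, 64))).ravel(), label)
         for label, b in enumerate(base) for _ in range(5)]

feats = np.array([f for f, _ in train])
labels = np.array([l for _, l in train])
centroids = np.array([feats[labels == c].mean(axis=0) for c in (0, 1)])

def classify(img):
    """Assign the subject whose feature centroid is nearest to the probe ROI."""
    f = extract_roi(img).ravel()
    return int(np.argmin(((centroids - f) ** 2).sum(axis=1)))

probe = base[1] + 0.1 * rng.normal(size=(64, 64))
print(classify(probe))  # expected: 1
```

In the actual system the flattened-pixel features would be replaced by CNN activations and the centroid rule by a trained SVM; the data flow is the same.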

    Automatic diagnosis of cardiovascular disorders by sub images of the ECG signal using multi-feature extraction methods and randomized neural network

    Electrocardiography has been employed successfully in medicine for many years to provide vital knowledge about the cardiovascular system. Although processing and evaluating electrocardiogram (ECG) signals provides helpful information for detecting anomalies in the vessels, diagnosing heart defects, and treating diseases, multi-channel ECG signals have begun to be employed in order to achieve higher success. Utilizing a multi-channel ECG signal instead of a one-channel ECG signal yields better results, but requires more complex analysis and a higher computational cost. To achieve faster and more accurate results on multi-channel ECG signals, an artificial intelligence-based automatic diagnosis system is proposed that employs the texture features of two-dimensional images constructed by projecting each ECG signal vector as one row of the image. The hypothesis of this study is that these image texture features contain determinative indicators of various diseases (cardiovascular abnormalities/disturbances), even over short time intervals. Accordingly, the main contribution of this study is to show that cardiovascular defects can be detected with classical image texture methods by utilizing multi-channel biomedical signals over a sufficiently short time interval. The methodology was implemented on different time intervals of a large dataset constructed from a diverse population, labeled as one normal sinus rhythm type and eight abnormal types of ECG signals. The hypothesis was confirmed by high detection rates for identifying cardiac abnormalities, while the computational load of the processing system was reduced without sacrificing accuracy
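The signal-to-image step and texture extraction above can be sketched briefly: stack the channels as rows of a 2-D array, quantise to a few grey levels, and compute classical Haralick-style statistics from a grey-level co-occurrence matrix (GLCM). The abstract does not name the specific texture methods, so GLCM contrast/homogeneity/energy here are a representative assumption; the ECG data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 12-channel ECG segment: each channel becomes one image row.
n_ch, n_samples = 12, 256
ecg = rng.normal(size=(n_ch, n_samples))

# Quantise the "image" to a small number of grey levels.
levels = 8
lo, hi = ecg.min(), ecg.max()
img = np.clip(((ecg - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)

def glcm(img, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count co-occurring level pairs
    return m / m.sum()

p = glcm(img, levels)
i, j = np.indices(p.shape)
contrast = ((i - j) ** 2 * p).sum()           # classic Haralick contrast
homogeneity = (p / (1 + np.abs(i - j))).sum()
energy = (p ** 2).sum()
print(contrast, homogeneity, energy)
```

Several offsets (dx, dy) would normally be pooled into one feature vector per image before classification by the randomized neural network.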

    Employing deep learning architectures for image-based automatic cataract diagnosis

    Various eye diseases severely affect the quality of human life and may ultimately result in complete vision loss. Ocular diseases manifest themselves mostly through visual indicators in the early or mature stages of the disease, showing abnormalities in the optic disc, fovea, or other descriptive anatomical structures of the eye. Cataract is among the most harmful of these diseases; it affects millions of people and is the leading cause of public vision impairment. It shows major visual symptoms that can be employed for early detection before the hypermature stage. Automatic diagnosis systems intend to assist ophthalmological experts by reducing the burden of manual clinical decisions and of health care utilization. In this study, a diagnosis system for cataract disease based on color fundus images is addressed. Deep learning-based models were employed for the automatic identification of cataract disease. Two pretrained robust architectures, namely VGGNet and DenseNet, were employed to detect abnormalities in descriptive parts of the human eye. The proposed system is implemented on a wide and unique dataset that includes diverse color retinal fundus images acquired in a comparatively low-cost and common modality, which is considered a major contribution of the study. The dataset shows symptoms of cataract in different phases and represents the characteristics of the disease. With the proposed system, dysfunction associated with cataract could be identified in the early stage. The performance of the proposed system is compared to various traditional and up-to-date classification systems. The proposed system achieves a 97.94% diagnosis rate for cataract disease grading
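Using pretrained VGGNet or DenseNet typically means freezing the backbone and training only a new classification head on its features. The numpy sketch below shows that head alone: a softmax (multinomial logistic) layer trained by gradient descent on stand-in features that are random but separable by construction; the feature dimension, class count and learning rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stand-in for frozen-backbone features: in the paper these would come from a
# pretrained VGGNet or DenseNet applied to fundus images; here they are random
# vectors shifted per class so the toy problem is linearly separable.
n, d, n_classes = 60, 128, 2                 # e.g. cataract vs normal
y = rng.integers(0, n_classes, size=n)
X = rng.normal(size=(n, d)) + 3.0 * y[:, None]

# Train only the new softmax head (cross-entropy gradient descent).
W = np.zeros((d, n_classes))
Y = np.eye(n_classes)[y]                     # one-hot targets
for _ in range(200):
    P = softmax(X @ W)
    W -= 0.01 * X.T @ (P - Y) / n            # gradient step on the head only

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(acc)
```

Grading into more than two cataract phases only changes `n_classes`; the frozen-backbone-plus-head structure stays the same.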

    Automatic detection of brain tumors with the aid of ensemble deep learning architectures and class activation map indicators by employing magnetic resonance images

    Today, as with every life-threatening disease, early diagnosis of brain tumors plays a life-saving role. A brain tumor forms when normal brain cells transform into abnormal cell structures, and these abnormal cells begin to form masses in regions of the brain. Many different techniques are employed to detect these tumor masses, and the most common is Magnetic Resonance Imaging (MRI). This study aims to automatically detect brain tumors with the help of ensemble deep learning architectures (ResNet50, VGG19, InceptionV3 and MobileNet) and Class Activation Map (CAM) indicators applied to MRI images. The proposed system was implemented in three stages. In the first stage, it was determined whether there was a tumor in the MR images (binary approach). In the second stage, different tumor types (normal, glioma tumor, meningioma tumor, pituitary tumor) were detected from MR images (multi-class approach). In the last stage, CAMs of each tumor group were created as an alternative tool to facilitate the work of specialists in tumor detection. The results showed that the overall accuracy of the binary approach was 100% with the ResNet50, InceptionV3 and MobileNet architectures, and 99.71% with the VGG19 architecture. Moreover, accuracy values of 96.45% with ResNet50, 93.40% with VGG19, 85.03% with InceptionV3 and 89.34% with MobileNet were obtained in the multi-class approach
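A class activation map is computed by weighting the final convolutional feature maps with the classification weights of the target class and summing over channels. A minimal sketch under synthetic inputs (the channel count, map size, class weights and the toy ensemble vote are all hypothetical, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(5)

def class_activation_map(feature_maps, class_weights):
    """CAM: weight the final conv feature maps (C, H, W) by one class's
    global-average-pooling classifier weights, sum over channels, normalise."""
    cam = np.tensordot(class_weights, feature_maps, axes=(0, 0))  # -> (H, W)
    cam -= cam.min()
    return cam / cam.max() if cam.max() > 0 else cam

# Hypothetical final-layer outputs for one MR image.
C, H, W = 16, 7, 7
feature_maps = rng.random((C, H, W))
w_tumor = rng.random(C)               # classifier weights for one tumor class
cam = class_activation_map(feature_maps, w_tumor)

# Ensemble step: a simple majority vote over the four architectures'
# predicted labels (toy labels here, 1 = tumor).
preds = {"ResNet50": 1, "VGG19": 1, "InceptionV3": 0, "MobileNet": 1}
vote = max(set(preds.values()), key=list(preds.values()).count)
print(cam.shape, vote)                # (7, 7) 1
```

The normalised low-resolution CAM is then upsampled to the MR image size and overlaid as a heatmap, which is how it serves specialists as a localisation indicator.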