31 research outputs found

    Detection of Macula and Recognition of Age-Related Macular Degeneration in Retinal Fundus Images

    Age-Related Macular Degeneration (AMD) affects the central vision of elderly people. AMD can be recognized in digital retinal fundus images through the presence of drusen, Choroidal Neovascularization (CNV), and Geographic Atrophy (GA). Manually monitoring fundus images is time-consuming and costly for ophthalmologists, and an automated digital fundus photography monitoring system can reduce these problems. In this paper, we propose a new macula detection system based on contrast enhancement, top-hat transformation, and a modified Kirsch template method. First, the retinal fundus image is processed through an image enhancement method that improves the intensity distribution for finer visualization. The contrast-enhanced image is further improved using the top-hat transformation so that the intensity levels of the macula are differentiable from other sections of the image. The retinal vessels are enhanced with the modified Kirsch template method, which accentuates vasculature structures while suppressing blob-like structures. Otsu thresholding is then used to segment the dark regions and separate the vessels to extract candidate regions. The dark-region and background-estimated images are subtracted from the extracted blood-vessel image to obtain the exact location of the macula. The proposed method was applied to 1349 images from the STARE, DRIVE, MESSIDOR, and DIARETDB1 databases and achieved average sensitivity, specificity, accuracy, positive predictive value, F1 score, and area under the curve of 97.79%, 97.65%, 97.60%, 97.38%, 97.57%, and 96.97%, respectively. Experimental results show that the proposed method attains better performance, in terms of visual quality and quantitative analysis, than eminent state-of-the-art methods.

    An Ensemble Learning Model for COVID-19 Detection from Blood Test Samples

    Current research endeavors in the application of artificial intelligence (AI) methods to the diagnosis of COVID-19 have proven indispensable, with very promising results. Despite this, limitations remain in real-time detection of COVID-19 using reverse transcription polymerase chain reaction (RT-PCR) test data, such as limited datasets, imbalanced classes, high misclassification rates, and the need for specialized research into identifying the best features to improve prediction rates. This study investigates and applies an ensemble learning approach to develop prediction models for effective detection of COVID-19 from routine laboratory blood test results. An ensemble machine learning-based COVID-19 detection system is presented, aiming to aid clinicians in diagnosing this virus effectively. The experiment was conducted using custom convolutional neural network (CNN) models as a first-stage classifier and 15 supervised machine learning algorithms as second-stage classifiers: K-Nearest Neighbors, Support Vector Machine (linear and RBF), Naive Bayes, Decision Tree, Random Forest, Multilayer Perceptron, AdaBoost, ExtraTrees, Logistic Regression, Linear and Quadratic Discriminant Analysis (LDA/QDA), Passive, Ridge, and Stochastic Gradient Descent classifiers. Our findings show that, on the San Raffaele Hospital dataset, an ensemble learning model based on DNN and ExtraTrees achieved a mean accuracy of 99.28% and an area under the curve (AUC) of 99.4%, while AdaBoost gave a mean accuracy of 99.28% and an AUC of 98.8%. A comparison of the proposed COVID-19 detection approach with other state-of-the-art approaches on the same dataset shows that the proposed method outperforms several other COVID-19 diagnostic methods.
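A minimal sketch of the two-stage idea using scikit-learn's stacking API, with simple tabular learners standing in for the paper's CNN first stage; the estimators, synthetic data, and parameters here are placeholders, not the study's configuration:

```python
# Two-stage (stacked) ensemble sketch: base learners produce out-of-fold
# predictions that a second-stage ExtraTrees meta-classifier learns from.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=ExtraTreesClassifier(n_estimators=100, random_state=42),
    cv=5,  # out-of-fold predictions avoid leaking training labels
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

The `cv` argument matters: the meta-classifier is trained on cross-validated base-learner predictions rather than in-sample ones, which is what keeps stacking from simply memorizing the training set.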

    Pareto Optimized Large Mask Approach for Efficient and Background Humanoid Shape Removal

    The purpose of automated video object removal is not only to detect and remove the object of interest automatically, but also to use background context to inpaint the foreground area. Video inpainting requires filling spatiotemporal gaps in a video with convincing material, necessitating both temporal and spatial consistency; the inpainted part must seamlessly integrate into the background in a variety of scenes, and it must maintain a consistent appearance in subsequent frames even if its surroundings change noticeably. We introduce a deep learning-based methodology for removing unwanted human-like shapes in videos. The method uses Pareto-optimized Generative Adversarial Network (GAN) technology, which is a novel contribution. The system automatically selects the Region of Interest (ROI) for each humanoid shape and uses a skeleton detection module to determine which humanoid shape to retain. The semantic masks of human-like shapes are created using a semantic-aware, occlusion-robust model with four primary components: feature extraction and local, global, and semantic branches. The global branch encodes occlusion-aware information to make the extracted features resistant to occlusion, while the local branch retrieves fine-grained local characteristics. A modified big-mask inpainting approach is employed to eliminate a person from the image, leveraging Fast Fourier convolutions and masks built from polygonal chains and rectangles with unpredictable aspect ratios. The inpainter network takes the input image and the mask and produces an output image excluding the background humanoid shapes. The generator uses an encoder-decoder structure with skip connections to recover spatial information, and dilated convolution and squeeze-and-excitation blocks to make the regions behind the humanoid shapes consistent with their surroundings.
    The discriminator penalizes dissimilar structure at the patch scale, and the refiner network captures features around the boundaries of each background humanoid shape. Efficiency was assessed using the Learned Perceptual Image Patch Similarity (LPIPS), Fréchet Inception Distance (FID), and Structural Similarity Index Measure (SSIM) metrics, showing promising results on the fully automated background person removal task. The method is evaluated on two video object segmentation datasets, DAVIS (LPIPS of 0.02, FID of 5.01, and SSIM of 0.79) and YouTube-VOS (0.03, 6.22, and 0.78, respectively), as well as a database of 66 distinct video sequences of people behind a desk in an office environment (0.02, 4.01, and 0.78, respectively).
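The "rectangles with unpredictable aspect ratios plus polygonal chains" mask strategy can be illustrated with a small NumPy sketch; the shape counts, size ranges, and stroke thickness below are assumed values for illustration, not the paper's:

```python
import numpy as np

def random_mask(h, w, n_rects=3, n_chains=2, seed=None):
    """Generate a big-mask-style binary training mask."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    # Rectangles with varied, unpredictable aspect ratios.
    for _ in range(n_rects):
        rh = rng.integers(h // 8, h // 2)
        rw = rng.integers(w // 8, w // 2)
        y = rng.integers(0, h - rh)
        x = rng.integers(0, w - rw)
        mask[y:y + rh, x:x + rw] = 1
    # Polygonal chains: thick random walks of line segments.
    for _ in range(n_chains):
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        for _ in range(5):
            y2 = int(np.clip(y + rng.integers(-h // 4, h // 4), 0, h - 1))
            x2 = int(np.clip(x + rng.integers(-w // 4, w // 4), 0, w - 1))
            n = max(abs(y2 - y), abs(x2 - x)) + 1
            ys = np.linspace(y, y2, n).astype(int)
            xs = np.linspace(x, x2, n).astype(int)
            t = 3  # stroke half-thickness
            for yy, xx in zip(ys, xs):
                mask[max(0, yy - t):yy + t, max(0, xx - t):xx + t] = 1
            y, x = y2, x2
    return mask
```

Training the inpainter on such large, irregular holes is what forces it to synthesize plausible structure rather than merely blend nearby pixels.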

    The Model for Learning Objects Design Based on Semantic Technologies

    The paper presents a comparison of state-of-the-art methods and techniques for implementing learning objects (LO) in the field of information and communication technologies (ICT) using semantic web services for e-learning. The web can serve as a perfect technological environment for individualized learning, which is often based on interactive learning objects. This allows learners to be uniquely identified and content to be specifically personalized, so that a learner's progress can be monitored, supported, and assessed. While a range of technological solutions for the development of integrated e-learning environments already exists, the most appropriate solutions require further work on the implementation of novel learning objects, the unification of standards, and the integration of learning environments based on semantic web services (SWS), which are still in the early stages of development. This paper introduces a proprietary architectural model for distributed e-learning environments based on SWS, enabling the implementation of a successive learning process by developing innovative learning objects based on modern learning methods. A successful technical implementation of our approach in the environment of Kaunas University of Technology is further detailed and evaluated.

    Medical Internet-of-Things Based Breast Cancer Diagnosis Using Hyperparameter-Optimized Neural Networks

    In today's healthcare setting, accurate and timely diagnosis of breast cancer is critical for recovery and treatment in the early stages. In recent years, the Internet of Things (IoT) has undergone a transformation that allows the analysis of real-time and historical data using artificial intelligence (AI) and machine learning (ML) approaches. Medical IoT combines medical devices and AI applications with healthcare infrastructure to support medical diagnostics. Current state-of-the-art approaches often fail to diagnose breast cancer in its initial period, contributing to high mortality among affected women; early breast cancer detection therefore remains a formidable problem for medical professionals and researchers. To address the difficulty of identifying early-stage breast cancer, we propose a medical IoT-based diagnostic system that competently distinguishes malignant from benign cases in an IoT environment. An artificial neural network (ANN) and a convolutional neural network (CNN) with hyperparameter optimization are used for malignant vs. benign classification, while a Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are utilized as baseline classifiers for comparison. Hyperparameters are important in machine learning because they directly control the behavior of training algorithms and have a significant effect on model performance. We employ a particle swarm optimization (PSO) feature selection approach to select more satisfactory features from the breast cancer dataset and thereby enhance the classification performance of the MLP and SVM, while grid search is used to find the best combination of hyperparameters for the CNN and ANN models. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used to test the proposed approach. The proposed model achieved a classification accuracy of 98.5% using the CNN and 99.2% using the ANN.
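A toy version of the PSO feature-selection step might look like the following. The Fisher-style separation score stands in for the paper's classifier-accuracy fitness, and all constants (swarm size, inertia, cognitive/social weights, sparsity penalty) are illustrative assumptions:

```python
import numpy as np

# Synthetic data: first 3 of 10 features carry class information.
rng = np.random.default_rng(1)
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0

def fitness(mask):
    """Mean Fisher score of selected features, minus a sparsity penalty."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    s = Xs[y == 0].var(0) + Xs[y == 1].var(0) + 1e-9
    return ((mu0 - mu1) ** 2 / s).mean() - 0.01 * mask.sum()

# Binary PSO: continuous positions are squashed through a sigmoid and
# stochastically sampled into 0/1 feature masks.
P, iters = 20, 40
pos = rng.normal(size=(P, d))
vel = np.zeros((P, d))
pbest, pbest_f = pos.copy(), np.full(P, -np.inf)
gbest, gbest_f = pos[0].copy(), -np.inf
for _ in range(iters):
    masks = (1 / (1 + np.exp(-pos)) > rng.random((P, d))).astype(int)
    f = np.array([fitness(m) for m in masks])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    if f.max() > gbest_f:
        gbest_f, gbest = f.max(), pos[f.argmax()].copy()
    vel = (0.7 * vel
           + 1.5 * rng.random((P, d)) * (pbest - pos)
           + 1.5 * rng.random((P, d)) * (gbest - pos))
    pos += vel

selected = (1 / (1 + np.exp(-gbest)) > 0.5).astype(int)
```

In the full system the fitness would be cross-validated MLP/SVM accuracy on the candidate feature subset, which is far more expensive but follows the same loop.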

    Detection of COVID-19 from deep breathing sounds using sound spectrum with image augmentation and deep learning techniques

    The COVID-19 pandemic is one of the most disruptive outbreaks of the 21st century, considering its impact on our freedoms and social lifestyle. Several methods have been used to monitor and diagnose this virus, including the RT-PCR test and chest CT/CXR scans. Recent studies have employed various crowdsourced sound data types, such as coughing, breathing, and sneezing, for the detection of COVID-19. However, the application of artificial intelligence methods and machine learning algorithms to these sound datasets still suffers from limitations, such as poor test performance due to an increase in misclassified data, limited datasets resulting in the overfitting of deep learning methods, the high computational cost of some augmentation models, and varying quality of feature-extracted images resulting in poor reliability. We propose a simple yet effective deep learning model, called DeepShufNet, for COVID-19 detection. A data augmentation method based on color transformation and noise addition was used to generate synthetic image datasets from sound data. The synthetic datasets were evaluated using two feature extraction approaches, namely Mel spectrograms and GFCC. The performance of the proposed DeepShufNet model was evaluated on the deep breathing COSWARA dataset, showing improved performance with a lower misclassification rate on the minority class. The proposed model achieved an accuracy, precision, recall, specificity, and F-score of 90.1%, 77.1%, 62.7%, 95.98%, and 69.1%, respectively, for positive COVID-19 detection using the Mel COCOA-2 augmented training datasets. The proposed model showed improved performance compared to some state-of-the-art methods.
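The two augmentation families named above can be sketched directly on an image array (a spectrogram rendered as RGB); the function names and parameter values here are assumptions for illustration, not DeepShufNet's actual pipeline:

```python
import numpy as np

def add_noise(img, sigma=0.05, seed=None):
    """Noise addition: additive Gaussian noise, clipped back to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def color_transform(img, scale=(1.1, 0.9, 1.0), shift=(0.02, 0.0, -0.02)):
    """Color transform: per-channel scaling and shifting of an RGB image."""
    out = img * np.asarray(scale) + np.asarray(shift)
    return np.clip(out, 0.0, 1.0)

# Stand-in for a Mel-spectrogram image in [0, 1], shape (H, W, 3).
spec = np.random.default_rng(0).random((64, 64, 3))
augmented = [add_noise(spec, seed=1), color_transform(spec)]
```

Both transforms preserve image shape and value range, so the augmented copies can be mixed directly into the training set alongside the originals.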

    Malignant skin melanoma detection using image augmentation by oversampling in nonlinear lower-dimensional embedding manifold

    The continuous rise in skin cancer cases, especially malignant melanoma, has resulted in a high mortality rate among affected patients due to late detection. Challenges affecting the success of skin cancer detection include small datasets and data scarcity, noisy data, imbalanced data, inconsistency in image sizes and resolutions, unavailability of data, and the reliability of labeled data (ground truth). This study presents a novel data augmentation technique based on the covariant Synthetic Minority Oversampling Technique (SMOTE) to address the data scarcity and class imbalance problems. We propose an improved data augmentation model for effective detection of melanoma skin cancer. Our method is based on data oversampling in a nonlinear lower-dimensional embedding manifold to create synthetic melanoma images. The proposed data augmentation technique is used to generate a new skin melanoma dataset using dermoscopic images from the publicly available PH2 dataset. The augmented images were used to train the SqueezeNet deep learning model. The experimental results in the binary classification scenario show a significant improvement in melanoma detection with respect to accuracy (92.18%), sensitivity (80.77%), specificity (95.1%), and F1-score (80.84%). In the multiclass classification scenario, we improved the results to 89.2% sensitivity and 96.2% specificity for melanoma detection, 65.4% sensitivity and 72.2% specificity for atypical nevus detection, and 66% sensitivity and 77.2% specificity for common nevus detection. The proposed classification framework outperforms some of the state-of-the-art methods in detecting skin melanoma.
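The SMOTE-style interpolation at the heart of the augmentation can be sketched as follows; here plain feature space stands in for the learned lower-dimensional embedding, and `k` and the sample counts are illustrative:

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Synthesize minority samples by interpolating between a minority
    point and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class; ignore self-distance.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    synth = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(len(X_min))          # random minority sample
        b = nn[a, rng.integers(k)]            # one of its k neighbours
        lam = rng.random()                    # interpolation factor
        synth[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return synth

X_min = np.random.default_rng(2).normal(size=(20, 8))
X_new = smote(X_min, n_new=40)
```

Because each synthetic point lies on a segment between two real minority samples, doing this in a nonlinear embedding (rather than raw pixel space) keeps the interpolated images on, or near, the data manifold.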

    iReportNow: A Mobile-Based Lost and Stolen Reporting System

    Modern society faces a growing degree of security threats, both internal and external, and access to the right information at the right time can be a defining factor. Technological advancements have been advantageous to the police in reporting, monitoring, and responding to crime. At present, crime reporting is done manually in most police stations, using pen and paper to document statements. This method is not only slow and ineffective but can also lead to a loss of valuable time and productivity. This work presents the design and implementation of "iReportNow", a digital mobile application for reporting loss or theft that leverages the current proliferation of mobile devices. The application makes information about reported cases of loss or theft readily available and accessible to the populace. The agile model of the system development life cycle (SDLC) was adopted throughout the stages of requirement gathering, design, analysis, implementation, and testing. The application was implemented using Extensible Markup Language (XML) for the user interface, the Java programming language for code binding, and MySQL and PHP for a robust database back end. The application was deployed on 20 Android mobile devices for usability testing and achieved a mean score of 83.5 on the System Usability Scale (SUS). iReportNow will enhance efficiency in reporting and accessing information on reported cases of loss and theft. Furthermore, it can help allay the uncertainty of buying stolen property, as users can easily check on an item put up for sale.
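The reported SUS mean of 83.5 comes from the standard SUS scoring rule, which is easy to state in code. The example responses below are made up for illustration:

```python
def sus_score(responses):
    """Standard SUS scoring: ten items rated 1-5; odd-numbered items are
    positively worded (score = rating - 1), even-numbered items are
    negatively worded (score = 5 - rating); the 0-40 total scales to 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible -> 100.0
```

A per-device mean above roughly 68 is conventionally read as above-average usability, which puts the reported 83.5 well into the "good" range.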

    Are you ashamed? Can a gaze tracker tell?

    Our aim was to determine the possibility of detecting cognitive emotion information (neutral, disgust, shame, sensory pleasure) using a remote eye tracker at a range of approximately 1 meter. Our implementation was based on a self-learning ANN used for profile building and for emotion status identification and recognition. Participants in the experiment were shown audiovisual stimuli (videos with sound) to provoke and measure emotional feedback. The proposed system was able to classify each felt emotion with an average accuracy of 90% (2-second measuring interval).