
    Robust Brain Tissue Segmentation in AD Using Comparative Linear Transformation and Deep Learning

    As a progressive neurological disease, Alzheimer's disease (AD), if no preventative measures are taken, can result in dementia and a severe decline in brain function, making it difficult to perform basic tasks. Over 1 in 9 people suffer from dementia caused by Alzheimer's disease and require uncompensated care. Extracting the hippocampus from brain MRI scans via image segmentation has proven useful for diagnosing AD, and segmentation of the CSF region in brain MRI is critical for analyzing the stages of the disease. The extraction of the hippocampus from a brain MRI is greatly influenced by the contrast of the images. Using comparative linear transformation in the horizontal and vertical dimensions together with statistical edge-based features, this article proposes a robust segmentation technique for extracting the hippocampus from brain MRI. These transformations help balance the thin and dense fluid extractions in the brain image. On the ADNI dataset, the proposed approach achieved a 99% segmentation success rate.
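
    As a rough illustration only (not the authors' implementation), the sketch below shows what a row- and column-wise linear contrast transformation of an MRI slice could look like in Python; the min-max stretch, the per-pixel maximum, and the final intensity threshold are all assumptions made for illustration.

        import numpy as np

        def linear_stretch(values, lo=0.0, hi=255.0):
            """Map a 1-D array of intensities linearly onto [lo, hi]."""
            values = values.astype(np.float64)
            vmin, vmax = values.min(), values.max()
            if vmax == vmin:                       # flat row/column: nothing to stretch
                return np.full_like(values, lo)
            return lo + (values - vmin) * (hi - lo) / (vmax - vmin)

        def comparative_linear_transform(slice_2d):
            """Stretch along rows (horizontal) and columns (vertical) and keep,
            per pixel, whichever transformation gives the stronger response."""
            horizontal = np.apply_along_axis(linear_stretch, 1, slice_2d)
            vertical = np.apply_along_axis(linear_stretch, 0, slice_2d)
            return np.maximum(horizontal, vertical)

        # Illustrative usage on a synthetic slice; a real pipeline would use an ADNI MRI slice.
        mri_slice = np.random.randint(0, 180, size=(128, 128))
        enhanced = comparative_linear_transform(mri_slice)
        candidate_mask = enhanced > 200            # purely illustrative threshold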

    Integrating Temporal Fluctuations in Crop Growth with Stacked Bidirectional LSTM and 3D CNN Fusion for Enhanced Crop Yield Prediction

    Optimizing farming methods and guaranteeing a steady supply of food depend critically on accurate predictions of crop yields. The dynamic temporal changes that occur during crop growth are generally ignored by conventional crop growth models, resulting in less precise projections. Using a stacked bidirectional Long Short-Term Memory (LSTM) structure and a 3D Convolutional Neural Network (CNN) fusion, we offer a novel neural network model that accounts for temporal oscillations in the crop growth process. The 3D CNN efficiently recovers spatial and temporal features from the crop development data, while the bidirectional LSTM cells capture the sequential dependencies and allow the model to learn from both past and future temporal information. Our model's prediction accuracy is improved by combining the LSTM and 3D CNN layers at the top, which better captures temporal and spatial patterns. We also provide a novel label-related loss function optimized for agricultural yield forecasting, motivated by the relevance of temporal oscillations and the dynamic character of crop growth. This loss function encourages our model to learn and take advantage of the temporal trends, which improves our ability to estimate crop yield. We perform comprehensive experiments on real-world crop growth datasets to verify the efficacy of our suggested approach. The outcomes show that our unified strategy performs far better than both baseline crop growth prediction algorithms and cutting-edge applications of deep learning. Improved crop yield prediction accuracy is achieved by integrating temporal variations via the merging of bidirectional LSTM and 3D CNN and a unique loss function. This study helps move the science of estimating crop yields forward, which is important for informing agricultural policy and ensuring a steady supply of food.
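
    A minimal Keras sketch of the kind of fusion described, not the paper's model: a 3D CNN branch over an image time series and stacked bidirectional LSTMs over sequential covariates, merged at the top; all input shapes, layer sizes, and the plain MSE loss standing in for the paper's label-related loss are assumptions.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        time_steps, height, width, bands = 12, 32, 32, 4   # assumed input layout

        # 3D CNN branch: spatio-temporal features from the image time series.
        video_in = layers.Input(shape=(time_steps, height, width, bands))
        x = layers.Conv3D(16, (3, 3, 3), activation="relu", padding="same")(video_in)
        x = layers.MaxPooling3D((1, 2, 2))(x)
        x = layers.Conv3D(32, (3, 3, 3), activation="relu", padding="same")(x)
        x = layers.GlobalAveragePooling3D()(x)

        # Stacked bidirectional LSTM branch: sequential growth covariates.
        seq_in = layers.Input(shape=(time_steps, 8))        # 8 assumed covariates
        y = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(seq_in)
        y = layers.Bidirectional(layers.LSTM(32))(y)

        # Fusion at the top, followed by a single-value yield prediction.
        merged = layers.concatenate([x, y])
        out = layers.Dense(1)(layers.Dense(64, activation="relu")(merged))

        model = models.Model([video_in, seq_in], out)
        model.compile(optimizer="adam", loss="mse")         # stand-in for the paper's loss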

    Hybrid Model and Framework for Predicting Air Pollutants in Smart Cities

    The pollution index of any urban area is indicated by its air quality, which also reflects how well a balance is maintained between the needs of the populace and the industrial ecosystem. Smart cities have a significant role to play in mitigating such pollution in real time. It is common knowledge that air pollution in a city severely affects the health of its residents. More alarmingly, phenomena like acid rain and global warming add to human health damage and disease burden. More specifically, lung ailments, COPD, heart problems, and skin cancer are caused by polluted air in congested urban places. Among the worst air pollutants, CO, C6H6, SO2, NO2, O3, RSPM/PM10, and PM2.5 cause maximum havoc. Climatic variables such as wind velocity, wind direction, relative humidity, and temperature control the contaminants in the air. Lately, numerous techniques have been applied by researchers and environmentalists to determine the Air Quality Index (AQI) over a place. However, no single technique has found acceptance from all quarters as being effective in every situation or scenario. The main aspect here is achieving authentic prediction of AQI levels by applying Machine Learning algorithms so that worst-case situations can be averted by timely action. To enhance the performance of the Machine Learning methods, this study adopted imputation and feature selection methods. When feature selection is applied, the experimental outcomes indicate a more accurate prediction than other techniques, showing promise for the application of the model in smart cities by syncing data from different monitoring stations.
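
    A hedged scikit-learn sketch of the pipeline idea, imputation followed by feature selection in front of a regressor, not the study's actual code; the column names, the hypothetical CSV file, the value of k, and the random-forest choice are illustrative assumptions.

        import pandas as pd
        from sklearn.impute import SimpleImputer
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import train_test_split

        features = ["CO", "C6H6", "SO2", "NO2", "O3", "PM10", "PM2.5",
                    "wind_speed", "wind_dir", "humidity", "temperature"]

        df = pd.read_csv("station_readings.csv")            # hypothetical monitoring-station file
        X_train, X_test, y_train, y_test = train_test_split(
            df[features], df["AQI"], test_size=0.2, random_state=42)

        pipeline = Pipeline([
            ("impute", SimpleImputer(strategy="median")),    # fill gaps in sensor readings
            ("select", SelectKBest(f_regression, k=8)),      # keep the strongest predictors
            ("model", RandomForestRegressor(n_estimators=200, random_state=42)),
        ])
        pipeline.fit(X_train, y_train)
        print("R^2 on held-out data:", pipeline.score(X_test, y_test))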

    Skin Cancer Classification using Convolutional Capsule Network (CapsNet)

    Researchers are proficient in preprocessing skin images but struggle to identify efficient classifiers for skin cancer because of the complex variety of lesion sizes, colors, and shapes. As such, no single classifier is sufficient for classifying skin cancer lesions. Convolutional Neural Networks (CNNs) have played an important role in deep learning, as CNNs have proven successful in classification tasks across many fields. However, present-day models available for skin cancer classification do not take important spatial relations between features into consideration. They classify effectively only if certain features are present in the test data, ignoring their relative spatial relation to each other, which results in false negatives. They also lack rotational invariance, meaning that the same lesion viewed at different angles may be assigned to different classes, leading to false positives. The Capsule Network (CapsNet) is designed to overcome these problems. Capsule Networks use modules, or capsules, rather than pooling as an alternative to translational invariance. The Capsule Network uses layer-based squashing and dynamic routing: it uses vector-output capsules with routing by agreement in place of the scalar-output feature detectors and max-pooling of traditional CNNs, all of which helps avoid false positives and false negatives. The Capsule Network architecture is created with many convolution layers and one capsule layer as the final layer. Hence, in the proposed work, skin cancer classification is performed with a CapsNet architecture that can work well with high-dimensional hyperspectral images of skin.
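
    For orientation, a small NumPy sketch of the capsule squash nonlinearity and a simplified dynamic-routing loop from the original CapsNet formulation; the capsule counts, dimensions, and two-class lesion example are illustrative assumptions, not the proposed architecture.

        import numpy as np

        def squash(vectors, axis=-1, eps=1e-8):
            """Shrink vector length into [0, 1) while preserving direction."""
            sq_norm = np.sum(np.square(vectors), axis=axis, keepdims=True)
            scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
            return scale * vectors

        def dynamic_routing(u_hat, iterations=3):
            """Route prediction vectors u_hat of shape (n_in, n_out, dim) to output capsules."""
            n_in, n_out, _ = u_hat.shape
            b = np.zeros((n_in, n_out))                              # routing logits
            for _ in range(iterations):
                c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True) # coupling coefficients
                s = np.einsum("ij,ijk->jk", c, u_hat)                # weighted sum per output capsule
                v = squash(s)                                        # output capsule vectors
                b = b + np.einsum("ijk,jk->ij", u_hat, v)            # agreement update
            return v

        # Example: 32 primary capsules routed to 2 class capsules of dimension 16
        # (e.g., benign vs. malignant lesion), using random prediction vectors.
        predictions = np.random.randn(32, 2, 16)
        class_capsules = dynamic_routing(predictions)
        class_probs = np.linalg.norm(class_capsules, axis=-1)        # vector length = class confidence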

    Analysis of COVID-19 Pandemic - Origin, Global Impact and Indian Therapeutic Solutions for Infectious Diseases

    The first case of COVID-19 was reported in China in December 2019 [1], and around 213 countries had reported approximately 5,350,000 COVID-19 cases worldwide, with a mortality rate of up to 3.4%, as of May 23, 2020. On March 11, 2020, the WHO (World Health Organization) declared COVID-19 a global pandemic. Moving from an epidemic to a global pandemic in just two months, COVID-19 has had tremendous negative effects on people's wellbeing and on the economy all over the world. Scientists and researchers all over the world have a vested interest in researching and mitigating the dire situation. This paper covers COVID-19's origin, the characteristics of the virus, the reasons behind the outbreak, and the precautionary measures that must be followed to handle the critical situation. Several therapeutic solutions from the Indian healing tradition are discussed for improving the immune system in order to equip ourselves to deal with the outbreak of COVID-19.

    Precise segmentation of fetal head in ultrasound images using improved U-Net model

    Monitoring fetal growth in utero is crucial to anomaly diagnosis. However, current computer-vision models struggle to accurately assess the key metrics (i.e., head circumference and occipitofrontal and biparietal diameters) from ultrasound images, largely owing to a lack of training data. Mitigation usually entails image augmentation (e.g., flipping, rotating, scaling, and translating). Nevertheless, the accuracy of our task remains insufficient. Hence, we offer a U-Net fetal head measurement tool that leverages a hybrid Dice and binary cross-entropy loss to compute the similarity between actual and predicted segmented regions. Ellipse-fitted two-dimensional ultrasound images acquired from the HC18 dataset are input, and their lower feature layers are reused for efficiency. During regression, a novel region of interest pooling layer extracts elliptical feature maps, and during segmentation, feature pyramids fuse field-layer data with a new scale attention method to reduce noise. Performance is measured by Dice similarity, mean pixel accuracy, and mean intersection-over-union, giving 97.90%, 99.18%, and 97.81% scores, respectively, which match or outperform the best U-Net models.
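
    A short TensorFlow sketch of a hybrid Dice plus binary cross-entropy loss of the kind described; the equal 0.5/0.5 weighting and the smoothing constant are assumptions for illustration, not the authors' exact formulation.

        import tensorflow as tf

        def dice_bce_loss(y_true, y_pred, smooth=1.0, dice_weight=0.5):
            """Weighted sum of soft Dice loss and binary cross-entropy for segmentation."""
            y_true = tf.cast(y_true, tf.float32)
            bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))

            # Soft Dice over the flattened masks: 2*|A intersect B| / (|A| + |B|)
            intersection = tf.reduce_sum(y_true * y_pred)
            dice = (2.0 * intersection + smooth) / (
                tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
            dice_loss = 1.0 - dice

            return dice_weight * dice_loss + (1.0 - dice_weight) * bce

        # Usage: model.compile(optimizer="adam", loss=dice_bce_loss) on a U-Net whose
        # final layer emits per-pixel sigmoid probabilities for the fetal-head mask.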

    Forest Fire Identification in UAV Imagery Using X-MobileNet

    Forest fires are caused naturally by lightning, high atmospheric temperatures, and dryness. Forest fires have ramifications for both climatic conditions and anthropogenic ecosystems. According to various research studies, there has been a noticeable increase in the frequency of forest fires in India. Between 1 January and 31 March 2022, the country had 136,604 fire points. These fire points activated an alerting system that indicates the location of a forest fire detected using MODIS sensor data from NASA Aqua and Terra satellite images. However, the satellite passes over the country only twice and sends the information to the state forest departments. The early detection of forest fires is crucial, as once they reach a certain level, it is hard to control them. Compared with satellite monitoring and detection of fire incidents, video-based fire detection on the ground identifies a fire at a faster rate. Hence, an unmanned aerial vehicle (UAV) equipped with a GPS and a high-resolution camera can acquire quality images referencing the fire location. Further, deep learning frameworks can be applied to efficiently classify forest fires. In this paper, a cheaper UAV with extended MobileNet deep learning capability is proposed to classify forest fires (with 97.26% accuracy) and share the detection of forest fires and the GPS location with the state forest departments for timely action.
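
    As a hedged illustration of the general approach, not the paper's X-MobileNet, the following shows a transfer-learning classifier built on a stock MobileNetV2 backbone for fire versus no-fire UAV frames; the input size, classification head, and directory layout are assumptions.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        base = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights="imagenet")
        base.trainable = False                               # freeze the pretrained features

        model = models.Sequential([
            layers.Rescaling(1.0 / 127.5, offset=-1),        # MobileNetV2 expects inputs in [-1, 1]
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.3),
            layers.Dense(1, activation="sigmoid"),           # fire / no-fire
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        # Hypothetical training data: UAV frames sorted into fire/ and no_fire/ folders.
        train_ds = tf.keras.utils.image_dataset_from_directory(
            "uav_frames/train", image_size=(224, 224), batch_size=32, label_mode="binary")
        model.fit(train_ds, epochs=5)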

    DocCompare: an approach to prevent the problem of character injection in document similarity algorithm

    There is a constant rise in the amount of data being copied or plagiarized because of the abundance of content and information freely available across the internet. Even though systems try to check documents for plagiarism, there have been attempts to overcome these checks. In this paper, the concept of character injection used to trick plagiarism checks is presented. It is also shown how similarity-check algorithms based on k-grams fail to detect character injection. To eliminate the error in similarity rates caused by character injection, an image-processing-based approach using multiple histogram projections is applied. An application is developed to detect character injection in a document and produce an accurate similarity rate. Results are shown for several test documents, and the proposed method eliminates any kind of character injected into a document to trick the plagiarism check. The proposed method addresses the problem of character injection through image-processing-based changes to existing document-similarity check algorithms based on k-grams.
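
    A hedged Python sketch of the two ideas in the abstract: how character injection collapses k-gram similarity between visually identical texts, and how projection-profile histograms of rendered pages sidestep it; the choice of k, the zero-width character, and the correlation measure are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def kgram_similarity(a, b, k=5):
            """Jaccard similarity over character k-grams of two strings."""
            grams = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
            ga, gb = grams(a), grams(b)
            return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

        original = "plagiarism detection compares overlapping character k-grams"
        injected = original.replace("a", "a\u200b")          # inject invisible zero-width spaces
        print(kgram_similarity(original, original))           # 1.0
        print(kgram_similarity(original, injected))           # far below 1.0 despite identical visible text

        def projection_profiles(page):
            """Row and column ink histograms of a binarized page image (ink = 1)."""
            return page.sum(axis=1), page.sum(axis=0)

        def profile_similarity(page_a, page_b):
            """Mean correlation of horizontal and vertical projection profiles."""
            ha, va = projection_profiles(page_a)
            hb, vb = projection_profiles(page_b)
            corr = lambda p, q: np.corrcoef(p, q)[0, 1]
            return (corr(ha, hb) + corr(va, vb)) / 2.0

        # Invisible injected characters leave the rendered page unchanged, so two
        # binarized page images of the texts above would still score ~1.0 here.
        page = (np.random.rand(64, 64) > 0.5).astype(float)   # stand-in for a rendered page
        print(profile_similarity(page, page))                 # identical pages -> 1.0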