20 research outputs found

    Mineral and heavy metals content in tilapia fish (Oreochromis niloticus) collected from the River Nile in Damietta governorate, Egypt and evaluation of health risk from tilapia consumption

    Get PDF
    This study was conducted to determine the heavy metal and trace element content of tilapia fish collected from three sources in Damietta governorate, Egypt, and to evaluate the human health risk due to tilapia consumption. Tilapia samples were collected from two locations in the River Nile stream, two fish farms, and two sluiceways. Health risk was assessed based on the consumption habits of adult humans. The results revealed that element concentrations varied among all samples. The health risk calculation revealed that consumption of tilapia from the three tested areas poses no health risk except with respect to selenium. It could be concluded that consumption of such fish may pose a risk for consumers who eat fish more than once per week. Consequently, precautions should be taken and warnings against eating tilapia caught in these regions should be issued.
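
    The abstract refers to a health-risk calculation but does not reproduce the formulas. A minimal sketch is given below, assuming the commonly used estimated daily intake (EDI) and target hazard quotient (THQ) approach; the reference doses, meal size, and selenium concentration are placeholder values, not figures from the study.

```python
# Hypothetical THQ screen for metal intake from fish consumption.
# All numbers below are placeholders, not results from the study.

# Oral reference doses in mg per kg body weight per day (illustrative values)
RFD = {"Se": 0.005, "Pb": 0.004, "Cd": 0.001}

def estimated_daily_intake(conc_mg_per_kg, fish_g_per_day, body_weight_kg=70.0):
    """EDI (mg/kg bw/day) = metal concentration x daily fish intake / body weight."""
    return conc_mg_per_kg * (fish_g_per_day / 1000.0) / body_weight_kg

def target_hazard_quotient(conc_mg_per_kg, element, fish_g_per_day=48.0):
    """A THQ above 1 is commonly read as a potential non-carcinogenic health risk."""
    edi = estimated_daily_intake(conc_mg_per_kg, fish_g_per_day)
    return edi / RFD[element]

# Example: a hypothetical selenium level of 1.2 mg/kg wet weight
print(round(target_hazard_quotient(1.2, "Se"), 2))
```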

    Deep Learning Cascaded Feature Selection Framework for Breast Cancer Classification: Hybrid CNN with Univariate-Based Approach

    No full text
    With the help of machine learning, many of the problems that have plagued mammography in the past have been solved. Effective prediction models need many normal and tumor samples, but for medical applications such as breast cancer diagnosis frameworks it is difficult to gather labeled training data and construct effective learning frameworks. Transfer learning is an emerging strategy that has recently been used to tackle the scarcity of medical data by transferring pre-trained convolutional network knowledge into the medical domain. Despite the good reputation of transfer learning based on pre-trained Convolutional Neural Networks (CNNs) for medical imaging, several hurdles still stand in the way of prominent breast cancer classification performance. In this paper, we attempt to solve the Feature Dimensionality Curse (FDC) problem of the deep features derived from transfer-learning pre-trained CNNs. The problem arises from the high dimensionality of the extracted deep features relative to the small size of the available medical data samples. Therefore, a novel deep learning cascaded feature selection framework is proposed based on pre-trained deep convolutional networks and a univariate-based paradigm. The deep learning models AlexNet, VGG, and GoogleNet are randomly selected and used to extract the shallow and deep features from the INbreast mammograms, whereas the univariate strategy helps to overcome the dimensionality curse and multicollinearity issues for the extracted features. The key features optimized via the univariate approach are statistically significant (p-value ≤ 0.05) and can efficiently train the classification models. Using such optimal features, the proposed framework achieved a promising evaluation performance of 98.50% accuracy, 98.06% sensitivity, 98.99% specificity, and 98.98% precision. Such performance should be beneficial for developing a practical and reliable computer-aided diagnosis (CAD) framework for breast cancer classification.
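
    A minimal sketch of the cascaded idea (a fixed pre-trained CNN as feature extractor followed by univariate selection) is shown below. The choice of VGG16, the classifier, the value of k, and the data handling are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: deep features from a pre-trained CNN, then univariate selection.
import torch
from torchvision import models
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained backbone used purely as a fixed feature extractor
weights = models.VGG16_Weights.DEFAULT
backbone = models.vgg16(weights=weights).to(device).eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract_features(pil_images):
    """List of PIL images -> (n_samples, 4096) deep features from the second FC layer."""
    batch = torch.stack([preprocess(img) for img in pil_images]).to(device)
    x = backbone.avgpool(backbone.features(batch)).flatten(1)
    return backbone.classifier[:4](x).cpu().numpy()

def select_and_train(features, labels, k=200):
    """Keep the k features with the strongest univariate F-scores, then fit a classifier."""
    selector = SelectKBest(score_func=f_classif, k=k).fit(features, labels)
    clf = SVC(kernel="rbf").fit(selector.transform(features), labels)
    return selector, clf
```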

    Using a Resnet50 with a Kernel Attention Mechanism for Rice Disease Diagnosis

    No full text
    The domestication of animals and the cultivation of crops have been essential to human development throughout history, with the agricultural sector playing a pivotal role. Insufficient nutrition often leads to plant diseases, such as those affecting rice crops, resulting in yield losses of 20–40% of total production. These losses carry significant global economic consequences. Timely disease diagnosis is critical for implementing effective treatments and mitigating financial losses. However, despite technological advancements, rice disease diagnosis primarily depends on manual methods. In this study, we present a novel self-attention network (SANET) based on the ResNet50 architecture, incorporating a kernel attention mechanism for accurate AI-assisted rice disease classification. We employ attention modules to extract contextual dependencies within images, focusing on essential features for disease identification. Using a publicly available rice disease dataset comprising four classes (three disease types and healthy leaves), we conducted cross-validated classification experiments to evaluate our proposed model. The results reveal that the attention-based mechanism effectively guides the convolutional neural network (CNN) in learning valuable features, resulting in accurate image classification and reduced performance variation compared to state-of-the-art methods. Our SANET model achieved a test set accuracy of 98.71%, surpassing that of current leading models. These findings highlight the potential for widespread AI adoption in agricultural disease diagnosis and management, ultimately enhancing efficiency and effectiveness within the sector.
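
    The exact kernel attention block of SANET is not described in the abstract; the sketch below only illustrates the general pattern of attaching a channel-attention gate to a ResNet50 backbone with a four-class head (three diseases plus healthy leaves). The module design and layer sizes are assumptions.

```python
# Hypothetical ResNet50 backbone with a simple channel-attention gate and a 4-class head.
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: reweight channels by global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class RiceDiseaseNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # drop pool + fc
        self.attention = ChannelAttention(2048)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(2048, num_classes))

    def forward(self, x):
        return self.head(self.attention(self.backbone(x)))

# Quick shape check with a dummy batch of 224x224 RGB leaf images
print(RiceDiseaseNet()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 4])
```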

    Dietary omega-3 and antioxidants improve long-chain omega-3 and lipid oxidation of broiler meat

    No full text
    Background This study aimed to investigate the possibility of producing broiler meat rich in long-chain n-3 polyunsaturated fatty acids, especially eicosapentaenoic acid (EPA, C20:5 n-3) and docosahexaenoic acid (DHA, C22:6 n-3), while preventing lipid oxidation of the produced meat, by supplementing the diets with linseed oil or fish oil along with vitamin E (Vit. E) or sweet chestnut tannins (SCT) as antioxidants. A total of 144 one-day-old Cobb broiler chicks were divided into six treatments with three replicates of eight chicks each. The treatments were basal diets containing 2 g linseed oil/100 g (T1), 2 g linseed oil/100 g + 200 mg Vit. E/kg (T2), 2 g linseed oil/100 g + 2 g SCT/kg (T3), 2 g fish oil/100 g (T4), 2 g fish oil/100 g + 200 mg Vit. E/kg (T5), and 2 g fish oil/100 g + 2 g SCT/kg (T6), fed for 5 weeks. Fatty acid composition, thiobarbituric acid (TBA), and 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity were determined. Results Dietary 2 g fish oil/100 g elevated (P ≤ 0.001) long-chain omega-3 polyunsaturated fatty acids in broiler meat, mainly EPA and DHA. At the same time, dietary fish oil resulted in a significant decrease (P ≤ 0.001) in α-linolenic acid in broiler meat (6%). However, total omega-3 fatty acids in meat were higher (P ≤ 0.001) with dietary fish oil than with dietary linseed oil. The n-6:n-3 PUFA ratio was lower (P ≤ 0.001) in the meat of broilers fed diets containing 2 g fish oil/100 g than in that of broilers fed diets containing 2 g linseed oil/100 g. The two antioxidant sources decreased (P ≤ 0.05) the TBA value and increased (P ≤ 0.05) the DPPH radical scavenging activity in broiler meat compared to the diet without antioxidant. No significant differences in TBA or DPPH radical scavenging activity were observed between chicks fed 2 g SCT/kg and those fed 200 mg Vit. E/kg. Conclusions It is concluded that inclusion of 2 g fish oil/100 g in broiler diets elevated the levels of long-chain omega-3 PUFA, mainly EPA and DHA, and decreased the n-6:n-3 ratio. Moreover, the addition of 2 g SCT/kg or 200 mg Vit. E/kg of diet as an antioxidant source inhibited lipid oxidation and enhanced antioxidant activity in broiler meat, and the two had the same effect.

    A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms

    No full text
    One of the most promising research areas in the healthcare industry and the scientific community is AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recent emerging AI-based techniques that allow rapid learning progress and improved medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, such as investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by the deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Component Analysis (PCA), called LR-PCA, is presented. Such a process helps to select the significant principal components (PCs) for the subsequent classification step. The proposed CAD system was examined using two public benchmark datasets, INbreast and mini-MIAS. It achieved the highest performance accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system seems to be useful and reliable for breast cancer diagnosis.
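
    A minimal sketch of the three-channel pseudo-coloring step described above is given below: the first channel keeps the original grayscale mammogram, the second a CLAHE-enhanced copy, and the third a pixel-wise intensity adjustment. The CLAHE clip limit and the use of gamma correction for the intensity adjustment are assumptions, not the paper's exact settings.

```python
# Hypothetical pseudo-coloring of a grayscale mammogram into a three-channel input.
import cv2
import numpy as np

def pseudo_color(gray_u8, clip_limit=2.0, tile=(8, 8), gamma=0.8):
    """gray_u8: single-channel uint8 mammogram -> (H, W, 3) pseudo-colored image."""
    # Channel 2: contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile).apply(gray_u8)
    # Channel 3: simple pixel-wise intensity adjustment via gamma correction (assumed)
    adjusted = np.clip(255.0 * (gray_u8 / 255.0) ** gamma, 0, 255).astype(np.uint8)
    # Channel 1 preserves the original image
    return np.dstack([gray_u8, clahe, adjusted])

# Example usage (the file name is illustrative):
# img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)
# rgb_like = pseudo_color(img)   # fed to the input layer of the backbone CNN
```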

    A Hybrid Workflow of Residual Convolutional Transformer Encoder for Breast Cancer Classification Using Digital X-ray Mammograms

    No full text
    Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Based on the advantages of the Residual Convolutional Network and the Transformer Encoder with a Multi-Layer Perceptron (MLP), this study proposes a novel hybrid deep learning computer-aided diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network is employed to create the deep features, the transformer is utilized to classify breast cancer according to the self-attention mechanism. The proposed CAD system can recognize breast cancer in two scenarios: Scenario A (binary classification) and Scenario B (multi-class classification). Data collection and preprocessing, patch image creation and splitting, and artificial intelligence-based breast lesion identification are all components of the execution framework and are applied consistently across both scenarios. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, VGG16, and ResNet50. Two datasets, CBIS-DDSM and DDSM, are utilized to construct and test the proposed CAD system. Five-fold cross-validation of the test data is used to evaluate the accuracy of the results. The proposed hybrid CAD system achieves encouraging evaluation results, with overall accuracies of 100% and 95.80% for the binary and multi-class prediction tasks, respectively. The experimental results reveal that the proposed hybrid AI model can distinguish benign from malignant breast tissue effectively, which is important for radiologists when recommending further investigation of abnormal mammograms and providing an optimal treatment plan.
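
    The sketch below illustrates the general hybrid pattern described above: a residual CNN backbone producing feature tokens that a transformer encoder with an MLP head classifies. The backbone choice, token handling, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical residual-CNN + transformer-encoder classifier for mammogram patches.
import torch
import torch.nn as nn
from torchvision import models

class ResidualTransformerClassifier(nn.Module):
    def __init__(self, num_classes=2, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])   # (N, 2048, 7, 7)
        self.project = nn.Conv2d(2048, d_model, kernel_size=1)         # tokens of size d_model
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.mlp_head = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, num_classes))

    def forward(self, x):
        tokens = self.project(self.backbone(x)).flatten(2).transpose(1, 2)  # (N, 49, d_model)
        encoded = self.encoder(tokens)              # self-attention over the 49 spatial tokens
        return self.mlp_head(encoded.mean(dim=1))   # average-pool tokens, then classify

# Quick shape check with a dummy batch of 224x224 patches (binary scenario)
print(ResidualTransformerClassifier()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 2])
```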

    An Ontology-Based Approach to Reduce the Negative Impact of Code Smells in Software Development Projects

    No full text
    The quality of software systems may be seriously impacted by specific types of source code anomalies. For example, poor programming practices result in Code Smells (CSs), a specific type of source code anomaly. They lead to architectural problems that in turn affect significant software quality attributes such as maintainability, portability, and reuse. To reduce the risk of introducing CSs and to alleviate their consequences, the knowledge and skills of developers and architects are essential. On the other hand, ontologies, an artificial intelligence technique, have been used to deal with different software engineering challenges. Hence, the aim of this paper is to describe an ontological approach to representing and analyzing code smells. Since ontologies are expressed in formal languages based on description logics, this approach may contribute to formally analyzing the information about code smells, for example, to detect inconsistencies or to infer new knowledge with the support of a reasoner. In addition, this proposal may support the training of software developers by providing the most relevant information on code smells. The ontology can also serve as a means of representing knowledge on CSs from different sources (documents in natural language, relational databases, HTML documents, etc.). Therefore, it could be a valuable knowledge base supporting software developers and architects either in avoiding CSs or in detecting and removing them. The ontology was developed following a sound methodology; the well-known tool Protégé was used to manage it, and it was validated using different techniques. An experiment was conducted to demonstrate the applicability of the ontology and to evaluate its impact on speeding up the analysis of CSs.
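
    As an illustration of the kind of representation the paper describes, the sketch below defines a tiny code-smell ontology with owlready2 and runs a simple query over it. The class and property names are hypothetical, not those of the authors' ontology.

```python
# Hypothetical miniature code-smell ontology built with owlready2 (names are illustrative).
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/code-smells.owl")

with onto:
    class CodeSmell(Thing): pass
    class QualityAttribute(Thing): pass
    class RefactoringAction(Thing): pass

    class degrades(ObjectProperty):       # a smell degrades a quality attribute
        domain = [CodeSmell]
        range = [QualityAttribute]

    class isRemovedBy(ObjectProperty):    # a smell is removed by a refactoring action
        domain = [CodeSmell]
        range = [RefactoringAction]

# Populate a couple of individuals and ask a simple question
maintainability = onto.QualityAttribute("Maintainability")
god_class = onto.CodeSmell("GodClass")
god_class.degrades = [maintainability]

# Which known smells degrade maintainability?
print([s.name for s in onto.CodeSmell.instances() if maintainability in s.degrades])
```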