19 research outputs found

    The Role of Machine Learning and Deep Learning Approaches for the Detection of Skin Cancer

    Get PDF
    Machine learning (ML) can enhance a dermatologist’s work, from diagnosis to customized care. The development of ML algorithms in dermatology has recently been supported by advances in digital data processing (e.g., electronic medical records, image archives, omics), faster computing and cheaper data storage. This article describes the fundamentals of ML-based implementations, as well as future limits and concerns for the production of skin cancer detection and classification systems. We also explored three areas of dermatology using deep learning applications: (1) the classification of diseases from clinical photographs, (2) dermatopathology-based visual classification of cancer, and (3) the measurement of skin diseases by smartphone applications and personal tracking systems. This analysis aims to provide dermatologists with a guide that demystifies the basics of ML and its different applications so they can correctly identify the possible challenges. This paper surveyed studies on skin cancer detection using deep learning to assess the features and advantages of the different techniques. Moreover, this paper also defined the basic requirements for creating a skin cancer detection application, which revolve around two main issues: full image segmentation and the tracking of the lesion on the skin using deep learning. Most of the techniques found in this survey address these two problems, and some also categorize the type of cancer
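    The first of the two requirements above, full image segmentation, can be reduced to its simplest possible form for illustration: producing a binary lesion mask from pixel intensities. The sketch below is a toy global-threshold segmenter, not any of the surveyed deep-learning methods; the image values and threshold are made up.

```python
# Illustrative sketch only: lesion segmentation as a global intensity
# threshold. Surveyed systems learn the segmentation; this just shows the
# input/output shape of the task (grayscale image in, binary mask out).

def segment(image, threshold):
    """image: 2-D list of grayscale values; pixels darker than the
    threshold are marked 1 (hypothetical lesion), others 0."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

img = [[200, 190,  60],
       [210,  50,  55],
       [205, 198, 201]]   # darker pixels stand in for a lesion
mask = segment(img, 100)
print(mask)  # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

A learned model replaces the fixed threshold with a per-pixel prediction, but the mask it emits has the same form.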

    Central nervous system metastases: A rare presentation of nasopharyngeal carcinoma

    No full text
    We report a case of a male patient who presented with nasal obstruction and epistaxis. MRI of the brain showed a mass in the nasopharynx and enlarged cervical lymph nodes. In addition, there was an extra-axial, dural-based lesion in the brain and subcentimetre nodules in both lungs. He received induction chemotherapy followed by chemoradiation therapy to the primary site and the dural-based metastatic deposit. He re-presented with bilateral lower limb weakness. MRI of the spine showed metastatic deposits within the thoracic cord parenchyma and meningeal deposits at the lumbar region. Palliative radiation was used to treat the spinal cord and meningeal metastases. He died a few months later of systemic disease progression. Given the rarity of this presentation in nasopharyngeal carcinoma and the lack of clear guidelines for standard treatment, we report this case to share our experience of its management

    Determination of the heavy metal contents of frequently used herbal products in Pakistan

    Get PDF
    Purpose: To determine the heavy metal content of selected local and international herbal medicines sold for the treatment of various diseases in Pakistan. Methods: The herbal medicine dosage forms assessed were crude form, syrup, gel, capsule, powder and tonic. The herbal samples were prepared by wet digestion with nitric acid and then analyzed for arsenic (As), cadmium (Cd), lead (Pb) and mercury (Hg) using an MHS-15 mercury/hydride system and flame atomic absorption spectrometry (FAAS). Results: Arsenic levels ranged from 0.00 to 0.580 ppm; cadmium from 0.001 to 0.006 ppm; lead from 0.00 to 1.078 ppm; and mercury from 0.001 to 0.012 ppm. All results were below the permissible intake limits of the World Health Organization (WHO) and the American Herbal Products Association (AHPA). The pH of the samples was in the range of 1.52 to 6.99. Conclusion: The findings reveal that the investigated herbal products available in Pakistan are safe with reference to heavy metals and can be considered non-toxic for human consumption

    Analysis of Cyber Security Attacks and Its Solutions for the Smart grid Using Machine Learning and Blockchain Methods

    No full text
    Smart grids are rapidly replacing conventional networks on a worldwide scale. Like any other novel technology, a smart grid has drawbacks, and cyberattacks are among the most challenging to stop. The biggest problem is caused by millions of sensors constantly sending and receiving data packets over the network. Cyberattacks can compromise the smart grid’s dependability, availability, and privacy. Users, the communication network of smart devices and sensors, and network administrators are the three layers of a smart grid network vulnerable to cyberattacks. In this study, we look at the many risks and flaws that can affect the safety of critical smart grid components. We then offer security solutions against these threats using different methods, and we provide recommendations for reducing the likelihood that these three categories of cyberattack will occur

    Enhancing software defect prediction: a framework with improved feature selection and ensemble machine learning

    No full text
    Effective software defect prediction is a crucial aspect of software quality assurance, enabling the identification of defective modules before the testing phase. This study proposes a comprehensive five-stage framework for software defect prediction, addressing the current challenges in the field. The first stage involves selecting a cleaned version of NASA’s defect datasets, including CM1, JM1, MC2, MW1, PC1, PC3, and PC4, ensuring the data’s integrity. In the second stage, a feature selection technique based on the genetic algorithm is applied to identify the optimal subset of features. In the third stage, three heterogeneous binary classifiers, namely random forest, support vector machine, and naïve Bayes, are implemented as base classifiers. Through iterative tuning, each classifier is optimized to achieve its highest individual accuracy. In the fourth stage, an ensemble machine-learning technique known as voting is applied as a master classifier, leveraging the collective decision-making power of the base classifiers. The final stage evaluates the performance of the proposed framework using five widely recognized measures: precision, recall, accuracy, F-measure, and area under the curve. Experimental results demonstrate that the proposed framework outperforms state-of-the-art ensemble and base classifiers employed in software defect prediction, achieving a maximum accuracy of 95.1% and showing its effectiveness in accurately identifying software defects. Efficiency was also evaluated by measuring execution times: the framework reduces training and testing times by an average of 51.52% and 52.31%, respectively, contributing to a more computationally economical solution for accurate software defect prediction
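    The fourth stage, hard (majority) voting over heterogeneous base classifiers, can be sketched as follows. The three classifiers below are toy rule-based stand-ins operating on a tiny feature vector, not the paper's tuned random forest, SVM, and naïve Bayes; only the voting mechanism itself is the point.

```python
# Hedged sketch of a master classifier using hard voting: each base
# classifier casts a 0/1 prediction and the majority label wins.
from collections import Counter

def clf_threshold(x):   # stand-in for one tuned base classifier
    return 1 if x[0] > 0.5 else 0

def clf_sum(x):         # stand-in for a score-summing classifier
    return 1 if sum(x) > 1.0 else 0

def clf_any_high(x):    # stand-in for a single-rule tree
    return 1 if max(x) > 0.8 else 0

def vote(classifiers, x):
    """Majority vote over the base classifiers' predictions."""
    preds = [c(x) for c in classifiers]
    return Counter(preds).most_common(1)[0][0]

clfs = [clf_threshold, clf_sum, clf_any_high]
print(vote(clfs, [0.9, 0.9]))  # all three predict defective -> 1
print(vote(clfs, [0.2, 0.1]))  # all three predict clean -> 0
```

With an odd number of base classifiers there is never a tie, which is one reason three heterogeneous learners is a natural choice for this design.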

    Security risk models against attacks in smart grid using big data and artificial intelligence

    No full text
    The need to update the electrical infrastructure led directly to the idea of smart grids (SG). Modern security technologies are almost perfect for detecting and preventing numerous attacks on the smart grid. They are unable to meet the challenging cyber security standards, nevertheless. We need many methods and techniques to effectively defend against cyber threats. Therefore, a more flexible approach is required to assess data sets and identify hidden risks. This is possible for vast amounts of data due to recent developments in artificial intelligence, machine learning, and deep learning. Due to adaptable base behavior models, machine learning can recognize new and unexpected attacks. Security will be significantly improved by combining new and previously released data sets with machine learning and predictive analytics. Artificial Intelligence (AI) and big data are used to learn more about the current situation and potential solutions for cybersecurity issues with smart grids. This article focuses on different types of attacks on the smart grid. Furthermore, it also focuses on the different challenges of AI in the smart grid. It also focuses on using big data in smart grids and other applications like healthcare. Finally, a solution to smart grid security issues using artificial intelligence and big data methods is discussed. In the end, some possible future directions are also discussed in this article. Researchers and graduate students are the audience of our article

    Analysis of Feature Selection Methods in Software Defect Prediction Models

    No full text
    Improving software quality by proactively detecting potential defects during development is a major goal of software engineering, and software defect prediction plays a central role in achieving it. The power of data analytics and machine learning allows us to focus our efforts where they are needed most. A key factor in the success of software fault prediction is selecting relevant features and reducing data dimensionality. Feature selection methods contribute by filtering out the most critical attributes from a plethora of potential features, and they have the potential to significantly improve the accuracy and efficiency of fault prediction models. However, the field of feature selection in the context of software fault prediction is vast and constantly evolving, with a variety of techniques and tools available. Based on these considerations, our systematic literature review conducts a comprehensive investigation of feature selection methods used in the context of software fault prediction. The research uses a refined search strategy over four reputable digital libraries (IEEE Xplore, ScienceDirect, ACM Digital Library, and Springer Link) to provide an exhaustive review through a rigorous analysis of 49 selected primary studies from 2014. The results highlight several important findings. First, filter and hybrid feature selection methods are prevalent. Second, single classifiers such as Naïve Bayes, Support Vector Machine, and Decision Tree, as well as ensemble classifiers such as Random Forest, Bagging, and AdaBoost, are commonly used. Third, metrics such as area under the curve, accuracy, and F-measure are commonly used for performance evaluation. Finally, there is a clear preference for tools such as WEKA, MATLAB, and Python.
    By providing insights into current trends and practices in the field, this study offers valuable guidance to researchers and practitioners to make informed decisions to improve software fault prediction models and contribute to the overall improvement of software quality
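    Filter methods, the family the review finds most prevalent, score each feature independently of any classifier. A minimal sketch of one such filter, ranking features by absolute Pearson correlation with the defect label and keeping the top k, is shown below; the feature names and values are invented for illustration.

```python
# Hedged sketch of a filter-style feature selector: rank features by
# |Pearson correlation| with the binary defect label, keep the top k.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(features, labels, k):
    """features: dict name -> value list; returns k names by |correlation|."""
    ranked = sorted(features,
                    key=lambda f: abs(pearson(features[f], labels)),
                    reverse=True)
    return ranked[:k]

features = {"loc":        [10, 200, 15, 300],   # toy software metrics
            "complexity": [1, 9, 2, 8],
            "comments":   [5, 4, 6, 5]}
labels = [0, 1, 0, 1]                            # 1 = defective module
print(select_top_k(features, labels, 2))  # ['complexity', 'loc']
```

Because the score ignores the downstream classifier, filter methods are cheap; the hybrid methods the review also highlights combine such a ranking with a classifier-in-the-loop (wrapper) search.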

    Lung Nodules Localization and Report Analysis from Computerized Tomography (CT) Scan Using a Novel Machine Learning Approach

    No full text
    A lung nodule is a tiny growth that develops in the lung. Non-cancerous nodules do not spread to other sections of the body, whereas malignant nodules can spread rapidly. Lung cancer is one of the most dangerous kinds of cancer and is responsible for taking the lives of millions of individuals each year. A highly efficient technology capable of analyzing nodules in the pre-cancerous phases of the disease is therefore needed. However, it is still difficult to detect nodules in CT scan data, an issue that has to be overcome if subsequent treatment is to be effective. CT scans have been used for several years to diagnose nodules for further therapy, but the radiologist can make a mistake while determining a nodule’s presence and size, so there is room for error in this process. Radiologists compare and analyze the images obtained from the CT scan to ascertain the nodule’s location and current status. A dependable system is needed that can locate the nodule in CT scan images and provide radiologists with an automated report analysis that is easy to comprehend. In this study, we created and evaluated an algorithm that can identify a nodule by comparing multiple images, giving the radiologist additional data for diagnosing cancer in the nodule at its earliest stages. In addition to accuracy, various other characteristics were assessed during the performance evaluation. The final CNN algorithm has 84.8% accuracy, 90.47% precision, and 90.64% specificity. These numbers are all relatively close to one another, so one may argue that CNN is capable of minimizing the number of false positives through frequent, in-depth training
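    The three reported figures are standard functions of the confusion matrix, which makes their relationship easy to check. The sketch below defines them; the tp/fp/tn/fn counts are hypothetical, since the abstract reports only the final percentages.

```python
# Standard metric definitions over confusion-matrix counts
# (tp = true positives, fp = false positives, etc.). The example counts
# are made up for illustration, not taken from the paper.

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fp, tn, fn = 95, 10, 97, 18  # hypothetical scan-level counts
print(round(accuracy(tp, fp, tn, fn), 3))
print(round(precision(tp, fp), 3))
print(round(specificity(tn, fp), 3))
```

Note that precision and specificity both fall as fp grows, which is why similar values for the two are consistent with the abstract's claim that the network keeps false positives low.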

    YOLO and residual network for colorectal cancer cell detection and counting

    No full text
    The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research applications. Early detection is crucial for improving the chances of survival, and researchers are introducing new techniques for accurate cancer diagnosis. This study introduces an efficient deep learning-based method for detecting and counting colorectal cancer cells (HT-29). The colorectal cancer cell line was procured from a company. Further, the cancer cells were cultured, and a transwell experiment was conducted in the lab to collect the dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set, and the remaining 20% were assigned to the testing set. The HT-29 cell detection and counting in medical images is performed by integrating YOLOv2, ResNet-50, and ResNet-18 architectures. The accuracy achieved by ResNet-18 is 98.70% and ResNet-50 is 96.66%. The study achieves its primary objective by focusing on detecting and quantifying congested and overlapping colorectal cancer cells within the images. This innovative work constitutes a significant development in overlapping cancer cell detection and counting, paving the way for novel advancements and opening new avenues for research and clinical applications. Researchers can extend the study by exploring variations in ResNet and YOLO architectures to optimize object detection performance. Further investigation into real-time deployment strategies will enhance the practical applicability of these models
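    Counting overlapping cells from detector output typically means thresholding the detections by confidence and de-duplicating overlapping boxes before counting. The sketch below shows that post-processing step (greedy IoU-based suppression) in isolation; the detector itself, the boxes, and the threshold values are all stand-ins, not the paper's YOLOv2/ResNet pipeline.

```python
# Hedged sketch of the counting step downstream of a detector: filter
# detections by confidence, greedily suppress boxes that overlap a kept
# box (IoU >= iou_thr), then count the survivors as distinct cells.

def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_cells(dets, score_thr=0.5, iou_thr=0.5):
    """dets: list of (box, score) pairs from a detector."""
    dets = sorted((d for d in dets if d[1] >= score_thr),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, _ in dets:
        if all(iou(box, k) < iou_thr for k in kept):
            kept.append(box)
    return len(kept)

dets = [((0, 0, 10, 10), 0.9), ((1, 1, 10, 10), 0.8),   # overlapping pair
        ((20, 20, 30, 30), 0.7), ((0, 0, 5, 5), 0.3)]   # last one low-score
print(count_cells(dets))  # 2 distinct cells
```

For heavily congested cells, a plain IoU threshold can merge true neighbors, which is one reason the paper's focus on overlapping-cell counting is non-trivial.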