6 research outputs found

    A Transfer Learning Approach with a Convolutional Neural Network for the Classification of Lung Carcinoma

    No full text
    Lung cancer is among the most hazardous types of cancer in humans. Correct diagnosis of pathogenic lung disease is critical for treatment. Traditionally, determining the pathological form of lung cancer involves an expensive and time-consuming investigation. Lung cancer is a leading cause of mortality worldwide, and lung tissue nodules are the most prevalent sign doctors use to identify it. The proposed model is based on robust deep-learning-based lung cancer detection and recognition. This study uses a deep neural network for feature extraction in a computer-aided diagnosis (CAD) system to assist in detecting lung disease in high-resolution images. The proposed model comprises three phases: data augmentation is performed first, classification is then performed with a pretrained CNN model, and lastly localization is completed. The amount of data available in medical image assessment is occasionally inadequate to train a learning network; to address this, we train the classifier using transfer learning (TL). The proposed methodology offers an effective, non-invasive diagnostic tool for clinical assessment. The proposed model has far fewer parameters than state-of-the-art models. We also examined the robustness of the model with respect to dataset size. Standard performance metrics are used to assess the effectiveness of the proposed architecture. On this dataset, all TL techniques perform well; VGG16, VGG19, and Xception are compared over a 20-epoch training schedule. Preprocessing serves as a bridge toward a dependable model and helps any model produce predictions more quickly. At the 20th epoch, the accuracy of VGG16, VGG19, and Xception is 98.83 percent, 98.05 percent, and 97.4 percent, respectively.
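    As a hedged illustration of the transfer-learning (TL) setup described above, the minimal Python/Keras sketch below freezes a pretrained VGG16 backbone and trains a small classification head on augmented images; the class count, image size, and head layers are illustrative assumptions rather than the paper's exact configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 3          # assumed number of lung-tissue categories
    IMG_SIZE = (224, 224)    # standard VGG16 input resolution

    # Pretrained ImageNet backbone, frozen and used purely as a feature extractor.
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
    backbone.trainable = False

    model = models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.RandomFlip("horizontal"),      # simple data augmentation
        layers.RandomRotation(0.1),
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=20)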

    Framework for Detecting Breast Cancer Risk Presence Using Deep Learning

    No full text
    Cancer is a complicated global health concern with a significant fatality rate. Breast cancer is among the leading causes of mortality each year. Advances in prognosis have increasingly been based on gene expression, offering insight for robust and appropriate healthcare decisions, owing to the rapid growth of high-throughput sequencing techniques and the various deep learning approaches that have emerged in recent years. Diagnostic-imaging disease indicators such as breast density and tissue texture are widely used by physicians and automated systems. Effective and specific identification of cancer risk presence can inform tailored screening and preventive decisions. For classification and prediction applications such as breast imaging, deep learning has increasingly emerged as an effective method. On this foundation, we present a deep learning approach for predicting breast cancer risk. The proposed methodology is based on transfer learning using the InceptionResNetV2 deep learning model. Our experimental work on a breast cancer dataset demonstrates high model performance, with 91% accuracy. The proposed model incorporates risk markers that improve breast cancer risk assessment scores, and it presents promising results compared with existing approaches. This article describes breast cancer risk indicators, defines the proper usage, features, and limits of each risk forecasting model, and examines the increasing role of deep learning (DL) in risk detection. The proposed model could potentially be used to automate various types of medical imaging techniques.
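    As a hedged sketch of the transfer-learning approach described above, the Python/Keras snippet below attaches a small head to a frozen InceptionResNetV2 backbone for a binary risk-presence label; the input size, head layers, and learning rate are illustrative assumptions, not the paper's exact settings.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Pretrained InceptionResNetV2 backbone, frozen for feature extraction.
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # risk present vs. absent
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, validation_split=0.2, epochs=10)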

    Framework for Improved Sentiment Analysis via Random Minority Oversampling for User Tweet Review Classification

    No full text
    Social networks such as Twitter have emerged as platforms through which people share their unique ideas and perspectives on various topics and issues with friends and family, creating a massive knowledge base. Machine-learning-based sentiment analysis has been successful in discovering people's opinions from this abundantly available data. However, recent studies have pointed out that imbalanced data can have a negative impact on the results. In this paper, we propose a framework for improved sentiment analysis that combines a sequence of ordered preprocessing steps with resampling of minority classes to achieve better performance. The performance of the technique can vary with the dataset, as its initial focus is on feature selection and feature combination. Multiple machine learning algorithms are used to classify tweets as positive, negative, or neutral. The results reveal that random minority oversampling provides improved performance and addresses the issue of class imbalance.
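    The random minority oversampling step can be sketched as follows, assuming scikit-learn and imbalanced-learn; the toy tweets, TF-IDF features, and logistic-regression classifier are illustrative stand-ins for the paper's datasets and classifiers.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from imblearn.over_sampling import RandomOverSampler

    # Toy, imbalanced corpus: positive tweets dominate.
    tweets = ["love this phone", "great battery life", "superb camera",
              "really happy with it", "screen keeps freezing", "it is okay"]
    labels = ["positive", "positive", "positive",
              "positive", "negative", "neutral"]

    vectorizer = TfidfVectorizer(lowercase=True)
    X = vectorizer.fit_transform(tweets)

    # Randomly duplicate minority-class samples until all classes are balanced.
    X_res, y_res = RandomOverSampler(random_state=42).fit_resample(X, labels)

    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    print(clf.predict(vectorizer.transform(["battery keeps freezing"])))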

    Enhancing Sentiment Analysis via Random Majority Under-Sampling with Reduced Time Complexity for Classifying Tweet Reviews

    No full text
    Twitter has become a unique platform for social interaction among people all around the world, producing an extensive amount of knowledge that can be used for various purposes. People share and spread their own ideologies and points of view on diverse topics, generating a large volume of content. Sentiment analysis is of extreme importance to many businesses, as it can directly affect their key decisions. Challenges in sentiment analysis research include imbalanced datasets, lexical uniqueness, and processing time complexity. Most machine learning models are sequential and need a considerable amount of time to complete execution. Therefore, we propose a sentiment analysis model specifically designed for imbalanced datasets that reduces the time complexity of the task by combining a sequence of text preprocessing techniques with random majority under-sampling. Our proposed model provides results competitive with other models while simultaneously reducing the time complexity of sentiment analysis. The experimental results corroborate that the model performs well, achieving an accuracy of 86.5% and an F1 score of 0.874 with XGBoost.
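    The complementary majority under-sampling step can be sketched as follows, assuming scikit-learn, imbalanced-learn, and xgboost; the toy data and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from imblearn.under_sampling import RandomUnderSampler
    from xgboost import XGBClassifier

    # Toy, imbalanced corpus: the positive class (label 2) dominates.
    tweets = ["love it", "works great", "amazing value", "very satisfied",
              "totally disappointed", "it is fine"]
    labels = [2, 2, 2, 2, 0, 1]   # 0 = negative, 1 = neutral, 2 = positive

    X = TfidfVectorizer().fit_transform(tweets)

    # Randomly drop majority-class samples until every class is the same size,
    # shrinking the training set and hence the training time.
    X_res, y_res = RandomUnderSampler(random_state=42).fit_resample(X, labels)

    clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="mlogloss")
    clf.fit(X_res, y_res)
    print(clf.predict(X[:2]))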

    Delivering Digital Healthcare for Elderly: A Holistic Framework for the Adoption of Ambient Assisted Living

    No full text
    Adoption of Ambient Assisted Living (AAL) technologies for geriatric healthcare is suboptimal. This study presents the AAL Adoption Diamond Framework, encompassing a set of key enablers and barriers as factors, and describes our approach to developing this framework. A systematic literature review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. SCOPUS, IEEE Xplore, PubMed, ProQuest, Science Direct, ACM Digital Library, SpringerLink, Wiley Online Library, and grey literature were searched. Thematic analysis was performed to identify factors reported or perceived to be important for adopting AAL technologies. Of 3717 studies initially retrieved, 109 were thoroughly screened and 52 met our inclusion criteria. Nineteen unique technology adoption factors were identified. The most common factor was privacy (50%), whereas data accuracy and affordability were the least common (4%). The highest number of factors found in a single study was eleven, whereas the average number of factors across all included studies was four (mean = 3.9). We formed an AAL technology adoption framework from the retrieved information and named it the AAL Adoption Diamond Framework. This holistic framework was formed by organising the identified technology adoption factors into four key dimensions: Human, Technology, Business, and Organisation. In conclusion, the AAL Adoption Diamond Framework is holistic in terms of recognizing key factors for the adoption of AAL technologies, and novel and unmatched in terms of structuring them into four overarching themes or dimensions, bringing together the individual and systemic factors revolving around the adoption of AAL technology. This framework is useful for stakeholders (e.g., decision-makers, healthcare providers, and caregivers) seeking to adopt and implement AAL technologies.