    Prediction-Based Channel Selection in Mobile Cognitive Radio Network

    Emerging 5G wireless communications enable diverse multimedia applications and smart devices in the network. 5G promises very high mobile traffic data rates, quality of service in the form of very low latency, and an improvement in users' perceived quality of experience compared to the current 4G wireless network. This drives an increasing demand for bandwidth, which in turn creates a pressing need for efficient spectrum utilization. This paper studies the modelling, performance analysis, and optimization of future channel selection for a cognitive radio (CR) network by jointly exploiting both CR mobility and primary user (PU) activity to provide efficient spectrum access. The modelling and prediction method is implemented using a Hidden Markov Model. The movement of a CR in a wireless network yields location-varying spectrum opportunities. Current approaches in the literature, which depend only on reactive selection of spectrum opportunities, result in inefficient channel usage. Moreover, the conventional random selection method tends to incur higher handoff and operation delays. This inefficiency can cause continuous transmission interruptions, leading to the degradation of advanced wireless services. The goal of this work is to improve CR performance in terms of the number of handoffs and operation delays. We compare our prediction strategy in simulation with a commonly used random sensing method, with and without location information. The simulations show that the proposed prediction and learning strategy obtains significant improvements in the number of handoffs and in operation delays, and that knowledge of the future CR location is beneficial in increasing mobile CR performance. The study also shows that the number of primary users in the network and the PU protection range affect the performance of mobile CR channel selection for all methods.
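
    As a rough illustration of the prediction step (a simplified observable-state Markov chain rather than the paper's full Hidden Markov Model; the channel logs and parameters below are invented for the example), this Python sketch estimates per-channel transition statistics from past primary-user activity and selects the channel most likely to be idle in the next slot:

```python
import numpy as np

def estimate_transition_matrix(history):
    """Estimate a 2-state (0 = idle, 1 = busy) transition matrix from an
    observed primary-user activity sequence on one channel."""
    counts = np.ones((2, 2))  # Laplace smoothing avoids all-zero rows
    for prev, curr in zip(history[:-1], history[1:]):
        counts[prev, curr] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def idle_probability_next_slot(history):
    """P(channel is idle in the next slot | current state)."""
    T = estimate_transition_matrix(history)
    return T[history[-1], 0]

# Hypothetical activity logs for five channels (0 = idle, 1 = busy).
rng = np.random.default_rng(0)
channels = {c: rng.integers(0, 2, size=200).tolist() for c in range(5)}
best = max(channels, key=lambda c: idle_probability_next_slot(channels[c]))
print("select channel", best)
```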

    Breast Cancer Classification: Features Investigation using Machine Learning Approaches

    Breast cancer is the second most common cancer after lung cancer and one of the main causes of death worldwide. Women have a higher risk of breast cancer than men, so early diagnosis with an accurate and reliable system is critical for breast cancer treatment. Machine learning techniques are well known and popular among researchers, especially for classification and prediction. An investigation was conducted to evaluate the performance of breast cancer classification into malignant and benign tumors using various machine learning techniques, namely k-Nearest Neighbors (k-NN), Random Forest, and Support Vector Machine (SVM), together with ensemble techniques, to predict breast cancer survival using 10-fold cross validation. This study used the Wisconsin Diagnostic Breast Cancer (WDBC) dataset with 30 features measured from 569 patients, of whom 212 have malignant tumors and 357 have benign tumors. The analysis investigated the tumor features grouped by their mean, standard error, and worst values. Each group comprises ten properties: radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry, and fractal dimension. Feature selection was considered to have a significant influence on breast cancer classification, so the analysis compares these subsets against the full thirty features to determine which features to use. The results show that AdaBoost obtained the highest accuracy with the lowest error rate: 98.95% for all thirty features, 98.07% for the ten mean features, and 98.77% for the ten worst features. Additionally, the proposed methods were also evaluated with 2-fold, 3-fold, and 5-fold cross validation. Comparing all methods, the AdaBoost ensemble gave the highest accuracy at 98.77% for 10-fold cross validation, and 98.41% and 98.24% for 2-fold and 3-fold cross validation, respectively. With 5-fold cross validation, however, SVM produced the best accuracy at 98.60% with the lowest error rate.
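
    The experimental pattern above can be reproduced with scikit-learn, which ships the WDBC dataset; the paper's exact preprocessing and hyperparameters are not stated, so the defaults below are assumptions (a sketch, not the authors' code):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# WDBC: 569 samples, 30 features (mean / standard error / worst of 10 properties).
X, y = load_breast_cancer(return_X_y=True)

models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross validation
    print(f"{name}: {scores.mean():.4f} mean accuracy")
```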

    Joint Source Channel Decoding Exploiting 2D Source Correlation with Parameter Estimation for Image Transmission over Rayleigh Fading Channels

    This paper investigates the performance of a 2-Dimensional (2D) Joint Source Channel Coding (JSCC) system assisted with parameter estimation for 2D image transmission over an Additive White Gaussian Noise (AWGN) channel and a Rayleigh fading channel. The Baum-Welch Algorithm (BWA) is employed in the proposed 2D JSCC system to estimate the source correlation statistics during channel decoding. The source correlation is then exploited during channel decoding using a modified Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. The performance of the 2D JSCC system with the BWA-based parameter estimation technique (2D-JSCC-PET1) is evaluated via image transmission simulations. Two images, one exhibiting strong and one exhibiting weak source correlation, are considered in the evaluation by measuring the Peak Signal-to-Noise Ratio (PSNR) of the decoded images at the receiver. The proposed 2D-JSCC-PET1 system is compared with various benchmark systems. Simulation results reveal that the 2D-JSCC-PET1 system outperforms the other benchmark systems, with performance gains of 4.23 dB over the 2D-JSCC-PET2 system and 6.10 dB over the 2D JSCC system. The proposed system also performs very close to the ideal 2D JSCC system, which relies on the assumption of perfect source correlation knowledge at the receiver, showing only a 0.88 dB difference in performance.
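
    The evaluation metric named above has a standard definition; a minimal sketch (not the authors' implementation) for computing the PSNR of a decoded 8-bit image:

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between an original image and its
    decoded reconstruction; higher is better."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(peak ** 2 / mse)
```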

    PAPR reduction techniques in generalized inverse discrete Fourier transform non-orthogonal frequency division multiplexing system

    The Generalized Inverse Discrete Fourier Transform Non-Orthogonal Frequency Division Multiplexing (GIDFT n-OFDM) system is a promising candidate for meeting the higher data rate requirements of Fifth Generation (5G) technology. However, this system experiences a high Peak-to-Average Power Ratio (PAPR) because a massive number of subcarrier signals are transmitted. In this paper, three common PAPR reduction techniques are applied to the GIDFT n-OFDM system: Clipping, Partial Transmit Sequence (PTS), and Selective Mapping (SLM). The system performance is compared and evaluated using Complementary Cumulative Distribution Function (CCDF) plots. Simulation results show that the SLM technique gives a significant PAPR reduction of 9 dB relative to the original performance.
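
    To make the PAPR/CCDF evaluation concrete, here is a minimal sketch using a plain IFFT-based OFDM transmitter and simple amplitude clipping; the GIDFT n-OFDM transform, PTS, and SLM themselves are not reproduced, and the subcarrier count and clipping ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, num_symbols = 256, 10_000  # assumed subcarrier and symbol counts

def papr_db(x):
    """Peak-to-Average Power Ratio of one time-domain symbol, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# QPSK on N subcarriers, IFFT to the time domain (plain OFDM).
bits = rng.integers(0, 2, (num_symbols, N, 2))
syms = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
tx = np.fft.ifft(syms, axis=1)

# Simple amplitude clipping at 1.4x the RMS level, preserving phase.
rms = np.sqrt(np.mean(np.abs(tx) ** 2))
mag = np.abs(tx)
clipped = tx * np.minimum(mag, 1.4 * rms) / np.maximum(mag, 1e-12)

# CCDF points: probability that a symbol's PAPR exceeds a threshold.
papr = np.apply_along_axis(papr_db, 1, tx)
papr_c = np.apply_along_axis(papr_db, 1, clipped)
for thr in (6.0, 8.0, 10.0):
    print(f"P(PAPR > {thr} dB): original {np.mean(papr > thr):.4f}, "
          f"clipped {np.mean(papr_c > thr):.4f}")
```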

    COVID-19: Symptoms Clustering and Severity Classification Using Machine Learning Approach

    COVID-19 is an extremely contagious illness whose effects range from common cold-like symptoms to more chronic illness or even death. The constant mutation of COVID-19 into new variants makes it important to identify its symptoms in order to contain the infection. Clustering and classification in machine learning are in mainstream use across different areas of research, and have been used in recent years to generate useful knowledge on the COVID-19 outbreak. Many researchers have shared their COVID-19 data in public databases, and many studies have been carried out on them. However, the merit of such datasets is often unknown, and researchers need to analyse them to check their reliability. The dataset used in this work was sourced from the Kaggle website. The data was obtained through a survey collected from participants of various genders and ages who had been to at least ten countries. There are four severity levels based on COVID-19 symptoms, developed in accordance with World Health Organization (WHO) and Indian Ministry of Health and Family Welfare recommendations. This paper presents an inquiry into the dataset utilising supervised and unsupervised machine learning approaches in order to better comprehend it. For the supervised analysis of the severity groups based on COVID-19 symptoms, a total of seven classifiers were employed: k-NN, Linear SVM, Naive Bayes, Decision Tree (J48), AdaBoost, Bagging, and Stacking. For the unsupervised analysis, the clustering algorithms utilised were Simple K-Means and Expectation-Maximization. From the results of both the supervised and unsupervised techniques, we observed relatively poor classification and clustering performance. The findings for the dataset analysed in this study do not appear to give the correct results for the symptoms categorized against the severity levels, which raises concerns about the validity and reliability of the dataset.
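
    A minimal sketch of the analysis pattern described above (the Kaggle survey data is stood in for by random placeholder features, so the numbers it prints are meaningless; only the workflow of cross-validated classification plus K-Means clustering is illustrated):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the Kaggle symptom survey:
# binary symptom indicators and a 4-level severity label.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(1000, 20))  # 20 hypothetical symptom flags
y = rng.integers(0, 4, size=1000)        # severity: none/mild/moderate/severe

# Supervised: 10-fold cross-validated classification of severity.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print(f"Decision tree accuracy: {scores.mean():.3f}")

# Unsupervised: cluster the records into 4 groups and check cohesion.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(f"Silhouette score: {silhouette_score(X, labels):.3f}")
```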

    Exploiting 2-Dimensional Source Correlation in Channel Decoding with Parameter Estimation

    Traditionally, source coding is assumed to be perfect, so that the redundancy of the source-encoded bit-stream is zero. In reality this is not the case, as existing source encoders are imperfect and yield residual redundancy at the output. This residual redundancy can be exploited by using Joint Source Channel Coding (JSCC) with a Markov chain as the source model. In several studies, the statistical knowledge of the source has been assumed to be perfectly available at the receiver. Although this gives better BER performance, in practice the source correlation knowledge is not always available at the receiver, which can affect the reliability of the outcome. In this work, the source correlation along all rows and columns of the 2D source is exploited using a modified Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm in the decoder, and a parameter estimation technique is used jointly with the decoder to estimate the source correlation knowledge. Hence, this research investigates parameter estimation for the 2D JSCC system, reflecting the practical scenario where the source correlation knowledge is not always available. We compare the performance of the proposed joint decoding and estimation technique with the ideal 2D JSCC system that has perfect knowledge of the source correlation. Simulation results reveal that our proposed coding scheme performs very close to the ideal 2D JSCC system.
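
    As a toy illustration of what "source correlation knowledge" means here (a simple frequency-count estimator on a clean 1D sequence, not the paper's Baum-Welch estimation inside the BCJR decoder), the sketch below recovers the transition probability of a first-order binary Markov source:

```python
import numpy as np

def estimate_transition_prob(bits):
    """Estimate p = P(b_k != b_{k-1}) for a first-order binary Markov
    source; small p means strong correlation (long runs of equal bits)."""
    bits = np.asarray(bits)
    return np.count_nonzero(bits[1:] != bits[:-1]) / (len(bits) - 1)

# Generate a strongly correlated source (true p = 0.05) and recover p.
rng = np.random.default_rng(7)
flips = rng.random(9999) < 0.05          # flip the bit with probability p
bits = np.concatenate(([0], np.cumsum(flips) % 2))
print(f"estimated p = {estimate_transition_prob(bits):.4f}")
```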

    Internet of Things: A Monitoring and Control System for Rockmelon Farming

    This paper describes an Internet of Things (IoT) application in agricultural production, specifically rockmelon farming. To boost agricultural production on a commercial basis, a more systematic approach should be developed and organized for adoption by operators to increase production and income. This work combines fertilization and irrigation in one system under protective structures to ensure high-quality plant production and to provide an alternative to conventional cropping systems. In addition, the use of technology for online monitoring and control improves the system, in parallel with the rapid development of information technology today. The proposed system focuses on automation, wireless communication, data analytics, and a simplified design with a minimal and scalable skid size for rockmelon farming in the Klang Valley. A monitoring and control system was developed on an IoT platform with an additional user-friendly, programmable farming-routine human-machine interface (HMI). The HMI is a software interface capable of conducting the system directly via autonomous cloud control; it supports time management and contributes to a more efficient workforce. The aim of developing an IoT-based monitoring and control system for rockmelon in this study was achieved: harvesting succeeded according to the scheduled routine, and the quality of the yield was satisfactory.
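
    A minimal sketch of one such monitoring-and-control cycle (the thresholds, sensors, and actuators are hypothetical stand-ins; the paper's actual setpoints and IoT platform are not specified in the abstract):

```python
import random
import time

# Hypothetical setpoints for a fertigation routine.
MOISTURE_LOW = 35.0   # % volumetric water content below which we irrigate
EC_LOW = 1.8          # mS/cm below which we dose fertilizer

def read_soil_moisture():
    return random.uniform(20.0, 60.0)   # stand-in for a real sensor read

def read_ec():
    return random.uniform(1.0, 3.0)     # stand-in for a real EC probe

def control_cycle():
    """One cycle: read sensors, decide on actuation, report the state
    (a real system would publish this to the IoT platform instead)."""
    moisture, ec = read_soil_moisture(), read_ec()
    pump_on = moisture < MOISTURE_LOW
    doser_on = ec < EC_LOW
    print(f"moisture={moisture:.1f}% ec={ec:.2f} "
          f"pump={'ON' if pump_on else 'off'} doser={'ON' if doser_on else 'off'}")

for _ in range(3):   # illustrative scheduled routine
    control_cycle()
    time.sleep(1)
```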

    Mobile communication (2G, 3G and 4G) and future interest of 5G in Pakistan: A review

    The use of mobile communication is growing radically with every passing year, and the new reality is the fifth generation (5G) of mobile communication technology, which requires expensive infrastructural adjustment and upgrading. Pakistan currently has one of the largest populations of biometrically verified mobile users. At the same time, however, the country lags considerably in mobile internet adoption, with just half of mobile device owners holding a broadband subscription. It is a viable market with a large segment yet to be tapped. With Pakistan's advancing progress towards internet of things (IoT) connectivity, e.g., solar-powered home solutions, smart city projects, and on-board diagnostics (OBD), the demands for speed, bandwidth, and reliability are on the rise. In this paper, Pakistan's prevalent mobile communication networks, i.e., the second, third, and fourth generations (2G, 3G, and 4G), are analyzed and examined in light of the country's demographics and challenges, and the future of 5G in Pakistan is discussed. The study reveals that non-infrastructural barriers drive the low adoption rate, which is the main reason behind the spectrum utilization gap, i.e., the minimal use of the 3G and 4G spectrum.