11 research outputs found

    Review on Channel Estimation in OFDM

    OFDM is a wireless connectivity technique that sends multiple data streams over a single channel while efficiently handling inter-symbol interference and making better use of the available frequency band. Since the antenna transmits over a noisy channel, estimating the noise affecting the signal is essential, and channel estimation methods can be used to evaluate the impact of that noise on the transmitted signal. Orthogonal frequency division multiplexing (OFDM) is important in wireless communication for its high transmission rate. This paper therefore analyses orthogonal frequency division multiplexing and modulation techniques in multiple-input multiple-output (MIMO) user
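    As a rough illustration of the channel estimation idea mentioned above, the sketch below (Python/NumPy) performs least-squares estimation at pilot subcarriers and interpolates across the band. The comb-type pilot layout, the synthetic 4-tap channel, and the SNR are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of pilot-based least-squares (LS) channel estimation for OFDM.
import numpy as np

rng = np.random.default_rng(0)

n_sub = 64                                    # number of OFDM subcarriers (assumed)
pilot_idx = np.arange(0, n_sub, 8)            # comb-type pilots on every 8th subcarrier (assumed)
pilots = np.ones(pilot_idx.size, dtype=complex)   # known pilot symbols

# Synthetic frequency-selective channel: FFT of a short random impulse response.
h_time = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2 * 4)
H_true = np.fft.fft(h_time, n_sub)

# Received pilot subcarriers with additive white Gaussian noise.
snr_db = 20
noise_std = 10 ** (-snr_db / 20)
noise = noise_std * (rng.normal(size=pilot_idx.size)
                     + 1j * rng.normal(size=pilot_idx.size)) / np.sqrt(2)
Y_pilot = H_true[pilot_idx] * pilots + noise

# LS estimate at pilot positions: H_ls = Y / X.
H_ls = Y_pilot / pilots

# Interpolate real and imaginary parts separately to all subcarriers.
H_hat = (np.interp(np.arange(n_sub), pilot_idx, H_ls.real)
         + 1j * np.interp(np.arange(n_sub), pilot_idx, H_ls.imag))

mse = np.mean(np.abs(H_hat - H_true) ** 2)
print(f"channel estimation MSE at {snr_db} dB SNR: {mse:.4f}")
```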

    Deep Learning Based Channel Estimation in Data Driven MIMO Receiver

    OFDM (orthogonal frequency division multiplexing) is a wireless network methodology that sends multiple data streams across a particular channel while effectiently handling inter-symbol interference and enhancing frequency band available. And since the antenna is sending signals, evaluating the noise in a noisy channel is essential. This research aims into compressed sensing (CS) as a way to improve throughput and BER performance by transmitting additional data bits within every subcarrier frame whilst still limiting detector unpredictability. The Neuro-LS methodology is used in this study to generate a soft trellis decoding algorithm through channel estimation. Trellis decoding performs better BER, and DNN relying channel estimation outperforms BER, according to the findings
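    The sketch below illustrates the general data-driven idea described here: a small neural network learns to refine noisy least-squares pilot estimates. The network architecture, data model, and sizes are illustrative assumptions (using scikit-learn's MLPRegressor), not the paper's Neuro-LS implementation.

```python
# Hedged sketch: a small neural network refines noisy LS channel estimates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_pilots, n_taps, snr_db = 8, 4, 10
noise_std = 10 ** (-snr_db / 20)

def sample(n):
    """Generate (noisy LS estimate, true channel) pairs at pilot subcarriers."""
    X, Y = [], []
    for _ in range(n):
        h = (rng.normal(size=n_taps) + 1j * rng.normal(size=n_taps)) / np.sqrt(2 * n_taps)
        H = np.fft.fft(h, n_pilots)                       # true channel at pilots
        noise = noise_std * (rng.normal(size=n_pilots)
                             + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)
        H_ls = H + noise                                  # LS estimate = true + noise (unit pilots)
        X.append(np.concatenate([H_ls.real, H_ls.imag]))
        Y.append(np.concatenate([H.real, H.imag]))
    return np.array(X), np.array(Y)

X_train, Y_train = sample(5000)
X_test, Y_test = sample(500)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, Y_train)

mse_ls = np.mean((X_test - Y_test) ** 2)                  # plain LS error
mse_dnn = np.mean((net.predict(X_test) - Y_test) ** 2)    # refined estimate error
print(f"LS MSE: {mse_ls:.4f}  DNN-refined MSE: {mse_dnn:.4f}")
```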

    Arabic Opinion Mining Using a Hybrid Recommender System Approach

    Recommender systems nowadays play an important role in the delivery of services and information to users. Sentiment analysis (also known as opinion mining) is the process of determining the attitude of textual opinions: whether they are positive, negative, or neutral. Data sparsity is a major issue for recommender systems because of insufficient user ratings or an absence of data about users or items. This research proposes a hybrid approach combining sentiment analysis and recommender systems to tackle the data sparsity problem by predicting product ratings from users' reviews using text mining and NLP techniques. The work focuses especially on Arabic reviews, and the model is evaluated using the Opinion Corpus for Arabic (OCA) dataset. Our system was efficient, achieving an accuracy of nearly 85 percent in predicting ratings from reviews
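    As a minimal sketch of the kind of text-mining pipeline described here, the example below predicts a polarity label from review text so it could feed a recommender. The toy Arabic reviews, labels, and model choice are illustrative assumptions, not the paper's pipeline or the OCA data.

```python
# Hedged sketch: TF-IDF character n-grams + logistic regression for Arabic review polarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "الفيلم رائع وممتع",          # "the movie is wonderful and fun"
    "منتج سيء جدا ولا انصح به",    # "very bad product, not recommended"
    "خدمة ممتازة وسريعة",         # "excellent and fast service"
    "تجربة مخيبة للآمال",          # "a disappointing experience"
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

# Character n-grams are a common, language-light choice for Arabic text.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(reviews, labels)

print(model.predict(["منتج ممتاز"]))  # expected: positive (1)
```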

    Curriculum Restructuring and Job Creation Among Nigerian Graduates: The Mediating Role of Emerging Internet Applications

    Existing literature on entrepreneurship education has continually highlighted its potential for job creation. However, little attention has been paid to the curriculum restructuring that would allow entrepreneurship education to thrive for job creation. This study used a structural equation modelling approach to understand the mediating role that the deployment of emerging Internet Applications (IAs) plays in the nexus between curriculum restructuring and job creation. Being a quantitative study, a virtual snowball sample of 4,628 higher education graduates (males = 2,362; females = 2,266) participated in an electronic survey designed by the researchers. Results indicate that curriculum restructuring has a substantial link with both the deployment of emerging Internet Applications and job creation. The deployment of emerging Internet applications substantially contributes to the job creation activities of Nigerian graduates, and there is a significant positive mediation effect of this deployment on the link between curriculum restructuring and job creation. Based on these results, practical implications are discussed, and it is concluded that curriculum restructuring and the deployment of emerging Internet applications are important variables for job creation
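    The sketch below illustrates the mediation logic tested in the study (curriculum restructuring → Internet applications → job creation) with a simple product-of-coefficients approach on synthetic data. The variable names, effect sizes, and data-generating process are assumptions for illustration; the paper itself fits a structural equation model to survey data.

```python
# Hedged sketch: indirect (a*b) and direct (c') effects via two OLS regressions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
curriculum = rng.normal(size=n)                            # X: curriculum restructuring (assumed scale)
internet_apps = 0.6 * curriculum + rng.normal(size=n)      # M: emerging Internet applications
job_creation = 0.4 * internet_apps + 0.2 * curriculum + rng.normal(size=n)  # Y: job creation

# Path a: X -> M
a = sm.OLS(internet_apps, sm.add_constant(curriculum)).fit().params[1]

# Paths b (M -> Y) and c' (direct X -> Y), estimated jointly
fit = sm.OLS(job_creation, sm.add_constant(np.column_stack([internet_apps, curriculum]))).fit()
b, c_direct = fit.params[1], fit.params[2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}, direct effect c' = {c_direct:.3f}")
```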

    Sparse online collaborative filtering with dynamic regularization

    Collaborative filtering (CF) approaches are widely applied in recommender systems. Traditional CF approaches are costly to train and cannot capture changes in user interests and item popularity. Most CF approaches assume that user interests remain unchanged throughout the whole process; however, user preferences are always evolving and the popularity of items is always changing. Additionally, in a sparse matrix the amount of known rating data is very small. In this paper, we propose online collaborative filtering with dynamic regularization (OCF-DR), a method that considers dynamic information and uses a neighborhood factor to track dynamic change in online collaborative filtering (OCF). Results from experiments on the MovieLens100K, MovieLens1M, and HetRec2011 datasets show that the proposed methods are significant improvements over several baseline approaches
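    In the spirit of the online setting described above, the sketch below performs streaming matrix-factorization updates in which the regularization strength is adjusted dynamically per user and item (here, simply shrunk as more ratings are observed). The update rule and the way the regularizer adapts are illustrative assumptions, not the paper's exact OCF-DR algorithm.

```python
# Hedged sketch: online matrix factorization with a dynamically adapted regularizer.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 8
P = 0.1 * rng.normal(size=(n_users, k))   # user latent factors
Q = 0.1 * rng.normal(size=(n_items, k))   # item latent factors
seen_u = np.zeros(n_users)                # ratings seen so far per user
seen_i = np.zeros(n_items)                # ratings seen so far per item
lr, base_reg = 0.05, 0.1

def online_update(u, i, r):
    """Process one (user, item, rating) event as it arrives in the stream."""
    seen_u[u] += 1
    seen_i[i] += 1
    # Dynamic regularization: trust the data more as more ratings accumulate.
    reg_u = base_reg / np.sqrt(seen_u[u])
    reg_i = base_reg / np.sqrt(seen_i[i])
    err = r - P[u] @ Q[i]
    P[u] += lr * (err * Q[i] - reg_u * P[u])
    Q[i] += lr * (err * P[u] - reg_i * Q[i])
    return err

# Feed a small synthetic rating stream through the online learner.
stream = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6)) for _ in range(2000)]
abs_errs = [abs(online_update(u, i, r)) for u, i, r in stream]
print(f"mean |error|, first 100 events: {np.mean(abs_errs[:100]):.2f}, "
      f"last 100 events: {np.mean(abs_errs[-100:]):.2f}")
```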

    A novel hybrid recommendation system for library book selection

    The increasing number of books published each year and shrinking budgets have made collection development increasingly difficult in libraries. Although the data to support decision making is available in library systems, librarians have few means to utilize it. In addition, modern key technologies such as machine learning, which generate more value out of data, have not yet been utilized to their full extent in the field of libraries. This study set out to discover a way to build a recommendation system that could help librarians who struggle with the book selection process. This thesis proposes a novel hybrid recommendation system for library book selection. The data used to build the system consisted of book metadata and circulation data for books in Joensuu City Library’s adult fiction collection. The proposed system is based on both rule-based components and a machine learning model. The user interface was built using web technologies so that the system can be used in a web browser. The proposed recommendation system was evaluated using two different methods: automated tests and focus-group methodology. The system achieved an accuracy of 79.79% and an F1 score of 0.86 in automated tests, with an uncertainty rate of 27.87%; with these results, the proposed system outperformed baseline machine learning models. The main feedback gathered from the focus-group evaluation was that, while the proposed system was found interesting, librarians thought it would need more features and configurability to be usable in real-world scenarios. The results indicate that making good-quality recommendations using book metadata is challenging because the data is high-dimensional categorical data by nature. The main implication is that recommendation systems in the domain of library collection development should focus on data pre-processing and feature engineering. Further investigation into knowledge representation is suggested
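    As a minimal sketch of the hybrid idea described here, the example below combines a rule-based gate with a machine-learning model trained on book metadata and circulation counts. The rules, toy catalogue records, features, and model choice are illustrative assumptions only, not the thesis's system.

```python
# Hedged sketch: rule-based filtering followed by a learned selection score.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Toy catalogue records: metadata plus past circulation counts.
books = [
    {"genre": "crime",   "language": "fi", "loans_per_year": 42},
    {"genre": "romance", "language": "fi", "loans_per_year": 30},
    {"genre": "crime",   "language": "en", "loans_per_year": 5},
    {"genre": "poetry",  "language": "fi", "loans_per_year": 2},
]
selected = [1, 1, 0, 0]  # past selection decisions (toy labels)

model = make_pipeline(DictVectorizer(sparse=False), RandomForestClassifier(random_state=0))
model.fit(books, selected)

def selection_score(candidate):
    """Rule-based gate first, then the learned model scores the remaining candidates."""
    if candidate["language"] != "fi":   # hypothetical rule: collection is Finnish-language
        return 0.0
    return model.predict_proba([candidate])[0][1]

print(selection_score({"genre": "crime", "language": "fi", "loans_per_year": 20}))
```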

    On-Demand Programming Recommendation System using Knowledge Graphs

    With the increasing advancement of the internet and mobile technology, we are facing an information overload phenomenon. One of the solutions to this overload is to filter data for end-users, and the information filtering process that provides more personalized results constitutes the main component of a recommendation system. Recommendation systems aim to provide results closer to users’ preferences; if users have access to content that meets their needs, higher user satisfaction is obtained. One domain that can benefit from recommendation systems is helping programmers write more efficient code and develop faster by presenting them with solutions or code samples related to their requirements. Although major repositories such as Stackoverflow and GitHub try to address this problem, there are still considerable shortcomings regarding problem formulation and personalized results. In this thesis, we propose an on-demand programming assistance system that first helps developers present their problems. Then, through a natural language processing (NLP) module, the platform extracts valuable data from the presented problem. The questions asked on our platform form knowledge objects, from which a knowledge graph is constructed following an efficient data model created on a graph database. Based on the data extracted from a knowledge object, the search module provides results from the Stackoverflow and GitHub APIs. End-users who ask questions on the platform can save search results for later or express their opinion of the results by marking them as useful libraries. In addition, the platform provides a list of developers who have experience with different end-users’ problems; after the interaction, end-users can mark those developers as experts, and a sub-graph of the expert developers is appended to the knowledge graph. The platform collects 191 real-world programming problems for eight different programming languages via its data model and represents them in the graph database in the form of nodes and edges. The proposed recommendation system relies on the constructed knowledge graph to provide the end-user with a recommendation list containing libraries and experts from similar knowledge objects in the knowledge graph. Two main recommendation techniques, collaborative-based and content-based filtering, are used to create a robust recommendation system. The content-based method is used when an end-user is new to the system or there are no similar knowledge objects in the knowledge graph; the Jaccard index similarity, a weighting algorithm, and two different similarity measurement algorithms are used to build it. When there is enough information regarding the knowledge object, the collaborative method is employed instead, using the cosine similarity algorithm on the knowledge graph. The two main algorithms, the Jaccard index similarity and the cosine similarity, were tested in two settings: first in their standard forms, and second in the proposed optimized form that benefits from auxiliary data on the knowledge graph. These auxiliary data are nodes and edges, which help provide more filtering on the results. The proposed method brings more accurate results in comparison with standard baseline algorithms
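    The sketch below shows the two similarity measures the thesis names: the Jaccard index on keyword sets (content-based path) and cosine similarity on interaction vectors (collaborative path). The example knowledge objects and vectors are made-up assumptions.

```python
# Hedged sketch: Jaccard and cosine similarity for knowledge-object matching.
import numpy as np

def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two interaction vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Content-based: compare extracted keywords of two programming questions.
q1 = {"python", "pandas", "dataframe", "merge"}
q2 = {"python", "pandas", "groupby"}
print(f"Jaccard(q1, q2) = {jaccard(q1, q2):.2f}")

# Collaborative: compare "useful library" feedback vectors of two knowledge objects.
k1 = np.array([1, 0, 1, 1, 0])
k2 = np.array([1, 0, 0, 1, 0])
print(f"cosine(k1, k2)  = {cosine(k1, k2):.2f}")
```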

    How much do we know about the User-Item Matrix?: Deep Feature Extraction for Recommendation

    Collaborative filtering-based recommender systems typically operate on a high-dimensional sparse user-item matrix. Matrix completion is one of the most common formulations: rows and columns represent users and items, and predicting users’ ratings of items corresponds to filling in the missing entries of the matrix. In practice, it is very challenging to predict one person's interests based on millions of other users who have each seen only a small subset of thousands of items. We consider how to extract the key features of users and items from the rating matrix into a low-dimensional vector, and how to create embeddings that represent the characteristics of users and items well by exploring what kind of user/item information in the matrix to use. Recent studies have focused on utilising side information, such as a user's age or a movie's genre, but such information is not always available and is hard to extract; more importantly, there has been no recent research on how to efficiently extract the important latent features from a sparse data matrix with no side information (the first problem). The second problem is that most matrix completion techniques focus on semantic similarity between users and items, transforming the rating matrix into a user/item similarity matrix or a graph and neglecting the position of each element (user, item, and rating) in the matrix. We argue that position is fundamental in matrix completion, since the specific entry to be filled is identified by the positions of its row and column. To address the first problem, we aim to represent a high-dimensional sparse user-item matrix in a low-dimensional space with a small number of important features, and propose a Global-Local Kernel-based matrix completion framework, GLocal-K, which consists of two major stages. First, we pre-train an autoencoder with a local kernelised weight matrix, which transforms the data from one space into the feature space using a 2d-RBF kernel. Then, the pre-trained autoencoder is fine-tuned with the rating matrix produced by a convolution-based global kernel, which captures the characteristics of each item. GLocal-K outperforms the state-of-the-art baselines on three collaborative filtering benchmarks, but it cannot show its superior feature extraction ability when the data is very large or extremely sparse. To address the second problem and GLocal-K's limitation, we propose SUPER-Rec, a novel position-enhanced user/item representation training model for recommendation. We first capture the rating position in the matrix using relative positional rating encoding and store the position-enhanced rating information and its user-item relationship in a fixed-dimension embedding that is not affected by the matrix size. Then, we apply the trained position-enhanced user and item representations to the simplest traditional machine learning models to highlight the pure novelty of the SUPER-Rec representation. We contribute the first formal introduction and quantitative analysis of position-enhanced user/item representations in the recommendation domain and provide a principled discussion of SUPER-Rec, which achieves strong RMSE/MAE/NDCG/AUC results (i.e., both rating and ranking prediction accuracy) by a large margin compared with various state-of-the-art matrix completion models on both explicit and implicit feedback datasets. For example, SUPER-Rec showed a 28.2% RMSE decrease on ML-1M compared to the best baseline, whereas error decreases of only 0.3% to 4.1% were typical among all the baselines
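    The sketch below illustrates the "local kernelised weight matrix" idea described for GLocal-K: an autoencoder layer whose dense weights are gated element-wise by a 2d RBF kernel computed between learned per-unit embedding vectors. The shapes, initialisation, kernel bandwidth, and tied-weight decoder are illustrative assumptions, not the published model.

```python
# Hedged sketch: RBF-kernelised weights gating an autoencoder layer over a rating vector.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, emb_dim = 100, 20, 5           # items, hidden units, kernel-embedding size

W = 0.1 * rng.normal(size=(n_in, n_hidden))    # raw weight matrix
U = rng.normal(size=(n_in, emb_dim))           # embedding per input unit
V = rng.normal(size=(n_hidden, emb_dim))       # embedding per hidden unit

def rbf_kernel_matrix(U, V, gamma=1.0):
    """K[i, j] = exp(-gamma * ||U_i - V_j||^2), the 2d RBF kernel over unit pairs."""
    sq_dists = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

W_local = W * rbf_kernel_matrix(U, V)          # kernelised (localised) weights

# Forward pass of the autoencoder on one user's zero-filled rating vector.
ratings = np.zeros(n_in)
ratings[rng.choice(n_in, size=10, replace=False)] = rng.integers(1, 6, size=10)
hidden = np.tanh(ratings @ W_local)
reconstruction = hidden @ W_local.T            # tied weights, for simplicity
print(reconstruction.shape)                    # (100,): predicted ratings for all items
```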

    A survey of collaborative filtering-based recommender systems for mobile internet applications

    With the rapid development and application of the mobile Internet, huge amounts of user data are generated and collected every day. How to take full advantage of these ubiquitous data has become an essential aspect of a recommender system. Collaborative filtering (CF) has been widely studied and utilized to predict the interests of mobile users and to make proper recommendations. In this paper, we first propose a framework for a CF recommender system based on various user data, including user ratings and user behaviors, and discuss the key features of these two kinds of data. Moreover, several typical CF algorithms are classified as memory-based or model-based approaches and compared. Two case studies are presented to validate the proposed framework
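    As a minimal sketch of a memory-based CF approach of the kind the survey classifies, the example below makes a user-based neighbourhood prediction with cosine similarity over a small rating matrix. The toy matrix and neighbourhood size are illustrative assumptions.

```python
# Hedged sketch: user-based memory CF with cosine similarity over co-rated items.
import numpy as np

R = np.array([                 # rows: users, columns: items, 0 = unrated
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)                     # compare only co-rated items
    if not mask.any():
        return 0.0
    den = np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])
    return (a[mask] @ b[mask]) / den if den else 0.0

def predict(user, item, k=2):
    """Weighted average of the k most similar users who rated the item."""
    sims = np.array([cosine_sim(R[user], R[v]) if v != user and R[v, item] > 0 else 0.0
                     for v in range(R.shape[0])])
    top = np.argsort(sims)[::-1][:k]
    if sims[top].sum() == 0:
        return R[R[:, item] > 0, item].mean()    # fall back to the item's mean rating
    return sims[top] @ R[top, item] / sims[top].sum()

print(f"predicted rating of user 0 for item 2: {predict(0, 2):.2f}")
```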