31 research outputs found

    Using Feature Selection Methods to Discover Common Users’ Preferences for Online Recommender Systems

    Get PDF
    Recommender systems have taken over users' choice of the items and services they want from online markets, where large amounts of merchandise are traded. Collaborative filtering-based recommender systems use user opinions and preferences. Determining the commonly used attributes that influence preferences, used for prediction and subsequent recommendation of unknown or new items to users, is a significant objective when developing recommender engines. In conventional systems, a study of user behavior would be carried out to learn their likes and dislikes over items. This paper presents feature selection methods to mine such preferences through the selection of highly influential attributes of the items. In machine learning, feature selection is used as a data pre-processing method, but this work extended its use to achieve two objectives: removal of redundant, uninformative features, and selection of formative, relevant features based on the response variable. The latter objective was intended to identify and determine the frequent, shared features most preferred by online marketplace users as they express their preferences. A synthetic dataset was used for experimentation. The experiments were run in Python using Jupyter Notebook. Results showed that, given a number of formative features, some were selected with high influence on the response variable. Evidence showed that different feature selection methods produced different feature scores, and the intrinsic method had the best overall results with 85% model accuracy. The selected features were taken as the frequently preferred attributes that influence users' preferences.
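The two objectives above (a filter method scoring features against the response variable, and an intrinsic/embedded method learning importances during model fitting) can be sketched as follows. This is a minimal illustration on a synthetic dataset, not the paper's actual data or pipeline; all names are illustrative.

```python
# Sketch of the two feature-selection objectives: a filter method and an
# intrinsic (embedded) method. Synthetic data stands in for the paper's dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic dataset: 10 item attributes, of which 4 actually drive preference.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           n_redundant=2, random_state=0)

# Filter method: score each attribute against the response variable.
selector = SelectKBest(score_func=f_classif, k=4).fit(X, y)
filter_picks = set(np.flatnonzero(selector.get_support()))

# Intrinsic (embedded) method: importances learned while fitting a model.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
intrinsic_picks = set(np.argsort(forest.feature_importances_)[-4:])

print("filter-selected features:", sorted(filter_picks))
print("intrinsic-selected features:", sorted(intrinsic_picks))
```

As the abstract reports, different methods can return different feature subsets and scores; the selected indices here need not coincide.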

    Developing Hybrid-Based Recommender System with Naïve Bayes Optimization to Increase Prediction Efficiency

    Get PDF
    The commerce and entertainment worlds today have shifted to digital platforms, where customer preferences are suggested by recommender systems. Recommendations have been made using a variety of methods such as content-based, collaborative filtering-based, or hybrid approaches. Collaborative systems are common recommenders that use similar users' preferences. They, however, have issues such as data sparsity, the cold start problem, and a lack of scalability. When a small percentage of users express their preferences, the data becomes highly sparse, affecting the quality of recommendations. New users or items with no preferences create cold start issues that affect recommendations. A high amount of sparse data affects how the user-item matrices are formed and thus the overall recommendation results. This paper proposes handling data input in the recommender engine so as to reduce data sparsity and increase its potential to scale up. It proposes the development of a hybrid model with data optimization using a Naïve Bayes classifier, aimed at reducing the data sparsity problem, and a blend of a collaborative filtering model and association rule mining-based ensembles for recommending items, aimed at improving their predictions. Machine learning in Python on Jupyter Notebook was used to develop the hybrid. The models were tested using the MovieLens 100k and 1M datasets. We demonstrate the final recommendations of the hybrid, with a new top ten of highly rated movies and 68% approved recommendations. We confirm that new items were suggested to the active user(s) while less sparse data was input, and that the collaborative filtering model scaled up better, thus improving model efficacy and predictions.
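One way a Naïve Bayes classifier can reduce sparsity before collaborative filtering is to predict the missing entries of the user-item matrix from item features. The sketch below is a minimal toy illustration of that idea, not the paper's exact pipeline; the ratings and item features are invented.

```python
# Minimal sketch (not the paper's exact pipeline): use a Naive Bayes classifier
# to densify a sparse user-item ratings matrix before collaborative filtering.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy ratings: rows = users, cols = items, 0 = unrated (sparse).
R = np.array([[5, 0, 3, 4],
              [4, 2, 0, 4],
              [0, 1, 4, 5],
              [5, 2, 3, 0]], dtype=float)

# Hypothetical item features (e.g. genre flags) used as NB inputs.
item_feats = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])

dense = R.copy()
for u in range(R.shape[0]):
    rated = R[u] > 0
    if rated.all():
        continue
    # Train a per-user NB model on the items the user has rated...
    clf = GaussianNB().fit(item_feats[rated], R[u, rated])
    # ...and predict ratings for the unrated items, reducing sparsity.
    dense[u, ~rated] = clf.predict(item_feats[~rated])

print(dense)  # no zero (missing) entries remain
```

The densified matrix can then feed a standard collaborative filtering step, which is where the abstract's blend with association rule mining-based ensembles would come in.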

    A Survey of Deep Learning Solutions for Anomaly Detection in Surveillance Videos

    Get PDF
    Deep learning has proven to be a landmark computing approach in the computer vision domain. Hence, it has been widely applied to solve complex cognitive tasks such as the detection of anomalies in surveillance videos. Anomaly detection in this case is the identification of abnormal events in surveillance videos that can be deemed security incidents or threats. Deep learning solutions for anomaly detection have outperformed traditional machine learning solutions. This review attempts to provide a holistic benchmarking of the deep learning solutions for video anomaly detection published since 2016. The paper identifies the learning technique, the datasets used, and the overall model accuracy. Reviewed papers were organised into five deep learning methods, namely: autoencoders, continual learning, transfer learning, reinforcement learning, and ensemble learning. Current and emerging trends are discussed as well.
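The autoencoder family surveyed above flags anomalies by reconstruction error: a model trained only on normal data reconstructs normal inputs well and abnormal ones poorly. A linear stand-in (a PCA-style subspace via SVD) shows the principle on synthetic vectors; a real video system would use a deep convolutional autoencoder on frames.

```python
# Reconstruction-error anomaly detection, illustrated with a linear
# "autoencoder" (projection onto a subspace learned from normal data only).
import numpy as np

rng = np.random.default_rng(0)
# Low-rank synthetic "normal" frames (rank 20, dimension 50).
normal = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 50))
# One off-manifold frame standing in for an abnormal event.
anomaly = rng.normal(size=(1, 50)) * 5.0

# "Train": learn a low-dimensional basis from normal data only.
_, _, Vt = np.linalg.svd(normal, full_matrices=False)
basis = Vt[:20]  # top components span the normal subspace

def recon_error(x):
    """Squared error after projecting onto the normal subspace and back."""
    x_hat = (x @ basis.T) @ basis
    return float(((x - x_hat) ** 2).sum())

normal_err = np.mean([recon_error(f) for f in normal])
print("mean normal error:", normal_err)           # near zero
print("anomaly error:", recon_error(anomaly[0]))  # much larger
```

Thresholding this error is the basic decision rule; the deep variants in the survey differ mainly in how the "normal subspace" is learned.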

    Feature Extraction using Histogram of Oriented Gradients for Image Classification in Maize Leaf Diseases

    Get PDF
    The paper presents feature extraction methods and classification algorithms used to classify maize leaf disease images. From maize disease images, features are extracted and passed to a machine learning classification algorithm, which identifies the possible disease based on the features detected by the feature extraction method. The maize disease images used include images of common rust, leaf spot, and northern leaf blight, as well as healthy images. The feature extraction methods were evaluated to see which performs best with image classification algorithms. Based on the evaluation, the outcomes revealed that Histogram of Oriented Gradients performed best with the classifiers compared to KAZE and Oriented FAST and Rotated BRIEF. The random forest classifier emerged as the best at image classification, based on four performance metrics: accuracy, precision, recall, and F1-score. The experimental outcome indicated that random forest achieved 0.74 accuracy, 0.77 precision, 0.77 recall, and a 0.75 F1-score.
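The core of Histogram of Oriented Gradients is a per-cell histogram of gradient directions, weighted by gradient magnitude. The numpy-only sketch below shows that core idea on a toy image; production pipelines (e.g. `skimage.feature.hog`) add block normalization and overlapping cells, and the resulting vector is what gets passed to a classifier such as random forest.

```python
# A minimal, numpy-only sketch of the HOG idea: unsigned-orientation gradient
# histograms over non-overlapping cells. Not a full HOG implementation.
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Magnitude-weighted orientation histograms over non-overlapping cells."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# Toy 32x32 "leaf" image with a single vertical edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
f = hog_features(img)
print(f.shape)  # 16 cells x 9 bins -> (144,)
```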

    Organizational Implementation of Information Systems Innovations (OIISI) Framework was developed in the context of Universities in Kenya

    No full text
    The Organizational Implementation of Information Systems Innovations (OIISI) Framework was developed in the context of universities in Kenya and can be used to understand the implementation of Information Systems (IS) innovations in Higher Education Institutions (HEIs). The aim of this study was to determine the degree of associations and relationships in the OIISI framework in HEIs and, in so doing, to provide researchers and practitioners with a valid and reliable instrument covering all the key constructs identified by the framework. In this study, the framework was tested in the context of HEIs in Kenya. To do so, data were collected from identified respondents in selected HEIs that had implemented IS or were in the implementation process, analyzed, and the outcomes presented, thereby validating the relationships.

    Impact of Managerial Interventions on Process in implementing Information Systems for Higher Education Institutions in a developing country

    No full text
    The Organizational Implementation of Information Systems Innovations (OIISI) Framework was developed in the context of universities in Kenya and can be used to understand the implementation of Information Systems (IS) innovations in Higher Education Institutions (HEIs). A quantitative research approach was taken to determine the degree of associations and relationships in the OIISI framework in HEIs and, in so doing, to provide researchers and practitioners with a valid and reliable instrument covering all the key constructs identified by the framework. In this study, data were collected from identified respondents in selected HEIs that had implemented IS or were in the implementation process, analyzed, and the outcomes presented, thereby validating the relationships. A judgmental and convenience sampling design was used to select the HEIs. A questionnaire based on a seven-point Likert scale was administered to different participants in IS implementation in the selected HEIs in Kenya, and confirmatory factor analysis (CFA) was used to determine regression coefficients between the constructs of interest. The chi-square goodness-of-fit test was used to test model adequacy, together with other goodness-of-fit statistics. The null hypothesis for this test was that the model adequately accounts for the data, while the alternative was that there is a significant amount of discrepancy. To test the hypotheses, correlation coefficients were found, hypotheses were tested, and the coefficient of determination was calculated for explanatory purposes. The results of this study indicate a correlation coefficient of 0.6 between Managerial Intervention (MI) and Implementation Process (IP), which is positive and significant at the 0.01 level, indicating a statistically significant relationship between MI and IP.
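The reported correlation of 0.6 between MI and IP implies a coefficient of determination of r² = 0.36, i.e. MI accounts for about 36% of the variance in IP. The snippet below shows the computation on invented Likert-style scores (illustrative only, not the study's data).

```python
# Pearson correlation and coefficient of determination on toy Likert scores.
# The scores below are invented for illustration, not the study's data.
import numpy as np

mi = np.array([2, 3, 4, 5, 5, 6, 7, 3, 4, 6], dtype=float)  # Managerial Intervention
ip = np.array([3, 3, 5, 4, 6, 5, 7, 2, 5, 6], dtype=float)  # Implementation Process

r = np.corrcoef(mi, ip)[0, 1]  # Pearson correlation coefficient
r2 = r ** 2                    # coefficient of determination (variance explained)
print("r =", round(r, 2), " r^2 =", round(r2, 2))
```

With the study's r = 0.6, the same calculation gives r² = 0.36.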

    Population status of Jackson's Widowbird Euplectes jacksoni in Mau Narok-Molo grasslands Important Bird Area, Kenya

    No full text
    Volume: 27, Start Page: 10, End Page: 1

    Automatic Debugging Approaches: A Literature Review

    No full text
    Fixing failed computer programs involves two fundamental debugging tasks: first, the programmer has to reproduce the failure; second, s/he has to find the failure cause. Software debugging is the process of locating and correcting erroneous statements in a faulty program revealed by testing. It is extremely time-consuming and very expensive. The term debugging collectively refers to fault localization, understanding, and correction. Automated tools to locate and correct the erroneous statements in a program can significantly reduce the cost of software development and improve the overall quality of the software. This paper discusses fault localization, program slicing, and delta debugging techniques. It identifies statistical fault localization tools such as Tarantula and GZoltar, and others, such as dbx and the Microsoft Visual C++ debugger, that provide a snapshot of the program state at various breakpoints along an execution path. In conclusion, we note that most software development companies spend a huge amount of resources on testing and debugging. Much more research needs to be conducted to fully automate the debugging process, thereby reducing software production cost and time and improving quality.
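The Tarantula technique mentioned above ranks statements by how often failing versus passing tests execute them. Its standard suspiciousness formula is shown below on a toy coverage example (the counts are invented for illustration).

```python
# Tarantula suspiciousness: statements executed mostly by failing tests
# score near 1, statements executed only by passing tests score 0.
def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """Suspiciousness of one statement from its test-execution counts."""
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

# Toy data: 2 failing and 3 passing tests in the suite.
total_failed, total_passed = 2, 3
s1 = tarantula(2, 1, total_failed, total_passed)  # runs in all failures, 1 pass
s2 = tarantula(0, 3, total_failed, total_passed)  # runs only in passing tests
print(s1, s2)  # s1 = 1/(1 + 1/3) = 0.75, s2 = 0.0
```

Statements are then inspected in decreasing order of suspiciousness, which is how such tools reduce the manual fault-localization effort discussed in the review.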