
    Fuzzy Side Information Clustering-Based Framework for Effective Recommendations

    Collaborative filtering (CF) is the most successful and widely implemented algorithm in the area of recommender systems (RSs). It generates recommendations from a set of user-product ratings by matching the similarity between the profiles of different users. Computing similarity among user profiles efficiently in the presence of sparse data is the most crucial component of the CF technique, and data sparsity and accuracy are the two major issues associated with the classical CF approach. In this paper, we address these issues using a novel approach based on side information (user-product background content) and the Mahalanobis distance measure. Side information has been incorporated into RSs to further improve their performance, especially in the case of data sparsity; however, incorporating it into traditional two-dimensional recommender systems increases the dimensionality and complexity of the system. Therefore, to alleviate this problem, we cluster users based on their side information using the k-means clustering algorithm, and similarity within each cluster is computed using the Mahalanobis distance. Additionally, we use fuzzy sets to represent the side information more efficiently. Experiments with two benchmark datasets show that our framework improves the recommendation quality and predictive accuracy of both traditional and clustering-based collaborative recommendations.
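
    A minimal sketch of the clustering and distance steps described above, under assumed toy data (the side-information matrix, cluster count, and similarity transform are illustrative; the paper's fuzzification and rating-prediction stages are not reproduced):

```python
# Sketch: cluster users by side information with k-means, then compute
# Mahalanobis-distance-based similarity between users inside the same cluster.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
side_info = rng.random((100, 5))          # 100 users, 5 side-information features (assumed)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(side_info)

def cluster_similarity(u, v, cluster_members):
    """Turn the Mahalanobis distance between users u and v into a similarity score."""
    cov = np.cov(cluster_members, rowvar=False)
    inv_cov = np.linalg.pinv(cov)         # pseudo-inverse guards against singular covariance
    d = mahalanobis(u, v, inv_cov)
    return 1.0 / (1.0 + d)                # larger distance -> smaller similarity

members = side_info[labels == labels[0]]
print(cluster_similarity(side_info[0], members[1], members))
```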

    Hybrid Recommender Systems: A Systematic Literature Review

    Recommender systems are software tools that generate and provide suggestions for items and other entities to users by exploiting various strategies. Hybrid recommender systems combine two or more recommendation strategies in different ways to benefit from their complementary advantages. This systematic literature review presents the state of the art in hybrid recommender systems of the last decade and is the first quantitative review work focused entirely on hybrid recommenders. We address the most relevant problems considered and present the associated data mining and recommendation techniques used to overcome them. We also explore the hybridization classes each hybrid recommender belongs to, the application domains, the evaluation process, and proposed future research directions. Based on our findings, most of the studies combine collaborative filtering with another technique, often in a weighted way. Cold-start and data sparsity are the two most frequently addressed problems (23 and 22 studies, respectively), and movies and movie datasets are still the most widely used by the authors. As most studies are evaluated by comparison with similar methods using accuracy metrics, providing more credible and user-oriented evaluations remains a typical challenge. Newer challenges were also identified, such as responding to variations in user context, evolving user tastes, and providing cross-domain recommendations. As a highly active topic, hybrid recommenders offer a good basis for exploring newer opportunities such as contextualizing recommendations, parallel hybrid algorithms, and processing larger datasets.
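
    As a toy illustration of the weighted hybridization the review identifies as most common, the sketch below blends a collaborative-filtering score with a content-based score using a fixed weight; the scoring functions and the weight are assumptions, not a method from any specific reviewed study:

```python
# Sketch of weighted hybridization: combine two recommendation scores linearly.
# `cf_score` and `content_score` stand in for any two component recommenders.
from typing import Callable

def weighted_hybrid(
    cf_score: Callable[[str, str], float],
    content_score: Callable[[str, str], float],
    alpha: float = 0.7,                       # assumed weight for the CF component
) -> Callable[[str, str], float]:
    def score(user: str, item: str) -> float:
        return alpha * cf_score(user, item) + (1 - alpha) * content_score(user, item)
    return score

# Toy component recommenders returning scores in [0, 1].
cf = lambda u, i: 0.8 if (u, i) == ("alice", "item42") else 0.2
cb = lambda u, i: 0.6

hybrid = weighted_hybrid(cf, cb)
print(hybrid("alice", "item42"))              # 0.7 * 0.8 + 0.3 * 0.6 = 0.74
```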

    Towards Integration of Artificial Intelligence into Medical Devices as a Real-Time Recommender System for Personalised Healthcare: State-of-the-Art and Future Prospects

    In the era of big data, artificial intelligence (AI) algorithms have the potential to revolutionize healthcare by improving patient outcomes and reducing healthcare costs. AI algorithms have frequently been used in healthcare for predictive modelling, image analysis, and drug discovery. Moreover, as recommender systems, these algorithms have shown promising impacts on personalized healthcare provision. A recommender system learns the behaviour of the user and predicts (recommends) their current preferences based on their previous preferences. Implementing AI as a recommender system improves this prediction accuracy and mitigates the cold-start and data sparsity problems. However, most of the methods and algorithms are tested in simulated settings that cannot recapitulate the influencing factors of the real world. This article systematically reviews prevailing methodologies in recommender systems and discusses AI algorithms as recommender systems specifically in the field of healthcare. It also discusses the most cutting-edge academic and practical contributions in the literature, identifies performance evaluation metrics and challenges in implementing AI as a recommender system, and examines the acceptance of AI-based recommender systems by clinicians. The findings of this article direct researchers and professionals toward understanding currently developed recommender systems and the future of medical devices integrated with real-time recommender systems for personalized healthcare.

    Non-invasive Detection and Compression of Fetal Electrocardiogram

    Noninvasive detection of fetal electrocardiogram (FECG) from abdominal ECG recordings relies heavily on statistical signal processing techniques such as independent component analysis (ICA), adaptive noise filtering, and multichannel blind deconvolution. In contrast to previous multichannel FECG extraction methods, several recent schemes for single-channel FECG extraction, such as the extended Kalman filter (EKF), extended Kalman smoother (EKS), template subtraction (TS), and support vector regression (SVR) for detecting R waves, are evaluated using quantitative metrics including sensitivity (SE), positive predictive value (PPV), F-score, detection error rate (DER), and range of accuracy. A correlation predictor combined with a multivariable gray model (GM) is also proposed for sequential ECG data compression, and it achieves a better percent root-mean-square difference (PRD) than Sabah's scheme at both fixed and predicted compression ratios (CR). Automatic calculation of fetal heart rate (FHR) from the FECG reconstructed from mixed abdominal ECG recordings is also tested with sample synthetic ECG data. Sample FHR and T/QRS data for both a physiological case and a pathological case are simulated over a 10-minute time sequence.
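
    For reference, the detection and compression metrics named above have widely used standard definitions; the sketch below computes them from hypothetical true-positive/false-positive/false-negative counts and a reconstructed signal, and is not the evaluation code of the cited schemes:

```python
# Sketch: common R-wave detection metrics and the PRD compression distortion metric.
import numpy as np

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    se = tp / (tp + fn)                       # sensitivity
    ppv = tp / (tp + fp)                      # positive predictive value
    f_score = 2 * se * ppv / (se + ppv)       # harmonic mean of SE and PPV
    der = (fp + fn) / (tp + fn)               # detection errors per reference beat
    return {"SE": se, "PPV": ppv, "F": f_score, "DER": der}

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percent root-mean-square difference between original and reconstructed signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

print(detection_metrics(tp=480, fp=5, fn=15))
x = np.sin(np.linspace(0, 20, 1000))
print(prd(x, x + 0.01 * np.random.default_rng(0).standard_normal(1000)))
```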

    Collaborative Filtering Similarity Algorithm Using Common Items

    Collaborative filtering (CF) plays an important role in reducing information overload by providing personalized services. Although CF is widely applied, common items are not taken into account in the similarity algorithm, which reduces the recommendation effect. To address this issue, we propose several methods that improve the similarity algorithm by considering common items, and we apply the proposed methods to CF recommender systems. Experiments show that our methods achieve significant improvements over traditional CF.
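
    The abstract does not spell out its exact formulas, but a common way to account for the number of co-rated items is significance weighting, which shrinks the Pearson similarity when two users share few items; the sketch below illustrates that general idea under assumed data (the threshold `gamma` and the weighting scheme are assumptions, not necessarily the paper's formulation):

```python
# Sketch: Pearson similarity down-weighted by the number of common (co-rated) items.
import numpy as np

def common_item_similarity(ratings_u: dict, ratings_v: dict, gamma: int = 30) -> float:
    common = sorted(set(ratings_u) & set(ratings_v))
    if len(common) < 2:
        return 0.0
    ru = np.array([ratings_u[i] for i in common], dtype=float)
    rv = np.array([ratings_v[i] for i in common], dtype=float)
    if ru.std() == 0 or rv.std() == 0:
        return 0.0
    pearson = np.corrcoef(ru, rv)[0, 1]
    weight = min(len(common), gamma) / gamma   # fewer common items -> smaller weight
    return weight * pearson

u = {"i1": 5, "i2": 3, "i3": 4, "i4": 1}
v = {"i2": 4, "i3": 5, "i4": 2, "i9": 3}
print(common_item_similarity(u, v))
```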

    Application of Improved Collaborative Filtering in the Recommendation of E-commerce Commodities

    Problems such as low recommendation precision and efficiency often arise in traditional collaborative filtering because of the huge volume of underlying data. To solve these problems, we propose a new algorithm that combines collaborative filtering and a support vector machine (SVM). Unlike traditional collaborative filtering, we use an SVM to classify commodities into positive and negative feedback, select the commodities with positive feedback to calculate comprehensive grades from marks and comments, and then build the SVM-based collaborative filtering algorithm on top of these grades. Experiments on data from Taobao (a Chinese online shopping website owned by Alibaba) show that the algorithm achieves good recommendation precision and efficiency, and thus has practical value in the e-commerce industry.
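
    A minimal sketch of the two-stage idea described above, under assumed toy features and labels: an SVM first separates positive from negative feedback, and only items judged positive are passed on to the collaborative-filtering stage. The feature construction and grade computation of the original algorithm are not reproduced:

```python
# Sketch: use an SVM to keep only positively reviewed commodities before running CF.
# Features and labels are toy assumptions standing in for Taobao marks/comments.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((200, 3))                      # e.g. mark, comment sentiment, seller score
y_train = (X_train.mean(axis=1) > 0.5).astype(int)  # 1 = positive feedback, 0 = negative

clf = SVC(kernel="rbf").fit(X_train, y_train)

candidate_items = ["item_a", "item_b", "item_c"]
candidate_features = rng.random((3, 3))
positive_items = [item for item, pred in zip(candidate_items, clf.predict(candidate_features))
                  if pred == 1]
print(positive_items)   # only these would enter the collaborative-filtering step
```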

    A recommender system based on collaborative filtering using ontology and dimensionality reduction techniques

    Improving the efficiency of methods has been a major challenge in recommender systems. It is also important to consider the trade-off between accuracy and computation time, as recommender systems need to produce accurate recommendations in real time. In this regard, this research develops a new hybrid recommendation method based on collaborative filtering (CF) approaches and addresses two main drawbacks of recommender systems, sparsity and scalability, using dimensionality reduction and ontology techniques. We use an ontology to improve the accuracy of recommendations in the CF part. In the CF part, we also use a dimensionality reduction technique, Singular Value Decomposition (SVD), to find the most similar items and users in each cluster of items and users, which significantly improves the scalability of the recommendation method. We evaluate the method on two real-world datasets to show its effectiveness and compare the results with those of methods in the literature. The results show that our method is effective in alleviating the sparsity and scalability problems in CF.
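
    A minimal sketch of the SVD step, assuming a small dense rating matrix: the matrix is factorized, and the most similar users are found in the reduced latent space. The ontology and clustering components of the method are omitted:

```python
# Sketch: truncated SVD on a ratings matrix, then cosine similarity of users in latent space.
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(50, 40)).astype(float)   # 50 users x 40 items, 0 = unrated

k = 10                                                       # assumed number of latent factors
u, s, vt = svds(ratings, k=k)
user_factors = u * s                                         # users embedded in latent space

def most_similar_user(target: int) -> int:
    x = user_factors[target]
    sims = user_factors @ x / (np.linalg.norm(user_factors, axis=1) * np.linalg.norm(x) + 1e-12)
    sims[target] = -np.inf                                   # exclude the user itself
    return int(np.argmax(sims))

print(most_similar_user(0))
```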

    Context-Specific Preference Learning of One Dimensional Quantitative Geospatial Attributes Using a Neuro-Fuzzy Approach

    Change detection is a topic of great importance for modern geospatial information systems. Digital aerial imagery provides an excellent medium for capturing geospatial information. Rapidly evolving environments and the availability of increasing amounts of diverse, multiresolutional imagery bring forward the need for frequent updates of these datasets. Analysis and queries of spatial data using potentially outdated data may sometimes yield invalid results. Due to measurement errors (systematic and random) and incomplete knowledge of information (uncertainty), it is ambiguous whether a change in a spatial dataset has really occurred. Therefore, we need to develop reliable, fast, and automated procedures that effectively report, based on information from a new image, whether a change has actually occurred or is simply the result of uncertainty. This thesis introduces a novel methodology for change detection in spatial objects using aerial digital imagery. The uncertainty of the extraction is used as a quality estimate in order to determine whether change has occurred. For this goal, we develop a fuzzy-logic system to estimate uncertainty values from the results of automated object extraction using active contour models (a.k.a. snakes). The differential snakes change detection algorithm is an extension of traditional snakes that incorporates previous information (i.e., the shape of the object and the uncertainty of extraction) as energy functionals. This process is followed by a procedure in which we examine the improvement of the uncertainty in the absence of change (versioning). We also introduce a post-extraction method for improving object extraction accuracy. In addition to linear objects, this thesis extends differential snakes to track deformations of areal objects (e.g., lake flooding, oil spills). From the polygonal description of a spatial object we can track its trajectory and areal changes. Differential snakes can also serve as the basis for similarity indices for areal objects; these indices are based on areal moments that are invariant under general affine transformations. Experimental results demonstrate the performance of the differential snakes change detection algorithm. More specifically, we show that differential snakes minimize false positives in change detection and reliably track object deformations.
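
    As a small illustration of the affine-invariant areal moments mentioned above, the sketch below computes the first affine moment invariant (Flusser-Suk) from a rasterized shape mask and compares an ellipse with a sheared copy of it; this is a generic construction under assumed inputs, not the thesis's differential-snakes implementation:

```python
# Sketch: first affine moment invariant I1 = (mu20*mu02 - mu11^2) / mu00^4,
# which stays (approximately) constant under affine transformations of a shape mask.
import numpy as np

def first_affine_invariant(mask: np.ndarray) -> float:
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                              # area in pixels
    xc, yc = xs.mean(), ys.mean()
    mu20 = np.sum((xs - xc) ** 2)
    mu02 = np.sum((ys - yc) ** 2)
    mu11 = np.sum((xs - xc) * (ys - yc))
    return (mu20 * mu02 - mu11 ** 2) / m00 ** 4

# A filled ellipse and a sheared (affine-transformed) version of it.
yy, xx = np.mgrid[0:200, 0:200]
ellipse = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 30.0) ** 2 <= 1.0
sheared = ((xx - 100 - 0.5 * (yy - 100)) / 60.0) ** 2 + ((yy - 100) / 30.0) ** 2 <= 1.0

print(first_affine_invariant(ellipse), first_affine_invariant(sheared))  # nearly equal
```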