341 research outputs found

    Fuzzy Side Information Clustering-Based Framework for Effective Recommendations

    Collaborative filtering (CF) is the most successful and widely implemented algorithm in the area of recommender systems (RSs). It generates recommendations from a set of user-product ratings by matching the similarity between the profiles of different users. Computing similarity among user profiles efficiently in the case of sparse data is the most crucial component of the CF technique. Data sparsity and accuracy are the two major issues associated with the classical CF approach. In this paper, we address these issues with a novel approach based on side information (user-product background content) and the Mahalanobis distance measure. Side information has been incorporated into RSs to further improve their performance, especially in the case of data sparsity. However, incorporating side information into traditional two-dimensional recommender systems increases the dimensionality and complexity of the system. Therefore, to alleviate the dimensionality problem, we cluster users based on their side information using the k-means clustering algorithm, and compute each user's similarity using the Mahalanobis distance method. Additionally, we use fuzzy sets to represent the side information more efficiently. Results of experiments on two benchmark datasets show that our framework improves the recommendation quality and predictive accuracy of both traditional and clustering-based collaborative recommendations.
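    The Mahalanobis similarity step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy rating matrix, the regularization constant, and the distance-to-similarity mapping `1 / (1 + d)` are all assumptions for demonstration.

    ```python
    import numpy as np

    # Hypothetical toy user-item rating matrix (rows: users, cols: items);
    # the values are illustrative, not from the paper's benchmark datasets.
    ratings = np.array([
        [5.0, 3.0, 4.0, 1.0],
        [4.0, 3.0, 5.0, 1.0],
        [1.0, 2.0, 1.0, 5.0],
        [2.0, 1.0, 2.0, 4.0],
    ])

    # Covariance of the item dimensions, regularized so it stays invertible
    cov = np.cov(ratings, rowvar=False) + 1e-3 * np.eye(ratings.shape[1])
    cov_inv = np.linalg.inv(cov)

    def mahalanobis_similarity(u, v):
        """Similarity derived from the Mahalanobis distance between two users."""
        diff = u - v
        d = np.sqrt(diff @ cov_inv @ diff)
        return 1.0 / (1.0 + d)  # map a distance in [0, inf) to a (0, 1] similarity

    sim = mahalanobis_similarity(ratings[0], ratings[1])
    ```

    In the framework, such similarities would be computed only between users assigned to the same k-means cluster of side information, which avoids comparing every pair of users in the full-dimensional space.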

    An Ontology-Based Knowledge Representation Using Analytic Hierarchy Process for Enhancing Selection of Product Preferences

    The large number of product alternatives that emerges from websites during a search accounts for some of the hesitation customers experience in selecting a satisfying product. As a result, making useful decisions among many trade-off considerations becomes a major difficulty. Several approaches have been employed for product selection, such as fuzzy logic, neuro-fuzzy systems, and weighted least squares. However, these could not solve the problems of inconsistency and irrelevant judgement that occur in decision making. In this study, an ontology-based Analytic Hierarchy Process (AHP) was used to enhance the selection of product preferences. The model involves three fundamental components: product gathering, selection, and decision making. The Web Ontology Language (OWL) was used to define an ontology that expresses gathered product information in a standard and structured manner for interoperability, while AHP was employed to make optimal choices. The procedure accepts customers' perspectives as inputs, which are classified into criteria and sub-criteria. An OWL ontology was created to foster customer interaction and to serve as a priority-estimation tool for AHP, in order to generate the consistency ratio of individual judgements. The model was benchmarked against the Geometric Mean (GM), Eigenvector (EV), Normalized Column Sum (NCS), Weighted Least Square (WLS) and Fuzzy Preference Programming (FPP) methods. First- and second-order total deviations and the violation rate were the performance parameters evaluated against AHP. The results showed that the minimum and maximum units of products were 2,452 and 3,574, respectively. This implies that the proposed model was consistent and relevant, and reflected no violation of judgement in the selection of product preferences.
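    The AHP priority estimation and consistency ratio mentioned above follow a standard computation: take the principal eigenvector of a pairwise comparison matrix as the weights, then compare the consistency index against Saaty's random index. The sketch below is a generic illustration of that standard method; the 3-criteria matrix and its values are hypothetical, not taken from the study.

    ```python
    import numpy as np

    # Saaty's random consistency index for matrix orders 1..5 (standard values)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

    def ahp_priorities(pairwise):
        """Priority weights and consistency ratio from a pairwise comparison matrix."""
        n = pairwise.shape[0]
        eigvals, eigvecs = np.linalg.eig(pairwise)
        k = np.argmax(eigvals.real)          # index of the principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()             # normalize weights to sum to 1
        lam_max = eigvals[k].real
        ci = (lam_max - n) / (n - 1)         # consistency index
        cr = ci / RI[n]                      # consistency ratio; CR < 0.1 is acceptable
        return weights, cr

    # Hypothetical pairwise judgements over three criteria (e.g. price,
    # quality, delivery); the ratios are illustrative only.
    A = np.array([
        [1.0,   3.0, 5.0],
        [1/3.0, 1.0, 2.0],
        [1/5.0, 1/2.0, 1.0],
    ])
    weights, cr = ahp_priorities(A)
    ```

    A judgement matrix whose consistency ratio exceeds 0.1 would be flagged as inconsistent and sent back to the customer for revision, which is the inconsistency check the study relies on.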

    Hybrid Recommender Systems: A Systematic Literature Review

    Recommender systems are software tools that generate and provide suggestions for items and other entities to users by exploiting various strategies. Hybrid recommender systems combine two or more recommendation strategies in different ways to benefit from their complementary advantages. This systematic literature review presents the state of the art in hybrid recommender systems over the last decade. It is the first quantitative review work completely focused on hybrid recommenders. We address the most relevant problems considered and present the associated data mining and recommendation techniques used to overcome them. We also explore the hybridization classes each hybrid recommender belongs to, the application domains, the evaluation process, and proposed future research directions. Based on our findings, most of the studies combine collaborative filtering with another technique, often in a weighted way. Cold-start and data sparsity remain the two traditional and top problems being addressed, in 23 and 22 studies respectively, while movies and movie datasets are still the most widely used by the authors. As most of the studies are evaluated by comparison with similar methods using accuracy metrics, providing more credible and user-oriented evaluations remains a typical challenge. Besides this, newer challenges were also identified, such as responding to the variation of user context, evolving user tastes, or providing cross-domain recommendations. Being a hot topic, hybrid recommenders represent a good basis from which to explore newer opportunities such as contextualizing recommendations, involving parallel hybrid algorithms, and processing larger datasets.
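    The weighted hybridization the review identifies as most common is simply a linear blend of the scores produced by two recommenders. A minimal sketch, in which the item names, scores, and the mixing weight `alpha` are all illustrative assumptions rather than values from any surveyed study:

    ```python
    # Hypothetical per-item scores from two recommenders
    cf_scores      = {"item_a": 0.9, "item_b": 0.4, "item_c": 0.6}  # collaborative filtering
    content_scores = {"item_a": 0.2, "item_b": 0.8, "item_c": 0.5}  # content-based

    def weighted_hybrid(item, alpha=0.7):
        """Linear weighted hybridization of two recommendation strategies."""
        return alpha * cf_scores[item] + (1 - alpha) * content_scores[item]

    # Rank items by the blended score, highest first
    ranking = sorted(cf_scores, key=weighted_hybrid, reverse=True)
    ```

    In practice the weight would be tuned (or learned per user) so that the component that performs better on the current data dominates, which is what makes the weighted class attractive against cold-start and sparsity.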

    Smart territories

    The concept of smart cities is relatively new in research. Thanks to the colossal advances in Artificial Intelligence made over the last decade, we are now able to do what we once thought impossible: we can build cities driven by information and technologies. In this keynote, we look at the success stories of smart city projects and analyse the factors that led them to success. The development of interactive, reliable and secure systems, both connectionist and symbolic, is often a time-consuming process involving numerous experts. However, intuitive and automated tools like "Deep Intelligence", developed by DCSc and BISITE, facilitate this process. Furthermore, in this talk we analyse the importance of complementary technologies such as IoT and Blockchain in the development of intelligent systems, as well as the use of edge platforms and fog computing.

    Smart Buildings

    This talk presents an efficient cyberphysical platform for the smart management of smart buildings: http://www.deepint.net. It is efficient because it facilitates the implementation of data acquisition and data management methods, as well as data representation and dashboard configuration. The platform allows for the use of any type of data source, ranging from the measurements of multi-functional IoT sensing devices to relational and non-relational databases. It is also smart because it incorporates a complete artificial intelligence suite for data analysis, including techniques for data classification, clustering, forecasting, optimization, visualization, etc. It is also compatible with the edge computing concept, allowing for the distribution of intelligence and the use of intelligent sensors. The concept of the smart building is evolving and adapting to new applications; the trend of creating intelligent neighbourhoods, districts or territories is becoming increasingly popular, as opposed to the previous approach of managing an entire megacity. In this paper, the platform is presented and its architecture and functionalities are described. Moreover, its operation has been validated in a case study at Salamanca - Ecocasa. The platform could enable smart buildings to develop adapted knowledge management systems, adapt them to new requirements, use multiple types of data, and execute efficient computational and artificial intelligence algorithms. The platform optimizes the decisions taken by human experts through explainable artificial intelligence models that obtain data from IoT sensors, databases, the Internet, etc. The global intelligence of the platform could potentially coordinate its decision-making processes with intelligent nodes installed at the edge, which would use the most advanced data processing techniques.

    Managing smart cities with deepint.net

    In this keynote, the evolution of intelligent computer systems will be examined. The need for human capital will be emphasised, as well as the need to follow one's "gut instinct" in problem-solving. We will look at the benefits of combining information and knowledge to solve complex problems, and will examine how knowledge engineering facilitates the integration of different algorithms. Furthermore, we will analyse the importance of complementary technologies such as IoT and Blockchain in the development of intelligent systems. It will be shown how tools like "Deep Intelligence" make it possible to create computer systems efficiently and effectively. "Smart" infrastructures need to incorporate all added-value resources so they can offer useful services to society, while reducing costs, ensuring reliability and improving the quality of life of citizens. The combination of AI with IoT and blockchain offers a world of possibilities and opportunities.

    Learning AI with deepint.net

    This keynote will examine the evolution of intelligent computer systems over recent years, underscoring the need for human capital in this field so that further progress can be made. In this regard, learning about AI through experience is a big challenge, but it is possible thanks to tools such as deepint.net, which enable anyone to develop AI systems; knowledge of programming is no longer necessary.

    Intelligent Models in Complex Problem Solving

    Artificial Intelligence has experienced a revival over the last decade. The need for progress, growing processing capacity and the low cost of the Cloud have facilitated the development of new, powerful algorithms. The efficiency of these algorithms in Big Data processing, Deep Learning and Convolutional Networks is transforming the way we work and opening new horizons.

    AIoT for Smart territories

    Artificial Intelligence has experienced a revival over the last decade. The need for progress, growing processing capacity and the low cost of the Cloud have facilitated the development of new, powerful algorithms. The efficiency of these algorithms in Big Data processing, Deep Learning and Convolutional Networks is transforming the way we work and opening new horizons. Thanks to them, we can now analyse data and obtain previously unimaginable solutions to today's problems. Nevertheless, our success is not based entirely on algorithms; it also comes from our ability to follow our "gut" when choosing the best combination of algorithms for an intelligent artefact. It is about approaching engineering with great knowledge and tact. This involves using both connectionist and symbolic systems, and having a full understanding of the algorithms used. Moreover, to address today's problems we must work with both historical and real-time data.