Prediction Techniques in Internet of Things (IoT) Environment: A Comparative Study
Socialization and personalization in Internet of Things (IoT) environments are current trends in computing research, and most work stresses the importance of predicting users' needs and providing socialized, personalized services. This paper presents a survey of techniques used to predict user intention in a wide variety of IoT-based applications, including smart mobile devices, smart television, web mining, weather forecasting, healthcare/medicine, robotics, road traffic, educational data mining, natural calamities, retail banking, e-commerce, wireless networks and social networking. According to the survey, prediction techniques are used to: predict the application a mobile user will access; predict the next page a web user will visit; predict a user's favorite TV program; predict users' navigational patterns and usage needs on websites and extract their browsing behavior; predict future climate conditions; predict whether a patient is suffering from a disease; predict user intention so that implicit, human-like interactions become possible through implicit commands; predict the amount of traffic at a particular location; predict student performance in schools and colleges; predict and estimate the frequency of natural calamities such as floods and earthquakes over long periods so that precautionary measures can be taken; predict and detect a false user attempting a transaction in the name of a genuine user; predict user actions in order to improve the business; predict and detect intruders acting in a network; and predict a user's mood transitions from context history. The paper also discusses the techniques used for prediction, such as decision tree algorithms, machine-learning techniques from artificial intelligence and data mining, and content-based and collaborative recommender algorithms.
Hybrid Recommender Systems: A Systematic Literature Review
Recommender systems are software tools that generate and provide suggestions for items
and other entities to users by exploiting various strategies. Hybrid recommender systems
combine two or more recommendation strategies in different ways to benefit from their
complementary advantages. This systematic literature review presents the state of the art
in hybrid recommender systems over the last decade. It is the first quantitative review
work focused entirely on hybrid recommenders. We address the most relevant problems
considered and present the associated data mining and recommendation techniques used to
overcome them. We also explore the hybridization classes each hybrid recommender belongs
to, the application domains, the evaluation process and proposed future research
directions. Based on our findings, most studies combine collaborative filtering with
another technique, often in a weighted way. Cold-start and data sparsity are the two
traditional and most frequently addressed problems, targeted in 23 and 22 studies
respectively, while movies and movie datasets are still the most widely used by authors.
As most studies are evaluated by comparison with similar methods using accuracy metrics,
providing more credible and user-oriented evaluations remains a typical challenge. Beyond
this, newer challenges were also identified, such as responding to variations in user
context, evolving user tastes, or providing cross-domain recommendations. As a hot topic,
hybrid recommenders represent a good basis from which to respond by exploring newer
opportunities such as contextualizing recommendations, involving parallel hybrid
algorithms, and processing larger datasets.
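The review's central finding, that most studies combine collaborative filtering with another technique in a weighted way, can be illustrated with a minimal sketch of the weighted hybridization class. All names, scores and the fixed blend weight below are hypothetical, not taken from any surveyed system:

```python
def weighted_hybrid(cf_score, content_score, alpha=0.7):
    """Weighted hybridization: blend a collaborative-filtering score
    with a content-based score using a fixed weight alpha."""
    return alpha * cf_score + (1 - alpha) * content_score

def recommend(user, items, cf_scores, content_scores, k=3, alpha=0.7):
    """Rank candidate items for `user` by the blended score, return top k.
    `cf_scores` and `content_scores` map (user, item) -> score."""
    ranked = sorted(
        items,
        key=lambda i: weighted_hybrid(
            cf_scores[(user, i)], content_scores[(user, i)], alpha
        ),
        reverse=True,
    )
    return ranked[:k]
```

A fixed alpha is the simplest possible weighted hybrid; in practice the blend weight can also be tuned or learned rather than fixed, which is part of what distinguishes the hybridization classes the review catalogues.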
Content-Based Book Recommending Using Learning for Text Categorization
Recommender systems improve access to relevant products and information by
making personalized suggestions based on previous examples of a user's likes
and dislikes. Most existing recommender systems use social filtering methods
that base recommendations on other users' preferences. By contrast,
content-based methods use information about an item itself to make suggestions.
This approach has the advantage of being able to recommend previously unrated
items to users with unique interests and to provide explanations for its
recommendations. We describe a content-based book recommending system that
utilizes information extraction and a machine-learning algorithm for text
categorization. Initial experimental results demonstrate that this approach can
produce accurate recommendations.
Comment: 8 pages, 3 figures, submission to Fourth ACM Conference on Digital
Libraries
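As a rough illustration of recommending by text categorization (not the paper's actual system, which combines information extraction with a trained text classifier over book descriptions), a bag-of-words naive Bayes model can label item descriptions as likes or dislikes; the helper names and toy vocabulary here are illustrative:

```python
import math
from collections import Counter

def train_nb(examples):
    """Train a multinomial naive Bayes model from (text, label) pairs,
    where label is 'like' or 'dislike'."""
    word_counts = {"like": Counter(), "dislike": Counter()}
    doc_counts = Counter()
    for text, label in examples:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts["like"]) | set(word_counts["dislike"])
    return word_counts, doc_counts, vocab

def score(model, text, label):
    """Log-probability of `label` for `text`, with Laplace smoothing."""
    word_counts, doc_counts, vocab = model
    logp = math.log(doc_counts[label] / sum(doc_counts.values()))
    total_words = sum(word_counts[label].values())
    for w in text.lower().split():
        logp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
    return logp

def predict(model, text):
    """Recommend (predict 'like') when the like-score dominates."""
    return max(("like", "dislike"), key=lambda lab: score(model, text, lab))
```

Classifying a new book's description as a predicted like is what turns the text categorizer into a content-based recommender: unrated items can be scored from their own text alone, which is the advantage the abstract highlights.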
Multi-dimensional clustering in user profiling
User profiling has attracted an enormous number of technological methods and
applications. With the increasing number of products and services, user profiling
has created opportunities to catch the user's attention and to achieve high user
satisfaction. Providing users what they want, when and how they want it, depends
largely on understanding them. The user profile is the representation of the user
and holds information about the user; these profiles are the outcome of user
profiling.

Personalization is the adaptation of services to meet the user's needs and
expectations; knowledge about the user therefore leads to a personalized user
experience. In user profiling applications the major challenge is to build and
handle user profiles. The literature offers two main user profiling methods,
collaborative and content-based. Apart from these traditional methods, a number
of classification and clustering algorithms have been used to classify
user-related information in order to create user profiles. However, the profiling
achieved through these works lacks accuracy, because all information within the
profile has the same influence during profiling even though some of it is
irrelevant.

A primary aim of this thesis is to provide insight into the concept of user
profiling. For this purpose a comprehensive background study of the literature
was conducted and summarized in this thesis. Existing user profiling methods, as
well as classification and clustering algorithms, were investigated, and, as one
of the objectives of this study, the use of these algorithms for user profiling
was examined. A number of classification and clustering algorithms, such as
Bayesian Networks (BN) and Decision Trees (DTs), were simulated using user
profiles, and their classification accuracy was evaluated. Additionally, a novel
clustering algorithm for user profiling, Multi-Dimensional Clustering (MDC), is
proposed. MDC is a modified version of the Instance Based Learner (IBL)
algorithm. In IBL every feature has an equal effect on classification regardless
of its relevance; MDC differs from IBL by assigning weights to feature values to
distinguish the effect of each feature on clustering. Existing feature weighting
methods, for instance Cross Category Feature (CCF), were also investigated. In
this thesis, three feature value weighting methods are proposed for MDC: MDC
weighting by Cross Clustering (MDC-CC), MDC weighting by Balanced Clustering
(MDC-BC), and MDC weighting by changing the Lower-limit to Zero (MDC-LZ). All of
these weighted MDC algorithms have been tested and evaluated. Additional
simulations were carried out with existing weighted and non-weighted IBL
algorithms (i.e. K-Star and Locally Weighted Learning (LWL)) in order to
demonstrate the performance of the proposed methods. Furthermore, a real-life
scenario was implemented to show how MDC can be used for user profiling to
improve personalized service provisioning in mobile environments.

The experiments presented in this thesis were conducted using user profile
datasets that reflect users' personal information, preferences and interests.
The simulations with existing classification and clustering algorithms (e.g.
Bayesian Networks (BN), Naïve Bayes (NB), Lazy learning of Bayesian Rules (LBR),
Iterative Dichotomiser 3 (ID3)) were performed on the WEKA (version 3.5.7)
machine learning platform, which serves as a workbench for a collection of
popular learning schemes implemented in Java. In addition, MDC-CC, MDC-BC and
MDC-LZ were implemented in Java (on NetBeans IDE 6.1 Beta) and in MATLAB.
Finally, the real-life scenario was implemented as a Java ME mobile application
on NetBeans IDE 7.1. All simulation results were evaluated based on error rate
and accuracy.
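The core idea behind MDC's feature-value weighting, namely that features should contribute to the distance in proportion to their relevance, can be sketched as a weighted nearest-neighbour rule. The abstract does not detail how MDC-CC, MDC-BC or MDC-LZ compute their weights, so the weights and profile data below are purely illustrative:

```python
def weighted_distance(a, b, weights):
    """Euclidean distance in which each feature contributes in proportion
    to its weight; a weight of 0 makes that feature irrelevant."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)) ** 0.5

def classify(instance, labelled, weights):
    """1-nearest-neighbour assignment under the weighted distance.
    `labelled` is a list of (feature_vector, label) pairs."""
    nearest = min(
        labelled,
        key=lambda pair: weighted_distance(instance, pair[0], weights),
    )
    return nearest[1]
```

With equal weights this reduces to plain IBL behaviour; down-weighting an irrelevant feature can change which profile an instance is assigned to, which is exactly the effect the thesis attributes to MDC over IBL.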
Computing with Granular Words
Computational linguistics is a sub-field of artificial intelligence; it is an interdisciplinary field dealing with statistical and/or rule-based modeling of natural language from a computational perspective. Traditionally, fuzzy logic is used to deal with fuzziness among single linguistic terms in documents. However, linguistic terms may involve other types of uncertainty. For instance, when different users search for 'cheap hotel' in a search engine, they may need distinct pieces of relevant hidden information such as shopping, transportation, or weather. This research work therefore focuses on studying granular words and developing new algorithms that process them to deal with uncertainty globally. To describe granular words precisely, a new structure called the Granular Information Hyper Tree (GIHT) is constructed. Furthermore, several techniques are developed for computing with granular words in spam filtering and query recommendation. Based on simulation results, the GIHT-Bayesian algorithm achieves a more accurate spam filtering rate than the conventional Naive Bayes and SVM methods, and computing with granular words also generates better recommendation results, based on users' assessments, when applied to a search engine.
Understanding and Personalising Smart City Services Using Machine Learning, the Internet-of-Things and Big Data
This paper explores the potential of Machine Learning (ML) and Artificial Intelligence (AI) to leverage the Internet of Things (IoT) and Big Data in the development of personalised services in Smart Cities. We do this by studying the performance of four well-known ML classification algorithms (Bayes Network (BN), Naïve Bayesian (NB), J48, and Nearest Neighbour (NN)) in correlating the effects of weather data (especially rainfall and temperature) with short journeys made by cyclists in London. The performance of the algorithms was assessed in terms of accuracy, trustworthiness and speed. The data sets were provided by Transport for London (TfL) and the UK Met Office. We employed a random sample of some 1,800,000 instances, comprising six individual datasets, which we analysed on the WEKA platform. The results revealed a high degree of correlation between weather-based attributes and the Big Data being analysed. Notable observations were that, on average, the decision tree J48 algorithm performed best in terms of accuracy, while the kNN IBk algorithm was the fastest to build models. Finally, we suggest IoT Smart City applications that may benefit from our work.
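The kind of accuracy-versus-build-time comparison described here (run in the paper on the WEKA platform) can be sketched with a small evaluation harness. The classifiers and the one-feature data below are toy stand-ins, not the TfL/Met Office experiments:

```python
import time
from collections import Counter

def evaluate(classifier_fit, train, test):
    """Return (accuracy, build_time_seconds) for a classifier on a
    train/test split, mirroring an accuracy-vs-speed comparison.
    `classifier_fit` takes training pairs and returns a predict function."""
    start = time.perf_counter()
    predict = classifier_fit(train)
    build_time = time.perf_counter() - start
    correct = sum(predict(x) == y for x, y in test)
    return correct / len(test), build_time

def majority_fit(train):
    """Baseline: always predict the most frequent training label."""
    label = Counter(y for _, y in train).most_common(1)[0][0]
    return lambda x: label

def nn_fit(train):
    """1-nearest-neighbour: almost no model-building work up front
    (analogous to IBk being the fastest to build), all cost at prediction."""
    def predict(x):
        return min(
            train,
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p[0])),
        )[1]
    return predict
```

Running both classifiers through the same harness yields the per-algorithm accuracy and build-time figures on which comparisons like the paper's J48-versus-IBk observations rest.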
Recommender systems in industrial contexts
This thesis consists of four parts:
- An analysis of the core functions of, and prerequisites for, recommender
systems in an industrial context: we identify four core functions for
recommendation systems: Help to Decide, Help to Compare, Help to Explore, and
Help to Discover. The implementation of these functions has implications for
the choices at the heart of algorithmic recommender systems.
- A state of the art covering the main techniques used in automated
recommendation systems: the two most commonly used algorithmic methods, the
K-Nearest-Neighbor (KNN) methods and fast factorization methods, are detailed.
The state of the art also presents purely content-based methods, hybridization
techniques, and the classical performance metrics used to evaluate recommender
systems, and then gives an overview of several systems from both academia and
industry (Amazon, Google ...).
- An analysis of the performance and implications of a recommendation system
developed during this thesis: this system, Reperio, is a hybrid recommender
engine using KNN methods. We study the performance of the KNN methods,
including the impact of the similarity functions used, and then study the
performance of the KNN method in critical use cases in cold-start situations.
- A methodology for analyzing the performance of recommender systems in an
industrial context: this methodology assesses the added value of algorithmic
strategies and recommendation systems according to their core functions.
Comment: version 3.30, May 201
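The KNN machinery at the heart of an engine like Reperio can be sketched as item-based neighbourhood prediction using cosine similarity, one of the similarity functions whose impact such studies measure. This is an illustrative reconstruction, not Reperio's actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts
    mapping user -> rating); only co-rated entries feed the numerator."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def predict_rating(ratings, user, item, k=2):
    """Item-based KNN: predict `user`'s rating for `item` as the
    similarity-weighted mean of the user's own ratings on the k items
    most similar to `item`. `ratings` maps item -> {user: rating}."""
    target = ratings[item]
    neighbours = sorted(
        (j for j in ratings if j != item and user in ratings[j]),
        key=lambda j: cosine(target, ratings[j]),
        reverse=True,
    )[:k]
    num = sum(cosine(target, ratings[j]) * ratings[j][user] for j in neighbours)
    den = sum(cosine(target, ratings[j]) for j in neighbours)
    return num / den if den else 0.0
```

Swapping `cosine` for another similarity function (Pearson correlation, Jaccard, and so on) changes which neighbours dominate the prediction, which is precisely the kind of impact the thesis studies; the empty-neighbourhood fallback also makes the cold-start weakness of pure KNN visible.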
Social Media Based Deep Auto-Encoder Model for Clinical Recommendation
The use of deep learning and patient clinical data to make medication and adverse drug reaction (ADR) recommendations is one of the most actively studied topics in modern medicine. However, the clinical community still has work to do to build a model that hybridises the recommendation system. This research proposes a social-media-learning-based deep auto-encoder model for clinical recommendation: a hybrid model that combines a deep auto-encoder with top-n similar co-patient information to produce a joint optimisation function (SAeCR). Implicit clinical information can be extracted using network representation learning. Three experiments were conducted on two real-world social network data sets to assess the efficacy of the SAeCR model. The experiments demonstrate that the proposed model outperforms the other classification methods on a larger and sparser data set. In addition, social network data can help doctors determine the nature of a patient's relationship with a co-patient. The SAeCR model is more effective because it incorporates insights from network representation learning and social theory.