Linear Temporal Public Announcement Logic: a new perspective for reasoning the knowledge of multi-classifiers
Current applied intelligent systems have crucial shortcomings either in
reasoning about the knowledge they gather or in representing comprehensive,
integrated information. To address these limitations, we develop a formal
transition system, applied to common artificial intelligence (AI) systems, to
reason about their findings. The model combines Public Announcement Logic
(PAL) with Linear Temporal Logic (LTL) in order to analyze both single-frame
data and the time-series data that follow. First, the knowledge obtained by an
AI-based system (i.e., a classifier) for an individual time frame is captured
and modeled in PAL. This yields a unified representation of knowledge and
smooth integration of gathered and external experience: the model can receive
the classifier's predefined -- or any external -- knowledge and assemble it in
a unified manner. Alongside PAL, all timed knowledge changes are modeled using
a temporal-logic transition system. Natural-language questions are then
translated into temporal formulas, and checking their satisfaction leads the
model to an answer; this interpretation integrates the information of the
recognized input data, rules, and knowledge. Finally, we suggest a mechanism
that reduces the number of investigated paths to improve performance, which
results in a partial correction for an object-detection system.

Comment: 11 pages, 1 figure
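The question-answering step, translating a question into a temporal formula and checking its satisfaction over the sequence of per-frame announcements, can be illustrated with a minimal LTL evaluator over a finite trace. This is an illustrative sketch, not the paper's implementation; the trace contents, proposition names, and tuple-based formula encoding are all hypothetical.

```python
def holds(trace, i, phi):
    """Evaluate a small LTL fragment over a finite trace, starting at position i.
    Each trace element is the set of propositions (announcements) true at that frame.
    Formulas: ("atom", p), ("not", f), ("and", f, g), ("next", f),
    ("eventually", f), ("globally", f)."""
    op = phi[0]
    if op == "atom":
        return phi[1] in trace[i]
    if op == "not":
        return not holds(trace, i, phi[1])
    if op == "and":
        return holds(trace, i, phi[1]) and holds(trace, i, phi[2])
    if op == "next":
        # On a finite trace, "next" fails at the last frame.
        return i + 1 < len(trace) and holds(trace, i + 1, phi[1])
    if op == "eventually":
        return any(holds(trace, j, phi[1]) for j in range(i, len(trace)))
    if op == "globally":
        return all(holds(trace, j, phi[1]) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Hypothetical frames from an object-detection run: announcements per frame.
trace = [{"car"}, {"car", "person"}, {"person"}]
print(holds(trace, 0, ("eventually", ("atom", "person"))))  # True
print(holds(trace, 0, ("globally", ("atom", "car"))))       # False
```

A question such as "is a person ever detected?" would first be translated into the formula `("eventually", ("atom", "person"))` and then checked against the trace.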
Extracting Implicit Social Relation for Social Recommendation Techniques in User Rating Prediction
Recommendation plays an increasingly important role in our daily lives.
Recommender systems automatically suggest items to users that might be
interesting for them. Recent studies illustrate that incorporating social trust
in Matrix Factorization methods demonstrably improves accuracy of rating
prediction. Such approaches mainly use the trust scores explicitly expressed by
users. However, it is often challenging to have users provide explicit trust
scores for each other. Quite a few works propose trust metrics to compute and
predict trust scores between users based on their interactions. In this paper,
we first show how a social relation can be extracted from users' ratings of
items by computing the Hellinger distance between users in a recommender
system. We then propose to incorporate the predicted trust scores into social
matrix factorization models. By analyzing social relation extraction on three
well-known real-world datasets in which both trust and recommendation data are
available, we conclude that using the implicit social relation in social
recommendation techniques achieves almost the same performance as the actual
trust scores explicitly expressed by users.
Hence, we build our method, called Hell-TrustSVD, on top of the
state-of-the-art social recommendation technique to incorporate both the
extracted implicit social relations and ratings given by users on the
prediction of items for an active user. To the best of our knowledge, this is
the first work to extend TrustSVD with extracted social trust information. The
experimental results support employing implicit trust in matrix factorization
whenever explicit trust is not available: the method can perform much better
than state-of-the-art approaches in user rating prediction.
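The extraction step can be sketched as follows: build each user's rating histogram and compare the two distributions with the Hellinger distance. This is a minimal illustration, not the paper's code; the sample ratings and the `1 - d` similarity conversion are assumptions made here for demonstration.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q (in [0, 1])."""
    s = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    return math.sqrt(s) / math.sqrt(2)

def rating_distribution(ratings, levels=5):
    """Normalised histogram of a user's integer ratings over 1..levels."""
    counts = [0] * levels
    for r in ratings:
        counts[r - 1] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Hypothetical ratings from two users with very different tastes.
u = rating_distribution([5, 4, 5, 3, 4])
v = rating_distribution([1, 2, 1, 3, 2])

d = hellinger(u, v)
similarity = 1 - d  # one simple way to turn the distance into an implicit trust score
```

A score such as `similarity` could then play the role of the explicit trust value in a social matrix factorization model when no explicit trust is available.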
Hadoop-Oriented SVM-LRU (H-SVM-LRU): An Intelligent Cache Replacement Algorithm to Improve MapReduce Performance
Modern applications can generate a large amount of data from different
sources with high velocity, a combination that is difficult to store and
process via traditional tools. Hadoop is one framework that is used for the
parallel processing of a large amount of data in a distributed environment,
however, various challenges can lead to poor performance. Two particular issues
that can limit performance are the high access time for I/O operations and the
recomputation of intermediate data. The combination of these two issues can
result in resource wastage. In recent years, there have been attempts to
overcome these problems by using caching mechanisms. Due to cache space
limitations, it is crucial to use this space efficiently and avoid cache
pollution (the cache contains data that is not used in the future). We propose
Hadoop-oriented SVM-LRU (H-SVM-LRU) to improve Hadoop performance. For this
purpose, we use an intelligent cache replacement algorithm, SVM-LRU, that
combines the well-known LRU mechanism with a machine learning algorithm, SVM,
to classify cached data into two groups based on their future usage.
Experimental results show a significant decrease in execution time as a result
of an increased cache hit ratio, leading to a positive impact on Hadoop
performance.
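The core idea, combining LRU order with a learned reuse predictor, can be sketched as below. The real H-SVM-LRU trains an SVM on access features; here `predict_reuse` is a hypothetical stand-in callable, so the sketch shows only the eviction policy, not the learning.

```python
from collections import OrderedDict

class ClassifierAwareLRU:
    """LRU cache that prefers to evict items a classifier predicts will not be
    reused; falls back to plain LRU if every cached item looks reusable."""

    def __init__(self, capacity, predict_reuse):
        self.capacity = capacity
        self.predict_reuse = predict_reuse  # features -> True if likely reused
        self.data = OrderedDict()           # key -> (value, features), LRU-first

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key][0]

    def put(self, key, value, features):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self._evict()
        self.data[key] = (value, features)

    def _evict(self):
        # Scan from least to most recently used; evict the first item the
        # classifier marks as "not reused in the future".
        for key, (_, feats) in self.data.items():
            if not self.predict_reuse(feats):
                del self.data[key]
                return
        self.data.popitem(last=False)       # plain LRU fallback

# Hypothetical reuse predictor: a feature flag marks blocks expected to be re-read.
cache = ClassifierAwareLRU(capacity=2, predict_reuse=lambda f: f["hot"])
cache.put("blk1", b"data1", {"hot": True})
cache.put("blk2", b"data2", {"hot": False})
cache.put("blk3", b"data3", {"hot": True})  # evicts blk2, the cold block
```

Avoiding cache pollution this way raises the hit ratio, which is the mechanism behind the reported reduction in execution time.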
Impact of Traffic Characteristics on Request Aggregation in an NDN Router
The paper revisits the performance evaluation of caching in a Named Data
Networking (NDN) router where the content store (CS) is supplemented by a
pending interest table (PIT). The PIT aggregates requests for a given content
that arrive within the download delay and thus brings an additional reduction
in upstream bandwidth usage beyond that due to CS hits. We extend prior work on
caching with non-zero download delay (non-ZDD) by proposing a novel
mathematical framework that is more easily applicable to general traffic models
and by considering alternative cache insertion policies. Specifically we
evaluate the use of an LRU filter to improve CS hit rate performance in this
non-ZDD context. We also consider the impact of time locality in demand due to
finite content lifetimes. The models are used to quantify the impact of the PIT
on upstream bandwidth reduction, demonstrating notably that this is significant
only for relatively small content catalogues or high average request rate per
content. We further explore how the effectiveness of the filter with finite
content lifetimes depends on catalogue size and traffic intensity.
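The bandwidth saving from PIT aggregation can be illustrated with a small simulation: a request is forwarded upstream only if no download for that content is already pending, and any request arriving within the download delay is absorbed by the pending interest. This is a toy model under those assumptions only (it ignores the content store entirely); the trace and delay values are hypothetical.

```python
def upstream_requests(arrivals, download_delay):
    """Count upstream requests when a PIT aggregates requests for the same
    content that arrive while a download is already pending.
    `arrivals` maps content name -> list of request arrival times."""
    total = 0
    for times in arrivals.values():
        pending_until = float("-inf")
        for t in sorted(times):
            if t >= pending_until:
                total += 1                      # no pending download: forward upstream
                pending_until = t + download_delay
            # else: aggregated onto the pending interest, no upstream request
    return total

# Hypothetical trace: /a requested four times, three in quick succession; /b once.
trace = {"/a": [0.0, 0.1, 0.3, 1.0], "/b": [0.2]}
print(upstream_requests(trace, download_delay=0.5))  # 3 upstream instead of 5
```

As the paper notes, the saving is only significant when several requests for the same content are likely to fall within one download delay, i.e. for small catalogues or high per-content request rates.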
A Combined Analytical Modeling Machine Learning Approach for Performance Prediction of MapReduce Jobs in Hadoop Clusters
Nowadays MapReduce and its open source implementation, Apache Hadoop, are the most widespread solutions for handling massive datasets on clusters of commodity hardware. At the expense of somewhat reduced performance in comparison to HPC technologies, the MapReduce framework provides fault tolerance and automatic parallelization without any effort from developers. Since Hadoop is often adopted to support business-critical activities, it is frequently important to predict with fair confidence the execution time of submitted jobs, for instance when SLAs are established with end users. In this work, we propose and validate a hybrid approach exploiting both queueing networks and support vector regression, in order to achieve good accuracy without too many costly experiments on a real setup. The experimental results show that the proposed approach attains a 21% improvement in accuracy over applying machine learning techniques without any support from analytical models.
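The hybrid idea, using an analytical estimate as the backbone and learning a data-driven correction from a few profiling runs, can be sketched as follows. The paper pairs queueing networks with support vector regression; to keep this sketch dependency-free, a crude wave-based task-scheduling estimate and a linear least-squares correction stand in for both, and all profiling numbers are invented.

```python
import math

def analytic_estimate(n_tasks, n_slots, avg_task_time):
    """Crude analytical model: tasks run in waves over the available slots."""
    waves = math.ceil(n_tasks / n_slots)
    return waves * avg_task_time

def fit_correction(estimates, measured):
    """Least-squares fit measured ~= a*estimate + b, correcting model bias."""
    n = len(estimates)
    mx = sum(estimates) / n
    my = sum(measured) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(estimates, measured))
         / sum((x - mx) ** 2 for x in estimates))
    b = my - a * mx
    return lambda x: a * x + b

# Hypothetical profiling runs: (map tasks, slots, avg task time s, measured job time s)
runs = [(100, 10, 30, 330), (200, 10, 30, 650), (400, 10, 30, 1290)]
ests = [analytic_estimate(t, s, a) for t, s, a, _ in runs]
predict = fit_correction(ests, [m for *_, m in runs])
```

Calling `predict(analytic_estimate(...))` for an unseen job then combines the structural knowledge of the analytical model with the empirically learned correction, which is the division of labour the hybrid approach relies on.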