Counterexample Generation in Probabilistic Model Checking
Providing evidence for the refutation of a property is an essential, if not the most important, feature of model checking. This paper considers algorithms for counterexample generation for probabilistic CTL formulae in discrete-time Markov chains. Finding the strongest evidence (i.e., the most probable path) violating a (bounded) until-formula is shown to be reducible to a single-source (hop-constrained) shortest path problem. Counterexamples of smallest size that deviate most from the required probability bound can be obtained by applying (small amendments to) k-shortest (hop-constrained) paths algorithms. These results can be extended to Markov chains with rewards and to LTL model checking, and are useful for Markov decision processes. Experimental results show that the size of a counterexample is typically excessive. To obtain much more compact representations, we present a simple algorithm to generate (minimal) regular expressions that can act as counterexamples. The feasibility of our approach is illustrated by means of two communication protocols: leader election in an anonymous ring network and the Crowds protocol.
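The reduction of strongest-evidence search to a shortest-path problem can be sketched as follows: maximizing a product of transition probabilities is equivalent to minimizing the sum of their negative logarithms, so standard Dijkstra applies. A minimal sketch, in which the `chain` encoding and function name are illustrative assumptions rather than the paper's notation:

```python
import heapq
import math

def strongest_evidence(chain, source, targets):
    """Most probable path from `source` to any state in `targets` of a DTMC.

    `chain` maps each state to a list of (successor, probability) pairs.
    Maximising the path probability (a product) equals minimising the sum
    of -log(p) edge weights, so plain Dijkstra applies.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in targets:
            # Reconstruct the path and convert the weight back to a probability.
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return list(reversed(path)), math.exp(-d)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, p in chain.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None, 0.0
```

Hop-constrained and k-shortest variants build on the same weight transformation.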
A new technique for intelligent web personal recommendation
Personal recommendation systems nowadays are very important in web applications
because of the available huge volume of information on the World Wide Web, and the
necessity to save users’ time, and provide appropriate desired information, knowledge,
items, etc. The most popular recommendation systems are collaborative filtering systems,
which suffer from certain problems such as cold-start, privacy, user identification, and
scalability. In this thesis, we suggest a new method to solve the cold start problem taking
into consideration the privacy issue. The method is shown to perform very well in
comparison with alternative methods, while having better properties regarding user privacy.
The cold start problem covers the situation in which a recommendation system does not have
sufficient information about a new user's preferences (the user cold start problem), as well
as the case of newly added items (the item cold start problem); in either case the system
is unable to provide recommendations. Some systems use users' demographic data as a basis
for generating recommendations in such cases (e.g. the Triadic Aspect method), but this
solves only the user cold start problem and infringes on users' privacy. Some systems use
'stereotypes' to generate recommendations, but stereotypes often do not reflect the actual
preferences of individual users. Other systems inject pseudo users, or 'filterbots', into
the system and treat them as real users, but this leads to poor accuracy.
We propose the active node method, which uses previous and recent users' browsing targets
and browsing patterns to infer preferences and generate recommendations (node
recommendations, in which a single suggestion is given, and batch recommendations, in
which a set of possible target nodes is shown to the user at once). We compared the active
node method with three alternative methods (the Triadic Aspect Method, the Naïve Filterbots
Method, and the MediaScout Stereotype Method), using a dataset collected from online
web news to generate recommendations with our method and with the three alternatives.
We measured novelty, coverage, and precision in these experiments, and found that our
method achieves higher novelty in batch recommendation and higher coverage and precision
in node recommendation compared to the alternative methods. Further, we developed a
variant of the active node method that incorporates semantic structure elements. A further
experimental evaluation with real data and users showed that semantic node recommendation
with the active node method achieved higher novelty than non-semantic node recommendation,
and semantic batch recommendation achieved higher coverage and precision than non-semantic
batch recommendation.
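The evaluation measures named above have common set-based forms; the thesis's exact definitions are not given in the abstract, so the following sketch assumes the usual ones (precision as the relevant fraction of a recommendation list, coverage as the fraction of the catalogue ever recommended, novelty as the fraction of recommendations the user has not already browsed):

```python
def precision(recommended, relevant):
    """Fraction of recommended items the user actually found relevant."""
    if not recommended:
        return 0.0
    return len(set(recommended) & set(relevant)) / len(recommended)

def coverage(all_recommendations, catalogue):
    """Fraction of the catalogue appearing in at least one recommendation list."""
    seen = set()
    for rec_list in all_recommendations:
        seen |= set(rec_list)
    return len(seen & set(catalogue)) / len(catalogue)

def novelty(recommended, already_seen):
    """Fraction of recommendations the user had not browsed before."""
    if not recommended:
        return 0.0
    return len(set(recommended) - set(already_seen)) / len(recommended)
```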
Efficient Diversification of Web Search Results
In this paper we analyze the efficiency of various search results
diversification methods. While the efficacy of diversification approaches has been
investigated in depth in the past, response time and scalability issues have rarely
been addressed. We therefore propose a unified framework for studying the performance
and feasibility of result diversification solutions. First, we define a new
methodology for detecting when, and how, query results need to be diversified.
To this purpose, we rely on the concept of "query refinement" to estimate the
probability that a query is ambiguous. Then, relying on this novel ambiguity
detection method, we deploy and compare three different diversification methods
on a standard test set: IASelect, xQuAD, and OptSelect. While the first two
are recent state-of-the-art proposals, the latter is an original algorithm
introduced in this paper. We evaluate both the efficiency and the effectiveness
of our approach against its competitors using the standard TREC Web
diversification track testbed. Results show that OptSelect runs two orders of
magnitude faster than the two other state-of-the-art approaches while obtaining
comparable figures in diversification effectiveness.
Comment: VLDB201
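The abstract does not describe OptSelect's internals, so as a generic illustration of greedy result diversification (an MMR-style trade-off between relevance and redundancy, not the paper's algorithm; all names below are assumptions):

```python
def greedy_diversify(candidates, relevance, similarity, k, lam=0.5):
    """Greedy MMR-style re-ranking: trade relevance against redundancy.

    Illustrative only, not OptSelect. `relevance` maps doc -> score;
    `similarity(a, b)` returns a value in [0, 1]; `lam` balances the two.
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr(d):
            # Penalise documents similar to anything already chosen.
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The efficiency question the paper studies is exactly where such greedy loops pay or fail: each step rescans the remaining candidates, so the cost grows with both the candidate pool and `k`.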
Improving Web Site Structure to Facilitate Effective User Navigation
Web sites are most effective when they meet both the content and usability needs of their users. It has been shown, however, that designing usable Web sites is not a trivial task. A primary reason is that Web developers' perceptions and knowledge can be very different from those of the target users. Such differences result in cases in which users cannot easily locate the relevant information in a Web site. In this paper, we propose a math programming model to improve the navigation effectiveness of a Web site while preserving its original structure whenever possible. Our approach minimizes unnecessary changes to the present structure of a Web site and hence can be applied for Web site maintenance on a regular basis. Our test on a real Web site shows that the approach can provide significant improvements to the Web site structure by introducing only a small number of new links.
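As a toy illustration of the idea (a greedy stand-in, not the paper's mathematical programming model), one can preserve the existing link structure and add the fewest direct links needed to keep key pages within a hop budget of the home page; the function name and the home-link-only policy are illustrative assumptions:

```python
from collections import deque

def min_new_links(graph, home, targets, max_hops):
    """Greedily add links so every target is within `max_hops` of `home`.

    Preserves the existing structure and adds a direct link from `home`
    only when a target is unreachable within the hop budget.
    """
    def hops(dst):
        # Breadth-first search distance from `home`.
        seen, frontier = {home}, deque([(home, 0)])
        while frontier:
            node, d = frontier.popleft()
            if node == dst:
                return d
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return float("inf")

    added = []
    for t in targets:
        if hops(t) > max_hops:
            graph.setdefault(home, []).append(t)  # new direct link
            added.append((home, t))
    return added
```

The actual model optimizes over all candidate links at once; the greedy version only conveys the objective of minimal structural change.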
An intuitionistic fuzzy component based approach for identifying web usage patterns
A complete framework for Web mining
With the rapidly growing number of WWW users, hidden information becomes increasingly valuable. As a consequence of this phenomenon, mining Web data and analysing on-line users' behaviour and their on-line traversal patterns have emerged as a new area of research. Based primarily on Web servers' log files, the main objective of traversal pattern mining is to discover frequent patterns in users' browsing paths and behaviour. This paper presents a complete framework for Web mining that allows users to pre-define physical constraints when analysing complex traversal patterns, in order to improve the efficiency of the algorithms and offer flexibility in producing the results.
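The core step of traversal pattern mining described above, counting frequent contiguous sub-paths in sessionized server logs, can be sketched as follows (the function name and session encoding are illustrative, not the paper's framework):

```python
from collections import Counter

def frequent_paths(sessions, length, min_support):
    """Count contiguous sub-paths of a given length across browsing sessions
    and keep those meeting the support threshold. A minimal sketch of
    traversal-pattern mining over Web-server log sessions.
    """
    counts = Counter()
    for session in sessions:
        # Slide a window of `length` pages over the session's click path.
        for i in range(len(session) - length + 1):
            counts[tuple(session[i:i + length])] += 1
    return {path: c for path, c in counts.items() if c >= min_support}
```

The framework's pre-defined physical constraints would act as extra filters inside the counting loop, pruning candidate paths before they are counted.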