3 research outputs found
RSS v2.0: Spamming, User Experience and Formalization
RSS, once the most popular publish/subscribe system, is widely believed to have fallen into decline for reasons that remain largely unexplored. The aim of this thesis is to examine one such reason: spamming. The scope of this thesis is limited to spamming related to RSS v2.0. The study discusses RSS as a publish/subscribe system, investigates possible reasons for the decline in the use of such a system, and considers possible solutions to RSS spamming. The thesis introduces RSS (and its dependence on feed readers) and examines its relationship with spamming. In addition, the thesis investigates possible socio-technical influences on spamming in RSS.
The author presents the idea of applying formalization (formal specification techniques) to open standards, RSS v2.0 in particular. Formal specifications are concise, consistent, unambiguous, and in many cases highly reusable. Merging formal specification methods with open standards allows for i) a more concrete standard design, ii) an improved understanding of the environment under design, iii) an enforced level of precision in the specification, and iv) extended property checking/verification capabilities for software engineers. The author therefore supports and proposes the use of formalization in RSS.
Based on inferences gathered from the user experiment conducted during the course of this study, an analysis of the decline of RSS is presented. The user experiment also opens up directions for future work on the evolution of an RSS v3.0, which could be supported by formalization. The thesis concludes that RSS is on the verge of discontinuation due to the adverse effects of spamming and a lack of ongoing development, as evidenced by the limited amount of available research literature.
RSS feeds are a clear example of what happens to software when it fails to evolve with time.
A Semantic Map of RSS Feeds to support Discovery
Finding specific, valid, complete, and up-to-date information on the Web is a critical problem experienced daily by all users, regardless of their expertise. Many Web usage scenarios require access not only to good-quality information but also to updates and new data as they become available. This paper presents an approach to support continuous Web information discovery by applying a powerful declarative semantic resource description to build domain-specific RSS mashups in the context of a distributed query-based RSS aggregation system.
A series of case studies to enhance the social utility of RSS
RSS (Really Simple Syndication, Rich Site Summary, or RDF Site Summary) is a dialect of XML that provides a method of syndicating online content, where postings consist of frequently updated news items, blog entries, and multimedia. RSS feeds, produced by organisations or individuals, are often aggregated and delivered to users for consumption via readers. The semi-structured format of RSS also allows the delivery and exchange of machine-readable content between different platforms and systems.
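The channel/item structure described above can be illustrated with a minimal sketch (our own illustration, not material from the thesis; the feed content and function names are hypothetical), parsing a small RSS 2.0 document with Python's standard library:

```python
# A minimal sketch of the semi-structured RSS 2.0 format: a <channel> with
# metadata, containing a list of <item> elements (headline, link, snippet).
import xml.etree.ElementTree as ET

# Hypothetical feed used only to illustrate the typical structure.
RSS_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>https://example.com/</link>
    <description>Frequently updated news items</description>
    <item>
      <title>Headline one</title>
      <link>https://example.com/1</link>
      <description>Story snippet for the first item.</description>
    </item>
    <item>
      <title>Headline two</title>
      <link>https://example.com/2</link>
      <description>Story snippet for the second item.</description>
    </item>
  </channel>
</rss>"""

def parse_items(rss_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = parse_items(RSS_SAMPLE)
```

It is this regular, machine-readable structure that makes feeds easy for readers and aggregators to consume across platforms.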
Articles on web pages frequently include icons representing social media services that expose social data. Among these, RSS feeds deliver data typically presented in the journalistic style of headline, story, and snapshot(s), and applications and academic research have employed RSS on this basis. Within the context of social media, the question therefore arises: can the social function, i.e. the utility, of RSS be enhanced by producing from it data that is actionable and effective?
This thesis is based upon the hypothesis that fluctuations in the keyword frequencies present in RSS can be mined to produce actionable and effective data, thereby enhancing the technology's social utility. To this end, we present a series of laboratory-based case studies which demonstrate two novel and logically consistent RSS-mining paradigms. The first paradigm allows users to define mining rules for extracting data from feeds. The second employs a semi-automated classification of feeds and correlates it with sentiment. We visualise the outputs produced by the case studies for these paradigms, showing how they can benefit users in real-world scenarios ranging from statistics and trend analysis to the mining of financial and sporting data.
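The keyword-frequency hypothesis can be sketched as follows (a minimal illustration under our own assumptions, not the thesis's implementation; the keyword set and example texts are hypothetical): count occurrences of tracked keywords in the item text of successive feed snapshots, then report the per-keyword change between snapshots.

```python
# A minimal illustration of mining keyword frequencies from RSS item text
# and measuring their fluctuation between two feed snapshots.
from collections import Counter
import re

def keyword_frequencies(item_texts, keywords):
    """Count how often each tracked keyword occurs across a list of item texts."""
    counts = Counter()
    for text in item_texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in keywords:
                counts[word] += 1
    return counts

def fluctuations(previous, current):
    """Per-keyword change in frequency between two snapshots of a feed."""
    return {k: current.get(k, 0) - previous.get(k, 0)
            for k in set(previous) | set(current)}

tracked = {"market", "goal"}
old = keyword_frequencies(["Market rally continues"], tracked)
new = keyword_frequencies(
    ["Market dips", "Late goal wins match", "Market report"], tracked)
delta = fluctuations(old, new)
```

A rising delta for a keyword is the kind of signal that, in the thesis's terms, could be turned into actionable data such as a trend alert.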
The contributions of this thesis to web engineering and text mining are the demonstration of the proof of concept of these paradigms, through the integration of an array of open-source, third-party products into a coherent and innovative alpha-version prototype implemented as a Java JSP/servlet-based web application.