Classificação e agregação automática de notícias desportivas (Automatic classification and clustering of sports news)
Master's in Computer Engineering, Specialization in Architectures, Systems and Networks

This report has been prepared as part of the dissertation for the Master's degree in Computer Engineering at the School of Engineering – Polytechnic of Porto (Instituto Superior de Engenharia do Porto).
The report was developed to support the implementation of a module for automatic classification and clustering of sports news. That module will be integrated into a sports-related web application to be developed later.
The main goal of this research is to determine, among the many existing approaches, which ones best fit the requirements of the module to be developed. The approaches with the best evaluation results were chosen for implementation in the classification and clustering module.
First, a survey of the state of the art was carried out to map the existing possibilities. From these, the two algorithms judged most suitable were selected for each topic.
For text categorization, the Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) algorithms were selected; for document clustering, hierarchical algorithms and an adaptive k-means algorithm. Each of these candidates was then evaluated to determine which solutions best fit the problems at hand.
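The dissertation's own implementation is not reproduced here, but the KNN side of the pipeline can be sketched in a few lines. The following is a minimal illustration only: the tokenization, the cosine-similarity metric, the choice of k=3, and the training headlines are all assumptions, not details taken from the report.

```python
# Minimal k-nearest-neighbours text classifier over term-frequency vectors.
# Illustrative sketch only; feature extraction, similarity metric and k are assumed.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a lowercased, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def knn_classify(query, training, k=3):
    """Label the query with the majority class among its k most similar training texts."""
    q = vectorize(query)
    ranked = sorted(training, key=lambda item: cosine(q, vectorize(item[0])), reverse=True)
    top_labels = [label for _, label in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Invented training headlines, labelled by sport.
training = [
    ("porto wins the league title after final match", "football"),
    ("striker scores twice in derby match win", "football"),
    ("coach praises team after league match", "football"),
    ("nadal takes the open title on clay court", "tennis"),
    ("serve and volley decide the court final", "tennis"),
    ("umpire halts court play after rain at the open", "tennis"),
]

print(knn_classify("team wins league match", training))  # prints "football"
```

In a real system the term-frequency vectors would typically be TF-IDF weighted and the corpus far larger; this sketch only shows the nearest-neighbour voting step.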
A brief treatment of document summarization is also included; however, this is a secondary topic. The main focus of this report is document classification and clustering.
This work was carried out in cooperation with LIAAD/INESC TEC – "Laboratório de Inteligência Artificial e Apoio à Decisão", under the supervision of Dr. Nuno Escudeiro.
A series of case studies to enhance the social utility of RSS
RSS (really simple syndication, rich site summary or RDF site summary) is a dialect of
XML that provides a method of syndicating on-line content, where postings consist of
frequently updated news items, blog entries and multimedia. RSS feeds, produced by
organisations or individuals, are often aggregated, and delivered to users for consumption
via readers. The semi-structured format of RSS also allows the delivery/exchange of
machine-readable content between different platforms and systems.
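The machine-readable exchange described above rests on RSS being plain XML. As a rough illustration, a minimal RSS 2.0 document can be consumed with nothing but the standard library; the feed content below is invented for the example.

```python
# Parsing a minimal RSS 2.0 feed with Python's standard library.
# The feed text is a fabricated example of the headline/link/date structure.
import xml.etree.ElementTree as ET

RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>http://example.com/</link>
    <description>Frequently updated news items</description>
    <item>
      <title>Markets rally after rate decision</title>
      <link>http://example.com/markets</link>
      <pubDate>Mon, 01 Jan 2024 09:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Local team reaches cup final</title>
      <link>http://example.com/sport</link>
      <pubDate>Mon, 01 Jan 2024 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def read_items(rss_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 document."""
    channel = ET.fromstring(rss_text).find("channel")
    return [(item.findtext("title"), item.findtext("link"))
            for item in channel.findall("item")]

for title, link in read_items(RSS_SAMPLE):
    print(title, "->", link)
```

Because every reader and aggregator sees this same element structure, feeds from different producers can be merged and re-delivered without coordination between the systems involved.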
Articles on web pages frequently include icons that represent social media services
which facilitate social data. Amongst these, RSS feeds deliver data which is typically
presented in the journalistic style of headline, story and snapshot(s). Consequently, applications
and academic research have employed RSS on this basis. Therefore, within the
context of social media, the question arises: can the social function, i.e. utility, of RSS be
enhanced by producing from it data which is actionable and effective?
This thesis is based upon the hypothesis that the
fluctuations in the keyword frequencies
present in RSS can be mined to produce actionable and effective data, to enhance
the technology's social utility. To this end, we present a series of laboratory-based case
studies which demonstrate two novel and logically consistent RSS-mining paradigms. Our first paradigm allows users to define mining rules to mine data from feeds. The second
paradigm employs a semi-automated classification of feeds and correlates this with sentiment.
We visualise the outputs produced by the case studies for these paradigms, where
they can benefit users in real-world scenarios, varying from statistics and trend analysis
to mining financial and sporting data.
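The keyword-frequency-fluctuation hypothesis can be illustrated in miniature: aggregate term counts over two snapshots of feed headlines and rank the terms whose frequency moved most. This is a sketch of the idea only, not the thesis's mining rules; the headlines and the ranking cut-off are invented.

```python
# Sketch of mining keyword-frequency fluctuations across two feed snapshots.
# Snapshots and parameters are fabricated for illustration.
from collections import Counter

def term_counts(headlines):
    """Aggregate lowercase term frequencies across a snapshot of headlines."""
    counts = Counter()
    for headline in headlines:
        counts.update(headline.lower().split())
    return counts

def fluctuations(earlier, later, top=3):
    """Terms ranked by absolute change in frequency between two snapshots."""
    before, after = term_counts(earlier), term_counts(later)
    deltas = {t: after[t] - before[t] for t in set(before) | set(after)}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top]

snapshot_1 = ["shares steady in quiet trading", "cup draw announced"]
snapshot_2 = ["shares plunge as bank fails",
              "bank rescue talks begin",
              "bank shares hit low"]

print(fluctuations(snapshot_1, snapshot_2, top=1))  # "bank" surges by 3
```

A surge like this is the kind of actionable signal the case studies visualise, whether the spiking term is a company name in financial feeds or a club name in sporting ones.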
The contributions of this thesis to web engineering and text mining are the demonstration
of the proof of concept of our paradigms, through the integration of an array of
open-source, third-party products into a coherent and innovative, alpha-version prototype
software implemented in a Java JSP/servlet-based web application architecture.