4 research outputs found

    Filtered-page ranking: uma abordagem para ranqueamento de documentos HTML previamente filtrados [Filtered-Page Ranking: an approach for ranking previously filtered HTML documents]

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2016.

    Abstract: Web page ranking algorithms can be built with content-based, structure-based, or user search-based techniques. This research addresses a user search-based approach that ranks previously filtered documents: before ranking, a segmentation process splits each web page into blocks of three categories and eliminates irrelevant content. The proposed method, Filtered-Page Ranking (FPR), has two main steps: (i) web page segmentation and irrelevant content removal, and (ii) web page ranking. The removal step uses the proposed Query-Based Blocks Mining (QBM) algorithm to discard content unrelated to the user query, producing a filtered tree that is evaluated in the ranking step, so that ranking considers only relevant content. The ranking step computes how relevant each page is to a given query, using criteria drawn from information retrieval studies that give weight to specific parts of the document and to the highlighted features of certain HTML elements. The proposal is compared against two baselines, the classic vector space model and the CETR noise removal algorithm; the results indicate that QBM removes irrelevant content effectively and that the proposed relevance criteria improve the average ranking quality over the classic vector space model.
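    A minimal sketch of the filter-then-rank idea described above, assuming a simple bag-of-words overlap; the block categories, query_based_block_filter, and rank_score below are illustrative names and scoring choices, not the thesis's actual QBM or FPR definitions:

# Hypothetical sketch: blocks unrelated to the query are discarded before the
# page is scored, so ranking only sees relevant content. The block categories,
# the overlap filter, and the TF-based score are illustrative assumptions,
# not the thesis's actual QBM/FPR definitions.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Block:
    text: str
    category: str  # e.g. "content", "navigation", "noise" (assumed labels)

def tokenize(text):
    return [t.lower() for t in text.split() if t.isalnum()]

def query_based_block_filter(blocks, query):
    """Keep only blocks that share at least one term with the query."""
    q_terms = set(tokenize(query))
    return [b for b in blocks if q_terms & set(tokenize(b.text))]

def rank_score(blocks, query):
    """Simple term-frequency relevance of the filtered page for the query."""
    q_terms = tokenize(query)
    tf = Counter(t for b in blocks for t in tokenize(b.text))
    total = sum(tf.values()) or 1
    return sum(tf[t] / total for t in q_terms)

page = [
    Block("Filtered page ranking for HTML documents", "content"),
    Block("Home | About | Contact", "navigation"),
    Block("Buy cheap ads now", "noise"),
]
kept = query_based_block_filter(page, "page ranking")
print(round(rank_score(kept, "page ranking"), 3))  # 0.333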

    Automatic Web Content Extraction by Combination of Learning and Grouping

    Web pages consist of not only actual content, but also other elements such as branding banners, navigational elements, advertisements, copyright notices, etc. This noisy content is typically not related to the main subject of the web page. Identifying the actual content, or clipping web pages, has many applications, such as high-quality web printing, e-reading on mobile devices, and data mining. Although there are many existing methods attempting to address this task, most of them either work only on certain types of web pages, e.g. article pages, or have to develop different models for different websites. We formulate the actual content identification problem as a DOM tree node selection problem. We develop multiple features by utilizing DOM tree node properties to train a machine learning model, and candidate nodes are then selected based on the learned model. Based on the observation that the actual content is usually located in a spatially continuous block, we develop a grouping technique to further filter out noisy data and pick up missing data for the candidate nodes. We conduct extensive experiments on a real dataset and demonstrate that our solution produces high-quality outputs and outperforms several baseline methods.
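    A rough sketch of the learn-then-group pipeline under stated assumptions: content_score stands in for the trained model, and document-order indices stand in for spatial continuity; the features, threshold, and gap tolerance are illustrative choices, not the paper's.

# Illustrative sketch (not the paper's implementation): score DOM-like nodes
# with a stand-in "learned" scoring function, then keep the densest contiguous
# run of candidates, mimicking the learn-then-group idea.
from dataclasses import dataclass

@dataclass
class Node:
    index: int           # position of the node in document order
    text_len: int        # number of text characters under the node
    link_density: float  # fraction of the text that sits inside links

def content_score(node):
    """Stand-in for a trained model: long, link-poor nodes look like content."""
    return node.text_len * (1.0 - node.link_density)

def select_candidates(nodes, threshold=50.0):
    return [n for n in nodes if content_score(n) >= threshold]

def group_contiguous(candidates, max_gap=1):
    """Grouping step: keep the longest run of candidates whose document-order
    indices are (almost) contiguous, on the assumption that real content forms
    one spatially continuous block."""
    if not candidates:
        return []
    runs, current = [], [candidates[0]]
    for prev, cur in zip(candidates, candidates[1:]):
        if cur.index - prev.index <= max_gap:
            current.append(cur)
        else:
            runs.append(current)
            current = [cur]
    runs.append(current)
    return max(runs, key=len)

nodes = [Node(0, 40, 0.9), Node(1, 400, 0.1), Node(2, 350, 0.0),
         Node(3, 30, 0.8), Node(4, 500, 0.05)]
print([n.index for n in group_contiguous(select_candidates(nodes))])  # [1, 2]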

    Algoritmo não supervisionado para segmentação e remoção de ruído de páginas web utilizando tag paths [An unsupervised algorithm for segmenting and removing noise from web pages using tag paths]

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2014.

    Abstract: Web page segmentation and noise removal are essential steps in structured web data extraction. Identifying the main content region of a page and removing what is not important (menus, ads, etc.) can greatly improve the performance of the extraction process. For this task we propose a novel, fully automatic algorithm that uses a tag path sequence (TPS) representation of the web page. The TPS is a sequence of symbols (a string), each symbol representing a different tag path. The algorithm searches for positions in the TPS where it can be split into two regions whose alphabets do not intersect, which means the regions have completely different sets of tag paths and are therefore different regions of the page. The results show that the algorithm is very effective in identifying the main content block of several major web sites and improves the precision of the extraction step by removing irrelevant results.
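    A minimal sketch of the TPS-splitting idea, assuming each symbol of the string encodes one distinct tag path; the thesis's full segmentation and noise-removal procedure is more elaborate than this single check:

# Minimal sketch: find positions where the tag path sequence (TPS) can be cut
# into two regions with disjoint alphabets, i.e. no tag path occurs on both
# sides of the cut. The symbol encoding below is an assumed simplification.
def tps_split_points(tps):
    """Return positions i where tps[:i] and tps[i:] share no symbols."""
    points = []
    for i in range(1, len(tps)):
        if not set(tps[:i]) & set(tps[i:]):
            points.append(i)
    return points

# Each letter stands for one distinct tag path (e.g. html/body/div/a).
tps = "aababb" + "ccdcdd"     # header/menu region followed by main content
print(tps_split_points(tps))  # [6]: alphabets {a, b} and {c, d} are disjoint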

    Indexation et interrogation de pages web décomposées en blocs visuels [Indexing and querying of web pages decomposed into visual blocks]

    Abstract: This thesis is about indexing and querying web pages. We propose a new model, BlockWeb, based on the decomposition of web pages into a hierarchy of visual blocks. The model takes into account the visual importance of each block as well as the permeability of each block's content to the content of its neighboring blocks on the page. Splitting a page into blocks has several advantages for indexing and querying: in particular, queries can be answered at a finer granularity than the whole page, so that the blocks most similar to a query can be returned instead of the entire page. A page is modeled as a directed acyclic graph, the IP graph, in which each node is associated with a block and labeled with that block's coefficient of importance, and each arc is labeled with the coefficient of permeability of the target node's content to the source node's content. To build this graph from the block-tree representation of a page, we propose a new language, XIML (XML Indexing Management Language), a rule-based language in the style of XSLT. The model was assessed on two distinct applications: finding the best entry point in a corpus of electronic newspaper articles, and image indexing and retrieval on a corpus drawn from the ImagEval 2006 campaign. We present the results of these experiments.
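    A hedged sketch of how importance and permeability coefficients might combine when indexing blocks of the IP graph; the abstract does not give BlockWeb's actual formulas, so the combination rule, the Block class, and block_term_weights below are assumptions for illustration only.

# Hedged sketch of indexing with the IP graph: a block's term weights are
# scaled by its importance and augmented with permeability-weighted
# contributions from source blocks. BlockWeb's actual formulas are not given
# in the abstract; this combination rule is an assumption for illustration.
from collections import Counter

class Block:
    def __init__(self, name, text, importance):
        self.name = name
        self.tf = Counter(text.lower().split())
        self.importance = importance

def block_term_weights(blocks, arcs):
    # arcs: (source block, target block, permeability of target to source)
    weights = {b.name: Counter() for b in blocks}
    for b in blocks:
        for term, f in b.tf.items():
            weights[b.name][term] += b.importance * f
    for src, tgt, perm in arcs:
        for term, f in src.tf.items():
            weights[tgt.name][term] += perm * tgt.importance * f
    return weights

def score(block_weights, query):
    return sum(block_weights[t.lower()] for t in query.split())

title = Block("title", "ImagEval image retrieval", importance=1.0)
body = Block("body", "visual blocks for web page indexing", importance=0.6)
arcs = [(title, body, 0.5)]  # the body block is permeable to the title's content
w = block_term_weights([title, body], arcs)
best = max(w, key=lambda name: score(w[name], "image indexing"))
print(best, round(score(w[best], "image indexing"), 2))  # block most similar to the query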