A Word Embedding Based Approach for Focused Web Crawling Using the Recurrent Neural Network
Learning-based focused crawlers download relevant uniform resource locators (URLs) from the web for a specific topic. Several studies have used the term frequency-inverse document frequency (TF-IDF) weighted cosine vector as the input feature vector for learning algorithms. TF-IDF-based crawlers judge the relevance of a web page only if a topic word occurs on that page; otherwise the page is considered irrelevant, and similarity is not credited even when a synonym of a topic term occurs on the page. To resolve this challenge, this paper proposes a new methodology that integrates Adagrad-optimized Skip-Gram Negative Sampling (A-SGNS)-based word embedding with a recurrent neural network (RNN). Cosine similarities are calculated from the word embedding matrix to form a feature vector that is given as input to the RNN to predict the relevance of a website. The performance of the proposed method is evaluated using the harvest rate (hr) and irrelevance ratio (ir). The proposed methodology outperforms existing methodologies with an average harvest rate of 0.42 and an irrelevance ratio of 0.58.
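A minimal sketch of the core idea, assuming a gensim skip-gram model as a stand-in for the paper's Adagrad-optimized SGNS (gensim trains SGNS with plain SGD, so this only approximates the described setup); the corpus, topic terms, and page tokens below are made-up placeholders:

```python
# Sketch: build a cosine-similarity feature vector from skip-gram
# negative-sampling (SGNS) embeddings. Not the paper's A-SGNS
# pipeline; gensim's optimizer differs from Adagrad.
import numpy as np
from gensim.models import Word2Vec

# Hypothetical corpus of tokenized web pages.
corpus = [
    ["solar", "panel", "energy", "renewable"],
    ["wind", "turbine", "renewable", "power"],
    ["football", "league", "season", "goal"],
]

# sg=1, negative=5 selects skip-gram with negative sampling.
model = Word2Vec(corpus, vector_size=50, sg=1, negative=5, min_count=1)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def feature_vector(page_tokens, topic_terms):
    """Cosine similarity of each topic term to the page's mean vector."""
    in_vocab = [t for t in page_tokens if t in model.wv]
    if not in_vocab:
        return np.zeros(len(topic_terms))
    page_vec = np.mean([model.wv[t] for t in in_vocab], axis=0)
    return np.array([
        cosine(model.wv[t], page_vec) if t in model.wv else 0.0
        for t in topic_terms
    ])

# A related word such as "power" now contributes a nonzero score even
# if the literal topic word never occurs on the page.
print(feature_vector(["wind", "turbine", "power"], ["energy", "renewable"]))
```

The resulting vector could then feed a sequence model such as the RNN the paper describes.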
Look back, look around: A systematic analysis of effective predictors for new outlinks in focused Web crawling
Small and medium enterprises rely on detailed Web analytics to be informed about their market and competition. Focused crawlers meet this demand by crawling and indexing specific parts of the Web. Critically, a focused crawler must quickly find new pages that have not yet been indexed. Since a new page can be discovered only by following a new outlink, predicting new outlinks is very relevant in practice. In the literature, many feature designs have been proposed for predicting changes in the Web. In this work we provide a structured analysis of this problem, using new outlinks as our running prediction target. Specifically, we unify earlier feature designs in a taxonomic arrangement of features along two dimensions: static versus dynamic features, and features of a page versus features of the network around it. Within this taxonomy, complemented by our new (mainly dynamic network) features, we identify the best predictors for new outlinks. Our main conclusion is that the most informative features are the recent history of new outlinks on a page itself and on its content-related pages. Hence, we propose a new 'look back, look around' (LBLA) model that uses only these features. With the obtained predictions, we design a number of scoring functions to guide a focused crawler to pages with the most new outlinks, and compare their performance. The LBLA approach proved extremely effective, outperforming other models, including those that use the most complete set of features. One of the learners we use is the recent NGBoost method, which assumes a Poisson distribution for the number of new outlinks on a page and learns its parameters. This connects two so far unrelated avenues in the literature: predictions based on features of a page, and those based on probabilistic modelling. All experiments were carried out on
an original dataset, made available by a commercial focused crawler.
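A minimal sketch of the probabilistic learner the abstract mentions, assuming the open-source ngboost package; the features are hypothetical stand-ins for the 'look back, look around' signals, and the synthetic data is made up for illustration:

```python
# Sketch: NGBoost regression with a Poisson predictive distribution
# for the count of new outlinks on a page. Feature design is a
# made-up proxy for the paper's 'look back, look around' signals.
import numpy as np
from ngboost import NGBRegressor
from ngboost.distns import Poisson

rng = np.random.default_rng(0)
n = 500

# Columns: new-outlink counts on the page over recent crawls
# ("look back") and an aggregate over content-related neighbours
# ("look around").
X = rng.poisson(lam=2.0, size=(n, 4)).astype(float)
y = rng.poisson(lam=1.0 + 0.5 * X[:, 0] + 0.3 * X[:, 3])

model = NGBRegressor(Dist=Poisson, n_estimators=200, verbose=False)
model.fit(X, y)

# Point predictions (means of the fitted Poisson distributions);
# model.pred_dist(X) would return the full predictive distribution,
# which a crawler can use to rank pages by expected new outlinks.
print(model.predict(X[:5]))
```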
Dynamic Task Scheduling in Remote Sensing Data Acquisition from Open-Access Data Using CloudSim
With the rapid development of cloud computing and network technologies, large-scale remote sensing data collection tasks are receiving more interest from individuals and small and medium-sized enterprises. Large-scale remote sensing data collection has its own challenges, including scarce available node resources, short collection windows, and low collection efficiency. Moreover, public remote sensing data sources impose restrictions on users, such as limits on IP access, request frequency, and bandwidth. To satisfy users' demand for access to public remote sensing data collection nodes and to effectively increase the data collection speed, this paper proposes TSCD-TSA, a dynamic task scheduling algorithm that combines a BP neural network prediction algorithm with PSO-based task scheduling algorithms. Comparative experiments were carried out using the proposed task scheduling algorithms on an acquisition task using data from Sentinel-2. The experimental results show that the MAX-MAX-PSO dynamic task scheduling algorithm achieves a smaller fitness value and a faster convergence speed.
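For illustration, a minimal sketch of the PSO scheduling component described above: particles encode task-to-node assignments and the fitness is the makespan of the busiest node. This is a generic discrete PSO under made-up task sizes and node speeds, not the paper's TSCD-TSA algorithm:

```python
# Sketch: PSO-based task scheduling. Assign download tasks to
# collection nodes so the slowest node (makespan) finishes fastest.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_nodes = 40, 5
task_size = rng.uniform(10, 100, n_tasks)   # MB per task (made up)
node_speed = rng.uniform(5, 20, n_nodes)    # MB/s per node (made up)

def makespan(assign):
    """Fitness: completion time of the busiest node."""
    loads = np.zeros(n_nodes)
    for t, node in enumerate(assign):
        loads[node] += task_size[t] / node_speed[node]
    return loads.max()

# Continuous particle positions, decoded to node indices by rounding.
n_particles = 30
pos = rng.uniform(0, n_nodes - 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([makespan(np.rint(p).astype(int)) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_nodes - 1)
    fit = np.array([makespan(np.rint(p).astype(int)) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("best makespan (s):", round(pbest_fit.min(), 2))
```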
An Ant Colony Optimization Based Feature Selection for Web Page Classification
The increased popularity of the web has caused the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text content, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, so as to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest-neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm selects better features than the well-known information gain and chi-square feature selection methods.
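A minimal sketch of the wrapper approach described above, with ants sampling feature subsets in proportion to pheromone and the best subset reinforcing its features; this is a simplified ACO variant on synthetic data, not the paper's exact algorithm:

```python
# Sketch: ACO-style feature selection with a naive Bayes wrapper.
# Pheromone evaporates each iteration and is reinforced on the
# features of the best subset found so far.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30,
                           n_informative=5, random_state=0)
n_feats, n_ants, k = X.shape[1], 10, 8
pheromone = np.ones(n_feats)

def score(subset):
    """Wrapper fitness: cross-validated naive Bayes accuracy."""
    return cross_val_score(GaussianNB(), X[:, subset], y, cv=3).mean()

best_subset, best_score = None, -1.0
for _ in range(20):                        # iterations
    for _ in range(n_ants):                # each ant picks k features
        p = pheromone / pheromone.sum()
        subset = rng.choice(n_feats, size=k, replace=False, p=p)
        s = score(subset)
        if s > best_score:
            best_subset, best_score = subset, s
    pheromone *= 0.9                       # evaporation
    pheromone[best_subset] += best_score   # reinforce best features

print("selected:", sorted(best_subset), "accuracy:", round(best_score, 3))
```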
Reading the news through its structure: new hybrid connectivity based approaches
In this thesis a solution to the problem of identifying the structure of news published by online newspapers is presented. This problem requires new approaches and algorithms that are capable of dealing with the massive number of online publications in existence (a number that will keep growing in the future). The fact that news documents present a high degree of interconnection makes this an interesting and hard problem to solve. The identification of the structure of the news is accomplished both by descriptive methods that expose the dimensionality of the relations between different news items, and by clustering the news into topic groups. To achieve this, the news system was studied as an integrated whole from different perspectives and approaches.

For the identification of news clusters and structure, and after a preparatory data collection phase in which several online newspapers from different parts of the globe were collected, two newspapers were chosen in particular: the Portuguese daily newspaper Público and the British newspaper The Guardian. The choice of newspapers in different languages reflects the aim of finding analysis strategies that are independent of prior knowledge about these systems.

In the first case, it was shown how information theory (namely, variation of information) combined with adaptive networks was able to identify topic clusters in the news published by the Portuguese online newspaper Público.

In the second case, the structure of news published by the British newspaper The Guardian is revealed through the construction of time series of news clustered by a k-means process. Following this approach, an unsupervised algorithm was developed that filters out irrelevant news published online by taking into consideration the connectivity of the news labels entered by the journalists. This novel hybrid technique is based on Q-analysis for the construction of the filtered network, followed by a clustering technique to identify the topical clusters. Presently this work uses a modularity optimisation clustering technique, but this step is general enough that other hybrid approaches can be used without losing generality.

A novel second-order swarm intelligence algorithm based on Ant Colony Systems was developed for the travelling salesman problem; it is consistently better than the traditional benchmarks. This algorithm is used to construct Hamiltonian paths over the published news, using the eccentricity of the different documents as a measure of distance. This approach allows for easy navigation between published stories that depends on the connectivity of the underlying structure.

The results presented in this work show the importance of treating topic detection in large corpora as a multitude of relations and connectivities that are not in a static state. They also influence the way of looking at multi-dimensional ensembles, by showing that the inclusion of the high-dimensional connectivities gives better results in solving a particular problem, as was the case in the clustering problem of the news published online.
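As one concrete piece of the toolkit above, a minimal sketch of the variation of information metric used to compare topic clusterings, VI(X;Y) = H(X) + H(Y) - 2I(X;Y); the two labelings below are made-up cluster assignments for the same set of articles:

```python
# Sketch: variation of information (VI) between two clusterings.
# Uses I(X;Y) = H(X) + H(Y) - H(X,Y), hence VI = 2H(X,Y) - H(X) - H(Y).
import numpy as np

def variation_of_information(labels_a, labels_b):
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    # Joint distribution over (cluster-in-A, cluster-in-B) pairs.
    _, joint_counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_xy = joint_counts / n
    p_x = np.unique(a, return_counts=True)[1] / n
    p_y = np.unique(b, return_counts=True)[1] / n
    h = lambda p: -np.sum(p * np.log(p))   # Shannon entropy (nats)
    return 2 * h(p_xy) - h(p_x) - h(p_y)

# Two clusterings of eight articles, identical up to one article;
# VI is 0 for identical clusterings and grows as they diverge.
print(variation_of_information([0, 0, 1, 1, 2, 2, 2, 0],
                               [0, 0, 1, 1, 2, 2, 1, 0]))
```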
Report of the Working Group on the Application of Genetics in Fisheries and Mariculture (WGAGFM) [1–3 April 2009 Sopot, Poland]
Contributors: Geir Dahle (Chair) and Torild Johanse
Climbing and Walking Robots
Nowadays robotics is one of the most dynamic fields of scientific research. The shift of robotics research from manufacturing to service applications is clear. Over the last decades, interest in studying climbing and walking robots has increased. This growing interest spans many areas, the most important of which are mechanics, electronics, medical engineering, cybernetics, controls, and computers. Today's climbing and walking robots combine manipulative, perceptive, communicative, and cognitive abilities, and they are capable of performing many tasks in industrial and non-industrial environments. Surveillance, planetary exploration, emergency rescue operations, reconnaissance, petrochemical applications, construction, entertainment, personal services, intervention in severe environments, transportation, and medicine are some applications from the very diverse application fields of climbing and walking robots. Given the great progress in this area of robotics, it is anticipated that the next generation of climbing and walking robots will enhance lives and change the way humans work, think, and make decisions. This book presents the state-of-the-art achievements, recent developments, applications, and future challenges of climbing and walking robots, presented in 24 chapters by authors from throughout the world. The book serves as a reference especially for researchers who are interested in mobile robots. It is also useful for industrial engineers and graduate students in advanced study.
Big Data Computing for Geospatial Applications
The convergence of big data and geospatial computing has brought forth challenges and opportunities for Geographic Information Science with regard to geospatial data management, processing, analysis, modeling, and visualization. This book highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, and at the same time demonstrates opportunities for using big data in geospatial applications. Crucial to the advancements highlighted in this book is the integration of computational thinking and spatial thinking, and the transformation of abstract ideas and models into concrete data structures and algorithms.