Reading the news through its structure: new hybrid connectivity based approaches
This thesis presents a solution to the problem of identifying the structure of news published
by online newspapers. The problem calls for new approaches and algorithms capable of
dealing with the massive number of online publications in existence (a number that will
only grow). The high degree of interconnection among news documents makes this an
interesting and hard problem to solve. The structure of the news is identified both by
descriptive methods that expose the dimensionality of the relations between different news
items, and by clustering the news into topic groups. To carry out this analysis, the system
was studied as an integrated whole, from different perspectives and with different approaches.
For the identification of news clusters and structure, after a preparatory data-collection
phase in which several online newspapers from different parts of the globe were gathered,
two newspapers were chosen: the Portuguese daily Público and the British newspaper
The Guardian. Newspapers in two different languages were deliberately selected, so that
the analysis strategies would not depend on prior knowledge about these systems.
In the first case, it is shown how information theory (namely, variation of information)
combined with adaptive networks can identify topic clusters in the news published by the
Portuguese online newspaper Público.
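The variation-of-information measure used above can be computed directly from two partitions of the same document set. A minimal sketch of the information-theoretic distance itself (not the thesis's adaptive-network pipeline):

```python
import math

def variation_of_information(part_a, part_b):
    """Variation of information between two partitions of the same items.

    Each partition is a list of disjoint sets covering the same n items.
    VI(A, B) = H(A) + H(B) - 2 * I(A, B); it is 0 iff the partitions match.
    """
    n = sum(len(c) for c in part_a)
    vi = 0.0
    for a in part_a:
        for b in part_b:
            overlap = len(a & b)
            if overlap == 0:
                continue
            p_a, p_b, p_ab = len(a) / n, len(b) / n, overlap / n
            # Each overlapping pair of clusters contributes
            # -p_ab * (log(p_ab / p_a) + log(p_ab / p_b)) to the distance.
            vi -= p_ab * (math.log(p_ab / p_a) + math.log(p_ab / p_b))
    return vi
```

Identical clusterings give a distance of zero; the more two topic assignments disagree, the larger the value, which makes it a usable distance between candidate clusterings.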
In the second case, the structure of the news published by the British newspaper The
Guardian is revealed through the construction of time series of news clustered by a k-means
process. Building on this approach, an unsupervised algorithm was developed that filters
out irrelevant news published online by taking into account the connectivity of the news
labels entered by the journalists. This novel hybrid technique is based on Q-analysis for
the construction of the filtered network, followed by a clustering step to identify the topical
clusters. The present work uses a modularity-optimisation clustering technique for this
step, but the step is general enough that other approaches can be substituted without
loss of generality.
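The Q-analysis filtering step can be illustrated with a small sketch. Here `q_connected_filter` is a hypothetical helper, assuming that "irrelevant" means q-near to no other item, i.e. sharing at most q labels with every other news item; the thesis's actual construction of the filtered network may differ:

```python
def q_connected_filter(label_sets, q):
    """Keep items that share at least q + 1 labels with some other item.

    In Q-analysis terms, two simplices (news items, represented here by
    their journalist-entered label sets) are q-near when their shared face
    has dimension >= q, i.e. they have at least q + 1 labels in common.
    Items q-near to nothing are treated as irrelevant and filtered out.
    """
    kept = []
    for i, labels in enumerate(label_sets):
        if any(len(labels & other) >= q + 1
               for j, other in enumerate(label_sets) if j != i):
            kept.append(i)
    return kept
```

Raising q tightens the filter: only stories embedded in densely shared label structure survive, which is what lets the subsequent clustering step operate on a cleaner network.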
A novel second-order swarm intelligence algorithm based on Ant Colony Systems was also
developed for the travelling salesman problem; it consistently outperforms the traditional
benchmarks. This algorithm is used to construct Hamiltonian paths over the published
news, using the eccentricity of the different documents as a measure of distance. The
approach allows easy navigation between published stories that depends on the connectivity
of the underlying structure.
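For orientation, a plain Ant System for the TSP (the classical baseline, not the second-order variant developed in the thesis) can be sketched as follows; any symmetric distance matrix, such as one built from document eccentricities, can be plugged in:

```python
import math
import random

def ant_system_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   rho=0.5, seed=0):
    """Plain Ant System for the TSP over a symmetric distance matrix.

    Returns (best_tour, best_length), where best_tour is a list of node
    indices and best_length is the length of the closed tour.
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone levels
    eta = [[0.0 if i == j else 1.0 / dist[i][j]    # heuristic visibility
            for j in range(n)] for i in range(n)]
    best_tour, best_len = None, math.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # Roulette-wheel choice weighted by pheromone and visibility.
                weights = [(j, (tau[i][j] ** alpha) * (eta[i][j] ** beta))
                           for j in unvisited]
                total = sum(w for _, w in weights)
                r, acc = rng.random() * total, 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        unvisited.remove(j)
                        break
            length = sum(dist[tour[k]][tour[k + 1]] for k in range(n - 1))
            length += dist[tour[-1]][tour[0]]      # close the tour
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone proportional to tour quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

On a unit square of four cities the algorithm quickly settles on the perimeter tour of length 4; the thesis's second-order variant adds further mechanisms on top of this basic pheromone loop.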
The results presented in this work show the importance of treating topic detection in
large corpora as a multitude of relations and connectivities that are not static. They also
change the way multi-dimensional ensembles are viewed, by showing that including the
higher-dimensional connectivities yields better results for a given problem, as was the case
for clustering the news published online.
Mining Aircraft Telemetry Data With Evolutionary Algorithms
The Ganged Phased Array Radar - Risk Mitigation System (GPAR-RMS) was a
mobile ground-based sense-and-avoid system for Unmanned Aircraft System (UAS)
operations developed by the University of North Dakota. GPAR-RMS detected proximate
aircraft with various sensor systems, including a 2D radar and an Automatic Dependent
Surveillance - Broadcast (ADS-B) receiver. Information about those aircraft was then
displayed to UAS operators via visualization software developed by the University of
North Dakota. The Risk Mitigation (RM) subsystem for GPAR-RMS was designed to
estimate the current risk of midair collision, between the Unmanned Aircraft (UA) and a
General Aviation (GA) aircraft flying under Visual Flight Rules (VFR) in the surrounding
airspace, for UAS operations in Class E airspace (i.e. below 18,000 feet MSL). However,
accurate probabilistic models for the behavior of pilots of GA aircraft flying under VFR
in Class E airspace were needed before the RM subsystem could be implemented.
In this dissertation the author presents the results of data mining an aircraft
telemetry data set from a consecutive nine month period in 2011. This aircraft telemetry
data set consisted of Flight Data Monitoring (FDM) data obtained from Garmin G1000
devices onboard every Cessna 172 in the University of North Dakota's training fleet.
Data from aircraft which were potentially within the controlled airspace surrounding
controlled airports were excluded. Also, GA aircraft in the FDM data flying in Class E
airspace were assumed to be flying under VFR, which is usually a valid assumption.
Complex subpaths were discovered from the aircraft telemetry data set using a novel
application of an ant colony algorithm. Then, probabilistic models were data mined from
those subpaths using extensions of the Genetic K-Means (GKA) and Expectation-
Maximization (EM) algorithms.
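The Expectation-Maximization step can be illustrated with a toy two-component 1D Gaussian mixture. This is textbook EM, not the dissertation's extended algorithm, and the initialisation scheme here is an assumption made for the sketch:

```python
import math

def em_gmm_1d(xs, n_iters=50):
    """EM for a two-component 1D Gaussian mixture.

    Returns (weights, means, variances) after n_iters EM updates.
    """
    # Crude initialisation: split the sorted data in half.
    xs = sorted(xs)
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return w, mu, var
```

Fed two well-separated groups of values, the means converge to the group centres; in the dissertation's setting, analogous mixture components would summarise recurring maneuver patterns within the discovered subpaths.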
The results obtained from the subpath discovery and data mining suggest a pilot
flying a GA aircraft near to an uncontrolled airport will perform different maneuvers than
a pilot flying a GA aircraft far from an uncontrolled airport, irrespective of the altitude of
the GA aircraft. However, since only aircraft telemetry data from the University of North
Dakota's training fleet were mined, these results are unlikely to apply to GA aircraft
operating in a non-training environment.
WEB NEWS DOCUMENTS CLUSTERING IN INDONESIAN LANGUAGE USING SINGULAR VALUE DECOMPOSITION-PRINCIPAL COMPONENT ANALYSIS (SVDPCA) AND ANT ALGORITHMS
Ant-based document clustering is a clustering method that measures text-document similarity based on the shortest path between nodes (trial phase) and determines the optimal clusters from the sequence of document similarities (dividing phase). The trial phase of the Ant algorithm takes very long to build document vectors because of the high-dimensional Document-Term Matrix (DTM). In this paper, we propose a document clustering method that optimises dimension reduction using Singular Value Decomposition-Principal Component Analysis (SVDPCA) and Ant algorithms. SVDPCA reduces the size of the DTM by converting the term frequencies of the conventional DTM into principal-component scores of a Document-PC Matrix (DPCM). The Ant algorithm then clusters the documents in a vector space model built from the reduced DPCM. Experimental results on 506 news documents in Indonesian demonstrated that the proposed method optimises dimension reduction by up to 99.7%. The execution time of the trial phase was reduced substantially while clustering quality was maintained; the best F-measure achieved in the experiments was 0.88 (88%).
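The dimension-reduction idea (replacing high-dimensional term-frequency vectors with a few principal-component scores) can be sketched in miniature. This computes only the leading component via power iteration, a deliberate simplification of the paper's full SVD-PCA pipeline:

```python
def top_pc_scores(dtm, n_iters=200):
    """Project documents onto the leading principal component of a
    document-term matrix via power iteration.

    dtm is a list of equal-length rows (term frequencies per document);
    returns one scalar score per document in place of its term vector.
    """
    n_docs, n_terms = len(dtm), len(dtm[0])
    # Centre each column so the result is a principal component,
    # not just a singular vector of the raw counts.
    means = [sum(row[j] for row in dtm) / n_docs for j in range(n_terms)]
    x = [[row[j] - means[j] for j in range(n_terms)] for row in dtm]
    # Deterministic start vector, assumed not orthogonal to the
    # leading component.
    v = [1.0 if j == 0 else 0.0 for j in range(n_terms)]
    for _ in range(n_iters):
        # One power-iteration step on X^T X, then renormalise.
        scores = [sum(r[j] * v[j] for j in range(n_terms)) for r in x]
        v = [sum(scores[i] * x[i][j] for i in range(n_docs))
             for j in range(n_terms)]
        norm = sum(c * c for c in v) ** 0.5 or 1.0
        v = [c / norm for c in v]
    return [sum(r[j] * v[j] for j in range(n_terms)) for r in x]
```

Documents with similar vocabulary end up with scores of the same sign, so even a single component already separates coarse topics; the paper retains enough components to keep most of the variance while shrinking the matrix the Ant algorithm must traverse.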
An Ant Colony Optimization Based Feature Selection for Web Page Classification
The increased popularity of the web has brought a huge amount of information onto it, and as a result of this explosive growth, automated web page classification systems are needed to improve search engine performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text content, that should be considered during automated classification. The aim of this study is to reduce the number of features used, in order to improve both the runtime and the accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then applied the well-known C4.5, naive Bayes, and k-nearest-neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments and showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm selects better features than the well-known information gain and chi-square feature selection methods.
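The ACO feature-selection idea can be sketched as pheromone-guided subset sampling. This is a simplified illustration, not the paper's exact formulation, and the scoring function standing in for classifier accuracy is an assumption:

```python
import random

def aco_feature_select(features, score_fn, n_select, n_ants=10,
                       n_iters=30, rho=0.2, seed=0):
    """Pheromone-guided search for a good feature subset.

    Each ant samples n_select features with probability proportional to
    pheromone; subsets that score well under score_fn (higher is better,
    e.g. held-out classifier accuracy) deposit pheromone on their features.
    """
    rng = random.Random(seed)
    tau = {f: 1.0 for f in features}
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iters):
        sampled = []
        for _ in range(n_ants):
            pool, subset = list(features), []
            while len(subset) < n_select:
                # Roulette-wheel pick weighted by pheromone.
                total = sum(tau[f] for f in pool)
                r, acc = rng.random() * total, 0.0
                for f in pool:
                    acc += tau[f]
                    if acc >= r:
                        subset.append(f)
                        pool.remove(f)
                        break
            s = score_fn(subset)
            sampled.append((subset, s))
            if s > best_score:
                best_subset, best_score = subset, s
        # Evaporate, then deposit proportional to subset quality.
        for f in tau:
            tau[f] *= (1.0 - rho)
        for subset, s in sampled:
            for f in subset:
                tau[f] += max(s, 0.0)
    return sorted(best_subset), best_score
```

In the paper's setting, `score_fn` would wrap one of the evaluated classifiers; pheromone concentrates on features that repeatedly appear in high-scoring subsets, which is how ACO ends up beating filter methods like information gain.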
Speech Recognition Using Combined Fuzzy and Ant Colony algorithm
In recent years, various methods have been proposed for speech recognition, and removing noise from the speech signal has become an important issue. In this paper, a fuzzy system is proposed for speech recognition that obtains accurate results by classifying speech signals with an "Ant Colony" algorithm. First, speech samples are given to the fuzzy system to obtain a pattern for every set of signals, which helps with dimensionality reduction, easier checking of outcomes, and better recognition of signals. Then, the ACO algorithm is used to cluster these signals and assign a cluster to each input signal. With this method we are also able to recognize noise, place it in a separate cluster, and remove it from the input signal. Results show that the accuracy of speech detection and noise removal is satisfactory.
Bio-inspired ant colony optimization based clustering algorithm with mobile sinks for applications in consumer home automation networks
With the fast development of wireless communications, ZigBee, and semiconductor devices, home automation networks have recently become very popular. Since typical consumer products deployed in home automation networks are often powered by tiny, limited batteries, one of the most challenging research issues is reducing energy use and balancing energy consumption across the network in order to prolong the network lifetime of consumer devices. The introduction of clustering and sink-mobility techniques into home automation networks has been shown to be an efficient way to improve network performance and has received significant research attention.
Taking inspiration from nature, this paper proposes an Ant Colony Optimization (ACO) based clustering algorithm with mobile-sink support for home automation networks. In this work, the network is divided into several clusters, and a cluster head is selected within each cluster. A mobile sink then communicates with each cluster head to collect data directly through short-range communications. The ACO algorithm is used to find the optimal mobility trajectory for the mobile sink. Extensive simulation results show that, in terms of energy consumption and network lifetime, the proposed algorithm with mobile sinks significantly outperforms other routing algorithms currently deployed for home automation networks.