
    Spatial clustering algorithms for areal data

    The main aim of this thesis is to develop new spatial clustering approaches which can simultaneously identify different areal clusters and guarantee their geographical contiguity. The second aim is to adjust the finite mixture model to cope with the issues caused by outliers or singletons (clusters with only one object). In addition, the thesis aims to extend the applications of these newly proposed spatial clustering techniques from univariate to multivariate space.

    In Chapter 1, I will review available clustering techniques for grouping spatial data, introduce the different types of clustering data and present the Glasgow housing market data used in the thesis's applications; at the end of that chapter, I will outline the structure of the thesis. In Chapter 2, I will give the general statistical theory and inference methodologies used across the thesis, including frequentist and Bayesian statistical inference, multidimensional scaling and the Procrustes transformation. In Chapter 3, I will introduce techniques for transforming between the two types of clustering data introduced in Chapter 1. Chapter 4 will define cluster and graph terminology and introduce different clustering techniques, such as hierarchical clustering, Chameleon hierarchical clustering and model-based clustering; it will also cover techniques for cluster comparison and methods for deciding the number of clusters and the number of dimensions. Chapter 6 will introduce spatial hierarchical clustering in more detail; its simulation results will serve as the reference against which the results of the proposed novel spatial clustering techniques are compared in later chapters.

    The newly proposed techniques for clustering areal data, namely Chameleon spatial hierarchical clustering, the spatially constrained finite mixture model with a noise component or with priors, and spatially constrained Bayesian model-based clustering with dissimilarities, will be introduced in Chapters 7, 8 and 9 respectively; simulations and the application to the Glasgow housing market are given at the end of each of these three chapters. Chameleon spatial hierarchical clustering combines spatial contiguity with Chameleon hierarchical clustering, so that areas grouped together are spatially contiguous. The spatially constrained finite mixture models incorporate a spatial prior distribution into the classical finite mixture model to deal with the spatial contiguity issue; I also make the spatially constrained finite mixture model more robust by incorporating a uniform distribution to model noise points or by adding prior distributions to the model. In Chapter 9, I will add a spatial prior to the model-based clustering with dissimilarities model and then use a Bayesian approach to obtain a spatially contiguous clustering. Chapter 10 will present conclusions and a discussion of the newly proposed clustering methods.
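
    As a minimal, hedged illustration of the general idea of spatially constrained clustering of areal data (this is not the thesis's Chameleon or Bayesian method), the sketch below uses scikit-learn's agglomerative clustering with a contiguity matrix so that only geographically adjacent areas can be merged; the adjacency matrix and the single house-price feature are hypothetical placeholders.

        # Minimal sketch: spatially constrained hierarchical clustering of areal data.
        # Not the thesis's own algorithms; it only illustrates how a contiguity
        # (adjacency) constraint forces clusters to be geographically connected.
        import numpy as np
        from scipy.sparse import csr_matrix
        from sklearn.cluster import AgglomerativeClustering

        # Hypothetical inputs: one feature per area (e.g. mean house price) and a
        # symmetric 0/1 adjacency matrix recording which areas share a border.
        prices = np.array([[210.0], [215.0], [380.0], [390.0], [205.0], [400.0]])
        adjacency = csr_matrix(np.array([
            [0, 1, 0, 0, 1, 0],
            [1, 0, 1, 0, 1, 0],
            [0, 1, 0, 1, 0, 1],
            [0, 0, 1, 0, 0, 1],
            [1, 1, 0, 0, 0, 0],
            [0, 0, 1, 1, 0, 0],
        ]))

        # The connectivity argument restricts merges to adjacent areas, so every
        # resulting cluster is a spatially contiguous block.
        model = AgglomerativeClustering(n_clusters=2, linkage="ward",
                                        connectivity=adjacency)
        labels = model.fit_predict(prices)
        print(labels)  # e.g. [0 0 1 1 0 1]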

    User behaviour identification based on location data

    Over the years there has been an almost exponential increase in the use of new technologies across various sectors. The main objective of these technologies is to improve or facilitate our daily lives. This study focuses on one such technology within a topic that has been widely discussed over the last few years: the use of people's personal data to identify certain types of behaviour. More specifically, the study primarily uses the GPS data stored in the Google accounts of nine volunteers to identify the places they frequent most, also known as Points of Interest (POIs). The same data are also used to identify the trajectories each volunteer covers most often. A study was carried out with the sample of nine participants, who were sent their maps with POIs and trajectories and thereby validated them. It was thus possible to conclude that the best way to identify POIs is to build daily clusters with DBSCAN. In the case of trajectories, the Snap-to-Road method gave the best results. It was found that the initial problem could be answered: a method was found that successfully identifies most of the POIs as well as some trajectories. Based on this work, there is a clear opportunity to improve some of the algorithms and processes that still have limitations and, with this in mind, to develop more effective solutions.
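
    As a hedged sketch of the daily-clustering approach described above (parameter values such as a 100 m radius and 5 points are illustrative assumptions, not the values used in the study), the following Python function groups one volunteer's GPS fixes into daily DBSCAN clusters and reports each cluster centroid as a candidate Point of Interest.

        # Sketch of daily DBSCAN clustering of GPS fixes for POI detection.
        import numpy as np
        import pandas as pd
        from sklearn.cluster import DBSCAN

        EARTH_RADIUS_M = 6_371_000.0

        def daily_pois(df, eps_m=100.0, min_samples=5):
            """df has columns 'timestamp', 'lat', 'lon' for one volunteer.
            Returns one candidate POI (cluster centroid) per day and cluster."""
            df = df.copy()
            df["day"] = pd.to_datetime(df["timestamp"]).dt.date
            pois = []
            for day, group in df.groupby("day"):
                # Haversine metric expects coordinates in radians; eps is a radius
                # in radians, hence the division by the Earth radius.
                coords = np.radians(group[["lat", "lon"]].to_numpy())
                labels = DBSCAN(eps=eps_m / EARTH_RADIUS_M, min_samples=min_samples,
                                metric="haversine").fit_predict(coords)
                for label in set(labels) - {-1}:   # -1 marks DBSCAN noise points
                    members = group[labels == label]
                    pois.append({"day": day,
                                 "lat": members["lat"].mean(),
                                 "lon": members["lon"].mean(),
                                 "n_points": len(members)})
            return pd.DataFrame(pois)

    Centroids that recur across many days can then be merged into stable POIs; the Snap-to-Road step used for trajectories is a separate map-matching service and is not sketched here.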

    Data Clustering: Algorithms and Its Applications

    Data is useless if information or knowledge that can be used for further reasoning cannot be inferred from it. Cluster analysis partitions data, based on some criteria, into meaningful and/or practically useful categories (clusters) of objects that share common characteristics. In research, clustering and classification have been used to analyze data in fields such as machine learning, bioinformatics, statistics and pattern recognition, to mention a few. Different methods of clustering include Partitioning (K-means), Hierarchical (AGNES), Density-based (DBSCAN), Grid-based (STING), Soft clustering (FANNY), Model-based (SOM) and Ensemble clustering. Challenges and problems in clustering arise from large datasets, misinterpretation of results and the efficiency/performance of clustering algorithms, all of which must be considered when choosing a clustering algorithm. In this paper, the application of data clustering is systematically discussed in view of the characteristics of the different clustering techniques that make them better suited, or biased, when applied to particular types of data, such as uncertain data, multimedia data, graph data, biological data, stream data, text data, time series data, categorical data and big data. The suitability of the available clustering algorithms to different application areas is presented. Also investigated are some existing cluster validity methods used to evaluate the goodness of the clusters produced by the clustering algorithms.
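
    To make the contrast between clustering families concrete (this example is not from the paper), the snippet below runs a partitioning method (K-means) and a density-based method (DBSCAN) on the same non-convex data set; K-means assumes roughly spherical clusters, while DBSCAN follows density and labels low-density points as noise (-1).

        # Illustrative comparison: partitioning vs density-based clustering.
        from sklearn.datasets import make_moons
        from sklearn.cluster import KMeans, DBSCAN

        X, _ = make_moons(n_samples=300, noise=0.06, random_state=0)

        kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        dbscan_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

        print("K-means cluster sizes:", [list(kmeans_labels).count(c) for c in set(kmeans_labels)])
        print("DBSCAN cluster sizes: ", [list(dbscan_labels).count(c) for c in set(dbscan_labels)])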

    Data Preprocessing for Improving Cluster Analysis and Its Application to Short Text Data

    13301, Kō No. 4317, Doctor of Engineering, Kanazawa University doctoral thesis (full text). Published in: Journal of Software Engineering and Applications 7(8), pp. 639-654, 2015, Scientific Research Publishing Inc. Co-authors: Vu Anh Tran, Osamu Hirose, Thammakorn Saethang, Lan Anh T. Nguyen, Xuan Tho Dang, Tu Kien T. Le, Duc Luu Ngo, Gavrilov Sergey, Mamoru Kubo, Yoichi Yamada, Kenji Sato

    New methods for discovering local behaviour in mixed databases

    Clustering techniques are widely used; many applications call for automatically finding groups or hidden information in a data set. One such application is building a model of a system by integrating several local models. A local model could have many structures, but a linear structure is the most common one due to its simplicity. This work aims at improvements in several areas, all of which are applied to finding a set of local models in a database. On the one hand, a way of encoding the categorical information into numerical values has been designed, so that a numerical algorithm can be applied to the whole data set. On the other hand, a cost index has been developed, which is optimized globally to find the parameters of the local clusters that best define the output of the process. Each of the techniques has been applied to several experiments, and the results show improvements over existing techniques.
    Barceló Rico, F. (2009). New methods for discovering local behaviour in mixed databases. http://hdl.handle.net/10251/12739
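
    The thesis's own encoding scheme and cost index are not reproduced here; as a generic, hedged sketch of the underlying idea, the snippet below one-hot encodes a categorical attribute and scales the numerical ones so that a purely numerical clustering algorithm can be applied to a mixed data set.

        # Generic sketch (not the thesis's encoding scheme): turn a mixed
        # numerical/categorical table into an all-numeric matrix so that a
        # numerical clustering algorithm such as k-means can run on it.
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.preprocessing import OneHotEncoder, StandardScaler
        from sklearn.cluster import KMeans

        # Hypothetical mixed data set.
        df = pd.DataFrame({
            "temperature": [20.1, 21.4, 35.0, 34.2],
            "pressure":    [1.01, 0.99, 1.20, 1.18],
            "mode":        ["idle", "idle", "load", "load"],  # categorical attribute
        })

        encode = ColumnTransformer([
            ("num", StandardScaler(), ["temperature", "pressure"]),
            ("cat", OneHotEncoder(), ["mode"]),
        ])

        X = encode.fit_transform(df)                 # all-numeric matrix
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)                                # e.g. [0 0 1 1]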

    Optimization and Mining Methods for Effective Real-Time Embedded Systems

    The Internet of Things (IoT) is the network of interrelated devices or objects, such as self-driving cars, home appliances, smartphones and other embedded computing systems. It combines hardware, software and network connectivity, enabling data processing using powerful cloud data centers. However, the exponential rise of IoT applications has reshaped our beliefs about cloud computing, and long-lasting certainties about its capabilities have had to be updated. Classical centralized cloud computing is encountering several challenges, such as traffic latency, response time and data privacy. Thus, the processing of the data generated by interconnected IoT embedded devices has shifted towards doing more computation closer to the device, at the edge of the network. This possibility of on-device processing helps to reduce latency for critical real-time applications and allows better processing of the massive amounts of data generated by these devices.
    Succeeding in this transition towards edge computing requires designing high-performance embedded systems by efficiently exploring design alternatives (i.e. efficient Design Space Exploration), optimizing the deployment topology of multi-processor based real-time embedded systems (i.e. the way the software utilizes the hardware), and using lightweight mining techniques that enable smarter functioning of these devices. Recent research efforts on embedded systems have led to various automated approaches facilitating the design and the improvement of their functioning. However, existing methods and techniques present several major challenges, which are even more relevant for real-time embedded systems. Four of the main challenges are: (1) the lack of online data mining techniques that can enhance the functioning of embedded computing systems on the fly; (2) the inefficient usage of the computing resources of multi-processor systems when deploying software on them; (3) the pseudo-random exploration of the design space; and (4) the selection of the suitable implementation after performing the optimization process.
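
    As a small, hedged illustration of one step mentioned above, selecting among Pareto-optimal design points when objectives compete (the design points, objective names and values are hypothetical, and this is not the thesis's DSE flow), the snippet below keeps only the non-dominated configurations when both latency and energy are to be minimized.

        # Minimal sketch: Pareto filtering of explored design points.
        def pareto_front(points):
            """points: list of (name, latency, energy). Returns the non-dominated ones."""
            front = []
            for name, lat, en in points:
                dominated = any(l2 <= lat and e2 <= en and (l2 < lat or e2 < en)
                                for _, l2, e2 in points)
                if not dominated:
                    front.append((name, lat, en))
            return front

        designs = [("A", 10.0, 5.0), ("B", 8.0, 7.0), ("C", 12.0, 4.0), ("D", 9.0, 9.0)]
        print(pareto_front(designs))   # A, B and C survive; D is dominated by B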

    Towards outlier detection for high-dimensional data streams using projected outlier analysis strategy

    Outlier detection is an important research problem in data mining that aims to discover useful abnormal and irregular patterns hidden in large data sets. Most existing outlier detection methods deal only with static data of relatively low dimensionality. Recently, outlier detection for high-dimensional stream data has become a new emerging research problem. A key observation that motivates this research is that outliers in high-dimensional data are projected outliers, i.e., they are embedded in lower-dimensional subspaces. Detecting projected outliers from high-dimensional stream data is a very challenging task for several reasons. First, detecting projected outliers is difficult even for high-dimensional static data: the exhaustive search for the outlying subspaces in which projected outliers are embedded is an NP problem. Second, algorithms for handling data streams are constrained to a single pass over the streaming data, under conditions of space limitation and time criticality. The currently existing methods for outlier detection are found to be ineffective for detecting projected outliers in high-dimensional data streams. In this thesis, we present a new technique, called the Stream Project Outlier deTector (SPOT), which attempts to detect projected outliers in high-dimensional data streams. SPOT employs an innovative window-based time model for capturing dynamic statistics from stream data, and a novel data structure containing a set of top sparse subspaces to detect projected outliers effectively. SPOT also employs a multi-objective genetic algorithm as an effective search method for finding the outlying subspaces in which most projected outliers are embedded. The experimental results demonstrate that SPOT is efficient and effective in detecting projected outliers in high-dimensional data streams. The main contribution of this thesis is that it provides a backbone for tackling the challenging problem of outlier detection in high-dimensional data streams. SPOT can facilitate the discovery of useful abnormal patterns and can potentially be applied to a variety of high-demand applications, such as sensor network data monitoring and online transaction protection.
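
    As a greatly simplified, hedged illustration of the projected-outlier idea (this is not the SPOT algorithm itself), the sketch below keeps a sliding window over the stream and flags a point as a candidate projected outlier if it has few neighbours in some low-dimensional subspace; the brute-force enumeration of subspaces used here is exactly the exhaustive search that SPOT replaces with its multi-objective genetic search.

        # Simplified illustration, not SPOT: windowed subspace sparsity check.
        from collections import deque
        from itertools import combinations
        import numpy as np

        class WindowedSubspaceOutlierDetector:
            def __init__(self, dims, window=200, subspace_size=2,
                         radius=0.5, min_neighbours=3):
                self.window = deque(maxlen=window)   # sliding window over the stream
                # Brute-force subspace enumeration: feasible only for small 'dims'.
                self.subspaces = list(combinations(range(dims), subspace_size))
                self.radius = radius
                self.min_neighbours = min_neighbours

            def process(self, x):
                """Return the subspaces in which x looks sparse, then add x to the window."""
                x = np.asarray(x, dtype=float)
                sparse_in = []
                if len(self.window) >= self.min_neighbours:
                    data = np.array(self.window)
                    for subspace in self.subspaces:
                        cols = list(subspace)
                        dists = np.linalg.norm(data[:, cols] - x[cols], axis=1)
                        if np.sum(dists <= self.radius) < self.min_neighbours:
                            sparse_in.append(subspace)
                self.window.append(x)
                return sparse_in   # a non-empty list marks a candidate projected outlier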