
    Automated processing for map generalization using web services

    In map generalization, various operators are applied to the features of a map in order to maintain and improve its legibility after the scale has been changed. These operators must be applied in the proper sequence, and the quality of the results must be continuously evaluated. Cartographic constraints can be used to define the conditions that have to be met for a map to be legible and compliant with user needs. The combinatorial optimization approaches shown in this paper use cartographic constraints to control and restrict the selection and application of a variety of independent generalization operators into an optimal sequence. Different optimization techniques, including hill climbing, simulated annealing and genetic deep search, are presented and evaluated experimentally on the example of generalizing buildings in blocks. All algorithms used in this paper have been implemented in a web services framework, which allows distributed and parallel processing to be used to speed up the search for an optimized generalization operator sequence.
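    A minimal Python sketch of one of these techniques, simulated annealing over operator sequences, follows. The operator names and the surrogate constraint scorer are assumptions for illustration, not the paper's actual operators or constraint evaluation.

```python
import math
import random

# Hypothetical operator names and a toy constraint scorer stand in for the
# paper's generalization operators and cartographic constraint evaluation.
OPERATORS = ["simplify", "enlarge", "displace", "typify", "eliminate"]

def constraint_violation(sequence):
    """Cost of a candidate operator sequence (lower is better). A real
    implementation would apply the operators to the building block and
    measure the remaining constraint violations."""
    rng = random.Random(hash(tuple(sequence)))  # deterministic toy surrogate
    return rng.random() * len(sequence)

def simulated_annealing(seq_len=4, temp=1.0, cooling=0.95, steps=500):
    current = [random.choice(OPERATORS) for _ in range(seq_len)]
    cost = constraint_violation(current)
    best, best_cost = current[:], cost
    for _ in range(steps):
        neighbor = current[:]
        neighbor[random.randrange(seq_len)] = random.choice(OPERATORS)
        n_cost = constraint_violation(neighbor)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if n_cost < cost or random.random() < math.exp((cost - n_cost) / temp):
            current, cost = neighbor, n_cost
            if cost < best_cost:
                best, best_cost = current[:], cost
        temp *= cooling
    return best, best_cost

print(simulated_annealing())
```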

    A genetic algorithm for tributary selection with consideration of multiple factors

    Drainage systems are important components in cartography and Geographic Information Systems (GIS), and they exhibit different drainage patterns depending on the form and texture of their networks of stream channels and tributaries, which reflect local topography and subsurface geology. The drainage pattern can, to a certain extent, reflect the geographical characteristics of a river network. To preserve the drainage pattern during generalization, this article proposes a solution that accounts for multiple factors in river tributary selection, such as tributary length and stream order. This leads to a multi-objective optimization problem, solved with a Genetic Algorithm. In the multi-objective model, different weights are used to aggregate all objective functions into a single fitness function. The method is applied to a case study to evaluate the importance of each factor for different types of drainage, and the results are compared with a manually generalized network. The result can be controlled by assigning different weights to the factors, and from this work, different weight settings according to drainage pattern are proposed for river network generalization.
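    The weighted-sum aggregation can be illustrated with a short, mutation-only evolutionary loop. The two factors (tributary length and stream order), the weights and the retention count below are illustrative assumptions, not values from the article.

```python
import random

# Toy tributaries as (length_km, stream_order) pairs; the factors, weights
# and retention count are illustrative assumptions.
random.seed(1)
TRIBUTARIES = [(random.uniform(0.5, 20.0), random.randint(1, 4)) for _ in range(30)]
WEIGHTS = {"length": 0.6, "order": 0.4}
KEEP = 12  # number of tributaries to retain after generalization

def fitness(selection):
    """Weighted-sum aggregation of the per-factor objectives into a single
    fitness value, as in the multi-objective model described above."""
    chosen = [t for t, kept in zip(TRIBUTARIES, selection) if kept]
    if len(chosen) != KEEP:
        return float("-inf")  # infeasible: wrong number of tributaries retained
    return (WEIGHTS["length"] * sum(t[0] for t in chosen)
            + WEIGHTS["order"] * sum(t[1] for t in chosen))

def random_selection():
    sel = [False] * len(TRIBUTARIES)
    for i in random.sample(range(len(TRIBUTARIES)), KEEP):
        sel[i] = True
    return sel

def mutate(sel):
    # Swap one kept tributary for one dropped tributary, preserving KEEP.
    kept = [i for i, k in enumerate(sel) if k]
    dropped = [i for i, k in enumerate(sel) if not k]
    child = sel[:]
    child[random.choice(kept)] = False
    child[random.choice(dropped)] = True
    return child

def evolve(pop_size=40, generations=100):
    pop = [random_selection() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

print("best fitness:", fitness(evolve()))
```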

    A multi-agent system for on-the-fly web map generation and spatial conflict resolution

    The Internet has become a key medium for obtaining and disseminating geospatial information, offering more and more web mapping services accessible to thousands of users worldwide. However, the quality of these services needs to be improved, especially in terms of personalization. To increase map flexibility, it is important that the generated map corresponds as closely as possible to the user's needs, preferences and context. This can be achieved by applying suitable transformations, in real time, to the spatial objects at each map generation cycle. An underlying challenge of such on-the-fly map generation is resolving the spatial conflicts that appear between objects, largely because of the limited space on display screens. In this dissertation, we propose a multiagent-based approach to on-the-fly web map generation and spatial conflict resolution. The approach builds on multiple representation and cartographic generalization, and it resolves conflicts and generates the requested map through an innovative strategy: progressive map generation by layers of interest. A layer of interest contains all the objects that have the same degree of importance to the user; its content, which depends on the user's needs and the map's context of use, is determined on the fly at the start of the generation process. Our multiagent-based approach generates and transfers the required map in parallel: as soon as a layer of interest is generated, it is transmitted to the user. To resolve spatial conflicts and generate the requested map, we assign a software agent to every spatial object; the agents then compete for the available space. This competition is driven by a set of priorities corresponding to the importance of the objects to the user. During conflict resolution, agents take the user's needs and preferences into account to improve the personalization of the map: they improve the legibility of important objects and use symbols that help the user better understand the geographic space. Since users can stop the map generation process as soon as the data already transferred meets their needs, their waiting time is reduced because they need not wait for the rest of the map to be generated. To illustrate our approach, we apply it to the contexts of web and mobile tourist mapping.
    In these contexts, we categorize our data, which cover Quebec City, into four layers of interest: explicitly required objects, landmark objects, the road network, and ordinary objects that have no specific importance for the user. Our multiagent system aims at solving the following problems related to on-the-fly web mapping applications: 1. How can we adapt the contents of maps to users' needs on the fly? 2. How can we resolve spatial conflicts so as to improve the legibility of maps while taking users' needs into account? 3. How can we speed up data generation and transfer to users? The main contributions of this thesis are: 1. The resolution of spatial conflicts using multiagent systems, cartographic generalization and multiple representation. 2. The on-the-fly generation of web and mobile maps using multiagent systems, cartographic generalization and multiple representation. 3. The real-time adaptation of map contents to users' needs at the source (during the first generation of the map). 4. A new modelling of geographic space based upon a multi-layer multiagent system architecture. 5. A progressive map generation approach by layers of interest. 6. The parallel generation and transfer of web and mobile maps to users.
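    A minimal sketch of the priority-driven competition for display space follows. The layer priorities mirror the four layers of interest above, while the grid-cell model of screen space and the drop-on-conflict rule are simplifying assumptions; the thesis's agents would instead displace or re-symbolize objects.

```python
from dataclasses import dataclass

# Assumed priorities for the four layers of interest named above.
PRIORITY = {"required": 3, "landmark": 2, "road": 1, "ordinary": 0}

@dataclass
class MapAgent:
    name: str
    layer: str
    cells: set  # screen cells the object's symbol would occupy

def resolve_conflicts(agents):
    """Agents claim screen cells in priority order. Dropping the loser is a
    crude stand-in for the displacement, scaling and re-symbolization the
    thesis's agents would negotiate."""
    occupied, placed = set(), []
    for agent in sorted(agents, key=lambda a: PRIORITY[a.layer], reverse=True):
        if agent.cells & occupied:
            continue  # spatial conflict: lower-priority object gives way
        occupied |= agent.cells
        placed.append(agent.name)
    return placed

agents = [
    MapAgent("museum", "required", {(1, 1), (1, 2)}),
    MapAgent("church", "landmark", {(1, 2), (2, 2)}),  # overlaps the museum
    MapAgent("main-street", "road", {(0, 0), (0, 1)}),
]
print(resolve_conflicts(agents))  # ['museum', 'main-street']
```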

    Stepwise Evolutionary Learning using Deep Learned Guidance Functions

    This paper explores how Learned Guidance Functions (LGFs), a pre-training method used to smooth search landscapes, can be used as a fitness function for evolutionary algorithms. A new form of LGF is introduced, based on deep neural network learning, and it is shown how this can be used as a fitness function. This is applied to a test problem: unscrambling the Rubik's Cube. Comparisons are made with a previous LGF approach based on random forests, and with a baseline approach based on traditional error-based fitness.
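    The core idea can be sketched in a few lines: the search maximizes the model's score instead of a hand-written error measure. The ToyLGF class below is a hand-coded stand-in for the trained deep network, and the 12-sticker encoding and swap moves are toy assumptions.

```python
# A hand-coded stand-in for the learned network: scores a state by matching
# stickers against the solved state. A real LGF is a trained deep net.
class ToyLGF:
    SOLVED = "WWWWGGGGRRRR"  # hypothetical 12-sticker mini-cube encoding

    def predict(self, state):
        return sum(a == b for a, b in zip(state, self.SOLVED))

def neighbors(state):
    # Toy move set: swap adjacent stickers (real cube moves are fixed
    # permutations of many stickers at once).
    out = []
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        out.append("".join(s))
    return out

def hillclimb(model, state, steps=100):
    """Greedy ascent in which the learned function replaces error-based fitness."""
    for _ in range(steps):
        best = max(neighbors(state), key=model.predict)
        if model.predict(best) <= model.predict(state):
            break  # local optimum of the learned landscape
        state = best
    return state

print(hillclimb(ToyLGF(), "GWRWGWGRWRGR"))
```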

    Integration of generalization algorithms and geometric patterns to create self-generalizing objects (SGO) for improving on-the-fly map generalization

    With the technological development of recent years, geospatial data have become increasingly accessible to the general public, and applications such as web mapping and SOLAP that give access to these data have appeared. However, these applications remain cartographically limited: they do not allow flexible personalization of the maps users request. To generate products better adapted to users' needs, visualization tools must, among other things, be able to produce data at arbitrary scales chosen by the user. One solution is automatic cartographic generalization, which derives data at different scales from a single large-scale database; given the interactive nature of these applications, this generalization must be performed on the fly. Automatic generalization has been an important research topic for more than three decades, yet despite considerable recent advances, existing generalization methods cannot alone guarantee exhaustive results and the response time that efficient on-the-fly generalization requires. Because it is currently impossible to create maps at arbitrary scales on the fly from a single large-scale map, generalization results (i.e. the smaller-scale maps) are instead stored in a multiple-representation (MR) database for later use. Besides its lack of flexibility (the scales are predefined), multiple representation also introduces redundancy, since several representations of each object are stored in the same database, which can prevent users from obtaining data at the level of abstraction that exactly matches their needs. To improve on-the-fly generalization, this thesis proposes an approach based on a new concept called the SGO (Self-Generalizing Object). An SGO encapsulates geometric patterns (generic geometric forms common to several map features), generalization algorithms and spatial integrity constraints in a single cartographic object. SGOs rely on a database enrichment process that embeds the cartographer's knowledge in the cartographic data rather than generating it with algorithms, as is typically done. An SGO is created for each individual object (e.g. a building) or group of objects (e.g. aligned buildings), and each SGO is then transformed into a software agent (SGO agent) to give it autonomy; these agents have behaviours that let them self-generalize, i.e. decide how to generalize the object they represent when the level of abstraction changes (e.g. on a scale change). As a proof of concept, two prototypes based on Open Source technologies were developed in this thesis: the first supports the creation of SGOs and the enrichment of the database; the second, based on multi-agent technology, uses the created SGOs to generate data at arbitrary scales through an on-the-fly generalization process. Real data for Quebec City at scale 1:1000 were used to test the prototypes.
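    A minimal sketch of what an SGO might look like as a class follows; the field names, the rectangle pattern and the legibility threshold are illustrative assumptions rather than the thesis's actual design.

```python
from dataclasses import dataclass

@dataclass
class SelfGeneralizingObject:
    """One cartographic object bundling a geometric pattern, a generalization
    behaviour and an integrity constraint. Field names, the rectangle pattern
    and the 0.16 mm^2 threshold are illustrative assumptions."""
    feature_id: str
    pattern: str       # e.g. "rectangle", an encapsulated geometric pattern
    vertices: list     # [(x, y), ...] ground coordinates in metres
    min_area_mm2: float = 0.16  # assumed minimum legible symbol area

    def area_m2(self):
        # Shoelace formula over the footprint polygon.
        p, n = self.vertices, len(self.vertices)
        s = sum(p[i][0] * p[(i + 1) % n][1] - p[(i + 1) % n][0] * p[i][1]
                for i in range(n))
        return abs(s) / 2

    def generalize(self, scale_denominator):
        """Self-generalization on a scale change: the object itself decides
        whether to keep, enlarge or eliminate its geometry."""
        map_area_mm2 = self.area_m2() * (1000 / scale_denominator) ** 2
        if map_area_mm2 >= self.min_area_mm2:
            return "keep"
        if map_area_mm2 >= self.min_area_mm2 / 4:
            return "enlarge to minimum legible size"
        return "eliminate"

house = SelfGeneralizingObject("b42", "rectangle", [(0, 0), (10, 0), (10, 8), (0, 8)])
print(house.generalize(50_000))  # 80 m^2 covers only 0.032 mm^2 at 1:50 000
```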

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and the high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using the standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis establishes a one-to-one mapping between high-level conceptualisations derived from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies its proposed ideas by implementing an ontology describing the land use 'residential area' in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts, as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes. Despite current technological limitations, ontologies are a promising way forward in the manner we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
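    The inference step can be made concrete with a toy analogue: a class definition along the lines of "a residential area is an urban block containing mostly dwellings", re-expressed as a plain Python rule over feature records. All names and the threshold are assumptions; the thesis uses an OWL reasoner over a Protégé-built ontology, not hand-written rules.

```python
# Feature records standing in for topographic data; the building-use labels,
# block structure and 50% threshold are illustrative assumptions.
BLOCKS = {
    "block1": ["dwelling", "dwelling", "dwelling", "shop"],
    "block2": ["factory", "warehouse", "dwelling"],
}

def is_residential(block_id, threshold=0.5):
    """Toy analogue of 'ResidentialArea == UrbanBlock containing mostly
    Dwellings': classify a block by its proportion of dwellings."""
    uses = BLOCKS[block_id]
    return sum(u == "dwelling" for u in uses) / len(uses) > threshold

for block in BLOCKS:
    print(block, "->", "residential" if is_residential(block) else "non-residential")
```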

    Solving the Rubik’s Cube with Stepwise Deep Learning

    This paper explores a novel technique for learning the fitness function for search algorithms such as evolutionary strategies and hillclimbing. The aim of the new technique is to learn a fitness function (called a Learned Guidance Function, LGF) from a set of sample solutions to the problem. These functions are learned using a supervised learning approach based on deep neural network learning, that is, neural networks with a number of hidden layers. This is applied to a test problem: unscrambling the Rubik's Cube using evolutionary and hillclimbing algorithms. Comparisons are made with a previous LGF approach based on random forests, with a baseline approach based on traditional error-based fitness, and with other approaches in the literature. This demonstrates how a fitness function can be learned from existing solutions, rather than being provided by the user, increasing the autonomy of AI search processes.
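    The training side can be sketched briefly: scramble backwards from the solved state to generate (state, distance) pairs, then fit a multi-layer network to predict distance to solved. The sticker encoding and swap moves are toy assumptions, and scikit-learn's MLPRegressor stands in for the paper's deep network.

```python
import random
from sklearn.neural_network import MLPRegressor  # stands in for the deep net

SOLVED = [0] * 4 + [1] * 4 + [2] * 4  # toy 12-sticker state; an assumption

def random_walk(steps):
    """Scramble backwards from the solved state, labelling each state with
    its (approximate) distance from solved: the supervision signal."""
    state, pairs = SOLVED[:], [(SOLVED[:], 0)]
    for depth in range(1, steps + 1):
        i = random.randrange(len(state) - 1)
        state[i], state[i + 1] = state[i + 1], state[i]
        pairs.append((state[:], depth))
    return pairs

data = [pair for _ in range(500) for pair in random_walk(8)]
X = [state for state, _ in data]
y = [depth for _, depth in data]

lgf = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
lgf.fit(X, y)  # low predicted distance = close to solved = high fitness

scrambled = random_walk(6)[-1][0]
print("predicted distance to solved:", lgf.predict([scrambled])[0])
```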

    LIPIcs, Volume 277, GIScience 2023, Complete Volume

    LIPIcs, Volume 277, GIScience 2023, Complete Volume.

    Automatic and adaptive preprocessing for the development of predictive models.

    In recent years, there has been increasing interest in extracting valuable information from large amounts of data. This information can be useful for making predictions about the future or inferring unknown values. A multitude of predictive models exists for the most common tasks of classification and regression. However, researchers often assume that data is clean, and far too little attention has been paid to data preprocessing. Although there are a number of methods for accomplishing individual preprocessing tasks (e.g. outlier detection or feature selection), comprehensive data preparation and cleaning can take between 60% and 80% of the whole data mining process time. One of the goals of this research is to speed up this process and make it more efficient. To this end, an approach for automating the selection and optimisation of multiple preprocessing methods and predictors has been proposed. The combination of multiple data mining methods forming a workflow is known as a Multi-Component Predictive System (MCPS). There are multiple software platforms, such as Weka and RapidMiner, for creating and running MCPSs, including a large variety of preprocessing methods and predictors. There is, however, no common mathematical representation of MCPSs. An objective of this thesis is to establish such a common representation framework, which allows workflows to be validated before the implementation phase begins on any particular platform. The validation of workflows becomes even more relevant when considering the automatic generation of MCPSs. In order to automate the composition and optimisation of MCPSs, a search space is defined consisting of a number of preprocessing methods, predictive models and their hyperparameters. The space is then explored using a Bayesian optimisation strategy within a given time or computational budget, returning a parametrised sequence of methods which, after training, forms a complete predictive system. The whole process is data-driven and requires no human intervention once started. The generated predictive system can then be used to make predictions in an online scenario. However, the nature of the input data may change over time, so predictive models may need to be updated to capture the new characteristics of the data and limit the loss of predictive performance; preprocessing methods may have to be adapted as well. A novel hybrid strategy combining Bayesian optimisation and common adaptive techniques is proposed to automatically adapt MCPSs; this approach performs a global adaptation of the MCPS. In some situations, however, it could be costly to update the whole predictive system when only a small adjustment is needed, yet the consequences of adapting a single component can be significant. This thesis therefore also analyses the impact of adapting individual components in an MCPS and proposes an approach to propagate changes through the system. The thesis was initiated by a joint research project with a chemical production company, which provided several datasets exhibiting raw data issues common in the process industry. The final part of the thesis evaluates the feasibility of applying these automatic techniques to building and maintaining predictive models for real chemical production processes.
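    The composition-and-optimisation idea can be sketched with scikit-learn pipelines, a close practical analogue of an MCPS. Plain random search over a tiny configuration space stands in here for the Bayesian optimisation strategy, and the dataset and search space are illustrative.

```python
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Illustrative data and configuration space for the workflow search.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

SPACE = {
    "scaler": [StandardScaler(), MinMaxScaler()],
    "k": [5, 10, 20],  # number of features kept by the selection step
    "clf": [RandomForestClassifier(n_estimators=50),
            LogisticRegression(max_iter=1000)],
}

best_score, best_pipe = -1.0, None
for _ in range(10):  # the "computational budget"
    pipe = Pipeline([
        ("scale", random.choice(SPACE["scaler"])),        # preprocessing step
        ("select", SelectKBest(k=random.choice(SPACE["k"]))),
        ("predict", random.choice(SPACE["clf"])),          # final predictor
    ])
    score = cross_val_score(pipe, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_pipe = score, pipe

print(f"best CV accuracy: {best_score:.3f}")
print(best_pipe)
```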

    Automating Geospatial RDF Dataset Integration and Enrichment

    Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geospatial information have been made available with the growth of the heterogeneous Web of Data. The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance as the number of independent data providers grows. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types already present on the Linked Data Web, with a focus on geospatial data. The first challenge that motivates our work is the lack of measures that use geographic data for linking geospatial knowledge bases. This is partly because geospatial resources are described by means of vector geometry; in particular, discrepancies in granularity and error measurements across knowledge bases make the selection of appropriate distance measures for geospatial resources difficult. We address this challenge by surveying the literature for point-set measures that can be used to compare vector geometries, and we present and evaluate the ten measures derived from this survey on samples of three real knowledge bases. The second challenge we address is the lack of automatic Link Discovery (LD) approaches capable of dealing with geospatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that discovers links between knowledge bases while improving the quality of their instance data. A Colibri iteration begins by generating links between knowledge bases; the approach then uses these links to detect resources with probably erroneous or missing information, which is finally corrected or added. The third challenge we address is the lack of scalable LD approaches for tackling big geospatial knowledge bases. We therefore present Deterministic Particle-Swarm Optimization (DPSO), a novel load-balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geospatial linking and evaluate it on real and artificial data sets. The fourth challenge is the lack of approaches for automatically updating the links of an evolving knowledge base. This challenge is addressed by the Wombat algorithm, a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets. The last challenge addressed herein is the lack of automatic approaches for geospatial knowledge base enrichment. We propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how exemplary descriptions of enriched resources can be used to generate accurate enrichment pipelines, and we evaluate our approach against manually defined enrichment pipelines, showing that it can learn accurate pipelines even from a small number of training examples. Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets, and all approaches have been published in peer-reviewed conference or journal papers. Throughout this thesis, we detail the ideas, implementation and evaluation of each approach and present lessons learned. Finally, we conclude by presenting possible future extensions and use cases for each of the proposed approaches.
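    One of the point-set measures evaluated for the first challenge can be made concrete: the discrete Hausdorff distance between two geometries' vertex sets. The coordinates below are toy values.

```python
import math

def hausdorff(a, b):
    """Discrete Hausdorff distance between two point sets: the larger of the
    two directed distances (max over one set of the min distance to the other)."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

poly1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
poly2 = [(0.1, 0.0), (1.2, 0.1), (1.0, 1.1), (0.0, 0.9)]
print(f"Hausdorff distance: {hausdorff(poly1, poly2):.3f}")
```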