525 research outputs found

    A GeoNode-based platform for an effective exploitation of advanced DInSAR measurements

    This work presents the development of an efficient tool for managing, visualizing, analysing, and integrating with other data sources the deformation time-series obtained by applying advanced differential interferometric synthetic aperture radar (DInSAR) techniques. To implement such a tool we extend the functionality of GeoNode, a web-based platform providing an open-source framework, based on Open Geospatial Consortium (OGC) standards, for developing Geospatial Information Systems (GIS) and Spatial Data Infrastructures (SDI). In particular, our efforts have been dedicated to enabling the GeoNode platform to effectively analyse and visualize the spatio-temporal characteristics of the DInSAR deformation time-series and their related products. Moreover, the newly implemented multi-threaded functionalities allow us to efficiently upload large volumes of the available DInSAR results into a dedicated geodatabase and to keep them up to date. The examples we present, based on Sentinel-1 DInSAR results over Italy, demonstrate the effectiveness of the extended version of the GeoNode platform.
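
    The multi-threaded ingestion of large DInSAR result volumes into a geodatabase can be pictured with a short sketch. The snippet below is only an illustration under assumptions, not the authors' implementation: the table layout, column names, and connection string are hypothetical, and PostGIS is used as a stand-in for the dedicated geodatabase.

```python
# Minimal sketch (not the authors' implementation): parallel ingest of DInSAR
# time-series points into a PostGIS geodatabase, in the spirit of the
# multi-threaded upload described above. Table/column names and the connection
# string are hypothetical.
from concurrent.futures import ThreadPoolExecutor

import psycopg2
from psycopg2.extras import execute_values

DSN = "dbname=dinsar user=geonode password=secret host=localhost"  # assumed

def upload_chunk(rows):
    """Insert one chunk of (point_id, lon, lat, epoch, displacement_mm) rows."""
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO dinsar_timeseries (point_id, geom, epoch, displacement_mm) VALUES %s",
            rows,
            template="(%s, ST_SetSRID(ST_MakePoint(%s, %s), 4326), %s, %s)",
        )

def parallel_upload(all_rows, chunk_size=50_000, workers=4):
    """Split the rows into chunks and upload them concurrently."""
    chunks = [all_rows[i:i + chunk_size] for i in range(0, len(all_rows), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(upload_chunk, chunks))
```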

    Analysis of social trends based on artificial intelligence techniques

    In order to analyse and extract information about social trends in the Spanish and Portuguese environment from an objective point of view, the implementation of this project was requested. The project consists of extracting information from varied internet sources using the web scraping technique, and then applying artificial intelligence techniques to process the collected texts and extract keywords. Finally, two different ways of presenting the results have been created in order to draw as many insights as possible from them.
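
    As a rough illustration of the scrape-then-extract pipeline described above, the snippet below fetches pages and ranks keywords with TF-IDF. The source URLs and the choice of TF-IDF are assumptions, since the abstract does not name the specific sources or AI techniques used in the project.

```python
# Illustrative sketch only: scrape article text and extract keywords with TF-IDF.
import requests
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import TfidfVectorizer

URLS = ["https://example.com/news1", "https://example.com/news2"]  # placeholder sources

def fetch_text(url):
    """Download a page and keep the visible paragraph text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

texts = [fetch_text(u) for u in URLS]

# Rank terms per document by TF-IDF weight and keep the top 10 as "keywords".
vectorizer = TfidfVectorizer(max_features=5000)
tfidf = vectorizer.fit_transform(texts)
terms = vectorizer.get_feature_names_out()
for doc_idx, url in enumerate(URLS):
    weights = tfidf[doc_idx].toarray().ravel()
    top = weights.argsort()[::-1][:10]
    print(url, [terms[i] for i in top if weights[i] > 0])
```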

    Aplicações de IoT no contexto de uma cidade inteligente (IoT applications in the context of a smart city)

    Over the last few years, Smart City solutions have matured rapidly alongside IoT and cloud computing. These technologies make it easier to create services and build applications devoted to improving citizens' quality of life, and they offer ways for businesses to implement their solutions. Through rapid advances in sensor quality, new methods have emerged that combine different types of devices to create a better picture of the environment. The purpose of this dissertation is to provide useful information, through public services, that can be accessed by people visiting or residing in the beach area of Costa Nova and Barra. It also provides a solution to the traffic classification problem that projects based on radar data tend to face. These applications take advantage of the devices deployed in the PASMO project, such as parking sensors, radars, and CCTV cameras. By making the service public, businesses have the opportunity to build applications on top of it, using the sensor data without being directly connected to the data storage. The example developed in this dissertation offers a dashboard experience where users can navigate through charts that provide a variety of data and real-time maps. It also provides a public API that researchers and businesses can use to develop new applications in the context of PASMO. The other area tackled in this document is traffic classification. Although the data provided is reliable for the most part, one big issue is the accuracy of the vehicle classification provided by the radar. Still, this device offers precise values when it comes to detection, while the cameras do a good job of classifying traffic. The goal is to combine these two devices to present more precise information, using state-of-the-art object detection algorithms and sensor fusion methods. In the end, the system enriches the PASMO project by making its data easily available to the public while correcting the accuracy problems of some devices.
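
    A minimal sketch of the radar/camera fusion idea follows. It pairs each radar detection with the temporally closest camera detection and adopts the camera's vehicle class while keeping the radar's speed, falling back to the radar label otherwise; the data structures, field names, and matching tolerance are assumptions, not the dissertation's actual method.

```python
# Hedged sketch: keep the radar's reliable detections (timestamps, speeds) and
# replace its weak vehicle-class labels with the class predicted from the
# co-located camera. All field names and the tolerance are assumptions.
from dataclasses import dataclass

@dataclass
class RadarDetection:
    timestamp: float   # seconds since epoch
    speed_kmh: float
    radar_class: str   # often unreliable

@dataclass
class CameraDetection:
    timestamp: float
    vehicle_class: str  # from an object-detection model, e.g. "car", "truck"

def fuse(radar, camera, max_dt=1.0):
    """Pair each radar detection with the nearest camera detection in time."""
    fused = []
    for r in radar:
        best = min(camera, key=lambda c: abs(c.timestamp - r.timestamp), default=None)
        if best is not None and abs(best.timestamp - r.timestamp) <= max_dt:
            label = best.vehicle_class          # trust the camera's class
        else:
            label = r.radar_class               # fall back to the radar label
        fused.append({"timestamp": r.timestamp,
                      "speed_kmh": r.speed_kmh,
                      "vehicle_class": label})
    return fused
```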

    OCTOPUS database (v.2)

    OCTOPUS v.2 is an Open Geospatial Consortium (OGC) compliant web-enabled database that allows users to visualise, query, and download cosmogenic radionuclide, luminescence, and radiocarbon ages and denudation rates associated with erosional landscapes, Quaternary depositional landforms, and archaeological records, along with ancillary geospatial (vector and raster) data layers. The database follows the FAIR (Findability, Accessibility, Interoperability, and Reuse) data principles and is based on open-source software deployed on the Google Cloud Platform. Data stored in the database can be accessed via a custom-built web interface and via desktop geographic information system (GIS) applications that support OGC data access protocols. OCTOPUS v.2 hosts five major data collections. CRN Denudation and ExpAge consist of published cosmogenic 10Be and 26Al measurements in modern fluvial sediment and glacial samples respectively. Both collections have a global extent; however, in addition to geospatial vector layers, CRN Denudation also incorporates raster layers, including a digital elevation model, gradient raster, flow direction and flow accumulation rasters, atmospheric pressure raster, and CRN production scaling and topographic shielding factor rasters. SahulSed consists of published optically stimulated luminescence (OSL) and thermoluminescence (TL) ages for fluvial, aeolian, and lacustrine sedimentary records across the Australian mainland and Tasmania. SahulArch consists of published OSL, TL, and radiocarbon ages for archaeological records, and FosSahul consists of published late-Quaternary records of direct and indirect non-human vertebrate (mega)fauna fossil ages that have been systematically quality rated. Supporting data are comprehensive and include bibliographic, contextual, and sample-preparation- and measurement-related information. In the case of cosmogenic radionuclide data, OCTOPUS also includes all necessary information and input files for the recalculation of denudation rates using the open-source program CAIRN. OCTOPUS v.2 and its associated data curation framework allow for valuable legacy data to be harnessed that would otherwise be lost to the research community. The database can be accessed at https://octopusdata.org (last access: 1 July 2022). The individual data collections can also be accessed via their respective digital object identifiers (DOIs) (see Table 1)
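
    Because OCTOPUS exposes OGC data access protocols, its collections can in principle be pulled into scripts as well as desktop GIS. The sketch below uses OWSLib to query a WFS endpoint; the endpoint URL, layer name, and output format are placeholders, so the actual service details should be taken from the OCTOPUS documentation.

```python
# Hedged sketch: fetch a subset of an OGC-compliant collection via WFS with OWSLib.
from owslib.wfs import WebFeatureService

WFS_URL = "https://octopusdata.org/geoserver/wfs"  # assumed endpoint

wfs = WebFeatureService(url=WFS_URL, version="1.1.0")
print(list(wfs.contents))  # list the layers (collections) the service advertises

# Download features for a hypothetical layer within a lon/lat bounding box as GeoJSON.
response = wfs.getfeature(
    typename=["octopus:crn_denudation"],   # placeholder layer name
    bbox=(140.0, -38.0, 150.0, -33.0),
    outputFormat="application/json",
)
with open("crn_denudation_subset.geojson", "wb") as fh:
    fh.write(response.read())
```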

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but the data is often not easily found when needed, and sometimes it is collected or purchased multiple times. In short, the best government data is not always organized and managed efficiently enough to support decision making in a timely and cost-effective manner. National mapping agencies, and the various departments and authorities responsible for collecting different types of geospatial data, cannot for long continue to operate as they did a few years ago, as if living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowd-sourced data collection, openly available remotely sensed data, multi-source information vital to decision-making, and new Web-accessible services that are offered, sometimes at no cost. Many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions, gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human-resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, we need to choose the right tools to reach the above-mentioned goals. As we know, data collection is the most costly part of mapping and of establishing a Geographic Information System. This is not only because of the cost of the data collection task itself, but also because of the damage caused by delay and by the time it takes to deliver the information needed for decision making from the field to the user's hands. In fact, the time a project consumes for the collection, processing, and presentation of geospatial information has an even greater effect on the cost of larger projects such as disaster management, construction, city planning, environmental monitoring, and so on, assuming, of course, that all the necessary information from existing sources is delivered to the user's computer. The best description of a good GIS project optimization or improvement is a methodology that reduces time and cost and increases data and service quality (meaning accuracy, up-to-dateness, completeness, consistency, suitability, information content, integrity, and integration capability, as well as fitness for use and the user's specific needs and conditions, which must be addressed with special attention). Each of these issues must be addressed individually, and at the same time the whole solution must be provided in a global manner considering all the criteria. In this thesis we first discuss the problem we are facing and what needs to be done to establish a National Spatial Data Infrastructure (NSDI), including its definition and related components. We then look for available open source software solutions covering the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free, or commercial and proprietary. It is important to note that distinguishing among software packages requires a clear specification for this categorization.
    It is difficult to determine, from a legal point of view, which class a given software package belongs to, which makes it necessary to clarify what is meant by the various terms. With reference to this concept there are two global distinctions, and inside each group we distinguish a further classification according to the functionalities and applications the packages are made for in GIScience. Building on the outcome of the second chapter, which is the technical process for selecting suitable and reliable software according to the characteristics of the users' needs and the required components, we move to the next chapter. In Chapter 3, we elaborate on the details of the GeoNode software as our best candidate tool to take on the responsibilities stated before. In Chapter 4, we discuss the globally available open source data against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage) according to the metadata statements inside the datasets, by means of bibliographic review, technical documentation, and web search engines. In Chapter 5 we discuss further data quality concepts and consequently define a set of protocols for evaluating all datasets according to the tasks that a mapping organization, in general, must carry out for potential users in different disciplines such as reconnaissance, city planning, topographic mapping, transportation, environmental control, disaster management, and so on. In Chapter 6, all the data quality assessments and protocols are applied to the pre-filtered, proposed datasets. In the final scores and ranking, each dataset receives a value corresponding to its quality according to the rules defined in the previous chapter. In the last step, a vector of weights is derived from questions answered by the user with reference to the project at hand, in order to finalize the most appropriate selection of free and open source data. This data quality preference is defined by identifying a weight vector, which is then applied to the quality matrix to obtain the final quality scores and ranking; a sketch of this weighted scoring follows below. At the end of this chapter there is a section presenting the use of the datasets in various projects, such as the "Early Impact Analysis" and the "Extreme Rainfall Detection System (ERDS), version 2" carried out by ITHACA. Finally, in the conclusion, the important criteria as well as future trends in GIS software are discussed, and recommendations are presented.
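
    To make the weight-vector-times-quality-matrix step concrete, here is a minimal sketch with made-up criteria, scores, and weights; the thesis's actual criteria and evaluation values are not reproduced here.

```python
# Weighted scoring sketch: quality matrix (datasets x criteria) times a weight
# vector gives one score per dataset, which is then used for ranking.
import numpy as np

criteria = ["accuracy", "up_to_dateness", "completeness", "coverage", "licensing"]
datasets = ["OpenStreetMap", "SRTM DEM", "Sentinel-2 mosaic"]  # illustrative entries

# Quality matrix: one row per dataset, scores in [0, 1] per criterion (assumed values).
Q = np.array([
    [0.8, 0.9, 0.7, 1.0, 1.0],
    [0.7, 0.4, 1.0, 1.0, 1.0],
    [0.9, 0.8, 0.9, 0.9, 0.8],
])

# Weight vector derived from the user's answers for the project at hand (assumed),
# normalised so the weights sum to 1.
w = np.array([0.35, 0.15, 0.20, 0.20, 0.10])
w = w / w.sum()

scores = Q @ w
for name, score in sorted(zip(datasets, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```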

    Physical database design in document stores

    Thesis under joint supervision (cotutelle) between Universitat Politècnica de Catalunya and Université libre de Bruxelles. NoSQL is an umbrella term used to classify alternative storage systems to the traditional Relational Database Management Systems (RDBMSs). Among these, document stores have gained popularity mainly due to their semi-structured data storage model and rich query capabilities. They encourage users to take a data-first approach as opposed to a design-first one. Database design on document stores is mainly carried out in a trial-and-error or ad-hoc rule-based manner instead of through a formal process such as normalization in an RDBMS. However, these approaches can easily lead to a non-optimal design, resulting in additional costs in the long run. This PhD thesis aims to provide a novel multi-criteria approach to database design in document stores. Most existing approaches are based on optimizing query performance; however, other factors include the storage requirement and the complexity of the stored documents, which are specific to each use case. There is a large solution space of alternative designs due to the different combinations of referencing and nesting of data, so we believe multi-criteria optimization is ideal for solving this problem. To achieve this, we need to address several issues that will enable us to apply multi-criteria optimization to the data design problem. First, we evaluate the impact of alternative storage representations of semi-structured data. There are multiple, equivalent ways to physically represent semi-structured data, but there is a lack of evidence about the potential impact on space and query performance. Thus, we embark on the task of quantifying it precisely for document stores. We empirically compare multiple ways of representing semi-structured data, allowing us to derive a set of guidelines for efficient physical database design considering both JSON and relational options in the same palette. Then, we need a formal canonical model that can represent alternative designs. We propose a hypergraph-based approach for representing heterogeneous datastore designs: we extend and formalize an existing common programming interface to NoSQL systems as hypergraphs, and we define design constraints and query transformation rules for representative data store types. Next, we propose a simple query rewriting algorithm and provide a prototype implementation together with a storage statistics estimator. We then require a formal query cost model to estimate and evaluate query performance on alternative document store designs. Document stores use primitive approaches to query processing, such as relying on the end user to specify the usage of indexes instead of a formal cost model, but we require a reliable approach to compare how alternative designs perform on a specific query. For this, we define a generic storage and query cost model based on disk access and memory allocation. As document stores carry out data operations in memory, we first estimate memory usage by considering the characteristics of the stored documents, their access patterns, and the memory management algorithms. Then, using this estimation and the metadata storage size, we introduce a cost model for random-access queries. We validate our work on two well-known document store implementations, MongoDB and Couchbase. The results show that the memory usage estimates have an average precision of 91% and that the predicted costs are highly correlated with the actual execution times.
    During this work, we also managed to suggest several improvements to document stores. Finally, we implement the automated database design solution using multi-criteria optimization. We introduce an algebra of transformations that can systematically modify a design in our canonical representation. Then, using these transformations, we implement a local search algorithm driven by a loss function that can propose near-optimal designs with high probability. We compare our prototype against an existing document store data design solution; our proposed designs have better performance and are more compact, with less redundancy.
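
    The loss-driven local search over the embed-versus-reference design space can be pictured with a toy sketch. The transformation set (flipping one per-relationship decision), the cost estimates, and the weights below are stand-ins chosen for illustration; they are not the thesis's transformation algebra or cost model.

```python
# Toy local search over document-store designs: each design is a tuple of
# embed/reference choices, scored by a weighted loss over assumed estimates of
# query cost, storage, and document complexity.
RELATIONSHIPS = ["user-orders", "order-items", "user-address"]  # hypothetical schema

def loss(design, w=(0.6, 0.3, 0.1)):
    # Toy estimates: embedding lowers query cost but raises storage (duplication)
    # and document complexity; referencing does the opposite.
    embeds = sum(1 for choice in design if choice == "embed")
    refs = len(design) - embeds
    query_cost = 1.0 * refs      # each reference adds a lookup/join
    storage = 1.5 * embeds       # duplication from embedding
    complexity = 0.8 * embeds    # deeper nesting
    return w[0] * query_cost + w[1] * storage + w[2] * complexity

def neighbours(design):
    """Flip one per-relationship decision at a time (a minimal 'transformation')."""
    for i, choice in enumerate(design):
        flipped = "reference" if choice == "embed" else "embed"
        yield design[:i] + (flipped,) + design[i + 1:]

def local_search(initial):
    current = initial
    while True:
        best = min(neighbours(current), key=loss)
        if loss(best) >= loss(current):
            return current          # local optimum reached
        current = best

start = ("reference",) * len(RELATIONSHIPS)
best = local_search(start)
print(dict(zip(RELATIONSHIPS, best)), "loss =", round(loss(best), 3))
```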

    Artificial Intelligence for the Edge Computing Paradigm.

    With modern technologies moving towards the Internet of Things, where seemingly every financial, private, commercial, and medical transaction is carried out by portable and intelligent devices, Machine Learning has found its way into every possible smart device and application. However, Machine Learning cannot be used directly on the edge due to the limited capabilities of small, battery-powered modules. Therefore, this thesis aims to provide lightweight automated Machine Learning models that are applied on a standard edge device, the Raspberry Pi: one framework aims to limit parameter tuning while automating feature extraction, and a second can perform Machine Learning classification on the edge in the traditional manner and can additionally be used for image-based explainable Artificial Intelligence. Also, a commercial Artificial Intelligence software package has been ported to work in a client/server setup on the Raspberry Pi board, where it was incorporated into all of the Machine Learning frameworks presented in this thesis. This dissertation also introduces multiple algorithms that convert images into time-series for classification and explainability, as well as novel time-series feature extraction algorithms applied to biomedical data, and it introduces the concept of the Activation Engine, a post-processing block that tunes Neural Networks without the need for particular experience in Machine Learning. In addition, a tree-based method for multiclass classification is introduced which outperforms the One-to-Many approach while being less complex than the One-to-One method. The results presented in this thesis exhibit high accuracy when compared with the literature, while remaining efficient in terms of power consumption and inference time. Additionally, the concepts, methods, and algorithms introduced are technically novel; they include:
    • feature extraction of professionally annotated and poorly annotated time-series;
    • the introduction of the Activation Engine post-processing block;
    • a model for global image explainability with inference on the edge;
    • a tree-based algorithm for multiclass classification.
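
    As a generic illustration (not the thesis's specific algorithm), the sketch below builds a binary tree of classifiers over the set of classes: each internal node routes a sample to one of two class groups, so a prediction evaluates only about log2(K) binary models instead of the K models of One-to-Many or the K(K-1)/2 models of One-to-One. The class-splitting strategy, the base learner, and all names here are assumptions.

```python
# Tree-based multiclass classification sketch: recursively split the class set
# in two and train a binary router at each internal node.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ClassTree:
    def __init__(self):
        self.clf = None
        self.left = self.right = None
        self.label = None                      # set only on leaves

    def fit(self, X, y, classes=None):
        classes = sorted(set(y)) if classes is None else classes
        if len(classes) == 1:
            self.label = classes[0]
            return self
        mid = len(classes) // 2
        left_classes, right_classes = classes[:mid], classes[mid:]
        side = np.isin(y, right_classes).astype(int)   # 0 = left group, 1 = right group
        self.clf = LogisticRegression(max_iter=1000).fit(X, side)
        self.left = ClassTree().fit(X[side == 0], y[side == 0], left_classes)
        self.right = ClassTree().fit(X[side == 1], y[side == 1], right_classes)
        return self

    def predict_one(self, x):
        node = self
        while node.label is None:
            go_right = node.clf.predict(x.reshape(1, -1))[0] == 1
            node = node.right if go_right else node.left
        return node.label

    def predict(self, X):
        return np.array([self.predict_one(x) for x in X])

# Example usage (hypothetical data):
#   from sklearn.datasets import load_digits
#   X, y = load_digits(return_X_y=True)
#   print((ClassTree().fit(X, y).predict(X) == y).mean())
```
    For an 8-class problem, such a tree trains 7 binary models but evaluates only 3 of them per prediction, which is the kind of saving the comparison with One-to-Many and One-to-One alludes to.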

    Proceedings of the 3rd Open Source Geospatial Research & Education Symposium OGRS 2014

    The third Open Source Geospatial Research & Education Symposium (OGRS) was held in Helsinki, Finland, on 10 to 13 June 2014. The symposium was hosted and organized by the Department of Civil and Environmental Engineering, Aalto University School of Engineering, in partnership with the OGRS Community, on the Espoo campus of Aalto University. These proceedings contain the 20 papers presented at the symposium. OGRS is a meeting dedicated to exchanging ideas in, and results from, the development and use of open source geospatial software in both research and education. The symposium offers several opportunities for discussing, learning, and presenting results, principles, methods, and practices while supporting a primary theme: how to carry out research and educate students using, contributing to, and launching open source geospatial initiatives. Participating in open source initiatives can potentially boost innovation as a value-creating process requiring joint collaboration between academia, foundations, associations, developer communities, and industry. Additionally, open source software can improve the efficiency and impact of university education by introducing open and freely usable tools and research results to students, and by encouraging them to get involved in projects, which may eventually lead to new community projects and businesses. The symposium contributes to the validation of the open source model in research and education in geoinformatics.