25 research outputs found

    Tabular and latent space synthetic data generation: a literature review

    Fonseca, J., & Bacao, F. (2023). Tabular and latent space synthetic data generation: a literature review. Journal of Big Data, 10, Article 115, pp. 1-37. https://doi.org/10.1186/s40537-023-00792-7

    The generation of synthetic data can be used for anonymization, regularization, oversampling, semi-supervised learning, self-supervised learning, and several other tasks. Such broad potential motivated the development of new algorithms specialized in data generation for specific data formats and Machine Learning (ML) tasks. However, tabular data, one of the most common data formats in industrial applications, is generally overlooked: literature analyses are scarce, state-of-the-art methods are spread across domains and ML tasks, and there is little to no distinction among the main types of mechanism underlying synthetic data generation algorithms. In this paper, we analyze tabular and latent space synthetic data generation algorithms. Specifically, we propose a unified taxonomy as an extension and generalization of previous taxonomies, review 70 generation algorithms across six ML problems, group the generation mechanisms identified into six categories, describe each type of generation mechanism, discuss metrics to evaluate the quality of synthetic data, and provide recommendations for future research. We expect this study to assist researchers and practitioners in identifying relevant gaps in the literature and in designing better, more informed practices with synthetic data.

    This research was supported by two research grants of the Portuguese Foundation for Science and Technology (Fundação para a Ciência e a Tecnologia), references SFRH/BD/151473/2021 and DSAIPA/DS/0116/2019, and by project UIDB/04152/2020, Centro de Investigação em Gestão de Informação (MagIC).
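
    One of the classic generation mechanisms such reviews cover is interpolation-based oversampling in the style of SMOTE. The following minimal NumPy sketch is an illustration of that mechanism, not the paper's own method: it generates synthetic minority-class rows by interpolating each sampled row toward one of its nearest neighbours.

        import numpy as np

        def smote_like(X, n_new, k=5, rng=None):
            """Generate n_new synthetic rows by interpolating each sampled
            row toward one of its k nearest neighbours (SMOTE-style)."""
            rng = np.random.default_rng(rng)
            n = len(X)
            samples = []
            for _ in range(n_new):
                i = rng.integers(n)
                d = np.linalg.norm(X - X[i], axis=1)  # distances to every row
                d[i] = np.inf                         # exclude the row itself
                j = rng.choice(np.argsort(d)[:k])     # pick one of the k nearest
                lam = rng.random()                    # interpolation weight in [0, 1)
                samples.append(X[i] + lam * (X[j] - X[i]))
            return np.array(samples)

        # toy usage: oversample a 2-feature minority class
        minority = np.array([[1.0, 2.0], [1.2, 1.9], [0.9, 2.2], [1.1, 2.1]])
        print(smote_like(minority, n_new=3, k=2, rng=0))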

    Movement Analytics: Current Status, Application to Manufacturing, and Future Prospects from an AI Perspective

    Data-driven decision making is becoming an integral part of manufacturing companies. Data are collected and commonly used to improve efficiency and produce high-quality items for customers. IoT-based and other forms of object tracking are an emerging tool for collecting movement data of objects and entities (e.g., human workers, moving vehicles, trolleys) over space and time. Movement data can provide valuable insights, such as process bottlenecks, resource utilization, and effective working time, that can be used for decision making and improving efficiency. Turning movement data into valuable information for industrial management and decision making requires analysis methods; we refer to this process as movement analytics. The purpose of this document is to review the current state of work on movement analytics, both in manufacturing and more broadly. We survey relevant work from both a theoretical perspective and an application perspective. From the theoretical perspective, we put an emphasis on useful methods from two research areas: machine learning and logic-based knowledge representation. We also review their combinations in view of movement analytics, discuss promising areas for future development and application, and touch on constraint optimization. From the application perspective, we review applications of these methods to movement analytics in a general sense and across various industries. We also describe currently available commercial off-the-shelf products for tracking in manufacturing, and we give an overview of the main concepts of digital twins and their applications.
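
    To make the notion of movement analytics concrete, here is a hypothetical sketch, not taken from the survey, that derives per-zone dwell time, a building block for bottleneck and resource-utilization analysis, from timestamped tracking events. The event format, zone names, and entity names are assumptions for illustration.

        from collections import defaultdict

        # Hypothetical tracking events: (entity_id, timestamp_seconds, zone)
        events = [
            ("trolley_1", 0,   "loading"),
            ("trolley_1", 120, "assembly"),
            ("trolley_1", 600, "packing"),
            ("worker_7",  0,   "assembly"),
            ("worker_7",  450, "packing"),
        ]

        def dwell_time_per_zone(events):
            """Sum the time each entity spends in each zone between
            consecutive position events (the last, open-ended event is ignored)."""
            by_entity = defaultdict(list)
            for entity, t, zone in events:
                by_entity[entity].append((t, zone))
            totals = defaultdict(float)
            for readings in by_entity.values():
                readings.sort()
                for (t0, zone), (t1, _) in zip(readings, readings[1:]):
                    totals[zone] += t1 - t0
            return dict(totals)

        # zones with long total dwell times relative to expected cycle
        # times are candidate bottlenecks
        print(dwell_time_per_zone(events))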

    Raster Time Series: Learning and Processing

    As the amount of remote sensing data increases at a high rate due to great improvements in sensor technology, efficient processing capabilities are of utmost importance. Remote sensing data from satellites are crucial in many scientific domains, like biodiversity and climate research. Because weather and climate are of particular interest for almost all living organisms on earth, the efficient classification of clouds is one of the most important problems. Geostationary satellites such as Meteosat Second Generation (MSG) offer the only possibility to generate long-term cloud data sets with high spatial and temporal resolution. This work therefore addresses research problems on efficient and parallel processing of MSG data to enable new applications and insights. First, we address the lack of a suitable processing chain to generate a long-term Fog and Low Stratus (FLS) time series. We present an efficient MSG data processing chain that processes multiple tasks simultaneously and raster data in parallel using the Open Computing Language (OpenCL). The processing chain delivers a uniform FLS classification that combines day and night approaches in a single method. As a result, a year of FLS rasters can be calculated with little effort. The second topic presents the application of Convolutional Neural Networks (CNN) for cloud classification. Conventional approaches to cloud detection often classify only single pixels and ignore the fact that clouds are highly dynamic and spatially continuous entities. We therefore propose a new method based on deep learning. Using a CNN image segmentation architecture, the presented Cloud Segmentation CNN (CS-CNN) classifies all pixels of a scene simultaneously. We show that CS-CNN is capable of processing multispectral satellite data to identify continuous phenomena such as highly dynamic clouds. The proposed approach provides excellent results on MSG satellite data in terms of quality, robustness, and runtime, in comparison to Random Forest (RF), another widely used machine learning method. Finally, we present the processing of raster time series with a system for Visualization, Transformation, and Analysis (VAT) of spatio-temporal data. It enables data-driven research with explorative workflows and uses time as an integral dimension. The combination of various raster and vector time series enables new applications and insights. We present an application that combines weather information and aircraft trajectories to identify patterns in bad weather situations.
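
    The per-pixel, whole-scene classification idea behind CS-CNN can be illustrated with a minimal PyTorch sketch. The architecture, channel count, and class count below are assumptions for illustration, not the thesis's actual network: a small fully convolutional net maps a multispectral scene to per-pixel class logits in a single forward pass.

        import torch
        import torch.nn as nn

        class TinySegNet(nn.Module):
            """Illustrative fully convolutional net: all pixels of a
            multispectral scene are classified simultaneously."""
            def __init__(self, in_channels=12, n_classes=2):
                # in_channels=12 is illustrative (e.g., MSG SEVIRI channels)
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(32, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel logits
                )

            def forward(self, x):           # x: (batch, channels, H, W)
                return self.net(x)          # logits: (batch, n_classes, H, W)

        scene = torch.randn(1, 12, 64, 64)        # dummy multispectral tile
        mask = TinySegNet()(scene).argmax(dim=1)  # (1, 64, 64), class per pixel
        print(mask.shape)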

    Multidimensional process discovery


    Compact data structures for large and complex datasets

    In this thesis, we study the problem of processing large and complex collections of data, presenting new data structures and algorithms that allow us to store and analyze them efficiently. We focus on three main domains: processing of multidimensional data, representation of spatial information, and analysis of scientific data. The common nexus is the use of compact data structures, which combine in a single data structure a compressed representation of the data and the structures to access it. The goal is to be able to manage data directly in compressed form, and in this way to keep data always compressed, even in main memory. This yields two benefits: we can manage larger datasets in main memory, and we make better use of the memory hierarchy. In the first part, we propose a compact data structure for multidimensional databases where the domains of each dimension are hierarchical. It allows efficient queries of aggregate information (sum, maximum value, etc.) at different levels of each dimension; a typical application environment for our solution would be an OLAP system. Second, we focus on the representation of spatial information, specifically on raster data, which are commonly used in geographic information systems (GIS) to represent spatial attributes (such as the altitude of a terrain, the average temperature, etc.). The new method supports several typical spatial queries with better response times than the state of the art, while saving space in both main memory and disk. Besides, we also present a framework to run a spatial join between raster and vector datasets that uses the compact data structure previously presented in this part of the thesis. Finally, we present a solution for the computation of empirical moments from a set of trajectories of a continuous-time stochastic process observed in a given period of time; the empirical autocovariance function is an example of such operations. We propose a method that compresses sequences of floating-point numbers representing Brownian motion trajectories, although it can be used in other similar areas. In addition, we introduce a new algorithm for the calculation of the autocovariance that uses a single trajectory at a time, instead of loading the whole dataset, reducing memory consumption during the calculation process.

    Funding: Xunta de Galicia, ED431G/01 and GRC2013/05; Ministerio de Economía y Competitividad, TIN2016-78011-C4-1-R, TIN2016-77158-C4-3-R, and TIN2013-46801-C4-3-R; Centro para el Desarrollo Tecnológico e Industrial, IDI-20141259 and ITC-20151247.
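
    The single-trajectory-at-a-time idea can be sketched directly. The following minimal NumPy version assumes all trajectories are sampled on a common time grid and ignores the compression layer the thesis adds; it accumulates first and second moments so that only one trajectory needs to be in memory at once.

        import numpy as np

        def empirical_autocovariance(trajectories):
            """Accumulate C(s, t) = E[X(s)X(t)] - E[X(s)]E[X(t)]
            while holding only one trajectory in memory at a time."""
            sum_x, sum_xx, n = None, None, 0
            for x in trajectories:            # x: 1-D array over the time grid
                if sum_x is None:
                    sum_x = np.zeros(len(x))
                    sum_xx = np.zeros((len(x), len(x)))
                sum_x += x                    # running first moment
                sum_xx += np.outer(x, x)      # running second moment
                n += 1
            mean = sum_x / n
            return sum_xx / n - np.outer(mean, mean)

        # toy usage: 100 random-walk trajectories on a 50-step grid,
        # consumed one at a time (in practice, streamed from disk)
        rng = np.random.default_rng(0)
        paths = rng.standard_normal((100, 50)).cumsum(axis=1)
        C = empirical_autocovariance(iter(paths))
        print(C.shape)  # (50, 50); here Cov[j, k] ~ min(j, k) + 1 (0-indexed steps)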

    Resolving the Complexity of Some Fundamental Problems in Computational Social Choice

    This thesis is in the area of computational social choice, which lies at the intersection of algorithms and social choice theory.