
    Deadlock detection and dihomotopic reduction via progress shell decomposition

    Deadlock detection for concurrent programs has traditionally been accomplished by symbolic methods or by search of a state transition system. This work examines an approach that uses geometric semantics involving the topological notion of dihomotopy to partition the state space into components, followed by an exhaustive search of the reduced state space. Prior work partitioned the state space inductively; however, this work shows that a technique motivated by recursion further reduces the size of the state transition system. The reduced state space results in asymptotic improvements in overall runtime for verification. Thus, with efficient partitioning, more efficient deadlock detection and, eventually, more efficient verification of some temporal properties can be expected for large problems.
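    As a concrete reference for the search step, the sketch below shows exhaustive deadlock detection over a state transition system: a reachable, non-final state with no enabled successors is a deadlock. The `successors` and `is_final` callbacks are hypothetical placeholders; in the paper's setting, the successor relation would range over dihomotopy-reduced components rather than raw interleavings.

```python
from collections import deque

def find_deadlocks(initial, successors, is_final):
    """Breadth-first search of a (reduced) state transition system,
    reporting reachable non-final states with no outgoing transitions."""
    seen, deadlocks = {initial}, []
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        nexts = successors(state)
        if not nexts and not is_final(state):
            deadlocks.append(state)          # stuck but not finished: deadlock
        for nxt in nexts:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return deadlocks
```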

    Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge

    Data science is essential for the extraction of value from data. However, the most tedious part of the process, data wrangling, implies a range of mostly manual formatting, identification and cleansing manipulations. Data wrangling still resists automation partly because the problem strongly depends on domain information, which becomes a bottleneck for state-of-the-art systems as the diversity of domains, formats and structures of the data increases. In this thesis we focus on generating algorithms that take advantage of domain knowledge for the automation of parts of the data wrangling process. We illustrate the way in which general program induction techniques, instead of domain-specific languages, can be applied flexibly to problems where knowledge is important, through the dynamic use of domain-specific knowledge. More generally, we argue that a combination of knowledge-based and dynamic learning approaches leads to successful solutions. We propose several strategies to automatically select or construct the appropriate background knowledge for several data wrangling scenarios. The key idea is based on choosing the best specialised background primitives according to the context of the particular problem to solve. We address two scenarios. In the first one, we handle personal data (names, dates, telephone numbers, etc.) that are presented in very different string formats and have to be transformed into a unified format. The problem is how to build a compositional transformation from a large set of primitives in the domain (e.g., handling months, years, days of the week, etc.). We develop a system (BK-ADAPT) that guides the search through the background knowledge by extracting several meta-features from the examples characterising the column domain. In the second scenario, we face the transformation of data matrices in generic programming languages such as R, using an input matrix and some cells of the output matrix as examples. We also develop a system guided by a tree-based search (AUTOMAT[R]IX) that uses several constraints, prior primitive probabilities and textual hints to efficiently learn the transformations. With these systems, we show that the combination of inductive programming with the dynamic selection of the appropriate primitives from the background knowledge is able to improve the results of other state-of-the-art and more specific data wrangling approaches.

    This research was supported by the Spanish MECD Grant FPU15/03219; partially by the Spanish MINECO TIN2015-69175-C4-1-R (Lobass) and RTI2018-094403-B-C32-AR (FreeTech) in Spain; and by the ERC Advanced Grant Synthesising Inductive Data Models (Synth) in Belgium. Contreras Ochando, L. (2020). Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160724
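    As a rough illustration of the BK-ADAPT idea, the toy sketch below selects a specialised primitive set by extracting meta-features from a column's examples; the primitive inventory, meta-features, and selection rule here are invented simplifications, not the thesis's actual components.

```python
import re

# Hypothetical specialised primitive sets, keyed by column domain.
PRIMITIVES = {
    "date":  {"split_date": lambda s: re.split(r"[-/ ]", s)},
    "phone": {"digits_only": lambda s: re.sub(r"\D", "", s)},
}

def meta_features(examples):
    """Extract simple meta-features characterising the column domain."""
    total = max(1, sum(len(s) for s in examples))
    return {
        "mostly_digits": sum(c.isdigit() for s in examples for c in s) / total > 0.5,
        "has_separators": any(re.search(r"[-/ ]", s) for s in examples),
    }

def select_background_knowledge(examples):
    """Choose the primitive set whose domain best matches the meta-features,
    so the downstream program search only explores relevant primitives."""
    feats = meta_features(examples)
    if feats["mostly_digits"] and not feats["has_separators"]:
        return PRIMITIVES["phone"]
    return PRIMITIVES["date"]

print(list(select_background_knowledge(["25/04/1986", "03/12/2001"])))  # ['split_date']
```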

    Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks


    Advanced data structures for the interpretation of image and cartographic data in geo-based information systems

    A growing need to use geographic information systems (GIS) to improve the flexibility and overall performance of very large, heterogeneous databases was examined. The Vaster structure and the Topological Grid structure were compared to test whether such hybrid structures represent an improvement in performance. The use of artificial intelligence in a geographic/earth sciences database context is being explored. The architecture of the Knowledge Based GIS (KBGIS) has a dual object/spatial database and a three-tier hierarchical search subsystem. Quadtree Spatial Spectra (QTSS) are derived, based on the quadtree data structure, to generate and represent spatial distribution information for large volumes of spatial data.
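    The quadtree idea behind QTSS can be sketched as follows: recursively subdivide an occupancy grid, stopping at uniform regions, and record per depth how many quadrants remain mixed. The binary-grid encoding and the per-depth counts are assumptions for illustration only, not the system's actual spectra.

```python
def quadtree_spectrum(grid, x, y, size, depth=0, spectrum=None):
    """Subdivide a binary occupancy grid quadtree-style; count, per depth,
    the quadrants that are mixed. Assumes size is a power of two."""
    if spectrum is None:
        spectrum = {}
    cells = [grid[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if size == 1 or all(cells) or not any(cells):
        return spectrum                       # uniform region: no subdivision
    spectrum[depth] = spectrum.get(depth, 0) + 1
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree_spectrum(grid, x + dx, y + dy, half, depth + 1, spectrum)
    return spectrum

grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]]
print(quadtree_spectrum(grid, 0, 0, 4))       # {0: 1, 1: 1}
```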

    Ensemble learning for ranking interesting attributes

    Machine learning knowledge representations, such as decision trees, are often incomprehensible to humans. They can also contain errors specific to the representation type and the data used to generate them. By combining larger, less comprehensible decision trees, it is possible to increase their accuracy as an ensemble compared to the best individual tree. The thesis examines an ensemble learning technique and presents a unique knowledge elicitation technique which produces an ordered ranking of attributes by their importance in leading to more desirable classifications. The technique compares full branches of decision trees, finding the set difference of shared attributes. The combination of this information from all ensemble members is used to build an importance table which allows attributes to be ranked ordinally and by relative magnitude. A case study utilizing this method is discussed and its results are presented and summarized.
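    One plausible reading of the branch-comparison step is sketched below: treat each root-to-leaf branch as a set of attributes, credit the attributes in the symmetric difference of each branch pair (those on which the paths diverge), and accumulate the credits over all ensemble members into an importance table. The scoring here is an assumption for illustration, not the thesis's exact procedure.

```python
from collections import Counter
from itertools import combinations

def rank_attributes(ensemble_branches):
    """Build an importance table by crediting attributes not shared by
    pairs of branches, summed over every tree in the ensemble."""
    table = Counter()
    for branches in ensemble_branches:        # one list of branches per tree
        for a, b in combinations(branches, 2):
            table.update(set(a) ^ set(b))     # set difference of shared attributes
    return table.most_common()                # ordinal ranking with magnitudes

# Example: two trees, each branch given as its set of attributes.
trees = [[{"age", "income"}, {"age", "debt"}],
         [{"income"}, {"income", "debt"}]]
print(rank_attributes(trees))                 # [('debt', 2), ('income', 1)]
```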

    Probabilistic Load Forecasting with Deep Conformalized Quantile Regression

    The establishment of smart grids and the introduction of distributed generation posed new challenges in energy analytics that can be tackled with machine learning algorithms. The latter are able to handle a combination of weather and consumption data, grid measurements, and their historical records to compute inferences and make predictions. Accurate energy load forecasting is essential to assure reliable grid operation and power provision at peak times, when power consumption is high. However, most existing load forecasting algorithms provide only point estimates, or probabilistic forecasting methods that construct prediction intervals without a coverage guarantee. Nevertheless, information about uncertainty and prediction intervals is very useful for grid operators to evaluate the reliability of operations in the power network and to enable a risk-based strategy for configuring the grid over a conservative one. Two popular statistical methods are used to generate prediction intervals in regression tasks. Quantile regression is a non-parametric probabilistic forecasting technique producing prediction intervals adaptive to local variability within the data, by estimating quantile functions directly from the data; however, the actual coverage of the prediction intervals obtained via quantile regression is not guaranteed to satisfy the designed coverage level for finite samples. Conformal prediction is an on-top probabilistic forecasting framework producing symmetric prediction intervals, most often of fixed length, guaranteed to marginally satisfy the designed coverage level for finite samples. This thesis proposes a probabilistic load forecasting method for constructing marginally valid prediction intervals that are adaptive to local variability and suitable for data characterized by temporal dependencies. The method is applied in conjunction with recurrent neural networks, deep learning architectures for sequential data that are mostly used to compute point forecasts rather than probabilistic forecasts. Specifically, an ensemble of pinball-loss-guided deep neural networks performing quantile regression is used together with conformal prediction to address the individual shortcomings of both techniques.
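    The conformal calibration step on top of quantile regression can be made concrete with the standard conformalized quantile regression (CQR) recipe sketched below; the thesis's recurrent-network ensemble is omitted, and this generic calibration is an illustration rather than the exact method used.

```python
import numpy as np

def cqr_margin(q_lo, q_hi, y_cal, alpha=0.1):
    """Standard CQR calibration: measure how far calibration targets fall
    outside their quantile-regression interval, and return the margin by
    which to widen future intervals for marginal coverage >= 1 - alpha."""
    scores = np.maximum(q_lo - y_cal, y_cal - q_hi)   # conformity scores
    n = len(y_cal)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    return np.quantile(scores, level)

# Final interval for a new input x: [q_lo(x) - margin, q_hi(x) + margin].
```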

    A Formal Study of Moessner's Sieve

    In this dissertation, we present a new characterization of Moessner's sieve that brings a range of new results with it. As such, we present a dual to Moessner's sieve that generates a sequence of so-called Moessner triangles, instead of a traditional sequence of successive powers, where each triangle is generated column by column, instead of row by row. Furthermore, we present a new characteristic function of Moessner's sieve that calculates the entries of the Moessner triangles generated by Moessner's sieve, without having to calculate the prefix of the sequence. We prove Moessner's theorem adapted to our new dual sieve, called Moessner's idealized theorem, where we generalize the initial configuration from a sequence of natural numbers to a seed tuple containing just one non-zero entry. We discover a new property of Moessner's sieve that connects Moessner triangles of different rank, thus acting as a dual to the existing relation between Moessner triangles of different index, thereby suggesting the presence of a 2-dimensional grid of triangles, rather than the traditional 1-dimensional sequence of values. We adapt Long's theorem to the dual sieve and obtain a simplified initial configuration of Long's theorem, consisting of a seed tuple of two non-zero entries. We conjecture a new generalization of Long's theorem that has a seed tuple of arbitrary entries for its initial configuration and connects Moessner's sieve with polynomial evaluation. Lastly, we approach the connection between Moessner's sieve and polynomial evaluation from an alternative perspective and prove an equivalence relation between the triangle creation procedures of Moessner's sieve and the repeated application of Horner's method for polynomial division. All results presented in this dissertation have been formalized in the Coq proof assistant and proved using a minimal subset of the constructs and tactics available in the Coq language. As such, we demonstrate the potential of proof assistants to inspire new results while lowering the gap between programs (in computer science) and proofs (in mathematics).
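    For reference, the classical row-by-row formulation of Moessner's sieve (the procedure the dissertation dualizes) is short to state in code: repeatedly strike out every m-th element and take partial sums of the survivors, for m = k down to 2, which yields the k-th powers.

```python
def moessner(k, n):
    """Classical Moessner's sieve: returns the first n k-th powers."""
    seq = list(range(1, n * k + 1))
    for m in range(k, 1, -1):
        survivors = [v for i, v in enumerate(seq, 1) if i % m != 0]
        total, seq = 0, []
        for v in survivors:                   # partial (running) sums
            total += v
            seq.append(total)
    return seq[:n]

print(moessner(2, 5))   # [1, 4, 9, 16, 25]
print(moessner(3, 5))   # [1, 8, 27, 64, 125]
```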

    A Recursive Partitioning Approach for Dynamic Discrete Choice Modeling in High Dimensional Settings

    Dynamic discrete choice models are widely employed to answer substantive and policy questions in settings where individuals' current choices have future implications. However, estimation of these models is often computationally intensive and/or infeasible in high-dimensional settings. Indeed, even specifying the structure for how the utilities and state transitions enter the agent's decision is challenging in high-dimensional settings when we have no guiding theory. In this paper, we present a semi-parametric formulation of dynamic discrete choice models that incorporates a high-dimensional set of state variables, in addition to the standard variables used in a parametric utility function. The high-dimensional set can include all the variables that are not the main variables of interest but may potentially affect people's choices and must be included in the estimation procedure, i.e., control variables. We present a data-driven recursive partitioning algorithm that reduces the dimensionality of the high-dimensional state space by taking the variation in choices and state transitions into account. Researchers can then use the method of their choice to estimate the problem using the discretized state space from the first stage. Our approach can reduce estimation bias and make estimation feasible at the same time. We present Monte Carlo simulations to demonstrate the performance of our method compared to standard estimation methods that ignore the high-dimensional explanatory variable set.
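    A toy sketch of the first-stage discretization: greedily split the high-dimensional state matrix where observed choices differ most, and let the resulting leaves define the reduced state space for the second-stage estimator. The Gini criterion, median cuts, and stopping rule are illustrative assumptions; the paper's algorithm also uses variation in state transitions.

```python
import numpy as np

def gini(y):
    """Gini impurity of the observed choice shares."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return 1.0 - np.sum(p ** 2)

def partition(X, choices, depth=0, max_depth=3, min_size=50):
    """Recursively split states on the (feature, median) cut that most
    reduces choice heterogeneity; leaves form the discretized state space."""
    if depth == max_depth or len(choices) < 2 * min_size:
        return {"leaf": True, "n": len(choices)}
    best = None
    for j in range(X.shape[1]):
        cut = np.median(X[:, j])
        left = X[:, j] <= cut
        if min_size <= left.sum() <= len(choices) - min_size:
            score = gini(choices[left]) * left.mean() + gini(choices[~left]) * (~left).mean()
            if best is None or score < best[0]:
                best = (score, j, cut, left)
    if best is None:                          # no admissible split found
        return {"leaf": True, "n": len(choices)}
    _, j, cut, left = best
    return {"feature": j, "cut": cut,
            "left":  partition(X[left], choices[left], depth + 1, max_depth, min_size),
            "right": partition(X[~left], choices[~left], depth + 1, max_depth, min_size)}

# Usage on simulated data: tree = partition(X, choices) discretizes X.
```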