1,269 research outputs found

    Sparse Learning over Infinite Subgraph Features

    We present a supervised learning algorithm for graph data (a set of graphs) that handles arbitrary twice-differentiable loss functions and sparse linear models over all possible subgraph features. To date, it has been shown that several types of sparse learning, such as AdaBoost, LPBoost, LARS/LASSO, and sparse PLS regression, can be performed over all possible subgraph features. Particular emphasis is placed on simultaneous learning of relevant features from an infinite set of candidates. We first generalize techniques used in all these preceding studies to derive a unifying bounding technique for arbitrary separable functions. We then carefully use this bound to make block coordinate gradient descent feasible over infinite subgraph features, resulting in a fast-converging algorithm that can solve a wider class of sparse learning problems over graph data. We also empirically study the differences from existing approaches in convergence behavior, selected subgraph features, and search-space size. We further discuss several previously unnoticed issues in sparse learning over all possible subgraph features. Comment: 42 pages, 24 figures, 4 tables
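    As a rough illustration of how sparse learning over an infinite feature space can proceed (a minimal sketch, not the paper's algorithm), the snippet below runs L1-regularized coordinate gradient descent where, at each step, a feature-search oracle returns the subgraph feature with the largest gradient magnitude. The oracle interface X_oracle.best_feature, the loss-gradient callback, and all names are assumptions for illustration; making that search tractable with pruning bounds is precisely what the paper addresses and is hidden behind the oracle here.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the L1 penalty: shrink v toward zero by t.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sparse_coordinate_descent(X_oracle, y, loss_grad, lam, lr=0.1, iters=100):
        # weights is the sparse linear model: feature id -> nonzero coefficient.
        weights = {}
        pred = np.zeros_like(y, dtype=float)
        for _ in range(iters):
            residual = loss_grad(pred, y)              # dL/dprediction, length n
            j, x_j = X_oracle.best_feature(residual)   # most violating subgraph feature
            g = x_j @ residual                         # partial derivative w.r.t. w_j
            w_old = weights.get(j, 0.0)
            w_new = soft_threshold(w_old - lr * g, lr * lam)
            pred += (w_new - w_old) * x_j
            if w_new == 0.0:
                weights.pop(j, None)
            else:
                weights[j] = w_new
        return weights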

    A Network-Based Deterministic Model for Causal Complexity

    Despite the widespread use of techniques and tools for causal analysis, existing methodologies still fall short, as they largely regard causal variables as independent elements and thereby fail to appreciate the significance of the interactions among causal variables. The prospect of inferring causal relationships from weaker structural assumptions calls for further research in this area. This study explores the effects of variable interactions in the context of causal analysis and introduces a new approach to causal complexity, with the goal of moving the solution set closer to a deterministic one by taking into consideration the underlying patterns embedded within a dataset, in particular the interactions of causal variables. Our model follows the configurational approach and, as such, is able to account for the three major phenomena of conjunctural causation, equifinality, and causal asymmetry.

    Networking - A Statistical Physics Perspective

    Efficient networking has a substantial economic and societal impact in a broad range of areas, including transportation systems, wired and wireless communications, and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demands for congestion control, higher traffic capacity, quality of service, robustness, and reduced energy consumption require new tools and methods to meet these conflicting requirements. The new methodology should serve both to gain a better understanding of the properties of networking systems at the macroscopic level and to develop new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear, large-scale systems. This paper presents an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. Comment: (Review article) 71 pages, 14 figures

    Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training

    Distributed full-graph training of Graph Neural Networks (GNNs) over large graphs is bandwidth-demanding and time-consuming. Frequent exchanges of node features, embeddings, and embedding gradients (all referred to as messages) across devices bring significant communication overhead for nodes with remote neighbors on other devices (marginal nodes) and unnecessary waiting time for nodes without remote neighbors (central nodes) in the training graph. This paper proposes an efficient GNN training system, AdaQP, to expedite distributed full-graph GNN training. We stochastically quantize messages transferred across devices to lower-precision integers to reduce communication traffic, and advocate communication-computation parallelization between marginal nodes and central nodes. We provide theoretical analysis to prove fast training convergence (at the rate of O(T^{-1}), with T being the total number of training epochs) and design an adaptive quantization bit-width assignment scheme for each message based on the analysis, targeting a good trade-off between training convergence and efficiency. Extensive experiments on mainstream graph datasets show that AdaQP substantially improves distributed full-graph training throughput (by up to 3.01×) with negligible accuracy drop (at most 0.30%) or even accuracy improvement (up to 0.19%) in most cases, showing significant advantages over state-of-the-art works.
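    As a rough sketch of the message-quantization idea (not AdaQP's actual bit-width assignment scheme), the snippet below stochastically rounds a floating-point message tensor to b-bit unsigned integers with a per-tensor scale and zero point; stochastic rounding makes the quantizer unbiased in expectation, which is the property convergence analyses of this kind typically rely on. The function names and per-tensor granularity are assumptions for illustration.

    import numpy as np

    def stochastic_quantize(msg, bits=4):
        # Map a float array onto b-bit integers with unbiased stochastic rounding:
        # E[dequantize(q)] equals the original message.
        levels = 2 ** bits - 1
        lo, hi = float(msg.min()), float(msg.max())
        scale = (hi - lo) / levels if hi > lo else 1.0
        normalized = (msg - lo) / scale               # values in [0, levels]
        floor = np.floor(normalized)
        prob_up = normalized - floor                  # round up with this probability
        q = floor + (np.random.random(msg.shape) < prob_up)
        return q.astype(np.uint8), scale, lo

    def dequantize(q, scale, zero):
        return q.astype(np.float32) * scale + zero

    # Usage: quantize an embedding message before sending, dequantize on receipt.
    msg = np.random.randn(1024, 64).astype(np.float32)
    q, scale, zero = stochastic_quantize(msg, bits=4)
    recovered = dequantize(q, scale, zero)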

    Dynamic Resource Allocation

    Computer systems are subject to continuously increasing performance demands. However, energy consumption has become a critical issue, both for high-end large-scale parallel systems [12] and for portable devices [34]. In other words, more work needs to be done in less time, preferably with the same or a smaller energy budget. Future performance and efficiency goals of computer systems can only be reached with large-scale, heterogeneous architectures [6]. Due to their distributed nature, control software is required to coordinate the parallel execution of applications on such platforms. Abstraction, arbitration and multi-objective optimization are only a subset of the tasks this software has to fulfill [6, 31]. The essential problem in all of this is the allocation of platform resources to satisfy the needs of an application.
    This work considers the dynamic resource allocation problem, also known as the run-time mapping problem. This problem consists of task assignment to (processing) elements and communication routing through the interconnect between the elements. In mathematical terms, the combined problem is defined as the multi-resource quadratic assignment and routing problem (MRQARP). An integer linear programming formulation is provided, as well as complexity proofs of the NP-hardness of the problem.
    This work builds upon the state-of-the-art work of Yagiura et al. [39, 40, 42] on metaheuristics for various generalizations of the generalized assignment problem. Specifically, we focus on the guided local search (GLS) approach for the multi-resource quadratic assignment problem (MRQAP). The quadratic assignment problem defines a cost relation between tasks and between elements. We generalize the multi-resource quadratic assignment problem with the addition of a capacitated interconnect and a communication topology between tasks. Numerical experiments show that the performance of the approach is comparable with commercial solvers. Its footprint, its time-versus-quality trade-off, and the available metadata make guided local search a suitable candidate for run-time mapping.
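    For context, guided local search wraps a plain local search with penalties on solution features that keep appearing in local optima, steering the search away from them. The snippet below is a minimal sketch of that penalty mechanism on a toy assignment problem; the cost model, the choice of (task, element) pairs as features, and all names are illustrative assumptions rather than the MRQARP formulation used in this work.

    import random

    def guided_local_search(tasks, elements, assign_cost, lam=0.3, iters=200):
        # assign_cost(task, element) is the plain cost of placing task on element;
        # penalties accumulate on (task, element) features seen in local optima.
        assign = {t: random.choice(elements) for t in tasks}
        penalty = {}

        def aug_cost(t, e):
            return assign_cost(t, e) + lam * penalty.get((t, e), 0)

        best = dict(assign)
        best_cost = sum(assign_cost(t, e) for t, e in assign.items())
        for _ in range(iters):
            moved = False
            for t in tasks:                            # best single-task move
                e_best = min(elements, key=lambda e: aug_cost(t, e))
                if e_best != assign[t] and aug_cost(t, e_best) < aug_cost(t, assign[t]):
                    assign[t] = e_best
                    moved = True
            cost = sum(assign_cost(t, e) for t, e in assign.items())
            if cost < best_cost:
                best, best_cost = dict(assign), cost
            if not moved:                              # local optimum: add penalties
                utils = {(t, e): assign_cost(t, e) / (1 + penalty.get((t, e), 0))
                         for t, e in assign.items()}
                top = max(utils.values())
                for f, u in utils.items():
                    if u == top:
                        penalty[f] = penalty.get(f, 0) + 1
        return best, best_cost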

    Qualitative Fault Detection and Hazard Analysis Based on Signed Directed Graphs for Large-Scale Complex Systems

    Nowadays, in modern industries, the scale and complexity of process systems are continuously increasing. These systems are subject to low productivity, system faults, or even hazards because of various conditions such as mis-operation, equipment quality change, externa

    HMM word graph based keyword spotting in handwritten document images

    [EN] Line-level keyword spotting (KWS) is presented on the basis of frame-level word posterior probabilities. These posteriors are obtained using word graphs derived from the recognition process of a full-fledged handwritten text recognizer based on hidden Markov models and N-gram language models. This approach has several advantages. First, since it uses a holistic, segmentation-free technology, it does not require any kind of word or character segmentation. Second, the use of language models allows the context of each spotted word to be taken into account, thereby considerably increasing KWS accuracy. And third, the proposed KWS scores are based on true posterior probabilities, taking into account all (or most) possible word segmentations of the input image. These scores are properly bounded and normalized. This mathematically clean formulation lends itself to smooth, threshold-based keyword queries which, in turn, permit comfortable trade-offs between search precision and recall. Experiments are carried out on several historic collections of handwritten text images, as well as a well-known data set of modern English handwritten text. According to the empirical results, the proposed approach achieves KWS results comparable to those obtained with the recently introduced "BLSTM neural networks KWS" approach and clearly outperforms the popular, state-of-the-art "Filler HMM" KWS method. Overall, the results clearly support all the above-claimed advantages of the proposed approach. This work has been partially supported by the Generalitat Valenciana under the Prometeo/2009/014 project grant ALMA-MATER, and through the EU projects: HIMANIS (JPICH programme, Spanish grant Ref. PCIN-2015-068) and READ (Horizon 2020 programme, grant Ref. 674943). Toselli, A. H.; Vidal, E.; Romero, V.; Frinken, V. (2016). HMM word graph based keyword spotting in handwritten document images. Information Sciences. 370:497-518. https://doi.org/10.1016/j.ins.2016.07.063
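    To make the scoring idea concrete, the snippet below is a minimal sketch of line-level keyword scoring from word-graph edges, following the formulation the abstract describes: the frame-level posterior of a word accumulates the posteriors of all edges labeled with that word that cover a frame, and the line score is the maximum over frames. The Edge structure and function names are illustrative, and computing the edge posteriors themselves (forward-backward over the word graph) is assumed to be done by the recognizer.

    from collections import namedtuple

    # One word-graph edge: word label, first/last frame covered, and the edge
    # posterior (assumed already computed by forward-backward over the word graph).
    Edge = namedtuple("Edge", ["word", "start", "end", "posterior"])

    def keyword_score(edges, keyword, num_frames):
        # Line-level KWS score: max over frames of the frame-level word posterior,
        # where the frame-level posterior sums the posteriors of covering edges.
        frame_post = [0.0] * num_frames
        for e in edges:
            if e.word == keyword:
                for t in range(e.start, e.end + 1):
                    frame_post[t] += e.posterior
        return max(frame_post) if frame_post else 0.0

    # Usage: spot "mundo" in a line whose word graph has competing segmentations.
    edges = [Edge("hola", 0, 9, 0.7), Edge("mundo", 10, 24, 0.6),
             Edge("mando", 10, 24, 0.3), Edge("mundo", 12, 24, 0.2)]
    print(keyword_score(edges, "mundo", num_frames=25))   # 0.8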

    Information metrics for localization and mapping

    Decades of research have made it possible for several autonomous systems to navigate successfully and efficiently within a variety of environments under certain conditions. One core technology that has enabled this is simultaneous localization and mapping (SLAM), the process of building a representation of the environment while localizing the robot in it. State-of-the-art solutions to the SLAM problem still rely, however, on heuristic decisions and options set by the user. In this thesis we search for principled solutions to various aspects of the localization and mapping problem with the help of information metrics. One such aspect is scalability. In SLAM, the problem size grows indefinitely as the experiment progresses, increasing computational resource demands. To keep the problem tractable, we develop methods to build an approximation to the original network of constraints of the SLAM problem by reducing its size while maintaining its sparsity. In this thesis we propose three methods to build the topology of such an approximated network, and two methods to perform the approximation itself. In addition, SLAM is a passive application: it does not drive the robot. The problem of driving the robot with the aim of both accurately localizing it and mapping the environment is called active SLAM. In this problem, two normally opposing forces drive the robot: one toward new places to discover unknown regions, and another to revisit previous configurations to improve localization. As opposed to heuristics, in this thesis we pose the problem as the joint minimization of both map and trajectory estimation uncertainties, and present four different active SLAM approaches based on an entropy-reduction formulation. All methods presented in this thesis have been rigorously validated on both synthetic and real datasets.
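    As a rough illustration of the entropy-reduction idea behind active SLAM (a minimal sketch, not the thesis's specific formulations), the snippet below scores candidate actions by the entropy of the predicted Gaussian joint (map and trajectory) estimate and picks the action that reduces it most. The candidate actions and the predict_covariance interface are assumptions for illustration.

    import numpy as np

    def gaussian_entropy(cov):
        # Differential entropy (in nats) of a Gaussian with covariance cov.
        n = cov.shape[0]
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

    def choose_action(candidate_actions, predict_covariance):
        # Pick the action whose predicted joint (map + trajectory) covariance
        # has the lowest entropy; predict_covariance simulates the SLAM update
        # that executing the action would produce (assumed available).
        return min(candidate_actions,
                   key=lambda a: gaussian_entropy(predict_covariance(a)))

    # Toy usage with two hypothetical actions and hand-picked predicted covariances.
    actions = ["revisit_loop_closure", "explore_frontier"]
    covs = {"revisit_loop_closure": 0.1 * np.eye(6), "explore_frontier": 0.5 * np.eye(6)}
    print(choose_action(actions, lambda a: covs[a]))   # revisit_loop_closure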