
    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Automated Synthesis of Quantum Subcircuits

    The quantum computer has become a contemporary reality, with the first two-qubit machine of mere decades ago transforming into cloud-accessible devices with tens, hundreds, or--in a few cases--even thousands of qubits. While such hardware is noisy and still relatively small, the increasing number of operable qubits raises another challenge: how to develop the now-sizeable quantum circuits executable on these machines. Preparing circuits manually for specifications of any meaningful size is at best tedious and at worst impossible, creating a need for automation. This article describes an automated quantum-software toolkit for synthesis, compilation, and optimization, which transforms classically-specified, irreversible functions into both technology-independent and technology-dependent quantum circuits. We also describe and analyze the toolkit's application to three situations--quantum read-only memories, quantum random number generators, and quantum oracles--and illustrate the toolkit's start-to-finish features, from the input of classical functions to the output of quantum circuits ready to run on commercial hardware. Furthermore, we illustrate how the toolkit enables research beyond circuit synthesis, including comparison of synthesis and optimization methods and deeper understanding of even well-studied quantum algorithms. As quantum hardware continues to develop, such quantum circuit toolkits will play a critical role in realizing its potential. Comment: 49 pages, 25 figures, 20 tables.
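The abstract above centers on transforming classically-specified, irreversible functions into quantum circuits. As a minimal illustration of why such a transformation is needed at all (and not of the toolkit's own algorithm, which is not given here), the sketch below builds the standard textbook reversible embedding |x⟩|y⟩ → |x⟩|y ⊕ f(x)⟩ of an irreversible Boolean function as a permutation matrix; all function and variable names are ad hoc.

```python
import numpy as np

def oracle_unitary(f, n, m):
    """Permutation matrix for the standard reversible embedding
    |x>|y> -> |x>|y XOR f(x)> of a classical function f: n bits -> m bits."""
    dim = 2 ** (n + m)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in range(2 ** m):
            src = (x << m) | y            # column index of basis state |x>|y>
            dst = (x << m) | (y ^ f(x))   # row index of |x>|y XOR f(x)>
            U[dst, src] = 1.0
    return U

# Example: 2-bit AND, a classically irreversible function, as a 3-qubit oracle.
U = oracle_unitary(lambda x: (x >> 1) & x & 1, n=2, m=1)
assert np.allclose(U @ U.conj().T, np.eye(8))  # the embedding is unitary
```

Because the output register XORs in f(x), applying the oracle twice restores the input, which is why even non-invertible classical functions become reversible in this form.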

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demos; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.

    GNSS and InSAR based water vapor tomography: A Compressive Sensing solution

    An accurate knowledge of the three-dimensional (3D) distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. In addition, a precise determination of water vapor is required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). Several approaches for 3D tomographic water vapor reconstruction from GNSS-based Slant Wet Delay (SWD) estimates exist. Yet, due to the usually sparsely distributed GNSS sites and the limited number of visible GNSS satellites, the tomographic system is typically ill-posed and needs to be regularized, e.g. by means of geometric constraints that risk over-smoothing the tomographic refractivity estimates. This work therefore develops and analyzes a Compressive Sensing (CS) approach to neutrospheric water vapor tomography that exploits the sparsity of the refractivity estimates in an appropriate transform domain as a prior for regularization. The CS solution is developed because it does not require the geometric smoothing constraints applied in common Least Squares (LSQ) approaches, and because the sparse CS solution, containing only a few non-zero coefficients, may be determined, for a constant number of observations, from fewer parameters than the corresponding LSQ solution. In addition to the developed CS solution, this work introduces SWDs obtained from both GNSS and InSAR into the tomographic system in order to obtain a better spatial distribution of the observations. The novelties of this approach are 1) the use of both absolute GNSS and absolute InSAR SWDs for tomography and 2) the solution of the tomographic system by means of Compressive Sensing. In addition, 3) the quality of the CS reconstruction is compared with the quality of common LSQ approaches to water vapor tomography. 
The tomographic reconstruction is performed, on the one hand, on a real data set using GNSS and InSAR SWDs and, on the other hand, on three different synthetic SWD data sets generated using wet refractivity information from the Weather Research and Forecasting (WRF) model. The validation of the results therefore relies both on radiosonde profiles and on a comparison of the refractivity estimates with the input WRF refractivities. The real data set and the first synthetic data set compare the reconstruction quality of the developed CS approach with that of LSQ approaches to water vapor tomography and investigate to what extent the inclusion of real or synthetic InSAR SWDs increases the accuracy and precision of the refractivity estimates. The second synthetic data set is designed to analyze the general effect of the observing geometry on the quality of the refractivity estimates. The third synthetic data set places a special focus on the sensitivity of the tomographic reconstruction to different numbers of GNSS sites, varying voxel discretizations, and different orbit constellations. In the real data set, for both the GNSS-only solution and the combined GNSS and InSAR solution, the refractivities estimated by means of the LSQ and CS methodologies show a consistent behavior, although the two solution strategies differ. The synthetic data sets show that CS can yield very precise and accurate results if an appropriate tomographic setting is chosen. The reconstruction quality mainly depends on i) the accuracy of the functional model relating the SWD estimates to the refractivity parameters and to the distances traveled by the rays within the voxels, ii) the number of available GNSS sites, iii) the voxel discretization, and iv) the variety of ray directions introduced into the tomographic system. The study areas associated with the real and synthetic data sets are about 120 × 120 km² and about 100 × 100 km², respectively. In the real data set, a total of eight GNSS sites is available and SWD estimates from GPS and InSAR are introduced. In the synthetic data sets, different numbers of sites are defined and a variety of ray directions is tested.
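The abstract describes a functional model linking slant wet delays to voxel refractivities via ray-path distances, solved either by least squares or by a sparsity-promoting CS method. The toy sketch below is not the thesis's actual configuration: the matrix, problem sizes, regularization weight, and the ISTA solver are illustrative stand-ins. It contrasts a minimum-norm LSQ solution with an l1-regularized one on a simulated sparse refractivity field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tomographic system: 30 rays observing 50 voxels. Row i of A plays the
# role of the distances ray i travels through each voxel; a Gaussian matrix
# is used purely as a well-conditioned stand-in (real distances are
# nonnegative and highly structured).
n_rays, n_vox = 30, 50
A = rng.standard_normal((n_rays, n_vox))
true_refr = np.zeros(n_vox)
true_refr[[3, 17, 41]] = [5.0, 2.0, 7.0]   # sparse "wet refractivity" field
swd = A @ true_refr                        # simulated slant wet delays

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, iters=20000):
    """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - step * A.T @ (A @ x - b), step * lam)
    return x

cs_est = ista(A, swd)                              # sparse, CS-style solution
lsq_est = np.linalg.lstsq(A, swd, rcond=None)[0]   # minimum-norm least squares
```

With only 30 observations for 50 unknowns, the plain least-squares solution fits the delays exactly but spreads energy over all voxels, whereas the l1 penalty concentrates the estimate on the few truly non-zero refractivities — the behavior the abstract attributes to the CS solution in suitable settings.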

    Exact methods for Bayesian network structure learning and cost function networks

    Discrete Graphical Models (GMs) represent joint functions over large sets of discrete variables as a combination of smaller functions. There exist several instantiations of GMs, including directed probabilistic GMs like Bayesian Networks (BNs) and undirected deterministic models like Cost Function Networks (CFNs). Queries like Most Probable Explanation (MPE) on BNs and its CFN equivalent, cost minimisation, are NP-hard, but there exist robust solving techniques with a wide range of applications in fields such as bioinformatics, image processing, and risk analysis. In this thesis, we contribute to the state of the art in learning the structure of BNs, namely the Bayesian Network Structure Learning problem (BNSL), and in answering MPE and cost-minimisation queries on BNs and CFNs. For BNSL, we discover a new point in the design space of search algorithms which achieves a different trade-off between strength and speed of inference. 
Existing algorithms opt either for maximal strength of inference, like those based on Integer Programming (IP) and branch-and-cut, or for maximal speed of inference, like those based on Constraint Programming (CP). We specify properties of a specific class of inequalities, called cluster inequalities, which lead to an algorithm that performs much stronger inference than that based on CP while running much faster than that based on IP. We combine this with novel ideas for stronger propagation and more compact domain representations to achieve state-of-the-art performance in the open-source solver ELSA (Exact Learning of bayesian network Structure using Acyclicity reasoning). For CFNs, we identify a weakness in the use of linear programming relaxations by a specific class of solvers, which includes the award-winning open-source ToulBar2 solver. We prove that this weakness can lead to suboptimal branching decisions and show how to detect maximal sets of such decisions, which can then be avoided by the solver. This allows ToulBar2 to tackle problems previously solvable only by hybrid algorithms.
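The cost-minimisation query on a cost function network can be made concrete with a tiny example. The sketch below is an illustrative toy, not ELSA's or ToulBar2's algorithm: it defines a three-variable CFN with hypothetical cost tables and finds the minimum-cost assignment, the CFN analogue of MPE, by exhaustive enumeration.

```python
from itertools import product

# A tiny cost function network: 3 binary variables, cost functions over
# small scopes. Each entry pairs a scope (tuple of variable indices) with
# a local cost function over those variables.
cost_functions = [
    ((0,),    lambda a: 1 if a == 0 else 0),        # unary cost on x0
    ((0, 1),  lambda a, b: 0 if a == b else 2),     # binary cost on x0, x1
    ((1, 2),  lambda a, b: 3 if (a, b) == (1, 1) else 1),
]

def total_cost(assignment):
    """Sum of all local costs under a full assignment (tuple of 0/1 values)."""
    return sum(f(*(assignment[v] for v in scope))
               for scope, f in cost_functions)

# Exhaustive search over the 2^3 assignments. Real solvers such as ToulBar2
# instead use branch-and-bound with lower bounds derived from LP relaxations,
# which is where the branching weakness described above arises.
best = min(product((0, 1), repeat=3), key=total_cost)
```

Here the minimum-cost assignment is (x0, x1, x2) = (1, 1, 0) with total cost 1; enumeration is exponential in the number of variables, which is why the exact methods discussed in the thesis matter.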