Emerging Approaches for THz Array Imaging: A Tutorial Review and Software Tool
Accelerated by the increasing attention drawn by 5G, 6G, and Internet of
Things applications, communication and sensing technologies have rapidly
evolved from millimeter-wave (mmWave) to terahertz (THz) in recent years.
Enabled by significant advancements in electromagnetic (EM) hardware, mmWave
and THz frequency regimes spanning 30 GHz to 300 GHz and 300 GHz to 3000 GHz,
respectively, can be employed for a host of applications. The main feature of
THz systems is high-bandwidth transmission, enabling ultra-high-resolution
imaging and high-throughput communications; however, challenges in both the
hardware and algorithmic arenas remain for the ubiquitous adoption of THz
technology. Spectra comprising mmWave and THz frequencies are well-suited for
synthetic aperture radar (SAR) imaging at sub-millimeter resolutions for a wide
spectrum of tasks like material characterization and nondestructive testing
(NDT). This article provides a tutorial review of systems and algorithms for
THz SAR in the near-field with an emphasis on emerging algorithms that combine
signal processing and machine learning techniques. As part of this study, an
overview of classical and data-driven THz SAR algorithms is provided, focusing
on object detection for security applications and SAR image super-resolution.
We also discuss relevant issues, challenges, and future research directions for
emerging algorithms and THz SAR, including standardization of system and
algorithm benchmarking, adoption of state-of-the-art deep learning techniques,
signal processing-optimized machine learning, and hybrid data-driven signal
processing algorithms.
Comment: Submitted to Proceedings of the IEEE
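For a concrete sense of why the wide bandwidths available at THz enable sub-millimeter imaging, the sketch below evaluates the standard pulse-compression range-resolution relation Δr = c/(2B). This is textbook radar theory rather than anything specific to the article, and the bandwidth values are illustrative.

```python
# Range resolution of a wideband radar: delta_r = c / (2 * B).
# Standard radar relation; the bandwidths below are illustrative.
C = 3e8  # speed of light in m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Theoretical range resolution in meters for sweep bandwidth B."""
    return C / (2 * bandwidth_hz)

# A 4 GHz mmWave chirp resolves ~3.7 cm in range; sub-millimeter
# resolution requires sweeping more than 150 GHz of spectrum.
print(range_resolution(4e9))    # 0.0375 m
print(range_resolution(150e9))  # 0.001 m
```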
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties adds yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of such systems and the nuances within them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to let users control the memory representation of algebraic data types while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, giving users a seamless experience when combining Cogent with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
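As a hedged illustration of that progressive approach, the sketch below uses Python's hypothesis library as a stand-in for Cogent's property-based testing framework; `spec_sum` and `impl_sum` are hypothetical names playing the roles of the abstract functional specification and the low-level implementation under test.

```python
# Minimal property-based refinement test, assuming the hypothesis
# library (https://hypothesis.readthedocs.io). Illustrative only:
# Cogent's framework tests compiled systems code against its purely
# functional specification in this same spirit.
from hypothesis import given, strategies as st

def spec_sum(xs):
    """Abstract, obviously-correct specification."""
    return sum(xs)

def impl_sum(xs):
    """Stand-in for a low-level implementation under test."""
    acc = 0
    for x in xs:
        acc += x
    return acc

@given(st.lists(st.integers()))
def test_refinement(xs):
    # Refinement property: implementation agrees with the spec.
    assert impl_sum(xs) == spec_sum(xs)

if __name__ == "__main__":
    test_refinement()  # hypothesis generates and runs many random cases
```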
Tools for efficient Deep Learning
In the era of Deep Learning (DL), there is a fast-growing demand for building and deploying Deep Neural Networks (DNNs) on various platforms. This thesis proposes five tools to address the challenges of designing DNNs that are efficient in time, resources, and power consumption.
We first present Aegis and SPGC, which address the challenge of improving the memory efficiency of DL training and inference. Aegis makes mixed precision training (MPT) more stable through layer-wise gradient scaling; empirically, Aegis improves MPT accuracy by up to 4%. SPGC focuses on structured pruning, replacing standard convolution with group convolution (GConv) to avoid irregular sparsity. SPGC formulates GConv pruning as a channel permutation problem and proposes a novel polynomial-time heuristic; common DNNs pruned by SPGC achieve up to 1% higher accuracy than prior work.
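The numpy sketch below illustrates the general idea behind layer-wise gradient scaling: each layer's gradient is rescaled so it survives a float16 round trip without underflowing or overflowing. The scaling rule shown is an illustrative assumption, not the Aegis algorithm itself.

```python
# Illustrative layer-wise gradient scaling for mixed precision training.
# The scale rule here is an assumption for demonstration, not Aegis.
import numpy as np

FP16_MAX = 65504.0  # largest finite float16 value

def scale_to_fp16(grad_fp32: np.ndarray) -> tuple[np.ndarray, float]:
    """Pick a per-layer scale so the gradient fits float16, then cast."""
    peak = float(np.abs(grad_fp32).max())
    scale = 1.0 if peak == 0.0 else min(FP16_MAX / 2.0 / peak, 2.0 ** 24)
    return (grad_fp32 * scale).astype(np.float16), scale

def unscale(grad_fp16: np.ndarray, scale: float) -> np.ndarray:
    """Recover the float32 gradient before the optimiser step."""
    return grad_fp16.astype(np.float32) / scale

g = np.full((4, 4), 1e-8, dtype=np.float32)  # flushes to zero in float16
g16, s = scale_to_fp16(g)
print(unscale(g16, s))  # values preserved instead of lost to underflow
```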
This thesis also addresses the gap between DNN descriptions and executables, via Polygeist for software and POLSCA for hardware. Novel techniques such as statement splitting and memory partitioning are explored and used to extend polyhedral optimisation. Polygeist speeds up sequential and parallel software execution by 2.53x and 9.47x, respectively, on Polybench/C; POLSCA achieves a 1.5x speedup over hardware designs generated directly from high-level synthesis on Polybench/C.
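As a miniature of statement splitting, the sketch below separates a two-statement loop body into two independent loops. It is shown in Python for readability; Polygeist performs this kind of transformation on C code through the polyhedral model, where it can expose parallelism and enable further scheduling.

```python
# Statement splitting (loop fission), shown in Python for readability.
def fused(a, b, c, n):
    for i in range(n):
        a[i] = b[i] + 1   # statement S1
        c[i] = a[i] * 2   # statement S2 reads what S1 just wrote

def split(a, b, c, n):
    # Legal because S2 only depends on S1 from the same iteration:
    # after the first loop finishes, every a[i] is ready.
    for i in range(n):
        a[i] = b[i] + 1   # S1 alone: trivially parallel
    for i in range(n):
        c[i] = a[i] * 2   # S2 alone: trivially parallel
```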
Moreover, this thesis presents Deacon, a framework that generates FPGA-based DNN accelerators with streaming architectures and advanced pipelining, addressing the challenges posed by heterogeneous convolutions and residual connections. Deacon provides fine-grained pipelining, graph-level optimisation, and heuristic exploration via graph colouring. Compared with prior designs, Deacon improves resource/power efficiency by 1.2x/3.5x for MobileNets and 1.0x/2.8x for SqueezeNets.
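For a flavour of the colouring heuristic, the snippet below is a generic greedy graph colouring; it is not Deacon's actual exploration algorithm, only the textbook technique such exploration builds on.

```python
# Greedy graph colouring: give each node the smallest colour
# not already used by a neighbour. Generic textbook heuristic.
def greedy_colouring(adj: dict[str, set[str]]) -> dict[str, int]:
    colour: dict[str, int] = {}
    for node in adj:
        taken = {colour[n] for n in adj[node] if n in colour}
        c = 0
        while c in taken:
            c += 1
        colour[node] = c
    return colour

# A triangle needs three colours.
print(greedy_colouring({"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}))
```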
All these tools are open source, and some have already gained public engagement. We believe they make efficient deep learning applications easier to build and deploy.
Verification of Red-Black Trees in KeY - A Case Study in Deductive Java Verification
While thorough testing makes software failures less likely, formal verification by proof can guarantee that a program behaves correctly for all inputs. Formal verification is therefore especially worthwhile for basic building blocks used thousands of times over, such as the data structures and algorithms of a standard library.
In this case study, we specify and verify a Java implementation of red-black trees with KeY. KeY is a tool for the formal verification of Java programs; it uses dynamic frames to reason about memory regions, i.e., for framing. Red-black trees are a popular data structure for storing and retrieving elements efficiently and are implemented, for example, in the java.util.TreeMap class of the Java Class Library. A wide variety of case studies exists in both areas, yet the KeY case studies rarely consider tree structures, and existing red-black tree verifications handle framing differently than KeY does.
In this work we conclude that verifying tree structures with dynamic frames is possible, but entails considerable extra effort compared to other approaches. Beyond that, we explore general strengths and weaknesses of KeY and make several suggestions for improving its usability. Finally, with JML Scripts and Proof Caching, we evaluate new methods for proof automation and proof persistence.
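To make the verified invariants concrete, the sketch below checks the two colour rules of a red-black tree in Python. The case study itself states these properties as JML specifications over a Java implementation and proves them in KeY; this dynamic checker is purely illustrative.

```python
# Red-black invariants, checked dynamically (illustration only):
# 1. no red node has a red child;
# 2. every root-to-leaf path contains the same number of black nodes.
class Node:
    def __init__(self, key, red, left=None, right=None):
        self.key, self.red, self.left, self.right = key, red, left, right

def black_height(t) -> int:
    """Return the black height of t, or raise if an invariant fails."""
    if t is None:
        return 1  # NIL leaves count as black
    if t.red and ((t.left and t.left.red) or (t.right and t.right.red)):
        raise ValueError("red node with a red child")
    hl, hr = black_height(t.left), black_height(t.right)
    if hl != hr:
        raise ValueError("unequal black heights")
    return hl + (0 if t.red else 1)

# A valid three-node tree: black root with two red children.
print(black_height(Node(2, False, Node(1, True), Node(3, True))))  # 2
```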