15 research outputs found
Be My Guest: Normalizing and Compiling Programs using a Host Language
In programming language research, normalization is a process of fundamental importance to the theory of computing and reasoning about programs. In practice, on the other hand, compilation is a process that transforms programs in a language to machine code, and thus makes the programming language a usable one. In this thesis, we investigate means of normalizing and compiling programs in a language using another language as the "host". Leveraging a host to work with programs of a "guest" language enables reuse of the host's features that would otherwise be strenuous to develop. The specific tools of interest are Normalization by Evaluation and Embedded Domain-Specific Languages, both of which rely on a host language for their purposes. These tools are applied to solve problems in three different domains: to show that exponentials (or closures) can be eliminated from a categorical combinatory calculus, to propose a new proof technique based on normalization for showing noninterference, and to enable the programming of resource-constrained IoT devices from Haskell.
Modular Normalization with Types
With the increasing use of software in today's digital world, software is becoming more and more complex and the cost of developing and maintaining software has skyrocketed. It has become pressing to develop software using effective tools that reduce this cost. Programming language research aims to develop such tools using mathematically rigorous foundations. A recurring and central concept in programming language research is normalization: the process of transforming a complex expression in a language to a canonical form while preserving its meaning. Normalization has compelling benefits in theory and practice, but is extremely difficult to achieve. Several program transformations that are used to optimise programs, prove properties of languages and check program equivalence, for instance, are after all instances of normalization, but they are seldom viewed as such. Viewed through the lens of current methods, normalization lacks the ability to be broken into sub-problems and solved independently, i.e., lacks modularity. To make matters worse, such methods rely excessively on the syntax of the language, making the resulting normalization algorithms brittle and sensitive to changes in the syntax. When the syntax of the language evolves due to modification or extension, as it almost always does in practice, the normalization algorithm may need to be revisited entirely. To circumvent these problems, normalization is currently either abandoned entirely or concrete instances of normalization are achieved using ad hoc means specific to a particular language. Continuing this trend in programming language research poses the risk of building on a weak foundation where languages either lack fundamental properties that follow from normalization or several concrete instances end up repeated in an ad hoc manner that lacks reusability. This thesis advocates for the use of type-directed Normalization by Evaluation (NbE) to develop normalization algorithms.
NbE is a technique that provides an opportunity for a modular implementation of normalization algorithms by allowing us to disentangle the syntax of a language from its semantics. Types further this opportunity by allowing us to dissect a language into isolated fragments, such as functions and products, with an individual specification of syntax and semantics. To illustrate type-directed NbE in context, we develop NbE algorithms and show their applicability for typed programming language calculi in three different domains (modal types, static information-flow control and categorical combinators) and for a family of embedded domain-specific languages in Haskell.
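As a rough illustration of the technique these abstracts describe, here is a minimal Normalization-by-Evaluation sketch. The thesis works in Haskell and Agda; this is a hypothetical Python rendition for an untyped lambda calculus, with all names invented for illustration. Guest terms are evaluated into host values, reusing the host language's own closures as the semantics, and then reified back into syntax in β-normal form:

```python
# Minimal NbE sketch: evaluate guest syntax into host (Python) values,
# then read the values back into normal-form syntax.
from dataclasses import dataclass

@dataclass
class Var: name: str                      # variable
@dataclass
class Lam: param: str; body: object       # lambda abstraction
@dataclass
class App: fn: object; arg: object        # application

def evaluate(term, env):
    """Interpret guest syntax into host values; guest closures
    are represented by host closures."""
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        return lambda v: evaluate(term.body, {**env, term.param: v})
    if isinstance(term, App):
        f = evaluate(term.fn, env)
        a = evaluate(term.arg, env)
        # Apply if f is a host closure; otherwise keep a neutral application.
        return f(a) if callable(f) else App(f, a)

def reify(value, n=0):
    """Read a host value back into guest syntax (beta-normal form),
    inventing fresh variable names x0, x1, ... under binders."""
    if callable(value):
        x = f"x{n}"
        return Lam(x, reify(value(Var(x)), n + 1))
    if isinstance(value, App):
        return App(reify(value.fn, n), reify(value.arg, n))
    return value  # a neutral variable

def normalize(term):
    return reify(evaluate(term, {}))
```

Note how the host does the heavy lifting: β-reduction is ordinary Python function application, so no substitution function over guest syntax is ever written.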
Method To Estimate Network Availability
A distributed network makes network services available to end users at various nodes or connection points throughout the distributed network's geographic area. A network administrator monitors the performance, capability, and availability of the distributed network to provide the network services. However, the network administrator may be limited to network traffic or other network-side parameters that may not provide an accurate or a conclusive representation of the state of the distributed network. For example, diminished or decreased network traffic could indicate a malfunction in the distributed network or be a natural consequence of a decreased number of end users. Cost, infrastructure requirements, and other limitations prevent installation and operation of a secondary network, which could be used to conclusively determine the conditions of the area within the distributed network. Instead, machine-learning algorithms may monitor and model some features of the distributed network, which may supplement service availability composite metrics, and allow the network administrator to better evaluate the condition of the distributed network without the need for the secondary network.
Practical Normalization by Evaluation for EDSLs
Embedded domain-specific languages (eDSLs) are typically implemented in a rich host language, such as Haskell, using a combination of deep and shallow embedding techniques. While such a combination enables programmers to exploit the execution mechanism of Haskell to build and specialize eDSL programs, it blurs the distinction between the host language and the eDSL. As a consequence, extension with features such as sums and effects requires a significant amount of ingenuity from the eDSL designer. In this paper, we demonstrate that Normalization by Evaluation (NbE) provides a principled framework for building, extending, and customizing eDSLs. We present a comprehensive treatment of NbE for deeply embedded eDSLs in Haskell that involves a rich set of features such as sums, arrays, exceptions and state, while addressing practical concerns about normalization such as code expansion and the addition of domain-specific features.
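To make the deep/shallow combination concrete, here is a toy sketch of an eDSL with a deep embedding (an inspectable AST) plus shallow, host-level smart constructors that specialize terms as they are built. The paper's setting is Haskell; this Python rendition and all of its names are purely illustrative:

```python
# Deep embedding: an AST the eDSL backend can inspect and compile.
from dataclasses import dataclass

@dataclass
class Lit:
    val: int

@dataclass
class VarE:
    name: str

@dataclass
class Add:
    lhs: object
    rhs: object

def add(a, b):
    """Shallow smart constructor: the host evaluates what it can at
    construction time, so eDSL programs are specialized before they
    ever reach the backend."""
    if isinstance(a, Lit) and isinstance(b, Lit):
        return Lit(a.val + b.val)   # constant folding in the host
    if isinstance(b, Lit) and b.val == 0:
        return a                    # algebraic simplification: e + 0 = e
    return Add(a, b)                # residual deep syntax
```

Here normalization happens opportunistically during construction; the NbE approach the abstract describes instead evaluates the whole deep term into a semantic domain and reads back a normal form, which scales to features like sums and effects where ad hoc smart constructors become unwieldy.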
Normalization for Fitch-Style Modal Calculi
Fitch-style modal lambda calculi enable programming with necessity modalities in a typed lambda calculus by extending the typing context with a delimiting operator that is denoted by a lock. The addition of locks simplifies the formulation of typing rules for calculi that incorporate different modal axioms, but each variant demands different, tedious and seemingly ad hoc syntactic lemmas to prove normalization. In this work, we take a semantic approach to normalization, called normalization by evaluation (NbE), by leveraging the possible-world semantics of Fitch-style calculi to yield a more modular approach to normalization. We show that NbE models can be constructed for calculi that incorporate the K, T and 4 axioms of modal logic, as suitable instantiations of the possible-world semantics. In addition to existing results that handle β-equivalence, our normalization result also considers η-equivalence for these calculi. Our key results have been mechanized in the proof assistant Agda. Finally, we showcase several consequences of normalization for proving meta-theoretic properties of Fitch-style calculi as well as programming-language applications based on different interpretations of the necessity modality.
Differentially Private Heatmaps
We consider the task of producing heatmaps from users' aggregated data while protecting their privacy. We give a differentially private (DP) algorithm for this task and demonstrate its advantages over previous algorithms on real-world datasets. Our core algorithmic primitive is a DP procedure that takes in a set of distributions and produces an output that is close in Earth Mover's Distance to the average of the inputs. We prove theoretical bounds on the error of our algorithm under a certain sparsity assumption and show that these are near-optimal.
Comment: To appear in AAAI 202
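The core primitive lends itself to a small sketch. The following is a hypothetical, much-simplified Python rendition of a DP average of discrete distributions via the Laplace mechanism; the paper's actual algorithm, which controls Earth Mover's Distance under a sparsity assumption, is more sophisticated, and all names and parameters here are assumptions for illustration:

```python
import numpy as np

def dp_average_distribution(histograms, epsilon, rng):
    """Return an epsilon-DP estimate of the average of the input
    probability histograms (one histogram per user)."""
    hists = np.asarray(histograms, dtype=float)   # shape: (n_users, n_bins)
    n = hists.shape[0]
    avg = hists.mean(axis=0)
    # Replacing one user's histogram changes the average by at most 2/n
    # in L1 norm, so Laplace noise of scale 2/(n * epsilon) suffices.
    noisy = avg + rng.laplace(scale=2.0 / (n * epsilon), size=avg.shape)
    # Project back onto the probability simplex (crudely: clip, renormalize).
    noisy = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    return noisy / total if total > 0 else np.full_like(noisy, 1.0 / len(noisy))
```

Note the familiar DP trade-off visible even in this sketch: the noise scale shrinks as the number of contributing users grows.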
Antenna subset modulation for secure millimeter-wave wireless communication
The small carrier wavelength at millimeter-wave (mm-Wave) frequencies allows the possibility of implementing a large number of antennas on a single chip. This work uses the potential of large antenna arrays at these frequencies to develop a low-complexity directional modulation technique: Antenna Subset Modulation (ASM) for point-to-point secure wireless communication. The main idea in ASM is to communicate information by modulating the far-field radiation pattern of the array at the symbol rate. By driving only a subset of antennas and changing the subset used for each symbol transmission, the far-field pattern is modulated. Two techniques for implementing antenna subset selection are proposed. The first technique is simple: the antenna subset to be used is selected at random for every symbol transmission. While randomly switching antenna subsets does not affect the symbol modulation for a desired receiver along the main lobe direction, it effectively randomizes the amplitude and phase of the received symbol for an eavesdropper along a sidelobe. Using a simplified statistical model for random antenna subset selection, an expression for the average symbol error rate (SER) is derived as a function of observation angle for linear arrays. To overcome the problem of large peak sidelobe level in random antenna subset switching, an optimized antenna subset selection procedure based on simulated annealing is then discussed. Finally, numerical results comparing the average SER performance of the proposed techniques against conventional array transmission are presented. While both methods produce a narrower information beam-width in the desired direction, the optimized antenna subset selection technique is shown to offer better security and array performance.
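The random-subset idea can be seen in a few lines of simulation. This is an illustrative sketch, not the dissertation's model: array size, subset size, and angles are assumed values, and the far-field response is the idealized array factor of a uniform linear array with half-wavelength spacing:

```python
import numpy as np

def array_response(active, theta, d=0.5):
    """Far-field response of the active element indices at angle theta
    for a uniform linear array with element spacing d (in wavelengths)."""
    n = np.asarray(active)
    return np.exp(1j * 2 * np.pi * d * n * np.cos(theta)).sum()

rng = np.random.default_rng(0)
N, K, symbols = 32, 24, 200                # array size, subset size, symbols
main, side = np.pi / 2, np.pi / 3          # broadside (desired) vs a sidelobe angle
h_main = [array_response(rng.choice(N, K, replace=False), main) for _ in range(symbols)]
h_side = [array_response(rng.choice(N, K, replace=False), side) for _ in range(symbols)]
# Along the main lobe the K active elements add coherently, so the response
# is (approximately) the constant K for every subset; along the sidelobe the
# response varies with the chosen subset, scrambling the eavesdropper's symbols.
```

This captures the qualitative claim of the abstract: the legitimate receiver sees a stable channel while the sidelobe observer sees symbol-rate randomness, without any randomness being shared in advance.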
Molr - A delegation framework for accelerator commissioning
Accelerator commissioning is the process of preparing an accelerator for beam operations. A typical commissioning period at CERN involves running thousands of tests on many complex systems and machinery to ensure smooth beam operations and correct functioning of the machine protection systems. AccTesting is a software framework which helps orchestrate the commissioning of CERN's accelerators and its equipment systems. This involves running and managing tests provided by various commissioning tools and analyzing their outcomes. Currently, AccTesting only supports a specific set of commissioning tools. In this project, we aim to widen the spectrum of commissioning tools supported by AccTesting by developing a generic and programmable integration framework called Molr, which would enable the integration of more commissioning tools with AccTesting. In this report, we summarize the work done during the summer student project and lay out a brief overview of the current status and next steps for Molr.