Nate Foster & Alexandra Silva — Guarded Kleene Algebra with Tests
We introduce Guarded Kleene Algebra with Tests (GKAT), a variation on Kleene Algebra with Tests (KAT) that arises by restricting the union (+) and iteration (*) operations from KAT to predicate-guarded versions. We develop the (co)algebraic theory of GKAT and show how it can be used to reason efficiently about imperative programs. In contrast to KAT, whose equational theory is PSPACE-complete, we show that the equational theory of GKAT is decidable in linear time. We also prove completeness for an analogue of Salomaa's axiomatization of KA.
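As a rough illustration of the restriction (a sketch, not from the paper; the datatype and constructor names are made up), a GKAT term only offers KAT's union and iteration in their guarded forms:

```haskell
-- Hypothetical AST sketch: e + f and e* are available only as
-- "if b then e else f" and "while b do e".
data BExp t                     -- Boolean algebra of tests over primitive tests t
  = BTrue | BFalse
  | BTest t
  | BNot (BExp t)
  | BAnd (BExp t) (BExp t)
  | BOr  (BExp t) (BExp t)

data GExp a t                   -- programs over primitive actions a and tests t
  = Act a
  | Assert (BExp t)                       -- a test used as a program
  | Seq   (GExp a t) (GExp a t)           -- e ; f
  | If    (BExp t) (GExp a t) (GExp a t)  -- guarded union:     if b then e else f
  | While (BExp t) (GExp a t)             -- guarded iteration: while b do e
```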
Download slides
Nada Amin — Collapsing Heterogeneous Towers of Evaluators
Download slides
Lennart Augustsson — f(x) = m*x + c
Affine transformations are useful for array indexing.
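A minimal sketch of the idea (plain Haskell, names invented for illustration): slicing, striding and reversing a flat array all reduce to index maps of the form f(x) = m*x + c, and such maps compose into maps of the same form.

```haskell
-- An affine index transformation f(x) = m*x + c.
data Affine = Affine { slope :: Int, offset :: Int }
  deriving Show

apply :: Affine -> Int -> Int
apply (Affine m c) x = m * x + c

-- Composition stays affine: f(g(x)) = m1*(m2*x + c2) + c1 = (m1*m2)*x + (m1*c2 + c1).
compose :: Affine -> Affine -> Affine
compose (Affine m1 c1) (Affine m2 c2) = Affine (m1 * m2) (m1 * c2 + c1)

-- Example: every other element of a reversed length-10 view.
-- reverse: x -> -x + 9, stride 2: x -> 2*x, composed: x -> -2*x + 9
example :: [Int]
example = map (apply (compose (Affine (-1) 9) (Affine 2 0))) [0 .. 4]  -- [9,7,5,3,1]
```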
Download slides
Pierre Dagand — Computational Survivalism: compiler(s) for the end of Moore's law
Cryptographic primitives are subject to diverging imperatives. The necessity of functional correctness and auditability pushes for using a high-level programming language. Performance requirements and the threat of timing attacks push for working at a level no more abstract than assembly, so as to exploit (or avoid!) the intricate micro-architectural features of a given machine. We believe that a suitable programming language can reconcile both views and actually improve on the state of the art in both directions. In this work, we introduce Usuba, an opinionated dataflow programming language in which block ciphers become so simple as to be “obviously correct” and whose types document and enforce valid parallelization strategies at bit-level granularity. We then present Usubac, its optimizing compiler, which produces high-throughput, constant-time implementations performing on par with hand-tuned reference implementations. The cornerstone of our approach is a systematization and generalization of bitslicing, an implementation trick frequently used by cryptographers. We thus show that Usuba can produce code that executes between 5% slower and 22% faster than hand-tuned reference implementations while gracefully scaling across a wide range of architectures and automatically exploiting Single Instruction Multiple Data (SIMD) instructions whenever the cipher’s structure allows it.
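For readers unfamiliar with bitslicing, here is a hedged illustration in plain Haskell (not Usuba code): each machine word holds one bit position of 64 independent blocks, so a single bitwise instruction processes 64 instances at once, and control flow never depends on secret data.

```haskell
import Data.Bits (xor, (.&.), complement)
import Data.Word (Word64)

-- Bit i of a Slice belongs to parallel instance i (64 instances per word).
type Slice = Word64

-- A branch-free 1-bit multiplexer over 64 instances: where sel is 1 pick a, else b.
mux :: Slice -> Slice -> Slice -> Slice
mux sel a b = (sel .&. a) `xor` (complement sel .&. b)

-- A bitsliced half adder on 64 parallel pairs of bits: (sum, carry).
halfAdder :: Slice -> Slice -> (Slice, Slice)
halfAdder a b = (a `xor` b, a .&. b)
```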
Download slides
Derek Dreyer — Stacked Borrows: An Aliasing Model for Rust
The type system of Rust provides strong invariants about aliasing and mutability -- namely that mutable references are unique, and shared references are immutable -- and the Rust developers would like to be able to perform compiler optimizations based on these invariants. However, unsafe Rust code (which plays an important role in the implementation of many Rust libraries) could very easily break these invariants. The Rust developers would like to differentiate code that preserves these invariants from code that violates them, and to treat the latter as inducing undefined behavior. But this involves formalizing a refined model of pointer accesses that enforces these invariants dynamically. In this work, we propose "stacked borrows", a new pointer access model for Rust that codifies its invariants concerning mutability and aliasing as part of the dynamic semantics of the language. We are in the process of proving that this model validates a variety of desirable optimizations that exploit the invariants on Rust's reference types. The stacked borrows model has been implemented (by my student Ralf Jung) in Miri, an experimental interpreter for Rust's Mid-level Intermediate Representation (MIR), which is publicly available as part of the Rust toolchain and has already uncovered at least 9 bugs in the Rust standard library.
Download slides
Link to website
Richard Eisenberg — Generalized Newtype Compiling: Don't let your types slow you down
Types are great for structuring data and assisting in reasoning about code. However, some types force data representations on us that have poor performance characteristics. A classic example is Peano naturals, which are wonderful at compile time but dreadful at runtime. This talk presents early work-in-progress about a design for describing a type's representation with its definition; the representation need not resemble the type definition.
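One way to approximate the idea in today's GHC (a sketch under that framing, not the proposed design): expose Peano-style constructors as pattern synonyms while the runtime representation is an ordinary Integer.

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

-- Runtime representation: a plain Integer wrapped in a newtype.
newtype Nat = Nat Integer
  deriving Show

-- Compile-time interface: the familiar Peano constructors.
pattern Zero :: Nat
pattern Zero = Nat 0

pattern Succ :: Nat -> Nat
pattern Succ n <- (predNat -> Just n)
  where Succ (Nat n) = Nat (n + 1)

predNat :: Nat -> Maybe Nat
predNat (Nat n)
  | n > 0     = Just (Nat (n - 1))
  | otherwise = Nothing

-- Client code pattern-matches as if Nat were the inductive definition.
double :: Nat -> Nat
double Zero     = Zero
double (Succ n) = Succ (Succ (double n))
```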
Download slides
Ron Garcia — Space-Efficiency for Abstract Gradual Typing?
Download slides
Kuen-Bang Hou (Favonia) — Logarithm and Program Testing
This talk defines a notion of logarithm for types and explains its role in program testing.
Download slides
John Hughes — Integrating random and enumerative property-based testing
This was a follow-up to Benjamin Pierce's talk on the hybrid testing monad (below), in particular exploring links to combinatorial testing and suggesting that we parameterise the hybrid monad on the *strength* of random enumeration required.
Download slides
David Janin — A timed IO monad
A generic monadic approach for safe and robust timed concurrent programming.
Download slides
Download paper
Ranjit Jhala — Refinement Level Computation
We introduce Refinement Reflection, a new framework for building SMT-based deductive verifiers. The key idea is to reflect the code implementing a user-defined function into the function's (output) refinement type. As a consequence, at uses of the function, the function definition is instantiated in the SMT logic in a precise fashion that permits decidable verification. Reflection allows the user to write equational proofs of programs just by writing other programs using pattern-matching and recursion to perform case-splitting and induction. Thus, via the propositions-as-types principle, we show that reflection permits the specification of arbitrary functional correctness properties. Finally, we introduce a proof-search algorithm called Proof by Logical Evaluation that uses techniques from model checking and abstract interpretation, to completely automate equational reasoning. We have implemented reflection in Liquid Haskell and used it to verify that the widely used instances of the Monoid, Applicative, Functor, and Monad type-classes actually satisfy key algebraic laws required to make the clients safe, and have used reflection to build the first library that actually verifies assumptions about associativity and ordering that are crucial for safe deterministic parallelism.
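To make the style concrete, here is a small Liquid Haskell-flavoured sketch (pragma and combinator names follow recent Liquid Haskell releases and may differ by version): reflect a function, then prove a law about it by writing an ordinary recursive program.

```haskell
{-@ LIQUID "--reflection" @-}
{-@ LIQUID "--ple"        @-}   -- Proof by Logical Evaluation

import Language.Haskell.Liquid.ProofCombinators

{-@ reflect app @-}
app :: [a] -> [a] -> [a]
app []     ys = ys
app (x:xs) ys = x : app xs ys

-- Associativity of app as a recursive program: pattern matching is case-splitting,
-- the recursive call is the inductive hypothesis; PLE discharges the equations.
{-@ appAssoc :: xs:[a] -> ys:[a] -> zs:[a]
             -> { app (app xs ys) zs == app xs (app ys zs) } @-}
appAssoc :: [a] -> [a] -> [a] -> Proof
appAssoc []     _  _  = trivial
appAssoc (_:xs) ys zs = appAssoc xs ys zs
```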
Download slides
Download paper
Link to website
Gabriele Keller — High Performance Simulations in Haskell
An overview of what is happening in the Accelerate project, which aims to provide an easy-to-use way to write HPC programs in Haskell.
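For flavour, the standard dot-product example from the Accelerate documentation (requires the accelerate package and a backend, e.g. the reference interpreter or accelerate-llvm-native):

```haskell
import qualified Data.Array.Accelerate             as A
import           Data.Array.Accelerate.Interpreter (run)   -- swap in a real backend for speed

-- Element-wise multiply, then reduce: the whole pipeline is fused by the compiler.
dotp :: A.Acc (A.Vector Float) -> A.Acc (A.Vector Float) -> A.Acc (A.Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs = A.fromList (A.Z A.:. 10) [0 ..]     :: A.Vector Float
      ys = A.fromList (A.Z A.:. 10) (repeat 2) :: A.Vector Float
  print (run (dotp (A.use xs) (A.use ys)))     -- a Scalar containing 90.0
```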
Download slides
Xavier Leroy — Coinductive definitional interpreters using the delay monad
The delay monad, introduced by V. Capretta in 2005, is an elegant, constructive presentation of general computations (that may not terminate) in Coq, Agda, or other provers based on type theory. I've been looking into using the delay monad to define interpreters in Coq for non-normalizing languages such as untyped lambda-calculus. A few twists are required to work around Coq's productivity conditions over corecursive function definitions. The definitional interpreter we obtain in the end has all the good properties expected from a denotational semantics, but is not particularly pleasant to work with in Coq.
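In lazy Haskell the shape of the construction can be written down directly (a sketch only; it sidesteps exactly the productivity obligations that make the Coq development interesting):

```haskell
import Control.Monad (ap)

-- Capretta's delay monad: a value now, or after one more step.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure  = Now
  (<*>) = ap

instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)

-- Untyped lambda-calculus with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
data Val  = Clos [Val] Term

eval :: [Val] -> Term -> Delay Val
eval env (Var n)   = Now (env !! n)
eval env (Lam b)   = Now (Clos env b)
eval env (App f a) = do
  Clos cenv body <- eval env f
  v              <- eval env a
  Later (eval (v : cenv) body)   -- Later marks a step; divergence becomes infinitely many Laters

-- Run with fuel: Nothing means "still running after n steps".
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a)   = Just a
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d

-- Example: omega = (\x. x x) (\x. x x) never yields a value.
omega :: Term
omega = App (Lam (App (Var 0) (Var 0))) (Lam (App (Var 0) (Var 0)))
```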
Download slides
Simon Marlow — Selective Applicative Functors
Download slides
Ulf Norell — Implicit Generalization in Agda
Implicit generalization is a new feature in Agda (though already present in many other languages) that can automatically insert quantification over free variables in types. Contrary to many other systems, Agda requires all variables that should be generalized to be declared as such in a separate declaration. In this talk I show how this feature lets you omit large chunks of uninteresting quantification, and dive into the algorithm that makes it all work. The talk starts with some Agda examples that can be found here: https://gist.github.com/UlfNorell/94e8abf47149da52897cb8041601dc18 and continues in the attached slides.
Download slides
Link to website
Benjamin Pierce — A Hybrid Testing Monad
We propose a hybrid testing monad that supports both enumeration and randomness for writing test generators, combining their strengths. This permits easily switching between fully random and fully enumerative testing styles. More importantly, it opens the door to an entire spectrum of mixed approaches and gives a more principled treatment of backtracking. We explore this hybrid monad in both a lazy setting (extending Haskell's QuickCheck) and an eager one (extending Coq's QuickChick), showing a multitude of useful new combinators one can encode. We demonstrate the power of the hybrid monad in a significant case study, applying it to optimize an existing, highly-tuned generator for information-flow control abstract machines; we find that switching from pure random generation to a mix of random and enumerative generation requires only a small code change and results in an average 3.9× speedup in mean time to failure.
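A free-monad-flavoured sketch of the underlying idea (not the authors' combinators): describe a generator with explicit choice points, then interpret the same description either enumeratively or randomly.

```haskell
import Control.Monad (ap)
import System.Random (randomRIO)

-- A generator description with explicit choice points.
data Hybrid a = Done a | Choose [Hybrid a]

instance Functor Hybrid where
  fmap f (Done a)    = Done (f a)
  fmap f (Choose hs) = Choose (map (fmap f) hs)

instance Applicative Hybrid where
  pure  = Done
  (<*>) = ap

instance Monad Hybrid where
  Done a    >>= f = f a
  Choose hs >>= f = Choose (map (>>= f) hs)

oneof :: [Hybrid a] -> Hybrid a
oneof = Choose

-- Enumerative interpretation: list every value the generator can produce.
enumerate :: Hybrid a -> [a]
enumerate (Done a)    = [a]
enumerate (Choose hs) = concatMap enumerate hs

-- Random interpretation: resolve each choice point with a coin flip.
-- (Assumes choice lists are non-empty.)
sample :: Hybrid a -> IO a
sample (Done a)    = pure a
sample (Choose hs) = do i <- randomRIO (0, length hs - 1)
                        sample (hs !! i)

-- Example: booleans and bounded lists of booleans.
bool :: Hybrid Bool
bool = oneof [Done True, Done False]

boolList :: Int -> Hybrid [Bool]
boolList 0 = Done []
boolList n = oneof [Done [], (:) <$> bool <*> boolList (n - 1)]
```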
Download slides
François Pottier — Time Receipts in Iris
Download slides
Gabriel Scherer — A right-to-left type system for mutually-recursive value definitions
In call-by-value languages, some mutually-recursive value definitions can be safely evaluated to build recursive functions or cyclic data structures, but some definitions (`let rec x = x + 1`) contain vicious circles and their evaluation fails at runtime. We propose a new static analysis to check the absence of such runtime failures. We present a set of declarative inference rules, prove its soundness with respect to the reference source-level semantics of [Nordlander, Carlsson and Gill, 2008], and show that it can be (right-to-left) directed into an algorithmic check in a surprisingly simple way. Our implementation of this new check replaced the existing check used by the OCaml programming language, a fragile syntactic/grammatical criterion which let several subtle bugs slip through as the language kept evolving.
Download slides
Link to website
Peter Thiemann — Fun with Label-Dependent Session Types
Session types have emerged as a typing discipline for communication protocols. Existing calculi with session types come equipped with many different primitives that combine communication with the introduction or elimination of the transmitted value. We present a foundational session type calculus with a lightweight operational semantics. It fully decouples communication from the introduction and elimination of data and thus features a single communication reduction, which acts as a rendezvous between senders and receivers. We achieve this decoupling by introducing label-dependent session types, a minimalist value-dependent session type system with subtyping. The system is sufficiently powerful to simulate existing functional session type systems. Compared to such systems, label-dependent session types place fewer restrictions on the code. The new calculus showcases a novel integration of dependent types and linear typing, which has uses beyond session type systems.
Download slides
Niki Vazou — Liquidate your Assets
Liquid Haskell is an extension to the type system of Haskell that supports formal reasoning about program correctness by encoding logical properties as refinement types. In this article, we show how Liquid Haskell can also be used to reason about program efficiency in the same setting, with the system’s existing verification machinery being used to ensure that the results are both meaningful and precise. To illustrate our approach, we analyse the efficiency of a range of popular data structures and algorithms, and in doing so, explore various notions of resource usage. Our experience is that reasoning about efficiency in Liquid Haskell is often just as simple as reasoning about correctness, and that the two can naturally be combined.
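A sketch of the kind of cost instrumentation involved (illustrative names, not the paper's exact library): a Tick monad counts evaluation steps, and refinement types can then bound the count.

```haskell
import Control.Monad (ap)

-- A computation together with the number of steps it charged.
data Tick a = Tick { cost :: Int, val :: a }

instance Functor Tick where
  fmap f (Tick c x) = Tick c (f x)

instance Applicative Tick where
  pure  = Tick 0
  (<*>) = ap

instance Monad Tick where
  Tick c x >>= f = let Tick c' y = f x in Tick (c + c') y

-- Charge one unit of work.
step :: a -> Tick a
step = Tick 1

-- Instrumented append: the cost is the length of the first list.
appT :: [a] -> [a] -> Tick [a]
appT []     ys = pure ys
appT (x:xs) ys = step . (x:) =<< appT xs ys

-- In Liquid Haskell one would now give appT a refinement type along the lines of
--   appT :: xs:[a] -> ys:[a] -> { t:Tick [a] | cost t == len xs }
-- and let the existing verification machinery check the bound.
```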
Download slides
Download paper
Dimitrios Vytiniotis — The Differentiable Curry
In this talk we discuss modular symbolic differentiation for a general-purpose programming language in the style of the recent Swift-for-Tensorflow project, based on the idea of compiling differentiable functions to data structures containing the original function and its forward or backward derivatives. We focus on the problem of differentiating through higher-order functions and show it can be reduced to finding a suitable notion of a differentiable curry, combining ideas from the classic "Lambda the Ultimate Backpropagator" (LUTB) and the recent categorical presentation of "The Simple Essence of Automatic Differentiation". We provide two different interpretations for the perturbation spaces corresponding to function types, one simply typed (and new, to our knowledge), the other directly inspired by LUTB, but cast for the first time in a static (dependent) type discipline. We discuss their trade-offs, and present proofs that both interpretations "work", as they both yield notions of differentiable currying and evaluation that satisfy various laws expected to hold in Cartesian Closed Categories. We sketch extensions to higher-order derivatives.
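The first-order core of that compilation scheme can be sketched in a few lines (simplified from the style of "The Simple Essence of Automatic Differentiation": derivatives appear as plain functions rather than a linear-map category, and the higher-order/currying question the talk addresses is exactly what this sketch leaves out):

```haskell
-- A differentiable function: at each point, its value and its derivative there.
newtype D a b = D { runD :: a -> (b, a -> b) }

-- Composition is the chain rule.
(.>) :: D b c -> D a b -> D a c
D g .> D f = D $ \a ->
  let (b, f') = f a
      (c, g') = g b
  in (c, g' . f')

-- Example: differentiable scaling and squaring on Double.
scale :: Double -> D Double Double
scale k = D $ \x -> (k * x, \dx -> k * dx)

square :: D Double Double
square = D $ \x -> (x * x, \dx -> 2 * x * dx)

-- Value and derivative of (3*x)^2 at x = 5: (225.0, 90.0).
demo :: (Double, Double)
demo = let (y, d) = runD (square .> scale 3) 5 in (y, d 1)
```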
Download slides
David Walker — NV: A platform for modelling and verifying routing protocols
This talk describes a functional language for modeling and verifying configurations of routing protocols such as BGP and OSPF. After a user has modeled their network, he or she can (1) convert the model to an SMT formula to verify properties of its converged states, or (2) simulate the model to find a particular stable state, on which assertions may be checked.
Download slides
Link to website
Stephanie Weirich — Strongly-typed System F in GHC
Download slides
Keith Winstein — Tiny functions for lots of things
Networks, applications, and media codecs frequently treat one another as strangers. By expressing large systems as compositions of small, pure functions, we've found it's possible to achieve tighter couplings between these components, improving performance without giving up modularity or the ability to debug. I'll discuss our experience with systems that demonstrate this basic idea: ExCamera (NSDI 2017) parallelizes video encoding into thousands of tiny tasks, each handling a fraction of a second of video, much shorter than the interval between key frames, and executing with 4,000-way parallelism on AWS Lambda. Salsify (NSDI 2018) is a low-latency network video system that uses a purely functional video codec to explore execution paths of the encoder without committing to them, allowing it to closely match the capacity estimates from a video-aware transport protocol. This architecture outperforms more loosely-coupled applications -- Skype, Facetime, Hangouts, WebRTC -- in delay and visual quality, and suggests that while improvements in video codecs may have reached the point of diminishing returns, video systems still have low-hanging fruit. Lepton (NSDI 2017) uses a purely functional JPEG/VP8 transcoder to compress images in parallel across a distributed network filesystem with arbitrary block boundaries. This free-software system is in production at Dropbox and has compressed, by 23%, more than 200 petabytes of user JPEGs. Based on our experience building these systems, we have designed an intermediate representation and toolchain for massively parallel "serverless" applications. The IR lets application developers express jobs against an abstract machine, taking advantage of back-end execution engines that manage parallel execution, data movement, failure recovery, debugging, tracing, and memoization on different compute and storage platforms. Expressing systems and protocols as compositions of small, pure functions may open up a wave of general-purpose lambda computing, permitting us to transform everyday operations into massively parallel -- and understandable -- applications.
Download slides
Steve Zdancewic — Compositional Compiler Correctness in Coq
Interaction trees are a general-purpose data structure in Coq for formalizing the behaviors of recursive programs that interact with their environment. ITrees are built of uninterpreted events and their continuations -- a coinductive variant of a "free monad." They allow proofs of compositional properties for interpreters built from event handlers and admit a general, mutual recursion operator. ITrees are also executable, e.g. through extraction, making them suitable for debugging, testing, and implementing executable artifacts. This talk will show how, using this theory, we can prove, in Coq, the termination-sensitive correctness of a compiler from a simple imperative source language to an assembly-like target whose meanings are given as an ITree-based denotational semantics. Crucially, the correctness proof follows entirely by structural induction and the equational theory of combinators for control-flow graphs, which are built on top of the mutual recursion operator.
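A lazy-Haskell rendering of the data structure (in Coq it is defined coinductively; the constructor names follow the ITree papers, but the event signature and handler below are made up for illustration):

```haskell
{-# LANGUAGE GADTs #-}
import Control.Monad (ap)

-- An interaction tree: finished, a silent step, or a visible event with a continuation.
data ITree e r where
  Ret :: r -> ITree e r                        -- computation finished with result r
  Tau :: ITree e r -> ITree e r                -- one silent (internal) step
  Vis :: e a -> (a -> ITree e r) -> ITree e r  -- emit event e a, continue with its answer

instance Functor (ITree e) where
  fmap f (Ret r)   = Ret (f r)
  fmap f (Tau t)   = Tau (fmap f t)
  fmap f (Vis e k) = Vis e (fmap f . k)

instance Applicative (ITree e) where
  pure  = Ret
  (<*>) = ap

instance Monad (ITree e) where
  Ret r   >>= f = f r
  Tau t   >>= f = Tau (t >>= f)
  Vis e k >>= f = Vis e (\x -> k x >>= f)

-- A made-up event signature: a single mutable Int cell.
data CellE a where
  Get :: CellE Int
  Put :: Int -> CellE ()

-- A handler interprets events; here purely, by threading the cell's state.
runCell :: Int -> ITree CellE r -> r
runCell _ (Ret r)         = r
runCell s (Tau t)         = runCell s t
runCell s (Vis Get k)     = runCell s (k s)
runCell _ (Vis (Put s) k) = runCell s (k ())

-- Example: increment the cell and return the old value.
incr :: ITree CellE Int
incr = Vis Get (\n -> Vis (Put (n + 1)) (\_ -> Ret n))
```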
Download slides
Link to website