Publications
A Denotational Approach to Release/Acquire Concurrency
Yotam Dvir, Ohad Kammar, and Ori Lahav.
Appears in the European Symposium on Programming, 2024, DOI: 10.1007/978-3-031-57267-8_5.
“We present a compositional denotational semantics for a functional language with first-class parallel composition and shared-memory operations whose operational semantics follows the Release/Acquire weak memory model (RA). The semantics is formulated in Moggi’s monadic approach, and is based on Brookes-style traces. To do so we adapt Brookes’s traces to Kang et al.’s view-based machine for RA, and supplement Brookes’s mumble and stutter closure operations with additional operations, specific to RA. The latter provides a more nuanced understanding of traces that uncouples them from operational interrupted executions. We show that our denotational semantics is adequate and use it to validate various program transformations of interest. This is the first work to put weak memory models on the same footing as many other programming effects in Moggi’s standard monadic approach.”
Shoggoth: A Formal Foundation for Strategic Rewriting (video)
Xueying Qin, Liam O’Connor, Rob van Glabbeek, Peter Hoefner, Ohad Kammar, and Michel Steuwer.
Appears in the 51st ACM SIGPLAN Symposium on Principles of Programming Languages, 2024, DOI: 10.1145/3633211.
“Rewriting is a versatile and powerful technique used in many domains. Strategic rewriting allows programmers to control the application of rewrite rules by composing individual rewrite rules into complex rewrite strategies. These strategies are semantically complex, as they may be nondeterministic, they may raise errors that trigger backtracking, and they may not terminate. Given such semantic complexity, it is necessary to establish a formal understanding of rewrite strategies and to enable reasoning about them in order to answer questions like: How do we know that a rewrite strategy terminates? How do we know that a rewrite strategy does not fail because we compose two incompatible rewrites? How do we know that a desired property holds after applying a rewrite strategy? In this paper, we introduce Shoggoth: a formal foundation for understanding, analysing and reasoning about strategic rewriting that is capable of answering these questions. We provide a denotational semantics of System S, a core language for strategic rewriting, and prove its equivalence to our big-step operational semantics, which extends existing work by explicitly accounting for divergence. We further define a location-based weakest precondition calculus to enable formal reasoning about rewriting strategies, and we prove this calculus sound with respect to the denotational semantics. We show how this calculus can be used in practice to reason about properties of rewriting strategies, including termination, that they are well-composed, and that desired postconditions hold. The semantics and calculus are formalised in Isabelle/HOL and all proofs have been mechanised.”
An Algebraic Theory for Shared-State Concurrency
Yotam Dvir, Ohad Kammar, and Ori Lahav.
Appears in the Asian Symposium on Programming Languages and Systems, 2022, DOI: 10.1007/978-3-031-21037-2_1.
“We present a monadic denotational semantics for a higher-order programming language with shared-state concurrency, i.e. global-state in the presence of interleaving concurrency. Central to our approach is the use of Plotkin and Power’s algebraic effect methodology: designing an equational theory that captures the intended semantics, and proving a monadic representation theorem for it. We use Hyland et al.’s equational theory of resumptions that extends non-deterministic global-state with an operator for yielding to the environment. The representation is based on Brookes-style traces. Based on this representation we define a denotational semantics that is directionally adequate with respect to a standard operational semantics. We use this semantics to justify compiler transformations of interest: redundant access eliminations, each following from a mundane algebraic calculation; while structural transformations follow from reasoning over the monad’s interface.”
Fully abstract models for effectful lambda-calculi via category-theoretic logical relations (video, short video)
Ohad Kammar, Shin-ya Katsumata, and Philip Saville.
Appears in the 49th ACM SIGPLAN Symposium on Principles of Programming Languages, 2022, DOI: 10.1145/3498705.
“We present a construction which, under suitable assumptions, takes a model of Moggi’s computational lambda-calculus with sum types, effect operations and primitives, and yields a model that is adequate and fully abstract. The construction, which uses the theory of fibrations, categorical glueing, top-top-lifting, and top-top-closure, takes inspiration from O’Hearn & Riecke’s fully abstract model for PCF. Our construction can be applied in the category of sets and functions, as well as the category of diffeological spaces and smooth maps and the category of quasi-Borel spaces, which have been studied as semantics for differentiable and probabilistic programming.”
Recalibrating classifiers for interpretable abusive content detection
Bertie Vidgen, Sam Staton, Scott A. Hale, Ohad Kammar, Helen Margetts, Tom Melham, Marcin Szymczak.
Appears in the Fourth Workshop on Natural Language Processing and Computational Social Science, 2020, DOI: 10.18653/v1/2020.nlpcss-1.14.
“We investigate the use of machine learning classifiers for detecting online abuse in empirical research. We show that uncalibrated classifiers (i.e. where the ‘raw’ scores are used) align poorly with human evaluations. This limits their use for understanding the dynamics, patterns and prevalence of online abuse. We examine two widely used classifiers (created by Perspective and Davidson et al.) on a dataset of tweets directed against candidates in the UK’s 2017 general election. A Bayesian approach is presented to recalibrate the raw scores from the classifiers, using probabilistic programming and newly annotated data. We argue that interpretability evaluation and recalibration is integral to the application of abusive content classifiers.”
On the expressive power of user-defined effects: effect handlers, monadic reflection, delimited control (updated version) (video)
Yannick Forster, Ohad Kammar, Sam Lindley, and Matija Pretnar.
Appears in the Journal of Functional Programming, 2019, arXiv:1610.09161, DOI: 10.1145/3110257.
Expanded and updated version of the previous ICFP’17 paper.
“We compare the expressive power of three programming abstractions for user-defined computational effects: Plotkin and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features.
We present three calculi, one per abstraction, extending Levy’s call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic metatheoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. Our argument fails with simple changes to the type system such as polymorphism and inductive types. We supplement our development with a mechanised Abella formalisation.”
A Domain Theory for Statistical Probabilistic Programming (video)
Matthijs Vákár, Ohad Kammar, and Sam Staton.
Appears in the 46th ACM SIGPLAN Symposium on Principles of Programming Languages, 2019, DOI: 10.1145/3290349.
Selected by the programme committee for the Distinguished Paper Award. Awarded to at most 10% of accepted papers, to highlight them as papers that should be read by a broad audience due to their relevance, originality, significance, and clarity.
“We give an adequate denotational semantics for languages with recursive higher-order types, continuous probability distributions, and soft constraints. These are expressive languages for building Bayesian models of the kinds used in computational statistics and machine learning. Among them are untyped languages, similar to Church and WebPPL, because our semantics allows recursive mixed-variance datatypes. Our semantics justifies important program equivalences including commutativity.
Our new semantic model is based on ‘quasi-Borel predomains’. These are a mixture of chain-complete partial orders (cpos) and quasi-Borel spaces. Quasi-Borel spaces are a recent model of probability theory that focuses on sets of admissible random elements. Probability is traditionally treated in cpo models using probabilistic powerdomains, but these are not known to be commutative on any class of cpos with higher order functions. By contrast, quasi-Borel predomains do support both a commutative probabilistic powerdomain and higher-order functions. As we show, quasi-Borel predomains form both a model of Fiore’s axiomatic domain theory and a model of Kock’s synthetic measure theory.”
Partially static data as free extension of algebras
Jeremy Yallop, Tamara von Glehn, Ohad Kammar.
Appears in the International Conference on Functional Programming, 2018, DOI: 10.1145/3236795.
“Partially-static data structures are a well-known technique for improving binding times. However, they are often defined in an ad-hoc manner, without a unifying framework to ensure full use of the equations associated with each operation. We present a foundational view of partially-static data structures as free extensions of algebras for suitable equational theories, i.e. the coproduct of an algebra and a free algebra in the category of algebras and their homomorphisms. By precalculating these free extensions, we construct a high-level library of partially static data representations for common algebraic structures. We demonstrate our library with common use-cases from the literature: string and list manipulation, linear algebra, and numerical simplification.”
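The free-extension view above can be made concrete. Below is a hedged illustrative sketch (not the paper’s library, and in Python rather than the authors’ setting): a partially-static string monoid, i.e. the free extension of the string monoid by a set of variables. Adjacent static segments are merged eagerly, so the monoid equations are exploited as far as the statically known parts allow.

```python
class PS:
    """A word of segments ("s", string) and ("v", variable name),
    kept in normal form: no empty statics, no two adjacent statics."""

    def __init__(self, segs=()):
        self.segs = list(segs)

    @staticmethod
    def static(s):
        # Unit law applied statically: the empty string is absorbed.
        return PS([("s", s)] if s else [])

    @staticmethod
    def var(x):
        return PS([("v", x)])

    def __mul__(self, other):
        # Monoid multiplication; merges a static tail with a static head.
        segs = list(self.segs)
        for tag, payload in other.segs:
            if tag == "s" and segs and segs[-1][0] == "s":
                segs[-1] = ("s", segs[-1][1] + payload)
            else:
                segs.append((tag, payload))
        return PS(segs)

    def __eq__(self, other):
        return self.segs == other.segs
```

For example, `PS.var("x") * PS.static("ab") * PS.static("cd")` normalises the two known segments into one, leaving the variable untouched until it is bound.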
Functional programming for modular Bayesian inference
Adam Ścibior, Ohad Kammar, and Zoubin Ghahramani.
Appears in the International Conference on Functional Programming, 2018, DOI: 10.1145/3236778.
“We present an architectural design of a library for Bayesian modelling and inference in modern functional programming languages. The novel aspect of our approach is its modular correct-by-construction implementations of existing state-of-the-art inference algorithms. Our design relies on three inherently functional features: higher-order functions, inductive data-types, and support for either type-classes or an expressive module system. We provide a performant Haskell implementation of this architecture, demonstrating that high-level and modular probabilistic programming can be added as a library in sufficiently expressive languages. We review the core abstractions in this architecture: inference representations, inference transformations, and inference representation transformers. We then implement concrete instances of these abstractions, counterparts to particle filters and Metropolis-Hastings samplers, which form the basic building blocks of our library. By composing these building blocks we obtain state-of-the-art inference algorithms: Resample-Move Sequential Monte Carlo, Particle Marginal Metropolis-Hastings, and Sequential Monte Carlo squared. We evaluate our implementation against existing probabilistic programming systems and find it is already competitively performant, although we conjecture that existing functional programming optimisation techniques could reduce the overhead associated with the abstractions we use. We show that our modular design enables deterministic testing of inherently stochastic Monte Carlo algorithms. Finally, we demonstrate using OCaml that an expressive module system can also implement our design.”
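To illustrate the core abstractions, here is a hedged sketch in Python (the paper’s library is in Haskell, and these names are illustrative, not the library’s API): an inference *representation* is a type of probabilistic computation, such as a sampler paired with an importance weight, and an inference *transformation* maps one representation to another while preserving the distribution.

```python
class Weighted:
    """Representation: a thunked sampler returning (value, weight)."""

    def __init__(self, run):
        self.run = run  # () -> (value, weight)

    @staticmethod
    def ret(x):
        return Weighted(lambda: (x, 1.0))

    def bind(self, f):
        # Sequencing multiplies the weights accumulated by both stages.
        def run():
            x, w = self.run()
            y, w2 = f(x).run()
            return y, w * w2
        return Weighted(run)


def score(w):
    """Soft constraint: scale the current trace's weight by w."""
    return Weighted(lambda: (None, w))


def sample(draw):
    """Primitive random draw; `draw` is a thunk producing a sample."""
    return Weighted(lambda: (draw(), 1.0))


def importance(model, n):
    """Transformation: run the weighted sampler n times, producing a
    population of weighted particles (the basic step behind SMC)."""
    return [model.run() for _ in range(n)]
```

Composing `Weighted` with further layers (resampling, a Markov-chain move) is, roughly, how the paper assembles Resample-Move SMC and similar algorithms from small, independently testable pieces.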
Factorisation systems for logical relations and monadic lifting in type-and-effect system semantics
Ohad Kammar and Dylan McDermott.
Appears in the 34th Conference on the Mathematical Foundations of Programming Semantics, 2018, arXiv:1804.03460, DOI: 10.1016/j.entcs.2018.11.012.
“Type-and-effect systems incorporate information about the computational effects, e.g., state mutation, probabilistic choice, or I/O, a program phrase may invoke alongside its return value. A semantics for type-and-effect systems involves a parameterised family of monads whose size is exponential in the number of effects. We derive such refined semantics from a single monad over a category, a choice of algebraic operations for this monad, and a suitable factorisation system over this category. We relate the derived semantics to the original semantics using fibrations for logical relations. Our proof uses a folklore technique for lifting monads with operations.”
Denotational validation of Bayesian inference (video)
Adam Ścibior, Ohad Kammar, Matthijs Vákár, Sam Staton, Hongseok Yang, Yufei Cai, Klaus Ostermann, Sean K. Moss, Chris Heunen, and Zoubin Ghahramani.
Appears in the 45th ACM SIGPLAN Symposium on Principles of Programming Languages, 2018, arXiv:1711.03219, DOI: 10.1145/3158148.
“We present a modular semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in data science and machine learning. Sophisticated inference algorithms are often explained in terms of composition of smaller parts. However, neither their theoretical justification nor their implementation reflects this modularity. We show how to conceptualise and analyse such inference algorithms as manipulating intermediate representations of probabilistic programs using higher-order functions and inductive types, and their denotational semantics.
Semantic accounts of continuous distributions use measurable spaces. However, our use of higher-order functions presents a substantial technical difficulty: it is impossible to define a measurable space structure over the collection of measurable functions between arbitrary measurable spaces that is compatible with standard operations on those functions, such as function application. We overcome this difficulty using quasi-Borel spaces, a recently proposed mathematical structure that supports both function spaces and continuous distributions.
We define a class of semantic structures for representing probabilistic programs, and semantic validity criteria for transformations of these representations in terms of distribution preservation. We develop a collection of building blocks for composing representations. We use these building blocks to validate common inference algorithms such as Sequential Monte Carlo and Markov Chain Monte Carlo. To emphasize the connection between the semantic manipulation and its traditional measure theoretic origins, we use Kock’s synthetic measure theory. We demonstrate its usefulness by proving a quasi-Borel counterpart to the Metropolis-Hastings-Green theorem.”
On the expressive power of user-defined effects: effect handlers, monadic reflection, delimited control (video)
Yannick Forster, Ohad Kammar, Sam Lindley, and Matija Pretnar.
Appears in the 22nd ACM SIGPLAN International Conference on Functional Programming, 2017, arXiv:1610.09161, DOI: 10.1145/3110257.
“We compare the expressive power of three programming abstractions for user-defined computational effects: Bauer and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control without answer-type-modification. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features.
We present three calculi, one per abstraction, extending Levy’s call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic meta-theoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. We supplement our development with a mechanised Abella formalisation.”
A convenient category for higher-order probability theory
Chris Heunen, Ohad Kammar, Sam Staton, Hongseok Yang.
Appears in the 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, 2017, arXiv:1701.02547, DOI: 10.1109/LICS.2017.8005137.
“Higher-order probabilistic programming languages allow programmers to write sophisticated models in machine learning and statistics in a succinct and structured way, but step outside the standard measure-theoretic formalization of probability theory. Programs may use both higher-order functions and continuous distributions, or even define a probability distribution on functions. But standard probability theory cannot support higher-order functions, that is, the category of measurable spaces is not cartesian closed.
Here we introduce quasi-Borel spaces. We show that these spaces: form a new formalization of probability theory replacing measurable spaces; form a cartesian closed category and so support higher-order functions; form an extensional category and so support good proof principles for equational reasoning; and support continuous probability distributions. We demonstrate the use of quasi-Borel spaces for higher-order functions and probability by: showing that a well-known construction of probability theory involving random functions gains a cleaner expression; and generalizing de Finetti’s theorem, that is a crucial theorem in probability theory, to quasi-Borel spaces.”
A monad for full ground reference cells
Ohad Kammar, Paul Blain Levy, Sean Keith Moss, Sam Staton.
Appears in the 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, 2017, arXiv:1702.04908, DOI: 10.1109/LICS.2017.8005109.
“We present a denotational account of dynamic allocation of potentially cyclic memory cells using a monad on a functor category. We identify the collection of heaps as an object in a different functor category equipped with a monad for adding hiding/encapsulation capabilities to the heaps. We derive a monad for full ground references supporting effect masking by applying a state monad transformer to the encapsulation monad. To evaluate the monad, we present a denotational semantics for a call-by-value calculus with full ground references, and validate associated code transformations.”
No value restriction is needed for algebraic effects and handlers
Ohad Kammar and Matija Pretnar.
Appears in the Journal of Functional Programming, 2016, DOI: 10.1017/S0956796816000320.
“We present a straightforward, sound, Hindley-Milner polymorphic type system for algebraic effects and handlers in a call-by-value calculus, which, to our surprise, allows type variable generalisation of arbitrary computations, and not just values. We first recall that the soundness of unrestricted call-by-value Hindley-Milner polymorphism is known to fail in the presence of computational effects such as reference cells and continuations, and that many programming examples can be recast to use effect handlers instead of these effects. After presenting the calculus and its soundness proof, formalised in Twelf, we analyse the expressive power of effect handlers with respect to state effects. We conjecture handlers alone cannot express reference cells, but show they can simulate dynamically scoped state, establishing that dynamic binding also does not need a value restriction.”
Bayesian inversion by omega-complete cone duality
Fredrik Dahlqvist, Vincent Danos, Ilias Garnier, and Ohad Kammar.
Appears in the 27th International Conference on Concurrency Theory, 2016.
“The process of inverting Markov kernels relates to the important subject of Bayesian modelling and learning. In fact, Bayesian update is exactly kernel inversion. In this paper, we investigate how and when Markov kernels (aka stochastic relations, or probabilistic mappings, or simply kernels) can be inverted. We address the question both directly on the category of measurable spaces, and indirectly by interpreting kernels as Markov operators:
- For the direct option, we introduce a typed version of the category of Markov kernels and use the so-called ‘disintegration of measures’. Here, one has to specialise to measurable spaces borne from a simple class of topological spaces, e.g. Polish spaces (other choices are possible). Our method and result greatly simplify a recent development in Ref. [4].
- For the operator option, we use a cone version of the category of Markov operators (kernels seen as predicate transformers). That is to say, our linear operators are not just continuous, but are required to satisfy the stronger condition of being ω-chain-continuous. Prior work shows that one obtains an adjunction in the form of a pair of contravariant and inverse functors between the categories of L1- and L∞-cones [3]. Inversion, seen through the operator prism, is just adjunction. No topological assumption is needed.
- We show that both categories (Markov kernels and ω-chain-continuous Markov operators) are related by a family of contravariant functors Tp for 1 ≤ p ≤ ∞. The Tp’s are Kleisli extensions of (duals of) conditional expectation functors introduced in Ref. [3].
- With this bridge in place, we can prove that both notions of inversion agree when both defined: if f is a kernel, and f† its direct inverse, then T∞(f)† = T1(f†).”
Semantics for probabilistic programming: higher-order functions, continuous distributions, and soft constraints
Sam Staton, Hongseok Yang, Chris Heunen, Ohad Kammar, and Frank Wood.
Appears in the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, 2016, arXiv:1601.04943, DOI: 10.1145/2933575.2935313.
“We study the semantic foundation of expressive probabilistic programming languages, that support higher-order functions, continuous distributions, and soft constraints (such as Anglican, Church, and Venture). We define a metalanguage (an idealised version of Anglican) for probabilistic computation with the above features, develop both operational and denotational semantics, and prove soundness, adequacy, and termination. They involve measure theory, stochastic labelled transition systems, and functor categories, but admit intuitive computational readings, one of which views sampled random variables as dynamically allocated read-only variables. We apply our semantics to validate nontrivial equations underlying the correctness of certain compiler optimisations and inference algorithms such as sequential Monte Carlo simulation. The language enables defining probability distributions on higher-order functions, and we study their properties.”
Inferring Co-Evolution
Ehud Lamm and Ohad Kammar.
Appears in Philosophy of Science, 2014, DOI: 10.1086/678045.
“We discuss two inference patterns for inferring the coevolution of two characters based on their properties at a single point in time and determine when developmental interactions can be used to deduce evolutionary order. We discuss the use of the inference patterns we present in the biological literature and assess the arguments’ validity, the degree of support they give to the evolutionary conclusion, how they can be corroborated with empirical evidence, and to what extent they suggest new empirically addressable questions. We suggest that the developmental argument is uniquely applicable to cognitive-cultural coevolution.”
An algebraic theory of type-and-effect systems
Ohad Kammar.
Appears in the University of Edinburgh thesis collection, 2014.
“We present a general semantic account of Gifford-style type-and-effect systems. These type systems provide lightweight static analyses annotating program phrases with the sets of possible computational effects they may cause, such as memory access and modification, exception raising, and non-deterministic choice. The analyses are used, for example, to justify the program transformations typically used in optimising compilers, such as code reordering and inlining. Despite their existence for over two decades, there is no prior comprehensive theory of type-and-effect systems accounting for their syntax and semantics, and justifying their use in effect-dependent program transformation. We achieve this generality by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with the effect operations. Our first main contribution is the uniform construction of semantic models for type-and-effect analysis by a process we call conservative restriction. Our construction requires an algebraic model of the unannotated programming language and a relevant notion of predicate. It then generates a model for Gifford-style type-and-effect analysis. This uniform construction subsumes existing ad-hoc models for type-and-effect systems, and is applicable in all cases in which the semantics can be given via enriched Lawvere theories. Our second main contribution is a demonstration that our theory accounts for the various aspects of Gifford-style effect systems. We begin with a version of Levy’s Call-by-push-value that includes algebraic effects. We add effect annotations, and design a general type-and-effect system for such call-by-push-value variants. The annotated language can be thought of as an intermediate representation used for program optimisation. 
We relate the unannotated semantics to the conservative restriction semantics, and establish the soundness of program transformations based on this effect analysis. We develop and classify a range of validated transformations, generalising many existing ones and adding some new ones. We also give modularly-checkable sufficient conditions for the validity of these optimisations.
In the final part of this thesis, we demonstrate our theory by analysing a simple example language involving global state with multiple regions, exceptions, and non-determinism. We give decision procedures for the applicability of the various effect-dependent transformations, and establish their soundness and completeness.”
Handlers in Action
Ohad Kammar, Sam Lindley, and Nicolas Oury.
Appears in the 18th ACM SIGPLAN International Conference on Functional Programming, 2013, DOI: 10.1145/2544174.2500590.
“Plotkin and Pretnar’s handlers for algebraic effects occupy a sweet spot in the design space of abstractions for effectful computation. By separating effect signatures from their implementation, algebraic effects provide a high degree of modularity, allowing programmers to express effectful programs independently of the concrete interpretation of their effects. A handler is an interpretation of the effects of an algebraic computation. The handler abstraction adapts well to multiple settings: pure or impure, strict or lazy, static types or dynamic types. This is a position paper whose main aim is to popularise the handler abstraction. We give a gentle introduction to its use, a collection of illustrative examples, and a straightforward operational semantics. We describe our Haskell implementation of handlers in detail, outline the ideas behind our OCaml, SML, and Racket implementations, and present experimental results comparing handlers with existing code.”
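The handler abstraction can be sketched in a few lines. This is a hedged toy illustration in Python, not the paper’s Haskell library: a computation is a tree of operation calls, and a handler interprets each operation, receiving its argument together with the captured continuation.

```python
class Op:
    """An operation call: its name, argument, and continuation."""
    def __init__(self, name, arg, k):
        self.name, self.arg, self.k = name, arg, k  # k: result -> tree


class Ret:
    """A finished computation returning a value."""
    def __init__(self, value):
        self.value = value


def handle(tree, clauses, ret=lambda x: x):
    """Interpret each operation via clauses: name -> (arg, resume) -> result."""
    if isinstance(tree, Ret):
        return ret(tree.value)
    resume = lambda x: handle(tree.k(x), clauses, ret)
    return clauses[tree.name](tree.arg, resume)


# Example effect: binary choice, handled as nondeterminism by calling
# the continuation once per alternative and collecting the results.
prog = Op("choose", (1, 2),
          lambda x: Op("choose", (10, 20),
                       lambda y: Ret(x + y)))
all_results = handle(prog,
                     {"choose": lambda arg, k: k(arg[0]) + k(arg[1])},
                     ret=lambda x: [x])
# all_results collects every branch: [11, 21, 12, 22]
```

Swapping in a different clause for `choose` (say, always taking the first alternative) reinterprets the same `prog` without changing it, which is the modularity the abstract describes.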
Algebraic Foundations for Effect-Dependent Optimisations
Ohad Kammar and Gordon D. Plotkin.
Appears in the Symposium on Principles of Programming Languages, 2012, DOI: 10.1145/2103656.2103698.
“We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with operation symbols.
We develop an annotated version of Levy’s Call-by-Push-Value language with a kind of computations for every effect set; it can be thought of as a sequential, annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as structural, algebraic, or abstract: structural optimisations always hold; algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of that theory (we give modularly-checkable sufficient conditions for their validity).”
On the statistical thermodynamics of reversible communicating processes
Giorgio Bacci, Vincent Danos and Ohad Kammar.
Appears in the Conference on Algebra and Coalgebra in Computer Science, 2011, DOI: 10.1007/978-3-642-22944-2_1.
“We propose a probabilistic interpretation of a class of reversible communicating processes. The rate of forward and backward computing steps, instead of being given explicitly, is derived from a set of formal energy parameters. This is similar to the Metropolis-Hastings algorithm. We find a lower bound on energy costs which guarantees that a process converges to a probabilistic equilibrium state (a grand canonical ensemble in statistical physics terms). This implies that such processes hit a success state in finite average time, if there is one.”
Preprints
A Denotational Approach to Release/Acquire Concurrency
Yotam Dvir, Ohad Kammar, and Ori Lahav.
Preprint, 2024.
“We present a compositional denotational semantics for a functional language with first-class parallel composition and shared-memory operations whose operational semantics follows the Release/Acquire weak memory model (RA). The semantics is formulated in Moggi’s monadic approach, and is based on Brookes-style traces. To do so we adapt Brookes’s traces to Kang et al.’s view-based machine for RA, and supplement Brookes’s mumble and stutter closure operations with additional operations, specific to RA. The latter provides a more nuanced understanding of traces that uncouples them from operational interrupted executions. We show that our denotational semantics is adequate and use it to validate various program transformations of interest. This is the first work to put weak memory models on the same footing as many other programming effects in Moggi’s standard monadic approach.”
Frex: dependently-typed algebraic simplification
Guillaume Allais, Edwin Brady, Nathan Corbyn, Ohad Kammar, and Jeremy Yallop.
Preprint, 2022.
“We present an extensible, mathematically-structured algebraic simplification library design. We structure the library using universal algebraic concepts: a free algebra; and a free extension — frex — of an algebra by a set of variables. The library’s dependently-typed API guarantees simplification modules, even user-defined ones, are terminating, sound, and complete with respect to a well-specified class of equations. Completeness offers intangible benefits in practice — our main contribution is the novel design. Cleanly separating between the interface and implementation of simplification modules provides two new modularity axes. First, simplification modules share thousands of lines of infrastructure code dealing with term-representation, pretty-printing, certification, and macros/reflection. Second, new simplification modules can reuse existing ones. We demonstrate this design by developing simplification modules for monoid varieties: ordinary, commutative, and involutive. We implemented this design in the new Idris2 dependently-typed programming language, and in Agda.”
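The simplification idea can be illustrated concretely. This is a hedged sketch in Python of the underlying principle only (the real library is in Idris2 and Agda, with machine-checked certificates): terms in the free extension of a monoid by a set of variables normalise to flat words, so two terms are provably equal in every monoid exactly when their normal forms coincide.

```python
# Terms of the monoid signature over variables, as tagged tuples.
UNIT = ("unit",)

def var(x):
    return ("var", x)

def mul(s, t):
    return ("mul", s, t)

def norm(term):
    """Flatten a term into its list of variables, applying
    associativity and the unit laws along the way."""
    if term[0] == "unit":
        return []
    if term[0] == "var":
        return [term[1]]
    return norm(term[1]) + norm(term[2])

def equal(s, t):
    """Decide whether s = t holds in the free monoid on the variables,
    i.e. follows from the monoid axioms alone."""
    return norm(s) == norm(t)
```

For instance, `equal(mul(var("x"), mul(var("y"), UNIT)), mul(mul(var("x"), var("y")), UNIT))` holds by normalisation, while `x·y = y·x` rightly fails; a commutative-monoid simplifier would instead sort the word, which is the kind of per-theory module the library packages.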
An absolute characterisation of locally determined omega-colimits
Ohad Kammar.
Preprint, 2014, arXiv:1508.05072.
“Characterising colimiting omega-cocones of projection pairs in terms of least upper bounds of their embeddings and projections is important to the solution of recursive domain equations. We present a universal characterisation of this local property as omega-cocontinuity of locally continuous functors. We present a straightforward proof using the enriched Yoneda embedding. The proof can be generalised to Cattani and Fiore’s notion of locality for adjoint pairs.”
Talks
Semantic foundations for type-driven probabilistic modelling
Talk given at the 40th International Conference on Mathematical Foundations of Programming Semantics (MFPS’24), 21 June, 2024.
The last few years have seen several breakthroughs in the semantic foundations of probabilistic and statistical modelling. Types show clear promise in organising intricate models and the inference algorithms we use to fit them to data. I will present a type-rich and straightforward model, the quasi Borel space, and survey recent and ongoing developments in this area.
Semantic foundations of potential-synthesis for expected amortised-cost analysis
Talk given at the 11th ACM SIGPLAN Workshop on Higher-Order Programming with Effects (HOPE’23), 04 September, 2023.
We describe ongoing work developing a semantic infrastructure for designing automated and interactive amortised cost analyses based on synthesising potential functions for randomised data structures. These analyses can synthesise amortised complexity bounds that are tight and correct by construction. The architecture combines established and recent ideas from program logics for data structure specification, weakest pre-condition quantitative reasoning, semantics of probabilistic programming, and amortised resource analysis synthesis.
Frex: staged-optimisation and equational-proof-synthesis using universal algebra
Talk given at the 21st meeting of IFIP Working Group 2.11: Program Generation, 16 August, 2022.
A recurring task in program generation involves developing data-structures representing semantically-distinct code fragments. We can stage an optimised version of the original program from its representation. Moreover, we may want to extract the detailed steps in the equational proof that the representation is equivalent to its source. These representations are known as partially-static data structures and semantic normal-forms. In recent and ongoing work, we specify these data structures using universal algebraic concepts, the free algebra and its less familiar sibling the free extension. In this talk, I’ll review our application of free extensions to staged-optimisation and proof-synthesis. Depending on audience participation, I’ll either delve further into applications, give a tutorial on designing data structures based on free extensions, or sketch future directions.
An introduction to statistical modelling semantics with higher-order measure theory
Talk given at the Logic of Probabilistic Programming, see course webpage for latest recordings and exercise sheets, 03 February, 2022.
The last few years have seen several breakthroughs in the semantic foundations of statistical modelling. In this tutorial, I will introduce one of these approaches — quasi Borel spaces. We will review and develop a semantic tool-kit for measure theory using higher-order functions. We will apply it to the semantic foundations of generative Bayesian modelling.
Frex: dependently-typed algebraic simplification
Talk given at the Scottish Programming Languages Seminar, 20 October, 2021.
(joint work with Guillaume Allais, Edwin Brady, Nathan Corbyn, and Jeremy Yallop) I’ll present an extensible, mathematically-structured algebraic simplification library design. We structure the library using universal algebraic concepts: a free algebra; and a free extension — frex — of an algebra by a set of variables. The latter concept, the frex, generalises the ‘ring of polynomials over a ring’ to that of an ‘algebra of polynomials over an algebra’. The library’s dependently-typed API guarantees that the simplification modules in the library, even user-defined ones, are terminating, sound, and complete with respect to a well-specified class of equations. Our design is modular in two axes. First, simplification modules share thousands of lines of infrastructure code dealing with term-representation, pretty-printing, certification, and macros/reflection. Second, more advanced simplification modules can reuse existing simplification modules. We demonstrate this design by developing three monoid simplification modules: ordinary, commutative, and involutive monoids. We implemented this design in the new Idris2 dependently-typed programming language, and in Agda. I will demonstrate the Idris2 implementation.
Where do ideas come from?
Talk given at the Mental Strength for Science Unworkshop, 17 August, 2021.
We are in the ideas business: collecting, fleshing out, developing, and communicating them. I will discuss the life-cycle of (my) ideas, and give some pragmatic advice on managing and cultivating them.
Tutorial: equational reasoning and rewriting
Talk given at the Idris Developers Meeting 2021, 28 February, 2021.
When we use dependent types, we sometimes need to convince Idris that two terms are equal. When this argument requires more than a couple of steps, it’s more readable to use equational reasoning to justify this equality. In this tutorial, I’ll explain the current equational reasoning standard library, so that you can do it yourself when you need to. I will also highlight contributions that will improve equational reasoning, and mention some ongoing work involving equations.
Higher-order building blocks for statistical modelling
Talk given at the PPS-PIHOC-DIAPASoN Workshop, 17 February, 2021.
Higher-order functions provide modular programmable abstractions, even if the resulting program is first-order. The PIHOC community have produced several breakthroughs in the foundations for higher-order probability in recent years, enabling us to formulate and manipulate these abstractions for statistical modelling. I will use quasi-Borel spaces to discuss such higher-order building blocks in existing and ongoing work in semantic foundations for statistical modelling, including operational, denotational, and axiomatic semantics, and the design and implementation of statistical inference and simulation systems.
An introduction to ProbProg
Talk given at the Huawei Tech-Talk, Huawei Edinburgh Research Centre, 14 January, 2021.
Probabilistic programming is a collection of techniques for expressing and analysing statistical models as programs. I will present a high-level introduction to ProbProg aimed at compiler engineers and programming language researchers. We’ll go over a few probabilistic programs, and discuss broad questions such as: When might a probabilistic program be appropriate? How might they be used inside a bigger program? What operations might be ‘expensive’? What do ProbProgs ‘compute’? And what happens under the hood in an inference engine?
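As a rough illustration of what a probabilistic program ‘computes’ (a toy sketch of my own, not tied to any particular ProbProg system): a generative model draws from a prior and makes an observation, and conditioning can be implemented, naively, by rejecting runs that disagree with the observation. Real inference engines replace this rejection step with much smarter algorithms; that replacement is what happens under the hood.

```python
import random

def model():
    """A two-line generative model: pick a coin bias, flip it three times."""
    bias = random.choice([0.3, 0.7])                  # prior over the bias
    heads = [random.random() < bias for _ in range(3)]
    return bias, all(heads)                           # observation: three heads

def posterior_prob_biased(n=100_000):
    """Condition on the observation by rejecting non-matching runs."""
    accepted = [bias for bias, obs in (model() for _ in range(n)) if obs]
    return sum(b == 0.7 for b in accepted) / len(accepted)
```

The exact posterior probability of the 0.7-biased coin given three heads is 0.343/0.370 ≈ 0.927, which the rejection sampler approaches as `n` grows.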
Frex: indexing-modulo-equations with free extensions
Talk given at the Seminar on the Foundations of Mathematics and Theoretical Computing, Faculty of Mathematics and Physics, University of Ljubljana, 15 October, 2020.
Joint work with Guillaume Allais, Edwin Brady, and Jeremy Yallop.
I’ll report about ongoing work on an extensible dependently-typed library of algebraic solvers for common algebraic structures based on free extensions (frex): the coproduct of an algebra with a free algebra. One challenge arising when programming with types indexed by values involves convincing the type-checker that two open terms are equal, as variables appearing in open terms get in the way of standard normalisation procedures like beta-reduction.
Here I’ll focus on indexing by terms amenable to equational reasoning. When the indexing type is the carrier for an algebraic structure, we hypothesise that type-checking modulo this theory amounts to representing the free extension of the algebra structure of this type with a finite set of variables. We propose a library involving two tiers: a user-facing tier for indexing types with algebraic values (‘frexlets’), and a library-developer-facing tier for extending the library to support new algebraic theories (‘core frex’). I’ll report and demonstrate our implementation in Brady’s upcoming Idris2 programming language.
Frex: indexing-modulo-equations with free extensions
Talk given at the Programming Languages at University of Glasgow (PLUG) Seminar, School of Computing Science, University of Glasgow, 13 October, 2020.
Joint work with Guillaume Allais, Edwin Brady, and Jeremy Yallop.
I’ll report about ongoing work on an extensible dependently-typed library of algebraic solvers for common algebraic structures based on free extensions (frex): the coproduct of an algebra with a free algebra. One challenge arising when programming with types indexed by values involves convincing the type-checker that two open terms are equal, as variables appearing in open terms get in the way of standard normalisation procedures like beta-reduction.
Here I’ll focus on indexing by terms amenable to equational reasoning. When the indexing type is the carrier for an algebraic structure, we hypothesise that type-checking modulo this theory amounts to representing the free extension of the algebra structure of this type with a finite set of variables. We propose a library involving two tiers: a user-facing tier for indexing types with algebraic values (‘frexlets’), and a library-developer-facing tier for extending the library to support new algebraic theories (‘core frex’). I’ll report and demonstrate our implementation in Brady’s upcoming Idris2 programming language.
A ProbProg Language Taxonomy
Talk given at the 36th International Conference on Mathematical Foundations of Programming Semantics (MFPS’20) Special Session on Probabilistic Programming, 05 June, 2020.
Modern probabilistic programming languages come in many flavours. In this talk I will propose two axes for comparing and contrasting these flavours through their statistical modelling primitives, and describe work-in-progress building semantic bridges between these flavours.
A modern perspective on the O’Hearn-Riecke model
Talk given at the Syntax and Semantics of Low-Level Languages (LOLA’19), 23 June, 2019.
O’Hearn and Riecke gave an extensional fully abstract model of PCF. The construction uses concrete logical relations of varying arity (unary, binary, ternary, etc.) to remove junk from the extensional domain-theoretic model. In ongoing work, we recast their model using modern semantic machinery, such as monads and their algebras, monadic lifting, and definability by TT-closure. We review the model from this perspective, and report on our progress.
Where do ideas come from?
Talk given at the 4th Logic Mentoring Workshop, affiliated with LICS’19, 11 April, 2019.
We are in the ideas business: collecting, fleshing out, developing, and communicating them. I will discuss the life-cycle of (my) ideas, and give some pragmatic advice on managing and cultivating them.
[Eff-Bayes: modular implementations of approximate Bayesian inference with effect handlers](https://github.com/ohad/eff-bayes/tree/master/slides/kaist)
Talk given at the School of Computing, KAIST, 11 April, 2019.
I will describe work-in-progress, implementing a library for Bayesian inference algorithms for statistical probabilistic programming in the programming language Eff using algebraic effects and handlers. During the demonstration, I will introduce both programming with effects handlers and probabilistic programming.
Joint work with Matija Pretnar, Žiga Lukšič, Oliver Goldstein, and Adam Ścibior.
[A tutorial on quasi-Borel spaces](https://www.cs.ubc.ca/event/2019/03/tutorial-quasi-borel-spaces-ohad-kammar-university-edinburgh)
Talk given at the Department of Computer Science, the University of British Columbia, 22 March, 2019.
We have used Quasi-Borel spaces, a new mathematical structure, as a foundation for probabilistic programming and higher-order statistics. In this informal talk, I will introduce this alternative to traditional measure theory. We will cover the basic definition, and the constructions relevant to modelling and verification.
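For reference, the basic definition covered in the tutorial (following Heunen, Kammar, Staton, and Yang’s construction; the notation here is mine):

```latex
\textbf{Definition.} A \emph{quasi-Borel space} is a set $X$ equipped with a set
$M_X \subseteq (\mathbb{R} \to X)$ of admissible random elements such that:
(1) every constant function is in $M_X$;
(2) $\alpha \circ f \in M_X$ whenever $\alpha \in M_X$ and
    $f : \mathbb{R} \to \mathbb{R}$ is Borel-measurable; and
(3) if $\mathbb{R} = \biguplus_{i \in \mathbb{N}} S_i$ is a countable Borel partition
    and $\alpha_i \in M_X$ for each $i$, then the case-split
    $\beta(r) = \alpha_i(r)$ for $r \in S_i$ is in $M_X$.
A morphism $(X, M_X) \to (Y, M_Y)$ is a function $g : X \to Y$ such that
$g \circ \alpha \in M_Y$ for every $\alpha \in M_X$.
```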
A tutorial on quasi-Borel spaces
Talk given at the Workshop on Higher-order probabilistic computation, Bellairs Research Institute, 17 March, 2019.
We have used Quasi-Borel spaces, a new mathematical structure, as a foundation for probabilistic programming and higher-order statistics. In this informal talk, I will introduce this alternative to traditional measure theory. We will cover the basic definition, and the constructions relevant to modelling and verification.
[Eff-Bayes: modular implementations of approximate Bayesian inference with effect handlers](https://github.com/ohad/eff-bayes/tree/master/slides/spls-st-andrews-2019)
Talk given at the Scottish Programming Language Seminar, [University of St. Andrews School of Computer Science](https://www.cs.st-andrews.ac.uk/), 13 March, 2019.
I will describe work-in-progress, implementing a library for Bayesian inference algorithms for statistical probabilistic programming in the programming language Eff using algebraic effects and handlers.
Joint work with Matija Pretnar, Žiga Lukšič, Oliver Goldstein, and Adam Ścibior.
A domain theory for statistical probabilistic programming
Talk given at the Logic and Semantics Seminar, Programming, Logic, and Semantics Group, University of Cambridge Computer Laboratory, 22 February, 2019.
I will describe our recent work on statistical probabilistic programming languages. These are expressive languages for describing generative Bayesian models of the kinds used in computational statistics and machine learning. We give an adequate denotational semantics for a calculus with recursive higher-order types, continuous probability distributions, and soft constraints. Because our semantics allows recursive mixed-variance datatypes, it also accommodates untyped languages, similar to Church and WebPPL. Our semantics justifies important program equivalences including commutativity.
Our new semantic model is based on ‘quasi-Borel predomains’. These are a mixture of chain-complete partial orders (cpos) and quasi-Borel spaces. Quasi-Borel spaces are a recent model of probability theory that focuses on sets of admissible random elements. I will give a brief introduction to quasi-Borel spaces and predomains, and their properties.
Probability is traditionally treated in cpo models using probabilistic powerdomains, but these are not known to be commutative on any class of cpos with higher-order functions. By contrast, quasi-Borel predomains do support both a commutative probabilistic powerdomain and higher-order functions, which I will describe.
For more details on this joint work with Matthijs Vákár and Sam Staton, see:
Matthijs Vákár, Ohad Kammar, and Sam Staton. 2019. A Domain Theory for Statistical Probabilistic Programming. Proc. ACM Program. Lang. 3, POPL, Article 36 (January 2019), 35 pages., DOI: 10.1145/3290349.
A domain theory for statistical probabilistic programming
Talk given at the Seminar za temelje matematike in teoretično računalništvo, Faculty of Mathematics and Physics, University of Ljubljana, 14 February, 2019.
I will describe our recent work on statistical probabilistic programming languages. These are expressive languages for describing generative Bayesian models of the kinds used in computational statistics and machine learning. We give an adequate denotational semantics for a calculus with recursive higher-order types, continuous probability distributions, and soft constraints. Because our semantics allows recursive mixed-variance datatypes, it also accommodates untyped languages, similar to Church and WebPPL. Our semantics justifies important program equivalences including commutativity.
Our new semantic model is based on ‘quasi-Borel predomains’. These are a mixture of chain-complete partial orders (cpos) and quasi-Borel spaces. Quasi-Borel spaces are a recent model of probability theory that focuses on sets of admissible random elements. I will give a brief introduction to quasi-Borel spaces and predomains, and their properties.
Probability is traditionally treated in cpo models using probabilistic powerdomains, but these are not known to be commutative on any class of cpos with higher-order functions. By contrast, quasi-Borel predomains do support both a commutative probabilistic powerdomain and higher-order functions, which I will describe.
For more details on this joint work with Matthijs Vákár and Sam Staton, see:
Matthijs Vákár, Ohad Kammar, and Sam Staton. 2019. A Domain Theory for Statistical Probabilistic Programming. Proc. ACM Program. Lang. 3, POPL, Article 36 (January 2019), 35 pages., DOI: 10.1145/3290349.
A domain theory for statistical probabilistic programming
Talk given at the Second Workshop on Probabilistic Interactive and Higher-Order Computation (PIHOC), 07 February, 2019.
I will describe our recent work on statistical probabilistic programming languages. These are expressive languages for describing generative Bayesian models of the kinds used in computational statistics and machine learning. We give an adequate denotational semantics for a calculus with recursive higher-order types, continuous probability distributions, and soft constraints. Because our semantics allows recursive mixed-variance datatypes, it also accommodates untyped languages, similar to Church and WebPPL. Our semantics justifies important program equivalences including commutativity.
Our new semantic model is based on ‘quasi-Borel predomains’. These are a mixture of chain-complete partial orders (cpos) and quasi-Borel spaces. Quasi-Borel spaces are a recent model of probability theory that focuses on sets of admissible random elements. I will give a brief introduction to quasi-Borel spaces and predomains, and their properties.
Probability is traditionally treated in cpo models using probabilistic powerdomains, but these are not known to be commutative on any class of cpos with higher-order functions. By contrast, quasi-Borel predomains do support both a commutative probabilistic powerdomain and higher-order functions, which I will describe.
For more details on this joint work with Matthijs Vákár and Sam Staton, see:
Matthijs Vákár, Ohad Kammar, and Sam Staton. 2019. A Domain Theory for Statistical Probabilistic Programming. Proc. ACM Program. Lang. 3, POPL, Article 36 (January 2019), 35 pages., DOI: 10.1145/3290349.
A domain theory for statistical probabilistic programming
Talk given at the LSV Seminar, Laboratoire Spécification et Vérification, École normale supérieure Paris-Saclay, Paris, 29 January, 2019.
I will describe our recent work on statistical probabilistic programming languages. These are expressive languages for describing generative Bayesian models of the kinds used in computational statistics and machine learning. We give an adequate denotational semantics for a calculus with recursive higher-order types, continuous probability distributions, and soft constraints. Because our semantics allows recursive mixed-variance datatypes, it also accommodates untyped languages, similar to Church and WebPPL. Our semantics justifies important program equivalences including commutativity.
Our new semantic model is based on ‘quasi-Borel predomains’. These are a mixture of chain-complete partial orders (cpos) and quasi-Borel spaces. Quasi-Borel spaces are a recent model of probability theory that focuses on sets of admissible random elements. I will give a brief introduction to quasi-Borel spaces and predomains, and their properties.
Probability is traditionally treated in cpo models using probabilistic powerdomains, but these are not known to be commutative on any class of cpos with higher-order functions. By contrast, quasi-Borel predomains do support both a commutative probabilistic powerdomain and higher-order functions, which I will describe.
For more details on this joint work with Matthijs Vákár and Sam Staton, see:
Matthijs Vákár, Ohad Kammar, and Sam Staton. 2019. A Domain Theory for Statistical Probabilistic Programming. Proc. ACM Program. Lang. 3, POPL, Article 36 (January 2019), 35 pages., DOI: 10.1145/3290349.
A domain theory for statistical probabilistic programming
Talk given at the 7th ACM SIGPLAN Workshop on Higher-Order Programming with Effects, 23 September, 2018.
We describe ongoing work investigating a convenient category of predomains and a probabilistic powerdomain construction suitable for statistical probabilistic programming semantics, as used in statistical modelling and machine learning. Our goal is to give adequate semantics to a probabilistic higher-order language with recursive types, including types for powers of real numbers and arbitrary measurable functions between them.
Specifically, we provide (1) a cartesian closed category; (2) whose objects are (pre)-domains; and (3) a commutative monad for continuous probabilistic choice and Bayesian conditioning. Jones and Plotkin have shown that conditions (2)–(3) hold when one restricts attention to continuous domains, and Jung and Tix have proposed to search for a suitable category of continuous domains possessing all three properties (1)–(3), a question that remains open to date.
We propose an alternative direction, considering spaces with separate, but compatible, measure-theoretic and domain-theoretic structures. On the domain-theoretic side, we require posets with suprema of countably increasing chains (omega-cpos). On the measure-theoretic side, we require a quasi-Borel space (qbs) structure, a recently introduced algebraic structure suitable for modelling higher-order probability theory. There are three equivalent characterisations of this category given by: imposing an order-theoretic separatedness condition on countable-product-preserving omega-cpo-valued contra-variant functors; internal omega-cpos in the quasi-topos of quasi-Borel spaces; and an essentially algebraic presentation. The category of these omega-qbses validates Fiore and Plotkin’s axiomatic domain theory, yielding semantics for recursive types. We construct a commutative powerdomain construction by factorising the Lebesgue integral from the space of random elements to the space of countably additive linear integration operators.
Joint work with Sam Staton and Matthijs Vákár
A domain theory for quasi-Borel spaces and statistical probabilistic programming
Invited talk given at the International Workshop on Domain Theory and its Applications, 08 July, 2018.
I will describe ongoing work investigating a convenient category of pre-domains and a probabilistic powerdomain construction suitable for statistical probabilistic programming semantics, as used in statistical modelling and machine learning. Specifically, we provide (1) a cartesian closed category; (2) whose objects are (pre)-domains; and (3) a commutative monad for probabilistic choice and Bayesian conditioning. Jones and Plotkin have shown that conditions (2)–(3) hold when one restricts attention to continuous domains, and Jung and Tix have proposed to search for a suitable category of continuous domains possessing all three properties (1)–(3), a question that remains open to date.
I propose an alternative direction, considering spaces with separate, but compatible, measure-theoretic and domain-theoretic structures. On the domain-theoretic side, we require posets with suprema of countably increasing chains (omega-cpos). On the measure-theoretic side, we require a quasi-Borel space (qbs) structure, a recently introduced algebraic structure suitable for modelling higher-order probability theory. There are three equivalent characterisations of this category given by: imposing an order-theoretic separatedness condition on countable-product-preserving omega-cpo-valued presheaves; internal omega-cpos in the quasi-topos of quasi-Borel spaces; and an essentially algebraic presentation. The category of these omega-qbses validates Fiore and Plotkin’s axiomatic domain theory, yielding semantics for recursive types. To conclude, I will describe a commutative powerdomain construction given by factorising the Lebesgue integral from the space of random elements to the space of sigma-linear integration operators.
A monad for full ground reference cells
Talk given at the Theoretische Informatik Seminar, Department of Computer Science, Faculty of Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 25 June, 2018.
I will describe how to build a monad on a functor category to model dynamic allocation of potentially cyclic memory cells. In particular, I’ll explain how to tackle the challenge of ‘effect masking’, which means roughly that ‘if you don’t need to know about memory access then you can’t detect it’. I will use this monad to give a denotational account of an ML-like language with reference cells, and validate associated program transformations.
I will explain the main insight behind our construction: identifying the collection of stores as an object in a different functor category equipped with a monad for adding hiding/encapsulation capabilities to the stores. I will then obtain the full ground storage monad by applying a state monad transformer to the encapsulation monad.
The talk is based on joint work with Paul B. Levy, Sean K. Moss, and Sam Staton (http://arxiv.org/abs/1702.04908).
A domain theory for quasi-Borel spaces
Talk given at the Oxford Quantum Day 2018, 19 June, 2018.
Talk given at the Dagstuhl seminar 18172: algebraic effects go mainstream, 24 April, 2018.
I will give a brief overview of probabilistic programming for statistical modelling and machine learning, report on recent advances in the modular validation of similarly modular implementations of generic Bayesian inference algorithms such as Sequential Monte Carlo and Trace Markov Chain Monte Carlo, and report on ongoing work on a Haskell implementation of these ideas and its performance evaluation against existing probabilistic programming systems.
Joint with Adam Ścibior, Matthijs Vákár, Sam Staton, Hongseok Yang, Yufei Cai, Klaus Ostermann, Zoubin Ghahramani, Sean K. Moss, and Chris Heunen.
Denotational validation of Bayesian inference
Talk given at the Mobility Reading Group Seminar, Department of Computing, Imperial College London, 11 April, 2018.
I will present a semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in Bayesian data science and machine learning. Sophisticated inference algorithms are often explained in terms of composition of smaller parts. However, neither their theoretical justification nor their implementation reflects this modularity. I will show how to conceptualise and analyse such inference algorithms as manipulating intermediate representations of probabilistic programs using higher-order functions and inductive types, and their denotational semantics.
Semantic accounts of continuous distributions use measurable spaces. However, this use of higher-order functions presents a substantial technical difficulty: it is impossible to define a measurable space structure over the collection of measurable functions between arbitrary measurable spaces that is compatible with standard operations on those functions, such as application. I will explain how to overcome this difficulty using quasi-Borel spaces, a recently proposed mathematical structure that supports both function spaces and continuous distributions. To make the semantic manipulation closer to its traditional measure theoretic origins, I will review and use Kock’s synthetic measure theory. This notation was useful for proving a quasi-Borel counterpart to the Metropolis-Hastings-Green theorem.
Joint with Adam Ścibior, Matthijs Vákár, Sam Staton, Hongseok Yang, Yufei Cai, Klaus Ostermann, Zoubin Ghahramani, Sean K. Moss, and Chris Heunen.
Quasi-Borel spaces and the validation of Bayesian inference algorithms
Talk given at the Workshop on Probabilistic Interactive and Higher-Order Computation (PIHOC), 22 February, 2018.
I will present a semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in Bayesian data science and machine learning. Sophisticated inference algorithms are often explained in terms of composition of smaller parts. However, neither their theoretical justification nor their implementation reflects this modularity. I will show how to conceptualise and analyse such inference algorithms as manipulating intermediate representations of probabilistic programs using higher-order functions and inductive types, and their denotational semantics.
Semantic accounts of continuous distributions use measurable spaces. However, this use of higher-order functions presents a substantial technical difficulty: it is impossible to define a measurable space structure over the collection of measurable functions between arbitrary measurable spaces that is compatible with standard operations on those functions, such as application. I will explain how to overcome this difficulty using quasi-Borel spaces, a recently proposed mathematical structure that supports both function spaces and continuous distributions. To make the semantic manipulation closer to its traditional measure theoretic origins, I will review and use Kock’s synthetic measure theory. This notation was useful for proving a quasi-Borel counterpart to the Metropolis-Hastings-Green theorem.
Joint with Adam Ścibior, Matthijs Vákár, Sam Staton, Hongseok Yang, Yufei Cai, Klaus Ostermann, Zoubin Ghahramani, Sean K. Moss, and Chris Heunen.
On the expressive power of user-defined effects: effect handlers, monadic reflection, delimited control
Talk given at the 22nd ACM SIGPLAN International Conference on Functional Programming, 04 September, 2017.
We compare the expressive power of three programming abstractions for user-defined computational effects: Bauer and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control without answer-type-modification. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features.
We present three calculi, one per abstraction, extending Levy’s call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic meta-theoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. We supplement our development with a mechanised Abella formalisation.
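As a rough intuition for the effect-handler abstraction compared in the paper, here is a sketch of my own using Python generators as one-shot continuations. This does not model the paper’s call-by-push-value calculi, and one-shot generators cannot express handlers that resume a continuation more than once; it only illustrates the shape of operations and a handler that interprets them.

```python
def handle_state(comp, init):
    """Interpret 'get'/'put' operations yielded by the generator `comp()`.

    The handler resumes the suspended computation (the continuation) via
    `send`, threading a state value through the run.
    """
    state = init
    gen = comp()
    try:
        op = next(gen)
        while True:
            if op == ("get",):
                op = gen.send(state)   # resume with the current state
            else:                      # ("put", v)
                _, state = op
                op = gen.send(None)    # resume with a unit result
    except StopIteration as fin:
        return fin.value, state        # final value paired with final state

def counter():
    """A computation using the state operations abstractly."""
    yield ("put", 1)
    x = yield ("get",)
    yield ("put", x + 1)
    y = yield ("get",)
    return y
```

Running `handle_state(counter, 0)` interprets the operations and yields the pair `(2, 2)`: the returned value and the final state.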
Denotational validation of Bayesian inference
Talk given at the Séminaire PLUME École normale supérieure de Lyon, 12 July, 2017.
I will present a semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in Bayesian data science and machine learning. Sophisticated inference algorithms are often explained in terms of composition of smaller parts. However, neither their theoretical justification nor their implementation reflects this modularity. I will show how to conceptualise and analyse such inference algorithms as manipulating intermediate representations of probabilistic programs using higher-order functions and inductive types, and their denotational semantics.
Semantic accounts of continuous distributions use measurable spaces. However, this use of higher-order functions presents a substantial technical difficulty: it is impossible to define a measurable space structure over the collection of measurable functions between arbitrary measurable spaces that is compatible with standard operations on those functions, such as application. I will explain how to overcome this difficulty using quasi-Borel spaces, a recently proposed mathematical structure that supports both function spaces and continuous distributions.
More specifically, I will define a semantic class of structures for representing probabilistic programs, and semantic validity criteria for transformations of these representations in terms of distribution preservation. I will review a collection of building blocks for composing representations, and show how to use these building blocks to validate common inference algorithms such as Sequential Monte Carlo and Markov Chain Monte Carlo. To make the semantic manipulation closer to its traditional measure theoretic origins, I will review and use Kock’s synthetic measure theory. This notation was useful for proving a quasi-Borel counterpart to the Metropolis-Hastings-Green theorem, which I will also describe.
Joint with Adam Ścibior, Matthijs Vákár, Sam Staton, Hongseok Yang, Yufei Cai, Klaus Ostermann, Zoubin Ghahramani, Sean K. Moss, and Chris Heunen.
A monad for full ground reference cells
Talk given at the 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, 23 June, 2017.
We present a denotational account of dynamic allocation of potentially cyclic memory cells using a monad on a functor category. We identify the collection of heaps as an object in a different functor category equipped with a monad for adding hiding/encapsulation capabilities to the heaps. We derive a monad for full ground references supporting effect masking by applying a state monad transformer to the encapsulation monad. To evaluate the monad, we present a denotational semantics for a call-by-value calculus with full ground references, and validate associated code transformations.
A convenient category for higher-order probability theory
Talk given at the 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, 20 June, 2017.
Higher-order probabilistic programming languages allow programmers to write sophisticated models in machine learning and statistics in a succinct and structured way, but step outside the standard measure-theoretic formalization of probability theory. Programs may use both higher-order functions and continuous distributions, or even define a probability distribution on functions. But standard probability theory cannot support higher-order functions, that is, the category of measurable spaces is not cartesian closed.
Here we introduce quasi-Borel spaces. We show that these spaces: form a new formalization of probability theory replacing measurable spaces; form a cartesian closed category and so support higher-order functions; form an extensional category and so support good proof principles for equational reasoning; and support continuous probability distributions. We demonstrate the use of quasi-Borel spaces for higher-order functions and probability by: showing that a well-known construction of probability theory involving random functions gains a cleaner expression; and generalizing de Finetti’s theorem, that is a crucial theorem in probability theory, to quasi-Borel spaces.
Quasi-Borel spaces
Talk given at the University of Oxford Department of Computer Science Quantum Group Workshop, 13 May, 2017.
I will explain what quasi-Borel spaces are, and outline how we’ve been using them as a convenient alternative to measurable spaces in probability theory and probabilistic programming that has well-behaved higher-order, logical, and algebraic structure.
(Joint work with Chris Heunen, Adam Scibior, Sam Staton, Matthijs Vakar, and Hongseok Yang.)
A monad for full ground reference cells
Talk given at the Oxford Advanced Seminar on Informatic Structures (OASIS), Foundations Logic and Structures theme, University of Oxford Department of Computer Science, 10 March, 2017.
I will describe how to build a monad on a functor category to model dynamic allocation of potentially cyclic memory cells. In particular, I’ll explain how to tackle the challenge of ‘effect masking’, which means roughly that if you don’t need to know about memory access then you can’t detect it. I will use this monad to give a denotational account of an ML-like language with reference cells, and to validate associated program transformations.
I will explain the main insight behind our construction: identifying the collection of stores as an object in a different functor category equipped with a monad for adding hiding/encapsulation capabilities to the stores. I will then obtain the full ground storage monad by applying a state monad transformer to the encapsulation monad.
The talk is based on work with: Paul B. Levy, Sean K. Moss, and Sam Staton (http://arxiv.org/abs/1702.04908).
A monad for full ground reference cells
Talk given at the Logic and Semantics Seminar, Programming, Logic, and Semantics Group, University of Cambridge Computer Laboratory, 17 February, 2017.
We present a denotational account of dynamic allocation of potentially cyclic memory cells using a monad on a functor category. We identify the collection of heaps as an object in a different functor category equipped with a monad for adding hiding/encapsulation capabilities to the heaps. We derive a monad for full ground references supporting effect masking by applying a state monad transformer to the encapsulation monad. To evaluate the monad, we present a denotational semantics for a call-by-value calculus with full ground references, and validate associated code transformations.
Joint work with: Paul B. Levy, Sean K. Moss, and Sam Staton.
On the expressive power of user-defined effects: effect handlers, monadic reflection, delimited control without answer-type-modification
Talk given at the LSD Seminar, Programming, Logic, and Semantics Group, University of Cambridge Computer Laboratory, 17 February, 2017.
We compare the expressive power of three programming abstractions for user-defined computational effects: Bauer and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control. This comparison allows a precise discussion about the relative merits of each programming abstraction.
We present three calculi, one per abstraction, extending Levy’s call-by-push-value. These comprise syntax, operational semantics, a natural type-and-effect system, and, for handlers and reflection, a set-theoretic denotational semantics. We establish their basic meta-theoretic properties: adequacy, soundness, and strong normalisation. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. We supplement our development with a mechanised Abella formalisation.
Joint work with Yannick Forster, Sam Lindley, and Matija Pretnar.
Talk given at the Mathematically Structured Programming Group, University of Strathclyde Computer and Information Sciences Department, 06 February, 2017.
A monad for full ground reference cells
Talk given at the LFCS Seminar, Laboratory for Foundations of Computer Science, University of Edinburgh School of Informatics, 31 January, 2017.
We present a denotational account of dynamic allocation of potentially cyclic memory cells using a monad on a functor category. We identify the collection of heaps as an object in a different functor category equipped with a monad for adding hiding/encapsulation capabilities to the heaps. We derive a monad for full ground references supporting effect masking by applying a state monad transformer to the encapsulation monad. To evaluate the monad, we present a denotational semantics for a call-by-value calculus with full ground references, and validate associated code transformations.
Joint work with: Paul B. Levy, Sean Moss, and Sam Staton.
On the Expressive Power of Algebraic Effects and Handlers
Talk given at the 4th South of England Regional Programming Language Seminar (S-REPLS4), Imperial College London, 27 September, 2016.
Monads are the de-facto mechanism for adding user-defined computational effects to programming languages. Support for monads has been developed in general-purpose statically-typed functional programming languages such as Haskell and OCaml, proof assistants such as Coq, and other verification tools such as F-star. Algebraic effects and handlers are a recently developed programming abstraction that allows programmers to define and program with custom effects using a specific delimited-control structure. This control structure has a straightforward type system, computational intuition, operational and denotational semantics, and reasoning principles.
Our goal is to compare the expressive power of programming constructs facilitating user-defined effects. We consider a calculus of effect handlers, Filinski’s calculus of monadic reflection, which encapsulates user-defined monads as a programmable abstraction, and a calculus of the standard delimited control operators shift-zero and reset. We compare the calculi using Felleisen’s notion of macro translations. When we ignore the natural type systems associated with each calculus, each pair of abstractions is macro inter-translatable, but these translations do not preserve typability. We present preliminary results regarding the non-existence of typability-preserving macro-translations, and extensions to the type system that preserve typability.
Joint work with Yannick Forster, Sam Lindley, and Matija Pretnar.
Functional models of full ground, and general, reference cells
Talk given at the HOPE’16 Workshop, 18 September, 2016.
We present ongoing work developing a straightforward denotational semantics for reference cells of arbitrary types based on sets-with-structure and structure-preserving functions. We start with full ground references — which can refer to ground values and (recursively) ground references — where types denote set-valued presheaves. By considering stores as mixed-variance set-valued functors (i.e., profunctors), we obtain two monads for full ground state which can interpret allocation, dereference, and mutation of full ground references, while validating the usual equations. We use this structure to give a denotational semantics to a monadic metalanguage with a special construct for effect masking similar to Haskell’s runST. Time permitting, we speculate how to extend this approach, using an appropriate recursive domain equation, to account for ML-like general reference cells.
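The effect-masking construct mentioned above resembles Haskell’s runST, which gives a concrete point of comparison (a standard-library illustration, not the semantics itself): mutation of a locally allocated cell is invisible from the outside, so the enclosing function is pure.

```haskell
-- Effect masking via runST (Control.Monad.ST and Data.STRef ship with GHC):
-- the STRef is allocated, mutated, and read entirely inside runST, so no
-- reference escapes and the result is an ordinary pure value.
import Control.Monad.ST
import Data.STRef

sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- allocate a local mutable cell
  mapM_ (\x -> modifySTRef acc (+ x)) xs   -- mutate it while traversing
  readSTRef acc                            -- read it; the cell never escapes

main :: IO ()
main = print (sumST [1 .. 10])             -- prints 55
```

The rank-2 type of runST is what enforces the masking: a computation whose result mentions the state thread’s region parameter is rejected by the type checker.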
No value restriction is needed for algebraic effects and handlers
Talk given at the Syntax and Semantics of Low-Level Languages (LOLA’16), 10 July, 2016.
No value restriction is needed for algebraic effects and handlers
Talk given at the Seminaire Gallium, INRIA Paris Rocquencourt, France, 17 June, 2016.
We present a straightforward, sound Hindley-Milner polymorphic type system for algebraic effects and handlers in a call-by-value calculus, which allows type variable generalisation of arbitrary computations, not just values. This result is surprising. On the one hand, the soundness of unrestricted call-by-value Hindley-Milner polymorphism is known to fail in the presence of computational effects such as reference cells and continuations. On the other hand, many programming examples can be recast to use effect handlers instead of these effects. Analysing the expressive power of effect handlers with respect to state effects, we claim handlers cannot express reference cells, and show they can simulate dynamically scoped state. Time permitting, we will discuss the denotational soundness theorem that led to this work.
Joint work with Sean Moss and Matija Pretnar.
No value restriction is needed for algebraic effects and handlers
Talk given at the ANTIQUE seminar, École normale supérieure, Paris, France, 16 June, 2016.
No value restriction is needed for algebraic effects and handlers
Talk given at the 22nd International Conference on Types for Proofs and Programs (TYPES 2016), 26 May, 2016.
No value restriction is needed for algebraic effects and handlers
Talk given at the Logic and Semantics group seminar, Department of Computer Science, University of Aarhus, Denmark, 21 March, 2016.
No value restriction is needed for algebraic effects and handlers
Talk given at the Schloss Dagstuhl Seminar ‘From theory to practice of algebraic effects and handlers’, 18 March, 2016.
A denotational semantics for Hindley-Milner Polymorphism
Talk given at the 4th ACM SIGPLAN Workshop on Higher-Order Programming with Effects (HOPE’15), 30 August, 2015.
Hindley-Milner let-polymorphism in the presence of computational effects, notably local state mixed with type recursion, requires restriction to achieve type soundness. Existing solutions either restrict generalisation over type variables, excluding opportunities for sound polymorphism, or use types that expose the effectful nature of the implementation. Our goal is to analyse the problem and existing solutions denotationally, with the hope that such models will provide new insights.
We present an intrinsically-typed (Church-style) zeroth-order calculus of let-polymorphism. We define a fibrational denotational semantics for the pure calculus, drawing on ideas from models of the polymorphic lambda calculus, modifying Seely’s PL categories to use Ulmer’s relative adjunctions. We outline ongoing and future work on how to extend the pure semantics to include higher-order functions and relative monads, and to incorporate local state and recursive types into models via recursive domain equations. We conclude by speculating how to use these postulated models to analyse the value restriction and other solutions.
A syllabus for algebraic effects (invited tutorial)
Talk given at the Mathematical Foundations of Programming Semantics XXXI (MFPS’15), 25 June, 2015.
A universal characterisation of locally determined omega-colimits
Talk given at the Domains XI Workshop, 09 September, 2014.
Characterising colimiting omega-cocones of projection pairs in terms of least upper bounds of their embeddings and projections is important to the solution of recursive domain equations. We present a universal characterisation of this local property as omega-cocontinuity of locally continuous functors. We give a straightforward proof using the enriched Yoneda embedding. The proof can be generalised to Cattani and Fiore’s notion of locality for adjoint pairs.
Graphical algebraic foundations for monad stacks
Talk given at the 3rd ACM SIGPLAN Workshop on Higher-Order Programming with Effects (HOPE’14), 31 August, 2014.
Purely functional languages use sequences of monad transformers called monad stacks to incorporate computational effects modularly. The current practice is to find the appropriate stack for a given task intuitively, using intractable brute force and heuristics. We investigate a systematic alternative. By restricting attention to algebraic stack combinations, we provide a linear-time algorithm for generating all the appropriate monad stacks, or deciding that no such stack exists. Our approach is based on Hyland, Plotkin, and Power’s algebraic analysis of monad transformers, which proposes a graph-theoretical solution to this problem. We extend their analysis with a straightforward connection to series-parallel graphs, and demonstrate a web-tool for generating monad stacks from a graph denoting the commutative interaction of effects.
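The stack-selection problem the abstract alludes to is visible in a few lines of Haskell (an illustrative example using the standard mtl library, not the authors’ web-tool): combining state and exceptions in the two possible orders yields observably different monads.

```haskell
-- Two monad stacks combining state and exceptions; the order of the
-- transformers determines whether state survives a raised error.
import Control.Monad.State
import Control.Monad.Except

-- Stack A: exceptions layered over state.
progA :: ExceptT String (State Int) ()
progA = do
  lift (modify (+ 1))        -- increment the state...
  throwError "boom"          -- ...then fail

-- Stack B: state layered over exceptions.
progB :: StateT Int (Except String) ()
progB = do
  modify (+ 1)
  throwError "boom"

main :: IO ()
main = do
  print (runState (runExceptT progA) 0)  -- (Left "boom",1): state survives
  print (runExcept (runStateT progB 0))  -- Left "boom": state is discarded
```

Stack A keeps the state written before the error, while stack B discards it; choosing between such stacks for a given combination of effects is exactly the kind of decision the algorithm systematises.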
Graphical algebraic foundations for monad stacks
Talk given at the Electronic and Software Systems seminar, University of Southampton, 30 July, 2014.
Purely functional languages, e.g. Haskell, incorporate computational effects modularly using sequences of monad transformers, termed monad stacks. The current practice is to find the appropriate stack for a given task intuitively. By restricting attention to algebraic stack combinations, we provide a linear-time algorithm for generating all the appropriate monad stacks, or deciding that no such stacks exist. Our approach is based on Hyland, Plotkin, and Power’s algebraic analysis of monad transformers, which proposes a graph-theoretical solution to this problem. We extend their analysis with a straightforward connection to the modular decomposition of a graph and to cographs, a.k.a. series-parallel graphs. We present an accessible and self-contained account of this monad-stack generation problem, and, more generally, of the decomposition of a combined algebraic theory into sums and tensors, and its algorithmic solution. We provide a web-tool implementing this algorithm intended for semantic investigations of effect combinations and for monad stack generation.
Graphical algebraic foundations for monad stacks
Talk given at the University of Edinburgh LFCS PLInG, 02 June, 2014.
Haskell incorporates computational effects modularly using sequences of monad transformers, termed monad stacks. The current practice is to find the appropriate stack for a given task using intractable brute force and heuristics. By restricting attention to algebraic stack combinations, we provide a linear-time algorithm for generating all the appropriate monad stacks, or deciding that no such stacks exist. Our approach is based on Hyland, Plotkin, and Power’s algebraic analysis of monad transformers, which proposes a graph-theoretical solution to this problem. We extend their analysis with a straightforward connection to the modular decomposition of a graph and to cographs, a.k.a. series-parallel graphs. We present an accessible and self-contained account of this monad-stack generation problem, and, more generally, of the decomposition of a combined algebraic theory into sums and tensors, and its algorithmic solution. We provide a web-tool implementing this algorithm intended for semantic investigations of effect combinations and for monad stack generation.
Graphical algebraic foundations for monad stacks
Talk given at the University of Strathclyde MSP 101 seminar series, 28 May, 2014.
Haskell incorporates computational effects modularly using sequences of monad transformers, termed monad stacks. The current practice is to find the appropriate stack for a given task using intractable brute force and heuristics. By restricting attention to algebraic stack combinations, we provide a linear-time algorithm for generating all the appropriate monad stacks, or deciding that no such stacks exist. Our approach is based on Hyland, Plotkin, and Power’s algebraic analysis of monad transformers, which proposes a graph-theoretical solution to this problem. We extend their analysis with a straightforward connection to the modular decomposition of a graph and to cographs, a.k.a. series-parallel graphs. We present an accessible and self-contained account of this monad-stack generation problem, and, more generally, of the decomposition of a combined algebraic theory into sums and tensors, and its algorithmic solution. We provide a web-tool implementing this algorithm intended for semantic investigations of effect combinations and for monad stack generation.
An algebraic theory of type and effect systems
Talk given at the University of Cambridge Computer Laboratory Logic and Semantics Seminar, 13 December, 2013.
We present a theory of Gifford-style type-and-effect annotations, where effect annotations are sets of effects, such as memory accesses or exceptions. Our theory accounts for effect-dependent program transformations for functional-imperative languages, as used in optimizing compilers. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasizes the operations causing the effects at hand. The key observation is that annotation effects denote algebraic operations. After presenting our general type-and-effect system and its semantics, we validate and generalize existing optimizations and add new ones. Our theory also suggests a classification of these optimizations into three classes, structural, local, and global: structural optimizations always hold; local ones depend on the effect theory at hand; and global ones depend on the global nature of that theory, such as idempotency or absorption laws. We also give modularly-checkable necessary and sufficient conditions for validating the optimizations. Joint work with Gordon Plotkin.
An algebraic theory of type and effect systems
Talk given at the MIT Programming Languages Seminar, 30 September, 2013.
We present a theory of Gifford-style type-and-effect annotations, where effect annotations are sets of effects, such as memory accesses or exceptions. Our theory accounts for effect-dependent program transformations for functional-imperative languages, as used in optimizing compilers. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasizes the operations causing the effects at hand. The key observation is that annotation effects denote algebraic operations. After presenting our general type-and-effect system and its semantics, we validate and generalize existing optimizations and add new ones. Our theory also suggests a classification of these optimizations into three classes, structural, local, and global: structural optimizations always hold; local ones depend on the effect theory at hand; and global ones depend on the global nature of that theory, such as idempotency or absorption laws. We also give modularly-checkable necessary and sufficient conditions for validating the optimizations. Joint work with Gordon Plotkin.
A general theory of type-and-effect systems via universal algebra
Talk given at the Journées d’Informatique Fondamentale de Paris Diderot, 25 April, 2013.
We present a general theory of Gifford-style type-and-effect annotations, where effect annotations are sets of effects, such as memory accesses or exceptions. Our theory accounts for effect-dependent program transformations for functional-imperative languages, as used in optimising compilers. Using our theory, we validate and generalise many existing optimisations and add new ones. Our general theory also suggests a classification of these optimisations into three classes, structural, local, and global. We also give modularly-checkable necessary and sufficient conditions for validating the optimisations.
Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand. The universal algebraic perspective gives an elementary and concrete view on the monadic concepts. The key observation is that annotation effects can be identified with the algebraic operation symbols. We describe how the universal algebraic approach gives particularly simple characterisations of the optimisations: structural optimisations always hold; local ones depend on the effect theory at hand; and global ones depend on the global nature of that theory, such as idempotency or absorption laws. Time permitting, we outline how the theory generalises to more sophisticated notions of universal algebra via enriched Lawvere theories and factorisation systems.
Joint work with Gordon Plotkin.
An introduction to Programming Language Semantics
Talk given at the 1st joint category theory and computer science seminar, University of Cambridge, 18 November, 2012.
A brief introduction to programming language semantics. We will cover key aspects of the operational and denotational approaches, including type soundness, denotational soundness, and adequacy. I will present a logical relations proof, and show how category theory allows us to restructure it as a model in a suitable category of predicates.
Handlers in Action
Talk given at the 1st ACM SIGPLAN Workshop on Higher-Order Programming with Effects (HOPE’12), 09 September, 2012.
We present operational semantics for effect handlers. Introduced by Plotkin and Pretnar, effect handlers form the basis of Bauer and Pretnar’s Eff programming language. Our talk outlines three contributions:
- We propose the first small-step structural operational semantics for a higher-order programming language with effect handlers, and a sound type-and-effect system.
- We exhibit two alternative effect handler implementation techniques: using free monads, and using composable continuations.
- We show that Filinski’s monadic reflection is subsumed by effect handlers.
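The first implementation technique can be sketched in a few lines of Haskell (a minimal illustration with invented names, not the talk’s actual development): computations are terms of a free monad over a signature of operations, and a handler is a fold over those terms.

```haskell
{-# LANGUAGE DeriveFunctor #-}
-- Free monad over a signature functor f: a computation is either a pure
-- value or an operation whose continuations are further computations.
data Free f a = Pure a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Op fa)  = Op (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op fg  <*> x = Op (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Op fa  >>= k = Op (fmap (>>= k) fa)   -- push the continuation inwards

-- A one-operation signature: binary nondeterministic choice.
data Choice k = Choose k k deriving Functor

choose :: Free Choice Bool
choose = Op (Choose (Pure True) (Pure False))

-- A handler: interpret Choose by collecting all results into a list.
allResults :: Free Choice a -> [a]
allResults (Pure a)          = [a]
allResults (Op (Choose l r)) = allResults l ++ allResults r

main :: IO ()
main = print (allResults (do b <- choose; c <- choose; pure (b, c)))
-- prints [(True,True),(True,False),(False,True),(False,False)]
```

Different handlers for the same signature (e.g. taking only the first result) give different interpretations of the same computation, which is the essence of the abstraction.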
Handlers in Action
Talk given at the Scottish Programming Language Seminar (SPLS), 15 March, 2012.
We present a small-step operational semantics for a simplified Call-by-Push-Value variant of Bauer and Pretnar’s Eff programming language. Eff incorporates functional and imperative features by adapting Plotkin and Pretnar’s effect handlers as a programming paradigm.
This is preliminary work with Sam Lindley and Nicolas Oury.
Algebraic Foundations for Effect-Dependent Optimisations
Talk given at the Principles of Programming Seminar (POP), 30 January, 2012.
We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with operation symbols.
We develop an annotated imperative and functional language with a kind of computations for every effect set; it can be thought of as a sequential, annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as structural, algebraic, or abstract: structural optimisations always hold; algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of that theory (we give modularly-checkable sufficient conditions for their validity). Joint work with Gordon Plotkin, to appear in POPL’12.
Algebraic Foundations for Effect-Dependent Optimisations
Talk given at the 39th Symposium on Principles of Programming Languages (POPL’12), 26 January, 2012.
We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with operation symbols.
We develop an annotated version of Levy’s Call-by-Push-Value language with a kind of computations for every effect set; it can be thought of as a sequential, annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as structural, algebraic, or abstract: structural optimisations always hold; algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of that theory (we give modularly-checkable sufficient conditions for their validity).
Joint work with Gordon Plotkin.
Algebraic Foundations for Effect-Dependent Optimisations
Talk given at the Functional Programming Lab Seminar, 11 January, 2012.
We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with operation symbols. We develop an annotated imperative and functional language with a kind of computations for every effect set; it can be thought of as a sequential, annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as structural, algebraic, or abstract: structural optimisations always hold; algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of that theory (we give modularly-checkable sufficient conditions for their validity).
Joint work with Gordon Plotkin, to appear in POPL’12.
Programming Language Semantics: {Ret | Int | P}rospective Discussion
Talk given at the LFCS Lab Lunch, 20 September, 2011.
Defining the formal meaning of programming languages has been investigated for nearly half a century. The foundational questions in the field have impact on programming language theoreticians, designers and implementors. In this quasi-historical session, we’ll review the goals underlying semantics, and how operational and denotational methods have been devised to investigate them. The review will be followed by a short discussion about the current state of the theory, tools and methodologies available. Was it a big waste of time?
Algebraic Foundations for Type and Effect Analysis
Talk given at the European Workshop on Computational Effects, 18 March, 2011.
We propose a semantic foundation for optimisations based on type and effect analysis. We present a multiple-effect CBPV-based calculus whose denotational semantics is based on an effect-indexed structure of adjunctions. When the underlying set of effects is specified by an algebraic theory, we take effects to be given by sets of operations. The required adjunction structure can then be obtained via a uniform process of conservative restriction of the theory. The calculus and its semantics are then extended by a straightforward generalisation of Plotkin and Pretnar’s effect handlers to arbitrary, non-algebraic, inter-effect handlers.
The modular composition of effects then boils down to extension conservativeness, and we show that some common ways to compose arbitrary effects by sums and tensors are indeed conservative. In particular we obtain conservative extension results for both the sum of a monad and a free monad, and also for a unification of three common instances of the tensor of theories that is obtained using monoid actions. This unification includes the usual reader, state and writer monads and monad transformers.
We use our calculus and its semantics to provide general effect-dependent optimisations. We exemplify the machinery developed by deriving a language with state, IO and exceptions, and their handlers. Many of the already known effect-dependent optimisations are then particular cases of our general ones. We have thus demonstrated the possibility of a general theory of effect optimisations based on the algebraic theory of effects.
Joint work with Gordon Plotkin.
Take Action for Your State!
Talk given at the joint Scottish Programming Language Seminar (SPLS) and Fun in the Afternoon, 24 November, 2010.
There is a conceptual connection between the State, Reader and Writer monads. However, the State and Reader monads only require a type, whereas the Writer monad requires a monoid. I will introduce and use Plotkin’s and Power’s algebraic theory of effects to formalise this connection using the notions of monoid actions and conservative restriction. This work results in a generalised notion of the State and Reader monads and monad transformers for an arbitrary monoid action. This is joint work with Gordon Plotkin.
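In Haskell terms (a standard mtl illustration, not the talk’s development), the asymmetry is visible in the types: State s and Reader r are monads for any types s and r, whereas Writer w is a monad only when w is a Monoid, because its bind must combine the outputs of consecutive computations with the monoid operation.

```haskell
-- Writer requires a Monoid instance on the output type: here the free
-- monoid [String], whose (<>) is list append, accumulates the log.
import Control.Monad.Writer

logTwice :: Writer [String] Int
logTwice = do
  tell ["first"]    -- outputs are joined with (<>) of the monoid
  tell ["second"]
  pure 42

main :: IO ()
main = print (runWriter logTwice)  -- prints (42,["first","second"])
```

By contrast, State s = s -> (a, s) and Reader r = r -> a type-check for arbitrary s and r; the talk’s monoid-action perspective explains why the three constructions are nonetheless instances of one pattern.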
A well kept secret…
Talk given at the LFCS Lab Lunch, 19 January, 2010.
At the end of last year, a prominent Category Theorist revealed a well kept secret on the categories mailing list. I will present a brief overview of the various threads that erupted from this revelation, and how they are relevant to Your Research (TM).
Reports
Foundations for Probabilistic Programming
Internal Report: Poster presented at the 1st Probabilistic Programming Conference, 19 June, 2018.
A monadic solution to the Cartwright-Felleisen-Wadler conjecture
Internal Report: talk proposal accepted to, and delivered by Dylan McDermott at, the Workshop on Higher-Order Programming with Effects (HOPE’17), 03 September, 2017.
Algebraic Approaches to Semantics
Internal Report: Research Proposal, 24 February, 2010.