Notes on Implementing Algebraic Subtyping

Algebraic Subtyping is a type system devised by Stephen Dolan in his 2016 dissertation. It extends Hindley-Milner with subtyping, in a way that preserves decidability and principality. Over the past few years, I have implemented Algebraic Subtyping in my language Pinafore (omitting record types). Pinafore is, as far as I know, currently the only language to implement this type system besides Dolan’s own MLsub prototype.

These notes are mostly things I wish I knew when writing it. Not having a strong background in type theory, I did not necessarily understand every part of Dolan’s paper before implementing it, but simply started work as best I understood it. I have learnt a lot since then.


So what’s so great about Algebraic Subtyping?


When we interpret and understand the world, we use types, and we make wide use of subtype relations. “X is a dog”, “Y is a cat”, “Z is an animal”, and then, “every cat is an animal”, and so forth. That’s a subtype relation. And you might think that covariance and contravariance are esoteric type-theory concepts, but it turns out that they too are reflected in language: covariance with the word “of” (a bowl of apples is a bowl of fruit), and contravariance with the word “for” (a bowl for fruit is a bowl for apples). It’s therefore helpful to bring these concepts that are already intuitive into the formal type system of a programming language. For example, every text document is a document, every check box is a UI widget, etc.


So of course “object-oriented” languages such as C++, C#, Java, and so forth already have subtyping. However, the type systems of these languages are marred by extraneous complexity. They are not elegant when considered abstractly, making them difficult to reason about. For example, they typically separate “primitive” and “reference” types that behave differently in certain ways; the unit type void is often not a first-class type; type polymorphism is often rather ad-hoc; and there’s no account of contravariance and covariance of type constructors. And the concept of function types tends to be particularly poorly implemented.

Scala is somewhat better: its type system is perhaps as good as it can be, but it’s ultimately limited by the underlying Java VM type system.

Algebraic Subtyping, by contrast, is what type-theorists think of as a real type system. It’s mathematically clean, making it much easier for both language developers and their users to reason about, and much less likely to involve unpleasant surprises.


Subtype relations in Algebraic Subtyping do not need to be injective, and no inverse conversions need to be provided. If P is a subtype of Q (written P <: Q), then there is implicitly a conversion function from P to Q, but there does not need to be a retraction function Q -> Maybe P.

This is particularly valuable if you opt for non-uniform representation (below). The only thing that matters is that the subtype system is consistent. I give a proper definition of this below, but essentially it means that all subtype “paths” from P to Q are the same conversion. Provided you enforce that consistency, you can let your users create whatever systems of subtypes they want.

In a sense, Algebraic Subtyping disassembles the notion of “object orientation” into its underlying type-theory parts: subtyping, retractions, function types, constructor functions, etc.


What’s not so great about Algebraic Subtyping? Or at least, what problems will you have to tackle?

Polar Types

In Algebraic Subtyping, certain constructions (|, None) are only allowed in positive position, and certain others (&, Any) are only allowed in negative position. So right away the user needs to pay attention to type polarity. For example, when a type is the argument of a function, its polarity is inverted (because the argument parameter of the function type constructor is contravariant), so its own parameters are also inverted. This can be a little tricky to get used to.
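The polarity rules can be sketched as a little checker over a toy type AST (this is illustrative Haskell, not Pinafore’s implementation): walk the type, flipping polarity at each function argument, and reject | and None in negative position and & and Any in positive position.

```haskell
-- Toy AST: check that | and None only occur positively, and & and Any
-- only negatively, flipping polarity under function arguments.
data Polarity = Positive | Negative deriving (Eq, Show)

data Ty
  = TVar String
  | TFun Ty Ty   -- the argument parameter is contravariant
  | TJoin Ty Ty  -- |
  | TMeet Ty Ty  -- &
  | TNone
  | TAny

invert :: Polarity -> Polarity
invert Positive = Negative
invert Negative = Positive

wellPolarised :: Polarity -> Ty -> Bool
wellPolarised _ (TVar _) = True
wellPolarised pol (TFun a b) =
  wellPolarised (invert pol) a && wellPolarised pol b
wellPolarised pol (TJoin a b) =
  pol == Positive && wellPolarised pol a && wellPolarised pol b
wellPolarised pol (TMeet a b) =
  pol == Negative && wellPolarised pol a && wellPolarised pol b
wellPolarised pol TNone = pol == Positive
wellPolarised pol TAny = pol == Negative
```

For example, (a & Integer) -> (a | Rational) checks out as a positive type, since the & lands in negative position under the function arrow.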

Non-Variant Parameters

In Algebraic Subtyping, every parameter of every type constructor must be either covariant or contravariant, and this creates subtype relations for the constructor. For example, if P <: Q, then List P <: List Q, because the type parameter of the List type constructor is covariant.

But some types might naturally want to be parameterised by a variable that is neither covariant nor contravariant. An obvious example is a reference cell, with “get” (covariant) and “set” (contravariant) operations. The (perhaps only mildly awkward) solution Algebraic Subtyping suggests, and Pinafore implements, is to use a contravariant/covariant pair of parameters. Pinafore has a special {-p,+q} syntax for this, but essentially they are nothing more than two parameters. For example:

type Cell {-p,+q}
newCell: p -> Action (Cell p)  # note "Cell p" is the same as "Cell {-p,+p}"
getCell: Cell {-p,+q} -> Action q
setCell: p -> Cell {-p,+q} -> Action Unit

Pinafore’s special syntax brings its own surprises:

f: WholeModel +a -> WholeModel +a = fn x => x;  # causes a type error

# looks like the same type on either side, but actually it's syntactic sugar for this:

f: WholeModel {-None,+a} -> WholeModel {-Any,+a}
  = fn x => x;  # type error, identity function can't be given this type

Ugly Type Signatures

Algebraic Subtyping does not have type constraints as such, instead, it uses & and | constructions with type variables. This can lead to ugly type signatures:

# Pinafore (without type constraints)
predicate.SetModel : WholeModel {-((Entity & a) -> Boolean),+(a -> Boolean)} -> SetModel (Entity & a)

# but if Pinafore had type constraints...
predicate.SetModel : (a <: Entity) => WholeModel (a -> Boolean) -> SetModel a

Pinafore doesn’t do this (yet), but you can mitigate the problem by allowing explicit constraints in type signatures and rewriting them as needed. It’s even possible to extract constraints from types, in cases where you want to present inferred types to the user.
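The rewrite itself is mechanical. Here is a toy sketch in Haskell (hypothetical AST, and assuming for simplicity that all ground constructor parameters are covariant): eliminating a constraint a <: Entity replaces each negative occurrence of a with Entity & a, which is exactly the shape of the desugared signature above.

```haskell
-- Toy AST for types from which constraints are to be eliminated.
data CTy
  = CVar String
  | CFun CTy CTy
  | CMeet CTy CTy        -- &
  | CCon String [CTy]    -- ground constructor; assumed covariant here
  deriving (Eq, Show)

data Pol = Pos | Neg deriving (Eq, Show)

flipPol :: Pol -> Pol
flipPol Pos = Neg
flipPol Neg = Pos

-- Eliminate the constraint (v <: bound) from a type at the given polarity
-- by replacing negative occurrences of v with (bound & v).
elimConstraint :: String -> CTy -> Pol -> CTy -> CTy
elimConstraint v bound Neg (CVar w) | v == w = CMeet bound (CVar w)
elimConstraint _ _ _ (CVar w) = CVar w
elimConstraint v bound pol (CFun a b) =
  CFun (elimConstraint v bound (flipPol pol) a) (elimConstraint v bound pol b)
elimConstraint v bound pol (CMeet a b) =
  CMeet (elimConstraint v bound pol a) (elimConstraint v bound pol b)
elimConstraint v bound pol (CCon n args) =
  CCon n (map (elimConstraint v bound pol) args)
```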

Function Purity

Like the Haskell it is implemented in, Pinafore separates “pure” functions from effects, using a monad (Action). As a Haskell developer, I happen to like this, and actually don’t find it awkward. But if you choose not to do this, and allow your functions to have side-effects, you may run into various type-system gotchas, such as polymorphic references.

No Overloading

Algebraic Subtyping does not provide a way of overloading, for example, numerical functions on different types. For example, given two numeric types Integer and Rational, we should have negation functions for each:

negate: Integer -> Integer

negate: Rational -> Rational

However, even if we have the subtype relation Integer <: Rational, there is no clean way to combine these into one function. Pinafore does not solve this problem: it avoids it by simply having two functions with different names (negate.Integer and negate.Rational). This is made slightly easier with the use of namespaces.

If you really really wanted to, and I don’t at all recommend this, the best you can do is to create a special type and use conversions to do the actual negations. In Pinafore this terrible idea could be coded like this:

datatype Negate +a of Mk a end;
negate: a -> Negate a = Mk.Negate;
subtype Negate Integer <: Integer = fn Mk.Negate x => negate.Integer x;
subtype trustme Negate Rational <: Rational = fn Mk.Negate x => negate.Rational x;

Here the +a indicates that the type parameter is covariant. So this means you have Negate Integer <: Negate Rational.

So now you have a subtype “diamond”, that is, two separate subtype paths from the same types:

Negate Integer <: Integer <: Rational
Negate Integer <: Negate Rational <: Rational

If you don’t tell Pinafore trustme, it will reject the second subtype declaration, because it cannot prove that the paths work out to be the same (i.e., that the diamond “commutes”). But they are the same in this case, so it’s fine, and now you can use negate as if it were overloaded.

However, the whole approach requires a new type for each operation (Negate for negate, Sum for +, and so forth), and is ugly and counterintuitive for the user.


Whether to pick uniform or non-uniform representation will be your first major decision. Each of your language types will be represented by some underlying “implementation type”, and each of your subtype relations will be represented by a conversion function on those implementation types.

Uniform Representation

With uniform representation, there is only one implementation type, and all conversion functions are represented by the identity function on that type. This will make your language implementation work much easier, as you will not have to keep track of conversion functions at all.

Non-Uniform Representation

Pinafore uses non-uniform representation instead. Doing this is a huge amount of work, as conversion functions need to be tracked everywhere, including through unification, subsumption, inversion, simplification, subtype picking, and anything else. You also need to keep track of the “functor maps” of each type variable of each ground type constructor, that is, how to convert a -> b to F a -> F b, etc.

The advantage of non-uniform representation is that it makes the language much more flexible. For example, in Pinafore it’s possible to define a subtype relation between two types already in scope by specifying the (pure) conversion function. If you want to take this route, you will almost certainly want to use Haskell, and you will probably end up basically replicating everything I did.

Some hints:

Meet & Join

Meet and join types can be represented in the obvious way, as product and sum data-types. Likewise, None and Any are represented as empty and unit types.
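In Haskell, the representation might be sketched like this (illustrative names, not Pinafore’s actual internals):

```haskell
{-# LANGUAGE EmptyCase #-}

data JoinType a b = LeftJoin a | RightJoin b  -- represents a | b
type MeetType a b = (a, b)                    -- represents a & b
data NoneType                                 -- represents None: no values
data AnyType = MkAnyType                      -- represents Any: one trivial value

-- The conversions witnessing a <: a | b and a & b <: a are ordinary functions:
join1 :: a -> JoinType a b
join1 = LeftJoin

meet1 :: MeetType a b -> a
meet1 = fst

-- None <: a: there are no values to convert, so this is vacuously total.
fromNone :: NoneType -> a
fromNone n = case n of {}
```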


Conversions

Representing conversions as simple functions leads to unacceptable performance problems, as even small programs in your language will spend huge amounts of time running functions that convert between representations of trivially-equivalent types, such as T | None ⟶ T and T ⟶ T & Any. I solved this in Pinafore by replacing conversion functions with a conversion type, from which functions are derived. In this conversion type, certain trivial conversions have specific representation, so composition within the type can make the appropriate simplifications.
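A minimal sketch of the idea (Pinafore’s actual conversion type is considerably more elaborate): give the identity conversion its own constructor, so that composition can recognise it and collapse, rather than stacking calls to id.

```haskell
{-# LANGUAGE GADTs #-}

-- Conversions as data rather than bare functions, so trivial cases
-- have a recognisable representation.
data Conversion a b where
  IdentityConversion :: Conversion a a
  FunctionConversion :: (a -> b) -> Conversion a b

-- Composition simplifies when either side is the identity.
composeConversion :: Conversion b c -> Conversion a b -> Conversion a c
composeConversion IdentityConversion c = c
composeConversion c IdentityConversion = c
composeConversion (FunctionConversion f) (FunctionConversion g) =
  FunctionConversion (f . g)

-- Derive an actual function only when it is finally needed.
runConversion :: Conversion a b -> a -> b
runConversion IdentityConversion = id
runConversion (FunctionConversion f) = f
```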

Type Variables

In Pinafore, type variables are represented by Haskell’s empty closed type families, and assignment of type variables is done using unsafeCoerce. This works provided a given type variable is never assigned more than once for each solving, and while I couldn’t statically guarantee this, I tried to make the code around these assignments as clear and specific as possible to be as confident as I could from code examination and testing.


Representations of equivalent language types are not necessarily isomorphic as implementation types. For example:

x: Action (P | Q)

y: Action P | Action Q = x

There’s no implementation conversion from Action (P | Q) to Action P | Action Q, even though these are equivalent language types. Pinafore solves this essentially by cheating: when it comes across the type signature Action P | Action Q, it cheekily simplifies it to Action (P | Q) — since this is an equivalent type in Pinafore, the user will never know the difference.

Subtype System

The system of subtypes at any given scope must be consistent. This means:

  • Given any type A, then A <: A and the subtype conversion iAA is identity.
  • Given any types A, B, C, with A <: B and B <: C, then A <: C and the subtype conversions compose as iAC = iBC . iAB .

The important thing to remember is that there must be at most one subtype conversion function for any pair of types. Subtype “diamonds” (two different subtype paths between the same types) are allowed, but they must commute.

To make this work, you should ensure that there can be at most one subtype relation between any two ground type constructors. Here’s what happens if you violate this rule:

subtype Maybe Integer <: P = s1;
subtype Maybe Text <: P = s2;  # Pinafore will reject this line
g: P -> Unit = fn _ => ();

f = fn x => g (Just x);

What type can be inferred for f? Unification requires the subtype constraint Maybe v <: P to be solved, and only one of the two available conversions can be picked. Trying to pick both can only yield the non-type Maybe (Integer | Text) -> Unit, disallowed because | appears in negative position.

On the other hand, subtype relations with type variables (e.g. F a <: G a) are possible.

Record Types

The Algebraic Subtyping paper includes record types. I was hoping that record types could be used for functions with named parameters with defaults. But this doesn’t seem to work. Consider this:

# given this function
writeHandle: Handle -> Text -> Action Unit;

# define a function with named parameters, one with a default
write = fn {text: Text; handle: Handle = stdout;} => writeHandle handle text;

# some uses...
w1 = write {text = "hello";};
w2 = fn h => write {text = "hello"; handle = h;};
w3 = w2 stdout;  # OK
w4 = w2 3;  # oh no!

What type should be inferred for write? For w1 to be accepted, it must be {text:Text;} -> Action Unit, but this means w2 has type Any -> Action Unit, and the obviously unsound w4 would be accepted.

Pinafore lets you define your own data types, and I couldn’t think of any other use for record types, so I omitted them from the language. But it might be possible to get optional parameters working by fiddling around with the nature of record types.

Recursive Types


Recursive types are awkward to work with, and probably won’t have much use in your users’ programs, so you might wonder whether you can omit them, and instead have your solver give a type error whenever the algorithm calls for a recursive type.

This might be possible given certain restrictions on subtype relations, but I don’t know the details. It’s certainly not possible for more general subtype relations, without sacrificing principality. Pinafore, for example, has the subtype relation Maybe Entity <: Entity, which means that the expression let x = Just x in x can have the types Entity, Maybe Entity, Maybe (Maybe Entity), etc., but the principal type rec a, Maybe a must be recursive.


In general I found it helpful to understand a given recursive type in three different ways:

  • Using “mu” (or rec in Pinafore): rec a, Maybe a
  • As a “let”-substitution (note this is not Pinafore syntax for types): let a = Maybe a in a
  • Expanded: Maybe (Maybe (Maybe (...

It’s particularly helpful to keep the third in mind, as it reinforces the fact that this type does not actually have any type variables. Of course it can’t be fully written out, and something like rec a, (List a | Maybe a) is even less “writable”.

The let-subst form represents the type as a context-free grammar, where each recursive variable is a nonterminal symbol. This type grammar is transformed during type simplification (see below).

Not Understanding

A couple of misunderstandings about recursive types that cost me effort:

  • That rec a, (P a | Q a) is equivalent to (rec a, P a) | (rec a, Q a). In fact it is not.
  • That rec a, a can be rounded off to Any or None. It cannot. This “unguarded” syntax simply isn’t a type, and you must reject it if the user specifies it, since your algorithm will have it unify with every type. Your recursive types must follow both syntax restrictions mentioned in section 5.1.

Type Solving


There are currently two algorithms for solving types for unification and subsumption: Dolan’s original, and Lionel Parreaux’s Simple-sub. If you are using non-uniform representation, you must use Simple-sub, as there is a step in the Dolan algorithm that cannot be done while preserving the evidence you need (specifically, in section 5.3.3, in biunify branch “ATOM”, the substitution cannot be performed on the set of already-seen constraints).

If you are using uniform representation, you should probably use Simple-sub anyway. My guess is that it’s at least as fast, and it’s certainly much easier to understand.

Type Simplification

Both algorithms involve keeping a list of already-seen constraints to handle recursion, and between steps you must simplify the types to ensure your solver terminates. You will also want to simplify if you ever present inferred types to the user; for example, Pinafore has an “interactive mode” where it can show inferred types of expressions.

Pinafore does these simplifications in this order:

  1. Simplify recursive types using grammars (see next section).
    • rec a, Maybe (rec b, a | Maybe b) ⟶ rec a, Maybe a
  2. Eliminate type variables that are “fully constrained”. This also includes “one-sided” type variables that only appear in positive position, or only appear in negative position.
    • (a & Integer) -> (a | Rational) ⟶ Integer -> Rational
  3. None and Any act as identities for | and &, respectively.
    • Integer | None ⟶ Integer
  4. Eliminate subtypes in joins (and supertypes in meets):
    • Integer | Rational ⟶ Rational
    • Integer & Rational ⟶ Integer
  5. Collapse matching parameterised types along their parameters:
    • List A | List B ⟶ List (A | B)
    • (A -> X) | (B -> Y) ⟶ (A & B) -> (X | Y)
  6. Merge type variables that appear in all the same positive positions (or all the same negative positions):
    • a -> b -> (a | b) ⟶ a -> a -> a
  7. Roll up recursive types.
    • Maybe (rec a, Maybe a) ⟶ rec a, Maybe a
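As a taste of what these passes look like, here is a toy Haskell sketch of steps 3 and 4 for joins (the subtype oracle here is hypothetical; a real implementation consults the subtype system in scope):

```haskell
-- Toy sketch of simplification steps 3 and 4 for joins.
data STy = SName String | SJoin STy STy | SNone
  deriving (Eq, Show)

-- Hypothetical subtype oracle; a real implementation would consult the
-- subtype system in scope.
isSubtype :: STy -> STy -> Bool
isSubtype (SName "Integer") (SName "Rational") = True
isSubtype a b = a == b

-- None is the identity for |, and a subtype is absorbed by its supertype.
simplifyJoin :: STy -> STy -> STy
simplifyJoin SNone t = t
simplifyJoin t SNone = t
simplifyJoin a b
  | isSubtype a b = b
  | isSubtype b a = a
  | otherwise = SJoin a b
```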

Type Grammars

Type grammars are necessary to simplify complex recursive types. For example, rec a, Maybe (rec b, a | Maybe b) is equivalent to rec a, Maybe a, although it isn’t obvious how to make this transformation.

To understand type grammars, consider how a type such as rec a, (F a a | G a) unrolls. It becomes an infinite tree with “left” and “right” branches at each |, and then branching for each type variable of F and G. You can think of this tree as a language on an alphabet of directions to move down the tree, and each path as a string in this language. A type grammar is simply the context-free grammar of this language.

To obtain a grammar from a type, we convert each use of rec into a production rule (renaming any duplicate variables). You can think of it as a kind of let form for types. For example, rec a, Maybe (rec b, a | Maybe b) would be transformed to this:

a = Maybe (a | Maybe b)
b = a | Maybe b
in a

The simplification algorithm progressively unrolls the type by substituting production rules and merging branches (e.g. Maybe P | Maybe Q to Maybe (P | Q)). As it unrolls, it keeps track of the types it has seen, and when it comes across one it’s seen before, it emits a new use of rec at that point.

rec a, Maybe (rec b, a | Maybe b)

a = Maybe (a | Maybe b)
b = a | Maybe b
in a

RUN (a)
[substitute a]
RUN (Maybe (a | Maybe b))
Maybe (RUN (a | Maybe b))
[substitute a]
Maybe (RUN (Maybe (a | Maybe b) | Maybe b))
Maybe (RUN (Maybe (a | Maybe b | b)))
Maybe (Maybe (RUN (a | Maybe b | b)))
[substitute a, b]
Maybe (Maybe (RUN (Maybe (a | Maybe b) | Maybe b | a | Maybe b)))
Maybe (Maybe (RUN (a | Maybe (a | Maybe b | b))))
[substitute a]
Maybe (Maybe (RUN (Maybe (a | Maybe b) | Maybe (a | Maybe b | b))))
Maybe (Maybe (RUN (Maybe (a | Maybe b | a | Maybe b | b))))
Maybe (Maybe (Maybe (RUN (a | Maybe b | a | Maybe b | b))))
Maybe (Maybe (Maybe (RUN (a | Maybe b | b))))
[already seen "RUN (a | Maybe b | b)"]
Maybe (Maybe (rec r, Maybe r))

(eventually rolled up to "rec r, Maybe r")
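The roll-up at the end can be sketched as a search for a rec subterm whose one-step unrolling reproduces the whole type (a toy illustration on a minimal AST, not Pinafore’s code):

```haskell
-- Minimal recursive-type AST.
data RTy
  = RVar String
  | RCon String [RTy]   -- e.g. RCon "Maybe" [t]
  | RRec String RTy     -- rec a, t
  deriving (Eq, Show)

-- Substitute a type for a variable (respecting shadowing).
subst :: String -> RTy -> RTy -> RTy
subst v t (RVar w) = if v == w then t else RVar w
subst v t (RCon n args) = RCon n (map (subst v t) args)
subst v t (RRec w body)
  | v == w = RRec w body
  | otherwise = RRec w (subst v t body)

-- Roll up at the root: find a rec subterm whose one-step unrolling
-- is the whole type, and replace the type with that subterm.
rollup :: RTy -> RTy
rollup t = case candidates of
    (r : _) -> r
    [] -> t
  where
    candidates = [r | r@(RRec v body) <- subterms t, subst v r body == t]
    subterms s = s : case s of
      RVar _ -> []
      RCon _ args -> concatMap subterms args
      RRec _ body -> subterms body
```

So, for example, Maybe (rec a, Maybe a) rolls up to rec a, Maybe a, since substituting the rec subterm for a in Maybe a reproduces the whole type.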

Type Inversion

If you allow type signatures, then you will do subsumption. But if you allow type signatures under a lambda, you must also do type inversion. Here’s an example, given some positive type T:

fn x => let
    y: T = x
    in y

What is the type of this expression? Well, it’s T -> T, with T now also in negative position. When is this permissible?

Given a positive type T+, there is some set of positive types S(T+) that subsume to T+. The inversion T-, if it exists, is the type such that the set of positive types U(T-) that unify with T- is the same as S(T+).

As a rule of thumb, a type is invertible if (after simplification) it has no type variables or use of &, |, Any, or None.
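The rule of thumb is easy to state as a predicate on a toy AST of already-simplified types (a sketch, not Pinafore’s actual check):

```haskell
-- Toy AST of already-simplified types.
data PTy
  = PVar String
  | PCon String [PTy]   -- ground constructor applied to parameters
  | PJoin PTy PTy       -- |
  | PMeet PTy PTy       -- &
  | PAny
  | PNone

-- Rule of thumb: invertible if there are no type variables and no
-- uses of &, |, Any, or None.
invertible :: PTy -> Bool
invertible (PVar _) = False
invertible (PCon _ args) = all invertible args
invertible (PJoin _ _) = False
invertible (PMeet _ _) = False
invertible PAny = False
invertible PNone = False
```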

Going Further

Parreaux and the team at HKUST have a more powerful type system implemented in their MLscript language, allowing union and intersection without regard to type polarity, as well as type negation. I don’t know how amenable it is to non-uniform representation.

— Ashley Yakeley
