Category Archives: software

Monadology 0.1

Monadology is intended as a collection of the best ideas in monad-related classes and types, with a focus on correctness, elegance, and theoretical understanding rather than practical performance. I am interested in hearing further ideas, so, at least initially, expect a lot of change from version to version.

Re-exported Transformers

Monadology is built on the existing transformers package, and re-exports most of it (though not ListT).

Result

This general-purpose “result” monad represents either success or failure, of any type. This sort of thing is so useful it could have been in base, but it isn’t.

data Result e a
    = SuccessResult a
    | FailureResult e

Of course, it’s isomorphic to Either. But whereas Either has a more general-purpose “symmetrical” feel, Result is more intelligible to the reader as a monad.
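
The Functor, Applicative, and Monad instances are just what you'd expect; here's a minimal sketch (the package's actual instances may be derived differently):

instance Functor (Result e) where
    fmap f (SuccessResult a) = SuccessResult $ f a
    fmap _ (FailureResult e) = FailureResult e

instance Applicative (Result e) where
    pure = SuccessResult
    SuccessResult f <*> r = fmap f r
    FailureResult e <*> _ = FailureResult e

instance Monad (Result e) where
    SuccessResult a >>= f = f a
    FailureResult e >>= _ = FailureResult e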

Exceptions

Monadology takes two separate approaches to exceptions: one type, and many types. For example, for the IO monad, there are many different exception types that can be both thrown and caught. But there is also the one type SomeException that represents all the possible exceptions.

Many Types

For the many-types approach, Monadology simply provides MonadThrow and MonadCatch classes, along with various functions:

class Monad m => MonadThrow e m where
    throw :: forall a. e -> m a

class MonadThrow e m => MonadCatch e m where
    catch :: forall a. m a -> (e -> m a) -> m a
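
For instance, the Result type above can plausibly be given instances like these (a sketch; the package's own instances may differ):

instance MonadThrow e (Result e) where
    throw = FailureResult

instance MonadCatch e (Result e) where
    catch (FailureResult e) handler = handler e
    catch r _ = r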

One Type

In principle, every monad m has a single type of all the exceptions it can throw and catch. For this approach, this type is named Exc m:

class Monad m => MonadException m where
    type Exc m :: Type
    throwExc :: Exc m -> m a
    catchExc :: m a -> (Exc m -> m a) -> m a

type Exc Identity = Void
type Exc ((->) r) = Void
type Exc Maybe = ()
type Exc (Result e) = e
type Exc (ExceptT e m) = Either e (Exc m)
type Exc (StateT s m) = Exc m
type Exc IO = SomeException

Functions such as finally and bracket, that make no reference to any particular exception type, make use of this to ensure that they work for all exceptions that can be thrown.
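
For example, finally might plausibly be defined along these lines (a sketch, not necessarily Monadology's actual definition):

finally :: MonadException m => m a -> m () -> m a
finally ma cleanup = do
    a <- catchExc ma $ \e -> do
        cleanup
        throwExc e  -- run the cleanup, then rethrow
    cleanup
    return a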

Composing Monads

You can compose two functors to get a functor. And you can compose two applicative functors to get an applicative functor. But, famously, you cannot compose two monads to get a monad.

At least, you cannot in general. But you can, of course, in certain cases. And we can capture the most useful cases by specifying the constraints we need on one of the monads so as to leave the other unconstrained.

Inner Monad

MonadInner is just the right constraint on the inner monad so as to compose with any outer monad to get a monad.

class (Traversable m, Monad m) => MonadInner m where
    retrieveInner :: forall a. m a -> Result (m Void) a

newtype ComposeInner inner outer a = MkComposeInner (outer (inner a))

instance (MonadInner inner, Monad outer) => Monad (ComposeInner inner outer)
instance MonadInner inner => MonadTrans (ComposeInner inner)

Essentially, inner a must be isomorphic to Either P (Q,a) for some P, Q. If you examine the structure of the WriterT, ExceptT, and MaybeT monad transformers, you’ll see that they are cases of this composition pattern.
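
For example, Maybe a is isomorphic to Either () ((), a), so a plausible instance (a sketch) looks like this:

instance MonadInner Maybe where
    retrieveInner (Just a) = SuccessResult a
    retrieveInner Nothing = FailureResult Nothing  -- Nothing :: Maybe Void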

Outer Monad

MonadOuter is just the right constraint on the outer monad so as to compose with any inner monad to get a monad.

newtype WExtract m = MkWExtract (forall a. m a -> a)

class Monad m => MonadOuter m where
    getExtract :: m (WExtract m)

newtype ComposeOuter outer inner a = MkComposeOuter (outer (inner a))

instance (MonadOuter outer, Monad inner) => Monad (ComposeOuter outer inner)
instance MonadOuter outer => MonadTrans (ComposeOuter outer)

Essentially, outer a must be isomorphic to P -> a for some P. If you examine the structure of the ReaderT monad transformer, you’ll see that it’s a case of this composition pattern.
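
For example, ((->) r) a is trivially isomorphic to r -> a, so a plausible instance (a sketch) is:

instance MonadOuter ((->) r) where
    getExtract = \r -> MkWExtract $ \ma -> ma r  -- extract by applying to r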

Lifecycles

LifecycleT is a monad transformer for managing the closing of opened resources, such as file handles, database sessions, GUI windows, and the like. You can think of it as a conceptually simpler version of ResourceT.

The actual code is slightly different in the contents of the MVar, but it basically looks like this:

newtype LifecycleT m a = MkLifecycleT (MVar (IO ()) -> m a)

runLifecycle :: (MonadException m, MonadTunnelIO m) => LifecycleT m a -> m a

lifecycleOnClose :: MonadAskUnliftIO m => m () -> LifecycleT m ()

type Lifecycle = LifecycleT IO -- the most common usage

That MVar simply stores all the “close” operations, which runLifecycle runs at the end of the “lifecycle”, in reverse order of their opening. You can add your own close operations with lifecycleOnClose.
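
For example, a file handle could be opened like this (a hypothetical sketch: openFileHandle is not from the package, and a MonadIO instance for Lifecycle is assumed):

openFileHandle :: FilePath -> IOMode -> Lifecycle Handle
openFileHandle path mode = do
    h <- liftIO $ openFile path mode  -- open the resource
    lifecycleOnClose $ hClose h       -- register its close operation
    return h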

Of course you may be thinking, what if I want to close things in a different order? For example, GUI windows get closed when the close box is clicked, not in the reverse order of opening.

For this you want to get a closer function:

lifecycleGetCloser :: MonadIO m => LifecycleT m a -> LifecycleT m (a, IO ())

For example,

newGUIWindow :: Lifecycle Window

makeMyWindow :: Lifecycle Window
makeMyWindow = do
    (window,closer) <- lifecycleGetCloser newGUIWindow
    lift $ onCloseBoxClicked window closer
    return window

Here, closer is an idempotent operation that calls the closer of newGUIWindow, that is, it closes the window. Subsequent calls do nothing. It also gets called at the end of the lifecycle, to ensure that the window is eventually closed if it hasn’t been already.

Also, you may come across certain functions that make use of the “with” pattern, to manage opening and closing. Here are a couple from the base library:

withFile :: FilePath -> IOMode -> (Handle -> IO r) -> IO r

withBinaryFile :: FilePath -> IOMode -> (Handle -> IO r) -> IO r

Monadology is capable of “unpicking” this pattern coroutine-style, and converting it to a Lifecycle:

lifecycleWith :: (forall r. (a -> IO r) -> IO r) -> Lifecycle a

fileHandle :: FilePath -> IOMode -> Lifecycle Handle
fileHandle path mode = lifecycleWith $ withFile path mode

Coroutines

Speaking of coroutines, Monadology has a class for that.

class Monad m => MonadCoroutine m where
    coroutineSuspend :: ((p -> m q) -> m r) -> CoroutineT p q m r

This is experimental, as the only useful instances I’ve come across are monads based on IO, which supports coroutines by using threads.

The CoroutineT transformer is a special case of the StepT transformer, which is for step-by-step execution.

Transitive Constraints

For many transformers, certain constraints on a monad are transitive to the transformed monad. For example:

Monad m => Monad (ReaderT r m)
(MonadPlus m, Monoid w) => MonadPlus (WriterT w m)
MonadIO m => MonadIO (ExceptT e m)

Monadology has a class for this:

class TransConstraint c t where
    hasTransConstraint :: forall m. c m => Dict (c (t m))

instance TransConstraint Monad (ReaderT r)
instance Monoid w => TransConstraint MonadPlus (WriterT w)
instance TransConstraint MonadIO (ExceptT e)

Why not just use GHC’s QuantifiedConstraints extension? Because GHC has issues satisfying quantified constraints. So there’s an explicit class instead.
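
Using it means pattern-matching on the Dict (from the constraints package) to bring the instance into scope; a hypothetical helper might look like this (assuming the type variables apply in this order):

-- discharge a constraint on (t m), given the constraint on m
withTransConstraint :: forall c t m r. (TransConstraint c t, c m) => (c (t m) => r) -> r
withTransConstraint r = case hasTransConstraint @c @t @m of
    Dict -> r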

Tunnelling, Hoisting and Commuting

Tunnelling allows you to manipulate monads underneath a transformer. Each tunnellable transformer is associated with a tunnel monad that represents the “effect” of the transformer.

type p --> q = forall a. p a -> q a

class (MonadTrans t, TransConstraint Monad t) => MonadTransHoist t where
    hoist :: forall m1 m2. (Monad m1, Monad m2) =>
        (m1 --> m2) -> t m1 --> t m2

class (MonadTransHoist t, MonadInner (Tunnel t)) => MonadTransTunnel t where
    type Tunnel t :: Type -> Type
    tunnel :: forall m r. Monad m =>
        ((forall m1 a. Monad m1 => t m1 a -> m1 (Tunnel t a)) -> m (Tunnel t r)) -> t m r

Tunnel monads are, curiously enough, always instances of the aforementioned MonadInner. For example:

type Tunnel (ReaderT s) = Identity
type Tunnel (WriterT w) = (,) w
type Tunnel (StateT s) = (,) (Endo s)
type Tunnel MaybeT = Maybe
type Tunnel (ExceptT e) = Either e
type Tunnel (ComposeInner inner) = inner
type Tunnel (ComposeOuter outer) = Identity

(This is essentially a correction and generalisation of MonadTransControl.)

It’s straightforward to derive hoisting from tunnelling, which is why MonadTransHoist is a superclass of MonadTransTunnel. And furthermore, you can commute two transformers in a stack, if you can commute their tunnel monads (which you always can).

commuteTWith :: (MonadTransTunnel ta, MonadTransTunnel tb, Monad m) =>
    (forall r. Tunnel tb (Tunnel ta r) -> Tunnel ta (Tunnel tb r)) ->
    ta (tb m) --> tb (ta m)

commuteInner :: (MonadInner m, Applicative f) => m (f a) -> f (m a)

commuteT :: (MonadTransTunnel ta, MonadTransTunnel tb, Monad m) =>
    ta (tb m) --> tb (ta m)
commuteT = commuteTWith commuteInner
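
For example, given the Tunnel instances listed above, commuteT can swap StateT past ExceptT (a usage sketch):

swapStateExcept :: Monad m => StateT s (ExceptT e m) a -> ExceptT e (StateT s m) a
swapStateExcept = commuteT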

Unlifting

Monadology has two classes for unlifting transformers.

type Unlift c t = forall m. c m => t m --> m
newtype WUnlift c t = MkWUnlift (Unlift c t)

class (...) => MonadTransUnlift t where
    -- | lift with an unlifting function that accounts for the transformer's effects (using MVars where necessary)
    liftWithUnlift :: forall m r. MonadIO m =>
        (Unlift MonadTunnelIOInner t -> m r) -> t m r
    -- | return an unlifting function that discards the transformer's effects (such as state change or output)
    getDiscardingUnlift :: forall m. Monad m =>
        t m (WUnlift MonadTunnelIOInner t)

-- | A transformer that has no effects (such as state change or output)
class MonadTransUnlift t => MonadTransAskUnlift t where
    askUnlift :: forall m. Monad m => t m (WUnlift Monad t)

Only ReaderT (and IdentityT) and the like can be instances of the more restrictive MonadTransAskUnlift.

However, MonadTransUnlift also has instances for StateT and WriterT. These allow correct unlifting without discarding effects (though another function is provided if you want discarding). How is this possible? Magic! MVars! Unlifting StateT simply holds the state in an MVar. Unlifting WriterT uses an MVar to collect effects at the end of each unlift.
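
The real code is more involved, but the core idea for StateT can be sketched like this (not Monadology's actual implementation):

-- run a StateT computation against state held in an MVar;
-- concurrent unlifts serialise on the MVar, passing state from one to the next
unliftWithMVar :: MVar s -> StateT s IO a -> IO a
unliftWithMVar var st = modifyMVar var $ \s -> do
    (a, s') <- runStateT st s
    return (s', a)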

Using MVars also makes everything thread-safe. Here’s an example:

longComputation1 :: IO ()
longComputation2 :: IO ()

ex :: StateT Int IO ()
ex = liftWithUnlift $ \unlift -> do
    a <- async $ do
        longComputation1
        unlift $ modify succ
    longComputation2
    unlift $ modify succ
    wait a

Here, longComputation1 and longComputation2 can run in parallel, in different threads. But unlift forces synchronisation, meaning that the modify statements never overlap. Instead, state is passed from one to the other. So ex is guaranteed to add two to its state.

As mentioned earlier, the tunnel monads of transformers in MonadTransTunnel are all instances of MonadInner. But if the transformer is an instance of MonadTransUnlift, its tunnel monad will be an instance of the stricter class MonadExtract. And if the transformer is an instance of MonadTransAskUnlift, then its tunnel monad will be an instance of MonadIdentity, the class of monads equivalent to the identity monad.

The Same, but Monads Relative to IO

Often a monad can be understood as some transformer over IO. In such a case, we might want to know the properties of that transformer.

Monadology provides classes for such monads, that mirror classes for transformers:

MonadTrans → MonadIO
MonadTransHoist → MonadHoistIO
MonadTransTunnel → MonadTunnelIO
MonadTransUnlift → MonadUnliftIO
MonadTransAskUnlift → MonadAskUnliftIO

Composing and Stacking Transformers

The ComposeT transformer allows you to compose monad transformers (unlike composing monads, there is no restriction on this). Generally speaking, if t1 and t2 both have some property, then ComposeT t1 t2 will have it too.

The StackT transformer allows you to deal with whole stacks of transformers, parameterized by a list of their types:

type TransKind = (Type -> Type) -> (Type -> Type)

type StackT :: [TransKind] -> TransKind
newtype StackT tt m a = MkStackT (ApplyStack tt m a)

type ApplyStack :: forall k. [k -> k] -> k -> k
type family ApplyStack f a where
    ApplyStack '[] a = a
    ApplyStack (t ': tt) a = t (ApplyStack tt a)
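
So a stack written as a list unrolls as you'd expect; for example (a hypothetical stack):

type MyStack = StackT '[ReaderT Int, StateT Bool] IO
-- ApplyStack '[ReaderT Int, StateT Bool] IO
--   = ReaderT Int (ApplyStack '[StateT Bool] IO)
--   = ReaderT Int (StateT Bool IO)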

Monad Data

The concepts of “reader”, “writer”, and “state” monads each imply a kind of data: readers have parameters, writers have products, and states have references. And pretty much any monad has exceptions. So, why not make that data explicit, so we can manipulate it directly?

data Param m a = MkParam
    { paramAsk :: m a
    , paramWith :: a -> m --> m
    }

readerParam :: Monad m => Param (ReaderT r m) r

data Ref m a = MkRef
    { refGet :: m a
    , refPut :: a -> m ()
    }

stateRef :: Monad m => Ref (StateT s m) s

data Prod m a = MkProd
    { prodTell :: a -> m ()
    , prodListen :: forall r. m r -> m (r, a)
    }

writerProd :: Monad m => Prod (WriterT w m) w

data Exn m e = MkExn
    { exnThrow :: forall a. e -> m a
    , exnCatch :: forall a. m a -> (e -> m a) -> m a
    }

allExn :: forall m. MonadException m => Exn m (Exc m)
someExn :: forall e m. MonadCatch e m => Exn m e

Parameters and references can be mapped by lenses. Not so much products, though there is one thing we can do with them.

mapParam :: Functor m => Lens' a b -> Param m a -> Param m b

mapRef :: Monad m => Lens' a b -> Ref m a -> Ref m b

foldProd :: (Applicative f, Foldable f, Applicative m) => Prod m a -> Prod m (f a)
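
For example, mapRef might plausibly be implemented like this (a sketch, using view and set from the lens package):

mapRef :: Monad m => Lens' a b -> Ref m a -> Ref m b
mapRef l ref = MkRef
    { refGet = fmap (view l) $ refGet ref  -- get the whole, view the part
    , refPut = \b -> do
        a <- refGet ref
        refPut ref $ set l b a  -- put back the whole with the part replaced
    }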

Of course, other monads have their own references:

ioRef :: IORef a -> Ref IO a

stRef :: STRef s a -> Ref (ST s) a 

Odd Stuff

ReaderStateT and TransformT are odd things that I make use of elsewhere, but don’t really understand. Both of them convert Params into Refs.

newtype WRaised f m = MkWRaised (forall a. f a -> m a)
type ReaderStateT f m = StateT (WRaised f m) m
readerStateParamRef :: Monad m => Param f a -> Ref (ReaderStateT f m) a

newtype TransformT m a = MkTransformT (forall r. (a -> m r) -> m r)
transformParamRef :: Monad m => Param m a -> Ref (TransformT m) a

Not Included

  • ListT. This does not transform every monad to a monad, so is not a monad transformer.
  • Any notion of a “base” monad. While every transformer stack must logically have some base monad, the concept is non-parametric as transformed monads cannot be base monads.
  • Lifted “batteries” functions. Just use lift.
  • An effect system.

And also…

I have substantially expanded, cleaned up and reorganised witness, my package for type witnesses, which Monadology makes use of. I have also published type-rig, which provides the Summable and Productable classes used for monad data.

— Ashley Yakeley

Pinafore 0.3

I had more planned for Pinafore 0.3, but two things happened: I found a bug in the type-checker that absolutely had to be fixed, and it took me a while to figure out. And last week I started full-time work, so I wanted to get a version out now, since I won’t have much time to work on Pinafore for about the next year.

Updates

List References

The most important new feature is a new kind of reference type. Pinafore 0.2 had WholeRef, SetRef, and FiniteSetRef; 0.3 adds ListRef, which UI.listTable uses directly. ListRef values can track items more accurately than a WholeRef of a list can. For example, you can obtain a WholeRef for a given item in the ListRef, which will keep track of the position of the item in the ListRef as other items are inserted and removed.

In future releases, ListRef will be the model for other UI elements. For example, a “grid” UI element is naturally a Cartesian product of a list of rows and a list of columns. Operations such as inserting a new row or column thus become straightforward insertions and deletions to two ListRefs.

Subsumption Expressions

A new kind of expression, expr: type, subsumes the expression to the type. So for example, 3 has type Integer, while 3: Number has type Number. Pretty straightforward.

Module-Qualified Names

You can now write, e.g., UI.listTable to refer to the listTable symbol in the UI module. Note that this isn’t just using the namespace, it also “does” the import. In other words, it’s essentially the same as let import UI in listTable.

Why, you might ask? Well, Pinafore doesn’t follow Haskell’s example of putting all the import statements at top level, because Pinafore has no “top level”. Since this means one can always bury import statements deep inside expressions, it’s simpler to just do the import with the module qualification.

Better Type Simplification

Formerly, the type simplifier eliminated one-sided type-variables. A type variable is one-sided if it only appears in positive position, or else only appears in negative position.

This is now more general: the type simplifier now eliminates all fully-constrained type variables. Here’s an example:

a & Integer -> a | Number

gets simplified to

Integer -> Number

(because Integer <: Number).

The idea is that the constraint a <: T is equivalent to the equation a = a & T. This is a fundamental principle of Algebraic Subtyping that comes from lattice theory, and making the substitution is how type-checking deals with constraints. However, one can also reverse this substitution, to extract constraints. In this case you’d extract the constraints a <: Integer and a :> Number. Note that because Integer <: Number, a is “fully constrained”: Pinafore can specialise it either to Integer or to Number, or indeed any type “in between” (such as Rational), without losing any generality. So that’s exactly what it does.

As it happens, one-sided type-variables are a special case of this. For example, the type a | Text yields the constraints a <: None and a :> Text. And obviously None <: Text, so a is fully constrained, and the type can be simplified to Text.

Other Library Changes

  • UI stuff is now in a separate UI module (which is actually built in a separate Haskell package).
  • The UI type has been renamed Element, with the expectation that it will usually be referred to as UI.Element.
  • There are some new UI elements for doing CSS styling.
  • There are some new functions and some name changes in the Std library. It’s still early for the Pinafore project as a whole, so expect a certain amount of incompatibility between versions.

Next Milestone

My plans for 0.4 are basically everything I dropped from 0.3:

  • Graphics, such as how to display images and diagrams in UI elements (earlier awkward Haskell library issues have now been sorted)
  • Files, which are the easiest way of handling large blobs of data such as images, video, etc.
  • New reference type for text

Release

Version 0.3 of Pinafore is available from GitHub as a Debian package. There’s also a syntax-highlighting extension for Visual Studio Code.

The Pinafore website has all the documentation.

— Ashley Yakeley

Pinafore 0.2

Pinafore was originally motivated by my desire to organise and structure (i.e., type) various bits of information relating to my life. This is why, for instance, the example Pinafore code in the documentation relates to contacts and events. After I released Pinafore 0.1, I spent some time actually trying to do this, and ran into a number of problems. 0.2 was originally going to be a bigger, longer, milestone, but instead I made fixing these problems a priority. As a result, Pinafore 0.2 is much more pleasant and usable than Pinafore 0.1.

Updates

These are the main new features.

Module System

There are now two kinds of Pinafore file, scripts and modules. Scripts are typically UNIX executable files with no filename extension and #!/usr/bin/pinafore at the top. A script contains a single expression.

Modules are located in one of a number of known places, with filenames that end in .pinafore. The contents of a module file generally look like let <bindings> in export <names>.

Recalling that there is no “top level” in Pinafore, both scripts and modules can import modules into any scope with an import statement inside a let expression.

Anyway, it all works the way you’d expect.

Dynamic Supertypes

In algebraic subtyping, every subtype relation P <: Q implies an “upcasting” conversion function of type P -> Q. The overall system of conversion functions needs to be consistent, of course, but apart from that, there’s no restriction on what these functions do. In particular, they do not need to be injective, and retraction functions (“downcasting”, of type Q -> Maybe P) do not need to exist.

However, for some subtype relations, a retraction function is available. Pinafore now provides a general mechanism for making use of them.

Every (ambipolar) type T has a greatest dynamic supertype D(T), with these properties:

  • T <: D(T) (“supertype”)
  • D(D(T)) = D(T) (“greatest”)
  • There exists a special form check @T: D(T) -> Maybe T (“dynamic”)

There’s now a new kind of pattern that can be used to match according to the retraction function: (pat: T) has type D(T) and matches pat as type T.

There are actually two special forms, check and coerce. They can be defined in terms of the pattern, more-or-less like this:

check @T: D(T) -> Maybe T;
check d = case d of
    (t:T) -> Just t;
    _ -> Nothing;
    end;

coerce @T: D(T) -> T;
coerce d = case d of
    (t:T) -> t;
    _ -> error "coercion from <D(T)> to <T> failed";
    end;

So what are the greatest dynamic supertypes of various types? In most cases, D(T) = T, which is not very interesting. We do have D(Integer) = D(Rational) = Number. But the real motivator is dynamic entity types (see below). For these, D(T) = DynamicEntity.

For the future, it actually wouldn’t be hard to introduce predicate types. Given a type T and a function of type T -> Boolean, one would be able to define a subtype P of T consisting of those values that satisfied the function. This would be an obvious choice for the dynamic supertype mechanism (with D(P) = D(T), of course). However, for the time being I don’t have a strong motivation for predicate types, so Pinafore 0.2 does not provide them.

Dynamic Entity Types

So generally, Pinafore erases types. There is no information about entity types in storage, nor about which values have which types. This is helpful for reasoning about storage, and for refactoring your schema. However, sometimes you do want to store some kind of type information. For example, you might want to implement something like this:

Human <: Animal
Dog <: Animal
dateOfBirth: Animal ~> Date
animalUI: Animal -> UI

In this case, you do want to store type information for each Animal: you want to know whether it’s a Human or a Dog.

This is how you’d do this with dynamic entity types:

dynamictype Human = !"mytype.Human";
dynamictype Dog = !"mytype.Dog";
dynamictype Animal = Human | Dog;
dateOfBirth = property @Animal @Date !"Animal.dateOfBirth";
animalUI animal = case animal of
    (human: Human) -> humanUI human;
    (dog: Dog) -> dogUI dog;
    end;

All dynamic entity types are subtypes of DynamicEntity, which internally encodes a pair of anchors, one for type and one for “value”. A dynamic entity type simply represents a set of type anchors: in the example, Human and Dog are each one anchor (as given), and Animal is both of them.

A dynamic entity type is concrete if it represents only one type anchor. Given a concrete dynamic entity type, one can generate new values dynamically or statically with the newDynamicEntity and dynamicEntity special forms.

Comparing the type system of Pinafore to that of, say, Java, there seems to be a certain sense in which dynamic entity types resemble Java’s classes, and open entity types resemble interfaces. I’m not sure how far this intuition goes, though.

Generalised Open Entity Subtypes

Previously, subtype relations could be declared with subtype P <: Q, with P and Q both open entity types. This is now generalised so that P can be other (ambipolar) entity types.

But this gives rise to a problem: we cannot have (or at least make use of) more than one subtype relation for a given “ground type constructor” if their parameters are incompatible. Consider this:

opentype P;
opentype Q;
subtype Maybe Integer <: P;
subtype Maybe Boolean <: Q;
f: Entity -> ()
f _ = ()
g x = f (Just x)

Here the ground type constructor is Maybe, so what is the type of g? There are two different subtype “paths” to get from Maybe a to Entity, so we want something like g: (Integer | Boolean) -> (). But that is not allowed, as Integer | Boolean is not a negative type.

The solution for the time being is simply to disallow subtype relations on parametered types (such as Maybe Integer). This isn’t perfect, but overall an improvement.

Command-Line Arguments & UNIX Environment

If you make a script, you may wish to pass arguments to it when invoking from the command line. If so, scriptArguments is what you want. There’s also scriptName, environment, and getEnv.

Next Milestone

My 0.3 plans are still quite fluid, but the main points are:

  • Graphics, such as how to display images and diagrams in UI elements (involves some awkward Haskell library issues)
  • Files, which are the easiest way of handling large blobs of data such as images, video, etc.
  • New reference types for text and for lists.

Release

Version 0.2 of Pinafore is available from GitHub as a Debian package. There’s also a syntax-highlighting extension for Visual Studio Code.

The Pinafore website has all the documentation.

— Ashley Yakeley

Whole Haskell Use of GADTs

For some reason GADTs are a particular sticking point for many advocates of Simple Haskell. But sometimes GADTs are the natural form of expressing a concept.

By way of example, here’s a snippet of API I’ve borrowed from JuicyPixels:

data Image a = Image {
    imageWidth :: !Int,
    imageHeight :: !Int,
    imageData :: Vector (PixelBaseComponent a)
    }

data DynamicImage =
    ImageY8 (Image Pixel8) |
    ImageY16 (Image Pixel16) |
    ImageY32 (Image Pixel32) |
    ImageYF (Image PixelF) |
    ImageYA8 (Image PixelYA8) |
    ImageYA16 (Image PixelYA16) |
    ImageRGB8 (Image PixelRGB8) |
    ImageRGB16 (Image PixelRGB16) |
    ImageRGBF (Image PixelRGBF) |
    ImageRGBA8 (Image PixelRGBA8) |
    ImageRGBA16 (Image PixelRGBA16) |
    ImageYCbCr8 (Image PixelYCbCr8) |
    ImageCMYK8 (Image PixelCMYK8) |
    ImageCMYK16 (Image PixelCMYK16)

promoteImage :: ColorConvertible a b => Image a -> Image b 

JuicyPixels was originally released in 2012, and there’s always a good presumption against changing an API, so we can’t blame the developers for not making use of GADTs. But if we were starting a project like this from scratch today, how would we write such code, using the whole language?

The way DynamicImage has been defined makes it a bit difficult to work with. For example, if you want to get the width or height of a dynamic image, you’ll have to write cases for each constructor. And the convertibility from one pixel type to another is encoded at the type level (with ColorConvertible), but there’s no clean way to do it at the value level, if we wanted to provide that functionality dynamically.

We can make this code more expressive, and more elegant, by using a GADT that represents the pixel type of a dynamic image:

data PixelType a where
    PixelType8 :: PixelType Pixel8
    PixelType16 :: PixelType Pixel16
    PixelType32 :: PixelType Pixel32
    PixelTypeF :: PixelType PixelF
    PixelTypeYA8 :: PixelType PixelYA8
    PixelTypeYA16 :: PixelType PixelYA16
    PixelTypeRGB8 :: PixelType PixelRGB8
    PixelTypeRGB16 :: PixelType PixelRGB16
    PixelTypeRGBF :: PixelType PixelRGBF
    PixelTypeRGBA8 :: PixelType PixelRGBA8
    PixelTypeRGBA16 :: PixelType PixelRGBA16
    PixelTypeYCbCr8 :: PixelType PixelYCbCr8
    PixelTypeCMYK8 :: PixelType PixelCMYK8
    PixelTypeCMYK16 :: PixelType PixelCMYK16

data DynamicImage = forall a. DynamicImage (PixelType a) (Image a)

Any code to get properties of the contained Image is now much simpler:

dynamicImageSize :: DynamicImage -> (Int, Int)
dynamicImageSize (DynamicImage _ img) = (imageWidth img, imageHeight img)

We can also express convertibility dynamically, if we want to add that:

canPromotePixel :: PixelType a -> PixelType b -> Maybe (Dict (ColorConvertible a b))
-- provided as appropriate

promoteDynamicImage :: DynamicImage -> PixelType b -> Maybe (Image b)
promoteDynamicImage (DynamicImage pta img) ptb = do
    Dict <- canPromotePixel pta ptb
    return $ promoteImage img

You can see how with Haskell’s fancy shiny GADT feature, the code is now more expressive, and easier to understand for anyone who knows the language.

GADTs are actually not a particularly difficult concept to learn, for someone who already understands rank-n types and existential quantification. And in many situations, they provide a huge benefit in expressibility.

— Ashley Yakeley

Whole Haskell is Best Haskell

The promise of Haskell over other languages is that it allows you to more cleanly and intuitively represent the application domain. This leads to more intelligible and maintainable code. But to take full advantage of what Haskell has to offer, you have to embrace the whole language. This means making use of any appropriate language feature, just as you would with any other language.

If you’re using Haskell in industry, it’s my belief that restricting your team to some “simple” subset of the language for complex problems will make your code more complex and more difficult for competent developers to understand. It will be more likely to have repetitive boilerplate, and more likely to require explicit error calls to handle “impossible” cases. It will be less comprehensible, and so less maintainable.

Here are some claims from “The Simple Haskell Initiative”, which I believe are flawed:

“Fancy Haskell is costly to teams because it usually takes more time to understand and limits the pool of people who can effectively contribute.”

By taking this attitude, you are not only committing your code to mediocrity, but your team too. Whole, elegant, Haskell, firing on all language cylinders, takes less time for competent developers to understand. And you can raise less-experienced developers to the necessary competence specific to your project.

“Things that have been around longer will be more well-tested and understood by a larger group of people. Prefer tried and true techniques over the latest shiny library or language feature. The more foundational something is in your tech stack, the more conservative you should be about adopting new versions or approaches to that thing.”

These are two separate things.

For libraries, maturity is certainly a valid concern. But it is unrelated to the structure of the library’s API.

For language features, the GHC team has a high bar for releasing new extensions, often involving formal proof. The implementation of features considered “fancy”, such as GADTs, type families, and polykinds, is understood to be as sound as that of any other part of the language. They are unlikely to be a particular source of compiler defects.

“If you adopt a new thing, how much of its complexity will spread throughout the rest of your codebase? You should be more hesitant to adopt something if its complexity is going to spread through a larger portion of your codebase.”

Code written in whole Haskell is less complex than “simple” Haskell for the same complex application. That’s the whole point of it.

There is no one definition of what language features count as “Simple Haskell”. Michael Snoyman seeks to define a “boring” subset of Haskell, but his recommended set of “boring” language extensions is quite broad, including most extensions relating to classes and type families, and even PolyKinds. Boring Haskell, in practice, seems to be close to Whole Haskell.

Sam Halliday, by contrast, seeks a vastly more restricted language, rejecting GADTs, type families, multi-parameter classes, and apparently even rank-n types and existential quantification. Such restrictions lead to unnecessarily complicated code, in my view. Here’s a simple example of how GADTs and existential data quantification can improve code generality and intelligibility.

What you should be doing

  • Embrace the whole language.
  • Set a high quality bar for code within your team.
  • Mentor less-experienced developers.

Embrace the whole language. Pretty much every “fancy” feature of the whole GHC Haskell language has a productive purpose. That’s the point of all the academic research. As a competent Haskell developer, you should know when that purpose applies and how to make best use of each feature.

Note that language features are not exactly the same as language extensions. For example, the language gives you the option to either allow or prohibit “incoherent” use of class instances. In most cases, the best use of this language feature is to prohibit, as it can ensure a discipline that leads to more intelligibility and predictability.

Some language features are of relatively specialised use. Polykinded types are very useful for type-oriented applications (such as implementing a typed language interpreter). And, of course, some simpler or more straightforward projects might make use of relatively few language features.

Set a high quality bar for code within your team. Set code expectations as early as you can. Discuss ideas early, and make suggestions for design approaches. Haskell makes refactoring easier than many languages do; take advantage of that as appropriate.

Mentor less-experienced developers. When you hire a Java, Python, or C++ developer, you can expect them to be fully competent in each of those languages. You can typically give them development ownership of progressively larger project features, and leave them to get on with it until code review. Given the current state of the Haskell job market, this may not be the case for Haskell developers.

If you hire junior developers who are not yet familiar with the whole language, you will need to mentor them. Take extra time with them to explain how features of the Haskell language work, how they are best used in general, and how you use them in your codebase.

If that sounds like an extra burden, bear in mind that not many developers make the effort to learn Haskell in the first place, and those that do are likely to have more aptitude to learn more about it. Invest in the people you hire. If you’re doing anything worthwhile, you’re in this for the long term.

… Why did you choose Haskell, anyway?

— Ashley Yakeley

Pinafore 0.1

We all generate a lot of information in our lives and in our work:

contacts, events, emails, tasks, photos, plans, budgets, financial records, media collections…

Computers are supposed to help us organise it all. So how’s that working out?

For the most part, we use application programs. Each application program works with information of a particular kind, with a polished and specific user interface. However, it fixes a particular schema for that information, and it is typically difficult to combine information from multiple applications.

Alternatively, we can work with information more loosely and flexibly in a spreadsheet. However, this provides a more limited user interface experience, and the very looseness makes it difficult to reason abstractly about the type and schema of the information.

Pinafore is an attempt to reimagine how computers represent and store information, and how users interact with it. It allows users to create their own schemas for information, and create their own interfaces to it.

This 0.1 release includes some of the major pieces: a type system for information, a language, a storage system, some composable user interface elements. Enough to get the gist of the project, and perhaps suggest some future possibilities. But much more needs to be done.

Language

Pinafore is an interpreted language. Information is stored as predicate/subject/object triples in a database in your home directory, and the user interfaces are created with GTK+.

Pinafore generally resembles Haskell. It has a type system derived from Hindley-Milner, and features pattern-matching, lazy evaluation, and separation of pure functions from executable actions. There are some differences however:

  • There is no “top level”. A Pinafore file consists of a single expression. Type declarations, like bindings, are declared within let expressions.
  • Layout is not syntactically significant. Instead, lines are terminated by semicolons, and do and case expressions are terminated with the end keyword.
  • The colon is used for type signatures, while the double colon is used for list construction, the other way around from Haskell.
  • Line comments start with #, while (nestable) block comments are marked with {# and #}.
  • Only one equation is allowed for a function definition. Argument patterns can be matched with case expressions.

Of course, there are many features of Haskell that Pinafore lacks, and vice versa.

Type System

Pinafore is a strongly-typed language. The type system implements Stephen Dolan’s Algebraic Subtyping, which is an extension of Hindley-Milner to allow subtyping. This type system is decidable: if an expression has a type, Pinafore can always infer a principal type for it. Like Haskell, however, you can also add type signatures to definitions.

A subtype relation P <: Q is a relationship between two types, “P is a subtype of Q”, which simply means “every P is a Q”, or “allow a P where a Q is expected”. Of course, this implies an inclusion function P → Q that actually converts the P to the Q. These functions do not have to be injective, nor does there need to be any kind of reverse function Q → Maybe P, though these do exist in Pinafore in some cases.

Two types are equivalent if each is a subtype of the other.

Polarity

The type system distinguishes positive and negative types. This is necessary, because certain type operations are only permitted with certain polarities:

  • A positive type is a type that can appear in a positive position. Think of this as the type of a value you’ve defined in your program. The type signature of a value is such a positive position.
  • A negative type is a type that can appear in a negative position. Think of this as the type of acceptance of values, such as the argument of a function type (that is itself positive).
  • An ambipolar type is a type that is both positive and negative. This includes simple types such as Text, Integer, and so forth.
  • If P and Q are positive types, then P | Q is a positive type. You can read this as “a P or a Q, not telling you which”. As you might expect:
    • P <: P | Q
    • Q <: P | Q
    • If P <: R and Q <: R, then P | Q <: R.
  • If P and Q are negative types, then P & Q is a negative type. Think of this as “must be both a P and a Q“. Likewise:
    • P & Q <: P
    • P & Q <: Q
    • If R <: P and R <: Q, then R <: P & Q.
  • None is a positive type, that is empty (and is a subtype of every type). None | P = P.
  • Any is a negative type, that accepts anything (and is a supertype of every type). Any & P = P.

Here are some examples of expressions with the principal types that Pinafore will infer. Note that Pinafore uses a single rather than double colon for type signatures:

\x -> 3: Any -> Integer
Nothing: Maybe None
\b -> if b then "hi" else 3: Boolean -> (Text | Integer)
\x -> x + textlength x: (Integer & Text) -> Integer

Type Constructors

Like Haskell, Pinafore has type constructors such as Maybe, [] (list), (,) (pair), -> (function) and so on. But all type parameters must be types (i.e., as if of Haskell’s kind *); in addition, each parameter must be either covariant or contravariant. This gives rise to subtype relations. For example, suppose F is a type constructor with one argument:

  • If F is covariant in its argument, then,
    • F x has the same polarity as x.
    • P <: Q implies F P <: F Q.
  • If F is contravariant in its argument, then,
    • F x has the opposite polarity as x.
    • P <: Q implies F Q <: F P.

Of course, some types are, morally, neither contravariant nor covariant in their arguments. For these we use a pair of type parameters in a special syntax, one contravariant (marked with -) and one covariant (marked with +). For example, the WholeRef type constructor represents references with get and set operations:

WholeRef {-p,+q}
get: WholeRef {-p,+q} -> Action q
(:=): WholeRef {-p,+q} -> p -> Action ()

Pinafore has some abbreviations to make working with these a little easier, e.g. WholeRef T = WholeRef {-T,+T}, and WholeRef +T = WholeRef {-Any,+T}, etc.

Recursive Types

Pinafore has “equirecursive” types, written in the form rec v. T, where T is a type expression where v appears only covariantly. The key fact of recursive types is that they are equivalent to their unrolling. For example, these two types are equivalent:

rec a. (a, Maybe a)
(rec a. (a, Maybe a), Maybe (rec a. (a, Maybe a)))

Recursive types are necessary for principality (that all typeable expressions have a principal type), though they’re not much used in practice.

Data Types

Pinafore allows you to create your own algebraic data types, like the data keyword in Haskell. Here’s an example:

datatype StopwatchState = StoppedState Duration | RunningState Time;

Typed Storage

Pinafore stores “knowledge” as relationships between entities of various types. The relationship types are called morphisms, which can be composed as the name suggests.

Types of entities are all subtypes of the Entity type. These include:

  • Literal types for small pieces of data, such as Integer, Number, Boolean etc.
  • Open entity types, that simply represent arbitrary points, declared with the opentype keyword. Values of open entity types can be declared statically (with an anchor) with the openEntity keyword, or generated at run-time with the newOpenEntity function.
  • Closed entity types, that have constructors, declared with the closedtype keyword. These are similar to data types, except that each constructor has an anchor, and the contained types must themselves be entity types.

Anchors are 256-bit values usually hashed from a literal string in your program. Pinafore erases types when storing information in its storage: it does not store the structure of types nor does it store which values have which types. Instead, it uses anchors to identify information in storage.

Open Entity Example

Here’s an example. Let us suppose we want to store two relationships concerning people:

  • The name of some person p is “James”.
  • The mother of p is some person q.

Firstly, we will need a type for people. This can be an open entity type: it has no information of its own besides identity; all information about people comes from morphisms.

opentype Person;

We also need properties for “name” and “mother”. We need to give these anchors, since this is what will identify them in storage, not the names of the language bindings we happen to use.

“Name” is a property from Person to Text, because the name of a person is text. We give it the anchor !"myschema.name".

“Mother” is a property from Person to Person, because the mother of a person is a person. We give it the anchor !"myschema.mother".

Properties are morphisms, so the type of them is a morphism type, indicated by ~>. In fact, properties generate morphisms: you can compose morphisms together that are strings of properties. In this case, you can compose these two to get a morphism for “name of mother”.

name: Person ~> Text;
name = property @Person @Text !"myschema.name";
mother: Person ~> Person;
mother = property @Person @Person !"myschema.mother";

Now we need entities p and q. These might be generated at run-time or obtained elsewhere, but here we’ll declare them statically. Again, it is the anchor that identifies them in storage, not the bindings p and q.

p: Person;
p = openEntity @Person !"someperson";
q: Person;
q = openEntity @Person !"otherperson";

Actually storing the relationships is an Action (similar to IO in Haskell). And like Haskell, Pinafore has do notation to make working with actions easier:

do
    name !$ {p} := "James";
    mother !$ {p} := q;
end

The !$ operator applies a morphism to a reference to get another reference. Since p is an entity, not a reference, we must first convert it to a reference using “reference notation” {p}.

The := notation sets the value of a reference.

Here’s a typed breakdown of that first action:

p: Person
{p}: WholeRef +Person
name !$ {p}: WholeRef Text
name !$ {p} := "James": Action ()

Here’s what it looks like put altogether:

let
    opentype Person;
    name: Person ~> Text;
    name = property @Person @Text !"myschema.name";
    mother: Person ~> Person;
    mother = property @Person @Person !"myschema.mother";
    p: Person;
    p = openEntity @Person !"someperson";
    q: Person;
    q = openEntity @Person !"otherperson";
in do
    name !$ {p} := "James";
    mother !$ {p} := q;
end

This is a complete Pinafore program. Running it will store those two relations in Pinafore’s persistent storage.

Of course, we can combine the morphisms in other ways:

# everyone whose mother's name is Kate
(name !. mother) !@ {"Kate"}

# the (name, mother) pair of p
(name !** mother) !$ {p}

It is important to note that Pinafore does not store entities per se, it stores relations between entities. One cannot, for example, retrieve all entities of type Person. Types of entities such as Person are erased for storage. However, one can retrieve all entities that have name “James”, or whose mother is some entity q.

Closed Entity Types

Closed entity types resemble data types, but they are all subtypes of Entity, and so can be stored. Every constructor of a closed entity type includes an anchor, to identify that constructor in storage. Here’s an example:

closedtype CelestialLocation
    = EquatorialLoc Number Number !"EquatorialLoc"
    | EclipticLoc Number Number !"EclipticLoc";

Reinterpretability

Any item of information retrieved from storage can be “unknown”, and Pinafore is robust with regards to what it happens to find in storage. This gives a certain amount of flexibility in modifying an existing “schema”, or system of entity types, without having to transform data in storage. For example, if you remove a property from your program, Pinafore will simply ignore that information in storage. If you add a property, Pinafore will initially find all values of that property to be “unknown”. Constructors can be added and removed from closed entity types. If Pinafore finds something it doesn’t recognise or cannot parse as it expects, it treats it as “unknown”.

Composable User Interface Models

In terms of the Model/View/Controller way of looking at user interface, the view and controller are represented by user interface elements (of type UI), while the model is represented by references (of type WholeRef, SetRef, and FiniteSetRef).

A reference represents the state of some thing. The user may wish to retrieve some part of that state, or make some change to it, or be notified when it changes. References are thus “live”: when connected to a UI element, the user can use the UI to change the reference, but also the reference can update the UI when its state changes.

  • Whole references (WholeRef) represent a single value (which might be “unknown”). There are operations for getting, setting, and deleting (making unknown).
  • Set references (SetRef) represent some arbitrary set or predicate of some type. There are operations for adding and removing members, and for checking membership of some value.
  • Finite set references (FiniteSetRef) are set references that have a finite number of members, that can be retrieved.

The various kinds of references can be composed in various ways, such as various set operations (union, intersection, Cartesian product and sum).

Pinafore’s “reference notation” makes working with whole references a little easier. For example, given two whole references to integers, we can create a new whole reference that is the sum of them:

p: WholeRef Integer;
q: WholeRef Integer;
pq: WholeRef +Integer;
pq = {%p + %q};

Whenever p or q updates, then pq also updates. Of course, pq is read-only: attempts to set it will fail.

User Interface Elements

User interface elements are things such as text areas, buttons, check boxes, and tables. They are constructed from the references they control, and can be composed by horizontal or vertical layout, and put in windows.

Release

Version 0.1 of Pinafore is available from GitHub as a Debian package.

The Pinafore website has all the documentation.

— Ashley Yakeley

Forbidden Haskell Types

Sometimes, writing Haskell is like having an argument with the compiler. You give it your reasoning, and it checks it over for flaws. And if it thinks it finds one, it will tell you all about it. You then have to look over what it told you, and figure out exactly what its complaint is. Did I just express myself badly? Or am I actually wrong? Or, very occasionally, the compiler is just being petulant and you have to work around it. Of course, if you’re really upset about GHC not accepting your perfectly reasonably argued program, you can go tell its parents.

Haskell’s type system is pretty great, but one thing it doesn’t have, that some other type systems do have, is recursive types, by which I mean, types directly constructed from themselves. Recursive types are forbidden in Haskell.

Let’s say we want to create a recursive type T = Maybe T. The official introductory Haskell answer to this is that one cannot do this directly, but one can easily create a type T that contains Maybe T, sometimes called a “newtype wrapper”. Like this:

newtype T = MkT (Maybe T)

Simple enough. But can we do better than this? Can we create a type T that is actually equal to Maybe T? Can we create the forbidden recursive type?

Let’s start with what won’t work:

type T = Maybe T
Example.hs:6:1: error:
    Cycle in type synonym declarations:
      Example.hs:6:1-16: type T = Maybe T

The type keyword creates an alias for a type, and aliases cannot be recursive as they must eventually refer to an actual type.

Oddly enough, GHC does accept this:

type family T where
    T = Maybe T

Type families are strange things. They’re not quite “real” types, but neither are they mere aliases for types, as they don’t have to be fully resolved. Indeed you can declare a type family in one module, use it all over as if it were a real type, and then define the actual instances of that family in another module.

Sadly, though, GHC won’t let us actually use the instance. It rejects the obvious proof that T and Maybe T are the same type:

type family T where
    T = Maybe T

proof :: T :~: Maybe T
proof = Refl
Example.hs:10:9: error:
    • Reduction stack overflow; size = 201
      When simplifying the following type: T

Why is this? It’s because GHC can’t just accept what it’s been told, and instead endlessly tries to “reduce” (that is, expand) T = Maybe T = Maybe (Maybe T) etc., giving up when it reaches a safety limit. You can think of it as a kind of intellectual black hole, or an “infohazard” for the compiler: if it ever starts examining the strange secrets of your recursive type, it will get lost in the infinite abyss, until a built-in emergency mechanism kicks it awake after it’s twisted in the vortex 200 times (you can change this number with GHC’s -freduction-depth flag, or even set it to zero to let it run forever).

It turns out we can construct a type T together with a proper (non-bottom) proof value of T :~: Maybe T, we just have to be discreet. The key is to make sure GHC never comes across the unfathomable constraint T ~ Maybe T, because it will immediately become ensnared in the contemplation of its endlessness.

Instead, we prepare a lemma that is both harmless and more general, and then carefully specialise it to get the terrifying conclusion we want, using the TypeApplications extension, thus avoiding the need for any kind of inference:

type family A p q where
    A () x = Maybe (A x ())

lemma :: forall x. (A () x) :~: Maybe (A x ())
lemma  = Refl

type T = A () ()

proof :: T :~: Maybe T
proof = lemma @()

Actually constructing and deconstructing values of T will require a similar circumspection. The trick is, we don’t let GHC infer types. We mustn’t let GHC think, because thinking about the bad thing leads it to madness. Instead, we apply all types by hand. Here’s the full program, including all the necessary extensions:

{-# LANGUAGE RankNTypes, TypeApplications, TypeOperators,
    TypeFamilies, UndecidableInstances, AllowAmbiguousTypes #-}
module Main where
import Data.Type.Equality

type family A p q where
    A () x = Maybe (A x ())

lemma :: forall x. (A () x) :~: Maybe (A x ())
lemma  = Refl

type T = A () ()

proof :: T :~: Maybe T
proof = lemma @()

convert1 :: forall a b. a :~: b -> a -> b
convert1 Refl = id

convert2 :: forall a b. a :~: b -> b -> a
convert2 Refl = id

tconvert1 :: T -> Maybe T
tconvert1 = convert1 @T @(Maybe T) proof

tconvert2 :: Maybe T -> T
tconvert2 = convert2 @T @(Maybe T) proof

nothing :: T
nothing = tconvert2 $ Nothing @T

just :: T -> T
just x = tconvert2 $ Just @T x

count :: T -> Int
count t = case tconvert1 t of
    Nothing -> 0
    Just t' -> succ $ count t'

t3 :: T
t3 = just $ just $ just nothing

main :: IO ()
main = putStrLn $ show $ count t3

This program defines a truly recursive type T = Maybe T, constructs a value of it, and then deconstructs that value. (It does indeed correctly print “3”.) On the other hand, it’s much more complicated than just using a newtype like we’re supposed to…

— Ashley Yakeley

Empty and Unit Types

Remember when you were in school, and your mathematics teacher told you that 0^0 = 1, and you got very confused? This is the type theory equivalent of that.

So what’s this all about?

It’s about three type constructors:

0 is an “empty” type. It has no values.
1 is a “unit” type. It has one value.
A → B is the type of maps from A to B, where A and B are types.

“An” empty type?

Sure, there might be more than one empty type, and more than one unit type. But in each case they’re all isomorphic (a formal way of saying “basically the same”), so we’ll treat them as one empty type and one unit type.

How would you store the unit type?

The unit type has one value, so it needs zero storage space. Any more than that would be wasting space.

How would you store the empty type?

The empty type has no values, so it doesn’t matter how much storage space it requires, as you’ll never have a value of it to store. You could decide to allocate 5G for the type, if you want, and it wouldn’t cause any problems, because those allocations would never actually happen. (Although, it might cause problems with the way you allocate non-empty types constructed with the empty type.)

What about the void type?

In some languages, “Void” is used to refer to an empty type, or some approximation thereof.

In C, C++, Java, C#, etc., the keyword void refers to the zero-byte function return structure. As such it has 256^0 = 1 value, so it’s a unit type. You can think of void functions as returning only one possible “value” and therefore no information. (Functions in these languages are not maps, by the way.)

By “map”, you mean function, right?

Well, a pure function that always terminates and doesn’t do anything else. A → B just means assigning a B for every A. Two values of this type are the same if they assign the same B to each A. This kind of equality is called “extensional”, basically meaning that the things look the same from the outside.

How many values do maps have?

We’ll use |A| to indicate the number of values in type A. Then we have:

|0| = 0
|1| = 1
|A → B| = |B|^|A|.

So for example, |1 → A| = |A|. And |0 → A| = 1.

Wait, there’s a value of 0 → A?

Yes. While 0 has no values, 0 → A has one value. That value is called the absurd map, and it exists but never gets looked up, or called, or whatever.

How is that possible?

Supposing I told you I could give you a value of 0. Then I’d be lying, because there are no values of 0.

But what if I told you that if you give me a value of 0, I could give you anything? Then I’d be telling the truth, because you can’t give me a value of 0. That promise is the absurd map.

“If you give me a value of 0, I’ll eat my hat.” Clearly I’m not going to break this promise.

Could there be more than one value of 0 → A?

If you have two of these absurd maps, you have nothing you can do with them, so you have no way of telling them apart from the outside. So they’re the same map.

Doesn’t that mean 0 → A is itself a unit type?

I guess?? Sure, whatever.

How would you implement the absurd map as a function?

It doesn’t matter, since such a function will never get called. You could just make it the null pointer if that’s how you represent functions, or whatever you want.

What about the type A → 0?

If A has any values, then A → 0 has no values. But if A has no values, then A is basically the same as 0, and A → 0 has one value, the absurd map.

What’s this in category theory?

There’s a category where the objects are types, and the morphisms are maps. 0 is the initial object, and 1 is the final object. A → B is the exponential object B^A.

How is all this done in Haskell?

Haskell does not have any of these types. It has a type of pure functions, but they are not guaranteed to terminate. It does not have an empty type or a unit type, because non-termination and other “bottom” values are included in all types, but it does have approximations of these types.
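Specifically, base’s Data.Void provides Void, a type with no constructors (so empty, modulo bottom) along with its absurd map, and () is the unit type (again modulo bottom):

import Data.Void (Void, absurd)

-- absurd :: Void -> a is the absurd map; it can never actually be called
unreachable :: Void -> Int
unreachable = absurd

-- () has exactly one value, also written ()
unit :: ()
unit = ()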

— Ashley Yakeley

How To Write an Interpreter for a Lambda-Calculus-Based Language

Domain-specific languages are very useful, and ones that allow computation are more useful still. And as it turns out, they are not that difficult to write. Assuming you are competent in Haskell, I’ll walk you through the basic steps of writing an interpreter for such a language, and you can fill in the details.

Our language will be

  • interpreted,
  • based on the lambda calculus, with variable references, function applications, and function abstractions,
  • lexically scoped, meaning that names are resolved at their use in the source code,
  • dynamically-typed (or unityped as Robert Harper might say), meaning that there is one type for all values that is a sum of various useful types for text, numbers, lists, etc., and most importantly, functions,
  • pure, meaning that functions have no side-effects, although it’s not hard to adapt this to IO or some other monad if you want.

I won’t spend much time on syntax, but for familiarity, any examples of the language itself will look roughly like Haskell where possible.

1. Value type

You’ll need a type to represent the values your language works on, a sum of all the types of values you want to work with. Since the language is based on the lambda calculus, it will need to include a constructor for functions. For example:

data Value =
    VBool Bool |
    VText Text |
    VInt Int |
    VList [Value] |
    VFunction (Value -> Value) |
    VError String

Note that we’re handling errors as a kind of value (VError), but there are other approaches to error-handling.

2. Apply function

This is a function that takes a “function” value and an “argument” value, and applies the one to the other. It’ll probably look something like this:

vapply :: Value -> Value -> Value
vapply (VFunction f) a = f a
vapply (VError err) _ = VError err
vapply _ _ = VError "not a function" -- or a more informative error message

Note how we pass through errors that are being used as functions. If we want to, we could also pass through errors that are being used as arguments, making all functions “strict”.
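If so, that’s one more clause, just before the catch-all:

vapply _ (VError err) = VError err -- errors used as arguments also pass through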

3. The expression type

You’ll need a type for expressions. An expression has some set of free (unbound) variables, which can then be bound in composition with other expressions. For example:

f a 3 has free variables f and a

let p=q in p has free variable q

(\a -> a) 1 has no free variables

Expressions with no free variables are “closed”, those with one or more are “open”.

Here’s the magic type for expressions with lexical scope:

data Expr a = ClosedExpr a | OpenExpr Name (Expr (Value -> a))

There are lots of variations and generalisations of this type, but this is the basic form. (Name is of course the type of names of variables, probably a synonym for String or Text.) Here’s how it represents expressions, all of type Expr Value:

text   value
4      ClosedExpr $ VInt 4
a      OpenExpr "a" $ ClosedExpr $ \a -> a
f 1    OpenExpr "f" $ ClosedExpr $ \f -> vapply f (VInt 1)
f a    OpenExpr "f" $ OpenExpr "a" $ ClosedExpr $ \f a -> vapply f a

— be careful about argument order

4. instance Applicative Expr

You’ll need to create Functor and Applicative instances for your expression type, which I’ll leave for you to do. It’s easy to get the argument ordering wrong, but we’ll catch this with tests. The (<*>) function gives you the “application” operation of the lambda calculus, while pure creates constant expressions.
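If you get stuck, here’s one possible implementation, as a sketch. Note that with this particular convention, the innermost lambda for f a above comes out as \a f -> vapply f a rather than \f a -> vapply f a, which is exactly the argument-order gotcha to test for:

instance Functor Expr where
    fmap ab (ClosedExpr a) = ClosedExpr $ ab a
    fmap ab (OpenExpr name expr) = OpenExpr name $ fmap (fmap ab) expr

instance Applicative Expr where
    pure = ClosedExpr
    ClosedExpr ab <*> expr = fmap ab expr
    -- the flip keeps the remaining arguments lined up under the binder
    OpenExpr name exprab <*> expr = OpenExpr name $ fmap flip exprab <*> expr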

5. var, abstract and let

You’ll need to create functions for the other lambda calculus operations:

exprVar :: Name -> Expr Value
exprAbstract :: Name -> Expr a -> Expr (Value -> a)
exprLet :: Name -> Expr Value -> Expr a -> Expr a

It’s easy to write exprLet using exprAbstract: you simply make this transformation:

let var = bind in val

(\var -> val) bind

Again, if you implement these wrong your tests will catch them.
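If you want a sketch to check against, here’s a set consistent with the Applicative instance above:

exprVar :: Name -> Expr Value
exprVar name = OpenExpr name $ ClosedExpr id -- one free occurrence of name

exprAbstract :: Name -> Expr a -> Expr (Value -> a)
exprAbstract _ (ClosedExpr a) = ClosedExpr $ const a -- no occurrences to bind
exprAbstract name (OpenExpr name' expr)
    -- feed the bound value to this occurrence and to any deeper ones
    | name == name' = fmap (\f v -> f v v) $ exprAbstract name expr
    -- a different variable: leave it free
    | otherwise = OpenExpr name' $ fmap flip $ exprAbstract name expr

exprLet :: Name -> Expr Value -> Expr a -> Expr a
exprLet name bind val = exprAbstract name val <*> bind -- (\name -> val) bind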

6. Recursive let-binding

Of course, this let-binding operation doesn’t allow recursion. The definition cannot refer to the name being defined, nor can you have a set of definitions that refer to each other cyclically. This might be what you want, but if it isn’t, no worries, Haskell is a lazy language and it’s straightforward to implement recursive let-binding (letrec) in terms of sequential let-binding (letseq).

To show how, let’s first see how it works in the case of a single binding. It’s simply this transformation:

letrec var = bind in val

letseq var = fix (\var -> bind) in val
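Given the exprLet and exprAbstract from step 5, and fix from Data.Function, that’s a one-liner:

import Data.Function (fix)

exprLetRec1 :: Name -> Expr Value -> Expr a -> Expr a
exprLetRec1 name bind val = exprLet name (fmap fix $ exprAbstract name bind) val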

Now for a set of bindings, the idea is to gather them together into a single binding, and perform the same transformation:

letrec var0=bind0; var1=bind1; var2=bind2 in val

letseq
vars = fix (\vars -> let var0=extract0 vars; var1=extract1 vars; var2=extract2 vars in join bind0 bind1 bind2);
var0=extract0 vars;
var1=extract1 vars;
var2=extract2 vars
in val

Of course, you’ll need to determine a suitable type for vars, with corresponding join and extractN functions, paying attention to strictness and irrefutability in matching.
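Here’s one possible scheme, as a sketch: it uses a list of values for vars, sequenceA as the join, and (!! n) as extractN. The irrefutable ~(v:vv) pattern is what lets fix tie the knot without looping:

-- abstract a list of names at once, to be supplied as a single list of values
abstractNames :: [Name] -> Expr a -> Expr ([Value] -> a)
abstractNames [] expr = fmap const expr
abstractNames (n:nn) expr =
    fmap (\f ~(v:vv) -> f vv v) $ abstractNames nn $ exprAbstract n expr

exprLetRec :: [(Name, Expr Value)] -> Expr a -> Expr a
exprLetRec binds val = let
    names = fmap fst binds
    -- the fixed point of all the bindings together
    fixed = fmap fix $ abstractNames names $ sequenceA $ fmap snd binds
    in foldr (\(i, name) -> exprLet name (fmap (!! i) fixed)) val (zip [0 ..] names)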

7. Evaluation

Evaluation is turning an expression into a resulting value. This is pretty easy: if an expression is closed, you have a result; if it’s open, then variables have been left unbound:

exprEvaluate :: MonadFail m => Expr a -> m a
exprEvaluate (ClosedExpr a) = return a
exprEvaluate (OpenExpr name _) = fail $ "unbound variable: " ++ name -- or list all of them instead

8. Parsing, generally

So now you’ll need to parse text into expressions. The parsec library is well-suited for this; the Parser monad basically does what you want. The main caveat is how it handles failure in choice:

p <|> q: if p fails without consuming any input, try q; otherwise propagate p’s failure

try p <|> q: if p fails, try q, whether or not p consumed input

The key is to pay attention to the first character to be parsed in each branch of a choice. If they’re all different, you don’t need to use try.

9. Parsing whitespace

The first thing to do is create a parser for whitespace, including comments, which we’ll call readWS. After that, you can use the convention that all parsing functions munch trailing (but not leading) whitespace. I generally find these functions useful:

readCharAndWS :: Char -> Parser ()
readCharAndWS c = do
    _ <- char c
    readWS

readStringAndWS :: String -> Parser ()
readStringAndWS s = do
    _ <- string s
    readWS
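As for readWS itself, here’s a minimal sketch, assuming Haskell-style -- line comments (adjust for your own comment syntax):

import Control.Monad (void)
import Text.Parsec
import Text.Parsec.String (Parser)

readWS :: Parser ()
readWS = skipMany $ void space <|> readComment where
    readComment = do
        _ <- try $ string "--" -- try, so a lone '-' isn't swallowed
        _ <- manyTill anyChar $ void endOfLine <|> eof
        return ()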

Of course, your top-level parser will also have to munch leading whitespace.

parse :: Parser (Expr Value)
parse = do
    readWS -- munch leading whitespace
    readExpression

10. Parsing identifiers

It’s up to you of course, but names of variables and other identifiers typically start with a letter, followed by zero-or-more alphanumeric characters. You might want to also include underscores and some other characters. Use the character-test functions in Data.Char and you’ll get Unicode support for free.

If you have keywords in your language then you must specifically test and exclude them from identifiers, otherwise you’re going to have a bad time.

Don’t forget to add a trailing readWS!
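Here’s a sketch, assuming Name is a synonym for String, with a hypothetical keyword list:

import Data.Char (isAlpha, isAlphaNum)

keywords :: [Name]
keywords = ["let", "in"] -- whatever your language reserves

readIdentifier :: Parser Name
readIdentifier = try $ do
    c <- satisfy isAlpha
    cs <- many $ satisfy isAlphaNum
    readWS -- the trailing whitespace
    let name = c : cs
    if name `elem` keywords
        then unexpected $ "keyword " ++ show name
        else return name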

11. Parsing expressions

You’ll want to pay careful attention to ambiguity and precedence, with functions for parsing “loose” and “tight” expressions. For numerical and other infix expressions, you can use the notion of “terms” (which contribute to a sum) and “factors” (which contribute to a product). For example, the sum a + b * c is composed of two terms, the latter of which is a product of two factors. Your parsers might be structured like this:

readExpression :: Parser (Expr Value)
readExpression = ... readTerm ...

readTerm :: Parser (Expr Value)
readTerm = ... readFactor ...

readFactor :: Parser (Expr Value)
readFactor = ... parenthetical readExpression ...
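parsec’s chainl1 combinator handles left-associative infix operators neatly. Here’s a sketch of how readExpression might handle addition, assuming a hypothetical vadd function on values:

import Control.Applicative (liftA2)

-- hypothetical addition on values
vadd :: Value -> Value -> Value
vadd (VInt a) (VInt b) = VInt $ a + b
vadd (VError err) _ = VError err
vadd _ (VError err) = VError err
vadd _ _ = VError "expected int"

readExpression :: Parser (Expr Value)
readExpression = chainl1 readTerm $ do
    readCharAndWS '+'
    return $ liftA2 vadd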

12. Predefined functions

You’ll probably want to supply a set of bindings to useful functions, which you’ll need to convert to your Value type. The easiest way to do this is to create FromValue and ToValue classes:

class FromValue t where
    fromValue :: Value -> Either String t
class ToValue t where
    toValue :: t -> Value

-- Int example instance
instance FromValue Int where
    fromValue (VInt v) = Right v
    fromValue (VError err) = Left err
    fromValue _ = Left "expected int"
instance ToValue Int where
    toValue = VInt

Then you can create a system of instances that allows easy conversion of Haskell functions and values.

instance FromValue Value
instance ToValue Value
instance (ToValue a,FromValue b) => FromValue (a -> b)
instance (FromValue a,ToValue b) => ToValue (a -> b)
instance FromValue Int
instance ToValue Int
instance FromValue Text
instance ToValue Text

… etc.

You can then let-bind your library of values to expressions created by the user.
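The function instances are where the magic happens. Here’s a sketch of ToValue for functions (the FromValue direction is trickier, as there’s nowhere good to put a conversion error):

instance (FromValue a, ToValue b) => ToValue (a -> b) where
    toValue f = VFunction $ \v -> case fromValue v of
        Right a -> toValue $ f a
        Left err -> VError err

With this, any suitable Haskell function converts directly, e.g. toValue ((+) :: Int -> Int -> Int).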

13. Putting it all together

Now that you have all the pieces, the pipeline looks like this:

  • Parse the user’s text to get an expression (of type Expr Value).
  • Let-bind the library of predefined values.
  • Evaluate the result to get a value (of type Value).

And that’s it!

14. Tests

You’ll want hundreds of these. I use tasty-hunit as a basic test framework. The good news is that it’s very easy to write lots, as the basic form is just a language string and an expected result (fail, or pass with this value). The only awkwardness is that you probably don’t have an instance Eq Value, because of its function constructor, so you’ll need to create a value-comparison function, or even an orphan Eq instance just for testing.
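A test helper might look like this sketch, assuming a hypothetical run function (parse, let-bind the library, evaluate, Nothing on failure) and a hypothetical sameValue comparison:

import Test.Tasty
import Test.Tasty.HUnit

-- run :: String -> Maybe Value -- hypothetical: parse, bind library, evaluate
-- sameValue :: Value -> Value -> Bool -- hypothetical value comparison

testLang :: String -> Maybe Value -> TestTree
testLang src expected = testCase src $
    assertBool "unexpected result" $ case (run src, expected) of
        (Nothing, Nothing) -> True
        (Just r, Just e) -> sameValue r e
        _ -> False

tests :: TestTree
tests = testGroup "language"
    [ testLang "2 + 3" $ Just $ VInt 5 -- hypothetical syntax
    , testLang "2 +" Nothing
    ]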

Be sure to test

  • whitespace,
  • comments,
  • boolean literals,
  • text literals,
  • numeric literals,
  • list construction,
  • parsing ambiguity,
  • function abstraction,
  • function application,
  • let-binding,
  • recursive let-binding,
  • scoping,
  • predefined functions,
  • error propagation,
  • anything else you can think of…

And if you’re really serious, you can use QuickCheck to generate language strings and expected results.

— Ashley Yakeley