Giving up on NixOS (for now)

When I wrote a blog series about setting up NixOS for Haskell development, I was deep within cabal hell trying to get different Haskell projects with differing dependencies to build and install reliably. NixOS, with its rapid path reconfiguration and nix-shell environments, saved the day. The cost, however, was the relatively minor headache of needing to understand cabal2nix to automatically build packages from Hackage.

Since then, though, my development preferences have shifted. My language of choice (for the time being) is Purescript, a new and still-shifting language with new features added in each compiler release. As such, being able to easily grab the newest compiler version without jumping through a lot of nix expression hoops is essential. Also, each of my Purescript projects uses bower, so my projects are already self-contained within their own directories without needing to use nix-shell. In that light, NixOS doesn’t have much to offer my development flow.

So I’m moving away from my NixOS VM for the time being and switching to Lubuntu. So far, I’m not missing NixOS. With only a handful of actual applications on my dev machine, dependency hell is unlikely, and I don’t need to learn the arcane nix language to avoid it. The help resources for Ubuntu distributions are also much more numerous than those for NixOS, so in many cases problems can be solved quickly. If I ever need better build isolation than my current workflow requires, I might give NixOS another shot.

Purescript Puzzler 2 – an experiment in FP architecture

Previously, I wrote about Purescript Puzzler, a toy game I wrote to see how the MVI architecture works in practice. A few things kind of bothered me about my formulation of the architecture, some of which are due to the MVI philosophy generally and some probably due to my inexperience with it. A few notable points:

  1. The model contains all state, even GUI state. This feels like a violation of separation of concerns. Ideally, I should be able to keep my model and replace the GUI with a new one if I wanted, with minimal code changes.
  2. The Intent is very thin. I still don’t really understand its purpose. It seems like the goal is to keep the Model distinct from the View by having the Intent work as a translator between the two, but if the Model needs to store GUI state anyway, then there’s not much point.
  3. Messages are sent between components using an ADT on a single channel between each component pair. Since channels can only accommodate a single type, my ADT needs to encompass every type of action communicated between components. Purescript Puzzler, a minimal application by any reasonable assessment, had 7 actions in its ADT. I can’t imagine how many would be in a decently sized one. The explosion of data constructors in the ADT is also not very compositional; what if I want to build my app out of smaller logical units? Now I need an ADT to wrap the ADTs for each component so I can put them all on the same channel.
  4. I felt like there was too much display logic in the GUI. I usually want the GUI to consist of reusable components that are configurable externally, and that can’t happen with so much logic in the View. This is my own fault, though. I could have put the display logic in the Model, but that would have coupled the Model even more to the View than it already was.

So I set out to update Purescript Puzzler with a new architecture with the following goals:

  1. The Model consists of the domain model and nothing more. Specifically, if the user were to save their game during play, the Model would consist only of the data that should persist between play sessions.
  2. The View should consist of reusable components whose rendering and display logic is controlled externally.
  3. To reduce the possible “explosion” of ADTs as the application grows, communicate functions over the channel instead of ADT messages.
  4. As an experiment, have the GUI be “stateless”. Each GUI component is configured with actions that it can perform, and the state required to perform those actions is carried in closures.

Overall, I think points 1-3 were pretty successful, but point 4 became a big headache. In the rearchitected version, I renamed the Intent to Controller since it was no longer following the MVI philosophy. Here’s a link again to the playable, original Purescript Puzzler.

Limiting the Model

I don’t like the idea of the Model containing GUI state. To me, the model should represent the “save file” (meaning persistent state) for the application and the operations that can be used to query the model or update the model to a new, valid state. In that sense, model operations should be atomic. I used this example in my previous Purescript Puzzler post.

$ move west 50 yards

This is an atomic action that transitions the model from one valid state to a new valid state. However, in a GUI you might need to select several buttons in sequence to stage the command you want to execute before passing that data to a model function to perform the transition. For example, select Move, select West, enter 50 yards, then submit. The state of the application at the end of each of those actions includes the Model state (which has not changed, since the action hasn’t been submitted yet) as well as the transient GUI state. It was this transient GUI state I wanted to keep out of the Model. Consequently, previous model items like selectedPiece and dropTarget were removed because they related solely to staging GUI actions (i.e., selecting the piece, then hovering over the board to see where to place it), leaving the Model with only clean, atomic operations.

(In retrospect, I think a better approach would be to utilize extensible records to have a Model that operates only on Model fields and a GUI that operates only on GUI fields. Logically, they are distinct, but they can populate the same record to act as a single source of truth, as sketched below.)
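For illustration, here is a rough sketch of that idea with hypothetical field names (not the actual game fields): a function whose signature mentions only Model fields cannot touch GUI state, and vice versa, yet both sets of fields can live in the same record.

-- Model (persistent) fields and GUI (transient) fields in one record
type AppState =
  { piecesLeft    :: Number        -- Model fields
  , solved        :: Boolean
  , selectedPiece :: Maybe Number  -- GUI fields
  , hovering      :: Boolean
  }

-- only Model fields appear in the row, so this function can't touch GUI state
usePiece :: forall r. { piecesLeft :: Number | r } -> { piecesLeft :: Number | r }
usePiece s = s { piecesLeft = s.piecesLeft - 1 }

-- and a GUI-only update can't touch Model state
selectPiece :: forall r. Number -> { selectedPiece :: Maybe Number | r }
                                -> { selectedPiece :: Maybe Number | r }
selectPiece p s = s { selectedPiece = Just p }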

Using configurable View components

In the original Purescript Puzzler, I had code in the View such as this:

case gs.victory of
  Nothing -> vtext "Purescript Puzzler!"
  Just true -> vtext "You win!!!!!!"
  Just false -> vtext "You looooose.... :'("

This code is entirely dependent on the Model being displayed, so the View component can’t be re-used in any other context. Instead, we can pass the behavior logic for the component in as an input, then choose new behaviors as the situation requires. I called this set of behaviors the Spec, which seems to be fairly common terminology for this kind of pattern. For instance, here is the structure for the GridViewSpec.

newtype GridViewSpec = GridViewSpec
  { id :: String
  , className :: Maybe String
  , gridSize :: { r :: Number, c :: Number }
  , click :: Callback
  , squareClass :: Number -> Number -> Maybe String
  , squareFill :: Number -> Number -> Maybe String
  , enterSquare :: Number -> Number -> Callback
  , exitSquare :: Number -> Number -> Callback
  , clickSquare :: Number -> Number -> Callback
  , dblClickSquare :: Number -> Number -> Callback
  }

The Spec includes all actions the component can take, including actions to query for required data (like id) as well as callback actions invoked as a result of user input. In this case, the GridViewSpec is used to draw a rectangular grid of squares. In Purescript Puzzler 2, GridViewSpec is used to draw both the puzzle board as well as each selectable puzzle piece, with customized display logic handling each case.

This pattern largely worked well, with a few challenges. One is that I did not define a default Spec for each component type, so I had to completely define each one instead of redefining just parts of a default. This led to pretty verbose code that in most cases merely specified that some piece of functionality wasn’t used. Along similar lines, a Spec needs to strike a balance between being versatile enough to be useful in many situations and not so general that typical functionality is difficult to define. Finding that balance can sometimes be difficult.
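For example, a default Spec could have looked something like the sketch below (noOp stands in for a hypothetical do-nothing Callback), with each concrete component overriding only the fields it cares about via record update.

defaultGridViewSpec :: GridViewSpec
defaultGridViewSpec = GridViewSpec
  { id: ""
  , className: Nothing
  , gridSize: { r: 0, c: 0 }
  , click: noOp                      -- noOp :: Callback is hypothetical
  , squareClass: \_ _ -> Nothing
  , squareFill: \_ _ -> Nothing
  , enterSquare: \_ _ -> noOp
  , exitSquare: \_ _ -> noOp
  , clickSquare: \_ _ -> noOp
  , dblClickSquare: \_ _ -> noOp
  }

-- override only what differs from the default
boardSpec :: GridViewSpec
boardSpec = case defaultGridViewSpec of
  GridViewSpec s -> GridViewSpec (s { id = "board", gridSize = { r: 5, c: 5 } })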

Another challenge was implementing components that contain other components, such as the area that contains all the selectable puzzle pieces. These containers should generically contain a list of displayable components, so to that end I created a Display type class with a single function display that takes an input (the Spec) and produces a VTree output. The component container could then contain a list of Specs with Display instances, or a list of raw VTrees, using the instance below.

class Display a where
  display :: a -> VTree

instance vtreeDisplay :: Display VTree where
  display = id

One side effect of Specs needing instances for Display is that all my Spec types needed to be newtypes instead of type synonyms. This posed some additional annoyances I’ll get to later, but otherwise the ComponentsContainerViewSpec worked pretty well in terms of displaying a homogeneous collection of items. Sometimes, however, it was difficult to update the behavior of a single component in the collection. Also, a useful addition would be some kind of existential wrapper for each component so the container could display a heterogeneous collection of items.
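A speculative sketch of such a wrapper, using mkExists and runExists from purescript-exists (Data.Exists), might look like this:

-- pair a value with its display function, then hide the value's type behind
-- Exists so Specs of different types can share one list
data DisplayableF a = DisplayableF a (a -> VTree)

type Displayable = Exists DisplayableF

toDisplayable :: forall a. (Display a) => a -> Displayable
toDisplayable a = mkExists (DisplayableF a display)

displayAny :: Displayable -> VTree
displayAny = runExists (\(DisplayableF a render) -> render a)

-- a container could then hold a [Displayable] and map displayAny over it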

Sending functions over the channel

The most successful development in Purescript Puzzler 2 compared to the original is communicating functions over channels instead of ADTs that represent actions.

Using ADTs, the space of possible actions can grow and grow as the application grows. Additionally, if you want to compose different Models together into a new Model’, you need to declare a new ADT just to wrap the ADTs of each component so that you can put them on the same channel. When received by Model’, the actions are unwrapped and routed to each component for execution and state update. Let’s look at the updateGame function for the original Purescript Puzzler, which has a monolithic Model.

updateGame (TogglePiece p) s = ...
updateGame (TargetDrop r c) s = ...
updateGame Hint s = ...
...

Notice the pattern? Each has type GameAction -> GameState -> GameState. So why do we need an ADT anyway? What if, instead, we did:

togglePiece p s = ...
targetDrop r c s = ...
hint s = ...

When partially applied to the data that was previously carried in the ADT, each of these functions has the same type GameState -> GameState: it takes the state and updates it. As such, we can send any of them on the channel to update the Model. And now, instead of requiring a dummy Init data constructor to initialize the channel, we can use the simple, polymorphic id. We can even build new functions on the fly to update the game state, so the space of possible actions is limitless without needing to expand an ADT in parallel.
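Here is a rough sketch of the resulting wiring, using purescript-signal’s Channel API (GameState, initialState, and togglePiece are stand-ins for the real definitions):

main = do
  -- the channel carries plain update functions, so the polymorphic id is a
  -- perfectly good initial value and no dummy Init constructor is needed
  chan <- channel (id :: GameState -> GameState)

  -- folding function application over the stream of update functions
  -- produces the signal of game states
  let stateSignal = foldp ($) initialState (subscribe chan)
  ...

-- any event handler can then send an update, e.g.
-- send chan (togglePiece p)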

Another benefit of communicating between modules with update functions is that composing different Models together becomes really easy; simply combine the constituent models into a Model’, and any function Model' -> Model' can update any subset of the models, check relationships between them, etc.

To unlock this compositional advantage in a convenient way, however, we need to use lenses. One of the great things about lenses is that they allow easy composition of multiple, individual updater functions. For instance, here is an abridged lens from Purescript Puzzler 2 that updates the behaviors of several different visual components at once.

-- first change how the pieces are drawn so border is shown
(_PuzzlerViewSpec..pieces.._ComponentsContainerViewSpec..components .~ ...
)
.. -- Now change hover behavior for board
(_PuzzlerViewSpec..board.._GridViewSpec..enterSquare .~ ...
)
.. -- Now change click behavior of board
(_PuzzlerViewSpec..board.._GridViewSpec..clickSquare .~ ...
)

One annoying thing about using lenses with newtypes is how verbose the lenses become, because an extra lens is required just to unwrap the newtype. This is kind of silly since a newtype only has one component that a lens can focus on anyway, so there’s really no other option. I think a useful solution would be to define a type class that provides the newtype wrapper/unwrapper lens, and then use a function (...) (three dots instead of two) to handle the lookup automatically. The above would then become:

-- first change how the pieces are drawn so border is shown
(...pieces...components .~ ...
)
.. -- Now change hover behavior for board
(...board...enterSquare .~ ...
)
.. -- Now change click behavior of board
(...board...clickSquare .~ ...
)

Note that the trailing ... after each set operation .~ indicates code removed in this abridged version, not an invocation of the (...) function.

Using lenses in practice was a big step in my FP skill development, and was pretty easy after deciphering how they work. Their use above still has room for improvement, though.

Carrying GUI state in closures

If sending functions through the channel was the big success with this experiment, carrying GUI state through closures was the big failure.

My original idea was to think of GUIs as having no persistent state. Instead, they have a set of actions they can perform in response to user input. Each action results in a new set of actions the GUI can perform. At no point does the GUI hold data about its “state”, only the set of things it can do.

So, given the above assumptions, how can the new set of GUI actions depend on the previous actions? It can if the actions themselves carry their needed information, either by closure or by currying.

One of the problems with actions returning new actions, however, is that the callbacks for an action need to set the callbacks of the action it returns, and those callbacks need to set the callbacks of the actions they return. It’s callbacks all the way down. Generally, though, the callbacks only need to be defined until you reach some kind of “baseline” state that results when the Model performs an update action. But these baseline states might be a slight modification of another baseline state, which is what you’re trying to define in the first place!

To see what I mean, take a look at this excerpt from my code.

pieceSpec mSel p = GridViewSpec
    { id: ""
    , className: if (mSel == Just p) then Just "piece selected" else Just "piece"
    , gridSize: { r: rows p, c:cols p }

    , click: callback $ const $ send chan $ defer \_ -> 
        let newSelection = case mSel of
              Just sel | sel == p -> Nothing -- unselecting currently selected piece
              _ -> Just p -- new selection

        in  if isNothing newSelection
            -- if no piece is selected, return to base spec
            then \_ -> controller chan gs
            -- else, modify the behavior of the pieces area and board to
            -- highlight the selected piece and enable drop preview
            else
            -- first change how the pieces are drawn so border is shown
            (_PuzzlerViewSpec..pieces.._ComponentsContainerViewSpec..components .~
              A.map (pieceSpec newSelection) ps
            )

Look at the first line and the last line of the code. The specification of actions for the selectable piece, pieceSpec, defines the set of actions available after a click event in terms of a new pieceSpec using the newly selected piece! This recursive definition would be right at home in Haskell, but the strict evaluation semantics of Purescript won’t fly here. I had to “lazify” this code using the purescript-lazy package to defer its evaluation until it is actually called by the onClick event. This was my first clue that this approach wasn’t going to work too well.
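For reference, the part of purescript-lazy involved here is tiny; roughly (expensiveSpec is a hypothetical stand-in):

-- from Data.Lazy:
--   defer :: forall a. (Unit -> a) -> Lazy a   -- wrap a thunk
--   force :: forall a. Lazy a -> a             -- evaluate it (at most once)

lazySpec = defer \_ -> expensiveSpec   -- nothing is evaluated here
-- force lazySpec                      -- evaluation happens only when forced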

Another problem with carrying GUI state in the action closures is that there is no longer one source of truth. For instance, if you pass the GameState to two different closures, there are two different versions of the truth, so updating one closure with a new GameState will not automatically update the other. This was an issue in Purescript Puzzler 2 when the user had already selected a piece and wanted to remove another from the board before placing the selected piece. The selection action binds the current game state to the closure when a piece is selected so the user can see which placements are legal and which are not. However, if the game state changes before the piece is placed (such as by removing some other piece from the board), the old game state is still bound to the placement validity check, so valid moves might be shown as invalid. It became kind of a mess, and in some cases I just left features from the original Purescript Puzzler unimplemented to avoid the hassle.

Yet another problem with using closures to record GUI state is that it’s never clear which objects are included in the closure and which aren’t. Is it possible that some objects are never garbage collected because they are always included in a closure in one way or another? I think it’s definitely possible. On top of that, it’s much harder to inspect the environment captured by a closure than it is to just inspect the state of a record in memory, so debugging is more difficult.

Next steps

Purescript Puzzler 2 was a success in that it showed which FP architecture techniques might work and which won’t. If there’s ever a Purescript Puzzler 3, I’d include the following attributes:

  1. The Model can indeed include all domain model state as well as all GUI state (unless GUI state is stored in GUI components like in React), but extensible records should be used to logically separate the two.
  2. Communicate updater functions through the channel instead of using ADT messages. This was a nice improvement.
  3. Use lenses to define update relationships between components, but find some way to ameliorate the pain of newtype unwrapping.
  4. Try out a GUI framework like React. Virtual-dom works well for a from-scratch approach to a GUI, but I know of zero libraries of reusable virtual-dom components, whereas there are several React component libraries available.

That’s all, folks. Happy Valentine’s Day!

How functional programming lenses work

Lenses can be a powerful component of a functional programmer’s toolbox. This post is not about their power or how you can use them. There are plenty of decent tutorials for that, so if you want to know what the lens fuss is about, I encourage you to check them out.

Instead, I will try to explain the magic behind how lenses work. After scrutinizing the types and code for a while, I finally had my burrito moment, so hopefully this explanation helps someone else out. I’ll be explaining the implementation behind the purescript-lens library, which is modeled after the Haskell lens library. Other implementations might be out there, too, but this is the one I’m focusing on. If you’ve ever looked at this implementation and wondered “How the hell do you get a getter and setter out of that?!”, then this post is meant for you.

The gist: a lens is a function that executes a series of steps to get an intermediate result, performs an operation on the intermediate result, then follows the series of steps backwards to build up a new result. The manner in which the new result is built up is controlled by the functor used by the operation.

Lens type explained

A lens is a function with the following type:

type Lens s t a b = forall f. (Functor f) => (a -> f b) -> s -> f t

Looking at this type, it’s a little difficult to see what is going on. Essentially, a lens uses an operation a -> f b to convert data of type s into data of type f t. I call a -> f b the focus conversion and s -> f t the lens conversion.

Let’s start with the types s and a. s is the parent object that is being inspected, and a is the part of the parent the lens puts focus on. For example:

type Foo = 
    { foo :: {bar :: Number} 
    , baz :: Boolean }

bazL :: Lens Foo Foo Boolean Boolean

The bazL type definition using Foo Foo instead of two different types simply means that the lens conversion does not transform inputs of type Foo into some unrelated type, excepting the functor f. Indeed, a Foo input will become an f Foo output of the lens conversion. Similarly, Boolean Boolean indicates that the focus conversion does not change the type of the focus (again, excepting the functor f) – Boolean becomes f Boolean.

This is the common pattern used to define a lens that works as both a getter and a setter. It’s so common that most lens libraries define a type synonym for it: type LensP s a = Lens s s a a.

More than just a getter and setter

Getters and setters are common use cases for lenses, but really lenses perform a complete transformation across data. This is why there are four type variables instead of just two. Let’s look at a more general type for the bazL lens that allows it to operate on any record with the baz field and transform any baz type a into f b.

bazL :: forall a b r. Lens {baz :: a | r} {baz :: b | r} a b
bazL = lens (\o -> o.baz) (\o x -> o{baz = x})

This type indicates that the lens conversion is from a record with a baz field of any type to a record of a baz field with a possibly different type (wrapped in a functor). The focus conversion goes from a type a to a type f b. The lens function above is a utility function that builds a lens that can be used as both a getter and a setter from a getter function and a setter function.
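As a quick usage example (hedged: assuming the ^. and .~ operators from purescript-lens, which mirror Haskell’s lens):

o = { foo: { bar: 1 }, baz: false }

gotten  = o ^. bazL            -- false
updated = (bazL .~ true) o     -- { foo: { bar: 1 }, baz: true }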

Is it possible for a lens to have type Lens a a b c, meaning the focus conversion performs a type change but the lens conversion does not? I’m actually not sure, but I think yes. Such a lens could be composed with Lens b c d d to get Lens a a d d, which is the familiar type we already know. In fact, lens composition is just normal function composition because lenses are just regular functions!

Behind the curtain

What’s going on behind the scenes when we compose lenses? Let’s look at our Foo type again, along with a couple of new lenses and the lens helper function.

type Foo = 
    { foo :: {bar :: Number} 
    , baz :: Boolean }

fooL :: forall a b r. Lens {foo :: a | r} {foo :: b | r} a b 
fooL = lens (\o -> o.foo) (\o x -> o{foo = x})

barL :: forall a b r. Lens {bar :: a | r} {bar :: b | r} a b 
barL = lens (\o -> o.bar) (\o x -> o{bar = x})

lens :: forall s t a b. (s -> a) -> (s -> b -> t) -> Lens s t a b 
lens s2a s2b2t a2fb s = s2b2t s <$> a2fb (s2a s)

When we use lens to build fooL from a getter and a setter, here’s what the result looks like, with types specialized for our Foo data type instead of the more general extensible record types.

fooL :: ({bar :: Number} -> f {bar :: Number}) -> Foo -> f Foo 
fooL focusCon parent = 
    (\o x -> o{foo = x}) parent <$> focusCon ((\o -> o.foo) parent) 
-- which we can simplify to 
fooL focusCon parent = (\x -> parent{foo = x}) <$> focusCon (parent.foo)

Our resulting lens applies the focusCon function to parent.foo, then incorporates the result back into the parent structure using the functor’s <$>.

The resulting fooL lens is just a function, so we can compose it with other functions.

(..) = (<<<) -- normal function composition
(..) :: (b -> c) -> (a -> b) -> a -> c
(..) f g a = f (g a)
-- or alternatively
(..) f g = \a -> f (g a)

When we compose lenses fooL..barL, we have:

fooL..barL =
\fc -> fooL (barL fc) =
\fc parent -> (\x -> parent{foo = x}) <$> (barL fc) (parent.foo) =
\fc parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> (\y -> parent2{bar = y}) <$> fc parent2.bar) 
    parent.foo

-- and now we can reason the following

fc :: Number -> f Number -- fc is the focus conversion function
parent :: Foo
fooL..barL :: (Number -> f Number) -> Foo -> f Foo
fooL..barL :: Lens Foo Foo Number Number

That’s amazing! Lenses compose using normal function composition.

Take a second look at the function pipeline above and assume that the focus conversion fc has been defined already. First, parent.foo is piped in as parent2. fc is applied to parent2.bar to give a result of type f Number. The result is mapped over using <$> to apply \y -> parent2{bar = y}, giving a result of type f {bar :: Number}. This is then mapped over again using <$> to apply \x -> parent{foo = x}, giving a result of type f Foo, which is the output of the lens conversion.

This is the essence of a lens; the composed lens looks deep within the object, focuses on a value of a certain type, does something with that type, then recurses back up the object, “combining” the results together using <$>.
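To make that concrete, here is a small example using Maybe as the functor, so the rebuilding happens inside Just (Foo, fooL, and barL as defined above):

-- the focus conversion bumps the focused Number inside Just; the lens then
-- rebuilds the whole Foo inside the same functor
example :: Maybe Foo
example = (fooL..barL) (\n -> Just (n + 1)) { foo: { bar: 1 }, baz: true }

-- example == Just { foo: { bar: 2 }, baz: true }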

That functor magic

One thing we haven’t looked at yet is the type variable f, the functor used by the focus conversion function. Not only is the functor used in the focus conversion function, but it is also used every time the lens recurses up one layer of the object and combines the results together. The functor, therefore, controls how the new result is built out of intermediate results. The choice of functor is what determines the behavior of the lens.

Let’s take our fooL..barL example again.

fooL..barL =
\fc parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> (\y -> parent2{bar = y}) <$> fc parent2.bar) 
    parent.foo

Let’s pick the Const functor for f, which has the following implementation: (<$>) fx (Const x) = Const x. In other words, no matter what function fx is applied with <$>, a Const x value will never change. Let’s see what happens when we simplify our lens using Const as the focus conversion.

(fooL..barL) Const =
\parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> (\y -> parent2{bar = y}) <$> Const parent2.bar)
    parent.foo =
\parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> Const parent2.bar) 
    parent.foo =
\parent -> (\x -> parent{foo = x}) <$> Const parent.foo.bar =
\parent -> Const parent.foo.bar

When using the Const functor, the lens is a getter! Can you guess what happens when the Identity functor is used? For reference, (<$>) fx (Identity x) = Identity (fx x), which is just normal function application after unwrapping the Identity constructor.

(fooL..barL) Identity =
\parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> (\y -> parent2{bar = y}) <$> Identity parent2.bar) 
    parent.foo =
\parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> Identity ((\y -> parent2{bar = y}) $ parent2.bar)) 
    parent.foo =
\parent -> (\x -> parent{foo = x}) <$> 
    (\parent2 -> Identity (parent2{bar = parent2.bar})) 
    parent.foo =
\parent -> (\x -> parent{foo = x}) <$> 
    Identity (parent.foo{bar = parent.foo.bar}) =
\parent -> Identity ((\x -> parent{foo = x}) $ 
    parent.foo{bar = parent.foo.bar}) =
\parent -> Identity $ parent{foo = parent.foo{bar = parent.foo.bar}} =
\parent -> Identity parent

Under the Identity functor, the lens takes a Foo and returns an equal Foo by building one from the ground up.

So what if, instead of using just Identity as the focus conversion, we first run the focus through a function? That is, the focus conversion is \val -> Identity (fun val) for some fun. I’ll spare the derivation and jump to the end: we get back the parent we put in, except the focus will have the value returned from fun. Set fun = const newVal and you have a setter!

As a little aside, I can say personally that understanding how drastically the functor changes the behavior of the lens was a major hurdle for me. So many lens tutorials say that lenses “combine a getter and a setter” or something along those lines, and then I’d check the implementation and see no obvious getter or setter and wonder, “What kind of devilry is this?!”

If the functor controls the rebuilding done at the end of the lens conversion, then how is the functor selected? The choice is baked into the implementations of view, set, and the numerous operators that can be used to run a lens conversion.
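Here is a sketch of those implementations (assuming Const/getConst and Identity/runIdentity newtype wrappers like the ones in the PureScript core libraries):

-- Const throws away the rebuilding, leaving just the focused value: a getter
view :: forall s t a b. Lens s t a b -> s -> a
view l s = getConst (l Const s)

-- Identity rebuilds the structure after running f on the focus: a modifier
over :: forall s t a b. Lens s t a b -> (a -> b) -> s -> t
over l f s = runIdentity (l (Identity <<< f) s)

-- a setter is just a modifier that ignores the old focus
set :: forall s t a b. Lens s t a b -> b -> s -> t
set l b = over l (const b)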

The gist, again

Lenses are tools for ripping something into constituent pieces, doing something with the final piece, then recombining all the pieces together into something new. At least that’s how the person behind this keyboard thinks about them now. There’s probably an even more general interpretation that describes all kinds of wonky things lenses can do beyond the usual getter, setter, identity, and similar behaviors, but this covers the most common uses pretty well. I’ll save that more general discovery for another day, after my cognitive burrito appetite returns.

Exploring the MVI architecture with Purescript Puzzler

Purescript Puzzler is a simple puzzle game using Tetris-like shapes that I created for the sole purpose of

  • Practicing purescript generally.
  • Trying out the purescript libraries virtual-dom and signal.
  • Exploring FRP and the Model-View-Intent style of application architecture.
  • Feeling out the workings of possible wrapper libraries to make the process easier.

The game is playable on github, but as configured is pretty difficult to solve, so I recommend using the Hint button liberally to adjust the game’s difficulty level to your tastes.

Below is a collection of my impressions of the experience in making this game. My goal here is not to argue for or against any of these libraries or techniques but simply to write my ideas down while they are still fresh in my head and invite discussion.

Approach Overview

Before talking about my impressions, I’ll briefly describe the design of the game from a high-level viewpoint.

The easiest place to start is with the View. The View accepts the entire game state as an argument and produces a VTree representing the desired DOM state used for display. The View itself has no internal state aside from some stateful operations that can be performed on the DOM when the VTree is actually rendered. Also passed as an argument to the View is a Channel from the signal library. The Channel is used in event handlers to send lightweight messages to the Intent that describe the user’s interaction. For example, when the user selects a piece to place on the board, the View sends a PieceClicked Piece message on the Channel to describe the event (what happened and what it happened to).

The Intent’s purpose, according to the MVI blog post linked above, is to translate user input events into Model events. In my implementation, this was really simple and there was essentially a 1-to-1 correspondence between my ViewEvents and my GameUpdate events, which seems like a code smell.

The Model contains all the game’s state, GameState. This includes both the “domain model” state as well as any GUI-related state. The Model also provides an update function that consumes GameUpdate messages and returns an updated GameState.

The individual components are wired together in Main. A Channel is created for communicating ViewEvents. Subscribing to the Channel produces a Signal ViewEvent. This is fed to the Intent, which produces a Signal [GameUpdate], which is folded over with the Model’s state update function to produce a Signal GameState. This signal is then fed into the View to create a Signal VTree, which is rendered with an effectful rendering function.

The Intent produces a Signal [GameUpdate] instead of Signal GameUpdate because I figured that, in the general case, a single event on the View might trigger multiple updates in the Model, but that wasn’t the case in this game.
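Concretely, the wiring looked roughly like the sketch below; intent, update, view, render, and initialState are stand-ins for the real functions, and the signal functions come from purescript-signal (Signal and Signal.Channel), with foldl from Data.Foldable.

main = do
  -- Channel of lightweight ViewEvents, seeded with a dummy Init event
  chan <- channel Init

  let viewEvents  = subscribe chan                      -- Signal ViewEvent
      gameUpdates = intent <$> viewEvents               -- Signal [GameUpdate]
      gameStates  = foldp (\us s -> foldl update s us) initialState gameUpdates
      vtrees      = view chan <$> gameStates            -- Signal VTree

  -- render is the effectful function that patches the real DOM
  runSignal (render <$> vtrees)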

Signal library

The signal library has some quirks, but overall it’s really nice. It’s essentially a clone of Elm’s signals, and Elm has been exploring the concept of FRP-driven GUIs for a long time, so a lot of the kinks have been worked out.

Signals defined for all time

Signals are defined for all points in time, including time zero. Signals of “events,” therefore, aren’t so much discrete events in the usual sense. Instead, they represent the most recent event to happen. A consequence of this is that signals require initial state. There’s some discussion already here.

This formulation occasionally presented me with pain points when using signals. For instance, in order to get notifications of DOM events, I needed to put a Channel in my event handlers. In most cases, this isn’t a problem, but consider events that fire rapidly, like drag events (drag and drop was my first approach for Purescript Puzzler, which I abandoned after great difficulty). Usually I won’t want to update the model on every drag event, so I can use sampleOn (every 40) at the subscription site so that drag events are limited to firing at a 40 ms rate. But wait…what if dragging stops? No new events are put on the channel, but I’m still sampling the most recent event every 40 ms! Gah! A situation similar to this caused a nasty bug in which a “toggle” event kept firing, causing a div border to flash with an 80 ms period. I worked around this by combining my sampling with distinct' to eliminate samples that didn’t change, but it does show that signals defined for all time can be tricky to think about.
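The workaround, roughly, looked like this (dragChan is a hypothetical Channel of drag events; sampleOn, every, and distinct' are the combinator names the signal library used at the time):

-- limit drag events to one sample every 40 ms, then drop samples that are
-- referentially unchanged so a stale "most recent event" stops re-firing
dragSignal = distinct' (sampleOn (every 40) (subscribe dragChan))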

Another nit is that a Channel must be initialized with some value for time zero, but if a channel is being used to send events, how do I initialize it when no events have happened yet? I had to create a dummy event called Init that did absolutely nothing and had no significance, just to give the channel some kind of value. It’s possible that with some smarter designing, I could order the MVI components in such a way that the initial value to start the chain is the initial GameState, eliminating the need for Init, but it is not obvious to me whether that would work out.

Trickiness with effectful Signals

I had a lot of problems trying to figure out a way to process signals in an effectful way that also incorporated state. Specifically, I wanted my render function to add the “first” VTree to the DOM and after that patch the DOM with the diff between previous and current VTrees. However, I couldn’t figure out a way to use the signal combinators to achieve this. Instead, I had to provide an initial, dummy VTree, render it, then use the rendered node and the previous VTree as internal state that I folded over with new VTrees.

Signal distinction

Even though I got bit by the rapid sampling problem I mentioned above, it was pretty easy to fix once I understood it. The ability to apply distinct' for reference equality or distinct for value equality is pretty awesome, and underscores how much power can be packed into a small set of well-designed combinators. It is something the developer needs to think about, however. For instance, typically I used the distinct' variant out of habit, but in doing so I had to do things like

myFunc state = 
  case state.value of
    Nothing -> state
    Just v -> state {value = Nothing}

Functionally speaking, the two cases have the same result: a state with value set to Nothing. In the first case, however, the result is referentially the same as the input, while in the second case it’s not; a new state will be built and returned. I think in the future it would probably be better to simply use the value-equality variant of distinct and only bother with the referential version if it were required. However, the value variant requires an Eq typeclass, so I can’t use it on type synonyms (as of now). I use type synonyms for records pretty liberally, so actually I’m not sure it would even work with the above code.

(I’d love to have view patterns to simplify code for cases such as above.)

Interactions “set in stone”

Once a signal line is established, I don’t think there is any way to “unsubscribe” from its updates. Any function that a signal is routed through will be called when that signal updates, so the function itself needs to handle its own logic for listening to or ignoring the updated values.

It’s also not clear to me how I would listen to multiple signals on the fly, adding new signals as needed. I expect it to be possible; I just don’t have any experience with it and feel like it could be a pain point.

Virtual DOM

Overall, virtual-dom is really cool to use. Writing a declarative GUI without having to worry about update-this or change-color-that was a straightforward, easy experience. I never have to worry about keeping the GUI and game state in sync because I’m building the GUI from scratch at each update. A cool side effect of this is that I can save my Model state at any time, then load it up and instantly see exactly what I was looking at when I saved. A stateless GUI makes reasoning about the display really straightforward.

For small projects, I wouldn’t hesitate to use virtual-dom again. However, for more complex ones, I have some concerns.

Room for VDOM wrapper libraries

The virtual-dom bindings for purescript are pretty low level, and that’s the way I think they should be. In JS land, the virtual-hyperscript DSL can be used to easily set up VTrees and includes a lot of convenient features out of the box, but it is opinionated. I’d love to see a similar purescript library and will probably make one sometime in the future.

One thing I want to avoid, however, is over-generalizing by writing a virtual-dom interpreter for a general DOM declaration language. Doing so might make common use cases easy but virtual-dom specific cases hard.

Stateless GUI is a double-edged sword

Not worrying about GUI state is a huge relief on the GUI side, but that means that every single stateful component in the application is now included in the Model – open dialogs, mouse location clicks, you name it. Purescript Puzzler is pretty simple in both the model and the GUI, so this is manageable. However, I can’t help but feel that in the general case, this actually couples the Model to the GUI more instead of less. Now the Model needs to know about all the little components on the GUI side in order to capture their state. This kind of formulation does not lend itself well to composition or separation of concerns.

Along the same lines, one thing I’m confused about in the MVI style is the purpose of the Intent. All mine did was act as a message translator, but I think it might make sense to capture some state in the Intent. One use case might be to evaluate a sequence of user actions before translating them for the Model.

As an example, imagine a text-based adventure game in which the user can move around like this:

$ move west 50 yards

This can be represented in a single Model event very easily, but what if this were a GUI? Let’s say the user clicks the Move button, then clicks West, then types in 50, chooses yards, then hits enter. While cumbersome, a GUI like this should be possible. Including some internal state in the Intent component would allow each input event to be collected into a single message to be sent to the Model. However, because there are no Model updates after each input event, there is no feedback that the user did anything! There’s not a straightforward solution, which is why the MVI blog post claims this is still an open problem.

Animations are inherently stateful

Transitions and animations are part of the polish in a really nice UI experience, but they are inherently stateful. All of that state (position, orientation, size, alpha, etc.) needs to be captured in the Model, needlessly complicating it.

One way around this is to use virtual-dom hooks to change node properties on “nextTick”. The change in properties will trigger a CSS transition, which will handle all the animation state behind the scenes. I haven’t tried either of these approaches, so I can’t comment on them extensively, but it definitely feels right to keep a bunch of pointless GUI state out of the Model.

Full VTree diff each update

If I understand the internals of virtual-dom correctly, the entire VTree is diffed and patched at each update. Typically, only a small part of the tree will change, but the whole tree is diffed all the same. There are some implemented shortcuts for reference equality of nodes, but in straight-forward functional style, the entire VTree will be rebuilt from scratch without any kind of caching, so reference equality will never hold. I wonder what the performance limit on this strategy is – how big can the VTree be and how fast can it be updated before this is a problem?

Virtual-dom uses Javascript conventions

Virtual-dom uses a number of JS-style conventions that present some impedance when trying to use them from Purescript. The biggest is probably the use of arbitrary records that define id, attributes, style, etc. These are pretty verbose and annoying. In fact, most of the work done by the virtual-hyperscript DSL is simply building these records for the user in an easier way. However, in Purescript, each record is technically a different type, which doesn’t always jibe well with the type system.

Another JS convention is the use of undefined to remove properties from nodes. This only affected me in one case in which I needed to disable/enable the Hint button. To disable/enable, I use the following attributes.

{ attributes: { disabled: "" } } 
-- presence of the disabled attribute disables the button, regardless of value
{ attributes: {} }              
-- absence of the disabled attribute enables the button

The problem is that the above two records have different types, so I cannot return one or the other from an if clause. Instead, what I did was use the FFI to create def and undef values and assign disabled to one or the other as appropriate. An alternative would be to return entirely different VTrees from the if/else clause instead of trying to select only their options.

Overall

Building Purescript Puzzler was a great learning experience, especially since I’m not the most experienced web dev. I sunk my teeth into a few cool libraries and got my first real taste of FRP outside of toy examples. However, I still feel like thar be dragons lurking in the shadows with this approach, and the apparent lack of complex examples is not reassuring. I think part of the problem with this is the absence of wrapper libraries that make common tasks easier, which inhibits exploration. Another open problem is handling of complex GUI state within the Model. I’m also concerned with integrating foreign GUI components (“widgets” in virtual-dom parlance) and persisting state across them.

The approach does, however, eliminate sync issues between the components, which naturally eliminates a large source of bugs.

Why Purescript?

One thing that I feel Haskell doesn’t do well presently is interface with GUI libraries. One issue is that, because most such libraries are largely stateful, there’s some impedance when trying to adopt them into idiomatic Haskell code. Another problem altogether is trying to get cross-platform GUI libraries to even build/run. It’s a problem I’ve written about before.

I’ve had some good success in the past with using Haskell libraries that leverage the browser as a robust, cross-platform GUI. However, running the GUI in the browser and the controller logic in Haskell is problematic for all but the simplest GUIs for a few reasons:

  1. The round trip communication between the browser and Haskell code introduces latency that might be unacceptably high. Indeed, some typical operations might require several round trips.
  2. More and more, the latest GUI component libraries are written in Javascript, so using GUI components that are designed with Javascript in mind is more difficult with a Haskell backend.
  3. Keeping the GUI in sync with the model requires fragile DOM updates. Smarter, more functional and performant approaches like virtual-dom don’t integrate well with Haskell backends because the entire model or virtual DOM tree must be serialized to the browser for each GUI update.
  4. More and more, browser technologies are becoming the basis for cross-platform apps.

But here’s the thing: writing Javascript is no fun. Haskell is fun. Can we write Haskell but get the advantages of Javascript?

Haskell to Javascript Compilers

Write Haskell. Compile to Javascript. Run anywhere. Sounds good, right? Some projects are developing exactly this kind of solution. Each has its advantages. For example, Fay has a nice FFI to bind to Javascript libraries and invoke custom Javascript code. Haste reportedly generates fast, understandable Javascript. GHCJS supposedly compiles any Haskell code with ease.

However, I think all these solutions suffer from a key deficiency: Haskell baggage. Each compiles Haskell syntax and supports Haskell semantics, including laziness. Usually, this necessitates a large Javascript runtime be packaged with the generated Javascript program just to execute it the way Haskell would. After all, they are forcing the square peg of Javascript into the round hole of Haskell.

Purescript – A Different Approach

Purescript is a Haskell-like language that compiles to Javascript. Unlike the above approaches that accommodate all/most of Haskell’s syntax and features, Purescript is a functional language designed from the ground up to target Javascript, and its syntax is close enough to Haskell that it should be easy for any Haskeller to pick up quickly. It features:

  1. No burdensome runtime.
  2. Strict, Javascript-like evaluation.
  3. Support for Javascript object literal notation.
  4. A type system that is arguably more powerful and/or more convenient than Haskell’s.
  5. A dead simple FFI that makes Javascript library interop relatively painless.
  6. Other cool goodies.

In a future post, I’ll explore these advantages more in depth, but here I wanted to describe what might motivate a Haskeller (or anyone) to give Purescript a try.

Haskell desktop GUIs with bindings-cef3

While web technologies and apps “in the cloud” are easily the dominant trend in application programming these days, there are still some situations in which reliable old desktop apps are superior. For example, most of my experience is in the defense industry, and there is no way the cloud would get trusted with sensitive files without appropriate protections. So how can we leverage the efforts and progress of web interface programming on the desktop, especially with regard to Haskell?

One option that I have some experience with is threepenny-gui, which uses the web browser as an interface to a Haskell backend server that runs locally (or over a LAN – really any low latency situation). An early version of threepenny is used in FNIStash. While effective, something just feels wrong to me about running desktop applications inside a browser application. Really what I’d like to do is incorporate browser functionality inside my applications instead of relying on Google, Mozilla, or even Microsoft.

Enter the Chromium Embedded Framework. It’s a fully functioning, bare-bones browser and javascript engine packed inside a .dll or .so file. Add some tabs, a URL bar, bookmarks, etc, and you have a modern browser. Alternatively, you can use CEF to provide HTML and JS driven GUIs for desktop applications. Indeed, the Steam client does this very thing! Adobe uses it as well. In fact, it’s pretty popular.

CEF provides a C API, which we can call from Haskell with appropriate bindings. I cut my teeth recently with the bindings-dsl by defining the bindings to the low level interface of the HDF5 library. Bindings to the CEF library seemed like a good follow on activity, especially since I’d like to use it in my own Haskell projects.

So I created bindings-cef3, Haskell bindings for the CEF3 C API. The package includes an optional example program that creates a browser window and loads a URL. There’s a snapshot below. These bindings are very low level, so there is ample opportunity to wrap them in a smarter, more convenient Haskell API. This is something I plan to do myself eventually, unless a more experienced Haskeller beats me to it!

Beware that because of the multiple interdependent types in CEF3, nearly all of the bindings are provided in a single file. It can take a while to preprocess and compile. When I was working on it, I had to up the RAM on my virtual machine from 4 GB to around 5 GB to avoid cryptic gcc errors. If anyone knows a better way to structure the library to avoid these hurdles, I’d love to hear it.

The library is not currently on Hackage as of this writing, and that is intentional. The current version only supports Linux, and being an occasional Windows Haskell developer myself, I don’t like the idea of keeping out Windows (and MacOS) users. Only a little more coding effort and a decent amount of testing effort is required for other platforms, and I welcome contributions from anyone wanting to help. Once it has been vetted a little more and better compatibility is implemented, joining Hackage with the other bindings libraries seems only natural.

While bindings to CEF3 are just a first step toward having “true” Haskell desktop GUIs driven by web technologies, it’s an important step. More experimentation is needed to uncover quirks and limitations; I’m certainly no CEF expert. In fact, I probably know only as much as is necessary to get the example application working :). Hopefully this library will help out anyone frustrated with Haskell GUIs like I have been before!

Screenshot of cefcapi example program.

FNIStash r1.5 released!

I put up a new version of FNIStash today. There aren’t any new features, but the ability to recognize items in the shared stash without errors has been dramatically improved. My wife and I have been using this version for a while and only rarely, rarely come across an item FNIStash has problems handling. In fact, the only item I know of right now that has a problem is Landchewer…

If you’ve been holding off on trying out FNIStash because it wasn’t reliable, I think you’ll be a lot happier with r1.5 than previous releases.

Remember, if you have a question or problem, don’t be afraid to contact me or leave a comment! I like to think I’m pretty responsive.

Setting Up a Haskell Project on NixOS

Previously, we looked at how to add new packages to our nixpkgs expression tree. Often, the reason we are adding new packages is that we want to use them for Haskell development! This installment will explore starting a new Haskell project.

Again, we’re going to be using some custom nix commands that we set up in previous posts of this series, so if you’re jumping in here, you might want to back up a few installments if you find yourself getting lost.

I’m going to be using the janrain snap tutorial as the basis for this installment, but much of this exploration is getting set up to do the tutorial, so there is not much overlap.

Set Up Project Files

First, let’s set up the basic project files we’re going to be using for our exploration.

Basic Snap Setup

Run the following commands to set up a new projectomatic directory and initialize it using cabal init and snap init. For the cabal init call, you can accept all the defaults if you want; most of the files generated with that call will be overwritten by snap init anyway. When asked, you want to choose 2) Executable.

mkdir projectomatic
cd projectomatic
cabal init
snap init

If you encounter a snap: command not found message, then you need to install snap into your user environment. If you already have the package, installing it will just make it active for the user environment so you can call snap init. Otherwise, installing will both build it and make it active.

nix-install haskellPackages.snap

The project directory will be initialized with a projectomatic.cabal file, among others. Feel free to modify the values set by default in the cabal file. I’m going to leave the defaults as is for this tutorial.

Sanity Check: In your projectomatic directory, you should now have projectomatic.cabal, Setup.hs, and the directories log, snaplets, src, and static.

Generate default.nix

Just like we used the cabal2nix utility to automatically generate nix expressions for hackage packages, we can also use cabal2nix to generate expressions for our new projectomatic project.

cabal2nix projectomatic.cabal --sha256 blah > default.nix

Check out the new default.nix and you’ll see a nice, clean expression for the new projectomatic package. We want to change the sha256 = "blah" to src = ./., but other than that, the expression is similar to the ones we saw previously.

We can’t run it just yet, though. This expression is generated so that it can be integrated into the haskellPackages environment of nixpkgs. Since our project will be distinct from haskellPackages, we need to do some additional customization.

The first issue is with the expression inputs:

{ cabal, heist, lens, MonadCatchIOTransformers, mtl, snap, snapCore
, snapLoaderStatic, snapServer, text, time, xmlhtml
}:

This looks just like the previous nix expressions we encountered, so what’s the problem? When we incorporated new nix expressions into the nixpkgs expression tree, expressions for dependent Haskell packages were defined in the same haskellPackages scope. Thus, when Nix needed the input cabal expression in order to put the cabal dependency on the path, it could do it because the cabal expression was in the same scope.

On the other hand, here we are defining a nix expression outside of that scope, so Nix doesn’t know what the cabal dependency means. In fact, if we try to run nix-build for projectomatic, we’ll see this:

[dan@nixos:~/Code/projectomatic]$ nix-build
error: cannot auto-call a function that has an argument without a default value (`cabal')

To fix this problem, we need to specify in our default.nix expression where the dependent expressions are defined. We’ll do this by redefining the expression to take a single haskellPackages input argument that has a default value defined as (import <nixpkgs> {}).haskellPackages, which is the scope we want!

{ haskellPackages ? (import <nixpkgs> {}).haskellPackages }:
let 
  inherit (haskellPackages) cabal heist lens MonadCatchIOTransformers mtl snap snapCore
  snapLoaderStatic snapServer text time xmlhtml
  ;

in cabal.mkDerivation (self: {

The term <nixpkgs> instructs Nix to look for expressions beginning with $NIX_PATH/nixpkgs. Since my NIX_PATH has only the prefix that I set in my .bashrc file, the resolved file path is my desired local directory /home/dan/Code/nixpkgs.

[dan@nixos:~/Code/projectomatic]$ echo $NIX_PATH
/home/dan/Code:nixos-config=/etc/nixos/configuration.nix

Using nix-shell

Now we have all the files we need for a complete (but minimal) snap default project. A lot of our effort was spent tweaking the new default.nix to work with our new project. Surely there must be some way to check that it’s correct.

A Quick Reminder about Expressions

Remember the purpose of nix expressions.

  1. Define all system inputs to a package
  2. Define the build steps for the package

When a nix expression is run, Nix configures something of a private system for building the package. The private system is configured to only have the dependencies of the nix expression. If a required dependency is left out of the expression inputs, the package will fail to build.

nix-shell Command

The nix-shell command acts as a way to inject a CLI into this private system. Within the nix-shell, the user can build expressions and run commands as if they were being run as part of an expression. Let’s try it out.

[dan@nixos:~/Code/projectomatic]$ nix-shell
error: Package ‘projectomatic-0.1’ in ‘/home/dan/Code/projectomatic/default.nix:18’ has an unfree license, refusing to evaluate. You can set
  { nixpkgs.config.allowUnfree = true; }
in configuration.nix to override this. If you use Nix standalone, you can add
  { allowUnfree = true; }
to ~/.nixpkgs/config.nix.

Whoa, what happened here? By default, Nix won’t evaluate expressions that have an unfree license, which, if you accepted all the defaults like I did, is what we have for our project. By following the instructions given in the error, we can allow unfree packages to be built. However, this will allow unfree packages everywhere, and we just want to build this one unfree package.

A way to enable unfree packages that isn’t mentioned in the error message is the NIXPKGS_ALLOW_UNFREE environment variable. We can temporarily enable unfree packages, build the one we want, and then disable the setting again to return to our starting point. I added this to my .bashrc file (along with nix-install and nix-search) to make it easy to drop into an unfree nix-shell.

nix-shell-unfree() {
  FLAGSAVE=$NIXPKGS_ALLOW_UNFREE;
  echo "Opening shell with NIXPKGS_ALLOW_UNFREE=1. Original setting of $FLAGSAVE will restored on exit.";
  export NIXPKGS_ALLOW_UNFREE=1;
  nix-shell;
  echo "Restoring NIXPKGS_ALLOW_UNFREE=$FLAGSAVE ...";
  export NIXPKGS_ALLOW_UNFREE=$FLAGSAVE;
}

Now our nix-shell-unfree command will put us in a nix shell.

[dan@nixos:~/Code/projectomatic]$ nix-shell-unfree
Opening shell with NIXPKGS_ALLOW_UNFREE=1.  Original setting of 0 will be restored on exit

[nix-shell:~/Code/projectomatic]$ exit
exit
Restoring NIXPKGS_ALLOW_UNFREE=0 ...

Try entering a nix-shell-unfree and running nix-build. Did the build work without error? Depending on your dependency versions, maybe it did. For me, I get the following:

Setup: At least the following dependencies are missing:
lens >=3.7.6 && <4.2

Looks like I still have some work to do. At first, I started setting up a new local expression for a different lens version, but after getting it set up, I found that snap requires lens 4.2, even though the cabal file written by snap init requires less than 4.2! I'm chalking this up to a bug, and I manually edited the projectomatic.cabal file to allow lens <= 4.2. After that, nix-build works without issue. Note that I also tried keeping the cabal file unmodified and using jailbreak=true, but that had no effect as far as I could tell.
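
For reference, the edit is just to the lens entry in the build-depends block of projectomatic.cabal. The exact surrounding lines depend on what snap init generated for you, so treat this as a sketch:

-- projectomatic.cabal, in the executable's build-depends:
-- before:
--   lens >= 3.7.6 && < 4.2
-- after:
  , lens >= 3.7.6 && <= 4.2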

Running the Default Application

We have successfully overcome several obstacles in getting the default, (mostly) unmodified snap project template to compile. Now let's run it.

When using nix-build, Nix puts a symlink called result into your project directory that points to the location in the store holding the build results of the expression. We can use result to get easy access to our new binary.
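
Assuming snap init named the executable after the project (so here it would be projectomatic; adjust if yours differs), launching it through the symlink looks roughly like this. The app should then be reachable in a browser on the port it serves (8000 is snap's default).

[dan@nixos:~/Code/projectomatic]$ ./result/bin/projectomatic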

[Screenshot: snap-app]

TADA! We are now ready to actually follow the janrain tutorial. This installment is long enough already, so I'll save exploring the tutorial itself for next time.

Adding Nix Expressions for New Packages

The nixpkgs repository does a good job of including many Haskell packages already. Indeed, it almost seems like the lazy, functional Nix-based NixOS might have a soft spot for Haskell :). Much of Hackage is available with just a few keystrokes, relieving cabal of its package-managing tasks.

However, nixpkgs does not include all of Hackage, and it’s actually not hard to run into missing packages during development. For instance, not long ago I wanted to build a toy snap app as a first project in NixOS. While snap is available, snap-web-routes is not.

We’ll explore adding hackage packages to your local nixpkgs repo (and submitting them upstream) in this installment. In fact, we’re going to add snap-web-routes to nixpkgs.

Preparing git

If you followed my earlier installment in this series, you should have a local clone of the nixpkgs repository. Since we plan to submit a pull request with our changes, we need to take a few steps to prepare git. If you are pretty seasoned with git, you can probably skip this section.

Add and Sync Upstream

We need to add the primary NixOS/nixpkgs github repository to our remotes. This will enable us to pull in updates to the nix expressions easily.

git remote add upstream https://github.com/NixOS/nixpkgs.git
git fetch upstream
git merge upstream/master

The last two lines are what we’ll run whenever we want to pull the latest upstream expressions into our local repository, keeping our custom nix-search and nix-install commands up to date.
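
To double-check the setup, git remote -v lists the configured remotes. With a personal fork as origin (the fork URL below is just a placeholder), the output looks roughly like:

[dan@nixos:~/Code/nixpkgs]$ git remote -v
origin    https://github.com/<your-username>/nixpkgs.git (fetch)
origin    https://github.com/<your-username>/nixpkgs.git (push)
upstream  https://github.com/NixOS/nixpkgs.git (fetch)
upstream  https://github.com/NixOS/nixpkgs.git (push)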

Make a New Branch

Next we need to create a new branch for adding our new expression. This will isolate our updates when we submit a pull request; otherwise, any additional, unrelated updates we make to our local repository and push to origin will get added to the pull request! I learned this the hard way.

Make a new branch and check it out in one step with git checkout -b.

git checkout -b add-haskell-snap-web-routes

Adding Expressions

Nixpkgs Expression Hierarchy

The nix expressions for all the available packages are included in our local nixpkgs repository. Each nix expression is referenced by a path; if the path has no file name, the file name default.nix is assumed. For example, the top level of nixpkgs/ has a default.nix containing the following line:

import ./pkgs/top-level/all-packages.nix

The default.nix expression imports all the expressions listed in pkgs/top-level/all-packages.nix. This expression, in turn, imports expressions from other locations in the nixpkgs repository. The one we are interested in is pkgs/top-level/haskell-packages.nix. Here’s just one excerpt from the haskell-packages.nix file.

snap = callPackage ../development/libraries/haskell/snap/snap.nix {};

So, when we invoke nix-install haskellPackages.snap, Nix calls the expression located at ../development/libraries/haskell/snap/snap.nix. Thus, to add a new package to our repo, we need to

  1. Write a nix expression file and put it in a logical spot in the nixpkgs repository.
  2. Put a line in haskell-packages.nix we can use to call the expression file.

A Nix Expression for a Cabal Package

To add snap-web-routes to the hierarchy, we need to write a nix expression for it. What does a nix expression look like? Here’s the one for snap given in the nix file mentioned above.

{ cabal, aeson, attoparsec, cereal, clientsession, comonad
, configurator, directoryTree, dlist, errors, filepath, hashable
, heist, lens, logict, MonadCatchIOTransformers, mtl, mwcRandom
, pwstoreFast, regexPosix, snapCore, snapServer, stm, syb, text
, time, transformers, unorderedContainers, vector, vectorAlgorithms
, xmlhtml
}:

cabal.mkDerivation (self: {
  pname = "snap";
  version = "0.13.2.7";
  sha256 = "1vw8c48rb1clahm1yw951si9dv9mk0gfldxvk3jd7rvsfzg97s4z";
  isLibrary = true;
  isExecutable = true;
  buildDepends = [
    aeson attoparsec cereal clientsession comonad configurator
    directoryTree dlist errors filepath hashable heist lens logict
    MonadCatchIOTransformers mtl mwcRandom pwstoreFast regexPosix
    snapCore snapServer stm syb text time transformers
    unorderedContainers vector vectorAlgorithms xmlhtml
  ];
  jailbreak = true;
  patchPhase = ''
    sed -i -e 's|lens .*< 4.2|lens|' snap.cabal
  '';
  meta = {
    homepage = "http://snapframework.com/";
    description = "Top-level package for the Snap Web Framework";
    license = self.stdenv.lib.licenses.bsd3;
    platforms = self.ghc.meta.platforms;
  };
})

Whoa, that's a lot to take in. Let's go through the main points.

Dependencies List

The big list at the top declares the dependencies of the nix expression. When Nix runs this expression, it first ensures that all dependencies are available and placed on the path. Anything not in the dependencies list is removed from the path. If a necessary dependency is not specified in this list, Nix will refuse to run the expression.

One thing to note about these dependencies: they look like Haskell package names, but technically they are the names of other nix expressions in the same package set as the snap expression. By convention, the names differ slightly from the Hackage package names: hyphens are dropped and the result is camelCased, so snap-core becomes snapCore and mwc-random becomes mwcRandom.

cabal.mkDerivation

cabal.mkDerivation is a function defined elsewhere in the expression hierarchy, and each item specified within the curly braces is a named input argument to it. The important items are mostly self-explanatory, but a couple could use some elaboration:

  • sha256 - a hash of the source code used to build the package. Nix checks the hash for consistency before building the package (see the sketch after this list for obtaining one by hand).
  • jailbreak - strips out the dependency version bounds from the cabal file before building.
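
In practice we won't compute the sha256 by hand (cabal2nix, coming up next, does it for us), but if you ever need the hash of an arbitrary source tarball, nix-prefetch-url downloads it into the store and prints the hash. The URL below is only a hypothetical placeholder:

nix-prefetch-url https://example.com/some-package-1.0.tar.gz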

Generating a Nix Expression with cabal2nix

Specifying a nix expression like the one for snap from scratch would be a huge PITA. Thankfully, NixOS has a sweet utility called cabal2nix that handles essentially everything for us. Install it with our nix-install helper:

nix-install haskellPackages.cabal2nix

First, make sure your local Hackage package list is up to date.

cabal update

Next, call cabal2nix and specify the hackage package name.

[dan@nixos:~/Code/nixpkgs]$ cabal2nix cabal://snap-web-routes
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0  7161    0     0  12992      0 --:--:-- --:--:-- --:--:-- 12992
path is ‘/nix/store/wyilc14fjal3mbhw0269qsr5r84c5iva-snap-web-routes-0.5.0.0.tar.gz’
{ cabal, heist, mtl, snap, snapCore, text, webRoutes, xmlhtml }:

cabal.mkDerivation (self: {
  pname = "snap-web-routes";
  version = "0.5.0.0";
  sha256 = "1ml0b759k2n9bd2x4akz4dfyk8ywnpgrdlcymng4vhjxbzngnniv";
  buildDepends = [ heist mtl snap snapCore text webRoutes xmlhtml ];
  meta = {
    homepage = "https://github.com/lukerandall/snap-web-routes";
    description = "Type safe URLs for Snap";
    license = self.stdenv.lib.licenses.bsd3;
    platforms = self.ghc.meta.platforms;
  };
})

Boom. We get a complete nix expression without doing any work. Now we just need to put it in the right spot in the hierarchy. The rest of the Hackage packages live in pkgs/development/libraries/haskell, so we'll follow suit.

[dan@nixos:~/Code/nixpkgs]$ mkdir pkgs/development/libraries/haskell/snap-web-routes

[dan@nixos:~/Code/nixpkgs]$ cabal2nix cabal://snap-web-routes > pkgs/development/libraries/haskell/snap-web-routes/default.nix
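
That takes care of step 1 from the list above. For step 2, we also need a line in pkgs/top-level/haskell-packages.nix that calls the new expression, modeled on the snap line we saw earlier. Since our file is named default.nix, pointing callPackage at the directory should be enough; a sketch, following the camelCase naming convention:

snapWebRoutes = callPackage ../development/libraries/haskell/snap-web-routes {};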

Now we can check that our expression actually runs. The nix-install function we added isn't smart enough to handle --dry-run, so we'll use the nix-env command directly.

[dan@nixos:~/Code/nixpkgs]$ nix-env -iA haskellPackages.snapWebRoutes --dry-run
installing `haskell-snap-web-routes-ghc7.6.3-0.5.0.0'
these derivations will be built:
  /nix/store/f42ybqmcmq44nr73l6ap90l5wnm3s4kq-haskell-snap-web-routes-ghc7.6.3-0.5.0.0.drv
these paths will be fetched (21.88 MiB download, 416.87 MiB unpacked):
  /nix/store/0kw75f2qqx19vznsckw41wcp8zplwnl7-haskell-errors-ghc7.6.3-1.4.7
  /nix/store/14rjz7iaa9a1q8mlg00pmdhwwn7ypd4x-haskell-distributive-ghc7.6.3-0.4.4
  /nix/store/1jnipx1swkivg1ci0y7szdljklaj9cx1-haskell-skein-ghc7.6.3-1.0.9
...

So snap-web-routes will be built, and all of its dependencies will be fetched from pre-built binaries. Sweet! If the dry-run looks good, you can run the command again without the --dry-run option or use nix-install like we have previously. I omit that call here.

Note that this will install the package to your user environment. Remember: installing a package means making it active in the user environment. If you don't want to install it, you can just build it with nix-build, which puts it in the store without activating it in your user environment.

nix-build -A haskellPackages.snapWebRoutes

Make the Pull Request

Again, if you're already adept at git and github, you can skip this section. We're going to push our changes up to github and make a pull request. The process is largely automatic! Add the new .nix file to the git repo, commit the changes, and then push the branch to github.

git add pkgs/development/libraries/haskell/snap-web-routes/default.nix
git commit -a -m "Added snapWebRoutes to haskellPackages."
git push origin add-haskell-snap-web-routes

Go to your github repo and you should see a prompt asking if you want to create a pull request from fluffynukeit:add-haskell-snap-web-routes into NixOS:master. Follow the prompts and you're done!


That's it for this installment! New installments to come if/when I encounter new problems to write about!