# The higher-order function ‘fold’

When students encounter higher-order functions in functional programming, they are normally exposed to the following three first: map, filter, and fold. The higher-order functions map and filter are intuitively accessible: with the former you apply a function to each element of a list, while the latter retains only those elements for which a given predicate is true. One might implement those two functions in Haskell as follows:

```
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs
```
```
filter :: (a -> Bool) -> [a] -> [a]
filter _ [] = []
filter p (x:xs)
  | p x       = x : filter p xs
  | otherwise = filter p xs
```

On the other hand, fold is somewhat more confusing. As I’ve found, the treatment of this topic on the Haskell wiki is not overly accessible to novices, while the explanation given in Learn You A Haskell is, like pretty much everything in it, too cute for its own good.

There are two variants of fold, namely foldr and foldl, with ‘r’ and ‘l’ standing for right-associativity and left-associativity, respectively. One possible definition of foldr is:

```
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z []     = z
foldr f z (x:xs) = f x (foldr f z xs)
```

‘z’ is the value that takes the place of the empty list at the end of the fold; in other words, it is the starting value of the accumulation. I sometimes hear people refer to this as the “identity element”, but this is not necessarily correct. ‘z’ can be the identity element. For instance, if you want to multiply all values in a list of integers, you would choose the integer 1 to take the place of ‘z’. However, nothing is keeping you from picking any other integer.
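To see that ‘z’ is simply the starting value of the accumulation, compare folding (*) with 1 against folding it with an arbitrary other value; the helper name below is made up for illustration:

```haskell
-- Hypothetical helper: fold (*) over a list, with z as the
-- starting value of the accumulation.
productFrom :: Integer -> [Integer] -> Integer
productFrom z xs = foldr (*) z xs

-- productFrom 1 [1,2,3] expands to 1 * (2 * (3 * 1)) = 6
-- productFrom 2 [1,2,3] expands to 1 * (2 * (3 * 2)) = 12
```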

To illustrate foldr with an example, let’s evaluate the following function call:

```
foldr (*) 1 [1,2,3]
(*) 1 (foldr (*) 1 [2,3])
(*) 1 ((*) 2 (foldr (*) 1 [3]))
(*) 1 ((*) 2 ((*) 3 (foldr (*) 1 [])))
(*) 1 ((*) 2 ((*) 3 1))
(*) 1 ((*) 2 3)
(*) 1 6
6
```

To make this evaluation more digestible from the fourth line onwards, it could also be written as:

```
1 * (2 * (3 * 1))
1 * (2 * 3)
1 * 6
6
```

This example shows the application of the function to each element in the list. Maybe this reminds you of how lists are constructed using the cons operator, i.e. (1 : (2 : (3 : []))) is represented, after adding syntactic sugar, as [1, 2, 3].

In fact, a list can be constructed using a fold. So, if you were in a silly mood, you could create a function that takes a list, runs it through fold, and returns the same list:

```
listId :: [a] -> [a]
listId xs = foldr (:) [] xs
```

Of course, ‘xs’ can be omitted on both sides of this equation. If you now call this function with the argument [1, 2, 3], you’ll get (1 : (2 : (3 : []))) in return, which is [1, 2, 3].
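In fact, map and filter can themselves be expressed as folds. A sketch, with names of my choosing to avoid clashing with the Prelude:

```haskell
-- map as a fold: apply f to the head, cons it onto the folded tail.
mapViaFold :: (a -> b) -> [a] -> [b]
mapViaFold f = foldr (\x acc -> f x : acc) []

-- filter as a fold: keep the head only if the predicate holds.
filterViaFold :: (a -> Bool) -> [a] -> [a]
filterViaFold p = foldr (\x acc -> if p x then x : acc else acc) []
```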

Let’s now look at foldl, which is fold with left-associativity. One possible definition is as follows.

```
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl _ z []     = z
foldl f z (x:xs) = foldl f (f z x) xs
```

Evaluating the same expression as above, this time with foldl, results in:

```
foldl (*) 1 [1,2,3]
foldl (*) ((*) 1 1) [2,3]
foldl (*) ((*) ((*) 1 1) 2) [3]
foldl (*) ((*) ((*) ((*) 1 1) 2) 3) []
((*) ((*) ((*) 1 1) 2) 3)
((*) ((*) 1 2) 3)
((*) 2 3)
6
```

You can probably see why defining a ‘listId’ function with foldl results in a type error.
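In short, foldl passes the accumulator as the first argument of the operator, while (:) expects an element first, so “foldl (:) [] xs” does not type-check. Flipping the argument order makes it compile, but then the fold builds the list back to front; the function name below is again made up:

```haskell
-- With the arguments of (:) flipped, foldl reverses the list:
-- the first element ends up innermost, i.e. last.
reverseViaFold :: [a] -> [a]
reverseViaFold = foldl (flip (:)) []
```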

Also note that foldl and foldr only give the same result if the binary operator f is associative. Just consider what happens when you’re using an operator that isn’t, for instance:

```
> foldr (/) 1 [1,2,3]
1.5
> foldl (/) 1 [1,2,3]
0.16666666666666666
```

The first function call evaluates to (1 / (2 / (3 / 1))), and the second to (((1 / 1) / 2) / 3).
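Subtraction, which is also non-associative, gives another quick check; the names below are mine:

```haskell
-- foldr nests to the right, foldl to the left:
subR, subL :: Integer
subR = foldr (-) 0 [1,2,3]  -- 1 - (2 - (3 - 0)) = 2
subL = foldl (-) 0 [1,2,3]  -- ((0 - 1) - 2) - 3 = -6
```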

# Finding All Eulerian Paths

I just went through the material of the first week of Combinatorial Mathematics, which is offered by Tsinghua University through EdX. My initial impression is that this is a mildly engaging course, marred by subpar presentation. Thus, I have a hard time recommending it in its current iteration.

One of the assignments in the first week reminded me of a program I posted some time ago. In Finding an Eulerian Path I present a somewhat verbose program for finding an Eulerian path in a graph of which it is known that it contains an Eulerian path. On the other hand, the assignment in that EdX course asks for the total number of Eulerian paths of this undirected multigraph:

That’s not a particularly challenging question. It was more fun for me to write a neat Haskell program to solve this task. However, instead of just counting all Eulerian paths, my program finds all of them. The code, which is given below, is quite clean and should be easy to follow. Let me describe the algorithm I’ve implemented in plain English, though.

For all nodes in the graph, the program finds all Eulerian paths starting from that node. The relevant part of the program at this step is the function call “findPath' [("", node, g)] []”. When you set out to find all Eulerian paths, the string indicating the current path is empty. As the graph is traversed, that string grows. Two cases are possible. First, we could have reached a dead end, meaning that there are untraversed edges that can’t be reached from the current node, due to the path that was chosen. Second, if there are reachable nodes left, then the traversal continues by using any of the unused edges.

This can be modelled in a rather intuitive functional style with a list containing elements of the structure “(Path, Node, [Edge])”. Start by taking the head of that list, and figure out whether there are any edges left to take from that position. If so, discard the head, and append all possible continuations to the list. In the given example, starting at node 1, the current path is the empty string, and the list of edges contains the entire graph. Subsequently, three elements are added to the list, with the respective paths “a”, “c” and “f”, each paired with a list of remaining edges from which the edge that was just chosen has been removed. This goes on, recursively, until the entire graph has been processed. Of course, if there are no edges left to traverse, we’ve found an Eulerian path.

With this description, the program below should be straightforward to follow. To run it, with a graph modelled after the image above, load the code into GHCi, and execute it by typing “findPaths graph nodes”.

```
import Data.List

type Node      = Int
type Edge      = (Char, (Int, Int))
type Graph     = [Edge]
type Path      = String
type Candidate = (Path, Node, [Edge])

graph :: Graph
graph = [('a', (1,2)),
         ('b', (2,3)),
         ('c', (1,3)),
         ('d', (3,4)),
         ('e', (3,4)),
         ('f', (1,4))]

nodes :: [Node]
nodes = [1..4]

findPaths :: Graph -> [Node] -> [Path]
findPaths g ns = findPaths' g ns []

findPaths' :: Graph -> [Node] -> [Path] -> [Path]
findPaths' _ []     acc = acc
findPaths' g (n:ns) acc = findPaths' g ns acc'
  where acc' = findPath g n ++ acc

findPath :: Graph -> Node -> [Path]
findPath g node = findPath' [("", node, g)] []

findPath' :: [Candidate] -> [Path] -> [Path]
findPath' []                    acc = acc
findPath' ((path, _, []):xs)    acc = findPath' xs (path:acc)
findPath' ((path, node, es):xs) acc = findPath' (xs' ++ xs) acc
  where nextEdges = filter (\(_, (a, b)) -> a == node || b == node) es
        xs'       = nextPaths (path, node, es) nextEdges []

nextPaths :: Candidate -> [Edge] -> [Candidate] -> [Candidate]
nextPaths _                []     acc = acc
nextPaths (path, node, es) (x:xs) acc = nextPaths (path, node, es) xs acc'
  where acc'            = (path', node', delete x es) : acc
        path'           = path ++ [label]
        node'           = if node == a then b else a
        (label, (a, b)) = x
```

# Review: Principles of Computing — Coursera

I’ve been eagerly awaiting the follow-up course to Rice University’s An Introduction to Interactive Programming in Python. While that course offered a playful introduction to computer science and programming through building a number of successively more complicated games, Principles of Computing attempted to improve your programming skills by embedding discrete mathematics problems and basic algorithms into, again, assignments that consisted of writing a number of games. Unlike the predecessor course, you didn’t have to bother with GUI code, but could instead fully focus on the underlying game logic.

On the mathematical side, Principles of Computing covered arithmetic sums, basic functions in order to describe growth rates, basic probability and a bit of combinatorics. Some assignments focused on algorithms, including standard topics like searching and sorting. Breadth-first search was covered in a rather amusing assignment where you had to write logic for ‘humans’ that would make them avoid slowly moving ‘zombies’. Further, minimax was covered in a disappointingly short assignment that was merely a variation of a previous one, an AI for Tic-Tac-Toe. The assignments themselves were the highlight of the course, though, covering the logic behind games such as 2048, Cookie Clicker, Yahtzee, the just mentioned Tic-Tac-Toe (Monte Carlo and tree search), and the 15-puzzle. Overall, they helped solidify the concepts.

While I thought that An Introduction to Interactive Programming in Python, which I took when it was offered the first time, was very polished, the same unfortunately can’t be said for Principles of Computing. Let me preface my subsequent criticism by stating that Principles of Computing is an excellent course, easily comparable to the kind of course you would take at a brick-and-mortar university. While it isn’t as polished as its predecessor, it is nonetheless, in my opinion, one of the best-designed MOOCs in computer science you can take.

The video presentations are engaging. Production values seem to have increased over An Introduction to Interactive Programming in Python, which had videos that were seemingly captured by a webcam. Overall, the course material is presented very well. Some may find that the videos are moving a bit too slowly at times, but I prefer that approach over lecturers who rush through the material or even omit key concepts. Further, there was a lot of support available, such as in the form of relatively extensive notes on key topics. The discussion forums were not overly active, but it seemed that basically any question got answered quickly, either by other students or by the instructors themselves.

What stood out negatively, though, was a certain sloppiness in the presentation. In particular, the descriptions of the assignments were often lacking. There were several instances where parts of the assignment texts had to be revised by the instructors, after students posted that they were ambiguous or left out important information. One assignment contained several typos. In others, important information was missing, or given in a rather misleading way. This was arguably one of the reasons for the tremendous rate of attrition. To quantify this impression: the first two assignments led to 19 pages of discussions, while the last few only had five or six pages.

The teaching language of this course was Python. Unfortunately, almost all starting templates prescribed OOP, and due to the way your programs were tested, you couldn’t deviate from that paradigm. In many cases a functional solution would have been much more straightforward, but that’s probably true of OOP vs. FP in general. While I think that Python is a fine language for budding programmers, I don’t think it’s the best choice for writing more complex programs. For Principles of Computing, Python worked well, apart from the very last assignment, a solver for the 15-puzzle, which ran to several hundred lines.

Due to Python’s lack of a static type system, that task felt like building a house of cards. You’re forced to write assertions and tests for basically everything. In a statically typed language you could have done with a fraction of the tests, and even with a rudimentary type system like Java’s you would have been better off, with the downside that its lack of tuple assignment would have blown up the code by a factor of two or so. I’m curious to find out how long an implementation in Haskell would be. You could probably do it in half as many lines, and with just a few property-based tests instead of a myriad of unit tests.

One last problem, which was voiced repeatedly on the forum, was that in the assignment description students were repeatedly told that, “once you’re confident that your methods are correct, use [the online unit test suite for this course], to confirm that they are correct.” Unfortunately, the unit tests provided did not cover all corner cases, so you certainly couldn’t confirm that your code is correct. If you were unlucky, you discovered that the very first method you’ve written, a check of a basic invariant, was actually incorrect, as you were working on the very last method, which called the various solvers depending on the state of the board. Some students made the spurious argument that those missing unit tests were “probably intended to make us write our own tests”. I didn’t find this very convincing. If that had really been the case, then state that the provided unit tests are lacking, but don’t tell students that they can use the provided unit tests to confirm the correctness (!) of their code, when that’s simply not possible.

Principles of Computing offers two kinds of statements: a regular one, and one “with distinction”. For the latter, you would have to average 90% over the entire course, and get 70% on the last assignment. As indicated by the description above, that assignment, the solver for the 15-puzzle, was rather complex, and a worthy criterion for a statement of achievement with distinction. Please note that your program is expected to solve this problem in full generality, meaning that it should be able to process any m * n puzzle. It is indeed a lot more involved than it might look.

Apart from a few minor problems, I found Principles of Computing to be tremendously enjoyable, and I warmly recommend it. The few issues I pointed out will most likely be eradicated in the second iteration, assuming that the same assignments will be used.

# Three plus One Programming Languages to Learn

Introduction

There are a lot of posts advising people to study this or that language. A relatively recent article on this topic I enjoyed was Michael O. Church’s Six languages to master. He picks Python, C, ML, Clojure, and Scala — and English. I’d like to offer a slight alternative. I fully agree on Python and C, but I differ in my recommendation for a functional language.

Based on my experience, I recommend the following mix, if you want to make the most of your time: Python, C, Haskell, and one language that is widely used in the industry, to get your foot in the door. Your learning shouldn’t stop at that point, but thankfully one of the benefits of being versed in multiple paradigms is that you will find it easy to pick up a new language. Indeed, the benefit of Python, C, and Haskell is that they expose you to vastly different programming paradigms. Further, for all three languages there are well-developed tools available, and they are surrounded by active communities.

Many CS curricula start out with an introductory or remedial programming course in Java, and more recently there has been a push towards Python at some schools. Java is hopelessly bloated, even though there are attempts to modernise the language, for instance with the recent introduction of anonymous functions in Java 8. Still, the language is way too verbose, and virtually impossible to program in without an IDE. Further, it forces upon its users object-orientation — in my biased view a detour in the evolution of programming languages if not a cul-de-sac —, and does so in a rather awkward manner. There is too much work involved. While a good IDE can make writing Java code less tedious, the resulting code is nonetheless too cumbersome.

On the other hand, Python is a relatively elegant language that allows you to grow as a programmer. I would not recommend it for any serious work, due to its missing type system, but it’s fairly easy to get started with. For a beginner it’s arguably more motivating to be able to execute a half-broken program than to be told over and over by the compiler that her types don’t match. At first, you can write short sequential programs. Afterwards, you can learn about object-orientation if you want to, and you’ll even find typical functional programming constructs in Python, such as higher-order functions and list comprehensions.

Even better, a simple code editor will perfectly suffice to get work done. It’s even possible to achieve quite a bit in an interactive environment in a browser. I’ve completed a number of MOOCs that used Python as a teaching language, and it turned out that it is feasible to write programs of a few hundred lines in an environment like CodeSkulptor. As a budding programmer you could do a lot worse for a first language.

Personally, I hope that Pyret, developed by the PLT group at Brown University, will gain some traction in computer science education, since it is a better design than Python overall, and even provides a path towards handling types with confidence.

A Systems Language: C

James Iry describes C as “a powerful gun that shoots both forward and backward simultaneously”. Indeed, C is the kind of language that should be written by machines, not humans. There are domain-specific languages that do exactly that by only allowing a subset of all possible C programs to be written, but for those certain guarantees can be made.

When I first worked with C I was quite taken aback by the lack of support from the compiler. There were plenty of instances where I wondered why there was no built-in check, for instance when modifying variables that were declared but not initialised. I can’t think of a case where anybody would want to do that. The common counter-argument is that C is intended for programmers who know what they are doing. The sad reality, though, is that the arrogance of C programmers, which you often encounter, is entirely misplaced. Sure, if you pay really close attention, your code will be error-free. However, it is very easy to make mistakes in C, and people do make a lot of mistakes. This is the root of null-pointer dereferences, segmentation faults, buffer overflows, and memory leaks. I don’t want to know what working in C was like before the advent of Valgrind.

Given this rather negative impression, why should you bother to learn C at all? For one, it will teach you about the relation between the code you write and the machine it is executed on, since your code will map rather closely to the underlying hardware. You can directly access memory locations, for instance, which is beneficial when programming embedded systems. Furthermore, there are virtues in the language itself, like its simplicity. The entire language is described in a short and well-written book that has been around for almost four decades now.

You might never have to write C code in your professional career. However, there are plenty of instances when it will come in handy, even if your application is written in a higher-level language. Quite a few languages allow you to call C through so-called foreign-function interfaces, and maybe implementing that one tight loop in C will give your program a competitive edge.

I’m not very familiar with development regarding languages on the systems level, but Mozilla’s Rust programming language might be a strong contender to replace C in many domains. That language is not yet production-ready, but it seems to be an enormous improvement over C. On the other hand, given how ubiquitous software written in C is, one might reasonably cast into doubt whether C will ever be replaced. It’s more likely that it will be relegated to an ever-shrinking niche, in addition to never-ending maintenance work on existing systems written in C, which won’t be replaced.

A Functional Language: Haskell

There are many functional programming languages you could choose to learn. I’m admittedly biased, because I study in a department that has played a fundamental role in the development of Haskell. Consequently, I was able to do some university coursework in that language, instead of Java or C++, as is common in so many other computer science departments. The first functional language I learnt was the Lisp dialect Scheme, though, and I’m familiar with a few others.

Haskell has a very mature ecosystem, a fast compiler, and powerful libraries. Further, there are excellent textbooks available, ranging from the beginner to the advanced professional level. Those are all ancillary factors, though; more importantly, you will have much less reason to snub your nose at the beautiful syntax of Haskell, its expressiveness, and the sheer elegance of the code that is achievable with some practice. But don’t worry: if you really want to, you can write some downright horrendous code in it, too. The static type system might infuriate you at first, but once you’ve gotten used to it, you may sorely miss it when working in a language without type inference.

Quite a few students who are exposed to Haskell at university avoid this language like the plague afterwards, but for others it becomes their go-to language. When I’m working on some code for myself, or as an exercise, I often write the first version in Haskell, due to the fact that it is so straightforward to map the program logic you want to express to actual code. This helps you to check your understanding of the problem, and solve conceptual issues.

In CS there is the cliché that “X will make you a better programmer”. No matter what it is — arcane theory, technologies, languages — everything supposedly has the potential to turn you into a better programmer. I have my doubts about this claim in general, given that it is not at all uncommon that computer science students can barely program, thanks to mandatory group work and merely optional programming courses. Still, if you take Haskell seriously, it will expose you to a different model of programming. I’ve given some examples previously. For instance, in one post I highlighted the elegant syntax of Haskell, in another I described a very concise implementation of the game 2048. I hope this will make you at the very least curious about this language.

Another potential future benefit, which is the big hope of people wanting to see greater real world use of functional languages, is the fact that it is much easier to parallelise purely functional code. In some domains this is eminently useful. Lastly, unlike imperative languages, functional languages are all rather similar. If you know one, you can very easily pick up another one. The switch from Haskell to any other functional programming language will take much less time than switching from one imperative language to another, maybe with the exception of C# and Java.

But why not ML, Scala, or a Lisp?

Haskell is simply more advanced than Standard ML or OCaml, the only ML dialects that are widely used. Further, their respective communities are smaller. The IRC channel #haskell is generally very busy, while #ocaml sometimes feels like a ghost town. This might be an important aspect when learning a language. On the other hand, working with mutable state in OCaml is more straightforward. Therefore, it might be easier to get used to OCaml if you’ve never done any functional programming.

Scala is an immensely bloated language. My instinctive reaction to Scala was that something that ugly can’t have a clean implementation, and consequently I was not overly surprised when Paul Phillips, the main compiler writer on the Scala team, called it quits, and went on what seems like a retribution tour, spilling the beans on the nastiness hidden in the Scala compiler. It’s quite fascinating to watch his presentations. His talk We’re Doing it All Wrong will probably give you some good reasons to stay clear of Scala. I’ve learnt that the reality is even more unpalatable than I thought, given that the Scala team went out of their way to obfuscate (!) some of the complexities and poor design decisions.

Lisp is a nice language to explore, but maybe not as a first functional language. The syntax is somewhat awkward, and the missing type inference can be frustrating. The neat thing about it is that when you’re writing Lisp code, you’re writing a parse tree, which leads to a possibly surprising amount of flexibility. I’d recommend Gregor Kiczales’ Coursera MOOC Introduction to Systematic Program Design, which uses the Lisp dialect Racket. In some corners, the Lisp dialect Clojure is all the rage. If you have to target the JVM, then using that language is arguably quite tempting. Personally, I’d rather use Clojure than Scala.

A Practical Language that Pays the Bills

You now have some experience in Python, C, and Haskell, but will this get you a job? While there are jobs in which each of these languages is used, they will hardly maximise your chances of finding gainful employment. Python is a good language to know, but larger Python programs are rather fragile, due to the poor type system. C is common in embedded programming. This seems to be a good field to work in, but it’s a bit of a niche, and it might require a much more specialised education, like a specialised MSc degree instead of a more open-ended degree in computer science or software engineering.

Compared to Haskell, embedded programming is a huge niche. A very small number of companies use Haskell, and normally only in small teams. Credit Suisse was a name that used to be mentioned all the time in functional programming circles when there was a need to prove the real-world relevance of Haskell, but, according to online sources, that team has been disbanded. Standard Chartered employs some of the most well-known Haskell programmers, and there are some lesser-known companies using functional languages, but it’s not a lot, and many are highly specialised. A PhD almost seems like a prerequisite.

A more realistic option is therefore picking up one language and a relevant framework. With a solid grounding in languages from various paradigms, learning a new language is quite easy, as I wrote before. I’d recommend looking into your local job market, or the job market in cities you would like to work in, and tailor your education accordingly.

My impression is that Ruby on Rails is very popular, and there are countless jobs available. If you’ve been exposed to Java at university, then you can hope that your employer lets you use the more advanced features that were introduced in Java 8, which make the language more palatable. C# is often touted as a more pleasant alternative to Java, so that may be a direction to go in. Good knowledge of Java can also be leveraged for Android development. Speaking of mobile applications, iOS development is still an exciting field. I had a look at Objective-C and wasn’t too pleased, but Apple is now fully committed to Swift, which might be an attractive alternative.

Knowing those three plus one languages, you’ll be far ahead of the typical graduate. Honestly, though, you’ll be way ahead of the curve if you can code up FizzBuzz, but that’s a topic for another day.

# Writing an Interpreter with BNFC and Haskell

Note: this article, and others following up on it, are primarily intended as notes for myself. Of course it’s great if you’re interested in compilers and interpreters, and find those articles useful. I’m glossing over a few steps, so if something isn’t quite clear, then please refer to Aarne Ranta’s book Implementing Programming Languages.

If you want to actively follow along, make sure the following components are installed on your system: bnfc, tex-live, haskell-platform.

Introduction

We live in exciting times. While writing a compiler used to be very time-consuming, modern tools have reached such a high level of sophistication that implementing a useful language is now within reach for anybody who is interested, because a lot of the possibly tedious work can be automated.

Compilation consists of the phases lexing, parsing, type checking, and ‘executing’. The last step entails either interpreting an abstract syntax tree, or generating code for a target language, which could be assembly, byte code, or another high-level language.

I’m tempted to say that the interesting work when writing a compiler consists of writing the type checker as well as the interpreter or code generator. The front-end part, writing a parser and a lexer, can be greatly automated. With BNFC you only have to write a grammar for your language, which BNFC will then use to generate the components of the compiler front-end.

If you want to write a ‘dynamic’ language, you can even omit writing a type checker. This major defect doesn’t seem to have harmed the adoption of Python. Types are pretty much a must-have, though, and I might briefly cover this issue in further articles. If you generate code that targets a virtual machine, such as the JVM or LLVM, you won’t be restricted to just a single processor architecture. This will also make your task a lot easier.

In this article, I intend to give an overview of working with BNFC to write an interpreter. While the example is simple, it is nonetheless realistic, and can be extended to implement ‘real’ programming languages as well.

Implementing an Interpreter

The following presentation is based on the tutorial on the BNFC website. BNFC supports C, C++, C#, Haskell, Java, and OCaml. I’m focussing on Haskell, due to its expressive power, and because I’m more familiar with it than with OCaml. I do not recommend any of the languages in the C family for this task.

The running example in this article is writing a simple calculator. While my example is based on the BNFC tutorial, I made one addition that shows how to handle a slightly more complex case. Therefore, the calculator will handle the following operations on integers: addition, subtraction, multiplication, division, and computing the factorial.

The Grammar

The grammar is straightforward to write, with the most important aspect being the order of precedence of the operations:

```
EAdd.  Exp  ::= Exp  "+" Exp1 ;
ESub.  Exp  ::= Exp  "-" Exp1 ;
EMul.  Exp1 ::= Exp1 "*" Exp2 ;
EDiv.  Exp1 ::= Exp1 "/" Exp2 ;
EFact. Exp2 ::= Exp3 "!" ;

EInt.  Exp3 ::= Integer ;

coercions Exp 3 ;
```

Put it in a file named Calculator.cf.

The syntax is explained on the BNFC website, but it should look very familiar if you’ve been exposed to Backus-Naur Form in one of your introductory computer science classes. On a related note, this is no longer a given. In a university course on functional programming that was mostly taken by MSc students, when discussing an example based on a grammar, our professor asked the audience to raise their hand if they had heard of Backus-Naur Form, and fewer than 10% had. The follow-up question, asking how many were familiar with UML (everybody was), led him to conclude that computer science education is in a sad state.

Anyway, this is the grammar for our calculator. Now it has to be processed by BNFC, with the following commands:

```
> bnfc -m -haskell Calculator.cf
> make
```

Testing the Front-end

At this point, it makes sense to test the front end on some sample files, and manually check the generated abstract syntax tree. It is straightforward to do this: all you have to do is run ‘TestCalculator’ on an input file. Let’s say your test file is called ‘ex1.calc’, and contains this line:

```
2 + 3 + 4
```

Execute this command on the command line:

```
./TestCalculator ex1.calc
```

You should get this output:

```
ex1.calc

Parse Successful!

[Abstract Syntax]

EAdd (EAdd (EInt 2) (EInt 3)) (EInt 4)

[Linearized tree]

2 + 3 + 4
```

It’s a good idea to test your grammar on some more elaborate expressions, though, such as ‘(((3 * 6!) / 216) - 1)!’:

```
ex6.calc

Parse Successful!

[Abstract Syntax]

EFact (ESub (EDiv (EMul (EInt 3) (EFact (EInt 6))) (EInt 216)) (EInt 1))

[Linearized tree]

(3 * 6 ! / 216 - 1)!
```

I find it helpful to draw a few of those ASTs on paper, to convince myself that they really conform to what I intended the grammar to express.

The Interpreter

Now it’s time for the meat of the operation: writing the interpreter. BNFC helps you out here by providing a skeleton. Look for the file ‘SkelCalculator.hs’ in your current working directory. This is its content:

```
module SkelCalculator where

-- Haskell module generated by the BNF converter

import AbsCalculator
import ErrM

type Result = Err String

failure :: Show a => a -> Result
failure x = Bad $ "Undefined case: " ++ show x

transExp :: Exp -> Result
transExp x = case x of
  EAdd exp1 exp2 -> failure x
  ESub exp1 exp2 -> failure x
  EMul exp1 exp2 -> failure x
  EDiv exp1 exp2 -> failure x
  EFact exp      -> failure x
  EInt n         -> failure x
```

Now rename this file to Interpreter.hs, and add the required rules for processing the AST:

```
module Interpreter where

import AbsCalculator

interpret :: Exp -> Integer
interpret x = case x of
  EAdd  exp1 exp2 -> interpret exp1 + interpret exp2
  ESub  exp1 exp2 -> interpret exp1 - interpret exp2
  EMul  exp1 exp2 -> interpret exp1 * interpret exp2
  EDiv  exp1 exp2 -> interpret exp1 `div` interpret exp2
  EFact exp       -> if exp == EInt 0
                       then 1
                       else let eval = interpret exp
                            in  eval * interpret (EFact (EInt (eval - 1)))
  EInt  n         -> n
```

As a last step, create the file Calculator.hs:

```
module Main where

import LexCalculator
import ParCalculator
import AbsCalculator
import Interpreter

import ErrM

main = do
  interact calc
  putStrLn ""

calc s =
  let Ok e = pExp (myLexer s)
  in  show (interpret e)
```

For this toy example, separating this program into two files might seem superfluous, but with a more complex interpreter, and some additional operations, like passing flags to the interpreter, it leads to a much cleaner design.

Of course, make sure that those files type-check in GHCi before attempting to compile them. Also, it makes sense to have some linearized trees ready to test the interpreter with while developing it.

Once you are convinced that everything is alright, proceed to the final step, compiling the interpreter:

```
ghc --make Calculator.hs
```

You can then execute the interpreter on the command line, and run it again on your test files.

```
> ./Calculator < ex1.calc
9
> ./Calculator < ex6.calc
362880
```

This concludes the work on an interpreter that can handle simple operations on integers.