A (JavaScript) Pilgrimage to Lambda Calculus

Todd Brown
14 min read · Dec 8, 2018

Many cultures have a creation myth. As a primitive story that explains how the world came into existence, the creation myth provides the basis of a culture's belief system. As a functional programmer, my creation myth is the lambda calculus. This article is the start of my pilgrimage: an adventure into an implementation of the most primitive of the lambda calculi, the untyped lambda calculus.

The lambda calculi provide a basis for the formal study of logic, mathematics and computation. The untyped lambda calculus, published in the 1930s, evolved to add features such as constants, types and partial functions, eventually becoming tied to computer science. The lambda calculi influenced languages such as LISP, Scheme, ML and Haskell (1). Robert Harper argues that the most important contribution of the lambda calculus is the "notion of function as primary… nothing more is needed to obtain a… programming language". Specifically:

Nearly all programming languages provide some form of function definition mechanism of the kind illustrated here. The main point of the present account is to demonstrate that a more natural, and more powerful, approach is to separate the generic concept of a definition from the specific concept of a function. Function types codify the general notion in a systematic manner that encompasses function definitions as a special case, and moreover, admits passing functions as arguments and returning them as results without special provision. The essential contribution of Church's λ-calculus (Church, 1941) was to take the notion of function as primary, and to demonstrate that nothing more is needed to obtain a fully expressive programming language. (2)

While impractical for modern software engineering needs, this exercise demonstrates proficiency in the underlying principles as well as the techniques necessary to implement them in JavaScript. In short, I did this for fun. The grammar is simple, containing only three structures (4):

t ::=
    x        variable
    λx.t     abstraction
    t t      application

The operational semantics has three simple rules (4):

1)        t1 → t1'
   —————————————————
    t1 t2 → t1' t2

2)        t2 → t2'
   —————————————————
    v1 t2 → v1 t2'

3) (λx. t3) v2 → [x ↦ v2]t3

The first rule states that the left-hand term is evaluated first, and can be read, "given that t1 reduces to t1', the application t1 t2 reduces to t1' t2". The second rule is the complement: once the left-hand side has been reduced to a value, proceed by reducing the right-hand side. The final rule is the rule of application: when a lambda abstraction parameterized by x with body t3 is applied to the value v2, it reduces to the term t3 with all instances of x replaced (substituted) with the value v2.

No Constants: Numbers versus Numerals

One of the key concepts that makes the lambda calculus able to compute results is the concept of Church encodings. I had always conflated numbers and numerals, but they are really different things:

A number is an abstract concept while a numeral is a symbol used to express that number. “Three,” “3” and “III” are all symbols used to express the same number (or the concept of “threeness”). One could say that the difference between a number and its numerals is like the difference between a person and her name. (7)

Once you consider the topic, it is easy to reflect back on Roman numerals or Chinese numerals expressing the same concept. But there are other ways to express the concept of "threeness", such as the process of adding three apples to a bag, perhaps expressed in function form as Add(Add(Add(EmptyBag, Apple), Apple), Apple). Which isn't too far removed from Church encodings. Since the lambda calculus only has the concepts of variables, abstractions and applications, Church proposed an alternative to Arabic numerals for expressing numbers:

Zero: λs.λx.x
One: λs.λx.s x
Two: λs.λx.s (s x)
Three: λs.λx.s (s (s x))
.
.
.
Ten: λs.λx.s (s (s (s (s (s (s (s (s (s x)))))))))

You can consider the variable x to be something that represents the number zero and the variable s to be a successor function. Therefore the number one is represented by the "successor" to "zero", and two by the "successor" to the "successor" to "zero", etc.

From there we can go on to define the successor function, addition (repeated successor) and the remaining functions of arithmetic (8).
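Church encodings translate directly into JavaScript's own lambdas. A minimal sketch (the names zero, succ, plus and toNumber are mine, not part of the interpreter described in this article):

```javascript
// A Church numeral takes a successor function s and a zero value x,
// and applies s to x n times.
const zero = s => x => x
const succ = n => s => x => s(n(s)(x))
const plus = m => n => s => x => m(s)(n(s)(x))

// Interpret a Church numeral as a native number by choosing
// "add one" as the successor and 0 as the zero value.
const toNumber = n => n(k => k + 1)(0)

const one = succ(zero)
const three = succ(succ(one))
```

Here toNumber(three) yields 3, and toNumber(plus(three)(one)) yields 4: addition really is repeated succession.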

The Abstract Syntax Tree (AST)

We code in the syntax of the language, but compilers typically parse that syntax into an abstract syntax tree in order to transform it based on the semantics of the language. As expressed in the grammar, the language is simple and at its base only requires three types. In all of the types we collect an info object which refers back to where in the syntax the node was defined. It's handy for debugging purposes but doesn't add any value outside that capacity.

The first is the Variable type (or Value type, it’s ambiguous as to what the right name for this type is). The main bit that we want to track is the variableName.

const Variable = function(info, variableName, deBruijnIndex, ctx)
{
    const self = getInstance(this, Variable)
    self['project'] = /* defined later */
    self['setDeBruijnIndex'] = (ctx) =>
    {
        /* defined later */
    }
    return Object.freeze(self)
}

The second type is the Abstraction, whose key components variableName and term represent the left (x) and right (t) parts of a term (λx.t). The Boolean isParens represents whether a particular abstraction is surrounded by parentheses.

const Abstraction = function(info, variableName, term, isParens, ctx)
{
    const self = getInstance(this, Abstraction)
    self['project'] = /* defined later */
    self['setDeBruijnIndex'] = (ctx) =>
    {
        /* defined later */
    }
    return Object.freeze(self)
}

The final type is the Application type, which has two important components: the left (lhs) and right (rhs) sides of the application.

const Application = function(info, lhs, rhs, isParens, ctx)
{
    const self = getInstance(this, Application)
    self['project'] = /* defined later */
    self['setDeBruijnIndex'] = (ctx) =>
    {
        /* defined later */
    }
    return Object.freeze(self)
}

These types leverage a constructor trick hidden in getInstance, which allows for construction without the use of the new operator, just by calling the type name (witness the lack of the new keyword in the grammar's actions). getInstance takes an object and a type constructor:

const getInstance = (self, constructor) =>
    (self instanceof constructor) ?
        self :
        Object.create(constructor.prototype) ;

If the passed-in object (self) is an instance of the constructor being passed in, we return it. If the type is not yet instantiated, this (at the invocation) and self (in getInstance) refer back to the global object and are therefore not an instance of the passed-in constructor, so getInstance instantiates it. The reason one would want to avoid the new keyword, and typical object construction in general, is that (functional) composition between the constructor and some function that takes a value of that constructor's type is not really possible with new.
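To make the motivation concrete, here is a small sketch of the payoff (Wrapper, compose and describe are illustrative names, not part of the interpreter): because a getInstance-style constructor is an ordinary function, it can be piped straight into other functions.

```javascript
const getInstance = (self, constructor) =>
    (self instanceof constructor) ? self : Object.create(constructor.prototype)

// A constructor that works identically with or without `new`
const Wrapper = function (value) {
    const self = getInstance(this, Wrapper)
    self.value = value
    return self
}

// Point-free composition straight through the constructor
const compose = (f, g) => x => f(g(x))
const describe = w => `Wrapper(${w.value})`
const makeAndDescribe = compose(describe, Wrapper)
```

makeAndDescribe(42) produces the string "Wrapper(42)" without new ever appearing.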

The Grammar

As I started down the path of coding this in JavaScript, the first tool was PEG.js (5), a library for generating recursive descent parsers from a grammar. By providing a simple grammar (main components below), it will parse text into an abstract syntax tree over which we can then systematically apply the operational semantics:

TERM
    = ABSTRACTION
    / APPLICATION

TERM_OR_VARIABLE
    = L_PARENS x:TERM R_PARENS
        { return setParens(x) }
    / VARIABLE

VARIABLE
    /************************************************************
        variable ::= x
    ************************************************************/
    = x:CHARACTER
        { return Variable(location(), x) }

APPLICATION
    /************************************************************
        Term | Variable -> END
    ************************************************************/
    = x:TERM_OR_VARIABLE END
        { return x }
    /************************************************************
        Term | Variable -> ?
    ************************************************************/
    / x:TERM_OR_VARIABLE !TERM_OR_VARIABLE
        { return x }
    /************************************************************
        Term | Variable -> Term | Variable -> … -> Term | Variable
    ************************************************************/
    / terms:TERM_OR_VARIABLE*
        { return reduceTerms(location(), terms) }

ABSTRACTION
    /************************************************************
        'λ' -> Variable -> '.' -> Term
    ************************************************************/
    = LAMBDA x:CHARACTER DOT term:TERM
        { return Abstraction(location(), x, term, false) }

The Left Recursion problem

One of the issues I encountered was defining the application of two terms. This forms a left recursion in the grammar, which can be removed by a simple trick of splitting the rule into two parts (6). Consider this grammar, where the type A is either "A followed by alpha" or "beta". The A in the leftmost position forms the left recursion.

A → A α | β

The trick states that A becomes "beta followed by A prime", and A prime is either the end (epsilon) or "alpha followed by A prime":

A  → β A'
A' → ɛ | α A'

I implemented this in the PEG grammar as follows:

APP
    = lhs:TERM_OR_VARIABLE rhs:APP_
        { return Application(location(), lhs, rhs, false) }

APP_
    = lhs:TERM_OR_VARIABLE rhs:APP_
        { return Application(location(), lhs, rhs, false) }
    / lhs:TERM_OR_VARIABLE END
        { return lhs }

END
    = !.

Unfortunately the resulting tree is right associative, while the lambda calculus requires left associativity. So the resolution is in the full grammar (The Grammar), where application matches the following patterns:

  1. A single term followed by the end of input
  2. A single term not followed by another term
  3. A list of terms

The last match returns an array which needs to be reduced into a left associative tree:

terms:TERM_OR_VARIABLE*
    { return reduceTerms(location(), terms) }

And is reduced via:

const reduceTerms = function(info, terms){
    const initialLhs = terms.shift()
    const initialRhs = terms.shift()
    const initialApp = Application(
        info,
        initialLhs,
        initialRhs,
        false)
    // reduce the remaining terms by combining via Application
    const appAll = terms.reduce(
        (lhs, rhs) => Application(info, lhs, rhs, false),
        initialApp
    )
    return appAll ;
}

Which has the following steps

  1. Extract the first two terms and form an Application object which will serve as the initial instance of the reduction
  2. Reduce the remaining terms, where the reducing function takes the LHS as the aggregate and returns a new Application object applying the RHS to the LHS.

The term "T1 T2 T3 T4" would parse into the left associative tree (T4 is applied to the result of applying T3 to the result of applying T2 to T1):

Application(
    Application(
        Application(
            T1,
            T2
        ),
        T3
    ),
    T4
)
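The fold is easy to check in isolation, with strings standing in for AST nodes (apply here is a stand-in for the Application constructor):

```javascript
// Left-fold a list of terms into a left-associative application
const apply = (lhs, rhs) => `(${lhs} ${rhs})`
const reduceTerms = terms => terms.reduce(apply)
```

reduceTerms(['T1', 'T2', 'T3', 'T4']) produces "(((T1 T2) T3) T4)", mirroring the tree above.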

Abstractions, α-conversion and De Bruijn

In the lambda calculus abstractions are unary functions; multiple-parameter functions are simulated by chaining abstractions together to capture multiple terms. For example:

λx.λy.x y

Captures parameters x and y and applies y to x. However, an abstraction's body can be ambiguous as to which variable it refers to, as with the x here:

λx.λx.x x

Alpha conversion allows us to rewrite the function, converting the variables to new names and resolving the ambiguity:

λx.λy.y y

Nicolaas de Bruijn introduced an alternative technique that references a variable by a zero-based index counting backwards from its use to the abstraction that defines it. The previous example can be written in either of these two manners leveraging the De Bruijn index:

λx.λy.0 0
λ.λ.0 0

Since the variables are not referenced in the body of the abstraction by name, there really isn't much value in naming them in the lambda definition. Our original example, with x and y parameters and y being applied to x, would be rewritten:

λx.λy.1 0
λ.λ.1 0

This technique decreases the complexity of writing the interpreter that evaluates lambda terms. Without this we would need to create a function that is aware of the parameters previously defined and then create new names on the fly. That function would be a bit hairy to write.

The De Bruijn index also helps convey the concept of alpha equivalence. Alpha equivalence states that two terms are the same if they differ only in the names of their bound variables. Take these two examples of the identity function:

λx.x
λy.y

Once converted to the De Bruijn indexed form it is obvious they are the same term:

λ.0
λ.0
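The conversion to indexed form is mechanical: walk the term carrying the list of enclosing binders, and replace each variable with its position in that list. A sketch on plain objects (a representation of my own choosing, separate from the interpreter's AST):

```javascript
// Terms: {vr: name} | {lam: name, body} | {app: [lhs, rhs]}
// Replace each variable with its 0-based distance from the
// abstraction that binds it, counted from the innermost binder.
const toDeBruijn = (term, binders = []) => {
    if (term.vr !== undefined) return { idx: binders.indexOf(term.vr) }
    if (term.lam !== undefined)
        return { lam: toDeBruijn(term.body, [term.lam, ...binders]) }
    return { app: term.app.map(t => toDeBruijn(t, binders)) }
}

// λx.λy.x y  →  λ.λ.1 0
const example = {
    lam: 'x',
    body: { lam: 'y', body: { app: [{ vr: 'x' }, { vr: 'y' }] } }
}
```

Alpha equivalence falls out for free: λx.x and λy.y both convert to the same { lam: { idx: 0 } }.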

Match: An attempt at implementing functional pattern matching

Languages that have built-in pattern matching allow for some really simple solutions. It is essentially a case statement that works across types and values, with deconstruction of those types and values into their constituent parts. My first attempt allowed for an object x to be passed in along with an object of key/value pairs, where the key is a class name and the value is a function that takes a projection of the matched object's values. Since JavaScript doesn't have the kind of deconstruction necessary, I have implemented a property project on each of the classes in the AST to pull the fields out.

const match = (x, o) => {
    const className = x.constructor.name
    /* object o has a matching type, with a project property */
    if(o[className] && x['project'])
    {
        return o[className].apply(null, x['project'])
    }
    /* wildcard */
    else if(o['_'])
    {
        return o['_'].apply(null, x['project'])
    }
    throw "Match case unhandled"
}

Given a use case we can see how this might work:

const Point = function(x, y)
{
    const self = getInstance(this, Point)
    self['project'] = [x, y]
    return Object.freeze(self)
}

const p = Point(10, 15) ;

match(p, {
    "Point": (x, y) =>
        console.log(`p is Point(${x}, ${y})`),
    "SomeOtherType": (_) =>
        console.log('p is SomeOtherType'),
    "_": (_) =>
        console.log('p is an unknown type'),
})

An original point of debate was the order of the object being evaluated and the object containing the match function pairs. Languages that implement match natively typically reverse the parameters; I found that syntax awkward in my implementation. I also considered a more complex implementation of match that matches and projects across multiple input objects:

const match2 = (...xs) => {
    const functionMap = xs.pop()
    const getFromFunctionMap = x => functionMap[x]
    const isInFunctionMap = x =>
        getFromFunctionMap(x) !== undefined
    const getMatchedF = S.pipe([
        getTypeNamePermutations,
        //=> [[T1, ...TN]]
        S.map(joinWithSpace),
        //=> ["T1 ...TN"]
        S.map(trim),
        //=> ["T1 ...TN"]
        concatUnderscore,
        //=> ["T1 ...TN", "_"]
        S.find(isInFunctionMap),
        //=> Maybe "T1 ...TN"
        S.map(getFromFunctionMap),
        //=> Maybe (...xs -> ?)
        getValue
    ])
    const getProjections = xs =>
        xs.reduce((a, x) => {
            const y = getProjection(x)
            return a.concat([y])
        }, [])
    const matchedF = getMatchedF(xs)
    const args = getProjections(xs)
    if(matchedF)
    {
        return matchedF.apply(
            null, (xs.length > 1) ? [args] : args[0])
    }
    throw "Match case unhandled"
}

The interesting bits here include the use of the rest operator to take arbitrary input. As with the prior implementation, the last parameter is an object that maps a "type match" to the function that handles the match, so we pop that off and store it as the functionMap. Two helper functions assist in using that functionMap: getFromFunctionMap and isInFunctionMap are fairly self explanatory. The hairy part is getMatchedF, which takes the remaining input and forms permutations of the passed-in types (getTypeNamePermutations), converts them to strings (joinWithSpace), adds the wildcard match (concatUnderscore), and (potentially) pulls any matched function from the functionMap. The function getProjections returns a jagged array of projections from each of the underlying objects passed in. Finally, matchedF.apply handles an edge case where we match across a single input versus the behavior where there are multiple inputs. It was fairly hairy to write, but straightforward to use:

const p = Point(10, 15) ;

match2(p, p, {
    "Point Point": ([[x, y], [x2, y2]]) =>
        console.log(`Object1 is Point(${x}, ${y}) and Object2 is Point(${x2}, ${y2})`),
    "Point _": ([[x, y], _]) =>
        console.log(`Object1 is a Point(${x}, ${y}) and Object2 is unknown`),
    "_ Point": ([_, [x, y]]) =>
        console.log(`Object1 is unknown and Object2 is a Point(${x}, ${y})`),
    "_": (_) =>
        console.log('they both are unknown types'),
})

While it is not nearly as powerful as the construct in the ML family of languages, it provided an easier implementation on many aspects of the eval and print components of the interpreter.

Evaluation

With evaluation we want to be able to perform application, following the fairly simple three rules from the operational semantics:

  1. Evaluate the LHS first
  2. When that can no longer be reduced, evaluate the RHS
  3. When both sides are reduced as far as they can be and you are left with values, perform substitution

As simple as that, the code should read fairly easily (leveraging match or match2):

const isValue = (ctx, term) => {
    return match2(term, {
        "Abstraction": _ => true,
        "_": _ => false
    })
}

const _eval = (ctx, ast) => match2(ast, {
    "Abstraction": _ => ast,
    "Application": (info, lhs, rhs, _) => {
        // Rule 3. Perform the substitution
        if(isValue(ctx, lhs) && isValue(ctx, rhs))
        {
            return substitue(ast)
        }
        // Rule 2. LHS is a value, eval RHS
        else if(isValue(ctx, lhs))
        {
            return Application.cloneWithRhs(ast, _eval(ctx, rhs))
        }
        // Rule 1. Eval LHS
        else
        {
            return Application.cloneWithLhs(ast, _eval(ctx, lhs))
        }
    },
    "_": _ => {
        if(isValue(ctx, ast))
        {
            return ast
        }
        else
        {
            throw NO_RULES_APPLY
        }
    }
})

In a language that implements match natively we wouldn't need the if statements within the Application case; we could identify those all as different matches using value guards. In the end that would have matched (pun intended) the text better and been slightly more readable, but this isn't too bad. The two static helper methods on the Application type, cloneWithLhs and cloneWithRhs, form a new Application instance replacing the LHS and RHS respectively. Nestled into the clones you will see that the appropriate side is reduced through a recursive call to _eval. In the substitution case we want to take the RHS and substitute it into the place of the bound variable in the LHS term.

The thing is: The De Bruijn Index!

Consider the term

λx.(λy.λz.y z)(λn.x)

When we perform the application, we end up with some mis-referencing of the De Bruijn index if we don't take steps to correct it. It helps to look at the term as a tree [where @ stands for application] (9). We can consider the process of application (substitution) and visualize the impact of the movement on the De Bruijn index. The move requires us to shift the indexes in position.

By visualizing the three components of the term as a tree, with nodes representing abstraction (λ), application (@) and variables ({0 | 1 | 2}), we can easily see what needs to happen in the substitution. Before considering the different panels, examine the edges. The solid edges represent either the term of an abstraction or the sides of the application. The dotted edges provide a reference from each variable back to the abstraction that captures it; these edges are also labeled with the original variable name to help in the reference.

The second panel highlights the LHS of the application in Orange and the RHS of the application in Green. Substitution requires us to

  1. Remove the lambda term on the LHS and rewire the LHS parent node down to the next term
  2. Replace the variable reference in the LHS with the RHS lambda term

This gives us the new term λx.λz.(λn.x) z, but if we look at this in the De Bruijn form without performing the shift we see a problem. The substituted term (λn.x) has the De Bruijn form λ.1, but after substitution the variable one abstraction away is z, not x! x is two steps away. Therefore we need to shift; the rule is essentially:

  1. All abstractions below the substitution point, where the parent node was rewired, need to be investigated.
  2. All variable references with a De Bruijn index > 0 need to have one level of reference added to them

This can all be accomplished by a function that looks like this:

// perform the substitution for an application node, adjusting
// the De Bruijn indices on the way in and out
const substitue = (application) => {
    const a = shift(-1, application) ;
    const [info, lhs, rhs, _] = a.project ;
    const lhsTerm = lhs.project[2] ;
    return shift(1, _substitue(rhs, lhsTerm))
}
const _substitue = (replacement, ast) => {
    return match2(ast, {
        "Application": (info, lhs, rhs, isParens, ctx) => {
            return Application(
                info,
                _substitue(replacement, lhs),
                _substitue(replacement, rhs),
                isParens,
                ctx
            )
        },
        "Abstraction": (info, variableName, term, isParens, ctx) => {
            return Abstraction(
                info,
                variableName,
                _substitue(replacement, term),
                isParens,
                ctx
            )
        },
        "Variable": (info, variableName, deBruijnIndex, ctx) => {
            if(deBruijnIndex === 0) {
                return replacement ;
            }
            return ast ;
        }
    })
}
// "shift" the De Bruijn index of all variables within the AST by some amount
const shift = (amount, ast) => {
    return match2(ast, {
        "Application": (info, lhs, rhs, isParens, ctx) => {
            const lT = shift(amount, lhs)
            const rT = shift(amount, rhs)
            return Application(
                info,
                lT,
                rT,
                isParens,
                ctx
            )
        },
        "Abstraction": (info, variableName, term, isParens, ctx) => {
            const t = shift(amount, term)
            return Abstraction(
                info,
                variableName,
                t,
                isParens,
                ctx
            )
        },
        "Variable": (info, variableName, deBruijnIndex, ctx) => {
            return Variable(
                info,
                variableName,
                deBruijnIndex + amount,
                ctx
            )
        }
    })
}
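For comparison, here is a sketch of the textbook formulation of shifting and substitution on plain De Bruijn terms (the shape follows Pierce's Types and Programming Languages; the representation and names are mine, and the cutoff c, which the interpreter's simpler shift does not track, ensures that variables bound inside the moved term stay put):

```javascript
// Terms: {idx} | {lam: body} | {app: [lhs, rhs]}

// Shift free variables (index >= cutoff c) by d
const shift = (d, t, c = 0) =>
    t.idx !== undefined ? { idx: t.idx >= c ? t.idx + d : t.idx }
    : t.lam !== undefined ? { lam: shift(d, t.lam, c + 1) }
    : { app: t.app.map(u => shift(d, u, c)) }

// Replace variable j with s, bumping j and shifting s under each binder
const subst = (j, s, t) =>
    t.idx !== undefined ? (t.idx === j ? s : t)
    : t.lam !== undefined ? { lam: subst(j + 1, shift(1, s), t.lam) }
    : { app: t.app.map(u => subst(j, s, u)) }

// β-reduction: (λ.body) v → shift(-1, [0 ↦ shift(1, v)] body)
const beta = (lam, v) => shift(-1, subst(0, shift(1, v), lam.lam))
```

Applying the identity λ.0 to itself reduces back to λ.0, and the paired shifts cancel so no index is left dangling.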

The Source

It is still a bit of a work in progress and I need to tidy up the repository, but you can find the code here: https://github.com/tb01923/lambda-calculi-et-al/tree/master/simple-untyped-lambda

