Article 1 of 8 in the series Understanding Transducers through Python
By Rob Smallshire

What is a transducer?

Transducers - a portmanteau of ‘transform reducers’ - are a new functional programming concept introduced into the Clojure programming language. Although transducers are actually pretty straightforward in retrospect, wrapping your brain around them, especially if you’re not already a competent Clojurist, can be challenging. In this series of articles we'll explain transducers by implementing them from first principles in Python.

To understand transducers, we must first understand reducing functions. In a nutshell, a reducing function is any function which takes a partial result and a new piece of information to produce a new result. Reducers are what we pass to the reduce() function in Python and can be thought of as having the following structure:

(result, input) -> result
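
For example, a minimal reducing function (our own illustration, not part of the Clojure reference material) which sums numbers has exactly this shape:

def add(result, item):
    # Combine the partial result so far with one new item,
    # returning the next partial result.
    return result + item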

A transducer is a function which accepts a reducing function and returns a new reducing function:

((result, input) -> result)  ->  ((result, input) -> result)

A transducer function can be used to augment an existing reducing function with some additional reducing behaviour, producing a new reducing function which applies both reducing behaviours in a specific order. In this way, we can use transducers to chain arbitrary reducing functions together. This in turn is useful, because it allows us to perform complex reduction operations using only a single call to reduce(). In other words, transducers are primarily useful because they provide a means of composing reducers.

As we will see, transducers also facilitate a clear separation of concerns between the algorithmic essence of a particular reduction, and the details of where the input data come from and where the output data go.

In this series, we'll introduce transducers by implementing them from scratch in everybody’s favourite executable pseudocode, Python. We’ll start with the familiar staples of functional programming, map(), filter() and reduce(), and derive transducers from first principles. We’ll work towards a set of general tools which work with eager collections, lazy ‘pull’ sequences, and ‘push’ event streams. Along the way we’ll cover stateful transducers and transducer composition, demonstrating that transducers are both more general, and more fundamental, than the functional programming tools baked into Python and many other languages.

By the end of this series, not only should transducers make sense to you, but you’ll have a recipe for implementing transducers in your own favourite programming language.

The main objective of this exercise is to understand transducers. A secondary objective is to see how some more advanced functional programming techniques can be realised in a language like Python. We're not particularly advocating that you should widely adopt transducers in Python instead of, say, generators. For the time being, we expect transducers in Python to remain something of a curiosity. That said, if you do find any interesting applications of transducers in Python, do please let us know.

The origin of transducers

Transducers were introduced by Rich Hickey, creator of the Clojure programming language, in a blog post "Transducers are Coming" [1] on August 6th, 2014.

The best way to understand transducers is to implement them yourself. Unfortunately, from an educational point of view, the Clojure implementations are mixed up with other details of Clojure which are largely irrelevant to transducers, and they lean heavily on anonymous functions, which makes them unnecessarily difficult to discuss.

How does this relate to Python?

Python is a pretty straightforward language, at least on the surface. Although Python supports lambdas and other forms of anonymous functions, in this presentation we have striven to use named functions to make the intent of the code more obvious, rather than writing Python in the style of Clojure.

Both Clojure and Python happen to be dynamically typed – or uni-typed – languages, and although that's not particularly important here, it does mean that the Python version can stay fairly close to the Clojure original, and we don't need to get sidetracked into a discussion of how transducers are typed.

Reviewing Python's functional programming tools

Let's briefly review three key functional programming tools included with Python, chiefly for the benefit of those people new to Python, and specifically Python 3. We'll look at map(), filter() and reduce() in turn.

map()

First, the map() function:

>>> help(map)

Help on class map in module builtins:

class map(object)
 |  map(func, *iterables) --> map object
 |
 |  Make an iterator that computes the function using arguments from
 |  each of the iterables.  Stops when the shortest iterable is exhausted.

The map() function accepts a function - or more strictly in Python, any callable object – as its first argument, and then any number of iterable series. In Python an iterable is a sort of lazy sequence which supports only forward iteration. For now, we'll pass the built-in function len as the first argument, and we'll stick to using one iterable, which will be a list of words, produced by calling split() on a string object:

>>> map(len, "Compute the length of each word".split())
<map object at 0x102beaf28>

In fact, in Python 3, which we're using here, map() is lazy and produces an iterator. To force evaluation, we'll pass this iterator to the list constructor:

>>> list(map(len, "Compute the length of each word".split()))
[7, 3, 6, 2, 4, 4]

As expected, we get the length of each word.

filter()

Secondly, we'll look at the filter() function:

>>> help(filter)

Help on class filter in module builtins:

class filter(object)
 |  filter(function or None, iterable) --> filter object
 |
 |  Return an iterator yielding those items of iterable for which function(item)
 |  is true. If function is None, return the items that are true.

The filter() function accepts a predicate function – a function which returns True or False – and an iterable series. Each item from the series is passed in turn to the predicate, and filter() returns an iterator over a series which contains only those items from the source series which match the predicate:

>>> list(filter(lambda w: 'o' in w,
... "the quick brown fox jumps over the lazy dog".split()))
['brown', 'fox', 'over', 'dog']

Neither map() nor filter() sees much use in contemporary Python, owing to the capabilities of list comprehensions, which provide a more natural syntax for mapping functions over, and filtering items from, iterable series:

>>> [len(w) for w in "jackdaws love my big sphinx of quartz".split()
... if 'a' in w]
[8, 6]

reduce()

The built-in reduce() function has had a troubled life [2] in the history of Python, and in the transition from Python 2 to Python 3, it was moved out of the language proper into the Python standard library. In fact, Guido was ready to throw lambda out with the bathwater!

Let's take a closer look at reduce(), which will set us firmly on the road to understanding and implementing transducers:

>>> from functools import reduce
>>> help(reduce)

Help on built-in function reduce in module _functools:

reduce(...)
    reduce(function, sequence[, initial]) -> value

    Apply a function of two arguments cumulatively to the items of a sequence,
    from left to right, so as to reduce the sequence to a single value.
    For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
    ((((1+2)+3)+4)+5).  If initial is present, it is placed before the items
    of the sequence in the calculation, and serves as a default when the
    sequence is empty.

When we execute reduce(), we can see that it evaluates its arguments eagerly, immediately producing a result:

>>> reduce(lambda x, y: x+y, [1, 2, 3, 4, 5])
15

The most common uses of reduction in Python, such as summing a series of items, have been replaced by the sum(), any() and all() functions built into the language, so reduce() doesn't see much use either.
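
For instance, the summation above is more idiomatically written as:

>>> sum([1, 2, 3, 4, 5])
15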

We can also supply an initial item to be used in front of the supplied sequence, which is especially useful if the sequence is empty:

>>> reduce(lambda x, y: x+y, [1, 2, 3, 4, 5], 10)
25
>>> reduce(lambda x, y: x+y, [], 10)
10
>>> reduce(lambda x, y: x+y, [])
Traceback (most recent call last):
  File "", line 1, in
TypeError: reduce() of empty sequence with no initial value
>>>

Notice that with an empty series, the initial value becomes necessary to avoid an error.

It can sometimes be helpful to think of the initial value as being the identity of an operation. This value depends on the operation itself; for example, the additive identity is zero, whereas the multiplicative identity is one.
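
We can see this in the REPL: reducing an empty series simply hands back whichever identity we supply:

>>> reduce(lambda x, y: x + y, [], 0)  # additive identity
0
>>> reduce(lambda x, y: x * y, [], 1)  # multiplicative identity
1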

You might be tempted into thinking that the initial value should be of the same type as the values in the data series, but this isn't necessarily so. For example, using the following function we can accumulate a histogram into a dictionary item by item. In this case the identity value is an empty dictionary (an empty histogram), not an integer:

>>> def count(hist, item):
...     if item not in hist:
...         hist[item] = 0
...     hist[item] += 1
...     return hist
...
>>> reduce(count, [1, 1, 5, 6, 5, 2, 1], {})
{1: 3, 2: 1, 5: 2, 6: 1}

Note that in production code, this use of reduce() with count() would be much better handled using the Counter type [3] from the collections module in the Python Standard Library!
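
For comparison, the Counter equivalent of the same histogram looks like this:

>>> from collections import Counter
>>> Counter([1, 1, 5, 6, 5, 2, 1])
Counter({1: 3, 5: 2, 6: 1, 2: 1})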

Our lambda expression for adding two numbers and our count() function serve as so-called "reducing functions", or simply 'reducers' for short. In fact, this is what we'll call any function that can be passed to reduce() in this way throughout this series.

Those of you familiar with functional programming techniques in other languages will recognise Python's reduce() as being equivalent to Haskell's foldl, Clojure's reduce, F#'s reduce or fold, Ruby's inject, C++'s std::accumulate(), or the Aggregate() extension method from .NET's LINQ.

Reduce is more fundamental than map or filter

Reduce is interesting because it's more fundamental than map() or filter(): we can implement map() and filter() in terms of reduce(), but not vice versa. To do so, we just need to cook up the right reducing functions and supply a suitable initial value. Here is map() implemented in terms of reduce():

"""A module t.py in which we'll do live coding."""

from functools import reduce

def my_map(transform, iterable):

    def map_reducer(sequence, item):
        sequence.append(transform(item))
        return sequence

    return reduce(map_reducer, iterable, [])

Then in the REPL:

>>> my_map(str.upper, "Reduce can perform mapping too!".split())
['REDUCE', 'CAN', 'PERFORM', 'MAPPING', 'TOO!']

Similarly we can make an implementation of filter() in terms of reduce():

def my_filter(predicate, iterable):

    def filter_reducer(sequence, item):
        if predicate(item):
            sequence.append(item)
        return sequence

    return reduce(filter_reducer, iterable, [])

Then in the REPL:

>>> my_filter(lambda x: x % 2 != 0, [1, 2, 3, 4, 5, 6, 7, 8])
[1, 3, 5, 7]

Reducers

In the two previous examples, the local functions map_reducer() and filter_reducer() were used as the reducing functions with reduce(). Our my_map() and my_filter() functions reduce one collection to another, transforming values and collecting the results together. For the time being, we'll stick with the notion that a reducer is any operation you can pass to the reduce() function, although we'll extend and refine this notion later.

Composability is perhaps the most important quality of any programming abstraction. How composable are my_map() and my_filter()? Given a predicate function is_prime() and a transforming function square() defined like this:

from math import sqrt

def is_prime(x):
    if x < 2:
        return False
    for i in range(2, int(sqrt(x)) + 1):
        if x % i == 0:
            return False
    return True


def square(x):
    return x*x

We can compose this filtering expression,

>>> my_filter(is_prime, range(100))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
67, 71, 73, 79, 83, 89, 97]

with a mapping expression using regular function composition:

>>> my_map(square, my_filter(is_prime, range(100)))
[4, 9, 25, 49, 121, 169, 289, 361, 529, 841, 961, 1369, 1681, 1849,
2209, 2809, 3481, 3721, 4489, 5041, 5329, 6241, 6889, 7921, 9409]

One issue here is that since my_map() and my_filter() are implemented in terms of reduce(), they are eager, and so a full intermediate list of elements is created. Notably, if we'd used the built-in Python 3 versions of map() and filter(), evaluation would have been lazy and consequently more memory efficient, so we've lost an important quality here.
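
For comparison, composing the lazy built-ins creates no intermediate list; nothing is computed until the result is consumed (the object address will of course vary):

>>> lazy = map(square, filter(is_prime, range(100)))
>>> lazy
<map object at 0x102beb128>
>>> next(lazy)
4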

Furthermore, this sort of functional pipeline can get hard to read in a language like Python, as the arguments to the outermost (last applied) function become ever more widely separated. Other "real" functional languages such as Haskell and F# have better support for partial function application and for pipelining results through a chain of functions. To some extent this can be solved in Python by using named intermediate results, although this trades the readability problem for the difficulty of coming up with meaningful names for the intermediate values.
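
For example, the pipeline above could be rewritten with named intermediate results like this, which reads top to bottom at the cost of inventing two names:

>>> primes = my_filter(is_prime, range(100))
>>> squares_of_primes = my_map(square, primes)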

Progressively abstracting reducers away from iterables

Notice that although reduce() abstracts away some of the details of the operation - specifically, how the items are transformed and combined - it is fundamentally coupled to the notion of iterable series. Furthermore, our my_map() and my_filter() functions are closely tied to the notion of mutable sequences, since they both call append().

Iterables in Python can be lazy, but our my_filter() and my_map() functions eagerly return fully realised list sequences.

Let's return to our function definitions to see why:

def my_map(transform, iterable):
    def map_reducer(sequence, item):
        sequence.append(transform(item))
        return sequence
    return reduce(map_reducer, iterable, [])

def my_filter(predicate, iterable):
    def filter_reducer(sequence, item):
        if predicate(item):
            sequence.append(item)
        return sequence
    return reduce(filter_reducer, iterable, [])

The my_map() and my_filter() functions have quite a lot in common:

  • Both my_map() and my_filter() call reduce()
  • Both my_map() and my_filter() supply a mutable sequence – specifically an empty list – as the initial value to reduce()
  • Both map_reducer() and filter_reducer() call append(), thereby expecting a mutable sequence

Let's progressively refactor these functions, extracting the common elements as we go. We'll start by moving the use of reduce() out of the functions completely. Instead we'll simply return the inner reducing function object, which can then be used externally. Our outer my_map() and my_filter() functions now become what are essentially reducer factories; we'll reflect this in the code by renaming them to make_mapper() and make_filterer():

def make_mapper(transform):

    def map_reducer(sequence, item):
        sequence.append(transform(item))
        return sequence

    return map_reducer

def make_filterer(predicate):

    def filter_reducer(sequence, item):
        if predicate(item):
            sequence.append(item)
        return sequence

    return filter_reducer

These can be used individually with reduce() like this:

>>> from t import *
>>> reduce(make_mapper(square), range(10), [])
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> reduce(make_filterer(is_prime), range(100), [])
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
67, 71, 73, 79, 83, 89, 97]

Unfortunately, it's not possible to compose the mapper and filterer into a single function which can be used with a single application of reduce(). If we want to perform both mapping and filtering we must still call reduce() twice.
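
To see the problem concretely, here's a sketch of the two-pass approach: the inner call to reduce() filters, building a full intermediate list, which the outer call then maps over:

>>> reduce(make_mapper(square),
...        reduce(make_filterer(is_prime), range(20), []),
...        [])
[4, 9, 25, 49, 121, 169, 289, 361]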

We've pointed out that both of our inner reducing functions depend on their first argument supporting an append() method, so we must pass a suitable mutable sequence type – such as a list – as the seed item for reduce(). In this case, append() is just a tool for combining an existing partial result (the list) with a new piece of data (the new item).

Let's abstract away that dependency on sequences by arranging to pass in this "combining function", which, in the case of appending to a sequence, will look like this:

def appender(result, item):
    result.append(item)
    return result

Look carefully at our combining function and recall that 'reducing' is just a synonym for 'combining'. This innocuous function, which does nothing more than wrap a method call and return the modified sequence, is more interesting than it might first appear, because it too is a reducing function: it takes an intermediate result (the sequence) and the next piece of input (the item), and returns the next result for any subsequent iterations. Our combining function appender() is itself a reducer! We can prove this to ourselves by using this roundabout way of constructing a list from a range:

>>> from t import *
>>> reduce(appender, range(5), [])
[0, 1, 2, 3, 4]

Having identified appender() as a reducer, we'll add another layer of abstraction by introducing a further level of nested functions. The new layer affords us the opportunity to pass in a reducing function, such as appender(). We use a separate layer of inner functions, rather than adding an argument to the outer function, because the point in our client code at which we want to specify the transform or predicate is different from the point at which we want to specify the reducer.

We've made the following changes to make_mapper():

def mapping(transform):

    def make_mapping_reducer(reducer):

        def map_reducer(result, item):
            return reducer(result, transform(item))

        return map_reducer

    return make_mapping_reducer

  1. Renamed make_mapper() to mapping().
  2. Added a new inner function around map_reducer() called make_mapping_reducer(). This function accepts the combining reducer and returns the map_reducer function.
  3. Adjusted the outer mapping() function to return the make_mapping_reducer function.
  4. Reworked the implementation of map_reducer() to be in terms of the passed-in combining reducer instead of append(), thereby breaking the dependency on sequence behaviour.
  5. Renamed the sequence argument of map_reducer() to result, since it contains the partial result and is no longer necessarily a sequence.

And we've made the following changes to make_filterer():

def filtering(predicate):

    def make_filtering_reducer(reducer):

        def filter_reducer(result, item):
            return reducer(result, item) if predicate(item) else result

        return filter_reducer

    return make_filtering_reducer

  1. Renamed make_filterer() to filtering().
  2. Added a new inner function around filter_reducer() called make_filtering_reducer(). This function accepts the combining reducer and returns the filter_reducer function.
  3. Adjusted the outer filtering() function to return the make_filtering_reducer function.
  4. Renamed the sequence argument of filter_reducer() to result.
  5. Reworked the implementation of filter_reducer() to be in terms of the passed-in combining reducer instead of append(), thereby breaking the dependency on sequence behaviour.
  6. Used a conditional expression rather than a conditional statement.

Finally: Transducers!

Recall that the word 'transducer' is a convenience (and contrivance) for functions which "transform reducers". Look closely at our inner functions make_mapping_reducer() and make_filtering_reducer(). Both of these functions accept a reducer and return a new reducer based on the one passed in. These inner functions are therefore our transducers; let's rename them as such to make this completely clear:

def mapping(transform):

    def mapping_transducer(reducer):

        def map_reducer(result, item):
            return reducer(result, transform(item))

        return map_reducer

    return mapping_transducer

def filtering(predicate):

    def filtering_transducer(reducer):

        def filter_reducer(result, item):
            return reducer(result, item) if predicate(item) else result

        return filter_reducer

    return filtering_transducer

With the transducers identified, the outermost functions mapping() and filtering() are exposed for what they are: transducer factories.
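
Before composing anything, we can convince ourselves that a lone transducer does what it claims: handed our appender(), the transducer made by mapping(square) returns a new reducer which squares each item before appending it:

>>> square_appender = mapping(square)(appender)
>>> reduce(square_appender, range(5), [])
[0, 1, 4, 9, 16]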

The wonderful thing about such transducers is that they can be composed in such a way that the operations proceed element by element, without the need for intermediate sequences. This is achievable because the combining reducer passed to a transducer needn't be our appender(); it could be any reducing function, with appender() only being used at the "bottom" of the composite reducer to place final results into an actual data structure, such as a list. Transducer composition allows us to chain multiple reducing operations together into a single composite reducer.

Let's give our transducers a whirl, using named function arguments for clarity:

>>> from t import *
>>> reduce(filtering(predicate=is_prime)(reducer=appender), range(20), [])
[2, 3, 5, 7, 11, 13, 17, 19]

Note that we pass the last reduction operation we want to perform on each item (appending) as a reducer to the first reduction operation we want to perform (primality testing). This is the opposite of normal function composition.

Instead of appender() we can pass a different reducer - in this case, one made by the mapping() transducer factory:

>>> reduce(filtering(
... predicate=is_prime)(
... reducer=mapping(
... transform=square)(
... reducer=appender)),
... range(20),
... [])
[4, 9, 25, 49, 121, 169, 289, 361]

What's important to recognise here is that we have composed the filtering and mapping operations to produce the effects of both using a single pass over the input sequence with a single application of reduce().
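
Nothing about this requires the final result to be a sequence, either. As a parting sketch, swapping appender() for a simple tallying reducer counts the primes in a single pass, with no list in sight:

>>> def tally(result, item):
...     return result + 1
...
>>> reduce(filtering(is_prime)(tally), range(20), 0)
8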

In the next article in this series, we'll look at improving on this effective, but somewhat clunky, way of composing transducers.

[1] Transducers are Coming: the blog post introducing transducers to the world.
[2] The fate of reduce() in Python 3000: the Artima Developer posting in which Guido van Rossum explains his "reasons for dropping lambda, map() and filter()" from what was to become Python 3. He goes on: "I expect tons of disagreement in the feedback, all from ex-Lisp-or-Scheme folks". Ultimately, though, the proposal was watered down.
[3] Python documentation for Counter.