Python wasn’t built as a functional language. When I write it, I usually think in terms of objects and classes, of for loops and lists that I change directly. But over time, I began to see the appeal of another way: functional programming. This style treats computation as the evaluation of mathematical functions. It avoids changing state or mutating data.
The core ideas are simple. A “pure” function, for example, always gives the same output for the same input. It doesn’t secretly read or change anything outside itself. This makes it incredibly predictable. Another idea is immutability. Instead of changing a list, you create a new list with the desired change. This prevents bugs where one part of your code accidentally breaks data another part is using.
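To make that concrete, here is a minimal illustration (a toy example of my own) of a pure function that returns a new list rather than mutating its input:

def add_item(items, item):
    # Pure: builds and returns a new list, leaving the input untouched
    return items + [item]

original = [1, 2, 3]
updated = add_item(original, 4)
print(original)  # [1, 2, 3] -- unchanged
print(updated)   # [1, 2, 3, 4]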
Python has some functional features built-in, like map() and filter(). But to really adopt this style, you need better tools. That’s where libraries come in. They give us persistent data structures, safer error handling, and elegant ways to combine functions. Let’s look at five that have changed how I write Python.
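For a quick refresher, a toy pipeline with those built-ins might look like this:

numbers = [1, 2, 3, 4, 5]
evens_doubled = map(lambda x: x * 2, filter(lambda x: x % 2 == 0, numbers))
print(list(evens_doubled))  # Output: [4, 8]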
I’ll start with toolz. If you’ve ever chained a long series of data transformations, you know how quickly the code becomes a nest of parentheses or temporary variables. The toolz library is a Swiss Army knife for creating clean, readable data pipelines. It provides a huge collection of functions that work well with iterators, promoting lazy evaluation and composition.
Think of it as supercharging Python’s own itertools and functools. One of its simplest yet most powerful tools is compose. It lets you build a new function by combining others, reading from right to left. Here’s what I mean.
from toolz import compose

# Some simple, pure functions
def add(x, y):
    return x + y

def multiply_by_two(x):
    return x * 2

def subtract_five(x):
    return x - 5

# Create a single transformation pipeline
# This reads as: subtract_five(multiply_by_two(add(x, y)))
complex_calculation = compose(subtract_five, multiply_by_two, add)

# Now use it
result = complex_calculation(10, 3)  # ((10 + 3) * 2) - 5 = 21
print(result)  # Output: 21
This is declarative. I describe what I want to happen: add, then double, then subtract five. The compose function handles how it’s executed. Another gem is curry. Currying lets you take a function that accepts multiple arguments and call it with one argument at a time, getting a new function each time.
from toolz import curry

# A normal function, made curryable with the decorator
@curry
def send_greeting(greeting, name, punctuation):
    return f"{greeting}, {name}{punctuation}"

# Partial application: fix the first argument
say_hello = send_greeting("Hello")

# Now say_hello is a function that needs 'name' and 'punctuation'
say_hello_to_john = say_hello("John")

# Now this function just needs 'punctuation'
full_greeting = say_hello_to_john("!")
print(full_greeting)  # Output: Hello, John!
This is incredibly useful for creating specific utility functions on the fly or for configuring behavior in a pipeline. toolz also has fantastic utilities for working with dictionaries and sequences, like groupby, merge, and sliding_window. It makes data processing feel fluid.
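Here’s a quick taste of those three, on toy data of my own invention:

from toolz import groupby, merge, sliding_window

words = ["apple", "avocado", "banana", "blueberry", "cherry"]
print(groupby(lambda w: w[0], words))
# {'a': ['apple', 'avocado'], 'b': ['banana', 'blueberry'], 'c': ['cherry']}

defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090}
print(merge(defaults, overrides))  # later dicts win: {'host': 'localhost', 'port': 9090}

print(list(sliding_window(2, [1, 2, 3, 4])))  # [(1, 2), (2, 3), (3, 4)]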
Next is fn.py. This library feels like it brings concepts from languages like Scala or Haskell directly into Python: Scala-style lambdas, lazy streams, and monadic types for handling absence. These ideas sound complex, but they solve everyday problems elegantly.
Let’s talk about Option. In Python, we often use None to represent the absence of a value. But this leads to countless if value is not None: checks scattered through your code. An Option type explicitly states that a value might be missing. In fn.py’s vocabulary, it is either Full(value) or Empty().
from fn.monad import Full, Empty

def safe_divide(dividend, divisor):
    if divisor == 0:
        return Empty()
    return Full(dividend / divisor)

# Instead of checking for None, we use map and get_or
result = safe_divide(10, 2)
value = result.map(lambda x: x * 3)  # Only runs if result is Full
print(value.get_or(0))  # Output: 15.0

result_nothing = safe_divide(10, 0)
value_nothing = result_nothing.map(lambda x: x * 3)  # Empty stays Empty
print(value_nothing.get_or("Cannot divide by zero"))  # Output: Cannot divide by zero
The map method only applies the function if there’s a value inside. This chains operations safely without a single if statement. The same pattern extends to Either, which is like Option but for operations that can fail with a specific error: it’s either Right(success_value) or Left(error_value). fn.py’s roadmap describes an Either monad, but the released package only ships Option, so the sketch below hand-rolls the idea; the returns library, covered next, provides a production-ready version.
# A minimal hand-rolled Either, in the spirit of fn.py's Option
class Right:
    def __init__(self, value):
        self.value = value
    def map(self, fn):
        return Right(fn(self.value))
    def fold(self, on_error, on_success):
        return on_success(self.value)

class Left:
    def __init__(self, error):
        self.error = error
    def map(self, fn):
        return self  # errors short-circuit the rest of the pipeline
    def fold(self, on_error, on_success):
        return on_error(self.error)

def parse_integer(user_input):
    try:
        return Right(int(user_input))
    except ValueError as e:
        return Left(f"Parse error: {e}")

# Work with the successful path
calculation = (
    parse_integer("42")
    .map(lambda x: x + 10)  # Executes only on a Right
    .map(lambda x: x * 2)
)

# Finally, extract the result or handle the error
final = calculation.fold(
    lambda error: f"Failed: {error}",     # Left (error) branch
    lambda success: f"Result: {success}"  # Right (success) branch
)
print(final)  # Output: Result: 104
This style forces you to handle both success and failure paths upfront, making your code more robust. fn.py encourages you to think about all possible outcomes as part of the data flow.
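One more fn.py convenience worth a mention is the optionable decorator, which wraps a function that may return None so that it yields an Option instead. A small sketch with a made-up lookup (note that optionable treats any falsy return value, not just None, as Empty):

from fn.monad import optionable

@optionable
def find_age(users, name):
    return users.get(name)  # returns None when the key is missing

users = {"ann": 30}
print(find_age(users, "ann").map(lambda age: age + 1).get_or(0))  # Output: 31
print(find_age(users, "bob").get_or(0))  # Output: 0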
My third library is returns. If you like the ideas in fn.py but want something more explicit and type-friendly, returns is a fantastic choice. It’s built with type hints in mind and offers a rich set of primitives to make your function signatures honest about what they return and what can go wrong.
The Result type is its workhorse, similar to Either. A function returns a Result[SuccessType, FailureType]. This is incredibly clear to anyone reading your code or using an IDE. Let’s refactor the parsing example with returns.
from returns.result import Result, Success, Failure
from returns.pipeline import flow

def parse_integer_v2(user_input: str) -> Result[int, str]:
    try:
        return Success(int(user_input))
    except ValueError:
        return Failure("Invalid integer string")

def add_ten(number: int) -> int:
    return number + 10

def double(number: int) -> int:
    return number * 2

# Use `flow` to create a pipeline of operations that work on the Success value
pipeline_result = flow(
    "42",
    parse_integer_v2,          # Result[int, str]
    lambda r: r.map(add_ten),  # Map over the Success case
    lambda r: r.map(double),
)
print(pipeline_result)  # Output: <Success: 104>

# To see a failure
failed_result = flow(
    "not_a_number",
    parse_integer_v2,
    lambda r: r.map(add_ten),
)
print(failed_result)  # Output: <Failure: Invalid integer string>
The flow function is like compose, except it applies the functions immediately to a starting value, reading left to right, and it pairs naturally with container types like Result. returns also provides the @safe decorator to automatically catch exceptions and wrap them in a Failure.
from returns.result import safe

@safe
def risky_operation(data: dict) -> int:
    # This could raise a KeyError or TypeError
    return data['key'] + 100

result = risky_operation({'key': 5})
print(result)  # Output: <Success: 105>

result_fail = risky_operation({})
print(result_fail)  # A Failure wrapping the KeyError
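Result also offers bind, for chaining steps that themselves return a Result. A short sketch, reusing parse_integer_v2 with a made-up validator:

from returns.result import Result, Success, Failure

def check_positive(number: int) -> Result[int, str]:
    if number <= 0:
        return Failure("Expected a positive number")
    return Success(number)

# `bind` unwraps a Success and feeds it to the next Result-returning step
print(parse_integer_v2("42").bind(check_positive))  # <Success: 42>
print(parse_integer_v2("-3").bind(check_positive))  # <Failure: Expected a positive number>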
This declarative error handling keeps your main logic clean and separates the “what” from the “what if it goes wrong.” returns helps you build applications where errors are not exceptions to be feared, but expected data points to be managed.
Now, let’s talk about data. In functional programming, we prefer immutable data. Changing a shared list is a common source of bugs. The pyrsistent library gives us data structures that look like Python lists, dictionaries, and sets but are immutable. “Persistent” here means that when you “modify” one, you get a new structure, and the old one is left untouched.
This might sound inefficient, but pyrsistent uses smart structural sharing under the hood. It doesn’t copy the entire structure every time. Let’s see a vector, which is like an immutable list.
import pyrsistent as pr
# Create a persistent vector
original_vector = pr.v(1, 2, 3, 4, 5)
print(original_vector) # pvector([1, 2, 3, 4, 5])
# "Appending" returns a new vector
new_vector = original_vector.append(6)
print(new_vector) # pvector([1, 2, 3, 4, 5, 6])
print(original_vector) # pvector([1, 2, 3, 4, 5]) # Unchanged!
# "Setting" an index also returns a new one
modified_vector = original_vector.set(1, 99)
print(modified_vector) # pvector([1, 99, 3, 4, 5])
print(original_vector) # pvector([1, 2, 3, 4, 5]) # Still unchanged!
The same principle applies to pmap (persistent dictionary) and pset. This immutability is a huge benefit in concurrent programming, but even in single-threaded code, it gives you certainty. You can pass a pvector to a function and know it won’t be altered.
def process_data(data):
    # I can safely use this data without worrying about side effects
    total = sum(data)
    # "Appending" creates a new object; the caller's data is unaffected
    new_data = data.append(total)
    return new_data

immutable_data = pr.v(10, 20, 30)
result = process_data(immutable_data)
print("Original:", immutable_data)  # Original: pvector([10, 20, 30])
print("Result:", result)            # Result: pvector([10, 20, 30, 60])
Finally, I want to highlight more-itertools. While the standard itertools module is great, more-itertools fills in the gaps with a treasure trove of iterator recipes. Functional programming in Python heavily relies on iterators for lazy, memory-efficient processing. This library provides the precise tools you often find yourself needing.
Need to process data in fixed-size chunks? Use chunked. Need to look at overlapping pairs or windows of data? Use windowed or pairwise. Need to locate a single item in an iterable based on a predicate? Use first_true.
import more_itertools as mit

# Chunking data for batch processing
data_stream = range(100)  # A large, lazy iterable
for batch in mit.chunked(data_stream, 10):
    # `batch` is a list of up to 10 items (only the last chunk can be shorter)
    print(f"Processing batch: {batch}")

# Finding patterns with sliding windows
sequence = [0, 1, 2, 3, 4, 5]
for window in mit.sliding_window(sequence, 3):
    print(window)
# Output:
# (0, 1, 2)
# (1, 2, 3)
# (2, 3, 4)
# (3, 4, 5)

# A very useful one: flattening a list of lists, but lazily
list_of_lists = [[1, 2], [3, 4, 5], [6]]
flattened = mit.flatten(list_of_lists)
print(list(flattened))  # Output: [1, 2, 3, 4, 5, 6]
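And first_true, mentioned above, for locating an item by predicate:

# Find the first odd number, with a fallback default
numbers = [2, 4, 6, 7, 8]
first_odd = mit.first_true(numbers, default=-1, pred=lambda x: x % 2 == 1)
print(first_odd)  # Output: 7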
One of my personal favorites is spy. It lets you “peek” at the first few items of an iterator without consuming them, which is invaluable for debugging lazy pipelines.
import more_itertools as mit

def data_generator():
    for i in range(1000):
        yield i * 2

# Create our lazy iterator
it = data_generator()

# Peek at the first 5 elements without removing them
first_5, new_iterator = mit.spy(it, 5)
print("Peeked elements:", first_5)  # Peeked elements: [0, 2, 4, 6, 8]

# The returned iterator still yields everything, including the peeked items
print("Next from iterator:", next(new_iterator))  # Output: 0
print("Next again:", next(new_iterator))  # Output: 2
These five libraries—toolz, fn.py, returns, pyrsistent, and more-itertools—offer different entry points into functional thinking with Python. You don’t need to use them all at once. Start by using more-itertools to make your loops more expressive. Try using toolz.compose to tidy up a transformation. Experiment with returns.Result to handle errors in a new way.
The goal isn’t to make your code look like Haskell. It’s to borrow ideas that make your Python code clearer, more reliable, and easier to reason about. By treating data as immutable and functions as pure building blocks, you reduce hidden dependencies. By using types like Option and Result, you make potential failures a visible part of your design. These are practical benefits that, in my experience, lead to better software.