Rust Patterns That Matter #20: Channels - Message Passing
Post 20 of 22 in Rust Patterns That Matter. Companion series: Building a Chat Server in Rust.
Previous: #19: Arc<Mutex<T>> | Next: #21: Pin and Boxing Futures
The previous post covered Arc<Mutex<T>> for shared state
across threads. It works, but shared mutable state is inherently complex: lock
ordering, potential deadlocks, contention. Channels offer an alternative: instead
of sharing memory and protecting it with locks, send data between threads. No shared
state, no locks, no deadlocks.
The motivation
You have a producer thread generating work items and a consumer thread processing them. With shared state:
```rust
use std::sync::{Arc, Mutex, Condvar};
use std::collections::VecDeque;

let pair = Arc::new((Mutex::new(VecDeque::new()), Condvar::new()));

// Producer
let p = Arc::clone(&pair);
std::thread::spawn(move || {
    for i in 0..5 {
        let (lock, cvar) = &*p;
        lock.lock().unwrap().push_back(format!("item {i}"));
        cvar.notify_one();
    }
});

// Consumer
let (lock, cvar) = &*pair;
for _ in 0..5 {
    let mut queue = lock.lock().unwrap();
    while queue.is_empty() {
        queue = cvar.wait(queue).unwrap();
    }
    println!("received: {}", queue.pop_front().unwrap());
}
```
This works, but requires managing a mutex, a condition variable, and the wake-up logic manually. Lock, check if empty, wait, re-check - it's error-prone and hard to extend.
The pattern: channels
```rust
use std::sync::mpsc;

let (tx, rx) = mpsc::channel();

// Producer thread
std::thread::spawn(move || {
    for i in 0..5 {
        tx.send(format!("item {i}")).unwrap();
    }
});

// Consumer (main thread)
for msg in rx {
    println!("received: {msg}");
}
```
channel() returns a (Sender<T>, Receiver<T>)
pair. The sender sends values. The receiver blocks until a value arrives. When the
sender is dropped, the receiver's iterator ends. No locks, no condition variables,
no shared state.
Multiple producers
mpsc stands for "multiple producer, single consumer." Clone the sender
to share it among multiple producers:
```rust
let (tx, rx) = mpsc::channel();

for id in 0..4 {
    let tx = tx.clone();
    std::thread::spawn(move || {
        tx.send(format!("from worker {id}")).unwrap();
    });
}
drop(tx); // drop the original sender so rx knows when all senders are gone

for msg in rx {
    println!("{msg}");
}
```
Each worker gets its own cloned sender. The receiver collects messages from all workers. When all senders are dropped, the receiver loop ends.
Bounded vs unbounded
channel() is unbounded: the producer never blocks, and the internal
buffer grows without limit. If the producer is faster than the consumer, memory
usage grows indefinitely.
sync_channel(n) is bounded: the buffer holds at most n
messages. If it's full, the producer blocks until the consumer takes a message.
This provides backpressure:
```rust
let (tx, rx) = mpsc::sync_channel(10); // buffer of 10
// Producer blocks when the buffer is full.
// Consumer processes at its own pace.
// Memory usage is bounded.
```
For most production systems, bounded channels are the right default. They prevent unbounded memory growth and naturally balance producer/consumer speeds.
crossbeam-channel
The standard library's mpsc covers basic cases. For more advanced
patterns, crossbeam-channel provides:
- MPMC (multiple producer, multiple consumer) - multiple threads can receive from the same channel
- select! - wait on multiple channels simultaneously, acting on whichever has data first
- Timeouts - recv_timeout and send_timeout bound how long a blocking call waits
- Better performance - crossbeam channels are generally faster than std::sync::mpsc
```rust
use crossbeam_channel::{select, unbounded};

let (tx_work, rx_work) = unbounded();
let (tx_quit, rx_quit) = unbounded();

loop {
    select! {
        recv(rx_work) -> msg => {
            let msg = msg.unwrap();
            println!("work: {msg}");
        }
        recv(rx_quit) -> _ => {
            println!("shutting down");
            break;
        }
    }
}
```
Telex's use of channels
Telex uses channels for external event integration. When a background task (file watcher, network listener) needs to send events into the UI loop, it sends them through a channel. The main loop receives from the channel on each tick, triggering re-renders. See Designing a TUI Framework - Part 2 for the full story on ports and channels.
When to use channels vs shared state
- Channels: producer/consumer patterns, work queues, pipelines, event systems, decoupled components. When the data flows in one direction.
- Shared state (Arc<Mutex<T>>): when multiple threads need low-latency access to the same data structure (a shared cache, a configuration map). When the data doesn't flow - it's consulted in place.
"Don't communicate by sharing memory; share memory by communicating." - Go proverb, equally applicable in Rust
When in doubt, start with channels. They're easier to reason about, sidestep lock-ordering deadlocks entirely (there are no locks to hold), and naturally decouple components. Move to shared state only when channels introduce unacceptable latency or awkward serialization of access.
See it in practice: Building a Chat Server #5: Going Multi-threaded uses this pattern for delivering messages to client writer threads.