Building a Chat Server in Rust #5: Going Multi-threaded
Post 5 of 6 in Building a Chat Server in Rust. Companion series: Rust Patterns That Matter.
Previous: #4: Commands and Plugins | Next: #6: Going Async
Until now, our server handled one client at a time. Connect a second client and it blocks until the first disconnects. That's not a chat server - it's a queue. Now we spawn a thread per client and make the server state thread-safe. Two patterns make this work.
The code is on the 05-threaded branch.
What breaks
The first instinct is to thread::spawn and pass &mut server
to each thread. The compiler rejects it immediately: you can't send a mutable
reference into a thread because the reference's lifetime isn't 'static,
and you can't hold multiple mutable references anyway (see the sketch after the list below). Two things need to change:
- Rc<RefCell> -> Arc<Mutex> for shared state
- Direct writes -> channels for message delivery
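A minimal sketch of that failing first attempt (handle_client as in earlier posts):

// Won't compile. The closure borrows `server` mutably, but the spawned
// thread may outlive the stack frame that owns it - and each iteration
// of the loop would need its own mutable borrow besides.
for stream in listener.incoming() {
    let stream = stream?;
    std::thread::spawn(|| {
        // error[E0373]: closure may outlive the current function,
        // but it borrows `server`, which is owned by the current function
        handle_client(&mut server, stream);
    });
}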
Pattern #19: Arc<Mutex<T>> - shared state across threads
Rc isn't Send - it can't cross thread boundaries
because its reference count isn't atomic. RefCell isn't Sync
- it can't be shared between threads because its borrow flag isn't atomic.
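You can reproduce the Send half of that in isolation. A minimal sketch:

use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3]);
    // error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely:
    // the trait `Send` is not implemented for `Rc<Vec<i32>>`
    std::thread::spawn(move || println!("{data:?}"));
}

Swap Rc for Arc and it compiles.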
The thread-safe equivalents: Arc (atomic reference counting) for
shared ownership, Mutex for exclusive access. Same pattern, different
guarantees:
// Stage 2 (single-threaded):
pub members: Rc<RefCell<Vec<UserId>>>
// Stage 5 (multi-threaded):
pub members: Arc<Mutex<Vec<UserId>>>
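Mutex is what restores mutability: lock() returns a guard that dereferences to &mut T, even though every thread only holds a shared Arc. A minimal sketch (UserId and user_id stand in for the series' real types):

use std::sync::{Arc, Mutex};

let members: Arc<Mutex<Vec<UserId>>> = Arc::new(Mutex::new(Vec::new()));
{
    let mut guard = members.lock().unwrap(); // blocks until the lock is free
    guard.push(user_id);                     // &mut access through a shared handle
} // guard dropped, lock released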
The server itself goes behind Arc<Mutex> too:
let server = Arc::new(Mutex::new(server));

for stream in listener.incoming() {
    let stream = stream?;
    let server = Arc::clone(&server);
    std::thread::spawn(move || {
        if let Err(e) = handle_client(server, stream) {
            println!("Client error: {e}");
        }
    });
}
Each thread gets its own Arc handle (cheap - just an atomic
increment). When a thread needs server state, it calls .lock().unwrap()
to get exclusive access:
// Brief lock: register, get motd, join lobby.
let (user_id, motd) = {
    let mut srv = server.lock().unwrap();
    let uid = srv.register_client(username.clone(), tx);
    let motd = srv.config.motd.clone();
    srv.join_room(uid, RoomId::new(0));
    (uid, motd)
}; // lock released here
The critical discipline: hold the lock for the shortest possible duration. Lock, do the work, drop the guard. Don't hold a lock while doing IO (reading from the network, writing to a socket). That would block all other threads.
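A concrete before-and-after, reusing the motd field from above (stream is the client's TCP stream):

// Anti-pattern: the guard lives across a network write, so every
// other thread blocks for as long as this write takes.
let srv = server.lock().unwrap();
writeln!(stream, "{}", srv.config.motd)?;

// Better: copy out what you need, drop the guard, then do the IO.
let motd = {
    let srv = server.lock().unwrap();
    srv.config.motd.clone()
}; // guard dropped here
writeln!(stream, "{motd}")?;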
Deep dive: Rust Patterns #19: Arc<Mutex> vs Arc<RwLock> covers when to use Mutex vs RwLock and common deadlock patterns.
Pattern #20: Channels - message passing
The lock discipline creates a problem: how do we deliver messages to clients without holding the server lock? We can't write to a socket while the lock is held - that would block all other threads during a slow write.
The answer: channels. Each client gets an mpsc::Sender registered
with the server, and a dedicated writer thread that reads from the corresponding
Receiver:
let (tx, rx) = mpsc::channel::<Event>();

// Writer thread: reads events, writes to the TCP stream.
let mut writer = write_stream.try_clone()?;
let writer_handle = std::thread::spawn(move || {
    for event in rx {
        match event {
            Event::Message { from, body } => {
                let _ = writeln!(writer, "<{from}> {body}");
            }
            Event::System(text) => {
                let _ = writeln!(writer, "{text}");
            }
            Event::Quit => break,
        }
    }
});
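For reference, the event type this loop consumes looks roughly like this (the exact definition lives on the branch):

// Clone is needed because a broadcast sends one copy per room member.
#[derive(Clone)]
enum Event {
    Message { from: String, body: String },
    System(String),
    Quit,
}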
When the server broadcasts a message, it locks briefly, reads the member list, and sends the event through each member's channel. The actual TCP write happens in each client's writer thread - no lock held:
fn broadcast_message(&mut self, room_id: RoomId, sender_id: UserId, username: &str, body: &str) {
    // Look up the room first (assuming a `rooms` collection indexed like `clients`).
    let Some(room) = self.rooms.get(room_id.index()) else { return };
    let members = room.member_ids();
    let event = Event::Message {
        from: username.to_string(),
        body: body.to_string(),
    };
    for &member_id in &members {
        if let Some(Some(client)) = self.clients.get(member_id.index()) {
            let _ = client.tx.send(event.clone());
        }
    }
}
"Don't communicate by sharing memory; share memory by communicating." The channel decouples the producer (server logic) from the consumer (TCP writer). No locks during IO, no contention, no blocking.
Deep dive: Rust Patterns #20: Channels covers mpsc, sync channels, and crossbeam channels.
The Send bound
One surprise: our FilterRegistry held Box<dyn FnMut(...)>.
When the server went behind Arc<Mutex>, the compiler complained:
dyn FnMut isn't Send. The fix: add + Send
to the trait object:
// Before:
filters: Vec<Box<dyn FnMut(&str, &str) -> FilterAction>>
// After:
filters: Vec<Box<dyn FnMut(&str, &str) -> FilterAction + Send>>
This ensures every closure stored in the registry can safely be sent to another
thread. The next post runs into the same
pattern with Send + Sync in async code.
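The bound is checked where closures enter the registry. A sketch, assuming post #4's FilterRegistry with Allow/Block variants on FilterAction:

// OK: `count` is a usize, which is Send.
let mut count = 0usize;
registry.filters.push(Box::new(move |_user, _body| {
    count += 1;
    FilterAction::Allow
}));

// Rejected: Rc<String> isn't Send, so this closure doesn't satisfy `+ Send`.
let banned = std::rc::Rc::new(String::from("spam"));
registry.filters.push(Box::new(move |_user, body| {
    if body.contains(banned.as_str()) { FilterAction::Block } else { FilterAction::Allow }
}));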
Try it
# Terminal 1
git checkout 05-threaded
cargo run
# Terminal 2
nc 127.0.0.1 8080
alice
hello from alice # → <alice> hello from alice
# Terminal 3 (simultaneously!)
nc 127.0.0.1 8080
bob
hello from bob # → alice sees: <bob> hello from bob
Both clients work at the same time. This is a real chat server now.
What we have, what's missing
- Arc<Mutex> - shared server state across threads. Brief locks, no lock during IO.
- Channels - mpsc for delivering events to client writer threads. Decouples server logic from TCP writes.
What's missing: a thread per client works, but doesn't scale to thousands of connections. Next time we replace threads with tokio and async/await.