Designing a TUI Framework in Rust - Part 2
Part 2 of 2 in Designing a TUI Framework in Rust.
Previous: Part 1: The Foundational Design Decisions
The first version of Telex worked well for self-contained applications - counters, forms, lists. But real applications need to talk to the outside world, recover from errors, and run side effects. Here's what changed and why.
Killing hook order dependency
The original Telex had two hook APIs. The first was index-based, following React's model:
hooks were stored in a Vec, and each call to use_state returned
the next item by incrementing a counter. This worked, but it came with React's rules -
no hooks in conditionals, no hooks in loops, always call them in the same order, or the
indices desynchronize and the wrong state comes back.
The keyed API already existed alongside it, using TypeId keys generated by
the state! macro. It had none of these problems. So the index-based API was
removed entirely before anyone depended on it.
// Before: index-based (removed)
let count = cx.use_state(|| 0); // hook 0
let name = cx.use_state(|| ""); // hook 1
// swap these two lines and everything breaks
// After: keyed (current)
let count = state!(cx, || 0); // key = TypeId of anonymous struct at this call site
let name = state!(cx, || ""); // different call site = different key
// order doesn't matter, conditionals are fine
The cost is a HashMap lookup instead of a Vec index. In practice,
components have a handful of hooks and the lookup is noise compared to the terminal I/O
that follows. The benefit is that an entire class of runtime panics disappears.
Talking to the outside world
The first version of Telex had no story for external events. If you wanted to read from a WebSocket or a MIDI device, you were on your own - spawn a thread, put data somewhere, hope the render loop notices. There was no mechanism for a background thread to wake the event loop, so data sat in a buffer until the next keypress triggered a re-render.
Channels and ports solve this. A channel is a typed, inbound message queue. A port adds an outbound direction. The key insight is where the thread boundary falls:
let ch = channel!(cx, SensorReading);

// ch.tx() returns a WakingSender — Send + Clone
// Hand it to any thread
let tx = ch.tx();
std::thread::spawn(move || {
    loop {
        let reading = sensor.read();
        tx.send(reading).ok();
    }
});

// In the component: messages that arrived since last frame
for msg in ch.get() {
    // process this frame's messages
}
The Sender is Send - it crosses the thread boundary.
Everything else (State, View, Scope) stays on the
main thread. No Arc, no Send bounds on components, no mutex
contention on state.
At the top of each frame, the run loop drains all registered channels. Components see only messages that arrived since the last render. This is frame-buffered delivery - messages are batched per frame rather than processed one at a time.
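Frame-buffered delivery can be sketched with a plain `mpsc` channel; the `drain_frame` name is an illustrative stand-in for whatever the run loop actually calls:

```rust
use std::sync::mpsc;

// Illustrative sketch of frame-buffered delivery: at the top of a frame,
// everything queued since the last drain is collected into one batch.
fn drain_frame<T>(rx: &mpsc::Receiver<T>) -> Vec<T> {
    rx.try_iter().collect() // non-blocking: stops when the queue is empty
}
```

Anything sent after the drain simply waits in the queue for the next frame, which is what gives components the "messages since last render" view.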
WakingSender
A plain mpsc::Sender drops messages into a queue, but nothing tells the event
loop to wake up. Without a wake mechanism, the loop polls on a 16ms timeout, meaning
external data has up to 16ms of latency before it appears on screen.
WakingSender solves this by signaling the event loop on every send, so the
render happens immediately. Near-zero latency, no polling overhead when idle.
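A minimal sketch of the idea, assuming a shared atomic flag as the wake signal (Telex's real type presumably also unparks or signals the loop thread; the `channel` constructor here is illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};

// Illustrative sketch: an mpsc::Sender paired with a flag the event loop
// checks. Cloneable and Send, so it can cross the thread boundary.
#[derive(Clone)]
struct WakingSender<T> {
    tx: mpsc::Sender<T>,
    wake: Arc<AtomicBool>,
}

impl<T> WakingSender<T> {
    fn send(&self, msg: T) -> Result<(), mpsc::SendError<T>> {
        self.tx.send(msg)?;
        self.wake.store(true, Ordering::Release); // tell the loop new data exists
        Ok(())
    }
}

fn channel<T>() -> (WakingSender<T>, mpsc::Receiver<T>, Arc<AtomicBool>) {
    let (tx, rx) = mpsc::channel();
    let wake = Arc::new(AtomicBool::new(false));
    (WakingSender { tx, wake: wake.clone() }, rx, wake)
}
```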
Effects
Side effects - logging, starting a timer, opening a file - can't happen during tree construction. The component function builds a description of the UI; it shouldn't mutate the world while doing so. Effects run after render, when the view tree is complete and the buffer has been flushed:
let count = state!(cx, || 0);
// Runs after render, only when count changes
// Runs after render, only when count changes
effect!(cx, count.get(), |&c| {
    println!("Count is now {}", c);
    || {} // cleanup (runs before next effect, or on teardown)
});

// Runs once, on first render only
effect_once!(cx, || {
    println!("Component mounted");
    || println!("Cleanup on exit")
});
Effects use the same TypeId keying as state - order-independent, safe in
conditionals. The run loop renders, flushes effects, and if any effect modified state,
re-renders once more. A cycle detector panics if effects fire more than 100 times in 10
frames, catching infinite loops early rather than hanging the terminal.
Each effect returns a cleanup function. Cleanups run before the next invocation of the same effect and on application exit. This is essential for timers, file handles, and anything else that needs deterministic teardown.
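The dependency-and-cleanup contract can be sketched as a small state machine per effect slot. `EffectSlot` and its method names are illustrative, not Telex's internals; the point is the ordering: old cleanup first, then the new effect, and nothing at all when the dependency is unchanged.

```rust
// Illustrative sketch of a dependency-tracked effect with cleanup.
struct EffectSlot<D: PartialEq> {
    last_deps: Option<D>,
    cleanup: Option<Box<dyn FnMut()>>,
}

impl<D: PartialEq> EffectSlot<D> {
    fn new() -> Self {
        EffectSlot { last_deps: None, cleanup: None }
    }

    // Run `f` only when deps changed; run the previous cleanup first.
    fn run(&mut self, deps: D, f: impl FnOnce(&D) -> Box<dyn FnMut()>) {
        if self.last_deps.as_ref() != Some(&deps) {
            if let Some(mut cleanup) = self.cleanup.take() {
                cleanup();
            }
            self.cleanup = Some(f(&deps));
            self.last_deps = Some(deps);
        }
    }
}
```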
Error boundaries
When your data comes from external sources, failures come with it. A device gets unplugged, a server returns malformed JSON, a codec panics on unexpected input. In the first version, a panic in any callback or rendering function killed the entire application.
Error boundaries make failure containable:
View::error_boundary()
    .child(risky_widget)
    .fallback(|err| View::text(format!("Error: {}", err)))
    .build()
When a panic occurs inside the boundary, catch_unwind catches it, the subtree
is replaced with the fallback view, effect cleanups for the failed subtree run, and the rest
of the application continues. Without an error boundary wrapping it, a panic propagates to
the run loop and terminates the app - same as before. Safety is opt-in, not forced
overhead.
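The containment mechanism can be sketched with `std::panic::catch_unwind`. The function name and the `String`-based view stand-in are illustrative; Telex's real boundary would produce a fallback `View` rather than a string:

```rust
use std::panic::{self, AssertUnwindSafe};

// Illustrative sketch: run a risky render, substitute the fallback on panic.
fn render_with_boundary<T>(
    risky: impl FnOnce() -> T,
    fallback: impl FnOnce(String) -> T,
) -> T {
    match panic::catch_unwind(AssertUnwindSafe(risky)) {
        Ok(view) => view,
        Err(payload) => {
            // Panic payloads are usually &str or String
            let msg = payload
                .downcast_ref::<&str>()
                .map(|s| s.to_string())
                .or_else(|| payload.downcast_ref::<String>().cloned())
                .unwrap_or_else(|| "unknown panic".to_string());
            fallback(msg)
        }
    }
}
```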
The escape hatch: custom widgets
The View enum is closed by design. You can compose existing widgets to build
new ones - a settings panel is a VStack of TextInput and
Checkbox widgets. But some things can't be composed. A piano roll, a
spectrogram, a hex editor - these need direct control over character cells.
View::Custom is the escape hatch. It accepts anything implementing the
Widget trait:
trait Widget {
    fn render(&self, area: Rect, buf: &mut Buffer);
    fn focusable(&self) -> bool { false }
    fn handle_key(&mut self, key: KeyEvent) -> bool { false }
}
A custom widget participates fully in layout, focus navigation, and the render pipeline. It gets a rectangular area and a mutable reference to the cell buffer. The framework doesn't know or care what it draws - it just reserves the space and hands over control.
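As an example of the kind of cell-level widget composition can't express, here is a horizontal gauge against that trait. The `Rect`, `Buffer`, and `KeyEvent` types below are minimal stand-ins for the framework's real ones:

```rust
// Stand-in types for illustration; the real framework provides these.
#[derive(Clone, Copy)]
struct Rect { x: u16, y: u16, w: u16, h: u16 }
struct Buffer { cells: Vec<char>, width: u16 }
struct KeyEvent;

impl Buffer {
    fn set(&mut self, x: u16, y: u16, ch: char) {
        let idx = (y as usize) * (self.width as usize) + x as usize;
        self.cells[idx] = ch;
    }
}

trait Widget {
    fn render(&self, area: Rect, buf: &mut Buffer);
    fn focusable(&self) -> bool { false }
    fn handle_key(&mut self, _key: KeyEvent) -> bool { false }
}

// Draws a bar whose filled portion is proportional to `value` in 0.0..=1.0.
struct Gauge { value: f32 }

impl Widget for Gauge {
    fn render(&self, area: Rect, buf: &mut Buffer) {
        let filled = (self.value * area.w as f32) as u16;
        for x in 0..area.w {
            let ch = if x < filled { '█' } else { '░' };
            buf.set(area.x + x, area.y, ch);
        }
    }
}
```

The gauge takes the default `focusable` and `handle_key`, so it's a pure display widget; an interactive one would override both.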
This makes explicit the trade-off acknowledged in the first post: the enum is closed for built-in widgets (fast, exhaustive, debuggable), but open for user-defined rendering when composition isn't enough.
Dirty render skipping
The first version re-rendered every frame unconditionally. For a TUI app that's mostly idle - waiting for the user to press a key - this wastes CPU rebuilding the same view tree and diffing the same buffer, finding nothing changed.
The wake mechanism fixes this. Three things can make a frame "dirty":
- A State mutation sets its dirty flag
- Terminal input arrives (keypress, mouse, resize)
- A channel receives data (via WakingSender's wake flag)
If none of these are true, the event loop skips the entire render pass and goes back to sleep. Idle CPU drops from 5-10% to near zero.
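The per-frame check can be sketched as a single predicate; the name `frame_is_dirty` and its parameters are illustrative, not Telex's actual API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative sketch of the dirty check at the top of the event loop:
// render only if some source produced new data since the last frame.
fn frame_is_dirty(state_dirty: bool, input_pending: bool, channel_wake: &AtomicBool) -> bool {
    // `swap` reads and clears the wake flag in one atomic step, so a wake
    // that lands mid-check is either seen now or kept for the next frame
    state_dirty || input_pending || channel_wake.swap(false, Ordering::AcqRel)
}
```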
This sounds straightforward, but getting it right means every path that produces new data
must set the appropriate flag. A recent bug illustrated this precisely: the
stream!, text_stream!, and text_stream_with_restart!
macros used plain mpsc::Sender with no wake mechanism. Tokens arrived from
background threads, but the event loop never noticed - no dirty flag, no re-render.
Data sat invisible until the user happened to press a key. The fix was to add a wake flag
to stream handles, set by the background thread on each token, checked by the event loop
alongside the other dirty sources.
Every new data path into the render loop is a potential wake bug. The pattern is always the same: data arrives, flag is set, loop wakes, frame renders. Miss the flag and you get stale screens. It's the kind of invariant that's easy to state and easy to violate.
What's next
The architecture is stable, but there's room to grow. Component-level memoization would let
subtrees skip re-evaluation when their inputs haven't changed - not needed yet at TUI
scale, but the component identity mechanism is already in place to support it. Layout caching
would avoid recalculating constraints that haven't changed. And there's a model layer question:
ports and reducers hint at state that doesn't belong to any one component - a database
handle, a network connection, a shared resource. use_context partially addresses
this, but a standalone store that lives outside the component tree may eventually be needed.
The lesson from the past year is that each of these changes was motivated by a concrete failure, not a theoretical concern. Index-based hooks were removed because they caused panics. Channels were added because applications needed external data. Effects were added because side effects during render caused bugs. Error boundaries were added because external sources crash. The architecture evolves by hitting walls and finding doors.