Terminals: Why They Evolved the Way They Did
Someone needs to use the computer
In the 1960s and 70s, a computer was the size of a room and cost more than a house. A university or a company might own one. Dozens of people needed to use it, but you couldn't walk up to the machine and type - it had no keyboard, no screen. The computer itself was just a processor, memory, and storage in a rack. If you wanted to interact with it, you needed a separate device: something with a keyboard for input and some way to see output.
That device was the terminal. The word means exactly what it sounds like - the end of the line, the point where a human meets the machine. Many widely-used terminals were built by the Teletype Corporation: a keyboard and a printer in one unit. You typed a character, it traveled down a cable to the computer, the computer processed it and sent characters back, and the Teletype printed them on paper. That's where the abbreviation "TTY" comes from. It's short for Teletype.
The cable between the terminal and the computer was a serial line - a wire that carried data one bit at a time, like Morse code down a telegraph wire. Every terminal had its own cable running back to the computer. A university might have twenty Teletypes scattered across a building, all wired into the same machine.
The contract
On the computer's side, the operating system kernel had a simple job. For each serial
cable, it read bytes coming in, handed them to whatever program the user was running, took
that program's output bytes, and sent them back down the same cable. The kernel
represented each cable as a device file - /dev/ttyS0 for the first
serial port, /dev/ttyS1 for the second, and so on.
Programs didn't know or care what was on the other end of that file. They read input from a file descriptor and wrote output to one. Whether it was a Teletype printing on paper or a newer video terminal painting characters on a CRT screen, the program saw the same thing: a stream of bytes in, a stream of bytes out.
This simplicity became a contract. Software was written around it. Shells, editors, compilers - everything assumed it was talking to a character device mediated by the kernel. And that contract turned out to be so useful that when the physical hardware disappeared, nobody was willing to break it. Every complication that follows exists because the world changed but the contract had to be preserved.
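The contract can be sketched in a few lines of Python - a hedged illustration, not how any real shell is written, but the shape every program assumed:

```python
import os

# The whole contract, reduced to code: a stream of bytes in, a stream
# of bytes out. A program written this way cannot tell whether the
# descriptors lead to a Teletype, a CRT, or (later) a fake cable.
def pump(in_fd: int, out_fd: int) -> bytes:
    data = os.read(in_fd, 1024)   # bytes arrive from "the cable"
    os.write(out_fd, data)        # bytes go back down it
    return data
```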
The wire disappears
Remember that the terminal only existed as a separate device because of distance. The computer was in a machine room. The people who needed it were somewhere else - down the hall, across campus, eventually across the city via phone lines. The terminal was the endpoint of a long cable because it had to be. You couldn't sit at the computer.
As computers became personal, that distance collapsed. The machine moved to your desk. The screen and keyboard were attached directly to it. There was no "somewhere else" anymore, and no need for a remote endpoint. These personal machines came with graphical interfaces - windows, menus, a mouse. That was the new way to interact with a computer.
But by then, a decade of Unix software had been built around the TTY contract - shells, editors, compilers, all the command-line tools that assumed a character device on the other end. That ecosystem was too valuable to abandon. People still needed to run all of that software. So someone built a program that opens a window on the graphical display and pretends to be a terminal inside it: the terminal emulator.
The problem is that a window on a graphical display isn't a serial port. It has no device file. It's just pixels managed by a GUI application. The old software expects to read and write a TTY device - so the kernel needs to fake the cable. The pseudoterminal, or PTY, is a virtual cable: a pair of connected endpoints created in software.
- One end - the slave side (/dev/pts/0, /dev/pts/1, etc.) - behaves exactly like a physical TTY. The shell attaches here. As far as it's concerned, it's talking to a serial port.
- The other end - the master side - is held by the terminal emulator (Alacritty, GNOME Terminal, iTerm2, or anything else that displays that black rectangle with text in it).
When you type a character in your terminal window, the emulator writes it to the master end. The kernel passes it through to the slave end, where the shell reads it. When the shell produces output, it writes to the slave, the kernel passes it to the master, and the emulator draws the characters in its window.
You type 'l' 's' '\n'
         │
         ▼
Terminal Emulator (Alacritty)
         │
         ▼
PTY Master ──── kernel ──── PTY Slave (/dev/pts/1)
                                │
                                ▼
                              bash
                                │
                                ▼
                 "file1.txt file2.txt\n"
                                │
                   (back through the PTY)
                                │
                                ▼
                    drawn in your window
The entire point is deception. Programs that expect a hardware cable get a software imitation of one. The slave end is the fake cable. The master end is what's really there. The contract is preserved.
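The fake cable is a real kernel object you can ask for yourself. A minimal sketch using Python's standard pty module:

```python
import os
import pty

# Ask the kernel for a virtual cable: master_fd is the end a terminal
# emulator would hold; slave_fd behaves like a hardware TTY device.
master_fd, slave_fd = pty.openpty()
print(os.ttyname(slave_fd))        # a path like /dev/pts/4

# "Type" into the master end. The kernel carries the bytes across and
# delivers the line to the slave once it sees the newline.
os.write(master_fd, b"hello\n")
data = os.read(slave_fd, 100)
print(data)
```

The slave end really does have a /dev/pts path - the same kind of device file a serial port would have.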
Windows can resize
A physical Teletype had a fixed width - however wide the paper roll was. A CRT
terminal had fixed dimensions too, built into the hardware. But a window on a graphical
desktop can be dragged to any size. If you're running a program that draws a full-screen
layout - vim, htop, anything with columns and
borders - and you resize the window, the layout breaks. Columns misalign, text
wraps at the old width, borders end up in the wrong place. The program drew for one size
and is now living in another.
The program needs to be told. When you resize the window, the terminal emulator tells the kernel the new dimensions, and the kernel delivers a signal called SIGWINCH (Signal Window Change) directly to the running program. The program catches the signal, asks the kernel for the new width and height, and redraws itself.
This is why full-screen terminal programs redraw correctly when you resize the window, and why they don't if they weren't written to handle SIGWINCH. You either listen for that signal or you never find out.
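The listening side looks roughly like this in Python - a sketch, where a real full-screen program would redraw its layout instead of printing:

```python
import fcntl
import signal
import struct
import termios

def window_size(fd: int) -> tuple:
    # Ask the kernel for the terminal's dimensions (TIOCGWINSZ ioctl).
    packed = fcntl.ioctl(fd, termios.TIOCGWINSZ, b"\0" * 8)
    rows, cols, _, _ = struct.unpack("HHHH", packed)
    return rows, cols

def on_winch(signum, frame):
    # Delivered by the kernel when the emulator reports a resize.
    rows, cols = window_size(1)
    print(f"redraw for {rows}x{cols}")

signal.signal(signal.SIGWINCH, on_winch)
```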
When a program won't listen
You run a program. It gets stuck in an infinite loop. You press Ctrl+C and it stops. But think about what just happened - the program was too busy looping to read its input. It was never going to see that Ctrl+C keystroke in its input buffer. So how did it get killed?
Between the master and slave ends of every PTY, the kernel has a layer called the
line discipline. It watches every byte flowing through the virtual cable.
Most bytes pass straight through. But certain ones are special. When the line discipline
sees byte 0x03 - the code for Ctrl+C - it doesn't deliver it
to the program's input buffer. Instead, the kernel converts it into a signal (SIGINT) and
delivers it directly to the process, bypassing the input queue entirely. The program
doesn't need to cooperate. The kernel reaches in and interrupts it.
This works for other key combinations too. Ctrl+Z sends SIGTSTP to suspend a process. Ctrl+\ sends SIGQUIT to kill it with a core dump. The line discipline intercepts all of them.
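The special bytes aren't hardcoded - they live in the line discipline's settings, which any process can read back. A small Python sketch, showing the default mappings on Linux and macOS:

```python
import pty
import termios

master_fd, slave_fd = pty.openpty()

# tcgetattr returns the line discipline's settings; index 6 is the
# control-character table.
cc = termios.tcgetattr(slave_fd)[6]

print(cc[termios.VINTR])   # b'\x03' - Ctrl+C, turned into SIGINT
print(cc[termios.VSUSP])   # b'\x1a' - Ctrl+Z, turned into SIGTSTP
print(cc[termios.VQUIT])   # b'\x1c' - Ctrl+\, turned into SIGQUIT
```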
But sometimes a program wants those keys for itself. When vim
starts, it puts the terminal into "raw mode" - telling the line discipline to stop
intercepting special bytes and pass everything through untouched. That's how
vim can use Ctrl+C for its own commands instead of being killed. When
vim exits, it restores the original settings. If a program crashes without
cleaning up, the terminal stays in raw mode - which is why your shell sometimes
acts strangely after a crash, and why reset exists to fix it.
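What an editor does on startup and exit can be sketched with Python's termios and tty modules - illustrative only, done here on a fresh PTY rather than a real window:

```python
import pty
import termios
import tty

master_fd, slave_fd = pty.openpty()

# Save the current line-discipline settings, as an editor does on startup.
saved = termios.tcgetattr(slave_fd)

# Raw mode: stop interpreting special bytes; everything passes through.
tty.setraw(slave_fd)
isig_off = not (termios.tcgetattr(slave_fd)[3] & termios.ISIG)
print(isig_off)   # True - 0x03 is now just a byte, not a SIGINT

# Restore on exit. Crashing before this line is what leaves a shell
# behaving strangely, and what `reset` repairs.
termios.tcsetattr(slave_fd, termios.TCSADRAIN, saved)
```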
Which window?
On the old mainframe, each user had one terminal, one cable, one session. But with graphical desktops, a single user might have ten terminal windows open, each with its own PTY. If a background process needs to prompt for a password, which window does the prompt appear in?
The answer is the controlling terminal. When you open a terminal window,
the shell that starts inside it is "married" to that window's PTY. That PTY becomes the
controlling terminal for everything launched from that shell. You can see which one it is
by running tty - it prints the device path, something like
/dev/pts/3.
This matters for more than just prompts. When the controlling terminal goes away - the window closes, the connection drops - the kernel uses this association to figure out which processes need to be told about it.
Using a computer across the internet
Everything so far happens on a single machine. But sometimes the computer you need to use is in a data center on the other side of the country. You're sitting at your laptop and you need a shell on that remote machine. The problem is the same one from the 1960s - you need to interact with a computer that isn't in front of you - but now the cable is the internet instead of a serial wire.
When you run ssh user@server, the SSH daemon on the remote machine
does something familiar: it creates a PTY pair. It takes the master side and spawns
bash on the slave side, just like a terminal emulator would. The only
difference is that instead of drawing characters in a window, sshd sends
them through an encrypted network tunnel back to the SSH client on your laptop, which
passes them to your local terminal emulator for display.
Your laptop                                Remote server
───────────                                ─────────────
Keyboard                                   bash (remote)
    │                                           │
    ▼                                           ▼
Alacritty                                  PTY Slave
    │                                           │
PTY Master ── kernel ── PTY Slave          PTY Master
                            │                   │
                          shell                 │
                            │                   │
                       ssh client ═══════════ sshd
                             (encrypted tunnel)
The remote bash has no idea it's being controlled from another continent. It
sees a PTY slave and behaves as if someone were sitting right next to the machine.
sshd is just another terminal emulator - it fakes the cable, same as
Alacritty does.
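The move every emulator and sshd makes - spawn a program with a fresh PTY slave as its terminal - is exactly what Python's pty.fork wraps. A sketch, with echo standing in for bash:

```python
import os
import pty

# Fork a child whose controlling terminal is the slave end of a new
# PTY pair; the parent keeps the master end, as sshd does.
pid, master_fd = pty.fork()
if pid == 0:
    # Child: stdin/stdout/stderr are the PTY slave. Exec the program.
    os.execvp("echo", ["echo", "hello from the slave side"])

# Parent: read what the child wrote to its "terminal".
output = os.read(master_fd, 1024)
os.waitpid(pid, 0)
print(output)   # note the \r\n - the line discipline at work
```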
The remote machine doesn't know your terminal
There's a catch when working remotely. Programs don't just print plain text to the terminal - they also send special byte sequences to control it: move the cursor to row 5 column 10, switch to red text, clear the screen. These are called escape sequences because they start with the escape character, signaling that what follows is a command, not text to display. Different terminals understand different sequences. What works on one might print garbage on another.
So the software on the server - vim, htop, anything that
draws to the screen - needs to know what kind of terminal is on the other end.
When you connect, your SSH client sends along that information: "My user's terminal
type is xterm-256color" (or whatever it happens to be). The remote server
stores this in the $TERM environment variable, and programs look it up in
a database called Terminfo to find the right escape sequences for your terminal.
If you've ever seen colors break, the backspace key print ^H instead of
deleting, or vim draw garbage after connecting from an unusual device, it's
almost certainly because the remote server doesn't have a Terminfo entry matching the
$TERM value your client sent. The lookup fails, the software guesses, and it
guesses wrong.
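The lookup itself can be done directly with Python's curses binding - a sketch that assumes an entry named "xterm" exists in the local Terminfo database, which is true on nearly every Unix system:

```python
import curses

# Load the Terminfo entry for a given terminal type, as a full-screen
# program would do for whatever $TERM names.
curses.setupterm("xterm")

clear = curses.tigetstr("clear")    # the escape sequence that clears the screen
colors = curses.tigetnum("colors")  # how many colors this type claims
print(clear)
print(colors)
```

When the entry is missing, this lookup is what fails, and programs fall back to guessing.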
When the connection drops
You're running a long build over SSH and your Wi-Fi cuts out. The SSH connection dies.
The remote sshd process exits, which closes the master end of the PTY it
was holding. The kernel destroys the PTY. And bash - along with your
build - is killed. All your work, gone, because a wireless signal wobbled for a
few seconds.
Terminal multiplexers like tmux and screen fix this by adding
another layer of indirection. Instead of SSH connecting directly to your shell,
tmux sits in between. It creates its own PTY pair: tmux holds
the master end, your shell runs on the slave end. The SSH connection talks to
tmux, not to the shell.
Now when your Wi-Fi dies and sshd exits, only the outer connection is lost.
tmux is a local process on the server - it doesn't care about the
network. It keeps running, and the inner PTY stays alive. When you reconnect and run
tmux attach, you're hooking a new SSH session into the existing
tmux process. Your shell never noticed you left.
The word "pseudo" in pseudoterminal earns its keep here. The PTY is a simulation of a cable, and simulations can be layered - one fake cable inside another, as deep as you need.
Closing the window
When you close a terminal window, every process inside it dies. This happens automatically and most people never think about it. The mechanism behind it has a name that's an artifact from a different era.
Before the internet, people connected to remote computers over phone lines using modems. "Hanging up" meant physically disconnecting the call. The modem would detect the loss of signal, and the kernel needed a way to tell the programs on the other end that their user was gone. The signal it sends is called SIGHUP - Signal Hangup.
The mechanism survived long after the phone modems disappeared. Today, when the master end of any PTY closes - you click the X on a terminal window, your SSH connection drops, your Wi-Fi dies - the kernel sends SIGHUP to the session leader on the slave side. The session leader is typically your shell, and the shell passes the signal along to its child processes. Unless a program is specifically configured to ignore it, it exits.
But sometimes you want a process to keep running after the terminal closes -
a long backup, a batch job, something that should finish whether you're watching or not.
That's why nohup exists. Running nohup ./long-task &
tells the process to ignore SIGHUP, so it survives when the terminal goes away.
It's also why tmux works for keeping sessions alive - the inner
PTY's master is tmux itself, which doesn't close when the outer connection
dies, so no SIGHUP reaches your shell.
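The nohup trick is one line of setup - a sketch; the real nohup also redirects output to nohup.out so the detached task has somewhere to write:

```python
import os
import signal

# What nohup arranges before launching the task: SIGHUP is ignored.
signal.signal(signal.SIGHUP, signal.SIG_IGN)

# Deliver the "your terminal is gone" signal to ourselves. With the
# default disposition this would terminate the process.
os.kill(os.getpid(), signal.SIGHUP)
survived = True
print(survived)
```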
The shape of it
None of this was designed as a unified system. It accumulated over decades, one problem at a time. People needed to use a computer, so they built terminals. The computer needed a protocol, so software was written around a character device. The hardware went away but the software remained, so the kernel learned to fake the hardware. Windows could resize but programs couldn't tell, so a signal was added. Programs could get stuck and ignore their input, so the kernel learned to intercept special keystrokes. Users opened many windows, so each session was tied to a specific device. Computers moved across the network, so the virtual cable trick was played again over an encrypted tunnel. Connections dropped, so another layer of fake cable was inserted to absorb the failure.
Each solution preserved the contract that programs already depended on. From the outside, the result looks baroque. From the inside - following the path that got us here - each step makes sense. It was always the simplest thing that could work, given what already existed.