Research: Plan 9 from Bell Labs and Inferno OS
Lessons for a capability-based OS using Cap’n Proto wire format.
Table of Contents
- Per-Process Namespaces
- The 9P Protocol
- File-Based vs Capability-Based Interfaces
- 9P as IPC
- Inferno OS
- Relevance to capOS
1. Per-Process Namespaces
Overview
Plan 9’s most significant architectural contribution is per-process namespaces.
Every process has its own view of the file hierarchy – not a shared global
filesystem tree as in Unix. A process’s namespace is a mapping from path names
to file servers (channels to 9P-speaking services). Two processes running on
the same machine can see completely different contents at /dev, /net,
/proc, or any other path.
Namespaces are inherited by child processes (fork copies the namespace) but can be modified independently afterward. This provides a form of resource isolation that is orthogonal to traditional access control: a process simply cannot name resources that aren’t in its namespace.
The Three Namespace Operations
Plan 9 provides three system calls for namespace manipulation:
bind(name, old, flags) – Takes an existing file or directory name
already visible in the namespace and makes it also accessible at path old.
This is purely a namespace-level alias – no new file server is involved. The
name argument must resolve to something already in the namespace.
Example: bind("#c", "/dev", MREPL) makes the console device (#c is a
kernel device designator) appear at /dev. The # prefix addresses kernel
devices directly before they have been bound into the namespace.
mount(fd, old, flags, aname) – Like bind, but the source is a file
descriptor connected to a 9P server rather than an existing namespace path.
The kernel speaks 9P over fd to serve requests for paths under old. The
aname parameter selects which file tree the server should export (a single
server can serve multiple trees).
Example: mount(fd, "/net", MREPL, "") where fd is a connection to the
network stack’s file server, makes the TCP/IP interface appear at /net.
unmount(name, old) – Removes a previous bind or mount from the
namespace.
Flags and Union Directories
The flags argument to bind and mount controls how the new binding
interacts with existing content at the mount point:
- `MREPL` (replace) – The new binding completely replaces whatever was at the mount point. Only the new server's files are visible.
- `MBEFORE` (before) – The new binding is placed before the existing content. When looking up a name, the new binding is searched first. If not found there, the old content is searched.
- `MAFTER` (after) – The new binding is placed after the existing content. The old content is searched first.
- `MCREATE` – Combined with `MBEFORE` or `MAFTER`, controls which component of the union receives create operations.
Union directories are the result of stacking multiple bindings at one mount point. When a directory has multiple bindings, a directory listing returns the union of all names from all components. A lookup walks the bindings in order and returns the first match.
This is how Plan 9 constructs /bin: multiple directories (for different
architectures, local overrides, etc.) are union-mounted at /bin. The
shell finds commands by simple path lookup – no $PATH variable needed.
```rc
bind -a /rc/bin /bin          # shell built-ins (MAFTER)
bind -a /386/bin /bin         # architecture binaries (MAFTER)
bind -b $home/bin/386 /bin    # personal overrides (MBEFORE)
```
A lookup for /bin/ls searches the personal directory first, then the
architecture directory, then the shell builtins – all via a single path.
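The first-match-wins lookup over ordered bindings can be sketched as a small simulation (illustrative only – `UnionDir` and the directory contents are hypothetical, not Plan 9 code):

```python
# Illustrative model of union-directory lookup order.
# MBEFORE prepends a binding (searched first); MAFTER appends (searched last).

class UnionDir:
    def __init__(self):
        self.bindings = []  # ordered list of {name: path} dicts

    def bind(self, directory, before=False):
        if before:                        # MBEFORE: searched first
            self.bindings.insert(0, directory)
        else:                             # MAFTER: searched last
            self.bindings.append(directory)

    def lookup(self, name):
        for d in self.bindings:           # first match wins
            if name in d:
                return d[name]
        raise FileNotFoundError(name)

    def listing(self):                    # union of all names, in search order
        names = []
        for d in self.bindings:
            for n in d:
                if n not in names:
                    names.append(n)
        return names

bin_dir = UnionDir()
bin_dir.bind({"ls": "/386/bin/ls", "cat": "/386/bin/cat"})  # architecture (MAFTER)
bin_dir.bind({"ls": "/rc/bin/ls"})                          # built-ins (MAFTER)
bin_dir.bind({"ls": "$home/bin/386/ls"}, before=True)       # personal (MBEFORE)

print(bin_dir.lookup("ls"))    # the personal copy shadows the others
print(bin_dir.lookup("cat"))   # found further down the union
```

A lookup walks the bindings in order, so the personal override wins for `ls`, while `cat` falls through to the architecture directory.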
Namespace Inheritance and Isolation
The rfork system call controls what the child inherits:
- `RFNAMEG` – Child gets a copy of the parent's namespace. Subsequent modifications by either side are independent.
- `RFCNAMEG` – Child starts with a clean (empty) namespace.
- Without either flag, parent and child share the namespace (modifications by one affect the other).
This gives fine-grained control: a shell can construct a restricted namespace for a sandboxed command, or a server can create an isolated namespace for each client connection.
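The three inheritance modes can be modeled with a namespace-as-dictionary sketch (the helper and flag values are hypothetical; real `rfork` is a kernel call):

```python
# Illustrative model of rfork namespace flags: copy, clean, or share.
RFNAMEG, RFCNAMEG = 1, 2   # hypothetical flag values for the sketch

def rfork_namespace(parent_ns, flags):
    if flags & RFCNAMEG:
        return {}                  # clean, empty namespace
    if flags & RFNAMEG:
        return dict(parent_ns)     # independent copy; later edits diverge
    return parent_ns               # no flag: shared, edits visible to both

parent = {"/dev/cons": "console-server"}

child = rfork_namespace(parent, RFNAMEG)
child["/net"] = "netstack"         # copy: parent is unaffected

shared = rfork_namespace(parent, 0)
shared["/srv"] = "srv-device"      # shared: parent sees the change
```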
Namespace Construction at Boot
Plan 9’s boot process constructs the initial namespace step by step:
- The kernel provides "kernel devices" accessed via `#` designators: `#c` (console), `#e` (environment), `#p` (proc), `#I` (IP stack), etc.
- The boot script binds these into conventional paths: `bind "#c" /dev`, `bind "#p" /proc`, etc.
- Network connections mount remote file servers: the CPU server's file system, the user's home directory, etc.
- Per-user profile scripts further customize the namespace.
The result is that the “standard” file hierarchy is a convention, not a kernel requirement. Any process can rearrange it.
Namespace as Security Boundary
Plan 9 namespaces provide a form of capability-like access control:
- A process cannot access resources outside its namespace
- A parent can restrict a child’s namespace before exec
- There is no way to "escape" a namespace – there is no `..` that crosses a mount boundary unexpectedly, and `#` designators can be restricted
However, this is not a formal capability system:
- The namespace contains string paths, which are ambient authority within the namespace
- Any process can `open("/dev/cons")` if `/dev/cons` is in its namespace – there is no per-open-call authorization
- The isolation depends on correct namespace construction, not structural properties
2. The 9P Protocol
Overview
9P (and its updated version 9P2000) is the protocol spoken between clients and file servers. Every resource in Plan 9 is accessed through 9P – local kernel devices, remote file systems, user-space services, and network resources all speak the same protocol.
9P is a request-response protocol with fixed message types. It is connection-oriented: a client establishes a session, authenticates, walks paths to obtain file handles (fids), and then reads/writes through those handles.
Message Types (9P2000)
9P2000 defines the following message pairs (T = request from client, R = response from server):
Session management:
- `Tversion`/`Rversion` – Negotiate protocol version and maximum message size. Must be the first message. The client proposes a version string (e.g., `"9P2000"`) and an `msize` (maximum message size in bytes). The server responds with the agreed version and msize.
- `Tauth`/`Rauth` – Establish an authentication fid. The client provides a user name and an `aname` (the file tree to access). The server returns an `afid` that the client reads/writes to complete an authentication exchange.
- `Tattach`/`Rattach` – Attach to a file tree. The client supplies a fresh fid, the `afid` from authentication, a user name, and the `aname`. The server returns a `qid` (unique file identifier) for the root of the tree, and the supplied fid becomes the client's handle for the root directory.
Navigation:
- `Twalk`/`Rwalk` – Walk a path from an existing fid. The client provides a starting fid and a sequence of name components (up to 16 per walk). The server returns a new fid pointing to the result and the qids of each intermediate step. Walk is how you traverse directories – there is no open-by-path operation.
File operations:
- `Topen`/`Ropen` – Open an existing file (by fid, obtained via walk). The client specifies a mode (read, write, read-write, exec, truncate). The server returns the qid and an `iounit` (maximum I/O size for atomic operations).
- `Tcreate`/`Rcreate` – Create a new file in a directory fid. The client specifies name, permissions, and mode.
- `Tread`/`Rread` – Read `count` bytes at `offset` from an open fid. The server returns the data.
- `Twrite`/`Rwrite` – Write `count` bytes at `offset` to an open fid. The server returns the number of bytes actually written.
- `Tclunk`/`Rclunk` – Release a fid. The server frees associated state. Equivalent to `close()`.
- `Tremove`/`Rremove` – Remove the file referenced by a fid and clunk the fid.
- `Tstat`/`Rstat` – Get file metadata (name, size, permissions, access times, qid, etc.).
- `Twstat`/`Rwstat` – Modify file metadata.
Error handling:
- `Rerror` – Any T-message can receive an `Rerror` instead of its normal response. Contains a text error string (9P2000) or an error number (9P2000.u).
Message Format
Every 9P message starts with a 4-byte length (little-endian, including the length field itself), a 1-byte type, and a 2-byte tag. The tag is chosen by the client and echoed in the response, enabling multiplexed operations over a single connection.
```
[4 bytes: size][1 byte: type][2 bytes: tag][... type-specific fields ...]
```
Field types are simple: 1/2/4/8-byte integers (little-endian), counted strings (2-byte length prefix + UTF-8), and counted data blobs (4-byte length prefix + raw bytes).
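As a concrete sketch of this layout, here is the first message of a session, `Tversion` (type code 100), encoded with Python's `struct` module (the helper name is ours, not part of any 9P library):

```python
import struct

# Sketch of the 9P2000 wire layout for Tversion (type 100):
# size[4] type[1] tag[2] msize[4] version[s], all little-endian.
NOTAG = 0xFFFF   # Tversion uses the special NOTAG tag

def encode_tversion(msize, version="9P2000"):
    v = version.encode("utf-8")
    # msize[4] + counted string: len[2] + UTF-8 bytes
    body = struct.pack("<I", msize) + struct.pack("<H", len(v)) + v
    size = 4 + 1 + 2 + len(body)            # size field counts itself
    return struct.pack("<IBH", size, 100, NOTAG) + body

msg = encode_tversion(8192)
# 4 (size) + 1 (type) + 2 (tag) + 4 (msize) + 2 + len("9P2000") = 19 bytes
```

Decoding walks the same fields in order: read the 4-byte size, dispatch on the type byte, match the tag against outstanding requests.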
Qids and File Identity
A qid is a server-assigned 13-byte file identifier:
```
[1 byte: type][4 bytes: version][8 bytes: path]
```
- type – Bits indicating directory, append-only, exclusive-use, authentication file, etc.
- version – Incremented when the file is modified. The client can detect changes by comparing versions.
- path – A unique identifier for the file within the server. Typically a hash or inode number.
Qids allow clients to detect file identity (same path through different walks = same qid) and staleness (version changed = re-read needed).
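The 13-byte layout decodes mechanically; a minimal sketch (the `QTDIR` bit value 0x80 is from 9P2000; the helper itself is ours):

```python
import struct

# Sketch: decode a 13-byte 9P qid [type:1][version:4][path:8], little-endian.
QTDIR = 0x80   # type bit set for directories

def decode_qid(b):
    qtype, version, path = struct.unpack("<BIQ", b)   # 1 + 4 + 8 = 13 bytes
    return {"type": qtype, "version": version, "path": path,
            "is_dir": bool(qtype & QTDIR)}

raw = struct.pack("<BIQ", QTDIR, 7, 0x1234)   # a directory, version 7
qid = decode_qid(raw)
```

Comparing `path` across walks establishes identity; comparing `version` detects modification.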
Authentication
9P2000 authentication is pluggable. The protocol provides the Tauth/Rauth
mechanism to establish an authentication fid, but the actual authentication
exchange happens by reading and writing this fid – the protocol itself is
agnostic to the authentication method.
Plan 9’s standard mechanism is p9sk1, a shared-secret protocol using an authentication server. The flow:
- Client sends `Tauth` to get an `afid`
- Client and server exchange challenge-response messages by reading/writing the `afid`, mediated by the authentication server
- Once authentication succeeds, the client uses the `afid` in `Tattach`
The key insight: authentication is just another read/write conversation over a special fid. New authentication methods can be implemented without changing the protocol.
Concurrency
9P supports concurrent operations through tags. A client can send multiple T-messages without waiting for responses. Each has a unique tag, and the server can respond out of order. The client matches responses to requests by tag.
A special tag value `NOTAG` (0xFFFF) is used for `Tversion`, which must complete before any other messages.
The `OEXCL` open mode provides exclusive access to a file – only one client can open it at a time. This is used for locking (e.g., the `#l` lock device in some Plan 9 variants).
Fids are per-connection, not global. Different clients on different connections have independent fid spaces. A server maintains per-connection state.
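The tag-based multiplexing described above amounts to a table of outstanding requests keyed by tag; a toy client-side sketch (class and message strings are hypothetical):

```python
# Sketch of client-side tag multiplexing: outstanding requests keyed by tag,
# responses matched even when they arrive out of order.

class TagMux:
    def __init__(self):
        self.next_tag = 0
        self.pending = {}                 # tag -> in-flight request

    def send(self, request):
        tag = self.next_tag
        self.next_tag = (self.next_tag + 1) % 0xFFFF  # 16-bit tags; NOTAG reserved
        self.pending[tag] = request
        return tag

    def receive(self, tag, response):
        request = self.pending.pop(tag)   # match response to request by tag
        return request, response

mux = TagMux()
t1 = mux.send("Twalk /net")
t2 = mux.send("Tread fid=3")
# The server may answer in any order; the tag identifies which request completed:
req, _ = mux.receive(t2, "Rread ...")
```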
Maximum Message Size
The `msize` negotiated in `Tversion` bounds all subsequent messages. A
typical default is 8192 or 65536 bytes. The `iounit` returned by `Topen`
tells the client the maximum useful count for read/write on that fid,
which may be less than `msize` minus the message header overhead.
This bounding is important for resource management – a server can limit memory consumption per connection.
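A client reading a file larger than the iounit must therefore loop, issuing `Tread`s of at most `iounit` bytes each; a sketch (the `read_9p` callback stands in for the actual round-trip):

```python
# Sketch: split a large read into iounit-sized chunks, as a 9P client must.
# read_9p(offset, count) is a hypothetical stand-in for a Tread round-trip.

def read_all(read_9p, length, iounit):
    data = b""
    offset = 0
    while offset < length:
        count = min(iounit, length - offset)   # never exceed iounit per Tread
        chunk = read_9p(offset, count)
        if not chunk:                          # short read at end of file
            break
        data += chunk
        offset += len(chunk)
    return data

blob = bytes(range(256)) * 40                  # a 10240-byte "file"
calls = []
def fake_read(offset, count):
    calls.append(count)
    return blob[offset:offset + count]

out = read_all(fake_read, len(blob), iounit=4096)   # 3 round-trips: 4096+4096+2048
```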
3. File-Based vs Capability-Based Interfaces
Plan 9: Everything is a File
Plan 9 takes Unix’s “everything is a file” philosophy further than Unix itself ever did:
- Network stack – TCP connections are managed by reading/writing files in `/net/tcp`: `clone` (allocate a connection), `ctl` (write commands like `connect 10.0.0.1!80`), `data` (read/write payload), `status` (read connection state).
- Window system – The `rio` window manager exports a file system: each window has a `cons`, `mouse`, `winname`, etc. A program draws by writing to `/dev/draw/*`.
- Process control – `/proc/<pid>/` contains `ctl` (write `kill` to signal), `status` (read state), `mem` (read/write process memory), `text` (read executable), `note` (signals).
- Hardware devices – Kernel devices export file interfaces directly. The audio device is files, the graphics framebuffer is files, etc.
The interface contract is: open a file, read/write bytes, stat for metadata.
The semantics of those bytes are defined by the file server – there is no
ioctl().
Strengths of the file model:
- Universal tools work everywhere: `cat /net/tcp/0/status`, `echo kill > /proc/1234/ctl`
- Shell scripts can compose services trivially
- Network transparency is automatic: mount a remote file server, same tools work
- The interface is self-documenting: `ls` shows available operations
- Simple tools like `cat`, `echo`, `grep` become universal adapters
Weaknesses of the file model:
- Type erasure. Everything is bytes. The protocol cannot express structured data without conventions layered on top (text formats, fixed layouts, etc.). A `read()` returns raw bytes – the client must know the expected format.
- Limited operation set. The only verbs are open, read, write, stat, create, remove. Complex operations must be encoded as write-command / read-response sequences (e.g., `echo "connect 10.0.0.1!80" > /net/tcp/0/ctl`). Error handling is ad hoc.
- No schema or type checking. Nothing prevents writing garbage to a ctl file. Errors are detected at runtime, often with cryptic messages.
- No structured errors. 9P errors are text strings. No error codes, no machine-parseable error metadata.
- Byte-stream orientation. 9P read/write are offset-based byte operations. This fits files naturally but is awkward for RPC-style request/response interactions. File servers work around this with conventions (write a command, read the response from offset 0).
- No pipelining of operations. You cannot say “open this file, then read it, and if that succeeds, write to this other file” atomically. Each step is a separate round-trip (though 9P’s tag multiplexing helps amortize latency).
Capability Systems: Everything is a Typed Interface
In a capability system like capOS, resources are accessed through typed interface references:
```capnp
interface Console {
  write @0 (data :Data) -> ();
  writeLine @1 (text :Text) -> ();
}

interface NetworkManager {
  createTcpSocket @0 (addr :Text, port :UInt16) -> (socket :TcpSocket);
}

interface TcpSocket {
  read @0 (count :UInt32) -> (data :Data);
  write @1 (data :Data) -> (written :UInt32);
  close @2 () -> ();
}
```
Strengths of the capability model:
- Type safety. The interface contract is machine-checked. You cannot call `write` on a `NetworkManager` – the type system prevents it.
- Rich operations. Interfaces can define arbitrary methods with typed parameters and return values. No need to encode everything as byte read/writes.
- Structured errors. Return types can include error variants. Capabilities can define error enums in the schema.
- Schema evolution. Cap’n Proto supports backwards-compatible schema changes (adding fields, adding methods). Both old and new clients/servers interoperate.
- No ambient authority. A process has precisely the capabilities it was granted. No path-based discovery, no `/proc` to enumerate.
- Attenuation. A broad capability can be narrowed to a restricted version (e.g., `Fetch` -> `HttpEndpoint`). The restriction is structural, not a permission check.
Weaknesses of the capability model:
- No universal tools. `cat` and `echo` do not work on capabilities. Each interface needs its own client tool or library. Debugging requires interface-aware tools.
- Harder composition. Shell pipes compose byte streams trivially. Capability composition requires typed adapters or a capability-aware shell.
- Discovery problem. `ls` shows files. What shows capabilities? A management-only `CapabilityManager.list()` call, but that requires holding the manager cap and a tool that can render the result.
- Steeper learning curve. A new developer can `ls /net` to understand the network stack. Understanding a capability interface requires reading the schema definition.
- Verbosity. Opening a TCP connection in Plan 9 is four file operations (clone, ctl, data, status). In a capability system, it is one typed method call. But defining the interface in the schema is more upfront work than just exporting files.
Synthesis
The file model and the capability model are not opposed – they are different points on a trade-off curve between universality and type safety. Plan 9 chose maximal universality (everything reduces to bytes + paths). Capability systems choose maximal type safety (everything has a schema).
The interesting question is whether a capability system can recover the ergonomic benefits of the file model while maintaining type safety. This is addressed in section 6.
4. 9P as IPC
File Servers as Services
In Plan 9, a “service” is simply a process that speaks 9P. When a client mounts a file server’s connection at some path, all file operations on that path become 9P messages to the server. This is the universal IPC mechanism – there are no Unix-domain sockets, no D-Bus, no shared memory primitives for service communication. Everything goes through 9P.
Examples of services-as-file-servers:
- `exportfs` – Re-exports a subtree of the current namespace over a network connection, letting remote clients mount it.
- `ramfs` – A RAM-backed file server. Mount it and you have a tmpfs.
- `ftpfs` – Mounts a remote FTP server as a local directory. Programs read/write files; the file server translates to FTP protocol.
- `mailfs` – Presents a mail spool as a directory of messages. Each message is a directory with `header`, `body`, `rawbody`, etc.
- `plumber` – The inter-application message router exports a file interface: write a message to `/mnt/plumb/send`, and it arrives in the target application's plumb port.
- `acme` – The Acme editor exports its entire UI as a file system: windows, buffers, tags, event streams. External programs can control Acme by reading/writing these files.
The srv Device and Connection Passing
The kernel `#s` (srv) device provides a namespace for posting file
descriptors. A server process creates a pipe, starts serving 9P on one end,
and posts the other end as `/srv/myservice`. Other processes open
`/srv/myservice` to get a connection to the server, then mount it into
their namespace.
```
# Server side:
pipe = pipe()
post(pipe[0], "/srv/myfs")
serve_9p(pipe[1])

# Client side:
fd = open("/srv/myfs", ORDWR)
mount(fd, "/mnt/myfs", MREPL, "")
# Now /mnt/myfs/* are served by the server process
```
This decouples service registration from namespace mounting. Multiple clients can mount the same service at different paths in their own namespaces.
Performance and Overhead
9P’s overhead compared to direct function calls or shared memory:
- Serialization – Every operation is a 9P message: header parsing, field encoding/decoding. Messages are simple binary (not XML/JSON), so this is fast but nonzero.
- Copying – Data passes through the kernel (pipe or network): user buffer -> kernel pipe buffer -> server process buffer (and back for responses). This is at least two copies per direction.
- Context switches – Each request/response is a write (client) + read (server) + write (server) + read (client) = four context switches for a round-trip.
- No zero-copy – 9P does not support shared memory or page remapping. Large data transfers pay the full copy cost.
For metadata-heavy operations (stat, walk, open/close), the overhead is dominated by context switches, not data copying. Plan 9 is designed for networks where latency matters – the protocol’s simplicity and multiplexability help here.
For bulk data, the overhead is significant. Plan 9 compensates somewhat with
the iounit mechanism (encouraging large reads/writes to amortize per-call
costs) and the fact that most I/O is streaming (sequential reads/writes, not
random access).
In practice, Plan 9 systems are not optimized for raw throughput on local IPC. The design prioritizes simplicity and network transparency over local performance. The assumption is that the network is the bottleneck, so local protocol overhead is acceptable.
Network Transparency
9P’s power lies in its network transparency. The same protocol runs over:
- Pipes – Local IPC between processes on the same machine.
- TCP connections – Remote file access across the network.
- Serial lines – Early Plan 9 terminals connected to CPU servers.
- TLS/SSL – Encrypted connections (added later).
A CPU server is accessed by mounting its file system over the network. The
Plan 9 cpu command:
- Connects to a remote CPU server over TCP
- Authenticates
- Exports the local namespace (via `exportfs`) to the remote side
- The remote side mounts the local namespace, overlaying it with its own kernel devices
- A shell runs on the remote CPU, but with access to local files
The result: you work on the remote machine but your files, windows, and devices are local. This is more powerful than SSH because the integration is at the namespace level, not the terminal level.
Factoid: In the Plan 9 computing model, terminals were intentionally underpowered. The expensive hardware was the CPU server. Users mounted the CPU server’s filesystem and ran programs there, with the terminal providing I/O devices (keyboard, mouse, display) exported as files back to the CPU server.
5. Inferno OS
What Inferno Adds Beyond Plan 9
Inferno (also from Bell Labs, originally by the same team) took the Plan 9 architecture and adapted it for portable, networked computing. It can run as a native OS on bare hardware, as a hosted application on other OSes (Linux, Windows, macOS), or as a virtual machine.
Key additions and differences:
- Dis virtual machine – All user-space code runs on a register-based VM, not native machine code.
- Limbo language – A type-safe, garbage-collected, concurrent language (drawing on C, Newsqueak, Alef, and CSP's channel model; itself an influence on Go). All applications are written in Limbo.
- Styx protocol – Inferno’s name for its 9P variant (functionally identical to 9P2000 with minor encoding differences in early versions, later fully aligned with 9P2000).
- Portable execution – The same Limbo bytecode runs on any platform where the Dis VM is available. No recompilation needed.
- Built-in cryptography – TLS, certificate-based authentication, and signed modules are integrated into the system, not bolted on.
The Dis Virtual Machine
Dis is a register-based virtual machine (unlike the JVM, which is stack-based). Key characteristics:
- Memory model – Dis uses a module-based memory model. Each loaded module has its own data segment (frame). Instructions reference memory operands by offset within the current module’s frame, the current function’s frame, or a literal (mp, fp, or immediate addressing).
- Instruction set – CISC-inspired, with three-address instructions: `add src1, src2, dst`. Opcodes cover arithmetic, comparison, branching, string operations, channel operations, and system calls. Around 80-90 opcodes.
- Type descriptors – Each allocated block has a type descriptor that identifies which words are pointers. This enables exact garbage collection (no conservative scanning).
- Garbage collection – Reference counting with cycle detection. Deterministic deallocation for acyclic structures (important for resource management), with periodic cycle collection.
- Module loading – Dis modules are loaded on demand. A module declares its type signature (exported functions and their types), and the loader verifies type compatibility at link time.
- JIT compilation – On supported architectures (x86, ARM, MIPS, SPARC, PowerPC), Dis bytecode is compiled to native code at load time. This removes the interpretation overhead for hot code.
- Concurrency – Dis natively supports concurrent threads of execution within a module. Threads communicate via typed channels (from CSP/Limbo).
The Limbo Language
Limbo is Inferno’s application language. Its design reflects the system’s values:
- Type-safe – No pointer arithmetic, no unchecked casts, no buffer overflows. The type system is enforced at compile time and verified at module load time.
- Garbage collected – Programmers do not manage memory. Reference counting provides deterministic resource cleanup.
- Concurrent – First-class `chan` types (typed channels) and `spawn` for creating threads. This is CSP-style concurrency, predating (and influencing) Go's goroutines and channels.
- Module system – Modules declare interfaces (like header files with type signatures). A module `import`s another module's interface, and the runtime verifies type compatibility at load time.
- ADTs – Algebraic data types with `pick` (tagged unions). Pattern matching over variants.
- Tuples – First-class tuple types for returning multiple values.
- No inheritance – Limbo has ADTs and modules, not objects and classes.
Example – a simple file server in Limbo:
```limbo
implement Echo;

include "sys.m";
include "draw.m";
include "styx.m";

sys: Sys;

Echo: module {
    init: fn(nil: ref Draw->Context, argv: list of string);
};

init(nil: ref Draw->Context, argv: list of string)
{
    sys = load Sys Sys->PATH;
    # ... set up Styx server, handle read/write on echo file
}
```
Limbo and the Namespace Model
Limbo programs interact with the namespace through the `Sys` module's file
operations (`open`, `read`, `write`, `mount`, `bind`, etc.) – the same
operations as in Plan 9. The namespace model is identical:
- Each process group has its own namespace
- `bind` and `mount` manipulate the namespace
- File servers (Styx servers) provide services
- Union directories compose multiple servers
The difference is that Limbo's type safety extends to the file descriptors
and channels used to communicate. A `Sys->FD` is a reference type, not a
raw integer. You cannot fabricate a file descriptor from nothing.
Limbo’s channel type (chan of T) provides typed communication between
concurrent threads within a process. Channels are a local IPC mechanism
complementary to Styx, which handles inter-process and inter-machine
communication.
Styx (Inferno’s 9P)
Styx is Inferno’s name for the 9P2000 protocol. In the current version of Inferno, Styx and 9P2000 are wire-compatible – the same byte format, the same message types, the same semantics. The renaming reflects Inferno’s origin as a commercial product from Vita Nuova (and before that, Lucent Technologies) with its own branding.
The Inferno kernel includes a Styx library (Styx and Styxservers
modules) that makes implementing file servers straightforward in Limbo.
The Styxservers module provides a framework: you implement a navigator
(for walk/stat) and a file handler (for read/write), and the framework
handles the protocol boilerplate.
```limbo
include "styx.m";
include "styxservers.m";

styx: Styx;
styxservers: Styxservers;

Srv: adt {
    # ... file tree definition
};

# The framework calls navigator.walk(), navigator.stat() for metadata
# and file.read(), file.write() for data operations.
```
Inferno also provides the 9srvfs utility for mounting external 9P servers
and the mount command for attaching Styx servers to the namespace – the
same patterns as Plan 9.
Security Model
Inferno’s security model builds on namespaces with additional mechanisms:
- Signed modules – Dis modules can be cryptographically signed. The loader can verify signatures before executing code.
- Certificate-based authentication – Inferno uses a certificate infrastructure (not Kerberos like Plan 9) for authenticating connections.
- Namespace restriction – The `wm/sh` shell and other supervisory programs can construct restricted namespaces for untrusted code.
- Type safety as security – Since Limbo prevents pointer forgery and buffer overflows, type safety is a security boundary. A Limbo program cannot escape its type system to forge file descriptors or access arbitrary memory.
6. Relevance to capOS
6.1 Namespace Composition via Capabilities
Plan 9 lesson: Per-process namespaces are a powerful isolation and composition mechanism. A process’s “view of the world” is constructed by its parent through bind/mount operations. The child cannot escape this view.
capOS parallel: Per-process capability tables serve an analogous role. A process’s “view of the world” is its set of granted capabilities. The child cannot discover or access capabilities outside its table.
What capOS could adopt:
The existing Namespace interface in the storage proposal
(docs/proposals/storage-and-naming-proposal.md) already captures some of this –
resolve, bind, list, and sub provide name-to-capability mappings.
But Plan 9’s namespace model suggests a more dynamic composition pattern:
```capnp
interface Namespace {
  # Resolve a name to a capability reference
  resolve @0 (name :Text) -> (capId :UInt32, interfaceId :UInt64);

  # Bind a capability at a name in this namespace
  bind @1 (name :Text, capId :UInt32) -> ();

  # Create a union: multiple capabilities behind one name
  union @2 (name :Text, capId :UInt32, position :UnionPosition) -> ();

  # List available names
  list @3 () -> (entries :List(NamespaceEntry));

  # Get a restricted sub-namespace
  sub @4 (prefix :Text) -> (ns :Namespace);
}

enum UnionPosition {
  before @0;   # searched first (like Plan 9 MBEFORE)
  after @1;    # searched last (like Plan 9 MAFTER)
  replace @2;  # replaces existing (like Plan 9 MREPL)
}

struct NamespaceEntry {
  name @0 :Text;
  interfaceId @1 :UInt64;
  label @2 :Text;
}
```
The key insight from Plan 9 is union composition – multiple capabilities can be bound at the same name, searched in order. This is useful for overlay patterns: a local cache capability layered before a remote store capability, or a per-user config namespace layered before a system-wide default.
Differences from Plan 9:
Plan 9 namespaces map names to file servers. capOS namespaces map names to typed capabilities. The advantage: capOS can verify at bind time that the capability matches the expected interface. Plan 9 cannot – you mount a file server and discover at runtime whether it exports the files you expect.
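What bind-time verification could look like, as a toy model (class name, interface ids, and error handling are all hypothetical, not the proposal's API):

```python
# Sketch: a capOS-style namespace that checks, at bind time, that a
# capability's interface id matches what the name is expected to serve.

CONSOLE_IFACE = 0xA1   # hypothetical interface ids
NET_IFACE     = 0xB2

class TypedNamespace:
    def __init__(self, expected):
        self.expected = expected      # name -> required interface id
        self.entries = {}

    def bind(self, name, cap_id, interface_id):
        # Reject at bind time, not at first use, if the interface is wrong.
        if self.expected.get(name, interface_id) != interface_id:
            raise TypeError(f"{name}: wrong interface {interface_id:#x}")
        self.entries[name] = (cap_id, interface_id)

    def resolve(self, name):
        return self.entries[name]

ns = TypedNamespace({"console": CONSOLE_IFACE, "net": NET_IFACE})
ns.bind("console", cap_id=1, interface_id=CONSOLE_IFACE)   # accepted
try:
    ns.bind("console", cap_id=2, interface_id=NET_IFACE)   # rejected immediately
    rejected = False
except TypeError:
    rejected = True
```

A Plan 9 mount has no equivalent check: the mismatch surfaces only when a client walks to a file the server does not export.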
6.2 Cap’n Proto RPC vs 9P
Protocol comparison:
| Aspect | 9P2000 | Cap’n Proto RPC |
|---|---|---|
| Message format | Fixed binary fields, counted strings/data | Capnp wire format (pointer-based, zero-copy decode) |
| Operations | Fixed set (walk, open, read, write, stat, …) | Arbitrary per-interface (schema-defined methods) |
| Typing | Untyped bytes | Strongly typed (schema-checked) |
| Multiplexing | Tag-based (16-bit tags) | Question ID-based (32-bit) |
| Pipelining | Not supported (each op is independent) | Promise pipelining (call method on not-yet-returned result) |
| Authentication | Pluggable via auth fid | Application-level (not protocol-specified) |
| Capabilities | No (file fids are unforgeable handles, but no transfer/attenuation) | Native capability passing and attenuation |
| Maximum message | Negotiated msize | No inherent limit (segmented messages) |
| Schema evolution | N/A (fixed protocol) | Forward/backward compatible schema changes |
| Network transparency | Native design goal | Native design goal |
Key differences for capOS:
- Promise pipelining – This is capnp RPC's strongest advantage over 9P. In 9P, opening a TCP connection requires: walk to `/net/tcp` -> walk to `clone` -> open clone -> read (get connection number) -> walk to `ctl` -> open ctl -> write "connect …" -> walk to `data` -> open data. Eight round-trips minimum. With capnp pipelining: `net.createTcpSocket("10.0.0.1", 80)` returns a promise, and you can immediately call `.write(data)` on the promise – the runtime chains the calls without waiting for the first to complete. One logical round-trip.
- Typed interfaces – 9P's strength is that `cat` works on any file. Capnp's strength is that the compiler catches `console.allocFrame()` at compile time. capOS should not try to make everything a "file" – typed interfaces are the right abstraction for a capability system. But a `FileServer` capability interface could provide Plan 9-like flexibility where needed (see below).
- Capability passing – 9P has no way to pass a fid through a file server to a third party. (The `srv` device is a workaround, not a protocol feature.) Capnp RPC natively supports passing capability references in messages. This is fundamental to capOS's model.
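The pipelining idea can be illustrated with a toy promise model (this is not capnp's implementation; all names are invented): calls made on an unresolved promise are queued and flushed together once the target arrives, so dependent calls need no intermediate round-trip.

```python
# Toy model of promise pipelining: calls on an unresolved promise are
# queued, and dispatched as one batch when the target resolves.

class Promise:
    def __init__(self):
        self.queued = []              # calls made before resolution
        self.target = None

    def call(self, method, *args):
        result = Promise()
        self.queued.append((method, args, result))
        return result                 # the result can itself be pipelined on

    def resolve(self, target, trace):
        self.target = target
        for method, args, result in self.queued:
            trace.append(method)      # all queued calls flush together
            result.resolve(getattr(target, method)(*args), trace)

class Socket:
    def write(self, data):
        return len(data)

class Net:
    def create_socket(self, addr, port):
        return Socket()

net = Promise()
sock = net.call("create_socket", "10.0.0.1", 80)  # pipelined on the promise
sock.call("write", b"GET /")                      # before create_socket "returns"

trace = []
net.resolve(Net(), trace)   # both calls dispatch in dependency order
```

In real capnp RPC the queue lives in the remote vat, so the `write` travels on the wire before `create_socket`'s answer comes back.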
6.3 File Server Pattern as a Capability
Plan 9’s file server pattern is useful and should not be discarded just
because capOS is capability-based. Instead, define a generic FileServer
capability interface:
```capnp
interface FileServer {
  walk @0 (names :List(Text)) -> (fid :FileFid);
  list @1 (fid :FileFid) -> (entries :List(DirEntry));
}

interface FileFid {
  open @0 (mode :OpenMode) -> (iounit :UInt32);
  read @1 (offset :UInt64, count :UInt32) -> (data :Data);
  write @2 (offset :UInt64, data :Data) -> (written :UInt32);
  stat @3 () -> (info :FileInfo);
  close @4 () -> ();
}
```
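A minimal in-memory sketch of these interfaces as Rust traits may make the shape concrete. The trait names mirror the schema, but the signatures are simplified (no open modes, no structured errors) and the `MemServer` backend is purely illustrative:

```rust
// Illustrative sketch of FileServer/FileFid as Rust traits with a
// trivial in-memory backend. Simplified from the schema above; not
// capOS code.
use std::collections::HashMap;

trait FileFid {
    fn read(&self, offset: u64, count: u32) -> Vec<u8>;
    fn stat(&self) -> String; // simplified: just the path
}

trait FileServer {
    fn walk(&self, names: &[&str]) -> Option<Box<dyn FileFid>>;
}

struct MemFid { path: String, data: Vec<u8> }

impl FileFid for MemFid {
    fn read(&self, offset: u64, count: u32) -> Vec<u8> {
        let start = (offset as usize).min(self.data.len());
        let end = (start + count as usize).min(self.data.len());
        self.data[start..end].to_vec()
    }
    fn stat(&self) -> String { self.path.clone() }
}

struct MemServer { files: HashMap<String, Vec<u8>> }

impl FileServer for MemServer {
    fn walk(&self, names: &[&str]) -> Option<Box<dyn FileFid>> {
        let path = names.join("/");
        self.files.get(&path).map(|d| Box::new(MemFid {
            path: path.clone(),
            data: d.clone(),
        }) as Box<dyn FileFid>)
    }
}

fn main() {
    let mut files = HashMap::new();
    files.insert("cfg/hostname".to_string(), b"caposbox".to_vec());
    let srv = MemServer { files };
    let fid = srv.walk(&["cfg", "hostname"]).expect("file exists");
    assert_eq!(fid.read(0, 16), b"caposbox");
    println!("{}: {:?}", fid.stat(),
             String::from_utf8(fid.read(0, 16)).unwrap());
}
```

Because `MemServer` is just another capability implementation, a debug service, a config tree, and a POSIX shim could each hand out their own `FileServer` without any kernel involvement.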
A FileServer capability enables:
- `/proc`-like introspection – A debugging service exports process state as a file tree. Tools read files to inspect state.
- Config storage – A configuration namespace can be exposed as files for tools that work with text.
- POSIX compatibility – The POSIX shim layer maps `open()`/`read()`/`write()` to `FileServer` capability calls.
- Shell scripting – A capability-aware shell could mount `FileServer` caps and use `cat`/`echo`-style tools on them.
The point: `FileServer` is one capability interface among many. It is not
the universal abstraction (as in Plan 9), but it is available where the
file metaphor is natural.
6.4 IPC Lessons
Plan 9 lesson: 9P works as universal IPC because the protocol is simple and the kernel handles the plumbing (mount, pipe, network). The cost is per-message overhead (copies, context switches).
capOS implications:
- Minimize copies. 9P's two-copies-per-direction (user -> kernel pipe buffer -> server) is acceptable for networks but expensive for local IPC. capOS should investigate shared-memory regions for bulk data transfer between co-located processes, with capnp messages as the control plane. The roadmap's io_uring-inspired submission/completion rings already point in this direction.
- Direct context switch. The L4/seL4 IPC fast-path (direct switch from caller to callee without choosing an unrelated runnable process) now exists as a baseline for blocked Endpoint receivers. Plan 9 does not do this – every 9P round-trip goes through the kernel's pipe/network layer. capOS can tune this further because capability calls have a known target process.
- Batching. Plan 9 mitigates round-trip costs through large reads/writes (the `iounit` mechanism). Capnp's promise pipelining is the typed equivalent – batch multiple logical operations into a dependency chain that executes without intermediate round-trips.
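The "shared memory for bulk data, capnp as control plane" split can be sketched as follows. The types here are stand-ins (a real shared region would be a mapped page, not a `Vec<u8>`); the point is that the IPC path carries only a small (offset, len) descriptor while the payload never crosses the message channel:

```rust
// Sketch (not capOS code): the bulk payload lives in a shared region;
// the control-plane message carries only (offset, len), so the IPC
// path copies a handful of bytes instead of the payload itself.

struct SharedRegion { buf: Vec<u8> } // stand-in for a mapped region

struct ControlMsg { offset: usize, len: usize } // tiny control message

fn send_bulk(region: &mut SharedRegion, payload: &[u8]) -> ControlMsg {
    let offset = region.buf.len();
    region.buf.extend_from_slice(payload); // "write into shared memory"
    ControlMsg { offset, len: payload.len() }
}

fn recv_bulk<'a>(region: &'a SharedRegion, msg: &ControlMsg) -> &'a [u8] {
    // The receiver reads the payload in place: a zero-copy view.
    &region.buf[msg.offset..msg.offset + msg.len]
}

fn main() {
    let mut region = SharedRegion { buf: Vec::new() };
    let msg = send_bulk(&mut region, b"4 KiB of frame data...");
    let view = recv_bulk(&region, &msg);
    assert_eq!(view, &b"4 KiB of frame data..."[..]);
    println!("control message: {} bytes describe {} payload bytes",
             std::mem::size_of::<ControlMsg>(), msg.len);
}
```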
6.5 Inferno Lessons
Dis VM / type safety: Inferno’s bet on a managed runtime (Dis + Limbo) gives it type safety as a security boundary. capOS, being written in Rust for kernel code and targeting native binaries, does not have this luxury for arbitrary user-space code. However:
- WASI support (on the roadmap) provides a sandboxed execution environment with type-checked interfaces, similar in spirit to Dis.
- Cap’n Proto schemas provide interface-level type safety even for native code. The schema is the contract, enforced at message boundaries.
Channel-based concurrency: Limbo's `chan of T` type is a local IPC
mechanism within a process. capOS does not currently have this (it relies
on kernel-mediated capability calls for all IPC). For in-process threading
(on the roadmap), typed channels between threads could be useful –
implemented as a library on top of shared memory + futex, without kernel
involvement.
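As a point of reference for what such a library channel looks like, Rust's standard `std::sync::mpsc` already provides Limbo-style typed channels between threads with no per-message kernel capability call (a capOS version would sit on shared memory + futex, as described above):

```rust
// Typed channels between threads as a library, in the spirit of
// Limbo's `chan of T`. Uses std::sync::mpsc as an existence proof;
// no kernel-mediated capability call happens per message.
use std::sync::mpsc;
use std::thread;

fn main() {
    // The channel is typed: only u32 values can flow through it.
    let (tx, rx) = mpsc::channel::<u32>();

    let producer = thread::spawn(move || {
        for i in 0..4 {
            tx.send(i).unwrap();
        }
        // tx dropped here; the receiver's iterator then terminates.
    });

    let sum: u32 = rx.iter().sum();
    producer.join().unwrap();
    assert_eq!(sum, 6); // 0 + 1 + 2 + 3
    println!("sum = {sum}");
}
```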
Portable execution: Inferno’s ability to run the same bytecode everywhere is appealing but orthogonal to capOS’s goals. The WASI runtime item on the roadmap serves this purpose for capOS.
6.6 Concrete Recommendations
Based on this research, the following items are most relevant to capOS development:
- Add a `Namespace` capability with union semantics. Extend the existing Namespace design (from the storage proposal) with Plan 9-style union composition (before/after/replace). This enables overlay patterns for configuration, caching, and modularity.
- Implement a `FileServer` capability interface. Not as the universal abstraction, but as one interface for resources that are naturally file-like (config trees, debug introspection, POSIX compatibility). A `FileServer` cap is just another capability – no special kernel support needed.
- Prioritize promise pipelining. This is capnp's killer feature over 9P and the biggest performance advantage for IPC-heavy workloads. Multiple logical operations collapse into one network/IPC round-trip. Async rings are in place; the remaining work is the Stage 6 pipeline dependency/result-cap mapping rule.
- Plan 9-style namespace construction in init. The boot manifest already describes which capabilities each service receives. Consider adding namespace-level composition to the manifest: "this service sees capability X as `data/primary` and capability Y as `data/cache`, with cache searched first" – union directory semantics expressed in capability terms.
- Study 9P's `exportfs` pattern for network transparency. Plan 9's `exportfs` re-exports a namespace subtree over the network. The capOS equivalent would be a proxy service that takes a set of local capabilities and makes them available as capnp RPC endpoints on the network. This is the "network transparency" roadmap item – 9P's design proves it is achievable, and capnp's richer type system makes it more robust.
- Do not replicate 9P's weaknesses. The untyped byte-stream interface, the lack of structured errors, and the fixed operation set are 9P's costs for universality. capOS pays none of these costs with Cap'n Proto. The temptation to "make everything a file for simplicity" should be resisted – typed capabilities are strictly more powerful, and the `FileServer` interface provides the file metaphor where needed without compromising the rest of the system.
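Union composition with ordered search can be sketched briefly. All types here are hypothetical (`Layer`, `UnionDir`, and the `data/cache` / `data/primary` names come from the manifest example above, not from any existing capOS API); the sketch shows only the lookup rule: layers are searched in order, so a cache layer bound before the primary shadows it, as with Plan 9's MBEFORE:

```rust
// Hypothetical sketch of union-directory lookup for a Namespace
// capability: mounts are searched in bind order, first hit wins.
use std::collections::HashMap;

struct Layer {
    name: &'static str,
    files: HashMap<&'static str, &'static str>,
}

struct UnionDir {
    layers: Vec<Layer>, // earlier layers shadow later ones
}

impl UnionDir {
    // Return (layer name, contents) of the first layer holding `path`.
    fn lookup(&self, path: &str) -> Option<(&'static str, &'static str)> {
        self.layers
            .iter()
            .find_map(|l| l.files.get(path).map(|v| (l.name, *v)))
    }
}

fn main() {
    let cache = Layer {
        name: "data/cache",
        files: HashMap::from([("blob", "cached bytes")]),
    };
    let primary = Layer {
        name: "data/primary",
        files: HashMap::from([("blob", "fresh bytes"), ("cfg", "settings")]),
    };
    let ns = UnionDir { layers: vec![cache, primary] };

    // The cache shadows primary for "blob"; "cfg" falls through.
    assert_eq!(ns.lookup("blob"), Some(("data/cache", "cached bytes")));
    assert_eq!(ns.lookup("cfg"), Some(("data/primary", "settings")));
    assert_eq!(ns.lookup("missing"), None);
}
```

Reordering the `layers` vector gives the after/replace variants: appending a layer yields MAFTER semantics, and a single-layer vector is MREPL.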
Summary
| Plan 9 / Inferno Concept | capOS Equivalent | Gap / Action |
|---|---|---|
| Per-process namespace (bind/mount) | Per-process capability table | Add Namespace cap with union semantics |
| 9P protocol (file operations) | Cap’n Proto RPC (typed method calls) | capnp is strictly superior for typed IPC; FileServer cap provides file semantics where needed |
| Union directories | No current equivalent | Add union composition to Namespace interface |
| File servers as services | Capability-implementing processes | Already the model; manifest-driven service graph is close to Plan 9’s boot namespace construction |
| Network transparency via 9P | Network transparency via capnp RPC | Same goal, capnp adds promise pipelining and typed interfaces |
| exportfs (namespace re-export) | Capability proxy service | Not yet designed; high-value future work |
| Styx/9P as universal IPC | Capnp messages as universal IPC | Already the model; prioritize fast-path and pipelining |
| Dis VM (portable, type-safe execution) | WASI runtime (roadmap) | Same goal, different mechanism |
| Limbo channels (typed local IPC) | Not yet present | Consider for in-process threading |
| Authentication via auth fid | Not yet designed | Cap’n Proto RPC has no built-in auth; needs design |
References
- Rob Pike, Dave Presotto, Sean Dorward, Bob Flandrena, Ken Thompson, Howard Trickey, Phil Winterbottom. “Plan 9 from Bell Labs.” Computing Systems, Vol. 8, No. 3, Summer 1995, pp. 221-254.
- Rob Pike, Dave Presotto, Ken Thompson, Howard Trickey, Phil Winterbottom. “The Use of Name Spaces in Plan 9.” Operating Systems Review, Vol. 27, No. 2, April 1993, pp. 72-76.
- Plan 9 Manual: intro(1), bind(1), mount(1), intro(5) (the 9P manual section).
- Russ Cox, Eric Grosse, Rob Pike, Dave Presotto, Sean Quinlan. “Security in Plan 9.” USENIX Security 2002.
- Sean Dorward, Rob Pike, Dave Presotto, Dennis Ritchie, Howard Trickey, Phil Winterbottom. “The Inferno Operating System.” Bell Labs Technical Journal, Vol. 2, No. 1, Winter 1997.
- Phil Winterbottom, Rob Pike. “The Design of the Inferno Virtual Machine.” Bell Labs, 1997.
- Vita Nuova. “The Dis Virtual Machine Specification.” 2003.
- Vita Nuova. “The Limbo Programming Language.” 2003.
- Sape Mullender (editor). “The 9P2000 Protocol.” Plan 9 manual, section 5 (intro(5)).
- Kenichi Okada. “9P Resource Sharing Protocol.” IETF Internet-Draft, 2010.