LLVM Target Customization for capOS
Deep research report on creating custom LLVM/Rust/Go targets for a capability-based OS.
Status as of 2026-04-22: capOS still builds kernel and userspace with
x86_64-unknown-none plus linker-script/build flags. A checked-in
x86_64-unknown-capos custom target does not exist yet. Since this report was
first written, PT_TLS parsing, userspace TLS block setup, FS-base
save/restore, the VirtualMemory capability, and a #[thread_local] QEMU
smoke test have landed. Thread creation, a user-controlled FS-base syscall, futexes,
a timer capability, and a Go port remain future work.
Table of Contents
- Custom OS Target Triple
- Calling Conventions
- Relocations
- TLS (Thread-Local Storage) Models
- Rust Target Specification
- Go Runtime Requirements
- Relevance to capOS
1. Custom OS Target Triple
Target Triple Format
LLVM target triples follow the format <arch>-<vendor>-<os> or
<arch>-<vendor>-<os>-<env>:
- arch: x86_64, aarch64, riscv64gc, etc.
- vendor: unknown, apple, pc, etc. (often unknown for custom OSes)
- os: linux, none, redox, hermit, fuchsia, etc.
- env (optional): gnu, musl, eabi, etc.
For capOS, the eventual userspace target triple should be
x86_64-unknown-capos. The kernel should keep using a freestanding target
(x86_64-unknown-none) unless a kernel-specific target file becomes useful
for build hygiene.
What LLVM Needs
LLVM’s target description consists of:
- Target machine: Architecture (instruction set, register file, calling conventions). x86_64 already exists in LLVM.
- Object format: ELF, COFF, Mach-O. capOS uses ELF.
- Relocation model: static, PIC, PIE, dynamic-no-pic.
- Code model: small, kernel, medium, large.
- OS-specific ABI details: Stack alignment, calling convention defaults, TLS model, exception handling mechanism.
LLVM does NOT need kernel-level knowledge of your OS. It needs to know how to generate correct object code for the target environment. The OS name in the triple primarily affects:
- Default calling convention selection
- Default relocation model
- TLS model selection
- Object file format and flags
- C library assumptions (relevant for C compilation, less for Rust no_std)
Creating a New OS in LLVM (Upstream Path)
To add capos as a recognized OS in LLVM itself:
- Add the OS to llvm/include/llvm/TargetParser/Triple.h (the OSType enum)
- Add string parsing in llvm/lib/TargetParser/Triple.cpp
- Define ABI defaults in the relevant target (llvm/lib/Target/X86/)
- Update Clang's driver for the new OS (clang/lib/Driver/ToolChains/, clang/lib/Basic/Targets/)
This is significant upstream work and not necessary initially. The pragmatic path is using Rust’s custom target JSON mechanism (see Section 5).
What Other OSes Do
| OS | LLVM status | Approach |
|---|---|---|
| Redox | Upstream in Rust; no dedicated LLVM OS enum in current LLVM | Full triple x86_64-unknown-redox, Tier 2 in Rust |
| Hermit | Upstream in LLVM and Rust | x86_64-unknown-hermit, Tier 3, unikernel |
| Fuchsia | Upstream in LLVM and Rust | x86_64-unknown-fuchsia, Tier 2 |
| Theseus | Custom target JSON | Uses x86_64-unknown-theseus JSON spec, not upstream |
| Blog OS (phil-opp) | Custom target JSON | Uses JSON target spec, targets x86_64-unknown-none base |
| seL4/Robigalia | Custom target JSON | Modified from x86_64-unknown-none |
Recommendation for capOS: keep the kernel on x86_64-unknown-none.
Introduce a userspace-only custom target JSON when cfg(target_os = "capos")
or toolchain packaging becomes valuable. Do not upstream a capos OS triple
until the userspace ABI is stable.
2. Calling Conventions
LLVM Calling Conventions
LLVM supports numerous calling conventions. The ones relevant to capOS:
| CC | LLVM ID | Description | Relevance |
|---|---|---|---|
| C | 0 | Default C calling convention (System V AMD64 ABI on x86_64) | Primary for interop |
| Fast | 8 | Optimized for internal use, passes in registers | Rust internal use |
| Cold | 9 | Rarely-called functions, callee-save heavy | Error paths |
| GHC | 10 | Glasgow Haskell Compiler, everything in registers | Not relevant |
| HiPE | 11 | Erlang HiPE, similar to GHC | Not relevant |
| WebKit JS | 12 | JavaScript JIT | Not relevant |
| AnyReg | 13 | Dynamic register allocation | JIT compilers |
| PreserveMost | 14 | Caller saves almost nothing | Interrupt handlers |
| PreserveAll | 15 | Caller saves nothing | Context switches |
| Swift | 16 | Swift self/error registers | Not relevant |
| CXX_FAST_TLS | 17 | C++ TLS access optimization | TLS wrappers |
| X86_StdCall | 64 | Windows stdcall | Not relevant |
| X86_FastCall | 65 | Windows fastcall | Not relevant |
| X86_RegCall | 95 | Register-based calling | Performance-critical code |
| X86_INTR | 83 | x86 interrupt handler | IDT handlers |
| Win64 | 79 | Windows x64 calling convention | Not relevant |
System V AMD64 ABI (The Default for capOS)
On x86_64, the System V AMD64 ABI (CC 0, “C”) is the standard:
- Integer args: RDI, RSI, RDX, RCX, R8, R9
- Float args: XMM0-XMM7
- Return: RAX (integer), XMM0 (float)
- Caller-saved: RAX, RCX, RDX, RSI, RDI, R8-R11, XMM0-XMM15
- Callee-saved: RBX, RBP, R12-R15
- Stack alignment: 16-byte at call site
- Red zone: 128 bytes below RSP (unavailable in kernel mode)
capOS already uses this convention – the syscall handler in
kernel/src/arch/x86_64/syscall.rs maps syscall registers to System V
registers before calling syscall_handler.
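The register shuffle this implies is small enough to sketch. The following is a hypothetical illustration of the mapping described above, not the actual kernel/src/arch/x86_64/syscall.rs code; SyscallFrame and dispatch are invented names.

```rust
// Hypothetical sketch: RAX carries the syscall number and RDI-R9 carry
// the arguments, mirroring System V integer argument order, so the
// handler can forward them without shuffling. Names are illustrative.
#[derive(Default)]
struct SyscallFrame {
    rax: u64, // syscall number
    rdi: u64,
    rsi: u64,
    rdx: u64,
    rcx: u64,
    r8: u64,
    r9: u64,
}

fn dispatch(frame: &SyscallFrame) -> (u64, [u64; 6]) {
    // Because the saved registers already match System V argument
    // order, extracting the arguments is a straight copy.
    (
        frame.rax,
        [frame.rdi, frame.rsi, frame.rdx, frame.rcx, frame.r8, frame.r9],
    )
}

fn main() {
    let frame = SyscallFrame { rax: 7, rdi: 1, ..Default::default() };
    let (num, args) = dispatch(&frame);
    assert_eq!(num, 7);
    assert_eq!(args[0], 1);
}
```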
Customizing for a New OS Target
For a custom OS, calling convention customization is usually minimal:
- Kernel code: Disable the red zone (capOS already does this via x86_64-unknown-none, which sets "disable-redzone": true). The red zone is unsafe in interrupt/syscall contexts.
- Userspace code: Standard System V ABI is fine. The red zone is safe in userspace.
- Syscall convention: This is an OS design choice, not an LLVM CC. capOS uses RAX=syscall number and RDI-R9 for arguments (matching System V for easy dispatch). Linux uses a slightly different register mapping (R10 instead of RCX for arg4, because SYSCALL clobbers RCX).
- Interrupt handlers: Use X86_INTR (CC 83) or manual save/restore. capOS currently uses manual asm stubs.
Cross-Language Interop Implications
| Languages | Convention | Notes |
|---|---|---|
| Rust <-> Rust | Rust ABI (unstable) | Internal to a crate, not stable across crates |
| Rust <-> C | extern "C" (System V) | Stable, well-defined. Used for libcapos API |
| Rust <-> Go | Complex (see Section 6) | Go has its own internal ABI (ABIInternal) |
| C <-> Go | extern "C" via cgo | Go’s cgo bridge, heavy overhead |
| Any <-> Kernel | Syscall convention | Register-based, OS-defined, not a CC |
Key point: The System V AMD64 ABI is the lingua franca. All languages
can produce extern "C" functions. capOS should standardize on System V
for all cross-language boundaries and capability invocations.
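To make the lingua franca concrete, a minimal hedged example follows; capos_cap_count is an invented name, not part of the actual libcapos API.

```rust
// Illustrative only: a Rust function exported with the C (System V
// AMD64) calling convention, callable from C directly and from Go via
// cgo. The function name is hypothetical, not the real libcapos surface.
#[no_mangle]
pub extern "C" fn capos_cap_count(base: u32, extra: u32) -> u32 {
    // extern "C" pins the argument registers (EDI, ESI here) and the
    // return register (EAX) per the System V AMD64 ABI, so every
    // language that can emit a C call can reach this symbol.
    base + extra
}

fn main() {
    // From Rust it is callable like any other function.
    assert_eq!(capos_cap_count(3, 4), 7);
}
```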
Go’s internal ABI (ABIInternal, using R14 as the g register) is different
from System V. Go functions called from outside Go must go through a
trampoline. This is handled by the Go runtime, not something capOS needs
to solve at the LLVM level.
3. Relocations
LLVM Relocation Models
| Model | Flag | Description |
|---|---|---|
| static | -relocation-model=static | All addresses resolved at link time. No GOT/PLT. |
| pic | -relocation-model=pic | Position-independent code. Uses GOT for globals, PLT for calls. |
| dynamic-no-pic | -relocation-model=dynamic-no-pic | Like static but with dynamic linking support (macOS legacy). |
| ropi | -relocation-model=ropi | Read-only position-independent (ARM embedded). |
| rwpi | -relocation-model=rwpi | Read-write position-independent (ARM embedded). |
| ropi-rwpi | -relocation-model=ropi-rwpi | Both ROPI and RWPI (ARM embedded). |
Code Models (x86_64)
| Model | Flag | Address Range | Use Case |
|---|---|---|---|
| small | -code-model=small | 0 to 2GB | Userspace default |
| kernel | -code-model=kernel | Top 2GB (negative 32-bit) | Higher-half kernel |
| medium | -code-model=medium | Code in low 2GB, data anywhere | Large data sets |
| large | -code-model=large | No assumptions | Maximum flexibility, worst performance |
What capOS Currently Uses
From .cargo/config.toml:
[target.x86_64-unknown-none]
rustflags = ["-C", "link-arg=-Tkernel/linker-x86_64.ld", "-C", "code-model=kernel", "-C", "relocation-model=static"]
- Kernel: code-model=kernel + relocation-model=static. Correct for a higher-half kernel at 0xffffffff80000000. All kernel symbols are in the top 2GB of virtual address space, so 32-bit sign-extended addressing works.
- Init/demos/capos-rt userspace: The standalone userspace crates also target x86_64-unknown-none, pass -C relocation-model=static, and select their linker scripts through per-crate build.rs files. The binaries are loaded at 0x200000. The pinned local toolchain (rustc 1.97.0-nightly, LLVM 22.1.2) prints x86_64-unknown-none with llvm-target = "x86_64-unknown-none-elf", code-model = "kernel", a soft-float ABI, inline stack probes, and static-PIE-capable defaults. A future x86_64-unknown-capos userspace target should set code-model = "small" explicitly instead of inheriting the freestanding, kernel-oriented default.
Kernel vs. Userspace Requirements
Kernel:
- Static relocations, kernel code model.
- No PIC overhead needed – the kernel is loaded at a known address.
- The linker script places everything in the higher half.
- This is the correct and standard approach (Linux kernel does the same).
Userspace (current – static binaries):
- Static relocations. A future custom userspace target should choose the small code model explicitly.
- Simple, no runtime relocator needed.
- Binary is loaded at a fixed address (0x200000).
- Works perfectly for single-binary-per-address-space.
Userspace (future – if shared libraries or ASLR desired):
- PIE (Position-Independent Executable) = position-independent code generation applied to the main executable itself, so it can be loaded at any base address.
- Requires a dynamic loader or kernel-side relocator.
- Enables ASLR (Address Space Layout Randomization) for security.
- Adds GOT indirection overhead (typically < 5% performance impact).
Position-Independent Code in a Capability Context
PIC/PIE is relevant to capOS for several reasons:
- ASLR: PIE enables loading binaries at random addresses, making ROP attacks harder. Even in a capability system, defense-in-depth matters.
- Shared libraries: If capOS ever supports shared objects (e.g., a shared libcapos.so), PIC is required for the shared library.
- WASI/Wasm: Not relevant – Wasm has its own memory model.
- Multiple instances: With static linking, two instances of the same binary can share read-only pages (text, rodata) if loaded at the same address. PIC/PIE allows sharing even at different addresses (copy-on-write for the GOT).
Recommendation for capOS: Keep static relocation for now. Consider PIE for userspace when implementing ASLR (after threading and IPC are stable). The kernel should remain static forever.
4. TLS (Thread-Local Storage) Models
LLVM TLS Models
LLVM supports four TLS models, in order from most dynamic to most constrained:
| Model | Description | Runtime Requirement | Performance |
|---|---|---|---|
| general-dynamic | Any module, any time | Full __tls_get_addr via dynamic linker | Slowest (function call per access) |
| local-dynamic | Same module, any time | __tls_get_addr for module base, then offset | Slow (one call per module per thread) |
| initial-exec | Only modules loaded at startup | GOT slot populated by dynamic linker | Fast (one memory load) |
| local-exec | Main executable only | Direct FS/GS offset, known at link time | Fastest (single instruction) |
How TLS Works on x86_64
On x86_64, TLS is accessed via the FS segment register:
- The OS sets the FS base address for each thread (via MSR_FS_BASE or arch_prctl(ARCH_SET_FS)).
- TLS variables are accessed as offsets from the FS base:
  - local-exec: mov %fs:OFFSET, %rax (offset known at link time)
  - initial-exec: mov var@GOTTPOFF(%rip), %rax; mov %fs:(%rax), %rdx (offset loaded from a GOT slot the dynamic linker filled in)
  - general-dynamic: call __tls_get_addr (returns a pointer into the thread's TLS block)
Which Model for capOS?
Kernel:
- The kernel does not use compiler TLS. Current TLS support is for loaded userspace ELF images only.
- For SMP: per-CPU data via the GS segment register (the standard approach). Set MSR_GS_BASE on each CPU to point to a PerCpu struct. swapgs on kernel entry switches between user and kernel GS base.
- Kernel TLS model: Not applicable (per-CPU data is accessed via GS, not the compiler's TLS mechanism).
Userspace (static binaries, no dynamic linker):
- local-exec is the only correct choice. There’s no dynamic linker to resolve TLS relocations, so general-dynamic and initial-exec won’t work.
- Implemented for the current single-threaded process model: the ELF parser records PT_TLS, the loader maps a Variant II TLS block plus TCB self pointer, and the scheduler saves/restores FS base on context switch.
- Still missing for future threading and Go: a syscall or capability-authorized operation equivalent to arch_prctl(ARCH_SET_FS) so a runtime can set each OS thread's FS base itself.
Userspace (with dynamic linker, future):
- initial-exec for the main executable and preloaded libraries.
- general-dynamic for dlopen()-loaded libraries.
- Requires implementing __tls_get_addr in the dynamic linker.
TLS Initialization Sequence
For a statically-linked userspace binary with local-exec TLS:
1. Kernel creates thread
2. Kernel allocates TLS block (size from ELF TLS program header)
3. Kernel copies .tdata (initialized TLS) into TLS block
4. Kernel zeros .tbss (uninitialized TLS) in TLS block
5. Kernel sets FS base = TLS block address (writes MSR_FS_BASE)
6. Thread starts executing; %fs:OFFSET accesses TLS directly
The ELF file contains two TLS sections:
- .tdata (part of the PT_TLS segment, initialized thread-local data)
- .tbss (zero-initialized thread-local data, like .bss but per-thread)
The PT_TLS program header tells the loader:
- Virtual address and file offset of .tdata
- p_memsz = total TLS size (including .tbss)
- p_filesz = size of .tdata only
- p_align = required alignment
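The arithmetic the loader does with those fields is compact. Below is a hedged sketch of a Variant II static TLS layout computation for x86_64 (the TLS image sits directly below the thread pointer, and %fs:0 holds the TP's own address); the function names are illustrative, not capOS's actual loader code, and p_align is assumed to be a power of two as ELF requires.

```rust
// Round x up to a power-of-two alignment (ELF guarantees p_align is a
// power of two, or 0/1 meaning "no alignment constraint").
fn align_up(x: u64, align: u64) -> u64 {
    (x + align - 1) & !(align - 1)
}

/// Hypothetical layout helper: returns (total allocation size, offset
/// of the thread pointer within the block). The first p_filesz bytes
/// of the block receive a copy of .tdata; bytes from p_filesz to
/// p_memsz are .tbss and stay zeroed.
fn variant2_layout(p_memsz: u64, p_align: u64) -> (u64, u64) {
    let tls_size = align_up(p_memsz, p_align.max(1));
    let tcb_size = 8; // one self pointer, written at %fs:0
    (tls_size + tcb_size, tls_size)
}

fn main() {
    // 48 bytes of TLS data, 16-byte alignment: TP at offset 48, 56 total.
    assert_eq!(variant2_layout(48, 16), (56, 48));
    // p_memsz is rounded up to the alignment before placing the TP.
    assert_eq!(variant2_layout(50, 16), (72, 64));
}
```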
FS/GS Base Register Usage Plan
| Register | Used By | Purpose |
|---|---|---|
| FS | Userspace threads | Thread-local storage (set per-thread by kernel) |
| GS | Kernel (via swapgs) | Per-CPU data (set per-CPU during boot) |
This is the standard Linux convention and what Go expects (Go uses
arch_prctl(ARCH_SET_FS) to set the FS base for each OS thread).
What capOS Has and Still Needs
- Implemented: parse PT_TLS in capos-lib/src/elf.rs.
- Implemented: allocate/map a TLS block during process image load in kernel/src/spawn.rs.
- Implemented: copy .tdata, zero .tbss, and write the TCB self pointer for the current Variant II static TLS layout.
- Implemented: save/restore FS base through kernel/src/sched.rs and kernel/src/arch/x86_64/tls.rs.
- Still needed: an arch_prctl(ARCH_SET_FS) equivalent for Go's settls() and future multi-threaded userspace.
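When that operation is added, one kernel-side detail worth noting is address validation: writing a non-canonical value to MSR_FS_BASE faults. The sketch below is hypothetical (invented names, validation step only, no wrmsr), assuming the usual 48-bit canonical split.

```rust
// Highest canonical lower-half address with 48-bit virtual addressing.
// A kernel accepting an FS base from userspace must reject anything
// above this before touching MSR_FS_BASE, or wrmsr raises #GP.
const USER_VA_MAX: u64 = 0x0000_7fff_ffff_ffff;

// Hypothetical validation step for an ARCH_SET_FS-style operation.
fn validate_fs_base(addr: u64) -> Result<u64, &'static str> {
    if addr > USER_VA_MAX {
        return Err("non-canonical or kernel-half FS base");
    }
    Ok(addr)
}

fn main() {
    assert!(validate_fs_base(0x7000_0000).is_ok());
    // Kernel-half / non-canonical addresses are refused.
    assert!(validate_fs_base(0xffff_8000_0000_0000).is_err());
}
```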
5. Rust Target Specification
How Custom Targets Work
Rust supports custom targets via JSON specification files. The workflow:
- Create a <target-name>.json file
- Pass it to rustc: --target path/to/x86_64-unknown-capos.json
- Use with cargo via -Zbuild-std to build core/alloc/std from source
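For illustration, the cargo side of that workflow could look like the fragment below; the target path and build-std feature list are assumptions for this sketch, not the current checked-in .cargo/config.toml.

```toml
# Hypothetical .cargo/config.toml sketch: path and features are assumed.
[build]
target = "targets/x86_64-unknown-capos.json"

[unstable]
build-std = ["core", "alloc"]
build-std-features = ["compiler-builtins-mem"]
```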
Target lookup priority:
- Built-in target names
- File path (if the target string contains / or .json)
- RUST_TARGET_PATH environment variable directories
The Rust target JSON schema is explicitly unstable. Generate examples from the
pinned compiler with rustc -Z unstable-options --print target-spec-json and
validate against that same compiler’s target-spec-json-schema before checking
in a target file.
Viewing Existing Specs
# Print the JSON spec for a built-in target:
rustc +nightly -Z unstable-options --target=x86_64-unknown-none --print target-spec-json
# Print the JSON schema for all available fields:
rustc +nightly -Z unstable-options --print target-spec-json-schema
Example: x86_64-unknown-capos Kernel Target
Based on the current x86_64-unknown-none target, with capOS-specific
adjustments. This is a sketch; regenerate from the pinned rustc schema before
using it.
{
"llvm-target": "x86_64-unknown-none-elf",
"metadata": {
"description": "capOS kernel (x86_64)",
"tier": 3,
"host_tools": false,
"std": false
},
"data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
"arch": "x86_64",
"cpu": "x86-64",
"target-endian": "little",
"target-pointer-width": 64,
"target-c-int-width": 32,
"os": "none",
"env": "",
"vendor": "unknown",
"linker-flavor": "gnu-lld",
"linker": "rust-lld",
"pre-link-args": {
"gnu-lld": ["-Tkernel/linker-x86_64.ld"]
},
"features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2,+soft-float",
"disable-redzone": true,
"panic-strategy": "abort",
"code-model": "kernel",
"relocation-model": "static",
"rustc-abi": "softfloat",
"executables": true,
"exe-suffix": "",
"has-thread-local": false,
"position-independent-executables": false,
"static-position-independent-executables": false,
"plt-by-default": false,
"max-atomic-width": 64,
"stack-probes": { "kind": "inline" }
}
Example: x86_64-unknown-capos Userspace Target
{
"llvm-target": "x86_64-unknown-none-elf",
"metadata": {
"description": "capOS userspace (x86_64)",
"tier": 3,
"host_tools": false,
"std": false
},
"data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
"arch": "x86_64",
"cpu": "x86-64",
"target-endian": "little",
"target-pointer-width": 64,
"target-c-int-width": 32,
"os": "capos",
"env": "",
"vendor": "unknown",
"linker-flavor": "gnu-lld",
"linker": "rust-lld",
"pre-link-args": {
"gnu-lld": ["-Tinit/linker.ld"]
},
"features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2,+soft-float",
"disable-redzone": false,
"panic-strategy": "abort",
"code-model": "small",
"relocation-model": "static",
"rustc-abi": "softfloat",
"executables": true,
"exe-suffix": "",
"has-thread-local": true,
"position-independent-executables": false,
"static-position-independent-executables": false,
"max-atomic-width": 64,
"plt-by-default": false,
"stack-probes": { "kind": "inline" },
"tls-model": "local-exec"
}
Key JSON Fields
| Field | Purpose | Typical Values |
|---|---|---|
| llvm-target | LLVM triple for code generation | x86_64-unknown-none-elf (reuse existing backend) |
| os | OS name (affects cfg(target_os = "...")) | "none", "capos", "linux" |
| arch | Architecture name | "x86_64", "aarch64" |
| data-layout | LLVM data layout string | Copy from same-arch target |
| linker-flavor | Which linker to use | "gnu-lld", "gcc", "msvc" |
| linker | Linker binary | "rust-lld", "ld.lld" |
| features | CPU features to enable/disable | Disable SIMD/FPU until context switching saves that state |
| disable-redzone | Disable System V red zone | true for kernel, false for userspace |
| code-model | LLVM code model | "kernel", "small" |
| relocation-model | LLVM relocation model | "static", "pic" |
| panic-strategy | How to handle panics | "abort", "unwind" |
| has-thread-local | Enable #[thread_local] | true for userspace now that PT_TLS/FS base works |
| tls-model | Default TLS model | "local-exec" for static binaries |
| max-atomic-width | Largest atomic type (bits) | 64 for x86_64 |
| pre-link-args | Arguments passed to the linker before user args | Linker script path |
| position-independent-executables | Generate PIE by default | false for now |
| exe-suffix | Executable file extension | "" for ELF |
| stack-probes | Stack overflow detection mechanism | {"kind": "inline"} in the current freestanding x86_64 spec |
no_std vs std Support Path
Current state: capOS uses no_std + alloc. This works with any
target, including x86_64-unknown-none.
Path to std support (what Redox, Hermit, and Fuchsia did):
- Phase 1: Custom target with os: "capos" (current report). Use -Zbuild-std=core,alloc to build core and alloc. No std.
- Phase 2: Add capOS to Rust's std library. This requires:
  - Adding mod capos under library/std/src/sys/ with OS-specific implementations of filesystem, networking, threads, time, stdio, process spawning, etc.
  - Each of these maps to capOS capabilities
  - Using cfg(target_os = "capos") throughout std
  - Building with -Zbuild-std=std
- Phase 3: Upstream the target (optional). Submit the target spec and std implementations to the Rust project. Requires sustained maintenance.
What Redox did: Redox implemented a full POSIX-like userspace (relibc)
and added std support by implementing the sys module in terms of relibc
syscalls. This made Redox a Tier 2 target with pre-built std artifacts.
What Hermit did: Hermit is a unikernel, so std is implemented directly in terms of Hermit’s kernel-level APIs. Tier 3, community maintained.
What Fuchsia did: Fuchsia implemented std using Fuchsia’s native
zircon syscalls (handles, channels, VMOs – similar in spirit to
capabilities). Tier 2.
Recommendation for capOS: Stay on no_std + alloc with the custom
target JSON. std support is a large effort that should wait until the
syscall surface is stable and threading works. When the time comes, Fuchsia’s
approach (std over native capability syscalls) is the best model, since
Fuchsia’s handle-based API is conceptually close to capOS’s capabilities.
Other OS Projects Reference
| OS | Target | Tier | std | Approach |
|---|---|---|---|---|
| Redox | x86_64-unknown-redox | 2 | Yes | relibc (custom libc) over Redox syscalls |
| Hermit | x86_64-unknown-hermit | 3 | Yes | std directly over kernel API |
| Fuchsia | x86_64-unknown-fuchsia | 2 | Yes | std over zircon handles (capability-like) |
| Theseus | x86_64-unknown-theseus | N/A | No | Custom JSON, no_std, research OS |
| Blog OS | Custom JSON | N/A | No | Based on x86_64-unknown-none |
| MOROS | Custom JSON | N/A | No | Simple hobby OS |
6. Go Runtime Requirements
Go’s Runtime Architecture
Go’s runtime is essentially a userspace operating system. It manages goroutine scheduling, garbage collection, memory allocation, and I/O multiplexing. The runtime interfaces with the actual OS through a narrow set of functions that each GOOS must implement.
Minimum OS Interface for a Go Port
Based on analysis of runtime/os_linux.go, runtime/os_plan9.go, and
runtime/os_js.go, here is the minimum interface:
Tier 1: Absolute Minimum (single-threaded, like GOOS=js)
These functions are needed for “Hello, World!”:
func osinit() // OS initialization
func write1(fd uintptr, p unsafe.Pointer, n int32) int32 // stdout/stderr output
func exit(code int32) // process termination
func usleep(usec uint32) // sleep (can be no-op initially)
func readRandom(r []byte) int // random data (for maps, etc.)
func goenvs() // environment variables
func mpreinit(mp *m) // pre-init new M on parent thread
func minit() // init new M on its own thread
func unminit() // undo minit
func mdestroy(mp *m) // destroy M resources
Plus memory management (in runtime/mem_*.go):
func sysAllocOS(n uintptr) unsafe.Pointer // allocate memory (mmap)
func sysFreeOS(v unsafe.Pointer, n uintptr) // free memory (munmap)
func sysReserveOS(v unsafe.Pointer, n uintptr) unsafe.Pointer // reserve VA range
func sysMapOS(v unsafe.Pointer, n uintptr) // commit reserved pages
func sysUsedOS(v unsafe.Pointer, n uintptr) // mark as used
func sysUnusedOS(v unsafe.Pointer, n uintptr) // mark as unused (madvise)
func sysFaultOS(v unsafe.Pointer, n uintptr) // remove access
func sysHugePageOS(v unsafe.Pointer, n uintptr) // hint: use huge pages
Tier 2: Multi-threaded (real goroutines)
func newosproc(mp *m) // create OS thread (clone)
func exitThread(wait *atomic.Uint32) // exit current thread
func futexsleep(addr *uint32, val uint32, ns int64) // futex wait
func futexwakeup(addr *uint32, cnt uint32) // futex wake
func settls() // set FS base for TLS
func nanotime1() int64 // monotonic nanosecond clock
func walltime() (sec int64, nsec int32) // wall clock time
func osyield() // sched_yield
Tier 3: Full Runtime (signals, profiling, network poller)
func sigaction(sig uint32, new *sigactiont, old *sigactiont)
func signalM(mp *m, sig int) // send signal to thread
func setitimer(mode int32, new *itimerval, old *itimerval)
func netpollopen(fd uintptr, pd *pollDesc) uintptr
func netpoll(delta int64) (gList, int32)
func netpollBreak()
Linux Syscalls Used by Go Runtime (Complete List)
From runtime/sys_linux_amd64.s:
| Syscall | # | Go Wrapper | capOS Equivalent |
|---|---|---|---|
| read | 0 | runtime.read | Store cap |
| write | 1 | runtime.write1 | Console cap |
| close | 3 | runtime.closefd | Cap drop |
| mmap | 9 | runtime.sysMmap | VirtualMemory cap |
| munmap | 11 | runtime.sysMunmap | VirtualMemory.unmap |
| brk | 12 | runtime.sbrk0 | VirtualMemory cap |
| rt_sigaction | 13 | runtime.rt_sigaction | Signal cap (future) |
| rt_sigprocmask | 14 | runtime.rtsigprocmask | Signal cap (future) |
| sched_yield | 24 | runtime.osyield | sys_yield |
| mincore | 27 | runtime.mincore | VirtualMemory.query |
| madvise | 28 | runtime.madvise | Future VirtualMemory decommit/query semantics, or unmap/remap policy |
| nanosleep | 35 | runtime.usleep | Timer cap |
| setitimer | 38 | runtime.setitimer | Timer cap |
| getpid | 39 | runtime.getpid | Process info |
| clone | 56 | runtime.clone | Thread cap |
| exit | 60 | runtime.exit | sys_exit |
| sigaltstack | 131 | runtime.sigaltstack | Not needed initially |
| arch_prctl | 158 | runtime.settls | sys_arch_prctl (set FS base) |
| gettid | 186 | runtime.gettid | Thread info |
| futex | 202 | runtime.futex | sys_futex |
| sched_getaffinity | 204 | runtime.sched_getaffinity | CPU info |
| timer_create | 222 | runtime.timer_create | Timer cap |
| timer_settime | 223 | runtime.timer_settime | Timer cap |
| timer_delete | 226 | runtime.timer_delete | Timer cap |
| clock_gettime | 228 | runtime.nanotime1 | Timer cap |
| exit_group | 231 | runtime.exit | sys_exit |
| tgkill | 234 | runtime.tgkill | Thread signal (future) |
| openat | 257 | runtime.open | Namespace cap |
| pipe2 | 293 | runtime.pipe2 | IPC cap |
Go’s TLS Model
Go uses arch_prctl(ARCH_SET_FS, addr) to set the FS segment base for
each OS thread. The convention:
- FS base points to the thread's m.tls array
- The goroutine pointer g is stored at -8(FS) (ELF TLS convention)
- In Go's ABIInternal, R14 caches the g pointer for performance
- On signal entry or thread start, g is loaded from TLS into R14
Go does NOT use the compiler’s TLS mechanisms (no __thread or
thread_local!). It manages TLS entirely in its own runtime via the FS
register.
For capOS, this means the kernel needs:
- An arch_prctl(ARCH_SET_FS)-equivalent syscall
- Save/restore of the FS base on context switch
- Each thread's FS base independently settable
Adding GOOS=capos to Go
Files that need to be created/modified in a Go fork:
src/runtime/
os_capos.go // osinit, newosproc, futexsleep, etc.
os_capos_amd64.go // arch-specific OS functions
sys_capos_amd64.s // syscall wrappers in assembly
mem_capos.go // sysAlloc/sysFree/etc. over VirtualMemory cap
signal_capos.go // signal stubs (no real signals initially)
stubs_capos.go // misc stubs
netpoll_capos.go // network poller (stub initially)
defs_capos.go // OS-level constants
vdso_capos.go // VDSO stubs (no VDSO)
src/syscall/
syscall_capos.go // Go's syscall package
zsyscall_capos_amd64.go
src/internal/platform/
(modifications to supported.go, zosarch.go)
src/cmd/dist/
(modifications to add capOS to known OS list)
Estimated: ~2000-3000 lines for Phase 1 (single-threaded).
Feasibility Assessment
| Feature | Difficulty | Blocked On |
|---|---|---|
| Hello World (write + exit) | Easy | Console capability plus exit syscall |
| Memory allocator (mmap) | Medium | VirtualMemory capability exists; Go glue and any missing query/decommit semantics remain |
| Single-threaded goroutines (M=1) | Medium | VirtualMemory cap + timer |
| Multi-threaded (real threads) | Hard | Kernel thread support, futex, runtime-controlled FS base |
| Network poller | Hard | Async cap invocation, networking stack |
| Signal-based preemption | Hard | Signal delivery mechanism |
| Full stdlib | Very Hard | POSIX layer or native cap wrappers |
7. Relevance to capOS
Practical Scope of Work
Phase 1: Custom Target JSON (Low effort, high value)
What: Create a userspace x86_64-unknown-capos.json target spec. Keep
the kernel on x86_64-unknown-none unless a kernel JSON proves useful.
Why: Replaces the current approach of using x86_64-unknown-none with
rustflags overrides. Makes the build cleaner, enables cfg(target_os = "capos")
for conditional compilation, and is the foundation for everything else.
Effort: 1-2 hours for an initial file, plus recurring maintenance because Rust target JSON fields are not stable.
Blockers: None. Not required for the current no_std runtime path.
Phase 2: TLS Support (mostly landed, required for Go)
What: Parse PT_TLS from ELF, allocate per-thread TLS blocks, set FS base
on context switch, add arch_prctl-equivalent syscall.
Why: Required for Go runtime (Go’s settls() sets FS base), for Rust
#[thread_local] in userspace, and for C’s __thread.
Current state: PT_TLS parsing, static TLS mapping, FS-base context-switch
state, and a Rust #[thread_local] smoke test are implemented. Remaining work is
the runtime-controlled FS-base operation and the thread model that makes it
per-thread rather than per-process.
Blockers: Thread support for the multi-threaded case.
Phase 3: VirtualMemory Capability (implemented baseline, required for Go)
What: Implement the VirtualMemory capability interface. The current schema has map, unmap, and protect; Go may need decommit/query semantics later.
Why: Go’s memory allocator (sysAlloc, sysReserve, sysMap, etc.)
needs mmap-like functionality. This is the single biggest kernel-side
requirement for Go.
Current state: VirtualMemoryCap implements map/unmap/protect over the
existing page-table code with ownership tracking and quota checks. Go-specific
work still has to map runtime sysAlloc/sysReserve/sysMap expectations
onto that interface.
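One way to picture that remaining glue: Go's allocator separates reserving address space from committing it, which a map/unmap/protect interface can express as "map with no access" followed by "protect to read/write". The trait and mock below are hypothetical sketches under that assumption, not the actual VirtualMemoryCap schema.

```rust
use std::collections::HashMap;

// Hypothetical protection levels; the real schema may differ.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Prot {
    None,
    ReadWrite,
}

// Sketch of a map/protect-style interface, not the real capability API.
trait VirtualMemory {
    fn map(&mut self, addr: u64, len: u64, prot: Prot) -> Result<(), &'static str>;
    fn protect(&mut self, addr: u64, len: u64, prot: Prot) -> Result<(), &'static str>;
}

// Go's sysReserveOS: claim address space without committing -> map PROT_NONE.
fn sys_reserve(vm: &mut dyn VirtualMemory, addr: u64, len: u64) -> Result<(), &'static str> {
    vm.map(addr, len, Prot::None)
}

// Go's sysMapOS: commit previously reserved pages -> flip to read/write.
fn sys_map(vm: &mut dyn VirtualMemory, addr: u64, len: u64) -> Result<(), &'static str> {
    vm.protect(addr, len, Prot::ReadWrite)
}

// Toy in-memory model used only to exercise the mapping above.
struct MockVm {
    regions: HashMap<u64, (u64, Prot)>,
}

impl VirtualMemory for MockVm {
    fn map(&mut self, addr: u64, len: u64, prot: Prot) -> Result<(), &'static str> {
        self.regions.insert(addr, (len, prot));
        Ok(())
    }
    fn protect(&mut self, addr: u64, len: u64, prot: Prot) -> Result<(), &'static str> {
        match self.regions.get_mut(&addr) {
            Some(r) if r.0 == len => {
                r.1 = prot;
                Ok(())
            }
            _ => Err("protect on unmapped or mismatched region"),
        }
    }
}

fn main() {
    let mut vm = MockVm { regions: HashMap::new() };
    sys_reserve(&mut vm, 0x4000_0000, 0x10000).unwrap();
    sys_map(&mut vm, 0x4000_0000, 0x10000).unwrap();
    assert_eq!(vm.regions[&0x4000_0000].1, Prot::ReadWrite);
}
```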
Blockers: None for the baseline capability; timer/futex/threading still block useful Go.
Phase 4: Futex Operation (Low-medium effort, required for Go threading)
What: Implement futex(WAIT) and futex(WAKE) as a fast
capability-authorized kernel operation.
Why: Go’s runtime synchronization (lock_futex.go) is built on futexes.
The entire goroutine scheduler depends on futex-based sleeping.
Effort: ~100-200 lines for the first private-futex path. A wait queue keyed by address-space + userspace address is enough initially.
Blockers: Futex wait-queue design and, for full Go threading, the thread scheduler.
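The wait-queue shape this suggests can be sketched as follows, with locking and the atomic read of the user word elided (real code must read the word under the queue lock to avoid lost wakeups); all names here are hypothetical.

```rust
use std::collections::{HashMap, VecDeque};

// Queue key as described above: (address-space id, user virtual address).
type Key = (u64, u64);

// Hypothetical private-futex table; tid handling is simplified.
#[derive(Default)]
struct FutexTable {
    queues: HashMap<Key, VecDeque<u64>>,
}

impl FutexTable {
    /// futex(WAIT): block `tid` only if the user word still equals
    /// `expected` (`current` stands in for the atomic read here).
    /// Returns true if the caller should deschedule the thread.
    fn wait(&mut self, key: Key, current: u32, expected: u32, tid: u64) -> bool {
        if current != expected {
            return false; // value changed: return to userspace immediately
        }
        self.queues.entry(key).or_default().push_back(tid);
        true
    }

    /// futex(WAKE): wake up to `count` waiters, FIFO; returns who woke.
    fn wake(&mut self, key: Key, count: u32) -> Vec<u64> {
        let mut woken = Vec::new();
        if let Some(q) = self.queues.get_mut(&key) {
            for _ in 0..count {
                match q.pop_front() {
                    Some(tid) => woken.push(tid),
                    None => break,
                }
            }
            if q.is_empty() {
                self.queues.remove(&key);
            }
        }
        woken
    }
}

fn main() {
    let mut table = FutexTable::default();
    let key = (1, 0x7000_1000);
    assert!(table.wait(key, 0, 0, 101)); // value matches: thread sleeps
    assert!(!table.wait(key, 1, 0, 102)); // value changed: no sleep
    assert_eq!(table.wake(key, 8), vec![101]);
}
```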
Phase 5: Kernel Threading (High effort, required for Go GOMAXPROCS>1)
What: Multiple threads per process sharing address space and cap table.
Why: Go’s newosproc() creates OS threads via clone(). Without real
threads, Go is limited to GOMAXPROCS=1.
Effort: ~500-800 lines. Major scheduler extension.
Blockers: Scheduler, per-CPU data, SMP support.
Biggest Blockers for Go
In priority order after the 2026-04-22 TLS and VirtualMemory work:
1. Timer / monotonic clock – Go's scheduler needs nanotime() for goroutine scheduling decisions. Without a timer, Go cannot preempt goroutines or manage timeouts.
2. Runtime-controlled FS base – Go calls arch_prctl(ARCH_SET_FS) on every new thread. capOS can load static ELF TLS today, but Go still needs a way to set the runtime's own TLS base.
3. Futex – Go's M:N scheduler depends on futex for sleeping/waking OS threads. Without futex, Go falls back to spin-waiting (wasteful) or simply cannot block.
4. Thread creation – Required for GOMAXPROCS > 1. Phase 1 Go can work single-threaded.
5. Go runtime port glue – map sysAlloc/write1/exit/random/env/time to the capOS runtime and capabilities.
Biggest Blockers for C
C is much simpler than Go:
- Linker and toolchain setup – Need a cross-compilation toolchain targeting capOS (Clang with the custom target, or GCC cross-compiler).
- libcapos.a with C headers – a Rust library with an extern "C" API.
- musl integration (optional) – For full libc, replace musl's __syscall() with capability invocations.
Recommended Implementation Order
1. Custom userspace target JSON [optional build hygiene]
|
2. VirtualMemory capability [done: baseline map/unmap/protect]
|
3. TLS support (PT_TLS, FS base) [done for static ELF processes]
|
4. Futex authority cap + measured ABI [extends scheduler]
|
5. Timer capability (monotonic clock) [extends PIT/HPET driver]
|
6. Go Phase 1: minimal GOOS=capos [single-threaded, M=1]
|
7. Kernel threading [major scheduler work]
|
8. Go Phase 2: multi-threaded [GOMAXPROCS>1, concurrent GC]
|
9. C toolchain + libcapos [parallel with Go work]
|
10. Go Phase 3: network poller [depends on networking stack]
Steps 1-5 are kernel prerequisites. Step 6 is the Go fork. Steps 7-10 are incremental improvements that can proceed in parallel.
Key Architectural Decisions for capOS
- Keep x86_64-unknown-none for the kernel, x86_64-unknown-capos for userspace. The kernel does not benefit from a custom OS target (it's freestanding). Userspace benefits from cfg(target_os = "capos").
- Use the local-exec TLS model for static binaries. No dynamic linker means no general-dynamic or initial-exec TLS. local-exec is zero-overhead.
- Implement FS base save/restore early. Both Go and Rust #[thread_local] need it. It's a small addition to context switch code.
- VirtualMemory cap stays on the Go critical path. The baseline exists; the Go port still needs exact runtime allocator semantics and any missing query/decommit behavior.
- Futex is the synchronization primitive. Both Go and any future pthreads implementation need futex. Keep authority capability-based, but measure whether the hot path should use a compact transport operation rather than generic Cap'n Proto method dispatch.
- Signals can be deferred. Go can start with cooperative-only preemption (no SIGURG). Signal delivery is complex and can come much later.