Proposal: Hardware Abstraction and Cloud Deployment

How capOS goes from “boots in QEMU” to “boots on a real cloud VM” (GCP, AWS, Azure). This covers the hardware abstraction infrastructure missing between the current QEMU-only kernel and real x86_64 hardware, plus the build system changes needed to produce deployable images.

Depends on: Kernel Networking Smoke Test (for PCI enumeration), Stage 5 (for LAPIC timer), Stage 7 / SMP proposal Phase A (for LAPIC init).

Complements: Networking proposal (extends virtio-net to cloud NICs), Storage proposal (extends virtio-blk to NVMe), SMP proposal (LAPIC infrastructure shared).


Current State

The kernel boots via Limine UEFI, outputs to COM1 serial, and relies on QEMU-specific features (isa-debug-exit). There is no PCI, no ACPI, and no interrupt controller beyond the legacy PIC implicitly left configured by Limine. The only build artifact is an ISO.

What Cloud VMs Provide

GCP (n2-standard), AWS (m6i/c7i), and Azure (Dv5) all expose:

| Resource | Cloud interface | capOS status |
|---|---|---|
| Boot firmware | UEFI (all three) | Limine UEFI works |
| Serial console | COM1 (0x3F8) | Works (serial.rs) |
| Boot media | GPT disk image (raw/VMDK/VHD) | Missing (ISO only) |
| Storage | NVMe (EBS, PD, Managed Disk) | Missing |
| NIC | ENA (AWS), gVNIC (GCP), MANA (Azure) | Missing |
| Virtio NIC | GCP (fallback), some bare-metal | Missing (planned) |
| Timer | LAPIC, TSC, HPET | Missing |
| Interrupt delivery | I/O APIC, MSI/MSI-X | Missing |
| Device discovery | ACPI + PCI/PCIe | Missing |
| Display | None (headless) | N/A |

What Already Works

  • UEFI boot – Limine ISO includes BOOTX64.EFI. The boot path itself is cloud-compatible.
  • Serial output – all three clouds expose COM1. gcloud compute instances get-serial-port-output, aws ec2 get-console-output, and Azure serial console all read from it.
  • x86_64 long mode – cloud VMs are KVM-based x86_64. Architecture matches.

Phase 1: Bootable Disk Image

Goal: Produce a GPT disk image that cloud VMs can boot from, alongside the existing ISO for QEMU.

The Problem

Cloud VMs boot from disk images, not ISOs. Each cloud has a preferred format:

| Cloud | Image format | Import method |
|---|---|---|
| GCP | raw (tar.gz) | gcloud compute images create --source-uri=gs://... |
| AWS | raw, VMDK, VHD | aws ec2 import-image or register-image with EBS snapshot |
| Azure | VHD (fixed size) | az image create --source |

All require a GPT-partitioned disk with an EFI System Partition (ESP) containing the bootloader.

Disk Layout

GPT disk image (64 MB minimum)
  Partition 1: EFI System Partition (FAT32, ~32 MB)
    /EFI/BOOT/BOOTX64.EFI     (Limine UEFI loader)
    /limine.conf               (bootloader config)
    /boot/kernel               (capOS kernel ELF)
    /boot/init                 (init process ELF)
  Partition 2: (reserved for future use -- persistent store backing)

Build Tooling

New Makefile target make image using standard tools:

IMAGE := capos.img
IMAGE_SIZE := 64  # MB

image: kernel init $(LIMINE_DIR)
	# Create raw disk image
	dd if=/dev/zero of=$(IMAGE) bs=1M count=$(IMAGE_SIZE)
	# Partition with GPT + ESP
	sgdisk -n 1:2048:+32M -t 1:ef00 $(IMAGE)
	# Format ESP as FAT32, create directories, copy files
	# (mtools avoids root/loop mounts; mkfs.fat + loop mount also works)
	mformat -i $(IMAGE)@@1M -F -T 65536 ::
	mmd -i $(IMAGE)@@1M ::/EFI ::/EFI/BOOT ::/boot
	mcopy -i $(IMAGE)@@1M $(LIMINE_DIR)/BOOTX64.EFI ::/EFI/BOOT/
	mcopy -i $(IMAGE)@@1M limine.conf ::/
	mcopy -i $(IMAGE)@@1M $(KERNEL) ::/boot/kernel
	mcopy -i $(IMAGE)@@1M $(INIT) ::/boot/init
	# Install Limine
	# bios-install is for hybrid BIOS/UEFI boot in local QEMU testing.
	# For cloud-only images (UEFI-only), this line can be omitted.
	$(LIMINE_DIR)/limine bios-install $(IMAGE)

New QEMU target to test disk boot locally:

run-disk: $(IMAGE)
	qemu-system-x86_64 -drive file=$(IMAGE),format=raw \
		-bios /usr/share/edk2/x64/OVMF.4m.fd \
		-display none $(QEMU_COMMON); \
	test $$? -eq 1

Cloud upload helpers (scripts, not Makefile targets):

# GCP: the raw image must be named disk.raw inside the tarball
cp capos.img disk.raw
tar czf capos.tar.gz disk.raw
gsutil cp capos.tar.gz gs://my-bucket/
gcloud compute images create capos \
  --source-uri=gs://my-bucket/capos.tar.gz \
  --guest-os-features=UEFI_COMPATIBLE

# AWS: upload to S3 first; import-image requires the vmimport service role
aws s3 cp capos.img s3://my-bucket/capos.img
aws ec2 import-image --disk-containers \
  "Format=raw,UserBucket={S3Bucket=my-bucket,S3Key=capos.img}"

Dependencies

  • sgdisk (gdisk package) – GPT partitioning
  • mtools (mformat, mcopy) – FAT32 manipulation without root/loop mount

Scope

~30 lines of Makefile + a helper script for cloud uploads. No kernel changes.


Phase 2: ACPI and Device Discovery

Goal: Parse ACPI tables to discover hardware topology, interrupt routing, and PCI root complexes. This replaces QEMU-specific hardcoded assumptions.

Why ACPI

On QEMU with default settings, you can hardcode PCI config space at 0xCF8/0xCFC and assume legacy interrupt routing. On real cloud hardware:

  • PCI root complex addresses come from ACPI MCFG table (PCIe ECAM)
  • Interrupt routing comes from ACPI MADT (I/O APIC entries) and _PRT
  • CPU topology comes from ACPI MADT (LAPIC entries)
  • Timer info comes from ACPI HPET/PMTIMER tables

Limine provides the RSDP (Root System Description Pointer) address via its protocol. From there, the kernel can walk RSDT/XSDT to find specific tables.

Required Tables

| Table | Purpose | Priority |
|---|---|---|
| MADT | LAPIC and I/O APIC addresses, CPU enumeration | High (Phase 2) |
| MCFG | PCIe Enhanced Configuration Access Mechanism base | High (Phase 2) |
| HPET | High Precision Event Timer address | Medium (fallback timer) |
| FADT | PM timer, shutdown/reset methods | Low (future) |

Implementation

// kernel/src/acpi.rs

/// Minimal ACPI table parser.
/// Walks RSDP -> XSDT -> individual tables.
/// Does NOT implement AML interpretation -- static tables only.

pub struct AcpiInfo {
    pub lapics: Vec<LapicEntry>,
    pub io_apics: Vec<IoApicEntry>,
    pub iso_overrides: Vec<InterruptSourceOverride>,
    pub mcfg_base: Option<u64>,  // PCIe ECAM base address
    pub hpet_base: Option<u64>,
}

pub fn parse_acpi(rsdp_addr: u64, hhdm: u64) -> AcpiInfo { ... }

Use the acpi crate (no_std, well-maintained) for parsing rather than hand-rolling. It handles RSDP, RSDT/XSDT, MADT, MCFG, and HPET.
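As a sketch of the glue this implies, assuming the acpi crate's AcpiHandler / AcpiTables::from_rsdp interface (method names as in recent releases; verify against the pinned version) and a hypothetical HhdmHandler that resolves physical addresses through the existing higher-half direct map:

use acpi::{AcpiHandler, AcpiTables, PhysicalMapping};
use core::ptr::NonNull;

/// Physical memory is already mapped at the HHDM offset, so "mapping" a table
/// is pointer arithmetic and there is nothing to unmap afterwards.
#[derive(Clone)]
struct HhdmHandler {
    hhdm: u64,
}

impl AcpiHandler for HhdmHandler {
    unsafe fn map_physical_region<T>(
        &self,
        physical_address: usize,
        size: usize,
    ) -> PhysicalMapping<Self, T> {
        let virt = (physical_address as u64 + self.hhdm) as *mut T;
        PhysicalMapping::new(
            physical_address,
            NonNull::new(virt).expect("HHDM produced a null pointer"),
            size,
            size,
            self.clone(),
        )
    }

    fn unmap_physical_region<T>(_region: &PhysicalMapping<Self, T>) {}
}

pub fn parse_acpi(rsdp_addr: u64, hhdm: u64) -> AcpiInfo {
    let tables = unsafe { AcpiTables::from_rsdp(HhdmHandler { hhdm }, rsdp_addr as usize) }
        .expect("ACPI table parse failed");
    // tables.platform_info() exposes the MADT contents (LAPICs, I/O APICs, overrides);
    // the MCFG/ECAM base comes from the crate's PCI config-region support.
    let _platform = tables.platform_info().expect("no platform info");
    /* ... translate into AcpiInfo ... */
    unimplemented!()
}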

Limine RSDP

use limine::request::RsdpRequest;

static RSDP: RsdpRequest = RsdpRequest::new();

// In kmain:
let rsdp_addr = RSDP.response().expect("no RSDP").address() as u64;
let acpi_info = acpi::parse_acpi(rsdp_addr, hhdm_offset);

Crate Dependencies

| Crate | Purpose | no_std |
|---|---|---|
| acpi | ACPI table parsing (MADT, MCFG, etc.) | yes |

Scope

~200-300 lines of glue code wrapping the acpi crate. The crate does the heavy lifting.


Phase 3: Interrupt Infrastructure

Goal: Set up I/O APIC for device interrupt routing and MSI/MSI-X for modern PCI devices. This replaces the implicit legacy PIC setup.

I/O APIC

The I/O APIC routes external device interrupts (keyboard, serial, PCI devices) to specific LAPIC entries (CPUs). Its address and configuration come from the ACPI MADT (Phase 2).

// kernel/src/ioapic.rs

pub struct IoApic {
    base: *mut u32,  // MMIO registers via HHDM
}

impl IoApic {
    /// Route an IRQ to a specific LAPIC/vector.
    pub fn route_irq(&mut self, irq: u8, lapic_id: u8, vector: u8) { ... }

    /// Mask/unmask an IRQ line.
    pub fn set_mask(&mut self, irq: u8, masked: bool) { ... }
}

The I/O APIC must respect Interrupt Source Override entries from MADT (e.g., IRQ 0 might be remapped to GSI 2 on real hardware).
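Filling in route_irq as a sketch, assuming the standard IOREGSEL/IOWIN register pair and that the caller has already translated legacy IRQ numbers to GSIs using those overrides:

impl IoApic {
    /// IOREGSEL (base + 0x00) selects a register index; IOWIN (base + 0x10) reads/writes it.
    unsafe fn write_reg(&mut self, index: u32, value: u32) {
        self.base.write_volatile(index);
        self.base.add(4).write_volatile(value); // +4 u32s = +0x10 bytes
    }

    /// Redirection entry for GSI n lives at register indices 0x10 + 2n (low) and 0x11 + 2n (high).
    pub fn route_irq(&mut self, gsi: u8, lapic_id: u8, vector: u8) {
        // Fixed delivery, physical destination, edge-triggered, active-high, unmasked.
        let low = vector as u32;
        let high = (lapic_id as u32) << 24;
        unsafe {
            self.write_reg(0x11 + 2 * gsi as u32, high);
            self.write_reg(0x10 + 2 * gsi as u32, low);
        }
    }
}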

MSI/MSI-X

Modern PCI/PCIe devices (NVMe, cloud NICs) use Message Signaled Interrupts instead of pin-based IRQs routed through the I/O APIC. An MSI or MSI-X interrupt is a memory write to a special address window (0xFEExxxxx) that the LAPIC decodes directly, bypassing the I/O APIC entirely.

This is critical for cloud deployment because:

  • NVMe controllers require MSI or MSI-X (no legacy IRQ fallback on many controllers)
  • Cloud NICs (ENA, gVNIC) use MSI-X exclusively
  • MSI-X supports per-queue interrupts (one vector per virtqueue/submission queue), enabling better SMP scalability
// kernel/src/pci/msi.rs

/// Configure MSI for a PCI device.
pub fn enable_msi(device: &PciDevice, vector: u8, lapic_id: u8) { ... }

/// Configure MSI-X for a PCI device.
pub fn enable_msix(
    device: &PciDevice,
    table_bar: u8,
    entries: &[(u16, u8, u8)],  // (index, vector, lapic_id)
) { ... }

MSI/MSI-X capability structures are found by walking the PCI capability list (already needed for PCI enumeration in the networking proposal).
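The message itself is just a memory write. A sketch of the x86_64 encoding that enable_msi / enable_msix would program into a capability's Message Address and Message Data registers (fixed delivery, edge-triggered, physical destination assumed):

/// Destination APIC ID goes in address bits 19:12; the vector goes in the low data byte.
fn msi_message(lapic_id: u8, vector: u8) -> (u64, u32) {
    let address = 0xFEE0_0000u64 | ((lapic_id as u64) << 12);
    let data = vector as u32;
    (address, data)
}

enable_msi writes this pair into the MSI capability and sets the enable bit in Message Control; enable_msix writes one such pair per entry into the MSI-X table BAR.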

Integration with SMP

LAPIC initialization is shared between this phase and the SMP proposal (Phase A). If SMP is implemented first, LAPIC is already available. If this phase comes first, it initializes the BSP’s LAPIC and the SMP proposal extends to APs.

Scope

~300-400 lines total:

  • I/O APIC driver: ~150 lines
  • MSI/MSI-X setup: ~100-150 lines
  • Integration/routing logic: ~50-100 lines

Phase 4: PCI/PCIe Infrastructure

Goal: Standalone PCI bus enumeration and device management, usable by all device drivers (virtio-net, NVMe, cloud NICs).

The networking proposal includes PCI enumeration as a substep for finding virtio-net. This phase promotes it to a reusable kernel subsystem that all device drivers build on.

PCI Configuration Access

Two mechanisms, determined by ACPI:

  1. Legacy I/O ports (0xCF8/0xCFC) – works in QEMU, limited to 256 bytes of config space per function. Insufficient for PCIe extended capabilities.
  2. PCIe ECAM (Enhanced Configuration Access Mechanism) – memory-mapped config space, 4 KB per function. Base address from ACPI MCFG table. Required for MSI-X capability parsing and NVMe BAR discovery on real hardware.

Start with legacy I/O for QEMU, add ECAM when ACPI parsing (Phase 2) is available.
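A sketch of both access paths, assuming hypothetical outl/inl port-I/O helpers; the bit layouts follow the PCI local bus and PCIe specs:

/// Legacy mechanism: write a bus/device/function/register selector to 0xCF8,
/// read the 32-bit value from 0xCFC.
fn legacy_config_read32(bus: u8, dev: u8, func: u8, offset: u8) -> u32 {
    let addr = 0x8000_0000u32            // enable bit
        | (bus as u32) << 16
        | (dev as u32) << 11
        | (func as u32) << 8
        | (offset as u32 & 0xFC);        // dword-aligned register
    unsafe {
        outl(0xCF8, addr);
        inl(0xCFC)
    }
}

/// ECAM: each function gets 4 KB of memory-mapped config space at a fixed
/// offset from the MCFG base (bus << 20 | device << 15 | function << 12).
fn ecam_config_addr(ecam_base: u64, bus: u8, dev: u8, func: u8, offset: u16) -> u64 {
    ecam_base
        + ((bus as u64) << 20)
        + ((dev as u64) << 15)
        + ((func as u64) << 12)
        + offset as u64
}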

Device Enumeration

// kernel/src/pci/mod.rs

pub struct PciDevice {
    pub bus: u8,
    pub device: u8,
    pub function: u8,
    pub vendor_id: u16,
    pub device_id: u16,
    pub class: u8,
    pub subclass: u8,
    pub bars: [Option<Bar>; 6],
    pub interrupt_pin: u8,
    pub interrupt_line: u8,
}

pub enum Bar {
    Memory { base: u64, size: u64, prefetchable: bool },
    Io { base: u16, size: u16 },
}

/// Scan all PCI buses and return discovered devices.
pub fn enumerate() -> Vec<PciDevice> { ... }

/// Find a device by vendor/device ID.
pub fn find_device(vendor: u16, device: u16) -> Option<PciDevice> { ... }

/// Walk the PCI capability list for a device.
pub fn capabilities(device: &PciDevice) -> Vec<PciCapability> { ... }

BAR Mapping

Device drivers need MMIO access to BAR regions. The kernel maps BAR physical addresses into virtual address space (via HHDM for kernel-mode drivers, or via a DeviceMmio capability for userspace drivers as described in the networking proposal).
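BAR sizes come from the standard probe: write all ones to the BAR, read back which address bits are writable, restore the original value, and invert the mask. A sketch using hypothetical config_read32/config_write32 accessors from this phase (32-bit memory BARs shown; 64-bit BARs repeat the probe on the upper dword):

fn probe_bar_size(dev: &PciDevice, bar_index: u8) -> u64 {
    let offset = 0x10 + 4 * bar_index;            // BAR0 lives at config offset 0x10
    let original = config_read32(dev, offset);
    config_write32(dev, offset, 0xFFFF_FFFF);
    let readback = config_read32(dev, offset);
    config_write32(dev, offset, original);
    let mask = readback & 0xFFFF_FFF0;            // drop the memory-BAR flag bits
    (!mask).wrapping_add(1) as u64                // size; 0 means the BAR is unimplemented
}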

PCI Device IDs for Cloud Hardware

| Device | Vendor:Device | Cloud |
|---|---|---|
| virtio-net | 1AF4:1000 (transitional) or 1AF4:1041 (modern) | QEMU, GCP fallback |
| virtio-blk | 1AF4:1001 (transitional) or 1AF4:1042 (modern) | QEMU |
| NVMe | 8086:various, 144D:various, etc. | All clouds (EBS, PD, Managed Disk) |
| AWS ENA | 1D0F:EC20 / 1D0F:EC21 | AWS |
| GCP gVNIC | 1AE0:0042 | GCP |
| Azure MANA | 1414:00BA | Azure |

Scope

~400-500 lines:

  • Config space access (I/O + ECAM): ~100 lines
  • Bus enumeration: ~150 lines
  • BAR parsing and mapping: ~100 lines
  • Capability list walking: ~50-100 lines

Phase 5: NVMe Driver

Goal: Basic NVMe block device driver, sufficient to read/write sectors. This is the storage equivalent of virtio-net for networking – the first real storage driver.

Why NVMe Over virtio-blk

The storage-and-naming proposal mentions virtio-blk for Phase 3 (persistent store). On cloud VMs, all three providers expose NVMe:

  • AWS EBS – NVMe interface (even for gp3/io2 volumes)
  • GCP Persistent Disk – NVMe or SCSI (NVMe is default for newer VMs)
  • Azure Managed Disks – NVMe on newer VM series (Ev5, Dv5)

virtio-blk is QEMU-only. An NVMe driver unlocks persistent storage on all cloud platforms. For QEMU testing, QEMU also emulates NVMe well: -drive file=disk.img,if=none,id=d0 -device nvme,drive=d0,serial=capos0.

NVMe Architecture

NVMe is a register-level standard with well-defined queue-pair semantics:

Application
    |
    v
Submission Queue (SQ) -- ring buffer of 64-byte command entries
    |
    | doorbell write (MMIO)
    v
NVMe Controller (hardware)
    |
    | DMA completion
    v
Completion Queue (CQ) -- ring buffer of 16-byte completion entries
    |
    | MSI-X interrupt
    v
Driver processes completions

Minimum viable driver needs:

  1. Admin Queue Pair (for identify, create I/O queues)
  2. One I/O Queue Pair (for read/write commands)
  3. MSI-X for completion notification (or polling)

Implementation Sketch

// kernel/src/nvme.rs (or kernel/src/drivers/nvme.rs)

pub struct NvmeController {
    bar0: *mut u8,          // MMIO registers
    admin_sq: SubmissionQueue,
    admin_cq: CompletionQueue,
    io_sq: SubmissionQueue,
    io_cq: CompletionQueue,
    namespace_id: u32,
    block_size: u32,
    block_count: u64,
}

impl NvmeController {
    pub fn init(pci_device: &PciDevice) -> Result<Self, NvmeError> { ... }
    pub fn read(&self, lba: u64, count: u16, buf: &mut [u8]) -> Result<(), NvmeError> { ... }
    pub fn write(&self, lba: u64, count: u16, buf: &[u8]) -> Result<(), NvmeError> { ... }
    pub fn identify(&self) -> NvmeIdentify { ... }
}

DMA Considerations

NVMe uses DMA for data transfer. The controller reads/writes directly from physical memory addresses provided in commands. Requirements:

  • Buffers must be physically contiguous (or use PRP lists / SGLs for scatter-gather)
  • Physical addresses must be provided (not virtual)
  • Cache coherence is handled by hardware on x86_64 (DMA-coherent architecture)

The existing frame allocator can provide physically contiguous pages. For larger transfers, PRP (Physical Region Page) lists allow scatter-gather.
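For concreteness, a sketch of the 64-byte submission entry a read would use, assuming the field layout from the NVMe base spec (opcode 0x02 = Read in the NVM command set); PRP1/PRP2 carry the physical addresses discussed above:

#[repr(C)]
struct NvmeCommand {
    opcode: u8,
    flags: u8,
    cid: u16,        // command identifier, echoed back in the completion entry
    nsid: u32,
    _reserved: [u32; 2],
    metadata: u64,
    prp1: u64,       // first data page (physical address)
    prp2: u64,       // second page, or pointer to a PRP list for larger transfers
    cdw10: u32,      // starting LBA, low 32 bits
    cdw11: u32,      // starting LBA, high 32 bits
    cdw12: u32,      // bits 15:0 = number of blocks minus one (zero-based count)
    cdw13: u32,
    cdw14: u32,
    cdw15: u32,
}

fn read_command(cid: u16, nsid: u32, lba: u64, count: u16, buf_phys: u64) -> NvmeCommand {
    NvmeCommand {
        opcode: 0x02,
        flags: 0,
        cid,
        nsid,
        _reserved: [0; 2],
        metadata: 0,
        prp1: buf_phys,
        prp2: 0,                     // needed once the transfer crosses a page boundary
        cdw10: lba as u32,
        cdw11: (lba >> 32) as u32,
        cdw12: (count - 1) as u32,   // count must be at least 1
        cdw13: 0,
        cdw14: 0,
        cdw15: 0,
    }
}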

Crate Dependencies

| Crate | Purpose | no_std |
|---|---|---|
| (none) | NVMe register-level protocol is simple enough to implement directly | N/A |

The NVMe spec is cleaner than virtio and the register interface is straightforward. A minimal driver (admin + 1 I/O queue pair, read/write) is ~500-700 lines without external dependencies.

Integration with Storage Proposal

The storage proposal’s Phase 3 (Persistent Store) specifies virtio-blk as the backing device. This can be generalized to a BlockDevice trait:

trait BlockDevice {
    fn read(&self, lba: u64, count: u16, buf: &mut [u8]) -> Result<(), Error>;
    fn write(&self, lba: u64, count: u16, buf: &[u8]) -> Result<(), Error>;
    fn block_size(&self) -> u32;
    fn block_count(&self) -> u64;
}

Both NVMe and virtio-blk implement this trait. The store service doesn’t care which backing driver it uses.
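The NVMe side of that adapter is a thin sketch (assuming the store's Error type can be built from NvmeError):

impl BlockDevice for NvmeController {
    fn read(&self, lba: u64, count: u16, buf: &mut [u8]) -> Result<(), Error> {
        NvmeController::read(self, lba, count, buf).map_err(Error::from)
    }
    fn write(&self, lba: u64, count: u16, buf: &[u8]) -> Result<(), Error> {
        NvmeController::write(self, lba, count, buf).map_err(Error::from)
    }
    fn block_size(&self) -> u32 { self.block_size }
    fn block_count(&self) -> u64 { self.block_count }
}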

Scope

~500-700 lines for a minimal in-kernel NVMe driver (admin queue + 1 I/O queue pair, read/write, identify). Userspace decomposition follows the same pattern as the networking proposal (kernel driver first, then extract to userspace process with DeviceMmio + Interrupt caps).


Phase 6: Cloud NIC Strategy

Goal: Define the path to networking on cloud VMs, given that each cloud uses a different proprietary NIC.

The Landscape

| Cloud | Primary NIC | Virtio NIC available? | Open-source driver? |
|---|---|---|---|
| GCP | gVNIC (1AE0:0042) | Yes (fallback option) | Yes (Linux, ~3000 LoC) |
| AWS | ENA (1D0F:EC20) | No (Nitro only) | Yes (Linux, ~8000 LoC) |
| Azure | MANA (1414:00BA) | No (accelerated networking) | Yes (Linux, ~6000 LoC) |

Short term: virtio-net on GCP

GCP allows selecting VIRTIO_NET as the NIC type when creating instances. This is a first-class option, not a legacy fallback. Combined with the virtio-net driver from the networking proposal, this gives cloud networking with zero additional driver work.

gcloud compute instances create capos-test \
    --image=capos \
    --machine-type=e2-micro \
    --network-interface=nic-type=VIRTIO_NET

Medium term: gVNIC driver

gVNIC is a simpler device than ENA or MANA. The Linux driver is ~3000 lines (vs ~8000 for ENA). It uses standard PCI BAR MMIO + MSI-X interrupts. A minimal gVNIC driver (init, link up, send/receive) would be ~800-1200 lines.

gVNIC is worth prioritizing because:

  • GCP is the only cloud with a virtio-net fallback, making it the natural first target
  • Graduating from virtio-net to gVNIC on the same cloud is a clean progression
  • The gVNIC register interface is documented in the Linux driver source

Long term: ENA and MANA

ENA and MANA are more complex and less well-documented outside their Linux drivers. These should be deferred until the driver model is mature (userspace drivers with DeviceMmio caps, as described in the networking proposal Part 2).

At that point, the kernel only needs to provide PCI enumeration + BAR mapping + MSI-X routing. The actual NIC driver logic runs in a userspace process, making it feasible to port from the Linux driver source with appropriate licensing considerations.

Alternative: Paravirt Abstraction Layer

Instead of writing native drivers for each cloud NIC, an alternative is a thin paravirt layer:

Application -> NetworkManager cap -> Net Stack (smoltcp) -> NIC cap -> [driver]

Where [driver] is one of:

  • virtio-net (QEMU, GCP fallback)
  • gvnic (GCP)
  • ena (AWS)
  • mana (Azure)

All drivers implement the same Nic capability interface from the networking proposal. The network stack and applications are driver-agnostic.

This is already the architecture described in the networking proposal. The only addition is recognizing that multiple driver implementations will exist behind the same Nic interface.
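As a purely hypothetical shape for that shared interface (the real Nic capability is defined in the networking proposal and may differ):

trait Nic {
    /// Queue one Ethernet frame for transmission.
    fn send(&mut self, frame: &[u8]) -> Result<(), NetError>;
    /// Copy a pending frame into buf, returning its length, if one has arrived.
    fn receive(&mut self, buf: &mut [u8]) -> Option<usize>;
    fn mac_address(&self) -> [u8; 6];
}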


Phase Summary and Dependencies

graph TD
    P1[Phase 1: Disk Image Build] --> BOOT[Boots on Cloud VM]
    P2[Phase 2: ACPI Parsing] --> P3[Phase 3: Interrupt Infrastructure]
    P2 --> P4[Phase 4: PCI/PCIe]
    P3 --> P5[Phase 5: NVMe Driver]
    P4 --> P5
    P4 --> NET[Networking Smoke Test<br>virtio-net driver]
    P3 --> NET
    P4 --> P6[Phase 6: Cloud NIC Drivers]
    P3 --> P6
    NET --> P6

    S5[Stage 5: Scheduling] --> P3
    SMP_A[SMP Phase A: LAPIC] --> P3

    style P1 fill:#2d5,stroke:#333
    style BOOT fill:#2d5,stroke:#333

| Phase | Depends on | Estimated scope | Enables |
|---|---|---|---|
| 1: Disk image | Nothing | ~30 lines Makefile | Cloud boot |
| 2: ACPI | Nothing (kernel code) | ~200-300 lines | Phases 3, 4 |
| 3: Interrupts | Phase 2, LAPIC (SMP/Stage 5) | ~300-400 lines | NVMe, cloud NICs |
| 4: PCI/PCIe | Phase 2 | ~400-500 lines | All device drivers |
| 5: NVMe | Phases 3, 4 | ~500-700 lines | Cloud storage |
| 6: Cloud NICs | Phases 3, 4, networking smoke test | ~800-1200 lines each | Cloud networking |

Minimum Path to “Boots on Cloud VM, Prints Hello”

Phase 1 only. Everything else (serial, UEFI) already works. This is a build-system change, not a kernel change.

Minimum Path to “Useful on Cloud VM”

Phases 1-5 (disk image + ACPI + interrupts + PCI + NVMe) plus the existing roadmap items (Stages 4-6 for capability syscalls, scheduling, IPC). With GCP’s virtio-net fallback, networking can use the existing networking proposal without Phase 6.


QEMU Testing

All phases can be tested in QEMU before deploying to cloud:

| Phase | QEMU flags |
|---|---|
| Disk image | -drive file=capos.img,format=raw -bios OVMF.4m.fd |
| ACPI | Default QEMU provides ACPI tables (MADT, MCFG, etc.) |
| I/O APIC | Default QEMU emulates the I/O APIC |
| PCI/PCIe | -device ... adds PCI devices; the q35 machine type provides a PCIe root complex |
| NVMe | -drive file=disk.img,if=none,id=d0 -device nvme,drive=d0,serial=capos0 |
| MSI-X | Supported by QEMU's NVMe and virtio-net-pci emulation |
| Multi-CPU | -smp 4 (already works with Limine SMP) |

aarch64 and ARM Cloud Instances

This proposal focuses on x86_64 because that’s the current kernel target, but ARM-based cloud instances are significant and growing:

| Cloud | ARM offering | Instance types |
|---|---|---|
| AWS | Graviton2/3/4 | m7g, c7g, r7g, etc. |
| GCP | Tau T2A (Ampere Altra) | t2a-standard-* |
| Azure | Cobalt 100 (Arm Neoverse) | Dpsv6, Dplsv6 |

ARM cloud VMs have the same general requirements (UEFI boot, ACPI tables, PCI/PCIe, NVMe storage) but different specifics:

  • Interrupt controller: GIC (Generic Interrupt Controller) instead of APIC. GICv3 is standard on cloud ARM instances.
  • Boot: UEFI via Limine (already targets aarch64). Limine handles the architecture differences at boot time.
  • Timer: ARM generic timer (CNTPCT_EL0) instead of LAPIC/PIT/TSC.
  • Serial: PL011 UART instead of 16550 COM1. Different register interface.
  • NIC: Same PCI devices (ENA, gVNIC, MANA) with the same register interfaces – PCI/PCIe is architecture-neutral.
  • NVMe: Same NVMe register interface – PCIe is architecture-neutral.

The arch-neutral parts of this proposal (PCI enumeration, NVMe, disk image format, ACPI table parsing) apply equally to aarch64. The arch-specific parts (I/O APIC, MSI delivery address format, LAPIC) need aarch64 equivalents (GIC, ARM MSI translation).

The existing roadmap lists “aarch64 support” as a future item. For cloud deployment, aarch64 should be considered as soon as the x86_64 hardware abstraction is stable, since:

  1. Device drivers (NVMe, virtio-net, cloud NICs) are architecture-neutral – they talk to PCI config space and MMIO BARs, which are the same on both architectures
  2. The acpi crate handles both x86_64 and aarch64 ACPI tables
  3. Limine already targets aarch64
  4. AWS Graviton instances are often cheaper than x86_64 equivalents

The main aarch64 kernel work is: exception handling (EL0/EL1 instead of Ring 0/3), GIC driver (instead of APIC), ARM generic timer, PL011 serial, and the MMU setup (4-level page tables exist on both but with different register interfaces).


Open Questions

  1. ACPI scope. The acpi crate can parse static tables (MADT, MCFG, HPET, FADT). Full ACPI requires AML interpretation (for _PRT interrupt routing, dynamic device enumeration). Do we need AML, or are static tables sufficient for cloud VMs? Cloud VM firmware typically provides simple, static ACPI tables – AML interpretation is likely unnecessary initially.

  2. PCIe ECAM vs legacy. Should we support both config access methods, or require ECAM (which all cloud VMs and modern QEMU provide)? Supporting both adds ~50 lines but makes bare-metal testing on older hardware possible.

  3. NVMe queue depth. A single I/O queue pair with depth 32 is sufficient for initial use. Per-CPU queues (leveraging MSI-X per-queue interrupts) improve SMP throughput but add complexity. Defer per-CPU queues to after SMP is working.

  4. Driver model unification. Resolved: PCI enumeration is the standalone PCI/PCIe Infrastructure item in the roadmap. The networking smoke test and NVMe driver both consume this shared subsystem. The networking proposal’s Part 1 Step 1 has been updated to reference this phase.

  5. GCP vs AWS as first cloud target. GCP has virtio-net fallback, making it the easiest first target. AWS has the largest market share and EBS/NVMe is well-documented. Recommendation: GCP first (virtio-net path), then AWS (requires ENA or a workaround).


References

Specifications

Crates

  • acpi – no_std ACPI table parser
  • virtio-drivers – no_std virtio (already in networking proposal)

Prior Art

Cloud Documentation