WebAssembly was designed to make the web faster. A compact binary instruction format that runs at near-native speed in the browser — that was the pitch when Wasm shipped in all major browsers back in 2017. But something unexpected happened. The same properties that make Wasm excellent for browsers — sandboxed execution, language independence, small binary size, and deterministic behavior — turned out to be exactly what the server-side world was missing.
Today, WebAssembly is running in places its creators never anticipated: on Cloudflare's edge network, inside Docker containers, as a plugin format for databases and proxies, and even in embedded IoT devices. This post explores the key use cases driving Wasm adoption outside the browser and why it matters for developers who have never written a line of Wasm by hand.
Why Wasm Escaped the Browser
The core properties of WebAssembly map directly to longstanding server-side problems:
- Sandboxing without overhead: Traditional containers isolate processes using OS-level mechanisms (namespaces, cgroups) that add startup latency measured in hundreds of milliseconds. Wasm modules start in microseconds because isolation is built into the instruction set itself — a Wasm module cannot access memory outside its linear memory space, cannot make syscalls directly, and cannot reach the filesystem unless the host explicitly grants access through WASI (WebAssembly System Interface).
- Language independence: C, C++, Rust, Go, C#, and even Python can compile to Wasm. This means a single runtime can execute code written in any of these languages without requiring language-specific runtimes or interpreters on the host.
- Deterministic execution: Wasm's behavior is fully specified, so given the same inputs a module produces the same outputs on any host, with only narrow, documented exceptions (such as NaN bit patterns). This makes Wasm modules genuinely portable — compile once, run on x86, ARM, or RISC-V without recompilation.
- Small binary size: A typical Wasm module is measured in kilobytes to low megabytes, making it practical to distribute over networks and cache at the edge.
Solomon Hykes, the creator of Docker, captured this shift in a widely-quoted remark: "If WASM+WASI existed in 2008, we wouldn't have needed to create Docker." He later clarified that containers are not going away, but the quote reflects a genuine insight — Wasm solves the portability and isolation problems that Docker was originally built to address, with dramatically less overhead.
Edge Computing: The First Big Win
Edge computing — running code on servers geographically close to users rather than in centralized data centers — was the first domain where Wasm outside the browser found serious production adoption.
Cloudflare Workers is the most prominent example. When Cloudflare needed to run customer code at the edge across 300+ data centers, traditional containers were a non-starter. Cold start times of 200-500ms would defeat the purpose of being close to the user. V8 isolates (lightweight execution contexts within the same JavaScript engine that powers Chrome) reduced startup time dramatically, and Cloudflare went further by adding Wasm support. A Cloudflare Worker written in Rust, compiled to Wasm, cold-starts in under 5 milliseconds.
Fastly took an even more Wasm-native approach with Compute@Edge (since renamed simply Fastly Compute), building their entire edge compute platform on Wasmtime (a standalone Wasm runtime developed by the Bytecode Alliance). Fastly's platform does not use V8 at all — every workload, whether written in Rust, Go, or JavaScript, compiles to Wasm and runs on Wasmtime. This architectural choice gives them consistent performance characteristics across all supported languages.
The pattern is clear: when you need to run untrusted code in thousands of locations with millisecond cold starts, Wasm is one of the very few options that does not trade away security to get there.
Serverless Functions: Faster Cold Starts
The cold start problem plagues every serverless platform. AWS Lambda functions written in Java can take 3-10 seconds to cold-start. Even Node.js functions take 100-300ms. These delays are tolerable for background jobs but unacceptable for latency-sensitive APIs.
Wasm runtimes change the equation. Projects like Spin (by Fermyon) and wasmCloud let developers write serverless functions that cold-start in single-digit milliseconds regardless of the source language. The key insight is that Wasm modules are pre-compiled to native code (ahead-of-time compilation), so there is no JIT warmup phase. The runtime loads the pre-compiled module, sets up the linear memory, and starts executing immediately.
// A Spin HTTP handler in Rust, compiled to Wasm.
// Cold start: ~1ms. No container. No runtime initialization.
use anyhow::Result;
use spin_sdk::{
    http::{Request, Response},
    http_component,
};

// Spin invokes this function once per incoming HTTP request.
#[http_component]
fn handle_request(req: Request) -> Result<Response> {
    let uri = req.uri().to_string();
    Ok(http::Response::builder()
        .status(200)
        .body(Some(format!("Hello from {}", uri).into()))?)
}
Fermyon's benchmarks show Spin handling 10x more requests per second than equivalent AWS Lambda functions on the same hardware, primarily because the Wasm runtime's memory footprint is tiny (a few megabytes per instance versus hundreds of megabytes for a Lambda container).
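This is not how Spin is implemented internally, but the same ahead-of-time idea is available to anyone embedding a runtime. Here is a minimal sketch using the wasmtime crate (file names are illustrative): precompile once at deploy time, then deserialize the native artifact at request time, skipping parsing, validation, and code generation entirely.
// AOT in miniature with the wasmtime crate. Paths are hypothetical.
use wasmtime::{Engine, Module};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Deploy time: compile the Wasm module to native code for this target.
    let native = engine.precompile_module(&std::fs::read("handler.wasm")?)?;
    std::fs::write("handler.cwasm", &native)?;

    // Request time: loading the precompiled artifact is close to a plain
    // mmap; no compilation happens on this path.
    // SAFETY: deserialization trusts the artifact, so only load files
    // your own build pipeline produced.
    let module = unsafe { Module::deserialize_file(&engine, "handler.cwasm")? };
    println!("loaded module: {:?}", module.name());
    Ok(())
}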
Plugin Systems: Safe Extension Points
One of the most practical applications of Wasm outside the browser is as a plugin format. The problem it solves is old: how do you let third-party code extend your application without risking crashes, security holes, or version conflicts?
Traditional approaches have well-known drawbacks. Shared libraries (DLLs, .so files) run in the host process with full access to memory — a buggy plugin can corrupt the host. Separate processes add IPC overhead and complexity. Scripting language embeds (Lua, Python) provide sandboxing but sacrifice performance.
Wasm gives you sandboxed execution at near-native speed with a well-defined interface boundary; a minimal host sketch follows the list below. Several major projects have adopted this approach:
- Envoy Proxy: The service mesh proxy used by Istio supports Wasm filters, allowing developers to add custom request processing logic without recompiling Envoy or writing C++. Filters run in a Wasm sandbox with controlled access to request headers and bodies.
- PostgreSQL (via Supabase Wrappers): Supabase is experimenting with Wasm-based foreign data wrappers, allowing custom data source connectors to run safely inside the database process.
- Zellij: This terminal multiplexer uses Wasm for its plugin system. Plugins written in Rust, Go, or any Wasm-targeting language run in isolated sandboxes and communicate with the host through a defined API. A misbehaving plugin cannot crash the terminal.
- Figma: While technically in-browser, Figma's plugin system demonstrates the pattern. Plugins run in a Wasm sandbox separate from the main application, preventing plugins from accessing other users' design data or interfering with the core rendering engine.
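To make the pattern concrete, here is a minimal host sketch using the wasmtime crate. The plugin file name and its exported transform function are invented for illustration; real systems such as Envoy's proxy-wasm define a much richer ABI on top of the same mechanics.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // The plugin is an opaque Wasm binary; the host never links it natively.
    let module = Module::from_file(&engine, "plugin.wasm")?;
    let mut store = Store::new(&engine, ());
    // No imports are provided, so the plugin receives no capabilities at
    // all: no filesystem, no network, no access to host memory.
    let instance = Instance::new(&mut store, &module, &[])?;
    // The host calls only the exports it expects; a signature mismatch
    // fails here with an error instead of corrupting memory.
    let transform = instance.get_typed_func::<i32, i32>(&mut store, "transform")?;
    println!("plugin returned {}", transform.call(&mut store, 7)?);
    Ok(())
}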
WASI: The Missing Piece
WebAssembly in the browser has access to Web APIs (DOM, fetch, WebGL). Outside the browser, there are no Web APIs. WASI — the WebAssembly System Interface — fills this gap by defining a standardized set of system-level APIs that Wasm modules can call: filesystem access, network sockets, clocks, random number generation, and environment variables.
WASI is designed around a capability-based security model. A Wasm module cannot simply open any file on disk. Instead, the host grants specific directory handles at startup, and the module can only access files within those directories. This is a fundamentally different security model from POSIX, where any process can attempt to open any path and access control relies on OS-level permissions.
// Running a Wasm module with WASI, granting access to /data only
// The module sees /data as its root filesystem
wasmtime run --dir /data::/ my_module.wasm
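The guest-side effect is easy to demonstrate. Below is a sketch of a Rust program compiled to Wasm (the paths are illustrative): the first read resolves inside the granted directory, while the second fails because no capability covers it, regardless of how the host's OS permissions are configured.
use std::fs;

fn main() {
    // The host mapped /data to the guest's root (see the command above),
    // so this resolves to /data/config.txt on the host:
    match fs::read_to_string("/config.txt") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(err) => println!("read failed: {err}"),
    }

    // No preopened directory covers this path, so the open fails at the
    // capability boundary; there is no ambient filesystem to fall back on.
    assert!(fs::read_to_string("/etc/passwd").is_err());
}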
WASI is still evolving. Preview 1, the version most runtimes support today, covers basic I/O. Preview 2 (WASI 0.2), which reached stability in January 2024, adds streaming I/O, TCP/UDP sockets, HTTP client/server primitives, and a component model that lets Wasm modules expose and consume typed interfaces. The component model is the real game-changer — it allows modules written in different languages to interoperate without going through lowest-common-denominator C-style FFI.
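To give a flavor of what that looks like today, here is a sketch using the wit-bindgen crate; the package and interface names are invented for illustration. The WIT text is the language-neutral contract: a Go or Python component could implement or call the same validate function without seeing any Rust.
// A hypothetical component built with wit-bindgen. The inline WIT below
// is the typed, language-independent interface this component exports.
wit_bindgen::generate!({
    inline: r#"
        package example:pipeline;

        world validator {
            // Exported by this component, callable from any other component.
            export validate: func(record: string) -> bool;
        }
    "#,
});

struct Validator;

impl Guest for Validator {
    fn validate(record: String) -> bool {
        // Toy rule standing in for real validation logic.
        !record.trim().is_empty()
    }
}

export!(Validator);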
The Container Question
Docker and the OCI ecosystem are not going away. But the boundary between containers and Wasm is blurring. Docker Desktop now ships with built-in Wasm support via containerd Wasm shims. You can run a Wasm workload alongside Linux containers in the same Docker Compose file:
# docker-compose.yml mixing containers and Wasm
services:
  api:
    image: my-api:latest
    runtime: io.containerd.wasmtime.v1
    platform: wasi/wasm
  database:
    image: postgres:16
    # Regular Linux container
Kubernetes is following the same path. The runwasi project integrates Wasm runtimes as container runtime shims, letting Kubernetes schedule Wasm workloads alongside traditional containers. The scheduling abstraction stays the same; only the execution layer changes.
The practical implication: you do not need to choose between containers and Wasm. Use containers for workloads that need full OS capabilities (database servers, legacy applications). Use Wasm for workloads that benefit from fast startup, small footprint, and strong sandboxing (edge functions, event handlers, data transformation pipelines).
What Wasm Cannot Do (Yet)
Wasm outside the browser has real limitations that are important to acknowledge:
- No threads (in most runtimes): The Wasm threads proposal exists, but most server-side runtimes either do not support it or support it experimentally. CPU-bound parallel workloads still perform better in native containers.
- Limited networking: WASI's networking story is still maturing. Preview 1 has no socket API at all; preview 2 adds TCP and UDP sockets plus HTTP through wasi-http, but runtime and language support for these is uneven, and lower-level protocols still require host-specific extensions.
- Garbage-collected language performance: Languages with garbage collectors (Go, Python, Java) compile to Wasm but carry their entire runtime, resulting in larger binaries and more memory usage. Rust and C compile to lean Wasm modules; Go and Python modules are often 10-20x larger.
- Ecosystem maturity: The tooling is improving rapidly but is not at parity with container tooling. Debugging Wasm modules, profiling performance, and managing deployments still require specialized knowledge.
Where This Is Going
The trajectory is clear. Wasm is becoming a universal binary format for code that needs to run safely across trust boundaries — whether those boundaries are between a CDN and customer code, a database and its plugins, or a host application and its extensions.
The WASI component model will be the inflection point. Once modules can expose and consume typed, language-independent interfaces, the ecosystem shifts from "compile your app to Wasm" to "compose applications from Wasm components." A data validation module written in Rust, a business logic module written in Go, and a templating module written in Python could all be linked together into a single application without any of them knowing about the others' implementation languages.
For developers, the practical advice is straightforward: if you are building anything that runs at the edge, processes untrusted input, or needs a plugin system, evaluate Wasm as an execution environment. The cold start advantage alone is worth the investigation. And if you are writing Rust or C, the path to Wasm is already well-paved — cargo build --target wasm32-wasip1 (the target formerly named wasm32-wasi) may be the most impactful flag you add to your build this year.