WebGPU vs WebGL

What changed and why

WebGL brought GPU-accelerated graphics to the browser in 2011. It was a breakthrough—suddenly web developers could render 3D scenes without plugins. But WebGL was designed for a different era. GPUs have evolved, and the assumptions baked into WebGL's design now hold us back.

WebGPU is the replacement. It is not WebGL 3.0. It is a fundamentally different API, designed from the ground up for modern GPUs and modern applications. Understanding what changed—and why—will make everything that follows easier to grasp.

The WebGL Legacy

WebGL is a thin wrapper around OpenGL ES 2.0 (and later ES 3.0), which itself descends from OpenGL 1.0 from 1992. That lineage brings both compatibility and baggage.

The OpenGL model was designed when GPUs were fixed-function pipelines. You configured a series of stages—lighting, texturing, fog—and the GPU marched through them in order. State was global. You set the current texture, the current blend mode, the current shader, and then you issued a draw call. The GPU used whatever state happened to be active at that moment.

This model made sense when state changes were cheap and draw calls were expensive. But modern GPUs have inverted that relationship. State changes are now the bottleneck. The driver must validate each change, potentially recompile shaders, and synchronize with the GPU. Meanwhile, the GPU sits idle, waiting.

WebGL inherited this state machine model. Every piece of render state—blending, depth testing, stencil operations, bound textures—lives in a single global context. Change one thing, and you had better remember to change it back, or the next draw call will use the wrong state.

The Explicit Model

WebGPU takes a different approach. Instead of a global state machine, you create explicit objects that bundle related state together.

API Concepts Side by Side

WebGL

Global State Machine

All state is global. You bind textures, set uniforms, configure blending—all on a single global context. The GPU uses whatever state was last set.

gl.bindTexture(gl.TEXTURE_2D, texture);
gl.activeTexture(gl.TEXTURE0);
gl.uniform1i(samplerLoc, 0);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
WebGPU

Pipeline State Objects

All render state is bundled into immutable pipeline objects created upfront. No global state to track or accidentally corrupt.

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module: shaderModule, entryPoint: 'vs' },
  fragment: {
    module: shaderModule,
    entryPoint: 'fs',
    targets: [{
      format: 'bgra8unorm',
      blend: {
        color: { srcFactor: 'src-alpha', dstFactor: 'one-minus-src-alpha' },
        alpha: { srcFactor: 'one', dstFactor: 'one-minus-src-alpha' },
      },
    }],
  },
});

Pipeline objects eliminate an entire class of bugs where forgotten state changes cause rendering errors.

The shift is philosophical as much as technical. WebGL asks you to describe the sequence of state changes. WebGPU asks you to describe the final state you want. The API does not care how you got there—it only validates that what you asked for is coherent.

This explicitness has a cost: more code to write upfront. But it eliminates an entire category of bugs where forgotten state changes cause silent rendering errors. When something goes wrong in WebGPU, the error message tells you exactly what and where.

State Models Compared

WebGL's global GL context holds every piece of render state in one place: blend state, depth state, stencil state, bound textures, bound buffers, the active program, the vertex array, and the viewport. Each draw call uses whatever state was last set globally, so state must be manually tracked and reset between draw calls; forgetting to do so causes bugs.

Pipeline State Objects are the core abstraction. A pipeline bundles the shader modules, vertex layout, blend state, depth/stencil configuration—everything the GPU needs to execute a draw call. Create the pipeline once, at startup. At render time, simply bind it. No incremental state changes to track.

Compute Shaders

WebGL has no general-purpose compute capability. If you wanted to run parallel computation on the GPU—physics simulations, image processing, machine learning inference—you had to disguise it as rendering. Encode your data in textures. Write a fragment shader that performs your computation. Render a fullscreen quad to trigger execution. Read the results back from a framebuffer.

This worked, but it was awkward. You were fighting the abstraction, not working with it.

WebGPU treats compute as a first-class citizen. Compute shaders run outside the graphics pipeline entirely. You define workgroups, dispatch them, and the GPU executes your code in parallel across thousands of threads. No rendering scaffolding required.
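
To make the dispatch model concrete, here is a minimal sketch of a GPU-driven computation: doubling every element of an array. It assumes a `GPUDevice` named `device` has already been obtained; the kernel, buffer names, and workgroup size of 64 are illustrative choices, not fixed by the API.

```javascript
// WGSL kernel: one thread per element, 64 threads per workgroup.
const doubleWGSL = `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    if (id.x < arrayLength(&data)) {
      data[id.x] = data[id.x] * 2.0;
    }
  }`;

async function doubleOnGpu(device, input /* Float32Array */) {
  // Upload the input into a storage buffer.
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(input);
  buffer.unmap();

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: {
      module: device.createShaderModule({ code: doubleWGSL }),
      entryPoint: 'main',
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Results must be copied into a mappable buffer to read them back.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(buffer, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange().slice(0));
}
```

Note what is absent: no vertex buffers, no fullscreen quad, no framebuffer. The data goes in a storage buffer, the kernel runs, and the result is copied back.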

This unlocks capabilities that were impractical before. Particle systems driven entirely by the GPU. Real-time fluid simulations. Neural network inference in the browser. Compute shaders are not just a nice feature—they represent a fundamental expansion of what browsers can do.

Pipeline State Objects

In WebGL, render state is scattered across dozens of function calls. To draw a mesh with a specific blend mode, depth test, and shader, you might write:

gl.useProgram(program);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.bindVertexArray(vao);
gl.drawElements(gl.TRIANGLES, count, gl.UNSIGNED_SHORT, 0);

Each line mutates global state. If a later draw call forgets to disable blending, it inherits this blend state and blends when it should not. The bug is silent: the mesh just looks wrong.

In WebGPU, all of this state lives in a single pipeline object created at initialization:

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module: shaderModule, entryPoint: 'vertex_main' },
  fragment: {
    module: shaderModule,
    entryPoint: 'fragment_main',
    targets: [{
      format: 'bgra8unorm',
      blend: {
        color: { srcFactor: 'src-alpha', dstFactor: 'one-minus-src-alpha', operation: 'add' },
        alpha: { srcFactor: 'one', dstFactor: 'one-minus-src-alpha', operation: 'add' },
      },
    }],
  },
  primitive: { topology: 'triangle-list' },
  depthStencil: { format: 'depth24plus', depthWriteEnabled: true, depthCompare: 'less' },
});

At draw time, you simply bind the pipeline. All state switches atomically. No risk of partial updates or forgotten changes.

The GPU benefits too. Because the driver knows the complete pipeline configuration at creation time, it can compile optimized machine code once. WebGL drivers must do this lazily, guessing at common state combinations and recompiling when surprised.

Command Buffers

WebGL operates in immediate mode. Each function call goes straight to the driver, which validates it, translates it, and queues it for the GPU. The CPU and GPU are tightly coupled—the CPU cannot race ahead while the GPU catches up.

Immediate vs Buffered Execution

WebGL executes each command immediately; the CPU blocks waiting for the GPU at every call.

WebGPU decouples command recording from command execution. You create a command encoder, record your rendering commands into it, then submit the finished buffer to the GPU queue. The CPU is free to start recording the next frame immediately, even while the GPU is still processing the previous one.
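
The record-then-submit flow can be sketched as a single frame. This assumes `device`, a configured canvas `context`, and a `pipeline` created without a depth/stencil stage; the clear color and vertex count are illustrative.

```javascript
function renderFrame(device, context, pipeline) {
  // 1. Record commands into an encoder. Nothing reaches the GPU yet.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      loadOp: 'clear',
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: 'store',
    }],
  });
  pass.setPipeline(pipeline); // all render state switches atomically
  pass.draw(3);               // e.g. a single triangle
  pass.end();

  // 2. Finish the command buffer and hand it to the GPU queue.
  //    The CPU returns immediately and can start recording the next frame.
  device.queue.submit([encoder.finish()]);
}
```

The encoder is the recording surface, the finished command buffer is the unit of submission, and `queue.submit` is the only point where work actually reaches the GPU.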

This matters for performance. The CPU and GPU can work in parallel, overlapping their work. It also matters for architecture. Command encoders can be created from any thread—you can record commands in parallel across multiple workers, then submit them from the main thread.

Command buffers also enable better validation. The driver can check an entire frame's worth of commands at once, catching errors earlier and providing more context in error messages.

Validation and Safety

WebGL inherits OpenGL's relaxed attitude toward errors. Invalid operations often produce undefined behavior rather than explicit failures. The gl.getError() function must be called manually to check for problems, and most code does not bother. Bugs manifest as visual glitches, silent data corruption, or crashes on specific hardware.

WebGPU validates aggressively. Every operation is checked. Out-of-bounds buffer access? Error. Mismatched bind group layout? Error. The validation layer catches mistakes immediately, with detailed messages explaining what went wrong.

This design reflects WebGPU's origins as a web API. Security is paramount. A malicious or buggy shader must not be able to read memory it should not access or crash the browser. The validation layer ensures that GPU operations remain sandboxed.

The validation does have a cost—extra CPU work on every operation. In production, some validation can be relaxed for performance. But during development, the aggressive checks catch bugs that would otherwise slip through to production.
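
In practice, validation errors are surfaced through error scopes. The sketch below wraps a buffer creation in a `'validation'` scope; the helper name and the fallback behavior are illustrative, but `pushErrorScope`/`popErrorScope` are the standard API.

```javascript
// Capture validation errors from a specific call instead of letting
// them surface as uncaptured errors on the device.
async function createBufferChecked(device, descriptor) {
  device.pushErrorScope('validation');
  const buffer = device.createBuffer(descriptor);
  const error = await device.popErrorScope(); // null if the call was valid
  if (error) {
    console.warn('Buffer creation failed:', error.message);
    return null;
  }
  return buffer;
}

// Errors nobody captured fire the device's 'uncapturederror' event,
// e.g.: device.addEventListener('uncapturederror', (e) => console.error(e.error.message));
```

Because `popErrorScope` returns a promise, validation never blocks the calling thread; the detailed message arrives asynchronously.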

WGSL vs GLSL

GLSL (OpenGL Shading Language) has served graphics programmers for two decades. But it carries legacy baggage: implicit type conversions, varying levels of precision guarantees, and version-dependent features. Different browsers and GPUs interpret edge cases differently.

WGSL (WebGPU Shading Language) was designed from scratch for WebGPU. The goals were portability, safety, and clarity.

WGSL has no implicit conversions—if you want to convert an i32 to an f32, you must say so explicitly. This catches bugs where accidental conversions produce incorrect results. The language defines behavior precisely, eliminating the "it works on my machine" problem.

The syntax is different from GLSL, which takes some adjustment. Attributes use @location(0) instead of layout(location = 0). Entry points are marked with @vertex or @fragment. Types look slightly different: vec4f instead of vec4.

@vertex
fn vs_main(@location(0) position: vec3f) -> @builtin(position) vec4f {
    return vec4f(position, 1.0);
}
 
@fragment
fn fs_main() -> @location(0) vec4f {
    return vec4f(1.0, 0.0, 0.0, 1.0);
}

The adjustment is worth it. WGSL code behaves consistently across browsers and platforms. The compiler catches more errors at compile time. And the language is designed to evolve—new features can be added without the compatibility constraints that limited GLSL.
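
Those compile-time diagnostics are also accessible from JavaScript. A sketch, assuming `device` and a WGSL source string; `createShaderModule` never throws on bad WGSL, so messages are fetched asynchronously via `getCompilationInfo`:

```javascript
// Compile WGSL and log any diagnostics with their source positions.
async function compileWGSL(device, code) {
  const module = device.createShaderModule({ code });
  const info = await module.getCompilationInfo();
  for (const msg of info.messages) {
    // Each message has a type ('error' | 'warning' | 'info') and a
    // line/column position into the WGSL source.
    console.log(`${msg.type} at ${msg.lineNum}:${msg.linePos}: ${msg.message}`);
  }
  return module;
}
```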

Feature Comparison

The differences between WebGL and WebGPU extend beyond API philosophy into raw capability.

WebGPU brings capabilities that were simply impossible in WebGL: compute shaders, storage textures, multi-threaded command recording, and more. These are not incremental improvements—they represent new categories of what browsers can do with the GPU.

At the same time, WebGPU is newer and not yet universally supported. WebGL works on essentially every device with a GPU. WebGPU requires recent browsers and recent hardware. For applications that must reach everyone, WebGL remains relevant. For applications pushing the boundaries of what the web can do, WebGPU is the path forward.
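
A common pattern is to prefer WebGPU and fall back to WebGL when it is unavailable. A sketch, with illustrative function and property names for the returned handle:

```javascript
// Try WebGPU first; fall back to WebGL, which runs nearly everywhere.
async function getGraphicsContext(canvas) {
  if (navigator.gpu) {
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) {
      const device = await adapter.requestDevice();
      const context = canvas.getContext('webgpu');
      context.configure({
        device,
        format: navigator.gpu.getPreferredCanvasFormat(),
      });
      return { api: 'webgpu', device, context };
    }
  }
  const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
  return gl ? { api: 'webgl', gl } : null;
}
```

Note that `navigator.gpu` existing does not guarantee a usable adapter; `requestAdapter()` can resolve to null on unsupported hardware, so both checks are needed.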

Key Takeaways

  • WebGL is a thin wrapper over OpenGL ES, carrying decades of design decisions from a different era
  • WebGPU uses an explicit model where you describe desired state, not sequences of mutations
  • Pipeline State Objects bundle all render configuration into immutable, validated objects
  • Command buffers decouple recording from execution, enabling CPU-GPU parallelism
  • Compute shaders are first-class in WebGPU, enabling GPGPU without rendering hacks
  • WebGPU validates aggressively—errors are explicit, not undefined behavior
  • WGSL is a new shading language designed for safety and portability