Vertex Processing
Vertex buffers and vertex shaders
Where Rendering Begins
Every triangle you see on screen starts as data: positions, colors, texture coordinates. This data lives in vertex buffers—chunks of GPU memory containing everything the render pipeline needs to know about the geometry it will draw.
The vertex shader runs once for each vertex, transforming input attributes into output values that the rasterizer will interpolate across the triangle. Understanding how data flows from buffers through the vertex shader is fundamental to GPU graphics programming.
Vertex Buffers
A vertex buffer is simply a block of memory containing vertex data. At minimum, you need positions—three floats per vertex defining where each point sits in 3D space. But vertices typically carry much more: colors, normals for lighting, texture coordinates for mapping images onto surfaces.
// The simplest vertex: just a position
struct Vertex {
@location(0) position: vec3<f32>,
}
// A richer vertex with multiple attributes
struct RichVertex {
@location(0) position: vec3<f32>,
@location(1) color: vec4<f32>,
@location(2) normal: vec3<f32>,
@location(3) uv: vec2<f32>,
}

On the JavaScript side, you create vertex buffers and describe their layout to WebGPU:
const vertices = new Float32Array([
// Position Color (RGBA) UV
-0.5, -0.5, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0,
0.5, -0.5, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0,
0.0, 0.5, 0.0, 0.0, 0.0, 1.0, 1.0, 0.5, 1.0,
]);
const vertexBuffer = device.createBuffer({
size: vertices.byteLength,
usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertices);

The crucial part is telling WebGPU how to interpret this raw data. You specify which bytes correspond to which attributes, how large each vertex is, and how to step through the buffer.
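For the interleaved triangle above (nine 32-bit floats per vertex), the layout descriptor might look like the following sketch. The offsets are byte positions within a single vertex:

```javascript
// Layout descriptor for the interleaved triangle buffer above:
// 3 floats position + 4 floats color + 2 floats UV = 9 floats = 36 bytes.
const vertexBufferLayout = {
  arrayStride: 9 * 4,  // bytes from one vertex to the next
  stepMode: "vertex",  // advance once per vertex (vs "instance")
  attributes: [
    { shaderLocation: 0, offset: 0,  format: "float32x3" }, // position
    { shaderLocation: 1, offset: 12, format: "float32x4" }, // color
    { shaderLocation: 2, offset: 28, format: "float32x2" }, // uv
  ],
};
```

This object goes into the `buffers` array of the render pipeline's vertex stage descriptor.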
Buffer Layouts: Interleaved vs Separate
There are two fundamental ways to organize vertex data. In an interleaved layout, all attributes for one vertex are stored together, then all attributes for the next vertex, and so on. In a separate layout, each attribute type has its own buffer.
Interactive: Buffer Layout Comparison
Interleaved buffers keep related data close together in memory, which can improve cache performance when the shader reads all attributes of a vertex at once. Separate buffers offer flexibility—you can update positions without touching colors, or share one position buffer across multiple render passes that use different additional attributes.
The choice depends on your use case. Most applications start with interleaved buffers for simplicity. Separate buffers become valuable in advanced scenarios like skeletal animation (where positions change but UVs stay fixed) or when different materials need different attribute combinations.
Vertex Attributes
Each piece of data attached to a vertex is an attribute. WebGPU identifies attributes by location numbers—integers that connect your buffer layout to your shader inputs.
@vertex
fn main(
@location(0) position: vec3<f32>,
@location(1) color: vec4<f32>,
@location(2) uv: vec2<f32>,
) -> @builtin(position) vec4<f32> {
// Transform and return clip-space position
return vec4(position, 1.0);
}

The @location(n) attribute in WGSL must match the shaderLocation you specify in the vertex buffer layout descriptor on the JavaScript side. This matching is how WebGPU knows which bytes in your buffer correspond to which shader input.
Interactive: Vertex Attributes Explorer
struct Vertex {
@location(0) position: vec3<f32>,
@location(1) color: vec4<f32>,
@location(3) uv: vec2<f32>,
}

Click attributes to toggle them. Position is always required. The vertex struct shows how attributes map to @location indices.
Common attributes include:
Position (vec3<f32> or vec4<f32>) defines where the vertex sits in model space. This is the only required attribute—without positions, there is nothing to draw.
Color (vec4<f32> or vec3<f32>) carries per-vertex colors. The rasterizer interpolates these across the triangle, creating smooth gradients.
Normal (vec3<f32>) specifies the surface direction at the vertex, essential for lighting calculations. Normals point perpendicular to the surface.
Texture Coordinates (vec2<f32>, often called UVs) map the vertex to a location in a texture image. The rasterizer interpolates UVs across the triangle, telling the fragment shader where to sample.
Index Buffers
Drawing a cube requires 8 vertices but 12 triangles (2 per face, 6 faces). Without index buffers, you would need to specify 36 vertices (3 per triangle × 12 triangles), duplicating each corner position multiple times.
Index buffers eliminate this waste by storing indices—integers that reference vertices in the vertex buffer. Instead of duplicating vertex data, you list which vertices form each triangle.
Interactive: Index Buffer Efficiency
Hover over triangles to see which vertices they reference. Index buffers let vertices be shared across multiple triangles.
Notice how index buffers dramatically reduce the amount of data. For a cube, 8 vertices plus 36 indices is far smaller than 36 complete vertex records. The savings scale with mesh complexity—a detailed character model might have thousands of shared vertices.
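The cube arithmetic is easy to check. Assuming the 36-byte interleaved vertex from earlier (position, color, UV) and 16-bit indices:

```javascript
// 36 triangle corners, each a full vertex record, versus
// 8 shared vertices plus 36 two-byte indices.
const bytesPerVertex = 9 * 4;                // 36 bytes: position + color + uv
const unindexed = 36 * bytesPerVertex;       // 36 duplicated vertices
const indexed = 8 * bytesPerVertex + 36 * 2; // 8 vertices + 36 u16 indices
console.log(unindexed, indexed); // 1296 vs 360 bytes
```

The richer the vertex format, the bigger the win, since each shared vertex is stored once no matter how many triangles reference it.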
// 4 unique vertices for a quad (two triangles)
const vertices = new Float32Array([
-0.5, -0.5, 0.0, // 0: bottom-left
0.5, -0.5, 0.0, // 1: bottom-right
0.5, 0.5, 0.0, // 2: top-right
-0.5, 0.5, 0.0, // 3: top-left
]);
// 6 indices defining two triangles
const indices = new Uint16Array([
0, 1, 2, // first triangle
0, 2, 3, // second triangle
]);

Index buffers use GPUBufferUsage.INDEX and are bound separately from vertex buffers with setIndexBuffer(). The draw call then becomes drawIndexed() instead of draw().
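To make the indexing concrete, here is a CPU-side sketch of what the GPU effectively does with the quad above: each index is just a lookup into the vertex array.

```javascript
const quadVertices = [
  [-0.5, -0.5, 0.0], // 0: bottom-left
  [ 0.5, -0.5, 0.0], // 1: bottom-right
  [ 0.5,  0.5, 0.0], // 2: top-right
  [-0.5,  0.5, 0.0], // 3: top-left
];
const quadIndices = [0, 1, 2, 0, 2, 3];

// Expand indices into the flat triangle list the rasterizer sees.
const assembled = quadIndices.map(i => quadVertices[i]);
console.log(assembled.length); // 6 corners assembled from only 4 stored vertices
```

Vertices 0 and 2 each appear in both triangles, but are stored (and, on most GPUs, shaded) only once.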
The Vertex Shader
The vertex shader is a programmable stage that runs once per vertex. Its primary job is transforming vertex positions from model space to clip space—the coordinate system the rasterizer expects.
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) color: vec4<f32>,
}
struct VertexOutput {
@builtin(position) position: vec4<f32>,
@location(0) color: vec4<f32>,
}
@group(0) @binding(0) var<uniform> mvp: mat4x4<f32>;
@vertex
fn main(input: VertexInput) -> VertexOutput {
var output: VertexOutput;
output.position = mvp * vec4(input.position, 1.0);
output.color = input.color;
return output;
}

The output marked @builtin(position) is special—it must be a vec4<f32> in clip space. The GPU uses this position to determine where the vertex appears on screen and which pixels the triangle covers.
Other outputs (like color above) are interpolants. The vertex shader computes them at each vertex, and the rasterizer interpolates them across the triangle surface. The fragment shader receives these interpolated values.
Interactive: Vertex Shader Data Flow
@vertex
fn main(@location(0) pos: vec3<f32>,
@location(1) color: vec4<f32>) -> VertexOutput {
var out: VertexOutput;
out.position = mvp * vec4(pos, 1.0); // Transform
out.color = color; // Pass through
return out;
}

Watch each vertex flow through the vertex shader. The shader transforms positions from model space to clip space while passing colors to the next stage.
The shader can do far more than simple transformation. Common vertex shader operations include:
Model-View-Projection transformation converts positions from model space through world space and view space into clip space—a single matrix multiply.
Skeletal animation blends multiple bone transformations weighted by per-vertex bone indices and weights, deforming a mesh according to an animated skeleton.
Vertex displacement modifies positions procedurally, creating effects like ocean waves or terrain deformation from heightmaps.
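The "single matrix multiply" works because matrix products compose: the CPU multiplies projection, view, and model once per object, and the shader applies the combined result to every vertex. A minimal column-major sketch (matching the memory layout of WGSL's mat4x4<f32>); the projection, view, and model matrices themselves are assumed to come from your math library:

```javascript
// Multiply two 4x4 matrices stored as flat, column-major arrays of 16
// numbers: element (row, col) lives at index col * 4 + row.
function mul(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

const identity = new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]);

// Composed once per object on the CPU, then uploaded as the uniform `mvp`:
// const mvp = mul(projection, mul(view, model));
```

The order matters: mul(projection, mul(view, model)) applies the model transform first, then view, then projection, matching mvp * position in the shader.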
Passing Data to the Fragment Shader
Whatever you output from the vertex shader (besides the built-in position) becomes input to the fragment shader. The rasterizer interpolates these values across the triangle, so each fragment receives a blended value based on its position relative to the three vertices.
// Vertex shader output / Fragment shader input
struct Varyings {
@builtin(position) position: vec4<f32>,
@location(0) color: vec4<f32>,
@location(1) uv: vec2<f32>,
@location(2) worldNormal: vec3<f32>,
}
@vertex
fn vertexMain(input: VertexInput) -> Varyings {
var out: Varyings;
out.position = mvp * vec4(input.position, 1.0);
out.color = input.color;
out.uv = input.uv;
out.worldNormal = (model * vec4(input.normal, 0.0)).xyz;
return out;
}
@fragment
fn fragmentMain(in: Varyings) -> @location(0) vec4<f32> {
// in.color, in.uv, in.worldNormal are interpolated
return in.color;
}

The @location numbers in the output struct must match the @location numbers in the fragment shader's input. This is how WebGPU connects the two stages.
Interpolation is perspective-correct by default, meaning values like texture coordinates remain accurate even when viewing triangles at an angle. This prevents the texture swimming artifacts that plagued early 3D graphics using affine texture mapping.
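A sketch of the difference, assuming barycentric weights b and the three vertices' clip-space w values: affine interpolation blends the attribute directly, while perspective-correct interpolation blends attribute/w and 1/w separately, then divides.

```javascript
// Interpolate a scalar attribute v = [v0, v1, v2] at a point with
// barycentric weights b = [b0, b1, b2] (summing to 1).

// Affine: blend the attribute values directly.
function affine(v, b) {
  return b[0] * v[0] + b[1] * v[1] + b[2] * v[2];
}

// Perspective-correct: blend v/w and 1/w, then divide.
function perspectiveCorrect(v, w, b) {
  const num = b[0] * v[0] / w[0] + b[1] * v[1] / w[1] + b[2] * v[2] / w[2];
  const den = b[0] / w[0] + b[1] / w[1] + b[2] / w[2];
  return num / den;
}
```

When all three w values are equal (a triangle parallel to the screen), the two methods agree; as w diverges across the triangle, affine interpolation drifts—which is exactly the texture-swimming artifact described above.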
Built-in Inputs
Beyond your custom attributes, the vertex shader can access built-in values:
@vertex
fn main(
@builtin(vertex_index) vertexIndex: u32,
@builtin(instance_index) instanceIndex: u32,
) -> @builtin(position) vec4<f32> {
// vertexIndex: which vertex this invocation processes (0, 1, 2, ...)
// instanceIndex: which instance when using instanced drawing
// ...
}

@builtin(vertex_index) provides the index of the current vertex. Useful for procedural geometry where you calculate positions from the index rather than reading from a buffer.
@builtin(instance_index) identifies which instance is being processed during instanced drawing—a technique for rendering many copies of the same mesh efficiently.
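A well-known use of vertex_index is the "full-screen triangle": call draw(3) with no vertex buffer at all, and derive each clip-space corner purely from the index. This JS sketch mirrors the arithmetic the WGSL vertex shader would perform:

```javascript
// Compute a clip-space corner of an oversized triangle from vertex_index
// alone. Commonly written in WGSL as:
//   let x = f32(i32(vertexIndex) / 2) * 4.0 - 1.0;
//   let y = f32(i32(vertexIndex) % 2) * 4.0 - 1.0;
function fullscreenTriangleCorner(vertexIndex) {
  const x = Math.floor(vertexIndex / 2) * 4 - 1;
  const y = (vertexIndex % 2) * 4 - 1;
  return [x, y];
}

console.log([0, 1, 2].map(fullscreenTriangleCorner));
// → [[-1,-1], [-1,3], [3,-1]]
```

The triangle overshoots the [-1, 1] clip-space square on two sides, so after clipping it covers every pixel—handy for post-processing passes.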
Key Takeaways
- Vertex buffers store per-vertex data (positions, colors, normals, UVs) in GPU memory
- Interleaved layouts pack all attributes for each vertex together; separate layouts use one buffer per attribute type
- Index buffers reference vertices by index, eliminating duplication and reducing memory usage
- The vertex shader transforms each vertex from model space to clip space and prepares data for interpolation
- Outputs from the vertex shader are interpolated across the triangle by the rasterizer
- The @builtin(position) output is required and determines where pixels appear; other outputs become fragment shader inputs
- Vertex shaders enable effects like skeletal animation, procedural displacement, and per-vertex lighting