Your First Triangle
The classic GPU hello world—drawing a triangle from scratch
Every graphics programmer's journey begins the same way: with a triangle. Not because triangles are inherently interesting, but because they are the fundamental unit of everything you will ever render. Every mesh, every character, every landscape is made of triangles. The GPU is a machine optimized for one thing above all else: drawing triangles extremely fast.
Your first WebGPU triangle
This chapter walks through the complete code to render that red triangle. By the end, you will understand each piece—the shaders, the pipeline, the draw call—and how they fit together.
The Triangle
Why start with a triangle? A point is simpler, and a line requires only two vertices. But a triangle is the simplest closed shape, the minimal polygon. It defines an interior that can be filled. This matters because most rendering is about filling surfaces, and surfaces are built from triangles.
A triangle has three vertices. Each vertex is a point in space. For our first triangle, we will define these positions directly in the shader rather than passing them from JavaScript. This keeps the code minimal and focuses attention on what matters: the rendering pipeline.
Our three vertices sit in clip space, where coordinates range from -1 to 1 in both x and y. The center of the screen is (0, 0). The top of the screen is y = 1, the bottom is y = -1.
       (0, 0.5)
          /\
         /  \
        /    \
       /      \
      /________\
(-0.5, -0.5)  (0.5, -0.5)

These three points—(0, 0.5), (-0.5, -0.5), and (0.5, -0.5)—form an upward-pointing triangle centered on screen.
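To build intuition for how clip space maps to the canvas, here is a small helper (illustrative only, not part of the chapter's rendering code; the function name and canvas size are made up for the example):

```javascript
// Convert a clip-space coordinate to a pixel coordinate on a canvas.
// Clip space runs from -1 to 1 on both axes with y pointing up;
// pixel space runs from 0 to the canvas size with y pointing down.
function clipToPixel(clipX, clipY, width, height) {
  const pixelX = (clipX + 1) / 2 * width;
  const pixelY = (1 - clipY) / 2 * height;
  return { x: pixelX, y: pixelY };
}

// The triangle's apex (0, 0.5) on an 800x600 canvas lands at the
// horizontal center, a quarter of the way down from the top:
console.log(clipToPixel(0, 0.5, 800, 600)); // { x: 400, y: 150 }
```

Note the y flip: clip-space y = 1 is the top of the screen, but pixel row 0 is also the top, so the conversion inverts the axis.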
The Vertex Shader
A vertex shader runs once per vertex. Its job is to determine where that vertex appears on screen. The input is vertex data—positions, colors, texture coordinates, whatever you provide. The output must include a position in clip space.
Here is the complete vertex shader for our triangle:
@vertex
fn vertexMain(@builtin(vertex_index) vertexIndex: u32) -> @builtin(position) vec4f {
  var pos = array<vec2f, 3>(
    vec2f(0.0, 0.5),
    vec2f(-0.5, -0.5),
    vec2f(0.5, -0.5)
  );
  return vec4f(pos[vertexIndex], 0.0, 1.0);
}

The @vertex attribute marks this as a vertex shader entry point. The function takes one input: vertexIndex, a built-in value that tells us which vertex we are processing (0, 1, or 2). We look up that vertex's position from a hardcoded array and return it as a 4D vector.
The return type vec4f has four components: x, y, z, and w. We set z to 0 (our triangle is flat) and w to 1 (required for proper perspective division, though it does not matter for 2D). The @builtin(position) annotation tells the GPU this output is the clip-space position.
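The perspective division mentioned above can be illustrated outside the shader. This plain-JavaScript sketch (the function name is invented for illustration; the GPU performs this step in hardware after the vertex shader) divides each component by w:

```javascript
// Illustrative sketch of the perspective divide the GPU applies
// after the vertex shader: x, y, z are each divided by w.
function perspectiveDivide([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// With w = 1, the position passes through unchanged, which is why
// our flat 2D triangle can ignore it:
console.log(perspectiveDivide([0, 0.5, 0, 1])); // [0, 0.5, 0]

// With a larger w, the point is pulled toward the origin; in 3D
// this is what produces perspective foreshortening:
console.log(perspectiveDivide([0.8, 0.8, 0, 2])); // [0.4, 0.4, 0]
```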
This shader is minimal, but it demonstrates the pattern: vertex shaders transform positions. In later chapters, we will pass vertex data through buffers, apply transformations, and compute per-vertex lighting. For now, the shader simply returns fixed positions.
The Fragment Shader
After the vertex shader processes all vertices, the GPU rasterizes the triangle—it determines which pixels fall inside the triangle's edges. For each of these pixels (called fragments), the fragment shader runs to determine the final color.
Here is our fragment shader:
@fragment
fn fragmentMain() -> @location(0) vec4f {
  return vec4f(1.0, 0.0, 0.0, 1.0);
}

This shader is almost trivial. It takes no inputs and returns a single value: the color red. The @location(0) annotation means this output goes to the first color attachment—in our case, the canvas itself.
Colors are four-component vectors: red, green, blue, and alpha. Values range from 0 to 1. So vec4f(1.0, 0.0, 0.0, 1.0) is fully red, no green, no blue, fully opaque.
Every fragment inside the triangle runs this shader and gets assigned the same red color. The result: a solid red triangle.
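If you are used to 8-bit color channels (0 to 255), the mapping to WGSL's normalized 0-to-1 floats is a simple scale. This helper (hypothetical, named for the example) converts a shader-style color into a CSS rgba() string:

```javascript
// Convert a WGSL-style normalized color (components in 0..1)
// to a CSS rgba() string (channels in 0..255, alpha unchanged).
function toCss([r, g, b, a]) {
  const to255 = (v) => Math.round(v * 255);
  return `rgba(${to255(r)}, ${to255(g)}, ${to255(b)}, ${a})`;
}

// The fragment shader's vec4f(1.0, 0.0, 0.0, 1.0):
console.log(toCss([1.0, 0.0, 0.0, 1.0])); // rgba(255, 0, 0, 1)
```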
Interactive: Edit the fragment color
The fragment shader returns this vec4 color for every pixel inside the triangle.
The Render Pipeline
Shaders alone do not render anything. They need to be compiled, configured, and assembled into a render pipeline. The pipeline defines the complete sequence of operations from vertex input to pixel output.
The pipeline stages
Hover over each stage to learn what it does in the rendering pipeline.
Creating a pipeline involves several steps. First, we compile the shader code into shader modules:
const shaderModule = device.createShaderModule({
  code: `
    @vertex
    fn vertexMain(@builtin(vertex_index) vertexIndex: u32) -> @builtin(position) vec4f {
      var pos = array<vec2f, 3>(
        vec2f(0.0, 0.5),
        vec2f(-0.5, -0.5),
        vec2f(0.5, -0.5)
      );
      return vec4f(pos[vertexIndex], 0.0, 1.0);
    }

    @fragment
    fn fragmentMain() -> @location(0) vec4f {
      return vec4f(1.0, 0.0, 0.0, 1.0);
    }
  `
});

Both shaders live in the same module here. In larger projects, you might split them across files.
Next, we create the pipeline itself:
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: shaderModule,
    entryPoint: 'vertexMain'
  },
  fragment: {
    module: shaderModule,
    entryPoint: 'fragmentMain',
    targets: [{ format: canvasFormat }]
  },
  primitive: {
    topology: 'triangle-list'
  }
});

The pipeline configuration specifies:
The vertex stage points to our vertex shader entry point. It could also define vertex buffer layouts if we were passing geometry data from JavaScript, but our hardcoded triangle needs no external data.
The fragment stage points to our fragment shader and declares one output target. The format must match the canvas format—usually bgra8unorm or rgba8unorm depending on the platform.
The primitive topology tells the GPU how to interpret vertices. 'triangle-list' means every three vertices form an independent triangle. Other options include 'triangle-strip' (shared edges) and 'line-list' (for wireframes).
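The practical difference between the topologies is how many triangles a given vertex count produces. A quick sketch (plain JavaScript, not a WebGPU API; the function name is invented):

```javascript
// How many triangles do n vertices produce under each topology?
function triangleCount(topology, vertexCount) {
  switch (topology) {
    case 'triangle-list':
      // Every three vertices form an independent triangle.
      return Math.floor(vertexCount / 3);
    case 'triangle-strip':
      // After the first triangle, each new vertex adds one more,
      // reusing the previous two.
      return Math.max(0, vertexCount - 2);
    default:
      throw new Error(`unknown topology: ${topology}`);
  }
}

console.log(triangleCount('triangle-list', 6));  // 2
console.log(triangleCount('triangle-strip', 6)); // 4
```

This is why strips are attractive for long runs of connected triangles: six vertices buy you four triangles instead of two.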
The Draw Call
With a pipeline created, we can finally draw. Rendering in WebGPU happens inside render passes, which are recorded into command encoders and submitted to the GPU queue.
const commandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: 'clear',
    storeOp: 'store'
  }]
});

renderPass.setPipeline(pipeline);
renderPass.draw(3);
renderPass.end();

device.queue.submit([commandEncoder.finish()]);

The render pass begins with configuration: we specify what to render to (the canvas texture) and what to do at the start (clear it to black). The loadOp: 'clear' ensures we start with a blank slate. The storeOp: 'store' ensures our results are written back.
Inside the pass, we set our pipeline and issue a draw call. draw(3) tells the GPU to invoke the vertex shader three times, with vertexIndex values 0, 1, and 2. The GPU then rasterizes the resulting triangle and runs the fragment shader for each covered pixel.
Finally, we end the pass, finish recording the command buffer, and submit it. The draw happens asynchronously—submit() returns immediately while the GPU works.
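What draw(3) triggers on the GPU can be mimicked in plain JavaScript: the vertex shader body runs once per index. This is only a simulation for intuition, not anything the real pipeline executes:

```javascript
// The same hardcoded positions as in the WGSL shader.
const pos = [
  [0.0, 0.5],
  [-0.5, -0.5],
  [0.5, -0.5],
];

// Mirrors the WGSL body: return vec4f(pos[vertexIndex], 0.0, 1.0);
function vertexMain(vertexIndex) {
  return [...pos[vertexIndex], 0.0, 1.0];
}

// draw(3) invokes the vertex shader with indices 0, 1, 2:
const outputs = [0, 1, 2].map(vertexMain);
console.log(outputs); // three clip-space positions, one per vertex
```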
Putting It Together
Here is the complete initialization and rendering code:
async function initWebGPU() {
  // Request adapter and device. requestAdapter() resolves to null
  // when no suitable GPU is available, so check before proceeding.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error('WebGPU adapter not available');
  }
  const device = await adapter.requestDevice();

  // Configure the canvas
  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('webgpu');
  const format = navigator.gpu.getPreferredCanvasFormat();
  context.configure({ device, format });

  // Create shader module
  const shaderModule = device.createShaderModule({
    code: `
      @vertex
      fn vertexMain(@builtin(vertex_index) vertexIndex: u32) -> @builtin(position) vec4f {
        var pos = array<vec2f, 3>(
          vec2f(0.0, 0.5),
          vec2f(-0.5, -0.5),
          vec2f(0.5, -0.5)
        );
        return vec4f(pos[vertexIndex], 0.0, 1.0);
      }

      @fragment
      fn fragmentMain() -> @location(0) vec4f {
        return vec4f(1.0, 0.0, 0.0, 1.0);
      }
    `
  });

  // Create pipeline
  const pipeline = device.createRenderPipeline({
    layout: 'auto',
    vertex: {
      module: shaderModule,
      entryPoint: 'vertexMain'
    },
    fragment: {
      module: shaderModule,
      entryPoint: 'fragmentMain',
      targets: [{ format }]
    },
    primitive: {
      topology: 'triangle-list'
    }
  });

  // Render
  const commandEncoder = device.createCommandEncoder();
  const renderPass = commandEncoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      loadOp: 'clear',
      storeOp: 'store'
    }]
  });
  renderPass.setPipeline(pipeline);
  renderPass.draw(3);
  renderPass.end();
  device.queue.submit([commandEncoder.finish()]);
}

initWebGPU();

Around 50 lines of JavaScript to draw a triangle. It feels like a lot for such a simple result. But consider what we have accomplished: we have established direct communication with the GPU, compiled shader programs, configured a rendering pipeline, and issued draw commands that execute in parallel across thousands of cores.
This foundation scales. The same pattern—shader modules, pipelines, render passes, draw calls—renders everything from simple triangles to photorealistic scenes. The complexity comes from what you put in the shaders and how many draw calls you issue, not from a different architecture.
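One caveat: the listing assumes WebGPU is available. In browsers without support, navigator.gpu is undefined and the first line throws. A minimal feature-detection guard might look like this (the function name is invented; what you fall back to is up to you):

```javascript
// Minimal feature detection before any WebGPU call. navigator.gpu
// is undefined in browsers and runtimes without WebGPU support.
function supportsWebGPU() {
  return typeof navigator !== 'undefined' &&
    typeof navigator.gpu !== 'undefined';
}

if (!supportsWebGPU()) {
  console.log('WebGPU not available; show a message or fall back.');
}
```

Checking before calling requestAdapter gives you a clean place to show an error message instead of an unhandled exception.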
Interactive: Move the vertices
Drag the vertex handles or use the sliders to reshape the triangle. Coordinates are in clip space (-1 to 1).
Key Takeaways
- A triangle is the atomic unit of GPU rendering—all meshes decompose into triangles
- The vertex shader runs once per vertex and must output a clip-space position
- The fragment shader runs once per pixel inside the rasterized primitive and outputs a color
- The render pipeline connects shaders with configuration for how primitives are assembled and rendered
- Drawing happens by recording commands into a command encoder and submitting to the device queue
- This pattern—create pipeline, begin render pass, set pipeline, draw, end pass, submit—is the skeleton of all WebGPU rendering