The Raymarching Algorithm
Sphere tracing through distance fields
Everything we have learned so far has been building to this moment. Signed distance functions, coordinate systems, shaping functions: they all converge here, in an algorithm that renders 3D scenes using nothing but math. No triangles. No meshes. Just a function that tells us how far away things are.
Raymarching is how we turn those distance functions into images.
The Core Idea
Imagine standing at a camera position in a 3D world. For each pixel on your screen, you shoot a ray outward into the scene. The question is: does this ray hit anything?
If we had triangle meshes, we would compute ray-triangle intersections. But we have something different: a function that, given any point in space, tells us the distance to the nearest surface. This distance function changes everything.
The Raymarching Concept
Each blue circle shows the safe distance from that point. The ray steps forward by exactly that amount each iteration until it reaches the surface.
Here is the insight that makes raymarching work. If the SDF tells us we are 0.5 units from the nearest surface, then we can safely take a step of 0.5 units in any direction without hitting anything. The sphere centered at our position with that radius is guaranteed to be empty.
So we step forward by exactly that distance. Then we ask the SDF again. It gives us a new distance, we step that far, and repeat. Each step brings us closer to the surface until we either hit it or decide the ray has gone too far.
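To make the stepping behavior concrete, here is a small CPU sketch in Python (not shader code; the helper names are my own). It marches from the origin along +z toward a sphere the ray hits off-center, so you can watch the step sizes shrink as the ray closes in:

```python
import math

def sphere_sdf(p, center, r):
    """Signed distance from point p to a sphere."""
    return math.dist(p, center) - r

# Sphere placed slightly off the ray's axis, so the ray approaches
# the surface at an angle and the steps shrink gradually.
center, radius = (0.5, 0.0, 3.0), 1.0
t, steps = 0.0, []
for _ in range(64):
    p = (0.0, 0.0, t)
    d = sphere_sdf(p, center, radius)
    steps.append(d)
    if d < 0.001:
        break
    t += d          # advance by the safe distance

print(f"hit after {len(steps)} samples at t = {t:.4f}")
```

Each recorded distance is smaller than the last: large leaps in open space, tiny cautious steps near the surface.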
Setting Up the Ray
Every ray needs two things: an origin (where the camera is) and a direction (where this particular pixel is looking).
The origin is simple: it is the camera position in world space. The direction requires a bit more work. We need to transform each pixel's UV coordinate into a 3D direction vector.
vec3 getRayDir(vec2 uv, vec3 camPos, vec3 lookAt, float zoom) {
    vec3 forward = normalize(lookAt - camPos);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    return normalize(uv.x * right + uv.y * up + zoom * forward);
}
The zoom parameter controls the field of view. Higher values create a narrower view (more telephoto); lower values create a wider view.
Interactive: Camera Ray Setup
Each pixel maps to a ray direction. Center pixels point forward (white dot). Edge pixels point outward. Higher zoom narrows the field of view.
Each pixel gets its own ray direction based on its UV coordinate. The center pixel points straight forward. Pixels at the edges point outward at an angle. This creates the perspective projection we expect from a camera.
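The same basis construction can be checked on the CPU. This Python sketch mirrors the GLSL `getRayDir` (the Python helper names are my own) and shows that the center pixel looks straight at the target while edge pixels tilt outward:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def get_ray_dir(uv, cam_pos, look_at, zoom):
    """CPU mirror of the GLSL getRayDir: build a camera basis,
    then combine it with the pixel's uv offset."""
    forward = normalize(tuple(l - c for l, c in zip(look_at, cam_pos)))
    right = normalize(cross((0.0, 1.0, 0.0), forward))
    up = cross(forward, right)
    return normalize(tuple(uv[0] * r + uv[1] * u + zoom * f
                           for r, u, f in zip(right, up, forward)))

cam, target = (0.0, 1.0, -3.0), (0.0, 0.0, 0.0)
center = get_ray_dir((0.0, 0.0), cam, target, 1.0)  # points at the target
edge = get_ray_dir((0.5, 0.0), cam, target, 1.0)    # tilted outward
```

Raising `zoom` pulls the edge ray closer to the forward direction, which is exactly the telephoto effect described above.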
The Raymarching Loop
The algorithm itself is remarkably simple. We maintain a total distance traveled along the ray. At each step, we sample the SDF at our current position, then advance by that amount.
float rayMarch(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 100; i++) {
        vec3 p = ro + rd * t;
        float d = sceneSDF(p);
        if (d < 0.001) break;
        if (t > 100.0) break;
        t += d;
    }
    return t;
}
Let us trace through this step by step:
- We start at t = 0, meaning we are at the camera origin
- We calculate our current position: p = ro + rd * t
- We query the scene SDF to find the distance to the nearest surface
- If the distance is tiny (less than 0.001), we have hit something
- If we have traveled too far (more than 100 units), we give up
- Otherwise, we advance by the safe distance and repeat
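The loop translates almost line for line to the CPU. Here is a Python port (a sketch, with my own parameter names for the constants) that you can use to verify the behavior: a camera 3 units behind a unit sphere should hit the surface at t ≈ 2.

```python
import math

def scene_sdf(p):
    # Unit sphere at the origin, matching the chapter's example scene.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def ray_march(ro, rd, max_steps=100, eps=0.001, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(ro, rd))
        dist = scene_sdf(p)
        if dist < eps:       # hit: close enough to the surface
            break
        if t > max_dist:     # miss: escaped the scene
            break
        t += dist            # advance by the safe distance
    return t

t_hit = ray_march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))   # straight at the sphere
t_miss = ray_march((0.0, 0.0, -3.0), (0.0, 1.0, 0.0))  # straight up, past it
```

The hit ray stops roughly 2 units out (3 units to the center minus the radius of 1); the miss ray runs until it exceeds the distance cap.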
Interactive: Sphere Tracing in Action
Watch how the steps get smaller as the ray approaches the surface. In open space, we take large leaps. Near geometry, we take cautious tiny steps. This adaptive stepping is what makes raymarching efficient.
Why This Works: The Sphere Tracing Guarantee
The algorithm is sometimes called sphere tracing because of its geometric guarantee. At any point, the SDF value defines a sphere around that point where no surface exists. We can safely jump to the edge of that sphere without missing anything.
This guarantee is why signed distance functions must be exact or conservative. If your SDF ever reports a distance larger than the actual nearest surface, the ray could skip right through geometry. Underestimating is safe (just slower). Overestimating breaks the algorithm.
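You can demonstrate the failure mode directly. In this hedged Python sketch (a 1D march along the ray's axis; the function names are my own), a thin wall is rendered correctly by its true SDF, but a version that overestimates distances by 2x steps straight through it:

```python
def wall_sdf(z):
    # Thin wall: a slab 0.1 units thick centered at z = 5.
    return abs(z - 5.0) - 0.05

def bad_sdf(z):
    # Overestimates by 2x: violates the sphere tracing guarantee.
    return 2.0 * wall_sdf(z)

def march(sdf, max_steps=200):
    t = 0.0
    for _ in range(max_steps):
        d = sdf(t)
        if d < 0.001:
            return t          # hit
        if t > 100.0:
            return None       # miss: escaped the scene
        t += d
    return None

good = march(wall_sdf)   # lands on the wall at t = 4.95
bad = march(bad_sdf)     # first step overshoots the wall entirely
```

The conservative SDF converges on the wall's front face; the overestimating one jumps past the slab on its first step and never comes back.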
Termination Conditions
The loop terminates in three ways:
Hit: The distance is smaller than some threshold (commonly 0.001). We have found a surface.
Miss: The total distance exceeds some maximum (commonly 100). The ray has escaped into empty space.
Iteration limit: We ran out of steps (commonly 100 iterations). This can indicate either a successful trace that needed many steps or an edge case like grazing along a surface.
Interactive: Hit vs Miss
Distance drops below threshold (0.001) - surface found
The thresholds matter. Too large a hit threshold and surfaces look rough. Too small and you waste iterations. The miss threshold depends on your scene scale. The iteration count is a performance budget.
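When debugging, it helps to know which condition actually fired. A small variant of the Python loop from earlier (again a sketch, with my own naming) returns the reason alongside the distance:

```python
import math

def sphere_sdf(p):
    # Unit sphere at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def ray_march_verbose(ro, rd, max_steps=100, eps=0.001, max_dist=100.0):
    """Returns (t, reason) so each termination condition is visible."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(ro, rd))
        dist = sphere_sdf(p)
        if dist < eps:
            return t, "hit"
        if t > max_dist:
            return t, "miss"
        t += dist
    return t, "iteration limit"

hit = ray_march_verbose((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
miss = ray_march_verbose((0.0, 0.0, -3.0), (0.0, 1.0, 0.0))
```

In a shader you would typically encode the same information in the sign or magnitude of the return value rather than a string, but the three-way split is identical.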
Performance Considerations
Raymarching is not free. Every pixel runs the loop, and the loop can iterate many times. Understanding the cost structure helps write efficient shaders.
Step count is everything. Fewer steps means faster rendering. Wide-open scenes where rays travel far before hitting anything require many steps. Tight scenes with nearby geometry resolve quickly.
The SDF complexity matters too. Every step evaluates the scene SDF. If your SDF is expensive (many shapes, deep CSG trees), each step costs more.
Visualizing Step Count
Green areas resolve quickly with few steps. Orange/red areas require many iterations, typically at silhouette edges where rays graze the surface.
Notice how the edges of objects and the space behind them require the most steps. Rays grazing along surfaces take tiny steps, accumulating many iterations. Direct hits resolve quickly.
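The grazing-ray cost is easy to measure. This Python sketch (assumptions: a unit sphere scene and my own helper names) counts SDF samples for a head-on ray versus one that skims just past the silhouette:

```python
import math

def sphere_sdf(p):
    # Unit sphere at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def march_count(ro, rd, max_steps=256, eps=0.001, max_dist=100.0):
    """March a ray and count how many SDF samples it needed."""
    t = 0.0
    for i in range(max_steps):
        p = tuple(o + d * t for o, d in zip(ro, rd))
        dist = sphere_sdf(p)
        if dist < eps or t > max_dist:
            return i + 1
        t += dist
    return max_steps

direct = march_count((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))   # head-on hit
graze  = march_count((0.0, 1.01, -3.0), (0.0, 0.0, 1.0))  # skims the silhouette
```

The head-on ray resolves in a couple of samples; the grazing ray creeps along in steps no larger than its clearance from the surface, burning an order of magnitude more iterations before it escapes. That is exactly the orange/red band at silhouette edges.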
A Complete Example
Here is a minimal but complete raymarcher. It renders a sphere at the origin with basic shading:
uniform vec2 u_resolution; // viewport size in pixels

float sphereSDF(vec3 p, float r) {
    return length(p) - r;
}

float sceneSDF(vec3 p) {
    return sphereSDF(p, 1.0);
}

float rayMarch(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 100; i++) {
        vec3 p = ro + rd * t;
        float d = sceneSDF(p);
        if (d < 0.001) break;
        if (t > 100.0) break;
        t += d;
    }
    return t;
}

void main() {
    // Center the pixel coordinates and normalize by height
    // so the image is not stretched by the aspect ratio.
    vec2 uv = (gl_FragCoord.xy - 0.5 * u_resolution) / u_resolution.y;
    vec3 ro = vec3(0, 0, -3);           // camera 3 units behind the origin
    vec3 rd = normalize(vec3(uv, 1.0)); // simple fixed-forward camera
    float t = rayMarch(ro, rd);
    vec3 color = vec3(0.0);
    if (t < 100.0) {
        color = vec3(1.0 - t * 0.1);    // distance-based falloff
    }
    gl_FragColor = vec4(color, 1.0);
}
Live Raymarcher
A complete raymarched scene: sphere SDF, raymarching loop, normal calculation, and basic diffuse + specular lighting.
This is it. This is how you render 3D scenes in a fragment shader. The sphere exists not as geometry but as a mathematical function. The light exists not as a light source but as a simple distance-based falloff. Yet the result is unmistakably three-dimensional.
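If you want to see the whole pipeline without a GPU, here is a CPU sketch in Python that mirrors `main()` at a tiny resolution and prints the sphere as ASCII (the hit test `t < 100` and all helper names follow the GLSL example above; this is an illustration, not production code):

```python
import math

def scene_sdf(p):
    # Unit sphere at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def ray_march(ro, rd):
    t = 0.0
    for _ in range(100):
        p = tuple(o + d * t for o, d in zip(ro, rd))
        dist = scene_sdf(p)
        if dist < 0.001 or t > 100.0:
            break
        t += dist
    return t

W, H = 32, 16
rows = []
for y in range(H):
    row = ""
    for x in range(W):
        # Same mapping as the fragment shader: center the pixel,
        # normalize by height so the image is not stretched.
        u = (x - 0.5 * W) / H
        v = (0.5 * H - y) / H
        n = math.sqrt(u * u + v * v + 1.0)
        rd = (u / n, v / n, 1.0 / n)
        t = ray_march((0.0, 0.0, -3.0), rd)
        row += "#" if t < 100.0 else "."
    rows.append(row)

print("\n".join(rows))
```

The output is a blocky disc of `#` characters against a `.` background: the same sphere, rendered by the same loop, one character per "pixel".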
What Comes Next
We have the algorithm. Now we need more interesting scenes. In the next chapter, we will build a library of 3D signed distance functions: spheres, boxes, planes, and the operations that combine them. Then we will add proper lighting with normals, diffuse shading, and shadows.
The foundation is laid. Everything from here builds upon this simple loop.
Key Takeaways
- Raymarching renders 3D scenes by iteratively stepping along rays using SDF distances
- Each step advances by exactly the SDF value, which is the safe maximum distance
- The algorithm terminates when the distance is tiny (hit) or the ray travels too far (miss)
- Camera setup transforms 2D UV coordinates into 3D ray directions
- Step count determines performance; grazing rays take many small steps
- The core loop is simple: sample SDF, check termination, advance, repeat