Resource Management

Pooling and cleanup strategies

GPU resources—buffers, textures, pipelines—are expensive to create. In performance-critical applications, the overhead of creating and destroying resources every frame adds up. Effective resource management means creating resources once, reusing them efficiently, and cleaning them up at the right time.

The Cost of Creation

Creating a buffer or texture involves several steps: allocating GPU memory, initializing state, and synchronizing with the driver. For small resources, this overhead can exceed the time spent actually using them.

// Expensive: creating a new buffer every frame
function renderFrame(data: Float32Array) {
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, data);
  
  // ... render with buffer ...
  
  buffer.destroy();
}

This pattern creates and destroys a buffer 60 or more times per second. Even if each creation takes only 0.1 ms, that's 6 ms per second wasted on allocation overhead.

Better: create the buffer once, reuse it:

let vertexBuffer: GPUBuffer | null = null;
 
function renderFrame(data: Float32Array) {
  if (!vertexBuffer || vertexBuffer.size < data.byteLength) {
    vertexBuffer?.destroy();
    vertexBuffer = device.createBuffer({
      size: data.byteLength,
      usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    });
  }
  
  device.queue.writeBuffer(vertexBuffer, 0, data);
  // ... render ...
}

Interactive: Resource lifecycle

Reuse: Create once, update each frame. Minimal allocation overhead.

Resource Pooling

When you need many similar resources, maintain a pool. Request from the pool when needed, return when done:

class BufferPool {
  private available: GPUBuffer[] = [];
  private inUse = new Set<GPUBuffer>();
  
  constructor(
    private device: GPUDevice,
    private size: number,
    private usage: GPUBufferUsageFlags
  ) {}
  
  acquire(): GPUBuffer {
    let buffer = this.available.pop();
    if (!buffer) {
      buffer = this.device.createBuffer({
        size: this.size,
        usage: this.usage,
      });
    }
    this.inUse.add(buffer);
    return buffer;
  }
  
  release(buffer: GPUBuffer) {
    if (this.inUse.delete(buffer)) {
      this.available.push(buffer);
    }
  }
  
  destroy() {
    for (const buffer of this.available) {
      buffer.destroy();
    }
    for (const buffer of this.inUse) {
      buffer.destroy();
    }
    this.available = [];
    this.inUse.clear();
  }
}

Interactive: How resource pools work


Acquire reuses available buffers before creating new ones. Release returns buffers to the pool instead of destroying them.
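The acquire/release pattern is not specific to GPU buffers. Here is a minimal generic sketch with plain objects (the `Pool` class and factory are illustrative, not part of the WebGPU API), which makes the reuse behavior easy to verify without a GPU:

```typescript
// Generic acquire/release pool with a user-supplied factory.
class Pool<T> {
  private available: T[] = [];
  private inUse = new Set<T>();

  constructor(private create: () => T) {}

  acquire(): T {
    // Reuse an idle item before creating a new one.
    const item = this.available.pop() ?? this.create();
    this.inUse.add(item);
    return item;
  }

  release(item: T) {
    if (this.inUse.delete(item)) {
      this.available.push(item);
    }
  }
}

let created = 0;
const pool = new Pool(() => ({ id: created++ }));

const a = pool.acquire(); // pool empty: factory runs once
pool.release(a);          // back to the available list
const b = pool.acquire(); // reuses the released object
console.log(a === b, created); // true 1
```

Tracking in-use items in a `Set` makes a double release a no-op, which keeps the pool's bookkeeping consistent.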

Pools work well for:

  • Uniform buffers (often same size, updated every frame)
  • Staging buffers (temporary buffers for CPU-GPU transfers)
  • Render targets of fixed sizes
  • Command encoders (though these are cheap to create)

Pools are less useful for:

  • Resources with varying sizes (textures of different dimensions)
  • Resources that live for the entire application lifetime
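One common workaround for varying sizes is bucketing: round each requested size up to a power-of-two bucket and keep one pool per bucket, trading some wasted bytes for reuse. A hypothetical sketch of the rounding step (the 256-byte floor is an arbitrary choice for illustration):

```typescript
// Round a requested byte size up to the next power-of-two bucket so
// that requests of similar size can share one pool per bucket.
function bucketSize(size: number): number {
  let bucket = 256; // arbitrary minimum bucket for this sketch
  while (bucket < size) bucket *= 2;
  return bucket;
}

// A pool-per-bucket scheme would then key pools by the rounded size,
// e.g. a Map<number, BufferPool> looked up with bucketSize(request).

console.log(bucketSize(100));  // 256
console.log(bucketSize(300));  // 512
console.log(bucketSize(4096)); // 4096
```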

Lazy Creation

Not every resource needs to exist at startup. Create resources on first use:

class MaterialSystem {
  private pipelineCache = new Map<string, GPURenderPipeline>();
  
  constructor(private device: GPUDevice) {}
  
  getPipeline(materialId: string): GPURenderPipeline {
    let pipeline = this.pipelineCache.get(materialId);
    if (!pipeline) {
      pipeline = this.createPipeline(materialId);
      this.pipelineCache.set(materialId, pipeline);
    }
    return pipeline;
  }
  
  private createPipeline(materialId: string): GPURenderPipeline {
    // Expensive creation happens only on first use
    return this.device.createRenderPipeline({
      // ... configuration based on materialId ...
    });
  }
}

Lazy creation spreads initialization cost over time. The first frame using a new material might stutter, but subsequent frames run smoothly. For smoother startup, consider:

  • Pre-warming common pipelines during loading screens
  • Creating pipelines asynchronously with createRenderPipelineAsync()

// Async pipeline creation doesn't block the main thread
const pipelinePromise = device.createRenderPipelineAsync({
  // ... configuration ...
});
 
// Later, when you need the pipeline:
const pipeline = await pipelinePromise;

Cleanup Strategies

When should you destroy resources? Several strategies exist:

Interactive: Cleanup strategy comparison


Choose based on your constraints. Most applications do well with deferred cleanup.

Immediate cleanup: Destroy resources as soon as they're no longer needed. Minimizes memory usage but requires careful tracking.

function processOnce(data: Float32Array) {
  const buffer = device.createBuffer({ ... });
  // ... use buffer ...
  buffer.destroy(); // Clean up immediately
}

Deferred cleanup: Batch destruction at frame boundaries. Simpler to implement, may hold memory longer than necessary.

const pendingDestruction: GPUBuffer[] = [];
 
function markForDestruction(buffer: GPUBuffer) {
  pendingDestruction.push(buffer);
}
 
function endFrame() {
  for (const buffer of pendingDestruction) {
    buffer.destroy();
  }
  pendingDestruction.length = 0;
}

Reference counting: Track how many systems use each resource. Destroy when count reaches zero.

class RefCountedBuffer {
  private refCount = 1;
  
  constructor(public buffer: GPUBuffer) {}
  
  addRef() {
    this.refCount++;
  }
  
  release() {
    this.refCount--;
    if (this.refCount === 0) {
      this.buffer.destroy();
    }
  }
}
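To see the lifetime rule in isolation, the same pattern can be exercised with a plain destroy callback standing in for `GPUBuffer.destroy()` (an illustrative sketch, runnable without a GPU):

```typescript
// Reference counting with a callback in place of a real GPU resource.
class RefCounted {
  private refCount = 1; // the creator holds the first reference

  constructor(private onDestroy: () => void) {}

  addRef() {
    this.refCount++;
  }

  release() {
    this.refCount--;
    if (this.refCount === 0) {
      this.onDestroy(); // last reference gone: free the resource
    }
  }
}

let destroyed = 0;
const shared = new RefCounted(() => destroyed++);
shared.addRef();  // a second system takes a reference
shared.release(); // first owner done; one reference remains
shared.release(); // last owner done; onDestroy fires exactly once
console.log(destroyed); // 1
```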

Generation-based: Assign resources to "generations" (e.g., frames). Destroy all resources from old generations.

class GenerationalPool {
  private current = new Map<number, GPUBuffer[]>();
  private generation = 0;
  
  constructor(private device: GPUDevice) {}
  
  allocate(): GPUBuffer {
    const buffer = this.device.createBuffer({ ... });
    const gen = this.current.get(this.generation) ?? [];
    gen.push(buffer);
    this.current.set(this.generation, gen);
    return buffer;
  }
  
  advanceGeneration(keepGenerations = 2) {
    this.generation++;
    for (const [gen, buffers] of this.current) {
      if (gen < this.generation - keepGenerations) {
        for (const buffer of buffers) {
          buffer.destroy();
        }
        this.current.delete(gen);
      }
    }
  }
}
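The destruction timing is easier to see with the GPU parts stripped out. In this sketch (illustrative names, with numbers standing in for buffers), tracked items survive `keepGenerations` advances and are then destroyed:

```typescript
// Generation-based cleanup with a destroy callback instead of GPUBuffer.
class Generations<T> {
  private byGeneration = new Map<number, T[]>();
  private generation = 0;

  constructor(private destroy: (item: T) => void) {}

  track(item: T) {
    const list = this.byGeneration.get(this.generation) ?? [];
    list.push(item);
    this.byGeneration.set(this.generation, list);
  }

  advance(keepGenerations = 2) {
    this.generation++;
    for (const [gen, items] of this.byGeneration) {
      if (gen < this.generation - keepGenerations) {
        for (const item of items) this.destroy(item);
        this.byGeneration.delete(gen);
      }
    }
  }
}

const destroyedItems: number[] = [];
const gens = new Generations<number>(n => destroyedItems.push(n));
gens.track(1);  // tracked in generation 0
gens.advance(); // generation 1: 0 >= 1 - 2, item survives
gens.advance(); // generation 2: 0 >= 2 - 2, item still survives
console.log(destroyedItems); // []
gens.advance(); // generation 3: 0 < 3 - 2, item destroyed
console.log(destroyedItems); // [1]
```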

Device Lost Handling

The GPU device can be lost at any time—driver crash, GPU reset, system sleep/wake. When this happens, all resources become invalid. Your application must handle this gracefully.

device.lost.then((info) => {
  console.log(`Device lost: ${info.reason}, ${info.message}`);
  
  // All GPU resources are now invalid
  // Must reinitialize everything
  initializeWebGPU().then(newDevice => {
    device = newDevice;
    recreateAllResources();
  });
});

Interactive: Device lost recovery flow


Device loss can happen anytime. Applications must detect it, reinitialize WebGPU, and recreate all resources from stored data.

Design for recovery from the start:

  • Keep original data (textures, meshes) accessible, not just GPU copies
  • Structure resource creation as idempotent functions that can be re-called
  • Store configuration separate from GPU objects

class Mesh {
  private vertexBuffer: GPUBuffer | null = null;
  
  constructor(
    private vertices: Float32Array,
    private device: GPUDevice
  ) {
    this.createGPUResources();
  }
  
  private createGPUResources() {
    this.vertexBuffer = this.device.createBuffer({
      size: this.vertices.byteLength,
      usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    });
    this.device.queue.writeBuffer(this.vertexBuffer, 0, this.vertices);
  }
  
  // Called after device loss recovery
  recreate(newDevice: GPUDevice) {
    this.device = newDevice;
    this.createGPUResources();
  }
  
  getVertexBuffer(): GPUBuffer {
    return this.vertexBuffer!;
  }
}

Memory Budgets

GPU memory is finite. Track your usage and stay within limits:

class MemoryTracker {
  private allocated = 0;
  private budget: number;
  
  constructor(budget: number) {
    this.budget = budget;
  }
  
  allocate(size: number): boolean {
    if (this.allocated + size > this.budget) {
      console.warn('Memory budget exceeded');
      return false;
    }
    this.allocated += size;
    return true;
  }
  
  free(size: number) {
    this.allocated -= size;
  }
  
  getUsage() {
    return {
      allocated: this.allocated,
      budget: this.budget,
      percent: (this.allocated / this.budget) * 100,
    };
  }
}

When approaching limits:

  • Reduce texture resolution
  • Unload distant or unused assets
  • Use compressed texture formats
  • Stream resources on demand
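Gating allocations on the budget can be sketched in a few lines (a closure-based variant of the `MemoryTracker` above; all names are illustrative):

```typescript
// Track bytes against a budget and refuse allocations that exceed it,
// so the caller can fall back (e.g. to a lower texture resolution).
function makeBudget(limit: number) {
  let used = 0;
  return {
    tryAllocate(size: number): boolean {
      if (used + size > limit) return false; // over budget: caller falls back
      used += size;
      return true;
    },
    free(size: number) {
      used -= size;
    },
    usage: () => used,
  };
}

const budget = makeBudget(1024);
console.log(budget.tryAllocate(512)); // true
console.log(budget.tryAllocate(512)); // true (exactly at the limit)
console.log(budget.tryAllocate(1));   // false: trigger a fallback instead
budget.free(512);
console.log(budget.tryAllocate(256)); // true again after freeing
```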

Key Takeaways

  • Resource creation has overhead—reuse resources instead of creating per-frame
  • Pools efficiently manage many similar resources with acquire/release patterns
  • Lazy creation spreads initialization cost; use async pipeline creation for smoothness
  • Choose cleanup strategy based on memory vs complexity tradeoffs
  • Design for device loss recovery from the start—keep source data and creation logic accessible
  • Track memory usage and implement fallbacks when approaching limits