WebGPU in Next.js Part 2: TypeGPU
While working on the WebGPU in Next.js series, I discovered the recently released TypeGPU library. TypeGPU wraps the raw WebGPU API with a type-safe, ergonomic layer that lets you focus on GPU concepts instead of plumbing. You still have access to every WebGPU feature, but the library keeps resource management, pipeline creation, and shader authoring manageable. In this post we will replace the boilerplate from part 1 with TypeGPU so you can see how much the experience improves. If you want to compare both approaches side-by-side, the original code lives in the pre-typegpu branch of the repo.
1. GPUProvider
Everything in TypeGPU starts with the TgpuRoot. This object initializes the GPU device once, exposes helper methods for creating resources and pipelines, and frees you from juggling the raw GPUDevice instance in every file. Conceptually, you can treat it as a friendlier GPUDevice wrapper that enforces type safety.
We'll wrap the root instance inside a React provider so any client component can access it through context. This keeps GPU initialization co-located and avoids duplicate setup logic.
src/providers/gpu-provider.tsx
1"use client";
2
3import { createContext, use, useEffect, useState } from "react";
4import tgpu, { TgpuRoot } from "typegpu";
5
6export const GpuContext = createContext<{
7 root?: TgpuRoot;
8 error?: string;
9}>({});
10
11export function GpuProvider({ children }: React.PropsWithChildren) {
12 const [root, setRoot] = useState<TgpuRoot>();
13 const [error, setError] = useState<string>();
14
15 useEffect(() => {
16 let cancelled = false;
17 (async () => {
18 try {
19 const root = await tgpu.init();
20 if (!cancelled) {
21 setRoot(root);
22 }
23 } catch (e: unknown) {
24 if (!cancelled) setError(e instanceof Error ? e.message : String(e));
25 }
26 })();
27 return () => {
28 cancelled = true;
29 };
30 }, []);
31
32 useEffect(() => {
33 return () => {
34 root?.destroy();
35 };
36 }, [root]);
37
38 return (
39 <GpuContext.Provider value={{ root, error }}>
40 {children}
41 </GpuContext.Provider>
42 );
43}
44
45export const useGpu = () => use(GpuContext);
Note: The provider currently initializes a single TgpuRoot per app lifecycle and cannot accept an already-created GPUDevice. If you need to pass custom device options, adapt the provider accordingly; the TypeGPU root docs outline how to do that.
With the provider ready, import it into the root layout so every page down the tree receives the GPU context.
src/app/layout.tsx
import { GpuProvider } from "@/providers/gpu-provider";

import "./globals.css";

export const metadata = { title: "WebGPU Compute Playground" };

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body className="font-sans">
        <GpuProvider>
          <main className="m-3 rounded border border-gray-300 p-3">
            <h1 className="text-4xl font-bold">
              Welcome to the WebGPU Playground
            </h1>
            <p>Explore the power of GPU computing in your browser.</p>
            {children}
          </main>
        </GpuProvider>
      </body>
    </html>
  );
}
2. Writing simple render shaders with TypeGPU
TypeGPU lets you author shaders without constantly swapping files or toolchains. You have two main options:
- Install the unplugin-typegpu compiler plugin, which transforms TypeScript expressions into WGSL.
- Embed WGSL directly inside template literals.
The plugin works with Next.js if you opt into Babel, but to keep this walkthrough focused we will stick with WGSL template strings.
For a detailed comparison of both approaches, see the TypeGPU functions guide.
Start with a minimalist vertex shader that draws a full-screen quad:
src/shaders/main-vertex.ts
import tgpu from "typegpu";
import * as d from "typegpu/data";

export const mainVertex = tgpu["~unstable"].vertexFn({
  in: { vertexIndex: d.builtin.vertexIndex },
  out: { outPos: d.builtin.position, uv: d.vec2f },
}) /* wgsl */ `{
  var pos = array<vec2<f32>, 6>(
    vec2<f32>(-1.0, 1.0),
    vec2<f32>(-1.0, -1.0),
    vec2<f32>(1.0, -1.0),
    vec2<f32>(-1.0, 1.0),
    vec2<f32>(1.0, -1.0),
    vec2<f32>(1.0, 1.0)
  );

  var uv = array<vec2<f32>, 6>(
    vec2<f32>(0.0, 1.0),
    vec2<f32>(0.0, 0.0),
    vec2<f32>(1.0, 0.0),
    vec2<f32>(0.0, 1.0),
    vec2<f32>(1.0, 0.0),
    vec2<f32>(1.0, 1.0)
  );

  return Out(vec4f(pos[in.vertexIndex], 0.0, 1.0), uv[in.vertexIndex]);
}
`;
This shader emits the positions and UVs for two triangles (six vertices) that cover the canvas, so you can reuse it across multiple pipelines and treat it as a blank canvas for any fragment shader.
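To build intuition for what the shader's arrays encode, here is the same quad data mirrored in plain TypeScript with a quick sanity check. This is illustrative only and not part of the project; the WGSL above remains the source of truth.

```typescript
// The six clip-space positions and UVs from the vertex shader, as plain data.
const quadPositions: [number, number][] = [
  [-1, 1], [-1, -1], [1, -1], // first triangle
  [-1, 1], [1, -1], [1, 1],   // second triangle
];
const quadUvs: [number, number][] = [
  [0, 1], [0, 0], [1, 0],
  [0, 1], [1, 0], [1, 1],
];

// Together the two triangles span the full clip-space square [-1, 1] x [-1, 1],
// and every vertex has a matching UV inside [0, 1] x [0, 1].
const xs = quadPositions.map(([x]) => x);
const ys = quadPositions.map(([, y]) => y);
console.log(Math.min(...xs), Math.max(...xs)); // -1 1
console.log(Math.min(...ys), Math.max(...ys)); // -1 1
console.log(quadUvs.every(([u, v]) => u >= 0 && u <= 1 && v >= 0 && v <= 1)); // true
```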
Note: tgpu["~unstable"] exposes experimental helpers such as vertexFn. They are safe to use today, but keep an eye on release notes because the API may evolve.
Note: The /* wgsl */ comment only aids syntax highlighting; it does not change runtime behavior.
Next, add a fragment shader:
src/shaders/example-fragment1.ts
import tgpu from "typegpu";
import * as d from "typegpu/data";

export const exampleFragment1 = tgpu["~unstable"].fragmentFn({
  in: { uv: d.vec2f },
  out: d.vec4f,
}) /* wgsl */ `{
  return vec4f(in.uv.x, in.uv.y, 0.0, 1.0);
}
`;
It maps the interpolated UV coordinates to a color: red follows uv.x and green follows uv.y, producing a gradient from black in the bottom-left corner to yellow in the top-right.
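The mapping is easy to reason about on the CPU side. Here is a hypothetical TypeScript helper mirroring the shader (not part of the project, shown only to make the gradient concrete):

```typescript
// CPU-side mirror of the fragment shader: color = (u, v, 0, 1).
function gradientColor(u: number, v: number): [number, number, number, number] {
  return [u, v, 0, 1];
}

// uv = (0, 0) -> black; uv = (1, 1) -> full red + full green = yellow.
console.log(gradientColor(0, 0)); // [ 0, 0, 0, 1 ]
console.log(gradientColor(1, 1)); // [ 1, 1, 0, 1 ]
```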
3. Creating a render pipeline with TypeGPU
With the provider and shaders in place, we can build a small RenderPipeline component. The goal is to accept any compatible fragment shader, wire it to the shared vertex shader, and render onto a canvas.
Note: Remember to mark WebGPU components with "use client", since the API is only available in the browser.
src/components/render-pipeline.tsx
1"use client";
2
3import { useCallback, useEffect, useRef } from "react";
4import { TgpuFragmentFn, TgpuRenderPipeline, TgpuVertexFn } from "typegpu";
5import * as d from "typegpu/data";
6
7import { useGpu } from "@/providers/gpu-provider";
8
9export type RenderPipelineProps = {
10 vertexShader: TgpuVertexFn<
11 Record<never, never>,
12 {
13 uv: d.Vec2f;
14 }
15 >;
16
17 fragmentShader: TgpuFragmentFn<
18 {
19 uv: d.Vec2f;
20 },
21 d.Vec4f
22 >;
23};
24
25export function RenderPipeline({
26 vertexShader,
27 fragmentShader,
28 ...rest
29}: RenderPipelineProps &
30 React.DetailedHTMLProps<
31 React.CanvasHTMLAttributes<HTMLCanvasElement>,
32 HTMLCanvasElement
33 >) {
34 const { root, error } = useGpu();
35 const canvasRef = useRef<HTMLCanvasElement>(null);
36 const pipelineRef = useRef<TgpuRenderPipeline>(null);
37
38 // Redraw canvas
39 const redraw = useCallback(() => {
40 const canvas = canvasRef.current;
41 const context = canvas?.getContext("webgpu");
42 const pipeline = pipelineRef.current;
43
44 if (!root || !canvas || !context || !pipeline) return;
45
46 const dpr = window.devicePixelRatio || 1;
47 const pixelWidth = Math.max(1, Math.floor(canvas.clientWidth * dpr));
48 const pixelHeight = Math.max(1, Math.floor(canvas.clientHeight * dpr));
49
50 canvas.width = pixelWidth;
51 canvas.height = pixelHeight;
52
53 context.configure({
54 device: root.device,
55 format: navigator.gpu.getPreferredCanvasFormat(),
56 alphaMode: "premultiplied",
57 });
58
59 pipeline
60 .withColorAttachment({
61 view: context.getCurrentTexture().createView(),
62 clearValue: [0, 0, 0, 1],
63 loadOp: "clear",
64 storeOp: "store",
65 })
66 .draw(6);
67 }, [root]);
68
69 // Create pipeline
70 useEffect(() => {
71 if (!root) return;
72
73 pipelineRef.current = root["~unstable"]
74 .withVertex(vertexShader, {})
75 .withFragment(fragmentShader, {
76 format: navigator.gpu.getPreferredCanvasFormat(),
77 })
78 .createPipeline();
79
80 redraw();
81 }, [fragmentShader, redraw, root, vertexShader]);
82
83 if (error) {
84 return (
85 <div className={rest.className}>Error initializing WebGPU: {error}</div>
86 );
87 }
88
89 if (!root) {
90 return <div className={rest.className}>Initializing WebGPU...</div>;
91 }
92
93 return (
94 <canvas
95 {...rest}
96 ref={canvasRef}
97 onClick={(e) => {
98 rest.onClick?.(e);
99 redraw();
100 }}
101 />
102 );
103}
Here is what happens inside the component:
- RenderPipeline receives a vertex shader, a fragment shader, and any canvas props. Strong TypeScript generics ensure the shaders exchange the expected data, in this case UV coordinates.
- useGpu provides access to the shared TgpuRoot. We keep refs for the canvas element and the created pipeline.
- The redraw callback configures the WebGPU context, accounts for device pixel ratio, and issues a draw call that renders the six-vertex quad.
- When the component mounts (or when shaders change), an effect builds the pipeline via TypeGPU's fluent API and immediately triggers a draw.
- Basic error and loading states keep the UI informative, and clicking the canvas forces a redraw so you can experiment interactively.
4. Using the RenderPipeline component
To see everything in action, render the pipeline inside a page and pass the shaders we created earlier.
src/app/typegpu-render-example1/page.tsx
1"use client";
2
3import { RenderPipeline } from "@/components/render-pipeline";
4import { exampleFragment1 } from "@/shaders/example-fragment1";
5import { mainVertex } from "@/shaders/main-vertex";
6
7export default function TypegpuRenderExample1Page() {
8 return (
9 <div>
10 <p>Click the canvas below to redraw</p>
11 <div className="my-4">
12 <RenderPipeline
13 className="block aspect-square w-[420px] rounded border border-gray-300"
14 vertexShader={mainVertex}
15 fragmentShader={exampleFragment1}
16 />
17 </div>
18 </div>
19 );
20}
Note: Shaders are a client-only concern, which is why the "use client" directive is required here. In larger projects, you may prefer to encapsulate the shaders inside child client components.
This page simply instantiates RenderPipeline with the reusable vertex shader and the gradient fragment shader. When you visit the route you should see the black-to-yellow gradient, and clicking the canvas will trigger a redraw via the handler we wired up earlier.
That wraps up part two! You now have a reusable WebGPU provider, a concise shader authoring workflow, and a pipeline component you can plug into any page. Next we’ll layer on uniforms, animations, and other TypeGPU niceties so you can build richer visualizations. Then we'll start exploring compute shaders. See you there.