WebGPU

Today's Material

https://austin-eng.com/webgpu-samples/samples/helloTriangle 

Thank you, Austin Eng san!


WebGPU Initialization

1. Get WebGPU Device

  1. Get GPUAdapter (Physical Device) at first: A Comparison of Modern Graphics APIs 
  2. Get GPUDevice (Logical Device) from the GPUAdapter: A Comparison of Modern Graphics APIs 
// Get a Physical Device
// [A Comparison of Modern Graphics APIs](https://alain.xyz/blog/comparison-of-modern-graphics-apis#physical-device)
const adapter = await navigator.gpu.requestAdapter();

// Get a Logical Device
// [A Comparison of Modern Graphics APIs](https://alain.xyz/blog/comparison-of-modern-graphics-apis#logical-device)
const device = await adapter.requestDevice();

2. Get WebGPU Context

// get the WebGPU context
const context = canvas.getContext('webgpu');

// get the canvas size
const devicePixelRatio = window.devicePixelRatio || 1;
const presentationSize = [
  canvas.clientWidth * devicePixelRatio,
  canvas.clientHeight * devicePixelRatio,
];

// get the preferred format
const presentationFormat = context.getPreferredFormat(adapter);

// configure the WebGPU context
context.configure({
  device, // the logical device obtained above
  format: presentationFormat, // the preferred format obtained above
  size: presentationSize, // canvas size
});

GPURenderPipeline

// Minimum RenderPipeline definition
const pipeline = device.createRenderPipeline({
  vertex: {
    module: device.createShaderModule({
      code: triangleVertWGSL,
    }),
    entryPoint: 'main',
  },
  fragment: {
    module: device.createShaderModule({
      code: redFragWGSL,
    }),
    entryPoint: 'main',
    targets: [
      {
        format: presentationFormat,
      },
    ],
  },
  primitive: {
    topology: 'triangle-list',
  },
});

What makes VAOs peculiar

  • A VAO is fused with the VBO (which contains not only the input layout but also the actual vertex data)
  • It is also fused with the IBO (the index buffer); by contrast, a RenderPipelineState does not deal with the index buffer at all (see the sketch below)
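
For instance, in WebGL2 a VAO records both the layout calls made against ARRAY_BUFFER and the ELEMENT_ARRAY_BUFFER binding while it is bound. A minimal sketch, assuming vertexBufferObj and indexBufferObj are hypothetical, already-filled buffers:

// WebGL2: the VAO captures both the input layout and the actual buffer bindings.
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);

// Binding the VBO and describing its layout while the VAO is bound
// fuses the real vertex data into the VAO.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBufferObj);
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);

// The IBO binding is recorded in the VAO as well.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBufferObj);

gl.bindVertexArray(null);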

In Direct3D (including 12), the Input Layout also contains semantics (the meaning of each range: whether it holds vertex positions, UV coordinates, and so on).


WebGLRenderingContext.drawElements() - Web APIs | MDN 

Pain points of the GL family

The RenderPipeline-equivalent information is only finalized when the draw command is issued

In the GL family, the primitive state (information such as the mode, e.g. TriangleList) is specified in the draw call itself, so the state that a modern GPU treats as a RenderPipeline is only finalized when the draw command is issued (which makes it hard for the driver to optimize).

void gl.drawElements(mode, count, type, offset);
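
By contrast, WebGPU bakes this state into the pipeline at creation time, long before any draw call. A rough sketch of the difference, reusing gl, indexCount, pipeline, and passEncoder from the surrounding examples:

// GL family: the primitive mode (gl.TRIANGLES) travels with the draw call,
// so the full pipeline state is only known at this point.
gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);

// WebGPU: the topology is already part of the immutable GPURenderPipeline
// (see GPUPrimitiveState below); the draw call carries no primitive info.
passEncoder.setPipeline(pipeline);
passEncoder.drawIndexed(indexCount);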

The Bind model based on ~~~Objects

GL Objects
  • Texture Object
  • Sampler Object
  • Framebuffer Object

The bind state persists dynamically beyond the block structure of the code, so it has to be managed as separate state, independent of any code block.

const textureObj = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, textureObj); // bind
// ... operate on the currently bound texture ...
gl.bindTexture(gl.TEXTURE_2D, null); // unbind

3D APIs other than the GL family use a bindless model.
Note: since OpenGL 4.5, OpenGL itself also supports a bindless-style model called DSA (Direct State Access).
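
A sketch of why the bind model hurts, assuming a hypothetical uploadTexture helper: the bind state set inside the function silently leaks out to the caller.

function uploadTexture(gl, image) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  // Returning here leaves TEXTURE_2D bound to `tex`; any later
  // gl.texParameteri(gl.TEXTURE_2D, ...) call elsewhere will mutate
  // this texture by accident.
  return tex;
}

// The caller has to know about (or defensively reset) the global bind state:
const tex = uploadTexture(gl, image);
gl.bindTexture(gl.TEXTURE_2D, null);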


WebGPU Command

function frame() {
  // Sample is no longer the active page.
  if (!canvasRef.current) return;

  // [A Comparison of Modern Graphics APIs](https://alain.xyz/blog/comparison-of-modern-graphics-apis#command-buffer)
  const commandEncoder = device.createCommandEncoder();
  const textureView = context.getCurrentTexture().createView();

  const renderPassDescriptor: GPURenderPassDescriptor = {
    colorAttachments: [
      {
        view: textureView,
        loadValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
        storeOp: 'store',
      },
    ],
  };

  const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
  passEncoder.setPipeline(pipeline);
  passEncoder.draw(3, 1, 0, 0);
  passEncoder.end();

  device.queue.submit([commandEncoder.finish()]);
  requestAnimationFrame(frame);
}
@stage(vertex)
fn main(@builtin(vertex_index) VertexIndex : u32) -> @builtin(position) vec4<f32> {
  var pos = array<vec2<f32>, 3>(
    vec2<f32>(0.0, 0.5),
    vec2<f32>(-0.5, -0.5),
    vec2<f32>(0.5, -0.5)
  );
  return vec4<f32>(pos[VertexIndex], 0.0, 1.0);
}
@stage(fragment)
fn main() -> @location(0) vec4<f32> {
  return vec4<f32>(1.0, 0.0, 0.0, 1.0);
}

Input Layouts

GPUVertexBufferLayout[]

Layout information about the VertexBuffer (the actual vertex data)

// Create some common descriptors used for both the shadow pipeline
// and the color rendering pipeline.
const vertexBuffers: Iterable<GPUVertexBufferLayout> = [
  {
    // No.0 of the GPUVertexBufferLayouts
    arrayStride: Float32Array.BYTES_PER_ELEMENT * 6,
    attributes: [
      {
        // position
        shaderLocation: 0,
        offset: 0,
        format: 'float32x3',
      },
      {
        // normal
        shaderLocation: 1,
        offset: Float32Array.BYTES_PER_ELEMENT * 3,
        format: 'float32x3',
      },
    ],
  },
  {
    // No.1 of the GPUVertexBufferLayouts
    ...
  },
];

GPUPipelineLayout

const layout = device.createPipelineLayout({
  bindGroupLayouts: [bglForRender, uniformBufferBindGroupLayout],
});

GPUPrimitiveState

const primitive: GPUPrimitiveState = {
  topology: 'triangle-list',
  cullMode: 'back',
};

Set them on the GPURenderPipeline

const pipeline = device.createRenderPipeline({
  layout: layout,
  vertex: {
    module: device.createShaderModule({
      code: vertexWGSL,
    }),
    entryPoint: 'main',
    buffers: vertexBuffers, // <-- set the VertexBuffer layouts
  },
  fragment: {
    module: device.createShaderModule({
      code: fragmentWGSL,
    }),
    entryPoint: 'main',
    targets: [
      {
        format: presentationFormat,
      },
    ],
  },
  depthStencil: {
    depthWriteEnabled: true,
    depthCompare: 'less',
    format: 'depth24plus-stencil8',
  },
  primitive,
});

Vertex Buffer (Real Vertex Data)

// Create the model vertex buffer.
const vertexBuffer = device.createBuffer({
  size: mesh.positions.length * 3 * 2 * Float32Array.BYTES_PER_ELEMENT,
  usage: GPUBufferUsage.VERTEX,
  mappedAtCreation: true,
});
{
  const mapping = new Float32Array(vertexBuffer.getMappedRange());
  for (let i = 0; i < mesh.positions.length; ++i) {
    mapping.set(mesh.positions[i], 6 * i);
    mapping.set(mesh.normals[i], 6 * i + 3);
  }
  vertexBuffer.unmap();
}

Set the VertexBuffer on the RenderPass

const commandEncoder = device.createCommandEncoder();
{
  const shadowPass = commandEncoder.beginRenderPass(shadowPassDescriptor);
  shadowPass.setPipeline(shadowPipeline);
  shadowPass.setBindGroup(0, sceneBindGroupForShadow);
  shadowPass.setBindGroup(1, modelBindGroup);
  shadowPass.setVertexBuffer(0, vertexBuffer);
  shadowPass.setIndexBuffer(indexBuffer, 'uint16');
  shadowPass.drawIndexed(indexCount);
  shadowPass.end();
}

In the GL family, the VBO lumped the vertex data and the input layout together; in WebGPU the two can be handled separately.

glDrawElements()
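
A minimal sketch of that coupling, assuming positionBuffer is a hypothetical, already-filled VBO:

// WebGL: vertexAttribPointer describes the layout of whatever buffer is
// currently bound to ARRAY_BUFFER, so layout and data are inseparable.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 24, 0); // stride 24 = 6 floats

// WebGPU: the layout lives in the pipeline (GPUVertexBufferLayout above),
// while the actual data is attached separately on the render pass.
passEncoder.setVertexBuffer(0, vertexBuffer);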


BindGroup

Shader Bindings

// [Shadow Mapping - WebGPU Samples](https://austin-eng.com/webgpu-samples/samples/shadowMapping#./fragment.wgsl)
@group(0) @binding(0) var<uniform> scene : Scene;
@group(0) @binding(1) var shadowMap : texture_depth_2d;
@group(0) @binding(2) var shadowSampler : sampler_comparison;
// Create a bind group layout which holds the scene uniforms and
// the texture+sampler for depth. We create it manually because the WebGPU
// implementation doesn't infer this from the shader (yet).
const bglForRender = device.createBindGroupLayout({
  entries: [
    {
      binding: 0,
      visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
      buffer: {
        type: 'uniform',
      },
    },
    {
      binding: 1,
      visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
      texture: {
        sampleType: 'depth',
      },
    },
    {
      binding: 2,
      visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
      sampler: {
        type: 'comparison',
      },
    },
  ],
});
@stage(fragment)
fn main(input : FragmentInput) -> @location(0) vec4<f32> {
  // Percentage-closer filtering. Sample texels in the region
  // to smooth the result.
  var visibility : f32 = 0.0;
  let oneOverShadowDepthTextureSize = 1.0 / shadowDepthTextureSize;
  for (var y : i32 = -1; y <= 1; y = y + 1) {
    for (var x : i32 = -1; x <= 1; x = x + 1) {
      let offset : vec2<f32> = vec2<f32>(
        f32(x) * oneOverShadowDepthTextureSize,
        f32(y) * oneOverShadowDepthTextureSize);

      visibility = visibility + textureSampleCompare(
        shadowMap, shadowSampler, // <-- specify both shadowMap and shadowSampler
        input.shadowPos.xy + offset, input.shadowPos.z - 0.007);
    }
  }
@group(0) @binding(0) var<uniform> scene : Scene;
@group(1) @binding(0) var<uniform> model : Model;
graph LR
  BindGroupLayout --> BindGroup
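
A GPUBindGroup supplies the actual resources for the slots that a GPUBindGroupLayout declares. A sketch matching the bglForRender layout above, assuming sceneUniformBuffer and shadowDepthTextureView exist:

const renderBindGroup = device.createBindGroup({
  layout: bglForRender,
  entries: [
    // @binding(0): var<uniform> scene
    { binding: 0, resource: { buffer: sceneUniformBuffer } },
    // @binding(1): texture_depth_2d
    { binding: 1, resource: shadowDepthTextureView },
    // @binding(2): sampler_comparison
    { binding: 2, resource: device.createSampler({ compare: 'less' }) },
  ],
});

// Attach it at group index 0 when encoding the pass:
passEncoder.setBindGroup(0, renderBindGroup);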