Today we’re sharing a creative demo that showcases a scanning light effect using a depth map. The scene is rendered with WebGPU via Three.js and react-three-fiber, and enhanced with TSL shaders for dynamic visual depth.
This demo includes three variations of the effect, each showing a different visual direction. The goal is to demonstrate how combining depth information with post-processing can create a fun scanning effect that makes everything feel alive.
Behind the Effect
Image and Depth Map
The visual effect is based on a combination of a base image and a corresponding depth map. The depth information is used to introduce a slight displacement in the image’s UV coordinates, creating a parallax-like distortion that adds a sense of spatial depth.
const [rawMap, depthMap] = useTexture(
  [TEXTUREMAP.src, DEPTHMAP.src],
  () => {
    setIsLoading(false);
    rawMap.colorSpace = THREE.SRGBColorSpace;
  }
);
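The displacement itself boils down to a small offset added to the UV coordinates, scaled by the sampled depth and the pointer direction. The snippet below is not from the demo; it is a plain-JavaScript sketch of that math (the `strength` value and the `displaceUv` helper are illustrative choices):

```javascript
// Offset a UV coordinate along the pointer direction, scaled by depth.
// Pixels with higher depth values shift more, producing a parallax feel.
function displaceUv(uv, depth, pointer, strength = 0.01) {
  return {
    x: uv.x + pointer.x * depth * strength,
    y: uv.y + pointer.y * depth * strength,
  };
}

// A sample at full depth shifts horizontally when the pointer moves right.
const shifted = displaceUv({ x: 0.5, y: 0.5 }, 1.0, { x: 1, y: 0 });
```

In the actual shader this happens per fragment, with the depth read from the depth map texture.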
Procedural Grid and Masking
A dot grid is generated procedurally using cell noise. The position of the scan line determines which areas of the grid are revealed or hidden. This is done by calculating distance from the scan line and combining that with brightness values derived from the noise pattern. The result is a mask that fades in and out smoothly as the scan progresses.
const aspect = float(WIDTH).div(HEIGHT);
const tUv = vec2(uv().x.mul(aspect), uv().y);
const tiling = vec2(120.0);
const tiledUv = mod(tUv.mul(tiling), 2.0).sub(1.0);
const brightness = mx_cell_noise_float(tUv.mul(tiling).div(2));
const dist = float(tiledUv.length());
const dot = float(smoothstep(0.5, 0.49, dist)).mul(brightness);
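To see what this grid math does outside the shader, here is a plain-JavaScript re-implementation of the dot shape for a single UV sample. The demo runs this on the GPU via TSL; this version is illustrative only and omits the cell-noise brightness term:

```javascript
// GLSL-style smoothstep: hermite interpolation between two edges.
function smoothstep(e0, e1, x) {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

// mod(uv * tiling, 2) - 1 maps every 2x2 tile into [-1, 1],
// so the vector length is the distance from the tile center.
function dotShape(u, v, tiling = 120) {
  const tx = ((u * tiling) % 2) - 1;
  const ty = ((v * tiling) % 2) - 1;
  const dist = Math.hypot(tx, ty);
  // Reversed edges (0.5 -> 0.49) make the result 1 inside the
  // dot radius and fall to 0 just outside it.
  return smoothstep(0.5, 0.49, dist);
}
```

Multiplying this shape by the cell-noise brightness, as the TSL code does, gives each dot its own intensity.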
Scan Animation
The scanning motion is controlled by a uniform that animates from 0 to 1 over time using GSAP. This value is used within the shader to compare against the scene’s depth and calculate the flow of the effect. The scan loops continuously, creating a consistent motion across the image. Additionally, pointer input is tracked and used to adjust the displacement, introducing a subtle interactive element.
useGSAP(() => {
  gsap.to(uniforms.uProgress, {
    value: 1,
    repeat: -1,
    duration: 3,
    ease: 'power1.out',
  });
}, [uniforms.uProgress]);
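Conceptually, the animated uniform is just a function of elapsed time. The sketch below models it in plain JavaScript, assuming GSAP’s power1.out corresponds to the quadratic ease-out 1 − (1 − t)²; it is a mental model, not code from the demo:

```javascript
// Model of the looping scan uniform: restarts every `duration`
// seconds (repeat: -1) and is eased with power1.out.
function scanProgress(elapsed, duration = 3) {
  const t = (elapsed % duration) / duration; // linear 0..1, looping
  return 1 - (1 - t) * (1 - t);              // power1.out easing
}
```

The ease-out makes the scan line launch quickly and slow down as it approaches the far end of the depth range.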
Pointer position is tracked in real-time and passed into the shader.
useFrame(({ pointer }) => { uniforms.uPointer.value = pointer; });
Shader Composition
The shader is constructed using TSL (Three.js Shading Language), which allows for a modular, readable approach to building shader logic in JavaScript. Node functions such as smoothstep, mod, and blendScreen define how the different visual layers interact. The final composition blends the displaced image with the animated dot mask using screen blending.
const depth = tDepthMap;
const flow = oneMinus(smoothstep(0, 0.02, abs(depth.sub(uProgress))));
const mask = dot.mul(flow).mul(vec3(10, 0, 0));
const final = blendScreen(tMap, mask);
const material = new THREE.MeshBasicNodeMaterial({
  colorNode: final,
});
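The two key pieces of this composition are easy to verify on the CPU. Below is a plain-JavaScript sketch of the flow term and of screen blending (per color channel); the demo evaluates the same math per fragment on the GPU:

```javascript
function smoothstep(e0, e1, x) {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

// flow peaks at 1 where the depth equals the scan progress and
// falls to 0 once the difference exceeds 0.02.
function flow(depth, progress) {
  return 1 - smoothstep(0, 0.02, Math.abs(depth - progress));
}

// Screen blending per channel: result = 1 - (1 - a) * (1 - b).
// It only ever brightens, which is why the mask reads as a glow.
function blendScreen(a, b) {
  return 1 - (1 - a) * (1 - b);
}
```

Multiplying the mask by vec3(10, 0, 0), as in the TSL code, pushes the red channel well above 1 so the scan line reads as an intense highlight after blending.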
Rendering and Layout
Rendering is handled with react-three-fiber and Three.js’s WebGPU renderer. The canvas maintains its aspect ratio using useAspect from @react-three/drei, ensuring the image scales consistently. Post-processing passes are layered on top via a separate component, allowing for additional visual refinement without complicating the core shader logic.
const [w, h] = useAspect(WIDTH, HEIGHT);

return (
  <mesh scale={[w, h, 1]} material={material}>
    <planeGeometry />
  </mesh>
);
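The scaling useAspect performs is essentially cover-fit math. The helper below is not drei’s implementation, just a sketch of the underlying idea under that assumption:

```javascript
// Cover-fit scaling: scale the plane so the image fills the viewport
// while keeping its aspect ratio (the larger ratio wins, and the
// overflow on the other axis is effectively cropped).
function coverScale(viewportW, viewportH, imageW, imageH) {
  const scale = Math.max(viewportW / imageW, viewportH / imageH);
  return [imageW * scale, imageH * scale];
}
```

Applying the resulting pair as the mesh’s x/y scale, as the JSX above does with the values from useAspect, keeps the textured plane from stretching when the window is resized.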
Variations
Three visual variations are included, each using a different base image and depth map. While the core logic remains the same, these variations demonstrate how different source materials and depth data influence the final look of the effect.