From the outset, we knew we wanted something that subverted conventional agency website formulas. Instead, inspired by the unseen energy that drives creativity, connection and transformation, we arrived at the idea of invisible forces. Could we take the powerful yet intangible elements that shape our world—motion, emotion, intuition, and inspiration—and manifest them in a digital space?
We were excited about creating something that included many custom interactions and a very experiential feel. However, our concern was picking a set of tools that would allow most of our developers to contribute to and maintain the site after launch.
We chose to start from a Next.js/React base, as we often do at Phantom. React also has the advantage of being compatible with the excellent React Three Fiber library, which we used to seamlessly bridge the gap between our DOM components and the WebGL contexts used across the site. For styles, we used our own CSS components as well as Sass.
For interactive behaviours and animation, we chose to use GSAP for two main reasons. Firstly, it contains a lot of plugins we know and love, such as SplitText, CustomEase and ScrollTrigger. Secondly, GSAP allows us to use a single animation framework across DOM and WebGL components.
We could go on and on talking about the details behind every single animation and micro-interaction on the site, but for this piece we have chosen to focus our attention on two of the most unique components of our site: the homepage grid and the scrollable employee face particle carousel.
The Homepage Grid
It took us a very long time to get this view to perform and feel just how we wanted it to. In this article, we will focus on the interactive part. For more info on how we made things performant, head to our previous article: Welcome back to Phantomland.
Grid View
The project’s grid view is integrated into the homepage by incorporating a primitive Three.js object into a React Three Fiber scene.
// GridView.tsx
const GridView = () => {
  return (
    <Canvas>
      ...
      <ProjectsGrid />
      <Postprocessing />
    </Canvas>
  );
};
// ProjectsGrid.tsx
const ProjectsGrid = ({atlases, tiles}: Props) => {
  const {gl, camera} = useThree();

  const grid = useMemo(() => {
    return new Grid(gl.domElement, camera, atlases, tiles);
  }, [gl, camera, atlases, tiles]);

  if (!grid) return null;

  return (
    <primitive object={grid} />
  );
};
We initially wanted to write all the code for the grid using React Three Fiber but realised that, due to the complexity of our grid component, a vanilla Three.js class would be easier to maintain.
One of the key elements that gives our grid its iconic feel is our post-processing distortion effect. We implemented this feature by creating a custom shader pass within our post-processing pipeline:
// Postprocessing.tsx
const Postprocessing = ({workgridState}: Props) => {
  const {gl, scene, camera} = useThree();

  // Create the effect composer
  const {effectComposer, distortionShader} = useMemo(() => {
    const renderPass = new RenderPass(scene, camera);
    const distortionShader = new DistortionShader();
    const distortionPass = new ShaderPass(distortionShader);
    const outputPass = new OutputPass();

    const effectComposer = new EffectComposer(gl);
    effectComposer.addPass(renderPass);
    effectComposer.addPass(distortionPass);
    effectComposer.addPass(outputPass);

    return {effectComposer, distortionShader};
  }, [gl, scene, camera]);

  // Update distortion intensity
  useEffect(() => {
    if (workgridState === WorkgridState.INTRO) {
      distortionShader.setDistortion(CONFIG.distortion.flat);
    } else {
      distortionShader.setDistortion(CONFIG.distortion.curved);
    }
  }, [workgridState, distortionShader]);

  // Render through the composer instead of the default render loop
  useFrame(() => {
    effectComposer.render();
  }, 1);

  return null;
};
When the grid transitions in and out on the site, the distortion intensity changes to make the transition feel natural. This animation is done through a simple tween in our DistortionShader class:
class DistortionShader extends ShaderMaterial {
  private distortionIntensity = 0;

  constructor() {
    super({
      name: 'DistortionShader',
      uniforms: {
        distortion: {value: new Vector2()},
        ...
      },
      vertexShader,
      fragmentShader,
    });
  }

  update() {
    const ratio = window.innerWidth / window.innerHeight;
    this.uniforms[DistortionShaderUniforms.DISTORTION].value.set(
      this.distortionIntensity * ratio,
      this.distortionIntensity * ratio,
    );
  }

  setDistortion(value: number) {
    gsap.to(this, {
      distortionIntensity: value,
      duration: 1,
      ease: 'power2.out',
      onUpdate: () => this.update(),
    });
  }
}
Then the distortion is applied through our custom shader:
// fragment.ts
export const fragmentShader = /* glsl */ `
  uniform sampler2D tDiffuse;
  uniform vec2 distortion;
  uniform float vignetteOffset;
  uniform float vignetteDarkness;

  varying vec2 vUv;

  // convert uv range from 0 -> 1 to -1 -> 1
  vec2 getShiftedUv(vec2 uv) {
    return 2. * (uv - .5);
  }

  // convert uv range from -1 -> 1 to 0 -> 1
  vec2 getUnshiftedUv(vec2 shiftedUv) {
    return shiftedUv * 0.5 + 0.5;
  }

  void main() {
    vec2 shiftedUv = getShiftedUv(vUv);
    float distanceToCenter = length(shiftedUv);

    // Lens distortion effect
    shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
    vec2 transformedUv = getUnshiftedUv(shiftedUv);

    // Vignette effect
    float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799, (vignetteDarkness + vignetteOffset) * distanceToCenter);

    // Sample render texture and output fragment
    vec3 color = texture2D(tDiffuse, transformedUv).rgb * vignetteIntensity;
    gl_FragColor = vec4(color, 1.);
  }
`;
We also added a vignette effect to our post-processing shader to darken the corners of the viewport, focusing the user’s attention toward the center of the screen.
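The lens-distortion math is easy to reason about outside the shader. Here is a minimal TypeScript sketch of the same UV transform, assuming a scalar distortion value for simplicity (the shader uses a vec2 so the intensity can be aspect-corrected per axis):

```typescript
type Vec2 = { x: number; y: number };

// Mirror the GLSL helpers: map uv from [0,1] to [-1,1] and back.
const shift = (uv: Vec2): Vec2 => ({ x: 2 * (uv.x - 0.5), y: 2 * (uv.y - 0.5) });
const unshift = (uv: Vec2): Vec2 => ({ x: uv.x * 0.5 + 0.5, y: uv.y * 0.5 + 0.5 });

// Apply the barrel-distortion formula from the fragment shader:
// uv *= 0.88 + distortion * dot(uv, uv)
function distortUv(uv: Vec2, distortion: number): Vec2 {
  const s = shift(uv);
  const d = s.x * s.x + s.y * s.y; // dot(shiftedUv, shiftedUv)
  const factor = 0.88 + distortion * d;
  return unshift({ x: s.x * factor, y: s.y * factor });
}
```

Note that the center of the screen (uv = 0.5, 0.5) is a fixed point: its shifted coordinate is zero, so the scale factor has nothing to act on, and the distortion grows quadratically toward the edges.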
In order to make our home view as smooth as possible, we also spent a fair amount of time crafting the micro-interactions and transitions of the grid.
Ambient mouse offset
When the user moves their cursor around the grid, the grid moves slightly in the opposite direction, creating a very subtle ambient floating effect. This was simply achieved by calculating the mouse position on the grid and moving the grid mesh accordingly:
getAmbientCursorOffset() {
  // Get the pointer coordinates in UV space ( 0 - 1 ) range
  const uv = this.navigation.pointerUv;
  const offset = uv.clone().subScalar(0.5).multiplyScalar(0.2);
  return offset;
}

update() {
  ...
  // Apply cursor offset to grid position
  const cursorOffset = this.getAmbientCursorOffset();

  this.mesh.position.x += cursorOffset.x;
  this.mesh.position.y += cursorOffset.y;
}
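Stripped of the Three.js vector types, the mapping is just a recenter-and-scale. A standalone sketch (the 0.2 damping factor comes from the snippet above; the sign convention for "opposite direction" depends on how the pointer UV and mesh axes are oriented, so treat it as an assumption):

```typescript
// Map a pointer position in UV space ([0,1] on both axes) to a small
// grid offset: recenter around zero, then damp to 20% of the range.
function ambientCursorOffset(uvX: number, uvY: number): { x: number; y: number } {
  return { x: (uvX - 0.5) * 0.2, y: (uvY - 0.5) * 0.2 };
}
```

With the cursor at the center the offset is zero; at a corner it reaches its maximum of ±0.1 on each axis, which keeps the drift subtle.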
Drag Zoom
When the grid is dragged around, a zoom-out effect occurs and the camera seems to pan away from the grid. We created this effect by detecting when the user starts and stops dragging their cursor, then using that to trigger a GSAP animation with a custom ease for extra control.
onPressStart = () => {
  this.animateCameraZ(0.5, 1);
}

onPressEnd = (isDrag: boolean) => {
  if (isDrag) {
    this.animateCameraZ(0, 1);
  }
}

animateCameraZ(distance: number, duration: number) {
  gsap.to(this.camera.position, {
    z: distance,
    duration,
    ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
  });
}
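The ease string '.23,1,0.32,1' describes the same curve as CSS cubic-bezier(0.23, 1, 0.32, 1): a strong ease-out that covers most of the distance early. As a rough sketch of what evaluating such a curve involves (a bisection solve for the parameter; this is not GSAP's actual implementation):

```typescript
// Evaluate a cubic-bezier easing curve with control points (x1,y1), (x2,y2)
// and endpoints fixed at (0,0) and (1,1), like CSS cubic-bezier().
function cubicBezierEase(x1: number, y1: number, x2: number, y2: number) {
  // One-dimensional cubic Bezier with endpoints 0 and 1.
  const bez = (t: number, a: number, b: number) =>
    3 * a * t * (1 - t) ** 2 + 3 * b * t ** 2 * (1 - t) + t ** 3;

  return (x: number): number => {
    // Solve bez(t, x1, x2) = x by bisection (x(t) is monotonic for x1,x2 in [0,1]).
    let lo = 0;
    let hi = 1;
    for (let i = 0; i < 50; i++) {
      const mid = (lo + hi) / 2;
      if (bez(mid, x1, x2) < x) lo = mid;
      else hi = mid;
    }
    return bez((lo + hi) / 2, y1, y2);
  };
}

const cameraZoomEase = cubicBezierEase(0.23, 1, 0.32, 1);
```

Because y1 is pinned at 1, the curve shoots up quickly and then settles, which is what gives the zoom its snappy-then-soft feel.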
Drag Movement
Last but not least, when the user drags across the grid and releases their cursor, the grid slides through with a certain amount of inertia.
drag(offset: Vector2) {
  this.dragAction = offset;

  // Gradually increase velocity with drag time and distance
  this.velocity.lerp(offset, 0.8);
}

// Every frame
update() {
  // positionOffset is later used to move the grid mesh
  if (this.isDragAction) {
    // if the user is dragging their cursor, add the drag value to offset
    this.positionOffset.add(this.dragAction.clone());
  } else {
    // if the user is not dragging, add the velocity to the offset
    this.positionOffset.add(this.velocity);
  }

  this.dragAction.set(0, 0);

  // Attenuate velocity with time
  this.velocity.lerp(new Vector2(), 0.1);
}
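The scheme is easier to see in one dimension. This is a minimal sketch of the same idea (class and method names are ours, and the 0.8/0.1 lerp factors are taken from the snippet above): while dragging, the position follows the cursor directly and the velocity chases the per-frame drag delta; on release, the position keeps integrating the velocity, which decays toward zero each frame.

```typescript
// 1D sketch of lerp-based drag inertia.
class InertiaScroller {
  position = 0;
  velocity = 0;
  private dragDelta = 0;
  private dragging = false;

  // Called while the user drags; delta is this frame's drag distance.
  drag(delta: number) {
    this.dragging = true;
    this.dragDelta = delta;
    this.velocity += (delta - this.velocity) * 0.8; // lerp(velocity, delta, 0.8)
  }

  // Called every frame.
  update() {
    // Follow the cursor while dragging, otherwise coast on stored velocity.
    this.position += this.dragging ? this.dragDelta : this.velocity;
    this.dragging = false;
    this.dragDelta = 0;
    this.velocity += (0 - this.velocity) * 0.1; // attenuate toward rest
  }
}
```

Because the velocity decays geometrically (10% per frame), the coast after release adds a finite extra distance and the grid glides to a stop instead of cutting off abruptly.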
Face Particles
The second major component we want to highlight is our employee face carousel, which presents team members through a dynamic 3D particle system. Built with React Three Fiber’s BufferGeometry and custom GLSL shaders, this implementation leverages custom shader materials for lightweight performance and flexibility, allowing us to generate entire 3D face representations using only a 2D colour photograph and its corresponding depth map—no 3D models required.
Core Concept: Depth-Driven Particle Generation
The foundation of our face particle system lies in converting 2D imagery into volumetric 3D representations. We’ve kept things efficient, with each face using only two optimized 256×256 WebP images (under 15KB each).
To capture the images, each member of the Phantom team was 3D scanned using RealityScan from Unreal Engine on iPhone, creating a 3D model of their face.
These scans were cleaned up and then rendered from Cinema4D with a position and colour pass.
The position pass was converted into a greyscale depth map in Photoshop, and this—along with the colour pass—was retouched where needed, cropped, and then exported from Photoshop to share with the dev team.
Each face is constructed from approximately 78,400 particles (280×280 grid), where each particle’s position and appearance is determined by sampling data from our two source textures.
/* generate positions attributes array */
const POINT_AMOUNT = 280;

const points = useMemo(() => {
  const length = POINT_AMOUNT * POINT_AMOUNT;
  const vPositions = new Float32Array(length * 3);
  const vIndex = new Float32Array(length * 2);
  const vRandom = new Float32Array(length * 4);

  for (let i = 0; i < length; i++) {
    const i2 = i * 2;
    vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
    vIndex[i2 + 1] = Math.floor(i / POINT_AMOUNT) / POINT_AMOUNT;

    // random start position on the unit sphere
    const i3 = i * 3;
    const theta = Math.random() * Math.PI;     // polar angle
    const phi = Math.random() * Math.PI * 2;   // azimuthal angle
    vPositions[i3] = Math.sin(theta) * Math.cos(phi);
    vPositions[i3 + 1] = Math.sin(theta) * Math.sin(phi);
    vPositions[i3 + 2] = Math.cos(theta);

    const i4 = i * 4;
    vRandom.set(
      Array(4)
        .fill(0)
        .map(() => Math.random()),
      i4,
    );
  }

  return {vPositions, vRandom, vIndex};
}, []);
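One way to read the vIndex attribute: each particle i corresponds to one texel on the 280×280 grid, with u the column fraction and v the row fraction, so the shaders can sample the colour and depth maps per particle. A standalone sketch of that mapping (using an explicit Math.floor for the row index, which we take to be the intended behaviour):

```typescript
const N = 280; // particles per side of the grid

// Map a flat particle index to its (u, v) texel coordinate in [0, 1).
function indexToUv(i: number): [number, number] {
  const u = (i % N) / N;              // column fraction
  const v = Math.floor(i / N) / N;    // row fraction
  return [u, v];
}
```

Particle 0 lands at the first texel, particle N starts the second row, and particle N*N - 1 is the last texel, so the whole texture is covered exactly once.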
// React Three Fiber component structure
const FaceParticleSystem = ({particlesData, currentDataIndex}: Props) => {
  return (
    <points ref={pointsRef} position={pointsPosition}>
      <bufferGeometry>
        <bufferAttribute attach="attributes-vIndex"
          args={[points.vIndex, 2]} />
        <bufferAttribute attach="attributes-position"
          args={[points.vPositions, 3]} />
        <bufferAttribute attach="attributes-vRandom"
          args={[points.vRandom, 4]} />
      </bufferGeometry>

      <shaderMaterial
        blending={NormalBlending}
        transparent={true}
        fragmentShader={faceFrag}
        vertexShader={faceVert}
        uniforms={uniforms}
      />
    </points>
  );
};
The depth map provides normalized values (0–1) that directly translate to Z-depth positioning. A value of 0 represents the furthest point (background), while 1 represents the closest point (typically the nose tip).
/* vertex shader */

// sample depth data for each particle
vec3 depthTexture1 = texture2D(depthMap1, vIndex.xy).xyz;

// convert depth to Z-position
float zDepth = 1. - depthTexture1.z;
pos.z = (zDepth * 2.0 - 1.0) * zScale;
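The depth-to-Z conversion mirrors neatly on the CPU. A sketch of the same formula (the function name is ours; zScale is the per-face calibration uniform from the shader):

```typescript
// Map a normalized depth sample (0 = background, 1 = nearest point) to a
// z position in [-zScale, zScale], mirroring the vertex-shader formula:
// zDepth = 1 - depth; pos.z = (zDepth * 2 - 1) * zScale
function depthToZ(depthValue: number, zScale: number): number {
  const zDepth = 1 - depthValue;
  return (zDepth * 2 - 1) * zScale;
}
```

The midpoint of the depth range lands on the z = 0 plane, and the two extremes land symmetrically at ±zScale, which is what makes zScale an intuitive "face thickness" knob during calibration.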
Dynamic Particle Scaling Through Colour Analysis
One of the key methods that brings our faces to life is utilizing colour data to influence particle scale. In our vertex shader, rather than using uniform particle sizes, we analyze the colour density of each pixel so that brighter, more colourful areas of the face (like eyes, lips, or well-lit cheeks) generate larger, more prominent particles, while darker areas (shadows, hair) create smaller, subtler particles. The result is a more organic, lifelike representation that emphasizes facial features naturally.
/* vertex shader */
vec3 colorTexture1 = texture2D(colorMap1, vIndex.xy).xyz;

// calculate color density
float density = (colorTexture1.x + colorTexture1.y + colorTexture1.z) / 3.;

// map density to particle scale
float pScale = mix(pScaleMin, pScaleMax, density);
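In plain TypeScript the same density-to-scale remap looks like this (function name is ours; pScaleMin and pScaleMax are the tuning uniforms from the shader):

```typescript
// Average the RGB channels into a brightness "density" in [0, 1], then
// linearly remap it into the particle scale range, like GLSL's mix().
function particleScale(
  r: number, g: number, b: number,
  pScaleMin: number, pScaleMax: number,
): number {
  const density = (r + g + b) / 3;
  return pScaleMin + (pScaleMax - pScaleMin) * density;
}
```

Pure black pixels map to the minimum scale and pure white to the maximum, so bright features like eyes and lips get the largest particles.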
The calibration below demonstrates the influence of colour (contrast, brightness, etc.) on the final 3D particle formation.

Ambient Noise Animation
To prevent static appearances and maintain visual interest, we apply continuous noise-based animation to all particles. This ambient animation system uses curl noise to create subtle, flowing movement across the entire face structure.
/* vertex shader */
// primary curl noise for overall movement
pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;
// animation updates in React Three Fiber
useFrame((state, delta) => {
  if (!materialRef.current) return;

  materialRef.current.uniforms.time.value = state.clock.elapsedTime * NOISE_SPEED;

  // update rotation based on mouse interaction
  easing.damp(pointsRef.current.rotation, 'y', state.pointer.x * 0.12 * Math.PI, 0.25, delta);
  easing.damp(pointsRef.current.rotation, 'x', -state.pointer.y * 0.05 * Math.PI, 0.25, delta);
});
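The easing.damp helper (from the maath library) smooths a value toward a target in a frame-rate-independent way. Conceptually it is an exponential decay parameterized by the frame delta; a simplified sketch (this is not maath's exact signature or smoothing model, just the underlying idea):

```typescript
// Exponentially approach `target` from `current`; `lambda` controls how
// aggressively the gap closes, `delta` is the frame time in seconds.
function damp(current: number, target: number, lambda: number, delta: number): number {
  return target + (current - target) * Math.exp(-lambda * delta);
}
```

Because the decay is exponential in elapsed time, two half-length frames advance the value exactly as far as one full-length frame, so the rotation feels identical at 30fps and 120fps.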
Face Transition Animation
When transitioning between different team members, we combine timeline-based interpolation with visual effects written in shader materials.
GSAP-Driven Lerp Method
The transition foundation uses GSAP timelines to animate multiple shader parameters simultaneously:
timelineRef.current = gsap
  .timeline()
  .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
  .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
  .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);
And the shader handles the visual blending between two face states:
/* vertex shader */
// smooth transition curve
float speed = clamp(transition * mix(0.8, .9, transition), 0., 1.0);
speed = smoothstep(0.0, 1.0, speed);
// blend textures
vec3 mainColorTexture = mix(colorTexture1, colorTexture2, speed);
vec3 depthValue = mix(depthTexture1, depthTexture2, speed);
To add visual interest during transitions, we further inject additional noise that’s strongest at the midpoint of the transition. This creates a subtle “disturbance” effect where particles temporarily deviate from their target positions, making transitions feel more dynamic and organic.
/* vertex shader */
// secondary noise movement applied for transition
float randomZ = vRandom.y + cnoise(pos * curlFreq2 + t2) * noiseScale2;
float smoothTransition = abs(sin(speed * PI));
pos.x += nxScale * randomZ * 0.1 * smoothTransition;
pos.y += nyScale * randomZ * 0.1 * smoothTransition;
pos.z += nzScale * randomZ * 0.1 * smoothTransition;
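The abs(sin(speed * PI)) term is what shapes the disturbance: it is zero at both ends of the transition and peaks exactly at the midpoint. The envelope on its own:

```typescript
// Disturbance envelope for the face transition: 0 at speed = 0 and 1,
// maximal (1) at the midpoint, mirroring abs(sin(speed * PI)) in GLSL.
const transitionEnvelope = (speed: number): number =>
  Math.abs(Math.sin(speed * Math.PI));
```

Multiplying the noise displacement by this envelope guarantees the particles start and end exactly on their source and target faces, with the "disturbance" only visible mid-flight.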
Custom Depth of Field Effect
To enhance the three-dimensional perception, we implemented a custom depth of field effect directly in our shader material. It calculates view-space distance for each particle and modulates both opacity and size based on proximity to a configurable focus plane.
/* vertex shader - calculate view distance */
vec4 viewPosition = viewMatrix * modelPosition;
vDistance = abs(focus + viewPosition.z);
// apply distance to point size for blur effect
gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;
/* fragment shader - calculate distance-based alpha for DOF */
float alpha = (1.04 - clamp(vDistance * 1.5, 0.0, 1.0));
gl_FragColor = vec4(color, alpha);
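The alpha side of the effect reduces to a small formula: distance from the focus plane, scaled, clamped, and inverted. A sketch of the fragment-shader math (function name is ours; view-space z is negative in front of the camera, which is why focus + viewZ cancels at the focus plane):

```typescript
// Distance-based alpha for the custom depth-of-field effect: particles on
// the focus plane stay fully opaque, far ones fade toward transparency.
function dofAlpha(viewZ: number, focus: number): number {
  const vDistance = Math.abs(focus + viewZ);
  const clamped = Math.min(Math.max(vDistance * 1.5, 0), 1);
  return 1.04 - clamped; // slight overshoot keeps in-focus particles solid
}
```

The 1.04 constant means in-focus particles clip to full opacity rather than hovering just below it, while anything far from the focus plane settles at a faint 0.04.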
Challenges: Unifying Face Scales
One of the challenges we faced was achieving visual consistency across different team members’ photos. Each photograph was captured under slightly different conditions—varying lighting, camera distances, and facial proportions. Therefore, we went through each face to calibrate multiple scaling factors:
- Depth scale calibration to ensure no nose protrudes too aggressively
- Colour density balancing to maintain consistent particle size relationships
- Focus plane optimization to prevent excessive blur on any individual face
// individual face parameters requiring manual tuning
particle_params: {
  offset_z: 0,       // overall Z-position
  z_depth_scale: 0,  // depth map scaling factor
  face_size: 0,      // overall face scale
}

Final Words
Our face particle system demonstrates how simple yet careful technical implementation can create fun visual experiences from minimal assets. By combining lightweight WebP textures, custom shader materials, and animations, we’ve created a system that transforms simple 2D portraits into interactive 3D figures.
Check out the full site.
Curious about what we’re up to in the Phantom studio? Or have a project you think we’d be interested in? Get in touch.