Building Efficient Three.js Scenes: Optimize Performance While Maintaining Quality

A comprehensive look at how to optimize Three.js scenes using Fiber, Drei, and advanced tools, ensuring smooth performance while retaining high-quality visuals.

In this article, we’ll explore strategies to improve rendering performance without sacrificing graphical quality, based on the SINGULARITY demo I built with Fiber + Drei. (If you’re using Vanilla Three.js, don’t worry—you might still find something useful!)

Model Planning and Selection

Not all objects in a scene require the same level of detail. Distant objects can have simplified models and low-resolution textures without significantly impacting the visual experience. Therefore, it is crucial to first understand how objects will be positioned relative to the camera.
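
If you're using Fiber, one convenient way to act on this is drei's Detailed helper, a thin wrapper around three.js' LOD that swaps its children based on distance from the camera. A minimal sketch with placeholder geometries:

import { Detailed } from '@react-three/drei';

// three versions of the same object, switched at 0, 15 and 30 units from the camera
<Detailed distances={[0, 15, 30]}>
	<mesh>
		<icosahedronGeometry args={[1, 4]} />
		<meshStandardMaterial />
	</mesh>
	<mesh>
		<icosahedronGeometry args={[1, 2]} />
		<meshStandardMaterial />
	</mesh>
	<mesh>
		<icosahedronGeometry args={[1, 0]} />
		<meshStandardMaterial />
	</mesh>
</Detailed>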

Low-poly models reduce the number of triangles processed, decreasing the load on the CPU and GPU. It is essential to strike a balance between detail and performance by choosing optimized assets. Ideally, your scene should be as lightweight as possible and minimize the number of on-screen polygons.

Where can you find low-poly assets? On sites like Sketchfab, you can inspect the weight and number of triangles of various assets, so optimization starts here (I recommend downloading in GLB format). If you use assets specifically created for your project, keep the previously mentioned factors in mind.

Another thing to consider is the size of the textures: smaller is better, but it depends on where the mesh is placed relative to the camera and how detailed the texture needs to be. I suggest using textures with power-of-two resolutions (128, 256, …, 1024) for optimal memory management.

Asset Pre-Optimization

For this step, we’ll be using Blender, which offers a variety of essential tools for model optimization.

  • In the editor, you can view the wireframe of your models and remove any unused parts.
  • Decimate Modifier: This tool reduces the number of polygons while maintaining the overall shape. You can find it under Modifiers > Generate > Decimate.

Here’s a comparison showing the before and after results of these two simple steps.

If the objects are static and the lighting does not change, you can use a process called texture baking to pre-render lights and shadows into the textures. I recommend add-ons like SimpleBake, but there are also free alternatives. Here's an example:

Exporting the Asset

I usually prefer exporting in .glb or .gltf. The difference between the two is that .glb includes all textures and the model in a single file, while .gltf keeps them separate, allowing for greater flexibility if they need to be modified or compressed later.

Blender’s export options offer a texture compression tool (under the Textures option). If you export in .gltf, you can further compress the textures using free tools like compresspng.com or compressjpeg.com.

Using gltfjsx

The gltfjsx package allows for further compression of models and generates a React component to import them into your scene. For the demo, I used the following command:

npx gltfjsx model.glb -S -T -t

  • -S enables mesh simplification.
  • -T transforms the asset for the web (Draco compression, pruning, resizing).
  • -t adds TypeScript definitions.

In my case, this step reduced the asset’s size by 90%!
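
Besides the compressed .glb, gltfjsx writes a ready-to-use component next to it. The output looks roughly like this (node and material names depend on your model, so Cover and Plastic below are placeholders):

import React from 'react';
import { useGLTF } from '@react-three/drei';

export function Model(props: JSX.IntrinsicElements['group']) {
	const { nodes, materials } = useGLTF('/model-transformed.glb') as any;
	return (
		<group {...props} dispose={null}>
			{/* one mesh per node of the original asset */}
			<mesh geometry={nodes.Cover.geometry} material={materials.Plastic} />
		</group>
	);
}

useGLTF.preload('/model-transformed.glb');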

Three.js Scene Optimization

There are several tools you can use to monitor the performance and metrics of a Three.js scene. Here’s a list of the tools I used to analyze the impact of various optimization techniques:

  • r3f-perf for react-three-fiber (my favorite) displays statistics on shaders, textures, and the number of vertices; see the snippet after this list.
  • stats.js, similar to r3f-perf, is compatible with vanilla Three.js.
  • spector.js is a Chrome and Firefox extension for monitoring WebGL applications. It records each draw call by taking a snapshot of the data and generating a screenshot. Compatible with vanilla Three.js and Fiber, it’s extremely useful for understanding what happens in every single frame of your scene.
  • Chrome DevTools Performance Monitoring: This tool records short sessions and allows you to analyze memory, GPU, and CPU usage. It’s particularly helpful for monitoring performance during key moments, such as user interactions.
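
Adding r3f-perf to a Fiber scene is a one-liner. A minimal sketch (I only mount it during development):

import { Canvas } from '@react-three/fiber';
import { Perf } from 'r3f-perf';

<Canvas>
	{/* overlay with FPS, CPU/GPU timings, draw calls, textures and shaders */}
	<Perf position="top-left" />
	{/* ...the rest of the scene */}
</Canvas>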

Now, we’re ready to implement some optimizations in our scene.

Canvas and Pixel Ratio

Imagine playing a video game on a PC: your screen has a set resolution, but the game can render at a different one. By lowering the rendering resolution and disabling certain features, we can improve performance. Therefore, it’s important to understand the limits to set in order to avoid performance issues, especially for users with high-resolution screens.

The Pixel Ratio indicates the ratio between physical pixels and CSS pixels (read more here). It can be obtained by calling window.devicePixelRatio and varies based on the type and resolution of the screen.

Here’s how to set the dpr (device pixel ratio):

const [dpr, setDpr] = useState(getDevicePixelRatio());
<Canvas dpr={dpr} />;

If you decide to limit DPR, you can use this approach:

const getDevicePixelRatio = () => {
	const desktopMaxPixelRatio = 1;
	const mobileMaxPixelRatio = 1.5;
	
	// here you can implement your device type detection logic
	if (isMobile()) {
		return Math.min(mobileMaxPixelRatio, window.devicePixelRatio);
	}
	
	return Math.min(desktopMaxPixelRatio, window.devicePixelRatio);
}

Keep in mind that the DPR can dynamically change over time (e.g., if the user moves the tab to a second screen), so it’s better to listen for changes with an event listener!
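
One way to do that is the matchMedia approach described on MDN: create a resolution media query for the current ratio and re-attach it whenever it fires. A small sketch, living in the same component that owns the dpr state from above:

// inside the component that owns [dpr, setDpr]
useEffect(() => {
	let remove: (() => void) | undefined;

	const listen = () => {
		// the query matches the *current* ratio, so it fires exactly when the ratio changes
		const media = window.matchMedia(`(resolution: ${window.devicePixelRatio}dppx)`);
		const onChange = () => {
			setDpr(getDevicePixelRatio());
			listen(); // re-attach a query for the new ratio
		};
		media.addEventListener('change', onChange, { once: true });
		remove = () => media.removeEventListener('change', onChange);
	};

	listen();
	return () => remove?.();
}, []);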

With drei's PerformanceMonitor, you can track your application's frame rate and dynamically adjust parameters as performance fluctuates. Here's how I implemented it in the demo:

<PerformanceMonitor
	bounds={() => [30, 500]} // FPS bounds: onDecline fires below 30, onIncline above 500
	flipflops={1} // maximum changes before onFallback
	onDecline={() => {
		setDpr(dpr * 0.8); // lower dpr by 20%
		setIsPPEnabled(false); // disable post processing
	}}
	onFallback={() => setLowSetting(true)}
/>

In this case, the behavior is quite aggressive: I first try to adjust the DPR and disable post-processing. If that’s not enough, I remove some objects from the scene using setLowSetting.

Setting antialias: false on the canvas can also improve performance, but it compromises graphic quality, especially at low dpr. I recommend disabling it when performance drops or if you are using post-processing. We’ll delve deeper into this topic later.

Suspending Rendering When Not Needed

To prevent the application from continuing to render when it is not visible on the screen, you can dynamically adjust the canvas frameloop. This also prevents the Performance Monitor from triggering unnecessarily, as browsers limit resources allocated to inactive tabs after a few seconds.

const [frameloop, setFrameloop] = useState<'always' | 'never'>('always');

useEffect(() => {
  const handleVisibilityChange = () => setFrameloop(document.hidden ? 'never' : 'always');
  document.addEventListener('visibilitychange', handleVisibilityChange);
  return () => document.removeEventListener('visibilitychange', handleVisibilityChange);
}, []);

<Canvas frameloop={frameloop} />;

Instancing

Quoting the Three.js documentation: “If you need to render a large number of objects with the same geometry and materials, but with different world transformations, this will help you reduce the number of draw calls and thus improve overall rendering performance.”

The challenge, however, lies in this part: “with the same geometry and materials.” In my case, I have many meshes that repeat, but each one has a different texture. To address this, I created a small component that instances the repeated geometry and, on each frame, syncs every instance with the world transformation of the corresponding non-instanced mesh that carries the texture. This way, the parts shared between the different meshes can be instanced: in this case, the plastic cover of the CD.

Here is a basic version of what I implemented:

// imports used by this snippet
import * as THREE from 'three';
import React, { useMemo, useRef } from 'react';
import { useFrame, useLoader } from '@react-three/fiber';
import { useGLTF, Instances, Instance } from '@react-three/drei';

// imported 3D model
const { nodes } = useGLTF('/models-transformed/cd-transformed.glb') as GLTFResult;
// texturesSrc is the array of cover-art texture URLs
const textures = useLoader(THREE.TextureLoader, texturesSrc);

const instances = useRef<THREE.InstancedMesh>(null);
// one ref per textured plane, so we can read its world transformation each frame
const meshRefs = useMemo(() => texturesSrc.map(() => React.createRef<THREE.Mesh>()), [texturesSrc]);

// We will apply each texture to a simple plane
const geometry = useMemo(() => new THREE.PlaneGeometry(1.05, 1.01), []);

// Here we synchronize the world transformation of each instance with its texture plane
useFrame(() => {
	if (!instances.current) return;

	instances.current.children
		.filter((child) => !!(child as any).instance) // keep only the <Instance /> children
		.forEach((instance, i) => {

			const p = new THREE.Vector3();
			const r = new THREE.Quaternion();

			if (meshRefs[i]?.current) {
				meshRefs[i].current?.getWorldPosition(p);
				meshRefs[i].current?.getWorldQuaternion(r);
			}

			instance.setRotationFromQuaternion(r);
			instance.position.set(p.x, p.y, p.z);
		});
});

return (
	<Instances ref={instances}>
		<bufferGeometry {...nodes.object.geometry} />
		<meshStandardMaterial />
		{textures.map((texture: THREE.Texture, i: number) => (
			<React.Fragment key={`cd-fragment-${i}`}>
				{/* instanced plastic cover, kept in sync in useFrame above */}
				<Instance key={`cd-i-${i}`} />

				{/* unique cover-art plane with its own texture */}
				<mesh key={`mesh-${i}`} ref={meshRefs[i]} geometry={geometry}>
					<meshBasicMaterial map={texture} side={THREE.DoubleSide} />
				</mesh>
			</React.Fragment>
		))}
	</Instances>
);

This code allows all duplicated meshes to be rendered in a single draw call:

As you can see, the plastic section of the CD case is rendered in a single draw call. These images were captured using the Spector.js extension.

Keep in mind that this method can have synchronization issues, especially when used with moving objects. If you know a better way to handle this, let me know in the comments!

Physics

I decided to use Rapier, as it’s easy to implement thanks to the react-three-rapier package.

Efficient Collider Choice

I used simple shapes (box, sphere) for colliders instead of letting the engine generate them automatically. This helps lighten the simulation, especially when there are many objects on screen.

<CuboidCollider position={[0, -3, 0]} args={[1000, 3, 1000]} />
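
For individual rigid bodies, the same idea looks like this: disable automatic collider generation and place a cheap primitive by hand (Cd is a hypothetical mesh component, and the radius is illustrative):

import { RigidBody, BallCollider } from '@react-three/rapier';

<RigidBody colliders={false}>
	{/* one hand-placed sphere instead of an auto-generated hull/trimesh */}
	<BallCollider args={[0.6]} />
	<Cd />
</RigidBody>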

Reducing the physics update frequency can further decrease the computational load. However, be cautious—this may alter the behavior of the simulation!

<Physics timeStep={1 / 30} />

To achieve the springy drag-and-drop effect on individual meshes, I created a component that integrates DragControls with Rapier’s RigidBody. Link to code

In practice, when you click on a mesh, its body becomes static and is programmatically updated by the DragControls. Upon release, it returns to being dynamic. This approach keeps the physics simulation running even during the drag.
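
Since the full component is too long to include here, below is a heavily simplified sketch of the idea. Note the swaps: plain R3F pointer events stand in for DragControls, and the body is switched to a kinematicPosition type while dragged (not simulated, but it still collides with dynamic bodies), then back to dynamic on release.

import { useRef, useState, type ReactNode } from 'react';
import { RigidBody, type RapierRigidBody } from '@react-three/rapier';

function DraggableBody({ children }: { children: ReactNode }) {
	const body = useRef<RapierRigidBody>(null);
	const [dragging, setDragging] = useState(false);

	return (
		<RigidBody
			ref={body}
			// kinematic bodies are not simulated, but they still push dynamic ones around
			type={dragging ? 'kinematicPosition' : 'dynamic'}
			colliders="cuboid"
		>
			<mesh
				onPointerDown={() => setDragging(true)}
				onPointerUp={() => setDragging(false)}
				onPointerMove={(e) => {
					// follow the pointer's intersection point while dragging
					if (dragging) body.current?.setNextKinematicTranslation(e.point);
				}}
			>
				{/* pass geometry + material as children, e.g. <boxGeometry /> and a material */}
				{children}
			</mesh>
		</RigidBody>
	);
}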

Lights and Post-Processing

Dynamic lights are performance-intensive. Once again, less is better. I prefer using environment maps to achieve realistic lighting without significantly impacting the scene’s performance.

<Environment
	files="https://dl.polyhaven.org/file/ph-assets/HDRIs/hdr/1k/hanger_exterior_cloudy_1k.hdr"
	ground={{ height: 50, radius: 150, scale: 50 }}
/>

In this case, I also enabled ground projection to create an environment for the scene. It’s possible to use the environment solely for lighting without displaying it in the scene.

You can add static or dynamic lighting with excellent graphic quality using Lightformer. Here’s a guide on how to implement it in React and vanilla Three.js.
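
Here's a rough sketch of the idea: Lightformers declared as children of Environment are rendered into the environment map, so they act like cheap area lights (shapes and values are illustrative):

import { Environment, Lightformer } from '@react-three/drei';

<Environment resolution={256}>
	{/* a rectangular "softbox" above the scene */}
	<Lightformer form="rect" intensity={2} position={[0, 5, -5]} scale={[10, 2, 1]} />
	{/* a ring light on the side */}
	<Lightformer form="ring" intensity={4} color="white" position={[-5, 2, 1]} scale={2} />
</Environment>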

Post-Processing

To enhance the graphic quality, I used post-processing effects via the react-postprocessing library, which is also available for vanilla Three.js. These are the effects I applied, with a rough wiring sketch after the list:

  • Tone mapping for more realistic color management.
  • Hue and saturation adjustments to enhance colors.
  • Depth of field to add a subtle blur effect.
  • N8AO Ambient Occlusion by @N8Programs.
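
Wired up with react-postprocessing, that looks more or less like this inside the Canvas (effect parameters are illustrative, and isPPEnabled is the flag toggled by the PerformanceMonitor earlier):

import { EffectComposer, N8AO, DepthOfField, HueSaturation, ToneMapping } from '@react-three/postprocessing';

{isPPEnabled && (
	<EffectComposer>
		<N8AO aoRadius={1} intensity={2} />
		<DepthOfField focusDistance={0.01} focalLength={0.05} bokehScale={2} />
		<HueSaturation hue={0} saturation={0.15} />
		<ToneMapping />
	</EffectComposer>
)}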

I also applied this configuration to the canvas, as recommended in the library’s documentation:

<Canvas
	gl={{
		powerPreference: "high-performance",
		alpha: false,
		antialias: false,
		stencil: false,
		depth: false,
	}}
/>

Keep in mind that some post-processing effects can be resource-intensive. As mentioned earlier in the Canvas and Pixel Ratio section, if performance isn’t optimal, you can dynamically disable these effects and/or adjust specific parameters to reduce the load.

Conclusions

With all of these steps implemented, we can confidently display:

  • 27 meshes
  • 184 textures
  • 49 shaders
  • 40k triangles
  • Physics simulations
  • High-quality lighting
  • Post-processing effects

All running at a stable frame rate, with a total asset size of just 2.1 MB!

In this article, I’ve shared all the techniques I used to create this demo. If you’re interested in diving deeper into the topic, I recommend reading these two articles:

I hope you found this helpful! If you have suggestions, feel free to send me a DM. To stay updated on my work, you can find me on X/Twitter. Have a great day!

Niccolò Fanton

Creative/web developer based in Italy. I have a profound love for design and I consider my work as an extension of myself. I'm an explorer of hidden meanings, I chase life with an insatiable thirst for understanding.
