Interactive Particles with Three.js

A tutorial on how to draw a large number of particles with Three.js and make them interactive.

This tutorial is going to demonstrate how to draw a large number of particles with Three.js and how to make them react to mouse and touch input efficiently, using shaders and an off-screen texture.

Attention: You will need an intermediate level of experience with Three.js. We will omit some parts of the code for brevity and assume you already know how to set up a Three.js scene and how to import your shaders — in this demo we are using glslify.

Instanced Geometry

The particles are created based on the pixels of an image. Our image’s dimensions are 320×180, or 57,600 pixels.

However, we don’t need to create one geometry for each particle. We can create only a single one and render it 57,600 times with different parameters. This is called geometry instancing. With Three.js we use InstancedBufferGeometry to define the geometry, BufferAttribute for attributes which remain the same for every instance and InstancedBufferAttribute for attributes which can vary between instances (e.g. colour, size).

The geometry of our particles is a simple quad, formed by 4 vertices and 2 triangles.



const geometry = new THREE.InstancedBufferGeometry();

// positions
const positions = new THREE.BufferAttribute(new Float32Array(4 * 3), 3);
positions.setXYZ(0, -0.5, 0.5, 0.0);
positions.setXYZ(1, 0.5, 0.5, 0.0);
positions.setXYZ(2, -0.5, -0.5, 0.0);
positions.setXYZ(3, 0.5, -0.5, 0.0);
geometry.addAttribute('position', positions);

// uvs
const uvs = new THREE.BufferAttribute(new Float32Array(4 * 2), 2);
uvs.setXY(0, 0.0, 0.0);
uvs.setXY(1, 1.0, 0.0);
uvs.setXY(2, 0.0, 1.0);
uvs.setXY(3, 1.0, 1.0);
geometry.addAttribute('uv', uvs);

// index
geometry.setIndex(new THREE.BufferAttribute(new Uint16Array([ 0, 2, 1, 2, 3, 1 ]), 1));

Next, we loop through the pixels of the image and assign our instanced attributes. Since the word position is already taken, we use the word offset to store the position of each instance. The offset will be the x,y of each pixel in the image. We also want to store the particle index and a random angle which will be used later for animation.


const indices = new Uint16Array(this.numPoints);
const offsets = new Float32Array(this.numPoints * 3);
const angles = new Float32Array(this.numPoints);

for (let i = 0; i < this.numPoints; i++) {
	offsets[i * 3 + 0] = i % this.width;
	offsets[i * 3 + 1] = Math.floor(i / this.width);

	indices[i] = i;

	angles[i] = Math.random() * Math.PI;
}

geometry.addAttribute('pindex', new THREE.InstancedBufferAttribute(indices, 1, false));
geometry.addAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3, false));
geometry.addAttribute('angle', new THREE.InstancedBufferAttribute(angles, 1, false));

Particle Material

The material is a RawShaderMaterial with custom shaders particle.vert and particle.frag.

The uniforms are described as follows:

  • uTime: elapsed time, updated every frame
  • uRandom: factor of randomness used to displace the particles in x,y
  • uDepth: maximum oscillation of the particles in z
  • uSize: base size of the particles
  • uTexture: image texture
  • uTextureSize: dimensions of the texture
  • uTouch: touch texture

const uniforms = {
	uTime: { value: 0 },
	uRandom: { value: 1.0 },
	uDepth: { value: 2.0 },
	uSize: { value: 0.0 },
	uTextureSize: { value: new THREE.Vector2(this.width, this.height) },
	uTexture: { value: this.texture },
	uTouch: { value: null }
};

const material = new THREE.RawShaderMaterial({
	uniforms,
	vertexShader: glslify(require('../../../shaders/particle.vert')),
	fragmentShader: glslify(require('../../../shaders/particle.frag')),
	depthTest: false,
	transparent: true
});
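
The snippets above leave out the glue code that actually puts the particles on screen. As a minimal sketch (assuming a scene and a render loop that provides a time delta, neither of which is part of the original snippets):


const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh); // `scene` is assumed to exist

// in the render loop, keep uTime ticking so the noise animates
function update(delta) {
	material.uniforms.uTime.value += delta;
}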

A simple vertex shader would output the position of the particles according to their offset attribute directly. To make things more interesting, we displace the particles using a random value and simplex noise. The same goes for the particles’ sizes.


// particle.vert

void main() {
	// displacement
	vec3 displaced = offset;
	// randomise
	displaced.xy += vec2(random(pindex) - 0.5, random(offset.x + pindex) - 0.5) * uRandom;
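	// snoise_1_2: 2D simplex noise (the name comes from the glslify import)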
	float rndz = (random(pindex) + snoise_1_2(vec2(pindex * 0.1, uTime * 0.1)));
	displaced.z += rndz * (random(pindex) * 2.0 * uDepth);

	// particle size
	float psize = (snoise_1_2(vec2(uTime, pindex) * 0.5) + 2.0);
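	// grey: greyscale value of this particle's pixel, sampled from uTexture (code omitted here)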
	psize *= max(grey, 0.2);
	psize *= uSize;

	// (...)
}

The fragment shader samples the RGB colour from the original image and converts it to greyscale using the luminosity method (0.21 R + 0.72 G + 0.07 B).

The alpha channel is determined by the distance from the UV coordinate to the centre of the quad, which essentially creates a circle. The border of the circle can be blurred out using smoothstep.


// particle.frag

void main() {
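	// puv: this particle's UV in the source image, passed in from the vertex shader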
	// pixel color
	vec4 colA = texture2D(uTexture, puv);

	// greyscale
	float grey = colA.r * 0.21 + colA.g * 0.72 + colA.b * 0.07;
	vec4 colB = vec4(grey, grey, grey, 1.0);

	// circle
	float border = 0.3;
	float radius = 0.5;
	float dist = radius - distance(uv, vec2(0.5));
	float t = smoothstep(0.0, border, dist);

	// final color
	color = colB;
	color.a = t;

	// (...)
}

Optimisation

In our demo we set the size of the particles according to their brightness, which means dark particles are almost invisible. This makes room for some optimisation. When looping through the pixels of the image, we can discard the ones which are too dark. This reduces the number of particles and improves performance.


The optimisation starts before we create our InstancedBufferGeometry. We create a temporary canvas, draw the image onto it and call getImageData() to retrieve an array of colours [R, G, B, A, R, G, B, …]. We then define a threshold (hex #22, or 34 in decimal) and test it against the red channel. The red channel is an arbitrary choice; we could also use green or blue, or even an average of all three channels, but red is the simplest to read from the array.


// discard pixels darker than threshold #22
if (discard) {
	numVisible = 0;
	threshold = 34;

	const img = this.texture.image;
	const canvas = document.createElement('canvas');
	const ctx = canvas.getContext('2d');

	canvas.width = this.width;
	canvas.height = this.height;
	ctx.scale(1, -1); // flip y
	ctx.drawImage(img, 0, 0, this.width, this.height * -1);

	const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
	originalColors = Float32Array.from(imgData.data);

	for (let i = 0; i < this.numPoints; i++) {
		if (originalColors[i * 4 + 0] > threshold) numVisible++;
	}
}

We also need to update the loop where we define offset, angle and pindex to take the threshold into account.


for (let i = 0, j = 0; i < this.numPoints; i++) {
	if (originalColors[i * 4 + 0] <= threshold) continue;

	offsets[j * 3 + 0] = i % this.width;
	offsets[j * 3 + 1] = Math.floor(i / this.width);

	indices[j] = i;

	angles[j] = Math.random() * Math.PI;

	j++;
}
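
Note that only numVisible slots are written in this loop, so the instanced buffers can be allocated with the visible count instead of this.numPoints. A small sketch of the adjusted allocation:


// allocate for the visible particles only (assumes numVisible from above)
const indices = new Uint16Array(numVisible);
const offsets = new Float32Array(numVisible * 3);
const angles = new Float32Array(numVisible);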

Interactivity

Considerations

There are many different ways of introducing interaction with the particles. For example, we could give each particle a velocity attribute and update it on every frame based on its proximity to the cursor. This is a classic technique and it works very well, but it might be a bit too heavy if we have to loop through tens of thousands of particles.
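
For illustration, that classic approach could look roughly like the sketch below; particles, mouse, radius, strength and friction are all hypothetical names, not part of the demo:


// Sketch of the classic CPU approach (not the technique used in this demo):
// push each particle away from the cursor and let friction ease it back.
for (const p of particles) {
	const dx = p.x - mouse.x;
	const dy = p.y - mouse.y;
	const dist = Math.sqrt(dx * dx + dy * dy);

	if (dist > 0 && dist < radius) {
		const force = (radius - dist) / radius; // stronger when closer
		p.vx += (dx / dist) * force * strength;
		p.vy += (dy / dist) * force * strength;
	}

	p.vx *= friction;
	p.vy *= friction;
	p.x += p.vx;
	p.y += p.vy;
}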

A more efficient way would be to do it in the shader. We could pass the cursor’s position as a uniform and displace the particles based on their distance from it. While this would perform a lot faster, the result could be quite dry. The particles would go to a given position, but they wouldn’t ease in or out of it.
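
On the JavaScript side this would only require updating a uniform on every mouse move. For example (uMouse being a hypothetical THREE.Vector2 uniform, not something the demo actually uses):


// Sketch of the uniform approach: feed the cursor position straight to the
// shader in normalised device coordinates. uMouse is hypothetical.
window.addEventListener('mousemove', (e) => {
	material.uniforms.uMouse.value.set(
		(e.clientX / window.innerWidth) * 2 - 1,
		-(e.clientY / window.innerHeight) * 2 + 1
	);
});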

Chosen Approach

The technique we chose for our demo was to draw the cursor position onto a texture. The advantage is that we can keep a history of cursor positions and create a trail. We can also apply an easing function to the radius of that trail, making it grow and shrink smoothly. Everything happens in the shader, running in parallel for all the particles.


In order to get the cursor’s position we use a Raycaster and a simple PlaneBufferGeometry the same size as our main geometry. The plane is invisible, but interactive.

Interactivity in Three.js is a topic on its own. Please see this example for reference.

When there is an intersection between the cursor and the plane, we can use the UV coordinates in the intersection data to retrieve the cursor’s position. The positions are then stored in an array (trail) and drawn onto an off-screen canvas. The canvas is passed as a texture to the shader via the uniform uTouch.
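
Stripped down to its essentials, that touch texture can be sketched as the class below. This is a simplified version of the idea rather than the demo’s exact implementation: every touch is stored with an age, and on each frame the trail is redrawn with older points fading out, which is what makes the radius grow and shrink smoothly.


class TouchTexture {
	constructor(size = 64) {
		this.size = size;
		this.maxAge = 64;
		this.trail = [];

		this.canvas = document.createElement('canvas');
		this.canvas.width = this.canvas.height = size;
		this.ctx = this.canvas.getContext('2d');

		this.texture = new THREE.Texture(this.canvas);
	}

	// uv comes from the raycaster intersection (0..1 across the plane)
	addTouch(uv) {
		this.trail.push({ x: uv.x, y: uv.y, age: 0 });
	}

	update() {
		// clear to black: no touch, no displacement
		this.ctx.fillStyle = 'black';
		this.ctx.fillRect(0, 0, this.size, this.size);

		// age the points and drop the ones that are too old
		this.trail = this.trail.filter((point) => ++point.age < this.maxAge);

		for (const point of this.trail) {
			// fade out with age, so the trail eases away smoothly
			const intensity = 1 - point.age / this.maxAge;
			const radius = this.size * 0.1 * intensity;

			this.ctx.beginPath();
			this.ctx.fillStyle = `rgba(255, 255, 255, ${intensity})`;
			this.ctx.arc(point.x * this.size, (1 - point.y) * this.size, radius, 0, Math.PI * 2);
			this.ctx.fill();
		}

		this.texture.needsUpdate = true;
	}
}

On every intersection we would call touchTexture.addTouch(intersects[0].uv), call touchTexture.update() once per frame, and assign touchTexture.texture to the uTouch uniform.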

In the vertex shader the particles are displaced based on the brightness of the pixels in the touch texture.


// particle.vert

void main() {
	// (...)

	// touch
	float t = texture2D(uTouch, puv).r;
	displaced.z += t * 20.0 * rndz;
	displaced.x += cos(angle) * t * 20.0 * rndz;
	displaced.y += sin(angle) * t * 20.0 * rndz;

	// (...)
}

Conclusion

Hope you enjoyed the tutorial! If you have any questions don’t hesitate to get in touch.


Bruno Imbrizi

Bruno Imbrizi is a freelance creative developer interested in beautiful things made with code.
