How to Create Interactive, Droplet-like Metaballs with Three.js and GLSL

In this tutorial, we’ll walk you through how to create bubble-like spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements.

Fragment shaders allow us to create smooth, organic visuals that are difficult to achieve with standard polygon-based rendering in WebGL. One powerful example is the metaball effect, where multiple objects blend and deform seamlessly. This can be implemented using a technique called ray marching, directly within a fragment shader.

In this tutorial, we’ll walk you through how to create droplet-like bubble spheres using Three.js and GLSL—an effect that responds interactively to your mouse movements. But first, take a look at the demo video below to see the final result in action.

Overview

Let’s take a look at the overall structure of the demo and review the steps we’ll follow to build it.

1. Setting Up the Fullscreen Plane

We create a fullscreen plane that covers the entire viewport.

2. Rendering Spheres with Ray Marching

We’ll render spheres using ray marching in the fragment shader.

3. From Spheres to Metaballs

We blend multiple spheres smoothly to create a metaball effect.

4. Adding Noise for a Droplet-like Appearance

By adding noise to the surface, we create a realistic droplet-like texture.

5. Simulating Stretchy Droplets with Mouse Movement

We arrange spheres along the mouse trail to create a stretchy, elastic motion.

Let’s get started!

1. Setup

We render a single fullscreen plane that covers the entire viewport.

// Output.ts

const planeGeometry = new THREE.PlaneGeometry(2.0, 2.0);
const planeMaterial = new THREE.RawShaderMaterial({
    vertexShader: base_vert,
    fragmentShader: output_frag,
    uniforms: this.uniforms,
});
const plane = new THREE.Mesh(planeGeometry, planeMaterial);
this.scene.add(plane);

We define a uniform variable named uResolution to pass the canvas size to the shader, where Common.width and Common.height represent the width and height of the canvas in pixels. This uniform will be used to normalize coordinates based on the screen resolution.

// Output.ts

this.uniforms = {
    uResolution: {
        value: new THREE.Vector2(Common.width, Common.height),
    },
};
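Since the canvas size can change, this uniform should be kept in sync with the viewport. Here’s a minimal sketch of a resize handler, assuming Common.width and Common.height are updated elsewhere when the window resizes (the onResize name here is hypothetical):

// Output.ts

onResize() {
    this.uniforms.uResolution.value.set(Common.width, Common.height);
}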

When using RawShaderMaterial, you need to provide your own shaders. Therefore, we prepare both a vertex shader and a fragment shader.

// base.vert

attribute vec3 position;
varying vec2 vTexCoord;

void main() {
    vTexCoord = position.xy * 0.5 + 0.5;
    gl_Position = vec4(position, 1.0);
}

The vertex shader receives the position attribute.

Since the xy components of position originally range from -1 to 1, we convert them to a range from 0 to 1 and output them as a texture coordinate called vTexCoord. For example, the bottom-left corner of the screen, (-1, -1), maps to (0, 0), and the top-right corner, (1, 1), maps to (1, 1). This is passed to the fragment shader and used to calculate colors or effects based on the position on the screen.

// output.frag

precision mediump float;

uniform vec2 uResolution;
varying vec2 vTexCoord;

void main() {
    gl_FragColor = vec4(vTexCoord, 1.0, 1.0);
}

The fragment shader receives the interpolated texture coordinate vTexCoord and the uniform variable uResolution representing the canvas size. Here, we temporarily use vTexCoord to output color for testing.

Now we’re all set to start drawing in the fragment shader!
Next, let’s move on to actually rendering the spheres.

2. Ray Marching

2.1. What is Ray Marching?

As mentioned at the beginning, we will use a method called ray marching to render spheres. Ray marching proceeds in the following steps:

  1. Define the scene
  2. Set the camera (viewing) direction
  3. Cast rays
  4. Evaluate the distance from the current ray position to the nearest object in the scene
  5. Move the ray forward by that distance
  6. Check for a hit
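Expressed in code, steps 4 through 6 become a tight loop. Here’s a bare-bones sketch using the same names (map, ray, rayDirection, EPS, ITR) that we’ll define for real in section 2.3:

float dist = 0.0;
for (int i = 0; i < ITR; i++) {
    dist = map(ray);              // step 4: distance to the nearest object
    ray += rayDirection * dist;   // step 5: advance the ray by that distance
    if (dist < EPS) break;        // step 6: close enough counts as a hit
}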

For example, let’s consider a scene with three spheres. These spheres are expressed using SDFs (Signed Distance Functions), which will be explained in detail later.

First, we determine the camera direction. Once the direction is set, we cast a ray in that direction.

Next, we evaluate the distance to all objects from the current ray position, and take the minimum of these distances.

After obtaining this distance, we move the ray forward by that amount.

We repeat this process until either the ray gets close enough to an object—closer than a small threshold—or the maximum number of steps is reached.
If the distance is below the threshold, we consider it a “hit” and shade the corresponding pixel.

For example, suppose a hit is detected on the 8th ray marching step.

If the maximum number of steps were set to 7, the 7th step would not have hit anything yet. But since the limit is reached, the loop ends and no hit is detected.

Therefore, nothing would be rendered at that position. If parts of an object appear to be missing in the final image, it may be due to an insufficient number of steps. However, be aware that increasing the step count will also increase the computational load.

To better understand this process, try running this demo to see how it works in practice.

2.2. Signed Distance Function

In the previous section, we briefly mentioned the SDF (Signed Distance Function).
Let’s take a moment to understand what it is.

An SDF is a function that returns the distance from a point to a particular shape. The key characteristic is that it returns a positive or negative value depending on whether the point is outside or inside the shape.

For example, here is the distance function for a sphere:

float sdSphere(vec3 p, float s)
{
    return length(p) - s;
}

Here, p is a vector representing the position relative to the origin, and s is the radius of the sphere.

This function calculates how far the point p is from the surface of a sphere centered at the origin with radius s.

  • If the result is positive, the point is outside the sphere.
  • If negative, it is inside the sphere.
  • If the result is zero, the point is on the surface—this is considered a hit point (in practice, we detect a hit when the distance is less than a small threshold).

In this demo, we use a sphere’s distance function, but many other shapes have their own distance functions as well.
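For example, here is the well-known signed distance function for an axis-aligned box, where b holds the box’s half-extents along each axis:

float sdBox(vec3 p, vec3 b)
{
    vec3 q = abs(p) - b;
    return length(max(q, vec3(0.0))) + min(max(q.x, max(q.y, q.z)), 0.0);
}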

If you’re interested, here’s a great article on distance functions.

2.3. Rendering Spheres

Let’s try rendering spheres.
In this demo, we’ll render two slightly overlapping spheres.

// output.frag

precision mediump float;

const float EPS = 1e-4;
const int ITR = 16;

uniform vec2 uResolution;

varying vec2 vTexCoord;

// Camera Params
vec3 origin = vec3(0.0, 0.0, 1.0);
vec3 lookAt = vec3(0.0, 0.0, 0.0);
vec3 cDir = normalize(lookAt - origin);
vec3 cUp = vec3(0.0, 1.0, 0.0);
vec3 cSide = cross(cDir, cUp);

vec3 translate(vec3 p, vec3 t) {
    return p - t;
}

float sdSphere(vec3 p, float s)
{
    return length(p) - s;
}

float map(vec3 p) {
    float radius = 0.5;
    float d = 1e5;

    float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
    float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
    d = min(sphere0, sphere1);

    return d;
}

void main() {
    vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);

    // Orthographic Camera
    vec3 ray = origin + cSide * p.x + cUp * p.y;
    vec3 rayDirection = cDir;

    float dist = 0.0;

    for (int i = 0; i < ITR; ++i) {
        dist = map(ray);
        ray += rayDirection * dist;
        if (dist < EPS) break;
    }

    vec3 color = vec3(0.0);

    if (dist < EPS) {
        color = vec3(1.0, 1.0, 1.0);
    }

    gl_FragColor = vec4(color, 1.0);
}

First, we normalize the screen coordinates:

vec2 p = (gl_FragCoord.xy * 2.0 - uResolution) / min(uResolution.x, uResolution.y);

This remaps gl_FragCoord so that the origin sits at the center of the screen and the shorter side of the canvas spans from -1 to 1, keeping the rendered shapes undistorted regardless of the aspect ratio.

Next, we set up the camera. This demo uses an orthographic camera (parallel projection):

// Camera Params
vec3 origin = vec3(0.0, 0.0, 1.0);
vec3 lookAt = vec3(0.0, 0.0, 0.0);
vec3 cDir = normalize(lookAt - origin);
vec3 cUp = vec3(0.0, 1.0, 0.0);
vec3 cSide = cross(cDir, cUp);

// Orthographic Camera
vec3 ray = origin + cSide * p.x + cUp * p.y;
vec3 rayDirection = cDir;
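For comparison, a perspective camera would fire every ray from a single origin and let the directions fan out. A sketch of that variant, not used in this demo (targetDepth is a hypothetical focal-distance parameter):

// Perspective camera (alternative, not used in this demo)
float targetDepth = 1.0; // distance to the virtual image plane; smaller = wider angle
vec3 ray = origin;
vec3 rayDirection = normalize(cSide * p.x + cUp * p.y + cDir * targetDepth);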

After that, inside the map function, two spheres are defined and their distances calculated using sdSphere. The variable d is initially set to a large value and updated with the min function to keep track of the shortest distance to the surface.

float map(vec3 p) {
    float radius = 0.5;
    float d = 1e5;

    float sphere0 = sdSphere(translate(p, vec3(0.4, 0.0, 0.0)), radius);
    float sphere1 = sdSphere(translate(p, vec3(-0.4, 0.0, 0.0)), radius);
    d = min(sphere0, sphere1);

    return d;
}

Then we run a ray marching loop, which updates the ray position by computing the distance to the nearest object at each step. The loop ends either after a fixed number of iterations or when the distance becomes smaller than a threshold (dist < EPS):

for (int i = 0; i < ITR; ++i) {
    dist = map(ray);
    ray += rayDirection * dist;
    if (dist < EPS) break;
}

Finally, we determine the output color. We use black as the default color (background), and render a white pixel only if a hit is detected:

vec3 color = vec3(0.0);

if (dist < EPS) {
    color = vec3(1.0);
}

We’ve successfully rendered two overlapping spheres using ray marching!

2.4. Normals

Although we successfully rendered spheres in the previous section, the scene still looks flat and lacks depth. This is because we haven’t applied any shading or visual effects that respond to surface orientation.

While we won’t implement full shading in this demo, we’ll still compute surface normals, as they’re essential for adding surface detail and other visual effects.

Let’s look at the code first:

vec3 generateNormal(vec3 p) {
    return normalize(vec3(
            map(p + vec3(EPS, 0.0, 0.0)) - map(p + vec3(-EPS, 0.0, 0.0)),
            map(p + vec3(0.0, EPS, 0.0)) - map(p + vec3(0.0, -EPS, 0.0)),
            map(p + vec3(0.0, 0.0, EPS)) - map(p + vec3(0.0, 0.0, -EPS))
        ));
}

At first glance, this may seem hard to understand. Put simply, this computes the gradient of the distance function, which corresponds to the normal vector.

If you’ve studied vector calculus, this might be easy to understand. For many others, though, it may seem a bit difficult.

That’s totally fine—a full understanding of the details isn’t necessary to use the result. If you just want to move on, feel free to skip ahead to the section where we debug normals by visualizing them with color.

However, for those who are interested in how it works, we’ll now walk through the explanation in more detail.

The gradient of a scalar function 𝑓(𝑥,𝑦,𝑧) is simply a vector composed of its partial derivatives. It points in the direction of the greatest rate of increase of the function:

∇𝑓 = ( ∂𝑓/∂𝑥, ∂𝑓/∂𝑦, ∂𝑓/∂𝑧 )

To compute this gradient numerically, we can use the central difference method. For example, the 𝑥 component is approximated as:

∂𝑓/∂𝑥 ≈ ( 𝑓(𝑥+𝜀, 𝑦, 𝑧) − 𝑓(𝑥−𝜀, 𝑦, 𝑧) ) / 2𝜀

We apply the same idea for the 𝑦 and 𝑧 components.
Note: The factor 2𝜀 is omitted in the code since we normalize the result using normalize().

Next, let us consider a signed distance function 𝑓(𝑥,𝑦,𝑧), which returns the shortest distance from any point in space to the surface of an object. By definition, 𝑓(𝑥,𝑦,𝑧)=0 on the surface of the object.

Assume that 𝑓 is smooth (i.e., differentiable) in the region of interest. When the point (𝑥,𝑦,𝑧) undergoes a small displacement Δ𝒓=(Δ𝑥,Δ𝑦,Δ𝑧), the change in the function value Δ𝑓 can be approximated using the first-order Taylor expansion:

Δ𝑓 ≈ ∇𝑓 · Δ𝒓

Here, ∇𝑓 is the gradient vector of 𝑓, and Δ𝒓 is an arbitrary small displacement vector.

Now, since 𝑓=0 on the surface and remains constant as we move along the surface (i.e., tangentially), the function value does not change, so Δ𝑓=0. Therefore:

∇𝑓 · Δ𝒓 = 0

This means that the gradient vector is perpendicular to any tangent vector Δ𝒓 on the surface. In other words, the gradient vector ∇𝑓 points in the direction of the surface normal.

Thus, the gradient of a signed distance function gives the surface normal direction at any point on the surface.
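As a side note, a popular alternative is the four-sample “tetrahedron” technique, which evaluates map() four times instead of six. A sketch that is equivalent in spirit to generateNormal above:

vec3 generateNormalTetra(vec3 p) {
    // Sample the SDF at the four vertices of a tetrahedron around p
    const vec2 k = vec2(1.0, -1.0);
    return normalize(
        k.xyy * map(p + k.xyy * EPS) +
        k.yyx * map(p + k.yyx * EPS) +
        k.yxy * map(p + k.yxy * EPS) +
        k.xxx * map(p + k.xxx * EPS));
}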

2.5. Visualizing Normals with Color

To verify that the surface normals are being calculated correctly, we can visualize them using color.

if (dist < EPS) {
    vec3 normal = generateNormal(ray);
    color = normal;
}

Note that within the if block, ray refers to a point on the surface of the object. So by passing ray to generateNormal, we can obtain the surface normal at the point of intersection.

When we render the scene, you’ll notice that the surfaces of the spheres are shaded in red, green, and blue based on the orientation of the normal vectors. This is because we’re mapping the 𝑥, 𝑦, and 𝑧 components of the normal vector to the RGB color channels respectively.

This is a common and intuitive way to debug normal vectors visually, helping us ensure they are computed correctly.
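One thing to keep in mind: normal components range from -1 to 1, so negative values are simply clamped to black in the output. If you want to see the full range, a common variant remaps the normal before displaying it:

if (dist < EPS) {
    vec3 normal = generateNormal(ray);
    color = normal * 0.5 + 0.5; // remap [-1, 1] to [0, 1]
}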

3. From Spheres to Metaballs

When combining two spheres with the standard min() function, a hard edge forms where the shapes intersect, resulting in an unnatural boundary.
To avoid this, we can use a blending function called smoothMin, which softens the transition by merging the distance values smoothly.

// added
float smoothMin(float d1, float d2, float k) {
    float h = exp(-k * d1) + exp(-k * d2);
    return -log(h) / k;
}

float map(vec3 p) {
    float radius = 0.5;
    float k = 7.; // added: smoothing factor for metaball effect
    float d = 1e5;

    float sphere0 = sdSphere(translate(p, vec3(.4, 0.0, 0.0)), radius);
    float sphere1 = sdSphere(translate(p, vec3(-.4, 0.0, 0.0)), radius);
    d = smoothMin(d, sphere0, k); // modified: blend with smoothing
    d = smoothMin(d, sphere1, k); // modified

    return d;
}

This function creates a smooth, continuous connection between shapes—producing a metaball-like effect where the forms appear to merge organically.

The parameter k controls the smoothness of the blend. A higher k value results in a sharper transition (closer to min()), while a lower k produces smoother, more gradual merging.
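As an aside, another common formulation is the polynomial smooth minimum, which avoids the exp() and log() calls. Note that its k is a blend radius in distance units, so in this variant a larger k gives a smoother blend, the opposite of the exponential version above:

// Polynomial smooth minimum (alternative, not used in this demo)
float smoothMinPoly(float d1, float d2, float k) {
    float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
    return mix(d2, d1, h) - k * h * (1.0 - h);
}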

For more details, please refer to the following two articles:

  1. wgld.org | GLSL: オブジェクト同士を補間して結合する (Interpolating and combining objects with GLSL)
  2. Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more

4. Adding Noise for a Droplet-like Appearance

So far, we’ve covered how to calculate normals and how to smoothly blend objects.

Next, let’s tune the surface appearance to make things feel more realistic.

In this demo, we’re aiming to create droplet-like metaballs. So how can we achieve that kind of look? The key idea here is to use noise to distort the surface.

Let’s jump right into the code:

// output.frag

uniform float uTime;

// ...

float rnd3D(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453123);
}

float noise3D(vec3 p) {
    vec3 i = floor(p);
    vec3 f = fract(p);

    float a000 = rnd3D(i); // (0,0,0)
    float a100 = rnd3D(i + vec3(1.0, 0.0, 0.0)); // (1,0,0)
    float a010 = rnd3D(i + vec3(0.0, 1.0, 0.0)); // (0,1,0)
    float a110 = rnd3D(i + vec3(1.0, 1.0, 0.0)); // (1,1,0)
    float a001 = rnd3D(i + vec3(0.0, 0.0, 1.0)); // (0,0,1)
    float a101 = rnd3D(i + vec3(1.0, 0.0, 1.0)); // (1,0,1)
    float a011 = rnd3D(i + vec3(0.0, 1.0, 1.0)); // (0,1,1)
    float a111 = rnd3D(i + vec3(1.0, 1.0, 1.0)); // (1,1,1)

    vec3 u = f * f * (3.0 - 2.0 * f);
    // vec3 u = f*f*f*(f*(f*6.0-15.0)+10.0);

    float k0 = a000;
    float k1 = a100 - a000;
    float k2 = a010 - a000;
    float k3 = a001 - a000;
    float k4 = a000 - a100 - a010 + a110;
    float k5 = a000 - a010 - a001 + a011;
    float k6 = a000 - a100 - a001 + a101;
    float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;

    return k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;
}

vec3 dropletColor(vec3 normal, vec3 rayDir) {
    vec3 reflectDir = reflect(rayDir, normal);

    float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
    float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

    vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
    vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;

    float intensity = 2.3;
    vec3 color = (_color0 + _color1) * intensity;

    return color;
}

// ...

void main() {
    // ...

    if (dist < EPS) {
        vec3 normal = generateNormal(ray);
        color = dropletColor(normal, rayDirection);
    }

    gl_FragColor = vec4(color, 1.0);
}

To create the droplet-like texture, we’re using value noise. If you’re unfamiliar with the technique, don’t worry; the next few paragraphs break down exactly how it works.

3D value noise is generated by interpolating random values placed at the eight vertices of a cube. The process involves three stages of linear interpolation:

  1. Bottom face interpolation: First, we interpolate between the four corner values on the bottom face of the cube
  2. Top face interpolation: Similarly, we interpolate between the four corner values on the top face
  3. Final z-axis interpolation: Finally, we interpolate between the results from the bottom and top faces along the z-axis

This triple interpolation process is called trilinear interpolation.

The following code demonstrates the trilinear interpolation process for 3D value noise:

float n = mix(
    mix(mix(a000, a100, u.x), mix(a010, a110, u.x), u.y),
    mix(mix(a001, a101, u.x), mix(a011, a111, u.x), u.y),
    u.z
);

The nested mix() functions above can be converted into an explicit polynomial form for better performance:

float k0 = a000;
float k1 = a100 - a000;
float k2 = a010 - a000;
float k3 = a001 - a000;
float k4 = a000 - a100 - a010 + a110;
float k5 = a000 - a010 - a001 + a011;
float k6 = a000 - a100 - a001 + a101;
float k7 = -a000 + a100 + a010 - a110 + a001 - a101 - a011 + a111;

float n = k0 + k1 * u.x + k2 * u.y + k3 * u.z + k4 * u.x * u.y + k5 * u.y * u.z + k6 * u.z * u.x + k7 * u.x * u.y * u.z;

By sampling this noise using the reflection vector as coordinates, we can create a realistic water droplet-like texture. Note that we are using the surface normal obtained earlier to compute this reflection vector. To add time-based variation, we generate noise at positions offset by uTime:

vec3 reflectDir = reflect(rayDir, normal);

float noisePosTime = noise3D(reflectDir * 2.0 + uTime);
float noiseNegTime = noise3D(reflectDir * 2.0 - uTime);

Finally, we blend two noise-influenced colors and scale the result:

vec3 _color0 = vec3(0.1765, 0.1255, 0.2275) * noisePosTime;
vec3 _color1 = vec3(0.4118, 0.4118, 0.4157) * noiseNegTime;

float intensity = 2.3;
vec3 color = (_color0 + _color1) * intensity;

It’s starting to look quite like a water droplet! However, it still appears a bit murky.
To improve this, let’s add the following post-processing step:

// output.frag

if (dist < EPS) {
    vec3 normal = generateNormal(ray);
    color = dropletColor(normal, rayDirection);
}

vec3 finalColor = pow(color, vec3(7.0)); // added

gl_FragColor = vec4(finalColor, 1.0); // modified

Using pow(), darker regions are suppressed, allowing the highlights to pop and creating a more glass-like, translucent surface.

5. Simulating Stretchy Droplets with Mouse Movement

Finally, let’s make the droplet stretch and follow the mouse movement, giving it a soft and elastic feel.

We’ll achieve this by placing multiple spheres along the mouse trail.

// Output.ts

constructor() {
  // ...
  this.trailLength = 15;
  this.pointerTrail = Array.from(
    { length: this.trailLength },
    () => new THREE.Vector2(0, 0)
  );

  this.uniforms = {
    uTime: { value: Common.time },
    uResolution: {
      value: new THREE.Vector2(Common.width, Common.height),
    },
    uPointerTrail: { value: this.pointerTrail },
  };
}

// ...

/**
 * # rAF update
 */
update() {
  this.updatePointerTrail();
  this.render();
}

/**
 * # Update the pointer trail
 */
updatePointerTrail() {
  for (let i = this.trailLength - 1; i > 0; i--) {
    this.pointerTrail[i].copy(this.pointerTrail[i - 1]);
  }
  this.pointerTrail[0].copy(Pointer.coords);
}
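Pointer.coords above comes from a pointer-tracking helper that isn’t shown in this excerpt. A hypothetical minimal version, storing the mouse position in the -1 to 1 range that matches the shader’s coordinate space, might look like this:

// Pointer.ts (hypothetical sketch; the real implementation isn’t shown here)

import * as THREE from "three";

export const Pointer = {
  coords: new THREE.Vector2(0, 0),

  init() {
    window.addEventListener("mousemove", (e) => {
      // Convert pixel coordinates to the -1..1 range used in the shader
      Pointer.coords.set(
        (e.clientX / window.innerWidth) * 2.0 - 1.0,
        -(e.clientY / window.innerHeight) * 2.0 + 1.0
      );
    });
  },
};

On the shader side, we receive the trail as a uniform array and place one sphere at each trail point: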
// output.frag

const int TRAIL_LENGTH = 15; // added
uniform vec2 uPointerTrail[TRAIL_LENGTH]; // added

// ...

// modified
float map(vec3 p) {
    float baseRadius = 8e-3;
    float radius = baseRadius * float(TRAIL_LENGTH);
    float k = 7.;
    float d = 1e5;

    for (int i = 0; i < TRAIL_LENGTH; i++) {
        float fi = float(i);
        vec2 pointerTrail = uPointerTrail[i] * uResolution / min(uResolution.x, uResolution.y);

        float sphere = sdSphere(
                translate(p, vec3(pointerTrail, .0)),
                radius - baseRadius * fi
            );

        d = smoothMin(d, sphere, k);
    }

    float sphere = sdSphere(translate(p, vec3(1.0, -0.25, 0.0)), 0.55);
    d = smoothMin(d, sphere, k);

    return d;
}
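Each trail point spawns a sphere, and the radius shrinks the older the point is (radius - baseRadius * fi), so the chain tapers off behind the pointer. Blending every sphere with smoothMin then fuses the chain into a single stretchy, elastic blob. The extra sphere at (1.0, -0.25, 0.0) is a static droplet sitting in the scene, which the trail merges with as the mouse passes by.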

Conclusion

In this tutorial, we explored how to create a dynamic, droplet-like effect using ray marching and shading techniques. Here’s what we covered:

  1. Used ray marching to render spheres in 3D space.
  2. Applied smoothMin to blend the spheres into seamless metaballs.
  3. Added surface noise to give the spheres a more organic appearance.
  4. Simulated stretchy motion by arranging spheres along the mouse trail.

By combining these techniques, we achieved a soft, fluid visual that responds to user interaction.

Thanks for following along—I hope you find these techniques useful in your own projects!

Yuki Kojima

Front-End Developer
