Rain & Water Effect Experiments

Some experimental rain and water drop effects made with WebGL and shown in different demo scenarios.

Today we’d like to share some WebGL experiments with you. The idea is to create a very realistic-looking rain effect and put it in different scenarios. In this article, we’ll give an overview of the general tricks and techniques used to make this effect.

Please note that the effect is highly experimental and might not work as expected in all browsers. Best viewed in Chrome.

Getting Started

If we want to make an effect based on the real world, the first step is to dissect how it actually looks, so we can make it look convincing.
If you look at detailed pictures of water drops on a window (or, of course, have observed them in real life), you will notice that, due to refraction, the raindrops appear to turn the image behind them upside down.

Image credits: Wikipedia, GGB reflection in raindrop

You’ll also see that drops that are close to each other merge and, once a drop gets past a certain size, it falls down, leaving a small trail.
To simulate this behavior we’ll have to render a lot of drops and update the refraction on them on every frame. Doing all of that at a decent frame rate requires pretty good performance, so we’ll use WebGL to take advantage of hardware-accelerated graphics.

WebGL

WebGL is a JavaScript API for rendering 2D and 3D graphics, allowing the use of the GPU for better performance. It is based on OpenGL ES, and the shaders aren’t written in JS at all, but rather in a language called GLSL.

All in all, that makes it look difficult to use if you’re coming from exclusively web development — it’s not only a new language, but new concepts as well — but once you grasp some key concepts it will become much easier.

In this article we will only show a basic example of how to use it; for a more in-depth explanation, check out the excellent WebGL Fundamentals page.

The first thing we need is a canvas element. WebGL renders onto a canvas, and we work with it through a rendering context, just like the 2D one we get with canvas.getContext('2d').

<span class="hljs-tag"><<span class="hljs-title">canvas</span> <span class="hljs-attribute">id</span>=<span class="hljs-value">"container"</span> <span class="hljs-attribute">width</span>=<span class="hljs-value">"800"</span> <span class="hljs-attribute">height</span>=<span class="hljs-value">"600"</span>></span><span class="hljs-tag"></<span class="hljs-title">canvas</span>></span>
<span class="hljs-keyword">var</span> canvas = <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">"container"</span>);
<span class="hljs-keyword">var</span> gl = canvas.getContext(<span class="hljs-string">"webgl"</span>);

Then we’ll need a program, which is made up of a vertex shader and a fragment shader. Shaders are functions: a vertex shader is run once per vertex, and the fragment shader is called once per pixel. Their jobs are to return coordinates and colors, respectively. This is the heart of our WebGL application.

First we’ll create our shaders. This is the vertex shader; we’ll make no changes on the vertices and will simply let the data pass through it:

<script id="vert-shader" type="x-shader/x-vertex">
  // gets the current position
  attribute vec4 a_position;

  void main() {
   // returns the position
   gl_Position = a_position;
  }
</script>

And this is the fragment shader. This one sets the color of each pixel based on its coordinates.

<script id="frag-shader" type="x-shader/x-fragment">
  precision mediump float;

  void main() {
     // current coordinates
     vec4 coord = gl_FragCoord;
     // sets the color
     gl_FragColor = vec4(coord.x/800.0,coord.y/600.0, 0.0, 1.0);
  }
</script>

Now we’ll link the shaders to the WebGL context:

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">createShader</span>(<span class="hljs-params">gl,source,type</span>)</span>{
  <span class="hljs-keyword">var</span> shader = gl.createShader(type);
  source = <span class="hljs-built_in">document</span>.getElementById(source).text;
  gl.shaderSource(shader, source);
  gl.compileShader(shader);

  <span class="hljs-keyword">return</span> shader;
}

<span class="hljs-keyword">var</span> vertexShader = createShader(gl, <span class="hljs-string">'vert-shader'</span>, gl.VERTEX_SHADER);
<span class="hljs-keyword">var</span> fragShader = createShader(gl, <span class="hljs-string">'frag-shader'</span>, gl.FRAGMENT_SHADER);

<span class="hljs-keyword">var</span> program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragShader);

gl.linkProgram(program);
gl.useProgram(program);
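The snippet above skips error handling to stay short. While experimenting it can save a lot of time to check the compile and link status and log any errors; these are standard WebGL calls and could be added like this:

// optional while developing: surface shader compile and program link errors
if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(vertexShader));
}
if (!gl.getShaderParameter(fragShader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(fragShader));
}
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  console.error(gl.getProgramInfoLog(program));
}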

Then we’ll have to create the geometry we’ll render our shader onto. Here we will just create a rectangle, made of two triangles.

<span class="hljs-comment">// create rectangle</span>
<span class="hljs-keyword">var</span> buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(
    gl.ARRAY_BUFFER,
    <span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>([
        -<span class="hljs-number">1.0</span>, -<span class="hljs-number">1.0</span>,
         <span class="hljs-number">1.0</span>, -<span class="hljs-number">1.0</span>,
        -<span class="hljs-number">1.0</span>,  <span class="hljs-number">1.0</span>,
        -<span class="hljs-number">1.0</span>,  <span class="hljs-number">1.0</span>,
         <span class="hljs-number">1.0</span>, -<span class="hljs-number">1.0</span>,
         <span class="hljs-number">1.0</span>,  <span class="hljs-number">1.0</span>]),
    gl.STATIC_DRAW);

<span class="hljs-comment">// vertex data</span>
<span class="hljs-keyword">var</span> positionLocation = gl.getAttribLocation(program, <span class="hljs-string">"a_position"</span>);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, <span class="hljs-number">2</span>, gl.FLOAT, <span class="hljs-literal">false</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>);

Finally, we render the whole thing:

gl.drawArrays(gl.TRIANGLES, 0, 6);

And this is the result:

webgl-1

After that, you can play with the shaders to get the hang of how they work. You can see a lot of great shader examples on ShaderToy.

Raindrops

Now let’s see how to make the raindrop effect. First, let’s see how a single raindrop looks:

drop1

Now, there are a couple of things going on here.

The alpha channel looks like this because we’ll use a technique similar to the one in the Creative Gooey Effects article to make the raindrops stick together.

drop-merged
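The article doesn’t list the exact shader used in the demo, but the core of the gooey technique is to blur the drops’ alpha channel and then sharpen it again with a threshold, so blobs that are close enough fuse into a single shape. A rough fragment-shader sketch of that thresholding step (u_waterMap and v_texCoord are placeholder names, and this assumes the vertex shader passes texture coordinates along):

precision mediump float;
// blurred drop alpha rendered beforehand (placeholder name)
uniform sampler2D u_waterMap;
varying vec2 v_texCoord;

void main() {
  float alpha = texture2D(u_waterMap, v_texCoord).a;
  // sharpen the blurred alpha: fade quickly from invisible to fully
  // visible around a threshold, so overlapping blobs merge into one shape
  alpha = smoothstep(0.5, 0.6, alpha);
  gl_FragColor = vec4(0.0, 0.0, 0.0, alpha);
}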

There’s a reason for the color: we’ll be using a technique similar to normal mapping to make the refraction effect. We’ll use the color of the raindrop to get the coordinates of the texture we’ll see through the drop. This is how it looks without the mask:

drop-color

From this image, we’ll use data from the green channel to get the X position, and from the red channel to get the Y position.

drop-color2

Now we can write our shader and use both that data and the drop’s position to flip and distort the texture right behind the raindrop.

screen1_dropdetail
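The demo’s real shader isn’t shown in the article, but the principle can be sketched: read the drop texture, turn its green and red channels into an offset, and use that offset to look up the background texture, with the flipped look coming from how the drop texture itself is colored. Names such as u_waterMap, u_background and v_texCoord are placeholders, and 0.1 is just an arbitrary refraction strength:

precision mediump float;
uniform sampler2D u_waterMap;   // the rendered raindrops (placeholder name)
uniform sampler2D u_background; // the scene behind the glass (placeholder name)
varying vec2 v_texCoord;

void main() {
  vec4 drop = texture2D(u_waterMap, v_texCoord);
  // green -> X, red -> Y, remapped from [0,1] to roughly [-1,1]
  vec2 offset = (vec2(drop.g, drop.r) - 0.5) * 2.0;
  // inside the drop, look the background up at a shifted position
  vec4 refracted = texture2D(u_background, v_texCoord + offset * 0.1);
  // outside the drop (alpha near 0) just show the background as-is
  gl_FragColor = mix(texture2D(u_background, v_texCoord), refracted, drop.a);
}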

Raining

After creating our raindrop, we’ll make our rain simulation.

Making the raindrops interact with each other can get heavy fast, since the number of pairwise interactions grows quadratically with the number of drops, so we’ll have to optimize a little bit.

In this demo, I’m splitting the drops into large and small ones. The small drops are rendered on a separate canvas and are not kept track of. That way, I can make thousands of them without slowing things down. The downside is that they are static, and since we are making new ones every frame, they accumulate. To fix that, we’ll use our bigger drops.

Since the big drops do move, we can use them to erase smaller drops underneath them. Erasing in canvas is tricky: we have to actually draw something, but use globalCompositeOperation='destination-out'. So, every time a big drop moves, we draw a circle on the small drops canvas using that composite operation to clean the drops and make the effect more realistic.

screen2_droptrail
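For reference, the erasing step can be sketched like this on the 2D canvas that holds the small drops; the names here are made up for the example, not taken from the demo’s source:

// smallDropsCtx is the 2D context of the canvas with the static small drops
function clearDropArea(smallDropsCtx, x, y, radius) {
  smallDropsCtx.save();
  // anything drawn now erases existing pixels instead of painting over them
  smallDropsCtx.globalCompositeOperation = "destination-out";
  smallDropsCtx.beginPath();
  smallDropsCtx.arc(x, y, radius, 0, Math.PI * 2);
  smallDropsCtx.fill();
  smallDropsCtx.restore();
}

Calling something like clearDropArea every time a big drop moves wipes the small drops in its path, which also produces the trail seen above.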

Finally, we’ll render them all on a big canvas and use that as a texture for our WebGL shader.

raindrops-no-texture
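Uploading a canvas as a WebGL texture is a standard call; a minimal sketch, assuming the drops were drawn on a canvas called dropsCanvas. The texture would be created once and re-uploaded with texImage2D on every frame, since the drops move:

var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// a canvas element can be passed directly as the pixel source
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, dropsCanvas);
// linear filtering and clamping let us scale the texture smoothly
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);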

To make things lighter, we’ll take advantage of the fact that the background is out of focus, so we’ll use a small texture for it and stretch it out; in WebGL, texture size directly impacts performance. We’ll have to use a different, in-focus texture for the raindrops themselves. Blur is an expensive operation, and doing it in real time should be avoided – but since the raindrops are small, we can make that texture small as well.

texture-drizzle-fg

texture-drizzle-bg

Conclusion

To make realistic-looking effects like raindrops, we need to consider many tricky details. Taking a real-world effect apart first is the key to recreating it convincingly. Once we know how things work in reality, we can map that behavior to the virtual world. Since WebGL gives us access to hardware-accelerated graphics, we can get good performance for this kind of simulation, which makes it a good choice for this effect.

We hope you enjoyed this experiment and find it inspiring!


Lucas Bebber

Lucas is a developer and illustrator at Garden Estúdio in Brazil. Check out his work on Codepen.
