Case Study: Ronin161’s Portfolio – 2024

A look into the making of Ronin161’s new portfolio for 2024, from ideas to code. Plus an in-depth explanation about the custom Toon Shader.

Introduction & Motivation

We wanted to update our portfolio, which was created in 2018. Our main objective was to keep the essence of the website and its key points, while reworking the art direction to make it more colorful, revisiting our 3D character, improving performance, pushing the creative side further, and adding a CMS, among other things.

The V01 aimed to offer an original portfolio concept with a touch of surrealism and provocation through our 3D, the navigation, and the character’s behavior, capturing the user’s attention with awkward, funny, or even arrogant poses.

Home – V01
Project – V01
Contact – V01

In this post, we’ll explain some of the main concepts and effects of the website along with some examples.

Challenges

We wanted to completely rethink the rendering of the meshes to achieve a slightly cartoonish/drawing result.

We also wanted to maintain smooth transitions between pages with different lighting and rendering, and to stylize the WebGL texts with randomized parameters for a unique experience and better integration with the 3D.

Of course, all of this needed to remain as performant as possible.

Tech stack

  • Nuxt.js
  • Prismic
  • Three.js & Custom WebGL Tools
  • GSAP

Custom Toon Shading

Toon shading (also called cel shading) is composed of two parts.

The first is that, rather than a continuous change of color, luminance values are clamped into discrete bands, resulting in regions that are all the same color.

The second is that toon-shaded objects usually have outlines around them.

We use Three.js as our WebGL library. There’s a Toon Material in the examples, but unfortunately, it wasn’t quite what we were looking for. We wanted better control over the rendering and to add some grain to the image. So we had to create a custom shader.

Three.js example: https://threejs.org/examples/webgl_materials_toon.html

The first step was to obtain only two tones in the shading: the lit part in some color and the shadow part in black. For this, we used the lighting math in the shader and compared the luminance of each pixel with a threshold. Then, we switched the shadows from black to blue.
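To illustrate, here is a CPU-side sketch of that two-tone split. The function name, threshold value, and labels are ours for illustration, not the actual shader code:

```javascript
// Illustrative sketch of the two-tone step, not the production shader.
// The Lambert term N · L gives a continuous luminance; a threshold then
// splits it into exactly two bands: lit and shadow.
function toonTone(normal, lightDir, threshold = 0.5) {
  const dot3 = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const lum = Math.max(dot3(normal, lightDir), 0); // like max(dot(N, L), 0.0) in GLSL
  // Equivalent to step(threshold, lum): lit color above, blue shadow below
  return lum >= threshold ? 'lit' : 'shadow';
}
```

In the shader itself, this amounts to a `step()` on the luminance followed by a `mix()` between the lit color and the blue shadow color.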

The second step was to calculate a gradient defining where the noise would then be applied gradually.

The third step was to insert noise into this gradient to break up the linear edge and give it a more hand-drawn look.
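The second and third steps can be sketched together like this. It is a simplified version with made-up parameter names; the real shader samples an actual noise function per pixel:

```javascript
// Illustrative sketch of the gradient + noise steps.
// smoothstep() builds a soft gradient band around the threshold.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// noiseValue stands in for a noise texture/function sampled per pixel.
function toonEdge(lum, threshold, spread, noiseValue) {
  const gradient = smoothstep(threshold - spread, threshold + spread, lum);
  // Comparing noise against the gradient dithers the transition zone,
  // breaking the linear edge into a more hand-drawn one.
  return noiseValue < gradient ? 1 : 0; // 1 = lit, 0 = shadow
}
```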

Afterward, we adjusted the various parameters based on the lighting, character, and the rest of the website to get the desired result.

Final result of the shader

Post Effects

The website has several post-processing effects.

For context, each page has its own scene, with its own meshes (character, texts, etc.) depending on the content of the page. The different pages are drawn into two render targets used in a ping-pong fashion (only active pages are rendered), which makes the Slice transition possible on a fullscreen plane.
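The ping-pong idea can be sketched like this. This is a minimal, hypothetical structure; on the website the two targets are actual WebGL render targets that the scenes are drawn into:

```javascript
// Minimal sketch of ping-pong render targets (illustrative names).
// One target receives the page being drawn while the other still holds the
// previous page, so both can be sampled during the Slice transition.
class PingPong {
  constructor() {
    this.targets = ['targetA', 'targetB']; // stand-ins for two render targets
    this.index = 0;
  }
  get write() { return this.targets[this.index]; }    // target being rendered into
  get read() { return this.targets[1 - this.index]; } // target holding the previous page
  swap() { this.index = 1 - this.index; }             // called when a transition starts
}
```

The fullscreen plane’s shader then samples both targets and blends them according to the transition progress.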

Some additional post-effects are added in this plane’s shader:

  • Vignette, RGB Shift and Bulge Effect: linked to scroll and mouse movement

The rest of the effects are then applied:

  • Fluid simulation: linked to mouse movement and coupled afterwards with the datamoshing simulation
  • Datamoshing: UVs deformation based on fluid simulation + optical flow algorithm linked to scroll + pixelation and noise on the result of both simulations
  • Bloom: mainly for videos in the project pages
  • Blur: used when the gallery in project pages is open
  • Color grain: reminder of the toon shader
  • Cursor: the cursor is completely managed in post, with shapes drawn in the shader using transforms, states, shading, and blending
  • Film Grain
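As an example of how the datamoshing distortion works, here is a sketch of the UV deformation and pixelation. Names and values are illustrative; on the website the velocities come from the fluid simulation and the optical flow:

```javascript
// Illustrative sketch of the datamosh-style UV deformation.
// The UV is offset by a velocity (fluid simulation + optical flow),
// then snapped to a coarse grid to get the pixelated, blocky look.
function datamoshUV(uv, velocity, strength, pixels) {
  const u = uv[0] + velocity[0] * strength;
  const v = uv[1] + velocity[1] * strength;
  // Quantize to a pixels × pixels grid, like floor(uv * p) / p in GLSL
  return [Math.floor(u * pixels) / pixels, Math.floor(v * pixels) / pixels];
}
```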

We used our own in-house post-processing library, which gives us better control, better results, and better performance for all these effects.

Indeed, some effects such as the simulations or bloom require separate render targets for their calculations, but the final composition of all these effects does not require separate renders, so they are automatically compiled into a single shader (resulting in only one pass). This avoids having one render per effect, which would be too heavy in terms of performance and would lose some quality.

// Preview of the fragment shader logic
void fluid(inout vec4 color, in vec2 uv) { /* fluid maths */ }
void datamosh(inout vec4 color, in vec2 uv) { /* datamosh maths */ }
void bloom(inout vec4 color, in vec2 uv) { /* bloom maths */ }
void dof(inout vec4 color, in vec2 uv) { /* dof maths */ }
void drawCursor(inout vec4 color, in vec2 uv) { /* drawCursor maths */ }
void noise(inout vec4 color, in vec2 uv) { /* noise maths */ }
void colorGrain(inout vec4 color, in vec2 uv) { /* colorGrain maths */ }
void main() {
    vec4 c = texture2D(tInput, vUv);
    vec2 uv = vUv;
    fluid(c, uv);
    datamosh(c, uv);
    bloom(c, uv);
    dof(c, uv);
    colorGrain(c, uv);
    drawCursor(c, uv);
    noise(c, uv);
    gl_FragColor = vec4(c.rgb, 1.0);
}

WebGL Texts

We wanted to have texts in WebGL as well, to better integrate them with the rest of the website and the art direction. However, we didn’t want to entirely neglect accessibility, which is why we also kept HTML text underneath some WebGL texts for click and selection purposes.

This also gave us more possibilities for rendering (solid texts, outlines, noise, gradients…).

Unfortunately, it’s always a bit complicated to work with text in WebGL (size, wrapping, responsive…), but a few years ago, we also created a custom library to handle this, which we had to improve a lot for this portfolio update. This library uses MSDF texts, allowing for clear, sharp, flexible typography while remaining efficient.
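At its core, MSDF rendering takes the median of the three distance channels stored in the texture. Here is a sketch of that operation (the standard formula from msdfgen, written in JavaScript for readability; the shader version is the same math):

```javascript
// The median of the three channels of a multi-channel distance field.
// This is what lets MSDF keep sharp corners that a single-channel SDF rounds off.
function median(r, g, b) {
  return Math.max(Math.min(r, g), Math.min(Math.max(r, g), b));
}

// A pixel belongs to the glyph when the median distance crosses the 0.5 midpoint.
function isInsideGlyph(r, g, b) {
  return median(r, g, b) >= 0.5;
}
```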

One of the challenges was being able to customize the shader according to our needs, dynamic parameters, and the animations of each text, as if they were HTML texts. In particular, hiding the texts letter by letter was a real challenge; we do so with custom attributes and uniforms in the shaders.
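The letter-by-letter reveal can be sketched as a per-letter remapping of a single global progress uniform. Parameter names and the stagger value are illustrative:

```javascript
// Illustrative sketch: each letter carries an index attribute, and a global
// progress uniform (0..1) is remapped into a local alpha per letter, so the
// letters hide/show one after the other.
function letterAlpha(globalProgress, letterIndex, letterCount, stagger = 0.5) {
  const start = (letterIndex / letterCount) * stagger; // later letters start later
  const duration = 1 - stagger;                        // time left for each letter
  const local = (globalProgress - start) / duration;
  return Math.min(Math.max(local, 0), 1);              // clamp to 0..1
}
```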

Text match & selection
Home to Project
Project to Project

More info on MSDF: https://github.com/Chlumsky/msdfgen

3D Character

A significant part of the website’s identity also comes from our iconic character, which was created in Character Creator (CC), a 3D character design software. All textures were done in Substance Painter.

first batch WIP

We wanted to carry over the concept of the V01 of the website, where the character could take multiple poses and follow the cursor with his head to track the user’s movements.

However, we wanted to make him more lively and unique with each pose. For this, we thought of making his eyes blink, as well as giving him more pronounced facial expressions and custom textures for each pose.

The character currently has 7 states, and therefore 7 poses, facial expressions, and sets of textures (head, arms, body).

This resulted in several challenges:

  1. How to use the character from CC but optimize it for the web?
  2. How to load multiple poses and facial expressions without loading separate files, for loading performance?
  3. How to make the character blink his eyes?
  4. How to optimize textures if we wanted several?

  1. Since we only see the top of the character, we removed the bottom part and the bones below the waist, allowing us to drastically reduce the file size. Thanks to CC, we could also export the character at a lower definition.
  2. To avoid loading multiple models, we decided to have only one file using a skeletal mesh. This allows us to move the bones and thus recreate the poses we wanted. Unfortunately, this doesn’t work for facial expressions, for which we needed morph targets. For optimization and file-size reasons, we decided to split the character into two parts, the body and the head, which we recombine in code. Indeed, morph targets duplicate all the vertices, and only the vertices of the head needed to change, unlike the body, which remained static. So we exported the head with all the bones and the morphs we needed, setting them in code like the bones to achieve the expression we wanted. This also means we can animate the face if we want. We exported the body as well, but with only the bones, for a much smaller file size.
  3. Thanks to the morphs, we were able to obtain one for opening and closing the eyes. We can play with different timings and delays to add randomness and realism to the blinking.
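The randomized blinking can be sketched like this. The names and timing values are ours for illustration; the real site drives the eye morph target on such a timer:

```javascript
// Illustrative sketch of the randomized blink scheduling.
// The delay between two blinks is picked in a range, so the character
// never blinks at a perfectly regular (robotic) interval.
function nextBlinkDelay(random = Math.random, minMs = 2000, maxMs = 6000) {
  return minMs + random() * (maxMs - minMs); // delay in [minMs, maxMs)
}
```

Each time a blink finishes, the eye morph influence animates back to open and a new delay is drawn before the next blink.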
  4. Despite our efforts, the weight of the mesh, especially the head, and of all the textures was still too heavy and took up too much GPU RAM for our taste. We converted the mesh into binary glTF with the meshoptimizer extension to optimize the vertices and the file weight. As for the textures, we grouped the different parts of the body (head, arms, body) into a spritesheet, which we then converted into the Basis format, which is more suitable (and lighter) for the GPU. With all this, we achieved a more satisfying result.

With the workflow we found, it is entirely possible to easily create new poses and crazy combinations and integrate them into the website without adding too much file size.

Before optimization:

  • Head: 4.9 MB
  • Body: 631 KB
  • 21 textures (7 poses x 3 mesh parts): 3.9 MB

After optimization:

  • Head: 2.9 MB
  • Body: 301 KB
  • 7 spritesheet textures: 1.39 MB

More info on gltfpack and meshoptimizer: https://meshoptimizer.org/gltf/

Basis compression: https://github.com/BinomialLLC/basis_universal

Conclusion

We hope you enjoyed this case study. We can’t go into details about every aspect and issue we encountered, but we hope this gives you a good overview of the challenges and ambitions we had and perhaps you even learned a few tips and tricks.

If you have any questions, feel free to ask us on Twitter or Instagram.

Ronin161

D I G I T A L . G A N G. Doing 3D, videos, creative coding and much more.
