The Idea Behind the Project
This project primarily serves as a technical demo and learning material. It began when I decided to start learning Blender. I followed a few tutorials, then decided to do a small project using it—so I chose to create the Canon F-1 camera!
After that, I decided to export the project to Three.js to add some cool post-processing shader effects. I wanted to create a sketch effect similar to what I had seen in some repair guides.

After spending a few hours experimenting with it, I decided to integrate it into a fully functional website featuring some cool shaders and 3D effects!
In this article, I’m going to walk through some of the key features of the site and provide a technical breakdown, assuming you already have a basic or beginner-level understanding of Three.js and shaders.
1. The Edge Detection Shader
Three.js includes a built-in edge detection shader called SobelOperatorShader. Basically, it detects edges based on color contrast—it draws a line between two areas with a strong enough difference in color.
To make my effect work the way I want, I need to assign a unique color to each area I want to highlight on my model. This way, Three.js will draw a line around those areas.
Here’s my model with all the materials applied:

This way, Three.js can accurately detect each area I want to highlight!
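If you want to reproduce this setup, one minimal way to give each area a unique flat color is sketched below. This is my own illustrative helper, not the author's exact code: it spreads hues evenly around the color wheel so every area boundary has strong contrast for the Sobel pass.

```javascript
// Illustrative helper: generate N visually distinct flat colors, one per
// area to highlight. Hues are spread evenly so adjacent areas contrast well.
function distinctColors(count) {
  const colors = []
  for (let i = 0; i < count; i++) {
    const hue = Math.round((360 / count) * i)
    colors.push(`hsl(${hue}, 100%, 50%)`)
  }
  return colors
}

// Assumed Three.js usage: assign each area's meshes an unlit material in
// its color so the render shows flat, high-contrast regions:
// areaMeshes.forEach((mesh, i) => {
//   mesh.material = new THREE.MeshBasicMaterial({ color: colors[i] })
// })
```

An unlit material (`MeshBasicMaterial`) matters here: shading gradients from a lit material would create spurious edges inside a single area.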
As you can see, the lines are not all the same intensity—some are white, while others are light gray. This is because, by default, line intensity depends on contrast: edges with lower contrast appear with lighter lines. To fix this, I manually modified the post-processing shader to make all lines fully white, regardless of contrast.
The shader can be found in:
node_modules/three/examples/jsm/shaders/SobelOperatorShader.js
I copied the contents of the fragment shader into a separate file so I could freely modify it.
uniform sampler2D tDiffuse;
uniform vec2 resolution;
varying vec2 vUv;

float sobel(sampler2D tDiffuse, vec2 texel)
{
  // kernel definition (in glsl matrices are filled in column-major order)
  const mat3 Gx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 ); // x direction kernel
  const mat3 Gy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 ); // y direction kernel

  // fetch the 3x3 neighbourhood of a fragment

  // first column
  float tx0y0 = texture2D( tDiffuse, vUv + texel * vec2( -1, -1 ) ).r;
  float tx0y1 = texture2D( tDiffuse, vUv + texel * vec2( -1, 0 ) ).r;
  float tx0y2 = texture2D( tDiffuse, vUv + texel * vec2( -1, 1 ) ).r;

  // second column
  float tx1y0 = texture2D( tDiffuse, vUv + texel * vec2( 0, -1 ) ).r;
  float tx1y1 = texture2D( tDiffuse, vUv + texel * vec2( 0, 0 ) ).r;
  float tx1y2 = texture2D( tDiffuse, vUv + texel * vec2( 0, 1 ) ).r;

  // third column
  float tx2y0 = texture2D( tDiffuse, vUv + texel * vec2( 1, -1 ) ).r;
  float tx2y1 = texture2D( tDiffuse, vUv + texel * vec2( 1, 0 ) ).r;
  float tx2y2 = texture2D( tDiffuse, vUv + texel * vec2( 1, 1 ) ).r;

  // gradient value in x direction
  float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
    Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
    Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;

  // gradient value in y direction
  float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
    Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
    Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;

  // magnitude of the total gradient
  float G = sqrt( ( valueGx * valueGx ) + ( valueGy * valueGy ) );
  return G;
}

void main() {
  vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
  float G = sobel( tDiffuse, texel );
  G = G > 0.001 ? 1. : 0.;
  gl_FragColor = vec4( vec3( G ), 1.0 );
  #include <colorspace_fragment>
}
What I’m doing here is moving all the edge detection logic into the sobel function. Then, I pass the tDiffuse texture—which is the composer’s render—to this function.
This way, I can modify the output of the edge detection shader before passing it back to the composer:
float G = sobel( tDiffuse, texel );
G = G > 0.001 ? 1. : 0.;
G represents the intensity of the edge detection. It’s a single value because the lines are monochrome. G ranges from 0 to 1, where 0 means full black (no edge detected) and 1 means full white (strong contrast detected).
As mentioned earlier, this value depends on the contrast. What I’m doing in the second line is forcing G to be 1 if it’s above a certain threshold (I chose 0.001, but you could pick a smaller value if you want).
This way I can get all the edges to have the same intensity.
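To see why the threshold equalizes the lines, the same Sobel math can be run in plain JavaScript on a 3×3 neighbourhood. This is my own sketch, not part of the site's code; it shows that a weak and a strong edge produce different gradient magnitudes but the same thresholded value.

```javascript
// Sobel kernels written row by row (equivalent to the column-major GLSL mat3s)
const Gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
const Gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

// Gradient magnitude for the center pixel of a 3x3 grayscale patch
function sobelMagnitude(patch) {
  let gx = 0
  let gy = 0
  for (let y = 0; y < 3; y++) {
    for (let x = 0; x < 3; x++) {
      gx += Gx[y][x] * patch[y][x]
      gy += Gy[y][x] * patch[y][x]
    }
  }
  return Math.sqrt(gx * gx + gy * gy)
}

// The thresholding step: any detected edge becomes fully white
const toEdge = (g, threshold = 0.001) => (g > threshold ? 1 : 0)

// strong vertical edge: dark left, bright right
const strongEdge = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
// same edge with much lower contrast
const weakEdge = [[0, 0, 0.1], [0, 0, 0.1], [0, 0, 0.1]]

sobelMagnitude(strongEdge) // 4
sobelMagnitude(weakEdge)   // 0.4 — a fainter line without the threshold
toEdge(sobelMagnitude(strongEdge)) // 1
toEdge(sobelMagnitude(weakEdge))   // 1 — same intensity after thresholding
```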
Here’s how I’m applying the custom fragment shader to the Sobel Operator shader pass:
import { SobelOperatorShader } from "three/addons/shaders/SobelOperatorShader.js"
import { ShaderPass } from "three/addons/postprocessing/ShaderPass.js"

export default class CannonF1 {
  constructor() {
    //....code
  }

  setupPostprocessing() {
    // sobelFragment holds the custom fragment shader source shown above
    SobelOperatorShader.fragmentShader = sobelFragment

    this.effectSobel = new ShaderPass(SobelOperatorShader)
    this.effectSobel.uniforms["resolution"].value.x =
      window.innerWidth * Math.min(window.devicePixelRatio, 2)
    this.effectSobel.uniforms["resolution"].value.y =
      window.innerHeight * Math.min(window.devicePixelRatio, 2)
    this.composer.addPass(this.effectSobel)
  }
}
2. The Mesh Highlight on Hover Effect
Next, let’s take a look at the lens parts section.
This is mainly achieved using a Three.js feature called WebGLRenderTarget.
A render target is a buffer where the GPU draws pixels for a scene being rendered off-screen. It’s commonly used in effects like post-processing, where the rendered image is processed before being displayed on the screen.
Basically, this allows me to render my scene twice per frame: once with only the highlighted mesh, and once without it.
First, I set up the render targets:
/*
....Code
*/

createRenderTargets() {
  const sizes = {
    width:
      window.innerWidth * Math.ceil(Math.min(2, window.devicePixelRatio)),
    height:
      window.innerHeight * Math.ceil(Math.min(2, window.devicePixelRatio)),
  }

  this.renderTargetA = new THREE.WebGLRenderTarget(
    sizes.width,
    sizes.height,
    rtParams
  )
  this.renderTargetB = new THREE.WebGLRenderTarget(
    sizes.width,
    sizes.height,
    rtParams
  )
}

/*
...Code
*/
Then, using the Three.js Raycaster, I can retrieve the uuid of the mesh that is being hovered on:
onMouseMove(event: MouseEvent) {
  this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
  this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1

  this.raycaster.setFromCamera(this.mouse, this.camera)
  const intersects = this.raycaster.intersectObjects(this.scene.children)
  const target = intersects[0]

  if (target && "material" in target.object) {
    const targetMesh = intersects[0].object as THREE.Mesh
    this.cannonF1?.onSelectMesh(targetMesh.uuid)
  } else {
    this.cannonF1?.onSelectMesh()
  }
}
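The first two lines convert the pixel mouse position into normalized device coordinates, where both axes range from -1 to 1 and y is flipped because screen coordinates grow downward. Factored into a pure helper (the name is mine), the mapping looks like this:

```javascript
// Convert a pixel position to normalized device coordinates (NDC).
// NDC x and y both span [-1, 1]; y is negated because screen y grows down
// while NDC y grows up.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  }
}

toNDC(0, 0, 800, 600)     // top-left     → { x: -1, y: 1 }
toNDC(800, 600, 800, 600) // bottom-right → { x: 1, y: -1 }
toNDC(400, 300, 800, 600) // center       → { x: 0, y: 0 }
```

This is the coordinate space `Raycaster.setFromCamera` expects for its pointer argument.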
In the onSelectMesh method, I set the value of this.selectedMeshName to the name of the mesh group that contains the target mesh from the Raycaster (I’m using names to refer to groups of meshes).
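The article doesn't show onSelectMesh itself, so here is a hypothetical sketch of the uuid-to-name lookup it describes; the class, map, and names are illustrative, not the author's code.

```javascript
// Hypothetical sketch: resolve a hovered mesh's uuid to its group name via
// a uuid → name lookup table; calling with no uuid clears the selection.
class MeshSelector {
  constructor(uuidToName) {
    this.mesheUuidToName = uuidToName
    this.selectedMeshName = null
  }

  onSelectMesh(uuid) {
    // undefined uuid (no raycaster hit) or an unknown uuid clears the highlight
    this.selectedMeshName = uuid ? this.mesheUuidToName[uuid] ?? null : null
  }
}

const selector = new MeshSelector({ "uuid-1": "lens-group", "uuid-2": "body-group" })
selector.onSelectMesh("uuid-1")
selector.selectedMeshName // "lens-group"
selector.onSelectMesh()
selector.selectedMeshName // null
```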
This way, in my render loop, I can create two distinct renders:

- One render (renderTargetA) with all the meshes except the hovered mesh
- Another render (renderTargetB) with only the hovered mesh
render() {
  // Render renderTargetA: every mesh except the hovered group
  this.modelChildren.forEach((mesh) => {
    if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
      mesh.visible = false
    } else {
      mesh.visible = true
    }
  })
  this.renderer.setRenderTarget(this.renderTargetA)
  this.renderer.render(this.scene, this.camera)

  // Render renderTargetB: only the hovered group
  this.modelChildren.forEach((mesh) => {
    if (this.mesheUuidToName[mesh.uuid] === this.selectedMeshName) {
      mesh.visible = true
    } else {
      mesh.visible = false
    }
  })
  if (this.targetedMesh) {
    this.targetedMesh.children.forEach((child) => {
      child.visible = true
    })
  }
  this.renderer.setRenderTarget(this.renderTargetB)
  this.renderer.render(this.scene, this.camera)

  this.modelChildren.forEach((mesh) => {
    mesh.visible = false
  })

  this.effectSobel.uniforms.tDiffuse1.value = this.renderTargetA.texture
  this.effectSobel.uniforms.tDiffuse2.value = this.renderTargetB.texture
  this.renderer.setRenderTarget(null)
}
This is what the renderTargetA render looks like:

…and renderTargetB:
As you can see, I’m sending both renders as texture uniforms to the effectSobel shader. The post-processing shader then “merges” these two renders into a single output.
At this point, we have two renders of the scene, and the post-processing shader needs to decide which one to display. Initially, I thought of simply combining them by adding the two textures together, but that didn’t produce the correct result:
What I needed was a way to hide the pixels of one render when they are “covered” by pixels from another render.
To achieve this, I used the distance of each vertex from the camera. This meant I had to go through all the meshes in the model and modify their materials. However, since the mesh colors are important for the edge detection effect, I couldn’t change their colors.
Instead, I used the alpha channel of each individual vertex to set the distance from the camera.
#include <common>

varying vec3 vPosition;
uniform vec3 uColor;

float normalizeRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
  float normalized = (value - oldMin) / (oldMax - oldMin);
  return newMin + (newMax - newMin) * normalized;
}

void main()
{
  float dist = distance( vPosition, cameraPosition );
  float l = luminance( uColor );
  gl_FragColor = vec4( vec3( l ), normalizeRange( dist, 0., 20., 0., 1. ) );
  #include <colorspace_fragment>
}
Here’s an explanation of this shader:

- First, the luminance function is a built-in Three.js shader utility imported from the <common> module. It’s recommended to use this function with the Sobel effect to improve edge detection results.
- The uColor value represents the initial color of the mesh.
- The dist value calculates the distance between the vertex position (passed from the vertex shader via a varying) and the camera, using the built-in cameraPosition variable in Three.js shaders.
- Finally, I pass this distance through the alpha channel. Since the alpha value can’t exceed 1, I use a normalized version of the distance.
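The normalizeRange remapping is plain linear math, so it can be checked directly in JavaScript. This port is mine, mirroring the GLSL above: distances in [0, 20] world units are squeezed into the [0, 1] range the alpha channel can hold.

```javascript
// JavaScript port of the shader's normalizeRange: linearly remap value
// from [oldMin, oldMax] to [newMin, newMax].
function normalizeRange(value, oldMin, oldMax, newMin, newMax) {
  const normalized = (value - oldMin) / (oldMax - oldMin)
  return newMin + (newMax - newMin) * normalized
}

normalizeRange(0, 0, 20, 0, 1)  // 0   — at the camera
normalizeRange(10, 0, 20, 0, 1) // 0.5 — halfway
normalizeRange(20, 0, 20, 0, 1) // 1   — farthest encodable distance
```

Note that the [0, 20] input range is a scene-specific assumption: anything farther than 20 units would clamp once written to alpha.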
And here is the updated logic for the postprocessing shader:
uniform sampler2D tDiffuse;
uniform sampler2D tDiffuse1;
uniform sampler2D tDiffuse2;
uniform vec2 resolution;
varying vec2 vUv;

float sobel(sampler2D tDiffuse, vec2 texel)
{
  //sobel operator
}

void main() {
  vec2 texel = vec2( 1.0 / resolution.x, 1.0 / resolution.y );
  vec4 t1 = texture2D( tDiffuse1, vUv );
  vec4 t2 = texture2D( tDiffuse2, vUv );

  // alpha 0 means nothing was rendered at this pixel:
  // treat it as the farthest possible distance
  if ( t1.a == 0. ) {
    t1.a = 1.;
  }
  if ( t2.a == 0. ) {
    t2.a = 1.;
  }

  float G = sobel( tDiffuse1, texel );
  G = G > 0.001 ? 1. : 0.;
  float Gs = sobel( tDiffuse2, texel );
  Gs = Gs > 0.001 ? 1. : 0.;

  vec4 s1 = vec4( vec3( G ), 1. );
  vec4 s2 = vec4( vec3( Gs ), 1. );

  vec4 sobelTexture = vec4( vec3( 0. ), 1. );
  if ( t1.a > t2.a ) {
    sobelTexture = s2;
  } else {
    sobelTexture = s1;
  }

  gl_FragColor = sobelTexture;
  #include <colorspace_fragment>
}
Now that the alpha channel of the textures contains the distance to the camera, I can simply compare them and display the render whose vertices are closer to the camera.
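That comparison can be sketched in plain JavaScript with a hypothetical helper that mirrors the shader's branch: alpha encodes normalized camera distance, an alpha of 0 means "nothing rendered here" and is remapped to the maximum distance, and the render with the smaller distance wins.

```javascript
// Illustrative sketch of the compositing decision: given the alpha values
// of the same pixel in two renders (alpha = normalized camera distance,
// 0 = empty background), return the render whose surface is closer.
function pickCloser(alphaA, alphaB, renderA, renderB) {
  // empty pixels carry alpha 0; treat them as infinitely far (1 after remap)
  const a = alphaA === 0 ? 1 : alphaA
  const b = alphaB === 0 ? 1 : alphaB
  // mirrors the shader: if A's distance is larger, B's surface occludes it
  return a > b ? renderB : renderA
}

pickCloser(0.8, 0.3, "A", "B") // "B": B's surface is closer at this pixel
pickCloser(0, 0.3, "A", "B")   // "B": A has nothing rendered here
```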
3. The Film Roll Effect
Next is the film roll component that moves and twists on scroll.
This effect is achieved using only shaders: the component is a single plane with a shader material.
All the data is sent to the shader through uniforms:
export default class Film {
  constructor() {
    //...code
  }

  createGeometry() {
    this.geometry = new THREE.PlaneGeometry(60, 2, 100, 10)
  }

  createMaterial() {
    this.material = new THREE.ShaderMaterial({
      vertexShader,
      fragmentShader,
      side: THREE.DoubleSide,
      transparent: true,
      depthWrite: false,
      blending: THREE.CustomBlending,
      blendEquation: THREE.MaxEquation,
      blendSrc: THREE.SrcAlphaFactor,
      blendDst: THREE.OneMinusSrcAlphaFactor,
      uniforms: {
        uPlaneWidth: new THREE.Uniform(this.geometry.parameters.width),
        uRadius: new THREE.Uniform(2),
        uXZfreq: new THREE.Uniform(3.525),
        uYfreq: new THREE.Uniform(2.155),
        uOffset: new THREE.Uniform(0),
        uAlphaMap: new THREE.Uniform(
          window.preloader.loadTexture(
            "./alpha-map.jpg",
            "film-alpha-map",
            (texture) => {
              texture.wrapS = THREE.RepeatWrapping
              const { width, height } = texture.image
              this.material.uniforms.uAlphaMapResolution.value =
                new THREE.Vector2(width, height)
            }
          )
        ),
        uImages: new THREE.Uniform(
          window.preloader.loadTexture(
            "/film-texture.png",
            "film-image-texture",
            (tex) => {
              tex.wrapS = THREE.RepeatWrapping
            }
          )
        ),
        uRepeatFactor: new THREE.Uniform(this.repeatFactor),
        uImagesCount: new THREE.Uniform(this.images.length * this.repeatFactor),
        uAlphaMapResolution: new THREE.Uniform(new THREE.Vector2()),
        uFilmColor: new THREE.Uniform(window.colors.orange1),
      },
    })
  }

  createMesh() {
    this.mesh = new THREE.Mesh(this.geometry, this.material)
    this.scene.add(this.mesh)
  }
}
The main vertex shader uniforms are:

- uRadius is the radius of the cylinder shape
- uXZfreq is the frequency of the twists on the (X,Z) plane
- uYfreq is a cylinder height factor
- uOffset is the vertical offset of the roll when you scroll up and down
Here is how they are used in the vertex shader:
#define PI 3.14159265359

uniform float uPlaneWidth;
uniform float uXZfreq;
uniform float uYfreq;
uniform float uOffset;
uniform float uRadius;

varying vec2 vUv;
varying vec3 vPosition;

void main()
{
  vec3 np = position;
  float theta = -(PI * np.x) / (uPlaneWidth * 0.5);
  np.x = cos(uXZfreq * theta + uOffset) * uRadius;
  np.y += theta * uYfreq;
  np.z = sin(uXZfreq * theta + uOffset) * uRadius;

  vec4 modelPosition = modelMatrix * vec4(np, 1.0);
  vec4 viewPosition = viewMatrix * modelPosition;
  vec4 projectedPosition = projectionMatrix * viewPosition;
  gl_Position = projectedPosition;

  vUv = uv;
  vPosition = np;
}
As you can see, they are used to modify the initial position attribute to give it the shape of a cylinder. The modified position’s X, Y, and Z components use uOffset in their frequency. This uniform is linked to a ScrollTrigger timeline, which produces the twist-on-scroll effect.
const tl = gsap.timeline({
  scrollTrigger: {
    trigger: this.section,
    start: "top bottom",
    end: "bottom top",
    scrub: true,
    invalidateOnRefresh: true,
  },
})

tl.to(
  this.material.uniforms.uOffset,
  {
    value: 10,
    duration: 1,
  },
  0
)
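The cylinder mapping can be sanity-checked by porting the vertex math to plain JavaScript. The port below is mine (names mirror the shader uniforms): whatever uOffset is, every transformed point must land exactly on a cylinder of radius uRadius, since cos² + sin² = 1.

```javascript
// Plain-JS port of the vertex shader's roll mapping, for checking the math.
function rollVertex([x, y, z], { uPlaneWidth, uXZfreq, uYfreq, uRadius, uOffset }) {
  const theta = -(Math.PI * x) / (uPlaneWidth * 0.5)
  return [
    Math.cos(uXZfreq * theta + uOffset) * uRadius, // new x
    y + theta * uYfreq,                            // new y: height along the roll
    Math.sin(uXZfreq * theta + uOffset) * uRadius, // new z
  ]
}

// values taken from the uniforms above
const params = { uPlaneWidth: 60, uXZfreq: 3.525, uYfreq: 2.155, uRadius: 2, uOffset: 0 }
const [nx, , nz] = rollVertex([12, 0, 0], params)
Math.hypot(nx, nz) // ≈ 2, the cylinder radius
```

Animating uOffset therefore slides points around that cylinder without ever deforming it, which is why the scroll animation reads as the roll twisting in place.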
Conclusion
That’s it for the most part! Don’t feel frustrated if you don’t understand everything right away—I often got stuck for days on certain parts and didn’t know every technical detail before I started building.
I learned so much from this project, and I hope you’ll find it just as useful!
Thank you for reading, and thanks to Codrops for featuring me again!