Introduction
Bruno Arizio, Designer — @brunoarizio
Ever since I first became aware of the energy in this community, I have felt the urge to be more engaged in this ‘digital avant-garde landscape’ that is being cultivated by the amazing people behind Codrops, Awwwards, CSSDA, The FWA, Webby Awards, etc. That energy propelled me to set up this new portfolio, which acted as a way of putting my feet in the water and getting used to the temperature.
I see this community being responsible for pushing the limits of what is possible on the web, fostering the right discussions and empowering the role of creative developers and creative designers across the world.
With this in mind, it’s difficult not to think of the great art movements of the past and their role in mediating change. You can easily draw a parallel between this digital community and the Impressionist artists of the nineteenth century, or the Bauhaus movement leading our society into modernism a few decades ago. What these periods have in common is that they pushed the boundaries of what was possible, of what the new standard would be, through relentless experimentation. The result is the world we live in, the products we interact with, and the buildings we inhabit.
The websites that are awarded today are awarded because they innovate in some aspect, and those innovations eventually become a new standard. We can see that in the apps used by millions of people, in consumer websites, and so on. That is the impact we make.
I’m not saying that a new interaction featured on a portfolio launched last week will be in the hands of millions of people across the globe the following week, but constantly pushing these interactions to their limits will scale them and eventually get them adopted as new standards. This is the kind of responsibility that is in our hands.
Open Source
We decided to be transparent and take a step forward in making this entire project open source so people can learn how to make the things we created. We are both interested in supporting the community, so feel free to ask us questions on Twitter or Instagram (@brunoarizio and @lhbzr), we welcome you to do so!
The repository is available on GitHub.
Design Process
With the portfolio, we took a meticulous approach to motion and collaborated to devise deliberate interactions that have a ‘realness’ to them, especially on the main page.
The mix of the bending animation with the distortion effect was central to making the website ‘tactile’. It is meant to feel good when you shuffle through the projects, and since it was published we received a lot of messages from people saying how addictive the navigation is.
A lot of my new ideas come from experimenting with shaders and filters in After Effects, and once I find what I’m looking for — the ‘soul’ of the project — I start to add the ‘grid layer’ and begin to structure the typography and other elements.
In this project, before jumping into Sketch, I started working with a variety of motion concepts in After Effects, and that’s when the version with the convection bending came in and we decided to take it forward. So we can pretty much say that the project was born from motion, not from a static layout. After the main idea was solid enough, I took it to Sketch, designed a simple grid, and applied the typography.
Collaboration
Working in collaboration with Luis was very productive. This is the second (of many to come) project we’ve worked on together, and I can safely say that we had a strong connection from start to finish, which was absolutely important for the final result. It wasn’t a case in which the designer creates the layouts and simply hands them over to a developer. This was a nuanced relationship of constant feedback. We collaborated daily from idea to production, and it was fantastic how both dev and design had this keen eye for perfectionism.
From layout to code we were constantly fine-tuning every aspect: from the cursor kinetics to making overall layout changes and finding the right tone for the easing curves and the noise mapping on the main page.
When you design a portfolio, especially your own, it feels daunting because you are free to do whatever you want. But the consequence is that it will dictate how people see your work, and what work you will be doing shortly after. So making the right decisions deliberately and predicting their impact is mandatory for success.
Technical Breakdown
Luis Henrique Bizarro, Creative Developer — @lhbzr
Motion Reference
This was the video of the motion reference that Bruno shared with me when he introduced me to his ideas for his portfolio. I think one of the most important things when starting a project like this, with the idea of implementing a lot of different animations, is to create a little prototype in After Effects to guide the developer toward achieving similar results with code.
The Tech Stack
The portfolio was developed using:
- Vanilla JavaScript (ECMAScript 2015+)
- Three.js
- GSAP (TweenMax)
- glslify
That’s my favorite stack to work with right now; it gives me a lot of freedom to focus on animations and interactions instead of having to follow guidelines of a specific framework.
In this particular project, most of the code was written from scratch using ECMAScript 2015+ features like Classes, Modules, and Promises to handle the route transitions and other things in the application.
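As a rough illustration of that approach, here is a minimal sketch of a Promise-based route transition; the App and Page classes and their hide()/show() methods are hypothetical examples for illustration, not the portfolio’s actual code.

// Minimal sketch of a Promise-based route transition.
// The App and Page classes and their hide()/show() methods are
// hypothetical, not taken from the portfolio's source code.
class Page {
  show() {
    return new Promise(resolve => {
      // Run the entrance animation here and resolve when it finishes.
      resolve();
    });
  }

  hide() {
    return new Promise(resolve => {
      // Run the exit animation here and resolve when it finishes.
      resolve();
    });
  }
}

class App {
  async onRouteChange(nextPage) {
    // Wait for the current page to animate out before showing the next one.
    await this.page.hide();

    this.page = nextPage;

    await this.page.show();
  }
}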
In this case study, we’ll be focusing on the WebGL implementation, since it’s the core animation of the website and the most interesting thing to talk about.
1. How to measure things in Three.js
This specific subject has already been covered in other Codrops articles, but in case you’ve never heard of it before: when you’re working with Three.js, you need to do some calculations in order to get values that represent the correct size of the browser viewport.
In my last projects, I’ve been using this Gist by Florian Morel, which is basically a calculation that uses your camera’s field of view to return the height and width of the Three.js environment.
// Assuming e.g. import { Math as THREEMath, PlaneBufferGeometry } from "three";
// (in newer Three.js versions, THREEMath.degToRad lives in MathUtils)

// createCamera()
const fov = THREEMath.degToRad(this.camera.fov);
const height = 2 * Math.tan(fov / 2) * this.camera.position.z;
const width = height * this.camera.aspect;

this.environment = {
  height,
  width
};

// createPlane()
const { height, width } = this.environment;

this.plane = new PlaneBufferGeometry(width * 0.75, height * 0.75, 100, 50);
I usually store these two variables in the wrapper class of my applications; this way we just need to pass them to the constructors of the other elements that will use them.
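As a quick sketch of that pattern (the Cover class and its constructor signature are assumptions for illustration, not the portfolio’s actual code), the wrapper can hand the environment object down to its children:

// import { PlaneBufferGeometry } from "three";

// Sketch: the wrapper class passes its environment sizes to child elements.
// The Cover class and this constructor signature are hypothetical.
class Cover {
  constructor({ environment }) {
    const { height, width } = environment;

    this.geometry = new PlaneBufferGeometry(width * 0.75, height * 0.75, 100, 50);
  }
}

// Inside the wrapper class:
// this.cover = new Cover({ environment: this.environment });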
In the embed below, you have a very simple implementation of a PlaneBufferGeometry that covers 75% of the height and width of your viewport using this solution.
2. Uploading textures to the GPU and using them in Three.js
To avoid textures being processed at runtime while the user is navigating through the website, I consider it a very good practice to upload all images to the GPU as soon as they’re ready. On Bruno’s portfolio, this process happens during the preloading of the website. (Kudos to Fabio Azevedo for introducing me to this concept a long time ago in previous projects.)
Two other good additions, in case you don’t want Three.js to resize and process the images you’re going to use as textures, are disabling mipmaps and changing how the texture is sampled, via the generateMipmaps and minFilter attributes.
// Assuming e.g. import { TextureLoader, LinearFilter } from "three";
this.loader = new TextureLoader();

this.loader.load(image, texture => {
  texture.generateMipmaps = false;
  texture.minFilter = LinearFilter;
  texture.needsUpdate = true;

  this.renderer.initTexture(texture, 0);
});
The .initTexture() method was introduced in recent versions of Three.js in the WebGLRenderer class, so make sure to update to the latest version of the library to be able to use this feature.
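The preloader itself isn’t included in the article’s snippets, but the idea described above (load every image, push each texture to the GPU, and only then reveal the site) could be sketched like this; the images argument and the onComplete callback are assumptions:

// Sketch: uploading every texture to the GPU during the site preload.
// The images array and the onComplete callback are hypothetical.
preload(images, onComplete) {
  const loader = new TextureLoader();

  const promises = images.map(image => {
    return new Promise(resolve => {
      loader.load(image, texture => {
        texture.generateMipmaps = false;
        texture.minFilter = LinearFilter;
        texture.needsUpdate = true;

        // Upload the texture right away so it isn't processed later,
        // while the user is already navigating.
        this.renderer.initTexture(texture, 0);

        resolve(texture);
      });
    });
  });

  // Only reveal the site once every texture has been uploaded.
  Promise.all(promises).then(onComplete);
}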
But my texture is looking stretched! The default behavior of the map attribute of MeshBasicMaterial in Three.js is to make your image fit the PlaneBufferGeometry. This happens because of the way the library handles 3D models. But in order to keep the original aspect ratio of your image, you’ll need to do some calculations as well.
There are a lot of different solutions out there that don’t use GLSL shaders, but in our case we’ll also need shaders to implement our animations. So let’s implement the aspect ratio calculations in the fragment shader that will be created for the ShaderMaterial class.
So, all you need to do is pass your Texture to your ShaderMaterial via the uniforms attribute. In the fragment shader, you’ll be able to use all the variables passed via the uniforms attribute.
The Three.js Uniform documentation is a good reference for what happens internally when you pass the values. For example, if you pass a Vector2, you’ll be able to use a vec2 inside your shaders.
We need two vec2 variables to do the aspect ratio calculations: the image resolution and the resolution of the renderer. After passing them to the fragment shader, we just need to implement our calculations.
// Assuming e.g. import { ShaderMaterial, Vector2 } from "three";
this.material = new ShaderMaterial({
  uniforms: {
    image: {
      value: texture
    },
    imageResolution: {
      value: new Vector2(texture.image.width, texture.image.height)
    },
    resolution: {
      value: new Vector2(window.innerWidth, window.innerHeight)
    }
  },
  fragmentShader: `
    uniform sampler2D image;
    uniform vec2 imageResolution;
    uniform vec2 resolution;

    varying vec2 vUv;

    void main() {
      vec2 ratio = vec2(
        min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
        min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
      );

      vec2 uv = vec2(
        vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
        vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
      );

      gl_FragColor = vec4(texture2D(image, uv).xyz, 1.0);
    }
  `,
  vertexShader: `
    varying vec2 vUv;

    void main() {
      vUv = uv;

      vec3 newPosition = position;

      gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
    }
  `
});
In this snippet we’re using template strings to represent the code of our shaders only to keep it simple when using CodeSandbox, but I highly recommend using glslify to split your shaders into multiple files to keep your code more organized in a more robust development environment.
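As an example of what that could look like (this build setup is an assumption and not part of the article’s code), the shaders could live in their own .glsl files and be imported like modules once a glslify transform or loader is configured in the bundler:

// Sketch, assuming the bundler is configured with a glslify transform/loader
// (for example glslify-loader for webpack); this setup is not part of the article.
// ./shaders/fragment.glsl and ./shaders/vertex.glsl would contain the
// "#pragma glslify" requires and the shader code shown in this article.
import fragmentShader from "./shaders/fragment.glsl";
import vertexShader from "./shaders/vertex.glsl";

this.material = new ShaderMaterial({
  uniforms: this.uniforms,
  fragmentShader,
  vertexShader
});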
We’re all good now with the images! Our images are preserving their original aspect ratio and we also have control over how much space they’ll use in our viewport.
3. How to implement infinite scrolling
Infinite scrolling can be very challenging to implement, but in a Three.js environment it’s smoother than it would be with HTML elements and CSS transforms, because you don’t need to worry about storing the original positions of the elements and calculating their distances to avoid browser repaints.
Overall, a simple logic for the infinite scrolling should follow these two basic rules:
- If you’re scrolling down, your elements move up — when your first element isn’t on the screen anymore, you should move it to the end of the list.
- If you’re scrolling up, your elements move down — when your last element isn’t on the screen anymore, you should move it to the start of the list.
Sounds reasonable, right? So, first we need to detect the direction in which the user is scrolling.
this.position.current += (this.scroll.values.target - this.position.current) * 0.1;

if (this.position.current < this.position.previous) {
  this.direction = "up";
} else if (this.position.current > this.position.previous) {
  this.direction = "down";
} else {
  this.direction = "none";
}

this.position.previous = this.position.current;
The variable this.scroll.values.target is responsible for defining the scroll position the user wants to reach. The variable this.position.current represents the current scroll position; it moves smoothly toward the target value thanks to the * 0.1 multiplication.
After detecting the direction the user is scrolling in, we store the current position in the this.position.previous variable; this way we’ll also have the right direction value inside the requestAnimationFrame loop.
Now we need to implement the checking method to make our items have the expected behavior based on the direction of the scroll and their position. In order to do so, you need to implement a method like this one below:
check() {
  const { height } = this.environment;
  const heightTotal = height * this.covers.length;

  if (this.position.current < this.position.previous) {
    this.direction = "up";
  } else if (this.position.current > this.position.previous) {
    this.direction = "down";
  } else {
    this.direction = "none";
  }

  this.projects.forEach(child => {
    child.isAbove = child.position.y > height;
    child.isBelow = child.position.y < -height;

    if (this.direction === "down" && child.isAbove) {
      const position = child.location - heightTotal;

      child.isAbove = false;
      child.isBelow = true;

      child.location = position;
    }

    if (this.direction === "up" && child.isBelow) {
      const position = child.location + heightTotal;

      child.isAbove = true;
      child.isBelow = false;

      child.location = position;
    }

    child.update(this.position.current);
  });
}
Now our logic for the infinite scroll is finally finished! Drag and drop the embed below to see it working.
You can also view the fullscreen demo here.
4. Integrating animations with infinite scrolling
The website motion reference has four different animations happening while the user is scrolling:
- Movement on the z-axis: the image moves from the back to the front.
- Bending on the z-axis: the image bends a little bit depending on its position.
- Image scaling: the image scales slightly when moving out of the screen.
- Image distortion: the image is distorted when we start scrolling.
My approach to implementing the animations was to divide the element’s position by the viewport height, giving me a percentage between -1 and 1. This way I’m able to map this percentage to other values inside the ShaderMaterial instance.
- -1 represents the bottom of the viewport.
- 0 represents the middle of the viewport.
- 1 represents the top of the viewport.
const percent = this.position.y / this.environment.height;
const percentAbsolute = Math.abs(percent);
The implementation of the z-axis animation is pretty simple, because it can be done directly with JavaScript using this.position.z from the Mesh, so the code for this animation looks like this:
this.position.z = map(percentAbsolute, 0, 1, 0, -50);
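The map helper used here isn’t defined in the article’s snippets; it’s the usual linear remapping utility, which could look like this:

// Hypothetical helper, not shown in the article: remaps a value from
// the range [inMin, inMax] to the range [outMin, outMax].
function map(value, inMin, inMax, outMin, outMax) {
  return ((value - inMin) * (outMax - outMin)) / (inMax - inMin) + outMin;
}

// Example: map(0.5, 0, 1, 0, -50) returns -25.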
The implementation of the bending animation is slightly more complex: we need to use the vertex shader to bend our PlaneBufferGeometry. I’ve chosen distortion as the value that controls this animation inside the shaders. We also pass two other parameters, distortionX and distortionY, which control the amount of distortion along the x and y axes.
this.material.uniforms.distortion.value = map(percentAbsolute, 0, 1, 0, 5);
uniform float distortion;
uniform float distortionX;
uniform float distortionY;

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 newPosition = position;

  // 50 is the number of x-axis vertices we have in our PlaneBufferGeometry.
  float distanceX = length(position.x) / 50.0;
  float distanceY = length(position.y) / 50.0;

  float distanceXPow = pow(distortionX, distanceX);
  float distanceYPow = pow(distortionY, distanceY);

  newPosition.z -= distortion * max(distanceXPow + distanceYPow, 2.2);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}
The implementation of image scaling was made with a single function inside the fragment shader:
this.material.uniforms.scale.value = map(percent, 0, 1, 0, 0.5);
vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  // ...
  uv = zoom(uv, scale);
  // ...
}
The implementation of the distortion was made with glsl-noise and a simple calculation displacing the texture along the x and y axes based on user gestures:
onTouchStart() {
  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0.1
  });
}

onTouchEnd() {
  TweenMax.killTweensOf(this.material.uniforms.displacementY);

  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0
  });
}
#pragma glslify: cnoise = require(glsl-noise/classic/3d)

void main() {
  // ...
  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;
  // ...
}
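The time uniform read by the noise isn’t shown being updated in the article; a common approach, sketched below with an assumed Clock instance and update loop (not the portfolio’s actual code), is to advance it once per frame:

// Sketch: advancing the time uniform every frame so the noise animates.
// The Clock instance and this update() method are assumptions.
// import { Clock } from "three";
// In the constructor: this.clock = new Clock();
update() {
  this.material.uniforms.time.value = this.clock.getElapsedTime();

  this.renderer.render(this.scene, this.camera);

  window.requestAnimationFrame(this.update.bind(this));
}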
And that’s the final code of our fragment shader, merging all three animations together.
#pragma glslify: cnoise = require(glsl-noise/classic/3d)

uniform float alpha;
uniform float displacementX;
uniform float displacementY;
uniform sampler2D image;
uniform vec2 imageResolution;
uniform vec2 resolution;
uniform float scale;
uniform float time;

varying vec2 vUv;

vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  vec2 ratio = vec2(
    min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
    min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
  );

  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  uv = zoom(uv, scale);

  gl_FragColor = vec4(texture2D(image, uv).xyz, alpha);
}
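On the JavaScript side, the material has to declare every uniform the shaders read; here’s a sketch of that declaration, with initial values that are assumptions rather than the portfolio’s actual numbers:

// Sketch: the uniforms expected by the final vertex and fragment shaders.
// The initial values below are assumptions; the article doesn't list them.
this.material = new ShaderMaterial({
  uniforms: {
    alpha: { value: 1 },
    displacementX: { value: 0 },
    displacementY: { value: 0 },
    distortion: { value: 0 },
    distortionX: { value: 1 }, // amount of bending along the x axis
    distortionY: { value: 1 }, // amount of bending along the y axis
    image: { value: texture },
    imageResolution: { value: new Vector2(texture.image.width, texture.image.height) },
    resolution: { value: new Vector2(window.innerWidth, window.innerHeight) },
    scale: { value: 0 },
    time: { value: 0 }
  },
  fragmentShader,
  vertexShader
});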
You can also view the fullscreen demo here.
Photos used in examples of the article were taken by Willian Justen and Azamat Zhanisov.
Conclusion
We hope you liked the case study we’ve written together. If you have any questions, feel free to ask us on Twitter or Instagram (@brunoarizio and @lhbzr); we would be very happy to receive your feedback.