Building a Fully-Featured 3D World in the Browser with Blender and Three.js

Go behind the scenes of an immersive 3D museum built with Blender and Three.js, and explore the creative process, technical decisions, and browser-ready workflow that brought it to life.

Andrew Woan is a developer and educator who loves creating projects that inspire and bring joy. His passion for exploration and discovery shines through in everything he does, and he’s dedicated to helping others tap into their creative potential. On his YouTube channel, he shares tutorials that make complex ideas accessible, always with an enthusiasm that makes learning fun.

With his motto, “It’s not perfect, but it’s cool,” Andrew reminds us that creativity is about progress and embracing imperfection. We’re incredibly honored to feature his work on Codrops, especially this monumental tutorial, which he’s poured countless hours into. The demo he’s built is a true testament to his expertise and dedication, and we’re proud to share it with you. I hope this article inspires you as much as it has inspired us!

1. Introduction

I’ve always wanted to create immersive 3D experiences because I love exploring places! When I’m going from point A to point B and have a lot of time, you’ll find me exploring random stores and buildings, or random alleyways, parks, and streets, etc. It’s like you’ll never know what’s next, which is rare in a world where life can be fairly predictable and mundane all the time. It’s really exciting to just explore somewhere randomly with no expectations other than to observe, and it’s one of my favorite things to do. You’ll also find unexpected inspiration all the time!

I hope this tutorial helps you create your own creative worlds to get lost in—something that is emotionally meaningful to you, or someone or something you love. Of course, it can be anything, not just a museum (like I do in this article), and just because I show this approach doesn’t mean it’s the only way. This article is just one of the approaches I take, and some steps will have pros and cons.

I also want to mention that a lot of stuff in this article isn’t really “groundbreaking”—it’s just what I learned from jamming a bunch of random resources online into a single project! And honestly, I think that’s pretty cool. A large project is always a combination of smaller problems, so if you can find small pieces of knowledge that solve the smaller problems, then you can tackle a large project! As you go through the article, I think you’ll likely find that this project isn’t as daunting as it might seem. That doesn’t mean it’s easy (it’s not, and it takes a ton of work), but it’s probably less scary than you think once you break it down into its smaller parts.

This article is more of an introduction for those who have ever wanted to get into Three.js and Blender and want to see what kind of work goes on behind such immersive experiences. Before reading, you should be somewhat familiar with Three.js, React, and Blender as this is not a step-by-step tutorial. I do have a much slower and beginner-friendly version up on my YouTube Channel on creating a similar immersive experience if you’re interested! If you’re completely new to Three.js and Blender, I also have a free course for that too on YouTube and of course I always recommend Bruno Simon’s Three.js Journey! Even if you’re a complete beginner, I hope you’ll find some bits and pieces interesting—and maybe look up a few terms and concepts as you go through the article. Feel free to reach out anytime if you have any questions; I’d be more than happy to answer! Anyway, I digress, thanks so much for stopping by and I hope you enjoy the article!

2. Inspiration for Scene & Approaching Creativity

Mindset

When I’m creating something, I generally think of the times I felt most emotionally impacted by someone, something, or some place. There are a lot of theories and philosophies about what makes things meaningful to each of us, and sometimes that helps, but usually I just use my gut. Over time you can develop your own intuition based on your own experiences and that will give you your unique outlook on life and anything you create!

Whether you do crochet, origami, coding, or UI design, at the end of the day, you’re someone who creates things that make other people happy. I never saw these fields as separate until I got my first job, where I realized there is a very clear distinction between job roles and titles. To me, none of that ever mattered and I’m starting to find that roles are starting to blend more and I’m really happy to see that trend! This is important when creating an immersive world because worlds are filled with variety. It’s important not to prematurely constrain your brain even at a subconscious level.

The creative field is not a field like infrastructural engineering or cybersecurity where you have to be perfect and carefully plan out things otherwise really bad things will happen. There’s a different mindset in many other fields and I think it’s really important to make that distinction. What I describe in this section is a mindset I use for my creative projects and it is not the only valid mindset.

Being creative isn’t about always being new or innovative. It’s sometimes just about copying what someone else did and changing it up a bit. Then someone will take what you did, copy it, and change it up a little bit. The more people that do that, the more unrecognizable the same original project will be. Even in that original project you can jam multiple copied ideas together to make a unique concept. Like what about Flappy Bird but instead of a bird you use a car with pogo sticks for its wheels, have it jump over gas stations, and call it Jumpy Car? I made up that idea by copying Flappy Bird as I typed that sentence out. Depending on who you ask, some might say it’s not a copied idea and really original, whereas others might say I copied Flappy Bird. Someone else could have come up with the concept of Jumpy Car, but from looking at the no-internet Dinosaur game in Google Chrome. The really awesome thing is many things exist on a spectrum so let yourself be comfortable with the unknown. I’m the only one who really knows where my idea came from. I could easily claim it to be a completely innovative original idea and no one would question it, but if Flappy Bird didn’t exist, I probably wouldn’t have thought about that game at all.

We often look down upon people without “unique” concepts but creativity, just like academics or anything really, is always built upon the people that came before us. We all stand on the shoulders of giants. Just always, and I mean always give credit where credit is due (this is literally the only thing you have to do)! No one thinks less of you if you give credit to someone else, really. It’s easy to feel that way, but there has never been a time in my entire career where someone felt less about me because of giving credit to someone else.

The more you copy, adjust, tweak, and add your own flair, the closer you get to eventually creating your own style, because we each pick and choose the things we want to copy and smash together. No one ever picks or can even pick the same combination of things their entire lives compared to another person! That would be literally impossible. We also have factors we can’t control like the way our neurons fire in our brains or how our childhood shaped our adulthood. Naturally then, of course we’d eventually develop our own styles. I think being creative is just another form of trial and error, no different from trial and erroring different coding solutions. When you’re coding, you copy code, smash random code together, and adjust it. Being creative is no different: you copy ideas, smash random ideas together, and adjust them.

And me saying all that isn’t unique either. There are a lot of other people who share the same views, like the author of Steal Like An Artist. If you’re ever not feeling creative and feel like you’re just copying, you’re on the right track to becoming more creative whether you want to believe it or not. Never look down on yourself because you think society will. That’ll only limit your creative intuition development in the long run.

The world isn’t as dark as maybe you’ve been told and everyone really is faking some aspect of themselves, including myself. I’m a 23 year old guy who is still sometimes insecure about the pimples popping up on my face. I know in 10 years I probably won’t be insecure about the pimples on my face anymore, but I’ll still be insecure about something else no matter my job title or how much I’ve grown. This will indefinitely last until I no longer exist on planet Earth. In other words, you can choose to grow, and you can choose to be happy a lot of times. Through all the pain and suffering, eventually you realize that happiness is a choice in more circumstances than you’d like to believe. It takes a lot of cognitive effort to reframe your brain, but once you do, life becomes a whole lot easier and you become a whole lot happier no matter what pain you’re going through. I deal with a lot of chronic and likely permanent health issues and reframing my brain is what keeps me happy.

Anyway, the point is, embrace the chaotic process and your mindset on how you approach your work will definitely help speed up everything and make you have a lot more fun (at least that’s how it works for me). Besides, life itself is a chaotic journey filled with good and bad, you have to be comfortable with the unknown and no one will ever be perfect as long as they live so there’s no reason to pretend to be.

I hope this section helps in some way or another, and feel free to reach out if you want someone to talk to!

Creative Direction & Reference Collecting

For this scene specifically, I’m making a tutorial for Codrops! One of the coolest resources out there that I use all the time. On this website are a lot of amazing devs/designers/articles etc., so why not make a museum to highlight the website pages! After all, museums are a place to highlight awesome stuff haha.

I used PureRef, a really awesome piece of software, to organize references. Oftentimes though I just use Figma and hide the UI with CTRL + \. Since this museum is intended to be realistic, I only collected realistic references. If this were to be a more creative museum-like experience, I would definitely seek inspiration elsewhere, like looking at video games or other environment art, and see what I can bring in to match the overall theme of the project.

2.1 – PureRef with museum reference images.

3. Blocking out & Concepting in Blender

Blocking out is kind of like getting a feel/vibe for your scene before filling in the details. You won’t always do it, but I did it for this project just to see what kind of movement I’d have and where I’d place things. For a larger project sometimes you do it just to see how it’ll feel, like what corners you’d turn and what you’d reveal as you move from room to room. I didn’t spend too much time blocking out though. Sometimes when you have an intuition you can just start modeling out things with details right away!

For a lot of projects you might want to work in real-world scales, measurements, and import references, etc. I didn’t feel the need to do that for this project because sometimes you can just eyeball things and that’s good enough! I did however, use a small reference character called X-Bot from Mixamo and set its height to 1.7 meters (5 feet and 7 inches) in the Z-axis. This gives me at least some reference to help when I eyeball things. I chose this height because that’s how tall I am and I gauge 3D models relative to my own experiences and imagination.

3.1 – First blocking out of a museum with two stories.
3.2 – Second blocking out of a single floor museum.
3.3 – Third blocking out of a museum shaped like the Codrops logo.
3.4 – Trying an aquatic/ocean theme out with lights shaped like the Codrops logo during blockout.

Creative Pivot

One thing though, is that I often find myself going through a ton of blocking out ideas before I finally settle on something. I think almost every creative feels the same way. At some point you kind of just have to stop and pick a direction to go with, otherwise it’ll never end! While earlier in this article I said it was going to be a realistic museum, I suddenly decided to change my mind, so you can scratch what I said earlier. Back to collecting references!

Below is the new blockout for the new Lord of the Rings themed museum! I decided to go with this fantasy approach because one of my friends really likes Lord of the Rings and I thought it’d be cool to merge a museum with something that probably wouldn’t happen in real life. The fantasy vibe also adds to the cool factor, in my opinion. It’s like discovering a hidden place or a Wonder of the World (like the Pyramids) for the very first time. If you can imagine that feeling of pure curiosity, wonder, surrealness, and maybe even a touch of danger and playfulness, almost like you’re dreaming—that’s the kind of feeling I wanted to evoke through this experience.

3.5 – A new semi-blockout/semi-detail added screenshot of a Lord of the Rings Rivendell inspired Museum.

By the way, a lot of the techniques I used are from watching other video creators who have already remade Rivendell in Blender (this creator and this creator are two whose techniques I copied a lot)! There’s a lot of them on YouTube if you search for it. Unfortunately, all of the ones I found are timelapsed, so they’re not beginner friendly. You’ll have to be familiar with Blender in order to know what they’re doing. If you’re interested, I did make a short video breaking down some of the modeling techniques I used for this project.

Anyway, the models are not as scary as they look. Some other models like cars or dogs are way more difficult to model than what is shown in image 3.5. For this “blockout” I just started modeling details right away as I was creating the shapes, because I had a lot of anxiety and wanted to start something rather than plan for too long. The lack of a planning phase may have been somewhat of a mistake, as you’ll discover later in the article. Typically you want a longer planning phase, but I pretty much skipped that.

4. Sculpting and Modeling

Sculpting

In the previous section, there are a bunch of large overlapping cubes you can see in the background. I selected all of them and joined them together. Then I went to the Sculpting tab, entered sculpt mode, and hit “remesh” with a value of 0.01. This remesh operation basically subdivides the cubes a lot so there are more vertices to adjust during sculpting (which at a high level is just moving vertices around based on brush types, which are based on math functions).

4.1 – Small section of a cube now has a TON of geometry added to it after remesh operation.

To sculpt, I used the smoothing brush (which you can activate by holding SHIFT), the smudge brush, and the clay strips brush built into Blender. You can hold ctrl/cmd to do the “opposite” of your operation. For example, if by default the sculpting brush cuts into your mesh, you can hold ctrl/cmd during sculpting to “add” or “inflate” instead of cutting into it.

When it comes to sculpting you should just focus on having fun! Just try and make it so they aren’t obviously cubes anymore and you’ll be good to go because the displacement maps from the textures will do most of the work anyway. Sculpting is honestly optional but it’s fun and you get a bit more control. Additionally, you can always come back and sculpt some more after applying materials if it isn’t to your liking!

4.2 – Semi-finished sculpting the scene.

Hard-Surface Models

As mentioned earlier, when it comes to the models themselves, these look very intimidating but most of them are actually just regular shapes! While this article isn’t a modeling tutorial I just wanted to highlight that these shapes aren’t too complicated. The scary, curvy-looking ones are actually just planes extruded in random directions and then given a subdivision surface modifier. Even the Codrops logo you see as the museum emblem started as a cut-up, extruded plane with a subdivision surface modifier!

4.3 Look at the roof decoration! It looks really scary and complicated but it’s actually just an extruded plane with a subdivision surface modifier. Left is the one with the modifier, right is the original mesh without the modifier.

When it comes to topology I didn’t really follow best practices, and this is actually quite common because the result would end up being effectively the same. If you’re interviewing for a 3D role, they typically expect you to have models in quads with an occasional triangle here or there as a best practice. In reality, once you get the job, it’s not always going to be like that (it’ll depend on the role of course; some 3D artists have to prioritize best practices more often, like a realistic 3D character artist would likely have to focus more on good topology compared to a rapidly prototyping 3D concept artist). This is very similar to software engineering and leetcode. Leetcode is like a “good to know” but once you’re actually on the job it’s very different. Similar here: it’s good to know what good topology looks like, but it’s also good to know when you can cut corners, save a ton of time, maximize output, and get the same exact end result.

4.4 A look at some of my bad topology that has no effect on the end result.

Other Models

To save some time I sourced some models from the internet. For the brazier, I got it from Sky_Hunter, the lectern from distance_voices, and the torch from Divyesh. We 3D artists often like to use what’s already out there and make adjustments, similar to how developers like to use UI component libraries or how designers like to use existing design system layouts! We do it all the time, well, at least I do. Just make sure to check the license and always give credit where credit is due!

5. Initial Materials and Details

Like most creative processes, everything is really iterative. So if you need to change a blockout from a previous step or some little detail you added, you can always do that later. Also, sometimes as you start applying materials to your objects, you realize that they don’t make up for compositional issues the way you thought they would, so you start adding extra objects during the texturing process.

Preparation

An important thing to note is to check your face orientation/normals before you start texturing, otherwise sometimes it’ll look funky as you apply materials. Not always an issue, but a good thing to keep in mind, and you’ll have to do it anyway later when we start baking (don’t worry if you don’t know what baking is yet, I’ll get to it)! In the image below, you can see some of my things are red. Everything that I want to be visible should be blue. You can fix these red areas by selecting them in edit mode and hitting alt + n. This will open the normals menu and you can choose flip or some other option. You might have to create more geometry if you want something blue on both sides (more on that later when I cover the interior).

5.1 – Showing normal directions.

In addition to checking your normals, make sure to apply your scale before texturing. You might need to apply other things like your rotation or location if you’re working with complex modifiers, but I didn’t have to do that in this project.

Also if your computer is lagging, feel free to start deleting some extra geometry that you know you won’t see. During this time I already start planning out my art direction with camera movements. I add a camera, then move and rotate it at certain angles to see what will and won’t be visible in the final shot. Some things I deleted include the unsculpted areas on the mountain, the caps of the pillars that are covered by the ground and roof elements, etc.

5.2 – Initial camera view.

Materials

Since I’m going for a realistic sort of scene I am going to be using PBR (physically based rendering) materials. These are basically virtual materials that try to simulate real-world properties, helping your 3D software with the mathematical calculations when light interacts with surfaces to make things more realistic. This is different from NPR (non-photorealistic rendering) materials like you see in things like cartoons or anime. I should mention though that it isn’t rare to combine NPR and PBR materials sometimes! Being creative has no limits after all and it’s really up to you and the vibe you’re going for.

For this project, I’m sourcing my PBR materials from PolyHaven, BlenderKit, Textures.com, Poliigon and AmbientCG. These are really popular resources and most of the materials on these websites are in the realm of public domain/creative commons CC0 which means they’re safe to use even for commercial projects. Of course, you should always give credit somewhere if you use them even if you don’t have to!

When it comes to texturing or applying materials it’s really all about trial and error. Of course you can always make your own with things like Substance Painter, but sometimes it’s easier just to shut off your brain and see what’s out there rather than create your own every single time! Sometimes I reuse materials I created in the past, so that’s always something you can do to save time on your future projects.

When I apply materials I like to do this thing called a “first round” of materials. Basically I do simple color adjustments with the Color Ramp, Hue/Saturation, and Curves nodes to make my materials match and feel nice together. It doesn’t look too realistic yet, but I later go back and adjust my materials to make them more realistic, which is covered in the next section. Of course, like all things, sometimes on different projects I will finalize a material first just because I feel like doing that, but typically I like to add a first round of minimally edited materials just to get an overall vibe of the scene. It also allows me to figure out more concretely which areas I need to adjust if one texture doesn’t really make up for another. Don’t be shocked if this first round takes a lot of time because honestly it sometimes does.

As I’m applying materials I usually just stay in Cycles rendered view but sometimes use material preview. Because I’m staying in rendered view, it means I need a lighting source in order to see my materials properly! I usually just use the built-in Sky Texture that comes with Blender for an initial light source but will probably later switch to using an HDRI from PolyHaven. I also have paid addons for skies but honestly I don’t use them as much as I should.

5.3 – Basic Final Materials.

In the image above you can see initial basic materials applied. It looks okay from far away but it does still give sort of a CGI vibe. To make it more realistic, what we need is variation in our materials. If you look at most buildings, especially older ones, you’ll notice that chaos is part of it. You’ll notice that some areas are darker than others, more crooked than others, or have more wear and tear. This is especially important for this scene because it’s an old museum on the side of a mountain which has very treacherous weather. So let’s move on to detailing our materials.

6. More Materials and Details

For this section, I’m using a lot of techniques that you can also find in this amazing video and this one too. I also break my materials down in my own video that I mentioned earlier in the article. One really simple way to break up the uniformity in regular textures is just to mix in the same exact texture but with a slightly different color! To do that you can use a mix shader node and then use some random black and white noise texture to control which areas show more of which texture. For more flexibility, you can attach a color ramp node to adjust the black and white area fall-off! I’m going to be doing this for nearly all of my materials. While it may not always be more realistic, it definitely helps break up the uniformity of some of the repeated image textures. If you do this over and over, sometimes with different image textures or simply built-in noise functions, you add a lot more realism. You can do this mixing as many times as you want! I’m going to be doing this a bunch and honestly I just pick random ones and smash them together. There are definitely more principles than what I cover here but the general idea is that uniformity is something you want to get rid of for more realism.

6.1 – Node setup for mixing two PBR materials together using a noise texture.
6.2 – Mixed roof material with itself but a darker version for variety.
6.3 Random node setup with several different PBR materials mixed with random, manually adjusted noise blends.

7. Foliage, Trees, Greenery, and Stuff Like That

There are multiple approaches to creating foliage and nature stuff in Blender and it really depends on what your project goals are. For example, say I want to make a tree. Intuitively it seems like you would just model a trunk, branches, and leaves and then scatter the branches and leaves. That would be great for realism but it would increase render times a lot. Additionally, when it comes to using that model in another area outside of Blender like Three.js, its file size will likely be bigger than a tree with less geometry. One way to optimize a tree is just to use a bunch of image planes (like this video) of tree branches with leaves rather than actual geometry for the leaves and branches (at least the smaller branches) and then instance them with code. Another way is just to use sprite images like the amazing creator of slowroads.io did.
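
To make that instancing idea a bit more concrete, here’s a minimal sketch (not something I actually did for this project) of scattering a single leaf-card plane with Three.js’s InstancedMesh. The texture and the random placement are just placeholders:

```js
import * as THREE from "three";

// Minimal sketch: one leaf-card plane instanced many times instead of real leaf geometry.
// "leafTexture" is assumed to be an already-loaded THREE.Texture with an alpha channel.
function createLeafCards(leafTexture, count = 500) {
  const geometry = new THREE.PlaneGeometry(0.4, 0.4);
  const material = new THREE.MeshBasicMaterial({
    map: leafTexture,
    alphaTest: 0.5, // discard the transparent pixels around the leaf image
    side: THREE.DoubleSide,
  });

  const leaves = new THREE.InstancedMesh(geometry, material, count);
  const dummy = new THREE.Object3D();

  for (let i = 0; i < count; i++) {
    // Random placement around an imaginary trunk; a real tree would use authored positions.
    dummy.position.set(
      (Math.random() - 0.5) * 2,
      1 + Math.random() * 2,
      (Math.random() - 0.5) * 2
    );
    dummy.rotation.set(Math.random() * Math.PI, Math.random() * Math.PI, 0);
    dummy.updateMatrix();
    leaves.setMatrixAt(i, dummy.matrix);
  }
  leaves.instanceMatrix.needsUpdate = true;
  return leaves;
}
```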

I got lazy and just decided to use a Blender plugin called Botaniq. This is a super popular library with a load of plant assets to choose from. The good news is Botaniq assets don’t use actual shaped leaves; rather they use images of leaves on simplified rectangular meshes, so that definitely helps with the file size like I mentioned in the previous paragraph. Ideally you would still instance those planes with code, but I didn’t just to save time.

7.1 Trees from Botaniq added to the scene.

8. Interior

Because faces have a front and a back side in 3D modeling, the interior needed to be created. For smaller scenes I typically just select the outside walls I want to show inside too, duplicate them, flip the normals, and adjust them so they don’t overlap. Sometimes I extrude the walls or give the outside walls a solidify modifier. For this experience specifically, I did a combination of a few of those but also manually created my interior shape at points.

Merging, Creating, and Cutting

I used the boolean modifier with the intersect option to join the two roof objects together and deleted the area of the roof that was occupying interior space. It almost did the job, but I had to polish it up with the knife tool.

8.1 Area of roof that needs to be cut out.
8.2 Still some cutting to do after deleting intersecting faces.

I also made a smaller mesh inside to make the experience a little smaller and more intimate inside and to save a little bit of time. You can also see I started deleting a lot of outside walls; for example, the top right wall circled in green (now empty) is covered by the roof, which is why I deleted the upper half of it. Also, the back half of the tower is deleted as it was also invading the interior space.

8.3 Interior is much smaller than how it looks on the outside, tower and other extra walls are deleted.
8.4 When roof is back on it covers all that stuff in the interior.
8.5 When looking from the front outside, no one can tell things are missing.

Interior Lighting

For the interior lighting I just decided to use floor torches rather than anything too fancy. I also used point lights with an increased radius to light up the area in Blender. You can see the lower the radius, the sharper the shadows. I kind of wanted to go for less sharp shadows so I increased the radius quite a bit.

8.6 Back torches have point lights with radius 0m which leads to sharper shadows. You can also use small area lights instead of point lights for a similar effect instead of adjusting the radius property.

Interior Content Concept & Art Direction

Since this is a Codrops museum, I thought, why not make the paintings link to different sections of the Codrops website! Originally I wanted to make a larger museum with each section being related to a section of the website but due to time constraints I opted for paintings on a wall instead.

To get the images I needed for the texture I used a browser extension called GoFullPage. But before I captured the screen I zoomed in at 200% so the text would be a bit larger. The issue was that the navigation menu was still in the image, which looked a bit odd when I UV mapped it in Blender, so I brought the images into Figma using the Insert Big Image plugin (the image was too large to directly load into Figma without it automatically resizing it). Then after some deleting and cropping I placed a same-color box over the navigation menu to hide it.

To make sure the headings were aligned I went into orthographic view and added a temporary cube and adjusted my UV maps for my image textures accordingly. Of course I also wanted to blend it in with the overall theme so I dirtied it up with a bunch of noise and mixed in this peeling paint texture.

8.7 Same color rectangles as website background added in Figma to hide the navigation items.
8.8 Added temporary horizontal cube in orthographic view to align heading text for consistent spacing.
8.9 Paintings with images final result.

Paper Asset Creation

For the paper on the lectern, I just googled “paper image texture generator online” and chose one at random. So I ended up using Tech Lagoon’s generator and brought the outputted texture into Figma. In Figma I cropped it and added some text over it and exported it as an image after grouping the text and paper texture together. You can do this in any design sort of tool like Canva, I just use Figma because I use it often.

8.10 Image texture for Blender created with Figma and Tech Lagoon.
8.11 Small noise mixed with paper image texture to add dirtiness vibe.

9. Exporting, Optimization, & Baking

Baking at a high level is the idea of saving complicated information to use later to improve performance. For example, you can bake simulations and animations. For this experience, I’m baking lighting into an image texture, which means instead of doing real-time lighting, I’m saving all that lighting information into a 2D image texture which will end up attached to my 3D model.

Random note, but when working with image textures we typically go for powers of two, but not always. Here’s a small discussion on it, but you can find a lot of other resources on it.

A Brief Discussion on Handling Large Immersive 3D Web Experiences

For an immersive experience, the typical workflow is not to bake your entire scene because that leads to really large file sizes and long loading times. When you visit large 3D immersive experiences you’ll likely find a combination of baking and other real-time lighting methods (and multiple other optimization methods). The reason I’m choosing to bake my entire scene even though it’s not the most common practice is because scenes like these fit right in this small little “trade-off” zone.

This trade-off zone is something I made up, but the idea is that the visual quality compared to how much time it takes to implement as well as load on a website is well worth it. For example, I assume this website will probably take 6 seconds to load on an average connection for my target audience demographic, but for the quality I think that’s worth 6 seconds of loading time. While I could get a similar visual result without having to increase my loading times by using code and other optimization techniques, the other side to the trade-off is development time. For example, if development time would add an additional 20 hours to save on 3 seconds of loading time, I might just choose to save the 20 hours of work over saving 3 seconds of loading time.

As you work in a design agency, you’ll often find trade-offs being taken all the time and best practices will definitely be overlooked sometimes, but if the result isn’t affected then it really doesn’t matter all too much. The more websites you visit, the more you’ll see there are a ton of websites (even award-winning websites) out there that break if you resize during a transition, click too many things at once, or scroll during an animation etc. This is completely normal because of time constraints, stress, fear of burnout and many other reasons.

Baking Preparations

In order to bake properly we have to optimize the UVs to reduce as much wasted space as possible since images have a specific amount of pixels and resolution that we can work with. That means we need to delete things that won’t be visible to the camera. While I was deleting things all along the process, now is the time to make sure everything really gets deleted. What I like to do is create a camera with a 1920×1080 resolution and, while in camera view, use Blender’s fly mode controls with SHIFT + ~ and move around with WASD. As I move around I try and plan out the path my camera will take. As I go along my supposed path, I exit camera view and fly mode frequently to delete the things I can’t see in my camera view, then hop back in and continue the back and forth process. Before you fly around though, you should also set in stone a good focal length you’d like for your camera as it will affect what you see as you move the camera around. Since I want the scene to feel large and epic I went with a lower focal length for my camera to give it that wide-angle lens feel. Just make sure when you translate that later to Three.js it feels the same. For example, a low focal length for the Blender camera means a higher field of view (FOV) for the perspective camera in Three.js.

9.1 The UV unwrapping on the left is okay, but the packing isn’t great—there’s a lot of empty space and unused pixels. On the right, the same unwrapping is packed much more efficiently, using a lot more texture space. This means more pixels and better resolution once the image is mapped onto our 3D model! Yes, the file size on the left would be smaller since all the empty space would be filled with a solid color (like black), but if you’re loading the entire set of pixels (aka your image) anyway, you might as well fill it up and get better quality.
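
As a rough sketch of that focal-length-to-FOV relationship (assuming Blender’s default 36mm sensor width, Auto sensor fit, and a 16:9 render, which gives an effective sensor height of 20.25mm), the conversion to the vertical FOV that Three.js’s PerspectiveCamera expects looks something like this:

```js
import * as THREE from "three";

// Rough sketch: convert a Blender focal length (mm) to a Three.js vertical FOV (degrees).
// Assumes Blender's default 36mm sensor width, Auto sensor fit, and a 16:9 render,
// so the effective sensor height is 36 * (9 / 16) = 20.25mm.
function blenderFocalToFov(focalLengthMm, sensorHeightMm = 20.25) {
  return THREE.MathUtils.radToDeg(2 * Math.atan(sensorHeightMm / (2 * focalLengthMm)));
}

// e.g. a wide-angle 20mm lens in Blender is roughly a 54° vertical FOV in Three.js:
const camera = new THREE.PerspectiveCamera(blenderFocalToFov(20), 16 / 9, 0.1, 1000);
```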

I know there are a lot of workflows when it comes to camera movement such as creating an animation in Blender and using that information in Three.js (like this post) or converting a custom curve in Blender to a JSON file of points to be used to create a curve in Three.js, but for this project I’m doing everything manually and will select my curve points based on predefined points when using OrbitControls to move around, which I’ll talk about later in this article.

9.2 Camera view and moving around with WASD in Fly Mode.
9.3 I moved a few things around to show how much was deleted from the mountains covered by the museum to the overlapping floors and roofs, etc.

Baking

Now that everything is deleted and we’ve done one last check of the normals to make sure everything I want to be visible is blue, we can proceed to baking. There are a lot of amazing baking tutorials for Blender on YouTube already, so I want to talk about the workflow I use often. I like to use two paid addons (I’m not sponsored by them), one for baking called SimpleBake and another for optimizing the UV packing process called UVPACKMASTER.

In order to bake the entire scene, it has to be split up into multiple textures, otherwise it’ll get super blurry. Additionally, one extremely large image texture would cause issues later down the road even if it looks fine. When it comes to large scenes like this, it’s honestly just trial and error. Sometimes you don’t split things up enough and have to rebake textures, and other times you use more texture sets than you really need. I generally try to err on the side of caution and overestimate how much space I need, which does mean I might have too many textures, but given how much time it potentially saves it’s a trade-off I’m willing to make. In other projects I will spend the time to bake and rebake over and over until it’s as optimal as I think it can get, but sometimes I just use what I do the first time through based on intuition.

Anyway, here’s a small list of things I try to do during and/or before baking.

  • I bake at 4K (4096px x 4096px) because I can always reduce it to a lower resolution at another time and it’s more difficult to go from a lower resolution to a higher resolution.
  • I bake with an HDR file or EXR file in order to retain more color information initially and later convert it to a different format that’s more usable. Oftentimes you can directly bake with PNG format and it looks almost no different than with an HDR file, but sometimes there are differences.
  • Make sure to disable denoise for the renderer if you have it on otherwise it’ll cause black lines across the seams (see GitHub issue here).
  • I bake similar objects together based on location or type. Like I’ll try and group furniture together or interior elements together. Practically this serves no purpose other than semantic clarity when I load the models (which I guess does speed up the development process, or helps if I ever need to go back and re-export). If I was going for pure optimization I wouldn’t bake similar objects together because it might mean increasing load times and having less optimal bakes.
  • I like to save a copy of the file or save incremental right before I start baking, that way I have a completely separate “clean” file to start over with if I need to. This has saved me countless hours of work.
  • Make sure to apply your scale with CTRL+A > Scale (if anything breaks make sure to fix that). Then apply all your modifiers; you can quickly do this by hitting “A” to select everything, F3 to open the global search menu, and then search for “Convert to Mesh.” If you get an error trying to convert to a mesh, you likely have an instanced object. In that case, do F3 again and search for “Make Single User” and pick “Object & Data”, or one of the other options depending on what you need.
  • Join objects that are static to reduce drawcalls and for easier selection. I join objects before baking if they have the same UV map, otherwise I join objects with different UV maps after baking.
  • Use the decimate modifier to either reduce subdivisions or reduce geometry on certain objects. You can see my ground mesh is decimated. The topology is worse, but the polycount is lower, reducing file size. Just don’t go overboard or it’ll start affecting the aesthetics as well. If you did a super high remesh (like I did), you might be able to un-subdivide first and then go for the decimate modifier after.
  • I try to scale my materials as large as possible without taking away from the aesthetic. For example, when it came to the stone wall, I tried to make the stones really big so the image texture wouldn’t repeat itself as much and thus need more detail, which would increase the image texture size. When I originally had the material for the building it was more of a brick one, but I later switched it to larger stones; that way it had less detail it needed for the bake and saved on memory. A similar approach would be to replace PBR materials with flat solid colors.

And here are the steps I do to bake with the two paid addons SimpleBake and UVPACKMASTER:

  1. Set SimpleBake Settings:
    • Cycles Bake
    • Bake Type: Combined
    • Denoise (Compositor)
    • Colour Space: Linear Rec.709
    • Bake Width & Bake Height: 4096px
    • All internal 32-bit float
    • Transparent background (if working with transparency)
    • Multiple objects to one texture set (if you have two or more objects that share the same image texture)
    • Export bakes
    • Format Open EXR
    • Prefer existing UVs called SimpleBake
  2. Typically, but not always, I group items that are close to each other or somewhat related into a single texture set (e.g., trees & plants share one texture set, or roof elements share one texture set, etc.). I then put these into their own collection and give it an ID; the ID is typically just a number.
  3. Give each object a second UV map called “SimpleBake” which SimpleBake uses to bake (if you choose to set the setting like I did in step 1).
  4. Make sure the SimpleBake UV map is highlighted and use Smart UV Project to unwrap and UVPACKMASTER to pack the UVs. It depends on the project whether I decide to unwrap with manual seams or if I just use Smart UV Project. In this case I’m just using Smart UV Project to unwrap for me. It’s less optimal, but it cuts down preparation time by a few hours at least.
    • I did, however, for some of the longer objects, mark “slicing” seams. Basically, seams that reduce the length of an object when it is unwrapped so it packs a lot better. In the image below you can see the front part of the mountain. With Smart UV Project it’ll have a hard time knowing where to slice these long continuous meshes even if you adjust the parameters. You’ll also see it in other areas like the rooftop, where I also sliced it down the middle, if you download the Blender file.
  5. Add selected objects to SimpleBake queue and bake.
  6. Convert all baked images from .EXR to PNG by using the compositor in Blender.
  7. When it comes to compression I’ve sometimes found Basis KTX textures better than WebP and vice versa, so I usually try them both and sometimes use both in the same project. I’m honestly not too familiar with the trade-offs in this area other than I just trial and error until I get something that looks nice and has a small enough file size without spending a bunch of time continuously trial and erroring. If I’m using WebP I like to use Squoosh because I can directly see how much quality I’m losing as I compress and adjust, and for KTX I like to use the KTX software. I am aware there are several other methods to go for, but these are the ones I’m most familiar with.

Additional 3D Assets

There are a lot of other things that are part of the experience that I didn’t know where to fit in this article. Most of them were done during the export phase, so I’ll briefly talk a little bit about the extra details and how I handled them, as they are slightly different.

For the background image in the back, I simply duplicated a portion of my sculpted mountain, rendered a transparent image of it in Blender, and then attached that image to a flat plane mesh. The waterfall was the same idea: render a transparent image, then attach it to a bent mesh with appropriate UVs. I also moved the waterfall’s UVs up and down to check what it would look like when I scroll the UVs with code (more on that later). These things were so far away and not the main focus, so I don’t think I needed a normal map or anything to fake more details; a simple image was good enough. I don’t even remember what resolution these were but it doesn’t really matter too much, just make sure they’re not too large. I think both of these didn’t have an X or Y resolution larger than 1000px.

9.4 Rendered image of a portion of the mountain.
9.5 Background image attached to a mesh plane.
9.6 UV Mapping rendered waterfall image to bent mesh.
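
Jumping ahead a little, the UV scrolling I mentioned is only a few lines once we’re in React Three Fiber. This is just a rough sketch; the material reference and the speed value are assumptions:

```js
import { useEffect } from "react";
import { useFrame } from "@react-three/fiber";
import * as THREE from "three";

// Rough sketch: scroll the waterfall texture's UVs every frame inside an R3F component.
// "waterfallMaterial" is assumed to be the material on the bent waterfall plane.
function WaterfallScroller({ waterfallMaterial }) {
  useEffect(() => {
    // Let the offset wrap around instead of clamping at the texture's edge.
    waterfallMaterial.map.wrapT = THREE.RepeatWrapping;
    waterfallMaterial.map.needsUpdate = true;
  }, [waterfallMaterial]);

  useFrame((_, delta) => {
    waterfallMaterial.map.offset.y -= delta * 0.15; // speed is arbitrary, tweak to taste
  });

  return null;
}
```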

For the bird I got a model from GremorySaiyan and I baked it at only 256×256 resolution, completely on its own texture set. Honestly, I should have made it share another texture set instead of having its own texture, but I had already baked everything else and didn’t want to go back, optimize for it, and rebake my assets. Frame 4 of the animation is when the wings are spread, so I baked on frame 4 for maximum lighting distribution. The cool thing about using flat lighting with soft shadows (like we did for this project) is that when stuff moves around, you can’t really tell it’s baked, compared to hard shadows.

9.7 Bird wings spread at frame four which is great for baking lighting.
9.8 Just make sure to check mark animation during export! Exporting is covered in the next subsection but keep this in mind for animated objects.

Exporting

When exporting the models, there are multiple ways I tend to do it, and I listed some of the reasons I’m familiar with for why each method exists, but there are far more reasons to choose one or the other than what I list here. I’m using the GLB file format because it is highly recommended for web usage compared to other formats.

  • Export each object per image texture as a GLB file without a material and attach it later through code.
    • Probably good for things like configurators, more flexibility, and more lazy loading opportunities.
  • Export each object per image texture as a GLB file with the material hooked up meaning the texture is embedded into the GLB file by default.
    • Speeds up the development process sometimes, and if you’re using a command-line tool, sometimes you need the texture embedded in the GLB file to use the tool.

Regardless of the method you use, make sure to join objects together in Blender to reduce draw calls, or leave them separate if you want to interact with them later using React Three Fiber’s built-in event handlers for meshes. You can also make invisible target hitboxes to interact with as well, but I decided to just use the mesh objects themselves, which I discuss in the coding section later in this article.
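
As a tiny sketch of why you’d keep certain meshes separate, a standalone mesh can use R3F’s pointer events directly. The mesh name, file path, and URL here are placeholders, not my exact setup:

```jsx
import React, { useState } from "react";
import { useGLTF } from "@react-three/drei";

// Rough sketch: a mesh kept separate in the GLB so it can receive R3F pointer events.
// "nodes.Paintings" and the file path are placeholder names, not the exact ones from my file.
function Paintings() {
  const { nodes } = useGLTF("/models/ninth-texture-set.glb");
  const [hovered, setHovered] = useState(false);

  return (
    <mesh
      geometry={nodes.Paintings.geometry}
      material={nodes.Paintings.material}
      scale={hovered ? 1.02 : 1}
      onPointerOver={() => setHovered(true)}
      onPointerOut={() => setHovered(false)}
      onClick={() => window.open("https://tympanus.net/codrops/", "_blank")}
    />
  );
}
```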

For this project I tried out WebP and decided that KTX was the better option. My file sizes were smaller and the quality was okay. I could’ve played around with it more, but decided just to save time and stick with KTX. For the ones that have an alpha channel, make sure to plug that into the alpha channel of the Principled BSDF node, otherwise command line tools might get rid of it during their processes. The specific command I used was as follows: gltf-transform etc1s output1.glb output2.glb --quality 255 --verbose. I tried varying quality values at random so honestly I don’t even remember what I chose, but I know the lowest I went was 155, not because I couldn’t go lower, just because I didn’t decide to try more options and the file sizes were already surprisingly low. Another cool thing is that because the scene is so dark with darker colors and dirtier looking materials, a lot of compression artifacts just blended in as part of the dirty materials, so I didn’t really have to worry about them.

9.9 Material export node setup with alpha channel.
9.10 Final Blender file outliner showing setup, LCA is “lights, camera, action”, other elements are anything that I need to do something special for, and the world is pretty much the rest of the models. Then I also have a separate collection for the actual final things I want to export. You can see the Ninth texture set has the paintings as a separate mesh as I want to target it later, same with the Eighth texture set and the tree.

When determining where to sacrifice quality, always make sure to take into consideration where the user will be focusing. Since our peripheral vision isn’t as good, it’s safe to reduce quality in areas that our eyes aren’t focused on. Of course, when someone looks at that low quality area it’ll look really off, but for a first run through it’ll be okay and, for this experience, that’s all I’m kind of going for. Adapt according to your needs; maybe you want the user to look all around, in which case you might want to keep details in surrounding areas, but that’s not the case for this experience. There’s also visual overload in the sense that there are so many objects that the user rapidly looks around rather than at a single area, which makes low quality textures less noticeable.

9.11 Red areas have poorer quality because of higher compression since user is focusing on green area so we can sacrifice quality in the red areas.

Draco Compression

For even more optimization, I use Draco compression. In older versions of Blender there used to be a compression option which I used, but it has since been removed in newer versions and I never bothered to check why. Today I sometimes use gltfjsx, gltf-pipeline, or GLTF Report. I’ve tried multiple objects with the same settings and sometimes one method performs better than the other in reducing file size, but it’s typically only a few KBs. So instead of going through trial and error I typically just put all my GLB files into GLTF Report and re-export them with compression:

9.12 GLTF report export settings with Draco compression.

10. Miscellaneous Assets (Fonts, Audio, & SVGs etc.)

When it comes to assets, not just 3D assets, it’s almost always a back and forth thing. I often like to start coding to lower the anxiety before I prepare my assets, but in this article I am speaking as if I’m preparing my assets in advance which is far from the reality.

Fonts

For fonts I didn’t spend too much time picking; the typography choice is definitely not the best, but at least it fits the old-age vibe a little bit. I got them from Google Fonts, one is called Eagle Lake while the other is called Beth Ellen. I downloaded them and put them into Transfonter to convert them to .WOFF and .WOFF2 files and to generate a really handy stylesheet to copy and paste. Random, but I really like using GooFonts for Google Fonts because it categorizes the kinds of fonts, like “corporate” fonts for example. You can also just google something like “best corporate Google fonts” and find a list.

Custom SVGs with Figma

For custom SVGs I didn’t really spend too much time here, just made an X in Figma with the draw tool and copied and pasted it as SVG directly into my code. I almost always update fill or stroke to be “currentColor” so it takes the color that I set with my CSS just like my other text.

10.1 Cool Figma trick allows you to copy as SVG instead of having to export it
10.2 Change stroke color to “currentColor”
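
For reference, the pasted markup ends up as something like the snippet below once the stroke is switched to currentColor (the path data here is simplified, not my exact export):

```jsx
// Rough sketch: an "X" icon pasted from Figma as JSX, with the stroke set to currentColor
// so it picks up whatever CSS color the surrounding text uses.
const CloseIcon = () => (
  <svg viewBox="0 0 24 24" width="24" height="24" aria-hidden="true">
    <path d="M4 4 L20 20 M20 4 L4 20" stroke="currentColor" strokeWidth="2" fill="none" />
  </svg>
);
```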

Music, SFX, and Audio Editing with Audacity

An experience can feel a lot more immersive just from music and SFX alone! I found a great hover effect sound by googling “thump free VFX sound” and stumbled across this one! The torch SFX was sourced from freesoundsxx. For other nature SFX I sourced from Epidemic Sound.

For the music I didn’t think too much about it haha, but this one was pretty cool and I ended up using this (I’m not sponsored by the way, I just find random stuff online and smash it together). You can give a listen to the music on their YouTube channel as well!

Huge credits to all the original creators!

Since the music was an hour long I decided to trim out a 4 minute segment of it using Audacity and added some fade-ins and volume differences throughout the audio. You can also do this with code, but I just decided to process it manually with audio editing software. By the way, there are a lot of amazing free tools like Audacity out there, and you can even use video editors to do the same thing. For the file format, I typically find .ogg files giving the lowest file size for the best audio quality, but not always. The downside is that .ogg files don’t always work on iOS/macOS so you’ll need a fallback format too like MP3 (more on this later).

In image 10.4, you can see my export settings. I’m choosing mono because we don’t need spatial audio. I also lowered my sample rate from 48000Hz to 44100Hz and reduced my quality output and didn’t notice much of a difference. There’s a huge debate between 48000Hz and 44100Hz for audio when you look it up online, but honestly I just trust my ear and it’s not like this experience is competing with other professionals. You could probably get away with even lower settings, but I decided to just go for safety and saving time here. The entire 3 minutes and 56 seconds of music was only 1.6mb which is amazing! A lot of people will say 1.6mb is way too big, and if you look at network tabs and see other audio track file sizes they’re typically smaller or streamed/loaded better, but I feel like it’s okay for what I want to do just to straight up load 1.6mb upfront.

During browsing you might stumble on amazing different options, and you could spend an entire day looking; there are often times where I later swap to a different soundtrack. For example, I really like this one and may end up switching to it.

10.3 You can see the volume levels get louder at the start and quieter at the end. The random volume changes are areas I thought could use a bit more intensity and give some variation to the visitor as they stay on the website.
10.4 Audacity export settings for our .ogg files.

When it comes to the SFX, make sure to trim the empty space at the left and right ends of the audio, otherwise when you play your audio there will be a weird delay between your trigger and when the sound actually plays. You can always set a play time with code, but it’s easier just to process it beforehand in my opinion (and it reduces file sizes, though often very little if it’s just empty space). There may be cases where sometimes you want that space, but for me I want the audio to play as soon as the trigger occurs so I’m removing it. I should note, looks can be deceiving! Make sure the flat line is actually quiet because sometimes it’s not, and if you cut it out the audio will have a jagged ending, so just make sure you’re not cutting too much out!

10.5 Cutting out the red portions. Notice how I’m leaving a lot more green in the audio despite it flat lining; it’s because there’s actually still audio there, just very faint. You could also hide this with your own manual fade out with code if you did accidentally cut too much, so not really a big deal, but something to look out for.

Great, we got everything set up! Now we just need to export all of them as MP3 files so we can have audio playing on Apple devices (specifically those that also use the Safari browser).

10.6 Export settings in Audacity for an MP3 file. Play around with it until you get something you like; these were just mine, and I didn’t spend too much time testing them.
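
On the code side, picking between the .ogg file and the MP3 fallback can be as simple as a capability check. Here’s a rough sketch (file paths are placeholders, and there are fancier ways to handle this with audio libraries):

```js
// Rough sketch: prefer the smaller .ogg file, fall back to .mp3 where Ogg Vorbis
// isn't supported (e.g. Safari on iOS/macOS). File paths are placeholders.
const probe = document.createElement("audio");
const canPlayOgg = probe.canPlayType('audio/ogg; codecs="vorbis"') !== "";

const music = new Audio(canPlayOgg ? "/audio/music.ogg" : "/audio/music.mp3");
music.loop = true;

// Browsers block autoplay, so start playback on the first user interaction.
window.addEventListener("pointerdown", () => music.play(), { once: true });
```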

Background Cube Map

In Blender I used an HDRI for lighting, but I couldn’t use that in the browser as the background image because the file size would be unnecessarily large. To convert the .hdr file into a cube map I used this super awesome tool called HDRI to CubeMap. Unfortunately, the exposure setting didn’t seem to work for me, which meant I had to manually adjust the image exposure in a design tool, which is exactly what I did in Figma (when I was using this .hdr in Blender, I set the strength to 0.4, so I needed to match the exposure accordingly). In order to reduce all the exposure values to the same value in Figma, I had to use a special plugin called Image Adjustments Copy & Paste which allows you to copy Figma adjustments to other selected items. You could avoid this by using an actual image editing software that allows you to edit the same property for multiple items/images, but Figma was what I had open and it was fast anyway thanks to this awesome plugin.

10.7 Showing special plugin that allows you to copy adjustments made from one image to another image.
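
Once the six faces are exported, using them as the scene background in Three.js is only a few lines. A rough sketch (the file names and path are assumptions):

```js
import * as THREE from "three";

// Rough sketch: load the exported cube map faces and use them as the scene background.
// File names/paths are assumptions; the order is +x, -x, +y, -y, +z, -z.
const background = new THREE.CubeTextureLoader()
  .setPath("/textures/cubemap/")
  .load(["px.png", "nx.png", "py.png", "ny.png", "pz.png", "nz.png"]);

scene.background = background;
```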

11. Coding

For coding I’m choosing to use React because of the huge React Three Fiber (R3F) ecosystem. R3F has so many handy components that make the development process so much faster. While this isn’t a step-by-step coding article, I will talk about the larger components and ideas of how I made/approached the main features of the experience.

Loading Models

The first thing I want to tackle is loading models and being able to target them easily. I typically do that by putting the model through the GLTF pmnd website or with their command line tool gltfjsx. This will generate JSX code based on your model, allowing you to easily target the meshes you need! It also sets up a general structure and loader for you, so you only need to make a few changes to have it showing up in your 3D experience.
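
The generated component ends up looking roughly like this (the mesh and material names here are placeholders, not my actual export):

```jsx
import React from "react";
import { useGLTF } from "@react-three/drei";

// Rough sketch of the kind of component gltfjsx generates; names are placeholders.
export function Museum(props) {
  const { nodes, materials } = useGLTF("/models/museum.glb");
  return (
    <group {...props} dispose={null}>
      <mesh geometry={nodes.Roof.geometry} material={materials.RoofBake} />
      <mesh geometry={nodes.Paintings.geometry} material={materials.PaintingsBake} />
    </group>
  );
}

useGLTF.preload("/models/museum.glb");
```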

Here were those few changes:

  • Since I’m using KTX textures, useGLTF didn’t work out of the box, so I created a custom hook extending useGLTF to load the models with embedded KTX textures.
  • I exported the image textures embedded in the model through the Principled BSDF node in Blender (as shown in image 9.9) which, when imported and loaded with Three.js, converts to a MeshStandardMaterial. Since I have no real-time lighting, I made another utility function to switch all of the materials to a MeshBasicMaterial for better performance. Also, as mentioned earlier, some materials included an alpha channel. If the function detects this, it updates the material by enabling transparency and setting a default alphaTest value (which I later adjusted manually). A rough sketch of both of these ideas follows below.
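
The transcoder path, hook name, and exact material checks below are assumptions on my part, so treat this as a starting point rather than the exact code from the project:

```jsx
import * as THREE from "three";
import { useThree } from "@react-three/fiber";
import { useGLTF } from "@react-three/drei";
import { KTX2Loader } from "three/examples/jsm/loaders/KTX2Loader.js";

// Rough sketch: extend useGLTF so the GLTFLoader can decode embedded KTX2 textures.
// The transcoder path is a placeholder; it should point to the Basis transcoder files.
const ktx2Loader = new KTX2Loader().setTranscoderPath("/basis/");

export function useKTX2GLTF(path) {
  const { gl } = useThree();
  return useGLTF(path, true, true, (loader) => {
    loader.setKTX2Loader(ktx2Loader.detectSupport(gl));
  });
}

// Rough sketch: swap every standard material for a cheaper MeshBasicMaterial,
// keeping transparency where an alpha channel was exported from Blender.
export function toBasicMaterials(scene) {
  scene.traverse((child) => {
    if (!child.isMesh) return;
    const prev = child.material;
    const basic = new THREE.MeshBasicMaterial({ map: prev.map });
    if (prev.transparent || prev.alphaTest > 0) {
      basic.transparent = true;
      basic.alphaTest = 0.5; // default value, adjusted per material later
    }
    child.material = basic;
    prev.dispose();
  });
}
```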

Camera Animation & Movement Along Curve

For the camera positions and rotations, I used OrbitControls from React Three Drei and just printed out the camera position and rotation as I manually moved around the scene, then stored those values in my code. As mentioned earlier, the other typical workflow is to make the path in 3D software and export the data as JSON to create a curve, but honestly I’m not sure how much faster that workflow really is. I’ve tried it, and I think it kind of just depends. There’s also Theatre.js, which is a tool I’ve seen a lot of websites use. I also know some people/agencies/companies have their own custom tools that speed things up a lot, like how Merci-Michel has a tool where you can click a button in Blender and it’ll refresh and update your code automatically.

Anyway, I took the manual approach here and it worked out very quickly, maybe only about 30 minutes to create an initial curve I liked. Later I spent some time adjusting the curve after I implemented the other coding parts, but I didn’t spend much time on adjustments either. Don’t worry if it takes you longer; I’ve just done it so many times that the intuition I developed made up for the time the manual approach would otherwise cost. It also helps that I wasn’t basing my camera movement on any path set by designers, a client, or someone like a creative director; it was just based on what I wanted in the moment.

11.1 Me copying a camera position to save for the CatmullRomCurve3.
11.2 Camera path curve shown as a red curve.

The general intuition for animating a camera along a path is to use the built-in getPoint() method that comes with all curve objects in Three.js, copy that position onto the camera, and then update the value passed into getPoint() based on some changing value like the amount scrolled. That would work, but it would be very “rough” or “choppy”; it would be like manually setting the position on every change of value. Therefore, we want to use some sort of interpolation between two points and let the per-frame update (requestAnimationFrame()) handle the smoothing. The most common method is linear interpolation, often called “lerp.” In this case we can do that with the useFrame hook from R3F. The idea with linear interpolation is that you have a target position and, on each frame/loop, move some factor closer to that target, which produces the smooth effect. As an analogy, it’s like when your friend runs ahead of you: you run to catch up, and when they slow down or stop, you start slowing down until you reach them. If they randomly start running again, you start running to catch up with them again. Your friend ahead of you is the target position and you are the current position.

You can see in the code below that if you get rid of the bottom logic, all you really need is newProgress to linearly interpolate the camera. The extra code handles the camera offset based on mouse position. I also wrapped the camera in a group so that the camera can be offset relative to its local axis rather than the global axis.

11.3 Camera movement along curve and offset animation logic.
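To make the lerp-along-curve idea concrete outside of the screenshot, here’s a stripped-down sketch. cameraCurve and the scroll value are placeholders for the real CatmullRomCurve3 and scroll handling, and the mouse offset / group wrapping is left out.

import * as THREE from 'three'
import { useRef } from 'react'
import { useFrame } from '@react-three/fiber'

// Reused scratch vector so we don't allocate every frame
const targetPosition = new THREE.Vector3()

function CameraRig({ cameraCurve }) {
  const progress = useRef(0)          // smoothed progress along the curve
  const scrollProgress = useRef(0.2)  // would be updated by the scroll handler

  useFrame(({ camera }) => {
    // Lerp the progress value itself so scrolling feels smooth...
    progress.current += (scrollProgress.current - progress.current) * 0.05

    // ...then sample the curve and ease the camera toward that point.
    cameraCurve.getPoint(progress.current, targetPosition)
    camera.position.lerp(targetPosition, 0.1)
  })

  return null
}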

The camera rotation logic follows the same intuitive idea: you have a target rotation and interpolate to it. This is often done with two curves, like on this award-winning website or this award-winning website, where a second curve holds the positions the camera looks at. I didn’t take the two-curve approach; I just hard-coded some rotations and then slerped (spherical linear interpolation) between those set rotations using a functional spline. So I guess in a way I did end up using two curves, but the rotation spline isn’t a curve you could put a line geometry on to visualize.

Looking back, I think the two-curve method would have been much faster, since I could plan the rotations in Blender or use Theatre.js, but I kind of just settled on manually picking rotations, generating a rotation spline, and slerping between them. Also, make sure to use quaternions when slerping camera rotations to avoid gimbal lock and so the camera takes the shortest rotation path to the new target rotation.

11.4 Object of target rotations for different positions based on camera progress along curve.
11.5 Rotation spline function using slerp and quaternions before converting back to Euler angles.
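Roughly, the rotation spline boils down to something like this as a sketch. The keyframe values here are made up, and the real version would interpolate the camera toward the returned quaternion each frame rather than snapping to it.

import * as THREE from 'three'

// Sketch of the "rotation spline" idea: hard-coded target rotations keyed by
// progress along the curve, slerped with quaternions.
const rotationKeys = [
  { progress: 0.0, quaternion: new THREE.Quaternion().setFromEuler(new THREE.Euler(0, 0, 0)) },
  { progress: 0.5, quaternion: new THREE.Quaternion().setFromEuler(new THREE.Euler(0, Math.PI / 4, 0)) },
  { progress: 1.0, quaternion: new THREE.Quaternion().setFromEuler(new THREE.Euler(0, Math.PI / 2, 0)) },
]

export function getRotationAt(progress, out = new THREE.Quaternion()) {
  // Find the two keyframes surrounding the current progress value
  let i = 0
  while (i < rotationKeys.length - 2 && progress > rotationKeys[i + 1].progress) i++
  const a = rotationKeys[i]
  const b = rotationKeys[i + 1]

  // Normalize progress between the two keys, then slerp so the rotation
  // takes the shortest path (and avoids gimbal lock)
  const t = THREE.MathUtils.clamp((progress - a.progress) / (b.progress - a.progress), 0, 1)
  return out.copy(a.quaternion).slerp(b.quaternion, t)
}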

Bird

The bird was done just like the camera, except the rotation was managed by looking at the next point on the curve, and the driving value was elapsed time rather than a delta scroll value. I thought about not making it an elliptical path, but I just decided to have it go around the mountain in an oval-like shape. Depending on where your camera is, you can actually see the bird make a turn, and that’s pretty cool. If I had more time I’d love to make it glide with another animation, but having it always flapping is okay too. In the future I’d also make the curve randomly update its points (within constraints that keep it visible) so that every time you load the experience the bird flies a different path.

11.6 Bird flight path shown as a red curve.
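As a rough sketch of that bird logic, assuming a closed CatmullRomCurve3 called birdCurve (the name and speed are placeholders):

import * as THREE from 'three'
import { useRef } from 'react'
import { useFrame } from '@react-three/fiber'

// Scratch vectors reused every frame
const currentPoint = new THREE.Vector3()
const nextPoint = new THREE.Vector3()

function Bird({ birdCurve, speed = 0.02 }) {
  const group = useRef()

  useFrame(({ clock }) => {
    // Drive progress with elapsed time instead of scroll
    const t = (clock.getElapsedTime() * speed) % 1
    birdCurve.getPointAt(t, currentPoint)
    birdCurve.getPointAt((t + 0.01) % 1, nextPoint)

    // Move along the curve and face the next point on it
    group.current.position.copy(currentPoint)
    group.current.lookAt(nextPoint)
  })

  return (
    <group ref={group}>
      {/* the animated bird model would be nested here */}
    </group>
  )
}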

Fire Shader

I asked my friends WaWa Sensei and Drin for help here because I wanted to write a custom one, but I later ended up using a public one found here! Huge credits to the creator drcmda for making this public. He’s a huge contributor to the Three.js community and he’s really amazing, definitely check him out! I adjusted a few parameters to get the fire looking a little different, and I manually positioned the flames one at a time. That seems tedious compared to just copying preset positions, but honestly it didn’t take much time.

Waterfall Shader

For the waterfall shader, I added some fading to the edges to help disguise the fact that it’s just a plane, but mainly I just scrolled the UVs in the y-direction. I also tried adding a tint to make it blend better, but it didn’t help much because I didn’t spend time trial-and-erroring it. In image 12.5 you’ll see that I needed to mirror the waterfall texture because it wasn’t seamless. My gut is also telling me I overcomplicated the edge detection for the fade, but at least I got something working haha. This is something I’ll be revisiting later to polish up.

fragmentShader: `
  uniform float time;
  uniform sampler2D map;
  uniform float textureAspect;
  uniform float repeatY;
  uniform float edgeFade;
  uniform vec3 tintColor;
  uniform float tintIntensity;
  varying vec2 vUv;
  varying float vWidthFactor;
  
  void main() {
    vec2 uv = vUv;
    uv.x = (uv.x - 0.5) / textureAspect + 0.5;
    
    // Scrolling effect with mirroring
    float scroll = time * ${speed};
    float phase = -uv.y * repeatY + scroll;
    float wrapped = mod(phase, 2.0);
    uv.y = (wrapped > 1.0) ? 2.0 - wrapped : wrapped;
    
    // Sample texture
    vec4 color = texture2D(map, uv);
    
    // Edge fading based on mesh width
    float fade = smoothstep(0.0, edgeFade, vWidthFactor) * 
                (1.0 - smoothstep(1.0 - edgeFade, 1.0, vWidthFactor));
    
    // Apply fading
    color.a *= fade;
    
    // Add some tint to it to blend better
    vec3 tintedColor = mix(color.rgb, color.rgb * tintColor, tintIntensity);
    color.rgb = tintedColor;
    
    gl_FragColor = color;
  }
`,

Tree Wind Shader

For the tree wind shader, I definitely wanted to up the realism. For context, all the leaves were joined together with the trunk as a single object. The general idea with more complex shaders is to start layering effects: first start with a gentle sway across the whole tree, then adjust the sway based on height, then add a vertical adjustment, then add a bit more variation, etc. It’s a similar intuitive concept to working in Photoshop, where you add layers for more effects, or to working with materials and shader nodes in Blender, where you create some style and then mix it with some sort of blending node. When you view the math functions in GLSL code as “layers” or “nodes,” it becomes easier to visualize in your brain and doesn’t feel as scary. After all, those nodes in Blender were created to make the math behind the scenes less scary and more accessible; now we’re just writing the actual math behind the nodes.

I’ll also say AI helps with shader code and explaining how it works (not always, but oftentimes yes). Take the explanations slowly: trace them out, do the math, write it out, draw it out, ask for different analogies and forms of explanation, etc. Just make sure to always ask “are you sure?” to have the AI recalibrate, and validate with other AIs and multiple online sources as well. You can also immediately apply what it tells you in a completely different context to validate it. AI can be a great learning tool if you use it correctly, but to use it effectively you’ll need to develop a good intuition first.

const vertexShader = `
  uniform float time;
  uniform float swayAmount;
  uniform float swaySpeed;
  uniform float baseFrequency;

  attribute float instanceScale;

  varying vec2 vUv;
  varying float vHeight;

  void main() {
    vUv = uv;
    vHeight = position.y;
    
    // Normalize the height for easier calculations (0 at base, 1 at top)
    float normalizedHeight = position.y / 10.0; // Divisor based on tree height, but honestly doesn't really matter too much
    
    // Multi-frequency sway for more organic movement
    float sway1 = sin(time * swaySpeed * 0.8 + position.x * baseFrequency) * 0.5;
    float sway2 = cos(time * swaySpeed * 1.3 + position.z * baseFrequency * 1.7) * 0.3;
    float sway3 = sin(time * swaySpeed * 0.5 + position.x * baseFrequency * 0.3) * 0.2;
    
    // Combine sway effects with more influence at the top
    float combinedSway = (sway1 + sway2 + sway3) * swayAmount;
    float trunkStiffness = 1.0 - smoothstep(0.0, 0.3, normalizedHeight); // More stiff at bottom of tree
    
    // Apply sway with height influence and trunk stiffness
    vec3 swayedPosition = position;
    swayedPosition.x += combinedSway * 0.3 * normalizedHeight * (1.0 - trunkStiffness);
    swayedPosition.z += combinedSway * 0.4 * normalizedHeight * (1.0 - trunkStiffness);
    
    // Add a slight vertical movement for leaf flutter
    swayedPosition.y += sin(time * swaySpeed * 2.0 + position.x) * 0.02 * normalizedHeight;
    
    // Some wind direction for variation
    float windDirection = sin(time * 0.1) * 0.5 + 0.5;
    swayedPosition.xz += windDirection * combinedSway * 0.1 * normalizedHeight;
    
    // Adjust the position
    vec4 modelPosition = instanceMatrix * vec4(swayedPosition, 1.0);
    vec4 viewPosition = viewMatrix * modelPosition;
    vec4 projectedPosition = projectionMatrix * viewPosition;
    
    gl_Position = projectedPosition;
  }
`;

Instancing

I used quite a bit of instancing with React Three Drei’s instancing wrapper for my trees and background images. In other articles you’ll find people using different scattering methods (like this one) to scatter more effectively. Since I only had a few instances and had no idea where I wanted them, I just manually rotated and positioned them until I got something I liked. With Vite’s hot module reload, the page didn’t refresh every time I changed a position or rotation, so it didn’t really take that long.

11.7 A bunch of tree instances with the custom shader material attached.
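For reference, usage of drei’s instancing wrapper looks roughly like this. It’s only a sketch: a box stands in for the tree geometry and custom wind material, and the transforms are made up.

import { Instances, Instance } from '@react-three/drei'

// Sketch: a handful of manually placed instances
function Trees() {
  return (
    <Instances limit={20}>
      {/* In the real project this would be the tree geometry and the
          custom wind shader material instead of a box */}
      <boxGeometry />
      <meshBasicMaterial color="green" />
      <Instance position={[4, 0, -2]} rotation={[0, 0.6, 0]} scale={1.1} />
      <Instance position={[-3, 0, -5]} rotation={[0, 2.1, 0]} scale={0.9} />
      <Instance position={[1, 0, -8]} rotation={[0, 1.3, 0]} scale={1.0} />
    </Instances>
  )
}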

For the background instances, I positioned them based on my own maximum screen width, which is unfortunately not that large. Therefore, on devices that support larger window widths, the illusion of the world being large and continuous is shattered, because the cube map background becomes visible instead of the immersive backdrop. To accommodate this, I could dynamically increase the number of instances based on the window width and position them dynamically.

Pulsating Materials Interactions & Pop-up Modals

For events, the awesome thing about R3F is that it has built-in events for mesh objects. All I did was use those events to trigger my modal opening through a global store with Zustand. I then disabled my camera logic based on whether the isModalOpen global state is true.

11.8 Modal global store with Zustand.
11.9 For each painting call a callback function that updates modal store.
11.10 Experience disabling camera movement if isModalOpen global store is true.
11.11 Conditionally rendering modal component based on isModalOpen.
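A minimal sketch of that store-plus-events setup is below. The field and component names are illustrative rather than the project’s actual code.

import { create } from 'zustand'

// Sketch of the modal store
export const useModalStore = create((set) => ({
  isModalOpen: false,
  activePainting: null,
  openModal: (painting) => set({ isModalOpen: true, activePainting: painting }),
  closeModal: () => set({ isModalOpen: false, activePainting: null }),
}))

// Inside the scene, R3F's built-in pointer events trigger the store
function Painting({ geometry, material, data }) {
  const openModal = useModalStore((state) => state.openModal)
  return (
    <mesh
      geometry={geometry}
      material={material}
      onClick={(e) => {
        e.stopPropagation()
        openModal(data)
      }}
    />
  )
}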

For the pulsating materials, I passed progress as a prop into my child component to determine when it starts pulsating, and I adjusted the color value based on a pulse intensity I update in my useFrame hook. I used a sin() function so that the value goes back and forth as time increases rather than always getting brighter.

11.12 Updating pulse intensity with a sin function on each frame.
11.13 Child component reacting to progress and pulseIntensity passed down.
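Here’s roughly what that looks like as a sketch; the colors, threshold, and prop names are placeholders.

import * as THREE from 'three'
import { useRef } from 'react'
import { useFrame } from '@react-three/fiber'

const baseColor = new THREE.Color('#ffffff')
const highlightColor = new THREE.Color('#ffd27f')

function PulsatingPainting({ progress, threshold = 0.6, ...meshProps }) {
  const materialRef = useRef()

  useFrame(({ clock }) => {
    if (!materialRef.current) return
    // sin() makes the intensity go back and forth instead of only growing
    const pulse = progress > threshold
      ? (Math.sin(clock.getElapsedTime() * 3) + 1) / 2
      : 0
    materialRef.current.color.copy(baseColor).lerp(highlightColor, pulse)
  })

  return (
    <mesh {...meshProps}>
      <meshBasicMaterial ref={materialRef} />
    </mesh>
  )
}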

Audio System with Howler.js

To handle audio, I created an audio system with Howler that exports a bunch of utility functions I can use throughout my code. I’m using Howler because it takes care of audio compatibility issues across different browsers for me instead of me having to handle that myself. However, because the main audio files are .ogg, we need to update the path to use the .mp3 files when we’re on an iOS/macOS device using Safari. I created a utility function to help with this.

11.14 Platform detector utility function.
11.15 My audio system created with Howler.
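A simplified sketch of the idea: the detection method, file paths, and helper names here are mine, not the project’s exact utilities.

import { Howl } from 'howler'

// Pick .mp3 when .ogg isn't supported (Safari / iOS)
const canPlayOgg = () => {
  const audio = document.createElement('audio')
  return !!audio.canPlayType && audio.canPlayType('audio/ogg; codecs="vorbis"') !== ''
}

const ext = canPlayOgg() ? 'ogg' : 'mp3'
const sounds = {}

// Tiny "audio system": load by name, play by name
export function loadSound(name, { loop = false, volume = 1 } = {}) {
  sounds[name] = new Howl({ src: [`/audio/${name}.${ext}`], loop, volume })
  return sounds[name]
}

export function playSound(name) {
  if (sounds[name]) sounds[name].play()
}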

Loading Screen, Chunked Loading, & isExperienceReady Global Store

For the loading screen, I typically use useProgress, which handles everything for you by giving you a load percentage until everything is loaded. For this project I didn’t, because I used chunked loading with multiple <Suspense> components, which caused the useProgress hook to go from 0 to 100 for every chunk. I just settled for a very basic global progress value that adjusts the loading bar width based on the number of chunks loaded, so you’ll see it jump from 25 to 50 to 75 to 100, which doesn’t look great but gets the job done. In this case, I manually decided which models go into each chunk rather than splitting them for any systematic reason.

While R3F has built-in handling that helps avoid crashes when loading a bunch of models (compared to vanilla Three.js, where you have to write more JS code to handle that issue, which typically shows up on mobile iOS devices), it’s still generally best practice to adjust loading to what your project needs. Since I’m loading a lot of models, I decided to split the loading into chunks as shown in the image below. My original intention was to only load certain models at certain points, but I realized that I didn’t want to do that. This is actually not necessary for this project and I prematurely optimized. Regardless, the principle applies: you generally don’t want too many concurrent loads or a high initial load before the user sees something.

11.16 Splitting up models into chunks to be loaded.
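The chunked-progress idea could look something like this as a sketch; the chunk components and the total count are placeholders.

import { Suspense, useEffect } from 'react'
import { create } from 'zustand'

const TOTAL_CHUNKS = 4

// Loading bar width would be (chunksLoaded / TOTAL_CHUNKS) * 100 + '%'
export const useLoadingStore = create((set) => ({
  chunksLoaded: 0,
  chunkLoaded: () => set((state) => ({ chunksLoaded: state.chunksLoaded + 1 })),
}))

// Placeholder chunk components; in the real project these would be the
// gltfjsx-generated model components grouped into chunks
const MuseumChunk = () => null
const EnvironmentChunk = () => null

// Reports its chunk as loaded once its children have resolved
function ChunkTracker({ children }) {
  const chunkLoaded = useLoadingStore((state) => state.chunkLoaded)
  useEffect(() => {
    chunkLoaded()
  }, [chunkLoaded])
  return children
}

function SceneChunks() {
  return (
    <>
      <Suspense fallback={null}>
        <ChunkTracker>
          <MuseumChunk />
        </ChunkTracker>
      </Suspense>
      <Suspense fallback={null}>
        <ChunkTracker>
          <EnvironmentChunk />
        </ChunkTracker>
      </Suspense>
      {/* ...two more chunks, giving the 25 / 50 / 75 / 100 jumps */}
    </>
  )
}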

There are a lot of other ways to chunk load; for example, after hitting the enter button I could load the far-background paintings chunk afterwards, or load a different chunk based on my scroll progress. This gives the user a much quicker initial load time, and they can see something first. Remember when I talked about the trade-off zone between development time and loading time? Now I’m revisiting it here. I eventually realized I didn’t need chunked loading: the initial load time saved by deferring models wasn’t worth it, and I decided I wanted the user to see everything up front anyway.

Chunk loading is often used synonymously with the term lazy loading, and honestly you can extend the idea further to LODs, or to how you only load assets as the URL updates on your website. As long as you get the concept and the person you’re communicating with is on the same page, the exact terminology doesn’t really matter.

Another important thing here is a global store I often use in immersive experiences that I call “isExperienceReady”; it’s a boolean value that your application can react to. For example, the positional audio I put on the fires needed to play after the user clicks the enter button, not autoplay. Additionally, if you tap the enter button on a touch device, it updates the camera position before the loading screen animates out, so I only set isExperienceReady after the button is clicked. In some applications, once the progress value hits 100 I’ll set isExperienceReady to true automatically, which updates my other components, or I’ll have two flags: one that reacts to automatically hitting 100 and another for the button click. It obviously depends on your use case, but I almost always have some sort of global store allowing my components to react to when the 3D experience is ready and done loading.

11.17 Loading screen sets experience ready to true when I click the enter world button. Also notice me playing audio from my audio system I created in the previous section.
11.18 Fire.jsx has a useEffect that reacts to isExperienceReady to start playing the positional audio sounds.
11.19 Experience.jsx event handlers are not going to be added until user clicks on the enter button. See the guard clause as the first line in the useEffect.
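A minimal sketch of the isExperienceReady pattern is below; the store fields are placeholders, and the audio helper is the hypothetical one from the Howler sketch earlier.

import { useEffect } from 'react'
import { create } from 'zustand'
import { loadSound, playSound } from './audioSystem' // hypothetical module from the sketch above

export const useExperienceStore = create((set) => ({
  isExperienceReady: false,
  setExperienceReady: () => set({ isExperienceReady: true }),
}))

// Loading screen: flip the flag when the enter button is clicked
function EnterButton() {
  const setExperienceReady = useExperienceStore((s) => s.setExperienceReady)
  return <button onClick={setExperienceReady}>Enter world</button>
}

// Fire component: only start its audio once the flag is true
function FireAudio() {
  const isExperienceReady = useExperienceStore((s) => s.isExperienceReady)
  useEffect(() => {
    if (!isExperienceReady) return
    loadSound('fire-crackle', { loop: true, volume: 0.5 })
    playSound('fire-crackle')
  }, [isExperienceReady])
  return null
}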

Handling Bad Performance, Optimization, & Mobile Devices

Typically for immersive experiences you’ll have some sort of difference on mobile devices. For one, the camera position was too close on mobile and I couldn’t see a lot of things. Therefore, I simply checked window.innerWidth and made some adjustments manually. For example, my curve path starts further back from the museum if window.innerWidth is less than 764px, so that more is visible to the left and right. There are some other changes I made, but hopefully this gives a general idea.

11.20 Adjusting the curve based on screen width.
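As a tiny sketch of that kind of one-time check (only the 764px breakpoint comes from the text above; the curve points themselves are made up):

import * as THREE from 'three'

// One-time check, matching the "breaks on resize" caveat below
const isSmallScreen = window.innerWidth < 764

const cameraCurve = new THREE.CatmullRomCurve3([
  new THREE.Vector3(0, 2, isSmallScreen ? 14 : 10), // start further back on mobile
  new THREE.Vector3(0, 2, 4),
  new THREE.Vector3(2, 2, -2),
  new THREE.Vector3(0, 2, -8),
])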

I usually like to have a global store with a resize event listener that checks whether we’re on mobile and adjusts things dynamically on resize, but in this case I just set up a single experience based on a first-time check, so the experience will break on resize. You might see some redundant checks scattered throughout the code, but it was much quicker to copy and paste than to move them into a global store. You can see an example of a global-store version that updates a 3D experience here so it doesn’t break on resize, and I made a step-by-step tutorial for it on YouTube if you’re interested!

Another really common method is to reduce the number of instances or the detail in shaders (among many other things) based on the framerate a user is experiencing. I didn’t do that for this project because I tested it on an iPhone 12 and it ran fairly smoothly. You can check out Jesse Zhou’s case study for an example of that and view his code! Shoutout to Jesse for making it open source!

Good to Knows & Other Information

Here’s a list of random things that I think are good to know or that helped me during the coding process:

  • Because the entire Three.js experience lives inside a canvas HTML element, I like to think of my 3D canvas world and non-canvas world as separate areas in my brain and structure the code accordingly. It doesn’t mean you shouldn’t merge non-canvas HTML stuff with your 3D canvas world, as oftentimes you will, but it helps me get my thoughts together while coding. It’s the same as when you work with a physics library like Rapier: there’s a physics world and your actual world, and you communicate between the two. Realizing there are separate “worlds” communicating with each other reduces the overwhelming feeling and lets me focus on each world when I need to, as well as on the communication between them.
  • Make sure to set the renderOrder property on your transparent materials, otherwise you might see them popping in front of and behind each other. You can see my tree material’s renderOrder is set to 2 whereas my fire’s is set to 1; in other words, the renderer knows to render the fire first and the trees after (see the sketch after this list).
  • You may also have to set depthTest and depthWrite values to false if your material is appearing through other objects. For this project, I set the fire shader material depthWrite to false.
  • For testing, I used BrowserStack. They’ve got a ton of devices you can test on. Of course, if you can afford it, it’s better to buy your own devices, but BrowserStack usually works for me. I actually didn’t test on BrowserStack at first, and someone on Discord let me know my website wasn’t working on his iOS device, which ran a different version than the iOS device I had tested with my friend. In other words, don’t overlook testing. I try to test lower-end devices and especially mobile iOS devices with the Safari browser, as Safari and iOS are typically known for having more issues with WebGL and other 3D web content than other commonly used browsers and devices. The most common issue I see is a Three.js experience loading too much at once, whether by loading one asset right after another for many assets or by having too many concurrent loads, leading to the “WebGL: context lost” error screen. You can diagnose both of these issues using the Network tab of your browser’s built-in developer tools (most browsers have some sort of built-in developer tools).
  • You might have to set touch-action to “none” in your CSS. What this essentially means is that you let your JavaScript do all the work when handling touch events. This helps because default browser touch behaviors like zooming can interfere with your custom JS code or with the JS Three.js provides, like OrbitControls.
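Here’s a small sketch of the renderOrder / depthWrite tweaks mentioned in the list above; the mesh names are placeholders standing in for the real objects in the scene graph.

// Sketch of the transparency ordering tweaks
function applyTransparencyFixes(fireMesh, treeMesh) {
  fireMesh.renderOrder = 1   // fire is rendered first...
  treeMesh.renderOrder = 2   // ...then the transparent trees on top of it

  // Keep the flames from writing depth and occluding things behind them
  fireMesh.material.depthWrite = false
}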

12. “It’s not perfect, but it’s cool” – My mistakes

When I complete a project, I like to tear myself to shreds just so I know what to look out for next time. I was a perfectionist for a really long time until I started embracing the mindset that it doesn’t have to be perfect to be cool, and ever since then I’ve been a lot happier. This section covers some of the mistakes and/or things I would change or look out for next time. Pretending to be someone else and obliterating my own work is honestly part of my creative process too. Of course I always ask friends and others for feedback as well, but I like to roast myself first.

Of course there are a lot of things I wanted to add, like better fonts, better UI, shader UI transitions and animations, burning 3D shader text for an intro on scroll, custom volumetric fog/mist, flickering lights on the fires for more realism, a flying dragon, an interactive book, etc. But this section is less about things I wanted to add and more about things I can simply do right or better the first time through next time, rather than only noticing them in retrospect. These are generally quick or intuitive changes that have a fairly large impact on the overall experience and that I could have done better.

12.1 The center pole and the bottom arch thing aren’t centered with each other, and it looks a bit strange. The arches and pillars are also way off haha. Luckily, this is somewhat disguised by the camera angle, giving the illusion that it’s centered even when it’s not.
12.2 If you look at the torches in the back (left) versus the front (right), you’ll notice that the back ones have light underneath them. That’s because I deleted the faces on the bottom of those torches, so the point light was no longer blocked. I forgot to delete those faces for the front ones, so this is the result after baking.
12.3 The overall scene composition is a bit blocky and has leading lines that focus on the back wall of the mountain rather than the museum, which missed the effect I wanted. This is less of a real issue and more of a time issue, although now I know what to keep in mind when planning ahead to avoid compositional issues. I didn’t have much flexibility with camera positions and rotations given that I had set in stone that I wanted a front-facing view. Although many times during the project I considered starting at different angles for a more epic effect, I decided to save time, went with the standard front-facing view, and forgot to re-check the composition after changes. I would also render out a more dynamically shaped background image rather than a rectangular one.
12.4 The waterfall image is blue in the top half and not blue in the bottom half, so it looks a bit odd if you pay close attention to the waterfall as it moves from blue to not blue and vice versa. I wasn’t thinking about the lighting affecting the object when I was rendering.
12.5 I forgot to make the texture seamless, so I had to mirror it in the shader to make it appear seamless. Now it just looks duplicated, but it looks a lot better than when it wasn’t seamless.
12.6 When I split the front wall from the back wall, I forgot to unwrap them together first before splitting, which caused this noticeable seam. I should have prepared the mapping first and then split them afterwards. This occurs in several areas throughout the experience.
12.7 This paper is too square and kind of feels out of place for a rugged/ancient vibe. I should’ve added some distortion/burns to it. Additionally, the paper texture I generated with Tech Lagoon didn’t seem to show that well and honestly I probably could have bypassed that step and just made the paper texture in Blender instead.
12.8 The torches against the lighter background have an odd coloring and don’t feel realistic. I also forgot to put anything in the torches to burn. Luckily, you could assume something way down deep in the torch is burning instead.
12.9 The flames on the torches can look like they’re in the wrong positions depending on the rotation and position of your camera. In reality, they are exactly where they should be; it’s the way the shader is coded that causes this effect. If you look at other websites with fire shaders, you’ll notice this issue quite often. The good thing is that this is a very minor problem that is rarely noticed. The easy fix would be to update the flame position based on your camera rotation and position, or in this case, since I have a curve, to adjust it based on my progress value along the curve. The added mouse offset and rotation doesn’t really make the fire appear offset, which is nice, but it would if the offset effect were stronger.
12.10 I could have instanced a lot of things that I didn’t, for example the arches in the back, the torches inside the museum, the fire braziers, or the roof decorations. I chose this lighting setup so I could do less work (and that worked), but I could have also used more instancing for better optimization, as many of these objects have nearly identical lighting, just like the trees that were instanced and look fine in different areas.
12.11 The proportions of some objects are off. If you look at images 8.5 and 8.6 earlier in this article, you’ll see how big the torch and other objects in the world are compared to the person. That means the braziers, lectern, and paintings are also quite large. This makes the entire museum seem much smaller than it actually is, and it doesn’t feel as epic as I wanted it to. In the end, I relied way too much on visual fidelity rather than a clear, guided art direction. As you can tell from the earlier sections, I didn’t have much of a planning phase for this project, and now the results of that lack of planning are showing haha.
12.12 I didn’t account for the scale of the background images. Since I scaled them up by around 200, the rocks in the image texture now look huge, which is a bit unrealistic. Next time I should scale up the UVs so the texture tiles more, to accommodate the scaling. I did briefly position them in Blender and it looked nice, but in Three.js I didn’t account for the fog and camera offset/rotation, so I had to make manual adjustments, which ended up making me scale them by a huge factor. I could just tile the texture in Three.js, but honestly it’s probably easier and better to make that adjustment beforehand in Blender.
12.13 On mobile, tapping updates the camera rotation, so every time you tap on a painting or close it, your camera updates and it feels a bit abrupt. Normally you’d expect the camera to stay the same or return to a previous position. The pulsating on the paintings is also abrupt when it starts or stops flashing, because of how I coded it based on time rather than on enter/exit.
12.14 After joining objects, I overrode one of my existing UV maps, which caused this really large dark spot on the front roof cover, and it feels a bit odd.
12.15 I also accidentally had a color temperature setting enabled on my monitor and didn’t notice the color difference until it was too late, so the baked lighting looks a lot more yellow than the flames. I can’t adjust the flame color to match it, otherwise it’d look too unrealistic for a fire. Hopefully suspension of disbelief kicks in and it reads as the wall material blending with the orange fire to produce that color.

While I definitely sound harsh and maybe even toxic towards myself and I’m sure I could list a lot more issues out, I’m actually really really happy with the result. In fact, I’m dancing around my room in my pajamas at how cool this project turned out to be and it was honestly way better than I expected. I’ll probably be dancing for a while because of this project! This section is mainly just to give myself areas of improvement for next time and to poke some fun at myself.

Conclusion

I hope this article provides a better understanding of how immersive world experiences are created with Blender and Three.js! Of course, like I mentioned before, there are a lot of approaches other than the ones I covered in this article, but I really hope it helped out!

I also really hope that, in the end, you can tell the entire process was a chaotic mess, that there are trade-offs and the project was far from perfect. But, it doesn’t have to be perfect to be cool, and that’s why I love what I do so much. Creating cool stuff is about having fun! Yes, you might get frustrated at times, but I never lost sight of why I wanted to get into this field in the first place, to create cool things that make other people happy. To make something emotionally meaningful. To feel like I can do anything and everything I have ever wanted to do (or want to do).

Whatever your reason is for getting into this field, I hope you don’t lose sight of it either and find moments of joy and fun as you create cool things, because at the end of the day that’s all it really is about: having fun and doing something you love.

Huge shoutout to Codrops for letting me have this opportunity, and I hope the Codrops museum is something you visit here and there for moments of joy sometimes (and of course the Codrops website as well)!

As always, reach out anytime if you have any questions! And if you make something from this, please do tag me or send me a message! I would love to see it and I’m sure many others would as well, so if you’re comfortable, definitely share it online too!

Thanks so much for reading and take care 😊

Andrew Woan

Heyo! 👋 I'm just a random guy who loves cute things, teaching, and creating things that make other people happy! I also have a panda plushie named Mr. Panda that I carry with me on all my adventures! Message me if you wanna see a pic of Mr. Panda!!! 🐼❤️ And like also totally feel free to msg me pics of cute things too 🥰!
