Editor’s note: If you’ve been around the world of web graphics, you probably know Hector Arellano, a.k.a Hat—a developer who’s spent years pushing the limits of what’s possible in the browser. We invited him to share his journey, a 13-year adventure through fluid simulations, from the early days of WebGL hacks to the breakthroughs enabled by WebGPU. This is more than a technical deep dive—it’s a story of persistence, experimentation, and the evolution of graphics on the web.
Note that the demo relies on WebGPU, which is not supported in all browsers. Please ensure you’re using a WebGPU-compatible browser, such as the latest versions of Google Chrome or Microsoft Edge with WebGPU enabled.
And now, here’s Hector to tell his story.
Before you start reading… go and get a drink, this is long.
13 years ago…
I was in front of my computer staring at the screen, minding my own business (bored), when a great friend of mine called Felix told me, very serious and excited, that there was a new demo from the Gathering party being released. It had fluid simulations, particle animations, amazing shading solutions and, above all… it was beautiful, really beautiful.
Back then, WebGL was relatively new, delivering 3D-accelerated graphics to the browser, and it seemed like it would open many doors for creating compelling effects. Naively, I thought WebGL could be a great candidate for making something similar to the demo Felix had just shown me.
The issue was that when I started reading about how that demo was made, I faced a harsh truth—it had graphics API features I had never heard of: “Atomics,” “Indirect Draw Calls,” “Indirect Dispatch,” “Storage Buffers,” “Compute Shaders,” “3D Textures.” These were features of a modern graphics API, but none of those advanced capabilities existed in WebGL.
Not only that, but the demo also used algorithms and techniques that sounded incredibly complex—”Smoothed Particle Hydrodynamics (SPH)” to drive particle animations, “histopyramids” for stream compaction (why would I need that?), “marching cubes (on the GPU)” (triangles from particles???), and many other features that seemed completely beyond my understanding.
I didn’t know where to start, and to make matters worse, Felix bet me that there was no way fluids like that could be done in the browser for production.
10 Years Ago…
Three years had passed since my conversation with Felix about fluid simulation, and he told me there was yet another amazing demo I had to watch. Not only did it feature the previous fluid simulations, but it also rendered the geometries using a real-time ray tracer—the materials were impressive, and the results were stunning.
The demo showed ray tracing in a way that looked so real. Of course, now the challenge was not only to simulate fluids; I also wanted to render them with a ray tracer to get those nice reflection and refraction effects.
It took me around 3 years to understand everything and to hack my way around WebGL to replicate the things that could be done with a modern API. Performance was a limiting factor, but I was able to run particle simulations using SPH, which behaved like fluids, and I was also able to create a mesh from those particles using marching cubes (see figure 1).
There were no atomics, but I could separate the data into the RGBA channels of a texture using multiple draw calls; there were no storage buffers or 3D textures, but I could save data in textures and emulate 3D textures using 2D layers. There were no indirect draw calls, but I could just launch an expected amount of draw calls to generate the necessary information, or to draw the expected amount of triangles. There were no compute shaders, but I could do GPGPU computing using the vertex shader to reallocate data; I couldn’t write to multiple memory positions inside a buffer, but at least I was able to generate an acceleration structure on the GPU.
The implementation was working, but it was not remotely as beautiful as that original demo (Felix told me it was simply “ugly”… it was ugly, you can see the results in figure 2); in fact it was just showing how to hack things. I didn’t know much about distance fields or how to make the shading more interesting than the usual Phong shading.
The performance limited much of what could be done in terms of ambient occlusion or more complex effects to render fluids, like reflections or refractions, but at least I could render something.
7 Years Ago…
Three more years passed and I made some progress, implementing a hybrid ray tracer too; the idea was to use the marching cubes to generate the triangles and then use the ray tracer to evaluate secondary rays for reflection and refraction effects. I was also able to use the same ray tracer to traverse the acceleration structure and implement caustic effects. All of this followed the ideas and concepts from Matt Swoboda, the original creator of those previous demos. Actually, most of my work was basically to take his ideas and try to make them work in WebGL (good luck with that).
Results were nice visually (take a look at figure 3), but I needed a really good GPU to make it work. Back then I was working with an NVIDIA GTX 1080, which meant that even if it was feasible in WebGL, it was not going to be usable for production. There was no way a mobile device, or even a decent laptop, was going to handle it.
It was really frustrating to see “results” but not be able to use them in a real project. In fact, my morale was low—I had spent so much time trying to achieve something, but it didn’t turn out as I had hoped. At the very least, I could use that codebase to continue learning new features and techniques.
So I stopped… And Felix won the bet.
This is a really long introduction for a tutorial, but I wanted to put things in context. Sometimes you might think that a demo or effect can be done fairly quickly, but the reality is that some things take years to even become feasible; it takes time to learn all the desired techniques, and you might rely on the ideas of other people to make things work out… or not.
WebGPU Enters the Scene
Remember all those fancy words and methods from the modern graphics API? Turns out that WebGPU is based on modern API standards, which means that I didn’t have to rely on hacks to implement all the ideas from Matt Swoboda. I could use compute shaders to interact with storage buffers, I could use atomics to save indices for neighbourhood search and stream compaction, and I could use indirect dispatch to calculate just the necessary amount of triangles and also render them.
I wanted to learn WebGPU and decided to port all the work from the fluids to understand the new paradigm; making a small demo could help me learn how to work with the new features, how to deal with the pipelines and bindings, and how to handle memory and manage resources. It might not work for production, but it would help me learn WebGPU at a deeper level.
In fact… the demo for this article is not suitable for “production”; it might work at 120 fps on good MacBook Pro machines like the M3 Max, it can work at 60 fps on a MacBook Pro M1 Pro, and it can render at 50 fps on other nice machines… Put this thing on a MacBook Air and your dreams of fluid simulation will fade very quickly.
So why is this useful then?
Turns out that this simple example is a collection of techniques that can be used on their own; this just happens to be a wrapper that uses all of them. As a developer you might be interested in animating particles, or generating a surface from a potential to avoid ray marching, or being able to render indirect lighting or world space ambient occlusion. The key is to take the code from the repository, read this article, and take the parts you are interested in for your project to build your own ideas.
This demo can be separated into 4 major steps:
- Fluid simulation: this step is responsible for driving the fluid animation using particle simulations based on position based dynamics.
- Geometry generation: this step creates the rendering geometry (triangles) from the particle simulation using the marching cubes algorithm on the GPU.
- Geometry rendering: this step renders the triangles generated in the previous step; the displayed material uses distance fields to evaluate the thickness of the geometry for subsurface scattering, and voxel cone tracing to calculate the ambient occlusion.
- Composition: this step is responsible for creating the blurred reflections on the floor, implementing the color correction, and applying the bloom effect used to enhance the lighting.
Fluid Simulations
Many years ago, if you wanted to be part of the cool kids doing graphics, you had to show that you could make your own fluid simulation. If you made simulations in 2D you were considered a really good graphics developer… if you made them in 3D you gained “god status” (please take into consideration that all of this happened inside my head). Since I wanted that “god status” (and wanted to win the bet) I started reading all I could about how to make 3D simulations.
Turns out that there are many ways to do it; among them there was one called “Smoothed Particle Hydrodynamics” (SPH). Of course, I could have done the rational thing and checked which type of simulation would be more suitable for the web, but I took this path because the name sounded so cool in my head. This methodology works over particles, which turned out to be really beneficial in the long term, because I ended up switching the SPH algorithm for position based dynamics.
You can understand SPH using some analogies if you have worked with steering behaviours before.
Turns out that Three.js has many amazing examples of the flocking algorithm, which is based on steering behaviours. Flocking is the result of integrating the attraction, alignment and repulsion steering behaviours. These behaviours are blended with cosine functions, which decide the type of behaviour each particle will receive based on its distance to the surrounding particles.
SPH works in a similar fashion: you evaluate the density of each particle and use this value to calculate the pressure applied to it. The pressure effect can be considered like the attraction / repulsion effects from flocking, meaning that it makes the particles move closer or farther apart depending on the density.
The interesting thing is that the density of each particle is a function of the distance to the surrounding particles, which means that the pressure applied is an indirect function of distance. This is why the two types of simulations can be considered “similar”. SPH has a unified pressure effect, which modifies positions, based on densities that rely on distances. The flocking simulation relies on attraction and repulsion behaviours, used to modify positions, which are functions of distance too.
The viscosity term in the SPH simulation is analogous to the alignment term from flocking: both steps align the velocity of each particle to the velocity of its surroundings, which is basically checking the difference between the average velocity field and the velocity of the particle being evaluated.
So, to (over)simplify things, you can think of SPH as a way to set up flocking with physically correct values so those behaviours make your particles behave like… fluids. It’s true that it would require additional steps like surface tension, and I’m leaving out the concept of blending functions in SPH, but if you can make flocking work… you can make SPH work too.
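To make the analogy a bit more concrete, here is a minimal WGSL sketch of the density / pressure idea in its naive O(n^2) form. This is not the shader from the repository; the poly6 kernel, the constants and the buffer names are just illustrative:

```wgsl
// Conceptual sketch of the SPH density / pressure step (not the repository code).
// Density uses the classic poly6 kernel; pressure is a simple equation of state.

const PI : f32 = 3.14159265;
const H : f32 = 0.1;               // smoothing radius (illustrative value)
const REST_DENSITY : f32 = 1000.0;
const STIFFNESS : f32 = 200.0;     // equation-of-state constant
const MASS : f32 = 0.02;

fn poly6(r2 : f32) -> f32 {
    let h2 = H * H;
    if (r2 >= h2) { return 0.0; }
    let d = h2 - r2;
    return 315.0 / (64.0 * PI * pow(H, 9.0)) * d * d * d;
}

@group(0) @binding(0) var<storage, read> positions : array<vec4<f32>>;
@group(0) @binding(1) var<storage, read_write> densities : array<f32>;
@group(0) @binding(2) var<storage, read_write> pressures : array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let i = id.x;
    if (i >= arrayLength(&positions)) { return; }

    // Naive O(n^2) loop: every particle contributes to the density of particle i.
    var density = 0.0;
    for (var j = 0u; j < arrayLength(&positions); j++) {
        let r = positions[i].xyz - positions[j].xyz;
        density += MASS * poly6(dot(r, r));
    }
    densities[i] = density;

    // Pressure grows when the local density exceeds the rest density,
    // which is what pushes particles apart (the "repulsion" side of the analogy).
    pressures[i] = STIFFNESS * max(density - REST_DENSITY, 0.0);
}
```

The naive inner loop is exactly the O(n^2) problem discussed below, and it is what the acceleration structure is there to fix.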
Another thing to consider from the flocking algorithm is that it has O(n^2) complexity, meaning that it gets really slow when dealing with a lot of particles, since each particle needs to check its relationship with all the other particles from the simulation.
Flocking and SPH need an acceleration structure that lets each particle gather only the closest particles inside a range; this avoids checking all the particles in the simulation and brings the complexity down from O(n^2) to O(k*n), where k is the amount of particles to check. This acceleration can be done using a regular voxel grid which stores up to four particles inside each voxel.
The algorithm can check up to 108 particles, evaluating up to four of them for each of the 27 voxels surrounding the particle to update. This might sound like a lot of particles to evaluate, but it’s much better than evaluating the original 80,000 particles used in this example.
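In WGSL the gather looks roughly like this; the grid layout (four u32 slots per voxel, with indices stored +1 so 0 means empty) is an assumption of the sketch, not necessarily how the repository packs it:

```wgsl
// Accelerated neighbourhood gather: 27 voxels with up to 4 slots each, so at most
// 108 candidates instead of every particle in the simulation.

const GRID_SIZE : u32 = 128u;
const SLOTS_PER_VOXEL : u32 = 4u;
const H : f32 = 0.01;   // interaction radius (illustrative)

@group(0) @binding(0) var<storage, read> positions : array<vec4<f32>>;
@group(0) @binding(1) var<storage, read> gridIndices : array<u32>;
@group(0) @binding(2) var<storage, read_write> densities : array<f32>;

fn voxelIndex(v : vec3<i32>) -> u32 {
    return (u32(v.z) * GRID_SIZE + u32(v.y)) * GRID_SIZE + u32(v.x);
}

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let i = id.x;
    if (i >= arrayLength(&positions)) { return; }
    let p = positions[i].xyz;
    let cell = vec3<i32>(clamp(p, vec3<f32>(0.0), vec3<f32>(0.999)) * f32(GRID_SIZE));

    var density = 0.0;
    for (var z = -1; z <= 1; z++) {
    for (var y = -1; y <= 1; y++) {
    for (var x = -1; x <= 1; x++) {
        let v = cell + vec3<i32>(x, y, z);
        if (any(v < vec3<i32>(0)) || any(v >= vec3<i32>(i32(GRID_SIZE)))) { continue; }
        let base = voxelIndex(v) * SLOTS_PER_VOXEL;
        for (var s = 0u; s < SLOTS_PER_VOXEL; s++) {
            let stored = gridIndices[base + s];
            if (stored == 0u || stored - 1u == i) { continue; }   // 0 = empty, indices stored +1
            let r = p - positions[stored - 1u].xyz;
            let d2 = dot(r, r);
            if (d2 < H * H) {
                density += H * H - d2;   // any smoothing kernel would go here
            }
        }
    }}}
    densities[i] = density;
}
```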
Traversing the neighbourhood can be pretty expensive, and the SPH algorithm requires multiple passes over all the particles: you need to calculate the density for all the particles, then the pressure and displacement for all of them; another pass is required for the viscosity and a fourth pass for the surface tension… Performance becomes something to consider when you realise that you might be using all the processing power of the GPU just to drive the particles.
SPH also requires a lot of tweaking, just like the flocking algorithm, and the tweaking has to be done using physically correct parameters to make something visually compelling. You end up trying to understand many engineering parameters, which makes things hard, sometimes really hard.
Luckily, NVIDIA released a different approach for physics dynamics over particles called Position Based Dynamics. This is a group of different particle simulations, which include (among others):
- rigid body
- soft body deformations with shape matching
- fluid simulations
- particle collisions
Position based dynamics modify the particles’ positions using constraints, which are the limitations that govern the movement of each particle for each type of simulation. The results are very stable and much easier to tweak than SPH, which made me switch from SPH to PBF (position based fluids). The concept is similar, but the main difference is that PBF relies on constraints to define the displacements for each particle instead of calculating densities.
PBF makes things easier since it removes many physical concepts and replaces them with dimensionless parameters (imagine the Reynolds number but way easier to understand).
There’s one caveat though… position based dynamics use an iterative method for every step, meaning that you would need to calculate the constraints, apply the displacements and calculate the viscosity more than twice to get a nice result. It’s more stable than SPH, but it’s actually slower. That being said… if you understand flocking, you understand SPH. And if you understand SPH, then you’ll find PBF a breeze.
Sadly, I don’t want to render just particles, I want to render a mesh, which requires using the GPU to calculate the triangles and render them accordingly. This means I don’t have the luxury of using the GPU to calculate multiple steps in an iterative fashion… I needed to cut corners and simplify the simulation.
Luckily, position based dynamics offer a very cheap way to evaluate particle collisions; it only requires a single pass once you apply the desired forces to the particles. So I decided to use gravity as the main force, add curl noise as a secondary force to give the particles some fluidity, include a very strong repulsion force driven by the mouse, and let the collisions do the magic.
The curl and the gravity provide the desired “fluid effect”, and the collisions prevent the particles from grouping into weird clusters. It’s not as good as PBF, but it’s much faster to calculate. The next video shows a demo of the resulting effect.
The implementation only requires a single pass to apply all the desired forces to the particles; this pass is also responsible for generating the grid acceleration structure inside a storage buffer. Atomics are used to write the desired particle index into each memory address, which only requires a few lines of code. You can read the implementation of the forces and the grid acceleration inside the PBF_applyForces.wgsl shader from the repository.
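Below is a hedged sketch of what such a pass can look like; the curl noise is left out and the buffer layout and uniform names are assumptions, so treat it as the shape of PBF_applyForces.wgsl rather than its actual code:

```wgsl
// Sketch of a "forces + grid build" pass in the spirit of PBF_applyForces.wgsl.
// Curl noise is omitted; layouts, names and constants are assumptions.

const GRID_SIZE : u32 = 128u;
const SLOTS_PER_VOXEL : u32 = 4u;
const DT : f32 = 0.016;

struct Uniforms {
    gravity : vec3<f32>,
    mouse : vec3<f32>,
    mouseStrength : f32,
};

@group(0) @binding(0) var<uniform> params : Uniforms;
@group(0) @binding(1) var<storage, read_write> positions : array<vec4<f32>>;
@group(0) @binding(2) var<storage, read_write> velocities : array<vec4<f32>>;
@group(0) @binding(3) var<storage, read_write> gridCounters : array<atomic<u32>>;
@group(0) @binding(4) var<storage, read_write> gridIndices : array<u32>;

fn voxelIndex(v : vec3<u32>) -> u32 {
    return (v.z * GRID_SIZE + v.y) * GRID_SIZE + v.x;
}

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let i = id.x;
    if (i >= arrayLength(&positions)) { return; }

    // Gravity plus the mouse repulsion; curl noise would be added here as well.
    var velocity = velocities[i].xyz + params.gravity * DT;
    let toMouse = positions[i].xyz - params.mouse;
    velocity += normalize(toMouse) * params.mouseStrength / (1.0 + dot(toMouse, toMouse));
    let predicted = positions[i].xyz + velocity * DT;

    velocities[i] = vec4<f32>(velocity, 0.0);
    positions[i] = vec4<f32>(predicted, 1.0);

    // Insert the particle into its voxel; the atomic counter hands out a free slot.
    // gridCounters is assumed to be cleared to zero before this pass runs.
    let cell = vec3<u32>(clamp(predicted, vec3<f32>(0.0), vec3<f32>(0.999)) * f32(GRID_SIZE));
    let voxel = voxelIndex(cell);
    let slot = atomicAdd(&gridCounters[voxel], 1u);
    if (slot < SLOTS_PER_VOXEL) {
        gridIndices[voxel * SLOTS_PER_VOXEL + slot] = i + 1u;   // store +1 so 0 means empty
    }
}
```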
The particle positions are updated using another shader called PBF_calculateDisplacements.wgsl; this shader is responsible for calculating the collisions by traversing the neighbourhood, and also for evaluating the collisions of the particles with the environment (the invisible bounding box).
The corresponding pipelines and bindings are defined inside the PBF.js module. The whole simulation uses only three shaders: the forces application, the displacement updates and finally the velocity integration, which is another part of position based dynamics. Once the positions are updated, the final velocities are calculated using the difference between the new position and the previous position.
This last shader, called PBF_integrateVelocity.wgsl, also sets up the 3D texture that contains all the particles, which will be used to calculate the potential field for the marching cubes algorithm.
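In WGSL terms the integration step boils down to something like this sketch (texture format and names are assumptions; the 3D texture is assumed to be cleared before the pass):

```wgsl
// Sketch of the velocity integration: the velocity is recovered from the difference
// between the corrected position and the position it had before the step, and the
// particle is splatted into the 3D texture used later to build the potential.

const DT : f32 = 0.016;
const GRID_SIZE : u32 = 128u;

@group(0) @binding(0) var<storage, read> previousPositions : array<vec4<f32>>;
@group(0) @binding(1) var<storage, read> positions : array<vec4<f32>>;
@group(0) @binding(2) var<storage, read_write> velocities : array<vec4<f32>>;
@group(0) @binding(3) var particlesTexture : texture_storage_3d<rgba16float, write>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let i = id.x;
    if (i >= arrayLength(&positions)) { return; }

    let p = positions[i].xyz;
    velocities[i] = vec4<f32>((p - previousPositions[i].xyz) / DT, 0.0);

    // Mark the voxel that contains this particle; the 3D blur later spreads these
    // marks into a smooth potential for the marching cubes.
    let voxel = vec3<i32>(clamp(p, vec3<f32>(0.0), vec3<f32>(0.999)) * f32(GRID_SIZE));
    textureStore(particlesTexture, voxel, vec4<f32>(1.0, 0.0, 0.0, 0.0));
}
```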
Marching Cubes (Geometry Generation)
The first time I got the particles working with SPH I got so excited that I spent a few days bragging about it in the office (well, just everywhere). It was an OK result but my ego was through the roof… Luckily I was working with Felix, who had just the right medicine for it: he knew that the only way for me to stop bragging was to start working again, so he pushed me to start implementing the surface generation to render the fluids as liquids, not just as particles.
I didn’t really know where to start; there are different options to render surfaces from particles, among them the following:
- Point Splatting
- Raymarching
- Marching Cubes
Point splatting is the easiest and fastest way to generate a surface from a particle field. It’s a screen space effect that renders the particles and uses separable blurs and depth information to generate the normals from the rendered particles. Results can be pretty convincing and you can make many effects, even caustics. To be honest, it is the best solution for real time.
Raymarching is very interesting in the sense that it allows complex effects like reflections and refractions with multiple bounces, but it’s really slow performance wise: you have to generate a distance field from the particles and then traverse that field, which required doing trilinear interpolation in software, since there were no 3D textures when I started working on it. And even with hardware trilinear interpolation the performance is not very good. It’s amazing visually, but not a great solution for real time.
Marching cubes did sound like an interesting approach; the idea is to generate a mesh from a potential field generated from the particles. The good part is that the mesh can be rasterised, which means that it can be rendered at high screen resolutions, and you can also use the mesh to create reflection effects “for free”, as in the current example. You include the mesh in the scene without worrying about how to integrate the result, unlike the previous two options.
Three.js does have some examples using marching cubes, but the surface is generated on the CPU while the particles’ data lives on the GPU, so I started reading Matt Swoboda’s presentation about how he managed to implement the marching cubes algorithm on the GPU. Sadly, there were many steps that I needed to understand.
How could I generate a potential from a particle field? What was he talking about when he mentioned indirect dispatch? How could I actually generate the triangles using the GPU? There were too many questions that kept me busy and freed Felix from listening to me bragging again.
Let’s talk about the different steps required for the complete implementation (you can read the marching cubes theory here). First of all, the marching cubes algorithm is a method to create an iso surface from a potential field, which means that the most important thing is to generate the required potential from the particles. The next step is to evaluate the potential over a voxel grid; the idea is to check the potential value at each voxel and use this value as an input to select one of the 256 possible triangle combinations defined by the marching cubes, which generate from 0 up to 5 triangles inside each voxel.
On the CPU this is straightforward, since you can place the active voxels inside an array and generate the triangles just for those voxels. On the GPU the voxels are scattered inside a 3D texture, so you can use atomics to reallocate them inside a storage buffer; all you have to do is increase a memory position index atomically to lay out all the information contiguously in the buffer. Finally, one last step uses the gathered voxel information to generate the required triangles.
With the roadmap defined let’s get deeper into each step.
Potential Generation
If you have read about the point splatting technique you’ll notice that a blur step is used to smooth the different points to generate some sort of screen space surface; this solution can also be used with a 3D texture to generate the potential. The idea is to simply apply a 3D blur, which results in a “poor man’s” distance field.
You could also use the jump flood algorithm to generate a more correct distance field from the particles, so let’s discuss the two options quickly to understand why the blur is a good solution.
The jump flood algorithm is a great method to calculate distance fields, even for particles; it is very precise in the sense that it shows the distance to each particle taken into consideration. It also seems to be more performant than applying a 3D blur over a 3D texture, but there’s one caveat that kept it from being the best solution… It is too good.
The result from this algorithm shows the distance to a group of spheres which get connected depending on the threshold used to define the iso surface, and this does not smooth the result in a pleasing way. You would need such a huge amount of particles that, even rendered as particles, it would already look like a surface, and if you’re in that scenario then it’s better to use point splatting.
The blurring, on the other hand, smooths and spreads the particles so they behave more like a surface, basically removing the high frequency detail of the particles; the result gets smoother with more blurring steps. It gives you more control over the final surface than the jump flood algorithm. Weirdly enough, this simple approach is actually faster too. You can also apply different blurring methods and combine the results to get different surfaces.
The blur implementation is done using a compute shader called Blur3D.wgsl, which is dispatched 3 times, once over each axis; the bindings and compute dispatch calls are defined inside the Blur3D.js file. I separated the potential generation into an isolated function since I wanted to study it and compare the jump flood results against the 3D blur results. This also allowed me to set up timestamp queries to check which solution would be more performant.
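One axis of such a separable blur can be sketched like this; the radius, the tent weights and the texture formats are assumptions, and the real Blur3D.wgsl is not necessarily structured this way:

```wgsl
// One axis of a separable 3D blur; the shader is dispatched once per axis with a
// different direction, ping-ponging between two textures.

const RADIUS : i32 = 3;

struct BlurParams {
    axis : vec3<i32>,   // (1,0,0), (0,1,0) or (0,0,1)
};

@group(0) @binding(0) var<uniform> params : BlurParams;
@group(0) @binding(1) var source : texture_3d<f32>;
@group(0) @binding(2) var destination : texture_storage_3d<rgba16float, write>;

@compute @workgroup_size(4, 4, 4)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dims = vec3<i32>(textureDimensions(source));
    let center = vec3<i32>(id);
    if (any(center >= dims)) { return; }

    var sum = vec4<f32>(0.0);
    var weight = 0.0;
    for (var o = -RADIUS; o <= RADIUS; o++) {
        let coord = clamp(center + params.axis * o, vec3<i32>(0), dims - vec3<i32>(1));
        let w = f32(RADIUS + 1 - abs(o));   // simple tent weights
        sum += textureLoad(source, coord, 0) * w;
        weight += w;
    }
    textureStore(destination, center, sum / weight);
}
```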
Checking Voxels
Once the potential is created, I use another compute shader to check which voxels will be responsible for the triangle generation. The repository has a compute shader called MarchCase.wgsl; this shader is dispatched over the whole voxel grid, signalling the voxels that need to generate triangles inside them. It uses atomics to allocate the 3D position of the voxel and the marching cubes case for that voxel contiguously inside a storage buffer.
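Conceptually the pass looks something like the following sketch; the iso level, the corner ordering and the buffer layout are illustrative, not the repository’s:

```wgsl
// Sketch of the march-case pass: every voxel whose marching cubes case produces
// triangles appends its position and case to a compact buffer using an atomic counter.

const ISO_LEVEL : f32 = 0.5;

@group(0) @binding(0) var potentialTexture : texture_3d<f32>;
@group(0) @binding(1) var<storage, read_write> activeVoxelCount : atomic<u32>;
@group(0) @binding(2) var<storage, read_write> activeVoxels : array<vec4<u32>>; // xyz = voxel, w = case

@compute @workgroup_size(4, 4, 4)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dims = textureDimensions(potentialTexture);
    if (any(id >= dims - vec3<u32>(1u))) { return; }

    // Build the 8-bit marching cubes case from the 8 corners of the voxel.
    var marchCase = 0u;
    for (var corner = 0u; corner < 8u; corner++) {
        let offset = vec3<u32>(corner & 1u, (corner >> 1u) & 1u, (corner >> 2u) & 1u);
        let value = textureLoad(potentialTexture, vec3<i32>(id + offset), 0).r;
        if (value > ISO_LEVEL) { marchCase |= 1u << corner; }
    }

    // Cases 0 and 255 generate no triangles, everything else gets compacted.
    // activeVoxelCount is assumed to be reset to zero before the pass.
    if (marchCase == 0u || marchCase == 255u) { return; }
    let slot = atomicAdd(&activeVoxelCount, 1u);
    activeVoxels[slot] = vec4<u32>(id, marchCase);
}
```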
The EncodeBuffer.wgsl compute shader reads the total amount of voxels from the previous step and sets up the indirect dispatch call with the amount of vertices to use for the triangle generation. It also encodes the indirect draw arguments with the amount of triangles to draw.
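The encoding itself is tiny. A sketch of the idea, assuming an upper bound of 15 vertices per voxel (marching cubes emits at most 5 triangles) and a workgroup size of 64, could look like this:

```wgsl
// Sketch of the encode pass: the voxel counter is turned into indirect dispatch
// arguments for the triangle generation and indirect draw arguments for rendering.

const VERTICES_PER_VOXEL : u32 = 15u;   // marching cubes emits at most 5 triangles per voxel
const WORKGROUP_SIZE : u32 = 64u;

@group(0) @binding(0) var<storage, read> activeVoxelCount : u32;
// dispatchWorkgroupsIndirect layout: workgroupCountX, workgroupCountY, workgroupCountZ
@group(0) @binding(1) var<storage, read_write> dispatchArgs : array<u32, 3>;
// drawIndirect layout: vertexCount, instanceCount, firstVertex, firstInstance
@group(0) @binding(2) var<storage, read_write> drawArgs : array<u32, 4>;

@compute @workgroup_size(1)
fn main() {
    let vertexCount = activeVoxelCount * VERTICES_PER_VOXEL;

    dispatchArgs[0] = (vertexCount + WORKGROUP_SIZE - 1u) / WORKGROUP_SIZE;
    dispatchArgs[1] = 1u;
    dispatchArgs[2] = 1u;

    drawArgs[0] = vertexCount;   // the CPU never reads this back,
    drawArgs[1] = 1u;            // the GPU decides how much geometry is drawn
    drawArgs[2] = 0u;
    drawArgs[3] = 0u;
}
```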
Triangles Generation
The shader responsible for this is called GenerateTriangles.wgsl; it uses the global invocation index of each thread to define the corresponding voxel and vertex to evaluate, and it is dispatched using the indirect dispatch command, which is set up with the buffer encoded by the EncodeBuffer.wgsl shader.
The voxel information is used in the shader to calculate linear interpolations between the corners of each edge of the voxel, placing the new vertex where the edge is crossed according to the march case. The normal is calculated as the linear interpolation of the gradient at each corner of the corresponding edge.
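The real shader also needs the classic triangle table to know which edges each march case cuts; the per-edge part of the work can be sketched like this (names and iso level are illustrative):

```wgsl
// The per-edge work: find where the potential crosses the iso level along the edge
// and interpolate the gradient at both corners to get the normal.

const ISO_LEVEL : f32 = 0.5;

@group(0) @binding(0) var potentialTexture : texture_3d<f32>;

struct SurfaceVertex {
    position : vec3<f32>,
    normal : vec3<f32>,
};

fn gradientAt(p : vec3<i32>) -> vec3<f32> {
    let dx = textureLoad(potentialTexture, p + vec3<i32>(1, 0, 0), 0).r
           - textureLoad(potentialTexture, p - vec3<i32>(1, 0, 0), 0).r;
    let dy = textureLoad(potentialTexture, p + vec3<i32>(0, 1, 0), 0).r
           - textureLoad(potentialTexture, p - vec3<i32>(0, 1, 0), 0).r;
    let dz = textureLoad(potentialTexture, p + vec3<i32>(0, 0, 1), 0).r
           - textureLoad(potentialTexture, p - vec3<i32>(0, 0, 1), 0).r;
    return vec3<f32>(dx, dy, dz);
}

fn vertexOnEdge(cornerA : vec3<i32>, cornerB : vec3<i32>) -> SurfaceVertex {
    let valueA = textureLoad(potentialTexture, cornerA, 0).r;
    let valueB = textureLoad(potentialTexture, cornerB, 0).r;
    // Where along the edge does the potential cross the iso level?
    let t = clamp((ISO_LEVEL - valueA) / (valueB - valueA), 0.0, 1.0);

    var v : SurfaceVertex;
    v.position = mix(vec3<f32>(cornerA), vec3<f32>(cornerB), t);
    v.normal = normalize(mix(gradientAt(cornerA), gradientAt(cornerB), t));
    return v;
}
```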
The different steps (potential generation, voxel retrieval and triangle generation) are defined inside the generateTriangles function in the TrianglesGenerator.js file. This function is called whenever the particle simulation is resolved and new positions are generated.
Rendering
One of the big mistakes I have made over the years is to think that the simulations or GPGPU techniques were more important than visual aesthetics; I was so concerned with showing off that I could make complex things that I didn’t pay attention to the final result.
Over the years Felix always tried to stop me before releasing a demo of the things I was working on; many times he tried to convince me that I should spend more time polishing the visuals, to make them more pleasing, not just a technical thing that only four guys would appreciate.
Trust me on this one… you can make amazing simulations with physics and crazy materials, but if it looks like crap… it is just crap.
The issue with fluid simulations is that you are spending a lot of GPU time doing the position based dynamics and the surface generation, so you don’t have too many resources left to put nice rendering effects on top of it. Your timing budget also needs to account for everything else in your scene, so fluids, in general, are not something you can do with amazing visual quality in real time.
The best option to render liquids in real time is to use point splatting; it allows you to render the fluids with reflections, refractions, shadows and caustics too. The results can be pretty convincing and they can be done really “cheap” in terms of performance. If you don’t trust me, take a look at this amazing demo implementing the point splatting technique: https://webgpu-ocean.netlify.app
For non transparent / translucent liquids, like paint, marching cubes is a good approach: you can use a PBR material and get really nice visuals, and the best part is that the mesh is integrated in world space, so you don’t have to worry too much about integration with the rest of the scene.
For the scope of this demo I wanted to explore how I could make things interesting visually in a way that exploits the fact that I have a voxel structure with the triangles, and the potential that generates those triangles, which can be used as a distance field.
The first thing I explored was implementing ambient occlusion with Voxel Cone Tracing (VCT). Turns out that the VCT algorithm requires voxelising the triangles inside a voxel grid, but the current demo is doing things the other way around: it is using the marching cubes to generate triangles from a voxel grid. This means that one part of the VCT algorithm is already implemented in the code.
All I have to do is update the MarchCase.wgsl compute shader so it also updates the voxel grid with a discretisation method, where the voxels containing triangles are marked with 1, while the voxels with no triangles are marked with 0 for the occlusion. I also marked with 0.5 all the voxels that are below a certain height to simulate the ambient occlusion of the floor. It only took two additional lines of code to set up the VCT information.
Once the voxel grid is updated, I only need to implement a mipmapping pass for the 3D texture, which is done using the MipMapCompute.wgsl compute shader; the mipmap bindings are defined inside the CalculateMipMap.js file. The results can be seen in the next video.
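The occlusion lookup in the material then becomes a handful of samples walking up the mip chain; here is a hedged sketch of that cone accumulation, where the cone count, the aperture and the grid resolution are assumptions:

```wgsl
// Cone-traced ambient occlusion lookup: a cone steps away from the surface sampling
// the mipmapped occupancy texture, reading coarser mips as its radius grows.

const GRID_RESOLUTION : f32 = 128.0;   // occupancy texture resolution (illustrative)

@group(0) @binding(0) var occupancyTexture : texture_3d<f32>;   // 1 = solid, 0 = empty, mipmapped
@group(0) @binding(1) var linearSampler : sampler;

fn coneOcclusion(origin : vec3<f32>, direction : vec3<f32>) -> f32 {
    var occlusion = 0.0;
    var travelled = 0.02;              // start just off the surface (texture space)
    let aperture = 0.577;              // roughly a 60 degree cone
    for (var i = 0; i < 6; i++) {
        let radius = travelled * aperture;
        let level = log2(max(radius * GRID_RESOLUTION, 1.0));
        let occupancy = textureSampleLevel(occupancyTexture, linearSampler,
                                           origin + direction * travelled, level).r;
        occlusion += (1.0 - occlusion) * occupancy;   // front-to-back accumulation
        travelled += radius;
    }
    return 1.0 - occlusion;   // 1 = fully open, 0 = fully occluded
}
```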
Notice that in the previous video I’m also rendering floor reflections. This is straightforward to implement with marching cubes since I already have the triangles for the mesh; all I have to do is calculate the reflection matrix and render the triangles twice. This would be much more expensive if I tried to render the same result using ray marching.
Results were interesting and I still had some GPU budget to add additional features to the material, which made me ask around to see what could be an interesting thing to do. One friend told me that it would be amazing to implement subsurface scattering for the material, like the image below.
Subsurface scattering is one of those effects that, done well, can enhance visuals much like reflections and refractions; it is pretty impressive and kind of challenging too. The reason it is difficult to implement in some cases is that it requires knowing the thickness of the geometry to determine how much light will be scattered from the light source.
Many subsurface scattering demos use a thickness texture for the geometry, but for fluids it would not be possible to have the thickness baked. This is the challenging part: gathering the thickness in real time.
Luckily, the demo is creating a potential which can be used as a distance field to retrieve the thickness of the surface in real time. The concept is pretty similar to the ambient occlusion implementation done by Iñigo Quilez: he uses ray marching over the distance field to check how close the surface is to the ray at every step of the marching process; this way he can check how the geometry occludes the light received at the point that fires the ray.
I decided to do the same thing, but firing the rays inside the geometry; that way I could see how the geometry occludes light travelling inside the mesh, showing me the regions where the light would not travel freely, avoiding the scattering. Results were really promising, as you can see in the next video.
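A minimal sketch of that thickness march, assuming the potential is sampled in texture space and using an arbitrary step count, looks like this:

```wgsl
// Thickness estimation: march from the shaded point into the mesh along the negated
// normal and accumulate the potential, which acts as a rough "inside-ness" measure.

const STEPS : i32 = 8;

@group(0) @binding(0) var potentialTexture : texture_3d<f32>;
@group(0) @binding(1) var linearSampler : sampler;

fn thickness(entryPoint : vec3<f32>, normal : vec3<f32>) -> f32 {
    let direction = -normal;      // march into the geometry
    var accumulated = 0.0;
    var travelled = 0.02;
    for (var i = 0; i < STEPS; i++) {
        let samplePoint = entryPoint + direction * travelled;   // texture-space coordinates
        accumulated += textureSampleLevel(potentialTexture, linearSampler, samplePoint, 0.0).r;
        travelled += 0.03;
    }
    // Thin regions accumulate less, so they let more light through for the subsurface term.
    return saturate(accumulated / f32(STEPS));
}
```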
The material for the geometry is defined inside the RenderMC.wgsl file; it implements the vertex shader which uses the storage buffers that contain the positions and normals of the triangle vertices. The geometry is rendered using an indirect draw command, driven by the storage buffer encoded with the EncodeBuffer.wgsl compute shader, since the CPU has no information about the amount of triangles generated with the marching cubes.
The bindings are generated to use two different matrices to render the geometry twice, one for the regular view and the other for the reflected geometry; all of this is done inside the Main.js file.
So far the simulation is working, the surface is generated and there is a material implemented for the surface; now it’s time to think in terms of composition.
Composition
So you might think you are a great graphics developer; you are working with Three.js, Babylon.js or PlayCanvas and doing cool visuals… Or you might actually be an amazing developer doing things on your own, also making cool visuals…
Let me tell you something… I am not.
How do I know that?
Well… I was lucky enough to work at Active Theory (https://activetheory.net/) with amazing graphics developers and 3D artists who showed me my limitations and also helped me move forward with the end product I was delivering. If there is something you can do for yourself and your career, it is to try to work with people like that; trust me, you will learn many things that will improve your work in ways you never imagined.
Among those things… Composition is everything!
So I asked Paul-Guilhem Repaux, who I used to work with at Active Theory (https://x.com/arpeegee), to help me with the composition, since I know it is not my strongest skill.
In terms of composition he pointed out that the previous video examples show some deficiencies that needed to be solved:
- The reflection on the floor is too well defined; it would be beneficial to have some roughness in the floor reflection.
- The black background does not reflect where the light comes from. The background should also set a better mood to make the scene playful.
- There are no light effects that integrate the geometry with the environment.
- The composition should have a justified transition between letters.
- The composition requires color correction.
And trust me, there are many more things that could be improved; Paul was just kind enough to pinpoint the critical ones.
Reflections
The first issue can be solved with post processing: the idea is to apply a blur on the reflection, using the distance from the geometry to the ground to set the intensity of the blur. The farther the geometry is from the floor, the more intense the blur applied, which provides a roughness effect.
The only issue with this solution is that blurring will only be applied in the regions where there is geometry, since that is where the height is defined; this means that there will be no blurring in the surroundings of the geometry, which makes the result look weird.
To overcome the previous issue, a pre processing pass is done where an offset from the reflected geometry is saved inside a texture. This offset stores the closest height value from the geometry in order to define how much blurring should be applied in the empty space surrounding the reflected geometry. The next video displays the offset pass.
The dark red geometry represents the non reflected geometry, while the green fragments represent the reflected geometry including the offsetting; notice that the green reflection is thicker than the red one. Once the offset texture is created, the result is used in a post processing pass, blurring only the regions defined by the offset in green. The height is encoded in the red channel, where you can visualise the height from the floor as a gradient.
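The post processing blur then reads that offset texture to decide how far to spread each sample. Here is a rough sketch of the idea; the channel encoding, the radius and the weights are my assumptions, not the demo’s exact code:

```wgsl
// Height-driven reflection blur: the offset texture tells where the reflection is
// (green) and how far the reflected geometry is from the floor (red), and that
// distance drives the blur radius.

@group(0) @binding(0) var reflectionTexture : texture_2d<f32>;
@group(0) @binding(1) var offsetTexture : texture_2d<f32>;
@group(0) @binding(2) var linearSampler : sampler;
@group(0) @binding(3) var outputTexture : texture_storage_2d<rgba16float, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dims = textureDimensions(outputTexture);
    if (any(id.xy >= dims)) { return; }
    let uv = (vec2<f32>(id.xy) + 0.5) / vec2<f32>(dims);

    let mask = textureSampleLevel(offsetTexture, linearSampler, uv, 0.0);
    let height = mask.r;                  // encoded height from the floor
    let radius = mask.g * height * 12.0;  // blur only where the offset mask is set

    var color = vec4<f32>(0.0);
    var weight = 0.0;
    for (var x = -3; x <= 3; x++) {
        for (var y = -3; y <= 3; y++) {
            let offset = vec2<f32>(f32(x), f32(y)) * radius / vec2<f32>(dims);
            let w = 1.0 / (1.0 + f32(x * x + y * y));
            color += textureSampleLevel(reflectionTexture, linearSampler, uv + offset, 0.0) * w;
            weight += w;
        }
    }
    textureStore(outputTexture, id.xy, color / weight);
}
```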
Background and Lighting
The subsurface scattering assumes that the lighting comes from behind the geometry at every moment; even with the camera moving, the light seems to come from the back, otherwise the subsurface scattering effect wouldn’t be so noticeable.
That is actually really useful in terms of lighting, since the background can apply a gradient that represents a light source placed behind the geometry, justifying the light direction coming from behind it. The background should also have a color similar to the material to achieve a better lighting integration, which is something easy to do, as you can see in the next video.
Lighting Integration
The last thing to do is to provide some lighting integration between the background and the geometry. The backlight defined by the background gradient justifies how the subsurface scattering is implemented, but the final result can be enhanced using a bloom effect. The idea is to use the bloom to provide a halo that is stronger where the geometry is thinner, thus making the subsurface scattering effect much stronger, as seen in the next video.
If you take a deeper look at the previous video you will notice that I also explored how to match the letter animations with the Codrops logo; this was done by animating each letter of the logo to reference it with the liquid letter. The idea was discarded because it looked like a children’s application for learning how to read.
Transitions
Transitions are important in the sense that they provide the timing for the interactions. The concept behind the transitions is to enhance the idea of the letters somehow mutating, which made me work with different types of transitions. I tried the liquid floating with no gravity and then forming the new letter, as displayed in the next video.
I also tried another transition where the letters would be generated by a guided flow, as you can see below.
None of those transitions made sense in my head because there was no concept behind them, so I started playing with the idea of falling, because of the word ‘drops’ in “Codrops”, and things started to fall into place. You can see how the letters are transitioning in the next video.
The next videos also show how I was trying to implement the same falling transition for the background; the motivation was to enhance the idea of everything falling to transition to the new letter. I tried many different background transitions, as you can see, and tested different types of letters too.
The previous background transition was discarded because it looked too much like the old “scanline” renderers from 3ds Max.
The idea behind the previous background transition is that the new letter is built by columns raising it from the falling liquid. It was discarded because it interfered too much visually with the interactivity between the letter and the user.
Color Correction and Mood
I also added brightness, contrast and gamma correction for the final result, where the mood is set by selecting a warm color palette for the background and the letters. All the post processing is done using different compute shaders which are called inside the Main.js file.
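The color correction itself is only a few lines in a compute pass; here is a minimal sketch with illustrative constants:

```wgsl
// Brightness / contrast / gamma pass; the constants are illustrative, not the demo's values.

const BRIGHTNESS : f32 = 0.02;
const CONTRAST : f32 = 1.1;
const GAMMA : f32 = 2.2;

@group(0) @binding(0) var inputTexture : texture_2d<f32>;
@group(0) @binding(1) var outputTexture : texture_storage_2d<rgba16float, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dims = textureDimensions(outputTexture);
    if (any(id.xy >= dims)) { return; }

    var color = textureLoad(inputTexture, vec2<i32>(id.xy), 0).rgb;
    color = (color - 0.5) * CONTRAST + 0.5 + BRIGHTNESS;   // contrast around mid grey, then brightness
    color = pow(max(color, vec3<f32>(0.0)), vec3<f32>(1.0 / GAMMA));
    textureStore(outputTexture, id.xy, vec4<f32>(color, 1.0));
}
```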
Browse the complete code base. For a simplified version, have a look at this repo. You can change the word shown in the demo by appending /?word=something to the end of the demo URL.
Some Final Words
There are many things I didn’t talk about, like optimisation and performance, but I consider it pointless since this demo is meant to run on good GPUs, not on mobile devices. WebGPU has timestamp queries, which make it pretty easy to find bottlenecks and make things more performant; you can find out how to do so by reading the Blur3D.js file, which has the queries commented out.
This does not mean that this kind of work can never be used for production: Felix did manage to make a great exploration of SPH with letters which is very performant and also really cool; take a look at the next video to check it out.
So, to wrap it up, all I can say is that after all these years Felix is still winning the bet, and I’m still trying to change the outcome… I just hope you get to meet someone who makes you say “hold my beer”.