Messages and Particles

A few notes on using GLSL to animate messages

Particles in WebGL have always felt like an ideal way of showing data flowing through systems. When we think about distributed systems, it's often in terms of network messages flowing between servers and the processes running on them. Simulating how those messages actually behave is far more expressive than an arrow on a box diagram.

Simple Particles

At the most basic level, what are we trying to achieve?

  1. Animate 1000s of basic dots in 3D space along a predefined path
  2. Define the path using a series of 'pass-through' points
  3. Define the path and speed independently for each particle dot

Here's a quick demonstration showing a path around some blocks; hit the button to see it in action...

This example shows a couple of thousand particles being animated around a set of blocks, where the pass-through points' vertical position is randomized but the start and end are the same. The travel duration and color are also randomized.

Designing the Model for Particles in a Simulation

When constructing a simulation 'scene' we need to think about how we define the particles being animated. If we're showing two processes, one producing and one consuming messages, which of them owns the set of messages being animated? Do we pass ownership as part of the producer/consumer relationship? Going down this path leads to some quite complex state-management logic that doesn't scale well as the simulation grows. The more pragmatic solution I ended up with is a single global set of messages animated over the whole simulation, with simulation participants responding and triggering accordingly. This means we only need one "message simulator" for the entire simulation diagram, working in world space.
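As a sketch of what that globally-owned set might look like (the names `SimMessage` and `MessageSimulator` here are illustrative, not from the actual implementation):

```typescript
// Hypothetical shape of a globally-owned message: the simulator, not a
// producer or consumer, owns the full list and schedules each particle.
interface SimMessage {
  path: [number, number, number][]; // pass-through points in world space
  startTime: number;                // seconds on the simulation clock
  duration: number;                 // travel time in seconds
  color: [number, number, number];
}

class MessageSimulator {
  private messages: SimMessage[] = [];

  schedule(msg: SimMessage): void {
    this.messages.push(msg);
  }

  // Messages whose travel interval covers the given time.
  activeAt(time: number): SimMessage[] {
    return this.messages.filter(
      (m) => time >= m.startTime && time <= m.startTime + m.duration
    );
  }
}
```

Participants then only need to react to messages arriving at or leaving their position, rather than owning any message state themselves.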

Each particle in our global set needs an independent path, because it can be travelling between any two entities. It also needs an independent travel time and color. The path itself needs to be a smooth curve passing through defined points in world space. One approach would be to calculate the position of each particle in JavaScript and update the point positions in WebGL on every frame. That would work for a small number of particles, but we could have many thousands. A better approach is to pass only the pass-through points to the WebGL shader and let the GPU calculate the curve between them. That way the GPU animates our particles and leaves the CPU essentially idle.
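Passing per-particle data to the shader means flattening it into the flat typed arrays that a `THREE.BufferAttribute` expects. A minimal sketch, assuming four pass-through points per particle (matching the shader later in this post) plus a start time and duration; the layout is illustrative:

```typescript
type Vec3 = [number, number, number];

interface ParticleDef {
  points: [Vec3, Vec3, Vec3, Vec3]; // four pass-through points in world space
  startTime: number;
  duration: number;
}

// Flatten per-particle data into one Float32Array per attribute:
// four vec3 point attributes, plus scalar start time and duration.
function buildAttributeArrays(particles: ParticleDef[]) {
  const n = particles.length;
  const pointArrays = [0, 1, 2, 3].map(() => new Float32Array(n * 3));
  const startTimes = new Float32Array(n);
  const durations = new Float32Array(n);

  particles.forEach((p, i) => {
    p.points.forEach((pt, j) => pointArrays[j].set(pt, i * 3));
    startTimes[i] = p.startTime;
    durations[i] = p.duration;
  });

  return { pointArrays, startTimes, durations };
}
```

Each of these arrays would then back a `BufferAttribute` on the geometry, so the vertex shader can read its particle's points, start time and duration directly.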

Technical Notes

The main ThreeJS component used for point particles is the Points class. This simply gives us a starting point to define the real behavior of our set of points as vertex and fragment shaders.

```tsx
import fragment from "./fragmentShader.glsl";
import vertex from "./vertexShader.glsl";

const uniforms = useMemo(() => ({ uTime: { value: 0.0 } }), []);

const frameDb = useCallback<RenderCallback>(({ clock }) => {
  if (shaderRef.current) {
    shaderRef.current.uniforms.uTime.value = clock.elapsedTime;
    shaderRef.current.needsUpdate = true;
  }
}, []);
useFrame(frameDb);

return (
  <points ref={pointsRef} position={position} frustumCulled={false}>
    <DynamicParticleGeometry particles={particles} />
    <shaderMaterial
      ref={shaderRef}
      blending={THREE.AdditiveBlending}
      depthWrite={true}
      fragmentShader={fragment}
      vertexShader={vertex}
      uniforms={uniforms}
      attach="material"
    />
  </points>
);
```

The shaders passed to Points here come from .glsl files. The other properties are fairly standard: we're using AdditiveBlending so that overlapping particles are alpha-blended onto one another, and we're writing to the depth buffer so that particles are occluded correctly in 3D space. It's worth noting that the position prop isn't really used, as the vertex shader works in world space (our particles are global). The shaderMaterial's only uniform is the elapsed clock time, uTime, which moves our particles along with the clock. The real work in the above code is done by DynamicParticleGeometry.

Optimizing GPU Resources

We have potentially many thousands of messages, each with an independent start time, end time and travel path. One common approach to particle systems is to define a maximum number of particles and render inactive particles as hidden. For a simulation with potentially unbounded behavior, defining a hard ceiling like that could be too limiting. This is where the DynamicParticleGeometry component comes in.

The ThreeJS Points class needs a material (defined by the shaderMaterial) and a geometry, which defines the locations of each point. The geometry is defined by a buffer of data (inputs) in a ThreeJS BufferGeometry, and a vertex shader that takes that data and produces the output location (gl_Position), size (gl_PointSize) and color of the particle. **Note:** the gl_ prefix is reserved for the shader's built-in variables.

The DynamicParticleGeometry component wraps the bufferGeometry and limits the number of particles managed by the GPU to only those currently visible, using a one-second cascading window. This way, even if 2 million particles are scheduled in the simulator, the GPU only needs to worry about what is visible right now. No need for a fixed ceiling on the number of particles.
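The core of the cascading-window idea can be sketched as a simple interval-overlap filter (a sketch of the concept, not the component's actual code; `visibleInWindow` is a hypothetical name):

```typescript
interface Scheduled {
  startTime: number; // seconds on the simulation clock
  duration: number;  // travel time in seconds
}

// Of everything scheduled, keep only particles whose travel interval
// overlaps [now, now + windowSeconds] - these are the only ones the
// GPU needs attribute data for.
function visibleInWindow<T extends Scheduled>(
  all: T[],
  now: number,
  windowSeconds = 1.0
): T[] {
  const windowEnd = now + windowSeconds;
  return all.filter(
    (p) => p.startTime <= windowEnd && p.startTime + p.duration >= now
  );
}
```

Re-running this filter once per window (rather than per frame) keeps the attribute uploads cheap while the GPU handles the per-frame motion.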

Shader Logic for Curves - Catmull-Rom

The curve itself is defined by a start, an end and a series of intermediary points. The equation is (fairly) straightforward. For a parameter t in [0, 1], the point on the curve segment between points P₁ and P₂ is:

$$q(t) = 0.5 \left[ 2P_1 + (P_2 - P_0)t + (2P_0 - 5P_1 + 4P_2 - P_3)t^2 + (-P_0 + 3P_1 - 3P_2 + P_3)t^3 \right]$$

What's important to understand from this is that the previous and next points P₀ and P₃ influence the shape of the curve between P₁ and P₂. This means we need to construct assumed points before the curve starts and after it ends. For this we can simply extrapolate the curve linearly using the first and last pairs of points.
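Ported to TypeScript for a quick sanity check of the equation (it's easy to verify that q(0) = P₁ and q(1) = P₂, so adjacent segments join up exactly at the pass-through points):

```typescript
type Vec3 = [number, number, number];

// Uniform Catmull-Rom point: the same equation as the shader version below,
// evaluated per component.
function catmullRom(t: number, p0: Vec3, p1: Vec3, p2: Vec3, p3: Vec3): Vec3 {
  const t2 = t * t;
  const t3 = t2 * t;
  const out: Vec3 = [0, 0, 0];
  for (let i = 0; i < 3; i++) {
    out[i] =
      0.5 *
      (2 * p1[i] +
        (p2[i] - p0[i]) * t +
        (2 * p0[i] - 5 * p1[i] + 4 * p2[i] - p3[i]) * t2 +
        (-p0[i] + 3 * p1[i] - 3 * p2[i] + p3[i]) * t3);
  }
  return out;
}
```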

```glsl
vec3 calcCatMullRomPoint(float t, vec3 P0, vec3 P1, vec3 P2, vec3 P3) {
    float t2 = t * t;
    float t3 = t2 * t;
    return 0.5 * (2.0 * P1
        + (P2 - P0) * t
        + (2.0 * P0 - 5.0 * P1 + 4.0 * P2 - P3) * t2
        + (-1.0 * P0 + 3.0 * P1 - 3.0 * P2 + P3) * t3);
}
```

```glsl
// Extrapolate assumed points before the start and after the end
vec3 PM1 = P0 - (P1 - P0);
vec3 P4 = P3 + (P3 - P2);

// Four-point curve split into 3 segments
float t = progress * 3.0;
vec3 particlePosition;
if (t < 1.0) {
    particlePosition = calcCatMullRomPoint(t, PM1, P0, P1, P2);
} else if (t < 2.0) {
    particlePosition = calcCatMullRomPoint(t - 1.0, P0, P1, P2, P3);
} else {
    particlePosition = calcCatMullRomPoint(t - 2.0, P1, P2, P3, P4);
}
```

The above code for a 4-point curve simply shuffles the points depending on the time segment (where progress runs from 0 to 1.0). This could be made generic for n points, and we could also use the distances between consecutive points to determine the time thresholds, giving a more constant travel speed.
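That distance-weighted variant might look like this on the CPU side: instead of splitting progress into equal thirds, each segment gets a share of the time proportional to its chord length (a sketch; `segmentThresholds` is a hypothetical helper, and chord length only approximates true arc length):

```typescript
type Vec3 = [number, number, number];

const dist = (a: Vec3, b: Vec3): number =>
  Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);

// Cumulative thresholds in (0, 1], one per segment, proportional to the
// straight-line distance between consecutive pass-through points. The
// shader would compare progress against these instead of 1/3 and 2/3.
function segmentThresholds(points: Vec3[]): number[] {
  const lengths = points.slice(1).map((p, i) => dist(points[i], p));
  const total = lengths.reduce((a, b) => a + b, 0);
  const thresholds: number[] = [];
  let acc = 0;
  for (const len of lengths) {
    acc += len;
    thresholds.push(acc / total);
  }
  return thresholds; // the last entry is always 1
}
```

These thresholds could be computed once per particle and passed to the shader as another attribute.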

Fragment Shader - What Does Our Particle Look Like?

The fragment shader colors the pixels for the particle. It's called once for each pixel covering the square defined by gl_PointSize. The input is the built-in gl_PointCoord, which ranges from (0, 0) to (1, 1) across the point. We want a nicely alpha-blended round point, so we can use the distance from the pixel to the middle to derive the alpha value (opacity) and discard the pixels outside the circle. The output of the fragment shader is the color of the pixel.

```glsl
varying vec4 vColors; // per-particle color passed in from the vertex shader

void main() {
    // Vector from the center of the point to the current fragment
    vec2 coord = gl_PointCoord - vec2(0.5);
    float strength = 0.5 - length(coord);
    strength = pow(strength, 3.0) / (0.5 * 0.5 * 0.5);
    if (strength < 0.2) {
        discard;
    }
    gl_FragColor = mix(vec4(vColors.xyz, 0.0), vec4(vColors.xyz, 1.0), strength);
}
```

There are optimizations to be made here: we should try to use the entire particle size where we can and limit how much we discard (to reduce GPU load). However, this is for demonstration only, as we will likely want to define different particle shapes to indicate different types of messages.
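The falloff can be reproduced on the CPU to see where the discard cutoff actually lands (a sanity check, not part of the rendering code). Solving strength = 0.2 gives a distance of roughly 0.21 from the center, so only a fraction of the point's 0.5 half-width is ever drawn, which is exactly the wasted-fill-rate concern above:

```typescript
// Same falloff as the fragment shader: strength as a function of the
// fragment's distance d from the point center (d is at most ~0.707
// in the square's corners).
function particleStrength(d: number): number {
  const strength = 0.5 - d;
  return Math.pow(strength, 3.0) / (0.5 * 0.5 * 0.5);
}
```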

Gotchas

  • The ThreeJS Points class seems to reserve the position attribute for each particle, so that's being used as the starting point P₀.
  • Updating the bufferGeometry attributes seemed to need an explicit call to setDrawRange.
  • The cascading window for dynamic particles should include any particles that will become visible, or will still be visible, during the window. The window can be extended to two seconds or more.
Filed Under: site, web, visualization, simulation