Conway’s Game Of Life – Cellular Automata and Renderbuffers in Three.js


Simple rules can produce structured, complex systems. And beautiful images often follow. This is the core idea behind the Game of Life, a cellular automaton devised by British mathematician John Horton Conway in 1970. Often called just ‘Life’, it’s probably one of the most popular and well-known examples of cellular automata. There are many examples and tutorials on the web that go over implementing it, like this one by Daniel Shiffman.

But in many of these examples the computation runs on the CPU, limiting the possible complexity and number of cells in the system. So this article will go over implementing the Game of Life in WebGL, which allows GPU-accelerated computation (= way more complex and detailed images). Writing WebGL on its own can be very painful, so we’re going to implement it using Three.js, a WebGL graphics library. This requires some advanced rendering techniques, so some basic familiarity with Three.js and GLSL will be helpful in order to follow along.

Cellular Automata

Conway’s Game of Life is what’s called a cellular automaton, and it makes sense to consider a more abstract view of what that means. This relates to automata theory in theoretical computer science, but really it’s just about creating some simple rules. A cellular automaton is a model of a system that consists of automata, called cells, that are interlinked via some simple logic, which allows complex behaviour to be modelled. A cellular automaton has the following characteristics:

  • Cells live on a grid which can be 1D or higher-dimensional (in our Game of Life it’s a 2D grid of pixels)
  • Each cell has only one current state. Our example only has two possibilities: 0 or 1 / dead or alive
  • Each cell has a neighbourhood, a list of adjacent cells

The basic working principle of a cellular automaton usually involves the following steps:

  • An initial (global) state is selected by assigning a state for each cell.
  • A new generation is created, according to some fixed rule that determines the new state of each cell in terms of:
    • The current state of the cell
    • The states of cells in its neighbourhood
The state of a cell, together with its neighbourhood, determines its state in the next generation

As already mentioned, the Game of Life is based on a 2D grid. In its initial state there are cells which are either alive or dead. We generate the next generation of cells according to only four rules:

  • Any live cell with fewer than two live neighbours dies, as if by underpopulation.
  • Any live cell with two or three live neighbours lives on to the next generation.
  • Any live cell with more than three live neighbours dies, as if by overpopulation.
  • Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Conway’s Game of Life uses a Moore neighbourhood, which is composed of the current cell and the eight cells that surround it, so those are the ones we’ll be looking at in this example. There are many variations and possibilities to this, and Life is actually Turing complete, but since this post is about implementing it in WebGL with Three.js, we’ll stick to a rather basic version. Feel free to research more.
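To make these rules concrete before moving them to the GPU, here’s a minimal CPU-side sketch of one update step in plain JavaScript. It’s purely illustrative and not part of the WebGL implementation that follows; the grid wraps around at the edges, which is one common choice.

//Compute the next generation for a 2D grid of 0s and 1s
//Illustrative CPU version; the fragment shader later does the same work per pixel
function nextGeneration(grid) {
	const rows = grid.length;
	const cols = grid[0].length;
	const next = grid.map((row) => row.slice());

	for (let y = 0; y < rows; y++) {
		for (let x = 0; x < cols; x++) {
			//Count live cells in the Moore neighbourhood (8 surrounding cells, wrapping at edges)
			let neighbours = 0;
			for (let dy = -1; dy <= 1; dy++) {
				for (let dx = -1; dx <= 1; dx++) {
					if (dx === 0 && dy === 0) continue;
					neighbours += grid[(y + dy + rows) % rows][(x + dx + cols) % cols];
				}
			}

			const alive = grid[y][x] === 1;
			if (alive && (neighbours < 2 || neighbours > 3)) {
				next[y][x] = 0; //underpopulation or overpopulation
			} else if (!alive && neighbours === 3) {
				next[y][x] = 1; //reproduction
			}
			//otherwise the cell keeps its current state
		}
	}

	return next;
}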

Three.js

Now with most of the theory out of the way, we can finally start implementing the Game of Life.

Three.js is a pretty high-level WebGL library, but it lets you decide how deep you want to go. It provides a lot of options to control the way scenes are structured and rendered, and it allows users to get close to the WebGL API by writing custom shaders in GLSL and passing buffer attributes.

In the Game of Life each cell needs information about its neighbourhood. But in WebGL all fragments are processed simultaneously by the GPU, so when a fragment shader is in the midst of processing one pixel, there’s no way it can directly access information about any other fragments. But there’s a workaround: if we pass a texture to the fragment shader, we can easily query the neighbouring pixels in that texture as long as we know its width and height. This idea allows all kinds of post-processing effects to be applied to scenes.

We’ll start with the initial state of the system. In order to get any interesting results, we need non-uniform starting conditions. In this example we’ll place cells randomly on the screen, so we’ll render a simple noise texture for the first frame. Of course we could initialise with another type of noise, but this is the easiest way to get started.

/**
 * Sizes
 */
const sizes = {
	width: window.innerWidth,
	height: window.innerHeight
};

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();

/**
 * Textures
 */
//The generated noise texture
const dataTexture = createDataTexture();

/**
 * Meshes
 */
// Geometry
const geometry = new THREE.PlaneGeometry(2, 2);

//Screen resolution
const resolution = new THREE.Vector3(sizes.width, sizes.height, window.devicePixelRatio);

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

// Meshes
const mesh = new THREE.Mesh(geometry, quadMaterial);
scene.add(mesh);

/**
 * Animate
 */

const tick = () => {
	//The texture will get rendered to the default framebuffer
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

This code simply initialises a Three.js scene and adds a 2D plane that fills the screen (the snippet doesn’t show all the basic boilerplate code). The plane is given a ShaderMaterial, which for now does nothing but display a texture in its fragment shader. We generate that texture with a DataTexture. It would be possible to load an image as a texture too; in that case we would need to keep track of the exact texture size. Since the scene will take up the entire screen, creating a texture with the viewport dimensions seems like the simpler solution for this tutorial. Currently the scene is rendered to the default framebuffer (the device screen).
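The createDataTexture() helper isn’t shown in the snippet above. Here is a minimal sketch of how it could be implemented, filling a THREE.DataTexture with random black or white pixels so that roughly half of the cells start out alive; the exact noise used in the original demo may differ.

function createDataTexture() {
	const size = sizes.width * sizes.height;
	const data = new Uint8Array(4 * size);

	for (let i = 0; i < size; i++) {
		const stride = i * 4;
		//Randomly mark roughly half of the cells as alive (white)
		const value = Math.random() < 0.5 ? 255 : 0;
		data[stride] = value; //R
		data[stride + 1] = value; //G
		data[stride + 2] = value; //B
		data[stride + 3] = 255; //A
	}

	const texture = new THREE.DataTexture(data, sizes.width, sizes.height, THREE.RGBAFormat);
	//DataTextures need this flag so the data gets uploaded to the GPU
	texture.needsUpdate = true;

	return texture;
}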

See the live demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

Framebuffers

When writing a WebGL application, whether using the vanilla API or a higher-level library like Three.js, the results are rendered to the default WebGL framebuffer after setting up the scene; this default framebuffer is the device screen (as done above).

But there’s also the option to create framebuffers that render off-screen, to image buffers in the GPU’s memory. Those can then be used just like a regular texture for whatever purpose. This idea is used in WebGL to create advanced post-processing effects such as depth of field, bloom, etc., by applying effects to the scene once it has been rendered. In Three.js we can do that by using a THREE.WebGLRenderTarget. We’ll call our framebuffer renderBufferA.

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();
//Create a second scene that will be rendered to the off-screen buffer
const bufferScene = new THREE.Scene();

/**
 * Render Buffers
 */
// Create a new framebuffer we will use to render to
// the GPU memory
let renderBufferA = new THREE.WebGLRenderTarget(sizes.width, sizes.height, {
	// The settings below avoid interpolation between cells and retain float precision.
	minFilter: THREE.NearestFilter,
	magFilter: THREE.NearestFilter,
	format: THREE.RGBAFormat,
	type: THREE.FloatType,
	stencilBuffer: false
});

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
        //Now the screen material won't get a texture initially
        //The idea is that this texture will be rendered off-screen
		uTexture: { value: null },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

//off-screen Framebuffer will receive a new ShaderMaterial
// Buffer Material
const bufferMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	//For now this fragment shader does the same as the one used above
	fragmentShader: document.getElementById('fragmentShaderBuffer').textContent
});

//Add the fullscreen plane to the off-screen scene as well
//(the buffer scene needs a mesh that uses bufferMaterial in order to render anything)
const bufferMesh = new THREE.Mesh(geometry, bufferMaterial);
bufferScene.add(bufferMesh);

/**
 * Animate
 */

const tick = () => {
	// Explicitly set renderBufferA as the framebuffer to render to
	//the output of this rendering pass will be stored in the texture associated with renderBufferA
	renderer.setRenderTarget(renderBufferA);
	// This renders the off-screen texture
	renderer.render(bufferScene, camera);

	//This will set the default framebuffer (i.e. the screen) back to being the output
	renderer.setRenderTarget(null);
	//Render to screen
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

Now there’s nothing to be seen because, while the scene is rendered, it’s rendered to an off-screen buffer.

See the live demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

We’ll need to access it as a texture in the animation loop, passing the texture generated in the previous step to the fullscreen plane on our screen.

//In the animation loop before rendering to the screen
mesh.material.uniforms.uTexture.value = renderBufferA.texture;

And that’s all it takes to get back the noise, except now it’s rendered off-screen and the output of that render is used as a texture on the fullscreen plane that’s rendered to the screen.

See the live demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

Ping-Pong 🏓

Now that there’s data rendered to a texture, the shaders can be used to perform general computation on the texture data. Within GLSL, textures are read-only: we can’t write directly to our input textures, we can only “sample” them. Using the off-screen framebuffer, however, we can use the output of the shader itself to write to a texture. Then, if we chain together multiple rendering passes, the output of one pass becomes the input for the next. So we create two off-screen buffers. This technique is called ping-pong buffering. We create a kind of simple ring buffer, where after every frame we swap the off-screen buffer that is being read from with the off-screen buffer that is being written to. We can then use the off-screen buffer that was just written to and display it on the screen. This lets us perform iterative computation on the GPU, which is useful for all kinds of effects.

To achieve this in Three.js, we first need to create a second framebuffer. We’ll call it renderBufferB. The ping-pong swap itself then happens in the animation loop.

//Add another framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(sizes.width, sizes.height, {
	minFilter: THREE.NearestFilter,
	magFilter: THREE.NearestFilter,
	format: THREE.RGBAFormat,
	type: THREE.FloatType,
	stencilBuffer: false
});

//At the end of each animation loop:

// Ping-pong the framebuffers by swapping them
// at the end of each frame render.
// Now prepare for the next cycle by swapping renderBufferA and renderBufferB
// so that the previous frame's *output* becomes the next frame's *input*
const temp = renderBufferA;
renderBufferA = renderBufferB;
renderBufferB = temp;
//Output becomes input
bufferMaterial.uniforms.uTexture.value = renderBufferB.texture;

Now that the render buffers are swapped every frame, the result will look the same, but you can verify the swap by logging the textures that get passed to the on-screen plane each frame, for example. Here’s a more in-depth look at ping-pong buffers in WebGL.
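For reference, the full animation loop with the ping-ponging in place looks roughly like this; it’s a sketch assembled from the snippets above rather than the demo’s verbatim code.

const tick = () => {
	// 1. Render the simulation step into renderBufferA (off-screen)
	renderer.setRenderTarget(renderBufferA);
	renderer.render(bufferScene, camera);

	// 2. Show the freshly computed generation on screen
	mesh.material.uniforms.uTexture.value = renderBufferA.texture;
	renderer.setRenderTarget(null);
	renderer.render(scene, camera);

	// 3. Swap the buffers: this frame's output becomes next frame's input
	const temp = renderBufferA;
	renderBufferA = renderBufferB;
	renderBufferB = temp;
	bufferMaterial.uniforms.uTexture.value = renderBufferB.texture;

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};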

See the live demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

Game Of Life

From here it’s about implementing the actual Game of Life. Since the rules are so simple, the resulting code isn’t very complicated either, and there are many good resources that go through coding it up, so I’ll only go over the key ideas. All the logic for this will happen in the fragment shader that gets rendered off-screen, which will provide the texture for the next frame.

As described earlier, we want to access neighbouring fragments (or pixels) via the texture that’s passed in. This is achieved in a nested for loop in the GetNeighbours function. We skip our current cell and check the 8 surrounding pixels by sampling the texture at an offset. Then we check whether the pixel’s r value is above 0.5, which means it’s alive, and if so increment the count of live neighbours.

//GLSL in fragment shader
precision mediump float;
//The input texture
uniform sampler2D uTexture;
//Screen resolution
uniform vec3 uResolution;

// uv coordinates passed from vertex shader
varying vec2 vUvs;

float GetNeighbours(vec2 p) {
    float count = 0.0;

    for(float y = -1.0; y <= 1.0; y++) {
        for(float x = -1.0; x <= 1.0; x++) {

            if(x == 0.0 && y == 0.0)
                continue;

            // Scale the offset down
            vec2 offset = vec2(x, y) / uResolution.xy;
            // Apply offset and sample texture
            vec4 lookup = texture2D(uTexture, p + offset);
             // Accumulate the result
            count += lookup.r > 0.5 ? 1.0 : 0.0;
        }
    }

    return count;
}

Based on this count we can apply the rules. (Note how we can use the standard UV coordinates here because the texture we created at the beginning fills the screen. If we had initialised with an image texture of arbitrary dimensions, we’d need to scale the coordinates according to its exact pixel size to get values between 0.0 and 1.0.)

//In the main function
    vec3 color = vec3(0.0);

    float neighbors = 0.0;

    neighbors += GetNeighbours(vUvs);

    bool alive = texture2D(uTexture, vUvs).x > 0.5;

    if(alive && (neighbors == 2.0 || neighbors == 3.0)) {
        //Any live cell with two or three live neighbours lives on to the next generation.
        color = vec3(1.0, 0.0, 0.0);
    } else if(!alive && neighbors == 3.0) {
        //Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
        color = vec3(1.0, 0.0, 0.0);
    }

    //In all other cases the cell remains dead or dies, so color stays at 0.0
    gl_FragColor = vec4(color, 1.0);

And that’s basically it: a working Game of Life using only GPU shaders, written in Three.js. The texture gets sampled every frame via the ping-pong buffers, which creates the next generation of our cellular automaton, so no additional variable tracking time or frames needs to be passed for it to animate.

See the live demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

In summary, we first went over the basic ideas behind cellular automata, a very powerful model of computation that can generate complex behaviour from simple rules. Then we implemented one in Three.js using framebuffers and ping-pong buffering. From here there are near-endless possibilities for taking it further; try adding different rules or mouse interaction, for example.
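As a starting point for the mouse-interaction idea, one possible approach (my own sketch, not part of the original demo) is to pass the pointer position into the off-screen buffer’s material as a uMouse uniform; the buffer’s fragment shader would then need to declare uniform vec2 uMouse; and force pixels near that position to be alive.

//Hypothetical extension: feed the pointer position into the simulation material
bufferMaterial.uniforms.uMouse = { value: new THREE.Vector2(-1, -1) };

window.addEventListener('pointermove', (event) => {
	//Convert to UV space (0..1, with y flipped to match texture coordinates)
	bufferMaterial.uniforms.uMouse.value.set(
		event.clientX / sizes.width,
		1.0 - event.clientY / sizes.height
	);
});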

