Monday, 1 July 2019

glium - instancing

I have spent a bit of time looking into instancing with glium and OpenGL. It turns out that glium makes it very easy to make use of this powerful technique.


In my previous code I created an array of cubes by creating the actual vertices for every cube that I wanted to display. This is clearly inefficient and inflexible: once the cubes have been created they can't easily be moved or adjusted, and I am duplicating a lot of vertices. I could draw one cube per draw call, but the overhead per call can become substantial if there are a lot of cubes (or other objects).
Instancing reuses the same object data and redraws the same object where some of the data changes for each instance.
OpenGL has several ways of supporting instancing.
  • You can use the special GLSL variable gl_InstanceID that is incremented for each instance, and use it to modify the vertex data (for example, by looking up per-instance data with gl_InstanceID).
  • You can define a vertex type where some of the vertex data only changes for each instance. This is the approach I am going to take.
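For comparison, the first approach looks roughly like this in GLSL; the uniform array name and size here are made up for illustration, not taken from any real program:

```glsl
#version 140

in vec3 position;
// One offset per instance; the array name and size are illustrative only.
uniform vec3 instance_offsets[125];

void main() {
    // gl_InstanceID counts up from 0 for each instance in the draw call.
    gl_Position = vec4(position + instance_offsets[gl_InstanceID], 1.0);
}
```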

Vertex types for Instancing

I am keeping my existing Vertex3t2 vertex type that defines the position and texture coordinates, but I am also defining a new vertex type Attr that holds all the per-instance data: the matrix that defines the instance's position and orientation;
#[derive(Copy, Clone)]
struct Attr {
    world_matrix: [[f32; 4]; 4],
}
glium::implement_vertex!(Attr, world_matrix);
This new "instance" vertex is defined using exactly the same kind of code as ordinary vertices.

Setting up the vertex data

Because I am using instancing I build only one cube, centered around the origin, using the same add_cube function I developed in the last post.
    let mut verts: Vec<Vertex3t2> = Vec::new();
    add_cube(&mut verts, &glm::Vec3::new(0.0, 0.0, 0.0));
    let indices = glium::index::NoIndices(glium::index::PrimitiveType::TrianglesList);
I also need to build the per-instance vertex data. Because the Attr vertices hold data that changes every frame, I create the vertex buffer with dynamic to let glium know that its contents will change often (whereas the cube vertices are stored in a regular vertex buffer).
    let mut positions: Vec<Attr> = Vec::new();
    build_object(&mut positions);
    let position_buffer = glium::VertexBuffer::dynamic(&display, &positions).unwrap();
The build_object function creates the per-instance matrices. For now I will simply recreate the same array of cubes that I created before.
fn build_object(attribs: &mut Vec<Attr>) {
    for x in -4..5 {
        for y in -4..5 {
            for z in -4..5 {
                let pos_matrix = glm::translate(&glm::identity(),
                    &(glm::Vec3::new(x as f32, y as f32, z as f32) * 1.5f32));
                attribs.push(Attr { world_matrix: pos_matrix.into() });
            }
        }
    }
}

Updating the shader code

The vertex shader needs to be updated to make use of the additional instance-specific data. The vertex shader sees the instance data just like any other vertex data, except that it only changes between instances;
    #version 140

    in vec3 position;
    in vec2 tex_coords;
    in mat4 world_matrix;
    out vec2 v_tex_position;

    uniform mat4 view_matrix;
    uniform mat4 perspective;
    void main() {
        v_tex_position = tex_coords;
        gl_Position = perspective * view_matrix * world_matrix * vec4(position, 1.0);
    }
There are a couple of changes here.
  • There is a new vertex input world_matrix, which is the instance-specific matrix that holds the object-to-world transformation.
  • The old matrix uniform has been replaced by the view_matrix uniform. The previous code did not separate the world and view transforms, but now we want to move the instances and the camera independently, so the view and world matrices are separated.
  • The calculation of the position has been changed to reflect the separate view and world transformations. The position calculation reads from right to left: first the position is transformed into world coordinates, then into view space, and finally the perspective is applied.
The fragment shader does not need any changes.

Updating the rendering code

I have updated the code in the draw loop to make the part the view matrix plays more explicit than before.
The view matrix is constructed around the idea that the camera always looks at the origin and rotates around it about the y- and x-axes.
Because the view matrix describes the transform from world space into view space, it is constructed as the inverse of the camera-to-world transformation. I could construct the camera-to-world matrix and then take its inverse, but I have chosen to construct the view matrix manually by applying the individual transforms in the opposite order and direction.
    let mut view_matrix_glm = glm::translate(&glm::identity(), &glm::Vec3::new(0.0, 0.0, -18.0));
    view_matrix_glm = glm::rotate_x(&view_matrix_glm, camera_angles.x);
    view_matrix_glm = glm::rotate_y(&view_matrix_glm, camera_angles.y);
    let view_matrix: [[f32; 4]; 4] = view_matrix_glm.into();

    let perspective_glm = glm::perspective(1.0, 3.14 / 2.0, 0.1, 1000.0);
    let perspective: [[f32; 4]; 4] = perspective_glm.into();
    let uniforms = glium::uniform! { view_matrix : view_matrix, perspective : perspective };
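That "opposite order and direction gives the inverse" claim is easy to check with a small self-contained sketch. The hand-rolled column-major 4x4 helpers below only cover what the check needs; the real code uses nalgebra-glm for all of this:

```rust
// Minimal column-major 4x4 helpers, only what this check needs.
type Mat4 = [[f32; 4]; 4];

fn identity() -> Mat4 {
    let mut m = [[0.0; 4]; 4];
    for i in 0..4 {
        m[i][i] = 1.0;
    }
    m
}

// Column-major matrix product a * b.
fn mul(a: &Mat4, b: &Mat4) -> Mat4 {
    let mut r = [[0.0; 4]; 4];
    for col in 0..4 {
        for row in 0..4 {
            for k in 0..4 {
                r[col][row] += a[k][row] * b[col][k];
            }
        }
    }
    r
}

fn translation(x: f32, y: f32, z: f32) -> Mat4 {
    let mut m = identity();
    m[3] = [x, y, z, 1.0];
    m
}

fn rotation_y(angle: f32) -> Mat4 {
    let (s, c) = angle.sin_cos();
    let mut m = identity();
    m[0][0] = c;
    m[0][2] = -s;
    m[2][0] = s;
    m[2][2] = c;
    m
}

fn main() {
    let angle = 0.7f32;
    // Camera-to-world: push the camera back 18 units along z,
    // then rotate it around the origin.
    let camera_to_world = mul(&rotation_y(angle), &translation(0.0, 0.0, 18.0));
    // View matrix built by hand: same transforms, opposite order and sign.
    let view = mul(&translation(0.0, 0.0, -18.0), &rotation_y(-angle));

    // Their product should be (numerically close to) the identity.
    let p = mul(&view, &camera_to_world);
    for col in 0..4 {
        for row in 0..4 {
            let expected = if col == row { 1.0 } else { 0.0 };
            assert!((p[col][row] - expected).abs() < 1e-5);
        }
    }
    println!("view * camera_to_world is the identity");
}
```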
Finally, the actual render call needs to be updated to let glium know we are now using two vertex buffers.
    target.draw( (&vertex_buffer, position_buffer.per_instance().unwrap()),
                &indices, &program, &uniforms, &params ).unwrap();
The difference is that we now pass in a tuple of vertex buffers. The first member contains the cube vertices and the second contains the instance vertices.
The draw code now uses a unique matrix for each cube.

Updating the camera

I want to be able to move the camera. We already have the code for converting the camera angles into a view matrix, so I just need to hook the mouse events up to update the camera angles.
The camera should only be updated while the mouse button is held down, so I need to capture the left mouse button state;
        glutin::WindowEvent::MouseInput { device_id: _device_id, state, button, .. } => match button {
            glutin::MouseButton::Left => {
                mouse_down = state == glutin::ElementState::Pressed;
            }
            _ => (),
        },
The mouse positions are posted via DeviceEvents, so they need to be handled in their own match arm;
    glutin::Event::DeviceEvent { device_id: _, event } => match event {
        glutin::DeviceEvent::MouseMotion { delta } => {
            if mouse_down {
                camera_angles.y += delta.0 as f32 / 100.0f32;
                camera_angles.x += delta.1 as f32 / 100.0f32;
            }
        }
        _ => (),
    },

Animating the cubes

To really see the instance-based matrices in action they need to be animated, so I replace the build_object function with some code that just allocates space for the matrices.
    let world_matrix: [[f32; 4]; 4] = glm::Mat4x4::identity().into();
    let mut positions: Vec<Attr> = vec![Attr { world_matrix: world_matrix }; 32 * 80];
This creates 32*80 matrices (the number is completely arbitrary and depends on the animation function; something I will address in the future) and sets them all to identity. Before the draw I call animate_object, which recalculates all the matrices, overwriting the previous ones. Finally I call write on the vertex buffer object to push them to OpenGL.
    animate_object(&mut positions, t);
    position_buffer.write(&positions);
The code inside the animate_object function that calculates the matrices is somewhat unimportant, as long as the cubes get animated, and it is a lot of fun to play with. The code below produces the pulsating arch at the top of this post.
fn animate_object(attribs: &mut Vec<Attr>, iTime: f32) {
    let mut cursor: glm::Mat4x4 = glm::Mat4x4::identity();
    cursor = glm::translate(&cursor, &glm::Vec3::new( -5.0, -10.0f32, 0.0f32 ) );
    let mut idx : usize = 0; 
    for y in 0..80 {
        cursor = glm::translate(&cursor, &glm::Vec3::new(0.0, 1.5, 0.0));
        cursor = glm::rotate_x(&cursor, 0.04);

        let radius = 7.0 + f32::sin(iTime + y as f32 * 0.4f32) * 2.0f32 * glm::smoothstep(0.0, 0.2, (y as f32) / 20.0f32);
        let points = 32;
        for c in 0..points {
            let mut inner = glm::rotate_y(&cursor, std::f32::consts::PI * 2.0 * c as f32 / (points as f32));
            inner = glm::translate(&inner, &glm::Vec3::new(0.0, 0.0, radius));
            attribs[idx].world_matrix = inner.into();
            idx += 1;
        }
    }
}
The code is a reasonable demonstration of how powerful instancing is and how easy glium makes it.
The cubes are pretty ugly as they don't really have any proper shading. In my next blog post I want to add better, more interesting shading.

Monday, 17 June 2019

Using Glium

I have recently spent some time with glium, the safe, native Rust wrapper for OpenGL. The library has pretty good tutorials on its website, but I thought I would log my experience with it as I get to grips with it and try to build something non-trivial. (At the moment I have a vague notion of combining some glium code with my earlier mod-player crate into something.)

Adding the dependency

It is very easy to set up. All I need to do is create a new Rust project and add a glium dependency to it.
glium = "*"

Setting up a window

The glium tutorials go into a fair bit of detail about how the display is set up and gradually introduce concepts such as the z-buffer. I am going to jump straight into a screen setup that makes sense for most applications that do some 3D rendering.
Glium (with glutin) makes it very easy to set up an OpenGL window ready for rendering. The code below is all that is required to set up a rendering window.
use glium;
use glium::{glutin, Surface};

fn main() {
    let mut events_loop = glutin::EventsLoop::new();
    let wb = glutin::WindowBuilder::new();
    let cb = glutin::ContextBuilder::new().with_depth_buffer(24);
    let display = glium::Display::new(wb, cb, &events_loop).unwrap();

    let mut closed = false;
    while !closed {
        let mut target = display.draw();
        target.clear_color_and_depth((0.0, 0.0, 1.0, 1.0), 1.0);

        events_loop.poll_events(|ev| match ev {
            glutin::Event::WindowEvent { event, .. } => match event {
                glutin::WindowEvent::CloseRequested => closed = true,
                _ => (),
            },
            _ => (),
        });

        target.finish().unwrap();
    }
}
The events_loop is a wrapper around the system's event queue. We need it to process events from the windowing system. Here the only event we care about is the one signalling that the window should close.
Glutin has a WindowBuilder for capturing all the window creation parameters. This is where we can set things like size, full-screen mode, title, etc. The default parameters are fine for this program. Creating the WindowBuilder does not actually create the window; it is an object that lets you create one.
The ContextBuilder is a builder for OpenGL contexts. All OpenGL calls are associated with a context that determines how the rendering is handled. The context controls things like the pixel format, depth buffer, stencil buffer, OpenGL version, etc. Any information that controls how the rendering is performed and that needs to be known in advance goes into the context creation.
The only change to the defaults this program makes is to enable a 24-bit depth buffer. (I am using a 24-bit depth buffer because many graphics cards store the depth and stencil data in the same area, allocating 24 bits for depth and 8 bits for stencil. I am sticking with that convention here.)
The constructor of glium::Display takes the window builder, the context builder and the event loop to create the actual window.
The code inside the while loop clears the display and checks for window close events. All OpenGL calls for rendering a frame have to be surrounded by code indicating the start and end of the graphics operations. In glium that is handled by Display::draw and Frame::finish;
        let mut target = display.draw();
        // Rendering code goes here
        target.finish().unwrap();
Having previously spent a lot of time writing OpenGL code with 'raw' OpenGL calls, I find the glium/glutin abstraction excellent; it really helps handle all the tedious boilerplate code.

Setting up shaders

Modern OpenGL requires that we set up shaders for the different stages of the rendering pipeline. At minimum this requires us to set up the vertex shader and the fragment shader. There are loads of excellent resources that explain how the different shaders work.
My first vertex shader transforms the vertex's world position and applies a perspective projection matrix to it. In addition to transforming the position it also passes on UV coordinates that can be used for texturing.
    #version 140

    in vec3 position;
    in vec2 tex_coords;
    out vec2 v_tex_position;

    uniform mat4 matrix;
    uniform mat4 perspective;
    void main() {
        v_tex_position = tex_coords;
        gl_Position = perspective * matrix * vec4(position, 1.0);
    }
The fragment shader is also quite simple. It effectively creates a grid based on the texture coordinates passed into the shader. The nice thing about this shader is that it lets me see the rendering pipeline operating correctly without worrying about lighting or textures.
    #version 140

    in vec2 v_tex_position;
    out vec4 color;
    void main() {
        float dst = min(v_tex_position.y, v_tex_position.x);
        dst = min(dst, min(1.0 - v_tex_position.y, 1.0 - v_tex_position.x));

        float intensity = smoothstep(0.1, 0.0, dst);
        vec3 col = vec3(0.0, intensity, 0.0);
        color = vec4(col, 1.0);
    }
In glium the shaders are set up using Program::from_source, which takes all the shaders, compiles them and sets them up in a Program that can be used for rendering;
    let program =
        glium::Program::from_source(&display, vertex_shader_src, fragment_shader_src, None)
            .unwrap();

Defining Vertex Structures

The vertex shader I created takes vertices with position and texture coordinates as its input so I need to create vertices with those features.
First I need to define the vertex structure. Glium and Rust make it very easy to define new vertex types. The code below defines the vertex structure and auto-generates the code for the required traits.
#[derive(Copy, Clone)]
struct Vertex3t2 {
    position: [f32; 3],
    tex_coords: [f32; 2],
}
glium::implement_vertex!(Vertex3t2, position, tex_coords);

Using nalgebra-glm

When I started working with glium I was surprised to find that it didn't come with code for handling vectors and matrices. Initially I just wrote my own code, but this became quite tedious and error-prone. This is when I discovered the excellent nalgebra-glm crate. Including the crate in my project made everything much easier. I recommend using it (or any other decent matrix math crate) when working with OpenGL.
nalgebra-glm = "0.4.0"

Creating the objects

Creating objects for display requires vertex buffers and index buffers:
  • Index buffers control how the vertices are connected to form triangles.
  • Vertex buffers define the actual vertices and any data that belongs to them (like texture coordinates).
I am using triangle lists for my index information. This means that each set of 3 vertices defines a triangle. Because triangles in a triangle list do not share vertices this is quite an inefficient way to use vertices. It is also very simple because it doesn't require any indices. Essentially we create an index buffer that has no indices.
    let indices = glium::index::NoIndices(glium::index::PrimitiveType::TrianglesList);
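To put a number on that inefficiency, a quick back-of-the-envelope count in plain Rust (the grid size matches the 5x5x5 array of cubes built later in this post):

```rust
fn main() {
    // A triangle list stores 3 vertices per triangle, with no sharing.
    let verts_per_quad = 2 * 3; // two triangles per face
    let verts_per_cube = 6 * verts_per_quad; // six faces
    assert_eq!(verts_per_cube, 36); // versus the 8 unique corners of a cube

    // The 5x5x5 grid of cubes built later in this post.
    let cubes = 5 * 5 * 5;
    println!("{} cubes -> {} vertices", cubes, cubes * verts_per_cube);
}
```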
The vertex buffers are created by passing glium an array of vertices. For this project I want to create some cubes to display. I break this down by first creating a function for creating a quad. Once I have the quad function I can use it to create the cube.
Because I want to reuse the quad for all the faces I define the quad by its bottom-left vertex and vectors pointing up and to the right. This means that I need to know whether I am working with a left or right handed coordinate system. Whilst writing the code for the cube builder I had to repeatedly check that I got the handedness the right way around.
fn add_quad(dest: &mut Vec<Vertex3t2>, bottom_left: glm::Vec3, up: glm::Vec3, right: glm::Vec3) {
    let top_left: glm::Vec3 = (bottom_left + up).into();
    let top_right: glm::Vec3 = (top_left + right).into();
    let bottom_right: glm::Vec3 = (bottom_left + right).into();
    dest.push(Vertex3t2 {
        position: bottom_left.into(),
        tex_coords: [0.0, 0.0],
    });
    dest.push(Vertex3t2 {
        position: top_left.into(),
        tex_coords: [0.0, 1.0],
    });
    dest.push(Vertex3t2 {
        position: top_right.into(),
        tex_coords: [1.0, 1.0],
    });
The code above takes a reference to the vector where the quad vertices are stored, calculates the positions of the quad corners and pushes the vertices for one triangle into the vector. To complete the quad I output three more vertices.
    dest.push(Vertex3t2 {
        position: bottom_left.into(),
        tex_coords: [0.0, 0.0],
    });
    dest.push(Vertex3t2 {
        position: top_right.into(),
        tex_coords: [1.0, 1.0],
    });
    dest.push(Vertex3t2 {
        position: bottom_right.into(),
        tex_coords: [1.0, 0.0],
    });
}
I need to be careful to make sure all triangles are output with the same winding order. If I float above the visible side of a triangle and look down on it, all the points should go in clockwise order. This allows me to use backface culling to reduce the number of rasterized triangles. A consistent winding order will also make it easy to generate normals for all the triangle vertices.
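A handy way to check the winding is to take the cross product of two edges: the direction of the resulting normal flips when the winding flips. The snippet below is a standalone sketch of that check, not part of the actual program:

```rust
type V3 = [f32; 3];

fn sub(a: V3, b: V3) -> V3 {
    [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
}

fn cross(a: V3, b: V3) -> V3 {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

// Normal of the triangle (v0, v1, v2) from the cross product of two edges.
fn triangle_normal(v0: V3, v1: V3, v2: V3) -> V3 {
    cross(sub(v1, v0), sub(v2, v0))
}

fn main() {
    // A triangle in the xy-plane whose vertices run clockwise when seen
    // from a viewer at +z looking towards the origin.
    let n = triangle_normal([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]);
    assert_eq!(n, [0.0, 0.0, -1.0]); // clockwise order gives a -z normal here

    // Swapping two vertices flips the winding, and the normal flips too.
    let m = triangle_normal([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]);
    assert_eq!(m, [0.0, 0.0, 1.0]);
    println!("cw normal {:?}, ccw normal {:?}", n, m);
}
```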
Once I have a function for generating a quad I can use it to generate a cube. A cube is just six quads with different orientations.
fn add_cube(verts: &mut Vec<Vertex3t2>, pos: &glm::Vec3) {
    add_quad(
        verts,
        glm::vec3(-0.5, -0.5, 0.5) + pos,
        glm::vec3(0.0, 1.0, 0.0),
        glm::vec3(1.0, 0.0, 0.0),
    );
    //..5 more sides
}
...the cube generator lets me generate an array of cubes.
    let mut verts: Vec<Vertex3t2> = Vec::new();
    for x in -2..3 {
        for y in -2..3 {
            for z in -2..3 {
                add_cube(
                    &mut verts,
                    &glm::Vec3::new(x as f32 * 2.0, y as f32 * 2.0, z as f32 * 2.0),
                );
            }
        }
    }
Once I have generated all the vertices glium can turn them into a vertex buffer object that can be used for rendering.
    let vertex_buffer = glium::VertexBuffer::new(&display, &verts).unwrap();

Setting up matrices

Before we can render the object we need to set up the world and perspective matrices. As discussed, I use nalgebra-glm for all matrix calculations.
        let mut model_view = glm::rotate_z(&glm::identity(), t);
        model_view = glm::translate(&model_view, &glm::vec3(0.0, 0.0, -12.0));
        model_view = glm::rotate_x(&model_view, t / 2.0);
        model_view = glm::rotate_y(&model_view, t / 2.0);
        let view: [[f32; 4]; 4] = model_view.into();
The first four lines create an identity matrix and apply translation and rotation to it. The last line copies the matrix from the matrix structure into a 4x4 f32 array that can be sent to OpenGL. The type of view must be spelled out so that into knows what type the matrix needs to be converted into.
GLM has a helper function glm::perspective for creating the perspective matrix;
        let perspective = glm::perspective(1.0, 3.14 / 2.0, 0.1, 1000.0);
        let p: [[f32; 4]; 4] = perspective.into();
This creates a projection matrix with a roughly 90-degree field of view (the field-of-view argument is passed in radians: 3.14 / 2.0) and near and far depths of 0.1 and 1000.
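To see why 3.14 / 2.0 radians gives that ~90-degree view, note that the diagonal of a perspective matrix is driven by the focal scale 1 / tan(fovy / 2); at 90 degrees that scale is 1.0, so x and y pass through unscaled (aside from the aspect ratio). A quick numeric check, reimplementing only that one term, not glm's whole matrix:

```rust
fn main() {
    let aspect = 1.0f32;
    let fovy = 3.14f32 / 2.0; // ~90 degrees, in radians
    let f = 1.0 / (fovy / 2.0).tan(); // focal scale on the matrix diagonal
    assert!((f - 1.0).abs() < 0.01); // ~1.0 at a 90-degree field of view
    println!("x scale: {}, y scale: {}", f / aspect, f);
}
```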
Both of the matrices are passed to OpenGL as uniforms. A uniform is data that is passed into the shader where every invocation of the shader sees the same version of the data.
Glium has a glium::uniform! macro that creates a uniforms object from the data passed to it. The names of the fields in the uniform macro must match the names of the uniform variables in the shaders. This is how glium binds the uniform data to the shaders.
To create the uniform block we just use the macro.
    let uniforms = glium::uniform! {  matrix: view, perspective: p };

Drawing the object with the right parameters

The final structure we need before we can call the draw function is a glium::DrawParameters. Glium uses this to tell OpenGL about things like culling and z-buffering. Glium does provide a default DrawParameters, but it does not enable the z-buffer, so we need to set up our own.
    let params = glium::DrawParameters {
        depth: glium::Depth {
            test: glium::draw_parameters::DepthTest::IfLess,
            write: true,
            ..Default::default()
        },
        backface_culling: glium::draw_parameters::BackfaceCullingMode::CullCounterClockwise,
        ..Default::default()
    };
This sets up a DrawParameters structure with defaults for everything except enabling z-buffer writes, setting the depth test type and enabling backface culling. The .. at the end of each struct literal is Rust's struct update syntax: it fills in every field not listed explicitly from another instance, here Default::default().
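A small standalone illustration of that struct update syntax, using a made-up Settings type rather than glium's real one:

```rust
#[derive(Debug, PartialEq)]
struct Settings {
    depth_test: bool,
    depth_write: bool,
    cull_backfaces: bool,
}

impl Default for Settings {
    fn default() -> Self {
        Settings { depth_test: false, depth_write: false, cull_backfaces: false }
    }
}

fn main() {
    // Override two fields, take the rest from Default::default().
    let params = Settings {
        depth_test: true,
        depth_write: true,
        ..Default::default()
    };
    assert!(params.depth_test && params.depth_write);
    assert!(!params.cull_backfaces); // unlisted field keeps its default
    println!("{:?}", params);
}
```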
We now have everything in place to draw the object.
        target.draw( &vertex_buffer, &indices, &program, &uniforms, &params ).unwrap();


So far glium has been very pleasant to work with, striking just the right balance between hiding complexity and getting out of the way when it is not needed. I will definitely continue to use it for my projects.
The code in this project is very simple as it does not make use of instancing or have any kind of lighting model. This is something I want to look at next.
