For quite some time now, I have been wanting to make a visualization library in Rust with a focus on both performance and flexibility. I want it to be a library that allows me to take continuously updated streams of data and easily turn them into the visualizations that I want. I also want the syntax to be mostly declarative, so that the intent is easy to read.

I have been trying to build something on top of the wgpu library that would enable performant, yet simple, visualization. To avoid getting bottlenecked by CPU operations, I started looking into automatically modifying GPU shaders. Thankfully, this is possible with the naga library, which is developed by the same team behind the wgpu project.

The project I am working on is called “Visula”. The syntax currently looks something like this:

impl visula::Simulation for Simulation {
    fn init(application: &mut visula::Application) -> Result<Simulation, Error> {
        let particles = generate_particles();
        let particle_buffer = Buffer::<Particle>::new_with_init(
            application,
            BufferUsages::UNIFORM | BufferUsages::VERTEX | BufferUsages::COPY_DST,
            particles,
            "particle",
        );
        let spheres = Spheres::new(
            application,
            &SphereDelegate {
                position: delegate!(particle.position),
                radius: delegate!(1.0),
                color: delegate!(particle.position / 40.0 + vec3::<f32>(0.5, 0.5, 0.5)),
            },
        )?;
        Ok(Simulation {
            particles,
            particle_buffer,
            spheres,
        })
    }

    fn update(&mut self, _application: &visula::Application) {
        update_particles(&mut self.particles);
    }

    fn render(&mut self, data: &mut SimulationRenderData) {
        self.spheres.render(data);
    }
}

I have left out the implementations of generate_particles and update_particles. Those are the functions that create the initial data you want to render and perform the subsequent updates.

In any case, the interesting part is the definition of the spheres:

let spheres = Spheres::new(
    application,
    &SphereDelegate {
        position: delegate!(particle.position),
        radius: delegate!(1.0),
        color: delegate!(particle.position / 40.0 + vec3::<f32>(0.5, 0.5, 0.5)),
    },
)?;

This is where the rendering of the spheres is tied to the data in the particle buffer. By simply referencing the buffer in the SphereDelegate, the Visula library automatically injects the necessary code into the GPU shader and ensures that the buffer data is properly bound during rendering.
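To give a rough idea of what that injection amounts to, here is a simplified stand-in. Visula uses naga to rewrite the parsed shader module, but conceptually you can think of it as splicing a generated WGSL expression into a shader template (the function and placeholder names below are made up for illustration):

```rust
// Simplified stand-in for shader injection: replace a placeholder in a WGSL
// template with a generated expression. The real library works on naga's
// parsed module representation rather than on raw strings.
fn inject(template: &str, placeholder: &str, generated: &str) -> String {
    template.replace(placeholder, generated)
}

fn main() {
    let template = "fn sphere_color(particle: Particle) -> vec3<f32> {
    return //COLOR//;
}";
    let shader = inject(
        template,
        "//COLOR//",
        "particle.position / 40.0 + vec3<f32>(0.5, 0.5, 0.5)",
    );
    println!("{shader}");
}
```

Working on the parsed module instead of strings is what makes the injection robust: naga can validate the result and translate it to whatever backend wgpu is targeting.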

Unfortunately, I have to rely on macros to enable the above expressions, because I have yet to find a nice way to use operator overloading in Rust to build a computational graph. But more on that in a later post.

I am also considering whether I should replace the macro

delegate!(particle.position / 40.0 + vec3::<f32>(0.5, 0.5, 0.5)),

with a syntax like

particle.position.divide(40.0).add(vec3::<f32>(0.5, 0.5, 0.5)),

but I need to make some pretty big architectural changes to get there first.
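To sketch what either syntax could look like under the hood, here is a toy expression tree that supports both the operator style and the method-call style, and that can be lowered to a WGSL-like string. The types and method names are hypothetical, not Visula's actual API:

```rust
use std::ops::{Add, Div};

// Toy computational graph; Visula's real representation differs.
#[derive(Clone, Debug, PartialEq)]
enum Expr {
    Field(String),                              // e.g. "particle.position"
    Scalar(f64),                                // a literal such as 40.0
    Binary(&'static str, Box<Expr>, Box<Expr>), // operator and operands
}

impl Expr {
    // Method-call style: expr.divide(40.0).add_expr(...)
    fn divide(self, rhs: f64) -> Expr {
        Expr::Binary("/", Box::new(self), Box::new(Expr::Scalar(rhs)))
    }
    fn add_expr(self, rhs: Expr) -> Expr {
        Expr::Binary("+", Box::new(self), Box::new(rhs))
    }

    // Lower the graph to a WGSL-like expression string.
    fn to_wgsl(&self) -> String {
        match self {
            Expr::Field(name) => name.clone(),
            Expr::Scalar(v) => format!("{v:?}"),
            Expr::Binary(op, a, b) => {
                format!("({} {} {})", a.to_wgsl(), op, b.to_wgsl())
            }
        }
    }
}

// Operator style: expr / 40.0 + other
impl Div<f64> for Expr {
    type Output = Expr;
    fn div(self, rhs: f64) -> Expr {
        self.divide(rhs)
    }
}

impl Add for Expr {
    type Output = Expr;
    fn add(self, rhs: Expr) -> Expr {
        self.add_expr(rhs)
    }
}

fn main() {
    let position = Expr::Field("particle.position".into());
    let offset = Expr::Field("vec3<f32>(0.5, 0.5, 0.5)".into());

    // Both styles build the same graph.
    let a = position.clone() / 40.0 + offset.clone();
    let b = position.divide(40.0).add_expr(offset);
    assert_eq!(a, b);
    println!("{}", a.to_wgsl());
}
```

In this toy version the overloading looks straightforward; the hard part in practice is making it work ergonomically with buffers, uniforms, and borrowed data, which is presumably where the architectural changes come in.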

If you want to see a live rendering using this library, you can check out my earlier post on Molecular dynamics and crystallization.