# Concurrency in Zig

Mar 19, 2021

It has been a long time since I've written anything. I was quite busy with school work, playing around with Zig, rewriting this blog, and so on. I hope you didn't miss me. Anyway, back to concurrency!

Recently I've been playing around with the Zig programming language, which, in a nutshell, is like Rust but with less C++. Along with it, I decided to try out parallel programming.

I know, I shouldn't be using raw threads, but in a safe context, where data races are usually not possible, using threads should be fine.

## Spawning a thread

Spawning a thread in Zig is done with Thread.spawn, which is available in the standard library. This function takes a context and a thread start function, and returns a Thread struct or an error.

Zig doesn't have lambdas or anonymous functions, so a thread should be a full-blown function.

```zig
const std = @import("std");

fn mythread(ctx: void) void {
    std.log.info("hello from another thread", .{});
}

pub fn main() !void {
    const thread = try std.Thread.spawn({}, mythread);

    thread.wait();
}
```

The context is void for now; it will be used later.

After creating a thread, .wait() is called on it to block until it finishes. This is needed because the program will terminate as soon as the main thread exits and will also kill all other threads.

## Thread contexts

Ok, what if I need to pass some data to another thread? That's where thread contexts come into play.

Because Zig doesn't have closures which capture variables (hi, Rust), that behaviour is achieved with thread contexts. A context is given to the spawn function and is then passed as the only argument to the thread start function.

```zig
const std = @import("std");

fn countThread(max: usize) void {
    var i: usize = 0;

    while (i < max) : (i += 1) {
        std.log.info("counter is {d}", .{ i });
    }
}

pub fn main() !void {
    const thread = try std.Thread.spawn(@as(usize, 20), countThread);

    thread.wait();
}
```

Here we count up to a maximum value which is given as the context. "var i: usize = 0" and "@as(usize, 20)" are needed because integer literals are comptime_int by default and must be coerced to a runtime-usable usize.

Of course, those examples are not very useful because they could be replicated in a single-threaded context but this is just to showcase how to create simple threads.

## Shared data and mutexes

Now it's time to talk about sharing data between threads, because that's what makes threads so great: they all share the same address space.

But sharing data is sometimes tricky because, by default, nothing synchronizes access between threads. That can result in data races, which may even cause memory corruption.

To fix that, enter the mutex, a MUTual EXclusion lock. A mutex is first acquired (locked) by one thread; that thread does all its work and then releases the mutex so others can use it.

Let's improve the counter a bit so it can be accessed within other threads:

```zig
const std = @import("std");
const time = std.time;

const CounterCtx = struct {
    lock: *std.Mutex,
    counter: usize,

    pub fn setCounter(self: *CounterCtx, value: usize) void {
        const held = self.lock.acquire();
        defer held.release();

        self.counter = value;
    }

    pub fn getCounter(self: *CounterCtx) usize {
        const held = self.lock.acquire();
        defer held.release();

        return self.counter;
    }
};

fn countThread(ctx: *CounterCtx) void {
    var i: usize = 0;

    while (i < 20) : (i += 1) {
        ctx.setCounter(i);

        time.sleep(time.ns_per_s * 2); // Every 2 seconds
    }
}

pub fn main() !void {
    var mutex = std.Mutex{};
    var ctx = CounterCtx{
        .lock = &mutex,
        .counter = 0,
    };
    const thread = try std.Thread.spawn(&ctx, countThread);

    while (true) {
        const counter = ctx.getCounter();

        std.log.info("counter is {d}", .{ counter });

        time.sleep(time.ns_per_s); // Every second
    }
}
```

Here we declare a CounterCtx struct that holds a mutex lock and the counter, plus two helper functions, setCounter and getCounter, to set and get the counter respectively under mutex protection. defer ensures that the mutex is always released when the function returns.

=> Mutex in std

## Atomic values and operations

Besides mutexes, atomic operations can also be used for thread synchronization. These are operations that usually compile down to a single CPU instruction and are guaranteed to execute indivisibly, so no thread can ever observe a variable in a half-updated state.

There are three builtin functions for that:

* @atomicLoad - atomically read a value
* @atomicStore - atomically write a value
* @atomicRmw - atomically read, modify and write back a value

Atomic values can be pointers, booleans, integers, floats or enums.

All those functions accept an ordering as the last argument. Ordering itself is quite a complicated topic, so I won't go over it now; we'll just use Sequentially Consistent (SeqCst) ordering, which should work in 99% of cases.

Here's the counter example but this time using atomic values:

```zig
const std = @import("std");
const time = std.time;

const CounterCtx = struct {
    counter: *usize,

    pub fn incCounter(self: *CounterCtx) void {
        _ = @atomicRmw(usize, self.counter, .Add, 1, .SeqCst);
    }

    pub fn setCounter(self: *CounterCtx, value: usize) void {
        @atomicStore(usize, self.counter, value, .SeqCst);
    }

    pub fn getCounter(self: *CounterCtx) usize {
        return @atomicLoad(usize, self.counter, .SeqCst);
    }
};

fn countThread(ctx: *CounterCtx) void {
    var i: usize = 0;

    while (i < 10) : (i += 1) {
        ctx.incCounter();

        time.sleep(time.ns_per_s * 2); // Every 2 seconds
    }

    i = 0;
    while (i < 10) : (i += 1) {
        ctx.setCounter(i);

        time.sleep(time.ns_per_s * 2); // Every 2 seconds
    }
}

pub fn main() !void {
    var counter: usize = 0;
    var ctx = CounterCtx{
        .counter = &counter,
    };
    const thread = try std.Thread.spawn(&ctx, countThread);

    while (true) {
        std.log.info("counter is {d}", .{ ctx.getCounter() });

        time.sleep(time.ns_per_s); // Every second
    }
}
```

The code stays almost the same: we dropped the mutex, replaced it with @atomicStore/@atomicLoad, and added a new function incCounter which uses @atomicRmw's Add operation.

Note that @atomicRmw returns the previous value, which is not needed in this example, so the result is assigned to an underscore (ignored).
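That previous value is useful whenever every thread needs a distinct result from the same shared counter. Here's a small sketch of a "ticket dispenser" built on the .Add operation; nextTicket is a made-up helper name, not a standard library function:

```zig
const std = @import("std");

// Shared counter handing out ticket numbers.
var next_id: usize = 0;

// Atomically bump the counter and return its previous value,
// so each caller gets a unique number even under contention.
fn nextTicket() usize {
    return @atomicRmw(usize, &next_id, .Add, 1, .SeqCst);
}

pub fn main() void {
    std.log.info("first ticket is {d}", .{ nextTicket() }); // 0
    std.log.info("second ticket is {d}", .{ nextTicket() }); // 1
}
```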

For reference, @atomicRmw supports the following operations: .Xchg, .Add, .Sub, .And, .Nand, .Or, .Xor, .Max, .Min.

=> Atomic builtins

## Spin locks

Yes, Zig has SpinLock in the standard library, but I don't see a reason to cover it here because it behaves just like Mutex.acquire: it spins in a loop while the lock is held.
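For the curious, the core idea can be sketched with the @atomicRmw .Xchg operation from the previous section. This is my own minimal illustration, not the std.SpinLock implementation:

```zig
const std = @import("std");

const SpinLock = struct {
    locked: bool = false,

    pub fn lock(self: *SpinLock) void {
        // Atomically swap in `true`; if the previous value was
        // already `true`, another thread holds the lock, so spin.
        while (@atomicRmw(bool, &self.locked, .Xchg, true, .SeqCst)) {}
    }

    pub fn unlock(self: *SpinLock) void {
        @atomicStore(bool, &self.locked, false, .SeqCst);
    }
};
```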

=> SpinLock in std

## Channels

One common synchronization primitive is the channel, which Zig doesn't have (at least as of 0.7.1), but channels can be implemented on top of other primitives. I'll post an update once I have a working implementation or they get added to the standard library.
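As a rough illustration of what "on top of other primitives" could mean, here is a naive single-slot channel sketch built on the same std.Mutex used earlier. The names (Channel, send, recv) are mine, and a real implementation would block on a condition variable or futex instead of sleeping and retrying:

```zig
const std = @import("std");

fn Channel(comptime T: type) type {
    return struct {
        lock: std.Mutex = std.Mutex{},
        value: T = undefined,
        full: bool = false,

        const Self = @This();

        // Put a value into the slot, retrying while it's occupied.
        pub fn send(self: *Self, value: T) void {
            while (true) {
                const held = self.lock.acquire();
                if (!self.full) {
                    self.value = value;
                    self.full = true;
                    held.release();
                    return;
                }
                held.release();
                std.time.sleep(std.time.ns_per_ms); // back off and retry
            }
        }

        // Take a value out of the slot, retrying while it's empty.
        pub fn recv(self: *Self) T {
            while (true) {
                const held = self.lock.acquire();
                if (self.full) {
                    const value = self.value;
                    self.full = false;
                    held.release();
                    return value;
                }
                held.release();
                std.time.sleep(std.time.ns_per_ms); // back off and retry
            }
        }
    };
}
```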

## Async

I still haven't tried async in Zig so I can't say anything about it. Sorry!

## Resources

Some useful resources for learning about atomic ordering:

=> https://en.cppreference.com/w/cpp/atomic/memory_order

=> https://doc.rust-lang.org/nomicon/atomics.html

=> https://llvm.org/docs/Atomics.html