7 GB/s at 512 KB block size is only ~14,000 IO/s, which works out to a whopping ~70 us per IO. That is a trivial rate even for synchronous IO. You should only need one in-flight operation (prefetch 1) to overlap the IO with your memory copy, so the two don't serialize, and still get the full IO bandwidth.
Their referenced previous post [1] demonstrates ~240,000 IO/s when using basic settings. Even that seems pretty low, but is still more than enough to completely trivialize this benchmark and saturate the hardware IO with zero tuning.
[1] https://steelcake.com/blog/comparing-io-uring/
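To make the "prefetch 1" idea concrete, here is a minimal sketch in C with liburing (not the Zig library from the post; the path, the 512 KB block size, and the 4096 byte buffer alignment are illustrative assumptions): one read stays in flight while the previously completed buffer is processed, so the memory work and the IO overlap instead of serializing.

```c
/* Sketch: double-buffered sequential O_DIRECT reads with one IO in flight.
 * Assumes liburing is installed; path and sizes are placeholders. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK (512 * 1024) /* 512 KB per request, as in the post */

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(4, &ring, 0) < 0) return 1;

    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) return 1;

    void *buf[2];
    for (int i = 0; i < 2; i++)
        if (posix_memalign(&buf[i], 4096, BLOCK)) return 1;

    off_t off = 0;
    int cur = 0;

    /* Prime the pipeline: exactly one read in flight. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf[cur], BLOCK, off);
    io_uring_submit(&ring);

    for (;;) {
        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) < 0) break;
        int got = cqe->res;
        io_uring_cqe_seen(&ring, cqe);
        if (got <= 0) break; /* EOF or error */

        /* Immediately queue the next block into the other buffer... */
        off += got;
        int nxt = cur ^ 1;
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf[nxt], BLOCK, off);
        io_uring_submit(&ring);

        /* ...then do the "memory copy" / processing on the finished one
         * while the drive services the next read. */
        /* process(buf[cur], got); */

        cur = nxt;
    }

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```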
IOPS depends on the size of the individual IO and also the SSD. I was just trying to see if it is possible to reach max disk READ/WRITE.
Planning to add random reads with 4K and 512 byte block sizes to the example so I can measure IOPS too.
Zig is currently undergoing lots of breaking changes in the IO API and implementation. Any post about IO in zig should also mention the zig version used.
I see it’s 0.15.1 in the zon file, but that should also be part of the post somewhere.
This doesn't use std except for the IoUring API, so it doesn't depend on the new changes.
I see you use a hard-coded constant ALIGN = 512. Many NVMe drives actually allow you to raise the logical block size to 4096 by re-formatting (nvme-format(1)) the drive.
I realise this and also implemented it properly here before: https://github.com/steelcake/io2/blob/fd8b3d13621256a25637e3...
Just opted to use fixed 512 byte alignment in this library since all decent SSDs I have encountered so far are ok with 512 byte file alignment and 64 byte memory alignment.
This makes the code a bit simpler both in terms of the allocator and the file operations.
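As a rough illustration of what those fixed constants mean at the syscall level, here is a C sketch (path and sizes are made up; the 512 byte value mirrors the ALIGN constant discussed above): with O_DIRECT the buffer address, the transfer length, and the file offset all need to respect the alignment.

```c
/* Sketch: the alignment rules O_DIRECT cares about, using a conservative
 * 512 byte alignment everywhere. Path and sizes are placeholders. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

enum { ALIGN = 512 };

int main(void) {
    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) return 1;

    /* Buffer address aligned (conservatively to 512 here; as noted above,
     * many drives accept less on the memory side), and both the length
     * and the file offset are multiples of ALIGN. */
    void *buf;
    if (posix_memalign(&buf, ALIGN, 64 * ALIGN)) return 1;

    ssize_t n = pread(fd, buf, 64 * ALIGN, 8 * ALIGN); /* offset also aligned */
    (void)n;

    free(buf);
    close(fd);
    return 0;
}
```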
It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.
In some situations, the “logical” block size can differ from the hardware one. For example, buffered writes use the page cache, which operates in PAGE_SIZE blocks (usually 4K). Or your RAID stripe size might be misconfigured, stuff like that. Otherwise they should be equal for best outcomes.
In general, we want it to be as small as possible!
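If you'd rather query it than assume it, Linux exposes both values through ioctls; a small C sketch (the device path is an assumption):

```c
/* Sketch: ask the kernel for a device's logical and physical block sizes. */
#include <fcntl.h>
#include <linux/fs.h>   /* BLKSSZGET, BLKPBSZGET */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/nvme0n1", O_RDONLY); /* device name is illustrative */
    if (fd < 0) return 1;

    int logical = 0;        /* what O_DIRECT transfers must be aligned to */
    unsigned physical = 0;  /* what the drive reports as its physical sector size */
    ioctl(fd, BLKSSZGET, &logical);
    ioctl(fd, BLKPBSZGET, &physical);

    printf("logical=%d physical=%u\n", logical, physical);
    close(fd);
    return 0;
}
```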
> It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.
NVMe drives have at least four "hardware block sizes":

There's the LBA size, which determines what size IO transfers the OS must exchange with the drive; that can be re-configured on some drives, usually with 512B and 4kB as the options.

There's the underlying page size of the NAND flash, which is more or less the granularity of individual read and write operations, and is usually something like 16kB or more.

There's the underlying erase block size of the NAND flash, which comes into play when overwriting data or doing wear leveling, and is usually several MB.

There's the granularity of the SSD controller's Flash Translation Layer, which determines the smallest write the SSD can handle without doing a read-modify-write cycle; that's usually 4kB regardless of the LBA format selected, but on some special-purpose drives it can be 32kB or more.
And then there's an assortment of hints the drive can provide to the OS about preferred granularity and alignment for best performance, or requirements for atomic operations. These values will generally be a consequence of the above values, and possibly also influenced by the stripe and parity choices the SSD vendor made.
I've run into (specialized) flash hardware with 512 kB for that 3rd size.
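On Linux, those hints end up in sysfs under /sys/block/<dev>/queue/. A quick C sketch that prints the commonly checked ones (the device name is illustrative):

```c
/* Sketch: read the block layer's preferred-IO hints from sysfs.
 * minimum_io_size / optimal_io_size are the kernel's view of the
 * device-reported granularity hints; device name is illustrative. */
#include <stdio.h>

static long queue_attr(const char *name) {
    char path[128];
    long v = -1;
    snprintf(path, sizeof path, "/sys/block/nvme0n1/queue/%s", name);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1) v = -1;
        fclose(f);
    }
    return v;
}

int main(void) {
    printf("logical_block_size  = %ld\n", queue_attr("logical_block_size"));
    printf("physical_block_size = %ld\n", queue_attr("physical_block_size"));
    printf("minimum_io_size     = %ld\n", queue_attr("minimum_io_size"));
    printf("optimal_io_size     = %ld\n", queue_attr("optimal_io_size"));
    return 0;
}
```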
Why would you want the block size to be as small as possible? You will only benefit from that for very small files, hence the sweet spot is somewhere between "as small as possible" and "small multiple of the hardware block size".
If you have bigger files, then having bigger blocks means less fixed overhead from syscalls and NVMe/SATA requests.
If your native device block size is 4KiB and you fetch 512 byte blocks, you need storage-side RAM to hold the smaller blocks and you have to address each block independently. Meanwhile, if you go bigger than the device block size, you end up with fewer requests and syscalls. If it turns out that the requested block size is too large for the device, the OS can split your large request into smaller, device-appropriate requests, since the OS knows the hardware characteristics.
The most difficult case to optimize is the one where you issue many parallel requests to the storage device using asynchronous file IO for latency hiding. In that case, knowing the device's exact block size is important, because you are IOPS-bottlenecked, and a block size closer to what the device supports natively means fewer device operations per request.
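A rough sketch of that parallel case in C with liburing, assuming 4 KiB blocks and an arbitrary queue depth of 32 (file path and iteration count are placeholders): the point is simply that many small reads are kept in flight so the device can hide its per-operation latency.

```c
/* Sketch: random 4 KiB O_DIRECT reads at queue depth 32 with io_uring. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdlib.h>
#include <unistd.h>

#define QD 32
#define BS 4096

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(QD, &ring, 0) < 0) return 1;

    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT); /* path is illustrative */
    if (fd < 0) return 1;
    off_t nblocks = lseek(fd, 0, SEEK_END) / BS;
    if (nblocks <= 0) return 1;

    void *bufs[QD];
    for (int i = 0; i < QD; i++)
        if (posix_memalign(&bufs[i], BS, BS)) return 1;

    /* Fill the queue so QD reads are in flight at all times. */
    for (int i = 0; i < QD; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, bufs[i], BS, (rand() % nblocks) * (off_t)BS);
        io_uring_sqe_set_data(sqe, bufs[i]);
    }
    io_uring_submit(&ring);

    for (long done = 0; done < 1000000; done++) {
        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) < 0) break;
        void *buf = io_uring_cqe_get_data(cqe);
        io_uring_cqe_seen(&ring, cqe);

        /* Resubmit immediately so the queue never drains. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, BS, (rand() % nblocks) * (off_t)BS);
        io_uring_sqe_set_data(sqe, buf);
        io_uring_submit(&ring);
    }

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```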
> The result I get from fio is 4.083 GB/s write, … The result for the diorw example in zig is 3.802 GB/s write….
4.083 / 3.802 = 1.0739
2^30 / 10^9 = 1.0737
I think the same rate was likely achieved but there is confusion between GiB and GB.
Yeah, seems like fio might be mixing them up or it is just faster than the zig code.
Fio seems to interpret `16g` as 16GiB, so it creates a 16GiB ~= 17.2GB file, but I'm not sure if it is reading/writing the whole thing.
It seems like the max performance of the SSD is 7GB/s in the spec, so it is kind of confusing.
I'm not very familiar with Zig and was kind of struggling to follow the code, so maybe that's why I couldn't find where this was being set up. In case it isn't, be sure to also use registered file descriptors with io_uring, as they make a fairly big difference.
I added registered file descriptors now.
It didn't make any difference when I was benchmarking with fio, but I didn't use many threads, so I'm not sure.
I added it anyway since I saw this comment and the io_uring documentation also says it should make a difference.
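For reference, this is roughly what registered file descriptors look like with liburing in C (path and sizes are placeholders): register the fd once, then refer to it by index with IOSQE_FIXED_FILE so the kernel skips the per-operation fd lookup.

```c
/* Sketch: using a registered file descriptor with io_uring. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) return 1;

    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT); /* illustrative path */
    if (fd < 0) return 1;

    /* Register the fd once up front... */
    int fds[1] = { fd };
    if (io_uring_register_files(&ring, fds, 1) < 0) return 1;

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;

    /* ...then refer to it by its index (0) instead of the raw fd. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, 0 /* registered index */, buf, 4096, 0);
    sqe->flags |= IOSQE_FIXED_FILE;

    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_unregister_files(&ring);
    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```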
It would be interesting to see how an implementation using FreeBSD's AIO would compare.
You should be able to compare AIO vs io_uring using fio if it works on FreeBSD.
There are some config examples here [0], but they would be different for AIO, so you'd need to check the fio documentation to find the corresponding settings.
[0]: https://steelcake.com/blog/comparing-io-uring/
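FreeBSD's kernel AIO is driven through the POSIX aio(4) interface (fio's posixaio engine should map onto it). A bare-bones C sketch of a single async read, just to show the shape of the API (path and size are placeholders, no error handling or tuning):

```c
/* Sketch: one asynchronous read via POSIX AIO, which FreeBSD implements
 * in the kernel. Path and size are placeholders. */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/testfile", O_RDONLY);
    if (fd < 0) return 1;

    char *buf = malloc(512 * 1024);

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = 512 * 1024;
    cb.aio_offset = 0;

    if (aio_read(&cb) < 0) return 1;   /* submit the read */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);        /* block until it completes */

    if (aio_error(&cb) == 0)
        printf("read %zd bytes\n", aio_return(&cb));

    free(buf);
    close(fd);
    return 0;
}
```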
Why not use the page allocator to get aligned memory instead of overallocating?
I was doing heavier computations to create the write buffers and this actually made a bit of a difference before. Reverting to page_allocator now since it doesn't make a difference with the new code.
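For anyone following along, a C sketch of the two approaches being compared (names and the 1 MB size are illustrative): the overallocation trick pads a normal allocation and rounds the pointer up, while a page allocator (mmap here, roughly what Zig's page_allocator does) returns page-aligned memory directly.

```c
/* Sketch: two ways to get an aligned buffer.
 * 1) Overallocate and round the pointer up to the alignment.
 * 2) Ask for page-aligned memory directly (mmap), similar in spirit to
 *    Zig's page_allocator. Alignment and size values are illustrative. */
#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>

/* 1) Overallocation: waste up to `align - 1` bytes, keep the original
 *    pointer so it can be freed later. */
static void *aligned_via_overalloc(size_t size, size_t align, void **raw_out) {
    void *raw = malloc(size + align - 1);
    if (!raw) return NULL;
    uintptr_t p = ((uintptr_t)raw + align - 1) & ~(uintptr_t)(align - 1);
    *raw_out = raw; /* pass this to free(), not the aligned pointer */
    return (void *)p;
}

/* 2) Page allocator style: mmap always returns page-aligned memory. */
static void *aligned_via_pages(size_t size) {
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p; /* release with munmap(p, size) */
}

int main(void) {
    void *raw;
    void *a = aligned_via_overalloc(1 << 20, 64, &raw);
    void *b = aligned_via_pages(1 << 20);
    (void)a;
    free(raw);
    munmap(b, 1 << 20);
    return 0;
}
```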