Since GFS2 started to use SMAPIv3 and the qcow2 file format, we decided to run some performance tests.
To keep things as simple as possible, only the file-level based SR is tested:
A few minor issues: name-label and name-description aren't pushed correctly to XAPI; the SR ends up named "SR NAME" with the description "FILEBASED SR". It's only a small glitch, but at least it's reported. Otherwise, I can confirm the disk file is created and is a valid QCOW2 file.
I ran a benchmark on a Samsung 850 EVO SSD, in the same VM. Before benchmarking, I created a local 'ext' SR on the same disk, still with the same VM, so I could compare.
Here are the results:
- Sequential, both read then write (queue depth 32, 1 thread): SMAPIv3 is 3 times slower than the "ext" SR. Also note that with SMAPIv3, the `tapdisk` process seems to be at 100% CPU.
- Random read (4KiB, queue depth 8, 8 threads): SMAPIv3 is 150 times slower than the "ext" SR
- Random write (4KiB, queue depth 8, 8 threads): SMAPIv3 is 95 times slower than the "ext" SR
If you want the detailed numbers, let me know.
Did I miss something during the SR creation? Since GFS2 is basically file-level + the GFS2 filesystem + clustering on top, I'd expect roughly the same results, I suppose.
Note: the XCP-ng team and I would like to assist with this new storage stack, which sounds better in terms of architecture. I'm not sure this is the best place to discuss technical details, but the XAPI/XS mailing lists and IRC channels are dead. Any hints welcome!