Hi Willy,
I’ve found a bottleneck in my VMs!
I use log stdout format raw local0 err to limit logging to errors and more severe messages only, and the raw format so that no extra time is wasted on formatting. I even tried the emerg log level, but HAProxy still logged entries like:
00000001:test_fe.clireq[001c:ffffffff]: GET /haproxy-load HTTP/1.1
00000002:test_fe.clihdr[001e:ffffffff]: host: 192.168.0.206:8080
00000006:test_fe.accept(0004)=0020 from [::ffff:192.168.0.72:46768] ALPN=<none>
00000001:test_fe.clihdr[001c:ffffffff]: host: 192.168.0.206:8080
00000002:test_fe.clihdr[001e:ffffffff]: content-type: application/json
00000004:test_fe.accept(0004)=0024 from [::ffff:192.168.0.72:46754] ALPN=<none>
00000005:test_fe.accept(0004)=001f from [::ffff:192.168.0.72:46766] ALPN=<none>
00000001:test_fe.clihdr[001c:ffffffff]: content-type: application/json
00000007:test_fe.accept(0004)=0025 from [::ffff:192.168.0.72:46776] ALPN=<none>
00000003:test_fe.clireq[0022:ffffffff]: GET /haproxy-load HTTP/1.1
00000008:test_fe.accept(0004)=001d from [::ffff:192.168.0.72:46756] ALPN=<none>
00000004:test_fe.clireq[0024:ffffffff]: GET /haproxy-load HTTP/1.1
00000003:test_fe.clihdr[0022:ffffffff]: host: 192.168.0.206:8080
00000004:test_fe.clihdr[0024:ffffffff]: host: 192.168.0.206:8080
and a lot of those, a few GB in 30 seconds. I wasn't able to find a way to suppress these entries, so I just redirected them into a file:
$HAPROXY_HOME/haproxy -p $PID_FILE -f haproxy.conf -d 2>&1 > /tmp/haproxy-load.log
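For reference, the logging part of my configuration is just the line quoted above, in the global section (trimmed to the relevant lines; the commented line is the emerg variant I also tried):

global
    log stdout format raw local0 err
    # the variant I also tried, to restrict the level even further:
    # log stdout format raw local0 emerg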
Today I replaced /tmp/haproxy-load.log with /dev/null (exact command below the numbers), and with this simple change I started getting these results:
- aarch64, HTTP: 80 500 reqs/sec
- x86_64, HTTP: 64 900 reqs/sec
- aarch64, HTTPS: 75 400 reqs/sec
- x86_64, HTTPS: 63 400 reqs/sec
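In other words, the run behind those numbers was literally the same command as above, just with the output thrown away:

$HAPROXY_HOME/haproxy -p $PID_FILE -f haproxy.conf -d 2>&1 > /dev/null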
It seems there is something different in the disk I/O management of the two VMs after all. The disks are the same model from the same manufacturer, with the same IOPS and the same file system (ext4). I've run my tests many times and the results are consistent.
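If it would help to compare the raw write path of the two VMs outside of HAProxy, I could also run a plain sequential write test on both, something along these lines (just a rough check, with a made-up path and size, that I haven't actually run yet):

# write 1 GiB with O_DIRECT to bypass the page cache, sync at the end,
# and note the throughput that dd reports
dd if=/dev/zero of=/tmp/dd-test.bin bs=1M count=1024 oflag=direct conv=fsync
rm -f /tmp/dd-test.bin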