I have a workload that I'd like to limit with QoS, so I tested QoS on a non-production SVM using a single cluster-wide policy group containing a single volume. Whether I specify an IOPS limit or a throughput limit, the actual observed limit is exactly half the specified limit.
The documentation shows a nice example where a workload is clamped right at the set limit. It also says a 10% overage is not uncommon... but coming in 50% low seems way off.
Example (statistics output clipped to show only the volume of interest):
No qos set:
clusterX::> qos statistics workload performance show
Workload            ID     IOPS       Throughput    Latency
--------------- ------ -------- ---------------- ----------
home01-wid2228    2228     3127        97.73MB/s  1341.00us
home01-wid2228    2228     3171        99.08MB/s  1288.00us
home01-wid2228    2228     3206       100.17MB/s     1.70ms
(The unthrottled load is capable of ~3,200 IOPS at ~100 MB/s.)
qos bandwidth limit set:
clusterX::> qos policy-group modify -policy-group test_qos -max-throughput 10mb/s
clusterX::> qos statistics workload performance show
Workload            ID     IOPS       Throughput    Latency
--------------- ------ -------- ---------------- ----------
home01-wid2228    2228      159         4.96MB/s   673.04ms
home01-wid2228    2228      160         4.99MB/s   727.68ms
home01-wid2228    2228      159         4.95MB/s   787.47ms
qos iops limit set:
clusterX::> qos policy-group modify -policy-group test_qos -max-throughput 1000iops
clusterX::> qos statistics workload performance show
Workload            ID     IOPS       Throughput    Latency
--------------- ------ -------- ---------------- ----------
home01-wid2228    2228      494        15.44MB/s   242.15ms
home01-wid2228    2228      505        15.76MB/s   241.72ms
home01-wid2228    2228      494        15.45MB/s   245.38ms
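To quantify the "exactly half" claim, here's a quick sanity check averaging the three samples from each test above (the limit values are the ones I configured; nothing here is measured beyond what's already shown):

```python
# Ratio of observed throughput/IOPS to the configured QoS limit,
# using the three samples reported in each test above.

# Bandwidth test: limit was 10 MB/s
bw_limit_mbps = 10.0
bw_observed_mbps = (4.96 + 4.99 + 4.95) / 3

# IOPS test: limit was 1000 IOPS
iops_limit = 1000
iops_observed = (494 + 505 + 494) / 3

print(f"bandwidth: {bw_observed_mbps / bw_limit_mbps:.2f} of limit")
print(f"iops:      {iops_observed / iops_limit:.2f} of limit")
# Both ratios come out at roughly 0.50 -- half the configured limit.
```

So both limit types land within about half a percent of exactly 50% of the configured value, which looks systematic rather than like normal QoS slop.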
(I realize as well that the limit is a maximum, so anything under it is "technically" correct, but still...)
The system is a 2-node AFF cluster running ONTAP 8.3.2P5.