Gateway advanced parameter validation
This does what I can think of so far (a rough sketch of the checks follows the list):
1) Make sure low latency < high latency and low loss < high loss.
2) Loss interval must be at least latencyhigh; otherwise every packet with high latency would be counted as lost before its reply came back (see note below).
3) Averaging time period must be at least 2 times the probe interval - it is not much of an "average" if it covers fewer than 2 probes :)
4) Alert interval must be at least the probe interval - there is no point recalculating the average latency and loss more often than once every probe interval.
5) The criteria for showing or hiding the advanced options on page load have been fixed up to account for all the fields that are now in the advanced section.
6) Additional information - I have written some text that I think is now helpful.
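For reference, here is a rough sketch of how checks 1-4 could be expressed. The field names (latencylow, latencyhigh, losslow, losshigh, loss_interval, probe_interval, time_period, alert_interval) and the plain-Python form are illustrative only, not the actual form field names or validation code:

```python
def validate_gateway_advanced(params):
    """Collect validation errors for the advanced gateway monitoring fields.

    `params` maps the (illustrative) field names to integers: milliseconds
    for latency/interval values, percent for the loss thresholds.
    """
    errors = []

    # 1) Low thresholds must sit below the corresponding high thresholds.
    if params["latencylow"] >= params["latencyhigh"]:
        errors.append("Low latency threshold must be less than the high latency threshold.")
    if params["losslow"] >= params["losshigh"]:
        errors.append("Low loss threshold must be less than the high loss threshold.")

    # 2) Loss interval must be at least latencyhigh, otherwise every
    #    high-latency probe would be declared lost before its reply arrives.
    if params["loss_interval"] < params["latencyhigh"]:
        errors.append("Loss interval must be at least the high latency threshold.")

    # 3) The averaging time period must cover at least two probes.
    if params["time_period"] < 2 * params["probe_interval"]:
        errors.append("Time period must be at least twice the probe interval.")

    # 4) No point recalculating more often than once per probe.
    if params["alert_interval"] < params["probe_interval"]:
        errors.append("Alert interval must be at least the probe interval.")

    return errors
```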
Note: Currently the default loss interval is 500 and latencyhigh is also 500. This makes no sense to me. If a probe comes back in more than 500 ms, the thread waiting for the reply will already have given up (the loss interval has expired), so any packet with an RTT > 500 ms is counted as lost. That means no packet with an RTT > 500 ms is ever recorded, and therefore the average latency can never exceed latencyhigh (see the toy example below).
It seems to me that "loss interval" needs to be reasonably higher than "latencyhigh" in any sensible configuration.
Thoughts?
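And a toy example of the interaction described in the note (just the arithmetic, not dpinger's actual implementation):

```python
def summarize_probes(rtts_ms, loss_interval_ms):
    """Toy model: any probe slower than the loss interval is counted as lost,
    so it never contributes a latency sample."""
    recorded = [rtt for rtt in rtts_ms if rtt <= loss_interval_ms]
    lost = len(rtts_ms) - len(recorded)
    avg_latency = sum(recorded) / len(recorded) if recorded else 0.0
    return avg_latency, lost

# With loss interval == latencyhigh == 500, a link where most replies take
# 800-900 ms still reports an average well under 500 ms; the slow probes all
# show up as loss instead, so the high-latency threshold can never be reached.
print(summarize_probes([120, 480, 800, 900, 850], loss_interval_ms=500))  # (300.0, 3)
```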