QoS Query

Hello
Can anyone confirm the following? I cannot seem to find a definitive answer in the CCO documentation.
I am aware the below used to be the default behaviour, however I'm not sure whether it is still valid:

- 75% of the interface bandwidth can be used for queueing etc.
- 25% of the bandwidth is left for class-default, i.e. unreserved bandwidth for traffic that is not critical but still required; if this 25% is not used, it is shared among the other classes.

That default 75% could be increased with the max-reserved-bandwidth statement, thus allocating more bandwidth to the specific classes and reducing the overall class-default share. However, on newer IOS (XE) this command appears to be deprecated and bandwidth remaining ratios are used instead - so do those default interface QoS percentages still hold any value?
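For reference, a rough sketch of the two approaches as I understand them (the interface and class names here are placeholders only, not from a live config):

! Pre-HQF: raise the 75% ceiling on what bandwidth/priority statements may reserve
interface Serial0/0
 max-reserved-bandwidth 90
!
! HQF / IOS-XE: no max-reserved-bandwidth; leftover bandwidth is shared by ratio instead
policy-map HQF-SKETCH
 class CRITICAL
  bandwidth remaining percent 60
 class class-default
  bandwidth remaining percent 40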



Kind Regards
Paul

4 Replies

Joseph W. Doherty
Hall of Fame

Paul, what you have in mind changed with HQF CBWFQ.

Also, BTW, pre-HQF didn't limit actual bandwidth to 25/75 (which is what your post describes); by default, it limited the bandwidth settings you could configure (a slightly different thing).  (I.e. even with the 25/75 setting, class-default could physically use up to 100%, and the non-default classes could physically use up to 100%.)

"Now that default 75% could be increased with max-reserved bw statement thus allocating more BW to the specific classes and decrease the overall class-default, . . ."

Correct, pre-HQF.  Again (both pre- and post-HQF), "allocating" is really a minimum guarantee; the bandwidth isn't actually allocated in the sense of being reserved or set aside for exclusive use.  Also, the guarantee only really matches the configuration when 100% has been configured across all the classes, and all the classes "want" the configured bandwidth, or more.  I find it easier to keep in mind that what's actually happening is you're setting dequeuing ratios.

E.g. you get, effectively, the same results for:

policy-map alike
 class named
  bandwidth percent 50
 class class-default
  bandwidth percent 50

policy-map alike
 class named
  bandwidth percent 33
 class class-default
  bandwidth percent 33

policy-map alike
 class named
  bandwidth percent 1
 class class-default
  bandwidth percent 1

Oh, I also recall post-HQF precludes allocating 100% in aggregate to the named classes, as (logically) at least 1% needs to remain available for class-default.

BTW, there are other changes between pre- and post-HQF.  A major change is that FQ is supported within other "bandwidth" classes.  A subtler change is that FQ was WFQ in pre-HQF, but is plain FQ in post-HQF.
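As a sketch of what that newly allows (class names and percentages are illustrative only, not a recommendation): FQ under a user-defined bandwidth class, with the named classes limited to 99% in aggregate so class-default keeps its (at least) 1%.

policy-map HQF-CHANGES-SKETCH
 class GOLD
  bandwidth percent 70
  fair-queue
 class SILVER
  bandwidth percent 29
 class class-default
  fair-queue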

Paul, I've edited this reply, to be, hopefully, more comprehensive, but if your question is still unanswered, or you have another, please post it.

Hello Joseph
Okay so this is my query.
I want to tweak a current QoS policy that uses multi-level priority queuing to include a definitive two-rate (CIR/PIR) policer, so as not to starve any of the lower, non-priority classes and to keep the currently defined minimum bandwidth values on the other non-priority classes - which you cannot do unless, at some point, you drop the excess/violating traffic from the PQs.

Now, with this policer, I have been asked to provide as much PIR on this priority class as we can afford; however, my conscience brings me back to the Cisco recommendation of no more than 33% guaranteed BW for a PQ (assuming this still stands?), which we are already exceeding - assuming my BW calculations are even correct.

So my BW calculation (I thought) needed to include all the specific class allocations plus the default class-default reservation; whatever is left over I could then possibly give to this PQ as PIR.

Example: 200 Mbps CDR

 

policy-map child
 class EF
  priority level 1
  police 5m conform-action transmit exceed-action drop
 class AFx
  priority level 2
  police cir 50m pir ??m conform-action transmit exceed-action set-dscp-transmit csx violate-action drop
 class B
  bandwidth 2m
 class c
  bandwidth 2m
 class d
  bandwidth 2m
 class class-default
  fair-queue

policy-map parent
 class class-default
  shape average 200000000
  service-policy child


So my calculation, which I am now thinking is incorrect (hence this post):
25% of 200 = 50 Mbps (class-default, same as pre-HQF)
50 + 61 Mbps = 111 Mbps (class-default plus all the specific classes: 5 + 50 + 2 + 2 + 2)
89 Mbps remaining

Possible:

police cir 50m pir 139m (a high LLQ %)



I guess the question is: would 1% of the CDR within an HQF MLPQ policy be adequate for class-default, or would it be beneficial to go with a pre-HQF value (i.e. 25%)?



Kind Regards
Paul

". . .  however my conscious reverts me back the cisco recommendation of no more than 33% guaranteed BW for a PQ ( assuming this still stands?) . . ."

Yeah, I also recall Cisco does recommend not exceeding 33%.  Further, as I recall, they recommend this to ensure there's sufficient bandwidth for non-LLQ/PQ traffic.

I consider Cisco's "why" (if I remember their "why" correctly) nonsense.  Basically, whatever bandwidth other traffic "needs" should be determined just as if it were the only bandwidth available for that traffic.  (Unfortunately, I believe, not many really know how to do that either.)

That said, a really worthwhile reason to keep LLQ/PQ traffic at 1/3 or less is that, in a generic queuing situation, there's a low probability of queuing when your average utilization is 1/3 or less.  Above 2/3, you can start to run into significant queuing.  Between 1/3 and 2/3, you often have some queuing, but often not "a lot".  (More or less, average utilization vs. queuing is a somewhat exponential curve.)  In my experience, with real-time traffic like VoIP, you can often get up to about 50% utilization before you start to adversely impact the LLQ/PQ traffic.  For real-time traffic like video, as it's much more variable in bandwidth demand, I would try to avoid exceeding 1/3.  However, I personally have usually found up to about 35% "safe" (with a mix of VoIP and video).
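As a rough illustration of that curve (assuming a simple M/M/1-style model, which real traffic only approximates): the average number of packets in the system grows as utilization / (1 - utilization), so at 1/3 utilization that's about 0.5 packets, at 2/3 it's 2, and at 90% it's 9 - which is why queuing "takes off" past 2/3.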

To your question about setting PIR: first, as that traffic will be transmitted with LLQ/PQ priority, it would just be considered another part of your overall LLQ/PQ traffic allocation.  For example, if your LLQ priority class 1 is 5 Mbps, and your LLQ priority class 2 is 50 Mbps CIR plus another 10 Mbps of PIR, your total LLQ/PQ traffic allocation is 65 Mbps (5 + 50 + 10), which would be 65/200 = 32.5%.  Still within 33%, and, in my experience, you could push the PIR a bit more.
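A sketch of how that example allocation might look in your child policy (the numbers just match the 65/200 example above; your exceed-action re-marking is omitted here for brevity):

policy-map child
 class EF
  priority level 1
  police cir 5000000 conform-action transmit exceed-action drop
 class AFx
  priority level 2
  police cir 50000000 pir 60000000 conform-action transmit exceed-action transmit violate-action drop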

As to how much bandwidth your non-LLQ/PQ traffic needs, again, it might be fine with just 1%, or it might need more.  It depends on what that traffic needs.

For real-time traffic, we generally don't want drops, added latency, or too much jitter.  To guarantee that, again, you usually want at least two (minimally) to three (safer) times the average bandwidth usage available to that traffic.  As LLQ/PQ pushes other traffic "aside", it's effectively able to obtain the whole port's bandwidth for a burst.  (I.e. 2 to 3 times is the inverse of keeping average utilization at 1/2 to 1/3.)
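To put rough, purely illustrative numbers on that against your 200 Mbps CDR: real-time traffic averaging about 65 Mbps can burst into two to three times its average (130 to 195 Mbps) under LLQ/PQ, whereas real-time traffic averaging 100 Mbps only ever has twice its average available, and anything above that leaves less headroom than the rule of thumb suggests.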

BTW, the above holds true for both pre- and post-HQF QoS.

Possible:

police cir 50m pir 139m

And with your other 5 Mbps, that's a total of 194/200 = 97%.

Yes, you can do that, if 1) all the other traffic is "happy" with 3% of the bandwidth, and 2) you're not expecting any real benefit from using LLQ/PQ, with that kind of allocation, to minimize drop/latency/jitter issues.

Paul, if above is unclear, please let me know.

QoS often appears complicated.  Actually it's not, but much has to be understood for it to make sense.  I.e. it's not so much a deep subject as a wide one.

Don't know whether this will help, but if we're supporting real-time video, like video conferencing, again, we probably want it in an LLQ/PQ class where its average utilization is about 1/3 of the possible (i.e. the port's/link's maximum) bandwidth.  The 2/3 of bandwidth we can burst into, because of LLQ/PQ, is what ensures low drops/latency/jitter.

If we're dealing with streaming video, which often has multi-second buffers, we still want to very much avoid drops, but variable latency/jitter is, more or less, a non-issue; i.e. we can queue those packets.  For such traffic, we might be fine supporting an average utilization of up to about 90 to 95% of the possible bandwidth.  We want some "excess" bandwidth because, when average usage approaches 100%, we can start to get almost "infinitely" deep queues.
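A sketch of how those two kinds of video might be treated differently (class names and rates are illustrative only, roughly scaled to your 200 Mbps CDR):

policy-map VIDEO-SKETCH
 class VIDEO-CONFERENCING
  ! real-time: LLQ/PQ, with its average held to about 1/3 of the 200 Mbps
  priority level 2
  police cir 65000000 conform-action transmit exceed-action drop
 class VIDEO-STREAMING
  ! buffered streaming: a bandwidth class with FQ, tolerant of much higher average utilization
  bandwidth remaining percent 60
  fair-queue
 class class-default
  fair-queue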

Hello Joseph
Thank you, it's very much appreciated.



Kind Regards
Paul