on 17-12-2022 13:24
Hi all
Had my M600 upgraded to Gig1 a few days ago and got a SH5.
The SH4 never had an attenuator fitted, so it was an easy swap.
I set up a BQM and can see lots of high-latency spikes (yellow). Is this normal?
SH5 Stats:
Downstream bonded channels
Channel | Frequency (Hz) | Power (dBmV) | SNR (dB) | Modulation | Channel ID |
1 | 330000000 | 4.1 | 40 | QAM 256 | 25 |
2 | 210000000 | 4.6 | 41 | QAM 256 | 10 |
3 | 218000000 | 4.5 | 41 | QAM 256 | 11 |
4 | 226000000 | 4.6 | 41 | QAM 256 | 12 |
5 | 234000000 | 4.9 | 41 | QAM 256 | 13 |
6 | 242000000 | 5 | 41 | QAM 256 | 14 |
7 | 250000000 | 5 | 41 | QAM 256 | 15 |
8 | 258000000 | 4.9 | 41 | QAM 256 | 16 |
9 | 266000000 | 4.8 | 41 | QAM 256 | 17 |
10 | 274000000 | 4.6 | 40 | QAM 256 | 18 |
11 | 282000000 | 4.6 | 40 | QAM 256 | 19 |
12 | 290000000 | 4.5 | 40 | QAM 256 | 20 |
13 | 298000000 | 4.7 | 40 | QAM 256 | 21 |
14 | 306000000 | 4.8 | 40 | QAM 256 | 22 |
15 | 314000000 | 4.7 | 40 | QAM 256 | 23 |
16 | 322000000 | 4.4 | 40 | QAM 256 | 24 |
17 | 338000000 | 3.9 | 40 | QAM 256 | 26 |
18 | 346000000 | 4 | 40 | QAM 256 | 27 |
19 | 354000000 | 4 | 40 | QAM 256 | 28 |
20 | 362000000 | 4.2 | 40 | QAM 256 | 29 |
21 | 370000000 | 4.1 | 40 | QAM 256 | 30 |
22 | 378000000 | 4 | 40 | QAM 256 | 31 |
23 | 386000000 | 3.5 | 39 | QAM 256 | 32 |
24 | 394000000 | 3.1 | 39 | QAM 256 | 33 |
25 | 402000000 | 2.6 | 39 | QAM 256 | 34 |
26 | 410000000 | 2.6 | 39 | QAM 256 | 35 |
27 | 418000000 | 2.7 | 39 | QAM 256 | 36 |
28 | 426000000 | 2.9 | 39 | QAM 256 | 37 |
29 | 434000000 | 3 | 39 | QAM 256 | 38 |
30 | 442000000 | 2.9 | 39 | QAM 256 | 39 |
31 | 450000000 | 2.7 | 39 | QAM 256 | 40 |
Downstream bonded channels (status)
Channel | Lock Status | RxMER (dB) | Pre-RS Errors | Post-RS Errors |
1 | Locked | 40 | 5 | 0 |
2 | Locked | 41 | 2 | 0 |
3 | Locked | 41 | 9 | 0 |
4 | Locked | 41 | 7 | 0 |
5 | Locked | 41 | 5 | 0 |
6 | Locked | 41 | 8 | 0 |
7 | Locked | 41 | 6 | 0 |
8 | Locked | 41 | 3 | 0 |
9 | Locked | 41 | 5 | 0 |
10 | Locked | 40 | 14 | 0 |
11 | Locked | 40 | 5 | 0 |
12 | Locked | 40 | 8 | 0 |
13 | Locked | 40 | 9 | 0 |
14 | Locked | 40 | 12 | 0 |
15 | Locked | 40 | 6 | 0 |
16 | Locked | 40 | 10 | 0 |
17 | Locked | 40 | 8 | 0 |
18 | Locked | 40 | 3 | 0 |
19 | Locked | 40 | 8 | 0 |
20 | Locked | 40 | 7 | 0 |
21 | Locked | 40 | 10 | 0 |
22 | Locked | 40 | 7 | 0 |
23 | Locked | 39 | 11 | 0 |
24 | Locked | 39 | 8 | 0 |
25 | Locked | 39 | 8 | 0 |
26 | Locked | 39 | 7 | 0 |
27 | Locked | 39 | 9 | 0 |
28 | Locked | 39 | 11 | 0 |
29 | Locked | 39 | 11 | 0 |
30 | Locked | 39 | 10 | 0 |
31 | Locked | 39 | 9 | 0 |
Upstream bonded channels
Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (kSym/s) | Modulation | Channel ID |
0 | 49600000 | 45.3 | 5120 | QAM 64 | 1 |
1 | 43100000 | 45.3 | 5120 | QAM 64 | 2 |
2 | 36600000 | 44.8 | 5120 | QAM 64 | 3 |
3 | 30100000 | 44.3 | 5120 | QAM 64 | 4 |
4 | 23600000 | 43.8 | 5120 | QAM 64 | 5 |
Upstream bonded channels (status)
Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts |
0 | ATDMA | 0 | 0 | 0 | 0 |
1 | ATDMA | 0 | 0 | 0 | 0 |
2 | ATDMA | 0 | 0 | 0 | 0 |
3 | ATDMA | 0 | 0 | 0 | 0 |
4 | ATDMA | 0 | 0 | 0 | 0 |
Does everything look ok?
Thanks
05-01-2023 16:47 - edited 05-01-2023 17:02
Understood. As mentioned, there are no more RF spectrum upgrades coming. Some areas won't even get the DOCSIS 3.1 upstream channel that others have right now, as there's nowhere for it to go: their return paths end at 50 MHz.
The maximum realistic upload speed for a single customer is about 100 Mbit/s with the 5 × SC-QAM + 1 × OFDMA arrangement the 65 MHz return areas have; getting more would mean removing SC-QAM channels and replacing them with OFDMA, which is likely to happen.
The 50 MHz areas are maxed out until spectrum is reallocated to OFDMA.
The 85 MHz areas could go to 200+ without rearrangement.
The RFoG areas can't have OFDMA, so 100 Mbit/s is the realistic maximum there.
Downstream-wise, a 2.2 Gbit/s service is perfectly doable; however, at least for a while it would likely come with different upload speeds depending on the area: anywhere from as low as 50 Mbit/s to as high as 200 Mbit/s is feasible.
100 Mbit/s upload in areas with 5 × SC-QAM + OFDMA should be a thing in the next few months.
As a higher proportion of the customer base moves to DOCSIS 3.1 modems, spectrum both upstream and downstream will be reallocated from regular QAMs to OFDM/OFDMA, and even on spectrum-restricted networks another OFDM block can be lit downstream.
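For a sense of where those upload ceilings come from, here's a back-of-the-envelope calculation from the symbol rates shown in the hub stats earlier in the thread. The overhead fraction is an assumption for illustration (FEC and MAC framing typically consume somewhere around 15-25%), not a VM figure:

```python
# Rough upstream capacity estimate for a 5 x SC-QAM return,
# using the figures from the hub stats (5120 kSym/s, 64-QAM).
SYMBOL_RATE_SYM_S = 5_120_000   # 5120 kSym/s per channel
BITS_PER_SYMBOL = 6             # 64-QAM carries 6 bits per symbol
CHANNELS = 5
OVERHEAD = 0.20                 # assumed FEC/MAC overhead fraction

raw_per_channel = SYMBOL_RATE_SYM_S * BITS_PER_SYMBOL / 1e6  # Mbit/s
raw_total = raw_per_channel * CHANNELS
usable_total = raw_total * (1 - OVERHEAD)

print(f"raw per channel : {raw_per_channel:.2f} Mbit/s")  # 30.72
print(f"raw total       : {raw_total:.2f} Mbit/s")        # 153.60
print(f"~usable total   : {usable_total:.0f} Mbit/s")     # 123
```

With that raw total shared between all customers on the segment plus management traffic, a ~100 Mbit/s per-customer ceiling on 5 × SC-QAM alone is plausible, which is why extra headroom has to come from OFDMA.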
on 05-01-2023 16:57
07-01-2023 11:24 - edited 07-01-2023 11:28
Just an update: had an engineer come out today. He did some tests on the SH5 and all looked fine: no faults, power levels looked OK and speeds were good. He'd never seen 56 Mb upload speeds before on his device.
He did check utilisation, and at 10am it was at 89% in my area, so it could be this.
Before he left, he saw that I had a powered splitter fitted (from when we had extra TV boxes fitted) and a 4 dB equaliser fitted; he took this off and said it was needed on the splitter end.
I showed him the BQMs and he was quite shocked at how high the latency was in the evenings.
He called his network mate, who took the MAC address of the SH5 and checked his Grafana charts to compare; they showed similar results to the BQM.
In the end, he said to see how it goes and to monitor the graphs to see if it performs better; if not, then maybe the next step is to change the cables coming into my house.
He did mention the SH6 is coming soon, and 2.5G speeds coming this year too (his words).
Will give it a week to see if the BQMs improve. Until then, I'm pretty much stuck with VM until BRSK arrive in our area, which is soon (900 Mb/900 Mb speeds).
For comparison, here are the router stats:
Downstream bonded channels
Channel | Frequency (Hz) | Power (dBmV) | SNR (dB) | Modulation | Channel ID |
1 | 330000000 | 5.2 | 40 | QAM 256 | 25 |
2 | 210000000 | 6.7 | 41 | QAM 256 | 10 |
3 | 218000000 | 6.5 | 41 | QAM 256 | 11 |
4 | 226000000 | 6.5 | 41 | QAM 256 | 12 |
5 | 234000000 | 6.7 | 41 | QAM 256 | 13 |
6 | 242000000 | 6.8 | 41 | QAM 256 | 14 |
7 | 250000000 | 6.7 | 41 | QAM 256 | 15 |
8 | 258000000 | 6.6 | 41 | QAM 256 | 16 |
9 | 266000000 | 6.4 | 41 | QAM 256 | 17 |
10 | 274000000 | 6.2 | 41 | QAM 256 | 18 |
11 | 282000000 | 6 | 40 | QAM 256 | 19 |
12 | 290000000 | 6 | 40 | QAM 256 | 20 |
13 | 298000000 | 6.1 | 40 | QAM 256 | 21 |
14 | 306000000 | 6.2 | 40 | QAM 256 | 22 |
15 | 314000000 | 6 | 40 | QAM 256 | 23 |
16 | 322000000 | 5.6 | 40 | QAM 256 | 24 |
17 | 338000000 | 5.1 | 40 | QAM 256 | 26 |
18 | 346000000 | 5.1 | 40 | QAM 256 | 27 |
19 | 354000000 | 5 | 40 | QAM 256 | 28 |
20 | 362000000 | 5.1 | 40 | QAM 256 | 29 |
21 | 370000000 | 4.9 | 40 | QAM 256 | 30 |
22 | 378000000 | 4.8 | 40 | QAM 256 | 31 |
23 | 386000000 | 4.3 | 39 | QAM 256 | 32 |
24 | 394000000 | 3.8 | 39 | QAM 256 | 33 |
25 | 402000000 | 3.3 | 39 | QAM 256 | 34 |
26 | 410000000 | 3.1 | 39 | QAM 256 | 35 |
27 | 418000000 | 3.2 | 39 | QAM 256 | 36 |
28 | 426000000 | 3.5 | 39 | QAM 256 | 37 |
29 | 434000000 | 3.5 | 39 | QAM 256 | 38 |
30 | 442000000 | 3.4 | 39 | QAM 256 | 39 |
31 | 450000000 | 3.2 | 39 | QAM 256 | 40 |
Downstream bonded channels (status)
Channel | Lock Status | RxMER (dB) | Pre-RS Errors | Post-RS Errors |
1 | Locked | 40 | 0 | 0 |
2 | Locked | 41 | 0 | 0 |
3 | Locked | 41 | 0 | 0 |
4 | Locked | 41 | 0 | 0 |
5 | Locked | 41 | 0 | 0 |
6 | Locked | 41 | 0 | 0 |
7 | Locked | 41 | 0 | 0 |
8 | Locked | 41 | 0 | 0 |
9 | Locked | 41 | 0 | 0 |
10 | Locked | 41 | 0 | 0 |
11 | Locked | 40 | 0 | 0 |
12 | Locked | 40 | 0 | 0 |
13 | Locked | 40 | 0 | 0 |
14 | Locked | 40 | 0 | 0 |
15 | Locked | 40 | 0 | 0 |
16 | Locked | 40 | 0 | 0 |
17 | Locked | 40 | 0 | 0 |
18 | Locked | 40 | 0 | 0 |
19 | Locked | 40 | 0 | 0 |
20 | Locked | 40 | 0 | 0 |
21 | Locked | 40 | 0 | 0 |
22 | Locked | 40 | 0 | 0 |
23 | Locked | 39 | 0 | 0 |
24 | Locked | 39 | 0 | 0 |
25 | Locked | 39 | 0 | 0 |
26 | Locked | 39 | 0 | 0 |
27 | Locked | 39 | 0 | 0 |
28 | Locked | 39 | 0 | 0 |
29 | Locked | 39 | 0 | 0 |
30 | Locked | 39 | 0 | 0 |
31 | Locked | 39 | 0 | 0 |
Upstream bonded channels
Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (kSym/s) | Modulation | Channel ID |
0 | 49600000 | 42.8 | 5120 | QAM 64 | 1 |
1 | 43100000 | 42.8 | 5120 | QAM 64 | 2 |
2 | 36600000 | 42.3 | 5120 | QAM 64 | 3 |
3 | 30100000 | 41.8 | 5120 | QAM 64 | 4 |
4 | 23600000 | 41.3 | 5120 | QAM 64 | 5 |
on 07-01-2023 11:51
Changing the cables won't help, as there's no evidence of any noise in the hub stats, which are excellent. This is simply a lack of capacity, and the only fix for that, other than changing ISP, is for VM to invest serious time and skill in analysing the local network and then re-segmenting it, possibly with additional CMTS capacity. That's not something within the capability of the field technicians who visit customer properties; it requires a referral to VM's Networks team, and then for that team to analyse the performance. If they decide performance is acceptable (even if it's not acceptable to you), nothing will be done. If they agree there's a problem, it gets a fault reference and a review date.
But that doesn't mean that actual work to plan and implement a solution will occur. Your area's problem sits in a big pile of jobs, all competing for funds and skills. If the CEO's teenage son is complaining about latency, that'll go to the top of the pile; apart from that, it's who shouts loudest. Chances are your area will bounce around in the stack of jobs, and maybe it'll get into those planned to be done. There will be a cost-benefit analysis; if there's no payback, then it will never get done.
You'll never know what the situation is unless and until it gets fixed.
07-01-2023 15:41 - edited 07-01-2023 15:46
At some point there will be cases where no realistic amount of money will fix it, in which case it will never be fixed. The only way then is to QoS/BWM the line with its limited capacity.
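For anyone wanting to try the QoS/BWM route themselves, the usual trick is to shape your own upstream to just below the achievable rate, so that queues build in your router rather than in the congested network. The core of such a shaper is a token bucket; here's a minimal illustrative sketch (rates and names are made up for the example, and a real home setup would use the router's SQM/cake qdisc rather than anything hand-rolled):

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: permits bursts up to `capacity`
    bytes and refills at `rate` bytes per second."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now            # injectable clock, handy for testing
        self.last = now()

    def allow(self, nbytes):
        # Refill tokens for the elapsed time, capped at capacity.
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True           # send now
        return False              # hold the packet: queue at our edge
```

With the rate set to roughly 90% of the measured upstream, bulk uploads queue locally instead of filling the queue out in the network, which is what keeps the latency spikes on a BQM down.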
on 07-01-2023 16:15
@legacy1 wrote: At some point there will be cases where no realistic amount of money will fix it, in which case it will never be fixed. The only way then is to QoS/BWM the line with its limited capacity.
Look, we've been round this loop with QoS, LLD, DOCSIS 4 and all the rest of it; they're all pipe dreams that aren't going to happen any time soon, if ever. If these improvements were as simple and feasible as some make out, don't you think VM would have been delighted to have done them years ago?
For VM, there's only a few fixes to over-utilisation:
1) Spend money to increase capacity (but only if the investment meets the rate of return threshold)
2) Put a stop on new sales, and then wait for churn to reduce customer numbers on the segment, although that takes a year or several to be effective, especially if the alternative ISP options are not very good. Then, as soon as the stop on sales is lifted, the commercial team sign up a load more customers and recreate the problem, which is why some areas seesaw in and out of capacity problems for years on end.
on 07-01-2023 16:57
@Andrew-G wrote:
Look, we've been round this loop with QoS, LLD, DOCSIS 4 and all the rest of it; they're all pipe dreams that aren't going to happen any time soon, if ever. If these improvements were as simple and feasible as some make out, don't you think VM would have been delighted to have done them years ago?
LLD is not that old. Besides operators getting familiar with it, there is also the question of application support: a latency-sensitive application must mark its packets (DSCP), and there is no widespread support for that yet. The operator can create custom classification rules, though.
Some of the machinery used by LLD is old, however. The concept of service flows was introduced in DOCSIS 1.1, so all modems support LLD in the downstream, because the CMTS handles that direction. To get LLD in the upstream, the modem must support it in its firmware.
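To illustrate the DSCP marking mentioned above: an application opts its traffic into low-latency treatment by setting the DS field on its socket. A minimal sketch follows; EF (codepoint 46) is just an example, and which codepoints an operator's classifier actually honours is entirely up to them:

```python
import socket

DSCP_EF = 46             # Expedited Forwarding codepoint (example choice)
TOS = DSCP_EF << 2       # DSCP occupies the top 6 bits of the ToS/DS byte

# Mark every packet sent on this UDP socket with DSCP EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on Linux
sock.close()
```

Whether those marks survive to the CMTS, and whether anything there treats them specially, depends entirely on the operator's configuration; marking on its own changes nothing.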
on 07-01-2023 18:05
@legacy1 wrote: At some point there will be cases where no realistic amount of money will fix it, in which case it will never be fixed. The only way then is to QoS/BWM the line with its limited capacity.
If only there were some full-fibre overlay network using, say, XGS-PON that they could offer heavy users migration to in order to relieve the HFC network.
If there were plans to build such a network anyway they could even prioritise areas where the HFC network is problematic, be it due to RF or capacity, for earlier build.
That'd provide a future-proof full fibre infrastructure and ensure they remain compliant with advertising rules while still advertising unlimited.
on 07-01-2023 21:19
@gitty wrote: LLD is not that old. Besides operators getting familiar with it, there is also the question of application support: a latency-sensitive application must mark its packets (DSCP), and there is no widespread support for that yet. The operator can create custom classification rules, though.
That's all correct, but with all the equipment, firmware, application and testing needs, we are still years away from VM being able to implement this even if they wanted to. So it doesn't help the OP, or the other customers who are currently suffering from VM's self-inflicted over-utilisation.
07-01-2023 23:28 - edited 07-01-2023 23:29
The thread went in this direction because there was some misunderstanding about upstream lanes and about the CMTS not being able to do anything. LLD is not a tool to fix congestion, but it can help get latency-sensitive traffic through faster in such an environment (the point being that the CMTS can do something about it).
Let's resume this topic later this year, or next year when VM has deployed it. 🙂