on 02-11-2022 18:38
Hi,
Following planned maintenance on 2/11 in my area (Gloucester, GL1), I am now seeing packet loss that wasn't there previously. The difference is clearly visible after ~9 a.m. The VM service status is now showing "fixed" (all green).
Can this please be raised with network engineering (as it's almost certainly not a local issue)?
I attach a BQM snapshot as of 18:30 on 2/11, but the live data can be found here:
https://www.thinkbroadband.com/broadband/monitoring/quality/share/3724cb7b1e81e3d599c85c69d85cdda0563079e8
Hub statistics are all within spec and haven't changed significantly in the last 24 hours (I took readings beforehand), but I include the relevant logs. The Hub has been rebooted, but as you can see, the packet loss is still present.
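As a cross-check on the BQM, something along the lines of the sketch below can put a rough local number on the loss. It's a hypothetical Python 3 script: the target host is just an example, and it assumes a Unix-like system where the ping binary takes Linux-style flags. The hub stats and logs follow.

```python
# Hypothetical sketch: estimate packet loss by firing single pings in a loop.
# Assumes "ping -c 1 -W 1 <host>" (Linux flags) exits non-zero when the probe is lost.
import subprocess
import time

TARGET = "8.8.8.8"   # example target; any stable host works
COUNT = 300          # ~5 minutes at one probe per second

lost = 0
for _ in range(COUNT):
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", TARGET],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode != 0:
        lost += 1
    time.sleep(1)

print(f"{lost}/{COUNT} probes lost ({100 * lost / COUNT:.1f}% packet loss)")
```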
Downstream bonded channels
Channel Frequency (Hz) Power (dBmV) SNR (dB) Modulation Channel ID
1 203000000 0.2 33 256 qam 9
2 211000000 0.2 34 256 qam 10
3 219000000 0.2 34 256 qam 11
4 227000000 0 35 256 qam 12
5 235000000 -0.2 35 256 qam 13
6 243000000 -0.4 35 256 qam 14
7 251000000 -0.2 36 256 qam 15
8 259000000 -0.2 37 256 qam 16
9 267000000 -0.2 37 256 qam 17
10 275000000 -0.2 37 256 qam 18
11 283000000 0.2 37 256 qam 19
12 291000000 0.4 37 256 qam 20
13 299000000 0.9 36 256 qam 21
14 307000000 1.2 35 256 qam 22
15 315000000 1.2 35 256 qam 23
16 323000000 1.4 35 256 qam 24
17 331000000 1.7 36 256 qam 25
18 339000000 1.5 36 256 qam 26
19 347000000 1.5 36 256 qam 27
20 355000000 1.5 36 256 qam 28
21 363000000 1.4 37 256 qam 29
22 371000000 1.2 37 256 qam 30
23 379000000 1.2 37 256 qam 31
24 387000000 0.9 37 256 qam 32
Downstream bonded channels
Channel Locked Status RxMER (dB) Pre RS Errors Post RS Errors
1 Locked 33.3 13788 0
2 Locked 34.3 1455 0
3 Locked 34.4 1066 0
4 Locked 35 247 0
5 Locked 35.7 29 8
6 Locked 35.5 27 0
7 Locked 36.3 13 0
8 Locked 37.3 19 0
9 Locked 37.3 18 0
10 Locked 37.6 20 0
11 Locked 37.3 7 0
12 Locked 37.3 7 0
13 Locked 36.6 13 0
14 Locked 35.7 8 0
15 Locked 35.7 16 0
16 Locked 35.7 15 0
17 Locked 36.3 29 0
18 Locked 36.3 32 0
19 Locked 36.6 33 0
20 Locked 36.6 20 0
21 Locked 37.3 10 0
22 Locked 37.3 15 0
23 Locked 37.6 7 0
24 Locked 37.6 8 0
Upstream bonded channels
Channel Frequency (Hz) Power (dBmV) Symbol Rate (ksps) Modulation Channel ID
1 49600000 45.3 5120 64 qam 1
2 43100000 45.3 5120 32 qam 2
3 30100000 44.5 5120 64 qam 4
4 36599932 44.8 5120 16 qam 3
5 23600020 44.5 5120 16 qam 5
Upstream bonded channels
Channel Channel Type T1 Timeouts T2 Timeouts T3 Timeouts T4 Timeouts
1 ATDMA 0 0 1 0
2 ATDMA 0 0 0 0
3 ATDMA 0 0 0 0
4 ATDMA 0 0 0 0
5 ATDMA 0 0 10 0
Network Log
Time Priority Description
02/11/2022 18:19:57 notice LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
01/01/1970 00:01:44 critical No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:09:52 critical Received Response to Broadcast Maintenance Request, But no Unicast Maintenance opportunities received - T4 time out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:07:33 Warning! Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:07:28 Warning! RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:07:28 critical SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:07:28 Warning! RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:07:28 critical SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 09:07:27 Warning! RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 02:07:2 critical No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
02/11/2022 00:23:7 Error DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
31/10/2022 17:43:59 critical No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/10/2022 12:23:7 Error DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
27/10/2022 20:44:38 critical No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
26/10/2022 16:27:48 Warning! LAN login FAILED : Incorrect Username / Password / ConnectionType;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
26/10/2022 00:23:7 Error DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
22/10/2022 20:48:4 critical No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
22/10/2022 12:23:7 Error DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
20/10/2022 22:37:47 critical No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
19/10/2022 00:23:7 Error DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
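The cluster of critical events at ~09:07 on 2/11 lines up with when the loss starts on the BQM. For anyone wanting to pull those entries out of a saved copy of the log, a hypothetical snippet could look like the one below (it assumes the log rows are saved as plain text in the format above; the filename is just an example).

```python
# Hypothetical sketch: filter a saved copy of the hub Network Log for
# critical/warning/error events on the morning of 02/11/2022.
from datetime import datetime

LOG_FILE = "hub_network_log.txt"   # example filename: paste the log rows into this file
WINDOW_START = datetime(2022, 11, 2, 8, 0)
WINDOW_END = datetime(2022, 11, 2, 12, 0)

with open(LOG_FILE) as f:
    for line in f:
        parts = line.strip().split(maxsplit=3)
        if len(parts) < 4:
            continue  # skip headers and blank lines
        date_str, time_str, priority, description = parts
        try:
            stamp = datetime.strptime(f"{date_str} {time_str}", "%d/%m/%Y %H:%M:%S")
        except ValueError:
            continue  # skip any line whose timestamp doesn't parse
        if WINDOW_START <= stamp <= WINDOW_END and priority.lower().startswith(("critical", "warning", "error")):
            print(stamp, priority, description)
```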
on 04-11-2022 12:10
Hi,
Does the fact that the Upstream modulations aren't all 64 QAM point to a potential issue, or is this acceptable? The connection was rock-solid until the maintenance (I can provide historical BQM data if required).
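To put rough numbers on why the modulation matters: each upstream channel runs at 5120 ksym/s, and the modulation order sets the raw bits per symbol (64 QAM = 6, 32 QAM = 5, 16 QAM = 4), so a channel that drops from 64 QAM to 16 QAM loses a third of its raw capacity before any DOCSIS overhead. A back-of-the-envelope sketch is below (hypothetical figures, assuming all five channels previously negotiated 64 QAM).

```python
# Rough raw upstream capacity per snapshot:
# raw bit rate = symbol rate * bits per symbol (ignores FEC/DOCSIS overheads).
SYMBOL_RATE_KSPS = 5120
BITS_PER_SYMBOL = {"64 qam": 6, "32 qam": 5, "16 qam": 4}

# Modulations reported by the hub after the maintenance (see the upstream table above).
current = ["64 qam", "32 qam", "64 qam", "16 qam", "16 qam"]
healthy = ["64 qam"] * 5   # assumption: all five channels ran 64 QAM before

def raw_mbps(modulations):
    return sum(SYMBOL_RATE_KSPS * BITS_PER_SYMBOL[m] for m in modulations) / 1000

print(f"All 64 QAM : {raw_mbps(healthy):.1f} Mb/s raw")   # ~153.6 Mb/s
print(f"Current mix: {raw_mbps(current):.1f} Mb/s raw")   # ~128.0 Mb/s
```

Lower-order QAM on the upstream is the modem's way of coping with a noisier return path, so it usually points at a line or segment problem rather than being acceptable in itself.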
on 04-11-2022 15:46
Yes, a speedtest shows my upload has halved, from 20 Mbps to ~10 Mbps, so something is definitely amiss.
Hopefully one of the VM team will be able to advise.
on 05-11-2022 12:31
Quite a few T3 timeouts on the Upstream channels:
Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (ksps) | Modulation | Channel ID |
1 | 43100009 | 45 | 5120 | 32 qam | 2 |
2 | 30100000 | 44.5 | 5120 | 64 qam | 4 |
3 | 36599995 | 44.8 | 5120 | 16 qam | 3 |
4 | 23600062 | 44.5 | 5120 | 16 qam | 5 |
5 | 49599995 | 45.3 | 5120 | 64 qam | 1 |
Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts |
1 | ATDMA | 0 | 0 | 2 | 0 |
2 | ATDMA | 0 | 0 | 0 | 0 |
3 | ATDMA | 0 | 0 | 0 | 0 |
4 | ATDMA | 0 | 0 | 29 | 0 |
5 | ATDMA | 0 | 0 | 1 | 0 |
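What matters more than the absolute numbers is whether the T3 count keeps climbing between reads, and on which physical channel (the row order changes between snapshots, so the rows need matching up by Channel ID). A hypothetical snippet for diffing the 2/11 and 5/11 snapshots pasted above is below.

```python
# Hypothetical sketch: compare T3 timeout counts per upstream Channel ID across
# the two hub snapshots above (rows matched to Channel IDs via the channel tables).
SNAPSHOT_02_11 = {1: 1, 2: 0, 4: 0, 3: 0, 5: 10}
SNAPSHOT_05_11 = {2: 2, 4: 0, 3: 0, 5: 29, 1: 1}

for channel_id in sorted(SNAPSHOT_02_11):
    before = SNAPSHOT_02_11[channel_id]
    after = SNAPSHOT_05_11.get(channel_id, 0)
    print(f"Channel ID {channel_id}: T3 {before} -> {after} (+{after - before})")
```

Nearly all of the new T3s are accruing on Channel ID 5 (23.6 MHz), the lowest upstream frequency, which is typically the one most exposed to noise ingress.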
on 06-11-2022 19:59
I thought this had been fixed, as I was seeing 64 QAM on the upstream channels and no packet loss between roughly 10 a.m. and 3 p.m. today (6/11), but the packet loss has since returned and the upstreams are no longer all 64 QAM.
on 06-11-2022 23:28
Would the VM techs be able to check whether there's a known fault in the area affecting upstream connectivity? It's odd that it should appear fixed for ~5 hours and then go faulty again.
on 09-11-2022 15:59
I have tried the online VM Status Checker, providing postcode/account details, and then elected to run an equipment test (since everything showed green).
The result of this is: "Looks like there’s an intermittent signal issue in your area."
It then goes on to state: "Don’t worry, we’re looking into this issue. These connection issues are usually fixed quickly. Check back here after 24 hours, and if there’s still an issue we’ll help you book a technician."
It's been over 24 hours now (and what appears to be an upstream issue has been present since 2/11).
Could one of the VM Techs confirm whether there is indeed a known issue in the area, or what the best course of action should be to get it resolved?
The packet loss is definitely noticeable on interactive SSH sessions I use for the day job, and if it is more widespread, I suspect it's playing havoc with gamers.
on 09-11-2022 17:02
Try the automated Service Status number, 0800 561 0061. It often gives details of known local issues down to postcode level, along with an estimated date and time for the fix.
on 09-11-2022 17:13
Thanks, I re-tried that, and the automated service line states there are no known issues in the area.