Re: Packet loss affecting Upstream

Thanks for the tips. Cables are all connected securely (and the system has been rock-solid for months). I am not sure I would have noticed this if I hadn't spotted my upstream wasn't what it should be, which then led me into looking at the router stats and the BQM. I suspect the limited packet loss has been masked by upper-level protocols (though we can never be sure where the packet loss actually is; e.g. it could be anywhere on the path from the VM core network towards the monitoring host). Things do appear to be more stable at the moment: all upstream channels are back to 64QAM, and there has been no packet loss in the BQM for the last 24 hours.

Re: Packet loss affecting Upstream

From the BQM linked above, I am still seeing reduced upstream bandwidth and occasional packet loss. I now have a high number of post-RS errors on the downstream channels, but suspect this may well be due to transient spikes, as they are not incrementing significantly. Are you able to see any issues?
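As a side note, checks like "all upstream channels back to 64QAM" can be automated against a dump of the stats below. A minimal sketch, assuming an illustrative record layout (the field names are mine, not the Hub's actual export format) and typical, not spec-quoted, power thresholds:

```python
from dataclasses import dataclass

@dataclass
class UpstreamChannel:
    channel_id: int
    frequency_hz: int
    power_dbmv: float
    modulation: str  # e.g. "64 qam", "16 qam" as shown on the Hub status page

# Illustrative acceptable upstream transmit power window (dBmV); adjust to taste.
POWER_MIN, POWER_MAX = 35.0, 51.0

def health_issues(ch: UpstreamChannel) -> list[str]:
    """Flag modulation fallback from 64 QAM and out-of-range transmit power."""
    issues = []
    if ch.modulation.lower() != "64 qam":
        issues.append(f"channel {ch.channel_id}: fallback to {ch.modulation}")
    if not (POWER_MIN <= ch.power_dbmv <= POWER_MAX):
        issues.append(f"channel {ch.channel_id}: power {ch.power_dbmv} dBmV out of range")
    return issues

# Values taken from the upstream table below.
channels = [
    UpstreamChannel(1, 49600000, 46.0, "64 qam"),
    UpstreamChannel(5, 23600781, 44.8, "16 qam"),
]

for ch in channels:
    for issue in health_issues(ch):
        print(issue)
```

Run periodically (e.g. from cron), this would have flagged the 16QAM fallback as soon as it happened, rather than waiting for it to show up as reduced throughput.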
Downstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  SNR (dB)  Modulation  Channel ID
1        139000000        0.7          37        256 qam     1
2        211000000        0.2          37        256 qam     10
3        219000000        0            36        256 qam     11
4        227000000        0            37        256 qam     12
5        235000000       -0.2          37        256 qam     13
6        243000000       -0.5          37        256 qam     14
7        251000000       -0.4          38        256 qam     15
8        259000000       -0.2          37        256 qam     16
9        267000000        0            37        256 qam     17
10       275000000        0            37        256 qam     18
11       283000000        0.4          38        256 qam     19
12       291000000        0.5          38        256 qam     20
13       299000000        1            37        256 qam     21
14       307000000        1.2          38        256 qam     22
15       315000000        1.4          38        256 qam     23
16       323000000        1.5          38        256 qam     24
17       331000000        1.7          38        256 qam     25
18       339000000        1.5          38        256 qam     26
19       347000000        1.5          38        256 qam     27
20       355000000        1.4          38        256 qam     28
21       363000000        1            38        256 qam     29
22       371000000        0.7          38        256 qam     30
23       379000000        0.5          38        256 qam     31
24       387000000        0.2          38        256 qam     32

Downstream bonded channels

Channel  Locked Status  RxMER (dB)  Pre RS Errors  Post RS Errors
1        Locked         37.6        1049           7686
2        Locked         37.3        987            7606
3        Locked         36.6        906            8722
4        Locked         37.3        806            8630
5        Locked         37.6        917            7421
6        Locked         37.6        1610           7083
7        Locked         38.6        932            7414
8        Locked         37.6        1403           7203
9        Locked         37.6        845            7400
10       Locked         37.6        947            7385
11       Locked         38.6        1167           594
12       Locked         38.6        832            7100
13       Locked         37.6        891            7150
14       Locked         38.9        864            2166
15       Locked         38.9        894            4682
16       Locked         38.6        896            6334
17       Locked         38.6        866            5385
18       Locked         38.6        874            6247
19       Locked         38.9        1033           5086
20       Locked         38.6        991            13314
21       Locked         38.6        1028           13294
22       Locked         38.6        1002           13233
23       Locked         38.6        987            13585
24       Locked         38.6        1083           13716

Upstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  Symbol Rate (ksps)  Modulation  Channel ID
1        49600000        46            5120                64 qam      1
2        23600781        44.8          5120                16 qam      5
3        30100087        45            5120                16 qam      4
4        36600000        45.3          5120                64 qam      3
5        43100000        45.8          5120                64 qam      2

Upstream bonded channels

Channel  Channel Type  T1 Timeouts  T2 Timeouts  T3 Timeouts  T4 Timeouts
1        ATDMA         0            0            2            0
2        ATDMA         0            0            1            0
3        ATDMA         0            0            0            0
4        ATDMA         0            0            0            0
5        ATDMA         0            0            1            0

Network Log

Time                 Priority  Description
10/08/2024 06:12:45  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
10/08/2024 06:12:42  Warning!  Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
10/08/2024 06:12:37  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
10/08/2024 06:12:36  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
10/08/2024 06:12:36  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
09/08/2024 09:55:51  Error     DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/08/2024 08:21:16  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
06/08/2024 16:55:54  Error     DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
01/08/2024 17:31:18  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
01/08/2024 13:55:49  Error     DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:14:03  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:13:45  Warning!  Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:13:40  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:13:40  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:13:40  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:13:40  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 13:12:21  critical  Started Unicast Maintenance Ranging - No Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 11:17:34  critical  Received Response to Broadcast Maintenance Request, But no Unicast Maintenance opportunities received - T4 time out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 11:15:24  Warning!  Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
29/07/2024 11:15:19  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;

Re: Packet loss affecting Upstream

I am still seeing packet loss in the BQM, and reduced upstream bandwidth. What would cause fallback to 16QAM on certain upstream channels; could this be noise in the RF system? It's hard to know whether this is cause or effect, i.e. noise causing channel fallback to compensate?

Packet loss affecting Upstream

Hi,

Following a couple of recent total loss of service events, I am now seeing some packet loss at times, which is affecting upstream bandwidth. It correlates with upstream channels operating at 16QAM rather than 64QAM.

A live BQM: https://www.thinkbroadband.com/broadband/monitoring/quality/share/3724cb7b1e81e3d599c85c69d85cdda0563079e8

For reference, the faults in question were: F011415205 & F011411948.
All power levels (etc.) look fine, but I include them for completeness:

Downstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  SNR (dB)  Modulation  Channel ID
1        139000000        1            37        256 qam     1
2        147000000        1.2          37        256 qam     2
3        219000000        0.5          37        256 qam     11
4        227000000        0.2          37        256 qam     12
5        235000000        0            37        256 qam     13
6        243000000       -0.2          37        256 qam     14
7        251000000        0            38        256 qam     15
8        259000000        0            38        256 qam     16
9        267000000        0.2          38        256 qam     17
10       275000000        0            37        256 qam     18
11       283000000        0.5          38        256 qam     19
12       291000000        0.7          38        256 qam     20
13       299000000        1.2          37        256 qam     21
14       307000000        1.5          38        256 qam     22
15       315000000        1.5          38        256 qam     23
16       323000000        1.7          38        256 qam     24
17       331000000        2            38        256 qam     25
18       339000000        1.7          38        256 qam     26
19       347000000        1.7          38        256 qam     27
20       355000000        1.5          38        256 qam     28
21       363000000        1.4          38        256 qam     29
22       371000000        1            38        256 qam     30
23       379000000        1            38        256 qam     31
24       387000000        0.7          38        256 qam     32

Downstream bonded channels

Channel  Locked Status  RxMER (dB)  Pre RS Errors  Post RS Errors
1        Locked         37.6        1358           5161
2        Locked         37.3        886            5350
3        Locked         37.3        66             0
4        Locked         37.3        71             0
5        Locked         37.6        82             0
6        Locked         37.6        87             0
7        Locked         38.6        137            0
8        Locked         38.6        110            0
9        Locked         38.6        229            1
10       Locked         37.6        299            20
11       Locked         38.6        150            0
12       Locked         38.6        84             10
13       Locked         37.3        49             0
14       Locked         38.6        75             0
15       Locked         38.9        79             0
16       Locked         38.6        43             0
17       Locked         38.9        58             0
18       Locked         38.9        64             0
19       Locked         38.6        107            0
20       Locked         38.6        50             0
21       Locked         38.6        43             0
22       Locked         38.9        33             0
23       Locked         38.6        34             0
24       Locked         38.9        250            89

Upstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  Symbol Rate (ksps)  Modulation  Channel ID
1        49600000        46            5120                64 qam      1
2        23599593        45            5120                16 qam      5
3        30100073        45.2          5120                16 qam      4
4        36600015        45.5          5120                64 qam      3
5        43100000        45.8          5120                64 qam      2

Upstream bonded channels

Channel  Channel Type  T1 Timeouts  T2 Timeouts  T3 Timeouts  T4 Timeouts
1        ATDMA         0            0            0            0
2        ATDMA         0            0            0            0
3        ATDMA         0            0            2            0
4        ATDMA         0            0            1            0
5        ATDMA         0            0            10           0

Re: Still problems accessing Google DNS

Now working for me.... but for how long?
🙂

Re: Still problems accessing Google DNS

Still broken in Gloucester, but working via a VM service up the road (see other thread), so it's not a geographical-area issue per se; it's more subtle than that. I was thinking of some weird ECMP issue, or potential blacklisting by Google of certain IPs/ranges. But if it was blacklisting, that doesn't explain the brief time it was working this afternoon....

Re: routing problems to Google DNS?

Now this is interesting. I've just been to our local pub and I can access Google services there on their VM service. Their IP is not within my /24, but it's hard to believe we're not on the same CMTS. So, maybe to help those trying to fix the problem: I am within 80.192.105.0/24 and the working service is 80.194.156.x. Maybe this is a weird ECMP issue. I take it others are definitely still seeing issues?

Re: routing problems to Google DNS?

This seemed to be working in the Gloucester area around 13:50, but has now stopped again. It definitely feels like a routing issue between VM and Google, as it's not (just) DNS resolution (though I note the comments about other non-Google services). However, having been on the wrong end of some Cisco/Juniper BGP and RIB/FIB weirdness, I'd be fairly confident that it's more involved than it first appears.

Re: PSTN fault results in migration to VoIP

FYI, I moved the Hub this weekend (the old coax outlet was still active) and the DECT base station (with the answerphone) at the same time, so everything is back as it was. Thanks for all the ideas and suggestions. I guess there are some installations where this is easier (indeed it is trivial for my own installation), but this was just one of those awkward corner cases involving equipment locations and user expectations! Special thanks to @Lee_R for the ongoing support.

Re: PSTN fault results in migration to VoIP

They are already hinting that it could be chargeable to move the Hub.
The issue will be that if the EBUL has to be relocated with the Hub, I can't see this option being favoured, as that equipment would need to be moved to the living room - and they're already objecting to it being in a spare bedroom! If the EBUL just needs to be plugged into the phone line, then maybe it could be moved elsewhere. Actually, if it's an EBULv3, it looks like it connects to the Hub via the phone line, so maybe it doesn't need to be next to the Hub, provided the existing extension wiring can be preserved.
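One practical footnote on the network logs quoted earlier in these posts: the entries share a regular shape (date, time, priority, description, then the masked CM-MAC/CMTS-MAC fields), which makes them easy to tally when comparing behaviour before and after a fault. A minimal sketch; the regex is written against the entries shown here, not against any documented firmware format, so treat it as an assumption:

```python
import re
from collections import Counter

# Each entry looks like:
#   DD/MM/YYYY HH:MM:SS <priority> <description>;CM-MAC=...;CMTS-MAC=...;...
# Seconds occasionally appear with a single digit (e.g. 13:14:3), hence \d{1,2}.
LOG_ENTRY = re.compile(
    r"(?P<date>\d{2}/\d{2}/\d{4})\s+"
    r"(?P<time>\d{1,2}:\d{1,2}:\d{1,2})\s+"
    r"(?P<priority>\S+)\s+"
    r"(?P<desc>[^;]+);"
)

def tally_events(lines):
    """Count occurrences of each distinct event description in a list of log lines."""
    counts = Counter()
    for line in lines:
        m = LOG_ENTRY.match(line)
        if m:
            counts[m.group("desc").strip()] += 1
    return counts
```

Fed a day's worth of log lines, this makes it obvious at a glance whether (say) T3 time-outs or "RCS Partial Service" events dominate, which is more useful than eyeballing the raw log.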