Forum Discussion

mmelbourne
Dialled in
6 months ago

Packet loss affecting Upstream

Hi,

Following a couple of recent total loss-of-service events, I am now seeing some packet loss at times, which is affecting upstream bandwidth. This correlates with upstream channels operating at 16QAM rather than 64QAM.

A live BQM: https://www.thinkbroadband.com/broadband/monitoring/quality/share/3724cb7b1e81e3d599c85c69d85cdda0563079e8

For reference, the faults in question were: F011415205 & F011411948.

All power levels (etc.) look fine, but I include them for completeness:

Downstream bonded channels

Channel | Frequency (Hz) | Power (dBmV) | SNR (dB) | Modulation | Channel ID
1 | 139000000 | 1 | 37 | 256 qam | 1
2 | 147000000 | 1.2 | 37 | 256 qam | 2
3 | 219000000 | 0.5 | 37 | 256 qam | 11
4 | 227000000 | 0.2 | 37 | 256 qam | 12
5 | 235000000 | 0 | 37 | 256 qam | 13
6 | 243000000 | -0.2 | 37 | 256 qam | 14
7 | 251000000 | 0 | 38 | 256 qam | 15
8 | 259000000 | 0 | 38 | 256 qam | 16
9 | 267000000 | 0.2 | 38 | 256 qam | 17
10 | 275000000 | 0 | 37 | 256 qam | 18
11 | 283000000 | 0.5 | 38 | 256 qam | 19
12 | 291000000 | 0.7 | 38 | 256 qam | 20
13 | 299000000 | 1.2 | 37 | 256 qam | 21
14 | 307000000 | 1.5 | 38 | 256 qam | 22
15 | 315000000 | 1.5 | 38 | 256 qam | 23
16 | 323000000 | 1.7 | 38 | 256 qam | 24
17 | 331000000 | 2 | 38 | 256 qam | 25
18 | 339000000 | 1.7 | 38 | 256 qam | 26
19 | 347000000 | 1.7 | 38 | 256 qam | 27
20 | 355000000 | 1.5 | 38 | 256 qam | 28
21 | 363000000 | 1.4 | 38 | 256 qam | 29
22 | 371000000 | 1 | 38 | 256 qam | 30
23 | 379000000 | 1 | 38 | 256 qam | 31
24 | 387000000 | 0.7 | 38 | 256 qam | 32



Downstream bonded channels

Channel | Locked Status | RxMER (dB) | Pre RS Errors | Post RS Errors
1 | Locked | 37.6 | 1358516 | 1
2 | Locked | 37.3 | 886535 | 0
3 | Locked | 37.3 | 66 | 0
4 | Locked | 37.3 | 71 | 0
5 | Locked | 37.6 | 82 | 0
6 | Locked | 37.6 | 87 | 0
7 | Locked | 38.6 | 137 | 0
8 | Locked | 38.6 | 110 | 0
9 | Locked | 38.6 | 229 | 1
10 | Locked | 37.6 | 2992 | 0
11 | Locked | 38.6 | 150 | 0
12 | Locked | 38.6 | 841 | 0
13 | Locked | 37.3 | 49 | 0
14 | Locked | 38.6 | 75 | 0
15 | Locked | 38.9 | 79 | 0
16 | Locked | 38.6 | 43 | 0
17 | Locked | 38.9 | 58 | 0
18 | Locked | 38.9 | 64 | 0
19 | Locked | 38.6 | 107 | 0
20 | Locked | 38.6 | 50 | 0
21 | Locked | 38.6 | 43 | 0
22 | Locked | 38.9 | 33 | 0
23 | Locked | 38.6 | 34 | 0
24 | Locked | 38.9 | 2508 | 9


Upstream bonded channels

Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (ksps) | Modulation | Channel ID
1 | 49600000 | 46 | 5120 | 64 qam | 1
2 | 23599593 | 45 | 5120 | 16 qam | 5
3 | 30100073 | 45.2 | 5120 | 16 qam | 4
4 | 36600015 | 45.5 | 5120 | 64 qam | 3
5 | 43100000 | 45.8 | 5120 | 64 qam | 2



Upstream bonded channels

Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts
1 | ATDMA | 0 | 0 | 0 | 0
2 | ATDMA | 0 | 0 | 0 | 0
3 | ATDMA | 0 | 0 | 2 | 0
4 | ATDMA | 0 | 0 | 1 | 0
5 | ATDMA | 0 | 0 | 10 | 0
  • Tudor
    Very Insightful Person

    Check for area faults on 0800 561 0061 (or 150 if you have a VM landline); this goes down to postcode level. You could also try the web status page, but this is not recommended as it only covers issues that affect a very large number of customers.

    VM will not dispatch any technicians while an area fault exists.

    If no area faults found:

    The primary place to report faults or make service requests is Customer Services on 0345 454 1111 (or 150 if you have a VM landline), or wait two or three days for a VM staff member to get to your post. This board is not a fault-reporting system.

  • Client62
    Alessandro Volta

    Operating at 16QAM rather than 64QAM results in a stable connection, but with reduced bandwidth.
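
    For a sense of scale: the QAM order sets how many bits each symbol carries (log2 of the constellation size), so at the 5120 ksps symbol rate shown in the OP's upstream table, a channel falling back from 64QAM to 16QAM drops from 6 to 4 bits per symbol, i.e. it loses a third of its raw capacity. A quick back-of-the-envelope check in Python (illustrative only; it ignores FEC and MAC overhead):

    ```python
    import math

    SYMBOL_RATE_SPS = 5120e3  # 5120 ksps, from the upstream table above

    def raw_rate_mbps(qam_order: int) -> float:
        """Raw channel bitrate: symbols/sec times bits/symbol (no overheads)."""
        bits_per_symbol = math.log2(qam_order)
        return SYMBOL_RATE_SPS * bits_per_symbol / 1e6

    print(f"64QAM: {raw_rate_mbps(64):.2f} Mb/s")  # 30.72 Mb/s
    print(f"16QAM: {raw_rate_mbps(16):.2f} Mb/s")  # 20.48 Mb/s
    ```

    With two of the five bonded channels at 16QAM, that is roughly 20 Mb/s of raw upstream capacity lost before any overheads.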

    Packet loss is a different matter and much harder to pin down, as none of the online tools report where the packets are lost or delayed.
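
    If you want to at least narrow down which segment is dropping packets, one rough approach is to ping a few fixed points along the path and compare loss rates; loss only really becomes meaningful when it persists all the way to the end host, since routers often deprioritise ICMP aimed at themselves. A minimal sketch (the hop addresses are placeholders, and it assumes a Unix-like ping):

    ```python
    import re
    import subprocess

    # Hypothetical measurement points: LAN gateway, an ISP hop, a far-end host.
    HOPS = ["192.168.0.1", "10.0.0.1", "8.8.8.8"]

    def loss_percent(host: str, count: int = 50) -> float:
        """Run the system ping and parse the reported packet-loss percentage."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        m = re.search(r"([\d.]+)% packet loss", out)
        return float(m.group(1)) if m else float("nan")

    for hop in HOPS:
        print(f"{hop}: {loss_percent(hop):.1f}% loss")
    ```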

  • Hey mmelbourne, thank you for reaching out and I am sorry to hear this.

    I've taken a look and I can see there was an area outage however this has now ended.

    How has the connection been since?

  • I am still seeing packet loss in the BQM, and reduced upstream bandwidth. What would cause fallback to 16QAM on certain upstream channels; could it be noise in the RF system? It's hard to know which is cause and which is effect, i.e. whether noise is causing the channels to fall back to compensate.

    • Zach_R
      Forum Team

      Hi mmelbourne,

      Thanks for getting back to us here and expanding. I've checked over things on our systems and I'm unable to detect any known faults currently that would explain this.

      How are things for you today? Any better at all?

      Thanks,


  • From the BQM linked above, I am still seeing reduced upstream bandwidth and occasional packet loss. I now have a high number of post-RS errors on the downstream channels, but suspect this may well be due to transient spikes, as the counters are not incrementing significantly (a quick way to check the deltas is sketched after the network log below). Are you able to see any issues?

    Downstream bonded channels

    Channel | Frequency (Hz) | Power (dBmV) | SNR (dB) | Modulation | Channel ID
    1 | 139000000 | 0.7 | 37 | 256 qam | 1
    2 | 211000000 | 0.2 | 37 | 256 qam | 10
    3 | 219000000 | 0 | 36 | 256 qam | 11
    4 | 227000000 | 0 | 37 | 256 qam | 12
    5 | 235000000 | -0.2 | 37 | 256 qam | 13
    6 | 243000000 | -0.5 | 37 | 256 qam | 14
    7 | 251000000 | -0.4 | 38 | 256 qam | 15
    8 | 259000000 | -0.2 | 37 | 256 qam | 16
    9 | 267000000 | 0 | 37 | 256 qam | 17
    10 | 275000000 | 0 | 37 | 256 qam | 18
    11 | 283000000 | 0.4 | 38 | 256 qam | 19
    12 | 291000000 | 0.5 | 38 | 256 qam | 20
    13 | 299000000 | 1 | 37 | 256 qam | 21
    14 | 307000000 | 1.2 | 38 | 256 qam | 22
    15 | 315000000 | 1.4 | 38 | 256 qam | 23
    16 | 323000000 | 1.5 | 38 | 256 qam | 24
    17 | 331000000 | 1.7 | 38 | 256 qam | 25
    18 | 339000000 | 1.5 | 38 | 256 qam | 26
    19 | 347000000 | 1.5 | 38 | 256 qam | 27
    20 | 355000000 | 1.4 | 38 | 256 qam | 28
    21 | 363000000 | 1 | 38 | 256 qam | 29
    22 | 371000000 | 0.7 | 38 | 256 qam | 30
    23 | 379000000 | 0.5 | 38 | 256 qam | 31
    24 | 387000000 | 0.2 | 38 | 256 qam | 32



    Downstream bonded channels

    Channel | Locked Status | RxMER (dB) | Pre RS Errors | Post RS Errors
    1 | Locked | 37.6 | 1049 | 7686
    2 | Locked | 37.3 | 987 | 7606
    3 | Locked | 36.6 | 906 | 8722
    4 | Locked | 37.3 | 806 | 8630
    5 | Locked | 37.6 | 917 | 7421
    6 | Locked | 37.6 | 1610 | 7083
    7 | Locked | 38.6 | 932 | 7414
    8 | Locked | 37.6 | 1403 | 7203
    9 | Locked | 37.6 | 845 | 7400
    10 | Locked | 37.6 | 947 | 7385
    11 | Locked | 38.6 | 116 | 7594
    12 | Locked | 38.6 | 832 | 7100
    13 | Locked | 37.6 | 891 | 7150
    14 | Locked | 38.9 | 864 | 2166
    15 | Locked | 38.9 | 894 | 4682
    16 | Locked | 38.6 | 896 | 6334
    17 | Locked | 38.6 | 866 | 5385
    18 | Locked | 38.6 | 874 | 6247
    19 | Locked | 38.9 | 1033 | 5086
    20 | Locked | 38.6 | 991 | 13314
    21 | Locked | 38.6 | 1028 | 13294
    22 | Locked | 38.6 | 1002 | 13233
    23 | Locked | 38.6 | 987 | 13585
    24 | Locked | 38.6 | 1083 | 13716

    Upstream bonded channels

    Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (ksps) | Modulation | Channel ID
    1 | 49600000 | 46 | 5120 | 64 qam | 1
    2 | 23600781 | 44.8 | 5120 | 16 qam | 5
    3 | 30100087 | 45 | 5120 | 16 qam | 4
    4 | 36600000 | 45.3 | 5120 | 64 qam | 3
    5 | 43100000 | 45.8 | 5120 | 64 qam | 2



    Upstream bonded channels

    Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts
    1 | ATDMA | 0 | 0 | 2 | 0
    2 | ATDMA | 0 | 0 | 1 | 0
    3 | ATDMA | 0 | 0 | 0 | 0
    4 | ATDMA | 0 | 0 | 0 | 0
    5 | ATDMA | 0 | 0 | 1 | 0

    Network Log

    Time | Priority | Description
    10/08/2024 06:12:45 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    10/08/2024 06:12:42 | Warning! | Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    10/08/2024 06:12:37 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    10/08/2024 06:12:36 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    10/08/2024 06:12:36 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    09/08/2024 09:55:51 | Error | DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    07/08/2024 08:21:16 | critical | No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    06/08/2024 16:55:54 | Error | DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    01/08/2024 17:31:18 | critical | No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    01/08/2024 13:55:49 | Error | DHCP RENEW WARNING - Field invalid in response v4 option;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:14:3 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:13:45 | Warning! | Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:13:40 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:13:40 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:13:40 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:13:40 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 13:12:21 | critical | Started Unicast Maintenance Ranging - No Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 11:17:34 | critical | Received Response to Broadcast Maintenance Request, But no Unicast Maintenance opportunities received - T4 time out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 11:15:24 | Warning! | Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
    29/07/2024 11:15:19 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
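
    As the RS counters are cumulative since the last modem reboot, the useful signal is the delta between two snapshots rather than the absolute numbers. This is roughly the check I am doing by hand; the "earlier" values below are from the table above, while the "later" reading is hypothetical:

    ```python
    # Post-RS errors are uncorrectable codewords. A counter that keeps climbing
    # between snapshots suggests an ongoing problem; a static one points to a
    # past transient, e.g. the outage events in the network log above.

    earlier = {20: 13314, 21: 13294, 22: 13233}  # channel -> post-RS errors
    later   = {20: 13321, 21: 13294, 22: 13240}  # hypothetical later reading

    for ch, before in sorted(earlier.items()):
        delta = later[ch] - before
        verdict = "still climbing" if delta > 0 else "static (historic)"
        print(f"channel {ch}: +{delta} post-RS errors -> {verdict}")
    ```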
  • Sephiroth
    Alessandro Volta

    They don’t regard 16QAM as a fault because, as the OP has deduced, it is the pre-provisioned fallback to cover for RF noise. But the downstream also appears to be suffering. Are your coax cables tightly screwed in at both ends, inside and outside? Are neighbours having similar issues?

  • Thanks for the tips. Cables are all connected securely (and the system has been rock-solid for months). I am not sure I would have noticed this if I hadn't spotted that my upstream wasn't what it should be, which then led me to look at the router stats and the BQM. I suspect the limited packet loss has been masked by upper-level protocols (though we can never be sure where the packet loss actually is; e.g. it could be anywhere on the path from the VM core network towards the monitoring host).

    Things do appear to be more stable at the moment: all upstream channels are back to 64QAM, and there has been no packet loss in the BQM for the last 24 hours.

    • Robert_P
      Forum Team

      Thanks for the update mmelbourne, we're pleased to hear this has settled down and improved over recent days. If you monitor it going forward and experience further issues, let us know and we can take a look.