on 10-02-2020 11:00
I've been getting intermittent internet connectivity problems for a few weeks now. VM support has been supremely unhelpful: I've wasted a lot of my own time troubleshooting, and they keep saying it must be a problem at my end. I finally had a look at the network log on my Superhub and noticed repeated entries for "SYNC Timing Synchronization failure", which I googled; that brought me here, as it does seem to be an ISP-side issue.
I'm posting screenshots below as others have done - please can anyone from VM advise?
Time | Priority | Description
10/02/2020 10:46:25 | notice | LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
09/02/2020 23:23:25 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
09/02/2020 22:36:26 | critical | No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
09/02/2020 11:25:45 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
09/02/2020 11:11:56 | critical | No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
08/02/2020 09:34:51 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
08/02/2020 09:12:15 | critical | No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
08/02/2020 06:56:39 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
Downstream bonded channels
Channel | Frequency (Hz) | Power (dBmV) | SNR (dB) | Modulation | Channel ID
1 | 187000000 | 2.7 | 40 | 256 qam | 7 |
2 | 195000000 | 2.7 | 40 | 256 qam | 8 |
3 | 203000000 | 2.2 | 40 | 256 qam | 9 |
4 | 211000000 | 1.7 | 40 | 256 qam | 10 |
5 | 219000000 | 1.5 | 40 | 256 qam | 11 |
6 | 227000000 | 1.7 | 40 | 256 qam | 12 |
7 | 235000000 | 1.5 | 40 | 256 qam | 13 |
8 | 243000000 | 1.4 | 40 | 256 qam | 14 |
9 | 251000000 | 1.7 | 40 | 256 qam | 15 |
10 | 259000000 | 1.2 | 40 | 256 qam | 16 |
11 | 267000000 | 1.7 | 40 | 256 qam | 17 |
12 | 275000000 | 1.5 | 40 | 256 qam | 18 |
13 | 283000000 | 1.2 | 40 | 256 qam | 19 |
14 | 291000000 | 1.5 | 40 | 256 qam | 20 |
15 | 299000000 | 1.5 | 40 | 256 qam | 21 |
16 | 307000000 | 1.2 | 40 | 256 qam | 22 |
17 | 315000000 | 0.5 | 40 | 256 qam | 23 |
18 | 323000000 | 0 | 40 | 256 qam | 24 |
19 | 443000000 | 0.2 | 40 | 256 qam | 25 |
20 | 451000000 | -0.4 | 40 | 256 qam | 26 |
21 | 459000000 | -0.4 | 40 | 256 qam | 27 |
22 | 467000000 | -0.2 | 40 | 256 qam | 28 |
23 | 475000000 | -1 | 40 | 256 qam | 29 |
24 | 483000000 | -0.5 | 40 | 256 qam | 30 |
Downstream bonded channels
Channel | Locked Status | RxMER (dB) | Pre RS Errors | Post RS Errors
1 | Locked | 40.3 | 17983 | 664273 |
2 | Locked | 40.3 | 4551555 | 544 |
3 | Locked | 40.3 | 5692 | 194 |
4 | Locked | 40.9 | 3828 | 158 |
5 | Locked | 40.3 | 23197 | 271 |
6 | Locked | 40.3 | 1417 | 74 |
7 | Locked | 40.9 | 1005 | 31 |
8 | Locked | 40.3 | 1509 | 124 |
9 | Locked | 40.3 | 1452 | 152 |
10 | Locked | 40.9 | 2308 | 246 |
11 | Locked | 40.3 | 3468 | 296 |
12 | Locked | 40.9 | 2598 | 260 |
13 | Locked | 40.3 | 7477 | 300 |
14 | Locked | 40.9 | 10796 | 418 |
15 | Locked | 40.9 | 24098 | 394 |
16 | Locked | 40.3 | 4544 | 309 |
17 | Locked | 40.3 | 4647 | 270 |
18 | Locked | 40.9 | 3480 | 220 |
19 | Locked | 40.3 | 2611 | 192 |
20 | Locked | 40.3 | 3000 | 142 |
21 | Locked | 40.3 | 3238 | 166 |
22 | Locked | 40.9 | 3551 | 167 |
23 | Locked | 40.3 | 3889 | 92 |
24 | Locked | 40.3 | 2888 | 2 |
on 10-02-2020 11:05
Can you post your Upstream data please.
on 10-02-2020 11:08
Hi MikeRobbo. Here it is:
Upstream bonded channels
Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (ksps) | Modulation | Channel ID
1 | 39400059 | 4.95 | 5120 | 64 qam | 6 |
2 | 46200000 | 4.925 | 5120 | 64 qam | 5 |
3 | 32600000 | 4.95 | 5120 | 64 qam | 7 |
4 | 25799991 | 4.875 | 5120 | 64 qam | 8 |
Upstream bonded channels
Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts
1 | ATDMA | 0 | 0 | 0 | 0 |
2 | ATDMA | 0 | 0 | 0 | 0 |
3 | ATDMA | 0 | 0 | 0 | 0 |
4 | ATDMA | 0 | 0 | 0 | 0 |
on 10-02-2020 11:12
Cheers, a Guru will be along in the not too distant future to check your data and advise further actions.
on 10-02-2020 11:23
Nothing stands out in your downstream power and SNR levels, or in your upstream power levels and modulation - the upstream power is on the high side, but I believe still within limits. The log, though, proves there's a fault of some sort on VM's cable connection, and the error levels are further proof.
Do a restart of the hub (this resets the error counters) and then see what happens over the next 24 hours. If a similar profile of error numbers starts to appear - if any channels have disproportionate pre-RS errors, or more than an absolute handful of post-RS errors at all - then there's a problem with the connection, and that will most likely require a technician visit. In real-world operation, a good cable connection should see no more than about 20 pre-RS errors per channel, and maybe 1 post-RS error across all channels, after an entire week's uptime. Restarting the hub often provokes a tiny handful of pre-RS errors; what matters is how fast errors accumulate, and whether any uncorrected (post-RS) errors are breaking through.
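If you'd rather not eyeball the table, a rough Python sketch along these lines will do the arithmetic for you. To be clear, the thresholds are just the rule-of-thumb figures above scaled to uptime, not any official VM spec, and the parsing assumes you paste the rows exactly as they appear on the hub's status page:

# Flag channels accumulating errors faster than a healthy line should.
def check_channels(rows, uptime_days):
    weeks_up = uptime_days / 7.0
    total_post_rs = 0
    for row in rows:
        # Row format as pasted in this thread: "1 | Locked | 40.3 | 17983 | 664273 |"
        chan, _locked, _rxmer, pre_rs, post_rs = [
            f.strip() for f in row.strip().strip("|").split("|")
        ]
        pre_rs, post_rs = int(pre_rs), int(post_rs)
        total_post_rs += post_rs
        if pre_rs > 20 * weeks_up:
            print(f"channel {chan}: {pre_rs} pre-RS errors - noisier than expected")
    if total_post_rs > 1 * weeks_up:
        print(f"{total_post_rs} post-RS (uncorrected) errors in total - points to a real fault")

# Two rows copied from the table above, roughly 3 days after a reboot:
check_channels([
    "1 | Locked | 40.3 | 17983 | 664273 |",
    "24 | Locked | 40.3 | 2888 | 2 |",
], uptime_days=3)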
on 10-02-2020 12:02
Thanks for your advice, Andruser. Can I ask for clarification on what pre- and post-RS mean? (I don't think it's ReSet.) I have tried multiple reboots, the last one on Friday, so it does look like the problem is here to stay. The very first message in the log post-reboot was:
10/02/2020 11:48:21 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
so not looking good so far.
Any ideas on how to get a technician visit? I tried to get one when I last called VM support on Friday, but they insisted I had to wait 24 hours and call again.
on 10-02-2020 12:41
Pre-RS errors are ones that the hub has corrected; they are not that bad, but can still indicate noise on the circuit. Post-RS errors are ones that cannot be corrected, and are therefore very bad if there are lots of them. On a Hub 3 the counters are cumulative since the hub was last power-cycled.
on 10-02-2020 12:53
Your hub uses software to check for errors on data, and that software implements what's called the Reed-Solomon error correction method. This uses very clever mathematics and can, up to a certain point, reconstruct missing data. It's the same maths that enables compact discs and DVDs to work, despite the fact that dust and fingerprints always obscure some of the encoded data.
If the hub identifies an error in incoming data that is successfully corrected by the Reed-Solomon code, that gets counted as a pre-RS error. If the software knows there's an error but it is beyond the code's capability to fix, that's a post-RS error. I would expect the hub to re-request uncorrected data, but that only helps for non-time-sensitive uses (general web browsing, downloading files); for streaming, gaming or voice/video conferencing, the delay of running error correction and then still re-requesting the data means it arrives too late to be useful, causing all manner of glitches and dropouts.
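If you're curious, the "up to a certain point" part is easy to demonstrate with the reedsolo Python package (pip install reedsolo). To be clear, this is not the hub's firmware or the exact DOCSIS code, just the same family of maths:

# Toy Reed-Solomon demo - illustrative only, not the DOCSIS parameters.
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(10)  # 10 parity bytes: corrects up to 5 corrupted bytes

encoded = bytearray(rsc.encode(b"intermittent connectivity"))

# Corrupt 5 bytes - within the code's power, so the decoder fixes them
# silently. On the hub this is what gets counted as pre-RS errors.
for i in range(5):
    encoded[i] ^= 0xFF
result = rsc.decode(bytes(encoded))
message = result[0] if isinstance(result, tuple) else result  # return type varies by library version
print(message)  # b'intermittent connectivity'

# Corrupt several more bytes - now beyond the code's power. On the hub
# this is the post-RS (uncorrectable) case.
for i in range(5, 13):
    encoded[i] ^= 0xFF
try:
    rsc.decode(bytes(encoded))
except ReedSolomonError:
    print("uncorrectable - a post-RS error")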
As you rebooted the hub on Friday, you've collected over half a million uncorrected errors on channel 1 in about three days, and the hub has been working its little socks off correcting four and a half million pre-RS errors. This needs investigating, but you're wasting your time with VM's third-world, third-rate call centre - they mostly have no understanding of the script they're following, and they're only interested in closing the call and chalking it up as "resolved", regardless of whether the real problem is fixed.
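To put that in perspective, a quick back-of-envelope sum (assuming the counters were zeroed at Friday's reboot, roughly three days before you captured the table):

post_rs_channel_1 = 664_273
seconds_up = 3 * 24 * 3600
print(f"~{post_rs_channel_1 / seconds_up:.1f} uncorrected errors per second")  # ~2.6

That's a couple of uncorrected errors every second, around the clock - no wonder you're seeing dropouts.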
Your options are:
1) Give it a while and wait for our lovely VM forum staff to spot your post and advise. The forum staff are UK based and helpful, this is the easiest option, but sometimes takes a day or two.
2) Try your luck with the offshore call centre again. I'd advise not, but it is an option.
3) Ring up and select the options for "about my account" and "thinking of leaving". This used to get you through directly to the UK-based customer retention team, but it appears that some idiot in VM's senior management has now routed these calls through the crap offshore call centre. If that happens, don't engage: just say you want to leave, be firm that the service doesn't work and that you wish to be passed to the customer retentions team - you don't want to speak to "technical support". When you get through to retentions, explain that the offshore team have been useless and that, if VM can't fix the poor performance, you wish to leave. You'll find they will be more than happy to arrange a technician visit (unless there's an existing area fault - those take priority over individual calls, for obvious reasons).
Potentially this can be fixed quickly and easily. If that doesn't happen - if the fault isn't permanently fixed, or it becomes too time-consuming to get VM to resolve it - then it may be time to seriously consider why you want to keep paying the company.
on 13-02-2020 17:38
Final update (hopefully) - I called their offshore support again, got angry and demanded to speak to a supervisor, got angry with him too, and finally he relented and booked an engineer visit. The engineer arrived today and found that one of the coax cables was damaged. A quick replacement, and now no errors. So the lesson for anyone else in the same situation is to keep demanding an engineer visit and to escalate if you're not getting anywhere.