SYNC Timing Synchronization failure

troweir
Tuning in

I've been getting intermittent internet connectivity problems for a few weeks now. VM support has been supremely unhelpful - I've wasted a lot of my own time troubleshooting and they keep saying it must be a problem at my end. Finally I had a look at the network log on my Superhub and noticed repeated entries for "SYNC Timing Synchronization failure". Googling that brought me here, as it does seem these are an ISP-side issue.

I'm posting screenshots below as others have done - please can anyone from VM advise? 

Time                 Priority  Description

10/02/2020 10:46:25  notice    LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
09/02/2020 23:23:25  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
09/02/2020 22:36:26  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
09/02/2020 11:25:45  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
09/02/2020 11:11:56  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
08/02/2020 09:34:51  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
08/02/2020 09:12:15  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
08/02/2020 06:56:39  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;

 

Downstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  SNR (dB)  Modulation  Channel ID

1        187000000        2.7          40        256 qam     7
2        195000000        2.7          40        256 qam     8
3        203000000        2.2          40        256 qam     9
4        211000000        1.7          40        256 qam     10
5        219000000        1.5          40        256 qam     11
6        227000000        1.7          40        256 qam     12
7        235000000        1.5          40        256 qam     13
8        243000000        1.4          40        256 qam     14
9        251000000        1.7          40        256 qam     15
10       259000000        1.2          40        256 qam     16
11       267000000        1.7          40        256 qam     17
12       275000000        1.5          40        256 qam     18
13       283000000        1.2          40        256 qam     19
14       291000000        1.5          40        256 qam     20
15       299000000        1.5          40        256 qam     21
16       307000000        1.2          40        256 qam     22
17       315000000        0.5          40        256 qam     23
18       323000000        0            40        256 qam     24
19       443000000        0.2          40        256 qam     25
20       451000000       -0.4          40        256 qam     26
21       459000000       -0.4          40        256 qam     27
22       467000000       -0.2          40        256 qam     28
23       475000000       -1            40        256 qam     29
24       483000000       -0.5          40        256 qam     30



Downstream bonded channels

Channel  Locked Status  RxMER (dB)  Pre RS Errors  Post RS Errors

1        Locked         40.3        17983          664273
2        Locked         40.3        4551555        544
3        Locked         40.3        5692           194
4        Locked         40.9        3828           158
5        Locked         40.3        23197          271
6        Locked         40.3        1417           74
7        Locked         40.9        1005           31
8        Locked         40.3        1509           124
9        Locked         40.3        1452           152
10       Locked         40.9        2308           246
11       Locked         40.3        3468           296
12       Locked         40.9        2598           260
13       Locked         40.3        7477           300
14       Locked         40.9        10796          418
15       Locked         40.9        24098          394
16       Locked         40.3        4544           309
17       Locked         40.3        4647           270
18       Locked         40.9        3480           220
19       Locked         40.3        2611           192
20       Locked         40.3        3000           142
21       Locked         40.3        3238           166
22       Locked         40.9        3551           167
23       Locked         40.3        3889           92
24       Locked         40.3        2888           2

20 REPLIES

MikeRobbo
Alessandro Volta

Can you post your Upstream data please?


****************************************
BT Smart Hub 2 with 70Mbs Download,18Mbs Upload, 9.17ms Latency & 0.35ms Jitter.

Hi MikeRobbo. Here it is:

Upstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  Symbol Rate (ksps)  Modulation  Channel ID

1        39400059        4.95          5120                64 qam      6
2        46200000        4.925         5120                64 qam      5
3        32600000        4.95          5120                64 qam      7
4        25799991        4.875         5120                64 qam      8



Upstream bonded channels

Channel  Channel Type  T1 Timeouts  T2 Timeouts  T3 Timeouts  T4 Timeouts

1        ATDMA         0            0            0            0
2        ATDMA         0            0            0            0
3        ATDMA         0            0            0            0
4        ATDMA         0            0            0            0

Cheers, a Guru will be along in the not too distant future to check your data and advise further actions.


****************************************
BT Smart Hub 2 with 70Mbs Download,18Mbs Upload, 9.17ms Latency & 0.35ms Jitter.

Andrew-G
Alessandro Volta

Nothing stands out in your power and SNR levels for downstream, or power levels & modulation for upstream - upstream power levels are on the high side, but I believe still within limits.  But the log proves there's a fault of some sort on VM's cable connection, and the error levels are further proof.

Do a restart of the hub (this resets the error counters), and then see what happens over the next 24 hours.  If a similar profile of error numbers starts to appear, if any channels have disproportionate pre-RS errors, or if there are more than an absolute handful of post-RS errors at all, then there's a problem with the connection, and that will most likely require a technician visit.  In real-world operation, a good cable connection would be expected to show no more than about 20 pre-RS errors per channel and maybe 1 post-RS error across all channels after an entire week's up-time.  Restarting the hub often provokes a tiny handful of pre-RS errors; what matters is how fast errors accumulate, and whether any uncorrected (post-RS) errors are breaking through.
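If anyone wants to automate that check, here's a rough Python sketch of the idea - not a VM tool, just an illustration. The power and RxMER ranges are the commonly quoted downstream guideline figures (roughly -6 to +10 dBmV and 33 dB or better), the error thresholds are the rule-of-thumb numbers above, and the example channel data is made up - you'd paste in your own figures from the hub's status page.

# Guideline figures only: commonly quoted downstream ranges plus the
# rule-of-thumb error rates from the post above (~20 pre-RS errors per
# channel and ~1 post-RS error in total per week of up-time).
GOOD_POWER_DBMV = (-6.0, 10.0)
MIN_RXMER_DB = 33.0
PRE_RS_PER_CHANNEL_PER_WEEK = 20
POST_RS_TOTAL_PER_WEEK = 1

def check_downstream(channels, uptime_days):
    """channels: list of dicts with id, power, rxmer, pre_rs, post_rs."""
    weeks = max(uptime_days / 7.0, 0.1)
    problems = []
    for ch in channels:
        if not GOOD_POWER_DBMV[0] <= ch["power"] <= GOOD_POWER_DBMV[1]:
            problems.append(f"channel {ch['id']}: power {ch['power']} dBmV out of range")
        if ch["rxmer"] < MIN_RXMER_DB:
            problems.append(f"channel {ch['id']}: RxMER {ch['rxmer']} dB too low")
        if ch["pre_rs"] > PRE_RS_PER_CHANNEL_PER_WEEK * weeks:
            problems.append(f"channel {ch['id']}: {ch['pre_rs']} pre-RS errors accumulating too fast")
    total_post_rs = sum(ch["post_rs"] for ch in channels)
    if total_post_rs > POST_RS_TOTAL_PER_WEEK * weeks:
        problems.append(f"{total_post_rs} post-RS errors across all channels - uncorrectable errors breaking through")
    return problems

# Made-up example in the same shape as the hub's "Downstream bonded channels" page:
channels = [
    {"id": 1, "power": 2.7, "rxmer": 40.3, "pre_rs": 18000, "post_rs": 650000},
    {"id": 2, "power": 2.7, "rxmer": 40.3, "pre_rs": 4500000, "post_rs": 500},
]
for problem in check_downstream(channels, uptime_days=3):
    print(problem)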

Thanks for your advice Andrew-G. Can I ask for clarification on what pre- and post-RS mean? (I don't think it's ReSet.) I have tried multiple reboots, the last one on Friday, so it does look like the problem is here to stay. The very first message in the log post-reboot was:

10/02/2020 11:48:21  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;

so not looking good so far.

Any ideas on how to get a technician visit? I tried to get one when I last called VM support on Friday, but they insisted I had to wait 24 hours and call again.

Sorry for the previous post - I can now see Pre RS Errors and Post RS Errors in my Downstream bonded channels section. I already have 3960 Post RS errors on channel 1 and 855 Pre RS errors on channel 2 (and more on other channels as well). I'll get on the phone to their tech support shortly.

Tudor
Very Insightful Person
Very Insightful Person

Pre RS errors are ones that the hub has corrected; they are not that bad, but can still indicate noise on the circuit. Post RS errors are ones that cannot be corrected and are therefore very bad if there are lots of them. On a Hub 3 the counters are cumulative since the hub was last power cycled.


Tudor
There are 10 types of people: those who understand binary and those who don't and F people out of 10 who do not understand hexadecimal c1a2a285948293859940d9a49385a2

Andrew-G
Alessandro Volta

Your hub uses software to check for errors on data, and that software implements what's called the Reed-Solomon error correction method. This uses very clever mathematics and can, up to a certain point, reconstruct missing data.  It's the same maths that enables compact discs and DVDs to work, despite the fact that dust and fingerprints always obscure some of the encoded data.

If the hub identifies an error in incoming data that is successfully corrected by the Reed-Solomon code, then that gets counted as a pre-RS error.  If the software knows there's an error but it is beyond its capability to fix, that's a post-RS error.  I would expect the hub to re-request uncorrected data, but that only works for non-time-sensitive uses (general web browsing, downloading files), whereas for streaming, gaming or voice/video conferencing, the delays in running error correction and still needing to re-request the data mean it arrives too late to be useful, causing all manner of glitches and dropouts.
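As a toy illustration of that split (not VM's actual firmware, just a Python sketch, and assuming the EuroDOCSIS-style RS(204,188) code where each 204-byte codeword can have up to 8 errored bytes corrected): a codeword with a correctable number of errors gets counted as a pre-RS error, and anything worse becomes a post-RS error.

import random

# Toy model of pre-RS vs post-RS counting. Assumes a EuroDOCSIS-style
# RS(204,188) code: 204-byte codewords, up to 8 errored bytes correctable.
CORRECTABLE_BYTES_PER_CODEWORD = 8

def classify_codewords(errored_bytes_per_codeword):
    pre_rs = post_rs = 0
    for errors in errored_bytes_per_codeword:
        if errors == 0:
            continue                                    # clean codeword, nothing to count
        elif errors <= CORRECTABLE_BYTES_PER_CODEWORD:
            pre_rs += 1                                 # corrected -> counted as pre-RS
        else:
            post_rs += 1                                # uncorrectable -> counted as post-RS
    return pre_rs, post_rs

# Simulate a noisy channel: mostly clean codewords, some lightly damaged,
# a few damaged beyond the correction limit.
random.seed(1)
errors = random.choices([0, 2, 12], weights=[90, 8, 2], k=10_000)
pre, post = classify_codewords(errors)
print(f"pre-RS (corrected): {pre}   post-RS (uncorrectable): {post}")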

As you rebooted the hub on Friday, you've collected over half a million uncorrected errors on channel 1 in about three days, and the hub has been working its little socks off correcting four and a half million pre-RS errors.  This needs investigating, but you're wasting your time with VM's third-world, third-rate call centre - they mostly have no understanding of the script they're following, and they're only interested in closing the call and chalking it up as "resolved", regardless of whether the real problem gets fixed.
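To put a rough number on how far off that is from the benchmark earlier in the thread (about 1 post-RS error across all channels per week of up-time), here's a quick back-of-the-envelope, assuming roughly three days since the Friday reboot:

# Back-of-the-envelope: projected post-RS errors per week at the current rate.
post_rs_since_reboot = 500_000   # "over half a million" on channel 1 alone
days_since_reboot = 3            # Friday reboot to Monday
per_week = post_rs_since_reboot / days_since_reboot * 7
print(f"~{per_week:,.0f} post-RS errors per week on one channel, vs a healthy benchmark of ~1")
# Roughly 1.2 million uncorrectable errors a week - hence the glitches and dropouts.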

Your options are:

1) Give it a while and wait for our lovely VM forum staff to spot your post and advise.  The forum staff are UK based and helpful, this is the easiest option, but sometimes takes a day or two.

2) Try your luck with the offshore call centre again.  I'd advise not, but it is an option.

3) Ring up, select the options for "about my account" and "thinking of leaving".  This used to get you through directly to the UK-based customer retention team, but it appears that some idiot in VM's senior management has now routed these calls through the crap offshore call centre.  If that happens, don't engage: just say you want to leave, be firm that the service doesn't work, and insist on being passed to the customer retentions team rather than "technical support".  When you get through to retentions, explain that the offshore team have been useless and that if VM can't fix the poor performance you wish to leave.  You'll find they will be more than happy to arrange a technician visit (unless there's an existing area fault - those have to take priority over individual calls for obvious reasons).

Potentially this can be fixed quickly and easily.  If that doesn't happen, and either the fault isn't permanently fixed, or it becomes too time consuming to try and get VM to resolve it, then it may be time to seriously consider why you want to keep paying the company.

Final update (hopefully) - I called their offshore support again, got angry and demanded to speak to a supervisor, got angry with him too, and finally he relented and booked an engineer visit. The engineer arrived today and found that one of the coax cables was damaged. A quick replacement and now there are no errors. So the lesson for anyone else in the same situation is to keep demanding an engineer visit and escalate if you're not getting anywhere.