Persistent SYNC losses for weeks - thus service dropouts (1/2)

palmyra08
Up to speed

Hello in despair...

After many years of mercifully spotless service, and many months after VM replaced my old hub with a Hub 3, a couple of weeks ago I started getting SYNC failures that keep dropping my connection - every other minute on bad days, several times per hour on better ones.

It varies from good periods to extremely bad ones, as per the attached BQM graphs, and it can "switch" even overnight - so it is not a peak-time issue. Since nothing whatsoever was changed at my end (hub, cables, etc.), I can only conclude the fault lies somewhere out there (cabinet or cabling).

Calling 150 is absolutely pointless; the "support" is beyond any civilised word I can think of. Their diagnostic test keeps saying the service is fine, and the area check keeps reporting no problems - which is ludicrous.

Please see the attached network logs, upstream and downstream stats (in this and the following post), plus the BQM examples. I am paying for a connection that is unusable.

Is there any chance, via the forum's VM support or otherwise, of reaching actual support for what is clearly a technical failure of the service somewhere between my house and the cabinet (or beyond)?

Many thanks in advance.

 

Network Log

Time                 Priority  Description

05/02/2023 12:43:4   notice    LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:31:54  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:31:52  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:31:52  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:31:52  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:31:52  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:08:44  Warning!  Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:08:44  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:08:40  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:08:40  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:08:40  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:08:40  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:04:28  notice    LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 12:04:16  Warning!  LAN login FAILED : Incorrect Username / Password / ConnectionType;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 11:56:22  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 11:56:22  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 11:56:21  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 11:56:21  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 11:44:5   Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
05/02/2023 11:44:5   critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
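For anyone wanting to summarise a log like the one above, here is a minimal sketch that tallies how often each event type appears in a pasted Hub log. The entry layout (date, time, priority, then a semicolon-delimited description) is assumed from this thread's log dump; the function name is my own, not anything from VM's firmware.

```python
import re
from collections import Counter

def tally_hub_events(log_text):
    """Count occurrences of each event description in a pasted Hub log.

    Assumes each entry looks roughly like:
      DD/MM/YYYY HH:MM:SS <priority> <description>;CM-MAC=...;...
    The \\s* allows for copies where the priority is fused to the
    timestamp (e.g. "12:43:4notice"), as in raw copy-pastes.
    """
    counts = Counter()
    pattern = re.compile(
        r"\d{2}/\d{2}/\d{4} \d{1,2}:\d{1,2}:\d{1,2}\s*"
        r"(notice|Warning!|critical)\s*([^;]+)"
    )
    for match in pattern.finditer(log_text):
        counts[match.group(2).strip()] += 1
    return counts

sample = (
    "05/02/2023 12:31:52 critical SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=xx;\n"
    "05/02/2023 12:31:52 Warning! RCS Partial Service;CM-MAC=xx;\n"
    "05/02/2023 12:08:44 critical No Ranging Response received - T3 time-out;CM-MAC=xx;\n"
)
print(tally_hub_events(sample))
```

A tally like this makes it easier to show support at a glance how many Loss of Sync events occurred per day, rather than reading out individual log lines.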

 

Upstream bonded channels

Channel  Frequency (Hz)  Power (dBmV)  Symbol Rate (ksps)  Modulation  Channel ID

1        49600000        35.3          5120                64 qam      1
2        23600000        34            5120                64 qam      5
3        43100000        35            5120                64 qam      2
4        30100000        34.3          5120                64 qam      4
5        36600000        34.8          5120                64 qam      3



Upstream bonded channels

Channel  Channel Type  T1 Timeouts  T2 Timeouts  T3 Timeouts  T4 Timeouts

1        ATDMA         0            0            3            0
2        ATDMA         0            0            7            0
3        ATDMA         0            0            3            0
4        ATDMA         0            0            4            0
5        ATDMA         0            0            2            0
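As a quick sanity check on the figures above: most of the upstream channels sit at or just below 35 dBmV, and every channel has logged T3 timeouts, which is consistent with the ranging failures in the log. This sketch flags channels against the 35-49 dBmV window that is commonly quoted as a rule of thumb for DOCSIS 3.0 upstream power on these hubs - it is not an official VM specification, and the threshold is my assumption.

```python
# Rough sanity check of the upstream stats posted above. The 35-49 dBmV
# window is a commonly quoted rule of thumb for DOCSIS 3.0 upstream
# power, not an official Virgin Media figure.
UPSTREAM_OK_DBMV = (35.0, 49.0)

channels = [
    # (channel, frequency_hz, power_dbmv, t3_timeouts) from the tables above
    (1, 49600000, 35.3, 3),
    (2, 23600000, 34.0, 7),
    (3, 43100000, 35.0, 3),
    (4, 30100000, 34.3, 4),
    (5, 36600000, 34.8, 2),
]

def flag_channels(chans, ok=UPSTREAM_OK_DBMV):
    """Return (channel, reasons) for channels with out-of-window power
    or any T3 ranging timeouts."""
    flagged = []
    for ch, freq, power, t3 in chans:
        reasons = []
        if not (ok[0] <= power <= ok[1]):
            reasons.append(f"power {power} dBmV outside {ok[0]}-{ok[1]}")
        if t3 > 0:
            reasons.append(f"{t3} T3 timeouts")
        if reasons:
            flagged.append((ch, reasons))
    return flagged

for ch, reasons in flag_channels(channels):
    print(f"channel {ch}: " + "; ".join(reasons))
```

On this data every channel gets flagged, which supports the poster's case that the problem is on the network side rather than in-home equipment.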

 

Attached BQM graphs:
19c34846c332b7104feff8d8c7dbabfdca6a4fff-05-02-2023 (2).png
9ced974730e9aa585a18f087c023200ac6070a0e-04-02-2023.png
92df824320be1d993bf06398829bc6e02a119029-30-01-2023 (1).png


Tudor
Very Insightful Person

SNR issues are extremely difficult to trace; sometimes a technician has to visit every user on the street cabinet. This is why the estimated time to fix often changes and gets extended. They are mainly caused by noise ingress, often due to users modifying their setup or using non-standard cables. The web pages for faults only seem to cover problems where hundreds or thousands of users are affected; for local issues that go down to postcode level, the freephone telephone number is always best. I never bother with the web page.


Tudor
There are 10 types of people: those who understand binary and those who don't and F people out of 10 who do not understand hexadecimal c1a2a285948293859940d9a49385a2

Thank you, that explains why this so-called media/communications company's service checker is utterly useless. For the same postcode area and account-holder surname, it gives a nonstop "no issues" while the fault team and the automated line keep apologising and bumping estimates.

Thing is, in my line of work an estimate is not made up arbitrarily; it is a conservative, calculated guess based on, say, a worst-case scenario - for example, VM's total number of visits in the area. To just bump it for weeks and weeks in an endless sequence of farcical miscommunications and contradictions is absolutely astounding.

Plus I bet the "total service loss" is something one will NOT qualify for, despite registering, because in their mind it means a total lack of internet. I would very much like to ask them to try using a "network" that cuts out every other minute for weeks - for work, payments, bank transactions and searches - not to mention saying goodbye to streaming of any kind, or video calls for work or personal reasons.


@Tudor wrote:


This is the reason why the estimated time to fix often changes and gets extended. 

Snip…


Or alternatively, the reason is that although VM have indeed identified an issue, they haven't actually allocated any resources to rectifying it. It's on someone's job list to do as and when, but other than that, basically 'tough'. The wording of VM's statements often gives the impression of a crack team of engineers barrelling down the motorway 24/7 en route to 'fix' the issue - the reality is somewhat different, no?

Unless, of course you know better, in which case I will certainly apologise and withdraw my above comment.

'Users modifying their setup' - possibly, although I wouldn't have thought it was a very common issue. I have no doubt, though, that you will provide some evidence for this being a common mitigating factor; otherwise, well, sorry, but I have to call BS on this claim!

Dear @Matt,

whilst I was given a 15 Feb fix estimate after today's Nth estimate expired, and you kindly mentioned 12 Feb this morning, this evening I got a notification of the fault being fixed. This was while, once again, the same issues continued - and are continuing, as per the below BQM graph; see after 21:30, when I got the "fixed" update.

The problem, once again, is far from fixed - somebody is randomly closing tickets when nothing was done:

180fc202d898eaed93b6abf10d5d755bee432aa2-10-02-2023 (4 during the fixed notification).png


They sent me a "fixed" notification on the area fault - WHILE I had no connection on the Hub, and the same SYNC losses. It continued exactly as it did a few days ago - see the below BQM shot after 21:00, when it was allegedly "fixed".

Someone is either lying, incompetent, or fraudulently closing tickets with no work done - or any combination of these. It is the 2nd time in a few days that I have gone through the same loop; nothing was solved, and it is now again taking days to convince them that it is an area fault - the person in the "fault team" could not see anything in terms of logs and history about this issue.

This is not a media company. A teenage hobbyist knows to make notes on a system; I had to read back their own fault number, and he had no sign of it on his system. Absolutely criminal incompetence, and an IT system for fault tracking that is non-existent.


 


@palmyra08 wrote:

They sent me a "fixed" notification on the area fault - WHILE I had no connection on the Hub, and the same SYNC losses. It continued exactly as it did a few days ago - see the below BQM shot after 21:00, when it was allegedly "fixed".

Someone is either lying, incompetent, or fraudulently closing tickets with no work done - or any combination of these. It is the 2nd time in a few days that I have gone through the same loop; nothing was solved, and it is now again taking days to convince them that it is an area fault - the person in the "fault team" could not see anything in terms of logs and history about this issue.

This is not a media company. A teenage hobbyist knows to make notes on a system; I had to read back their own fault number, and he had no sign of it on his system. Absolutely criminal incompetence, and an IT system for fault tracking that is non-existent.

180fc202d898eaed93b6abf10d5d755bee432aa2-10-02-2023 (4 during the fixed notification).png

 


Well, not really 'fixed' at all, was it?

Lying, incompetence, fraud - well, there is historic evidence that VM's somewhat less than 'stellar' customer service provision has indeed been cited as doing just that!

I raised a complaint and phoned the retention department, just to make sure I tick all the boxes of the process before I go to OFCOM and leave VM.

I have to say, the retention department staff are infinitely more professional; they can see the entire history PLUS disconnect statistics in the thousands for just the last few days. Which proves again that the outsourced "fault team" - who stated they have zero visibility and could not even see the reference numbers of the ongoing fault tickets I had from the app - have virtually zero access to our account details. They go through robotic, nonsensical scripted stuff, for the Nth time.

Overnight it was again opened as an area fault, with a different fault number (!) - and another estimate. So the good news is they realised it was not "fixed".

But the bad news is they are - incompetently, intentionally, or otherwise - closing tickets on still-ongoing, never re-tested faults, then raising new ones in a never-ending cycle that leads nowhere.

The complaint was promised to be escalated by the person I spoke to today (who did not sound like the Far Eastern call centre). One key aspect: verifying an actual remedy before closing a ticket is not just basic professionalism, it is a fundamental requirement - and VM are systematically failing at it.

If they had actually run a test of merely two minutes, they would have immediately seen last night that the fault was still very much ongoing, as my Hub was logging sync losses every single minute.
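That kind of re-test is trivial to sketch: sample reachability at a fixed interval and report each contiguous dropout window, which is essentially what a BQM graph visualises. The sampling could be as simple as a ping in a loop; the function and the data below are my own hypothetical illustration, not anything VM actually runs.

```python
from datetime import datetime, timedelta

def dropout_spans(samples):
    """Given (timestamp, reachable) samples in time order, return the
    (start, end) timestamps of each contiguous unreachable run."""
    spans, start, prev = [], None, None
    for ts, ok in samples:
        if not ok and start is None:
            start = ts            # a dropout begins
        elif ok and start is not None:
            spans.append((start, prev))  # dropout ended at previous sample
            start = None
        prev = ts
    if start is not None:
        spans.append((start, prev))      # still down at the end
    return spans

# Example: a two-minute window sampled every 10 seconds, with the
# connection down between 21:00:30 and 21:00:50 (hypothetical data).
t0 = datetime(2023, 2, 10, 21, 0, 0)
samples = [(t0 + timedelta(seconds=10 * i), i not in (3, 4, 5)) for i in range(12)]
for start, end in dropout_spans(samples):
    print(f"down from {start:%H:%M:%S} to {end:%H:%M:%S}")
# → down from 21:00:30 to 21:00:50
```

Even two minutes of such samples would have shown the fault was still live when the ticket was closed.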

Hey palmyra08, thank you for reaching out, and I am so sorry to see you are having some connection issues.

I can see from our end that there is an SNR outage in the local area which is due to last until the 14th Feb 2023.

These outages can affect the overall connection so I am sorry about this.

I can see you have also spoken to the team after posting this - did they manage to give you any advice at all? Thanks

Matt - Forum Team



Well, I would like to give an update that might then be useful to others too, in terms of clues about the root cause and solution - but I still can't...

The "fix" estimate moved again, and whilst an area fault is still in effect, it only shows via the "test hub" button - all top-level pages (website and app status views) say "no issues"... and so does the automated fault helpline's robot voice.

At the same time, the network is like this, and 2 VM support people re-confirmed the area fault - so there is no real end in sight:

630d2ebfcb933a39e1d9ca666b08ef7ad45b50f7-14-02-2023.png