on 05-02-2023 13:00
Hello in despair...
After many years of thankfully spotless service, and many months after VM replaced my old hub with a Hub 3, a couple of weeks ago I started getting SYNC failures that make my connection drop, sometimes every other minute on bad days and several times per hour on better ones.
It varies, going from good periods to extremely bad ones, as the attached BQM graphs show, and it can "switch" even overnight, so not at peak times. Since nothing whatsoever was changed at my end (hub, cables, etc.), it makes me think the fault lies somewhere in the network (cabinet or cabling).
Calling 150 is absolutely pointless; the "support" is vastly below any civilised word I can think of for it. Their diagnostic test keeps saying the service is fine, and the area check keeps saying there are no problems, which is ludicrous.
Please see attached network logs, upstream and downstream stats (in this and following post), plus the BQM examples. This is a non-usable connection I am paying for.
Is there any chance, via the forum VM support team or otherwise, of reaching actual support for what is clearly a technical failure of the service somewhere between my house and the cabinet (or beyond)?
Many thanks in advance.
Time | Priority | Description
05/02/2023 12:43:4 | notice | LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:31:54 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:31:52 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:31:52 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:31:52 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:31:52 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:08:44 | Warning! | Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:08:44 | critical | No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:08:40 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:08:40 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:08:40 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:08:40 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:04:28 | notice | LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 12:04:16 | Warning! | LAN login FAILED : Incorrect Username / Password / ConnectionType;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 11:56:22 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 11:56:22 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 11:56:21 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 11:56:21 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 11:44:5 | Warning! | RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
05/02/2023 11:44:5 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0; |
Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (ksps) | Modulation | Channel ID |
1 | 49600000 | 35.3 | 5120 | 64 qam | 1 |
2 | 23600000 | 34 | 5120 | 64 qam | 5 |
3 | 43100000 | 35 | 5120 | 64 qam | 2 |
4 | 30100000 | 34.3 | 5120 | 64 qam | 4 |
5 | 36600000 | 34.8 | 5120 | 64 qam | 3 |
Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts |
1 | ATDMA | 0 | 0 | 3 | 0 |
2 | ATDMA | 0 | 0 | 7 | 0 |
3 | ATDMA | 0 | 0 | 3 | 0 |
4 | ATDMA | 0 | 0 | 4 | 0 |
5 | ATDMA | 0 | 0 | 2 | 0 |
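For anyone wanting to quantify how often these events occur over a longer period, log lines in the pipe-separated format shown above can be tallied with a short script. This is just a sketch under assumptions: the function name, the event keywords, and the idea of pasting the hub's log into a text list are mine, not part of any VM tool.

```python
from collections import Counter

def count_log_events(lines):
    """Tally hub log entries by description keyword.

    Expects pipe-separated lines like:
    05/02/2023 12:31:52 | critical | SYNC Timing Synchronization failure ...
    """
    counts = Counter()
    for line in lines:
        parts = [p.strip() for p in line.split("|")]
        if len(parts) < 3:
            continue  # skip the header row or malformed lines
        description = parts[2]
        # Keywords taken from the log excerpts above
        if "Loss of Sync" in description:
            counts["loss_of_sync"] += 1
        elif "T3 time-out" in description:
            counts["t3_timeout"] += 1
        elif "RCS Partial Service" in description:
            counts["partial_service"] += 1
    return counts

# Tiny sample in the same format as the hub log (MACs elided)
sample = [
    "05/02/2023 12:31:52 | critical | SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=...;",
    "05/02/2023 12:31:52 | Warning! | RCS Partial Service;CM-MAC=...;",
    "05/02/2023 12:08:44 | critical | No Ranging Response received - T3 time-out;CM-MAC=...;",
]
print(count_log_events(sample))
```

Counting events per hour or per day this way makes it much easier to show support staff that the drops are frequent and ongoing, rather than arguing from individual log lines.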
on 16-02-2023 18:39
Hi Disrupted,
Thank you for your post. I can see you have been on another thread and in PM with someone from our forum team 📩
I can see this looks to be resolved from the most recent interaction.
Thanks,
Zoie
on 16-02-2023 20:33
It looks resolved, but I shall follow up on the open complaint ticket, as it is beyond belief that it took this many weeks, and the episode revealed some shocking failures of elementary fault management, communication, and IT infrastructure, all of which I have recorded.
If "normality" is that a VM customer can be without usable internet for weeks, whilst VM's own status reports constantly contradict each other for those same weeks, and an area fault is twice closed WHILE it is still ongoing... this raises very serious questions about competence, awareness of the facts, and the ability to manage a classic fault process from start to end.
Not to mention the breathtaking incompetence of the out-of-hours, subcontracted remote "support staff": it is astonishing that elementary details of router logs are not grasped by them, nor do they understand the difference between an area fault and an individual connection issue. They should not be in this job, full stop.
on 19-02-2023 13:43
Thanks for the reply on the forums @disrupted. 🙂
I understand the frustration SNR (Signal to Noise Ratio) faults can cause. By their nature these faults take a while to resolve, because the entire network has to be searched for the cause: sometimes it may be that a neighbour hasn't plugged in their equipment properly; in other cases it may be a faulty cable that has to be tracked down one segment at a time.
Keep an eye out on the connection and we'll then be able to see if it all has been resolved.
Kind regards,
Ilyas.
on 21-02-2023 15:28
Indeed, and as a firmware systems specialist, I fully appreciate the nature of such faults.
The key frustration, however, as reported earlier, came from the absolutely nonsensical fault and communication management. All of it has been recorded and catalogued date by date:
If, for days on end, several VM IT systems report completely contradictory statuses to the customer about the very same ongoing area-wide fault, and fault tickets are twice closed as "fixed" WHILE the fault is occurring (and continues, second by second, for days), one cannot have any trust whatsoever in any information obtained via the app, the status check website, or the automated status phone line.
This went on and on for 2 1/2 weeks, and it is something I have never seen in any organisation, let alone a communications / media company of this size.
Not to mention the previously described, frankly shocking incompetence of the out-of-hours, overseas support staff, who not only could not understand the difference between an individual connection issue and an area-wide fault, but also had no access to crucial information about the history of the fault.
Anyway, a summary of the key procedural and management failures has been picked up by the VM Executive Team too, and I hope that in the long term some lessons are learnt from this sustained tragicomedy.
on 23-02-2023 15:58
Hi disrupted,
We completely understand your frustration and the inconvenience this has caused, and we are sorry for the experience you've had so far.
We can see that you were able to speak with our Executive Team yesterday. Could you please let us know whether they have been able to reach a satisfactory resolution with you?
Thanks