
Hub3, nonstop broadband dropouts since 30 Jan

disrupted
On our wavelength

Hi all,

Since 30 Jan, on a Hub 3 with the 250Mbps package, with no wiring or any other changes (after having had a perfect connection for ages):

- the broadband connection comes and goes; some periods are bearable, but often it dies literally every other minute. I have set up a BQM and a recent graph is below.

- the Hub 3 logs the kind of entries shown in the recent network log extract below.

I have logged a fault and was treated to an "improving services" message afterwards, and this morning to an SMS saying everything is fine / fixed. It isn't, as illustrated below. Phone support has been surprisingly unhelpful; they just read off the notes saying that some work in the area has fixed all issues.

I just don't know what to do. Please help - it has become unusable; this message took four attempts to post.

[Attachment: BQM graph, 07-02-2023]
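For anyone who wants to capture the same kind of evidence locally, the sketch below (my own rough script, not a VM tool) pings an external host once a second and logs every gap, which is roughly what a BQM records. The target 8.8.8.8 and the Linux ping flags are assumptions - swap in your hub's gateway or whatever suits your setup.

#!/usr/bin/env python3
"""Rough dropout logger: pings a host once a second and records any gaps.
Assumptions: Linux with the system 'ping' binary on PATH; the target 8.8.8.8
is arbitrary - the hub's gateway or any reliably reachable host also works."""
import subprocess
import time
from datetime import datetime

TARGET = "8.8.8.8"   # assumed target, not anything VM-specific
INTERVAL_S = 1.0     # one probe per second, similar to a BQM

def ping_once(host: str) -> bool:
    """Return True if a single ping gets a reply within one second."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

outage_start = None
while True:
    now = datetime.now()
    if not ping_once(TARGET):
        if outage_start is None:
            outage_start = now
            print(f"{now:%d/%m/%Y %H:%M:%S}  connection lost")
    elif outage_start is not None:
        gap = (now - outage_start).total_seconds()
        print(f"{now:%d/%m/%Y %H:%M:%S}  connection restored after {gap:.0f}s")
        outage_start = None
    time.sleep(INTERVAL_S)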

Network Log

Time Priority Description

07/02/2023 14:47:54  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:47:54  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:47:54  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:47:53  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:47:20  notice    LAN login Success;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:46:32  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:46:32  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:46:31  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:46:31  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:45:13  critical  No Ranging Response received - T3 time-out;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:45:09  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:45:09  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:45:09  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:43:47  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:43:47  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:43:47  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:43:46  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:42:28  Warning!  Lost MDD Timeout;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:42:25  Warning!  RCS Partial Service;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
07/02/2023 14:42:25  critical  SYNC Timing Synchronization failure - Loss of Sync;CM-MAC=**:**:**:**:**:**;CMTS-MAC=**:**:**:**:**:**;CM-QOS=1.1;CM-VER=3.0;
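For what it's worth, the pattern above (repeated "SYNC Timing Synchronization failure - Loss of Sync", "RCS Partial Service" and T3 time-outs) is typical of the hub repeatedly losing sync with the network rather than anything inside the house. If anyone wants to quantify how often it happens, here is a small sketch of my own (the file name hub_log.txt is an assumption) that counts copied-out log entries per hour and event type:

#!/usr/bin/env python3
"""Count Hub 3 'Network Log' events per hour and type.
Assumption: the log has been copied into a plain text file (hub_log.txt here)
with each entry starting 'DD/MM/YYYY HH:MM:SS', a priority (critical /
Warning! / notice) and a description, as in the extract above."""
import re
from collections import Counter

LINE_RE = re.compile(
    r"^(\d{2}/\d{2}/\d{4} \d{2}):\d{2}:\d{1,2}\s*(critical|warning!?|notice)\s*([^;]+)",
    re.IGNORECASE,
)

counts = Counter()
with open("hub_log.txt", encoding="utf-8") as f:
    for raw in f:
        match = LINE_RE.match(raw.strip())
        if match:
            hour, _priority, event = match.groups()
            counts[(hour, event.strip())] += 1

# Simple per-hour tally, e.g. "07/02/2023 14:00    12  SYNC Timing ..."
for (hour, event), n in sorted(counts.items()):
    print(f"{hour}:00  {n:>4}  {event}")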

 

13 REPLIES

disrupted
On our wavelength
Argh, sorry - it wouldn't take the full logs, so I'm adding the upstream/downstream stats below:

Upstream bonded channels
Channel Frequency (Hz) Power (dBmV) Symbol Rate (ksps) Modulation Channel ID
1 49600000 37.5 5120 64 qam 1
2 23600000 36 5120 64 qam 5
3 36600000 37 5120 64 qam 3
4 30100000 36.5 5120 64 qam 4
5 43100000 37.3 5120 64 qam 2


Upstream bonded channels
Channel Channel Type T1 Timeouts T2 Timeouts T3 Timeouts T4 Timeouts
1 ATDMA 0 0 1 0
2 ATDMA 0 0 0 0
3 ATDMA 0 0 0 0
4 ATDMA 0 0 0 0
5 ATDMA 0 0 2 0




Downstream bonded channels
Channel Frequency (Hz) Power (dBmV) SNR (dB) Modulation Channel ID
1 203000000 5 37 256 qam 9
2 211000000 4.9 37 256 qam 10
3 219000000 4 37 256 qam 11
4 227000000 3.7 37 256 qam 12
5 235000000 4 37 256 qam 13
6 243000000 3.5 36 256 qam 14
7 251000000 2.5 36 256 qam 15
8 259000000 2.5 36 256 qam 16
9 267000000 3.2 37 256 qam 17
10 275000000 3.5 37 256 qam 18
11 283000000 3.5 37 256 qam 19
12 291000000 3.5 37 256 qam 20
13 299000000 3.7 37 256 qam 21
14 307000000 4 37 256 qam 22
15 315000000 3.5 37 256 qam 23
16 323000000 3.7 37 256 qam 24
17 331000000 3.7 37 256 qam 25
18 339000000 3.5 37 256 qam 26
19 347000000 3.5 37 256 qam 27
20 355000000 3.5 37 256 qam 28
21 363000000 3.5 37 256 qam 29
22 371000000 2.9 37 256 qam 30
23 379000000 2.2 37 256 qam 31
24 387000000 2 37 256 qam 32


Downstream bonded channels
Channel Locked Status RxMER (dB) Pre RS Errors Post RS Errors
1 Locked 37.3 638 6899
2 Locked 37.6 639 7940
3 Locked 37.3 1102 4332
4 Locked 37.3 634 4884
5 Locked 37.3 640 7850
6 Locked 36.6 719 4391
7 Locked 36.6 1163 7100
8 Locked 36.6 1200 3871
9 Locked 37.3 1072 4268
10 Locked 37.3 946 4939
11 Locked 37.3 1035 4710
12 Locked 37.3 872 5061
13 Locked 37.3 538 4045
14 Locked 37.6 848 11124
15 Locked 37.6 1433 10608
16 Locked 37.3 1456 9748
17 Locked 37.6 1443 2670
18 Locked 37.6 1534 2635
19 Locked 37.6 1172 2964
20 Locked 37.3 1509 11005
21 Locked 37.6 1518 10610
22 Locked 37.6 1425 10400
23 Locked 37.3 1932 17167
24 Locked 37.3 947 10871
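One note on the downstream stats: the Pre/Post RS error counters above are cumulative since the hub last rebooted, so on their own they don't prove an ongoing problem - what matters is whether they keep climbing. A quick sketch of my own (the file names before.txt and after.txt are assumptions; each is a copy of the Channel / Locked Status / RxMER / Pre RS / Post RS rows saved a few minutes apart) that prints the per-channel increase:

#!/usr/bin/env python3
"""Compare two snapshots of the Hub 3 downstream error table.
Assumption: before.txt and after.txt each contain rows like
'1 Locked 37.3 638 6899' (channel, lock status, RxMER, Pre RS, Post RS),
copied from the hub a few minutes apart."""

def load(path: str) -> dict[int, tuple[int, int]]:
    """Return {channel: (pre_rs_errors, post_rs_errors)} for one snapshot."""
    errors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 5 and parts[1].lower() == "locked":
                errors[int(parts[0])] = (int(parts[3]), int(parts[4]))
    return errors

before = load("before.txt")
after = load("after.txt")

print("Channel  Pre-RS increase  Post-RS increase")
for channel in sorted(before):
    if channel in after:
        d_pre = after[channel][0] - before[channel][0]
        d_post = after[channel][1] - before[channel][1]
        print(f"{channel:>7}  {d_pre:>15}  {d_post:>16}")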


Good Afternoon @disrupted, thanks for your post and very warm welcome to you! 😊

Sorry to hear of the recent broadband issues. Can you possibly provide us with an update on how the services are currently performing?

Have you checked our Service Status Checker or called our Service Status Line 0800 561 0061 for an update on any outages we may be experiencing?

Kindest regards,

David_Bn

Hi,

Well, the automated fault phone line tells me there is a 'complex' problem in my area and confirms continuing intermittent issues, and phoning the fault team daily also confirms an area issue. BUT the service checker website seems to live in a parallel universe: it flips from "no issues" (while the phone line is saying what I described) to an area issue, which then vanishes again, with ZERO correlation to the actual facts.

This has been the story from the start: the automated line and the fault team have told me they could see more than 800 disconnects in a single day, and could give me a yet-again-moved fix estimate (it's been moving forever with no end in sight), BUT the website and app keep telling me there are no issues.

Since 30 January, with all my logs and quality monitor graphs, and constantly faced with the total contradiction between several systems and the fault team's updates, I just don't know what to do any more - apart from running out of patience and leaving VM for good, after more than a decade of spotless internet service.

==> In short: it is still cutting out regularly - very often during some periods, only a few times an hour this afternoon - but it is completely random, and everything at my end, plus what the fault team has confirmed, points to network issues on the VM side.

After almost two weeks, I still cannot get a time estimate that will actually result in a stable, working connection again.

Dear @David_Bn

Further to my earlier reply, it has become absolutely unusable overnight; this morning it cut out dozens of times in the last 40 minutes alone.

The fault helpline is still telling me there is an ongoing area issue, but both the app and the website status checker say everything is fine.

Is there ANY way of tracking this - a fault reference, and an accurate status with fix estimates? (The estimates, when I have managed to get one, just keep moving into the future, and there is no end to this.)

 

Thanks for the reply on the forums @disrupted.

Sometimes the information on the service checker doesn't match up with what we have 😞
I will look into this for you and assist you with it.
I will send a private message - watch out for the purple envelope inviting you in.

Kind regards,
Ilyas.

Ilyas_Y
Forum Team

New around here? Check out the do's and don'ts in our Community FAQs


bhose
Joining in

Hi. I've been getting the same thing. No idea why. The service status does say there are some issues in my area, but... my iPad Pro and iPhone are still getting the full 300Mbps service! It's only my MacBook Pro that keeps cutting out, and when I do get a connection it's a relatively slow 30Mbps.

 

legacy1
Alessandro Volta

@bhose wrote:

Hi. I've been getting the same thing. No idea why. The service status does say there are some issues in my area, but... my iPad Pro and iPhone are still getting the full 300Mbps service! It's only my MacBook Pro that keeps cutting out, and when I do get a connection it's a relatively slow 30Mbps.

 


Likely a fault with your MacBook Pro - test it by wire (Ethernet).
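If it is only the MacBook misbehaving, a like-for-like check helps: run the same speed test twice, once on Wi-Fi and once with only an Ethernet cable connected. Below is a minimal sketch using the third-party speedtest-cli package (pip install speedtest-cli) - the package and its API are my own choice for illustration, not anything VM provides.

#!/usr/bin/env python3
"""Rough throughput check for comparing Wi-Fi vs. wired on the same machine.
Assumption: the third-party 'speedtest-cli' package is installed
(pip install speedtest-cli); run once per connection type and compare."""
import speedtest

st = speedtest.Speedtest()
st.get_best_server()                  # choose the nearest test server
download_mbps = st.download() / 1e6   # results are reported in bits per second
upload_mbps = st.upload() / 1e6
print(f"Download: {download_mbps:.1f} Mbps  "
      f"Upload: {upload_mbps:.1f} Mbps  "
      f"Ping: {st.results.ping:.0f} ms")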


disrupted
On our wavelength

@Ilyas_Y  thank you, I replied in private.

A public (and soon to be much more public) summary of the fundamental failures I have recorded in VM's processes, fault management, subcontractor management and communication currently looks like this:

1. The most serious issue, and one raising very heavy questions about VM and the subcontractors doing their network maintenance: several times the same persistent area fault has been closed as "fixed" while the fault is ongoing and continues for days.

a) When the area fault is flagged again, the F-number of the fault ticket differs. So somebody created a new ticket instead of reopening and tracking the previous one - and, when closing it, did not perform the tests that would have clearly shown the continuing fault.

One night, when they did this for the Nth time, the disruptions at the very moment the "fixed" news came were so frequent that one minute (!) of pings would have shown them to anyone who tried.

==> Based on the captured data, either the subcontractors are doing this deliberately and lying to VM (and ultimately to customers), or they are that incompetent and the process allows this farce to take place without any accountability.

2. Customer reports of a fault, recorded over days and relentlessly recurring, can be contradicted by a VM system that lags behind, sometimes by hours or even days, while another VM system states unreal and/or contradictory information about the very same issue at the exact same time.

For example: the status checker site vs. the fault helpline's postcode-based status check vs. the fault team.

This plainly shows a total lack of communication and synchronisation between tools, databases and humans when it comes to systematic, process-based tracking of the same fault over time.

3. If/when a constantly present fault (shown by ping statistics and Hub logs) is eventually escalated to an area fault, the absurd contradictions above can continue for hours or days until all the information aligns.

Again: total chaos, a lack of processes, and/or gross incompetence in applying them.

4. The overseas fault team staff handling calls outside UK office hours have virtually no visibility of the account holder's key information and history relating to the continually reported, reopened and escalated fault. They cannot even see, for example, the ping statistics from which the UK team immediately confirms to the customer the thousands of disconnects and the area faults as root causes.

This leads to completely absurd conversations and futile attempts to maintain a consistent timeline tracking the same fault over time.

5. The overseas fault team's competence in understanding elementary aspects (e.g. network SNR-induced sync losses, Post RS error statistics, etc.) is basically zero. They claim they cannot even see ping statistics or previous fault ticket F-numbers (see the previous point). They go through the script, asking the customer to factory reset and/or power cycle the router despite an area fault (!).

This gross incompetence and/or lack of care about the information the customer provides has led, several times, to a technician being booked (for an area fault!!!!) and soon afterwards (rightly) cancelled by a VM system or person, with the reason given via app notification: area fault, not an individual connection issue - so an obviously pointless technician booking.

6. There is no way for the customer to connect the dots via a fault reference number - not even VM staff can connect the entire history, which has gone on for weeks, as one continuous fault record.

Different people, via calls rerouted from different teams, handle everything from scratch, with a few exceptions (e.g. the UK fault team, when reachable). Formal complaints are also created without a systematic chain of custody or any way for the customer to follow them up.

This is something I, at least in my entire professional life in a related sector, have not seen in any organisation. It is beyond absurd, and after tracking a fault since 30 January, and having flagged it as an evident network issue since 4 Feb, I cannot find any civilised term to describe VM's internal processes and fault management.