@Ilyas_Y thank you, I replied in private.
A public (and soon to be much more public) summary of the revealed and consistently recorded fundamental failures in VM's processes, fault management, subcontractor management and communication currently looks like this:
1. The most serious issue, raising very heavy questions about VM and the subcontractors doing their network maintenance: several times the same persistent area fault has been closed as "fixed" while the fault was ongoing and continued for days.
a) When the area fault is flagged again, the F-number of the fault ticket differs. So somebody created a new ticket instead of reopening and tracking the previous one, and when closing it did not perform tests that would have clearly shown the fault was continuing.
One night, when they did this for the Nth time, disruptions at the very moment the "fixed" news came were so frequent that one minute (!) of pings would have shown the fault to anybody who tried.
==> Based on captured data, either the subcontractors are deliberately doing this and lying to VM (and ultimately to customers), or they are this incompetent and the process allows the farce to continue without any accountability.
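To illustrate how trivial that verification would have been, here is a minimal Python sketch (purely hypothetical, not any VM tool): treat one minute of one-second pings as a list of reply/timeout booleans and flag the line if loss exceeds a small threshold. The function names, threshold and simulated data are my own assumptions for illustration.

```python
# Hypothetical sketch: how one minute of pings exposes an ongoing fault.
# 'results' holds one boolean per one-second ping: True = reply, False = timeout.

def packet_loss_pct(results):
    """Percentage of pings that got no reply."""
    if not results:
        return 0.0
    lost = sum(1 for ok in results if not ok)
    return 100.0 * lost / len(results)

def looks_faulty(results, threshold_pct=5.0):
    """Flag the line as faulty if loss exceeds a small (assumed) threshold."""
    return packet_loss_pct(results) > threshold_pct

# Simulated one-minute window during the kind of disruption described above:
# every fifth ping times out, i.e. 20% loss.
minute = [i % 5 != 0 for i in range(60)]

print(packet_loss_pct(minute))  # 20.0
print(looks_faulty(minute))     # True
```

Even a threshold as generous as 5% loss, over a single minute, would have stopped the ticket from being closed as "fixed" during the disruptions described above.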
2. A customer's fault reports, recorded over days of relentlessly recurring disruption, can be contradicted by a VM system that lags behind by hours or even days, while at the exact same time another VM system states unreal and/or contradictory information about the very same issue.
For example: the status check site vs. the fault helpline's postcode-based status check vs. the fault team.
This plainly shows a total lack of communication and synchronisation between tools, databases and humans when it comes to systematic, process-based tracking of the same fault over time.
3. If/when a constantly present fault (shown by ping statistics and Hub logs) is eventually escalated to an area fault, the absurd contradictions above can continue for hours or days until all the information aligns.
Again: total chaos, a lack of processes, and/or gross incompetence in applying them.
4. The overseas, out-of-UK-office-hours fault team staff have virtually no visibility of the account holder's key information and history relating to the continually reported, reopened and escalated fault. They cannot even see e.g. the ping statistics from which the UK team immediately confirms to the customer thousands of disconnects, with area faults as the root cause.
This leads to completely absurd conversations and futile attempts to maintain a consistent timeline tracking the same fault over time.
5. The overseas fault team's competence in elementary aspects is basically zero (e.g. network SNR-induced sync losses, PostRS error statistics, etc.). They claim they cannot even see ping statistics or previous fault ticket F-numbers (see the previous point). They go through a script, asking the customer to factory reset and/or power cycle the router despite an area fault (!).
This gross incompetence, and/or lack of care about the information the customer provides, has several times led to a technician being booked (for an area fault!!!!) only to be soon afterwards (rightly) cancelled by a VM system or staff member, with the reason given via app notification: area fault, not an individual connection issue, so the booking was obviously pointless.
6. There is no way for the customer to connect the dots via fault reference numbers. Not even VM staff can connect the weeks-long history into one continuous fault record.
Calls rerouted between different teams mean different people handle everything from zero each time, with a few exceptions (e.g. the UK fault team, when reachable). Formal complaints, too, are created without a systematic chain of custody or any contactability from the customer's end.
In my entire professional life in a related sector I have not seen this in any organisation. It is beyond absurd, and after tracking a fault since 30 January, and having raised it as an evident network issue since 4 February, I cannot find any civilised term to describe VM's internal processes and fault management.