23-12-2021 02:11 - edited 23-12-2021 02:18
Hello,
So I put off writing this for a long while - but now it's driving me more and more insane. When we're just gaming, everything is fine: ping sits between 20ms and 30ms, which is perfectly acceptable. The problem begins when someone tries to upload or download something. Even at only 300Mbps down or 30Mbps up, the ping jumps to 80ms-100ms and completely cripples the broadband. On a 1Gbps service this is ridiculous; it should be able to download at 900Mbps and still not touch the pings. I have a 2Gbps backbone in the house going to the main switch, which connects to a pfSense box, which in turn connects to the Hub 4. Pings on the local network up to the pfSense box are <1ms even with iperf running at full speed - so no bottlenecks on my LAN.
Also, the RTT and RTTsd on the WAN are extremely high: 10ms / 3ms on average, and the lowest I've seen them go is 6ms / 1.8ms. I have tried three different NICs plus the built-in LAN port on the pfSense box, and all produce the same results.
Is the broadband literally this bad, or is it the Hub 4? Perhaps it's faulty? At this point I'm willing to pay the early-termination penalty and go back to 80/20 from BT. You can't replace reliability and stability with raw speed - which isn't 1Gbps at all either; most of the time it peaks at around 700Mbps.
And to confirm, the Hub 4 is running in Modem Mode - in Router Mode, half of my IoT WiFi devices refuse to connect to it lol
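The symptom described above - fine idle pings, big jumps under load - is the classic bufferbloat signature, and it can be quantified by comparing idle and loaded RTTs. A minimal sketch (the RTT samples below are illustrative stand-ins for `ping` output, not real measurements from this connection):

```python
# Quantify latency-under-load: compare idle vs loaded RTT medians.
# The sample values here are illustrative only.
from statistics import median

def bufferbloat_ms(idle_rtts, loaded_rtts):
    """Median added latency (ms) when the link is saturated."""
    return median(loaded_rtts) - median(idle_rtts)

idle = [21, 24, 22, 27, 25, 23]      # ms, link mostly quiet
loaded = [82, 95, 88, 101, 90, 97]   # ms, during a big download

delta = bufferbloat_ms(idle, loaded)
print(f"added latency under load: {delta:.0f} ms")
```

A delta of tens of milliseconds under load, as reported in this thread, points at queueing somewhere on the path rather than at the LAN.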
3.0 Downstream channels
Channel | Frequency (Hz) | Power (dBmV) | SNR (dB) | Modulation | Channel ID |
25 | 331000000 | 0.7 | 39 | QAM256 | 25 |
1 | 139000000 | 2.8 | 40.4 | QAM256 | 1 |
2 | 147000000 | 2.7 | 40.4 | QAM256 | 2 |
3 | 155000000 | 2.9 | 40.4 | QAM256 | 3 |
4 | 163000000 | 2.6 | 40.9 | QAM256 | 4 |
5 | 171000000 | 2.2 | 40.4 | QAM256 | 5 |
6 | 179000000 | 2 | 40.4 | QAM256 | 6 |
7 | 187000000 | 2.1 | 40.4 | QAM256 | 7 |
8 | 195000000 | 2 | 40.4 | QAM256 | 8 |
9 | 203000000 | 2 | 40.4 | QAM256 | 9 |
10 | 211000000 | 1.9 | 40.4 | QAM256 | 10 |
11 | 219000000 | 2 | 40.4 | QAM256 | 11 |
12 | 227000000 | 1.9 | 40.4 | QAM256 | 12 |
13 | 235000000 | 1.9 | 40.4 | QAM256 | 13 |
14 | 243000000 | 2.1 | 40.9 | QAM256 | 14 |
15 | 251000000 | 1.9 | 40.4 | QAM256 | 15 |
16 | 259000000 | 2 | 40.4 | QAM256 | 16 |
17 | 267000000 | 1.7 | 40.4 | QAM256 | 17 |
18 | 275000000 | 1.5 | 40.4 | QAM256 | 18 |
19 | 283000000 | 1.1 | 40.9 | QAM256 | 19 |
20 | 291000000 | 1 | 40.4 | QAM256 | 20 |
21 | 299000000 | 1.3 | 40.4 | QAM256 | 21 |
22 | 307000000 | 1.1 | 40.4 | QAM256 | 22 |
23 | 315000000 | 1 | 40.9 | QAM256 | 23 |
24 | 323000000 | 0.7 | 39 | QAM256 | 24 |
26 | 339000000 | 0.7 | 40.4 | QAM256 | 26 |
27 | 347000000 | 0.5 | 39 | QAM256 | 27 |
28 | 355000000 | 0.6 | 40.4 | QAM256 | 28 |
29 | 363000000 | 0.6 | 40.4 | QAM256 | 29 |
30 | 371000000 | 0.5 | 40.4 | QAM256 | 30 |
31 | 379000000 | 0.5 | 40.4 | QAM256 | 31 |
3.0 Downstream channels
Channel | Lock Status | RxMER (dB) | Pre RS Errors | Post RS Errors |
25 | Locked | 38.983261 | 0 | 0 |
1 | Locked | 40.366287 | 0 | 0 |
2 | Locked | 40.366287 | 0 | 0 |
3 | Locked | 40.366287 | 0 | 0 |
4 | Locked | 40.946209 | 0 | 0 |
5 | Locked | 40.366287 | 0 | 0 |
6 | Locked | 40.366287 | 0 | 0 |
7 | Locked | 40.366287 | 0 | 0 |
8 | Locked | 40.366287 | 0 | 0 |
9 | Locked | 40.366287 | 0 | 0 |
10 | Locked | 40.366287 | 0 | 0 |
11 | Locked | 40.366287 | 0 | 0 |
12 | Locked | 40.366287 | 0 | 0 |
13 | Locked | 40.366287 | 0 | 0 |
14 | Locked | 40.946209 | 0 | 0 |
15 | Locked | 40.366287 | 0 | 0 |
16 | Locked | 40.366287 | 0 | 0 |
17 | Locked | 40.366287 | 0 | 0 |
18 | Locked | 40.366287 | 0 | 0 |
19 | Locked | 40.946209 | 0 | 0 |
20 | Locked | 40.366287 | 0 | 0 |
21 | Locked | 40.366287 | 0 | 0 |
22 | Locked | 40.366287 | 0 | 0 |
23 | Locked | 40.946209 | 0 | 0 |
24 | Locked | 38.983261 | 0 | 0 |
26 | Locked | 40.366287 | 0 | 0 |
27 | Locked | 38.983261 | 0 | 0 |
28 | Locked | 40.366287 | 0 | 0 |
29 | Locked | 40.366287 | 0 | 0 |
30 | Locked | 40.366287 | 0 | 0 |
31 | Locked | 40.366287 | 0 | 0 |
3.1 Downstream channels
Channel | Channel Width (MHz) | FFT Type | Number of Active Subcarriers | Modulation (Active Profile) | First Active Subcarrier (Hz) |
33 | 96 | 4K | 1880 | QAM4096 | 759 |
3.1 Downstream channels
Channel ID | Lock Status | RxMER Data (dB) | PLC Power (dBmV) | Correcteds (Active Profile) | Uncorrectables (Active Profile) |
33 | Locked | 42 | 1.9 | 111001074 | 0 |
3.0 Upstream channels
Channel | Frequency (Hz) | Power (dBmV) | Symbol Rate (ksps) | Modulation | Channel ID |
1 | 39400000 | 36.5 | 5120 | 64QAM | 8 |
2 | 46200000 | 36.3 | 5120 | 64QAM | 7 |
3 | 53700000 | 36.3 | 5120 | 64QAM | 6 |
4 | 60300000 | 36.5 | 5120 | 64QAM | 5 |
3.0 Upstream channels
Channel | Channel Type | T1 Timeouts | T2 Timeouts | T3 Timeouts | T4 Timeouts |
1 | US_TYPE_STDMA | 0 | 0 | 1 | 0 |
2 | US_TYPE_STDMA | 0 | 0 | 0 | 0 |
3 | US_TYPE_STDMA | 0 | 0 | 1 | 0 |
4 | US_TYPE_STDMA | 0 | 0 | 0 | 0 |
The average BQM - I have no idea how this doesn't capture the ping spikes...
on 12-01-2022 13:48
Good afternoon @bartkus05,
Thank you for coming back to us.
I have managed to locate your account and run some diagnostics from our side, and from what I can see everything looks to be within the expected spec.
Kind regards,
Zak_M
on 23-12-2021 07:02
80-100ms loaded latency seems well within the realms of possibility. Lower would be nice, but I can't see that a VDSL 80/20 connection would do better when somebody pulls down a big file. What do a few fast.com tests report for loaded latency? For comparison, my connection reports 60-190ms, usually in the 90-120ms range.
At the time you took the hub status data everything looks normal: power levels, SNR, modulation and error counts are all where they'd be expected to be. The BQM looks fairly normal for a DOCSIS connection - a few spikes, but not the sort of thing I'd expect to create routinely observable problems. Do you see the same thing with the hub in router mode and your own router in access point mode? I'm a tad puzzled by the 700Mbps speeds you're seeing; perhaps the forum staff might spot something not visible to us, but they'll need the hub running in router mode to check the status.
I assume that you're concerned about the effect on gaming? In which case, if the forum staff can't see anything, maybe your only option is to run a 35Mbps Openreach connection purely for gaming?
on 23-12-2021 15:34
Hey, thanks for your reply
I checked with fast.com: 850Mbps, 11ms unloaded, 118ms loaded. I will check in router mode when I get a chance.
I have never seen anything like this on my VDSL connection - hence the post. Even with someone downloading and saturating the bandwidth, ping never went above 20ms!! The only downside was the 80Mbps speed.
I honestly thought it would be a faulty hub or another issue down the line. We had VM installed in March, I believe; the cabinet is pure fibre, fibre runs up to the house, and then there's about 5m of coax. I was expecting much lower pings hehe, but seeing these spikes when the download reaches 350Mbps is ridiculous.
Another thing: pinging the gateway on the Hub 4 - so not the 192.168.100.1 but the 80...1 address - averages around 10ms. To 192.168.100.1 it's <1ms.
on 24-12-2021 16:49
Hey,
Here's another fast.com result, directly connected to the Hub 4 by LAN. Exactly the same as in Modem Mode (maybe even slightly worse) 😞
on 26-12-2021 00:42
on 06-01-2022 11:14
Hey, thanks for your reply. Sorry for the late comeback on this, but I was tracking the speeds over a few days to post here.
I very rarely see over 900Mbps, no matter the time of day. It's exactly the same in router mode. I have also upgraded all my networking gear behind the Hub 4 to 2.5Gbps with shielded Cat6a cables (grounded properly), plus new cables between the Hub 4 and the pfSense box. Absolutely no difference in the speed tests.
Date | Time | Ping Unloaded | Ping Loaded | Download | Notes |
28/12/2021 | Morning | 11ms | 80ms | 800Mbps | |
28/12/2021 | Afternoon | 14ms | 111ms | 910Mbps | |
28/12/2021 | Evening | 14ms | 23ms | 700Mbps | |
30/12/2021 | Morning | 21ms | 74ms | 310Mbps | |
30/12/2021 | Afternoon | 15ms | 250ms | 890Mbps | |
30/12/2021 | Evening | 14ms | 56ms | 800Mbps | |
02/01/2022 | Morning | 9ms | 97ms | 1Gbps | Pages took ages to load this day - swapped to Google DNS from Cloudflare |
02/01/2022 | Afternoon | 21ms | 60ms | 910Mbps | |
02/01/2022 | Evening | 20ms | 111ms | 910Mbps | |
(At this point: upgraded to 2.5Gbps networking, all equipment replaced. iperf running on pfSense shows the full 2.5Gbps.)
05/01/2022 | Morning | 9ms | 122ms | 560Mbps | |
05/01/2022 | Afternoon | 11ms | 119ms | 700Mbps | |
05/01/2022 | Evening | 12ms | 141ms | 650Mbps | |
I didn't specify this earlier, but I'm on the Gig1 + Home Phone service. The phone isn't used at all; we only took it to keep the number. Any chance someone from VM can run some tests on the line and see from their end?
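The pattern in the table above can be summarised in a few lines of Python (values copied straight from the log; this just aggregates the data, it adds nothing new):

```python
# Summarise the logged speed tests: (unloaded ping ms, loaded ping ms,
# download Mbps), copied from the table above. 1Gbps entered as 1000.
tests = [
    (11, 80, 800), (14, 111, 910), (14, 23, 700),
    (21, 74, 310), (15, 250, 890), (14, 56, 800),
    (9, 97, 1000), (21, 60, 910), (20, 111, 910),
    (9, 122, 560), (11, 119, 700), (12, 141, 650),
]

avg_unloaded = sum(t[0] for t in tests) / len(tests)
avg_loaded = sum(t[1] for t in tests) / len(tests)
avg_down = sum(t[2] for t in tests) / len(tests)

print(f"avg unloaded ping: {avg_unloaded:.2f} ms")
print(f"avg loaded ping:   {avg_loaded:.2f} ms")
print(f"avg download:      {avg_down:.0f} Mbps")
```

The averages make the complaint concrete: loaded ping is roughly seven times the unloaded ping, and the mean download sits well short of the advertised 1Gbps.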
on 06-01-2022 19:55
on 07-01-2022 00:13
Hey mate, here's the result. Ran it just now.
I have also read up on bufferbloat, and out of curiosity set up mitigation on my pfSense box using a combination of traffic limiters and download/upload queues. Exactly the same result on both runs, suggesting it's not my pfSense box. Better results on this one compared to fast.com.
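The idea behind those limiters can be illustrated with a toy single-queue simulation (all numbers are illustrative; this is not a model of pfSense's actual dummynet implementation): if you shape traffic slightly below the bottleneck rate, the bottleneck's queue never builds, so queueing delay stays near zero; let it run unshaped above line rate and delay climbs for as long as the transfer lasts.

```python
# Toy simulation: queueing delay at a bottleneck with and without a
# shaper in front of it. Delay = backlog_bits / drain_rate.
def worst_queue_delay_ms(offered_mbps, shaped_mbps, bottleneck_mbps,
                         n_ticks=2000, tick_s=0.001):
    """Worst-case queueing delay (ms) over n_ticks of simulated time."""
    arrival = min(offered_mbps, shaped_mbps) * 1e6  # bits/s into queue
    drain = bottleneck_mbps * 1e6                   # bits/s out
    backlog = worst = 0.0
    for _ in range(n_ticks):
        backlog += arrival * tick_s
        backlog = max(0.0, backlog - drain * tick_s)
        worst = max(worst, backlog / drain * 1000.0)
    return worst

# Unshaped: offered load exceeds the 1Gbps bottleneck, so the queue
# grows for the whole 2-second run (~200ms of bufferbloat here).
unshaped = worst_queue_delay_ms(1100, 10_000, 1000)

# Shaped to 95% of the bottleneck: the queue never builds at all.
shaped = worst_queue_delay_ms(1100, 950, 1000)

print(f"unshaped worst-case delay: {unshaped:.0f} ms")
print(f"shaped   worst-case delay: {shaped:.1f} ms")
```

The trade-off is the one seen in the thread: shaping sacrifices a slice of throughput in exchange for a flat ping, which matches the observation that the limiters didn't change the result when the queueing is happening upstream of the pfSense box.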
on 07-01-2022 09:28
I remain puzzled by the erratic download speeds (the vast majority of the time VM deliver expected speeds very consistently), but in terms of latency and bufferbloat, what you're seeing is probably the limitation of the mix of analogue coax connections and the optical implementation. A possible cause here is optical beat interference on the fibre side of things; that's a problem known to occur "in the wild" that affects the upstream and causes data corruption and packet loss. Whether this is occurring for you, and whether VM have the expertise to detect and resolve it, I couldn't say.
OBI is a greater risk on a D3.1 upstream (VM currently don't use D3.1 on the upstream), but lab tests have shown it occurring at low levels on four-channel D3.0 upstream setups, which is how VM set up their systems, and that might explain the erratic performance and latency issues that don't show up very obviously on the BQM. OBI is more likely on a highly contended network, and VM do like to run as close as possible to available capacity, so it seems a credible guess.
Perhaps somebody with real experience of VM's RFoG networks may be able to offer better informed comment.
on 07-01-2022 15:37
So... I decided to go out and tighten all the coax connections, including inside the brown box outside, and made sure everything is plugged in properly.
The coax connection was a little loose on the splitter where the power goes into the Boostral unit, but not overly so.
I tried replugging into different ports on the Hub 4, and the results I got were quite shocking tbh. Ports 2 and 3 are dead in both modem mode and router mode; I got nothing on them even after a restart. Port 1 works fine, and here are the results from Port 4, which I'm plugged into atm.
Possibly a faulty hub?