IPv6 support on Virgin media

dgcarter
Dialled in

Does anyone know whether (and if so when) Virgin plan to implement IPv6 on its network?

1,493 REPLIES

Fair enough.

It will be difficult to test, given the niche case of 6in4 and being on the newer Hub4, but I'm happy to try and help even if it invalidates my own theory; happy to be wrong!

Here are a couple of threads asking the question in a couple of forums, to see what the response might be. No guarantee it will yield a sizable response, but one can try!

There is anecdotal evidence that the SH3 isn't very good at handling large amounts of UDP traffic in certain situations.

A recent (Jan / Feb of this year) software update introduced all sorts of latency issues for users running the SH3 in modem mode hooked up to a pfsense router. Eventually someone worked out that if you turn off DNS resolution (and just use DNS forwarding) in pfsense, the latency issue goes away. The theory is that doing DNS resolution to the root servers generates more traffic, or a different type of traffic, that the SH3 isn't very good at handling.
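
(For anyone else hitting this, the checkbox in question is the DNS Resolver's forwarding-mode setting, if I remember the wording correctly something like Services > DNS Resolver > Enable Forwarding Mode, which tells unbound to forward queries to the configured upstream servers rather than recursing to the root servers itself. The exact wording may differ between pfsense versions.)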

I can show you graphs where I have a perfectly usable connection one moment, then the hub goes offline briefly to carry out the software update, then comes back with awful latency spikes. Then a month or two later someone finally worked out how to work around the issue, and the spikes all but disappear once a single checkbox is changed in pfsense.

This does suggest that there's a potential performance issue in the SH3 even when it's running in modem mode. It's far from conclusive though.

Andy

That certainly sounds like an SH3 firmware issue, but it seems pretty specific. I've been doing all my external DNS lookups through my SH2/SH3/Hitron for several years and I never observed that issue. I'm not using pfsense though.

Not sure this is really related/relevant to the 6in4 issue other than as a general indicator that VM modems/routers are often a bit flaky 🙂


@ChrisJenkins wrote:

That certainly sounds like an SH3 firmware issue, but it seems pretty specific. I've been doing all my external DNS lookups through my SH2/SH3/Hitron for several years and I never observed that issue. I'm not using pfsense though.

Not sure this is really related/relevant to the 6in4 issue other than as a general indicator that VM modems/routers are often a bit flaky 🙂


Yes, it seemed to be something specific to pfsense that was triggering this issue. A small number of people had a similar setup and had seen the same thing. They made the same change that was suggested to me, and it had a similarly beneficial effect.

And yes, not necessarily relevant to 6in4, but I just wanted to offer it up as a potentially similar issue, showing that it could indeed be something in the SH3 itself, even when running in modem mode.

Andy

So, there may be something in this... I ran some speed tests (iperf3) over IPv4 against a public server using both TCP and UDP. One would expect broadly similar results but in fact there is a huge difference...

With TCP I can consistently get ~490 Mbit/s download speed using a single 'stream'. With UDP I max out at ~ 1 Mbit/s per stream, though it does scale somewhat (16 concurrent streams gives me ~16 Mbit/s throughput). So for some reason UDP is much, much slower than TCP, at least for this test. This *might* indicate some issue with UDP traffic either in the VM network or possibly related to the modem (Hitron in my case). I need to try and do some more detailed testing to dig into this.
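
For anyone wanting to reproduce this, the tests were roughly along these lines (the server name here is just a placeholder rather than the actual server I tested against):

    iperf3 -c iperf.example.net -R          # single TCP stream, server-to-client (download)
    iperf3 -c iperf.example.net -R -u       # single UDP stream
    iperf3 -c iperf.example.net -R -u -P 16 # 16 concurrent UDP streams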

I wonder if 6in4 tunnels use UDP or TCP... I need to look into that too. Strangely, my VPN provider supports OpenVPN over both TCP and UDP, and for that UDP is much, much faster (~3x) than TCP (I can get ~150 Mbit/s download via VPN over UDP versus around 50 Mbit/s via VPN over TCP).

All very strange and needs more investigation I think.

 

> With UDP I max out at ~ 1 Mbit/s per stream, though it does scale somewhat (16 concurrent streams gives me ~16 Mbit/s throughput). So for some reason UDP is much, much slower than TCP, at least for this test.

You are testing it wrong. iperf3 uses 1 Mbit/s as the default stream bandwidth for UDP; it's a loss-tolerant protocol, so iperf3 just sends at a fixed target rate rather than ramping up like TCP. Use the `-b` switch to adjust the bandwidth.
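
For example, something like this (server name is just a placeholder):

    iperf3 -c iperf.example.net -u -b 0      # -b 0 = unlimited target bitrate
    iperf3 -c iperf.example.net -u -b 500M   # or set an explicit target, e.g. 500 Mbit/s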

Anonymous
Not applicable

6in4 uses IP protocol 41, which is neither UDP (IP protocol 17) nor TCP (IP protocol 6). This is likely the source of the problem, whether it be an issue with the VM network or the CPE. Once upon a time there were many IP protocols; these days anything outside of TCP, UDP and ICMP is pretty unusual. Network HW vendors have heavily optimised their hardware and software on this basis. For example, many home routers have HW-accelerated NAT that only speeds up UDP, TCP and ICMP; everything else raises an exception that has to be processed by the CPU. Sometimes this exception handling even works...😁 (OpenWRT avoided HW-accelerated NAT for many years because of this.)
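
The assigned numbers are listed in /etc/protocols on most Unix-like systems if you want to check for yourself:

    awk '$2 == 6 || $2 == 17 || $2 == 41' /etc/protocols   # prints the tcp (6), udp (17) and ipv6 (41) entries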

"I wonder if 6in4 tunnels use UDP or TCP" <-- actually neither, UDP is protocol 17, tcp is protocol 6, 6in4 is protocol 41, does not utilize TCP or UDP.. one thing that's been mentioned is that there are optimized handling for TCP and UDP in the data path (priority, better treatment, etc, who knows), but since 6in4 is neither tcp or udp it has suboptimal handling somewhere along the data path (whether this be in the CPE or internal to VM's network)

Great catch @ksim, I hadn't noticed that in the man page, but yes, the default bitrate for iperf3 is unlimited for TCP and 1 Mbit/s for UDP. Weird, but hey...

Okay, when I use a limit of 1000 Mbit/s (i.e. no practical limit) I get ~360 Mbit/s over UDP, so noticeably slower than TCP but certainly not enough to explain the 6in4 tunnel issue (even if they use UDP, which I haven't confirmed yet).

Ah okay, thanks for the education!