How do servers/computers/users/applications know to request a resource over the Internet with an IPv6 address vs. an IPv4 address?
Answer
The first thing a client determines is which protocols are available. Let's assume that both IPv4 and IPv6 are available (otherwise the answer to which protocol to choose is trivial ;-). It will then do a DNS lookup for both the A (IPv4 address) and AAAA (IPv6 address) records. If only one type is returned it will use that. If both IPv4 and IPv6 addresses are returned, the default behaviour depends a bit on the client software. Usually the address selection rules of RFC 3484 are applied.
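As an illustration, here is a minimal sketch in Python of that lookup step (the host name is just an example); socket.getaddrinfo() queries both record types and returns the results in the order the operating system's address selection policy prefers:

    import socket

    # AF_UNSPEC asks for any address family, so the resolver looks up
    # both A (IPv4) and AAAA (IPv6) records. The operating system sorts
    # the results according to its RFC 3484 style selection policy.
    results = socket.getaddrinfo("www.example.com", 443,
                                 socket.AF_UNSPEC, socket.SOCK_STREAM)
    for family, socktype, proto, canonname, sockaddr in results:
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(label, sockaddr[0])

A naive client simply tries the returned addresses in that order, which is why the ordering policy matters.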
According to the official standards the client should prefer IPv6, but because a small fraction (0.01% or less) of machines have misconfigured IPv6, clients have become smarter. Most browsers these days will try to connect over IPv6, but if they don't get a working connection within about 300 milliseconds they will start a parallel connection attempt over IPv4. The first connection that succeeds is then used. This behaviour is specified in the Happy Eyeballs RFC (RFC 6555).
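As a rough sketch of that fallback: modern Python exposes Happy Eyeballs directly through asyncio's happy_eyeballs_delay parameter (Python 3.8 or later; the host name and the 0.3 second delay below are just illustrative values):

    import asyncio

    async def fetch():
        # Attempt the resolved addresses in the resolver's preferred order
        # (usually IPv6 first). If an attempt hasn't succeeded within 0.3 s,
        # the next address is tried in parallel; the first connection wins.
        reader, writer = await asyncio.open_connection(
            "www.example.com", 80, happy_eyeballs_delay=0.3)
        writer.write(b"HEAD / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
        await writer.drain()
        print(await reader.readline())
        writer.close()
        await writer.wait_closed()

    asyncio.run(fetch())

From the application's point of view nothing changes: it asked for a connection to a host name and got one, regardless of which address family won the race.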
Apple changed this in Mac OS X Lion. There the operating system keeps track of the performance of all connections, and if it determines that IPv4 connections have lower latency than IPv6 connections it will start preferring IPv4. But if the IPv4 connections become slower it may switch back to IPv6. Take a look at this mailing list thread for a discussion of this feature.
For the user it shouldn't matter whether IPv4 or IPv6 is used, as long as it works. Both protocols should be provided equally well: websites should work exactly the same over IPv4 as over IPv6, and so on.
IPv4 will remain in use for many years to come. It will only become unusable once new services (websites, games, etc.) are deployed over IPv6 alone because there are no new IPv4 addresses left. And at some point everything that works over IPv4 will also work over IPv6. At that point disabling IPv4 will save time and money (why maintain two protocols when one is enough?).