Although we haven't even gotten PCI Express 5.0 devices into our hot little hands yet, believe it or not, the PCIe 6.0 specification has already been released. And as you may have guessed, it's insanely fast.
At 126GB/s one way on an x16 link, it's twice as fast as 5.0, four times as fast as 4.0, which is still fairly new on the market, and eight times faster than the 3.0 devices many of us idiots are currently using. But how do they make it so fast? And more importantly, does it have real relevance for you, the home user, or is it just overkill?
To find out, we spoke with Debendra Das Sharma, an Intel Fellow in their data platforms group, and we'd like to thank him for lending his time and expertise.
So unsurprisingly, PCI Express 6.0 is backwards compatible with all the previous generations of PCIe, but if you go all the way back to version 1.0, you could only get up to 4GB/s one way from an x16 slot. Now we're pushing about 32 times as much data through the same link.
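That doubling-per-generation pattern is easy to sanity-check yourself. Here's a quick sketch in Python; note these are the rounded headline figures and ignore encoding and packet overhead, which is why generation 6 lands at ~128GB/s here rather than the spec's ~126GB/s:

```python
def x16_bandwidth_gb_s(gen: int) -> int:
    """Approximate one-way x16 bandwidth for a PCIe generation.

    Each generation roughly doubles the previous one, starting from
    4 GB/s for PCIe 1.0. Encoding/packet overhead is ignored.
    """
    return 4 * 2 ** (gen - 1)

for gen in range(1, 7):
    print(f"PCIe {gen}.0 x16: ~{x16_bandwidth_gb_s(gen)} GB/s one way")
```

Running it shows the same 32x jump from 1.0 to 6.0 mentioned above.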
Older revisions of PCI Express got faster and faster because they increased their transmission frequencies, but it turns out you can only do this so much before the signal becomes super unstable.
It's kind of similar to how a 5GHz Wi-Fi connection is faster than a 2.4GHz link, but also less stable at long distances. So instead, PCIe 6.0 uses a signaling technique called PAM4, which carries two bits of data per signal instead of just one.
Unlike traditional signaling, where one voltage represents a zero and a second voltage represents a one, PAM4 uses four different voltage levels, each corresponding to zero-zero, zero-one, one-zero, or one-one, meaning twice as much data is sent per unit of time.
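To make that concrete, here's a toy sketch of the four-level idea. The voltage values and the bit-pair-to-level mapping are illustrative only (real PAM4 links typically Gray-code the levels, and the actual electrical values come from the spec, not from us):

```python
# Illustrative PAM4 mapping: four voltage levels, two bits per symbol.
PAM4_ENCODE = {"00": -3, "01": -1, "10": 1, "11": 3}
PAM4_DECODE = {level: bits for bits, level in PAM4_ENCODE.items()}

def encode(bits: str) -> list[int]:
    """Map an even-length bit string to a sequence of PAM4 levels."""
    return [PAM4_ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def decode(levels: list[int]) -> str:
    """Recover the original bit string from the level sequence."""
    return "".join(PAM4_DECODE[level] for level in levels)

symbols = encode("01101100")
print(symbols)         # only four symbols needed for eight bits
print(decode(symbols))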
However, just like in regular life, shoving more stuff through the pipes isn't always the greatest idea. In this case, it increases the rate of errors. And even with the PCI Special Interest Group adding a few nanoseconds of latency to reduce the error rate to roughly one error per million bits, that's still a lot of potential errors when you consider how much data flows through a typical PCI Express link.
However, a few bytes in each chunk of data that's sent are reserved for error checking and correction. Most errors can be fixed on the spot by the receiving device, and if a chunk is too damaged to repair, the receiver can ask for it again using just a few bytes of data. Because this error-correcting scheme is quite lightweight, it adds only a very small amount of latency. This way, PCIe 6.0 can operate at very high speeds without constantly losing data integrity.
In fact, it's estimated that instead of an error rate of one in a million bits, PCIe 6.0 can operate with only one uncorrected error every billion billion hours. So you'll probably never have to worry about it unless you're an elf from Middle-earth.
But hold on a minute: all this performance is well and good, but my graphics card doesn't even saturate the PCI Express slot in my computer today. Why should I care about this? Well, one reason is that as we continue to ask more and more of our devices, having the fattest pipe possible will ensure that we can do things like hit our SSDs with mega-sized downloads, stream 4K and 8K HDR video, and keep up with the ever-increasing demands video games put on our graphics cards, all at the same time.
But going beyond just your home PC, think of all the cloud services you utilize on a daily basis for applications from voice assistants to IoT devices to self-driving cars, well assuming you can afford one. All these gadgets need a high bandwidth interface that can process data with minimal latency. Think of an autonomous car that quickly has to get data from a camera to a CPU to a 5G modem, which then goes into a server somewhere that has to respond quickly to aid in hazard recognition, and is also moving data around internally for machine learning. That's a lot of data on both ends of that connection.
PCI Express 6.0 could deliver enough speed to make the experience seamless, and maybe even tell you about an icy road ahead before you end up in the ditch.