Pop quiz! Which section of the PCI Express Base Specification covers bifurcation? Here, I'll even wait while you look…
Did you give up yet? Good. First of all, you won't find the word "bifurcation" in the spec at all! The term is commonly understood to mean splitting a set of PCI Express lanes into multiple links – and it's most common on Root Complexes. So for example, our friendly neighborhood Root Complex (RC) vendor builds an RC with 16 lanes – but he knows that in some uses the RC will connect to a single x16 link, while in others his customers only want x4 links. Rather than waste the 12 now-unused lanes, wouldn't it be nice to instead be able to configure those 16 total lanes as 4 links each of x4 width?
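To make the lane accounting concrete, here's a minimal sketch of what that carving-up means: the requested links must exactly consume the PHY's lanes, and each link must be a width PCIe actually defines. (The checker itself is purely illustrative; only the set of legal widths comes from the spec.)

```python
# Illustrative sketch: splitting a 16-lane PHY into independent links.
# VALID_WIDTHS lists the link widths the PCIe spec defines; the checker
# function and its name are invented for this example.

VALID_WIDTHS = {1, 2, 4, 8, 16}

def valid_bifurcation(total_lanes, link_widths):
    """True if the requested links exactly consume the PHY's lanes."""
    return (sum(link_widths) == total_lanes
            and all(w in VALID_WIDTHS for w in link_widths))

print(valid_bifurcation(16, [16]))          # one x16 link -> True
print(valid_bifurcation(16, [4, 4, 4, 4]))  # four x4 links -> True
print(valid_bifurcation(16, [8, 4]))        # 4 lanes left stranded -> False
```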
Since I sent you on a bit of a wild goose chase earlier, I'll suggest you check section 4.2.4.12 of the PCIe 3.1 Base spec for what little is covered around "bifurcation" – it's all of about 12 lines, and I can sum up what you need to know in two quotes as follows:
So you see, as far as the PCIe spec is concerned, this process is really an implementation decision regarding allocating PHY lanes to controller(s). Notice that the spec is very clear here that these are each completely independent links, since it calls out that they have separate LTSSMs. That means, of course, that each link negotiates its own independent width, data rate, credits, etc. The other key point is that there's no negotiation for bifurcation: it's a configuration choice made at power-up time. (Usually it's system-configuration-specific, so a board designer externally configures the SoC to provide the number and width of links desired for that particular implementation.)
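One common way that power-up configuration happens is via strap pins sampled at reset. Here's a hypothetical sketch of such a decode – the two-bit encoding table is entirely invented for illustration, since the actual mechanism and encodings are up to each SoC designer:

```python
# Hypothetical power-up strap decode: the board designer ties strap pins
# to tell the SoC how to carve its 16 lanes into links. This encoding
# table is invented for illustration; real SoCs define their own.

STRAP_TABLE = {
    0b00: [16],           # single x16 link
    0b01: [8, 8],         # two x8 links
    0b10: [8, 4, 4],      # one x8 plus two x4 links
    0b11: [4, 4, 4, 4],   # four x4 links
}

def links_at_power_up(strap_pins):
    # Sampled once at reset -- there is no in-band negotiation for this,
    # unlike link width/speed which each link negotiates on its own.
    return STRAP_TABLE[strap_pins]

print(links_at_power_up(0b11))  # -> [4, 4, 4, 4]
```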
Consider our earlier example of a x16 RC – let's assume that's a Synopsys customer, so he already has Gen4. When the RC is configured as 4 links of Gen4 x4 width (call them A, B, C, D) – link A might be connected to another Synopsys customer's Gen4 x4 SSD and so negotiates Gen4 x4, links B and C might connect to some older Gen3 x4 devices, and link D might connect to a slow Gen1 x1 device. Now we can see our RC designer has four different streams of data to deal with, at some very different bandwidths. Where should he find a PCIe controller which supports bifurcation into 4 links like this?
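Just how different are those bandwidths? The raw rates (2.5 GT/s for Gen1 up to 16 GT/s for Gen4) and line encodings (8b/10b for Gen1/Gen2, 128b/130b for Gen3/Gen4) come straight from the specs; this rough calculation ignores packet and protocol overhead, so treat the numbers as ballpark figures:

```python
# Rough per-link bandwidth for the A/B/C/D example. Raw GT/s rates and
# encodings are per the PCIe specs; protocol overhead is ignored.

GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}          # GT/s per lane
ENCODING    = {1: 8/10, 2: 8/10, 3: 128/130, 4: 128/130}  # line-code efficiency

def link_gbytes_per_s(gen, width):
    """Approximate usable bandwidth in GB/s (divide by 8 bits/byte)."""
    return GT_PER_LANE[gen] * ENCODING[gen] * width / 8

for name, gen, width in [("A", 4, 4), ("B", 3, 4), ("C", 3, 4), ("D", 1, 1)]:
    print(f"link {name}: Gen{gen} x{width} ~ {link_gbytes_per_s(gen, width):.2f} GB/s")
```

Link A works out to roughly 30x the bandwidth of link D, which is exactly why forcing them through one shared interface is asking for trouble.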
Answer: he shouldn't! He should use four separate controllers, each incorporating its own configuration space, LTSSM, buffering and credit logic, etc. We just established that bifurcation results in separate independent links, so why would our RC designer try to force them together? If he did manage to share those four links through a single application interface, he would find that his high-speed Gen4 x4 SSD (on link A) was being throttled waiting for packets to trickle in and out from that Gen1 x1 device on link D.
Wait then Richard, so what does it mean to have a PCIe controller "support bifurcation"?
Well, honestly – not much. Partitioning the lanes of a PHY into one or more links is pretty much invisible to the controller – it just sees a PHY interface of some width. So *ANY* controller can "support bifurcation" in that sense. I suppose a controller might claim to "support bifurcation" through a single application interface, but as we established above, that wouldn't be a good idea due to the bandwidth variations. Since each link needs an entire PCIe protocol stack, you wouldn't save very much (if any) area over simply using multiple controllers – and you'd be stuck with only the configuration(s) designed into that controller. (I had a customer recently who wanted 8 lanes of PCIe "bifurcatable" into 1ea x8 or 1ea x4 or 2ea x2s, so would he even have found that exact configuration available?)
Perhaps now you can see why the PCIe spec doesn¡¯t say much about bifurcation!