{{amd title|Infinity Fabric (IF)}}[[File:amd infinity fabric.svg|right|250px]]
 
'''Infinity Fabric''' ('''IF''') is a system [[interconnect architecture]] that facilitates data and control transmission across all linked components. This architecture is utilized by [[AMD]]'s recent microarchitectures for both CPUs (e.g., {{amd|Zen|l=arch}}) and graphics (e.g., {{amd|Vega|l=arch}}), as well as any additional accelerators they might add in the future. The fabric was first announced and detailed in April 2017 by Mark Papermaster, AMD's SVP and CTO.
  
 
== Overview ==
The Infinity Fabric consists of two separate communication planes - the Infinity '''Scalable Data Fabric''' ('''SDF''') and the Infinity '''Scalable Control Fabric''' ('''SCF'''). The SDF is the primary means by which data flows around the system between endpoints (e.g., [[NUMA node]]s, [[PHY]]s). The SDF may have dozens of connecting points hooking together components such as [[PCIe]] PHYs, [[memory controller]]s, USB hubs, and the various computing and execution units. The SDF is a [[superset]] of what was previously [[HyperTransport]]. The SCF is a complementary plane that handles the transmission of the many miscellaneous system control signals - this includes things such as thermal and power management, test, security, and third-party IP. With those two planes, AMD can efficiently scale up many of its basic computing blocks.
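To make the two-plane split concrete, here is a minimal sketch (a toy Python model with hypothetical class and endpoint names, not AMD's implementation) of the kinds of endpoints the SDF ties together and the services the SCF carries:

<syntaxhighlight lang="python">
# Toy model of the two Infinity Fabric planes (illustrative only; the
# class and endpoint names here are hypothetical, not AMD's).
from dataclasses import dataclass, field

@dataclass
class ScalableDataFabric:
    """Data plane: moves data between the attached endpoints."""
    endpoints: list = field(default_factory=list)

@dataclass
class ScalableControlFabric:
    """Control plane: carries miscellaneous system control signals."""
    services: list = field(default_factory=list)

sdf = ScalableDataFabric(["CCX0", "CCX1", "UMC0", "UMC1", "PCIe PHY", "USB hub"])
scf = ScalableControlFabric(["power management", "thermal", "test", "security"])
print(f"SDF endpoints: {sdf.endpoints}")
print(f"SCF services:  {scf.services}")
</syntaxhighlight>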
  
== Scalable Data Fabric (SDF) ==
[[File:amd zeppelin sdf plane block.svg|400px|right]]
The Infinity Scalable Data Fabric (SDF) is the data communication plane of the Infinity Fabric. All data flowing to and from the cores and the other peripherals (e.g., the memory controller and I/O hub) is routed through the SDF. A key feature of the coherent data fabric is that it is not limited to a single die: it can extend over multiple dies in an [[MCP]] as well as over multiple sockets via PCIe links (possibly even across independent systems, although that is speculation). There is also no constraint on the topology of the nodes connected over the fabric; communication can be done directly node-to-node, by island-hopping in a [[bus topology]], or across a [[mesh topology]] system.
  
In the case of AMD's processors based on the {{amd|Zeppelin}} SoC and the {{amd|Zen|Zen core|l=arch}}, the block diagram of the SDF plane is shown on the right. The two {{amd|CPU Complex|CCXs}} are directly connected to the SDF plane through the '''Cache-Coherent Master''' ('''CCM'''), which provides the mechanism for coherent data transport between cores. There is also a single '''I/O Master/Slave''' ('''IOMS''') interface for I/O hub communication. The hub contains two [[PCIe]] controllers, a [[SATA]] controller, the [[USB]] controllers, an [[Ethernet]] controller, and the [[southbridge]]. From an operational point of view, the IOMS and the CCMs are the only interfaces capable of making DRAM requests.
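A minimal sketch of those attach points (illustrative Python, not RTL; the two UMC ports are described in the next paragraph) makes the DRAM-request rule explicit:

<syntaxhighlight lang="python">
# Sketch of the Zeppelin SDF attach points described above. Per the text,
# only the Cache-Coherent Masters (CCM) and the I/O Master/Slave (IOMS)
# are capable of issuing DRAM requests; the UMCs service them.
class SdfPort:
    def __init__(self, name: str, kind: str):
        self.name = name
        self.kind = kind  # "CCM", "IOMS", or "UMC"

    def can_request_dram(self) -> bool:
        return self.kind in ("CCM", "IOMS")

ports = [SdfPort("CCX0", "CCM"), SdfPort("CCX1", "CCM"),
         SdfPort("I/O hub", "IOMS"),
         SdfPort("UMC0", "UMC"), SdfPort("UMC1", "UMC")]
print([p.name for p in ports if p.can_request_dram()])
# -> ['CCX0', 'CCX1', 'I/O hub']
</syntaxhighlight>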
 
  
The DRAM is attached to the DDR4 interface, which in turn is attached to a Unified Memory Controller (UMC). There is one UMC per DDR4 channel - two in total - and each UMC is directly connected to the SDF.
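For a concrete sense of the numbers, a quick back-of-the-envelope calculation (plain Python; the 8-byte transfer width is standard for a 64-bit DDR4 channel, and the rate is the DDR4-2666 assumed throughout this article):

<syntaxhighlight lang="python">
# Peak bandwidth of Zeppelin's two DDR4-2666 channels.
mts = 2666              # DDR4-2666: mega-transfers per second
bytes_per_transfer = 8  # standard 64-bit DDR4 channel
channels = 2            # one UMC per channel

per_channel = mts * 1e6 * bytes_per_transfer / 1e9       # GB/s
print(f"{per_channel:.2f} GB/s per channel")             # 21.33 GB/s
print(f"{per_channel * channels:.2f} GB/s dual-channel") # 42.66 GB/s
</syntaxhighlight>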
  
== Inter-/Intra- communication ==
[[File:epyc tech dayp77.jpg|right|thumb|[[AMD]] {{amd|EPYC}} dual-socket config]]

: '''Dual-socket, 4-die multi-chip package:'''
: [[File:amd infinity fabric dual-socket(4 dies).svg|650px]]
* Rates assume DDR4-2666 is used.

Note that there is a maximum of two hops between any two physical dies, whether both are in the same package or in different sockets.
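The two-hop bound follows from the topology: dies are fully connected within a package, and each die has one link to its counterpart die in the other socket (the four inter-socket links discussed below). A small, self-contained check (a toy Python model of that topology, not AMD's routing logic) confirms it:

<syntaxhighlight lang="python">
# Toy model of the dual-socket, 4-die topology: full connectivity within
# a package, plus one link from each die to the same die in the other
# socket. BFS confirms any die reaches any other in at most two hops.
from collections import deque
from itertools import product

dies = [f"P{p}D{d}" for p in range(2) for d in range(4)]
links = {die: set() for die in dies}
for p, a in product(range(2), range(4)):
    links[f"P{p}D{a}"] |= {f"P{p}D{b}" for b in range(4) if b != a}  # intra-package
    links[f"P{p}D{a}"].add(f"P{1 - p}D{a}")                         # inter-socket

def hops(src: str, dst: str) -> int:
    """Minimum number of fabric hops between two dies (BFS)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    raise ValueError("unreachable")

print(max(hops(s, d) for s, d in product(dies, dies)))  # -> 2
</syntaxhighlight>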
 
 
With the implementation of the Infinity Fabric in the {{amd|Zen|l=arch}} microarchitecture, intra-chip (i.e., die-to-die) communication over AMD's '''Global Memory Interconnect''' has a bi-directional bandwidth of 39.736 GB/s per 4-byte-wide link. AMD uses [[single-ended signaling]] (as opposed to a [[differential signaling|differential]] PHY) along with zero termination power in order to increase efficiency when transmitting idles. This allows the CPU cores to make use of the saved power when workloads are not utilizing the fabric's entire bandwidth. AMD uses a 256-bit-wide interface on-die and a 32-bit-wide interface per link for die-to-die communication. It is worth pointing out that for some products, e.g. {{amd|Ryzen Threadripper}}, which uses a two-die configuration, AMD appears to be using double the links for a 64-bit-wide interface.

Note that this is the same bandwidth as the dual-channel DDR4-2666 memory intended for the system; the bandwidth of the fabric is therefore directly tied to the DRAM transfer rate. Because the wiring distance is short enough, clock skew can be suppressed. Additionally, in order to reduce latency, AMD reduced the number of [[FIFO]] buffers between the high-speed interfaces as much as possible. In AMD's {{amd|EPYC}} server processor family, which consists of four dies, this gives a bisection bandwidth of 158.944 GB/s. At 2666 MT/s, the fabric can transfer a bit between two dies at a cost of roughly ~2 pJ/bit. AMD claims that the bisection bandwidth achieved helps the MCM design behave more closely to a monolithic design.
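As a quick sanity check on these figures (plain Python, using only the numbers quoted above):

<syntaxhighlight lang="python">
# Die-to-die (Global Memory Interconnect) numbers quoted above.
link_bw_gbs = 39.736    # bidirectional bandwidth per die-to-die link, GB/s
bisection_links = 4     # links cut by the bisection of a 4-die EPYC MCM
energy_per_bit = 2e-12  # ~2 pJ/bit quoted for on-package transfers

print(f"bisection: {link_bw_gbs * bisection_links:.3f} GB/s")  # 158.944 GB/s

# Power of one fully busy link at ~2 pJ/bit:
watts = link_bw_gbs * 1e9 * 8 * energy_per_bit
print(f"~{watts:.2f} W per saturated link")                    # ~0.64 W
</syntaxhighlight>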
 
 
 
Inter-chip communication (i.e., chip-to-chip, as in the case of a [[dual-socket]] server) has greater restrictions (e.g., on the number of external signals available). AMD uses four wide high-bandwidth links, one from each die to its counterpart die in the other socket. This gives a maximum of two hops between any requester and responder. Those links use traditional [[differential signaling|differential]] [[SerDes]] techniques operating at 10.6 GT/s in order to cope with the longer physical distance between the sockets. This network has a bi-directional bandwidth of 35.3 GB/s per link, for a bisection bandwidth of 141.2 GB/s - slightly less than the operating rate would suggest, due to the overhead of CRC error detection, which accounts for roughly 10% of the total bandwidth. This works out to around ~9 pJ/bit.
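The same arithmetic applies to the socket-to-socket links; backing out the raw rate implied by the ~10% CRC figure is a rough estimate, assuming the overhead applies uniformly:

<syntaxhighlight lang="python">
# Socket-to-socket SerDes numbers quoted above.
link_bw_gbs = 35.3      # bidirectional bandwidth per link, after CRC overhead
crc_overhead = 0.10     # ~10% of raw bandwidth spent on CRC protection
energy_per_bit = 9e-12  # ~9 pJ/bit quoted for off-package transfers

print(f"bisection: {link_bw_gbs * 4:.1f} GB/s")                        # 141.2 GB/s
print(f"raw link rate: ~{link_bw_gbs / (1 - crc_overhead):.1f} GB/s")  # ~39.2 GB/s
watts = link_bw_gbs * 1e9 * 8 * energy_per_bit
print(f"~{watts:.1f} W per saturated link")                            # ~2.5 W
</syntaxhighlight>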
 
 
 
The processor keeps track of how active each of the links is and makes use of a dynamic SerDes link-width management mechanism based on bandwidth and workload requirements, allowing power to be conserved when the full link width is not necessary.
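AMD has not published the exact policy; the sketch below is a hypothetical illustration of the idea - pick the narrowest link width that still leaves headroom for the observed demand:

<syntaxhighlight lang="python">
# Hypothetical link-width policy (illustrative only - the real management
# mechanism is not publicly documented). Demand is measured as a fraction
# of the full-width link bandwidth.
def pick_link_width(demand: float, full: int = 16,
                    widths: tuple = (2, 4, 8, 16), headroom: float = 0.7) -> int:
    """Return the narrowest width (in lanes) whose bandwidth, derated by
    the headroom factor, still covers the observed demand."""
    for w in widths:
        if demand <= headroom * w / full:
            return w
    return full

print(pick_link_width(0.05))  # mostly idle link -> 2 lanes
print(pick_link_width(0.90))  # saturated link   -> 16 lanes
</syntaxhighlight>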
 
  
 
== Scalable Control Fabric (SCF) ==
The Infinity Scalable Control Fabric (SCF) is the control communication plane of the Infinity Fabric. It carries the miscellaneous system control signals - thermal and power management, test, security, and third-party IP. From an outside view, the SCF of a multi-die processor can be seen as a single extended control fabric, allowing the multiple dies to coordinate functions such as power management.

== References ==

* AMD Infinity Fabric introduction by Mark Papermaster, April 6, 2017
* AMD EPYC Tech Day, June 20, 2017
* ISSCC 2018