Interconnect Bandwidth Heterogeneity on AMD MI250x and Infinity Fabric
Demand for low-latency, high-bandwidth data transfer between GPUs has driven the development of multi-GPU nodes. Physical constraints on the manufacture and integration of such systems have yielded heterogeneous intra-node interconnects, where not all pairs of devices are connected equally. The next generation of supercomputing platforms is expected to feature AMD CPUs and GPUs. This work characterizes the extent to which interconnect heterogeneity is visible through GPU programming APIs on a system with four AMD MI250x GPUs, and provides several insights for users of such systems.