SDR vs DDR vs QDR vs EDR vs HDR: InfiniBand Speed Guide

Published on February 12, 2026

Introduction

InfiniBand is a high-performance networking technology widely used in AI clusters, supercomputers, and data centers where ultra-low latency and massive bandwidth are critical. Over the years, InfiniBand has evolved through multiple generations of data-rate standards, commonly referred to as SDR, DDR, QDR, EDR, and HDR. Each generation represents a major leap in throughput, encoding efficiency, and scalability, enabling faster communication between compute nodes, GPUs, and storage systems. This page explains the differences between SDR, DDR, QDR, EDR, and HDR, their bandwidth capabilities, and how they fit into modern AI and HPC deployments.

Understanding the Generations of Speed

SDR (Single Data Rate): The foundational generation of InfiniBand. It established the baseline signaling rate of 2.5 Gbps per lane. In the early days of high-speed interconnects, this was the standard for connecting servers in a cluster.

DDR (Double Data Rate): As the name suggests, DDR doubled the performance of the original standard. By increasing the signaling rate to 5 Gbps per lane, it allowed much faster communication between processors and storage without changing the physical wiring architecture.

QDR (Quad Data Rate): QDR was a major milestone for data centers, pushing the speed to 10 Gbps per lane. By using a 4x link (the most common configuration), QDR achieved a total throughput of 40 Gbps, which became the industry benchmark for several years.

EDR (Enhanced Data Rate): EDR marked a shift toward “100G” networking, moving the signaling rate to 25 Gbps per lane. EDR also uses a more efficient encoding scheme (64b/66b rather than the 8b/10b of earlier generations), which cuts the encoding overhead (bits spent on framing rather than data) so that roughly 97% of the raw bandwidth carries actual payload.
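
To make the encoding overhead concrete, here is a minimal Python sketch of the arithmetic. The helper `effective_gbps` is hypothetical, written for this article only, and is not part of any InfiniBand library or tool:

```python
# Illustrative arithmetic only: payload rate per lane after line-code overhead.
# With 8b/10b encoding, every 8 data bits are sent as 10 bits on the wire
# (80% efficient); with 64b/66b, 64 data bits travel as 66 bits (~97% efficient).

def effective_gbps(signaling_gbps: float, data_bits: int, coded_bits: int) -> float:
    """Effective payload rate = raw signaling rate * (data bits / coded bits)."""
    return signaling_gbps * data_bits / coded_bits

print(effective_gbps(10, 8, 10))   # QDR lane, 8b/10b  -> 8.0 Gbps of data
print(effective_gbps(25, 64, 66))  # EDR lane, 64b/66b -> ~24.2 Gbps of data
```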

HDR (High Data Rate): HDR represents the “200G” generation, running at 50 Gbps per lane. In modern AI data centers, HDR is often used to connect GPUs in massive clusters so that the network does not become a bottleneck during complex training tasks.

| Generation | Lane Speed (Gbps) | Aggregate Throughput (4x Link) | Encoding Efficiency | Era/Usage |
|------------|-------------------|--------------------------------|---------------------|-----------|
| SDR        | 2.5               | 10 Gbps                        | 80% (8b/10b)        | Legacy/founding standard |
| DDR        | 5.0               | 20 Gbps                        | 80% (8b/10b)        | Early HPC clusters |
| QDR        | 10                | 40 Gbps                        | 80% (8b/10b)        | Standardized 40G networking |
| EDR        | 25                | 100 Gbps                       | ~97% (64b/66b)      | Modern “100G” enterprise |
| HDR        | 50                | 200 Gbps                       | ~97% (64b/66b)      | AI training and supercomputing |
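
The table’s numbers follow directly from lane speed, link width, and encoding. The short Python sketch below reproduces them; the `GENERATIONS` data is hand-copied from the table above (an assumption of this article, not queried from hardware):

```python
# Reproduces the 4x-link numbers above: raw aggregate = lane speed * lanes,
# effective payload = raw aggregate * encoding efficiency.

GENERATIONS = {
    # generation: (lane speed in Gbps, data bits, coded bits)
    "SDR": (2.5, 8, 10),
    "DDR": (5.0, 8, 10),
    "QDR": (10.0, 8, 10),
    "EDR": (25.0, 64, 66),
    "HDR": (50.0, 64, 66),
}

LANES = 4  # the common 4x link width

for name, (lane_gbps, data_bits, coded_bits) in GENERATIONS.items():
    raw = lane_gbps * LANES                    # signaling rate across the link
    effective = raw * data_bits / coded_bits   # usable data rate after encoding
    print(f"{name}: {raw:>5.1f} Gbps raw, {effective:>6.1f} Gbps effective")
```

Running it shows why the 8b/10b generations give up a fifth of their raw bandwidth (QDR’s 40 Gbps link carries 32 Gbps of data), while EDR and HDR deliver close to their nominal “100G” and “200G” figures.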

Summary

Selecting the right InfiniBand speed depends on workload scale, budget, and future scalability needs, making it a critical design decision for modern data center and AI networking architectures.