We frequently see DDR SDRAM listed as a key specification of computers and digital devices. In this two-part guest commentary, Ben Miller from Keysight Technologies shares his insights into the design of DDR SDRAM (Part 1) and how faster memory speeds shape the future (Part 2).

DDR SDRAM is a building block of many digital devices. Photo by Jeremy Bezanger on Unsplash.
Faster data processing requires faster memory.
Double data rate synchronous dynamic random-access memory (DDR SDRAM) enables the world’s computers to work with the data in memory.
Faster, not wider
Since the origins of SDRAM, engineers have faced challenges with increasing memory speeds.
DDR emerged as a faster, more efficient way to handle memory, while providing a universal standard between chip designers and device manufacturers.
As shown in Figure 1, DDR memory consists of a memory controller which transmits clock, address and control signals, and a series of DRAM chips which store the data.
In a write operation, the controller sends data and strobe signals to the DRAM; in a read operation, the DRAM sends data and strobe signals back over the same bi-directional line.
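The "double data rate" in the name refers to transferring data on both edges of the clock or strobe, rather than only on the rising edge as older single data rate (SDR) SDRAM did. Here is a minimal toy sketch in Python (my own illustration, not from the article) of that difference:

```python
# Toy model of "double data rate": one clock cycle carries two data slots,
# one aligned with the rising edge and one with the falling edge.
clock_cycles = [(1, 0), (1, 1), (0, 0), (0, 1)]   # (rising-edge bit, falling-edge bit)

# Single data rate: only the rising-edge slot is used, one transfer per clock.
sdr = [rising for rising, _ in clock_cycles]

# Double data rate: both edges carry data, two transfers per clock.
ddr = [bit for pair in clock_cycles for bit in pair]

print(len(sdr), "transfers over", len(clock_cycles), "clocks")   # 4 transfers over 4 clocks
print(len(ddr), "transfers over", len(clock_cycles), "clocks")   # 8 transfers over 4 clocks
```

Same clock frequency, twice the transfers per second: that is the core trick that every DDR generation builds on.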
DDR SDRAM became the standard in the late 1990s and has since been improved upon many times over.
Prior to DDR, memory speeds maxed out in the range of 100 MT/s.
Why is DDR important?
DDR is used everywhere: not just in servers, workstations, and desktops, but also in consumer electronics, automobiles, and many other embedded systems.
DDR SDRAM is used for running applications and doing computations at blazing speeds, and the DDR standard ensures that memory providers all meet the same level of quality and speed for reading and writing to memory.
Each new generation of the DDR standard, defined by the Joint Electron Device Engineering Council (JEDEC), delivers significant improvements over the previous generation including increased speeds, reduced footprint, increased capacity, and improved power efficiency.
The latest standard, DDR5, was released in 2020.
Developers are already eager to take advantage of the incredible data rates: up to 6400 megatransfers per second (MT/s), doubling that of the previous generation.
These memory speed improvements will enable technologies that rely heavily on near-instant data processing.
However, these improvements introduce new design and test challenges for memory designers and manufacturers.
Let’s look at how the DDR5 standard deals with these challenges, how developers are taking advantage of DDR5 in emerging technologies, and what the next generation of DDR might look like.
The first generation of the DDR standard boosted the data transfer rate to between 200 and 400 MT/s.
Each successive generation has generally doubled the bit rate of its predecessor.
This first group of standards, DDR1 through DDR3, eventually ran into signal integrity issues as data bit rates increased.
Along the way, JEDEC developed related specs optimised for mobile applications (Low-Power DDR: LPDDR), and for computer graphics (Graphics DDR: GDDR).
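To put numbers on that doubling, here is a rough summary of nominal top rates per generation (my own illustrative figures; actual JEDEC speed bins vary within each generation, and later bins run higher):

```python
# Nominal top transfer rates by generation, showing the rough doubling from
# one DDR generation to the next (illustrative figures, not official maxima).
top_rate_mt_s = {"DDR1": 400, "DDR2": 800, "DDR3": 1600, "DDR4": 3200, "DDR5": 6400}
for gen, rate in top_rate_mt_s.items():
    print(f"{gen}: {rate} MT/s")
```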
By the release of DDR4, speeds of 3200 MT/s were starting to cause bit error rate issues.
Faster speeds complicated design and validation by making signal integrity even more of a priority.
While the clock and strobe signals are differential and cancel out noise, the other signals, including the bidirectional data signal between the controller and the DRAM chip, are single ended, making them susceptible to noise, crosstalk, and interference.
Higher DDR throughput is also possible by transmitting more bits at a time.
Unlike many modern communication protocols, DDR is a parallel bus, not a serial one.
However, increasing the number of pins on a device above a certain amount is unrealistic and expensive, as both the die and package would need to grow to support the added channels.
The pin count on memory has increased in prior generations, but it has since leveled out, as DDR5 has the same number of pins as DDR4.
Faster signal speed alone is now the method of choice to increase memory bit rates for next-generation devices.
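A quick back-of-the-envelope comparison illustrates the trade-off. The figures below are my own and assume a 64-bit-wide module data bus, which the article does not state; peak throughput is simply data-pin count times per-pin transfer rate.

```python
# Peak throughput scales with (number of data pins) x (per-pin transfer rate).
# Illustrative numbers assuming a 64-bit-wide module data bus.
def peak_gb_per_s(data_pins, mt_per_s):
    return data_pins * mt_per_s * 1e6 / 8 / 1e9   # bits/s -> bytes/s -> GB/s

print(peak_gb_per_s(64, 3200))   # DDR4-3200, 64 data pins: ~25.6 GB/s
print(peak_gb_per_s(128, 3200))  # going wider doubles throughput, but needs a bigger die and package
print(peak_gb_per_s(64, 6400))   # DDR5-6400: same pin count, same doubling, achieved by faster signaling
```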
With faster speed comes challenges
As mentioned previously, faster transmission speeds can create issues with bit error rates. DRAM processes are also inherently speed-limited, since each data bit is stored as charge on a capacitor.
Channel and interconnect effects can cause inter-symbol interference (ISI) in signals operating at high speed.
On an oscilloscope, this appears as an eye diagram that closes further and further as the bit rate increases above 3600 MT/s.

Figure 2: Inter-symbol interference increases as bit rate increases, resulting in a closed eye diagram.
High bit error rates result because the receiver cannot resolve the symbols through so much distortion in the signal.
In other words, the data bits become indistinguishable at high speeds, as shown in the eye diagrams in Figure 2.
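To get a feel for how ISI closes the eye, here is a toy Python model of my own (not Keysight's analysis): the channel smears a fraction of each bit into the bits that follow, and once that leakage grows large enough, the lowest "one" level dips below the highest "zero" level and the eye is effectively closed.

```python
import random

# Toy inter-symbol interference model: each received sample is the current bit
# plus fractions of earlier bits leaking in through the channel (post-cursor taps).
def received_samples(bits, post_cursor_taps):
    samples = []
    for i, b in enumerate(bits):
        level = float(b)
        for k, tap in enumerate(post_cursor_taps, start=1):
            if i - k >= 0:
                level += tap * bits[i - k]   # residue of earlier bits
        samples.append(level)
    return samples

def eye_opening(samples, bits):
    ones = [s for s, b in zip(samples, bits) if b == 1]
    zeros = [s for s, b in zip(samples, bits) if b == 0]
    return min(ones) - max(zeros)            # positive = open eye, negative = closed

random.seed(0)
bits = [random.randint(0, 1) for _ in range(2000)]
print(eye_opening(received_samples(bits, [0.1]), bits))        # mild ISI: eye still open (~0.9)
print(eye_opening(received_samples(bits, [0.6, 0.5]), bits))   # heavy ISI: eye closed (negative opening)
```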
To preserve signal integrity, DDR5 uses Decision Feedback Equalization (DFE) to compensate for signal distortion and reopen the eye.
High-speed serial communication also uses equalization for a similar purpose, but DDR5 places equalization both on the DDR controller side and inside the DRAM die itself.
This element was not present in DDR4.
Notably, the PHY is outside of the DRAM memory cell array itself because of potential issues with capacitance.
Figure 3 shows a block diagram to demonstrate where these elements are placed in a DDR5 memory system.
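To make the equalization idea concrete, here is a toy decision feedback equalizer in Python (my own sketch with assumed, idealised tap weights, not the actual DDR5 circuit): it subtracts the ISI contributed by bits it has already decided before slicing the next bit, which lets it recover data that a plain threshold would get wrong.

```python
import random

# Toy decision feedback equalizer (DFE): subtract the estimated ISI contributed
# by already-decided bits before slicing the current bit.
POST_CURSOR = [0.6, 0.5]   # assumed channel ISI taps, illustrative only

def channel(bits, taps=POST_CURSOR):
    return [bits[i] + sum(t * bits[i - k] for k, t in enumerate(taps, 1) if i - k >= 0)
            for i in range(len(bits))]

def dfe_decide(samples, taps=POST_CURSOR, threshold=0.5):
    decisions = []
    for s in samples:
        # cancel ISI using bits that have already been decided
        est = s - sum(t * decisions[-k] for k, t in enumerate(taps, 1) if len(decisions) >= k)
        decisions.append(1 if est > threshold else 0)
    return decisions

random.seed(1)
tx = [random.randint(0, 1) for _ in range(2000)]
rx = channel(tx)                                   # heavy ISI on the received samples
no_dfe = [1 if s > 0.5 else 0 for s in rx]         # plain thresholding
with_dfe = dfe_decide(rx)
print(sum(a != b for a, b in zip(tx, no_dfe)))     # many errors without equalization
print(sum(a != b for a, b in zip(tx, with_dfe)))   # 0 errors with ideal DFE taps
```

Real DFEs adapt their tap weights to the measured channel rather than knowing them in advance, but the feedback structure is the same.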
Engineers at every step of the DDR memory cycle will need to pay attention to the increased requirements of DDR5.
Device simulation, design, and validation require optimizing the transmitter, receiver, and channel for the most reliable data transfer, while dealing with the increased design complexity and smaller timing margins that come with faster bit rates.

Figure 3: New additions to DDR include specifications inside the die, such as an equalizer. Click to enlarge.
Device, board, and system test engineers should keep the increasing ISI and signal integrity issues in mind, especially when debugging DDR failures, a process complicated further by the equalizer on the data path.
Ultimately, the device must be tested for conformance with the JEDEC DDR5 standard to ensure interoperability with other memory components.
All this effort and complexity culminates in a more sophisticated, faster memory transfer system, allowing memory speeds to reach 6000 MT/s and beyond while maintaining a low bit error rate.