Summary: DRAM (e.g. DDR5 SDRAM) is a computer’s main working memory — volatile, capacitor-based storage positioned between CPU cache and persistent disk in the memory hierarchy, packaged as DIMMs that connect to the CPU via an integrated memory controller.
What the acronyms mean
DDR5 SDRAM stands for Double Data Rate Synchronous Dynamic Random Access Memory
| Term | Meaning |
|---|---|
| Double Data Rate | Transfers data on both rising and falling clock edges — 2× throughput for same clock frequency |
| Synchronous | Operation is locked to the system clock |
| Dynamic | Capacitors leak charge — cells must be periodically refreshed to retain data (see dram-read-write-refresh) |
| Random Access | Any cell is addressable in any order (contrast: sequential tape or spinning-disk HDD) |
Image: Micron DDR5 SDRAM Functional Block Diagrams
DDR5 2 GB ×8 die and its bank groups (used as the main example in this series)
Images: Additional diagrams for 4GB x4 and 1GB x16
Memory hierarchy
| Level | Typical capacity | Typical latency | Technology |
|---|---|---|---|
| Registers | ~1 KB | < 1 ns | On-die flip-flops |
| L1–L3 Cache | 0.5–64 MB | 1–40 ns | On-die SRAM (no refresh needed) |
| DRAM | 16–128 GB | ~17 ns | Off-die DRAM (this page) |
| NVMe SSD | 1–4 TB | ~50 µs | NAND flash |
| HDD | 1–20 TB | ~5 ms | Magnetic disk |
DRAM sits between fast, small, expensive on-die SRAM cache and slow, large, cheap persistent storage. Programs and files are copied from SSD into DRAM before the CPU processes them — loading bars correspond to SSD → DRAM transfers, a ~3,000× latency gap.
Image: Memory Hierarchy Diagram
(Source: Max Maxfield)
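The ~3,000× figure above is just the ratio of the two latency rows in the table; a quick back-of-the-envelope check (latency values are the approximate ones from the table, not measurements):

```python
# Approximate latencies from the memory-hierarchy table above
ssd_latency_ns = 50_000   # NVMe SSD: ~50 µs
dram_latency_ns = 17      # DRAM: ~17 ns

ratio = ssd_latency_ns / dram_latency_ns
print(f"SSD → DRAM latency gap: ~{ratio:,.0f}×")  # ≈ 2,941×, i.e. "~3,000×"
```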
Physical packaging
Hierarchy (DDR5, single-rank 16 GB DIMM, ×8 chips):
- DIMM — Dual Inline Memory Module, 288-pin PCB; the physical memory stick. It has two memory sub-channels:
  - Sub-channel A — four ×8 chips presenting a 32-bit data bus (each chip contains one 2 GB die)
  - Sub-channel B — four ×8 chips presenting a 32-bit data bus (each chip contains one 2 GB die)
- Total: 8 chips × 2 GB = 16 GB per DIMM
Chip designation (×N notation): ×N means N data output pins per chip. Four ×8 chips wired in parallel give the 32-bit sub-channel data bus. ×4 chips would require 8 chips per sub-channel.
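The chip-count arithmetic above can be sketched in a couple of lines (a trivial helper, not from any real tool):

```python
def chips_per_subchannel(chip_width: int, bus_width: int = 32) -> int:
    """Number of xN chips wired in parallel to fill a sub-channel's DQ bus."""
    assert bus_width % chip_width == 0, "bus width must be a multiple of chip width"
    return bus_width // chip_width

print(chips_per_subchannel(8))  # x8 chips -> 4 per sub-channel
print(chips_per_subchannel(4))  # x4 chips -> 8 per sub-channel
```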
Review: Ranks
Ranks: A rank is a set of chips that respond together to a single CS command, collectively presenting the full sub-channel data bus width. A dual-rank DIMM adds a second independent rank per sub-channel, doubling capacity without widening the bus:
| DIMM type | Chips per sub-channel | Total chips (×8) |
|---|---|---|
| Single-rank | 4 | 8 |
| Dual-rank | 8 | 16 |

See 6:35 of video.
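Capacity scales linearly with rank count, as a quick sketch shows (a toy calculation assuming the 2 GB ×8 die used throughout this series):

```python
def dimm_capacity_gb(ranks: int, chips_per_rank: int = 8, die_gb: int = 2) -> int:
    """DIMM capacity = ranks × chips per rank (across both sub-channels) × die density."""
    return ranks * chips_per_rank * die_gb

print(dimm_capacity_gb(1))  # single-rank: 16 GB
print(dimm_capacity_gb(2))  # dual-rank: 32 GB — same bus width, double the chips
```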
Memory sub-channel architecture
DDR5 sub-channels
Each DDR5 DIMM is split into two independent sub-channels (A and B), each presenting its own 32-bit data (DQ) bus and its own dedicated Command/Address (CA) bus.
Since the sub-channels are fully independent, the memory controller (CPU) can issue different commands down each CA bus simultaneously — for example, a READ on sub-channel A while sub-channel B handles a WRITE.
Images: Two independent 32-bit memory sub-channels A and B, each with its own CA bus
For dual-rank: show rank 0 vs rank 1 chip positions.
Per sub-channel signals:
| Signal group | Signal/Pin Name | Count | Purpose |
|---|---|---|---|
| Data (see image below) | DQ | 32 (40 with ECC) | Bidirectional; 8 bits per chip × 4 chips per sub-channel |
| Command/Address | CA | 14 | Time-multiplexed: carries bank + row address (RAS phase), then column address (CAS phase) on the same pins |
| Chip Select | CS | 1–2 | Selects which rank responds |
| Differential clock | CK | 2 | Differential pair; data is transferred on both rising and falling edges |
Image: 32-bit sub-channel data (DQ) lines wired in parallel across four ×8 chips (8 bits per chip)
Hence each chip reads or writes only 8 bits per transfer.
Parallelism: Each sub-channel has its own CA bus
Each sub-channel has a dedicated CA[13:0] bus connecting it to the CPU’s integrated memory controller (IMC). This means two CA buses per DIMM, allowing the controller to issue different commands to sub-channel A and B (e.g. READ on A, while WRITE on B) in the same clock cycle — this is parallelism, not multiplexing.
Time-multiplexing of each CA bus
The CA bus is time-multiplexed — both phases reuse the same physical pins, so the physical pin count is far smaller than the logical address width.
Command and address multiplexing
In DDR5, the CA bus now carries both the command opcode (e.g. ACT, PRE, READ, WRITE…) and the 31-bit logical address across two phases:
More on Multiplexing: See multiplexing and burst buffer
- Row Address Strobe (RAS) phase:
  - 3-bit bank group (1 of 8)
  - 2-bit bank number (1 of 4)
  - 16-bit row number (1 of 65,536)
- Column Address Strobe (CAS) phase:
  - Without burst buffer: 10-bit column number (1 of 1,024 eight-bit columns)
  - With burst buffer:
    - 6-bit multiplexer select: 1 of 64 contiguous groups of 128 bitlines (64 × 128 = 8,192 bitlines total)
    - 4-bit burst position: selects 1 of 16 eight-bit segments within the 128-bit burst buffer
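The 31-bit logical address (3 + 2 + 16 + 10 bits) can be unpacked with simple bit masks. A sketch — the field ordering here is an assumption for illustration; real memory controllers interleave address bits in implementation-specific ways:

```python
def split_address(addr: int) -> dict:
    """Split a 31-bit DDR5 logical address into its fields.

    Assumed layout (high to low bits): bank group | bank | row | column.
    """
    assert 0 <= addr < (1 << 31), "address must fit in 31 bits"
    column     = addr & 0x3FF           # 10 bits -> 1 of 1,024 columns
    row        = (addr >> 10) & 0xFFFF  # 16 bits -> 1 of 65,536 rows
    bank       = (addr >> 26) & 0x3     # 2 bits  -> 1 of 4 banks
    bank_group = (addr >> 28) & 0x7     # 3 bits  -> 1 of 8 bank groups
    return {"bank_group": bank_group, "bank": bank, "row": row, "column": column}

addr = (5 << 28) | (2 << 26) | (1234 << 10) | 567
print(split_address(addr))
# {'bank_group': 5, 'bank': 2, 'row': 1234, 'column': 567}
```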
Multiplexing was more complex in DDR4. DDR5 simplified the architecture.
- DDR4 had:
  - 21 address wires: `A[16:0]` (row/column address), `BA[1:0]` (bank address), `BG[1:0]` (bank group)
  - 7 control wires: `RAS#` (row address strobe), `CAS#` (column address strobe), `WE#` (write enable), `CS#` (chip select), `CKE` (clock enable), `ODT` (on-die termination control), `RESET#` (reset)
- What changed in DDR5:
  - `RAS#`, `CAS#`, and `WE#` were eliminated as dedicated pins entirely
  - The 21 address pins plus the remaining control signals were folded into the consolidated `CA[13:0]` 14-pin time-multiplexed bus
- Outcome: the command/address interface went from 28 dedicated pins to 14 multiplexed pins
Branch Education’s video is misleading. This graphic is actually for DDR4, but he calls it DDR5:
ECC DIMMs: Each sub-channel carries 40 bits instead of 32 — the extra 8 bits carry system-level ECC syndrome bytes computed across the DIMM. This is separate from on-die ECC inside each chip (see dram-memory-cell).
Note: the CA pin count differs by generation — DDR4 used 21 dedicated address pins (plus separate control pins), while DDR5 consolidates everything into the 14-pin multiplexed CA bus per sub-channel.
DDR4 comparison
DDR4 had a single CA bus shared across all chips on the DIMM, and no sub-channel split. All chips responded to the same command stream, limiting command-level parallelism.
Sub-channel and CA bus are not the same thing
The sub-channel is the data path split, the CA bus is the command delivery mechanism. DDR5 gives each sub-channel its own CA bus as a consequence of making them independent, but they’re describing different aspects:
| | Sub-channel | CA bus |
|---|---|---|
| Describes | Data path | Command path |
| Width | 32-bit DQ | 14-bit CA |
| Count per DDR5 DIMM | 2 | 2 (one per sub-channel) |
Each DDR5 DIMM has two independent sub-channels (A and B), each with its own Command/Address (CA) bus. This is a major change from DDR4, which had a single shared CA bus controlling all chips on the DIMM.
Power management
DDR5 moves voltage regulation onto the DIMM itself via an on-board PMIC (Power Management IC):
- Input: 12 V (RDIMMs / server) or 5 V (UDIMMs / desktop) from the motherboard
- Output: regulated 1.1 V for all DRAM chips (down from DDR4’s 1.2 V)
- Four step-down switching regulators on the PMIC provide multiple regulated rails
- Reduces motherboard power-delivery complexity; improves per-DIMM efficiency
Image: Power management chips
Review: review entire section (specifically, link to motherboard memory channels (different to DIMM subchannels))
Connection to the CPU
The Integrated Memory Controller (IMC) — on-die since Intel Nehalem (2008) and AMD K8 (2003) — is the CPU’s interface to DRAM. Earlier architectures placed the memory controller in the motherboard’s northbridge, adding round-trip latency.
[!image] TODO: Memory system hierarchy diagram — CPU die (cores → L1/L2/L3 cache → IMC) → memory channels → DIMMs → sub-channels → chips → die. Annotate bandwidth and latency at each level.
What the IMC does:
- Translates physical memory addresses from the CPU into DRAM commands (ACT, RD, WR, PRE, REF)
- Schedules commands to maximise row hits and minimise row misses (see dram-row-hits-and-latency)
- Issues REFab and REFsb refresh commands on schedule (see dram-read-write-refresh)
- Manages power states (self-refresh, power-down modes)
- Controls sub-channel selection, rank selection, and command interleaving
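The row-hit scheduling mentioned above can be sketched with a toy open-row policy — this is an illustrative simplification, not any vendor's actual scheduling algorithm. The IMC tracks which row is open in each bank and only pays the PRE + ACT cost when a request targets a different row:

```python
class BankState:
    """Toy model of one DRAM bank as seen by the memory controller."""

    def __init__(self):
        self.open_row = None  # None -> bank is precharged (no row open)

    def commands_for(self, row: int) -> list:
        """Return the DRAM command sequence needed to read from `row`."""
        if self.open_row == row:
            return ["RD"]                                    # row hit: column access only
        cmds = [] if self.open_row is None else ["PRE"]      # close the old row first
        cmds += ["ACT", "RD"]                                # open new row, then read
        self.open_row = row
        return cmds

bank = BankState()
print(bank.commands_for(7))  # ['ACT', 'RD']        — bank was idle
print(bank.commands_for(7))  # ['RD']               — row hit: cheapest case
print(bank.commands_for(9))  # ['PRE', 'ACT', 'RD'] — row miss: most expensive case
```

Scheduling to maximise the middle case (row hits) is exactly why the IMC reorders requests rather than issuing them in arrival order.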
Intel Gear mode (12th gen Alder Lake+):
| Mode | IMC : memory clock ratio | Use case |
|---|---|---|
| Gear 1 | 1 : 1 | Lowest latency; feasible at lower DDR5 speeds (typically ≤ DDR5-4800) |
| Gear 2 | 1 : 2 | IMC runs at half memory frequency for signal integrity; adds ~10–15 ns effective latency |

AMD (Ryzen 7000+ on AM5): The IMC is coupled to the Infinity Fabric clock (FCLK). At FCLK = MCLK (1:1 ratio), latency is minimised. At high DDR5 speeds where FCLK can't keep pace with MCLK, the desynchronisation adds measurable latency.
DDR5 specifications
| Metric | DDR4 (reference) | DDR5 range |
|---|---|---|
| Supply voltage (Vcc) | 1.2 V | 1.1 V |
| Transfer rate | 2,133–3,200 MT/s | 4,000–8,800 MT/s |
| Clock frequency | 1,067–1,600 MHz | 2,000–4,400 MHz |
| Peak bandwidth (per DIMM) | 25.6 GB/s | 32.0–70.4 GB/s |
| Max DIMM capacity | 64 GB | 512 GB |
| Banks | 16 (4 groups × 4 banks) | 32 (8 groups × 4 banks) |
| Burst length | BL8 | BL16, optionally BC8 |
| On-die ECC | Optional | Mandatory |
| Voltage regulation | On motherboard | On DIMM (PMIC) |
| Sub-channels per DIMM | 1 × 64-bit | 2 × 32-bit (independent) |
Example: DDR5 DIMM spec card — 16 GB, 4,800 MT/s, 2,400 MHz, 38,400 MB/s peak.
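The bandwidth figures in the table and the spec card all follow from one formula — transfers per second × bytes per transfer (64 bits per DIMM, i.e. 2 × 32-bit sub-channels):

```python
def peak_bandwidth_mb_s(transfer_rate_mt_s: int, bus_width_bits: int = 64) -> int:
    """Peak DIMM bandwidth in MB/s: MT/s × bytes moved per transfer."""
    return transfer_rate_mt_s * bus_width_bits // 8

print(peak_bandwidth_mb_s(4800))  # 38400 — matches the DDR5-4800 spec card
print(peak_bandwidth_mb_s(3200))  # 25600 — DDR4-3200's 25.6 GB/s
print(peak_bandwidth_mb_s(8800))  # 70400 — top of the DDR5 range (70.4 GB/s)
```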
See also
- dram-memory-cell — 1T1C cell, bank and die structure, sense amplifiers, on-die ECC
- dram-read-write-refresh — 31-bit address decoding; read, write, and refresh sequences
- dram-row-hits-and-latency — RAS/CAS, row hit vs. miss, latency metrics, row hammer
- dram-burst-buffer — burst buffer, BL16, 16n prefetch, cache-line alignment
- dram-subarrays — subarray structure, folded bitline, sense amplifier placement
Sources
- Branch Education — How Does Computer Memory Work?
- Micron — DDR5 SDRAM New Features White Paper
- Micron — DDR5 Datasheet
- Kingston — DDR5 Memory Standard Overview
- Wikipedia — DDR5 SDRAM
- Rambus — DDR4 vs DDR5 DIMM Design Challenges