All GPUs need VRAM to function, and the most commonly used types today are GDDR6 and HBM2e. In this post, we are going to explain how the memory bus affects the amount of VRAM installed on a graphics card, using GDDR6 and HBM2 as our examples.
If you've ever wondered why low and mid-range graphics cards don't have as much VRAM as high-end ones, there is a very simple explanation: the width of the memory bus determines the number of chips on the board. If you want a more detailed explanation, read on.
Bus Width And Number of VRAM Chips With GDDR6 Memory

GDDR6 is a dual-channel memory: each chip has a 32-bit bus, which is actually two 16-bit buses working in parallel that allow two simultaneous memory accesses. This means that every GDDR6 interface on a GPU must be at least 32 bits wide (2 x 16 bits) and that bus widths grow in 32-bit steps.
If we have a GPU with a 64-bit bus, then we will have two GDDR6 memory chips; a 128-bit bus means 4 chips, 192 bits means 6 chips, 256 bits means 8 chips, 320 bits means 10 chips, 384 bits means 12 chips, and so on.
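To make the rule concrete, here is a minimal Python sketch of the relationship just described; the 32-bit-per-chip figure comes from the GDDR6 behavior explained above, while the function name itself is purely illustrative:

```python
# Each GDDR6 chip exposes a 32-bit interface (two 16-bit channels),
# so the chip count is simply the bus width divided by 32.
GDDR6_BITS_PER_CHIP = 32

def gddr6_chip_count(bus_width_bits: int) -> int:
    if bus_width_bits % GDDR6_BITS_PER_CHIP != 0:
        raise ValueError("GDDR6 bus width must be a multiple of 32 bits")
    return bus_width_bits // GDDR6_BITS_PER_CHIP

for bus in (64, 128, 192, 256, 320, 384):
    print(f"{bus}-bit bus -> {gddr6_chip_count(bus)} GDDR6 chips")
```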

Obviously, as the memory interface widens, it occupies more of the GPU's perimeter and the chip becomes larger. So if we want to increase VRAM capacity, we have to widen the memory bus, which means more interfaces and, therefore, a larger chip periphery.

However, GDDR6 has a mode called x8, often known as clamshell mode, in which two chips share the two channels of a single 32-bit interface: one chip takes 8 bits of each channel and the other chip takes the remaining 8 bits. This technique has long been used in GDDR memory and is a way to double VRAM capacity without increasing the complexity of the memory interface, although it does not increase bandwidth either.
This mode is used in the RTX 3090, allowing the NVIDIA card to reach 24 GB of memory with 24 chips on the board without needing a 768-bit bus. And no, we have not forgotten GDDR6X: apart from its PAM4 signaling, GDDR6 and GDDR6X work exactly the same in this regard.
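As a rough sketch of how x8 (clamshell) mode doubles capacity without widening the bus, the following uses the RTX 3090 figures mentioned above (384-bit bus, 1 GB GDDR6X chips); the helper function itself is hypothetical:

```python
def vram_config(bus_width_bits: int, gb_per_chip: int, clamshell: bool) -> tuple[int, int]:
    """Return (chip_count, total_gb) for a GDDR6/GDDR6X memory setup."""
    interfaces = bus_width_bits // 32              # one 32-bit interface per chip position
    chips = interfaces * (2 if clamshell else 1)   # x8 mode hangs two chips off each interface
    return chips, chips * gb_per_chip

# RTX 3090: 384-bit bus, 1 GB GDDR6X chips, clamshell mode
chips, total_gb = vram_config(384, 1, clamshell=True)
print(chips, total_gb)  # 24 chips, 24 GB -- no 768-bit bus required
```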
Bus Width And VRAM With HBM Memory

These memories are part of a 2.5D IC configuration, with an interposer in the middle and through-silicon vias (TSVs), so they work differently and may seem a little more complicated.
First of all, we have to keep in mind that each HBM interface is 1024 bits wide, but since it communicates vertically through the interposer, it does not take up the perimeter space that a 1024-bit GDDR6 interface would. Each interface corresponds to one HBM memory stack.

Without the interposer this would not be possible, since it is the part responsible for routing the signals to the different chips in the stack: an HBM module does not consist of a single chip, but of several dies stacked on top of each other.

Standard HBM uses 4 chips per stack. To communicate with them, each 1024-bit interface is divided into 8 channels of 128 bits each, with 2 channels assigned to each chip in the stack. Currently, each memory die in an HBM stack has a capacity of 2 GB, which makes 8 GB per stack.
Of course, the HBM memory bus can also be narrowed: for example, a low-cost type of HBM memory was proposed a few years ago with a 512-bit interface and, therefore, only 2 chips per stack.
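The same arithmetic works per stack for HBM. Here is a small sketch based on the figures above (1024-bit stacks, 128-bit channels, 2 GB dies); the per-die capacity of the low-cost variant is an assumption on our part, since the proposal only fixed the interface width:

```python
HBM_CHANNEL_BITS = 128  # each HBM channel is 128 bits wide

def hbm_stack(interface_bits: int, dies: int, gb_per_die: int) -> dict:
    channels = interface_bits // HBM_CHANNEL_BITS
    return {
        "channels": channels,
        "channels_per_die": channels // dies,
        "capacity_gb": dies * gb_per_die,
    }

print(hbm_stack(1024, 4, 2))  # standard HBM2: 8 channels, 2 per die, 8 GB per stack
print(hbm_stack(512, 2, 2))   # low-cost proposal: 4 channels, 2 per die (2 GB dies assumed)
```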
The Relationship of The Memory Bus To The Rest of The Internal Components of The GPU


There is another relationship inside the GPU, between its last-level cache and the VRAM interfaces: the number of L2 cache partitions increases or decreases depending on the width of the memory interface.
The GPU's last-level cache is a client of the memory interface, but it also has clients of its own: the lower cache levels, some of which sit inside the compute units, and fixed-function blocks such as the rasterizer units, tiling units, and ROPs. The command processors also use the L2 cache.
Thus, the memory bus width determines the number of last-level cache partitions, and this in turn affects the internal configuration of the GPU.
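As a hedged illustration of that last point: in many GPU designs each 32-bit memory controller is paired with its own slice of the last-level cache, so widening the bus also adds L2 partitions. The slice size below is purely illustrative, not a figure from any real GPU:

```python
def l2_configuration(bus_width_bits: int, kib_per_slice: int = 512) -> tuple[int, int]:
    """Assume one L2 slice per 32-bit memory controller (illustrative sizing)."""
    slices = bus_width_bits // 32
    return slices, slices * kib_per_slice

slices, total_kib = l2_configuration(384)
print(f"{slices} L2 partitions, {total_kib} KiB of last-level cache")
```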