All GPUs need VRAM to function, and the most commonly used types today are GDDR6 and HBM2e. In this post we explain how the memory bus affects the amount of VRAM installed on a graphics card, using GDDR6 and HBM2 as examples.
If you've ever wondered why low- and mid-range graphics cards don't have as much VRAM as high-end ones, there is a very simple explanation: the width of the memory bus determines the number of memory chips on the board. If you want a more detailed explanation, read on.
Bus And Number of VRAM With GDDR6 Memory
GDDR6 is dual-channel memory: each chip has a 32-bit bus, which is actually two 16-bit channels working in parallel that allow two simultaneous memory accesses. This means that every GDDR6 interface on a GPU must be at least 32 bits wide (2 × 16 bits), and that bus widths grow in 32-bit steps.
If a GPU has a 64-bit bus, it carries two GDDR6 memory chips; a 128-bit bus means 4 chips, 192-bit means 6 chips, 256-bit means 8 chips, 320-bit means 10 chips, 384-bit means 12 chips, and so on.
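To make the rule concrete, here is a minimal Python sketch of the mapping just described. The per-chip capacities used in the printout (1 GB and 2 GB, i.e. 8 Gb and 16 Gb densities) are illustrative assumptions, not values tied to any specific card.

    # One GDDR6 chip per 32-bit interface, so the chip count
    # follows directly from the bus width.
    def gddr6_chip_count(bus_width_bits: int) -> int:
        if bus_width_bits % 32 != 0:
            raise ValueError("GDDR6 buses are organised in 32-bit steps")
        return bus_width_bits // 32

    for bus in (64, 128, 192, 256, 320, 384):
        chips = gddr6_chip_count(bus)
        # Capacity with assumed 1 GB chips vs assumed 2 GB chips:
        print(f"{bus:>3}-bit bus -> {chips:>2} chips -> {chips} GB or {chips * 2} GB")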
Obviously, as the memory interface widens, it takes up more of the GPU's perimeter and the chip becomes larger. So if we want to increase VRAM capacity by widening the memory bus, that means more interfaces and, therefore, a larger chip periphery.
However, GDDR6 also has a mode called x8, or clamshell, in which two chips share a single 32-bit interface: each chip connects to 8 bits of each 16-bit channel, so the two chips split every channel between them. This technique has long been used in GDDR memory and is a way to increase VRAM capacity without increasing the complexity of the memory interface, though it does not increase bandwidth either.
This mode is used on the RTX 3090 and allows the NVIDIA card to reach 24 GB of memory using 24 chips on the board, without needing a 768-bit bus to do so. And no, we have not forgotten GDDR6X: aside from its PAM4 signaling, GDDR6 and GDDR6X work exactly the same in this respect.
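As a rough sketch of that arithmetic, assuming the 1 GB-per-chip figure implied by the RTX 3090's 24 chips and 24 GB, a hypothetical helper might look like this:

    # x8 (clamshell) mode doubles the chip count without widening the bus.
    def vram_capacity_gb(bus_width_bits: int, gb_per_chip: int,
                         clamshell: bool = False) -> int:
        chips = bus_width_bits // 32      # one 32-bit interface per chip...
        if clamshell:
            chips *= 2                    # ...or per pair of chips in x8 mode
        return chips * gb_per_chip

    # RTX 3090: 384-bit bus, 1 GB chips, clamshell -> 24 chips, 24 GB
    print(vram_capacity_gb(384, 1, clamshell=True))   # 24
    # Without clamshell, 24 GB of 1 GB chips would need a 768-bit bus
    print(vram_capacity_gb(768, 1))                   # 24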
Bus Width And VRAM With HBM Memory
These memories work differently and may seem a little more complicated, because they are part of a 2.5D IC configuration, with an interposer in the middle and through-silicon vias (TSVs).
First of all, keep in mind that each HBM interface is 1024 bits wide, but since it communicates vertically through the interposer, it does not take up the perimeter space that a 1024-bit GDDR6 interface would. Each interface corresponds to one HBM memory stack.
Without the interposer, HBM memory would not be possible, since it is the part responsible for routing signals to the different chips in the stack; an HBM stack is not a single chip, but several different chips stacked on top of each other.
Standard HBM uses 4 chips per stack. To communicate with them, each 1024-bit interface is divided into 8 channels of 128 bits each, with 2 channels assigned to each chip in the stack. Currently, each memory chip in an HBM stack has a capacity of 2 GB, which gives 8 GB per stack.
Of course, the HBM memory bus can also be narrowed: for example, a low-cost type of HBM memory was proposed a few years ago with a 512-bit interface and therefore only 2 chips per stack.
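The arithmetic for both the standard stack and the low-cost variant can be sketched in a few lines of Python. The helper name and defaults are ours; the figures (128-bit channels, 2 channels per die, 2 GB per die) come from the explanation above.

    # Split the stack interface into channels, assign channels to dies,
    # and multiply dies by per-die capacity.
    def hbm_stack(interface_bits: int, gb_per_die: int = 2,
                  channel_bits: int = 128, channels_per_die: int = 2):
        channels = interface_bits // channel_bits
        dies = channels // channels_per_die
        return channels, dies, dies * gb_per_die

    print(hbm_stack(1024))  # (8, 4, 8)  -> 8 channels, 4 dies, 8 GB per stack
    print(hbm_stack(512))   # (4, 2, 4)  -> the low-cost variant: 2 dies per stack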
The Relationship of The Memory Bus To The Rest of The GPU's Internal Components
There is another relationship inside the GPU, between its last-level cache and the VRAM interfaces: the number of L2 cache partitions increases or decreases with the width of the memory bus.
The GPU's last-level cache is not only a client of the memory interface: the earlier cache levels, some of which sit inside the compute units, and fixed-function blocks such as the raster units, tessellation units, or ROPs are in turn clients of the last-level cache, and the command processors use the L2 cache as well.
Thus, the memory bus width determines the number of last-level cache partitions, and this in turn affects the internal configuration of the GPU.
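As a hedged sketch of that relationship, here is what the scaling looks like if we assume one L2 partition per 32-bit memory controller and an illustrative 512 KiB per partition; the exact partition size varies by GPU architecture and is not a value from this article.

    # L2 slices scale with the memory bus: one slice per 32-bit controller
    # (assumed), each of an assumed fixed size.
    def l2_configuration(bus_width_bits: int, kib_per_partition: int = 512):
        partitions = bus_width_bits // 32
        return partitions, partitions * kib_per_partition  # (slices, total KiB)

    for bus in (128, 256, 384):
        slices, total_kib = l2_configuration(bus)
        print(f"{bus}-bit bus -> {slices} L2 partitions -> {total_kib // 1024} MiB L2")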
1 comment
This highly polished article does just about nothing to explain why I have an empty slot on my graphics card!
I'll just point this out, regarding my theory: I have yet to see a graphics card specify its memory bus width as anything other than the number of memory chips multiplied by 32 bits each.
An acute example: the GTX 1080 Ti has a memory 'bus', or bus width, of (11 × 32-bit chips) 352 bits.
– The Titan X, though, has a memory 'bus' of (12 × 32-bit chips) 384 bits.
You know what I DON'T think?! I don't think NVIDIA manufactures specific bus widths. I'm not even convinced that they restrict them in software, as such metrics are easier, and better solved, when calculated.
You know what I think? I think that, if I added a RAM chip to the 1080, it WOULD have 384 bits, and the rest is marketing bluff.
Did you know the Titan X has, essentially, the same board design as the 1080 Ti?
The major difference? The 1080 Ti has AN EMPTY RAM SLOT!