Video RAM: What’s the difference between the types available today?

Some Samsung VRAM
All graphics cards need both a GPU and VRAM to function
properly. While the GPU (Graphics Processing Unit)
does the actual processing of data to output images on your monitor, the data
it processes is stored in, and accessed from, the chips of VRAM (Video Random Access Memory)
surrounding it.
Outputting high-resolution graphics at a quick rate requires
both a beefy GPU and a large quantity of high-bandwidth VRAM working in tandem.
For most of the past decade, VRAM design was fairly stagnant, focused on
drawing more power to achieve higher VRAM clock speeds.
But the power consumption of that approach was beginning to
impinge on the power budget of newer GPU designs. In addition to potentially
bottlenecking GPU improvements, the standard form of VRAM (known as
GDDR5) was also dictating, and enlarging, the form factor (i.e. the physical
size) of graphics cards.
Chips of GDDR5 VRAM have to be attached directly to the card in
a single layer, which means that adding more VRAM involves spreading out
horizontally on the graphics card. And moving beyond a tight circle of VRAM
around the GPU means increasing the travel distance for the transfer process as
well.
With these concerns in mind, new forms of VRAM began to be
developed, such as HBM and GDDR5X, which have finally surfaced in the past
couple of years. They are explained below as simply as possible.
HBM vs. GDDR5:
If you want the differences between these two varieties of VRAM
summed up in two simple sentences, here they are:
GDDR5 (SGRAM Double Data Rate
Type 5) has been the industry
standard form of VRAM for the better part of a decade, and is capable of
achieving high clock speeds at the expense of space on the card and power
consumption.
HBM (High Bandwidth Memory)
is a new kind of VRAM that uses less power, can be stacked to increase memory
while taking up less space on the card, and has a wide bus to allow for higher
bandwidth at a lower clock speed. (HBM was developed by AMD and Hynix, and
standardized by JEDEC—as was HBM2.)
Here is a per-package (one stack of HBM vs. one chip of GDDR5)
comparison:[i]
|                     | HBM (1 stack)                   | GDDR5 (1 chip)                    |
|---------------------|---------------------------------|-----------------------------------|
| Higher Bandwidth    | ✓ (~100 GB/s)                   | – (~28 GB/s)                      |
| Smaller Form Factor | ✓ (Stackable, Integrated)       | – (Single-layer)                  |
| Higher Clock Speed  | – (~1 Gb/s)                     | ✓ (~7 Gb/s)                       |
| Lower Voltage       | ✓ (~1.3 V)                      | – (~1.5 V)                        |
| Widely Available    | – (New, Needs Redesigned Cards) | ✓ (Old, Cards Designed Alongside) |
| Less Expensive      | – (New, Needs Redesigned Cards) | ✓ (Old, Cards Designed Alongside) |
Again, don’t be fooled by the ✓ that GDDR5 received
there for having a higher clock speed; HBM, with its wide bus, still boasts a
higher overall bandwidth per Watt (according to AMD, over three times as much
bandwidth per Watt). The lower clock speed is, in fact, part of how HBM attains its energy
savings.

A diagram of HBM’s stacked
design, by ScotXW
The idea here is that GDDR5, with its narrow channel, keeps
being pushed to higher and higher clock speeds in order to achieve the
performance that is currently expected out of VRAM. This is very costly from a
power perspective. HBM, on the other hand, moves at a lower rate across a wide
bus.
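The bandwidth figures in the comparison above fall out of one simple relation: peak bandwidth equals per-pin data rate times bus width, divided by 8 bits per byte. Here is a minimal sketch of that arithmetic, assuming the commonly cited interface widths (32 bits per GDDR5 chip, 1024 bits per HBM stack):

```python
def bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate (Gb/s) times bus
    width (bits), divided by 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# GDDR5: a narrow 32-bit chip interface pushed to ~7 Gb/s per pin
gddr5_chip = bandwidth_gbs(7, 32)     # 28.0 GB/s per chip

# HBM: only ~1 Gb/s per pin, but across a very wide 1024-bit stack interface
hbm_stack = bandwidth_gbs(1, 1024)    # 128.0 GB/s per stack
```

At its rated figures, the 1024-bit interface yields 128 GB/s peak per stack; the ~100 GB/s in the table is a rounder, more conservative number, but the point stands either way: width, not clock speed, is where HBM gets its bandwidth.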
With the huge gains in GPU processing power and the increasing
consumer appetite for high-resolution gaming (a higher resolution means more
visible detail, which means more data, which requires VRAM that is both higher
capacity and higher speed), it seemed inevitable that most cards, starting at
the top-end and moving down, would be re-designed to feature a version of HBM
(such as the already-developed HBM2, or otherwise) in the future. But then,
last year, yet another new standard of VRAM came about which called that into
question.
GDDR5 vs. GDDR5X:
You may have seen some news in the past year or so regarding a
form of VRAM called GDDR5X, and wondered exactly what this might be. For
starters, here’s a simple-sentence-summary like the one offered for HBM and
GDDR5 above:
GDDR5X (SGRAM Double Data Rate Type 5X)
is a new version of GDDR5, which has the same low- and high-speed modes at
which GDDR5 operates, but also an additional third tier of even higher speed
with reportedly twice the data rate of high-speed GDDR5. (GDDR5X was
standardized by JEDEC.)
Here is a per-package (one chip of GDDR5X vs. one chip of GDDR5)
comparison:[ii]
|                     | GDDR5X (1 chip)                 | GDDR5 (1 chip)                    |
|---------------------|---------------------------------|-----------------------------------|
| Higher Bandwidth    | ✓ (~56 GB/s)                    | – (~28 GB/s)                      |
| Smaller Form Factor | Tie (Single-layer)              | Tie (Single-layer)                |
| Higher Clock Speed  | ✓ (~14 Gb/s)                    | – (~7 Gb/s)                       |
| Lower Voltage       | ✓ (~1.35 V)                     | – (~1.5 V)                        |
| Widely Available    | – (New, Only in High-end Cards) | ✓ (Old, Cards Designed Alongside) |
| Less Expensive      | – (New, Only in High-end Cards) | ✓ (Old, Cards Designed Alongside) |
So, you might be wondering, if a chip of GDDR5X is still
operating at just around 60% of the overall bandwidth of a stack of HBM while
not even quite making the same power savings or space savings, then why is it a
big deal? Isn’t it still just immediately made obsolete by HBM? Well, the
answer is no, for two reasons.
The first thing to notice is that it’s not a perfect comparison.
After all, one chip is just one chip, whereas a stack has the advantage of
holding multiple chip-equivalents. Just because they take up the same real
estate on the card, that doesn’t mean they hold the same amount of memory. So,
in theory, a GDDR5X array with the same capacity as an HBM array would come
much closer in overall bandwidth (perhaps just over 10% slower than the HBM
system, as estimated by AnandTech).
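That estimate can be sanity-checked with rough arithmetic. The stack and chip counts below are illustrative assumptions (a Fury-X-like four-stack HBM layout versus an eight-chip GDDR5X array of comparable capacity), not figures from any particular card:

```python
# Hypothetical VRAM arrays of comparable capacity
# (stack/chip counts are assumptions for illustration)
hbm_total = 4 * 128        # four HBM stacks at ~128 GB/s each -> 512 GB/s
gddr5x_total = 8 * 56      # eight GDDR5X chips at ~56 GB/s each -> 448 GB/s

shortfall = 1 - gddr5x_total / hbm_total   # 0.125, i.e. ~12.5% slower
```

which lands right around the "just over 10% slower" ballpark quoted above.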
And yes, that’s still lower, but there are further advantages to
GDDR5X when you consider the development side of things. HBM being an entirely
new form of VRAM means that chip developers will need to redesign their
products with new memory controllers. GDDR5X has enough similarities to GDDR5
to make it a much easier and less expensive proposition to implement it. For
this reason, even if HBM, HBM2, and other HBM-like solutions win out in the
long run, GDDR5X is likely to see a wider roll-out than HBM in the short run
(and possibly at a lower cost to the consumer).
Which Graphics Cards Use Which VRAM:
Now that you’ve heard about these exciting new developments in
VRAM design, you might be wondering what sort of VRAM lies within your card, or
else where you can get your hands on some of this new technology.
A Founders Edition GTX 1070
Well, for the time being, most of the cards that are available,
from the low-end through the mid-range and into the lower high-end (currently
including every card from our Minimum tier to our Exceptional tier builds)
still feature GDDR5 VRAM. Popular cards in this year’s builds, from the RX 480 to the GTX 1060 to the GTX 1070, all feature this fairly standard
variety of high-clock-speed, relatively-space-inefficient,
relatively-energy-inefficient VRAM.
NVIDIA’s highest tier of cards, including the GTX 1080 and the Titan X, currently features GDDR5X. It
seems likely (but not guaranteed) that NVIDIA will continue to make use of
GDDR5 and GDDR5X in the near future, simply because that is their current trend
and the design implementation is less costly.
AMD, meanwhile, has rolled out HBM in some of their high-end
cards, including the R9 Fury X and the Pro Duo. Don’t be surprised if you see
smaller form factor cards sporting HBM from AMD in the future. Perhaps using
HBM and related innovations will be the avenue through which AMD finally breaks
free of their reputation for making cards with comparable performance, but
worse thermals and power consumption, compared to NVIDIA.
What about GDDR6?
Micron has been teasing yet another new memory technology for
over a year now: GDDR6. Their current plan is to have GDDR6 on the market in or
before 2018 (though their earlier estimates were closer to 2020). And, while
info on it is scarce, they are now claiming that it will provide 16 Gb/s per
pin (meaning somewhere in the neighborhood of 64 GB/s of overall bandwidth per
chip—compared to 56 GB/s per chip of GDDR5X and 100 GB/s per stack of HBM).
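Micron’s ~64 GB/s projection is the same per-pin arithmetic again, assuming GDDR6 keeps the familiar 32-bit-per-chip GDDR interface (an assumption, since the standard’s details are not yet public):

```python
def per_chip_gbs(pin_rate_gbps: float, bus_width_bits: int = 32) -> float:
    """Peak per-chip bandwidth in GB/s, assuming a GDDR-style
    32-bit chip interface (8 bits per byte)."""
    return pin_rate_gbps * bus_width_bits / 8

gddr5 = per_chip_gbs(7)    # 28.0 GB/s per chip
gddr5x = per_chip_gbs(14)  # 56.0 GB/s per chip
gddr6 = per_chip_gbs(16)   # 64.0 GB/s per chip (claimed)
```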
Is GDDR6 likely to start showing its face in high-end cards over
a year from now? Yes, it is.
It’s a GDDR solution, which means—like GDDR5X—it will be less
costly for manufacturers to implement than HBM.
Does that mean you should shelve your planned build until it
shows up? Absolutely not.
Three reasons: (1) at the claimed speed of GDDR6, it still has a
significantly lower overall bandwidth and likely lower power savings than HBM,
let alone HBM2; (2) at the claimed speed of GDDR6, it is less than 15% faster
than GDDR5X, which is unlikely to be noticeable to the user; and (3) there is
no guarantee that this new standard will be released by Micron on schedule, nor
that it will live up to its claimed figures (ancient wisdom you should always
heed: benchmarks before buying).
Conclusion:
So, would I say you should pick your card based on its VRAM
type? In the current market situation, I would say probably not. Frankly, there
just aren’t enough cards out there with HBM or GDDR5X to put together proper
apples-to-apples benchmark comparisons. But this information definitely helps
to illustrate something that we here at Logical Increments are all about: a
well-balanced build is crucial.
Consider: a high amount of VRAM (and VRAM that performs at a
high level) is going to be most important in set-ups that run at a high
resolution. And if you’re already balancing your build well—by following our
guides, for instance—then you are not likely to end up in
a situation where you buy a 4K monitor (such as the grandiose Dell Ultrasharp 4K 31.5” LCD Monitor)
and pair it with a low-end graphics card (like the respectable yet modest RX 460).
And for those of us who are mid-range builders, don’t despair.
As with any new technology in the computer world, what is currently rare and
expensive will likely become both commonplace and affordable in the future.

