Monday, April 8, 2013

Hybrid Memory Cube spec makes DRAM 15 times faster

The three largest memory makers - Micron, Samsung and Hynix - have announced the final specification for three-dimensional DRAM, aimed at boosting performance in the networking and high-performance computing markets. The technology, called the Hybrid Memory Cube (HMC), stacks multiple volatile memory dies on top of a DRAM controller. The DRAM is connected to the controller using the relatively new through-silicon via (TSV) technology, a method of passing an electrical connection vertically through a silicon wafer.
The logic portion of the DRAM functionality is moved into a logic chip that sits at the base of the 3D stack. Building that layer on a logic process lets it use higher-performance transistors ... not only to interact up through the DRAM stacked above it, but to do so in a high-performance, efficient manner across a channel to a host processor. This logic layer serves both as the host interface connection and as the memory controller for the DRAM sitting on top of it. The DRAM is broken into 16 partitions, each with two I/O channels back to the controller. Each Hybrid Memory Cube - there are two prototypes - exposes either 128 or 256 memory banks to the host system.
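To make that organization concrete, here is a small Python sketch of the layout described above. The class and field names are my own shorthand for illustration, not identifiers from the HMC specification.

```python
# Illustrative sketch only: a toy model of the HMC organization described
# above (16 partitions, two I/O channels each, 128 or 256 total banks).
from dataclasses import dataclass

@dataclass
class HmcConfig:
    capacity_gb: int              # 2GB or 4GB prototype
    partitions: int = 16          # DRAM partitions in the stack
    channels_per_partition: int = 2
    total_banks: int = 128        # 128 (2GB) or 256 (4GB)

    @property
    def banks_per_partition(self) -> int:
        return self.total_banks // self.partitions

    @property
    def io_channels(self) -> int:
        return self.partitions * self.channels_per_partition

for cfg in (HmcConfig(2, total_banks=128), HmcConfig(4, total_banks=256)):
    print(f"{cfg.capacity_gb}GB cube: {cfg.io_channels} I/O channels, "
          f"{cfg.banks_per_partition} banks per partition")
```

Run it and the 2GB cube works out to 32 channels with 8 banks per partition, the 4GB cube to 32 channels with 16 banks per partition.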
The first Hybrid Memory Cube specification will deliver 2GB and 4GB capacities, providing aggregate bi-directional bandwidth of up to 160GBps, compared with DDR3's 11GBps of aggregate bandwidth and DDR4's projected 18GBps to 20GBps.
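A quick sanity check on those figures, using only the numbers quoted above, shows where the headline speedup comes from:

```python
# Back-of-the-envelope comparison of the aggregate bandwidth figures above.
hmc_gbps = 160          # first-generation HMC
ddr3_gbps = 11          # DDR3 aggregate
ddr4_gbps = (18, 20)    # projected DDR4 range

print(f"HMC vs DDR3: {hmc_gbps / ddr3_gbps:.1f}x")          # ~14.5x
print(f"HMC vs DDR4: {hmc_gbps / ddr4_gbps[1]:.1f}x to "
      f"{hmc_gbps / ddr4_gbps[0]:.1f}x")                    # ~8.0x to ~8.9x
```

The roughly 14.5x advantage over DDR3 is where the "15 times faster" claim comes from.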
The Hybrid Memory Cube technology solves some significant memory issues. Today's DRAM chips are burdened with driving circuit board traces - copper electrical connections - and the I/O pins of numerous other chips in order to push data down the bus at gigahertz speeds, which consumes a lot of energy. HMC reduces this burden: the DRAM dies drive only tiny TSVs, which present much lower loads over far shorter distances. Only the logic chip at the bottom of the stack drives the circuit board traces and the processor's I/O pins.
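To see why that matters, here is a rough illustration of the switching-energy gap. The capacitance and voltage figures below are generic ballpark assumptions for the sake of the example, not values from the HMC spec.

```python
# Rough, illustrative estimate of why driving a short TSV is cheaper than
# driving a board trace. All load and voltage values are assumptions.
def switching_energy_pj(capacitance_f: float, voltage_v: float) -> float:
    """Energy (picojoules) to charge a load C to voltage V: E = C * V^2."""
    return capacitance_f * voltage_v ** 2 * 1e12

trace_c = 10e-12   # assumed: long board trace plus pin loads, ~10 pF
tsv_c = 50e-15     # assumed: short TSV, ~50 fF

print(f"board trace: {switching_energy_pj(trace_c, 1.5):.2f} pJ/transition")
print(f"TSV:         {switching_energy_pj(tsv_c, 1.2):.3f} pJ/transition")
```

Even with these crude numbers, the TSV comes out orders of magnitude cheaper per transition, which is the effect the design exploits.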
The interface is 15 times as fast as standard DRAM while reducing power by 70 percent. The beauty of it is that it sidesteps the issues that have kept DDR3 and DDR4 from going faster. For example, instead of the one to four DIMMs typically found on a motherboard, you would need only one Hybrid Memory Cube, cutting down the number of interfaces to the CPU.
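Taken together, those two numbers imply a large drop in energy per bit, as this quick calculation from the quoted figures shows:

```python
# Energy-per-bit implication of "15x the speed at 70% less power",
# derived only from the two figures quoted above.
speedup = 15          # bandwidth relative to standard DRAM
power_factor = 0.30   # 70% power reduction => 30% of original power

# energy per bit = power / bandwidth, so the relative figure is:
energy_per_bit = power_factor / speedup
print(f"energy per bit: {energy_per_bit:.3f}x of standard DRAM "
      f"(a {1 / energy_per_bit:.0f}x reduction)")   # 0.020x, ~50x less
```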
The HMC specification defines two physical interfaces back to the host processor: a short reach and an ultra-short reach.
The short reach resembles most motherboard designs today, where the DRAM sits within eight to 10 inches of the CPU. It is aimed mainly at networking applications, with the goal of boosting per-lane throughput from 15Gbps to as much as 28Gbps in a four-lane configuration.
The ultra-short reach definition is focused on low-energy, close-proximity memory designs supporting FPGAs, ASICs and ASSPs in applications such as high-performance networking and test and measurement. It uses a one- to three-inch channel back to the CPU, with a throughput goal of 15Gbps per lane.
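To put those lane figures in perspective, here is a minimal conversion from the per-lane rates above into bytes-per-second link throughput. It ignores encoding and protocol overhead, which the spec announcement does not address, and only the short reach's four-lane width is given in the text.

```python
# Raw one-direction throughput implied by the lane rates quoted above.
def to_gbytes_per_sec(lanes: int, gbps_per_lane: float) -> float:
    """Convert lanes x gigabits/s per lane into gigabytes/s."""
    return lanes * gbps_per_lane / 8

print(f"short reach, 4 lanes @15Gbps: {to_gbytes_per_sec(4, 15):.1f} GBps")
print(f"short reach, 4 lanes @28Gbps: {to_gbytes_per_sec(4, 28):.1f} GBps")
print(f"ultra-short reach, per lane @15Gbps: "
      f"{to_gbytes_per_sec(1, 15):.2f} GBps")
```

A single four-lane short-reach link thus tops out around 14GBps per direction at 28Gbps per lane, so reaching the cube's 160GBps aggregate figure takes multiple links.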
