
Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster access to other pages when the next data request arrives. There’s also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.
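The counter-based policy described above can be sketched as a simple model. This is purely illustrative (not actual memory-controller logic): the `simulate` function, the row names and the request stream are all made up for demonstration, but the hit/miss/conflict accounting follows the definitions given above.

```python
def simulate(requests, idle_limit):
    """Count page hits, misses and conflicts for one bank.

    `requests` is a list of (cycle, row) tuples in increasing cycle order.
    `idle_limit` is the number of idle cycles after which the open page
    is closed and the bank precharged (None models the Infinite setting).
    """
    open_row = None
    last_access = None
    hits = misses = conflicts = 0
    for cycle, row in requests:
        # Close the page if it has idled past the limit.
        if (open_row is not None and idle_limit is not None
                and cycle - last_access > idle_limit):
            open_row = None  # page closed, bank precharged during idle time
        if open_row == row:
            hits += 1        # page hit: data served from the open page
        elif open_row is None:
            misses += 1      # page miss: no open page, just activate the row
        else:
            conflicts += 1   # page conflict: close, precharge, then activate
        open_row = row
        last_access = cycle
    return hits, misses, conflicts

# A mostly-sequential stream with one long idle gap:
stream = [(0, 'A'), (5, 'A'), (40, 'A'), (45, 'B'), (50, 'B')]
print(simulate(stream, 16))    # 16T: the idle gap closes row A early
print(simulate(stream, None))  # Infinite: row A stays open, one more hit
```

Running the same stream under both settings shows the trade-off directly: the 16T limit turns one page hit into a page miss, while the Infinite setting keeps the hit but would leave the page consuming the bank during the idle period.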


This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages opened longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the precharging of the memory bank before the set refresh cycle, not to actually delay the refresh cycle.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are closed and the bank precharged as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (satisfied immediately) into a page miss, which has to wait for the bank to precharge and the same page to be reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.
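A back-of-the-envelope calculation supports this. The figures below are typical, assumed values (only the 64 ms retention period is mentioned in the text): with 8192 rows to refresh per retention period and an assumed refresh stall of around 75 ns per refresh command, the bandwidth lost to refresh works out to well under one percent.

```python
RETENTION_MS = 64   # full-array refresh period, as quoted in the text
ROWS = 8192         # refresh commands per retention period (typical, assumed)
TRFC_NS = 75        # time one refresh occupies the bank (assumed)

# Average interval between refresh commands (tREFI):
trefi_us = RETENTION_MS * 1000 / ROWS

# Fraction of total time the bank spends refreshing:
overhead = ROWS * TRFC_NS / (RETENTION_MS * 1e6)

print(f"tREFI ~ {trefi_us:.1f} us, refresh overhead ~ {overhead:.1%}")
```

Under these assumptions each bank is busy refreshing for roughly 1% of the time, which is why masking refreshes during idle cycles yields little visible benefit.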

With a 0T setting, data requests are also likely to get stalled because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the spatial locality concept, where if one data bit is read, chances are high that the next data bit will also need to be read soon. That’s why closing open pages prematurely with this BIOS feature will most likely reduce performance in desktop applications.

On the other hand, using a 0T or 16T idle cycle limit will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often also ensures that the bank can be refreshed on schedule, even in the event of a very long series of reads.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.


This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster access to other pages when the next data request arrives. There’s also the added benefit of increased data integrity due to more frequent refreshes.


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

DRAM Termination – The BIOS Optimization Guide

DRAM Termination

Common Options : 50 Ohms, 75 Ohms, 150 Ohms (DDR2) / 40 Ohms, 60 Ohms, 120 Ohms (DDR3)

 

Quick Review of DRAM Termination

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal and higher power consumption.

The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory :

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.
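The guidelines above can be condensed into a small helper. This is a hypothetical function written for illustration; the DDR2 values encode the Samsung case study quoted above, while the DDR3 value is simply the "start low, adjust upwards" suggestion, not a measured recommendation.

```python
def suggested_odt(ddr_gen, modules_per_channel, speed_grade=None):
    """Return a starting on-die termination value in ohms."""
    if ddr_gen == 'DDR2':
        if modules_per_channel == 1:
            return 150
        # Two modules per channel: faster grades want lower impedance.
        return 50 if speed_grade in ('DDR2-667', 'DDR2-800') else 75
    if ddr_gen == 'DDR3':
        return 40  # start low, raise it if the system is unstable
    raise ValueError('unsupported memory type')

print(suggested_odt('DDR2', 1))              # single module per channel
print(suggested_odt('DDR2', 2, 'DDR2-800'))  # two fast modules per channel
```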

 

Details of DRAM Termination

Like a ball thrown against a wall, electrical signals reflect (bounce) back when they reach the end of a transmission path. They also reflect at points where there is a change in impedance, e.g. at connections to DRAM devices or a bus. These reflected signals are undesirable because they distort the actual signal, degrading signal quality and the integrity of the data being transmitted.

Prior to the introduction of DDR2 memory, motherboard designers used line termination resistors at the end of the DRAM signal lines to reduce signal reflections. However, these resistors are only partially effective because they cannot reduce reflections generated by the stub lines that lead to the individual DRAM chips on the memory module (see illustration below). Even so, this method worked well enough with the lower operating frequency and higher signal voltages of SDRAM and DDR SDRAM modules.

Line termination resistors on the motherboard (Courtesy of Rambus)

However, the higher speeds (and lower signal voltages) of DDR2 and DDR3 memory require much better signal quality, and these high-speed modules have much lower tolerances for noise. The problem is also compounded by the higher number of memory modules used. Line termination resistors are no longer good enough to tackle the problem of signal reflections. This is where On-Die Termination (ODT) comes in.

On-Die Termination shifts the termination resistors from the motherboard to the DRAM die itself. These resistors can better suppress signal reflections, providing a much better signal-to-noise ratio in DDR2 and DDR3 memory. This allows for much higher clock speeds at much lower voltages.

It also reduces the cost of motherboard designs. In addition, the impedance value of the termination resistors can be adjusted, or even turned off via the memory module’s Extended Mode Register Set (EMRS).

On-die termination (Courtesy of Rambus)

Unlike the termination resistors on the motherboard, the on-die termination resistors can be turned on and off as required. For example, when a DIMM is inactive, its on-die termination resistors turn on to prevent signals from the memory controller reflecting to the active DIMMs. The impedance value of the resistors is usually programmed by the BIOS at boot-time, so the memory controller only turns them on or off (unless the system includes a self-calibration circuit).

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal, and higher power consumption.
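The effect of the termination value can be sketched with the standard transmission-line reflection formula: the fraction of a signal reflected at a termination of impedance Zt on a trace of characteristic impedance Z0 is (Zt − Z0) / (Zt + Z0). The 60-ohm trace impedance below is an assumption for illustration; real DDR routing varies by board.

```python
Z0 = 60  # assumed characteristic impedance of the trace, in ohms

def reflection(zt, z0=Z0):
    """Magnitude of the reflection coefficient for a termination of zt ohms."""
    return abs((zt - z0) / (zt + z0))

# DDR3 on-die termination options from this article:
for zt in (40, 60, 120):
    print(f"{zt:>3} ohms -> |gamma| = {reflection(zt):.2f}")
```

The closer the termination is to the trace impedance, the smaller the reflection, which is why lower ODT settings absorb reflections better. The flip side, as noted above, is that a lower-impedance termination sinks more current, reducing the signal's voltage swing and increasing power consumption.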


The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory :

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.


 


Synchronous Mode Select – The BIOS Optimization Guide

Synchronous Mode Select

Common Options : Synchronous, Asynchronous

 

Quick Review

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.

 

Details

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.


However, the Asynchronous mode does have its uses. Users of multiplier-locked processors and slow memory modules may find that using the Asynchronous mode allows them to overclock the processor much higher without the need to buy faster memory modules.

The Asynchronous mode is also useful for those who have very fast memory modules and multiplier-locked processors with low bus speeds. Running the fast memory modules synchronously with the low CPU bus speed would force the memory modules to run at the same slow speed. Running asynchronously will therefore allow the memory modules to run at a much higher speed than the CPU bus.
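The trade-off can be illustrated with a toy latency model. All numbers here are illustrative assumptions, not measurements: an asynchronous interface pays a resynchronisation penalty each time data crosses between the two clock domains, which can cancel out the benefit of a higher memory clock.

```python
RESYNC_NS = 5.0  # assumed clock-domain-crossing penalty per access (illustrative)

def access_time_ns(mem_mhz, cas_cycles, synchronous):
    """CAS latency in nanoseconds, plus a resync penalty when asynchronous."""
    t = cas_cycles * 1000.0 / mem_mhz
    return t if synchronous else t + RESYNC_NS

# Synchronous at 200 MHz vs asynchronous memory overclocked to 250 MHz:
print(access_time_ns(200, 3, synchronous=True))   # -> 15.0 ns
print(access_time_ns(250, 3, synchronous=False))  # -> 17.0 ns
```

In this sketch the asynchronous configuration is slower despite its 25% higher memory clock, which is the point made below: benchmark both configurations rather than assuming the higher clock wins.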

But please note that the performance gains of running synchronously should not be underestimated. Synchronous operations are generally much faster than asynchronous operations running at a higher clock speed. It is advisable that you compare benchmark scores of your computer running asynchronously (at a higher clock speed) and synchronously to determine the best option for your system.

 
