Tag Archives: DDR3 SDRAM

Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether it's regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow the module to transfer 64 bits of data with every transfer.
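To put some numbers on that, here is a quick back-of-the-envelope sketch (ours, not from the original guide) of the peak bandwidth a 64-bit data bus provides at a few common transfer rates; the module names and rates are just illustrative examples.

  #include <stdio.h>

  /* Peak bandwidth of a 64-bit (8-byte) DIMM data bus:
   * bandwidth (MB/s) = bus width in bytes x transfers per second (MT/s).
   * The transfer rates below are common examples, not an exhaustive list. */
  int main(void)
  {
      const int bus_width_bytes = 64 / 8;
      const struct { const char *name; int mt_per_s; } modules[] = {
          { "SDRAM PC133", 133 },   /* 1 transfer per clock  */
          { "DDR-400",     400 },   /* 2 transfers per clock */
          { "DDR2-800",    800 },
      };

      for (unsigned i = 0; i < sizeof(modules) / sizeof(modules[0]); i++)
          printf("%-12s : %d MT/s x %d bytes = %d MB/s\n",
                 modules[i].name, modules[i].mt_per_s,
                 bus_width_bytes, modules[i].mt_per_s * bus_width_bytes);
      return 0;
  }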

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.
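The relationship above is simple arithmetic: a 64-bit DIMM built from wider chips needs fewer chips, and therefore fewer data strobes. A minimal sketch of that calculation (ours, not part of the guide):

  #include <stdio.h>

  /* Number of DQS lines on a 64-bit DIMM, following the relationship described
   * above: one DQS line per DRAM chip, with 64 / (chip width) chips per rank. */
  static int dqs_lines(int chip_width)
  {
      return 64 / chip_width;
  }

  int main(void)
  {
      int widths[] = { 4, 8, 16 };
      for (int i = 0; i < 3; i++)
          printf("x%-2d DRAM chips -> %2d DQS lines\n",
                 widths[i], dqs_lines(widths[i]));
      return 0;
  }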

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then the DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level, with a momentary rise in the relative ground voltage.

This reduces the quality of the signal, which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins can give the signal a higher voltage swing, improving the signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All this increases signal noise, which ironically negates the increased signal strength provided by a higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people's list of priorities. In certain cases, it actually allows you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.
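Taken together, the guidelines above amount to a simple decision rule. The sketch below is one way to express it; the function, parameter names and revision strings are our own shorthand, not anything an actual BIOS exposes, and the specific reduction chosen for a single module is just a starting point.

  #include <stdio.h>
  #include <string.h>

  /* Illustrative mapping of the guidelines above; the setting names mirror the
   * BIOS options, everything else (function and parameter names) is made up. */
  typedef enum { NOT_REDUCED, REDUCED_15, REDUCED_30, REDUCED_50 } dq_strength_t;

  static dq_strength_t suggest_dq_strength(const char *cpu_rev, int dimm_count,
                                           int tccd_modules)
  {
      /* AMD guidance: CG or D revision Athlon 64 / Opteron -> never reduce. */
      if (strcmp(cpu_rev, "CG") == 0 || strcmp(cpu_rev, "D") == 0)
          return NOT_REDUCED;

      /* Revision E with Samsung TCCD-based modules -> reduce by 50%. */
      if (strcmp(cpu_rev, "E") == 0 && tccd_modules)
          return REDUCED_50;

      /* Single module: reducing drive strength can improve signal quality.
       * 15% is a conservative starting point; the exact amount is trial and error. */
      if (dimm_count == 1)
          return REDUCED_15;

      /* Multiple modules (heavier DRAM load): keep full drive strength. */
      return NOT_REDUCED;
  }

  int main(void)
  {
      const char *names[] = { "Not Reduced", "Reduced 15%",
                              "Reduced 30%", "Reduced 50%" };
      printf("Rev E, 1 DIMM, TCCD : %s\n",
             names[suggest_dq_strength("E", 1, 1)]);
      printf("Rev CG, 2 DIMMs     : %s\n",
             names[suggest_dq_strength("CG", 2, 0)]);
      return 0;
  }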

 



DCLK Feedback Delay – The Tech ARP BIOS Guide

DCLK Feedback Delay

Common Options : 0 ps, 150 ps, 300 ps, 450 ps, 600 ps, 750 ps, 900 ps, 1050 ps

 

Quick Review of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

By comparing the waveforms of DCLK and its feedback signal, the memory controller can determine whether both clocks are in the same phase. If they are not, data may be lost, resulting in system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it is highly recommended that you leave the delay at 0 ps. There is no performance advantage in increasing or reducing the amount of feedback delay.

 

Details of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

This feedback signal is used by the SDRAM controller to determine when it can write data to the SDRAM module. The main idea of this system is to ensure that both clock phases are properly aligned for the proper delivery of data.

By comparing the waveforms of DCLK and its feedback signal, the memory controller can determine whether both clocks are in the same phase. If they are not, data may be lost, resulting in system instability.


The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.
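As a rough illustration of that trial-and-error procedure, the sketch below steps through the 150 ps increments offered by this BIOS feature; system_is_stable() is a placeholder for whatever stress test you would actually run after each change, and the value it "needs" is invented.

  #include <stdbool.h>
  #include <stdio.h>

  /* Placeholder for a real stability test (e.g. a memory stress run after
   * rebooting with the new setting). Here it just pretends 450 ps is needed. */
  static bool system_is_stable(int delay_ps)
  {
      return delay_ps >= 450;   /* purely illustrative */
  }

  int main(void)
  {
      /* The BIOS exposes the feedback delay in 150 ps steps from 0 to 1050 ps. */
      for (int delay_ps = 0; delay_ps <= 1050; delay_ps += 150) {
          printf("Testing DCLK feedback delay = %4d ps ... ", delay_ps);
          if (system_is_stable(delay_ps)) {
              printf("stable, keep this setting\n");
              return 0;
          }
          printf("unstable, increase delay\n");
      }
      printf("No stable setting found; revisit other memory settings\n");
      return 0;
  }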

However, if you are not experiencing any stability issues, it is highly recommended that you leave the delay at 0 ps. There is no performance advantage in increasing or reducing the amount of feedback delay.



Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles that are allowed before the memory controller forces open pages to close and the bank to precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long, as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T, as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There is also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is open, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.
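As an illustration of such a counter, the toy model below (ours, not the actual controller logic) tracks a single bank, closes the open page after a fixed number of idle cycles, and classifies each access as a page hit, page miss or page conflict. The row numbers, access pattern and idle gap are all made up.

  #include <stdio.h>

  /* Toy model of one DRAM bank with an idle cycle counter, illustrating the
   * policy described above. */
  #define IDLE_CYCLE_LIMIT 16
  #define IDLE_GAP          4   /* idle cycles between requests; a gap above 16
                                   would turn every access below into a page miss */

  static int open_row = -1;     /* -1 means no open page (bank precharged) */
  static int idle_cycles, hits, misses, conflicts;

  static void access_row(int row)
  {
      if (open_row == row)     hits++;        /* page hit      */
      else if (open_row == -1) misses++;      /* page miss     */
      else                     conflicts++;   /* page conflict */
      open_row = row;                         /* row becomes the open page */
      idle_cycles = 0;
  }

  static void idle_cycle(void)
  {
      if (open_row != -1 && ++idle_cycles >= IDLE_CYCLE_LIMIT)
          open_row = -1;        /* close the page and precharge the bank */
  }

  int main(void)
  {
      int pattern[] = { 3, 3, 7, 0, 0, 0, 7 };
      int n = (int)(sizeof(pattern) / sizeof(pattern[0]));

      for (int i = 0; i < n; i++) {
          access_row(pattern[i]);
          for (int c = 0; c < IDLE_GAP; c++)
              idle_cycle();
      }
      printf("hits=%d misses=%d conflicts=%d\n", hits, misses, conflicts);
      return 0;
  }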


This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles that are allowed before the memory controller forces open pages to close and the bank to precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages open longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the precharging of the memory bank before the set refresh cycle, not to actually delay the refresh cycle.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are closed and the bank precharged as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (satisfied immediately) into a page miss, which will have to wait for the bank to precharge and the same page to be reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.
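The arithmetic behind that claim, assuming the common 64 ms / 8192-row refresh requirement and a refresh cycle time (tRFC) of roughly 75 ns (the exact value depends on the chip density):

  #include <stdio.h>

  /* Rough arithmetic behind "refreshes do not occur that often". Assumes the
   * common 64 ms / 8192-row refresh requirement and a tRFC of about 75 ns. */
  int main(void)
  {
      const double refresh_window_ms = 64.0;
      const double rows_per_window   = 8192.0;
      const double trfc_ns           = 75.0;    /* assumed, varies by chip */

      double trefi_us = refresh_window_ms * 1000.0 / rows_per_window;
      double overhead = (trfc_ns / 1000.0) / trefi_us * 100.0;

      printf("average refresh interval : %.2f us\n", trefi_us);
      printf("time spent refreshing    : ~%.1f%% of all cycles\n", overhead);
      return 0;
  }

Even doubling or halving the assumed tRFC keeps the overhead around the one-percent mark, which is the point being made above.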

With a 0T setting, data requests are also likely to get stalled, because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the concept of spatial locality, where if one data bit is read, chances are high that the next data bit will also need to be read. That is why closing open pages prematurely with a low Idle Cycle Limit will most likely reduce performance in desktop applications.

On the other hand, using a 0T or 16T idle cycle limit will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often will also ensure that, in the event of a very long read, the pages can be kept open long enough to fulfil the data request.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth, and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.


This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T, as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There is also the added benefit of increased data integrity due to more frequent refreshes.


DRAM Termination – The BIOS Optimization Guide

DRAM Termination

Common Options : 50 Ohms, 75 Ohms, 150 Ohms (DDR2) / 40 Ohms, 60 Ohms, 120 Ohms (DDR3)

 

Quick Review of DRAM Termination

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistors' ability to absorb signal reflections and thus improves signal quality. However, this comes at the expense of a smaller voltage swing for the signal and higher power consumption.

The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for your particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory :

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.
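The guidelines above reduce to a small lookup. The sketch below is our own illustration of it; the function and parameter names are invented, and the DDR3 branch simply returns the 40 ohm starting point suggested above.

  #include <stdio.h>

  /* Illustrative mapping of the guidelines above. "speed" is the DDR2 rating
   * (400/533/667/800); for DDR3 the function just returns the suggested
   * 40 ohm starting point, to be raised if the system proves unstable. */
  static int suggested_odt_ohms(int ddr_generation, int modules_per_channel,
                                int speed)
  {
      if (ddr_generation == 3)
          return 40;                      /* start low, raise if unstable */

      if (modules_per_channel == 1)
          return 150;                     /* single DDR2 module per channel */

      return (speed >= 667) ? 50 : 75;    /* two DDR2 modules per channel */
  }

  int main(void)
  {
      printf("DDR2-533, 2 modules : %d ohms\n", suggested_odt_ohms(2, 2, 533));
      printf("DDR2-800, 2 modules : %d ohms\n", suggested_odt_ohms(2, 2, 800));
      printf("DDR2-667, 1 module  : %d ohms\n", suggested_odt_ohms(2, 1, 667));
      printf("DDR3, any           : %d ohms\n", suggested_odt_ohms(3, 2, 1333));
      return 0;
  }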

 

Details of DRAM Termination

Like a ball thrown against a wall, electrical signals reflect (bounce) back when they reach the end of a transmission path. They also reflect at points where there is a change in impedance, e.g. at connections to DRAM devices or a bus. These reflected signals are undesirable because they distort the actual signal, impairing the signal quality and the data being transmitted.

Prior to the introduction of DDR2 memory, motherboard designers used line termination resistors at the end of the DRAM signal lines to reduce signal reflections. However, these resistors are only partially effective because they cannot reduce reflections generated by the stub lines that lead to the individual DRAM chips on the memory module (see illustration below). Even so, this method worked well enough with the lower operating frequencies and higher signal voltages of SDRAM and DDR SDRAM modules.

Line termination resistors on the motherboard (Courtesy of Rambus)

The higher speeds (and lower signal voltages) of DDR2 and DDR3 memory, though, require much better signal quality, and these high-speed modules have much lower tolerances for noise. The problem is also compounded by the higher number of memory modules used. Line termination resistors are no longer good enough to tackle the problem of signal reflections. This is where On-Die Termination (ODT) comes in.

On-Die Termination shifts the termination resistors from the motherboard to the DRAM die itself. These resistors can better suppress signal reflections, providing a much better signal-to-noise ratio in DDR2 and DDR3 memory. This allows for much higher clock speeds at much lower voltages.

It also reduces the cost of motherboard designs. In addition, the impedance value of the termination resistors can be adjusted, or the termination turned off entirely, via the memory module's Extended Mode Register Set (EMRS).

On-die termination (Courtesy of Rambus)

Unlike the termination resistors on the motherboard, the on-die termination resistors can be turned on and off as required. For example, when a DIMM is inactive, its on-die termination resistors turn on to prevent signals from the memory controller reflecting to the active DIMMs. The impedance value of the resistors is usually programmed by the BIOS at boot-time, so the memory controller only turns them on or off (unless the system includes a self-calibration circuit).
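As a conceptual illustration of that behaviour, the toy sketch below (ours, not how any particular memory controller is implemented) shows the idle DIMM on a two-DIMM channel providing the termination during a write to the other DIMM.

  #include <stdbool.h>
  #include <stdio.h>

  /* Toy illustration of the behaviour described above: during a write, the
   * idle DIMM on the channel terminates the bus, not the DIMM being written. */
  #define DIMMS_PER_CHANNEL 2

  static void set_odt_for_write(int target_dimm, bool odt_on[DIMMS_PER_CHANNEL])
  {
      for (int d = 0; d < DIMMS_PER_CHANNEL; d++)
          odt_on[d] = (d != target_dimm);   /* inactive DIMM absorbs reflections */
  }

  int main(void)
  {
      bool odt_on[DIMMS_PER_CHANNEL];
      for (int target = 0; target < DIMMS_PER_CHANNEL; target++) {
          set_odt_for_write(target, odt_on);
          printf("write to DIMM %d -> ODT: DIMM0=%s DIMM1=%s\n",
                 target, odt_on[0] ? "on" : "off", odt_on[1] ? "on" : "off");
      }
      return 0;
  }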

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistors' ability to absorb signal reflections and thus improves signal quality. However, this comes at the expense of a smaller voltage swing for the signal, and higher power consumption.


The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for your particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory :

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.


Digital Locked Loop (DLL) – The BIOS Optimization Guide

Digital Locked Loop (DLL)

Common Options : Enabled, Disabled

 

Quick Review

The Digital Locked Loop (DLL) BIOS option is a misnomer for the delay-locked loop (DLL). This is a digital circuit that aligns the data strobe signal (DQS) with the data signal (DQ) to ensure the proper data transfer of DDR, DDR2, DDR3 and DDR4 memory. However, it can be disabled to allow the memory chips to run beyond the fixed frequency range supported by the DLL.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Details

DDR, DDR2, DDR3 and DDR4 SDRAM deliver data on both rising and falling edges of the signal. This requires much tighter timings, necessitating the use of a data strobe signal (DQS) generated by differential clocks. This data strobe is then aligned to the data signal (DQ) using a delay-locked loop (DLL) circuit.

The DQS and DQ signals must be aligned with minimal skew to ensure proper data transfer. Otherwise, data transferred on the DQ signal will be read incorrectly, causing the memory contents to be corrupted and the system to malfunction.

However, the delay-locked loop circuit of every DDR, DDR2, DDR3 or DDR4 chip is tuned for a certain fixed frequency range. If you run the chip beyond that frequency range, the DLL circuit may not work correctly. That's why DDR, DDR2, DDR3 and DDR4 SDRAM chips can have problems running at clock speeds slower than what they are rated for.
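One way to picture this limitation: a DLL inserts delay through a chain of fixed-delay taps until the delayed edge lines up with the reference, so it can only lock when the clock period fits within the chain. The sketch below is purely conceptual; the tap delay, tap counts and clock periods are invented for illustration.

  #include <stdio.h>

  /* Conceptual sketch of why a DLL only works over a fixed frequency range:
   * it can only lock if the clock period fits the available delay chain. */
  #define TAP_DELAY_PS 80
  #define MIN_TAPS     8
  #define MAX_TAPS     64

  static int dll_lock_taps(int clock_period_ps)
  {
      int taps = clock_period_ps / TAP_DELAY_PS;   /* whole taps needed */
      if (taps < MIN_TAPS || taps > MAX_TAPS)
          return -1;                               /* out of range: no lock */
      return taps;
  }

  int main(void)
  {
      int periods_ps[] = { 10000, 5000, 2500, 400 };   /* 100 MHz to 2.5 GHz */
      for (int i = 0; i < 4; i++) {
          int taps = dll_lock_taps(periods_ps[i]);
          if (taps < 0)
              printf("period %5d ps : DLL cannot lock (outside tuned range)\n",
                     periods_ps[i]);
          else
              printf("period %5d ps : DLL locks using %d taps\n",
                     periods_ps[i], taps);
      }
      return 0;
  }

Note how the too-slow clock (the longest period) falls outside the chain just as surely as the too-fast one, which matches the point above about running chips slower than they are rated for.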


If you encounter such a problem, it is possible to disable the DLL. Disabling the DLL will allow the chip to run beyond the frequency range the DLL is tuned for. This is where the Digital Locked Loop (DLL) BIOS feature comes in.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

Note : The Digital Locked Loop (DLL) BIOS option is a misnomer for the delay-locked loop (DLL).

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 


Dynamic Counter – BIOS Optimization Guide

Dynamic Counter

Common Options : Enabled, Disabled

 

Quick Review of Dynamic Counter

The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 

Details of Dynamic Counter

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is open, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind Idle Cycle Limit.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page open longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.
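Put together, the two rules form a simple feedback loop. The sketch below is our own illustration of it; the step size and bounds are invented, as the real mechanism is internal to the memory controller.

  #include <stdio.h>

  /* Sketch of the adjustment rule described above: raise the idle cycle limit
   * on a page miss, lower it on a page conflict. */
  static int idle_cycle_limit = 16;       /* starting point from Idle Cycle Limit */

  static void on_page_miss(void)
  {
      if (idle_cycle_limit < 96)
          idle_cycle_limit += 8;          /* keep pages open longer next time */
  }

  static void on_page_conflict(void)
  {
      if (idle_cycle_limit > 0)
          idle_cycle_limit -= 8;          /* close pages sooner next time */
  }

  int main(void)
  {
      on_page_miss();
      on_page_miss();
      on_page_conflict();
      printf("idle cycle limit is now %dT\n", idle_cycle_limit);   /* 24T */
      return 0;
  }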

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 
