Tag Archives: DRAM

Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbits TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow the DIMM to transfer 64 bits of data every clock cycle.

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.
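
To see where those numbers come from, here is a quick illustrative sketch in Python. It is not from the BIOS guide; it simply follows the article's model of one DQS line per DRAM chip on a 64-bit DIMM:

  # Illustrative only: derive the DQS line count from the 64 DQ lines,
  # following the article's model of one DQS line per DRAM chip.
  DQ_LINES = 64  # data lines per DIMM

  def dqs_lines(chip_width: int) -> int:
      chips_per_rank = DQ_LINES // chip_width  # e.g. 64 / 8 = 8 chips
      return chips_per_rank                    # one DQS line per chip

  for width in (4, 8, 16):
      print(f"x{width} chips -> {dqs_lines(width)} DQS lines")
  # x4 -> 16 DQS lines, x8 -> 8 DQS lines, x16 -> 4 DQS lines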

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level with a momentary rise in the relative ground voltage.

This reduces the quality of the signal, which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins gives the signal a larger voltage swing, improving signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All this increases signal noise, which ironically negates the stronger signal provided by a higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people's list of priorities. In certain cases, it actually allows you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.
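
If it helps to see the options side by side, here is a tiny illustrative Python snippet. The option names come from this article; the percentages are only approximate, as noted above:

  # Approximate fraction of full DQ drive strength left by each option.
  DQ_DRIVE_OPTIONS = {
      "Not Reduced": 1.00,
      "Reduced 15%": 0.85,
      "Reduced 30%": 0.70,
      "Reduced 50%": 0.50,
  }

  for option, fraction in DQ_DRIVE_OPTIONS.items():
      print(f"{option:<12} -> ~{fraction:.0%} of full drive strength")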

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbits TCCD SDRAM chip.

 



10GB + 12GB Samsung LPDDR4X uMCP Memory Details!

Samsung just announced that they have begun mass-producing LPDDR4X uMCP memory in 10GB and 12GB capacities!

Here is EVERYTHING you need to know about the new 10GB and 12GB Samsung LPDDR4X uMCP memory!

 

10GB + 12GB Samsung LPDDR4X uMCP Memory

The Samsung LPDDR4X uMCP is a UFS-based multichip package that combines multiple LPDDR4X (low-power double data rate 4X) DRAM chips with eUFS NAND storage in a single package.

Samsung actually introduced a 12GB LPDDR4X uMCP memory about seven months ago. That package used six 16Gb LPDDR4X DRAM chips to achieve the 12 GB capacity.

Now, Samsung has a new higher-capacity 24Gb LPDDR4X DRAM chip – that’s 3 GB of memory on a single chip. This new chip uses the 1y-nanometer process technology, which can be either 14 nm or 16 nm.

The new 12GB Samsung LPDDR4X uMCP memory (KM8F8001MM) combines just four of the new 24Gb LPDDR4X DRAM chips, and ultra-fast eUFS 3.0 NAND storage into a single package.

With a data transfer speed of 4,266 Mbps, or 533 MB/s, this new Samsung LPDDR4X uMCP is designed to support 4K video recording, AI and machine learning capabilities in mid-range smartphones.

Samsung also created a 10GB LPDDR4X uMCP memory package using two 24Gb DRAM chips and two 16Gb DRAM chips.
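
The capacity arithmetic is easy to verify, since 8 gigabits make 1 gigabyte. A quick illustrative check in Python:

  # Quick check of the uMCP DRAM capacities (8 Gb = 1 GB).
  def to_gigabytes(gigabits: int) -> int:
      return gigabits // 8

  twelve_gb_package = 4 * 24          # four 24 Gb LPDDR4X chips
  ten_gb_package = 2 * 24 + 2 * 16    # two 24 Gb + two 16 Gb LPDDR4X chips

  print(to_gigabytes(twelve_gb_package))  # 12 GB
  print(to_gigabytes(ten_gb_package))     # 10 GB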

 

10GB + 12GB Samsung LPDDR4X uMCP Specifications

Features             | 12 GB Samsung LPDDR4X uMCP   | 10 GB Samsung LPDDR4X uMCP
Process Technology   | 1y (14-16 nm)                | 1y (14-16 nm)
LPDDR4X Capacity     | 12 GB                        | 10 GB
Configuration        | 4 x 24 Gb (3 GB) LPDDR4X     | 2 x 24 Gb (3 GB) LPDDR4X
                     | + eUFS 3.0 NAND storage      | + 2 x 16 Gb (2 GB) LPDDR4X
                     |                              | + eUFS 3.0 NAND storage
Max. Transfer Rate   | 4266 Mbps / 533 MB/s         | NA

 



Bank Swizzle Mode from The Tech ARP BIOS Guide

Bank Swizzle Mode

Common Options : Enabled, Disabled

 

Quick Review of Bank Swizzle Mode

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits.

It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizing page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 

Details of Bank Swizzle Mode

DRAM (and its various derivatives – SDRAM, DDR SDRAM, etc.) store data in cells that are organized in rows and columns.

Whenever a read command is issued to a memory bank, the appropriate row is first activated using the RAS (Row Address Strobe). Then, to read data from the target memory cell, the appropriate column is activated using the CAS (Column Address Strobe).

Multiple cells can be read from the same active row by applying the appropriate CAS signals. If data has to be read from a different row, the active row has to be deactivated before the appropriate row can be activated.

This takes time and reduces performance, so good memory controllers will try to schedule memory accesses to maximize the number of hits on active rows. One of the methods used to achieve that goal is the bank swizzle mode.

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits. It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

The XOR operation results in a value of true if only one of the two operands (inputs) is true. If both operands are simultaneously false or true, then it results in a value of false.


This characteristic of XORing the physical address to create the bank address reduces page conflicts by remapping the memory bank addresses so only one of two banks can be active at any one time.
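
To make the idea concrete, here is a minimal Python sketch of XOR-based bank swizzling. It is purely illustrative; the bit positions XORed below are made up, as the actual bits used are chipset-specific:

  # Minimal sketch of XOR bank swizzling. Bit positions are made up for
  # illustration; real controllers XOR chipset-specific address bits.
  BANK_MASK = 0b11  # 4 internal banks

  def plain_bank(addr: int) -> int:
      # Bank address taken directly from two low-order address bits.
      return (addr >> 13) & BANK_MASK

  def swizzled_bank(addr: int) -> int:
      # XOR higher-order physical address bits into the bank address.
      return plain_bank(addr) ^ ((addr >> 17) & BANK_MASK)

  for addr in (0x00002000, 0x00022000, 0x00042000, 0x00062000):
      print(f"{addr:#010x}: plain bank {plain_bank(addr)}, "
            f"swizzled bank {swizzled_bank(addr)}")
  # Without swizzling, all four addresses map to the same bank;
  # with swizzling, they spread across all four banks.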

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizing page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 



Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.
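
As a rough illustration of that counter-based approach (this is a simplified sketch, not how any particular memory controller is implemented), consider the following Python model of a single memory bank:

  # Simplified, illustrative model of one memory bank with an idle-cycle
  # counter that closes the open page after a set number of idle cycles.
  class Bank:
      def __init__(self, idle_cycle_limit: int = 16):  # e.g. a "16T" limit
          self.idle_cycle_limit = idle_cycle_limit
          self.open_row = None
          self.idle_cycles = 0

      def tick_idle(self):
          # Called on every clock cycle with no request for this bank.
          if self.open_row is None:
              return
          self.idle_cycles += 1
          if self.idle_cycles >= self.idle_cycle_limit:
              self.close_and_precharge()

      def access(self, row: int) -> str:
          self.idle_cycles = 0
          if self.open_row == row:
              return "page hit"            # fastest case
          if self.open_row is None:
              self.open_row = row
              return "page miss"           # just activate the row
          self.close_and_precharge()       # wrong row open: close it first
          self.open_row = row
          return "page conflict"           # slowest case

      def close_and_precharge(self):
          self.open_row = None
          self.idle_cycles = 0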


This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages opened longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the precharging of the memory bank before the set refresh cycle, not to actually delay the refresh cycle.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are closed and the bank precharged as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (satisfied immediately) into a page miss, which will have to wait for the bank to precharge and the same page to be reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.
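
Some rough numbers illustrate why. Assuming typical figures of 8192 refreshes spread across a 64 msec window and roughly 100 ns per refresh command (the exact values vary by DRAM generation), the refresh overhead works out to only around one percent:

  # Rough, assumed figures: 8192 refreshes per 64 ms window, ~100 ns each.
  REFRESH_WINDOW_MS = 64
  REFRESHES_PER_WINDOW = 8192
  REFRESH_COMMAND_NS = 100

  refresh_interval_ns = REFRESH_WINDOW_MS * 1_000_000 / REFRESHES_PER_WINDOW
  overhead = REFRESH_COMMAND_NS / refresh_interval_ns

  print(f"One refresh every ~{refresh_interval_ns:.0f} ns")  # ~7800 ns
  print(f"Refresh overhead: ~{overhead:.1%}")                # ~1.3%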

With a 0T setting, data requests are also likely to get stalled because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the spatial locality concept, where if one data bit is read, chances are high that the next data bit will also need to be read. That’s why closing open pages prematurely with a low Idle Cycle Limit will most likely reduce performance in desktop applications.

On the other hand, using a 0T or 16T idle cycle limit will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often will also ensure that in the event of a very long read, the pages can be opened long enough to fulfil the data request.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.


This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.


DRAM Termination – The BIOS Optimization Guide

DRAM Termination

Common Options : 50 Ohms, 75 Ohms, 150 Ohms (DDR2) / 40 Ohms, 60 Ohms, 120 Ohms (DDR3)

 

Quick Review of DRAM Termination

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improves signal quality. However, this comes at the expense of a smaller voltage swing for the signal and higher power consumption.

The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory:

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.
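
For convenience, the guidelines above can be summed up in a short illustrative Python helper. The DDR2 values follow the Samsung case study quoted above; the DDR3 value is just the suggested 40 ohm starting point, to be raised if you run into stability issues:

  # Illustrative helper that encodes the termination guidelines above.
  def suggested_odt_ohms(memory_type: str, modules_per_channel: int,
                         speed_grade: str = "") -> int:
      if memory_type == "DDR2":
          if modules_per_channel == 1:
              return 150
          if speed_grade in ("DDR2-400", "DDR2-533"):
              return 75
          return 50                      # DDR2-667 / DDR2-800, two modules
      if memory_type == "DDR3":
          return 40                      # starting point only, adjust upwards
      raise ValueError("unknown memory type")

  print(suggested_odt_ohms("DDR2", 1))              # 150
  print(suggested_odt_ohms("DDR2", 2, "DDR2-800"))  # 50
  print(suggested_odt_ohms("DDR3", 2))              # 40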

 

Details of DRAM Termination

Like a ball thrown against a wall, electrical signals reflect (bounce) back when they reach the end of a transmission path. They also reflect at points where there is a change in impedance, e.g. at connections to DRAM devices or a bus. These reflected signals are undesirable because they distort the actual signal, impairing the signal quality and the data being transmitted.

Prior to the introduction of DDR2 memory, motherboard designers used line termination resistors at the end of the DRAM signal lines to reduce signal reflections. However, these resistors are only partially effective because they cannot reduce reflections generated by the stub lines that lead to the individual DRAM chips on the memory module (see illustration below). Even so, this method worked well enough with the lower operating frequency and higher signal voltages of SDRAM and DDR SDRAM modules.

Line termination resistors on the motherboard (Courtesy of Rambus)

The higher speed (and lower signal voltages) of DDR2 and DDR3 memory though require much better signal quality and these high-speed modules have much lower tolerances for noise. The problem is also compounded by the higher number of memory modules used. Line termination resistors are no longer good enough to tackle the problem of signal reflections. This is where On-Die Termination (ODT) comes in.

On-Die Termination shifts the termination resistors from the motherboard to the DRAM die itself. These resistors can better suppress signal reflections, providing a much better signal-to-noise ratio in DDR2 and DDR3 memory. This allows for much higher clock speeds at much lower voltages.

It also reduces the cost of motherboard designs. In addition, the impedance value of the termination resistors can be adjusted, or even turned off via the memory module’s Extended Mode Register Set (EMRS).

On-die termination (Courtesy of Rambus)

Unlike the termination resistors on the motherboard, the on-die termination resistors can be turned on and off as required. For example, when a DIMM is inactive, its on-die termination resistors turn on to prevent signals from the memory controller reflecting to the active DIMMs. The impedance value of the resistors is usually programmed by the BIOS at boot time, so the memory controller only turns them on or off (unless the system includes a self-calibration circuit).

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improves signal quality. However, this comes at the expense of a smaller voltage swing for the signal, and higher power consumption.


The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory:

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.


Synchronous Mode Select – The BIOS Optimization Guide

Synchronous Mode Select

Common Options : Synchronous, Asynchronous

 

Quick Review

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.

 

Details

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.


However, the Asynchronous mode does have its uses. Users of multiplier-locked processors and slow memory modules may find that using the Asynchronous mode allows them to overclock the processor much higher without the need to buy faster memory modules.

The Asynchronous mode is also useful for those who have very fast memory modules and multiplier-locked processors with low bus speeds. Running the fast memory modules synchronously with the low CPU bus speed would force the memory modules to run at the same slow speed. Running asynchronously will therefore allow the memory modules to run at a much higher speed than the CPU bus.

But please note that the performance gains of running synchronously should not be underestimated. Synchronous operation is generally much faster than asynchronous operation running at a higher clock speed. It is advisable that you compare benchmark scores of your computer running asynchronously (at a higher clock speed) and synchronously to determine the best option for your system.

 


DRAM Bus Selection – The BIOS Optimization Guide

DRAM Bus Selection

Common Options : Auto, Single Channel, Dual Channel

 

Quick Review

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module, or your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.

 

Details

Many motherboards now come with dual memory channels. Each channel can be accessed by the memory controller concurrently, thereby improving memory throughput, as well as reducing memory latency.

Depending on the chipset and motherboard design, each memory channel may support one or more DIMM slots. But for the dual channel feature to work properly, at least one DIMM slot from each memory channel must be filled.
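
Accessing both channels concurrently effectively doubles the width of the memory bus from 64 bits to 128 bits, which roughly doubles the peak theoretical bandwidth. Here is a quick illustrative calculation in Python, using DDR-400 purely as an example:

  # Peak theoretical bandwidth for single vs dual channel operation.
  # DDR-400 (400 MT/s on a 64-bit channel) is used only as an example.
  def peak_bandwidth_mb_s(megatransfers_per_sec: int, channels: int) -> int:
      bytes_per_transfer = 8 * channels   # a 64-bit channel moves 8 bytes
      return megatransfers_per_sec * bytes_per_transfer

  print(peak_bandwidth_mb_s(400, channels=1))  # 3200 MB/s (single channel)
  print(peak_bandwidth_mb_s(400, channels=2))  # 6400 MB/s (dual channel)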

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module, or your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.


 


Dynamic Idle Cycle Counter – The BIOS Optimization Guide

Dynamic Idle Cycle Counter

Common Options : Enabled, Disabled

 

Quick Review

The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.

 

Details

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind DRAM Idle Timer.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.
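
A simplified Python sketch of this feedback loop is shown below. It is illustrative only; the step size and bounds are made up, and the real mechanism inside the memory controller is not documented in this detail:

  # Illustrative feedback loop: raise the idle cycle limit on page misses,
  # lower it on page conflicts. Step size and bounds are made up.
  class DynamicIdleLimit:
      def __init__(self, start_limit: int = 16, lo: int = 0, hi: int = 96):
          self.limit = start_limit   # starts from the DRAM Idle Timer value
          self.lo, self.hi = lo, hi

      def on_access(self, result: str):
          if result == "page miss":
              # Keeping pages open longer might have made this a page hit.
              self.limit = min(self.hi, self.limit + 1)
          elif result == "page conflict":
              # Closing pages sooner might have made this a page miss.
              self.limit = max(self.lo, self.limit - 1)

  limiter = DynamicIdleLimit()
  for outcome in ("page miss", "page miss", "page conflict", "page hit"):
      limiter.on_access(outcome)
  print(limiter.limit)  # 17 after the sequence above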

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.


DRAM Read Latch Delay – BIOS Optimization Guide

DRAM Read Latch Delay

Common Options : Auto, No Delay, 0.5ns, 1.0ns, 1.5ns

Quick Review

This BIOS feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. Start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. If your system becomes unstable after using the No Delay option, simply revert back to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 

Details

This feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading. As such, a lone single-sided memory module provides the lowest DRAM load possible.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The longer the delay, the poorer the read performance of your memory modules. However, the stability of your memory modules won’t increase together with the length of the delay. Remember, the purpose of the feature is only to ensure that the memory controller will be able to latch onto the DRAM device with all sorts of DRAM loadings.


The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. It isn’t going to increase stability. In fact, it may just make things worse! So, start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. This forces the memory controller to latch onto the DRAM devices without delay, even if the BIOS presets indicate that a delay is required. Naturally, this can potentially cause stability problems if you actually have a heavy DRAM load. Therefore, if your system becomes unstable after using the No Delay option, simply revert back to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 


AGP to DRAM Prefetch – BIOS Optimization Guide

AGP to DRAM Prefetch

Common Options : Enabled, Disabled

 

Quick Review

This feature controls the system controller’s AGP prefetch capability.

When enabled, the system controller will prefetch data whenever the AGP graphics card reads from system memory. This speeds up AGP reads as it allows contiguous memory reads by the AGP graphics card to proceed with minimal delay.

It is highly recommended that you enable this feature for better AGP read performance.

 

Details

This feature controls the system controller’s AGP prefetch capability. When enabled, the system controller will prefetch data whenever the AGP graphics card reads from system memory. Here is how it works.

Whenever the system controller reads AGP-requested data from system memory, it also reads the subsequent chunk of data. This is done on the assumption that the AGP graphics card will request the subsequent chunk of data. When the AGP graphics card actually initiates a read command for that chunk of data, the system controller can immediately send it to the AGP graphics card.

This speeds up AGP memory reads as the AGP graphics card won’t need to wait for the system controller to read from system memory. In other words, AGP to DRAM Prefetch allows contiguous memory reads by the AGP graphics card to proceed with minimal delay.
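
The idea is essentially next-chunk prefetching. Here is a minimal, illustrative Python sketch; the chunk size and the memory model are made up, as the real system controller does this in hardware:

  # Minimal sketch of next-chunk prefetching, the idea behind this feature.
  CHUNK = 64  # bytes per read, assumed for illustration

  class PrefetchingController:
      def __init__(self, memory: dict):
          self.memory = memory          # address -> data chunk
          self.prefetch_buffer = {}     # chunks fetched ahead of a request

      def read(self, addr: int):
          if addr in self.prefetch_buffer:
              data = self.prefetch_buffer.pop(addr)  # hit: no DRAM wait
          else:
              data = self.memory.get(addr)           # normal DRAM read
          next_addr = addr + CHUNK                   # prefetch the next chunk
          if next_addr not in self.prefetch_buffer:
              self.prefetch_buffer[next_addr] = self.memory.get(next_addr)
          return data

  mem = {a: f"chunk@{a}" for a in range(0, 512, CHUNK)}
  ctrl = PrefetchingController(mem)
  ctrl.read(0)    # reads chunk 0 from memory, prefetches chunk 64
  ctrl.read(64)   # served from the prefetch buffer, prefetches chunk 128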

It is highly recommended that you enable this feature for better AGP read performance. Please note that AGP writes to system memory do not benefit from this feature.

 
