Tag Archives: Memory controller

Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether it’s regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow the DIMM to transfer 64 bits of data every clock cycle.

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level with a momentary rise in the relative ground voltage.

This reduces the quality of the signal, which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins gives the signal a higher voltage swing, improving signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All this increases signal noise, which ironically negates the increased signal strength provided by the higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people’s list of priorities. In certain cases, it actually allows you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.
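To tie these guidelines together, here is a minimal Python sketch that maps the recommendations above onto a suggested setting. The function name and the decision order are our own illustrative assumptions, not part of any BIOS:

    def suggest_dq_drive_strength(num_modules: int, cpu_revision: str,
                                  tccd_chips: bool = False) -> str:
        """Illustrative mapping of the guidelines above. cpu_revision is the
        Athlon 64 / Opteron revision ('CG', 'D' or 'E'); tccd_chips marks
        modules based on the Samsung 512 Mbit TCCD SDRAM chip."""
        if cpu_revision in ("CG", "D"):
            return "Not Reduced"          # AMD: full strength, whatever the load
        if cpu_revision == "E" and tccd_chips:
            return "Reduced 50%"          # AMD's TCCD recommendation
        if num_modules == 1:
            return "Reduced 15%"          # light load: trade drive for signal quality
        return "Not Reduced"              # heavier DRAM load needs more drive

    # Example: one non-TCCD module on a Revision E processor.
    print(suggest_dq_drive_strength(1, "E"))  # -> Reduced 15%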

 



Write Data In to Read Delay from The Tech ARP BIOS Guide!

Write Data In to Read Delay

Common Options : 1 Cycle, 2 Cycles

 

Write Data In to Read Delay : A Quick Review

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 

Write Data In to Read Delay : The Full Details

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

Please note that this is only applicable for read commands that follow a write operation. Consecutive read operations or writes that follow reads are not affected.

If a 1 Cycle delay is selected, every read command that follows a write operation will be delayed one clock cycle before it is issued.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

If a 2 Cycles delay is selected, every read command that follows a write operation will be delayed two clock cycles before it is issued.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

By default, this BIOS feature is set to 2 Cycles. This meets JEDEC’s specification of 2 clock cycles for write-to-read command delay in DDR400 memory modules. DDR266 and DDR333 memory modules require a write-to-read command delay of only 1 clock cycle.
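As a rough illustration of the timing rule, the sketch below computes the earliest cycle on which a read to the same internal bank may be issued after a write. It is an assumed model of tWTR for illustration, not chipset code:

    def earliest_read_cycle(last_write_cycle: int, twtr_cycles: int) -> int:
        """Earliest clock cycle on which a read command may be issued to the
        same internal bank after the last valid write (the tWTR delay)."""
        return last_write_cycle + twtr_cycles

    # Example: the last valid write occurs on cycle 100.
    print(earliest_read_cycle(100, 1))  # 101 with the 1 Cycle option
    print(earliest_read_cycle(100, 2))  # 102 with the 2 Cycles option (DDR400 default)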

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 



Bank Swizzle Mode from The Tech ARP BIOS Guide

Bank Swizzle Mode

Common Options : Enabled, Disabled

 

Quick Review of Bank Swizzle Mode

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits.

It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizing page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 

Details of Bank Swizzle Mode

DRAM (and its various derivatives – SDRAM, DDR SDRAM, etc.) store data in cells that are organized in rows and columns.

Whenever a read command is issued to a memory bank, the appropriate row is first activated using the RAS (Row Address Strobe). Then, to read data from the target memory cell, the appropriate column is activated using the CAS (Column Address Strobe).

Multiple cells can be read from the same active row by applying the appropriate CAS signals. If data has to be read from a different row, the active row has to be deactivated before the appropriate row can be activated.

This takes time and reduces performance, so good memory controllers will try to schedule memory accesses to maximize the number of hits on active rows. One of the methods used to achieve that goal is the bank swizzle mode.

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits. It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

The XOR operation results in a value of true if exactly one of the two operands (inputs) is true. If both operands are false or both are true, it results in a value of false.

This characteristic of XORing the physical address bits to create the bank address reduces page conflicts by remapping the bank addresses, so that accesses which would otherwise contend for the same bank are spread across different banks.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.
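The sketch below illustrates the XOR remapping idea. The bit positions are made up for illustration; the physical address bits a real memory controller XORs are chipset-specific:

    def swizzled_bank(phys_addr: int) -> int:
        """Derive a 2-bit bank address by XORing pairs of physical address
        bits (bit positions chosen for illustration only)."""
        bank0 = ((phys_addr >> 13) ^ (phys_addr >> 17)) & 1
        bank1 = ((phys_addr >> 14) ^ (phys_addr >> 18)) & 1
        return (bank1 << 1) | bank0

    # Two addresses 128 KB apart map to the same bank without swizzling,
    # but the XOR of higher address bits spreads them across banks.
    print(swizzled_bank(0x000000), swizzled_bank(0x020000))  # -> 0 1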

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizing page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 



RW Queue Bypass from The Tech ARP BIOS Guide

RW Queue Bypass

Common Options : Auto, 2X, 4X, 8X, 16X

 

Quick Review of RW Queue Bypass

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximize the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.

It is generally recommended that you set the RW Queue Bypass BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step by step until the problem is resolved.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.

 

Details of RW Queue Bypass

The R/W Queue Bypass BIOS option is similar to the DCQ Bypass Maximum BIOS option – both determine how far an arbiter may reschedule memory accesses to improve performance.

The difference between the two is that DCQ Bypass Maximum does this at the memory controller level, while R/W Queue Bypass does it at the Device Control Interface (DCI) level.

To improve performance, the arbiter can reschedule transactions in the DCI read/write queue.

By allowing some transactions to bypass other transactions in the queue, the arbiter can maximize the number of hits on open memory pages.

This improves the overall memory performance but at the expense of some memory accesses which have to be delayed.

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximize the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.
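Here is a minimal model of the bypass counting logic, assuming a plain Python list as the read/write queue and a page_hit flag on each request; the real DCI arbiter is hardware, so this only sketches the rule. Each call returns the chosen request and the updated bypass count:

    def arbiter_pick(queue: list, bypasses: int, limit: int = 16):
        """Pick the next request to service. Prefer an open-page hit deeper
        in the queue, but once the oldest request has been bypassed
        `limit` times, service the oldest request instead."""
        if bypasses >= limit or not any(r["page_hit"] for r in queue):
            return queue.pop(0), 0            # service the oldest request
        i = next(i for i, r in enumerate(queue) if r["page_hit"])
        if i == 0:
            return queue.pop(0), 0            # oldest is itself a page hit
        return queue.pop(i), bypasses + 1     # oldest bypassed once more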

It is generally recommended that you set this BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step by step until the problem is resolved.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.



SDRAM PH Limit from the Tech ARP BIOS Guide

SDRAM PH Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM PH Limit

SDRAM PH Limit is short for SDRAM Page Hit Limit. The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM PH Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.

The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).
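A toy model of this cap is sketched below, assuming a list of pending requests that are each flagged as a page hit or not; real arbitration is done in hardware:

    def next_request(pending: list, streak: int, ph_limit: int = 8):
        """Service page-hit requests first, but once ph_limit consecutive
        page hits have been processed, service the oldest non-page-hit
        request so it is not starved."""
        hits = [i for i, r in enumerate(pending) if r["page_hit"]]
        misses = [i for i, r in enumerate(pending) if not r["page_hit"]]
        if misses and (streak >= ph_limit or not hits):
            return pending.pop(misses[0]), 0      # attend to a non-page hit
        return pending.pop(hits[0]), streak + 1   # another consecutive page hit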

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.


DCLK Feedback Delay – The Tech ARP BIOS Guide

DCLK Feedback Delay

Common Options : 0 ps, 150 ps, 300 ps, 450 ps, 600 ps, 750 ps, 900 ps, 1050 ps

 

Quick Review of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

By comparing the waveforms of both DCLK and its feedback signal, it can be determined if both clocks are in the same phase. If the clocks are not in the same phase, data may be lost, resulting in system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add an appropriate amount of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There is no performance advantage in increasing or reducing the amount of feedback delay.

 

Details of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

This feedback signal is used by the SDRAM controller to determine when it can write data to the SDRAM module. The main idea of this system is to ensure that both clock phases are properly aligned for the proper delivery of data.

By comparing the waveforms of both DCLK and its feedback signal, it can be determined if both clocks are in the same phase. If the clocks are not in the same phase, data may be lost, resulting in system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add an appropriate amount of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.
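Conceptually, you are choosing the delay option that best cancels the phase error between DCLK and its feedback signal. The sketch below assumes the phase error could be measured in picoseconds; in practice, you approximate this by trial and error:

    DELAY_OPTIONS_PS = [0, 150, 300, 450, 600, 750, 900, 1050]

    def best_feedback_delay(phase_error_ps: float) -> int:
        """Pick the delay option closest to a (hypothetically measured)
        phase error between DCLK and its feedback signal."""
        return min(DELAY_OPTIONS_PS, key=lambda d: abs(phase_error_ps - d))

    print(best_feedback_delay(380))  # -> 450 (ps)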

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There is no performance advantage in increasing or reducing the amount of feedback delay.



Asynclat (Asynchronous Latency) – The Tech ARP BIOS Guide

Asynclat

Common Options : 0 to 15 ns

 

Quick Review of Asynclat

Asynclat is an AMD processor-specific BIOS feature. It controls the amount of asynchronous latency, which depends on the time it takes for data to travel from the processor to the furthest DIMM on the motherboard and back. For your reference, AMD has a few conservative suggestions on setting the Asynclat BIOS feature.

Memory Type  | Number Of DIMM Slots | 200 MHz | 166 MHz | 133 MHz | 100 MHz
-------------|----------------------|---------|---------|---------|--------
Registered   | 8                    | 8 ns    | 8 ns    | 9 ns    | 9 ns
Unbuffered   | 4                    | 8 ns    | 8 ns    | 8 ns    | 8 ns
Unbuffered   | 3 or 4               | 7 ns    | 7 ns    | 7 ns    | 7 ns
Unbuffered   | 1 or 2               | 6 ns    | 6 ns    | 6 ns    | 6 ns

Do note that in this case, the distance of the furthest DIMM slot is considered analogous to the number of DIMM slots. The greater the number of DIMM slots available on the motherboard, the further the final slot is from the memory controller.

Also, these values are rough and conservative recommendations that assume that the furthest DIMM slot is occupied by a module. If your motherboard has four slots and you choose to populate only the first two slots, you could use a shorter asynchronous latency.

Generally, it is recommended that you stick with the asynchronous latency recommended by AMD (see table above) or your memory module’s manufacturer. You can, of course, adjust the amount of asynchronous latency according to the situation. For example, if you are overclocking the memory modules, or if you populate only the first two of the four available DIMM slots, you can get away with a lower asynchronous latency.

 

Details of Asynclat

Asynclat is an AMD processor-specific BIOS feature. It controls the amount of asynchronous latency, which depends on the time it takes for data to travel from the processor to the furthest DIMM on the motherboard and back.

The asynchronous latency is designed to account for variances in the trace length to the furthest DIMM on the motherboard, as well as the type of DIMM, number of chips in that DIMM and the memory bus frequency. For your reference, AMD has a few conservative suggestions on setting the Asynclat BIOS feature.

Memory Type  | Number Of DIMM Slots | 200 MHz | 166 MHz | 133 MHz | 100 MHz
-------------|----------------------|---------|---------|---------|--------
Registered   | 8                    | 8 ns    | 8 ns    | 9 ns    | 9 ns
Unbuffered   | 4                    | 8 ns    | 8 ns    | 8 ns    | 8 ns
Unbuffered   | 3 or 4               | 7 ns    | 7 ns    | 7 ns    | 7 ns
Unbuffered   | 1 or 2               | 6 ns    | 6 ns    | 6 ns    | 6 ns

Do note that in this case, the distance of the furthest DIMM slot is considered analogous to the number of DIMM slots. The greater the number of DIMM slots available on the motherboard, the further the final slot is from the memory controller.

Also, these values are rough and conservative recommendations that assume that the furthest DIMM slot is occupied by a module. If your motherboard has four slots and you choose to populate only the first two slots, you could use a shorter asynchronous latency.

Naturally, the shorter the latency, the better the performance. However, if the latency is too short, it will not allow enough time for data to be returned from the furthest DIMM on the motherboard. This results in data corruption and system instability.

The optimal asynchronous latency varies from system to system. It depends on the motherboard design, where you install your DIMMs, the type of DIMM used and the memory bus speed selected. The only way to find the optimal asynchronous latency is trial and error, starting with a high value and working your way down.
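For convenience, AMD's table above can be encoded as a simple lookup to give the trial-and-error process a sensible starting point. This is just the table in code form:

    # AMD's conservative suggestions, keyed by (memory type, DIMM slots).
    # The four values are for 200, 166, 133 and 100 MHz memory clocks.
    ASYNC_LAT_NS = {
        ("Registered", "8"):      (8, 8, 9, 9),
        ("Unbuffered", "4"):      (8, 8, 8, 8),
        ("Unbuffered", "3 or 4"): (7, 7, 7, 7),
        ("Unbuffered", "1 or 2"): (6, 6, 6, 6),
    }

    def suggested_asynclat(mem_type: str, slots: str, clock_mhz: int) -> int:
        idx = {200: 0, 166: 1, 133: 2, 100: 3}[clock_mhz]
        return ASYNC_LAT_NS[(mem_type, slots)][idx]

    print(suggested_asynclat("Unbuffered", "1 or 2", 200))  # -> 6 (ns)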

Generally, it is recommended that you stick with the asynchronous latency recommended by AMD (see table above) or your memory module’s manufacturer. You can, of course, adjust the amount of asynchronous latency according to the situation. For example, if you are overclocking the memory modules, or if you populate only the first two of the four available DIMM slots, you can get away with a lower asynchronous latency.


2T Command – BIOS Optimization Guide

2T Command

Common Options : Enabled, Disabled, Auto

 

Quick Review of 2T Command

The 2T Command BIOS feature allows you to select the delay from the assertion of the Chip Select signal to the time the memory controller starts sending commands to the memory bank. The lower the value, the sooner the memory controller can send commands out to the activated memory bank.

When this feature is disabled, the memory controller will only insert a command delay of one clock cycle or 1T.

When this feature is enabled, the memory controller will insert a command delay of two clock cycles or 2T.

The Auto option allows the memory controller to use the memory module’s SPD value for command delay.

If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner.

However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the “bad commands” that result will cause data loss and corruption.

It is recommended that you try disabling 2T Command for better memory performance. But if you face stability issues, enable this BIOS feature.

 

Details of 2T Command

Whenever there is a memory read request from the operating system, the memory controller does not actually receive the physical memory addresses where the data is located. It is only given a virtual address space which it has to translate into physical memory addresses. Only then can it issue the proper read commands. This produces a slight delay at the start of every new memory transaction.

Instead of immediately issuing the read commands, the memory controller instead asserts the Chip Select signal to the physical bank that contains the requested data. What this Chip Select signal does is activate the bank so that it is ready to accept the commands. In the meantime, the memory controller will be busy translating the memory addresses. Once the memory controller has the physical memory addresses, it starts issuing read commands to the activated memory bank.

As you can see, the command delay is not caused by any latency inherent in the memory module. Rather, it is determined by the time taken by the memory controller to translate the virtual address space into physical memory addresses.

Naturally, because the delay is due to translation of addresses, the memory controller will require more time to translate addresses in high density memory modules due to the higher number of addresses. The memory controller will also take a longer time if there is a large number of physical banks.

The 2T Command BIOS feature allows you to select the delay from the assertion of the Chip Select signal to the time the memory controller starts sending commands to the memory bank. The lower the value, the sooner the memory controller can send commands out to the activated memory bank.

When this feature is disabled, the memory controller will only insert a command delay of one clock cycle or 1T.

When this feature is enabled, the memory controller will insert a command delay of two clock cycles or 2T.

The Auto option allows the memory controller to use the memory module’s SPD value for command delay.
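In timing terms, this feature simply adds one extra clock cycle between the Chip Select assertion and the first command, as this sketch of the arithmetic shows:

    def first_command_cycle(cs_assert_cycle: int, two_t: bool) -> int:
        """Cycle on which the memory controller may start issuing commands
        to the activated bank: 1T (disabled) or 2T (enabled)."""
        return cs_assert_cycle + (2 if two_t else 1)

    print(first_command_cycle(0, False))  # -> 1 (1T command delay)
    print(first_command_cycle(0, True))   # -> 2 (2T command delay)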

If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner.

However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the “bad commands” that result will cause data loss and corruption.

Fortunately, all unbuffered SDRAM modules are capable of a 1T command delay with up to four memory banks per channel. Beyond that, a 2T command delay may be required. However, support for a 1T command delay varies from chipset to chipset and even from one motherboard model to another. You should consult your motherboard manufacturer to see if your motherboard supports a command delay of 1T.

It is recommended that you try disabling 2T Command for better memory performance. But if you face stability issues, enable this BIOS feature.

 


Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles that are allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long, as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T, as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.
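A per-cycle sketch of such a counter follows, assuming a simple model where the open page is represented by its row number and None means the bank has been precharged:

    def tick(open_page, idle_cycles: int, had_request: bool, limit=16):
        """One clock cycle of the page-closing policy: a request resets the
        idle counter, otherwise the page is closed (and the bank precharged)
        once the idle cycle limit is reached. limit=None models Infinite."""
        if had_request:
            return open_page, 0               # page stays open, counter resets
        idle_cycles += 1
        if limit is not None and idle_cycles >= limit:
            return None, 0                    # page closed, bank precharged
        return open_page, idle_cycles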

This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles that are allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages opened longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the precharging of the memory bank before the set refresh cycle, not to delay the refresh cycle.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are closed and the banks precharged as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (satisfied immediately) into a page miss, which will have to wait for the bank to precharge and the same page to be reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.

With a 0T setting, data requests are also likely to get stalled, because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the spatial locality concept: if one data bit is read, chances are high that the next data bit will also need to be read. That is why closing open pages prematurely with a low Idle Cycle Limit will most likely reduce performance in desktop applications.

On the other hand, using a 0T or 16T idle cycle limit will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often will also ensure that, in the event of a very long read, the pages can be opened long enough to fulfil the data request.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.

This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T, as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.


SDRAM Precharge Control – The BIOS Optimization Guide

SDRAM Precharge Control

Common Options : Enabled, Disabled

 

Quick Review of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank. This is useful in cases where subsequent data requests will also result in page misses.

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity although it is only useful if you have chosen a SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.

 

Details of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This naturally improves performance.

But if a read request cannot be satisfied by any of the four open pages, there are two possibilities: either one page is closed and the correct page opened, or all open pages are closed and new pages opened. Either way, the read request suffers the full latency penalty.

The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur. Naturally, this greatly impacts memory performance.

Fortunately, after the four full latency reads, the memory controller can often predict what pages will be needed next. It can then open them for minimum latency reads. This somewhat reduces the negative effect of consecutive page misses.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank.

This is useful in cases where subsequent data requests will also result in page misses. This is because the memory banks will already be precharged and ready to be activated. There is no need to wait for the memory banks to precharge before they can be activated. However, it also means that you won’t be able to benefit from data accesses that could have been satisfied by the previously opened pages.
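The two policies can be sketched as follows, with the open pages modeled as a set of (bank, row) pairs; the hardware implementation is of course more involved:

    def on_page_miss(open_pages: set, conflicting_page, enabled: bool) -> set:
        """Page-miss handling: close only the conflicting page (feature
        enabled) or issue an All Banks Precharge and close every open
        page (feature disabled)."""
        if enabled:
            open_pages.discard(conflicting_page)  # other pages stay open
        else:
            open_pages.clear()                    # all banks precharged
        return open_pages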

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity although it is only useful if you have chosen a SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.


LD-Off Dram RD/WR Cycles – The BIOS Optimization Guide

LD-Off Dram RD/WR Cycles

Common Options : Delay 1T, Normal

 

Quick Review

The LD-Off Dram RD/WR Cycles BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.

 

Details

At the beginning of a memory transaction (read or write), the memory controller normally sends the address and command signals simultaneously to the memory bank. This allows for the quickest activation of the memory bank.

However, this may cause problems with certain memory modules. In these memory modules, the target row may not be activated quickly enough to allow the memory controller to read from or write to it. This is where the LD-Off Dram RD/WR Cycles BIOS feature comes in.

This BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle. This ensures there is enough time for the memory bank to be activated before the read or write command arrives.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.
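The difference between the two settings is just the relative timing of the address and the command, as this small sketch of the lead-off sequence shows:

    def lead_off_sequence(delay_1t: bool) -> list:
        """Signal order at the start of a memory read/write cycle."""
        if delay_1t:
            return ["T0: memory address", "T1: read/write command"]
        return ["T0: memory address + read/write command"]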

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.


MCLK Spread Spectrum – The BIOS Optimization Guide

MCLK Spread Spectrum

Common Options : 0.25%, 0.5%, 0.75%, Disabled

 

Quick Review

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The MCLK Spread Spectrum BIOS feature controls spread spectrum clocking of the memory bus. It usually offers three levels of modulation – 0.25%, 0.5% or 0.75%. They denote the amount of modulation around the memory bus frequency. The greater the modulation, the greater the reduction of EMI. Therefore, if you need to significantly reduce EMI, a modulation of 0.75% is recommended.

Generally, frequency modulation through spread spectrum clocking should not cause any problems. However, system stability may be compromised if you are overclocking the memory bus.

Therefore, it is recommended that you disable the MCLK Spread Spectrum feature if you are overclocking the memory bus. Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the memory bus frequency a little to provide a margin of safety.

If you are not overclocking the memory bus, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature. Otherwise, disable it to remove even the slightest possibility of stability issues.

 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted.

To prevent EMI from causing problems to other electronics, the FCC enacted Part 15 of the FCC regulations in 1975. It regulates the power output of such clock generators by limiting the amount of EMI they can generate. As a result, engineers use spread spectrum clocking to ensure that their motherboards comply with the FCC regulation on EMI levels.

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. Instead of running at a fixed frequency, the clock signal continuously varies around the target frequency within a tight range. This “spreads out” the power output and “flattens” the spikes of the signal waveform, keeping them below the FCC limit.

The MCLK Spread Spectrum BIOS feature controls spread spectrum clocking of the memory bus. It usually offers three levels of modulation – 0.25%, 0.5% or 0.75%. They denote the amount of modulation around the memory bus frequency. The greater the modulation, the greater the reduction of EMI. Therefore, if you need to significantly reduce EMI, a modulation of 0.75% is recommended.
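To see what these percentages mean numerically, here is a sketch of a triangular down-spread profile. The modulation rate and shape are assumptions for illustration; real spread profiles are chipset-specific:

    def modulated_clock_mhz(base_mhz: float, spread_pct: float, t: float,
                            mod_rate_hz: float = 30_000.0) -> float:
        """Clock frequency at time t (seconds) under triangular modulation
        sweeping from base_mhz down to base_mhz * (1 - spread_pct/100)."""
        phase = (t * mod_rate_hz) % 1.0           # 0..1 sawtooth
        tri = 1.0 - abs(2.0 * phase - 1.0)        # 0..1 triangle wave
        return base_mhz * (1.0 - spread_pct / 100.0 * tri)

    # A 200 MHz memory clock with 0.5% spread dips as low as 199 MHz.
    print(min(modulated_clock_mhz(200.0, 0.5, t / 1e6) for t in range(100)))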

Generally, frequency modulation through spread spectrum clocking should not cause any problems. However, system stability may be compromised if you are overclocking the memory bus. Of course, this depends on the amount of modulation, the extent of overclocking and other factors like temperature, voltage levels, etc. As such, the problem may not readily manifest itself immediately.

Therefore, it is recommended that you disable the MCLK Spread Spectrum feature if you are overclocking the memory bus. You will be able to achieve better overclockability, at the expense of higher EMI. Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the memory bus frequency a little to provide a margin of safety.

If you are not overclocking the memory bus, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature. Otherwise, disable it to remove even the slightest possibility of stability issues.


Read-Around-Write – The BIOS Optimization Guide

Read-Around-Write

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature allows the processor to execute read commands out of order, as if they are independent from the write commands. It does this by using a Read-Around-Write buffer.

If this BIOS feature is enabled, all processor writes to memory are first accumulated in that buffer. This allows the processor to execute read commands without waiting for the write commands to be completed.

The buffer will then combine the writes and write them to memory as burst transfers. This reduces the number of writes to memory and boosts the processor’s write performance.

If this BIOS feature is disabled, the processor writes directly to the memory controller. This reduces the processor’s read performance.

Therefore, it is highly recommended that you enable the Read-Around-Write BIOS feature for better processor read and write performance.

 

Details

This BIOS feature allows the processor to execute read commands out of order, as if they are independent from the write commands. It does this by using a Read-Around-Write buffer.

If this BIOS feature is enabled, all processor writes to memory are first accumulated in that buffer. This allows the processor to execute read commands without waiting for the write commands to be completed.

The buffer will then combine the writes and write them to memory as burst transfers. This reduces the number of writes to memory and boosts the processor’s write performance.

Incidentally, until its contents have been written to memory, the Read-Around-Write buffer also serves as a cache of the data that it is storing. These tend to be the most up-to-date data since the processor has just written them to the buffer.

Therefore, if the processor sends out a read command for data that is still in the Read-Around-Write buffer, the processor can read directly from the buffer instead. This greatly improves read performance because the processor bypasses the memory controller to access the data. The buffer is much closer logically, so reading from it will be much faster than reading from memory.
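This behavior can be modeled as a small write-combining buffer. The class below is a toy model of the concept, not the chipset's actual implementation:

    class ReadAroundWriteBuffer:
        """Toy model: writes accumulate in a buffer and are later flushed to
        memory as a burst, while reads that hit a pending write are served
        straight from the buffer, bypassing the queued writes."""

        def __init__(self, memory: dict):
            self.memory = memory        # models system memory
            self.pending = {}           # addr -> newest value written

        def write(self, addr, value):
            self.pending[addr] = value  # accumulate and combine writes

        def read(self, addr):
            if addr in self.pending:    # the most up-to-date copy is here
                return self.pending[addr]
            return self.memory[addr]    # read around the queued writes

        def flush(self):
            self.memory.update(self.pending)  # write back as one burst
            self.pending.clear()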

If this BIOS feature is disabled, the processor writes directly to the memory controller. All writes have to be completed before the processor can execute a read command. It also prevents the buffer from being used as a temporary cache of processor writes. This reduces the processor’s read performance.

Therefore, it is highly recommended that you enable the Read-Around-Write BIOS feature for better processor read and write performance.

 


Memory Hole At 15M-16M – The BIOS Optimization Guide

Memory Hole At 15M-16M

Common Options : Enabled, Disabled

 

Quick Review

Certain ISA cards require exclusive access to the 1 MB block of memory, from the 15th to the 16th megabyte, to work properly. The Memory Hole At 15M-16M BIOS feature allows you to reserve that 1 MB block of memory for such cards to use.

If you enable this feature, 1 MB of memory (the 15th MB) will be reserved exclusively for the ISA card’s use. This effectively reduces the total amount of memory available to the operating system by 1 MB.

Please note that in certain motherboards, enabling this feature may actually render all memory above the 15th MB unavailable to the operating system!

If you disable this feature, the 15th MB of RAM will not be reserved for the ISA card’s use. The full range of memory is therefore available for the operating system to use. However, if your ISA card requires the use of that memory area, it may then fail to work.

Since ISA cards are a thing of the past, it is highly recommended that you disable this feature. Even if you have an ISA card that you absolutely have to use, you may not actually need to enable this feature.

Most ISA cards do not need exclusive access to this memory area. Make sure that your ISA card requires this memory area before enabling this feature. You should use this BIOS feature only in a last-ditch attempt to get a stubborn ISA card to work.

 

Details

Certain ISA cards require exclusive access to the 1 MB block of memory, from the 15th to the 16th megabyte, to work properly. The Memory Hole At 15M-16M BIOS feature allows you to reserve that 1 MB block of memory for such cards to use.

If you enable this feature, 1 MB of memory (the 15th MB) will be reserved exclusively for the ISA card’s use. This effectively reduces the total amount of memory available to the operating system by 1 MB. Therefore, if you have 256 MB of memory, the usable amount of memory will be reduced to 255 MB.

Please note that in certain motherboards, enabling this feature may actually render all memory above the 15th MB unavailable to the operating system! In such cases, you will end up with only 14 MB of usable memory, irrespective of how much memory your system actually has.
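
For illustration, the 15th MB corresponds to the physical address range 0x00F00000 to 0x00FFFFFF. This little C sketch merely expresses that range as an address check; the chipset performs the equivalent routing in hardware.

    /* The 1 MB memory hole: the 15th megabyte of the address space. */
    #include <stdint.h>
    #include <stdio.h>

    #define HOLE_START 0x00F00000u   /* 15 MB */
    #define HOLE_END   0x00FFFFFFu   /* last byte below 16 MB */

    int in_memory_hole(uint32_t addr) {
        return addr >= HOLE_START && addr <= HOLE_END;
    }

    int main(void) {
        /* prints 1 0: the first address would be routed to the ISA card,
           while the second (at the 16 MB mark) still goes to system RAM */
        printf("%d %d\n", in_memory_hole(0x00F80000u), in_memory_hole(0x01000000u));
        return 0;
    }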


If you disable this feature, the 15th MB of RAM will not be reserved for the ISA card’s use. The full range of memory is therefore available for the operating system to use. However, if your ISA card requires the use of that memory area, it may then fail to work.

Since ISA cards are a thing of the past, it is highly recommended that you disable this feature. Even if you have an ISA card that you absolutely have to use, you may not actually need to enable this feature.

Most ISA cards do not need exclusive access to this memory area. Make sure that your ISA card requires this memory area before enabling this feature. You should use this BIOS feature only in a last-ditch attempt to get a stubborn ISA card to work.

 


Synchronous Mode Select – The BIOS Optimization Guide

Synchronous Mode Select

Common Options : Synchronous, Asynchronous

 

Quick Review

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.

 

Details

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.


However, the Asynchronous mode does have its uses. Users of multiplier-locked processors and slow memory modules may find that using the Asynchronous mode allows them to overclock the processor much higher without the need to buy faster memory modules.

The Asynchronous mode is also useful for those who have very fast memory modules and multiplier-locked processors with low bus speeds. Running the fast memory modules synchronously with the low CPU bus speed would force the memory modules to run at the same slow speed. Running asynchronously will therefore allow the memory modules to run at a much higher speed than the CPU bus.
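
Here is a simple back-of-the-envelope comparison in C of the trade-off involved. The cycle counts are assumptions chosen purely for illustration, not chipset specifications, but they show how a resynchronization penalty can eat up much of the advantage of a faster asynchronous memory clock.

    #include <stdio.h>

    int main(void) {
        double cpu_bus_mhz   = 200.0;   /* assumed CPU bus clock */
        double async_mem_mhz = 250.0;   /* assumed faster async DRAM clock */
        double resync_cycles = 1.5;     /* assumed average resync penalty */

        /* time (in ns) for a hypothetical 8-transfer access, ignoring
           CAS latency and other timings */
        double sync_ns  = 8.0 * 1000.0 / cpu_bus_mhz;
        double async_ns = 8.0 * 1000.0 / async_mem_mhz
                        + resync_cycles * 1000.0 / cpu_bus_mhz;

        printf("sync: %.1f ns, async: %.1f ns\n", sync_ns, async_ns);
        /* sync: 40.0 ns, async: 39.5 ns -- the 25%-faster async clock
           barely wins, which is why benchmarking both settings matters */
        return 0;
    }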

But please note that the performance gains of running synchronously should not be underestimated. Synchronous operations are generally much faster than asynchronous operations running at a higher clock speed. It is advisable that you compare benchmark scores of your computer running asynchronously (at a higher clock speed) and synchronously to determine the best option for your system.

 


CPU Hardware Prefetch – The BIOS Optimization Guide

CPU Hardware Prefetch

Common Options : Enabled, Disabled

 

Quick Review

The processor has a hardware prefetcher that automatically analyzes its requirements and prefetches data and instructions that are likely to be required in the near future from memory into the Level 2 cache. This reduces the latency associated with memory reads.

When enabled, the processor’s hardware prefetcher will be enabled and allowed to automatically prefetch data and code for the processor.

When disabled, the processor’s hardware prefetcher will be disabled.

If you are using a C1 (or newer) stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, it is recommended that you enable this BIOS feature so that the hardware prefetcher is enabled for maximum performance.

But if you are using an older stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, then you should disable the CPU Hardware Prefetch BIOS feature to circumvent the O37 bug, which causes data corruption when the hardware prefetcher is operational.

 

Details

CPU Hardware Prefetch is a BIOS feature specific to processors based on the Intel NetBurst microarchitecture (e.g. Intel Pentium 4 and Intel Pentium 4 Xeon).

These processors have a hardware prefetcher that automatically analyzes the processor’s requirements and prefetches data and instructions that are likely to be required in the near future from memory into the Level 2 cache. This reduces the latency associated with memory reads.

When it works, the hardware prefetcher does a great job of keeping the processor loaded with code and data. However, it doesn’t always work right.

Prior to the C1 stepping of the Intel Pentium 4 and Intel Pentium 4 Xeon, these processors shipped with a bug that caused data corruption when the hardware prefetcher was enabled. According to Intel, Errata O37 causes the processor to “use stale data from the cache while the Hardware Prefetcher is enabled”.

Unfortunately, the only solution for the affected processors is to disable the hardware prefetcher. This is where the CPU Hardware Prefetch BIOS feature comes in.
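
For the curious, the family, model and stepping of a processor can be read in software with the CPUID instruction, which is essentially how affected processors are identified. This sketch uses the GCC/Clang <cpuid.h> wrapper; mapping the raw stepping number to a name like C1 still requires Intel’s specification update documents, so only the raw fields are printed here.

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang helper, x86 only */

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;                           /* CPUID leaf 1 not supported */
        unsigned stepping = eax & 0xF;
        unsigned model    = (eax >> 4) & 0xF;
        unsigned family   = (eax >> 8) & 0xF;   /* NetBurst reports family 15 */
        printf("family %u, model %u, stepping %u\n", family, model, stepping);
        return 0;
    }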


When enabled, the processor’s hardware prefetcher will be enabled and allowed to automatically prefetch data and code for the processor.

When disabled, the processor’s hardware prefetcher will be disabled.

If you are using a C1 (or newer) stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, it is recommended that you enable this BIOS feature so that the hardware prefetcher is enabled for maximum performance.

But if you are using an older stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, then you should disable this BIOS feature to circumvent the O37 bug, which causes data corruption when the hardware prefetcher is operational.

 


Chipkill – The BIOS Optimization Guide

Chipkill

Common Options : Enabled, Disabled

 

Quick Review

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you have already spent the money on ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should disable this BIOS feature only if your system uses a single ECC memory module.

 

Details

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

Normal ECC technology makes use of the Hamming code, with eight ECC bits for every 64 bits of data. This allows it to detect all single-bit and double-bit errors, but correct only single-bit errors.

IBM’s Chipkill technology makes use of the BCH (Bose, Ray-Chaudhuri, Hocquenghem) code, with sixteen ECC bits for every 128 bits of data. It can detect all single-symbol and double-symbol errors, but correct only single-symbol errors.

A symbol, by the way, is a group of 4 bits. A single-symbol error is any error combination within that symbol, so it can consist of anything from one to four corrupted bits. Chipkill is therefore capable of detecting and correcting more errors than standard ECC.
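
To see what that means in practice, this small C sketch counts how many 4-bit symbols differ between the data that was stored and the data that was read back. The example values are our own; the point is that several flipped bits still count as just one symbol error when they fall within a single symbol.

    #include <stdint.h>
    #include <stdio.h>

    /* Count how many 4-bit symbols differ across a 128-bit word,
       represented here as two 64-bit halves. */
    int count_symbol_errors(const uint64_t stored[2], const uint64_t read[2]) {
        int errors = 0;
        for (int half = 0; half < 2; half++) {
            uint64_t diff = stored[half] ^ read[half];
            for (int s = 0; s < 16; s++)          /* 16 symbols per 64 bits */
                if ((diff >> (s * 4)) & 0xF)
                    errors++;
        }
        return errors;
    }

    int main(void) {
        uint64_t stored[2] = { 0x0123456789ABCDEFULL, 0 };
        uint64_t read[2]   = { 0x0123456789ABCD2FULL, 0 };  /* E -> 2: 2 bits flip */
        printf("%d symbol error(s)\n", count_symbol_errors(stored, read));
        /* prints 1 -- both flipped bits fall inside one 4-bit symbol,
           so Chipkill can still correct the data */
        return 0;
    }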

Unlike standard ECC, Chipkill can only be used in systems with two channels of ECC memory (a 128-bit data width configuration). This is because it requires sixteen ECC bits, which can only be obtained using two ECC memory modules. However, it won’t work if you place both ECC modules in the same memory channel. Both memory channels must be active for Chipkill to work.


This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you have already spent the money on ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should disable this BIOS feature only if your system uses a single ECC memory module.

 


Dynamic Idle Cycle Counter – The BIOS Optimization Guide

Dynamic Idle Cycle Counter

Common Options : Enabled, Disabled

 

Quick Review

The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.

 

Details

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind DRAM Idle Timer.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.
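
Here is a toy model in C of that adjustment loop. The doubling-and-halving rule and the limits are illustrative assumptions, not AMD’s actual algorithm, but they capture the feedback behaviour described above.

    #include <stdio.h>

    enum access_result { PAGE_HIT, PAGE_MISS, PAGE_CONFLICT };

    int adjust_idle_limit(int limit, enum access_result r) {
        if (r == PAGE_MISS && limit < 256)
            limit *= 2;               /* keep pages open longer */
        else if (r == PAGE_CONFLICT && limit > 1)
            limit /= 2;               /* close pages sooner */
        return limit;
    }

    int main(void) {
        int limit = 16;               /* assumed initial DRAM Idle Timer value */
        enum access_result trace[] = { PAGE_MISS, PAGE_HIT,
                                       PAGE_CONFLICT, PAGE_CONFLICT };
        for (int i = 0; i < 4; i++) {
            limit = adjust_idle_limit(limit, trace[i]);
            printf("after access %d: limit = %d cycles\n", i, limit);
        }
        /* prints 32, 32, 16, 8 -- the limit drifts toward whatever
           the workload is rewarding */
        return 0;
    }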

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.


 


ACPI SRAT Table – BIOS Optimization Guide

ACPI SRAT Table

Common Options : Enabled, Disabled

 

Quick Review

The ACPI Static Resource Affinity Table (SRAT) stores topology information for all the processors and memory, describing the physical locations of the processors and memory in the system. It also describes what memory is hot-pluggable, and what is not.

The operating system scans the ACPI SRAT at boot time and uses the information to better allocate memory and schedule software threads for maximum performance. This BIOS feature controls whether the SRAT is made available to the operating system at boot up, or not.

When enabled, the BIOS will build the Static Resource Affinity Table (SRAT) and allow the operating system to access and use the information to optimize software thread allocation and memory usage.

When disabled, the BIOS will not build the Static Resource Affinity Table (SRAT). Alternative optimizations like Node Memory Interleaving can then be enabled.

If you are using an operating system that supports ACPI SRAT (e.g. Windows Server 2003, Windows XP SP2 with Physical Address Extensions or PAE enabled), it is recommended that you enable this BIOS feature to allow the operating system to dynamically allocate threads and memory according to the SRAT data.

Please note that you must disable Node Memory Interleave if you intend to enable this BIOS feature. Node Memory Interleave is a static optimization that cannot work in tandem with the dynamic optimizations that the operating system can perform using information from the ACPI SRAT.

If you are using an operating system that does not support ACPI SRAT (e.g. Windows 2000, Windows 98), it is recommended that you disable this BIOS feature, and possibly enable Node Memory Interleaving instead.

 

Details

Although multiple cores and increased clock speeds have increased computing performance, the processor bus and memory bus are becoming significant bottlenecks. Even SMP (Symmetric MultiProcessor) systems are limited by their dependence on a shared processor bus and memory bus.

To allow computing performance to scale better, system designers are building smaller systems, called nodes, each containing its own processors and memory. These nodes are connected using a high-speed cache-coherent interconnect, forming a larger system. This architecture is known as ccNUMA, short for Cache-Coherent Non-Uniform Memory Access.

The cache-coherent interconnect may be a network switch, or the interconnect within a multi-core processor (e.g. the HyperTransport bus between the two cores of a dual-core AMD Opteron processor). Any processor in any node can access and use memory in other nodes through this interconnect. In multi-core processors, this allows one core to read from another core’s memory.

However, while memory accesses within the node itself (or local memory accesses by one core) are fast, accesses to memory in other nodes (or another core’s memory) are several times slower. Therefore, improving performance on a ccNUMA system involves optimizations based on prioritizing threads to processors in the same node, and ensuring processors use the memory closest to them.

Older operating systems like Windows 2000 are not capable of determining the design of the system, and therefore cannot perform such optimizations. However, newer operating systems like Windows Server 2003 can readily identify the system’s hardware topology, and allocate software threads and memory in a more optimal fashion.
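
On the software side, this is the kind of optimization that topology information enables. As a sketch, the following C program uses Linux’s libnuma library (link with -lnuma) to allocate memory on the node local to the CPU the thread is running on; a NUMA-capable kernel and an installed libnuma are assumed.

    #define _GNU_SOURCE   /* for sched_getcpu() on glibc */
    #include <stdio.h>
    #include <sched.h>
    #include <numa.h>     /* libnuma */

    int main(void) {
        if (numa_available() < 0) {
            puts("no NUMA support on this system");
            return 1;
        }
        int node = numa_node_of_cpu(sched_getcpu());     /* our local node */
        void *local = numa_alloc_onnode(1 << 20, node);  /* 1 MB on that node */
        if (local) {
            printf("allocated 1 MB on node %d\n", node);
            numa_free(local, 1 << 20);
        }
        return 0;
    }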


This is where the ACPI Static Resource Affinity Table (SRAT) comes in. The SRAT stores topology information for all the processors and memory, describing the physical locations of the processors and memory in the system. It also describes what memory is hot-pluggable, and what is not.

The operating system scans the ACPI SRAT at boot time and uses the information to better allocate memory and schedule software threads for maximum performance. This BIOS feature controls whether the SRAT is made available to the operating system at boot up, or not.

When enabled, the BIOS will build the Static Resource Affinity Table (SRAT) and allow the operating system to access and use the information to optimize software thread allocation and memory usage.

When disabled, the BIOS will not build the Static Resource Affinity Table (SRAT). Alternative optimizations like Node Memory Interleaving can then be enabled.

If you are using an operating system that supports ACPI SRAT (e.g. Windows Server 2003, Windows XP SP2 with Physical Address Extensions or PAE enabled), it is recommended that you enable this BIOS feature to allow the operating system to dynamically allocate threads and memory according to the SRAT data.

Please note that you must disable Node Memory Interleave if you intend to enable this BIOS feature. Node Memory Interleave is a static optimization that cannot work in tandem with the dynamic optimizations that the operating system can perform using information from the ACPI SRAT.

If you are using an operating system that does not support ACPI SRAT (e.g. Windows 2000, Windows 98), it is recommended that you disable this BIOS feature, and possibly enable Node Memory Interleaving instead.


 


Fast R-W Turn Around – BIOS Optimization Guide

Fast R-W Turn Around

Common Options : Enabled, Disabled

 

Quick Review

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

As its name suggests, this BIOS feature allows you to skip that delay. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.

 

Details

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

Please note that this extra delay is only introduced when there is a switch from reads to writes. Switching from writes to reads will not suffer from such a delay.

As its name suggests, this BIOS feature allows you to skip that delay so that the memory controller can switch or “turn around” from reads to writes faster than normal. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.
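
The effect is easy to quantify with a toy command-stream model. In the C sketch below, the per-access cost and the turn-around penalty are assumed values chosen for illustration; note that only read-to-write switches incur the extra delay, matching the behaviour described above.

    #include <stdio.h>

    /* cmds is a string of 'R' (read) and 'W' (write) commands */
    int stream_cycles(const char *cmds, int rw_turnaround) {
        int cycles = 0;
        char prev = 0;
        for (const char *c = cmds; *c; c++) {
            if (prev == 'R' && *c == 'W')
                cycles += rw_turnaround;  /* penalty only on read-to-write */
            cycles += 4;                  /* assumed cost of the access itself */
            prev = *c;
        }
        return cycles;
    }

    int main(void) {
        const char *cmds = "RRWRWW";
        printf("normal: %d cycles, fast turn-around: %d cycles\n",
               stream_cycles(cmds, 2), stream_cycles(cmds, 0));
        /* normal: 28, fast: 24 -- two read-to-write switches at
           2 extra cycles each */
        return 0;
    }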

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.


 


DRAM Read Latch Delay – BIOS Optimization Guide

DRAM Read Latch Delay

Common Options : Enabled, Disabled

Quick Review

This BIOS feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. Start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. If your system becomes unstable after using the No Delay option, simply revert to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 

Details

This feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading. As such, a lone single-sided memory module provides the lowest DRAM load possible.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The longer the delay, the poorer the read performance of your memory modules. However, the stability of your memory modules won’t increase together with the length of the delay. Remember, the purpose of the feature is only to ensure that the memory controller will be able to latch onto the DRAM device with all sorts of DRAM loadings.


The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. It isn’t going to increase stability. In fact, it may just make things worse! So, start with 0.5ns and work your way up until your system stabilizes.
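
Expressed as a simple loop, that tuning procedure looks like the sketch below. The stress_test_passes() function is a hypothetical stand-in for running your usual memory stress tests at each setting, not a real API.

    #include <stdio.h>

    /* Hypothetical stand-in: pretend this particular system needs 1.0ns. */
    int stress_test_passes(double delay_ns) {
        return delay_ns >= 1.0;
    }

    int main(void) {
        const double steps[] = { 0.5, 1.0, 1.5 };   /* the three manual options */
        for (int i = 0; i < 3; i++) {
            if (stress_test_passes(steps[i])) {
                printf("use %.1fns -- the smallest delay that is stable\n",
                       steps[i]);
                return 0;
            }
        }
        puts("no manual setting was stable: fall back to Auto");
        return 0;
    }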

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. This forces the memory controller to latch onto the DRAM devices without delay, even if the BIOS presets indicate that a delay is required. Naturally, this can potentially cause stability problems if you actually have a heavy DRAM load. Therefore, if your system becomes unstable after using the No Delay option, simply revert to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 


Dynamic Counter – BIOS Optimization Guide

Dynamic Counter

Common Options : Enabled, Disabled

 

Quick Review of Dynamic Counter

The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 

Details of Dynamic Counter

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.
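
The three outcomes boil down to a few lines of logic. This classifier is a deliberately simplified, single-bank C sketch of the behaviour just described.

    #include <stdio.h>

    #define NO_OPEN_ROW -1

    const char *classify(int open_row, int requested_row) {
        if (open_row == requested_row) return "page hit";   /* no delay */
        if (open_row == NO_OPEN_ROW)   return "page miss";  /* activate only */
        return "page conflict";    /* worst case: close the open page first */
    }

    int main(void) {
        printf("%s\n", classify(7, 7));            /* page hit */
        printf("%s\n", classify(NO_OPEN_ROW, 7));  /* page miss */
        printf("%s\n", classify(3, 7));            /* page conflict */
        return 0;
    }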

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind Idle Cycle Limit.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 


32 Byte Granularity – BIOS Optimization Guide

32 Byte Granularity

Common Options : Auto, Enabled, Disabled

 

Quick Review

The 32 Byte Granularity BIOS option determines the burst length of the DRAM controller.

When set to Enabled, the DRAM controller will read or write in bursts of 32 bytes in length.

When set to Disabled, the DRAM controller will read or write in bursts of 64 bytes in length.

When set to Auto, the DRAM controller will use a burst length of 64 bytes if the DRAM interface is 128 bits wide (dual-channel), and a burst length of 32 bytes if the DRAM interface is 64 bits wide (single-channel).

If you are using a discrete graphics card with dedicated graphics memory, you should disable this BIOS option for optimal performance. This is true whether your system is running on dual-channel memory, or single-channel memory.

It is not recommended that you leave the BIOS option at the default setting of Auto when your system is running on a single memory channel. Doing so will cause the DRAM controller to default to a burst length of 32 bytes when a burst length of 64 bytes would be faster.

If you are using your motherboard’s onboard graphics chip which shares system memory, you should enable this BIOS option, but only if your system is running on a single memory channel. If it’s running on dual-channel memory, then you must disable this BIOS option.

 

Details

The 32 Byte Granularity BIOS option determines the burst length of the DRAM controller.

When set to Enabled, the DRAM controller will read or write in bursts of 32 bytes in length.

When set to Disabled, the DRAM controller will read or write in bursts of 64 bytes in length.

When set to Auto, the DRAM controller will use a burst length of 64 bytes if the DRAM interface is 128 bits wide (dual-channel), and a burst length of 32 bytes if the DRAM interface is 64 bits wide (single-channel).

Generally, the larger burst length of 64 bytes is faster. However, a 32-byte burst length is better if the system uses an onboard graphics chip that uses system memory as framebuffer and texture memory. This is because the graphics chip generates a large number of 32-byte system memory accesses.

Keep in mind, however, that the 32-byte burst length is only supported if the DRAM interface is 64 bits wide (single-channel). If the DRAM interface is 128 bits wide (dual-channel), the burst length must be 64 bytes.
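
The arithmetic behind the two settings is straightforward. This C sketch works out the numbers for a 64-byte cache-line fill on a 64-bit channel (8 bytes per beat); the figures are simple arithmetic, not measurements.

    #include <stdio.h>

    int main(void) {
        int bus_bytes_per_beat = 8;    /* 64-bit single-channel DRAM bus */
        int line = 64;                 /* typical cache line size in bytes */
        for (int burst = 32; burst <= 64; burst += 32) {
            int beats  = burst / bus_bytes_per_beat;
            int bursts = line / burst;
            printf("%2d-byte bursts: %d beats each, %d burst(s) per %d-byte line\n",
                   burst, beats, bursts, line);
        }
        /* 32-byte bursts: 4 beats each, 2 bursts per line
           64-byte bursts: 8 beats each, 1 burst per line */
        return 0;
    }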


If you are using a discrete graphics card with dedicated graphics memory, you should disable this BIOS option for optimal performance. This is true whether your system is running on dual-channel memory, or single-channel memory.

It is not recommended that you leave the BIOS option at the default setting of Auto when your system is running on a single memory channel. Doing so will cause the DRAM controller to default to a burst length of 32 bytes when a burst length of 64 bytes would be faster.

If you are using your motherboard’s onboard graphics chip which shares system memory, you should enable this BIOS option, but only if your system is running on a single memory channel. If it’s running on dual-channel memory, then you must disable this BIOS option.


 
