Tag Archives: SDRAM

Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow the DIMM to transfer 64 bits of data every clock cycle.

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.
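
Based on the per-width counts above, the number of DQS lines is simply 64 divided by the chip width. A quick check in C (purely illustrative):

#include <stdio.h>

/* A 64-bit DIMM has 64 DQ lines; per the counts above, the number
 * of DQS lines scales as 64 / chip width (x4 -> 16, x8 -> 8, x16 -> 4). */
int main(void)
{
    int widths[] = {4, 8, 16};
    for (int i = 0; i < 3; i++)
        printf("x%-2d chips: %d DQS lines\n", widths[i], 64 / widths[i]);
    return 0;
}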

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level, with a momentary rise in the relative ground voltage.

This reduces the quality of the signal, which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins gives the signal a higher voltage swing, improving signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All this increases signal noise, which ironically negates the increased signal strength provided by a higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people's list of priorities. In certain cases, it actually allows you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.
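
To make the options concrete, here is a minimal C sketch of how firmware might encode these four settings into a 2-bit register field. The register address and bit positions are assumptions made up for this sketch, not any chipset's actual programming model.

#include <stdint.h>

/* Hypothetical encoding of the four BIOS options into a 2-bit field. */
enum dq_drive_strength {
    DQ_NOT_REDUCED = 0x0,  /* full drive strength (default) */
    DQ_REDUCED_15  = 0x1,  /* ~15% reduction */
    DQ_REDUCED_30  = 0x2,  /* ~30% reduction */
    DQ_REDUCED_50  = 0x3   /* ~50% reduction */
};

/* Illustrative register address and bit layout: assumptions, not real. */
#define DRAM_CTL_REG   ((volatile uint32_t *)0xFED80000u)
#define DQ_DRIVE_SHIFT 4
#define DQ_DRIVE_MASK  (0x3u << DQ_DRIVE_SHIFT)

static void set_dq_drive_strength(enum dq_drive_strength s)
{
    uint32_t v = *DRAM_CTL_REG;
    v = (v & ~DQ_DRIVE_MASK) | ((uint32_t)s << DQ_DRIVE_SHIFT);
    *DRAM_CTL_REG = v;  /* read-modify-write preserves other fields */
}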

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 



Bank Swizzle Mode from The Tech ARP BIOS Guide

Bank Swizzle Mode

Common Options : Enabled, Disabled

 

Quick Review of Bank Swizzle Mode

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits.

It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enabled, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizing page conflicts in the processor’s L2 cache.

When set to Disabled, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 

Details of Bank Swizzle Mode

DRAM (and its various derivatives – SDRAM, DDR SDRAM, etc.) store data in cells that are organized in rows and columns.

Whenever a read command is issued to a memory bank, the appropriate row is first activated using the RAS (Row Address Strobe). Then, to read data from the target memory cell, the appropriate column is activated using the CAS (Column Address Strobe).

Multiple cells can be read from the same active row by applying the appropriate CAS signals. If data has to be read from a different row, the active row has to be deactivated before the appropriate row can be activated.

This takes time and reduces performance, so good memory controllers will try to schedule memory accesses to maximize the number of hits on active rows. One of the methods used to achieve that goal is the bank swizzle mode.

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits. It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

The XOR operation results in a value of true if only one of the two operands (inputs) is true. If both operands are simultaneously false or true, then it results in a value of false.
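
As a concrete illustration, this C sketch derives a 2-bit bank address by XORing the raw bank bits with higher-order physical address bits. The exact bit positions differ from chipset to chipset; the ones used here are assumptions.

#include <stdint.h>

/* Illustrative bank swizzle: XOR the two raw bank-address bits with
 * two higher-order physical address bits. Bit positions are assumed. */
static unsigned swizzled_bank(uint64_t phys_addr)
{
    unsigned bank = (unsigned)(phys_addr >> 13) & 0x3;  /* raw bank bits */
    unsigned hi   = (unsigned)(phys_addr >> 20) & 0x3;  /* upper bits    */
    return bank ^ hi;  /* spreads consecutive pages across banks */
}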


This characteristic of XORing the physical address to create the bank address reduces page conflicts by remapping the memory bank addresses so only one of two banks can be active at any one time.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enabled, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizing page conflicts in the processor’s L2 cache.

When set to Disabled, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 



SDRAM PH Limit from the Tech ARP BIOS Guide

SDRAM PH Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM PH Limit

SDRAM PH Limit is short for SDRAM Page Hit Limit. The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM PH Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.


The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.
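
The following C sketch shows the kind of arbitration logic this limit implies: consecutive page hits are served only up to the limit, after which a pending non-page-hit request gets its turn. This is a conceptual sketch, not any chipset's actual implementation.

#include <stdbool.h>

struct arbiter {
    int hit_streak;  /* consecutive page hits served      */
    int limit;       /* e.g. 8, matching the BIOS setting */
};

/* Returns true if the next page-hit request may be served, false if
 * a pending non-page-hit request should be serviced first. */
static bool serve_page_hit(struct arbiter *a, bool non_hit_pending)
{
    if (non_hit_pending && a->hit_streak >= a->limit) {
        a->hit_streak = 0;   /* break the streak to avoid starvation */
        return false;
    }
    a->hit_streak++;
    return true;
}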

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.


SDRAM Cycle Length from The Tech ARP BIOS Guide

SDRAM Cycle Length

Common Options : 2, 3 (SDR memory) or 1.5, 2, 2.5, 3 (DDR memory)

 

Quick Review of SDRAM Cycle Length

The SDRAM Cycle Length BIOS feature is the same as the SDRAM CAS Latency Time BIOS feature. It controls the delay (in clock cycles) between the assertion of the CAS signal and the availability of the data from the target memory cell. It also determines the number of clock cycles required for the completion of the first part of a burst transfer. In other words, the lower the CAS latency, the faster memory reads or writes can occur.

Please note that some memory modules may not be able to handle the lower latency and may lose data. Therefore, while it is recommended that you reduce the SDRAM CAS Latency Time to 2 or 2.5 clock cycles for better memory performance, you should increase it if your system becomes unstable.

Interestingly, increasing the CAS latency time will often allow the memory module to run at a higher clock speed. So, if you hit a snag while overclocking your SDRAM modules, try increasing the CAS latency time.

 

Details of SDRAM Cycle Length

Whenever a read command is issued, a memory row is activated using the RAS (Row Address Strobe). Then, to read data from the target memory cell, the appropriate column is activated using the CAS (Column Address Strobe).

Multiple cells can be read from the same active row by applying the appropriate CAS signals. However, there is a short delay after each assertion of the CAS signal before data can be read from the target memory cell. This delay is known as the CAS latency.

The appropriate delay for your memory module is reflected in its rated timings. In JEDEC specifications, it is the first number in the three or four number sequence. For example, if your memory module has the rated timings of 2-3-4-7, its rated CAS latency would be 2 clock cycles.
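
Because CAS latency is counted in clock cycles, its real cost depends on the memory clock. A quick worked example in C (the clock speeds are just illustrative values):

#include <stdio.h>

/* CAS latency in nanoseconds = cycles * (1000 / clock in MHz). */
static double cas_ns(double cas_cycles, double clock_mhz)
{
    return cas_cycles * 1000.0 / clock_mhz;
}

int main(void)
{
    printf("CL2   @ 133 MHz: %.1f ns\n", cas_ns(2.0, 133.0)); /* ~15.0 ns */
    printf("CL2.5 @ 166 MHz: %.1f ns\n", cas_ns(2.5, 166.0)); /* ~15.1 ns */
    return 0;
}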

The SDRAM Cycle Length BIOS feature is the same as the SDRAM CAS Latency Time BIOS feature. It controls the delay (in clock cycles) between the assertion of the CAS signal and the availability of the data from the target memory cell. It also determines the number of clock cycles required for the completion of the first part of a burst transfer. In other words, the lower the CAS latency, the faster memory reads or writes can occur.


Because column activation occurs every time a new memory cell is read from, the effect of CAS latency on memory performance is significant, especially with SDR SDRAM. Its effect is less obvious in DDR SDRAM.

Please note that some memory modules may not be able to handle the lower latency and may lose data. Therefore, while it is recommended that you reduce the SDRAM CAS Latency Time to 2 or 2.5 clock cycles for better memory performance, you should increase it if your system becomes unstable.

Interestingly, increasing the CAS latency time will often allow the memory module to run at a higher clock speed. So, if you hit a snag while overclocking your SDRAM modules, try increasing the CAS latency time.

This is particularly true for DDR SDRAM memory, since CAS latency has much less effect on performance with such memory than with the older SDR memory. The improvement in overclockability with higher CAS latencies cannot be overstated. If you are interested in overclocking your DDR SDRAM modules, you might want to consider increasing the CAS latency. The huge increase in overclockability far outweighs the minor loss in performance.



DCLK Feedback Delay – The Tech ARP BIOS Guide

DCLK Feedback Delay

Common Options : 0 ps, 150 ps, 300 ps, 450 ps, 600 ps, 750 ps, 900 ps, 1050 ps

 

Quick Review of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

By comparing the waveforms of DCLK and its feedback signal, it can be determined whether both clocks are in the same phase. If the clocks are not in the same phase, data may be lost, resulting in system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There’s no performance advantage in increasing or reducing the amount of feedback delay.

 

Details of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

This feedback signal is used by the SDRAM controller to determine when it can write data to the SDRAM module. The main idea of this system is to ensure that both clock phases are properly aligned for the proper delivery of data.

By comparing the waveforms of DCLK and its feedback signal, it can be determined whether both clocks are in the same phase. If the clocks are not in the same phase, data may be lost, resulting in system instability.


The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.
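
In practice, tuning this boils down to sweeping the eight available delay steps until the system passes a memory stability test. A hedged sketch in C; the two hooks are placeholders for whatever BIOS setting and stress test you actually use:

/* Sweep the available DCLK feedback delay settings (in picoseconds)
 * until a memory stress test passes. Both hooks are assumed. */
extern void set_dclk_feedback_delay(int ps);  /* assumed BIOS hook */
extern int  memory_test_passes(void);         /* assumed test hook */

static const int delays_ps[] = {0, 150, 300, 450, 600, 750, 900, 1050};

static int find_stable_delay(void)
{
    for (unsigned i = 0; i < sizeof delays_ps / sizeof delays_ps[0]; i++) {
        set_dclk_feedback_delay(delays_ps[i]);
        if (memory_test_passes())
            return delays_ps[i];  /* first stable setting wins */
    }
    return -1;                    /* no stable setting found */
}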

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There’s no performance advantage in increasing or reducing the amount of feedback delay.



SDRAM Page Hit Limit – The Tech ARP BIOS Guide

SDRAM Page Hit Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM Page Hit Limit

The SDRAM Page Hit Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM Page Hit Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.


The SDRAM Page Hit Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.


In-Order Queue Depth – The BIOS Optimization Guide

In-Order Queue Depth

Common Options : 1, 4, 8, 12

 

Quick Review of In-Order Queue Depth

The In-Order Queue Depth BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12).

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.

Interestingly, this feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

 

Details of In-Order Queue Depth

For greater performance at high clock speeds, motherboard chipsets now feature a pipelined processor bus. The multiple stages in this pipeline can also be used to queue up multiple commands to the processor. This command queuing greatly improves performance because it effectively masks the latency of the processor bus. In optimal situations, the amount of latency between each succeeding command can be reduced to only a single clock cycle!

The In-Order Queue Depth BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12). This is because this BIOS feature does not actually allow you to select the number of commands that can be queued.

It merely allows you to enable or disable the command queuing capability of the processor bus pipeline, because the number of commands that can be queued depends entirely on the number of stages in the pipeline. As such, you can expect to see this feature associated with options like Enabled and Disabled in some motherboards.

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

As you can see, this reduces performance as the processor has to wait for each command to filter down the pipeline. The severity of the effect depends greatly on the depth of the pipeline. The deeper the pipeline, the greater the effect.

If the second queue depth option is 4, this means that the processor bus pipeline has 4 stages in it. Selecting this option allows the queuing of up to 4 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

If the second queue depth option is 8, this means that the processor bus pipeline has 8 stages in it. Selecting this option allows the queuing of up to 8 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

If the second queue depth option is 12, this means that the processor bus pipeline has 12 stages in it. Selecting this option allows the queuing of up to 12 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

Please note that the latency of only 1 clock cycle is only possible if the pipeline is completely filled up. If the pipeline is only partially filled up, then the latency affecting one or more of the commands will be more than 1 clock cycle. Still, the average latency for each command will be much lower than it would be with command queuing disabled.
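
The arithmetic behind this is ordinary pipelining math: without queuing, N commands through a D-stage pipeline take N x D cycles, while a fully queued pipeline takes D + (N - 1) cycles. A quick check in C:

#include <stdio.h>

int main(void)
{
    int depth = 12, n = 100;  /* 12-stage pipeline, 100 commands */

    /* Queue depth 1: every command waits the full pipeline latency. */
    int unqueued = n * depth;        /* 1200 cycles */

    /* Queuing enabled: after the pipeline fills, one command
     * completes per clock cycle. */
    int queued = depth + (n - 1);    /* 111 cycles */

    printf("unqueued: %d cycles, queued: %d cycles\n", unqueued, queued);
    return 0;
}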

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.


Interestingly, this feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing. This may reduce performance but it will make the processor more stable and may allow it to be further overclocked.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.


2T Command – BIOS Optimization Guide

2T Command

Common Options : Enabled, Disabled, Auto

 

Quick Review of 2T Command

The 2T Command BIOS feature allows you to select the delay from the assertion of the Chip Select signal to the time the memory controller starts sending commands to the memory bank. The lower the value, the sooner the memory controller can send commands out to the activated memory bank.

When this feature is disabled, the memory controller will only insert a command delay of one clock cycle or 1T.

When this feature is enabled, the memory controller will insert a command delay of two clock cycles or 2T.

The Auto option allows the memory controller to use the memory module’s SPD value for command delay.

If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner.

However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the “bad commands” that result will cause data loss and corruption.

It is recommended that you try disabling 2T Command for better memory performance. But if you face stability issues, enable this BIOS feature.

 

Details of 2T Command

Whenever there is a memory read request from the operating system, the memory controller does not actually receive the physical memory addresses where the data is located. It is only given a virtual address space which it has to translate into physical memory addresses. Only then can it issue the proper read commands. This produces a slight delay at the start of every new memory transaction.

Instead of immediately issuing the read commands, the memory controller instead asserts the Chip Select signal to the physical bank that contains the requested data. What this Chip Select signal does is activate the bank so that it is ready to accept the commands. In the meantime, the memory controller will be busy translating the memory addresses. Once the memory controller has the physical memory addresses, it starts issuing read commands to the activated memory bank.

As you can see, the command delay is not caused by any latency inherent in the memory module. Rather, it is determined by the time taken by the memory controller to translate the virtual address space into physical memory addresses.

Naturally, because the delay is due to translation of addresses, the memory controller will require more time to translate addresses in high density memory modules due to the higher number of addresses. The memory controller will also take a longer time if there is a large number of physical banks.

The 2T Command BIOS feature allows you to select the delay from the assertion of the Chip Select signal to the time the memory controller starts sending commands to the memory bank. The lower the value, the sooner the memory controller can send commands out to the activated memory bank.

When this feature is disabled, the memory controller will only insert a command delay of one clock cycle or 1T.

When this feature is enabled, the memory controller will insert a command delay of two clock cycles or 2T.

The Auto option allows the memory controller to use the memory module’s SPD value for command delay.

If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner.


However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the “bad commands” that result will cause data loss and corruption.

Fortunately, all unbuffered SDRAM modules are capable of a 1T command delay with up to four memory banks per channel. Beyond that, a 2T command delay may be required. However, support for a 1T command delay varies from chipset to chipset, and even from one motherboard model to another. You should consult your motherboard manufacturer to see if your motherboard supports a command delay of 1T.
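
As a rough rule of thumb drawn from the paragraph above, the choice reduces to a one-line check; the four-bank threshold is the general guideline, not a universal limit:

/* Heuristic: unbuffered modules generally manage a 1T command delay
 * with up to four memory banks per channel; beyond that, use 2T.
 * Verify against your motherboard's documentation. */
static int command_delay_cycles(int banks_per_channel)
{
    return (banks_per_channel <= 4) ? 1 : 2;
}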

It is recommended that you try disabling 2T Command for better memory performance. But if you face stability issues, enable this BIOS feature.

 


Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long, as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (i.e. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.
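
A minimal sketch of such an idle counter, assuming one counter per bank ticked every clock cycle; real controllers implement this in hardware:

#include <stdbool.h>

struct bank {
    bool has_open_page;
    int  idle_cycles;
};

/* Close and precharge the bank's open page once it has idled for
 * 'limit' cycles. A limit of 0 closes it at the first idle cycle;
 * a negative limit models the "Infinite" setting. Illustrative only. */
static void tick(struct bank *b, bool accessed_this_cycle, int limit)
{
    if (!b->has_open_page)
        return;
    if (accessed_this_cycle) {
        b->idle_cycles = 0;        /* an access resets the counter */
        return;
    }
    if (limit >= 0 && ++b->idle_cycles >= limit) {
        b->has_open_page = false;  /* close page, precharge bank   */
        b->idle_cycles = 0;
    }
}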


This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages opened longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the precharging of the memory bank before the set refresh cycle, not to actually delay the refresh cycle.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are closed and the banks precharged as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (and satisfied immediately) into a page miss, which will have to wait for the bank to precharge and the same page to be reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.

With a 0T setting, data requests are also likely to get stalled, because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the spatial locality concept, where if one data bit is read, chances are high that the next data bit will also need to be read. That’s why closing open pages prematurely with this BIOS feature will most likely reduce performance in desktop applications.

On the other hand, using a 0T or 16T idle cycle limit will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often will also ensure that, in the event of a very long read, the pages can be kept open long enough to fulfil the data request.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth, and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.


This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (i.e. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.


DRAM Termination – The BIOS Optimization Guide

DRAM Termination

Common Options : 50 Ohms, 75 Ohms, 150 Ohms (DDR2) / 40 Ohms, 60 Ohms, 120 Ohms (DDR3)

 

Quick Review of DRAM Termination

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal and higher power consumption.

The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory:

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.

 

Details of DRAM Termination

Like a ball thrown against a wall, electrical signals reflect (bounce) back when they reach the end of a transmission path. They also reflect at points where there is a change in impedance, e.g. at connections to DRAM devices or a bus. These reflected signals are undesirable because they distort the actual signal, impairing the signal quality and the data being transmitted.

Prior to the introduction of DDR2 memory, motherboard designers used line termination resistors at the end of the DRAM signal lines to reduce signal reflections. However, these resistors were only partially effective because they could not reduce reflections generated by the stub lines that lead to the individual DRAM chips on the memory module (see illustration below). Even so, this method worked well enough with the lower operating frequencies and higher signal voltages of SDRAM and DDR SDRAM modules.

Line termination resistors on the motherboard (Courtesy of Rambus)

However, the higher speeds (and lower signal voltages) of DDR2 and DDR3 memory require much better signal quality, and these high-speed modules have much lower tolerances for noise. The problem is also compounded by the higher number of memory modules used. Line termination resistors are no longer good enough to tackle the problem of signal reflections. This is where On-Die Termination (ODT) comes in.

On-Die Termination shifts the termination resistors from the motherboard to the DRAM die itself. These resistors can better suppress signal reflections, providing a much better signal-to-noise ratio in DDR2 and DDR3 memory. This allows for much higher clock speeds at much lower voltages.

It also reduces the cost of motherboard designs. In addition, the impedance value of the termination resistors can be adjusted, or even turned off via the memory module’s Extended Mode Register Set (EMRS).

On-die termination (Courtesy of Rambus)

Unlike the termination resistors on the motherboard, the on-die termination resistors can be turned on and off as required. For example, when a DIMM is inactive, its on-die termination resistors turn on to prevent signals from the memory controller from reflecting to the active DIMMs. The impedance value of the resistors is usually programmed by the BIOS at boot-time, so the memory controller only turns them on or off (unless the system includes a self-calibration circuit).

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal, and higher power consumption.


The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory:

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.
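
For DDR2, the Samsung guidelines above reduce to a small lookup, sketched here in C. The values come straight from the case study; the function name and parameters are made up for illustration.

/* ODT impedance (ohms) per the Samsung DDR2 case-study guidelines.
 * speed_mtps is the DDR2 data rate (400/533/667/800). For DDR3, the
 * text suggests simply starting at 40 ohms and adjusting upwards. */
static int odt_ohms_ddr2(int modules_per_channel, int speed_mtps)
{
    if (modules_per_channel == 1)
        return 150;
    return (speed_mtps <= 533) ? 75 : 50;  /* two modules per channel */
}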


SDRAM Precharge Control – The BIOS Optimization Guide

SDRAM Precharge Control

Common Options : Enabled, Disabled

 

Quick Review of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank. This is useful in cases where subsequent data requests will also result in page misses.

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled, as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity, although it is only useful if you have chosen an SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.

 

Details of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This naturally improves performance.

But if a read request cannot be satisfied by any of the four open pages, there are two possibilities. Either one page is closed and the correct page opened; or all open pages are closed and new pages opened up. Either way, the read request suffers the full latency penalty.


The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur. Naturally, this greatly impacts memory performance.

Fortunately, after the four full latency reads, the memory controller can often predict what pages will be needed next. It can then open them for minimum latency reads. This somewhat reduces the negative effect of consecutive page misses.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank.

This is useful in cases where subsequent data requests will also result in page misses. This is because the memory banks will already be precharged and ready to be activated. There is no need to wait for the memory banks to precharge before they can be activated. However, it also means that you won’t be able to benefit from data accesses that could have been satisfied by the previously opened pages.
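
Conceptually, the two settings amount to a one-line choice in the controller's page-miss handler. The hooks below are placeholders, not chipset code:

extern void close_page_in_target_bank(void);  /* assumed hooks */
extern void precharge_all_banks(void);
extern void activate_required_row(void);

enum precharge_policy { CLOSE_ONE_PAGE, CLOSE_ALL_PAGES };

/* On a page miss: enabled = close only the conflicting page;
 * disabled = issue an All Banks Precharge Command. */
static void on_page_miss(enum precharge_policy p)
{
    if (p == CLOSE_ONE_PAGE)
        close_page_in_target_bank();  /* other open pages stay hot */
    else
        precharge_all_banks();        /* every open page closes    */
    activate_required_row();
}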

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled, as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity, although it is only useful if you have chosen an SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.


LD-Off Dram RD/WR Cycles – The BIOS Optimization Guide

LD-Off Dram RD/WR Cycles

Common Options : Delay 1T, Normal

 

Quick Review

The LD-Off Dram RD/WR Cycles BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle.

When set to Normal, the memory controller issues both the memory address and the read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.

 

Details

At the beginning of a memory transaction (read or write), the memory controller normally sends the address and command signals simultaneously to the memory bank. This allows for the quickest activation of the memory bank.

However, this may cause problems with certain memory modules. In these memory modules, the target row may not be activated quickly enough to allow the memory controller to read from or write to it. This is where the LD-Off Dram RD/WR Cycles BIOS feature comes in.


This BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle. This ensures there is enough time for the memory bank to be activated before the read or write command arrives.

When set to Normal, the memory controller issues both the memory address and the read/write command simultaneously.
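
The difference is just one clock cycle of lead-off per transaction, as this small sketch shows (the names are illustrative):

enum leadoff_mode { LEADOFF_NORMAL, LEADOFF_DELAY_1T };

/* Cycle (relative to the start of the transaction) at which the
 * read/write command is issued; the address always goes out at 0. */
static int command_issue_cycle(enum leadoff_mode m)
{
    return (m == LEADOFF_DELAY_1T) ? 1 : 0;
}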

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.


ZADAK Announces The ZADAK511 Shield RGB Series

Taipei, 22 December 2016: ZADAK Lab is thrilled to be the only brand to have overcome the challenges of developing multi-colored LED-illuminated SSDs and memory modules, with its new SHIELD RGB series.

The ZADAK team is happy to announce that the SHIELD RGB DDR4 and SHIELD RGB SSD are ground-breaking products unlike anything else in the market right now. ZADAK511 SHIELD RGB memory and SSD set the standard for design and functionality in both markets for the next year or two. This is truly a revolution brought on by the SHIELD series from ZADAK511.

ZADAK511 Shield RGB Series: SHIELD RGB DDR4 and SHIELD RGB SSD

ZADAK LAB is the only brand with an RGB dual-interface SSD. The patented SHIELD RGB Dual-Interface SSD is the world’s first and only SSD featuring dual-interface connections for both SATAIII and USB3.1 Type-C. It is one of the most beautiful designs ever seen on an SSD, and is easily distinguishable from other SSDs.

The ZADAK team utilized various metal materials, forged with exquisite workmanship, to create the multi-level design of the upper cover of the SHIELD RGB SSD, making it pop with a 3D effect. This makes the product very exciting to look at, and simply an amazing adornment for PC enthusiasts, standing as a legend amongst SSDs because of its craftsmanship.

Function-wise, the high-speed SATAIII and USB3.1 Type-C Gen2 deliver superior performance from the SHIELD SSD’s MLC NAND flash which underwent rigorous testing for guaranteed reliability so your files are safe in the drive. The ZADAK511 SHIELD RGB SSD is rated for up to 550MB/s read and 480MB/s write speed with capacities up to 480GB.

The ZADAK511 SHIELD SSD also comes with the ZArsenal software, which regulates the colors and lighting patterns and can sync with all types of gaming motherboards. It also allows instant firmware updates, and displays status information about the SSD, like health status, disk information, capacity and alerts, for maximum peace of mind, so that measures can be taken when issues occur.


DRAM Bus Selection – The BIOS Optimization Guide

DRAM Bus Selection

Common Options : Auto, Single Channel, Dual Channel

 

Quick Review

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module, or your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.

 

Details

Many motherboards now come with dual memory channels. Each channel can be accessed by the memory controller concurrently, thereby improving memory throughput, as well as reducing memory latency.

Depending on the chipset and motherboard design, each memory channel may support one or more DIMM slots. But for the dual channel feature to work properly, at least one DIMM slot from each memory channel must be filled.
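
As a rough illustration of the bandwidth gain, the sketch below computes the peak theoretical throughput of one and two 64-bit channels, assuming DDR400 memory purely as an example:

```python
# Peak theoretical bandwidth of one vs two 64-bit DDR channels.
# All numbers are illustrative assumptions (DDR400, i.e. 400 MT/s).
BUS_WIDTH_BITS = 64
TRANSFERS_PER_SEC = 400_000_000   # DDR400: 400 million transfers/s

def peak_bandwidth_gbs(channels: int) -> float:
    bytes_per_transfer = BUS_WIDTH_BITS // 8
    return channels * bytes_per_transfer * TRANSFERS_PER_SEC / 1e9

print(f"Single channel: {peak_bandwidth_gbs(1):.1f} GB/s")  # 3.2 GB/s
print(f"Dual channel:   {peak_bandwidth_gbs(2):.1f} GB/s")  # 6.4 GB/s
```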

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module, or your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.


Rank Interleave – The BIOS Optimization Guide

Rank Interleave

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.

Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 

Details

Rank is a new term used to differentiate physical banks on a particular memory module from internal banks within the memory chip. Single-sided memory modules have a single rank while double-sided memory modules have two ranks.

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.


Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

Please note that Rank Interleave currently works only if you are using double-sided memory modules. Rank Interleave will not work with two or more single-sided memory modules. The interleaving ranks must be on the same memory module.
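
The masking effect can be illustrated with a toy timing model (the access and refresh cycle counts below are invented for illustration and do not correspond to real DRAM parameters):

```python
# Toy model: total time to service N accesses when refresh cycles
# either stall the bus (single rank) or are hidden by the other
# rank (rank interleave). Timings are made-up illustration values.
ACCESS_CYCLES = 4
REFRESH_CYCLES = 10
REFRESH_EVERY = 8          # a refresh is due every 8 accesses

def total_cycles(n_accesses: int, interleaved: bool) -> int:
    cycles = n_accesses * ACCESS_CYCLES
    refreshes = n_accesses // REFRESH_EVERY
    if not interleaved:
        # Single rank: every refresh stalls all accesses.
        cycles += refreshes * REFRESH_CYCLES
    # Interleaved: the idle rank refreshes while the other rank is
    # being accessed, so (ideally) the refresh cost is hidden.
    return cycles

print("Single rank :", total_cycles(64, interleaved=False), "cycles")
print("Two ranks   :", total_cycles(64, interleaved=True), "cycles")
```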

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 


Digital Locked Loop (DLL) – The BIOS Optimization Guide

Digital Locked Loop (DLL)

Common Options : Enabled, Disabled

 

Quick Review

The Digital Locked Loop (DLL) BIOS option is a misnomer for the Delay-Locked Loop (DLL). It is a digital circuit that aligns the data strobe signal (DQS) with the data signal (DQ) to ensure proper data transfer in DDR, DDR2, DDR3 and DDR4 memory. However, it can be disabled to allow the memory chips to run beyond a fixed frequency range.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Details

DDR, DDR2, DDR3 and DDR4 SDRAM deliver data on both rising and falling edges of the signal. This requires much tighter timings, necessitating the use of a data strobe signal (DQS) generated by differential clocks. This data strobe is then aligned to the data signal (DQ) using a delay-locked loop (DLL) circuit.

The DQS and DQ signals must be aligned with minimal skew to ensure proper data transfer. Otherwise, data transferred on the DQ signal will be read incorrectly, causing the memory contents to be corrupted and the system to malfunction.
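
Why the timing budget tightens with clock speed can be sketched numerically. In this toy model, all window and skew figures are invented for illustration; real DDR timing budgets also include setup and hold margins that are omitted here:

```python
# Toy model of DQS-to-DQ alignment. All numbers are illustrative.
def data_valid_window_ns(clock_mhz: float) -> float:
    # DDR transfers data twice per clock, so the valid window is
    # roughly half a clock period (ignoring setup/hold margins).
    return 1000 / clock_mhz / 2

def read_ok(clock_mhz: float, skew_ns: float) -> bool:
    # The strobe must land within the data eye; a fixed skew
    # eats into that margin, which shrinks as the clock speeds up.
    return skew_ns < data_valid_window_ns(clock_mhz) / 2

for mhz in (100, 200, 400):
    print(f"{mhz} MHz, 0.7 ns skew -> read ok: {read_ok(mhz, 0.7)}")
```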

However, the delay-locked loop circuit of every DDR, DDR2, DDR3 or DDR4 chip is tuned for a certain fixed frequency range. If you run the chip beyond that frequency range, the DLL circuit may not work correctly. That's why DDR, DDR2, DDR3 and DDR4 SDRAM chips can have problems running at clock speeds slower than what they are rated for.


If you encounter such a problem, it is possible to disable the DLL. Disabling the DLL will allow the chip to run beyond the frequency range for which the DLL is tuned. This is where the Digital Locked Loop (DLL) BIOS feature comes in.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

Note : The Digital Locked Loop (DLL) BIOS option is a misnomer for the Delay-Locked Loop (DLL).

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 


Chipkill – The BIOS Optimization Guide

Chipkill

Common Options : Enabled, Disabled

 

Quick Review

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you already spent so much money buying ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should only disable this BIOS feature if your system only uses a single ECC module.

 

Details

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

Normal ECC technology makes use of eight ECC bits for every 64 bits of data, using the Hamming code. This allows it to detect all single-bit and double-bit errors, but correct only single-bit errors.

IBM’s Chipkill technology makes use of the BCH (Bose, Ray-Chaudhuri, Hocquenghem) code, with sixteen ECC bits for every 128 bits of data. It can detect all single-symbol and double-symbol errors, but correct only single-symbol errors.

A symbol, by the way, is a group of 4 bits. A single-symbol error is any error combination within that symbol. That means a single-symbol error can consist of anything from one to four corrupted bits. Chipkill is therefore capable of detecting and correcting more errors than standard ECC.
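
To see why a symbol-based code is stronger, the sketch below counts how many 4-bit symbols of a 128-bit word are affected by an error pattern. It is a simplified illustration of the symbol concept, not IBM's actual BCH implementation:

```python
# Count corrupted 4-bit symbols in a 128-bit data word.
# Simplified illustration of the symbol concept, not a real BCH code.
SYMBOL_BITS = 4
WORD_BITS = 128

def corrupted_symbols(error_mask: int) -> int:
    """error_mask: an int whose set bits mark flipped data bits."""
    count = 0
    for i in range(0, WORD_BITS, SYMBOL_BITS):
        symbol = (error_mask >> i) & ((1 << SYMBOL_BITS) - 1)
        if symbol:
            count += 1
    return count

# Four flipped bits inside one symbol: a single-symbol error,
# correctable by Chipkill but uncorrectable by standard ECC.
print(corrupted_symbols(0b1111))            # -> 1

# Two flipped bits in two different symbols: a double-symbol
# error, which Chipkill can detect but not correct.
print(corrupted_symbols(0b1 | (1 << 5)))    # -> 2
```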

Unlike standard ECC, Chipkill can only be used in systems with two channels of ECC memory (128-bits data width configuration). This is because it requires sixteen ECC bits, which can only be obtained using two ECC memory modules. However, it won’t work if you place both ECC modules in the same memory channel. Both memory channels must be active for Chipkill to work.


This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you already spent so much money buying ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should only disable this BIOS feature if your system only uses a single ECC module.

 


Dynamic Idle Cycle Counter – The BIOS Optimization Guide

Dynamic Idle Cycle Counter

Common Options : Enabled, Disabled

 

Quick Review

The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.

 

Details

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind DRAM Idle Timer.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.
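
The feedback idea can be sketched in a few lines. Note that the step sizes and bounds below are invented for illustration; the article does not specify how aggressively AMD's mechanism adjusts the limit:

```python
# Toy feedback loop for the dynamic idle-cycle limit.
# Step sizes and bounds are illustrative assumptions.
class DynamicIdleCounter:
    def __init__(self, initial_limit: int):
        self.limit = initial_limit        # start from DRAM Idle Timer

    def on_page_miss(self):
        # The page might have hit if kept open longer: raise the limit.
        self.limit = min(self.limit + 1, 256)

    def on_page_conflict(self):
        # The page should have been closed sooner: lower the limit.
        self.limit = max(self.limit - 1, 0)

ctr = DynamicIdleCounter(initial_limit=16)
for event in ["miss", "miss", "conflict", "miss", "conflict"]:
    if event == "miss":
        ctr.on_page_miss()
    else:
        ctr.on_page_conflict()
print(ctr.limit)   # 17: nudged up after 3 misses, 2 conflicts
```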


The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.


Fast R-W Turn Around – BIOS Optimization Guide

Fast R-W Turn Around

Common Options : Enabled, Disabled

 

Quick Review

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

As its name suggests, this BIOS feature allows you to skip that delay. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.

 

Details

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

Please note that this extra delay is only introduced when there is a switch from reads to writes. Switching from writes to reads will not suffer from such a delay.
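
One way to see the cost is to count the turnaround penalties in a stream of commands. The sketch below assumes a one-cycle penalty per read-to-write switch, purely for illustration:

```python
# Count extra cycles added by read-to-write turnaround delays.
# The one-cycle penalty is an illustrative assumption.
TURNAROUND_PENALTY = 1

def turnaround_cycles(commands: list[str]) -> int:
    extra = 0
    for prev, curr in zip(commands, commands[1:]):
        if prev == "R" and curr == "W":   # only read->write pays
            extra += TURNAROUND_PENALTY
    return extra

stream = ["R", "R", "W", "W", "R", "W", "R"]
print(turnaround_cycles(stream))  # 2 read->write switches -> 2 cycles
```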

As its name suggests, this BIOS feature allows you to skip that delay so that the memory controller can switch or “turn around” from reads to writes faster than normal. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.


DRAM Read Latch Delay – BIOS Optimization Guide

DRAM Read Latch Delay

Common Options : Auto, No Delay, 0.5ns, 1.0ns, 1.5ns

Quick Review

This BIOS feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. Start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. If your system becomes unstable after using the No Delay option, simply revert back to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 

Details

This feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading. As such, a lone single-sided memory module provides the lowest DRAM load possible.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The longer the delay, the poorer the read performance of your memory modules. However, the stability of your memory modules won’t increase together with the length of the delay. Remember, the purpose of the feature is only to ensure that the memory controller will be able to latch onto the DRAM device with all sorts of DRAM loadings.


The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. It isn’t going to increase stability. In fact, it may just make things worse! So, start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. This forces the memory controller to latch onto the DRAM devices without delay, even if the BIOS presets indicate that a delay is required. Naturally, this can potentially cause stability problems if you actually have a heavy DRAM load. Therefore, if your system becomes unstable after using the No Delay option, simply revert back to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 


SDRAM Burst Len – BIOS Optimization Guide

SDRAM Burst Len

Common Options : 4, 8

 

Quick Review

This BIOS feature allows you to control the length of a burst transaction.

When this feature is set to 4, a burst transaction can comprise up to four reads or four writes.

When this feature is set to 8, a burst transaction can comprise up to eight reads or eight writes.

As the initial CAS latency is fixed for each burst transaction, a longer burst transaction allows more data to be read or written with less delay than a shorter one. Therefore, a burst length of 8 will be faster than a burst length of 4.

It is therefore recommended that you select the longer burst length of 8 for better performance.

 

Details

This is the same as the SDRAM Burst Length BIOS feature, only with a weirdly truncated name. Surprisingly, many manufacturers are using it. Why? Only they know. 🙂

Burst transactions improve SDRAM performance by allowing the reading or writing of whole ‘blocks’ of contiguous data with only one column address.

In a burst sequence, only the first read or write transfer incurs the initial latency of activating the column. The subsequent reads or writes in that burst sequence can then follow behind without any further delay. This allows blocks of data to be read or written with far less delay than non-burst transactions.

For example, a burst transaction of four writes can incur the following latencies : 4-1-1-1. In this example, the total time it takes to transact the four writes is merely 7 clock cycles.

In contrast, if the four writes are not written by burst transaction, they will incur the following latencies : 4-4-4-4. The time it takes to transact the four writes becomes 16 clock cycles, which is 9 clock cycles longer or more than twice as slow as a burst transaction.

This is where the SDRAM Burst Len BIOS feature comes in. It is a BIOS feature that allows you to control the length of a burst transaction.

When this feature is set to 4, a burst transaction can comprise up to four reads or four writes.

When this feature is set to 8, a burst transaction can comprise up to eight reads or eight writes.

As the initial CAS latency is fixed for each burst transaction, a longer burst transaction allows more data to be read or written with less delay than a shorter one. Therefore, a burst length of 8 will be faster than a burst length of 4.


For example, if the memory controller wants to write a block of contiguous data eight units long to memory, it can do it as a single burst transaction 8 units long or two burst transactions, each 4 units in length. The hypothetical latencies incurred by the single 8-unit long transaction would be 4-1-1-1-1-1-1-1 with a total time of 11 clock cycles for the entire transaction.

But if the eight writes are written to memory as two burst transactions of 4 units in length, the hypothetical latencies incurred would be 4-1-1-1-4-1-1-1. The time taken for the two transactions to complete would be 14 clock cycles. As you can see, this is slower than a single transaction, 8 units long.
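
The cycle counts quoted in these examples can be reproduced with a little arithmetic, using the article's hypothetical 4-cycle initial latency and 1 cycle per subsequent transfer:

```python
# Reproduce the hypothetical latency figures from the text:
# each burst pays a 4-cycle initial latency, then 1 cycle per
# remaining transfer in that burst.
INITIAL, PER_TRANSFER = 4, 1

def total_cycles(units: int, burst_len: int) -> int:
    bursts = units // burst_len
    return bursts * (INITIAL + (burst_len - 1) * PER_TRANSFER)

print(total_cycles(4, burst_len=4))   # 7  cycles (4-1-1-1)
print(total_cycles(4, burst_len=1))   # 16 cycles (4-4-4-4)
print(total_cycles(8, burst_len=8))   # 11 cycles (4-1-1-1-1-1-1-1)
print(total_cycles(8, burst_len=4))   # 14 cycles (4-1-1-1-4-1-1-1)
```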

Therefore, it is recommended that you select the longer burst length of 8 for better performance.

 


Team Group Delta Luminous Memory Series Launched

Team Group Inc. today announced the launch of its all-new generation of luminous memory for gaming – Delta. Stability and performance are already basic requirements for a high-speed memory. With the ever-growing community of gamers and modders, the appearance and visual features of a product have become the new focus of gamers’ attention.

The all new Delta series from Team Group has inherited the excellence of Team Group’s overclocking and mainstream memories. In addition, through the recent collaboration with AVEXIR, we are able to combine its patented LED lighting technology with our aluminum forged high-efficiency heat spreader to create Delta, a bionic memory with a pulse-like LED light effect.

Delta is Team Group’s first luminous memory series. It uses an exclusive LED luminous cooling system to provide a steady pulse rhythm and soothe a gamer’s nervous tension during an intense game, so the gamer is able to win in an optimum state.

To satisfy gamers’ demand for high-performance memories, the Delta series launches with two high-specification models, DDR4 2400 CL15-15-15-35 and DDR4 3000 CL16-16-16-36. It also comes in two packages, 4GBx2 and 8GBx2, for gamers to choose from. With the computer modding trend on the rise around the globe, Delta is not only a gamer’s best companion; its red and black heat spreader design, with breathing LED lighting in three colors of red, white and blue, also provides modders with more options to build various styles of PC.


Team Group’s memory module products have persistently focused on offering the best performance experience. We design the Xtreem, Delta and Elite/Elite Plus series specifically for overclocking, gaming/modding and mainstream users respectively. Moreover, in the future we will create products targeting different groups of users, with different styles, performance and features, to meet the needs of more users.

As a leading provider of memory storage products and mobile applications to the consumer market, Team Group is committed to providing the best storage, multimedia and data sharing solutions. All Team memory module products come with a lifetime warranty, repair and replacement services.

 


Dynamic Counter – BIOS Optimization Guide

Dynamic Counter

Common Options : Enabled, Disabled

 

Quick Review of Dynamic Counter

The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 

Details of Dynamic Counter

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind Idle Cycle Limit.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 


Team Group Dark Pro DDR4 Overclocking RAM Launched

Team Group Inc., the world’s leading memory brand, today announced the launch of its all-new generation of overclocking memory for gaming – the Dark Pro series. Since every motherboard manufacturer is focusing on competing in the Skylake platform, besides models for the mainstream Z170 motherboard, we also provide other advanced models for overclocking enthusiasts and gamers to test their passion.

Team Group is pushing the overclocking limit of the all new Dark Pro series for targeted overclocking enthusiasts and gamers. Combining the extreme performance and stability that hardcore gaming requires, we will provide gamers the finest gaming experience with our solid technology strength.

Dark Pro’s all new cooling design combines the low key style of the Dark series and the multi-color design of the Vulcan series. The reinforced aluminum heat spreader with a punched-dots design concept is used to express the hidden strength within. It also matches the Dark Pro wording printed on the black tungsten steel heat spreader. Whether it is for a new build or an upgrade, gamers will be satisfied with the excellent performance and the overall visual design.

The Dark Pro series comes in a total of three frequencies: DDR4 3,000/3,200/3,333. In addition to the standard 4GBx2 dual channel version, we also provide an 8GBx2 kit gaming package for advanced gamers. Besides the Dark Pro and the previously released Xtreem for Skylake, all of Team Group’s DDR4 memory series are now on sale. If you are a gamer who wants to experience the all new generation of Skylake’s extreme performance, then you must not miss the overclocking series of Team Xtreem / Dark Pro / Dark / Vulcan / Zeus.

This is a reminder that changing platforms comes with risk. You might come across compatibility issues when investing in overclocking accessories. Please choose the easy-to-overclock, most stable and highly compatible Team Group Dark Pro series, for it will give you the best performance and the smoothest gaming experience you have ever had.


32 Byte Granularity – BIOS Optimization Guide

32 Byte Granularity

Common Options : Auto, Enabled, Disabled

 

Quick Review

The 32 Byte Granularity BIOS option determines the burst length of the DRAM controller.

When set to Enabled, the DRAM controller will read or write in bursts of 32 bytes in length.

When set to Disabled, the DRAM controller will read or write in bursts of 64 bytes in length.

When set to Auto, the DRAM controller will use a burst length of 64 bytes if the DRAM interface is 128-bits wide (dual-channel), and a burst length of 32 bytes if the DRAM interface is 64-bits wide (single-channel).

If you are using a discrete graphics card with dedicated graphics memory, you should disable this BIOS option for optimal performance. This is true whether your system is running on dual-channel memory, or single-channel memory.

It is not recommended that you leave the BIOS option at the default setting of Auto when your system is running on a single memory channel. Doing so will cause the DRAM controller to default to a burst length of 32 bytes when a burst length of 64 bytes would be faster.

If you are using your motherboard’s onboard graphics chip which shares system memory, you should enable this BIOS option, but only if your system is running on a single memory channel. If it’s running on dual-channel memory, then you must disable this BIOS option.

 

Details

The 32 Byte Granularity BIOS option determines the burst length of the DRAM controller.

When set to Enabled, the DRAM controller will read or write in bursts of 32 bytes in length.

When set to Disabled, the DRAM controller will read or write in bursts of 64 bytes in length.

When set to Auto, the DRAM controller will use a burst length of 64 bytes if the DRAM interface is 128-bits wide (dual-channel), and a burst length of 32 bytes if the DRAM interface is 64-bits wide (single-channel).

Generally, the larger burst length of 64-bytes is faster. However, a 32-byte burst length is better if the system uses an onboard graphics chip that uses system memory as framebuffer and texture memory. This is because the graphics chip would generate a lot of 32-byte system memory accesses.

Keeping that in mind, the 32-byte burst length is only supported if the DRAM interface is 64-bits wide (single-channel). If the DRAM interface is 128-bits wide, the burst length must be 64 bytes long.
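
The relationship between burst size, interface width and the number of bus transfers (beats) per burst is simple arithmetic, as the sketch below shows:

```python
# Beats (bus transfers) needed per burst, given the interface width.
def beats_per_burst(burst_bytes: int, bus_width_bits: int) -> int:
    bytes_per_beat = bus_width_bits // 8
    return burst_bytes // bytes_per_beat

print(beats_per_burst(32, 64))    # 4 beats: 32-byte burst, single channel
print(beats_per_burst(64, 64))    # 8 beats: 64-byte burst, single channel
print(beats_per_burst(64, 128))   # 4 beats: 64-byte burst, dual channel
```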


If you are using a discrete graphics card with dedicated graphics memory, you should disable this BIOS option for optimal performance. This is true whether your system is running on dual-channel memory, or single-channel memory.

It is not recommended that you leave the BIOS option at the default setting of Auto when your system is running on a single memory channel. Doing so will cause the DRAM controller to default to a burst length of 32 bytes when a burst length of 64 bytes would be faster.

If you are using your motherboard’s onboard graphics chip which shares system memory, you should enable this BIOS option, but only if your system is running on a single memory channel. If it’s running on dual-channel memory, then you must disable this BIOS option.
