Tag Archives: Computer memory

US Targets Chinese Military With New Chip Export Ban!

The US government just imposed sweeping chip and chipmaking export restrictions that target Chinese military capabilities!

 

US Targets Chinese Military With New Chip Export Ban!

On Friday, October 7, 2022, the US government announced new regulations on computer chips and chipmaking technologies designed to target Chinese military capabilities.

The stated goal of these new export restrictions is to “block the People’s Liberation Army and China’s domestic surveillance apparatus from gaining access to advanced computing capabilities that require the use of advanced semiconductors.”

According to US officials, these rules target not only the sale and export of chips, but also the tools and software that might help the Chinese military in any way, including aiding the development of weapons of mass destruction.

Unlike earlier rules that target specific Chinese companies (like HUAWEI), the new rules have far greater reach, covering everything from chips made by AMD and NVIDIA, to complex and expensive hardware and software used to design and manufacture semiconductor chips.

These rules, some of which go into effect immediately, build on existing restrictions that were earlier applied to top chip toolmaking companies in the US, like KLA Corporation, Lam Research Corporation, and Applied Materials Incorporated.

It’s not just advanced chips used for AI computing and national security or military applications that are being targeted.

The new rules are more generic, and forbid US companies from selling technologies to indigenous Chinese companies that will enable the production of:

  • DRAM chips at 18 nm or below,
  • NAND flash chips at 128 layers or above, and
  • logic chips at 14 nm or below.

Even foreign companies operating in China are somewhat affected. US companies will need to obtain a license to export more advanced equipment to them.

But in a concession to South Korea, the US government will spare SK Hynix and Samsung from these measures. US companies will be able to continue supplying their production facilities in China.

Many of the rules aim to block foreign companies from selling advanced chips or chipmaking technologies and tools to China. However, the US government will need to “lean” on the countries those companies are based in to introduce similar measures.

Most advanced chips are manufactured in South Korea and Taiwan. If they continue to export to China, they will allow China to bypass US restrictions.

Read more : Biden To Hit China With More Chip Restrictions!

 

Chinese Government Criticised US Chip Export Ban

On Saturday, the Chinese government criticised the new US chip export ban, calling it a violation of international economic and trade rules that will “isolate and backfire” on the US.

Out of the need to maintain its sci-tech hegemony, the U.S. abuses export control measures to maliciously block and suppress Chinese companies.

It will not only damage the legitimate rights and interests of Chinese companies, but also affect American companies’ interests.

– Mao Ning, Chinese Foreign Ministry spokesperson

Despite Mao’s assertions that US actions will not stop China’s progress, the new wide-ranging chip and chipmaking export ban will undoubtedly be detrimental to the Chinese semiconductor industry, and set back its attempts at indigenous production by many years.

On the other hand, this export ban will spur China to redouble its efforts to develop its own chipmaking capabilities, and possibly “strike back” by restricting exports of rare earths to US companies.

Read more : Did China Make 7nm Chips In Spite Of US Sanctions?!

 

US Chip Export Rules Affecting Chinese Companies Summarised

Here is a summary of the new restrictions that take effect on Friday (October 7, 2022):

  • Tools that are capable of producing logic chips made using fin field-effect transistors (FinFET) are blocked from sale to China.
  • Tools capable of fabricating NAND flash storage chips with 128-layer technology or greater, and DRAM based on 18-nanometer half-pitch or less technology are blocked from sale to China.
  • Servicing and maintenance of restricted tools are also banned, making it difficult to keep advanced equipment in good enough shape to continue producing quality chips at high volume.
  • US citizens currently servicing or supporting tools on the restricted list must halt their activities by Wednesday, October 12.
  • Items that China can use to make its own chip manufacturing tools, such as photolithography light sources and other specialised components, are also blocked from sale to China.

The US Commerce Department also enacted these additional measures on Friday:

  • 31 Chinese entities were added to the Unverified List, which consists of companies that the US government believes could divert technology that they purchase to restricted entities.
  • The Commerce Department expanded the scope of controls for the 28 Chinese firms already on the US Entity List, including presumptively denying any licenses because of the risk they will divert technology to the Chinese military.

Then two weeks later, these restrictions will take effect on Friday, October 21, 2022:

  • Using a new foreign direct product rule, the U.S. will block any chips that are used in advanced computing and artificial intelligence applications.
  • The foreign direct product rule can block chips made by non-US companies — including Chinese chip designers — if they use American technology or software.
  • TSMC may be forced by this new rule to halt production on advanced AI or supercomputer chips designed by Chinese firms that are fabricated in Taiwan, unless it gets an exemption.
  • A new foreign direct product rule will apply to components and chips destined for supercomputers in China.

 

Please Support My Work!

Support my work through a bank transfer /  PayPal / credit card!

Name : Adrian Wong
Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp

Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.

He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.

 

Recommended Reading

Go Back To > Business | Computer | Tech ARP

 

Support Tech ARP!

Please support us by visiting our sponsors, participating in the Tech ARP Forums, or donating to our fund. Thank you!

Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether it’s regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow it to transfer 64 bits of data every clock cycle.

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.
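
Those counts follow directly from the 64 DQ lines divided by the chip width, with one strobe per chip in the model described above. Here is a quick illustration (a hypothetical helper, not anything read from the DIMM itself):

```python
# Quick illustration of the counts quoted above: a DIMM has 64 DQ lines, and
# the number of DRAM chips needed to fill that width is 64 divided by the chip
# width. The DQS counts listed above equal that same figure.

DIMM_DQ_LINES = 64

for chip_width in (4, 8, 16):
    chips = DIMM_DQ_LINES // chip_width
    print(f"x{chip_width} chips: {chips} chips per 64-bit rank, {chips} DQS lines")
# x4 -> 16 chips / 16 DQS, x8 -> 8 chips / 8 DQS, x16 -> 4 chips / 4 DQS
```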

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level, with a momentary rise in the relative ground voltage.

This reduces the quality of the signal which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins can help give it a higher voltage swing, improving the signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All this increases signal noise, which ironically negates the increased signal strength provided by a higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people’s list of priorities. In certain cases, it actually allows you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


SDRAM Trrd Timing Value from The Tech ARP BIOS Guide!

SDRAM Trrd Timing Value

Common Options : 2 cycles, 3 cycles

 

SDRAM Trrd Timing Value : A Quick Review

The SDRAM Trrd Timing Value BIOS feature specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device.

The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges.

For desktop PCs, a delay of 2 cycles is recommended as current surges aren’t really important. The performance benefit of using the shorter 2 cycles delay is of far greater interest.

The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device’s read and write performance.

Switch to 3 cycles only when there are stability problems with the 2 cycles setting.

 

SDRAM Trrd Timing Value : The Details

The Bank-to-Bank Delay or tRRD is a DDR timing parameter which specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device, even to different internal banks.

The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges.

Because this timing parameter is DDR device-specific, it may differ from one DDR device to another. DDR DRAM manufacturers typically specify the tRRD parameter based on the row ACTIVATE activity to limit current surges within the device.

If you let the BIOS automatically configure your DRAM parameters, it will retrieve the manufacturer-set tRRD value from the SPD (Serial Presence Detect) chip. However, you may want to manually set the tRRD parameter to suit your requirements.

For desktop PCs, a delay of 2 cycles is recommended as current surges aren’t really important.

This is because the desktop PC essentially has an unlimited power supply and even the most basic desktop cooling solution is sufficient to dispel any extra thermal load that the current surges may impose.

The performance benefit of using the shorter 2 cycles delay is of far greater interest. The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device’s read and write performance.
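
To put that one clock cycle into perspective, here is a small worked example of the time saved per back-to-back activation at a few common memory bus clocks (nominal clocks; the saving is simply one clock period):

```python
# Worked example: time saved per back-to-back bank activation when tRRD is
# lowered from 3 cycles to 2 cycles, at a few common memory bus clocks.

for bus_clock_mhz in (100, 133, 166, 200):
    cycle_ns = 1000 / bus_clock_mhz       # one clock period in nanoseconds
    print(f"{bus_clock_mhz} MHz ({bus_clock_mhz * 2} MHz DDR): "
          f"saves {cycle_ns:.1f} ns per activation")
# 133 MHz (266 MHz DDR) -> about 7.5 ns saved per activation
```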

Note that the shorter delay of 2 cycles works with most DDR DIMMs, even at 133 MHz (266 MHz DDR). However, DDR DIMMs running beyond 133 MHz (266 MHz DDR) may need to introduce a delay of 3 cycles between each successive bank activation.

Select 2 cycles whenever possible for optimal DDR DRAM performance.

Switch to 3 cycles only when there are stability problems with the 2 cycles setting.

In mobile devices like laptops, however, it would be advisable to use the longer delay of 3 cycles.

Doing so limits the current surges that accompany row activations. This reduces the DDR device’s power consumption and thermal output, both of which should be of great interest to the road warrior.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Write Data In to Read Delay from The Tech ARP BIOS Guide!

Write Data In to Read Delay

Common Options : 1 Cycle, 2 Cycles

 

Write Data In to Read Delay : A Quick Review

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 

Write Data In to Read Delay : The Full Details

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

Please note that this is only applicable for read commands that follow a write operation. Consecutive read operations or writes that follow reads are not affected.

If a 1 Cycle delay is selected, every read command that follows a write operation will be delayed one clock cycle before it is issued.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

If a 2 Cycles delay is selected, every read command that follows a write operation will be delayed two clock cycles before it is issued.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

By default, this BIOS feature is set to 2 Cycles. This meets JEDEC’s specification of 2 clock cycles for write-to-read command delay in DDR400 memory modules. DDR266 and DDR333 memory modules require a write-to-read command delay of only 1 clock cycle.
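
As a rough illustration of what those cycle counts mean in absolute time (using the nominal memory clock of each speed grade), the write-to-read turnaround works out as follows:

```python
# Rough illustration: tWTR expressed in nanoseconds for the speed grades above.
# Cycle counts follow the JEDEC figures quoted in the text; clocks are nominal.

speed_grades = {
    "DDR266": (133, 1),   # (memory clock in MHz, tWTR in clock cycles)
    "DDR333": (166, 1),
    "DDR400": (200, 2),
}

for name, (clock_mhz, twtr_cycles) in speed_grades.items():
    print(f"{name}: {twtr_cycles} cycle(s) = {twtr_cycles * 1000 / clock_mhz:.1f} ns")
# DDR266: 7.5 ns, DDR333: 6.0 ns, DDR400: 10.0 ns
```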

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


CPU / DRAM CLK Synch CTL – The Tech ARP BIOS Guide!

CPU / DRAM CLK Synch CTL

Common Options : Synchronous, Asynchronous, Auto

 

Quick Review of CPU / DRAM CLK Synch CTL

The CPU / DRAM CLK Synch CTL BIOS feature offers a clear-cut way of controlling the memory controller’s operating mode.

When set to Synchronous, the memory controller will set the memory clock to the same speed as the processor bus.

When set to Asynchronous, the memory controller will allow the memory clock to run at any speed.

When set to Auto, the operating mode of the memory controller will depend on the memory clock you set.

It is recommended that you select the Synchronous operating mode. This generally provides the best performance, even if your memory modules are capable of higher clock speeds.

 

Details of CPU / DRAM CLK Synch CTL

The memory controller can operate either synchronously or asynchronously.

In the synchronous mode, the memory clock runs at the same speed as the processor bus speed.

In the asynchronous mode, the memory clock is allowed to run at a different speed than the processor bus.

While the asynchronous mode allows the memory controller to support memory modules of different clock speeds, it requires the use of FIFO (First In, First Out) buffers and resynchronizers. This increases the latency of the memory bus, and reduces performance.

Running the memory controller in synchronous mode allows the memory controller to bypass the FIFO buffers and deliver data directly to the processor bus. This reduces the latency of the memory bus and greatly improves performance.

Normally, the synchronicity of the memory controller is determined by the memory clock. If the memory clock is the same as the processor bus speed, then the memory controller is in the synchronous mode. Otherwise, it is in the asynchronous mode.

The CPU / DRAM CLK Synch CTL BIOS feature, however, offers a more clear-cut way of controlling the memory controller’s operating mode.

When set to Synchronous, the memory controller will set the memory clock to the same speed as the processor bus. Even if you set the memory clock to run at a higher speed than the front side bus, the memory controller automatically selects a lower speed that matches the processor bus speed.

When set to Asynchronous, the memory controller will allow the memory clock to run at any speed. Even if you set the memory clock to run at a higher speed than the front side bus, the memory controller will not force the memory clock to match the processor bus speed.

When set to Auto, the operating mode of the memory controller will depend on the memory clock you set. If you set the memory clock to run at the same speed as the processor bus, the memory controller will operate in the synchronous mode. If you set the memory clock to run at a different speed, then the memory controller will operate in the asynchronous mode.
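
The three settings boil down to a simple selection rule; here is that rule restated as a short sketch (illustrative logic only, not any vendor's firmware):

```python
# Sketch of the selection rule described above. Returns the operating mode and
# the effective memory clock. Illustrative only - not real firmware logic.

def memory_controller_mode(setting: str, memory_clock_mhz: int, bus_clock_mhz: int):
    if setting == "Synchronous":
        # The memory clock is forced to match the processor bus clock.
        return "synchronous", bus_clock_mhz
    if setting == "Asynchronous":
        # The memory clock runs at whatever speed was configured.
        return "asynchronous", memory_clock_mhz
    # Auto: synchronous only if the configured clocks already match.
    if memory_clock_mhz == bus_clock_mhz:
        return "synchronous", memory_clock_mhz
    return "asynchronous", memory_clock_mhz

print(memory_controller_mode("Synchronous", memory_clock_mhz=200, bus_clock_mhz=166))
# ('synchronous', 166) - the higher memory clock is pulled down to the bus speed
```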

It is recommended that you select the Synchronous operating mode. This generally provides the best performance, even if your memory modules are capable of higher clock speeds.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Bank Swizzle Mode from The Tech ARP BIOS Guide

Bank Swizzle Mode

Common Options : Enabled, Disabled

 

Quick Review of Bank Swizzle Mode

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits.

It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

This effectively interleaves the memory banks and maximises memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizes page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 

Details of Bank Swizzle Mode

DRAM (and its various derivatives – SDRAM, DDR SDRAM, etc.) store data in cells that are organized in rows and columns.

Whenever a read command is issued to a memory bank, the appropriate row is first activated using the RAS (Row Address Strobe). Then, to read data from the target memory cell, the appropriate column is activated using the CAS (Column Address Strobe).

Multiple cells can be read from the same active row by applying the appropriate CAS signals. If data has to be read from a different row, the active row has to be deactivated before the appropriate row can be activated.

This takes time and reduces performance, so good memory controllers will try to schedule memory accesses to maximize the number of hits on active rows. One of the methods used to achieve that goal is the bank swizzle mode.

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits. It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

The XOR operation results in a value of true if only one of the two operands (inputs) is true. If both operands are simultaneously false or true, then it results in a value of false.
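
To illustrate the XOR remapping, here is a minimal sketch of how a swizzled bank address could be derived from the physical address. The bit positions used are arbitrary examples chosen for the illustration, not the mapping of any actual memory controller:

```python
# Minimal sketch of XOR-based bank swizzling. The bit positions below are
# arbitrary, illustrative choices - NOT the mapping used by any real chipset.

def raw_bank(addr: int) -> int:
    return (addr >> 13) & 0b11            # example: bank bits taken directly

def swizzled_bank(addr: int) -> int:
    upper_bits = (addr >> 17) & 0b11      # example: higher-order physical address bits
    return raw_bank(addr) ^ upper_bits    # XOR spreads accesses across banks

# Two addresses that collide on the same raw bank can land in different banks
# after swizzling, which reduces page conflicts between them.
a, b = 0x26000, 0x66000
print(raw_bank(a), raw_bank(b))           # 3 3  -> same bank without swizzling
print(swizzled_bank(a), swizzled_bank(b)) # 2 0  -> different banks with swizzling
```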

This characteristic of XORing the physical address to create the bank address reduces page conflicts by remapping the memory bank addresses so only one of two banks can be active at any one time.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizes page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


RW Queue Bypass from The Tech ARP BIOS Guide

RW Queue Bypass

Common Options : Auto, 2X, 4X, 8X, 16X

 

Quick Review of RW Queue Bypass

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximise the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.

It is generally recommended that you set the RW Queue Bypass BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step-by-step until the problem resolves.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.

 

Details of RW Queue Bypass

The R/W Queue Bypass BIOS option is similar to the DCQ Bypass Maximum BIOS option – both set a limit on how far an arbiter can intelligently reschedule memory accesses to improve performance.

The difference between the two is that DCQ Bypass Maximum does this at the memory controller level, while R/W Queue Bypass does it at the Device Control Interface (DCI) level.

To improve performance, the arbiter can reschedule transactions in the DCI read / write queue.

By allowing some transactions to bypass other transactions in the queue, the arbiter can maximize the number of hits on open memory pages.

This improves the overall memory performance but at the expense of some memory accesses which have to be delayed.

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.
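
To make the counting rule concrete, here is a simplified model of an arbiter with a bypass limit (an illustration of the rule only; real DCI arbiters use hardware-specific criteria to pick which requests to favour):

```python
from collections import deque

# Simplified model of the bypass limit. Requests marked with "*" stand in for
# the accesses the arbiter would prefer to service first (e.g. open-page hits).
# Illustration of the counting rule only - not real hardware behaviour.

def service(requests, bypass_limit):
    queue, serviced, bypasses = deque(requests), [], 0
    while queue:
        oldest = queue[0]
        preferred = next((r for r in queue if r.endswith("*")), None)
        if preferred and preferred != oldest and bypasses < bypass_limit:
            queue.remove(preferred)
            serviced.append(preferred)
            bypasses += 1              # the oldest request has been bypassed again
        else:
            serviced.append(queue.popleft())
            bypasses = 0               # oldest request serviced; reset the counter
    return serviced

print(service(["A", "B*", "C*", "D*", "E*"], bypass_limit=2))
# ['B*', 'C*', 'A', 'D*', 'E*'] - request A waits for at most 2 bypasses
```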

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximise the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.

It is generally recommended that you set this BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step-by-step until the problem resolves.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.

Recommended Reading

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


SDRAM PH Limit from the Tech ARP BIOS Guide

SDRAM PH Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM PH Limit

SDRAM PH Limit is short for SDRAM Page Hit Limit. The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM PH Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.

The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).
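
The limiting behaviour can be modelled in a few lines (a toy model of the counting rule described above, not the chipset's actual scheduler):

```python
# Toy model of the page hit limit: consecutive page-hit requests are serviced
# until the limit is reached, after which a pending non-page-hit request must
# be serviced. Illustration only - not the chipset's actual scheduler.

def schedule(requests, page_hit_limit):
    pending, order, streak = list(requests), [], 0
    while pending:
        hit = "hit" in pending
        miss = "miss" in pending
        if hit and (streak < page_hit_limit or not miss):
            pending.remove("hit")
            order.append("hit")
            streak += 1
        else:
            pending.remove("miss")
            order.append("miss")
            streak = 0                 # the non-page-hit request resets the count
    return order

print(schedule(["hit"] * 12 + ["miss"], page_hit_limit=8))
# eight page hits, then the non-page-hit request, then the remaining four hits
```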

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

OS Select For DRAM > 64MB from The Tech ARP BIOS Guide

OS Select For DRAM > 64MB

Common Options : OS/2, Non-OS/2

 

Quick Review of OS Select For DRAM > 64MB

The OS Select For DRAM > 64MB BIOS feature is designed to correct the memory size detection problem for OS/2 systems that have more than 64 MB of system memory.

If you are using an older version of the IBM OS/2 operating system, you should select OS/2.

If you are using the IBM OS/2 Warp v4.0111 or higher operating system, you should select Non-OS/2.

If you are using an older version of the IBM OS/2 operating system but have already installed all the relevant IBM FixPaks, you should select Non-OS/2.

Users of non-OS/2 operating systems (like Microsoft Windows XP) should select the Non-OS/2 option.

 

Details of OS Select For DRAM > 64MB

Older versions of IBM’s OS/2 operating system use the BIOS function Int15 [AX=E801] to detect the size of installed system memory. Microsoft Windows, on the other hand, uses the BIOS function Int15 [EAX=0000E820].

However, the Int15 [AX=E801] function was later scrapped as not ACPI-compliant. As a result, OS/2 cannot detect the correct size of system memory if more than 64 MB of memory is installed. Microsoft Windows isn’t affected because the BIOS function it uses is ACPI-compliant.

The OS Select For DRAM > 64MB BIOS feature is designed to correct the memory size detection problem for OS/2 systems that have more than 64 MB of system memory.

If you are running an old, unpatched version of OS/2, you must select the OS/2 option. But please note that this is only true for older versions of OS/2 that haven’t been upgraded using IBM’s FixPaks.

Starting with OS/2 Warp v4.0111, IBM changed the OS/2 kernel to use Int15 [EAX=0000E820] to detect the size of installed system memory. IBM also issued FixPaks to address this issue with older versions of OS/2.

Therefore, if you are using OS/2 Warp v4.0111 or higher, you should select Non-OS/2 instead. You should also select Non-OS/2 if you have upgraded an older version of OS/2 with the FixPaks that IBM have been releasing over the years.

If you select the OS/2 option with a newer (v4.0111 or higher) or updated version of OS/2, it will cause erroneous memory detection. For example, if you have 64 MB of memory, it may only register as 16 MB. Or if you have more than 64 MB of memory, it may register as only 64 MB of memory.

Users of non-OS/2 operating systems (like Microsoft Windows or Linux) should select the Non-OS/2 option. Doing otherwise will cause memory errors if you have more than 64 MB of memory in your system.

In conclusion :-

  • If you are using an older version of the IBM OS/2 operating system, you should select OS/2.
  • If you are using the IBM OS/2 Warp v4.0111 or higher operating system, you should select Non-OS/2.
  • If you are using an older version of the IBM OS/2 operating system but have already installed all the relevant IBM FixPaks, you should select Non-OS/2.
  • Users of non-OS/2 operating systems (like Microsoft Windows XP) should select the Non-OS/2 option.

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


DOS Flat Mode from The Tech ARP BIOS Guide

DOS Flat Mode

Common Options : Enabled, Disabled

 

Quick Review of DOS Flat Mode

The DOS Flat Mode BIOS feature controls the BIOS’ built-in extended memory manager.

When enabled, DOS programs can run in protected mode without the need of an extended memory manager.

When disabled, DOS programs require an extended memory manager to run in protected mode.

It is recommended that you enable DOS Flat Mode if you use the MS-DOS operating system and run protected mode DOS programs.

However, if you use a newer operating system that supports protected mode (for example, Windows XP), disable DOS Flat Mode.

 

Details of DOS Flat Mode

In real mode, MS-DOS (Microsoft Disk Operating System) cannot address more than 1 MB. It is also restricted to memory segments of up to 64 KB in size.

These limitations can be circumvented by making use of extended memory managers. Such software allows DOS programs to make use of memory beyond 1 MB by running them in protected mode. This mode also removes the segmentation of memory and creates a flat memory model for the program to use.

Protected mode refers to a memory addressing mode supported since the Intel 80286 was introduced. It provides the following benefits over real mode :

  • Each program can be given its own protected memory area, hence, the name protected mode. In this protected memory area, the program is protected from interference by other programs.
  • Programs can now access more than 1 MB of memory (up to 4 GB). This allows the creation of bigger and more complex programs.
  • There is virtually no longer any segmentation of the memory. Memory segments can be up to 4 GB in size.

As you can see, running DOS programs in protected mode allows safer memory addressing, access to more memory and eliminates memory segmentation. Of course, the DOS program must be programmed to make use of protected mode. An extended memory manager, either within the program or as a separate program, must also be present for the program to run in protected mode. Otherwise, it will run in real mode.
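
To make the 1 MB and 64 KB limits of real mode concrete, here is a small worked comparison of real-mode segment:offset addressing against a flat 32-bit address (a simplified illustration of the two addressing models, not of any particular memory manager):

```python
# Simplified comparison of real-mode segment:offset addressing with a flat
# 32-bit (protected mode) address space. Illustrates the limits quoted above.

def real_mode_address(segment: int, offset: int) -> int:
    assert 0 <= offset <= 0xFFFF, "real-mode offsets are confined to a 64 KB segment"
    return (segment << 4) + offset          # segments advance in 16-byte steps

print(hex(real_mode_address(0xFFFF, 0xFFFF)))   # 0x10ffef - just past the 1 MB mark
print(hex(2**32 - 1))                           # 0xffffffff - 4 GB flat address space
```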

Please note that even with an extended memory manager, MS-DOS does not actually run in protected mode. Only specifically-programmed DOS programs will run in protected mode. The processor switches between real mode and protected mode to handle both MS-DOS in real mode and the DOS program in protected mode.

This is where Gate A20 becomes critical to the computer’s performance. For more information on the effect of Gate A20 on the memory mode switching performance of the processor, please consult the Gate A20 Option BIOS feature.

The DOS Flat Mode BIOS feature controls the BIOS’ built-in extended memory manager.

When enabled, DOS programs can run in protected mode without the need of an extended memory manager.

When disabled, DOS programs require an extended memory manager to run in protected mode.

It is recommended that you enable DOS Flat Mode if you use the MS-DOS operating system and run protected mode DOS programs.

However, if you use a newer operating system that supports protected mode (for example, Windows XP), disable DOS Flat Mode.

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Samsung Aquabolt – World’s Fastest HBM2 Memory Revealed!

2018-01-11 – Samsung Electronics today announced that it has started mass production of the Samsung Aquabolt – its 2nd-generation 8 GB High Bandwidth Memory-2 (HBM2) with the fastest data transmission speed on the market today. This is the industry’s first HBM2 to deliver a 2.4 Gbps data transfer speed per pin.

 

Samsung Aquabolt – World’s Fastest HBM2 Memory

Samsung’s new 8GB HBM2 delivers the highest level of DRAM performance, featuring a 2.4Gbps pin speed at 1.2V, which translates into a performance upgrade of nearly 50% per package, compared to the 1st-generation 8GB HBM2 package with its 1.6Gbps pin speed at 1.2V and 2.0Gbps at 1.35V.

With these improvements, a single Samsung 8GB HBM2 package will offer a 307 GB/s data bandwidth – 9.6X faster than an 8 Gb GDDR5 chip, which provides a 32 GB/s data bandwidth. Using four of the new HBM2 packages in a system will enable a 1.2 TB/s bandwidth. This improves overall system performance by as much as 50%, compared to a system that uses the first-generation 1.6 Gbps HBM2 memory.
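
Those figures follow directly from the per-pin data rate and the interface width. The arithmetic below assumes the standard 1,024-bit HBM2 interface per package and a 32-bit GDDR5 chip interface at 8 Gbps per pin, which is what the quoted 32 GB/s per-chip figure implies:

```python
# Deriving the bandwidth figures quoted above from pin speed x interface width.
# Assumes a 1,024-bit HBM2 interface per package and a 32-bit GDDR5 chip
# interface at 8 Gbps per pin (implied by the 32 GB/s per-chip figure).

hbm2_gbs  = 2.4 * 1024 / 8     # 307.2 GB/s per HBM2 package
gddr5_gbs = 8.0 * 32 / 8       # 32.0 GB/s per GDDR5 chip

print(hbm2_gbs)                # 307.2
print(hbm2_gbs / gddr5_gbs)    # 9.6  -> the "9.6X" comparison
print(4 * hbm2_gbs / 1000)     # ~1.2 TB/s with four packages in a system
```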

 

How Samsung Created Aquabolt

To achieve Aquabolt’s unprecedented performance, Samsung has applied new technologies related to TSV design and thermal control.

A single 8GB HBM2 package consists of eight 8Gb HBM2 dies, which are vertically interconnected using over 5,000 TSVs (Through-Silicon Vias) per die. While using so many TSVs can cause collateral clock skew, Samsung succeeded in minimizing the skew to a very modest level and significantly enhancing chip performance in the process.

In addition, Samsung increased the number of thermal bumps between the HBM2 dies, which enables stronger thermal control in each package. Also, the new HBM2 includes an additional protective layer at the bottom, which increases the package’s overall physical strength.

Go Back To > News | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


DCLK Feedback Delay – The Tech ARP BIOS Guide

DCLK Feedback Delay

Common Options : 0 ps, 150 ps, 300 ps, 450 ps, 600 ps, 750 ps, 900 ps, 1050 ps

 

Quick Review of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

By comparing the wave forms from both DCLK and its feedback signal, it can be determined if both clocks are in the same phase. If the clocks are not in the same phase, this may result in loss of data, resulting in system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There’s no performance advantage in increasing or reducing the amount of feedback delay.

 

Details of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

This feedback signal is used by the SDRAM controller to determine when it can write data to the SDRAM module. The main idea of this system is to ensure that both clock phases are properly aligned for the proper delivery of data.

By comparing the wave forms from both DCLK and its feedback signal, it can be determined if both clocks are in the same phase. If the clocks are not in the same phase, this may result in loss of data, resulting in system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There’s no performance advantage in increasing or reducing the amount of feedback delay.
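
If you do need to tune it, the procedure described above amounts to stepping through the available delay options until the system is stable. Here is a minimal sketch (the is_stable callback is a placeholder for whatever memory stress test you use):

```python
# Sketch of the tuning procedure described above. "is_stable" is a placeholder
# for whatever stability check you use (e.g. a memory stress test pass/fail).

DELAY_OPTIONS_PS = [0, 150, 300, 450, 600, 750, 900, 1050]

def tune_dclk_feedback_delay(is_stable) -> int:
    for delay_ps in DELAY_OPTIONS_PS:        # start at the 0 ps default
        if is_stable(delay_ps):
            return delay_ps                  # keep the smallest delay that is stable
    raise RuntimeError("no stable DCLK feedback delay setting found")

# Example: pretend the system only becomes stable from 300 ps onwards.
print(tune_dclk_feedback_delay(lambda d: d >= 300))   # 300
```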

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Asynclat (Asynchronous Latency) – The Tech ARP BIOS Guide

Asynclat

Common Options : 0 to 15 ns

 

Quick Review of Asynclat

Asynclat is an AMD processor-specific BIOS feature. It controls the amount of asynchronous latency, which depends on the time it takes for data to travel from the processor to the furthest DIMM on the motherboard and back. For your reference, AMD has a few conservative suggestions on setting the Asynclat BIOS feature.

            |                      | Memory Clock
Memory Type | Number of DIMM Slots | 200 MHz | 166 MHz | 133 MHz | 100 MHz
Registered  | 8                    | 8 ns    | 8 ns    | 9 ns    | 9 ns
Unbuffered  | 4                    | 8 ns    | 8 ns    | 8 ns    | 8 ns
            | 3 or 4               | 7 ns    | 7 ns    | 7 ns    | 7 ns
            | 1 or 2               | 6 ns    | 6 ns    | 6 ns    | 6 ns

Do note that in this case, the distance of the furthest DIMM slot is considered analogous to the number of DIMM slots. The greater the number of DIMM slots available on the motherboard, the further the final slot is from the memory controller.

Also, these values are rough and conservative recommendations that assume that the furthest DIMM slot is occupied by a module. If your motherboard has four slots and you choose to populate only the first two slots, you could use a shorter asynchronous latency.

Generally, it is recommended that you stick with the asynchronous latency recommended by AMD (see table above) or your memory module’s manufacturer. You can, of course, adjust the amount of asynchronous latency according to the situation. For example, if you are overclocking the memory modules, or if you populate the first two slots of the four available DIMM slots; you can get away with a lower asynchronous latency.

 

Details of Asynclat

Asynclat is an AMD processor-specific BIOS feature. It controls the amount of asynchronous latency, which depends on the time it takes for data to travel from the processor to the furthest DIMM on the motherboard and back.

The asynchronous latency is designed to account for variances in the trace length to the furthest DIMM on the motherboard, as well as the type of DIMM, number of chips in that DIMM and the memory bus frequency. For your reference, AMD has a few conservative suggestions on setting the Asynclat BIOS feature.

            |                      | Memory Clock
Memory Type | Number of DIMM Slots | 200 MHz | 166 MHz | 133 MHz | 100 MHz
Registered  | 8                    | 8 ns    | 8 ns    | 9 ns    | 9 ns
Unbuffered  | 4                    | 8 ns    | 8 ns    | 8 ns    | 8 ns
            | 3 or 4               | 7 ns    | 7 ns    | 7 ns    | 7 ns
            | 1 or 2               | 6 ns    | 6 ns    | 6 ns    | 6 ns

Do note that in this case, the distance of the furthest DIMM slot is considered analogous to the number of DIMM slots. The greater the number of DIMM slots available on the motherboard, the further the final slot is from the memory controller.

Also, these values are rough and conservative recommendations that assume that the furthest DIMM slot is occupied by a module. If your motherboard has four slots and you choose to populate only the first two slots, you could use a shorter asynchronous latency.

Naturally, the shorter the latency, the better the performance. However, if the latency is too short, it will not allow enough time for data to be returned from the furthest DIMM on the motherboard. This results in data corruption and system instability.

The optimal asynchronous latency varies from system to system. It depends on the motherboard design, where you install your DIMMs, the type of DIMM used and the memory bus speed selected. The only way to find the optimal asynchronous latency is trial and error, by starting with a high value and working your way down.

Generally, it is recommended that you stick with the asynchronous latency recommended by AMD (see table above) or your memory module’s manufacturer. You can, of course, adjust the amount of asynchronous latency according to the situation. For example, if you are overclocking the memory modules, or if you populate the first two slots of the four available DIMM slots; you can get away with a lower asynchronous latency.
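
For convenience, AMD's suggested values from the table above can be kept in a small lookup (the values are reproduced verbatim from the table; the helper itself is just an illustration):

```python
# AMD's conservative Asynclat suggestions, reproduced from the table above.
# Keyed by the table's row label; inner keys are the memory clock in MHz.

ASYNC_LATENCY_NS = {
    "Registered, 8 slots": {200: 8, 166: 8, 133: 9, 100: 9},
    "Unbuffered, 4 slots": {200: 8, 166: 8, 133: 8, 100: 8},
    "3 or 4 slots":        {200: 7, 166: 7, 133: 7, 100: 7},
    "1 or 2 slots":        {200: 6, 166: 6, 133: 6, 100: 6},
}

print(ASYNC_LATENCY_NS["Registered, 8 slots"][133], "ns")   # 9 ns
```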

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

SDRAM Page Hit Limit – The Tech ARP BIOS Guide

SDRAM Page Hit Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM Page Hit Limit

The SDRAM Page Hit Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM Page Hit Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.

The SDRAM Page Hit Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

ISA Shared Memory – The Tech ARP BIOS Guide

ISA Shared Memory

Common Options : 64KB, 32KB, 16KB, Disabled

 

Quick Review

This is an ISA-specific BIOS option. In motherboards that have ISA slots, the BIOS can be set to reserve part of the upper memory area (UMA), which resides between 640 KB and 1 MB, for use by ISA expansion cards. This is particularly important for older operating systems like MS-DOS because it frees up conventional memory (the first 640 KB) for use by the operating system and applications.

If your ISA expansion card requires an upper memory area of 64 KB in size, select the 64KB option. Similarly, if it requires just 32 KB or 16 KB of upper memory, select the 32KB or 16KB options respectively.

If you are not sure how much memory your ISA expansion card requires, select 64KB. It will work with cards that only require 32 KB or 16 KB of upper memory. The rest of the reserved upper memory area will be left unused.

If you do not have any ISA expansion cards installed, leave it at its default setting of Disabled. This frees up the UMA for use by the operating system (or third-party memory managers) to store TSR (Terminate and Stay Resident) programs.

 

Details

This is an ISA-specific BIOS option. In motherboards that have ISA slots, the BIOS can be set to reserve part of the upper memory area (UMA), which resides between 640 KB and 1 MB, for use by ISA expansion cards. This is particularly important for older operating systems like MS-DOS because it frees up conventional memory (the first 640 KB) for use by the operating system and applications.

In truly old motherboards with multiple ISA slots, you will have the option to set both the memory address range and the reserved memory size. You may have to set jumpers on your ISA expansion cards to prevent two or more cards using the same memory address range, but this allows you to reserve segments of the upper memory area for your ISA expansion cards.

PCI-based motherboards have no need for so many ISA slots. Most, if not all, only have a single ISA slot for the rare ISA expansion card that had not yet been “ported” to the PCI bus. Therefore, their BIOS only has a single BIOS option, which allows you to set the size of the UMA to be reserved for the single ISA card. There’s no need to set the memory address range because there is only one ISA slot, so you need not worry about conflicts between multiple ISA cards. All you need to do is set the reserved memory size.

To do so, you will have to find out how much memory your ISA expansion card requires. Check the manual that came with the card. It ranges from 16 KB to 64 KB. Once you know how much upper memory your ISA expansion card requires, use this BIOS setting to reserve the memory segment for your card.

If your ISA expansion card requires an upper memory area of 64 KB in size, select the 64KB option. Similarly, if it requires just 32 KB or 16 KB of upper memory, select the 32KB or 16KB options respectively.

If you are not sure how much memory your ISA expansion card requires, select 64KB. It will work with cards that only require 32 KB or 16 KB of upper memory. The rest of the reserved upper memory area will be left unused.

If you do not have any ISA expansion cards installed, leave it at its default setting of Disabled. This frees up the UMA for use by the operating system (or third-party memory managers) to store TSR (Terminate and Stay Resident) programs.

Go Back To > The Tech ARP BIOS Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

In-Order Queue Depth – The BIOS Optimization Guide

In-Order Queue Depth

Common Options : 1, 4, 8, 12

 

Quick Review of In-Order Queue Depth

The In-Order Queue Depth BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12).

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.

Interestingly, this feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

 

Details of In-Order Queue Depth

For greater performance at high clock speeds, motherboard chipsets now feature a pipelined processor bus. The multiple stages in this pipeline can also be used to queue up multiple commands to the processor. This command queuing greatly improves performance because it effectively masks the latency of the processor bus. In optimal situations, the amount of latency between each succeeding command can be reduced to only a single clock cycle!

The In-Order Queue Depth BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12). This is because this BIOS feature does not actually allow you to select the number of commands that can be queued.

It merely allows you to disable or enable the command queuing capability of the processor bus pipeline. This is because the number of commands that can be queued depends entirely on the number of stages in the pipeline. As such, you can expect to see this feature associated with options like Enabled and Disabled in some motherboards.

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

As you can see, this reduces performance as the processor has to wait for each command to filter down the pipeline. The severity of the effect depends greatly on the depth of the pipeline. The deeper the pipeline, the greater the effect.

If the second queue depth option is 4, this means that the processor bus pipeline has 4 stages in it. Selecting this option allows the queuing of up to 4 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

If the second queue depth option is 8, this means that the processor bus pipeline has 8 stages in it. Selecting this option allows the queuing of up to 8 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

If the second queue depth option is 12, this means that the processor bus pipeline has 12 stages in it. Selecting this option allows the queuing of up to 12 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

Please note that the latency of only 1 clock cycle is only possible if the pipeline is completely filled up. If the pipeline is only partially filled up, then the latency affecting one or more of the commands will be more than 1 clock cycle. Still, the average latency for each command will be much lower than it would be with command queuing disabled.
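
To put rough numbers on this, here is a simplified Python sketch (a toy model, not chipset-accurate) that estimates the average latency per command for a given pipeline depth, with and without command queuing:

# Toy model: with queuing disabled, every command waits for the full pipeline
# latency. With queuing enabled and the pipeline kept full, commands stream
# out roughly one per clock cycle after the initial fill.
def average_latency(pipeline_stages, num_commands, queuing=True):
    if not queuing:
        return float(pipeline_stages)      # every command pays the full latency
    total_cycles = pipeline_stages + (num_commands - 1)
    return total_cycles / num_commands     # amortised over the whole stream

for stages in (4, 8, 12):
    print(stages, "stages:",
          average_latency(stages, 100, queuing=False), "vs",
          round(average_latency(stages, 100, queuing=True), 2), "cycles per command")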

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.


Interestingly, this feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing. This may reduce performance but it will make the processor more stable and may allow it to be further overclocked.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

2T Command – BIOS Optimization Guide

2T Command

Common Options : Enabled, Disabled, Auto

 

Quick Review of 2T Command

The 2T Command BIOS feature allows you to select the delay from the assertion of the Chip Select signal to the moment the memory controller starts sending commands to the memory bank. The shorter the command delay, the sooner the memory controller can send commands out to the activated memory bank.

When this feature is disabled, the memory controller will only insert a command delay of one clock cycle or 1T.

When this feature is enabled, the memory controller will insert a command delay of two clock cycles or 2T.

The Auto option allows the memory controller to use the memory module’s SPD value for command delay.

If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner.

However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the “bad commands” that result will cause data loss and corruption.

It is recommended that you try disabling 2T Command for better memory performance. But if you face stability issues, enable this BIOS feature.

 

Details of 2T Command

Whenever there is a memory read request from the operating system, the memory controller does not actually receive the physical memory addresses where the data is located. It is only given a virtual address space which it has to translate into physical memory addresses. Only then can it issue the proper read commands. This produces a slight delay at the start of every new memory transaction.

Instead of immediately issuing the read commands, the memory controller instead asserts the Chip Select signal to the physical bank that contains the requested data. What this Chip Select signal does is activate the bank so that it is ready to accept the commands. In the meantime, the memory controller will be busy translating the memory addresses. Once the memory controller has the physical memory addresses, it starts issuing read commands to the activated memory bank.

As you can see, the command delay is not caused by any latency inherent in the memory module. Rather, it is determined by the time taken by the memory controller to translate the virtual address space into physical memory addresses.

Naturally, because the delay is due to translation of addresses, the memory controller will require more time to translate addresses in high density memory modules due to the higher number of addresses. The memory controller will also take a longer time if there is a large number of physical banks.

The 2T Command BIOS feature allows you to select the delay from the assertion of the Chip Select signal to the moment the memory controller starts sending commands to the memory bank. The shorter the command delay, the sooner the memory controller can send commands out to the activated memory bank.

When this feature is disabled, the memory controller will only insert a command delay of one clock cycle or 1T.

When this feature is enabled, the memory controller will insert a command delay of two clock cycles or 2T.

The Auto option allows the memory controller to use the memory module’s SPD value for command delay.

If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner.


However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the “bad commands” that result will cause data loss and corruption.

Fortunately, all unbuffered SDRAM modules are capable of a 1T command delay with up to four memory banks per channel. Beyond that, a 2T command delay may be required. However, support for a 1T command delay varies from chipset to chipset and even from one motherboard model to another. You should consult your motherboard manufacturer to see if your motherboard supports a command delay of 1T.
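
That rule of thumb can be expressed as a short Python sketch (a hypothetical helper based purely on the guideline above; always verify against your motherboard manual and memory vendor):

# Hypothetical rule of thumb from the text: unbuffered SDRAM can usually run
# a 1T command delay with up to four physical banks (ranks) per channel;
# beyond that, a 2T command delay may be required for stability.
def suggested_command_delay(banks_per_channel, max_banks_for_1t=4):
    return "1T" if banks_per_channel <= max_banks_for_1t else "2T"

print(suggested_command_delay(2))  # -> 1T
print(suggested_command_delay(6))  # -> 2T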

It is recommended that you try disabling 2T Command for better memory performance. But if you face stability issues, enable this BIOS feature.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long, as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T, as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.
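
Below is a minimal Python sketch of such an idle-cycle counter for a single bank (a toy model with hypothetical page-hit, page-miss and page-conflict outcomes; real memory controllers are far more complex):

# Toy model of an idle-cycle limit for a single bank: the open page is closed
# (and the bank precharged) once it has idled for 'limit' cycles.
class Bank:
    def __init__(self, idle_cycle_limit):
        self.limit = idle_cycle_limit
        self.open_page = None
        self.idle = 0

    def tick_idle(self):
        # called once per clock cycle with no data request
        if self.open_page is not None:
            self.idle += 1
            if self.idle >= self.limit:
                self.open_page = None  # close the page, precharge the bank
                self.idle = 0

    def access(self, page):
        self.idle = 0
        if self.open_page == page:
            return "page hit"
        if self.open_page is None:
            self.open_page = page
            return "page miss"         # bank is already precharged
        self.open_page = page
        return "page conflict"         # old page must be closed first

bank = Bank(idle_cycle_limit=16)
print(bank.access(3))                  # page miss
print(bank.access(3))                  # page hit
for _ in range(16):
    bank.tick_idle()                   # the page idles past the limit
print(bank.access(3))                  # page miss again (the page was closed)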


This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages opened longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the memory bank to precharge before the set refresh cycle, not to delay the refresh cycle itself.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are refreshed as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (and satisfied immediately) into a page miss which will have to wait for the bank to precharge and the same page reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.

With a 0T setting, data requests are also likely to get stalled, because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the spatial locality concept, where if one data bit is read, chances are high that the next data bit will also need to be read. That’s why closing open pages prematurely with a low Idle Cycle Limit will most likely reduce performance in desktop applications.

On the other hand, using a 0T or 16T idle cycle limit will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often will also ensure that, in the event of a very long read, the pages can be kept open long enough to fulfil the data request.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.


This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T, as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

DRAM Termination – The BIOS Optimization Guide

DRAM Termination

Common Options : 50 Ohms, 75 Ohms, 150 Ohms (DDR2) / 40 Ohms, 60 Ohms, 120 Ohms (DDR3)

 

Quick Review of DRAM Termination

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal and higher power consumption.

The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for your particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory:

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.

 

Details of DRAM Termination

Like a ball thrown against a wall, electrical signals reflect (bounce) back when they reach the end of a transmission path. They also reflect at points where there is a change in impedance, e.g. at connections to DRAM devices or a bus. These reflected signals are undesirable because they distort the actual signal, impairing the signal quality and the data being transmitted.

Prior to the introduction of DDR2 memory, motherboard designers used line termination resistors at the end of the DRAM signal lines to reduce signal reflections. However, these resistors are only partially effective because they cannot reduce reflections generated by the stub lines that lead to the individual DRAM chips on the memory module (see illustration below). Even so, this method worked well enough with the lower operating frequencies and higher signal voltages of SDRAM and DDR SDRAM modules.

Line termination resistors on the motherboard (Courtesy of Rambus)

However, the higher speeds (and lower signal voltages) of DDR2 and DDR3 memory require much better signal quality, and these high-speed modules have much lower tolerances for noise. The problem is also compounded by the higher number of memory modules used. Line termination resistors are no longer good enough to tackle the problem of signal reflections. This is where On-Die Termination (ODT) comes in.

On-Die Termination shifts the termination resistors from the motherboard to the DRAM die itself. These resistors can better suppress signal reflections, providing a much better signal-to-noise ratio for DDR2 and DDR3 memory. This allows for much higher clock speeds at much lower voltages.

It also reduces the cost of motherboard designs. In addition, the impedance value of the termination resistors can be adjusted, or even turned off via the memory module’s Extended Mode Register Set (EMRS).

On-die termination (Courtesy of Rambus)

Unlike the termination resistors on the motherboard, the on-die termination resistors can be turned on and off as required. For example, when a DIMM is inactive, its on-die termination resistors turn on to prevent signals from the memory controller from reflecting towards the active DIMMs. The impedance value of the resistors is usually programmed by the BIOS at boot-time, so the memory controller only turns them on or off (unless the system includes a self-calibration circuit).

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal, and higher power consumption.


The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for your particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory:

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.
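
For convenience, the guidelines above can be encoded in a small Python sketch (a hypothetical lookup based on the Samsung DDR2 case study and the suggested DDR3 starting point; treat the values as starting points, not guarantees):

# Hypothetical ODT lookup based on the guidelines above.
def suggested_odt_ohms(memory_type, modules_per_channel, ddr2_speed=None):
    if memory_type == "DDR2":
        if modules_per_channel == 1:
            return 150
        # two modules per channel
        return 75 if ddr2_speed in ("DDR2-400", "DDR2-533") else 50
    if memory_type == "DDR3":
        return 40  # suggested starting point; raise it if unstable
    raise ValueError("unsupported memory type")

print(suggested_odt_ohms("DDR2", 2, "DDR2-667"))  # -> 50 ohms
print(suggested_odt_ohms("DDR3", 2))              # -> 40 ohms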

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

SDRAM Precharge Control – The BIOS Optimization Guide

SDRAM Precharge Control

Common Options : Enabled, Disabled

 

Quick Review of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank. This is useful in cases where subsequent data requests will also result in page misses.

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity although it is only useful if you have chosen a SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.

 

Details of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This naturally improves performance.

But if the read request cannot be satisfied by any of the four open pages, there are two possibilities. Either one page is closed and the correct page opened; or all open pages are closed and new pages opened up. Either way, the read request suffers the full latency penalty.


The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur. Naturally, this greatly impacts memory performance.

Fortunately, after the four full latency reads, the memory controller can often predict what pages will be needed next. It can then open them for minimum latency reads. This somewhat reduces the negative effect of consecutive page misses.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank.

This is useful in cases where subsequent data requests will also result in page misses. This is because the memory banks will already be precharged and ready to be activated. There is no need to wait for the memory banks to precharge before they can be activated. However, it also means that you won’t be able to benefit from data accesses that could have been satisfied by the previously opened pages.
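
The difference between the two settings can be sketched in a few lines of Python (a toy model of one open page per bank, assuming the simplified behavior described above):

# Toy model: on a page miss, either close just one page (feature enabled)
# or precharge all banks and close every open page (feature disabled).
def handle_page_miss(open_pages, requested_bank, requested_page, close_all):
    if close_all:
        open_pages.clear()                    # All Banks Precharge Command
    else:
        open_pages.pop(requested_bank, None)  # close only the conflicting page
    open_pages[requested_bank] = requested_page
    return open_pages

pages = {0: 11, 1: 22, 2: 33, 3: 44}          # bank -> currently open page
print(handle_page_miss(dict(pages), 2, 99, close_all=False))  # banks 0, 1 and 3 stay open
print(handle_page_miss(dict(pages), 2, 99, close_all=True))   # only bank 2 has an open page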

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity although it is only useful if you have chosen a SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

LD-Off Dram RD/WR Cycles – The BIOS Optimization Guide

LD-Off Dram RD/WR Cycles

Common Options : Delay 1T, Normal

 

Quick Review

The LD-Off Dram RD/WR Cycles BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.

 

Details

At the beginning of a memory transaction (read or write), the memory controller normally sends the address and command signals simultaneously to the memory bank. This allows for the quickest activation of the memory bank.

However, this may cause problems with certain memory modules. In these memory modules, the target row may not be activated quickly enough to allow the memory controller to read from or write to it. This is where the LD-Off Dram RD/WR Cycles BIOS feature comes in.


This BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle. This ensures there is enough time for the memory bank to be activated before the read or write command arrives.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

MCLK Spread Spectrum – The BIOS Optimization Guide

MCLK Spread Spectrum

Common Options : 0.25%, 0.5%, 0.75%, Disabled

 

Quick Review

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The MCLK Spread Spectrum BIOS feature controls spread spectrum clocking of the memory bus. It usually offers three levels of modulation – 0.25%, 0.5% or 0.75%. They denote the amount of modulation around the memory bus frequency. The greater the modulation, the greater the reduction of EMI. Therefore, if you need to significantly reduce EMI, a modulation of 0.75% is recommended.

Generally, frequency modulation through spread spectrum clocking should not cause any problems. However, system stability may be compromised if you are overclocking the memory bus.

Therefore, it is recommended that you disable the MCLK Spread Spectrum feature if you are overclocking the memory bus. Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the memory bus frequency a little to provide a margin of safety.

If you are not overclocking the memory bus, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature. Otherwise, disable it to remove even the slightest possibility of stability issues.

 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted.

To prevent EMI from causing problems to other electronics, the FCC enacted Part 15 of the FCC regulations in 1975. It regulates the power output of such clock generators by limiting the amount of EMI they can generate. As a result, engineers use spread spectrum clocking to ensure that their motherboards comply with the FCC regulation on EMI levels.

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. Instead of generating a typical waveform, the clock signal continuously varies around the target frequency within a tight range. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The MCLK Spread Spectrum BIOS feature controls spread spectrum clocking of the memory bus. It usually offers three levels of modulation – 0.25%, 0.5% or 0.75%. They denote the amount of modulation around the memory bus frequency. The greater the modulation, the greater the reduction of EMI. Therefore, if you need to significantly reduce EMI, a modulation of 0.75% is recommended.
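
As a rough illustration, here is a short Python sketch of the frequency band for a given modulation setting (assuming simple center-spread modulation around the nominal memory clock; actual implementations may use down-spread modulation instead):

# Assuming center-spread modulation: the clock varies +/- half the
# modulation percentage around the nominal frequency.
def mclk_spread_band(nominal_mhz, modulation_percent):
    delta = nominal_mhz * (modulation_percent / 100.0) / 2.0
    return (nominal_mhz - delta, nominal_mhz + delta)

print(mclk_spread_band(200, 0.5))   # -> (199.5, 200.5) MHz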


Generally, frequency modulation through spread spectrum clocking should not cause any problems. However, system stability may be compromised if you are overclocking the memory bus. Of course, this depends on the amount of modulation, the extent of overclocking and other factors like temperature, voltage levels, etc. As such, the problem may not readily manifest itself immediately.

Therefore, it is recommended that you disable the MCLK Spread Spectrum feature if you are overclocking the memory bus. You will be able to achieve better overclockability, at the expense of higher EMI. Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the memory bus frequency a little to provide a margin of safety.

If you are not overclocking the memory bus, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature. Otherwise, disable it to remove even the slightest possibility of stability issues.

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Auto Detect DIMM/PCI Clk – The BIOS Optimization Guide

Auto Detect DIMM/PCI Clk

Common Options : Enabled, Disabled

 

Quick Review

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI (Electromagnetic Interference) and reduce power consumption by turning off unoccupied or inactive PCI and memory slots.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

It is recommended that you enable this feature to save power and reduce EMI.


 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted. To reduce this problem, the motherboard can either modulate the pulses (see Spread Spectrum) or turn off unused AGP, PCI or memory clock signals.

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI and reduce power consumption by turning off unoccupied or inactive PCI and memory slots. It is similar to the Smart Clock option of the Spread Spectrum BIOS feature.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

This method allows you to reduce the motherboard’s EMI levels without compromising system stability. It also allows the motherboard to reduce power consumption because the clock signals will only be generated for PCI and memory slots that are occupied and active.

The choice of whether to enable or disable this feature is really up to your personal preference. But since this feature reduces EMI and power consumption without compromising system stability, it is recommended that you enable it.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Read-Around-Write – The BIOS Optimization Guide

Read-Around-Write

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature allows the processor to execute read commands out of order, as if they are independent from the write commands. It does this by using a Read-Around-Write buffer.

If this BIOS feature is enabled, all processor writes to memory are first accumulated in that buffer. This allows the processor to execute read commands without waiting for the write commands to be completed.

The buffer will then combine the writes and write them to memory as burst transfers. This reduces the number of writes to memory and boosts the processor’s write performance.

If this BIOS feature is disabled, the processor writes directly to the memory controller. This reduces the processor’s read performance.

Therefore, it is highly recommended that you enable the Read-Around-Write BIOS feature for better processor read and write performance.

 

Details

This BIOS feature allows the processor to execute read commands out of order, as if they are independent from the write commands. It does this by using a Read-Around-Write buffer.

If this BIOS feature is enabled, all processor writes to memory are first accumulated in that buffer. This allows the processor to execute read commands without waiting for the write commands to be completed.

The buffer will then combine the writes and write them to memory as burst transfers. This reduces the number of writes to memory and boosts the processor’s write performance.

Incidentally, until its contents have been written to memory, the Read-Around-Write buffer also serves as a cache of the data that it is storing. These tend to be the most up-to-date data since the processor has just written them to the buffer.


Therefore, if the processor sends out a read command for data that is still in the Read-Around-Write buffer, the processor can read directly from the buffer instead. This greatly improves read performance because the processor bypasses the memory controller to access the data. The buffer is much closer logically, so reading from it will be much faster than reading from memory.
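
Here is a minimal Python sketch of the idea (a conceptual model only; the addresses and burst behavior are hypothetical and greatly simplified):

# Conceptual read-around-write buffer: writes accumulate in the buffer,
# reads are served from the buffer when possible, and the buffered writes
# are later flushed to memory as a single combined burst.
class ReadAroundWriteBuffer:
    def __init__(self, memory):
        self.memory = memory          # backing store: address -> data
        self.pending = {}             # buffered (not yet written) data

    def write(self, address, data):
        self.pending[address] = data  # no wait for the write to complete

    def read(self, address):
        if address in self.pending:   # the most up-to-date data is in the buffer
            return self.pending[address]
        return self.memory.get(address)

    def flush(self):
        self.memory.update(self.pending)  # combined burst write to memory
        self.pending.clear()

ram = {0x10: "old"}
buf = ReadAroundWriteBuffer(ram)
buf.write(0x10, "new")
print(buf.read(0x10))   # "new" - read served from the buffer, no stall
buf.flush()
print(ram[0x10])        # "new" - written to memory as part of the burst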

If this BIOS feature is disabled, the processor writes directly to the memory controller. All writes have to be completed before the processor can execute a read command. It also prevents the buffer from being used as a temporary cache of processor writes. This reduces the processor’s read performance.

Therefore, it is highly recommended that you enable the Read-Around-Write BIOS feature for better processor read and write performance.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Memory Hole At 15M-16M – The BIOS Optimization Guide

Memory Hole At 15M-16M

Common Options : Enabled, Disabled

 

Quick Review

Certain ISA cards require exclusive access to the 1 MB block of memory, from the 15th to the 16th megabyte, to work properly. The Memory Hole At 15M-16M BIOS feature allows you to reserve that 1 MB block of memory for such cards to use.

If you enable this feature, 1 MB of memory (the 15th MB) will be reserved exclusively for the ISA card’s use. This effectively reduces the total amount of memory available to the operating system by 1 MB.

Please note that in certain motherboards, enabling this feature may actually render all memory above the 15th MB unavailable to the operating system!

If you disable this feature, the 15th MB of RAM will not be reserved for the ISA card’s use. The full range of memory is therefore available for the operating system to use. However, if your ISA card requires the use of that memory area, it may then fail to work.

Since ISA cards are a thing of the past, it is highly recommended that you disable this feature. Even if you have an ISA card that you absolutely have to use, you may not actually need to enable this feature.

Most ISA cards do not need exclusive access to this memory area. Make sure that your ISA card requires this memory area before enabling this feature. You should use this BIOS feature only in a last-ditch attempt to get a stubborn ISA card to work.

 

Details

Certain ISA cards require exclusive access to the 1 MB block of memory, from the 15th to the 16th megabyte, to work properly. The Memory Hole At 15M-16M BIOS feature allows you to reserve that 1 MB block of memory for such cards to use.

If you enable this feature, 1 MB of memory (the 15th MB) will be reserved exclusively for the ISA card’s use. This effectively reduces the total amount of memory available to the operating system by 1 MB. Therefore, if you have 256 MB of memory, the usable amount of memory will be reduced to 255 MB.

Please note that in certain motherboards, enabling this feature may actually render all memory above the 15th MB unavailable to the operating system! In such cases, you will end up with only 14 MB of usable memory, irrespective of how much memory your system actually has.
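
The arithmetic is trivial, but for completeness, here is a short Python sketch of the two outcomes described above (the second behavior is motherboard-specific, as noted):

# Usable memory with the 15M-16M hole enabled, in MB.
def usable_memory(installed_mb, hole_enabled, buggy_board=False):
    if not hole_enabled:
        return installed_mb
    if buggy_board:
        return 14                  # everything above the 15th MB is lost
    return installed_mb - 1        # only the 15th MB is reserved

print(usable_memory(256, hole_enabled=True))                    # -> 255
print(usable_memory(256, hole_enabled=True, buggy_board=True))  # -> 14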


If you disable this feature, the 15th MB of RAM will not be reserved for the ISA card’s use. The full range of memory is therefore available for the operating system to use. However, if your ISA card requires the use of that memory area, it may then fail to work.

Since ISA cards are a thing of the past, it is highly recommended that you disable this feature. Even if you have an ISA card that you absolutely have to use, you may not actually need to enable this feature.

Most ISA cards do not need exclusive access to this memory area. Make sure that your ISA card requires this memory area before enabling this feature. You should use this BIOS feature only in a last-ditch attempt to get a stubborn ISA card to work.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Synchronous Mode Select – The BIOS Optimization Guide

Synchronous Mode Select

Common Options : Synchronous, Asynchronous

 

Quick Review

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.

 

Details

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.


However, the Asynchronous mode does have its uses. Users of multiplier-locked processors and slow memory modules may find that using the Asynchronous mode allows them to overclock the processor much higher without the need to buy faster memory modules.

The Asynchronous mode is also useful for those who have very fast memory modules and multiplier-locked processors with low bus speeds. Running the fast memory modules synchronously with the low CPU bus speed would force the memory modules to run at the same slow speed. Running asynchronously will therefore allow the memory modules to run at a much higher speed than the CPU bus.

But please note that the performance gains of running synchronously should not be underestimated. Synchronous operation is generally much faster than asynchronous operation running at a higher clock speed. It is advisable that you compare benchmark scores of your computer running asynchronously (at a higher clock speed) and synchronously to determine the best option for your system.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

CPU Hardware Prefetch – The BIOS Optimization Guide

CPU Hardware Prefetch

Common Options : Enabled, Disabled

 

Quick Review

The processor has a hardware prefetcher that automatically analyzes its requirements and prefetches data and instructions from the memory into the Level 2 cache that are likely to be required in the near future. This reduces the latency associated with memory reads.

When enabled, the processor’s hardware prefetcher will be enabled and allowed to automatically prefetch data and code for the processor.

When disabled, the processor’s hardware prefetcher will be disabled.

If you are using a C1 (or newer) stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, it is recommended that you enable this BIOS feature so that the hardware prefetcher is enabled for maximum performance.

But if you are using a stepping older than C1 of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, then you should disable the CPU Hardware Prefetch BIOS feature to circumvent the O37 bug, which causes data corruption when the hardware prefetcher is operational.

 

Details

CPU Hardware Prefetch is a BIOS feature specific to processors based on the Intel NetBurst microarchitecture (e.g. Intel Pentium 4 and Intel Pentium 4 Xeon).

These processors have a hardware prefetcher that automatically analyzes the processor’s requirements and prefetches data and instructions from the memory into the Level 2 cache that are likely to be required in the near future. This reduces the latency associated with memory reads.

When it works, the hardware prefetcher does a great job of keeping the processor loaded with code and data. However, it doesn’t always work right.

Prior to the C1 stepping of the Intel Pentium 4 and Intel Pentium 4 Xeon, these processors shipped with a bug that causes data corruption when the hardware prefetcher was enabled. According to Intel, Errata O37 causes the processor to “use stale data from the cache while the Hardware Prefetcher is enabled“.

Unfortunately, the only solution for the affected processors is to disable the hardware prefetcher. This is where the CPU Hardware Prefetch BIOS feature comes in.


When enabled, the processor’s hardware prefetcher will be enabled and allowed to automatically prefetch data and code for the processor.

When disabled, the processor’s hardware prefetcher will be disabled.

If you are using a C1 (or newer) stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, it is recommended that you enable this BIOS feature so that the hardware prefetcher is enabled for maximum performance.

But if you are using a stepping older than C1 of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, then you should disable this BIOS feature to circumvent the O37 bug, which causes data corruption when the hardware prefetcher is operational.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

ZADAK Announces The ZADAK511 Shield RGB Series

Taipei, 22 December 2016 – ZADAK Lab is thrilled to be the only brand with the distinction of having overcome the challenges of developing multi-colored LED-illuminated SSDs and memory modules with its new SHIELD RGB series.

The ZADAK team is happy to announce that both SHIELD RGB DDR4 and SHIELD RGB SSD are ground-breaking products unlike anything else in the market right now. The ZADAK511 SHIELD RGB memory and SSD set the standard for design and functionality in both markets for the next year or two. This is truly a revolution brought on by the SHIELD series from ZADAK511.

ZADAK511 Shield RGB Series: SHIELD RGB DDR4 and SHIELD RGB SSD

ZADAK LAB is the only brand with an RGB dual-interface SSD. The SHIELD is the world’s first and only SSD featuring dual-interface connections for both SATAIII and USB3.1 Type-C. The patented SHIELD RGB Dual-Interface SSD is one of the most beautiful designs ever seen on an SSD and is easily distinguishable from other SSDs.

The ZADAK team utilized various metal materials, forged with exquisite workmanship, to create the multi-level design of the upper cover of the SHIELD RGB SSD, making the look pop with a 3D effect. This makes the product very exciting to look at, and simply an amazing adornment for PC enthusiasts, standing as a legend amongst SSDs because of its craftsmanship.

Function-wise, the high-speed SATAIII and USB3.1 Type-C Gen2 deliver superior performance from the SHIELD SSD’s MLC NAND flash which underwent rigorous testing for guaranteed reliability so your files are safe in the drive. The ZADAK511 SHIELD RGB SSD is rated for up to 550MB/s read and 480MB/s write speed with capacities up to 480GB.

The ZADAK511 SHIELD SSD also comes with the ZArsenal software, which regulates the colors and lighting patterns and can sync with all types of gaming motherboards. It also allows instant firmware updates, as well as displaying status information about the SSD, such as health status, disk information, capacity and alerts, for maximum peace of mind, so that measures can be taken when issues occur.


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

DRAM Bus Selection – The BIOS Optimization Guide

DRAM Bus Selection

Common Options : Auto, Single Channel, Dual Channel

 

Quick Review

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module; or your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel, if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.

 

Details

Many motherboards now come with dual memory channels. Each channel can be accessed by the memory controller concurrently, thereby improving memory throughput, as well as reducing memory latency.

Depending on the chipset and motherboard design, each memory channel may support one or more DIMM slots. But for the dual channel feature to work properly, at least one DIMM slot from each memory channel must be filled.

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module; or your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel, if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.
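
The decision logic above can be summed up in a short Python sketch (a hypothetical helper; the 'dual_channel_stable' flag simply reflects whether your own dual channel testing passed):

# Hypothetical helper reflecting the recommendations above.
def suggested_dram_bus_mode(modules_on_channel_a, modules_on_channel_b,
                            dual_channel_stable=True):
    if modules_on_channel_a >= 1 and modules_on_channel_b >= 1 and dual_channel_stable:
        return "Dual Channel"
    return "Single Channel"

print(suggested_dram_bus_mode(1, 1))                             # -> Dual Channel
print(suggested_dram_bus_mode(2, 0))                             # -> Single Channel
print(suggested_dram_bus_mode(1, 1, dual_channel_stable=False))  # -> Single Channel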


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Rank Interleave – The BIOS Optimization Guide

Rank Interleave

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.

Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 

Details

Rank is a new term used to differentiate physical banks on a particular memory module from internal banks within the memory chip. Single-sided memory modules have a single rank while double-sided memory modules have two ranks.

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.


Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

Please note that Rank Interleave currently works only if you are using double-sided memory modules. Rank Interleave will not work with two or more single-sided memory modules. The interleaving ranks must be on the same memory module.

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!