Errata 94 Enhancement – The BIOS Optimization Guide

Errata 94 Enhancement

Common Options : Auto, Enabled, Disabled

 

Quick Review

Errata 94 refers to the 94th bug identified in AMD Athlon and Opteron processors. This bug affects the sequential prefetch feature in those processors.

When there’s an instruction cache miss, the sequential prefetch mechanism in affected processors may incorrectly prefetch the next sequential cache line. This may cause the processor to hang. Affected 64-bit processors that run 32-bit applications may end up executing incorrect code.

This bug affects the following processor families :

  • AMD Opteron (Socket 940)
  • AMD Athlon 64 (Socket 754, 939)
  • AMD Athlon 64 FX (Socket 940, 939)
  • Mobile AMD Athlon 64 (Socket 754)
  • AMD Sempron (Socket 754, 939)
  • Mobile AMD Sempron (Socket 754)
  • Mobile AMD Athlon XP-M (Socket 754)

This BIOS feature is a workaround for the bug. It allows you to disable the sequential prefetch mechanism and prevent the bug from manifesting.

When enabled, the BIOS will disable the processor’s sequential prefetch mechanism for any software that operates in Long Mode.

When disabled, the BIOS will not disable the processor’s sequential prefetch mechanism. This improves the processor’s performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or processing incorrect code. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.

 

Details

As processors get more and more complex, every new processor design inevitably comes with a plethora of bugs. Those that are identified are given errata numbers.

Errata 94 refers to the 94th bug identified in AMD Athlon and Opteron processors. This bug affects the sequential prefetch feature in those processors.

When there’s an instruction cache miss, the sequential prefetch mechanism in affected processors may incorrectly prefetch the next sequential cache line. This may cause the processor to hang. Affected 64-bit processors that run 32-bit applications may end up executing incorrect code.

This bug is present in AMD processor revisions SH-B3, SH-C0, SH-CG, DH-CG and CH-CG. These revisions affect the following processor families :

  • AMD Opteron (Socket 940)
  • AMD Athlon 64 (Socket 754, 939)
  • AMD Athlon 64 FX (Socket 940, 939)
  • Mobile AMD Athlon 64 (Socket 754)
  • AMD Sempron (Socket 754, 939)
  • Mobile AMD Sempron (Socket 754)
  • Mobile AMD Athlon XP-M (Socket 754)

The processor families that are not affected are :

  • Dual Core AMD Opteron
  • AMD Athlon 64 X2
  • AMD Turion Mobile Technology

This BIOS feature is a workaround for the bug. It allows you to disable the sequential prefetch mechanism and prevent the bug from manifesting.

When enabled, the BIOS will disable the processor’s sequential prefetch mechanism for any software that operates in Long Mode.

When disabled, the BIOS will not disable the processor’s sequential prefetch mechanism. This improves the processor’s performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.
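To illustrate how the Auto setting might work, here is a minimal C sketch of BIOS-style logic: read the processor’s CPUID signature, and set a “disable sequential prefetch” bit only if the revision is one of the affected ones. The MSR address, bit position and helper functions are assumptions for illustration, not AMD’s documented workaround.

```c
#include <stdint.h>

/* All names below are illustrative assumptions: the actual MSR and bit
   used by the real BIOS workaround are not given in this article. */
#define MSR_IC_CFG     0xC0011021u   /* assumed instruction-cache config MSR */
#define IC_DIS_SEQ_PF  (1ull << 9)   /* assumed "disable sequential prefetch" bit */

extern uint32_t cpuid_eax(uint32_t leaf);            /* returns EAX of CPUID */
extern uint64_t rdmsr(uint32_t msr);
extern void     wrmsr(uint32_t msr, uint64_t value);
extern int      revision_is_affected(uint32_t fms);  /* SH-B3/SH-C0/SH-CG/DH-CG/CH-CG? */

/* "Auto": query the processor, enable the workaround only if it is affected. */
void errata94_auto(void)
{
    uint32_t fms = cpuid_eax(1);   /* family/model/stepping signature */

    if (revision_is_affected(fms))
        wrmsr(MSR_IC_CFG, rdmsr(MSR_IC_CFG) | IC_DIS_SEQ_PF);
}
```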

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or processing incorrect code. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

IDE HDD Block Mode – The BIOS Optimization Guide

IDE HDD Block Mode

Common Options : Enabled, Disabled

 

Quick Review

The IDE HDD Block Mode BIOS feature speeds up hard disk drive access by transferring multiple sectors of data per interrupt instead of using the usual single-sector transfer mode. This mode of transferring data is known as block transfers.

When you enable this feature, the BIOS will automatically detect if your hard disk drive supports block transfers and set the proper block transfer settings for it. Depending on the IDE controller, up to 64 KB of data can be transferred per interrupt when block transfers are enabled. Since all current hard disk drives support block transfers, there is usually no reason why IDE HDD Block Mode should be disabled.

Please note that if you disable IDE HDD Block Mode, only 512 bytes of data can be transferred per interrupt. Needless to say, this will significantly degrade performance.

Therefore, you should disable IDE HDD Block Mode only if you actually face the possibility of data corruption (with an unpatched version of Windows NT 4.0). Otherwise, it is highly recommended that you enable this BIOS feature for significantly better hard disk drive performance!

 

Details

The IDE HDD Block Mode BIOS feature speeds up hard disk drive access by transferring multiple sectors of data per interrupt instead of using the usual single-sector transfer mode. This mode of transferring data is known as block transfers.

When you enable this feature, the BIOS will automatically detect if your hard disk drive supports block transfers and set the proper block transfer settings for it. Depending on the IDE controller, up to 64 KB of data can be transferred per interrupt when block transfers are enabled. Since all current hard disk drives support block transfers, there is usually no reason why IDE HDD Block Mode should be disabled.
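For the curious, block transfers are enabled on the drive itself with the ATA SET MULTIPLE MODE command; the maximum block size is reported in word 47 of the drive’s IDENTIFY DEVICE data. The sketch below assumes legacy primary-channel I/O ports and simple status polling. It is a simplified model of what the BIOS does, not its actual code.

```c
#include <stdint.h>

/* Port I/O helpers assumed to be provided by the environment. */
extern void    outb(uint16_t port, uint8_t value);
extern uint8_t inb(uint16_t port);

#define ATA_SECT_COUNT   0x1F2   /* primary channel: sector count register */
#define ATA_CMD_STATUS   0x1F7   /* primary channel: command/status register */
#define CMD_SET_MULTIPLE 0xC6    /* ATA SET MULTIPLE MODE */

/* Program the drive to transfer 'sectors' sectors per interrupt.
   16 sectors x 512 bytes = 8 KB per interrupt; 128 sectors would be 64 KB. */
int ide_enable_block_mode(uint8_t sectors)
{
    outb(ATA_SECT_COUNT, sectors);
    outb(ATA_CMD_STATUS, CMD_SET_MULTIPLE);

    while (inb(ATA_CMD_STATUS) & 0x80)   /* poll while BSY is set */
        ;
    return (inb(ATA_CMD_STATUS) & 0x01) ? -1 : 0;   /* ERR set -> rejected */
}
```

If the command succeeds, the drive then services READ MULTIPLE (C4h) and WRITE MULTIPLE (C5h) commands in blocks of the programmed size.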

However, if you are running on Windows NT 4.0, you might need to disable this BIOS feature because Windows NT 4.0 has a problem with block transfers. According to Chris Bope, Windows NT does not support IDE HDD Block Mode and enabling this feature can cause data to be corrupted.


Ryu Connor confirmed this by sending me a link to a Microsoft article (Enhanced IDE operation under Windows NT 4.0). According to this article, IDE HDD Block Mode and 32-bit Disk Access have been found to cause data corruption in some cases. Therefore, Microsoft recommends that Windows NT 4.0 users disable IDE HDD Block Mode.

Lord Mike asked ‘someone in the know’ about this matter and he was told that the data corruption issue was taken very seriously at Microsoft and that it had been corrected through Windows NT 4.0 Service Pack 2. Although he could not get an official statement from Microsoft, it is probably safe enough to enable IDE HDD Block Mode on a Windows NT 4.0 system, just as long as it has been upgraded with Service Pack 2.

Please note that if you disable IDE HDD Block Mode, only 512 bytes of data can be transferred per interrupt. Needless to say, this will significantly degrade performance.

Therefore, you should disable IDE HDD Block Mode only if you actually face the possibility of data corruption (with an unpatched version of Windows NT 4.0). Otherwise, it is highly recommended that you enable this BIOS feature for significantly better hard disk drive performance!

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Differential Current – The BIOS Optimization Guide

Differential Current

Common Options : 4x Iref, 5x Iref, 6x Iref, 7x Iref

 

Quick Review

The Differential Current BIOS feature allows you to change the amount of differential current produced by the clock driver pairs, effectively changing the voltage swing of the system clocks.

When set to 4x Iref, the current difference is four times that of Iref, the reference current source.

When set to 5x Iref, the current difference is five times that of Iref, the reference current source.

When set to 6x Iref, the current difference is six times that of Iref, the reference current source.

When set to 7x Iref, the current difference is seven times that of Iref, the reference current source.

By default, the Differential Current BIOS feature is set to 4x Iref. Unfortunately, it is not known what that translates to in voltage. Not even the Iref value is known. However, the higher the differential current, the greater the voltage swing.

As a higher voltage swing improves the integrity of the clock signals and overall system stability, it is recommended that you set this BIOS feature to 7x Iref for a higher differential current. However, please note that this will increase the amount of EMI (Electromagnetic Interference) produced by the motherboard.

 

Details

In the Intel Pentium 4 platform, the voltage swing used by the system clocks is not derived from a common voltage source. Instead, it uses Iref, the reference current source, to drive pairs of clock drivers that produce differential currents. These differential currents are used to set the voltage swing of the various system clocks.

This new clocking method reduces the effect of noise on the voltage swing of the system clocks. This results in better timing margins which can translate into tighter, faster timings or better stability.

The Differential Current BIOS feature allows you to change the amount of differential current produced by the clock driver pairs, effectively changing the voltage swing of the system clocks.


When set to 4x Iref, the current difference is four times that of Iref, the reference current source.

When set to 5x Iref, the current difference is five times that of Iref, the reference current source.

When set to 6x Iref, the current difference is six times that of Iref, the reference current source.

When set to 7x Iref, the current difference is seven times that of Iref, the reference current source.

By default, the Differential Current BIOS feature is set to 4x Iref. Unfortunately, it is not known what that translates to in voltage. Not even the Iref value is known.

However, the higher the differential current, the greater the voltage swing. In other words, 4x Iref produces the lowest voltage swing while 7x Iref produces the highest voltage swing.
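Since neither Iref nor the resulting voltages are published, the figures in this sketch are pure placeholders. It only demonstrates the proportional relationship (swing = multiplier × Iref × termination resistance, by Ohm’s law):

```c
#include <stdio.h>

/* Illustrative only: the article notes that neither Iref nor the resulting
   voltages are known. The values below are assumptions chosen purely to
   show the proportionality V_swing = multiplier * Iref * R. */
int main(void)
{
    const double iref_mA = 1.0;   /* assumed reference current, in mA */
    const double r_ohm   = 50.0;  /* assumed effective termination, in ohms */

    for (int mult = 4; mult <= 7; mult++) {
        double swing_mV = mult * iref_mA * r_ohm;   /* Ohm's law */
        printf("%dx Iref -> %.0f mV swing\n", mult, swing_mV);
    }
    return 0;
}
```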

As a higher voltage swing improves the integrity of the clock signals and overall system stability, it is recommended that you set this BIOS feature to 7x Iref for a higher differential current. However, please note that this will increase the amount of EMI (Electromagnetic Interference) produced by the motherboard.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

DRAM Bus Selection – The BIOS Optimization Guide

DRAM Bus Selection

Common Options : Auto, Single Channel, Dual Channel

 

Quick Review

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module, or if your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.

 

Details

Many motherboards now come with dual memory channels. The memory controller can access both channels concurrently, thereby improving memory throughput as well as reducing memory latency.

Depending on the chipset and motherboard design, each memory channel may support one or more DIMM slots. But for the dual channel feature to work properly, at least one DIMM slot from each memory channel must be filled.
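A simplified model of why both channels must be populated: in dual channel mode, the controller typically splits the address space between channels at cache-line granularity, so consecutive lines land on alternate channels. The specific bit used below is an assumption.

```c
#include <stdint.h>

/* Assumed 64-byte cache lines: bit 6 of the physical address picks the
   channel, so line N and line N+1 can be fetched from both channels at once. */
static inline int channel_of(uint64_t phys_addr)
{
    return (int)((phys_addr >> 6) & 1);
}
```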

The DRAM Bus Selection BIOS feature allows you to manually set the functionality of the dual channel feature. By default, it’s set to Auto. However, it is recommended that you manually select either Single Channel or Dual Channel.

If you are only using a single memory module, or if your memory modules are installed on the same channel, you should select Single Channel. You should also select Single Channel if your memory modules do not function properly in dual channel mode.

If you have at least one memory module installed on both memory channels, you should select Dual Channel for improved bandwidth and reduced latency. But if your system does not function properly after doing so, your memory modules may not be able to support dual channel transfers. If so, set it back to Single Channel.


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

CPU to PCI Write Buffer – The BIOS Optimization Guide

CPU to PCI Write Buffer

Common Options : Enabled, Disabled

 

Quick Review

The CPU to PCI Write Buffer BIOS feature controls the chipset’s CPU-to-PCI write buffer. It is used to store PCI writes from the processor before they are written to the PCI bus.

When enabled, all PCI writes from the processor will go directly to the write buffer. This allows the processor to work on something else while the write buffer writes the data to the PCI bus on the next available PCI cycle.

When disabled, the processor bypasses the buffer and writes directly to the PCI bus. This ties up the processor for the entire length of the transaction.

It is recommended that you enable this BIOS feature for better performance.

 

Details

The CPU to PCI Write Buffer BIOS feature controls the chipset’s CPU-to-PCI write buffer. It is used to store PCI writes from the processor before they are written to the PCI bus.

If this buffer is disabled, the processor bypasses the buffer and writes directly to the PCI bus. Although this may seem like the faster and better method, it really isn’t so.

When the processor wants to write to the PCI bus, it has to arbitrate for control of the PCI bus. This takes time, especially when there are other devices requesting access to the PCI bus as well. During this time, the processor cannot do anything else but wait for its turn.

Even when it gets control of the PCI bus, the processor still has to wait until the PCI bus is free. Because the processor bus (which can be as fast as 533 MHz) is many times faster than the PCI bus (at only 33 MHz), the processor wastes many clock cycles just waiting for the PCI bus. And it hasn’t even begun writing to the PCI bus yet! The entire transaction, therefore, puts the processor out of commission for many clock cycles.


This is where the CPU-to-PCI write buffer comes in. It is a small memory buffer built into the chipset. The actual size of the buffer varies from chipset to chipset. But in most cases, it is big enough for four words, or 64 bits, of data.

When this write buffer is enabled, all PCI writes from the processor will go straight into it, instead of the PCI bus. This is virtually instantaneous since the processor does not have to arbitrate or wait for the PCI bus. That task is now left to the chipset and its write buffer. The processor is thus free to work on something else.

It is important to note that the write buffer won’t be able to write the data to the PCI bus any faster than the processor can. This is because the write buffer still has to arbitrate and wait for control of the PCI bus! But the difference here is that the entire transaction can now be carried out without tying up the processor.
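The behaviour described above can be modelled as a small FIFO of posted writes, as in this hypothetical sketch: the CPU side returns immediately when a slot is free, and the chipset drains entries whenever it wins PCI arbitration. The four-entry FIFO of 32-bit writes is a simplification of the roughly 64-bit buffer mentioned above.

```c
#include <stdint.h>

#define BUF_DEPTH 4   /* toy model of a small posted-write buffer */

struct pci_write { uint32_t addr, data; };

static struct pci_write buf[BUF_DEPTH];
static int head, tail, count;

/* CPU side: returns 1 if the write was posted (the CPU continues at once),
   0 if the buffer is full and the CPU must stall. */
int cpu_post_write(uint32_t addr, uint32_t data)
{
    if (count == BUF_DEPTH)
        return 0;
    buf[tail] = (struct pci_write){ addr, data };
    tail = (tail + 1) % BUF_DEPTH;
    count++;
    return 1;
}

/* Chipset side: called each time the chipset wins PCI bus arbitration. */
extern void pci_bus_write(uint32_t addr, uint32_t data);  /* assumed hook */

void drain_one(void)
{
    if (count) {
        pci_bus_write(buf[head].addr, buf[head].data);
        head = (head + 1) % BUF_DEPTH;
        count--;
    }
}
```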

To sum it all up, enabling the CPU to PCI write buffer frees up CPU cycles that would normally be wasted waiting for the PCI bus. Therefore, it is recommended that you enable this feature for better performance.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Graphics Aperture Size – The BIOS Optimization Guide

Graphics Aperture Size

Common Options : 4, 8, 16, 32, 64, 128, 256 (in MB)

 

Quick Review

The Graphics Aperture Size BIOS feature does two things. It selects the size of the AGP aperture and it determines the size of the GART (Graphics Address Relocation Table).

The aperture is a portion of the PCI memory address range that is dedicated for use as AGP memory address space while the GART is a translation table that translates AGP memory addresses into actual memory addresses which are often fragmented. The GART allows the graphics card to see the memory region available to it as a contiguous piece of memory range.

Host cycles that hit the aperture range are forwarded to the AGP bus without need for translation. The aperture size also determines the maximum amount of system memory that can be allocated to the AGP graphics card for texture storage.

Please note that the AGP aperture is merely address space, not actual physical memory in use. Although it is very common to hear people recommending that the AGP aperture size should be half the size of system memory, that is wrong!

The requirement for AGP memory space shrinks as the graphics card’s local memory increases in size. This is because the graphics card will have more local memory to dedicate to texture storage. So, if you upgrade to a graphics card with more memory, you shouldn’t be “deceived” into thinking that you will need even more AGP memory! On the contrary, a smaller AGP memory space will be required.

It is recommended that you keep the Graphics Aperture Size around 64 MB to 128 MB, even if your graphics card has a lot of onboard memory. This allows flexibility in the event that you actually need extra memory for texture storage. It will also keep the GART (Graphics Address Relocation Table) within a reasonable size.

 

Details

The Graphics Aperture Size BIOS feature does two things. It selects the size of the AGP aperture and it determines the size of the GART (Graphics Address Relocation Table).

The aperture is a portion of the PCI memory address range that is dedicated for use as AGP memory address space while the GART is a translation table that translates AGP memory addresses into actual memory addresses which are often fragmented. The GART allows the graphics card to see the memory region available to it as a contiguous piece of memory range.
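Conceptually, a GART lookup works like a one-level page table, as in this illustrative sketch (the names and the 4 KB page size are assumptions):

```c
#include <stdint.h>

#define PAGE_SHIFT 12       /* assumed 4 KB GART pages */
#define PAGE_MASK  0xFFFu

extern uint32_t gart[];          /* one entry per 4 KB aperture page */
extern uint32_t aperture_base;   /* start of the AGP aperture range */

/* Translate an aperture (bus) address into the scattered physical address:
   look up the page's physical frame, then keep the offset within the page. */
uint32_t gart_translate(uint32_t bus_addr)
{
    uint32_t page = (bus_addr - aperture_base) >> PAGE_SHIFT;
    return (gart[page] & ~PAGE_MASK) | (bus_addr & PAGE_MASK);
}
```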

Host cycles that hit the aperture address range are forwarded to the AGP bus without need for translation. The aperture size also determines the maximum amount of system memory that can be allocated to the AGP graphics card for texture storage.

The graphics aperture size is calculated using this formula :

AGP Aperture Size = (Maximum usable AGP memory size x 2) + 12 MB

As you can see, the actual available AGP memory space is less than half the AGP aperture size set in the BIOS. This is because the AGP controller needs a write-combined memory area equal in size to the actual AGP memory area (uncached), plus an additional 12 MB for virtual addressing.

Therefore, it isn’t simply a matter of determining how much AGP memory space you need. You also need to calculate the final aperture size by doubling the amount of AGP memory space desired and adding 12 MB to the total.
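Applying that formula, a hypothetical helper might pick the smallest BIOS option that covers the requirement. This matches the worked example further down: 15 MB of AGP memory needs a 42 MB aperture, which rounds up to the 64 MB setting.

```c
#include <stdint.h>

/* The aperture sizes this BIOS feature offers, in MB. */
static const uint32_t options_mb[] = { 4, 8, 16, 32, 64, 128, 256 };

/* Double the desired AGP texture memory, add 12 MB, then round up to the
   next available BIOS option. */
uint32_t aperture_for(uint32_t agp_mem_mb)
{
    uint32_t needed = agp_mem_mb * 2 + 12;   /* e.g. 15 MB -> 42 MB */
    for (unsigned i = 0; i < sizeof(options_mb) / sizeof(options_mb[0]); i++)
        if (options_mb[i] >= needed)
            return options_mb[i];            /* 42 MB -> 64 MB setting */
    return 256;
}
```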

Please note that the AGP aperture is merely address space, not actual physical memory in use. It doesn’t lock up any of your system memory. The physical memory is allocated and released as needed whenever Direct3D makes a “create non-local surface” call.

Windows 95 (with VGARTD.VXD) and later versions of Microsoft Windows use a waterfall method of memory allocation. Surfaces are first created in the graphics card’s local memory. When that memory is full, surface creation spills over into AGP memory and then system memory. So, memory usage is automatically optimized for each application. AGP and system memory are not used unless absolutely necessary.

Unfortunately, it is very common to hear people recommending that the AGP aperture size should be half the size of system memory. However, this is wrong for the same reason why swapfile size should not be fixed at 1/4 of system memory. Like the swapfile, the requirement for AGP memory space shrinks as the graphics card’s local memory increases in size. This is because the graphics card will have more local memory to use for texture storage!

This reduces the need for AGP memory. Therefore, when you upgrade to a graphics card with more memory, you shouldn’t be “deceived” into thinking that you will need even more AGP memory! On the contrary, a smaller AGP memory space will be required.


If your graphics card has very little graphics memory (4 MB to 16 MB), you may need to create a large AGP aperture, up to half the size of the system memory. The graphics card’s local memory and the AGP aperture size combined should be roughly 64 MB. Please note that the size of the aperture does not correspond to performance! Increasing it to gargantuan proportions will not improve performance.

Still, it is recommended that you keep the Graphics Aperture Size around 64 MB to 128 MB. Now, why should we use such a large aperture size when most graphics cards come with large amounts of local memory? Shouldn’t we set it to the absolute minimum to save system memory?

  1. First of all, setting it to a lower value won’t save you any memory! Don’t forget that all the AGP aperture size does is limit the amount of system memory the AGP bus can appropriate whenever it needs more memory. It is not used unless absolutely necessary. So, setting the AGP aperture size to 64 MB doesn’t mean that 64 MB of your system memory will be appropriated and reserved for the AGP bus’ use. What it does is limit the AGP bus to a maximum of 64 MB of system memory when the need arises.
  2. Next, most graphics cards require an AGP aperture of at least 16 MB in size to work properly. Many new graphics cards require even more. This is probably because the virtual addressing space is already 12 MB in size! So, setting the AGP Aperture Size to 4 MB or 8 MB is a big no-no.
  3. We should also remember that many applications have AGP aperture size and texture storage requirements that are mostly unspecified. Some applications will not work with AGP apertures that are too small. And some games use so many textures that a large AGP aperture is needed even with graphics cards that have large memory buffers.
  4. Finally, you should remember that the actual available AGP memory space is less than half the size of the AGP aperture you set. If you want just 15 MB of AGP memory for texture storage, the AGP aperture has to be at least 42 MB in size! Therefore, it makes sense to set a large AGP aperture size in order to cater for all eventualities.

Now, while increasing the AGP aperture size beyond 128 MB won’t take up system memory, it would still be best to keep the aperture size in the 64 MB – 128 MB range so that the GART (Graphics Address Relocation Table) won’t become too big. The larger the GART gets, the longer it takes to scan through the GART and find the translated address for each AGP memory address request.

With local memory on graphics cards increasing to incredible sizes and texture compression commonplace, there’s really not much need for the AGP aperture size to grow beyond 64 MB. Therefore, it is recommended that you set the Graphics Aperture Size to 64 MB or at most, 128 MB.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Rank Interleave – The BIOS Optimization Guide

Rank Interleave

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.

Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 

Details

Rank is a new term used to differentiate physical banks on a particular memory module from internal banks within the memory chip. Single-sided memory modules have a single rank while double-sided memory modules have two ranks.

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.


Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

Please note that Rank Interleave currently works only if you are using double-sided memory modules. Rank Interleave will not work with two or more single-sided memory modules. The interleaving ranks must be on the same memory module.
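As a toy model of the scheduling benefit: with two ranks, an access can proceed while the other rank refreshes; with one rank, everything must wait. The rank-select address bit below is an arbitrary assumption.

```c
#include <stdint.h>

/* Assumed: bit 13 of the physical address selects the rank on a
   double-sided (two-rank) module. */
static inline int rank_of(uint64_t addr)
{
    return (int)((addr >> 13) & 1);
}

/* An access can be issued if its target rank is not busy refreshing.
   With a single-sided (one-rank) module there is nothing to interleave,
   so the access must wait out the refresh. */
int can_issue(uint64_t addr, int refreshing_rank, int num_ranks)
{
    if (num_ranks < 2)
        return 0;
    return rank_of(addr) != refreshing_rank;
}
```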

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

PNP OS Installed – The BIOS Optimization Guide

PNP OS Installed

Common Options : Yes, No

 

Quick Review

What this BIOS feature actually does is determine which devices are configured by the BIOS when the computer boots up, and which are left to the operating system.

Non-ACPI BIOSes are found in older motherboards that do not support the new ACPI (Advanced Configuration and Power Interface) initiative. With such a BIOS, setting the PNP OS Installed feature to No allows the BIOS to configure all devices under the assumption that the operating system cannot do so. Therefore, all hardware settings are fixed by the BIOS at boot up and will not be changed by the operating system.

On the other hand, if you set the feature to Yes, the BIOS will only configure critical devices that are required to boot up the system. The other devices are then configured by the operating system. This allows the operating system some flexibility in shuffling system resources like IRQs and IO ports to avoid conflicts. It also gives you some degree of freedom when you want to manually assign system resources.

Of course, all current motherboards now ship with the new ACPI BIOS. If you are using an ACPI-compliant operating system (i.e. Windows 98 and above) with an ACPI BIOS, then this PNP OS Installed feature is no longer relevant. This is because the operating system will use the ACPI BIOS interface to configure all devices as well as retrieve system information.

But if your operating system does not support ACPI, then the BIOS will fall back to PNP mode. In this situation, consider the BIOS as you would a Non-ACPI BIOS. If there is no need to configure any hardware manually, it is again recommended that you set this feature to No.

If you are using an old Linux kernel (prior to 2.6.0), Jonathan has the following advice –

Although Linux (prior to kernel 2.6) is not really PnP-compatible, most distributions use a piece of software called ISAPNPTOOLS to setup ISA cards. If you have PnP OS set to No, the BIOS will attempt to configure ISA cards itself. This does not make them work with Linux, though, you still need to use something like ISAPNPTOOLS. However, having both the BIOS and ISAPNPTOOLS attempting to configure ISA cards can lead to problems where the two don’t agree.

The solution? Set PnP OS to Yes, and let ISAPNPTOOLS take care of ISA cards in Linux, as BIOS configuration of ISA cards doesn’t work for Linux anyway (with the current stable and development kernels). Most times, it probably won’t make a difference, but someone somewhere will have problems, and Linux will always work with PnP OS set to Yes.

Britt Turnbull recommends disabling this feature if you are running the OS/2 operating system, especially in a multi-boot system. This is because booting another operating system can update the BIOS which may later cause problems when you boot up OS/2.

To sum it all up, except for certain cases, it is highly recommended that you set this BIOS feature to No, irrespective of the operating system you actually use. Exceptions to this would be the inability of the BIOS to configure the devices properly in PnP mode and a specific need to manually configure one or more of the devices.

 

Details

This BIOS feature is quite misleading because its name suggests that you should set it to Yes if you have an operating system that supports Plug and Play (PnP). Unfortunately, it isn’t quite so simple.

What this BIOS feature actually does is determine which devices are configured by the BIOS when the computer boots up, and which are left to the operating system. This is rather different from what the name implies, right?

Before you can determine the appropriate setting for this feature, you should first determine the kind of BIOS that came with your motherboard. For the purpose of this discussion, the BIOS can be divided into two types – ACPI BIOS and Non-ACPI BIOS.

You will also need to find out if your operating system supports and is currently running in ACPI mode. Please note that while an operating system may tout ACPI support, it is possible to force the operating system to use the older PnP mode. So, find out if your operating system is actually running in ACPI mode. Of course, this is only possible if your motherboard comes with an ACPI BIOS. With a Non-ACPI BIOS, all ACPI-compliant operating systems automatically revert to PnP mode.


Non-ACPI BIOSes are found in older motherboards that do not support the new ACPI (Advanced Configuration and Power Interface) initiative. This can be either the ancient non-PnP BIOS (or Legacy BIOS) or the newer PnP BIOS. With such a BIOS, setting the PNP OS Installed feature to No allows the BIOS to configure all devices under the assumption that the operating system cannot do so. Therefore, all hardware settings are fixed by the BIOS at boot up and will not be changed by the operating system.

On the other hand, if you set the feature to Yes, the BIOS will only configure critical devices that are required to boot up the system. For example, the graphics card and the hard disk. The other devices are then configured by the operating system. This allows the operating system some flexibility in shuffling system resources like IRQs and IO ports to avoid conflicts. It also gives you some degree of freedom when you want to manually assign system resources.

While all this flexibility in hardware configuration sounds like a good idea, shuffling resources can sometimes cause problems, especially with a buggy BIOS. Therefore, it is recommended that you set this feature to No, to allow the BIOS to configure all devices. You should only set this feature to Yes if the Non-ACPI BIOS cannot configure the devices properly or if you want to manually reallocate hardware resources in the operating system.

Of course, all current motherboards now ship with the new ACPI BIOS. If you are using an ACPI-compliant operating system (i.e. Windows 98 and above) with an ACPI BIOS, then this PNP OS Installed feature is no longer relevant. It actually does not matter what setting you select. This is because the operating system will use the ACPI BIOS interface to configure all devices as well as retrieve system information. There is no longer a need to specifically split the job up between the BIOS and the operating system.

But if your operating system does not support ACPI, then the BIOS will fall back to PNP mode. In this situation, consider the BIOS as you would a Non-ACPI BIOS. If there is no need to configure any hardware manually, it is again recommended that you set this feature to No.

Please note that bugs in some ACPI BIOSes can cause even an ACPI-compliant operating system to disable ACPI. This reverts the BIOS to PnP mode. However, there is an additional catch to it. Certain operating systems (i.e. Windows 98 and above) will only access the buggy BIOS in read-only mode. This means the operating system will rely entirely on the BIOS to configure all devices and provide it with the hardware configuration. As such, you must set this feature to No if you have a buggy ACPI BIOS.

If you are using an old Linux kernel (prior to 2.6.0), Jonathan has the following advice –

Although Linux (prior to kernel 2.6) is not really PnP-compatible, most distributions use a piece of software called ISAPNPTOOLS to setup ISA cards. If you have PnP OS set to No, the BIOS will attempt to configure ISA cards itself. This does not make them work with Linux, though, you still need to use something like ISAPNPTOOLS. However, having both the BIOS and ISAPNPTOOLS attempting to configure ISA cards can lead to problems where the two don’t agree.

The solution? Set PnP OS to Yes, and let ISAPNPTOOLS take care of ISA cards in Linux, as BIOS configuration of ISA cards doesn’t work for Linux anyway (with the current stable and development kernels). Most times, it probably won’t make a difference, but someone somewhere will have problems, and Linux will always work with PnP OS set to Yes.

Britt Turnbull recommends disabling this feature if you are running the OS/2 operating system, especially in a multi-boot system. This is because booting another operating system can update the BIOS which may later cause problems when you boot up OS/2. In addition, if you add or change hardware, you should enable full hardware detection during the initial boot sequence of OS/2 (ALT-F1 at boot screen -> F5) so that the new hardware can be registered correctly.

Thomas McGuire of 3D Spotlight sent me this e-mail from Robert Kirk at IBM :-

“Actually, the setting “PnP OS” is really misnamed. A better thing would be to say “do you want the system to attempt to resolve resource conflicts, or do you want the OS to resolve system conflict?”. Setting the system to PnP OS says that even if the machine determines some kind of resource problem, it should not attempt to handle it… Rather, it should pass it on to the OS to resolve the issue. Unfortunately, the OS can’t resolve some issues…. which sometimes results in a lock or other problems.

For stability reasons, it is better to set EVERY motherboard’s PnP OS option to No, regardless of manufacturer but still allow the BIOS to auto configure PnP devices. Just leave the PnP OS to No. It won’t hurt a thing, you lose nothing, your machine will still autoconfigure PnP devices and it will make your system more stable.”

Thanks, Thomas! That was really useful info.

To sum it all up, except for certain cases, it is highly recommended that you set this BIOS feature to No, irrespective of the operating system you actually use. Exceptions to this would be the inability of the BIOS to configure the devices properly in PnP mode and a specific need to manually configure one or more of the devices.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

K7 CLK_CTL Select – The BIOS Optimization Guide

K7 CLK_CTL Select

Common Options : Default, Optimal

 

Quick Review

As the name suggests, this is an AMD-specific BIOS feature. It controls the Clock Control (CLK_CTL) Model Specific Register (MSR) which is part of the AMD Athlon’s power management control system.

The older Athlons have a bug that causes the system to hang, when the processor overshoots the nominal clock speed while recovering from a power-saving session. Hence, a workaround for this bug was devised whereupon the BIOS will manually reprogram the CLK_CTL register to reduce the ramp-up time.

By default, the BIOS programs the CLK_CTL register with a value of 6003_1223h during the POST routine. To increase the ramp-up speed, the BIOS has to change the value to 2003_1223h.

This is where the K7 CLK_CTL Select BIOS feature comes in. When set to Default, the BIOS will program the CLK_CTL register with a value of 6003_1223h. Setting it to Optimal causes the BIOS to program the CLK_CTL register with a value of 2003_1223h.

If you are using an AMD Athlon processor with a Palomino or older core, it is recommended that you set K7 CLK_CTL Select to Optimal. This will prevent the bug from manifesting itself and may even provide a speed boost by allowing the processor to disconnect and connect to the system bus faster.

From the Thoroughbred-A core (CPUID 680) onwards, AMD started using an internal clock divider of only 1/8 with the CLK_CTL value of 6003_1223h. This neatly circumvents the Errata No. 11 problem, although AMD also corrected that bug. With such processors, the CLK_CTL should be set to the Default value of 6003_1223h.

Unfortunately, AMD then did an about-turn with the Thoroughbred-B core (CPUID 681) and changed the value associated with the 1/8 divider from 6003_1223h to 2003_1223h. Unless the BIOS was updated to recognize this difference, it would probably write the 6003_1223h value used for the Thoroughbred-A core into the register, instead of the correct 2003_1223h required by the Thoroughbred-B core. When this happens, the processor may become unstable during transitions from sleep mode to active mode.

Therefore, for Thoroughbred-B cores and above, you should set the K7 CLK_CTL Select BIOS feature to the Optimal setting to ensure the internal clock divider is properly set.

 

Details

As the name suggests, this is an AMD-specific BIOS feature. It controls the Clock Control (CLK_CTL) Model Specific Register (MSR) which is part of the AMD Athlon’s power management control system.

First of all, we should be aware that the AMD Athlon family of processors has four different power management states :-

  • Working State (C0)
  • Halt State (C1)
  • Stop Grant States (C2 and S1)
  • Probe State

The Athlon processor can switch to its power-saving mode when it is in the Halt state or one of the Stop Grant states. In those power management states, the processor will send a HLT or STPCLK# special bus cycle to the north bridge which disconnects the Athlon system bus. The processor will then enter into its power saving mode.

Unlike the Intel Pentium 4 processor, the AMD Athlon processor saves power by actually reducing its internal clock speed. The Athlon bus clock speed remains constant but by using an internal clock divider, the Athlon processor can reduce its internal clock speed to 1/64th (Palomino cores and older) or 1/8th (Thoroughbred cores and newer) of its nominal clock speed. That means a 2.0 GHz Athlon processor with a Palomino or older core will have an internal clock speed of only 31.25 MHz in power saving mode! But if the same processor has a Thoroughbred core, the internal clock speed in power saving mode will be 250 MHz.


As you can see, the older Athlon cores run at a much lower internal speed compared to the newer cores. This translates into much lower power consumption in power-saving modes. For example, Athlon processors with Palomino cores use only 0.86 W of power in power saving mode. In contrast, the newer Athlon Thoroughbred-B processors in power saving mode will consume about 8.9 W of power. However, the extremely low internal clock speed of the older Athlon cores means that they take a much longer time to ramp up to full clock speed when they “awake” from power saving mode. This can sometimes cause problems.

The older Athlons have a bug (Errata No. 11) called PLL Overshoot on Wake-Up from Disconnect Causes Auto-Compensation Circuit to Fail. What happens is the processor can sometimes overshoot the nominal clock speed when it ramps up after a power-saving session. This causes a reduction in the Athlon bus’ I/O drive strength levels which the auto-compensation circuitry will attempt to correct. But because there is not enough time, the proper drive strengths cannot be attained before the processor reconnects to the system bus. This causes the system bus to fail, which results in a system hang.

This bug is particularly prominent in the older Athlons that use the 1/64 internal divider because they normally require a longer ramp-up time which increases the chance for the processor to overshoot the nominal clock speed. Hence, a workaround for this bug was devised whereupon the BIOS will manually reprogram the CLK_CTL register to reduce the ramp-up time. With a reduced ramp-up time, there will be very little chance of the processor overshooting and causing a failure of the system bus.

By default, the BIOS programs the CLK_CTL register with a value of 6003_1223h during the POST routine. To increase the ramp-up speed, the BIOS has to change the value to 2003_1223h.

This is where the K7 CLK_CTL Select BIOS feature comes in. When set to Default, the BIOS will program the CLK_CTL register with a value of 6003_1223h. Setting it to Optimal causes the BIOS to program the CLK_CTL register with a value of 2003_1223h.
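Here is a hedged sketch of that logic, keyed off the CPUID signatures mentioned in this article. The MSR address is an assumption; verify it against AMD’s documentation before relying on it.

```c
#include <stdint.h>

#define MSR_K7_CLK_CTL 0xC001001Bu   /* assumed CLK_CTL MSR address */

extern uint32_t cpuid_eax(uint32_t leaf);
extern void     wrmsr(uint32_t msr, uint64_t value);

/* Pick the CLK_CTL value per the article: Thoroughbred-A (CPUID 680h) keeps
   the Default value, while Palomino-and-older and Thoroughbred-B (681h)
   onwards want the Optimal value. */
void k7_clk_ctl_program(void)
{
    uint32_t sig = cpuid_eax(1) & 0xFFFu;       /* family/model/stepping */
    uint32_t value = (sig == 0x680) ? 0x60031223u    /* Default */
                                    : 0x20031223u;   /* Optimal */
    wrmsr(MSR_K7_CLK_CTL, value);
}
```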

If you are using an AMD Athlon processor with a Palomino or older core, it is recommended that you set K7 CLK_CTL Select to Optimal. This will prevent Errata No. 11 from manifesting itself and may even provide a speed boost by allowing the processor to disconnect and connect to the system bus faster.

From the Thoroughbred-A core (CPUID 680) onwards, AMD started using an internal clock divider of only 1/8 with the CLK_CTL value of 6003_1223h. While this means that the newer cores will consume more power during power-saving states, the 1/8 divider allows a much faster ramp-up time. This neatly circumvents the Errata No. 11 problem although AMD also corrected that bug. With such processors, the CLK_CTL should be set to the Default value of 6003_1223h.

Unfortunately, AMD then did an about-turn with the Thoroughbred-B core (CPUID 681) and changed the value associated with the 1/8 divider from 6003_1223h to 2003_1223h. Unless the BIOS was updated to recognize this difference, it would probably write the 6003_1223h value used for the Thoroughbred-A core into the register, instead of the correct 2003_1223h required by the Thoroughbred-B core. When this happens, the processor may become unstable during transitions from sleep mode to active mode.

Therefore, for Thoroughbred-B cores and above, you should set the K7 CLK_CTL Select BIOS feature to the Optimal setting to ensure the internal clock divider is properly set.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Digital Locked Loop (DLL) – The BIOS Optimization Guide

Digital Locked Loop (DLL)

Common Options : Enabled, Disabled

 

Quick Review

The Digital Locked Loop (DLL) BIOS option is a misnomer of the Delay-Locked Loop (DLL), a digital circuit that aligns the data strobe signal (DQS) with the data signal (DQ) to ensure proper data transfer in DDR, DDR2, DDR3 and DDR4 memory. However, it can be disabled to allow the memory chips to run beyond a fixed frequency range.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Details

DDR, DDR2, DDR3 and DDR4 SDRAM deliver data on both rising and falling edges of the signal. This requires much tighter timings, necessitating the use of a data strobe signal (DQS) generated by differential clocks. This data strobe is then aligned to the data signal (DQ) using a delay-locked loop (DLL) circuit.

The DQS and DQ signals must be aligned with minimal skew to ensure proper data transfer. Otherwise, data transferred on the DQ signal will be read incorrectly, causing the memory contents to be corrupted and the system to malfunction.

However, the delay-locked loop circuit of every DDR, DDR2, DDR3 or DDR4 chip is tuned for a certain fixed frequency range. If you run the chip beyond that frequency range, the DLL circuit may not work correctly. That’s why DDR, DDR2, DDR3 and DDR4 SDRAM chips can have problems running at clock speeds slower than what they are rated for.


If you encounter such a problem, it is possible to disable the DLL. Disabling the DLL will allow the chip to run beyond the frequency range for which the DLL is tuned. This is where the Digital Locked Loop (DLL) BIOS feature comes in.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.
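On the DRAM side, the DLL on/off state is just a mode-register bit programmed during memory initialization. Bit 0 of the extended mode register (EMRS/MR1) controls the DLL in JEDEC’s DDR and DDR3 layouts; the write hook below is an assumed abstraction, not a real chipset API.

```c
#include <stdint.h>

#define MR_DLL_DISABLE (1u << 0)   /* EMRS/MR1 bit 0: 1 = DLL off */

extern void dram_write_mode_reg(int reg, uint32_t value);   /* assumed hook */

/* Program the (extended) mode register with the DLL enabled or disabled.
   DLL-off operation is only viable at low clock speeds, per the article. */
void set_dram_dll(int enable, uint32_t other_mr1_bits)
{
    uint32_t mr1 = other_mr1_bits & ~MR_DLL_DISABLE;
    if (!enable)
        mr1 |= MR_DLL_DISABLE;
    dram_write_mode_reg(1, mr1);   /* write EMRS/MR1 */
}
```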

Note : The Digital Locked Loop (DLL) BIOS option is a misnomer of the Delay-Locked Loop (DLL).

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Speed Error Hold – The BIOS Optimization Guide

Speed Error Hold

Common Options : Enabled, Disabled

 

Quick Review

The Speed Error Hold BIOS feature prevents accidental overclocking by halting the boot process if the processor clock speed was not properly set.

When enabled, the BIOS will check the processor clock speed at boot up and halt the boot process if the clock speed is different from that imprinted in the processor ID. It will also display an error message to warn you that the processor is running at the wrong speed.

If you are thinking of overclocking the processor, you must disable the Speed Error Hold BIOS feature as it prevents the motherboard from booting up with an overclocked processor.

When disabled, the BIOS will not check the processor clock speed at boot up. It will allow the system to boot with the clock speed set in the BIOS, even if it does not match the processor’s rated clock speed (as imprinted in the processor ID).

Although this may seem really obvious, I have seen countless overclocking initiates puzzling over the error message whenever they try to overclock their processors. So, before you start pulling your hair out and screaming hysterically that Intel or AMD has finally implemented a clock speed lock on their processors, try disabling this feature. 😉

 

Details

The Speed Error Hold BIOS feature prevents accidental overclocking by halting the boot process if the processor clock speed was not properly set.

It is very useful for novice users who want nothing to do with overclocking. Yet, they may inadvertently set the wrong processor speed in the BIOS and either prevent the system from booting up at all or cause the system to crash or hang.

When enabled, the BIOS will check the processor clock speed at boot up and halt the boot process if the clock speed is different from that imprinted in the processor ID. It will also display an error message to warn you that the processor is running at the wrong speed.

To correct the situation, you will have to access the BIOS and correct the processor speed. Most BIOSes, however, will automatically reset the processor to the correct speed. All you have to do then is access the BIOS, verify the clock speed and save the changes made in the BIOS.


If you are thinking of overclocking the processor, you must disable the Speed Error Hold BIOS feature as it prevents the motherboard from booting up with an overclocked processor.

When disabled, the BIOS will not check the processor clock speed at boot up. It will allow the system to boot with the clock speed set in the BIOS, even if it does not match the processor’s rated clock speed (as imprinted in the processor ID).

Although this may seem really obvious, I have seen countless overclocking initiates puzzling over the error message whenever they try to overclock their processors. So, before you start pulling your hair out and screaming hysterically that Intel or AMD has finally implemented a clock speed lock on their processors, try disabling this feature. 😉

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Vanderpool Technology – The BIOS Optimization Guide

Vanderpool Technology

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is used to enable or disable the Intel Virtualization Technology (IVT) extension, which is also known by the development code name of Vanderpool. It allows multiple operating systems to run simultaneously on the same computer.

When enabled, the IVT extensions will be enabled, allowing for hardware-assisted virtual machine management.

When disabled, the IVT extensions will be disabled. However, software virtual machine managers like VMware can still be used if virtualization is required.

Whether you use virtual machines or not, there is no disadvantage in keeping this BIOS feature enabled.

We recommend that you keep Vanderpool Technology enabled. It should only be disabled for troubleshooting purposes.

 

Details

Vanderpool is the development code name for the Intel Virtualization Technology (IVT). It is an extension of the Intel x86 architecture that allows multiple operating systems to run simultaneously on the same computer. It does this by creating virtual machines, each running its own x86 operating system.


Although software virtualization through VMware and similar software is possible, it incurs significant performance penalties. The IVT extensions allow operating systems that support them to create virtual machines with far less overhead.

Virtualization does more than just allow you to run different operating systems simultaneously. It also isolates the operating systems and applications from the hardware resources. Each virtual machine runs in its own memory space. This improves reliability and reduces damage from software faults or malicious attacks.

The Vanderpool Technology BIOS feature is used to enable or disable the Intel Virtualization Technology (IVT) extensions.

When enabled, the IVT extensions will be enabled, allowing for hardware-assisted virtual machine management.

When disabled, the IVT extensions will be disabled. However, software virtual machine managers like VMware can still be used if virtualization is required.
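You can verify what this BIOS setting did from software: CPUID leaf 1 reports VMX support in ECX bit 5, and the IA32_FEATURE_CONTROL MSR (3Ah) shows whether the BIOS enabled and locked the feature. A minimal sketch, with the rdmsr/cpuid helpers assumed to exist:

```c
#include <stdint.h>

extern uint32_t cpuid_ecx(uint32_t leaf);   /* returns ECX of CPUID */
extern uint64_t rdmsr(uint32_t msr);

#define IA32_FEATURE_CONTROL 0x3A
#define FC_LOCK              (1ull << 0)   /* settings locked until reboot */
#define FC_VMX_OUTSIDE_SMX   (1ull << 2)   /* VMX enabled outside SMX */

/* Returns 1 if the CPU supports VMX and the BIOS has enabled it. */
int vmx_usable(void)
{
    if (!(cpuid_ecx(1) & (1u << 5)))   /* CPU lacks VMX entirely */
        return 0;

    uint64_t fc = rdmsr(IA32_FEATURE_CONTROL);
    /* A BIOS "Enabled" setting typically sets both bits; if the MSR is
       locked with VMX off, the setting cannot change until a reboot. */
    return (fc & FC_LOCK) && (fc & FC_VMX_OUTSIDE_SMX);
}
```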

If you create and run virtual machines, you should keep this BIOS option enabled for much better performance.

If you have never heard of virtualization or virtual machines, you are unlikely to use it. However, there is no disadvantage in keeping this BIOS feature enabled.

We recommend that you keep Vanderpool Technology enabled. It should only be disabled for troubleshooting purposes.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

USB Zip Emulation – The BIOS Optimization Guide

USB Zip Emulation

Common Options : Enabled, Disabled

 

Quick Review

The USB Zip Emulation BIOS feature allows you to boot up off a USB Zip drive by making it emulate a floppy drive or hard drive.

When enabled, you can boot from any attached USB Zip drive, even in the absence of a USB driver.

When disabled, you will not be able to boot from any attached USB Zip drive. You can only access the Zip drive after a USB-compatible operating system has fully loaded with a USB driver.

If you intend to boot off your USB Zip drive, you must enable this BIOS feature. You should also enable this BIOS feature if you wish to access the Zip drive in the absence of a proper USB driver. Otherwise, you can leave it disabled.

 

Details

The USB Zip Emulation BIOS feature is a subset of the ARMD Emulation Type BIOS feature. It is not normally possible to boot off a USB Zip drive because a USB-compatible operating system must first boot up with the USB driver loaded, before any attached USB Zip drive can be accessible.

The USB Zip Emulation BIOS feature allows you to boot up off a USB Zip drive by making it emulate a floppy drive or hard drive.

When enabled, you can boot from any attached USB Zip drive, even in the absence of a USB driver.

When disabled, you will not be able to boot from any attached USB Zip drive. You can only access the Zip drive after a USB-compatible operating system has fully loaded with a USB driver.

If you intend to boot off your USB Zip drive, you must enable this BIOS feature. You should also enable this BIOS feature if you wish to access the Zip drive in the absence of a proper USB driver, for example, with operating systems that do not support USB (e.g. DOS). Otherwise, you can leave it disabled.

 


Chipkill – The BIOS Optimization Guide

Chipkill

Common Options : Enabled, Disabled

 

Quick Review

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you have already invested in ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should only disable this BIOS feature if your system only uses a single ECC module.

 

Details

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

Normal ECC technology makes use of the Hamming code, with eight ECC bits for every 64 bits of data. This allows it to detect all single-bit and double-bit errors, but correct only single-bit errors.

IBM’s Chipkill technology makes use of the BCH (Bose, Ray-Chaudhuri, Hocquenghem) code with sixteen ECC bits for every 128 bits of data. It can detect all single-symbol and double-symbol errors, but correct only single-symbol errors.

A symbol, by the way, is a group of four bits. A single-symbol error is any error combination within that symbol. That means a single-symbol error can consist of anything from one to four corrupted bits. Chipkill is therefore capable of detecting and correcting more errors than standard ECC.
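To see why symbol-based correction is stronger, consider how many error patterns each scheme can correct within a single 4-bit symbol. The short C sketch below (illustrative only; it assumes the GCC/Clang __builtin_popcount intrinsic) enumerates all fifteen non-zero patterns:

```c
/* Illustrative sketch: count how many of the error patterns within a
 * single 4-bit symbol each scheme can correct. Standard ECC (SECDED)
 * corrects only single-bit flips; a symbol-correcting code such as
 * Chipkill corrects any error pattern confined to one symbol.
 * Assumes the GCC/Clang __builtin_popcount intrinsic. */
#include <stdio.h>

int main(void)
{
    int secded = 0, chipkill = 0;

    /* Enumerate all 15 non-zero error patterns within a 4-bit symbol. */
    for (unsigned int pattern = 1; pattern < 16; pattern++) {
        if (__builtin_popcount(pattern) == 1)
            secded++;     /* SECDED corrects single-bit errors only */
        chipkill++;       /* Chipkill corrects any single-symbol error */
    }

    printf("Correctable patterns per symbol: SECDED = %d, Chipkill = %d\n",
           secded, chipkill);   /* prints 4 and 15 */
    return 0;
}
```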

Unlike standard ECC, Chipkill can only be used in systems with two channels of ECC memory (a 128-bit data width configuration). This is because it requires sixteen ECC bits, which can only be obtained using two ECC memory modules. However, it won’t work if you place both ECC modules in the same memory channel. Both memory channels must be active for Chipkill to work.


This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you have already invested in ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should only disable this BIOS feature if your system only uses a single ECC module.

 


CPU Differential Amplitude – The BIOS Optimization Guide

CPU Differential Amplitude

Common Options : Auto, 700mV, 800mV, 900mV, 1000mV

 

Quick Review

This is an Intel Core i7-specific BIOS option. It allows you to increase the amplitude of the differential clock signals to increase their noise immunity.

As clock speed increases, so does the noise level. If the noise level is high enough to be mistaken for a proper clock signal, this results in errors in the transmitted data. Thus, it is important for the differential clocks to generate a clock signal with sufficient amplitude (voltage difference) to avoid noise from introducing errors.

When set to Auto, the CPU will use the default differential amplitude of 610 mV.

When set to 700mV, the CPU will use an increased differential amplitude of 700 mV.

When set to 800mV, the CPU will use an increased differential amplitude of 800 mV.

When set to 900mV, the CPU will use an increased differential amplitude of 900 mV.

When set to 1000mV, the CPU will use an increased differential amplitude of 1000 mV.

Increasing the CPU differential amplitude increases the noise immunity of the processor’s reference clocks and indirectly increases the overclockability of the processor. Thus, if you face problems overclocking the processor, try increasing the CPU differential amplitude.

 

Details

This is an Intel Core i7-specific BIOS option. It controls the amplitude of the processor’s differential clocks.

The Intel Core i7 processor uses pairs of differential clocks to generate reference clock signals for the processor core, the QuickPath interconnect and the DDR3 memory controller. Each pair consists of a positive (P) signal and a negative (N) signal. Combined, they generate a clock signal with twice the voltage difference, which greatly increases the clock signal’s resistance to noise.

Take, for example, a single clock signal with a voltage of +1 V. The high logic level would be +1 V while the low logic level would be 0 V, resulting in a voltage difference of 1 V. A pair of differential clocks, on the other hand, would have a high logic level of +1 V and a low logic level of -1 V. This results in a voltage difference of 2 V, twice that of a single clock signal. It is this increased voltage difference that improves the signal’s immunity to noise.
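Simple arithmetic also shows why the differential pair is so resistant to noise. In the hypothetical C sketch below (the voltages are illustrative, not actual Core i7 electrical specifications), noise that couples equally into both lines cancels out of the P - N difference:

```c
/* Sketch: why a differential pair resists noise. Noise that couples
 * equally into both lines (common-mode noise) cancels in the P - N
 * difference, while the useful swing is doubled. All values are
 * illustrative, not Core i7 electrical specifications. */
#include <stdio.h>

int main(void)
{
    double p = +1.0, n = -1.0;   /* differential pair levels, volts */
    double noise = 0.3;          /* common-mode noise, volts */

    double clean_swing = p - n;                       /* 2.0 V */
    double noisy_swing = (p + noise) - (n + noise);   /* still 2.0 V */

    printf("Swing without noise          : %.1f V\n", clean_swing);
    printf("Swing with common-mode noise : %.1f V\n", noisy_swing);
    return 0;
}
```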

As clock speed increases, so does the noise level. If the noise level is high enough to be mistaken for a proper clock signal, this results in errors in the transmitted data. Thus, it is important for the differential clocks to generate a clock signal with sufficient amplitude (voltage difference) to avoid noise from introducing errors.


This is where the CPU Differential Amplitude BIOS option comes in. It allows you to increase the amplitude of the differential clock signals to increase their resistance to noise.

When set to Auto, the CPU will use the default differential amplitude of 610 mV.

When set to 700mV, the CPU will use an increased differential amplitude of 700 mV.

When set to 800mV, the CPU will use an increased differential amplitude of 800 mV.

When set to 900mV, the CPU will use an increased differential amplitude of 900 mV.

When set to 1000mV, the CPU will use an increased differential amplitude of 1000 mV.

Increasing the CPU differential amplitude increases the noise immunity of the processor’s reference clocks and indirectly increases the overclockability of the processor. Thus, if you face problems overclocking the processor, try increasing the CPU differential amplitude.


AGP ISA Aliasing – The BIOS Optimization Guide

AGP ISA Aliasing

Common Options : Enabled, Disabled

 

Quick Review

The AGP ISA Aliasing BIOS feature allows you to determine if the system controller will perform ISA aliasing to prevent conflicts between ISA devices.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10]. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 

Details

The origin of the AGP ISA Aliasing feature can be traced back all the way to the original IBM PC. When the IBM PC was designed, it only had ten address lines (10-bits) for I/O space allocation. Therefore, the I/O space back in those days was only 1KB or 1024 bytes in size. Out of those 1024 available addresses, the first 256 addresses were reserved exclusively for the motherboard’s use, leaving the last 768 addresses for use by add-in devices. This would become a critical factor later on.

Later, motherboards began to utilize 16 address lines for I/O space allocation. This was supposed to create a contiguous I/O space of 64KB in size. Unfortunately, many ISA devices by then were only capable of doing 10-bit decodes. This was because they were designed for computers based on the original IBM design which only supported 10 address lines.

To circumvent this problem, system designers fragmented the 64KB I/O space into 1KB chunks. Unfortunately, because the first 256 addresses must be reserved exclusively for the motherboard, only the first (or lower) 256 bytes of each 1KB chunk are decoded in full 16-bits. All 10-bit-decoding ISA devices are, therefore, restricted to the last (or top) 768 bytes of each 1KB chunk of I/O space.

As a result, such ISA devices only have 768 I/O locations to use. Because there were so many ISA devices back then, this limitation created a lot of compatibility problems, as the chances of two ISA cards using the same I/O space were high. When that happened, one or both of the cards would not work. Although the industry tried to reduce the chance of such conflicts by standardizing the I/O locations used by different classes of ISA devices, it was still not good enough.

Eventually, a workaround was found. Instead of giving each ISA device all the I/O space it wants in the 10-bit range, each ISA device was given a much smaller number of I/O locations, and the difference was made up by “borrowing” locations from the 16-bit I/O space! Here’s how it was done.

The ISA device would first take up a small number of I/O locations in the 10-bit range. It then extends its I/O space by using 16-bit aliases of the few 10-bit I/O locations taken up earlier. Because each I/O location in the 10-bit decode area has sixty-three 16-bit aliases, the total number of I/O locations expands from just 768 locations to a maximum of 49,152 locations!

More importantly, each ISA card now requires very few I/O locations in the 10-bit range. This drastically reduces the chances of two ISA cards conflicting with each other in the limited 10-bit I/O space. This workaround naturally became known as ISA Aliasing.
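The numbers behind ISA Aliasing are easy to verify yourself. The C sketch below (the port address is just an example) walks the entire 64KB I/O space and counts the 16-bit addresses that a 10-bit decoder cannot distinguish from a given port:

```c
/* Sketch of 10-bit ISA decoding: a device that decodes only address
 * bits [9:0] sees every 16-bit I/O address modulo 0x400 (1KB). The
 * port address below is just an example (COM2). */
#include <stdio.h>

int main(void)
{
    unsigned int port = 0x2F8;   /* a 10-bit ISA I/O location */
    int aliases = 0;

    /* Count the 16-bit addresses a 10-bit decoder cannot tell apart
     * from the original port. */
    for (unsigned int addr = 0; addr < 0x10000; addr++)
        if ((addr & 0x3FF) == port && addr != port)
            aliases++;

    printf("0x%03X has %d 16-bit aliases\n", port, aliases);  /* 63 */
    printf("768 locations x %d = %d in total\n",
           aliases + 1, 768 * (aliases + 1));   /* 49,152 */
    return 0;
}
```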

Now, that’s all well and good for ISA devices. Unfortunately, the 10-bit limitation of ISA devices becomes a liability to devices that require 16-bit addressing. AGP and PCI devices come to mind. As noted earlier, only the first 256 addresses of the 1KB chunks support 16-bit addressing. What that really means is all 16-bit addressing devices are thus limited to only 256 bytes of contiguous I/O space!

When a 16-bit addressing device requires a larger contiguous I/O space, it will have to encroach on the 10-bit ISA I/O space. For example, if an AGP card requires 8KB of contiguous I/O space, it will take up eight of the 1KB I/O chunks (which comprise eight 16-bit areas and eight 10-bit areas!). Because ISA devices use ISA Aliasing to extend their I/O space, there is now a high chance of I/O space conflicts between ISA devices and the AGP card. When that happens, the affected cards will most probably fail to work.


There are two ways out of this mess. Obviously, you can limit the AGP card to a maximum of 256 bytes of contiguous I/O space. Of course, this is not an acceptable solution.

The second, and the preferred method, would be to throw away the restriction and provide the AGP card with all the contiguous I/O space it wants.

Here’s where the AGP ISA Aliasing BIOS feature comes in.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10] – the upper six bits. Only the first 10 bits (address bits 0 to 9) are used for decoding. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 


PCI Dynamic Bursting – The BIOS Optimization Guide

PCI Dynamic Bursting

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is similar to the Byte Merge feature.

When enabled, the PCI write buffer accumulates and merges 8-bit and 16-bit writes into 32-bit writes. This increases the efficiency of the PCI bus and improves its bandwidth.

When disabled, the PCI write buffer will not accumulate or merge 8-bit or 16-bit writes. It will just write them to the PCI bus as soon as the bus is free. As such, there may be a loss of PCI bus efficiency when 8-bit or 16-bit data is written to the PCI bus.

Therefore, it is recommended that you enable PCI Dynamic Bursting for better performance.

However, please note that PCI Dynamic Bursting may be incompatible with certain PCI network interface cards (also known as NICs). So, if your NIC won’t work properly, try disabling this feature.

 

Details

This BIOS feature is similar to the Byte Merge feature.

If you have already read about the CPU to PCI Write Buffer feature, you should know that the chipset has an integrated PCI write buffer, which allows the CPU to immediately write up to four words (or 64 bits) of PCI writes to it. This frees up the CPU to work on other tasks while the PCI write buffer writes them to the PCI bus.

Now, the CPU does not always write 32-bit data to the PCI bus. 8-bit and 16-bit writes can also take place. But even if the CPU writes only 8 bits of data to the PCI bus, it is still considered a single PCI transaction, costing as much bus time as a 16-bit or 32-bit write. This reduces the effective PCI bandwidth, especially if there are many 8-bit or 16-bit CPU-to-PCI writes.

To solve this problem, the write buffer can be programmed to accumulate and merge 8-bit and 16-bit writes into 32-bit writes. The buffer then writes the merged data to the PCI bus. As you can see, merging the smaller 8-bit or 16-bit writes into a few large 32-bit writes reduces the number of PCI transactions required. This increases the efficiency of the PCI bus and improves its bandwidth.
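Conceptually, the merging works like the toy model below. This C sketch is purely illustrative (a real chipset tracks byte enables and flushes the buffer under many more conditions), but it shows how four 8-bit writes collapse into a single 32-bit transaction:

```c
/* Toy model of byte merging: four 8-bit writes to the same aligned
 * doubleword are merged in a buffer and leave as one 32-bit PCI
 * transaction instead of four. Purely conceptual, not
 * chipset-accurate behavior. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t buffer = 0;    /* write buffer, one doubleword wide */
    uint8_t  valid  = 0;    /* byte-enable mask of merged bytes */
    int transactions = 0;

    /* The CPU issues four 8-bit writes to offsets 0 through 3. */
    for (int offset = 0; offset < 4; offset++) {
        buffer |= (uint32_t)(0xA0 + offset) << (offset * 8);
        valid  |= 1u << offset;

        if (valid == 0x0F) {    /* doubleword complete: flush it */
            transactions++;
            printf("Merged burst write 0x%08X (transaction %d)\n",
                   buffer, transactions);
            buffer = 0;
            valid  = 0;
        }
    }

    /* Without merging, the same data would cost 4 transactions. */
    return 0;
}
```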

This is where the PCI Dynamic Bursting BIOS feature comes in. It controls the byte merging capability of the PCI write buffer.

If it is enabled, every write transaction goes straight to the write buffer, where the writes are accumulated until there is enough data to be written to the PCI bus in a single burst. This improves the PCI bus’ performance.

If you disable byte merging, all writes will still go to the PCI write buffer (if the CPU to PCI Write Buffer feature has been enabled). But the buffer won’t accumulate and merge the data. The data is written to the PCI bus as soon as the bus becomes free. This reduces PCI bus efficiency, particularly when 8-bit or 16-bit data is written to the PCI bus.

Therefore, it is recommended that you enable PCI Dynamic Bursting for better performance.

Please note that like Byte Merge, this feature may not be compatible with certain PCI network interface cards. For more details, please check out the Byte Merge feature.


 


PCI Timeout – The BIOS Optimization Guide

PCI Timeout

Common Options : Enabled, Disabled

 

Quick Review

To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed. According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles for the initial read and 8 PCI clock cycles for each subsequent read.

If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Timeout feature.

With PCI Timeout enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the master device will rearbitrate for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, the target device just sends the completion status to the master device to complete the transaction.

One advantage of using PCI Timeout is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Timeout also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. Without PCI Timeout, all write-posted data will have to be flushed before another PCI transaction can occur.

It is highly recommended that you enable PCI Timeout for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

 

Details

This is the same as the Delayed Transaction BIOS feature, as both refer to the PCI Delayed Transaction feature that is part of the PCI revision 2.1 specification.

On the PCI bus, there are many devices that may not meet the PCI target latency rule. Such devices include I/O controllers and bridges (i.e. PCI-to-PCI and PCI-to-ISA bridges). To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed.

According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles (32 clock cycles for a host bus bridge) for the initial read and 8 PCI clock cycles for each subsequent read. If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Timeout feature.

When a master device reads from a target device on the PCI bus but the target fails to meet the latency requirements, the transaction will be terminated with a Retry command. The master device will then have to rearbitrate for bus access. But if PCI Timeout had been enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the target device latches on the data and terminates the transaction if it cannot be completed within the target latency period. The master device then rearbitrates for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, instead of returning data (in the case of a read transaction), the target device sends the completion status to the master device to complete the transaction.
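The C sketch below illustrates the benefit with a simplified model. It assumes a hypothetical target that needs 24 clocks to fetch data against a 16-clock initial latency budget; with delayed transactions, the fetch continues across retries, while without them, the model discards the work on every retry (a real non-compliant target would instead hold the bus with wait states):

```c
/* Simplified model of a PCI delayed read. A hypothetical target needs
 * 24 clocks to fetch data, but only 16 clocks are allowed before the
 * first data phase, so the transaction is terminated with Retry. */
#include <stdio.h>
#include <stdbool.h>

#define LATENCY_BUDGET 16   /* clocks allowed before initial data */
#define FETCH_TIME     24   /* clocks the target needs */

static int attempts(bool delayed)
{
    int progress = 0, tries = 0;

    while (tries < 100) {
        tries++;
        progress += LATENCY_BUDGET;   /* clocks spent this attempt */
        if (progress >= FETCH_TIME)
            return tries;             /* data ready: read completes */
        if (!delayed)
            progress = 0;             /* work discarded on each retry */
    }
    return -1;                        /* never completes in this model */
}

int main(void)
{
    printf("With delayed transactions : completes on attempt %d\n",
           attempts(true));
    if (attempts(false) < 0)
        printf("Without                   : fetch restarts on every retry\n");
    return 0;
}
```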


One advantage of using PCI Timeout is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Timeout also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. The write-posted data will be written to memory while the target device is working on the non-postable transaction and flushed before the transaction is completed on the master device. Without PCI Timeout, all write-posted data will have to be flushed before another PCI transaction can occur.

As you can see, the PCI Timeout feature allows for more efficient use of the PCI bus, as well as better PCI performance, by allowing write-posting to occur concurrently with non-postable transactions. In some BIOSes, this feature appears as the PCI 2.1 Compliance option, which likewise enables or disables the PCI Timeout feature.

It is highly recommended that you enable PCI Timeout for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

 


Errata 94 Option – The BIOS Optimization Guide

Errata 94 Option

Common Options : Auto, Enabled, Disabled

 

Quick Review

Errata 94 refers to the 94th bug identified in AMD Athlon and Opteron processors. This bug affects the sequential prefetch feature in those processors, and is found in the following processor families :

  • AMD Opteron (Socket 940)
  • AMD Athlon 64 (Socket 754, 939)
  • AMD Athlon 64 FX (Socket 940, 939)
  • Mobile AMD Athlon 64 (Socket 754)
  • AMD Sempron (Socket 754, 939)
  • Mobile AMD Sempron (Socket 754)
  • Mobile AMD Athlon XP-M (Socket 754)

This BIOS feature is a workaround for the bug. It allows you to disable the sequential prefetch mechanism and prevent the bug from manifesting.

When enabled, the BIOS will disable the processor’s sequential prefetch mechanism for any software that operates in Long Mode.

When disabled, the BIOS will not disable the processor’s sequential prefetch mechanism. This improves the processor’s performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or processing incorrect code. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.

 

Details

As processors get more and more complex, every new processor design inevitably comes with a plethora of bugs. Those that are identified are given errata numbers.

Errata 94 refers to the 94th bug identified in AMD Athlon and Opteron processors. This bug affects the sequential prefetch feature in those processors.

When there’s an instruction cache miss, the sequential prefetch mechanism in affected processors may incorrectly prefetch the next sequential cache line. This may cause the processor to hang. Affected 64-bit processors that run 32-bit applications may end up executing incorrect code.

This bug is present in AMD processor revisions SH-B3, SH-C0, SH-CG, DH-CG and CH-CG. These revisions cover the following processor families :

  • AMD Opteron (Socket 940)
  • AMD Athlon 64 (Socket 754, 939)
  • AMD Athlon 64 FX (Socket 940, 939)
  • Mobile AMD Athlon 64 (Socket 754)
  • AMD Sempron (Socket 754, 939)
  • Mobile AMD Sempron (Socket 754)
  • Mobile AMD Athlon XP-M (Socket 754)

The processor families that are not affected are :

  • Dual core AMD Opteron processors (or newer)
  • AMD Athlon 64 X2 processors (or newer)
  • AMD Turion processors (or newer)

This BIOS feature is a workaround for the bug. It allows you to disable the sequential prefetch mechanism and prevent the bug from manifesting.

When enabled, the BIOS will disable the processor’s sequential prefetch mechanism for any software that operates in Long Mode.

When disabled, the BIOS will not disable the processor’s sequential prefetch mechanism. This improves the processor’s performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or processing incorrect code. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.


 


Dynamic Idle Cycle Counter – The BIOS Optimization Guide

Dynamic Idle Cycle Counter

Common Options : Enabled, Disabled

 

Quick Review

The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.

 

Details

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind DRAM Idle Timer.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.
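The C sketch below is a toy model of this behavior. The step sizes, bounds and event stream are invented for illustration and are not AMD's actual algorithm:

```c
/* Toy model of dynamic page conflict prediction: the idle cycle limit
 * is nudged up on a page miss (the page was closed too early) and
 * down on a page conflict (the page was kept open too long). Step
 * sizes, bounds and events are invented for illustration. */
#include <stdio.h>

int main(void)
{
    int idle_limit = 16;   /* starting idle cycle limit, in cycles */
    const char *events[] = { "miss", "conflict", "conflict",
                             "miss", "hit" };

    for (int i = 0; i < 5; i++) {
        if (events[i][0] == 'm' && idle_limit < 64)
            idle_limit += 4;   /* miss: keep pages open longer */
        else if (events[i][0] == 'c' && idle_limit > 4)
            idle_limit -= 4;   /* conflict: close pages sooner */

        printf("%-8s -> idle cycle limit is now %2d cycles\n",
               events[i], idle_limit);
    }
    return 0;
}
```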

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.



 


ACPI SRAT Table – BIOS Optimization Guide

ACPI SRAT Table

Common Options : Enabled, Disabled

 

Quick Review

The ACPI Static Resource Affinity Table (SRAT) stores topology information for all the processors and memory, describing the physical locations of the processors and memory in the system. It also describes what memory is hot-pluggable, and what is not.

The operating system scans the ACPI SRAT at boot time and uses the information to better allocate memory and schedule software threads for maximum performance. This BIOS feature controls whether the SRAT is made available to the operating system at boot up, or not.

When enabled, the BIOS will build the Static Resource Affinity Table (SRAT) and allow the operating system to access and use the information to optimize software thread allocation and memory usage.

When disabled, the BIOS will not build the Static Resource Affinity Table (SRAT). Alternative optimizations like Node Memory Interleaving can then be enabled.

If you are using an operating system that supports ACPI SRAT (e.g. Windows Server 2003, Windows XP SP2 with Physical Address Extensions or PAE enabled), it is recommended that you enable this BIOS feature to allow the operating system to dynamically allocate threads and memory according to the SRAT data.

Please note that you must disable Node Memory Interleave if you intend to enable this BIOS feature. Node Memory Interleave is a static optimization that cannot work in tandem with the dynamic optimizations that the operating system can perform using information from the ACPI SRAT.

If you are using an operating system that does not support ACPI SRAT (e.g. Windows 2000, Windows 98), it is recommended that you disable this BIOS feature, and possibly enable Node Memory Interleaving instead.

 

Details

Although multiple cores and increased clock speeds have increased computing performance, the processor bus and memory bus are becoming significant bottlenecks. Even SMP (Symmetric MultiProcessor) systems are limited by their dependence on a processor and memory bus.

To allow computing performance to scale better, system designers are building smaller systems, called nodes, each containing its own processors and memory. These are connected to each other using a high-speed cache-coherent interconnect, forming a larger system. This architecture is known as ccNUMA, short for Cache-Coherent Non-Uniform Memory Access.

The cache-coherent interconnect may be a network switch, or the interconnect within a multi-core processor (e.g. the HyperTransport bus between the two cores of an AMD Opteron processor). Any processor in any node can access and use memory in other nodes through this interconnect. In multi-core processors, this allows one core to read from another core’s memory.

However, while memory accesses within the node itself (or local memory accesses by one core) are fast, accesses to memory in other nodes (or another core’s memory) are several times slower. Therefore, improving performance on a ccNUMA system involves optimizations based on prioritizing threads to processors in the same node, and ensuring processors use the memory closest to them.
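To give a feel for what the operating system reads at boot time, here is an abbreviated C sketch of the two most common SRAT entry types. The field layout is simplified from the ACPI specification (reserved fields and exact packing are omitted), so treat it as a conceptual picture rather than a parser-ready definition:

```c
/* Abbreviated, conceptual sketch of the two most common SRAT entry
 * types. Field layout is simplified from the ACPI specification;
 * reserved fields and exact packing are omitted. */
#include <stdio.h>
#include <stdint.h>

struct srat_cpu_affinity {      /* entry type 0 */
    uint8_t  type;              /* 0 = processor local APIC affinity */
    uint8_t  length;
    uint8_t  proximity_domain;  /* the node this processor belongs to */
    uint8_t  apic_id;           /* which processor */
    uint32_t flags;             /* bit 0 : entry enabled */
};

struct srat_mem_affinity {      /* entry type 1 */
    uint8_t  type;              /* 1 = memory affinity */
    uint8_t  length;
    uint32_t proximity_domain;  /* the node this range belongs to */
    uint64_t base_address;      /* start of the memory range */
    uint64_t range_length;      /* size of the memory range */
    uint32_t flags;             /* bit 0 : enabled, bit 1 : hot-pluggable */
};

int main(void)
{
    /* Example: 4GB of non-hot-pluggable memory belonging to node 0. */
    struct srat_mem_affinity node0_ram = {
        .type = 1, .proximity_domain = 0,
        .base_address = 0, .range_length = 4ull << 30, .flags = 1,
    };

    printf("Node %u : %llu GB of RAM, hot-pluggable : %s\n",
           (unsigned)node0_ram.proximity_domain,
           (unsigned long long)(node0_ram.range_length >> 30),
           (node0_ram.flags & 2) ? "yes" : "no");
    return 0;
}
```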

Older operating systems like Windows 2000 are not capable of determining the design of the system, and therefore cannot perform such optimizations. However, newer operating systems like Windows Server 2003 can readily identify the system’s hardware topology, and allocate software threads and memory in a more optimal fashion.


This is where the ACPI Static Resource Affinity Table (SRAT) comes in. The SRAT stores topology information for all the processors and memory, describing the physical locations of the processors and memory in the system. It also describes what memory is hot-pluggable, and what is not.

The operating system scans the ACPI SRAT at boot time and uses the information to better allocate memory and schedule software threads for maximum performance. This BIOS feature controls whether the SRAT is made available to the operating system at boot up, or not.

When enabled, the BIOS will build the Static Resource Affinity Table (SRAT) and allow the operating system to access and use the information to optimize software thread allocation and memory usage.

When disabled, the BIOS will not build the Static Resource Affinity Table (SRAT). Alternative optimizations like Node Memory Interleaving can then be enabled.

If you are using an operating system that supports ACPI SRAT (e.g. Windows Server 2003, Windows XP SP2 with Physical Address Extensions or PAE enabled), it is recommended that you enable this BIOS feature to allow the operating system to dynamically allocate threads and memory according to the SRAT data.

Please note that you must disable Node Memory Interleave if you intend to enable this BIOS feature. Node Memory Interleave is a static optimization that cannot work in tandem with the dynamic optimizations that the operating system can perform using information from the ACPI SRAT.

If you are using an operating system that does not support ACPI SRAT (e.g. Windows 2000, Windows 98), it is recommended that you disable this BIOS feature, and possibly enable Node Memory Interleaving instead.


 


Video Memory Cache Mode – BIOS Optimization Guide

Video Memory Cache Mode

Common Options : USWC, UC

 

Quick Review

Video Memory Cache Mode is yet another BIOS feature with a misleading name. It does not cache the video memory or even graphics data (such data is uncacheable anyway).

This BIOS feature allows you to control the USWC (Uncached Speculative Write Combining) write combine buffers.

When set to USWC, the write combine buffers will accumulate and combine partial or smaller graphics writes from the processor and write them to the graphics card as burst writes.

When set to UC, the write combine buffers will be disabled. All graphics writes from the processor will be written to the graphics card directly.

It is highly recommended that you set the Video Memory Cache Mode option to USWC for improved graphics and processor performance.

However, if you are using an older graphics card, it may not be compatible with this feature. Enabling this feature with such graphics cards will cause a host of problems like graphics artifacts, system crashes and even the inability to boot up properly.

If you face such problems, you should set this BIOS feature to UC immediately.

 

Details

Video Memory Cache Mode is yet another BIOS feature with a misleading name. It does not cache the video memory or even graphics data (such data is uncacheable anyway). It is actually similar to the USWC Write Posting BIOS feature.

Current processors are heavily optimized for burst operations, which allow for very high memory bandwidth. Unfortunately, graphics writes from the processor are mostly pixel writes, which are 8 to 32 bits in size. Because they do not fill up an entire cache line, such writes are not burstable. This results in poor graphics write performance.

To correct this deficiency, processors now come with one or more internal write combine buffers. These buffers are designed to accumulate graphics writes from the processor. These partial or smaller writes are then combined and written to the graphics card as burst writes.
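The effect of the write combine buffers can be modeled in a few lines of C. In this illustrative sketch (the buffer size matches a 64-byte cache line, but the pixel count is arbitrary), 256 single-byte pixel writes collapse into just four burst writes:

```c
/* Toy model of USWC write combining: single-byte pixel writes
 * accumulate in a 64-byte write combine buffer and leave as one burst
 * when the buffer fills. Buffer size and pixel count are illustrative. */
#include <stdio.h>

#define WC_BUFFER_SIZE 64

int main(void)
{
    int pixels = 256;             /* 8-bit writes issued by the CPU */
    int buffered = 0, bursts = 0;

    for (int i = 0; i < pixels; i++) {
        buffered++;                       /* pixel lands in the buffer */
        if (buffered == WC_BUFFER_SIZE) {
            bursts++;                     /* buffer full: burst it out */
            buffered = 0;
        }
    }

    printf("UC   : %d individual writes\n", pixels);
    printf("USWC : %d burst writes\n", bursts);   /* 4 */
    return 0;
}
```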

The use of these internal write combine buffers provides many benefits :

  1. Partial or smaller graphics writes from the processor are now combined into burstable writes. This greatly increases the performance of the processor and AGP (or PCI) buses.
  2. Graphics writes will require fewer transactions on the processor and AGP (or PCI) bus. This improves the bandwidth of those buses.
  3. The processor only needs to write to its internal write combine buffers, instead of the processor bus. This improves its performance by allowing it to work on other tasks while the write combine buffers handle the actual write transaction.

Because the write combine buffers allow speculative reads, this feature is known as the USWC (Uncached Speculative Write Combining) feature. The older method of writing all processor writes directly to the graphics card is known as UC (UnCached).


This BIOS feature allows you to control the USWC (Uncached Speculative Write Combining) write combine buffers.

When set to USWC, the write combine buffers will accumulate and combine partial or smaller graphics writes from the processor and write them to the graphics card as burst writes.

When set to UC, the write combine buffers will be disabled. All graphics writes from the processor will be written to the graphics card directly.

It is highly recommended that you set the Video Memory Cache Mode option to USWC for improved graphics and processor performance.

Please note that this feature must also be supported by the graphics card, the operating system and the graphics driver for it to work properly.

All Microsoft operating systems from Windows NT 4.0 onwards support USWC, so you do not need to worry if you are using Windows NT 4.0 or a newer Microsoft operating system. As this feature has been around for some time, the drivers of USWC-compatible graphics cards also fully support it.

However, if you are using an older graphics card, it may not be compatible with this feature. Older graphics cards make use of a FIFO (First In, First Out) I/O model which can only support the UnCached (UC) type of transaction. Enabling this feature with such graphics cards will cause a host of problems like graphics artifacts, system crashes and even the inability to boot up properly.

If you face such problems, you should set this BIOS feature to UC immediately.


 


Fast R-W Turn Around – BIOS Optimization Guide

Fast R-W Turn Around

Common Options : Enabled, Disabled

 

Quick Review

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

As its name suggests, this BIOS feature allows you to skip that delay. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.

 

Details

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

Please note that this extra delay is only introduced when there is a switch from reads to writes. Switching from writes to reads will not suffer from such a delay.

As its name suggests, this BIOS feature allows you to skip that delay so that the memory controller can switch or “turn around” from reads to writes faster than normal. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.
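A back-of-the-envelope calculation shows what is at stake. The C sketch below assumes alternating 8-transfer read and write bursts and a one-cycle turn-around penalty; both numbers are illustrative, not actual memory controller timings:

```c
/* Back-of-the-envelope model: bus utilization for alternating read
 * and write bursts, with and without a one-cycle read-to-write
 * turn-around delay. Both numbers are illustrative. */
#include <stdio.h>

int main(void)
{
    const int burst = 8;        /* data transfers per burst */
    const int turnaround = 1;   /* extra cycles on a read-to-write switch */

    /* One read burst followed by one write burst. */
    double fast = (2.0 * burst) / (2 * burst);              /* delay skipped */
    double slow = (2.0 * burst) / (2 * burst + turnaround); /* delay applied */

    printf("Utilization with the delay skipped : %.1f%%\n", fast * 100);
    printf("Utilization with the delay applied : %.1f%%\n", slow * 100);
    return 0;
}
```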

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.


 


DRAM Read Latch Delay – BIOS Optimization Guide

DRAM Read Latch Delay

Common Options : Auto, No Delay, 0.5ns, 1.0ns, 1.5ns

Quick Review

This BIOS feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. Start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. If your system becomes unstable after using the No Delay option, simply revert to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 

Details

This feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading. As such, a lone single-sided memory module provides the lowest DRAM load possible.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The longer the delay, the poorer the read performance of your memory modules. However, the stability of your memory modules won’t increase together with the length of the delay. Remember, the purpose of the feature is only to ensure that the memory controller will be able to latch onto the DRAM device with all sorts of DRAM loadings.


The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. It isn’t going to increase stability. In fact, it may just make things worse! So, start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. This forces the memory controller to latch onto the DRAM devices without delay, even if the BIOS presets indicate that a delay is required. Naturally, this can potentially cause stability problems if you actually have a heavy DRAM load. Therefore, if your system becomes unstable after using the No Delay option, simply revert to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 


AGP Always Compensate – BIOS Optimization Guide

AGP Always Compensate

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature determines if the AGP controller should be allowed to dynamically adjust the AGP driving strength or use preset drive strength values.

By default, it is set to automatically adjust the AGP drive strength once or at regular intervals. The circuitry can also be disabled or bypassed and a user setting used. However, this BIOS feature does not allow manual configuration.

When you enable AGP Always Compensate, the auto-compensation circuitry will automatically adjust the AGP drive strength at regular intervals.

If you disable it, the circuitry will only adjust the drive strength once at boot-up. The drive strength values derived at boot-up will remain until the system is rebooted.

It is recommended that you enable AGP Always Compensate so that the AGP controller can dynamically adjust the AGP driving strength at regular intervals.

 

Details

This feature is somewhat similar to the AGP Drive Strength feature. It determines if the AGP controller should be allowed to dynamically adjust the AGP driving strength or use preset drive strength values.

Due to the tighter tolerances of the AGP 8X and AGP 4X buses, the AGP controller features auto-compensation circuitry that compensates for the motherboard’s impedance on the AGP bus. It does this by dynamically adjusting the drive strength of the I/O pads over a range of temperatures and voltages.

The auto-compensation circuitry has two operating modes. By default, it is set to automatically compensate for the impedance once or at regular intervals by dynamically adjusting the AGP drive strength. The circuitry can also be disabled or bypassed. In this case, it is up to the user (through the BIOS) to write the desired drive strength value to the AGP I/O pads.


This is where AGP Always Compensate differs from the AGP Drive Strength feature. While AGP Drive Strength allows you to switch to manual configuration by the user, AGP Always Compensate does not. It only allows you to change the auto-compensation mode.

When you enable AGP Always Compensate, the auto-compensation circuitry will dynamically compensate for changes in the impedance at regular intervals.

If you disable it, the circuitry will only compensate for the impedance once at boot-up. The drive strength values derived at boot-up will remain until the system is rebooted.

It is recommended that you enable AGP Always Compensate so that the AGP controller can initiate dynamic compensation at regular intervals. This will allow it to compensate for any changes in the impedance.


 


CPU Adjacent Sector Prefetch – BIOS Optimization Guide

CPU Adjacent Sector Prefetch

Common Options : Enabled, Disabled

 

Quick Review

CPU Adjacent Sector Prefetch is a BIOS feature specific to Intel processors (from the Pentium 4 onwards), including Intel Xeon processors.

When enabled, the processor will fetch the cache line containing the currently requested data, and prefetch the following cache line.

When disabled, the processor will only fetch the cache line containing the currently requested data.

In a desktop system, CPU Adjacent Sector Prefetch improves the processor’s performance since there’s a high probability of the processor requiring the next cache line as well. It is recommended that you enable this BIOS feature in a desktop system.

But in a server, this feature may actually degrade performance, since data requests in servers are of a more random nature. You will need to evaluate the performance effect of CPU Adjacent Sector Prefetch on your server and determine if it should be disabled or enabled for better performance. But servers should generally disable this feature.

 

Details

CPU Adjacent Sector Prefetch is a BIOS feature specific to Intel processors (from the Pentium 4 onwards), including Intel Xeon processors. When one of these processors receives data from the cache, it can also prefetch the next 64-byte cache line. This may reduce cache latency by making the next cache line immediately available if the processor requires it as well.

When enabled, the processor will fetch the cache line containing the currently requested data, and prefetch the following cache line.

When disabled, the processor will only fetch the cache line containing the currently requested data.

In a desktop system, CPU Adjacent Sector Prefetch improves the processor’s performance since there’s a high probability of the processor requiring the next cache line as well. It is recommended that you enable this BIOS feature in a desktop system.

But in a server, this feature may actually degrade performance, since data requests in servers are of a more random nature. The probability of the next cache line being required by the processor is lower than in a desktop system. If the processor prefetches the second cache line and does not need it, the prefetched line is discarded and the processor requests the data it actually needs. This incurs a slight performance penalty.
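A toy simulation makes the desktop-versus-server argument concrete. The C sketch below (the access patterns and region size are invented for illustration) counts how often the prefetched adjacent line turns out to be the next line actually requested:

```c
/* Toy simulation: how often the prefetched adjacent cache line is the
 * next line actually requested, for a sequential versus a random
 * access pattern. Patterns and sizes are invented for illustration. */
#include <stdio.h>
#include <stdlib.h>

#define ACCESSES 100000
#define LINES    1024    /* cache lines in the touched region */

static double useful_ratio(int sequential)
{
    int useful = 0;
    unsigned int addr = 0;

    for (int i = 0; i < ACCESSES; i++) {
        unsigned int next = sequential ? addr + 1
                                       : (unsigned int)rand() % LINES;
        if (next == addr + 1)    /* the adjacent line was prefetched */
            useful++;
        addr = next;
    }
    return (double)useful / ACCESSES;
}

int main(void)
{
    srand(42);
    printf("Sequential workload : %.1f%% of prefetches useful\n",
           useful_ratio(1) * 100);
    printf("Random workload     : %.1f%% of prefetches useful\n",
           useful_ratio(0) * 100);
    return 0;
}
```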

You will need to evaluate the performance effect of CPU Adjacent Sector Prefetch on your server and determine if it should be disabled or enabled for better performance. But servers should generally disable this feature.



 


Direct Frame Buffer – BIOS Optimization Guide

Direct Frame Buffer

Common Options : Enabled, Disabled

 

Quick Review

The Direct Frame Buffer BIOS feature controls the processor’s access to the section of system memory reserved for use by the integrated graphics processor as graphics memory. Please note that we are referring to the CPU, not the graphics processor.

When enabled, the processor is allowed to directly write to the section of system memory reserved as graphics memory. This increases the performance of applications that write directly to the frame buffer.

When disabled, the processor is not allowed to directly write to the section of system memory reserved as graphics memory. This reduces the performance of applications that write directly to the frame buffer.

It is recommended that you enable this BIOS feature for maximum performance. Of course, this only improves performance of applications that write directly to the frame buffer.

 

Details

This BIOS feature is found in VIA-based motherboards with integrated graphics processors. The integration of the graphics processor into the motherboard chipset reduces the cost of building the PC.

To further reduce cost, the integrated graphics processor does not come with dedicated graphics memory. Instead, part of system memory is cordoned off and used exclusively by the graphics processor as graphics memory.

The Direct Frame Buffer BIOS feature controls the processor’s access to the section of system memory reserved for use by the integrated graphics processor as graphics memory. Please note that we are referring to the CPU, not the graphics processor.

When enabled, the processor is allowed to directly write to the section of system memory reserved as graphics memory. This increases the performance of applications that write directly to the frame buffer.

When disabled, the processor is not allowed to directly write to the section of system memory reserved as graphics memory. This reduces the performance of applications that write directly to the frame buffer.

It is recommended that you enable this BIOS feature for maximum performance. Of course, it only improves the performance of applications that write directly to the frame buffer.

Please note that this BIOS feature has no effect on the frame buffer of any discrete graphics card that you install. It only controls the processor’s access to the section of system memory reserved for use by the integrated graphics processor as graphics memory.
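
To make “writing directly to the frame buffer” concrete, here is a minimal sketch of such an application, assuming a Linux system that exposes the integrated graphics through the /dev/fb0 framebuffer device. It is purely illustrative and not specific to VIA chipsets; every store in the fill loop goes straight into the section of system memory reserved as graphics memory.

/* Minimal sketch : map /dev/fb0 and write pixels directly into the
 * frame buffer. Assumes a Linux framebuffer device for the integrated
 * graphics; purely illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_fix_screeninfo finfo;
    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0 ||
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        perror("ioctl"); close(fd); return 1;
    }

    size_t size = (size_t) finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Every store below is a direct CPU write into graphics memory,
     * which is exactly the kind of access this BIOS feature governs.
     * Fill the whole screen with a mid grey. */
    for (size_t i = 0; i < size; i++)
        fb[i] = 0x80;

    munmap(fb, size);
    close(fd);
    return 0;
}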

 


SATA Mode – BIOS Optimization Guide

SATA Mode

Common Options : RAID, SATA or AHCI, IDE

 

Quick Review

The SATA Mode BIOS feature is similar to the SATA Operation Mode BIOS feature, but with different options available. It controls the SATA controller’s operating mode.

When set to SATA or AHCI, the SATA controller enables its AHCI functionality. However, its RAID functions will be disabled and you won’t be able to access the RAID setup utility at boot time. You can find more information on AHCI in the SATA AHCI Mode BIOS feature.

When set to RAID, the SATA controller enables both its RAID and AHCI functions. You will be allowed to access the RAID setup utility at boot time.

When set to IDE, the SATA controller disables its RAID and AHCI functions and runs in the IDE emulation mode. You won’t have access to the RAID setup utility.

If you intend to create or use a RAID array, you should set this BIOS feature to RAID. The BIOS will load the RAID setup utility which you can access at boot time.

If you do not wish to create or use a RAID array but would like to make use of the SATA controller’s AHCI features, you should set this BIOS feature to SATA or AHCI. This skips the loading of the SATA controller’s RAID functions, which speeds up the boot process.

Even if you do not intend to use a RAID array or features like hot-plugging, it is recommended that you set this BIOS feature to SATA or AHCI. This is because switching from IDE emulation mode to AHCI mode later is often problematic.

On the other hand, IDE mode offers maximum compatibility with older hardware. Even with the proper SATA driver installed, it is possible for a system to crash while installing or booting up an operating system in AHCI or RAID mode. Setting this BIOS feature to IDE in such cases will normally resolve the issue.


 

Details

The SATA Mode BIOS feature is similar to the SATA Operation Mode BIOS feature, but with different options available. It controls the SATA controller’s operating mode. There are three available modes – IDE, SATA or AHCI and RAID.

When set to SATA or AHCI, the SATA controller enables its AHCI functionality. However, its RAID functions will be disabled and you won’t be able to access the RAID setup utility at boot time. You can find more information on AHCI in the SATA AHCI Mode BIOS feature.

When set to RAID, the SATA controller enables both its RAID and AHCI functions. You will be allowed to access the RAID setup utility at boot time.

When set to IDE, the SATA controller disables its RAID and AHCI functions and runs in the IDE emulation mode. You won’t have access to the RAID setup utility.
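
Incidentally, each of these modes makes the SATA controller show up with a different PCI class code, which gives you a simple way to verify which mode the BIOS actually selected. The following is a minimal sketch for Linux, reading the class codes that sysfs exposes; subclass 0x01 (IDE), 0x04 (RAID) and 0x06 (SATA, with AHCI as its programming interface) are standard PCI mass storage subclasses.

/* Minimal sketch : report how the BIOS configured each storage
 * controller by reading PCI class codes from Linux sysfs. Class 0x01 is
 * mass storage; subclass 0x01 = IDE, 0x04 = RAID, 0x06 = SATA (AHCI). */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    DIR *dir = opendir("/sys/bus/pci/devices");
    if (!dir) { perror("opendir"); return 1; }

    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;
        char path[512];
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/class", de->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        unsigned int cls = 0;
        if (fscanf(f, "%x", &cls) == 1 && ((cls >> 16) & 0xff) == 0x01) {
            unsigned int sub = (cls >> 8) & 0xff;
            const char *mode = sub == 0x01 ? "IDE"  :
                               sub == 0x04 ? "RAID" :
                               sub == 0x06 ? "AHCI" : "other";
            printf("%s : class 0x%06x -> %s\n", de->d_name, cls, mode);
        }
        fclose(f);
    }
    closedir(dir);
    return 0;
}

On a system set to SATA or AHCI, for example, the controller would typically report class 0x010601.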

If you intend to create or use a RAID array, you should set this BIOS feature to RAID. The BIOS will load the RAID setup utility which you can access at boot time.

If you do not wish to create or use a RAID array but would like to make use of the SATA controller’s AHCI features, you should set this BIOS feature to SATA or AHCI. This skips the loading of the SATA controller’s RAID functions, which speeds up the boot process.

Please note that both RAID and SATA or AHCI modes require you to load the SATA controller driver during the Microsoft Windows XP installation routine. When the Windows XP installer starts, the following message will appear on screen :

Press F6 if you need to install a third party SCSI or RAID driver.

At this point, press the F6 key and insert the floppy disk containing the motherboard’s SATA controller driver. Once the driver is loaded, the Microsoft Windows XP installation will proceed as usual. This step is not required if the SATA controller is set to SATA or AHCI and the operating system has native support for AHCI.

Even if you do not intend to use a RAID array or features like hot-plugging, it is recommended that you set this BIOS feature to SATA or AHCI. This is because switching from IDE emulation mode to AHCI mode later is often problematic. For example, switching from IDE mode to AHCI mode after installing Microsoft Windows 7 in IDE mode will result in a Blue Screen Of Death (BSOD).

On the other hand, IDE mode offers maximum compatibility with older hardware. Even with the proper SATA driver installed, it is possible for a system to crash while installing or booting up an operating system in AHCI or RAID mode. Setting this BIOS feature to IDE in such cases will normally resolve the issue.

 


SDRAM Burst Len – BIOS Optimization Guide

SDRAM Burst Len

Common Options : 4, 8

 

Quick Review

This BIOS feature allows you to control the length of a burst transaction.

When this feature is set to 4, a burst transaction can comprise up to four reads or four writes.

When this feature is set to 8, a burst transaction can comprise up to eight reads or eight writes.

As the initial CAS latency is fixed for each burst transaction, a longer burst transaction amortizes that latency over more data, allowing more to be read or written with less average delay than a shorter burst transaction. Therefore, a burst length of 8 will be faster than a burst length of 4.

As such, it is recommended that you select the longer burst length of 8 for better performance.

 

Details

This is the same as the SDRAM Burst Length BIOS feature, only with a weirdly truncated name. Surprisingly, many manufacturers are using it. Why? Only they know. 🙂

Burst transactions improve SDRAM performance by allowing the reading or writing of whole ‘blocks’ of contiguous data with only one column address.

In a burst sequence, only the first read or write transfer incurs the initial latency of activating the column. The subsequent reads or writes in that burst sequence can then follow behind without any further delay. This allows blocks of data to be read or written with far less delay than non-burst transactions.

For example, a burst transaction of four writes can incur the following latencies : 4-1-1-1. In this example, the total time it takes to transact the four writes is merely 7 clock cycles.

In contrast, if the four writes are not written by burst transaction, they will incur the following latencies : 4-4-4-4. The time it takes to transact the four writes becomes 16 clock cycles, which is 9 clock cycles longer or more than twice as slow as a burst transaction.

This is where the SDRAM Burst Len BIOS feature comes in. It is a BIOS feature that allows you to control the length of a burst transaction.

When this feature is set to 4, a burst transaction can comprise up to four reads or four writes.

When this feature is set to 8, a burst transaction can comprise up to eight reads or eight writes.

As the initial CAS latency is fixed for each burst transaction, a longer burst transaction amortizes that latency over more data, allowing more to be read or written with less average delay than a shorter burst transaction. Therefore, a burst length of 8 will be faster than a burst length of 4.


For example, if the memory controller wants to write a block of contiguous data eight units long to memory, it can do it as a single burst transaction 8 units long or two burst transactions, each 4 units in length. The hypothetical latencies incurred by the single 8-unit long transaction would be 4-1-1-1-1-1-1-1 with a total time of 11 clock cycles for the entire transaction.

But if the eight writes are written to memory as two burst transactions of 4 units in length, the hypothetical latencies incurred would be 4-1-1-1-4-1-1-1. The time taken for the two transactions to complete would be 14 clock cycles. As you can see, this is slower than a single transaction, 8 units long.

Therefore, it is recommended that you select the longer burst length of 8 for better performance.
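
If you would like to verify the arithmetic above, the short program below computes the total cycle counts for all four scenarios, using this article’s hypothetical figures of a 4-cycle initial latency and 1 clock cycle per subsequent transfer within a burst. These are illustrative numbers, not the timings of any particular memory module.

/* Worked example : total clock cycles for the burst scenarios above,
 * using a hypothetical 4-cycle initial latency and 1 cycle for each
 * subsequent transfer within a burst. */
#include <stdio.h>

/* Cycles to move 'total' transfers using bursts of 'burst_len' each. */
static int burst_cycles(int total, int burst_len, int initial, int per_xfer)
{
    int bursts = (total + burst_len - 1) / burst_len;   /* round up */
    return bursts * (initial + (burst_len - 1) * per_xfer);
}

int main(void)
{
    printf("4 transfers, burst of 4 : %d cycles\n",   /* 4-1-1-1 = 7  */
           burst_cycles(4, 4, 4, 1));
    printf("4 transfers, no burst   : %d cycles\n",   /* 4-4-4-4 = 16 */
           burst_cycles(4, 1, 4, 1));
    printf("8 transfers, burst of 8 : %d cycles\n",   /* 11 cycles    */
           burst_cycles(8, 8, 4, 1));
    printf("8 transfers, burst of 4 : %d cycles\n",   /* 14 cycles    */
           burst_cycles(8, 4, 4, 1));
    return 0;
}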

 


CPU Direct Access FB – BIOS Optimization Guide

CPU Direct Access FB

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature controls the processor’s access to the section of system memory reserved for use by the integrated graphics processor as graphics memory. Please note that we are referring to the CPU, not the graphics processor.

When enabled, the processor is allowed to directly write to the section of system memory reserved as graphics memory. This increases the performance of applications that write directly to the frame buffer.

When disabled, the processor is not allowed to directly write to the section of system memory reserved as graphics memory. This reduces the performance of applications that write directly to the frame buffer.

It is recommended that you enable this BIOS feature for maximum performance. Of course, it only improves the performance of applications that write directly to the frame buffer.


 

Details

This BIOS feature, which appears to be the same as the Direct Frame Buffer BIOS feature under a different name, is found on VIA-based motherboards with integrated graphics processors. The integration of the graphics processor into the motherboard chipset reduces the cost of building the PC.

To further reduce cost, the integrated graphics processor does not come with dedicated graphics memory. Instead, part of system memory is cordoned off and used exclusively by the graphics processor as graphics memory.

This BIOS feature controls the processor’s access to the section of system memory reserved for use by the integrated graphics processor as graphics memory. Please note that we are referring to the CPU, not the graphics processor.

When enabled, the processor is allowed to directly write to the section of system memory reserved as graphics memory. This increases the performance of applications that write directly to the frame buffer.

When disabled, the processor is not allowed to directly write to the section of system memory reserved as graphics memory. This reduces the performance of applications that write directly to the frame buffer.

It is recommended that you enable this BIOS feature for maximum performance. Of course, it only improves the performance of applications that write directly to the frame buffer.

Please note that this BIOS feature has no effect on the frame buffer of any discrete graphics card that you install. It only controls the processor’s access to the section of system memory reserved for use by the integrated graphics processor as graphics memory.
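
If you would like to gauge what this feature is actually worth on your system, one rough approach is to time CPU writes into the mapped frame buffer against writes to an ordinary buffer in system memory. The sketch below does just that, under the same Linux /dev/fb0 assumption as the example in the Direct Frame Buffer section earlier. Absolute numbers will vary widely with the chipset and its caching policy, so treat the results as indicative only.

/* Rough benchmark sketch : compare the time to fill the mapped frame
 * buffer with the time to fill an equally sized buffer in system RAM.
 * Assumes Linux with /dev/fb0; results are only indicative of the
 * CPU's direct write speed to graphics memory. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

static double fill_seconds(volatile uint8_t *buf, size_t size)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < size; i++)
        buf[i] = 0xFF;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_fix_screeninfo finfo;
    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0 ||
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        perror("ioctl"); close(fd); return 1;
    }

    size_t size = (size_t) finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    uint8_t *ram = malloc(size);
    if (!ram) { perror("malloc"); return 1; }

    printf("frame buffer fill : %.4f s\n", fill_seconds(fb, size));
    printf("system RAM fill   : %.4f s\n", fill_seconds(ram, size));

    free(ram);
    munmap(fb, size);
    close(fd);
    return 0;
}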

 
