Category Archives: The Famous Tech ARP BIOS Guide!

This is the new home of the famous Tech ARP BIOS Guide.

Created more than twenty years ago, this is the most extensive guide on motherboard BIOS settings, and the source of the published book – Breaking The BIOS Barrier : The Definitive BIOS Optimization Guide!

It currently covers over 400 BIOS settings, with more being added on a weekly basis. So make sure you check back often!

SDRAM Precharge Control – The BIOS Optimization Guide

SDRAM Precharge Control

Common Options : Enabled, Disabled

 

Quick Review of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank. This is useful in cases where subsequent data requests will also result in page misses.

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity, although it is only useful if you have chosen an SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.

 

Details of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This naturally improves performance.

But if a read request cannot be satisfied by any of the four open pages, there are two possibilities. Either one page is closed and the correct page opened, or all open pages are closed and new pages opened up. Either way, the read request suffers the full latency penalty.
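To make the trade-off concrete, here is a minimal C sketch that replays a synthetic access stream under both page-miss policies. The cycle costs and the access pattern are illustrative assumptions, not actual chipset timings.

```c
/* Minimal model of the two page-miss policies. The latency numbers
   are illustrative assumptions, not real chipset timings. */
#include <stdio.h>

#define BANKS 4
#define HIT     1   /* access to an already-open page               */
#define ACT     3   /* activate a row in an already-precharged bank */
#define PRE_ACT 6   /* precharge an open page, then activate        */

/* policy 0: close only the missed bank's page (feature enabled)
   policy 1: precharge all banks on a page miss (feature disabled) */
static int run(int policy, const int bank[], const int page[], int n)
{
    int open[BANKS] = { -1, -1, -1, -1 };  /* open page per bank; -1 = precharged */
    int cycles = 0;

    for (int i = 0; i < n; i++) {
        int b = bank[i];
        if (open[b] == page[i]) {
            cycles += HIT;                  /* page hit */
        } else if (open[b] == -1) {
            cycles += ACT;                  /* bank already precharged */
            open[b] = page[i];
        } else {
            cycles += PRE_ACT;              /* page miss */
            if (policy == 1)
                for (int j = 0; j < BANKS; j++)
                    open[j] = -1;           /* All Banks Precharge */
            open[b] = page[i];
        }
    }
    return cycles;
}

int main(void)
{
    /* a stream with some page locality and some misses */
    const int bank[] = { 0, 0, 1, 0, 2, 2, 1, 3, 0, 1 };
    const int page[] = { 5, 5, 9, 6, 2, 2, 8, 1, 6, 8 };

    printf("close one page : %d cycles\n", run(0, bank, page, 10));
    printf("close all pages: %d cycles\n", run(1, bank, page, 10));
    return 0;
}
```

Which policy wins depends entirely on the access pattern: streams that revisit open pages favour the close-one-page policy, while streams of consecutive misses favour precharging everything.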


The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur. Naturally, this greatly impacts memory performance.

Fortunately, after the four full latency reads, the memory controller can often predict what pages will be needed next. It can then open them for minimum latency reads. This somewhat reduces the negative effect of consecutive page misses.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank.

This is useful in cases where subsequent data requests will also result in page misses. This is because the memory banks will already be precharged and ready to be activated. There is no need to wait for the memory banks to precharge before they can be activated. However, it also means that you won’t be able to benefit from data accesses that could have been satisfied by the previously opened pages.

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity, although it is only useful if you have chosen an SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.


PCI-E Maximum Payload Size – The BIOS Optimization Guide

PCI-E Maximum Payload Size

Common Options : 128, 256, 512, 1024, 2048, 4096

 

Quick Review

The PCI-E Maximum Payload Size BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size used by the PCI Express controller. The TLP payload size determines the amount of data transmitted within each data packet.

When set to 128, 256, 512, 1024 or 2048, the PCI Express controller will only use a maximum data payload of that many bytes within each TLP. When set to 4096, it uses the maximum data payload of 4096 bytes within each TLP – the largest payload size currently supported by the PCI Express protocol.

It is recommended that you set PCI-E Maximum Payload Size to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the motherboard chipset will communicate with it using a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set this BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.

 

Details of PCI-E Maximum Payload Size

The PCI Express protocol transmits data as well as control messages on the same links. This differentiates the PCI Express interconnect from the PCI bus and the AGP port, which make use of separate sideband signalling for control messages.

Control messages are delivered as Data Link Layer Packets or DLLPs, while data packets are sent out as Transaction Layer Packets or TLPs. However, TLPs are not pure data packets. They have a header which carries information like packet size, message type, traffic class, etc.

In addition, the actual data (known as the “payload”) is encoded with the 8B/10B encoding scheme. This replaces 8 uncoded bits with 10 encoded bits. This itself results in a 20% “loss” of bandwidth. The TLP overhead is further exacerbated by a 32-bit LCRC error-checking code.

Therefore, the size of the data payload is an important factor in determining the efficiency of the PCI Express interconnect. As the data payload gets smaller, the TLP becomes less efficient, because the overhead will then take up a more significant amount of bandwidth. To achieve maximum efficiency, the TLP should be as large as possible.
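To get a feel for these numbers, here is a small C sketch that estimates TLP efficiency at each payload size. It assumes roughly 20 bytes of per-packet overhead (framing, sequence number, header and LCRC) on top of the 20% 8B/10B coding loss; actual overheads vary by implementation.

```c
/* Rough TLP efficiency estimate. The 20-byte per-packet overhead is
   an assumption covering framing, sequence number, header and LCRC. */
#include <stdio.h>

int main(void)
{
    const int sizes[] = { 128, 256, 512, 1024, 2048, 4096 };
    const double overhead = 20.0;   /* bytes per TLP (assumption)      */
    const double coding   = 0.8;    /* 8B/10B keeps 8 of every 10 bits */

    for (int i = 0; i < 6; i++) {
        double eff = coding * sizes[i] / (sizes[i] + overhead);
        printf("payload %4d bytes -> ~%4.1f%% of the raw link rate\n",
               sizes[i], 100.0 * eff);
    }
    return 0;
}
```

The trend is what matters: a 128-byte payload wastes a noticeably larger fraction of the link than a 4096-byte payload does.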

The PCI Express specifications defined the following TLP payload sizes :

  • 128 bytes
  • 256 bytes
  • 512 bytes
  • 1024 bytes
  • 2048 bytes
  • 4096 bytes

However, it is up to the manufacturer to set the maximum TLP payload size supported by the PCI Express device. It determines the maximum TLP payload size the device can send or receive. When two PCI Express devices communicate with each other, the largest TLP payload size supported by both devices will be used.


The PCI-E Maximum Payload Size BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size used by the PCI Express controller. The TLP payload size, as mentioned earlier, determines the amount of data transmitted within each data packet.

When set to 128, 256, 512, 1024 or 2048, the PCI Express controller will only use a maximum data payload of that many bytes within each TLP. When set to 4096, it uses the maximum data payload of 4096 bytes within each TLP – the largest payload size currently supported by the PCI Express protocol.

It is recommended that you set PCI-E Maximum Payload Size to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the motherboard chipset will communicate with it using a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set this BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.


PCI Express Burn-in Mode – The BIOS Optimization Guide

PCI Express Burn-in Mode

Common Options : Default, 101.32MHz, 102.64MHz, 103.96MHz, 105.28MHz, 106.6MHz, 107.92MHz, 109.24MHz

 

Quick Review

The PCI Express Burn-in Mode BIOS feature allows you to overclock the PCI Express bus, even if Intel stamps its foot petulantly and insists that it is not meant for this purpose. While it does not give you direct control of the bus clocks, it allows some overclocking of the PCI Express bus.

When this BIOS feature is set to Default, the PCI Express bus runs at its normal speed of 100MHz.

When this BIOS feature is set to one of the other options, the PCI Express bus runs at that higher speed – from 101.32MHz up to 109.24MHz.

For better performance, it is recommended that you set this BIOS feature to 109.24MHz. This overclocks the PCI Express bus by about 9%, which should not cause any stability problems with most PCI Express devices. But if you encounter any stability issues, use a lower setting.


 

Details of PCI Express Burn-in Mode

While many motherboard manufacturers allow you to overclock various system clocks, Intel officially does not condone or support overclocking. Therefore, motherboards sold by Intel lack BIOS features that allow you to directly modify bus clocks.

However, some Intel motherboards come with a PCI Express Burn-in Mode BIOS feature. This ostensibly allows you to “burn-in” PCI Express devices with a slightly higher bus speed before settling back to the normal bus speed.

Of course, you can use this BIOS feature to overclock the PCI Express bus, even if Intel stamps its foot petulantly and insists that it is not meant for this purpose. While it does not give you direct control of the bus clocks, it allows some overclocking of the PCI Express bus.

When this BIOS feature is set to Default, the PCI Express bus runs at its normal speed of 100MHz.

When this BIOS feature is set to one of the other options, the PCI Express bus runs at that higher speed – from 101.32MHz up to 109.24MHz.

As you can see, this BIOS feature doesn’t allow much play with the clock speed. You can only adjust the clock speeds upwards by about 9%.
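The arithmetic behind that figure is straightforward, as this small C sketch shows (it assumes the 100MHz base clock).

```c
/* Overclock margin of each burn-in setting over the 100 MHz base clock. */
#include <stdio.h>

int main(void)
{
    const double base = 100.0;   /* normal PCI Express clock in MHz */
    const double opts[] = { 101.32, 102.64, 103.96, 105.28,
                            106.60, 107.92, 109.24 };

    for (int i = 0; i < 7; i++)
        printf("%6.2f MHz -> +%.2f%%\n",
               opts[i], 100.0 * (opts[i] - base) / base);
    return 0;
}
```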

For better performance, it is recommended that you set this BIOS feature to 109.24MHz. This overclocks the PCI Express bus by about 9%, which should not cause any stability problems with most PCI Express devices. But if you encounter any stability issues, use a lower setting.


Execute Disable Bit – The BIOS Optimization Guide

Execute Disable Bit

Common Options : Enabled, Disabled

 

Execute Disable Bit Quick Review

This BIOS feature is a toggle for the processor’s Execute Disable Bit option. In fact, the acronym XD is short for Execute Disable and is specific to Intel’s implementation. AMD’s implementation is called NX, short for No Execute.

When enabled, the processor prevents the execution of code in data-only memory pages. This provides some protection against buffer overflow attacks.

When disabled, the processor will not restrict code execution in any memory area. This makes the processor more vulnerable to buffer overflow attacks.

It is highly recommended that you enable this BIOS feature for increased protection against buffer overflow attacks.

However, please note that the Execute Disable Bit feature is a hardware feature present only in newer Intel processors. If your processor does not support Execute Disable Bit, then this BIOS feature will have no effect.

In addition, you must use an operating system that supports the Execute Disable Bit feature. Currently, that includes the following operating systems :

  • Microsoft Windows Server 2003 with Service Pack 1, or later.
  • Microsoft Windows XP with Service Pack 2, or later.
  • Microsoft Windows XP Tablet PC Edition 2005, or later.
  • SUSE Linux 9.2, or later.
  • Red Hat Enterprise Linux 3 Update 3, or later.

Incidentally, some applications and device drivers attempt to execute code from the kernel stack for improved performance. This will cause a page-fault error if Execute Disable Bit is enabled. In such cases, you will need to disable this BIOS feature.

 

Execute Disable Bit Details

Buffer overflow attacks are a major threat to networked computers. For example, a worm may infect a computer and flood the processor with code, bringing the system to a halt. The worm will also propagate throughout the network, paralyzing each and every system it infects.

Due to the prevalence of such attacks, Intel enhanced their processor architecture with a feature called Execute Disable Bit, which is designed to protect the computer against certain buffer overflow attacks. First released for the 64-bit Intel Itanium processor in 2001, this feature only appeared in Intel desktop and workstation processors from November 2004 onwards. Intel mobile processors with Execute Disable Bit only started shipping in February 2005.

Processors that come with this feature can restrict memory areas in which application code can be executed. When paired with an operating system that supports the Execute Disable Bit feature, the processor adds a new attribute bit (the Execute Disable Bit) in the paging structures used for address translation.

If the Execute Disable Bit of a memory page is set to 1, that page can only be used to store data. It will not be used to store executable code. But if the Execute Disable Bit of a memory page is set to 0, that page can be used to store data or executable code.

The processor will henceforth check the Execute Disable Bit whenever it executes code. It will not execute code in a memory page with the Execute Disable Bit set to 1. Any attempt to execute code in such a protected memory page will result in a page-fault exception.

So, if a worm or virus inserts code into the buffer, the processor prevents the code from being executed and the attack fails. This also prevents the worm or virus from propagating to other computers on the network.
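If you want to verify support before touching this BIOS feature, the XD/NX capability is reported by CPUID leaf 0x80000001, EDX bit 20. Here is a minimal C sketch using the cpuid.h helper shipped with GCC and Clang on x86.

```c
/* Check whether the processor reports Execute Disable / No Execute
   support (CPUID leaf 0x80000001, EDX bit 20). GCC/Clang on x86. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        puts("extended CPUID leaf not available");
        return 1;
    }
    puts(edx & (1u << 20) ? "XD/NX bit supported"
                          : "XD/NX bit not supported");
    return 0;
}
```

On Linux, the same information appears as the nx flag in /proc/cpuinfo.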


This BIOS feature is a toggle for the processor’s Execute Disable Bit option. In fact, the acronym XD is short for Execute Disable and is specific to Intel’s implementation. AMD’s implementation is called NX, short for No Execute.

When enabled, the processor prevents the execution of code in data-only memory pages. This provides some protection against buffer overflow attacks.

When disabled, the processor will not restrict code execution in any memory area. This makes the processor more vulnerable to buffer overflow attacks.

It is highly recommended that you enable this BIOS feature for increased protection against buffer overflow attacks.

However, please note that the Execute Disable Bit feature is a hardware feature present only in newer Intel processors. If your processor does not support Execute Disable Bit, then this BIOS feature will have no effect.

In addition, you must use an operating system that supports the Execute Disable Bit feature. Currently, that includes the following operating systems :

  • Microsoft Windows Server 2003 with Service Pack 1, or later.
  • Microsoft Windows XP with Service Pack 2, or later.
  • Microsoft Windows XP Tablet PC Edition 2005, or later.
  • SUSE Linux 9.2, or later.
  • Red Hat Enterprise Linux 3 Update 3, or later.

Incidentally, some applications and device drivers attempt to execute code from the kernel stack for improved performance. This will cause a page-fault error if Execute Disable Bit is enabled. In such cases, you will need to disable this BIOS feature.


Gate A20 Option – The BIOS Optimization Guide

Gate A20 Option

Common Options : Normal, Fast

 

Quick Review

The Gate A20 Option BIOS feature is used to determine the method by which Gate A20 is controlled. The Normal option forces the chipset to use the slow keyboard controller to do the switching. The Fast option, on the other hand, allows the chipset to use its own 0x92 port for faster switching. No candy for guessing which is the recommended setting!

Please note this feature is only important for operating systems that switch a lot between real mode and protected mode. These operating systems include 16-bit operating systems like MS-DOS and 16-bit/32-bit hybrid operating systems like Microsoft Windows 98.

This feature has no effect if the operating system only runs in real mode (no operating system currently in use does that, as far as I know!) or if the operating system operates entirely in protected mode (e.g. Microsoft Windows XP or newer). This is because if A20 mode switching is not required, then it does not matter whether the switching is done by the slow keyboard controller or the faster 0x92 port.

With all that said and done, the recommended setting for this BIOS feature is still Fast, even with operating systems that don’t do much mode switching. Although using the 0x92 port to control Gate A20 has been known to cause spontaneous reboots in certain, very rare instances, there is really no reason why you should keep using the slow keyboard controller to turn A20 on or off.

 

Details

The A20 address line is a relic from the past. It came about because the father of x86 processors – the Intel 8088 – had only 20 address lines! That meant that it could only address 1 MB of memory. Of course, in the 8088 days, 1 MB of memory was a LOT!

When the Intel 80286 processor was introduced, it had 24 address lines. This represented a tremendous leap in the amount of addressable memory. Although there were only four extra address lines, that allowed the 80286 to address up to 16 MB of memory.

To maintain 100% software compatibility with the 8088, the 80286 had a real mode that would truncate addresses to 20 bits. Unfortunately, a design bug prevented it from truncating the addresses properly. Thus, the 80286 was unable to run much 8088-compatible software.

To solve this problem, IBM designed an AND gate switch to control the 20th address bit. This switch was henceforth known as the Gate A20. When enabled, all available address lines would be used by the processor for access to memory above the first megabyte.

In the 8088-compatible real mode, the Gate A20 would be used to clear the 20th bit of all addresses. This allowed the 80286 to function like a superfast 8088 processor with access only to the first megabyte of memory.

Even today, Gate A20 is still an important part of the computer. The processor needs to turn A20 on and off when it switches between real mode and protected mode. Since operating systems like Microsoft Windows 98 switch a lot between real mode and protected mode, relying on the understandably slow keyboard controller was not acceptable.

The motherboard chipset’s I/O port 0x92 (System Control Port A) was summarily recruited to take over the job. A lot faster than the keyboard controller, the 0x92 port allows the processor to switch much faster between real mode and protected mode. This translates into faster memory access and better system performance.
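For the curious, here is an illustrative C sketch of the fast A20 method on Linux x86: it reads System Control Port A and sets bit 1, the A20 enable bit. This is purely academic on any protected-mode operating system, and the raw port access requires root privileges.

```c
/* Illustrative sketch of the "fast A20" method: set bit 1 of System
   Control Port A (0x92). Linux x86 only; needs root for ioperm().
   Note that bit 0 of this port triggers a fast reset, so the read-
   modify-write below deliberately preserves it. */
#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    if (ioperm(0x92, 1, 1)) {    /* request access to port 0x92 */
        perror("ioperm");
        return 1;
    }
    unsigned char v = inb(0x92);
    outb((unsigned char)(v | 0x02), 0x92);   /* bit 1 = A20 enable */
    printf("port 0x92: 0x%02x -> 0x%02x\n", v, v | 0x02);
    return 0;
}
```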


The Gate A20 Option BIOS feature is used to determine the method by which Gate A20 is controlled. The Normal option forces the chipset to use the slow keyboard controller to do the switching. The Fast option, on the other hand, allows the chipset to use its own 0x92 port for faster switching. No candy for guessing which is the recommended setting!

Please note this feature is only important for operating systems that switch a lot between real mode and protected mode. These operating systems include 16-bit operating systems like MS-DOS and 16-bit/32-bit hybrid operating systems like Microsoft Windows 98.

This feature has no effect if the operating system only runs in real mode (no operating system currently in use does that, as far as I know!) or if the operating system operates entirely in protected mode (e.g. Microsoft Windows XP or newer). This is because if A20 mode switching is not required, then it does not matter whether the switching is done by the slow keyboard controller or the faster 0x92 port.

With all that said and done, the recommended setting for this BIOS feature is still Fast, even with operating systems that don’t do much mode switching. Although using the 0x92 port to control Gate A20 has been known to cause spontaneous reboots in certain, very rare instances, there is really no reason why you should keep using the slow keyboard controller to turn A20 on or off.


Errata 123 Option – The BIOS Optimization Guide

Errata 123 Option

Common Options : Auto, Enabled, Disabled

 

Quick Review of Errata 123 Option

Errata 123 refers to the 123rd bug identified in AMD Athlon and Opteron processors. This bug affects the cache bypass feature in those processors.

These processors have an internal data path that allows the processor to bypass the L2 cache and initiate an early DRAM read for certain cache line fill requests, even before receiving the hit/miss status from the L2 cache.

However, at low core frequencies, the DRAM data read may reach the processor core before it is ready. This causes data corruption and/or the processor to hang.

This bug affects the following processor families :

  • Dual core AMD Opteron (Socket 940 and Socket 939) processors
  • AMD Athlon 64 X2 (Socket 939) processors

The Errata 123 Option BIOS feature is a workaround for the bug. It allows you to disable the cache bypass feature and prevent the bug from manifesting.

When enabled, the processor will not bypass the L2 cache to prefetch data from the system memory.

When disabled, the processor will continue to bypass the L2 cache for certain cache line fill requests. This improves its performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or corrupting data. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.

 

Details of Errata 123 Option

As processors get more and more complex, every new processor design inevitably comes with a plethora of bugs. Those that are identified are given errata numbers.

Errata 123 refers to the 123rd bug identified in AMD Athlon and Opteron processors. This bug affects the cache bypass feature in those processors.

These processors have an internal data path that allows the processor to bypass the L2 cache and initiate an early DRAM read for certain cache line fill requests, even before receiving the hit/miss status from the L2 cache.

However, at low core frequencies, the DRAM data read may reach the processor core before it is ready. This causes data corruption and/or the processor to hang.

This bug is present in AMD processor revisions JH-E1, BH-E4 and JH-E6. These revisions affect the following processor families :

  • Dual core AMD Opteron (Socket 940 and Socket 939) processors
  • AMD Athlon 64 X2 (Socket 939) processors

The processor families that are not affected are :

  • Single core AMD Opteron (Socket 940) processors
  • AMD Athlon 64 (Socket 754, 939) processors
  • AMD Athlon 64 FX (Socket 940, 939) processors
  • Mobile AMD Athlon 64 (Socket 754) processors
  • AMD Sempron (Socket 754, 939) processors
  • Mobile AMD Sempron (Socket 754) processors
  • Mobile AMD Athlon XP-M (Socket 754) processors
  • AMD Turion processors
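If you are not sure which revision your processor is, the raw CPUID signature can be read with a short C sketch like the one below; AMD’s revision guides map the resulting family/model/stepping values to revisions like JH-E1 or BH-E4.

```c
/* Print the raw family/model/stepping from CPUID leaf 1 (GCC/Clang, x86). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned int stepping = eax & 0xF;
    unsigned int model    = ((eax >> 4) & 0xF) | ((eax >> 12) & 0xF0);
    unsigned int family   = ((eax >> 8) & 0xF) + ((eax >> 20) & 0xFF);

    printf("family 0x%X, model 0x%X, stepping %u\n",
           family, model, stepping);
    return 0;
}
```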

The Errata 123 Option BIOS feature is a workaround for the bug. It allows you to disable the cache bypass feature and prevent the bug from manifesting.

When enabled, the processor will not bypass the L2 cache to prefetch data from the system memory.

When disabled, the processor will continue to bypass the L2 cache for certain cache line fill requests. This improves its performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or corrupting data. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.


Delayed Transaction – The BIOS Optimization Guide

Delayed Transaction

Common Options : Enabled, Disabled

 

Quick Review of Delayed Transaction

To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed. According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles for the initial read and 8 PCI clock cycles for each subsequent read.

If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Delayed Transaction feature.

With PCI Delayed Transaction enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the master device will rearbitrate for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, the target device just sends the completion status to the master device to complete the transaction.

One advantage of using PCI Delayed Transaction is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Delayed Transaction also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. Without PCI Delayed Transaction, all write-posted data will have to be flushed before another PCI transaction can occur.

It is highly recommended that you enable Delayed Transaction for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

Please note that while many manuals and even earlier versions of the BIOS Optimization Guide have stated that this is an ISA bus-specific BIOS feature which enables a 32-bit write-posted buffer for faster PCI-to-ISA writes, they are incorrect! This BIOS feature is not ISA bus-specific and it does not control any write-posted buffers. It merely allows write-posting to continue while a non-postable PCI transaction is underway.

 

Details of Delayed Transaction

On the PCI bus, there are many devices that may not meet the PCI target latency rule. Such devices include I/O controllers and bridges (e.g. PCI-to-PCI and PCI-to-ISA bridges). To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed.

According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles (32 clock cycles for a host bus bridge) for the initial read and 8 PCI clock cycles for each subsequent read. If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Delayed Transaction feature.
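To put the rule in perspective, this small C sketch converts those clock-cycle limits into nanoseconds on a standard 33 MHz PCI bus.

```c
/* Convert the PCI 2.1 target latency limits into wall-clock time. */
#include <stdio.h>

int main(void)
{
    const double clock_mhz  = 33.33;               /* standard PCI clock */
    const double ns_per_clk = 1000.0 / clock_mhz;  /* ~30 ns per cycle   */

    printf("initial read   : 16 clocks = %.0f ns\n", 16 * ns_per_clk);
    printf("subsequent read:  8 clocks = %.0f ns\n",  8 * ns_per_clk);
    return 0;
}
```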

When a master device reads from a target device on the PCI bus but fails to meet the latency requirements, the transaction will be terminated with a Retry command. The master device will then have to rearbitrate for bus access. But if PCI Delayed Transaction is enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the target device latches on the data and terminates the transaction if it cannot be completed within the target latency period. The master device then rearbitrates for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, instead of returning data (in the case of a read transaction), the target device sends the completion status to the master device to complete the transaction.


One advantage of using PCI Delayed Transaction is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Delayed Transaction also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. The write-posted data will be written to memory while the target device is working on the non-postable transaction and flushed before the transaction is completed on the master device. Without PCI Delayed Transaction, all write-posted data will have to be flushed before another PCI transaction can occur.

As you can see, the PCI Delayed Transaction feature allows for more efficient use of the PCI bus as well as better PCI performance by allowing write-posting to occur concurrently with non-postable transactions. In this BIOS, the Delayed Transaction option allows you to enable or disable the PCI Delayed Transaction feature.

It is highly recommended that you enable Delayed Transaction for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

Please note that while many manuals and even earlier versions of the BIOS Optimization Guide have stated that this is an ISA bus-specific BIOS feature which enables a 32-bit write-posted buffer for faster PCI-to-ISA writes, they are incorrect! This BIOS feature is not ISA bus-specific and it does not control any write-posted buffers. It merely allows write-posting to continue while a non-postable PCI transaction is underway.


LD-Off Dram RD/WR Cycles – The BIOS Optimization Guide

LD-Off Dram RD/WR Cycles

Common Options : Delay 1T, Normal

 

Quick Review

The LD-Off Dram RD/WR Cycles BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.

 

Details

At the beginning of a memory transaction (read or write), the memory controller normally sends the address and command signals simultaneously to the memory bank. This allows for the quickest activation of the memory bank.

However, this may cause problems with certain memory modules. In these memory modules, the target row may not be activated quickly enough to allow the memory controller to read from or write to it. This is where the LD-Off Dram RD/WR Cycles BIOS feature comes in.


This BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle. This ensures there is enough time for the memory bank to be activated before the read or write command arrives.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.


PAVP Mode – The BIOS Optimization Guide

PAVP Mode

Common Options : Paranoid, Lite, Disabled

 

PAVP Mode Quick Review

PAVP (Protected Audio Video Path) controls the hardware-accelerated decoding of encrypted video streams by Intel integrated graphics processors. Intel offers two PAVP modes – Paranoid and Lite.

When set to Paranoid, the video stream is encrypted and its decoding is accelerated by the integrated graphics processor. In addition, 96 MB of system memory will be reserved exclusively for use by PAVP.

When set to Lite, the video stream is encrypted and its decoding is accelerated by the integrated graphics processor. No system memory will be reserved for use by PAVP.

When set to Disabled, the hardware-accelerated decoding of video content protected by HDCP is disabled.

If you wish to play HDCP-protected content, you should select the Lite option. It allows hardware-accelerated decoding of the video stream. The graphics core will grab system memory for use by PAVP only when it is needed and release it after use.

The allocation of PAVP stolen memory may be necessary to allow some applications to stream lossless audio formats like Dolby TrueHD or DTS-HD MA. In such cases, you will need to set the PAVP Mode BIOS option to Paranoid. However, this takes up 96 MB of system memory and also disables the Windows Aero interface.

You should only use the Disabled setting if you intend to use an external graphics card to accelerate the decoding of the video stream, or if you wish to test the ability of the CPU to handle decryption of the video stream.

 

PAVP Mode Details

PAVP (Protected Audio Video Path) is a feature available on some Intel chipsets with integrated graphics. It ensures a secure content protection path for high-definition video sources like Blu-ray discs. It also controls the hardware-accelerated decoding of encrypted video streams by the integrated graphics processor.

Intel offers two PAVP modes – Paranoid and Lite. Here is a table that summarizes the difference between the two modes :

Feature                                         | PAVP Paranoid | PAVP Lite
------------------------------------------------|---------------|----------
Compressed video buffer is encrypted            | Yes           | Yes
Hardware acceleration of 128-bit AES decryption | Yes           | Yes
Protected memory (96 MB reserved during boot)   | Yes           | No

In other words, the two modes only differ in whether 96 MB of system memory should be reserved for use by PAVP.

When set to Paranoid, the video stream is encrypted and its decoding is accelerated by the integrated graphics processor. In addition, 96 MB of system memory will be reserved exclusively for use by PAVP. This reserved memory (also known as the PAVP Stolen Memory) will not be visible to the operating system or applications.

When set to Lite, the video stream is encrypted and its decoding is accelerated by the integrated graphics processor. No system memory will be reserved for use by PAVP.

When set to Disabled, the hardware-accelerated decoding of video content protected by HDCP is disabled.

If you wish to play HDCP-protected content, you should select the Lite option. It allows hardware-accelerated decoding of the video stream. The graphics core will grab system memory for use by PAVP only when it is needed and release it after use.

The allocation of PAVP stolen memory may be necessary to allow some applications to stream lossless audio formats like Dolby TrueHD or DTS-HD MA. In such cases, you will need to set the PAVP Mode BIOS option to Paranoid. However, this takes up 96 MB of system memory and also disables the Windows Aero interface.

You should only use the Disabled setting if you intend to use an external graphics card to accelerate the decoding of the video stream, or if you wish to test the ability of the CPU to handle decryption of the video stream.


AGP 2X Mode – The BIOS Optimization Guide

AGP 2X Mode

Common Options : Enabled, Disabled

 

Quick Review

The AGP 2X Mode BIOS feature is a toggle for the motherboard’s AGP 2X support.

When enabled, it allows the AGP bus to make use of the AGP 2X transfer protocol to boost the AGP bus bandwidth. If it’s disabled, then the AGP bus will only use the standard AGP 1X transfer protocol.

The AGP 2X protocol must be supported by both the motherboard and graphics card for this feature to work. Of course, this feature will only appear in your BIOS if your motherboard supports the AGP 2X transfer protocol!

All you need to do is make sure your graphics card supports AGP 2X transfers. If it does, enable AGP 2X Mode to take advantage of the faster transfer mode.

Disable it only if you are facing stability issues or if you intend to overclock the AGP bus beyond 75 MHz with sidebanding support enabled.

 

Details

The AGP 2X Mode BIOS feature is found on AGP 2X-capable motherboards. When enabled, it allows the AGP bus to make use of the AGP 2X transfer protocol to boost the AGP bus bandwidth. If it’s disabled, then the AGP bus will only use the standard AGP 1X transfer protocol.

The baseline AGP 1X protocol only makes use of the rising edge of the AGP signal for data transfer. This translates into a bandwidth of 264 MB/s. But enabling AGP 2X Mode doubles that bandwidth by transferring data on both the rising and falling edges of the signal. Through this method, the effective bandwidth of the AGP bus is doubled even though the AGP clock speed remains at the standard 66 MHz. This is the same method by which UltraDMA/33 derives its performance boost.
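The bandwidth figures are easy to derive, as this small C sketch shows.

```c
/* AGP bandwidth arithmetic: a 32-bit bus at 66 MHz transferring data
   on one clock edge (1X) versus both edges (2X). */
#include <stdio.h>

int main(void)
{
    const double clock_mhz = 66.0;   /* AGP base clock  */
    const double bus_bytes = 4.0;    /* 32-bit data bus */

    double agp1x = clock_mhz * bus_bytes;   /* one edge per clock */
    double agp2x = agp1x * 2.0;             /* both edges         */

    printf("AGP 1X: %.0f MB/s\n", agp1x);
    printf("AGP 2X: %.0f MB/s\n", agp2x);
    return 0;
}
```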


The AGP 2X protocol must be supported by both the motherboard and graphics card for this feature to work. Of course, this feature will only appear in your BIOS if your motherboard supports the AGP 2X transfer protocol!

All you need to do is make sure your graphics card supports AGP 2X transfers. If it does, enable AGP 2X Mode to take advantage of the faster transfer mode.

Disable it only if you are facing stability issues or if you intend to overclock the AGP bus beyond 75 MHz with sidebanding support enabled.

Please note that doubling the AGP bus bandwidth through the AGP 2X transfer protocol won’t double the performance of your AGP graphics card. The performance of the graphics card relies on far more than the bandwidth of the AGP bus. The performance boost is most apparent when the AGP bus is really stressed (i.e. during a texture-intensive game).


MCLK Spread Spectrum – The BIOS Optimization Guide

MCLK Spread Spectrum

Common Options : 0.25%, 0.5%, 0.75%, Disabled

 

Quick Review

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The MCLK Spread Spectrum BIOS feature controls spread spectrum clocking of the memory bus. It usually offers three levels of modulation – 0.25%, 0.5% or 0.75%. They denote the amount of modulation around the memory bus frequency. The greater the modulation, the greater the reduction of EMI. Therefore, if you need to significantly reduce EMI, a modulation of 0.75% is recommended.

Generally, frequency modulation through spread spectrum clocking should not cause any problems. However, system stability may be compromised if you are overclocking the memory bus.

Therefore, it is recommended that you disable the MCLK Spread Spectrum feature if you are overclocking the memory bus. Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the memory bus frequency a little to provide a margin of safety.

If you are not overclocking the memory bus, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature. Otherwise, disable it to remove even the slightest possibility of stability issues.

 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted.

To prevent EMI from causing problems to other electronics, the FCC enacted Part 15 of the FCC regulations in 1975. It regulates the power output of such clock generators by limiting the amount of EMI they can generate. As a result, engineers use spread spectrum clocking to ensure that their motherboards comply with the FCC regulation on EMI levels.

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. Instead of generating a typical waveform, the clock signal continuously varies around the target frequency within a tight range. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The MCLK Spread Spectrum BIOS feature controls spread spectrum clocking of the memory bus. It usually offers three levels of modulation – 0.25%, 0.5% or 0.75%. They denote the amount of modulation around the memory bus frequency. The greater the modulation, the greater the reduction of EMI. Therefore, if you need to significantly reduce EMI, a modulation of 0.75% is recommended.
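As a quick illustration, this C sketch computes the frequency swing of each modulation setting. It assumes a 200 MHz memory clock and centre-spread modulation; both are assumptions, and many implementations use down-spread instead.

```c
/* Frequency swing of each spread spectrum setting around an assumed
   200 MHz memory clock, assuming centre-spread modulation. */
#include <stdio.h>

int main(void)
{
    const double mclk   = 200.0;                 /* MHz, illustrative */
    const double mods[] = { 0.25, 0.50, 0.75 };  /* modulation in %   */

    for (int i = 0; i < 3; i++) {
        double half = mclk * mods[i] / 100.0 / 2.0;
        printf("%.2f%% -> %.2f to %.2f MHz\n",
               mods[i], mclk - half, mclk + half);
    }
    return 0;
}
```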


Generally, frequency modulation through spread spectrum clocking should not cause any problems. However, system stability may be compromised if you are overclocking the memory bus. Of course, this depends on the amount of modulation, the extent of overclocking and other factors like temperature, voltage levels, etc. As such, the problem may not readily manifest itself immediately.

Therefore, it is recommended that you disable the MCLK Spread Spectrum feature if you are overclocking the memory bus. You will be able to achieve better overclockability, at the expense of higher EMI. Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the memory bus frequency a little to provide a margin of safety.

If you are not overclocking the memory bus, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature. Otherwise, disable it to remove even the slightest possibility of stability issues.


Maximum TLP Payload – The BIOS Optimization Guide

Maximum TLP Payload

Common Options : 128, 256, 512, 1024, 2048, 4096

 

Quick Review of Maximum TLP Payload

The Maximum TLP Payload BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size that the motherboard’s PCI Express controller should use. The TLP payload size determines the amount of data transmitted within each data packet.

When set to 128, 256, 512, 1024 or 2048, the motherboard’s PCI Express controller will only support a maximum data payload of that many bytes within each TLP. When set to 4096, it supports the maximum data payload of 4096 bytes within each TLP – the largest payload size currently supported by the PCI Express protocol.

It is recommended that you set the Maximum TLP Payload BIOS feature to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the PCI Express controller will communicate with it using a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set the Maximum TLP Payload BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.

 

Details of Maximum TLP Payload

The PCI Express protocol transmits data as well as control messages on the same links. This differentiates the PCI Express interconnect from the PCI bus and the AGP port, which make use of separate sideband signalling for control messages.

Control messages are delivered as Data Link Layer Packets or DLLPs, while data packets are sent out as Transaction Layer Packets or TLPs. However, TLPs are not pure data packets. They have a header which carries information like packet size, message type, traffic class, etc.

In addition, the actual data (known as the “payload”) is encoded with the 8B/10B encoding scheme. This replaces 8 uncoded bits with 10 encoded bits. This itself results in a 20% “loss” of bandwidth. The TLP overhead is further exacerbated by a 32-bit LCRC error-checking code.

Therefore, the size of the data payload is an important factor in determining the efficiency of the PCI Express interconnect. As the data payload gets smaller, the TLP becomes less efficient, because the overhead will then take up a more significant amount of bandwidth. To achieve maximum efficiency, the TLP should be as large as possible.

The PCI Express specifications defined the following TLP payload sizes :

  • 128 bytes
  • 256 bytes
  • 512 bytes
  • 1024 bytes
  • 2048 bytes
  • 4096 bytes

However, it is up to the manufacturer to set the maximum TLP payload size supported by the PCI Express device. It determines the maximum TLP payload size the device can send or receive. When two PCI Express devices communicate with each other, the largest TLP payload size supported by both devices will be used.
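In other words, the negotiated limit is simply the smaller of the two advertised maxima. The trivial C sketch below illustrates this with hypothetical values.

```c
/* The negotiated payload limit is the largest size both ends support,
   i.e. the smaller of the two advertised maxima. Values are hypothetical. */
#include <stdio.h>

static int negotiated_payload(int controller_max, int device_max)
{
    return controller_max < device_max ? controller_max : device_max;
}

int main(void)
{
    /* e.g. controller set to 4096 bytes, add-in card limited to 512 */
    printf("negotiated: %d bytes\n", negotiated_payload(4096, 512));
    return 0;
}
```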


The Maximum TLP Payload BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size that the motherboard’s PCI Express controller should use. The TLP payload size, as mentioned earlier, determines the amount of data transmitted within each data packet.

When set to 128, 256, 512, 1024 or 2048, the motherboard’s PCI Express controller will only support a maximum data payload of that many bytes within each TLP. When set to 4096, it supports the maximum data payload of 4096 bytes within each TLP – the largest payload size currently supported by the PCI Express protocol.

It is recommended that you set the Maximum TLP Payload BIOS feature to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the PCI Express controller will communicate with it using a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set the Maximum TLP Payload BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.


LAN Boot ROM – The BIOS Optimization Guide

LAN Boot ROM

Common Options : Enabled, Disabled

 

Quick Review of LAN Boot ROM

Newer motherboards have Gigabit LAN controllers that boast throughputs of up to 1 Gbps (1000 Mbps). However, these newer Gigabit LAN controllers are only supported by newer operating systems. If you use older operating systems like MS-DOS, or operating systems that do not have driver support, the Gigabit LAN controller will only operate in the 10/100 Mbps mode.

This is where the LAN Boot ROM BIOS option comes in.

When enabled, the motherboard will load the Gigabit LAN controller’s boot ROM when it boots up. This allows the LAN controller to operate at its full 1000 Mbps speed with operating systems that do not have proper driver support.

When disabled, the Gigabit LAN controller’s boot ROM will not be loaded when the motherboard boots up. The LAN controller will only operate at its full 1000 Mbps speed with proper driver support. Otherwise, it reverts to the 10/100 Mbps mode.

If you have multiple operating systems installed (and at least one does not have driver support for the LAN controller), enable this BIOS option to ensure the Gigabit LAN controller operates in its full 1000 Mbps mode in all operating systems.

If you are using only operating systems that have driver support for the Gigabit LAN controller, then you should disable the LAN Boot ROM BIOS option. This reduces the boot time (slightly) and frees up memory that would have been taken up by the boot ROM.

 

Details of LAN Boot ROM

Many motherboards have integrated LAN controllers with one or two LAN ports. Older motherboards come with 10/100 LAN controllers, but newer motherboards have Gigabit LAN controllers that boast throughputs of up to 1 Gbps (1000 Mbps). However, the newer Gigabit LAN controllers are only supported by newer operating systems, using either a native (built-in) driver or a driver provided by the Gigabit LAN controller’s manufacturer.

Older operating systems like MS-DOS or operating systems that do not have driver support will not be able to utilize the Gigabit LAN controller’s full capabilities. If you use such operating systems, the Gigabit LAN controller will only operate in the 10/100 Mbps mode. This is where the LAN Boot ROM BIOS option comes in.

When enabled, the motherboard will load the Gigabit LAN controller’s boot ROM when it boots up. This allows the LAN controller to operate at its full 1000 Mbps speed with operating systems that do not have proper driver support.

When disabled, the Gigabit LAN controller’s boot ROM will not be loaded when the motherboard boots up. The LAN controller will only operate at its full 1000 Mbps speed with proper driver support. Otherwise, it reverts to the 10/100 Mbps mode.

Note that even if the LAN controller’s boot ROM is loaded, the driver will take over when you boot into an operating system with proper driver support. So, if you have multiple operating systems installed (and at least one does not have driver support for the LAN controller), enable this BIOS option to ensure the Gigabit LAN controller operates in its full 1000 Mbps mode in all operating systems.

If you are using only operating systems that have driver support for the Gigabit LAN controller, then you should disable the LAN Boot ROM BIOS option. This reduces the boot time (slightly) and frees up memory that would have been taken up by the boot ROM. The LAN boot ROM only uses 16 KB to 256 KB of memory, but why waste it if you won’t use it?
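For the curious, a legacy boot ROM like this is mapped into the option ROM region (0xC0000–0xEFFFF) and announced with a 0x55 0xAA signature. Here is a rough sketch of how you could list the option ROMs and their sizes from Linux; it assumes root privileges and that access to /dev/mem is not restricted by the kernel :

```c
/* A rough sketch that scans the legacy option ROM region for ROM
   headers, including any LAN boot ROM. Option ROMs begin on 2 KB
   boundaries with a 0x55 0xAA signature; the third byte gives the
   ROM size in 512-byte units. Requires root and /dev/mem access. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    for (long addr = 0xC0000; addr < 0xF0000; addr += 2048) {
        unsigned char hdr[3];
        if (pread(fd, hdr, 3, addr) != 3)
            break;
        if (hdr[0] == 0x55 && hdr[1] == 0xAA)
            printf("Option ROM at 0x%05lX, size %u KB\n",
                   addr, (unsigned) hdr[2] * 512 / 1024);
    }
    close(fd);
    return 0;
}
```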


 


Fixed Disk Boot Sector – The BIOS Optimization Guide

Fixed Disk Boot Sector

Common Options : Normal, Write Protect

 

Quick Review

The Fixed Disk Boot Sector BIOS feature provides rudimentary anti-virus protection by write-protecting the boot sector.

If this feature is enabled, the BIOS will block any attempt to write to the boot sector and flash a warning message. This protects the system from boot sector viruses. Please note that it offers no protection against other types of viruses.

If this feature is disabled, the BIOS will not block any writes to the boot sector.

This feature can cause problems with software that needs to write to the boot sector. One good example is the installation routine of every version of Microsoft Windows from Windows 95 onwards. When enabled, this feature causes the installation routine to fail.

Many hard drive diagnostic utilities that access the boot sector can also trigger the system halt and warning message. Therefore, you should disable this feature before running such utilities, or when you intend to install a new operating system.

 

Details

The Fixed Disk Boot Sector BIOS feature provides rudimentary anti-virus protection by write-protecting the boot sector.

If this feature is enabled, the BIOS will block any attempt to write to the boot sector and flash a warning message. This protects the system from boot sector viruses. Please note that it offers no protection against other types of viruses.

If this feature is disabled, the BIOS will not block any writes to the boot sector.

This feature can cause problems with software that needs to write to the boot sector. One good example is the installation routine of every version of Microsoft Windows from Windows 95 onwards. When enabled, this feature causes the installation routine to fail.


Many hard drive diagnostic utilities that access the boot sector can also trigger the system halt and warning message. Therefore, you should disable this feature before running such utilities, or when you intend to install a new operating system.

Please note that this BIOS feature is useless for storage drives that run on external controllers with their own BIOS. Boot sector viruses can bypass such BIOS-based anti-virus protection by writing directly to those drives through the controller’s own BIOS. Such controllers include additional IDE, SATA or SCSI controllers that are either built into the motherboard or part of add-on PCI Express or PCI cards.
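For illustration, here is a simplified model of the check the BIOS performs when this feature is enabled. A real BIOS typically hooks the INT 13h disk service in firmware; this sketch only reproduces its decision logic :

```c
/* A simplified model of BIOS boot sector protection. A real BIOS
   typically hooks INT 13h; this sketch only mimics the decision. */
#include <stdio.h>
#include <stdbool.h>

struct int13_request {
    unsigned char function;    /* 0x02 = read sectors, 0x03 = write sectors */
    unsigned char drive;       /* 0x80 = first fixed disk */
    unsigned short cylinder;
    unsigned char head;
    unsigned char sector;      /* 1-based; cyl 0, head 0, sector 1 = boot sector */
};

static bool boot_sector_protected = true;

bool int13_filter(const struct int13_request *req)
{
    if (boot_sector_protected &&
        req->function == 0x03 && req->drive >= 0x80 &&
        req->cylinder == 0 && req->head == 0 && req->sector == 1) {
        printf("WARNING! Attempted write to the boot sector was blocked!\n");
        return false;          /* reject the write and flash the warning */
    }
    return true;               /* pass all other requests through */
}

int main(void)
{
    struct int13_request write_attempt = { 0x03, 0x80, 0, 0, 1 };
    if (!int13_filter(&write_attempt))
        printf("Boot sector unchanged.\n");
    return 0;
}
```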

 


ATAPI 80-Pin Cable Detection – The BIOS Optimization Guide

ATAPI 80-Pin Cable Detection

Common Options : Host & Device, Host, Device

 

Quick Review

The ATAPI 80-Pin Cable Detection BIOS feature was incorrectly named, because it actually refers to the 40-pin, 80-conductor IDE cable. Despite the misleading name, the IDE cable does not have 80 pins. The 80-conductor cable merely adds 40 ground wires, each nestled between two of the original 40 wires.

The ATAPI 80-Pin Cable Detection BIOS feature controls whether both IDE controller and IDE device should be allowed to detect the type of IDE cable used.

When set to Host & Device, both the IDE controller and the IDE device will be able to detect the type of IDE cable used.

When set to Host, only the IDE controller will be able to detect the type of IDE cable used.

When set to Device, only the IDE device will be able to detect the type of IDE cable used.

The higher Ultra DMA transfer modes will only be allowed if the 80-conductor cable is used and detected by the system. Otherwise, the system defaults to slower transfer modes, even if you set the drives to use the faster transfer modes.

It is recommended that you leave this BIOS feature at the default setting of Host & Device. This ensures that the system will never incorrectly detect a 40-conductor cable as an 80-conductor cable, preventing data corruption.

 

Details

The ATAPI 80-Pin Cable Detection BIOS feature was incorrectly named, because it actually refers to the 40-pin, 80-conductor IDE cable. Despite the misleading name, the IDE cable does not have 80 pins. It actually uses the same 40-pin connector as the original 40-conductor IDE cable. In fact, it is electrically and logically similar to the 40-conductor cable.

The 80-conductor cable merely adds 40 ground wires, each nestled between two of the original 40 wires. These ground wires reduce cross-talk between the signal wires and improve signal integrity. They allow the cable to reliably support transfer rates of 66 MB/s and 100 MB/s. Hence, these 80-conductor cables are essential if you want to use those higher transfer rates.

The 40-pin, 80-conductor cable was first introduced with the ATA/ATAPI-4 standard but was not mandatory until ATA/ATAPI-5 was introduced. You must use the 80-conductor cable if you intend to use the faster 66 MB/s and 100 MB/s Ultra DMA modes. Using a 40-conductor cable will force the system to revert to slower Ultra DMA modes.

Both the IDE controller and IDE devices (e.g. hard disk drives, DVD writers) can detect 80-conductor cables by checking whether Pin 34 of the interface is grounded. 80-conductor cables have this pin grounded, while 40-conductor cables do not.
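The device’s side of this detection is also visible to software. Here is a simplified version of the cable check used by the Linux kernel’s libata driver (ata_drive_40wire() in include/linux/ata.h), which reads the device’s detection result from IDENTIFY DEVICE word 93 :

```c
/* A simplified version of the Linux libata cable check : the device
   reports its cable detection result in IDENTIFY DEVICE word 93.
   Bits 15:13 read 011b when the device saw pin 34 (CBLID-) pulled
   to ground by an 80-conductor cable. */
#include <stdbool.h>
#include <stdint.h>

bool is_80_conductor_cable(const uint16_t identify_data[256])
{
    return (identify_data[93] & 0xE000) == 0x6000;
}
```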

The ATAPI 80-Pin Cable Detection BIOS feature controls whether both IDE controller and IDE device should be allowed to detect the type of IDE cable used.

When set to Host & Device, both the IDE controller and the IDE device will be able to detect the type of IDE cable used.

When set to Host, only the IDE controller will be able to detect the type of IDE cable used.

When set to Device, only the IDE device will be able to detect the type of IDE cable used.

The higher Ultra DMA transfer modes will only be allowed if the 80-conductor cable is used and detected by the system. Otherwise, the system defaults to slower transfer modes, even if you set the drives to use the faster transfer modes.

It is recommended that you leave this BIOS feature at the default setting of Host & Device. This ensures that the system will never incorrectly detect a 40-conductor cable as an 80-conductor cable, preventing data corruption.


You should only change this BIOS feature to Host or Device if the IDE controller or the IDE device cannot correctly detect the 80-conductor cable. In other words, this is a workaround for situations where the IDE controller or IDE device cannot correctly detect 80-conductor cables.

You must be sure, though, that you have 80-conductor cables installed before changing this BIOS feature to Host or Device. Both 40-conductor and 80-conductor cables are similar in length and width. They even use the same 40-pin connector.

However, 40-conductor cables are made up of 40 thicker wires, while 80-conductor cables are made up of 80 thinner wires. 80-conductor cables also have colour-coded blue, gray and black connectors.

 


Auto Detect DIMM/PCI Clk – The BIOS Optimization Guide

Auto Detect DIMM/PCI Clk

Common Options : Enabled, Disabled

 

Quick Review

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI (Electromagnetic Interference) and power consumption by turning off clock signals to unoccupied or inactive PCI and memory slots.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

It is recommended that you enable this feature to save power and reduce EMI.


 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted. To reduce this problem, the motherboard can either modulate the pulses (see Spread Spectrum) or turn off unused AGP, PCI or memory clock signals.

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI and power consumption by turning off clock signals to unoccupied or inactive PCI and memory slots. It is similar to the Smart Clock option of the Spread Spectrum BIOS feature.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

This method allows you to reduce the motherboard’s EMI levels without compromising system stability. It also allows the motherboard to reduce power consumption because the clock signals will only be generated for PCI and memory slots that are occupied and active.

The choice of whether to enable or disable this feature is really up to your personal preference. But since this feature reduces EMI and power consumption without compromising system stability, it is recommended that you enable it.

 


Init Display First – The BIOS Optimization Guide

Init Display First

Common Options : AGP or PCIe, PCI

 

Quick Review

The Init Display First BIOS feature allows you to select whether to boot the system using the PCIe / AGP graphics card or the PCI graphics card. This is important if you have both PCIe / AGP and PCI graphics cards.

If you are only using a single graphics card, the BIOS will ignore this BIOS setting and boot the computer using that graphics card. However, there may be a slight reduction in the time taken to detect and initialize the card if you select the proper setting. For example, if you only use a PCIe / AGP graphics card, then setting Init Display First to PCIe or AGP may speed up your system’s booting-up process.

If you are only using a single graphics card, it is recommended that you set the Init Display First feature to the proper setting for your system :

  • PCIe for a single PCIe card,
  • AGP for a single AGP card, and
  • PCI for a single PCI card.

But if you are using multiple graphics cards, it is up to you which card you want to use as your primary display card. It is recommended that you select the fastest graphics card as the primary display card.

 

Details

Although the PCI Express and AGP buses were designed exclusively for the graphics subsystem, some users still have to use PCI graphics cards for multi-monitor support. This was more common with AGP motherboards because there can be only one AGP port, while PCI Express motherboards can have multiple PCIe slots.

If you want to use multiple monitors on AGP motherboards, you must either get an AGP graphics card with multi-monitor support, or use PCI graphics cards. PCI Express motherboards usually have multiple PCIe slots, but there may still not be enough PCIe slots, and you may need to install PCI graphics cards.

For those who upgraded from a PCI graphics card to an AGP graphics card, it is certainly enticing to use the old PCI graphics card to support a second monitor. The PCI card would do the job just fine as it merely sends display data to the second monitor. You don’t need a powerful graphics card to run the second monitor, if it’s merely for display purposes.

When a PCI Express or AGP graphics card works in tandem with a PCI graphics card, the BIOS has to determine which is the primary graphics card. The default is the PCIe or AGP graphics card, since it is usually the faster of the two.

However, there are situations in which you may want to manually select the PCI graphics card instead. For example – you have a PCIe / AGP graphics card as well as a PCI graphics card, but only one monitor. This is where the Init Display First BIOS feature comes in. It allows you to select whether to boot the system using the PCIe / AGP graphics card or the PCI graphics card.


If you are only using a single graphics card, the BIOS will ignore this BIOS setting and boot the computer using that graphics card. However, there may be a slight reduction in the time taken to detect and initialize the card if you select the proper setting. For example, if you only use a PCIe / AGP graphics card, then setting Init Display First to PCIe or AGP may speed up your system’s booting-up process.

If you are only using a single graphics card, it is recommended that you set the Init Display First feature to the proper setting for your system :

  • PCIe for a single PCIe card,
  • AGP for a single AGP card, and
  • PCI for a single PCI card.

But if you are using multiple graphics cards, it is up to you which card you want to use as your primary display card. It is recommended that you select the fastest graphics card as the primary display card.

 


ECP Mode Use DMA – The BIOS Optimization Guide

ECP Mode Use DMA

Common Options : Channel 1, Channel 3

 

Quick Review

This BIOS feature determines which DMA channel the parallel port should use when it is in ECP mode.

The ECP mode uses the DMA protocol to achieve data transfer rates of up to 2.5 Mbits/s and provides symmetric bidirectional communications. For all this, it requires the use of a DMA channel.

By default, the parallel port uses DMA Channel 3 when it is in ECP mode. This works fine in most situations.

This feature was provided just in case one of your add-on cards requires the use of DMA Channel 3. In such a case, you can use this BIOS feature to force the parallel port to use the alternate DMA Channel 1.

Please note that there is no performance advantage in choosing DMA Channel 3 over DMA Channel 1 or vice versa. As long as either Channel 3 or Channel 1 is available for your parallel port to use, the parallel port will be able to function properly in ECP mode.


 

Details

This BIOS feature is usually found under the Parallel Port Mode feature. It is slaved to the ECP (Extended Capabilities Port) option. Therefore, if you do not enable either ECP or ECP+EPP, this feature will disappear from the screen or appear grayed out.

This BIOS feature determines which DMA channel the parallel port should use when it is in ECP mode.

The ECP mode uses the DMA protocol to achieve data transfer rates of up to 2.5 Mbits/s and provides symmetric bidirectional communications. For all this, it requires the use of a DMA channel.

By default, the parallel port uses DMA Channel 3 when it is in ECP mode. This works fine in most situations.

This feature was provided just in case one of your add-on cards requires the use of DMA Channel 3. In such a case, you can use this BIOS feature to force the parallel port to use the alternate DMA Channel 1.

Please note that there is no performance advantage in choosing DMA Channel 3 over DMA Channel 1 or vice versa. As long as either Channel 3 or Channel 1 is available for your parallel port to use, the parallel port will be able to function properly in ECP mode.
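If you are not sure whether DMA Channel 1 or Channel 3 is already taken, Linux lists the registered ISA DMA channels in /proc/dma. This small sketch simply prints that list :

```c
/* Prints the ISA DMA channels already claimed on a Linux system,
   as reported by /proc/dma (e.g. "4: cascade"). Useful for spotting
   a conflict before reassigning the ECP port's DMA channel. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/dma", "r");
    if (!f) { perror("fopen /proc/dma"); return 1; }

    char line[64];
    printf("ISA DMA channels in use:\n");
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```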

 


Intel RAID Technology – The BIOS Optimization Guide

Intel RAID Technology

Common Options : Enabled, Disabled

 

Quick Review

The Intel RAID Technology BIOS feature controls the RAID function of the Intel SATA controller.

When enabled, the SATA controller enables its RAID features when the computer boots up. You can then press Ctrl-I, when prompted at the boot screen, to access the RAID setup utility.

When disabled, the SATA controller disables its RAID functions when the computer boots up.

If you would like to make use of the SATA controller’s RAID features, you should enable this BIOS feature. But please note that enabling this feature requires you to load the Intel Matrix Storage Manager during the Windows installation routine.

If you do not intend to use the RAID features, it’s recommended that you disable this BIOS feature. This allows you to use the native Windows SATA driver. You won’t need to load the Intel Matrix Storage Manager during the Windows installation routine.

Please note that changing the Intel RAID Technology BIOS feature after installing the operating system may cause a boot failure. You may be required to reinstall the operating system.

 

Details

Intel introduced their unique Matrix Storage Technology with the Intel 82801 (ICH6R) series of I/O controller hubs. Also known as Matrix RAID Technology, it combines the performance benefits of RAID 0 and the data protection of RAID 1 using just two hard disk drives.

For more information on the Intel Matrix Storage Technology, please refer to our Intel Matrix RAID Guide.

The Intel RAID Technology BIOS feature controls the RAID function of the Intel SATA controller.

When enabled, the SATA controller enables its RAID features when the computer boots up. You can then press Ctrl-I, when prompted at the boot screen, to access the RAID setup utility.

When disabled, the SATA controller disables its RAID functions when the computer boots up.

If you would like to make use of the SATA controller’s RAID features, you should enable this BIOS feature. But please note that enabling this feature requires you to load the Intel Matrix Storage Manager during the Windows installation routine.


When you load the Windows installation routine, the following message will appear on screen :

Press F6 if you need to install a third party SCSI or RAID driver.

At this point, press the F6 key and insert the disk containing the Intel Matrix Storage Manager driver. Once the driver is loaded, the Windows installation will proceed as usual.

If you do not intend to use the RAID features, it’s recommended that you disable this BIOS feature. This allows you to use the native Windows SATA driver. You won’t need to load the Intel Matrix Storage Manager during the Windows installation routine.

Please note that changing the Intel RAID Technology BIOS feature after installing the operating system may cause a boot failure. You may be required to reinstall the operating system.

 


Floppy 3 Mode Support – The BIOS Optimization Guide

Floppy 3 Mode Support

Common Options : Disabled, Drive A, Drive B, Both

 

Quick Review

For reasons best known to the Japanese, their computers come with special 3 mode 3.5″ floppy drives. While physically similar to the standard 3.5″ floppy drives used by the rest of the world, these 3 mode floppy drives differ in the disk formats they support.

Unlike normal floppy drives, 3 mode floppy drives support three different floppy disk formats – 1.44 MB, 1.2 MB and 720 KB. Hence, their name. They allow the system to support the Japanese 1.2 MB floppy disk format, as well as the standard 1.44 MB and 720 KB (obsolete) disk formats.

If you own a 3 mode floppy drive and need to use the Japanese 1.2 MB disk format, you must enable this feature by selecting either Drive A, Drive B or Both (if you have two 3 mode floppy drives). Otherwise, your 3 mode floppy drive won’t be able to read the special 1.2 MB format properly.

However, if you only have a standard floppy drive, you must disable the Floppy 3 Mode Support feature or your floppy drive may not function properly.

 

Details

For reasons best known to the Japanese, their computers come with special 3 mode 3.5″ floppy drives. While physically similar to the standard 3.5″ floppy drives used by the rest of the world, these 3 mode floppy drives differ in the disk formats they support.


Unlike normal floppy drives, 3 mode floppy drives support three different floppy disk formats – 1.44 MB, 1.2 MB and 720 KB. Hence, their name. They allow the system to support the Japanese 1.2 MB floppy disk format, as well as the standard 1.44 MB and 720 KB (obsolete) disk formats.

If you own a 3 mode floppy drive and need to use the Japanese 1.2 MB disk format, you must enable this feature by selecting either Drive A, Drive B or Both (if you have two 3 mode floppy drives). Otherwise, your 3 mode floppy drive won’t be able to read the special 1.2 MB format properly.

However, if you only have a standard floppy drive, you must disable the Floppy 3 Mode Support feature or your floppy drive may not function properly.

 


CPU Hyper-Threading – The BIOS Optimization Guide

CPU Hyper-Threading

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature controls the functionality of the Intel Hyper-Threading Technology.

The Intel Hyper-Threading Technology allows a single processor to execute two or more separate threads concurrently. When it is enabled, multi-threaded software applications can execute their threads in parallel, thereby improving their performance.

The Intel Hyper-Threading Technology is only supported by certain Intel processors, from the Intel Pentium 4 onwards. Please note that for it to work, you should have the following :

  • an Intel processor that supports Hyper-Threading
  • a motherboard with a chipset and BIOS that support Hyper-Threading
  • an operating system which supports Hyper-Threading (Microsoft Windows XP or Linux 2.4.x, or better)

Since it behaves like two separate processors with their own APICs, you should also enable APIC Function in the BIOS, which is required for multi-processing.

It is highly recommended that you enable CPU Hyper-Threading for improved processor performance.

 

Details

The Intel Hyper-Threading Technology is an extension to the IA-32 architecture which allows a single processor to execute two or more separate threads concurrently. When it is enabled, multi-threaded software applications can execute their threads in parallel, thereby improving the processor’s performance.

The current implementation involves two logical processors sharing the processor’s execution engine and its bus interface. Each logical processor, though, will come with its own APIC. The other features of the processor are either shared or duplicated in each logical processor.

Here is a list of the features duplicated in each logical processor :-

  • General registers (EAX, EBX, ECX, EDX, ESI, EDI, ESP and EBP)
  • Segment registers (CS, DS, SS, ES, FS and GS)
  • EFLAGS and EIP registers
  • x87 FPU registers (ST0 to ST7, status word, control word, tag word, data operand pointer and instruction pointer)
  • MMX registers (MM0 to MM7)
  • XMM registers (XMM0 to XMM7)
  • MXCSR register
  • Control registers (CR0, CR2, CR3, CR4)
  • System table pointer registers (GDTR, LDTR, IDTR, task register)
  • Debug registers (DR0, DR1, DR2, DR3, DR6, DR7)
  • Debug control MSR (IA32_DEBUGCTL)
  • Machine check global status MSR (IA32_MCG_STATUS)
  • Machine check capability MSR (IA32_MCG_CAP)
  • Thermal clock modulation and ACPI power management control MSRs
  • Time stamp counter MSRs
  • Most of the other MSR registers including Page Attribute Table (PAT)
  • Local APIC registers

Here are the features shared by the two logical processors :-

  • IA32_MISC_ENABLE MSR
  • Memory type range registers (MTRRs)

And the following are features that can be duplicated or shared according to requirements :-

  • Machine check architecture (MCA) MSRs
  • Performance monitoring control and counter MSRs

The Intel Hyper-Threading Technology is only supported by certain Intel processors, from the Intel Pentium 4 onwards. Please note that for it to work, you should have the following :

  • an Intel processor that supports Hyper-Threading
  • a motherboard with a chipset and BIOS that support Hyper-Threading
  • an operating system which supports Hyper-Threading (Microsoft Windows XP or Linux 2.4.x, or better)

Since it behaves like two separate processors with their own APICs, you should also enable APIC Function in the BIOS, which is required for multi-processing.

It is highly recommended that you enable CPU Hyper-Threading for improved processor performance.
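If you would like to confirm that Hyper-Threading is actually being advertised to software, you can query CPUID. Here is a minimal sketch for GCC or Clang on an x86 system; CPUID leaf 1 sets EDX bit 28 (the HTT flag) and reports the number of logical processors per package in EBX bits 23:16 :

```c
/* A minimal Hyper-Threading check via CPUID (GCC/Clang on x86).
   Leaf 1 : EDX bit 28 is the HTT flag, EBX bits 23:16 hold the
   logical processor count per physical package. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }

    if (edx & (1u << 28))
        printf("HTT flag set : %u logical processors per package\n",
               (ebx >> 16) & 0xFF);
    else
        printf("Hyper-Threading not advertised\n");
    return 0;
}
```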

 


PCI Prefetch – The BIOS Optimization Guide

PCI Prefetch

Common Options : Enabled, Disabled

 

Quick Review

The PCI Prefetch feature controls the PCI controller’s prefetch capability.

When enabled, the PCI controller will prefetch data whenever the PCI device reads from the system memory. This speeds up PCI reads as it allows contiguous memory reads by the PCI device to proceed with minimal delay.

Therefore, it is recommended that you enable this feature for better PCI read performance.

 

Details

The PCI Prefetch feature controls the PCI controller’s prefetch capability.

When enabled, the system controller will prefetch eight quadwords (one cache line) of data whenever a PCI device reads from the system memory.

Therefore, it is recommended that you enable this feature for better PCI read performance. Please note that PCI writes to the system memory do not benefit from this feature.


Here’s how it works.

Whenever the PCI controller reads PCI-requested data from the system memory, it also reads the subsequent cache line of data. This is done on the assumption that the PCI device will request the subsequent cache line as well.

When the PCI device actually initiates a read command for that cache line, the system controller can immediately send it to the PCI device.

This speeds up PCI reads as the PCI device won’t need to wait for the system controller to read from the system memory. As such, PCI Prefetch allows contiguous memory reads by the PCI device to proceed with minimal delay.
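The benefit is easy to model. The sketch below uses assumed latency figures purely to show the shape of the improvement, and it assumes that streaming a cache line over the PCI bus takes longer than fetching the next line from memory, so the prefetch completely hides the memory latency after the first line :

```c
/* A toy model of PCI prefetch on a contiguous read. All latency
   figures are assumptions chosen only for illustration. */
#include <stdio.h>

#define LINES        16   /* cache lines read by the PCI device      */
#define MEM_LATENCY  10   /* cycles to fetch one line from memory    */
#define DELIVER_COST 16   /* cycles to stream one line over the bus  */

int main(void)
{
    /* No prefetch : every line pays the full memory latency. */
    int no_prefetch = LINES * (MEM_LATENCY + DELIVER_COST);

    /* Prefetch : line N+1 is fetched while line N is streamed, so
       only the first line pays the memory latency (DELIVER_COST is
       assumed to be longer than MEM_LATENCY). */
    int with_prefetch = MEM_LATENCY + LINES * DELIVER_COST;

    printf("Without prefetch : %d cycles\n", no_prefetch);
    printf("With prefetch    : %d cycles\n", with_prefetch);
    return 0;
}
```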

 


Read-Around-Write – The BIOS Optimization Guide

Read-Around-Write

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature allows the processor to execute read commands out of order, as if they are independent of the write commands. It does this by using a Read-Around-Write buffer.

If this BIOS feature is enabled, all processor writes to memory are first accumulated in that buffer. This allows the processor to execute read commands without waiting for the write commands to be completed.

The buffer will then combine the writes and write them to memory as burst transfers. This reduces the number of writes to memory and boosts the processor’s write performance.

If this BIOS feature is disabled, the processor writes directly to the memory controller. This reduces the processor’s read performance.

Therefore, it is highly recommended that you enable the Read-Around-Write BIOS feature for better processor read and write performance.

 

Details

This BIOS feature allows the processor to execute read commands out of order, as if they are independent of the write commands. It does this by using a Read-Around-Write buffer.

If this BIOS feature is enabled, all processor writes to memory are first accumulated in that buffer. This allows the processor to execute read commands without waiting for the write commands to be completed.

The buffer will then combine the writes and write them to memory as burst transfers. This reduces the number of writes to memory and boosts the processor’s write performance.

Incidentally, until its contents have been written to memory, the Read-Around-Write buffer also serves as a cache of the data that it is storing. These tend to be the most up-to-date data since the processor has just written them to the buffer.


Therefore, if the processor sends out a read command for data that is still in the Read-Around-Write buffer, the processor can read directly from the buffer instead. This greatly improves read performance because the processor bypasses the memory controller to access the data. The buffer is much closer logically, so reading from it will be much faster than reading from memory.
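Conceptually, the buffer behaves like a small write-combining queue that also answers matching reads. The toy model below illustrates the idea; the buffer size and the fully-associative lookup are illustrative assumptions, not a description of any particular chipset :

```c
/* A toy read-around-write buffer : writes accumulate in the buffer,
   and a read whose address matches a buffered write is served from
   the buffer instead of waiting on memory. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define SLOTS 8

struct wbuf_entry { uint32_t addr; uint32_t data; bool valid; };
static struct wbuf_entry wbuf[SLOTS];
static int next_slot;

void buffered_write(uint32_t addr, uint32_t data)
{
    /* In a real chipset, the oldest entries drain to memory as bursts. */
    wbuf[next_slot] = (struct wbuf_entry){ addr, data, true };
    next_slot = (next_slot + 1) % SLOTS;
}

bool read_around_write(uint32_t addr, uint32_t *data)
{
    for (int i = 0; i < SLOTS; i++)
        if (wbuf[i].valid && wbuf[i].addr == addr) {
            *data = wbuf[i].data;   /* hit : no memory access needed */
            return true;
        }
    return false;                   /* miss : the read goes to memory */
}

int main(void)
{
    uint32_t v;
    buffered_write(0x1000, 0xCAFE);
    if (read_around_write(0x1000, &v))
        printf("Read 0x%X straight from the write buffer\n", v);
    return 0;
}
```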

If this BIOS feature is disabled, the processor writes directly to the memory controller. All writes have to be completed before the processor can execute a read command. It also prevents the buffer from being used as a temporary cache of processor writes. This reduces the processor’s read performance.

Therefore, it is highly recommended that you enable the Read-Around-Write BIOS feature for better processor read and write performance.

 


Duplex Select – The BIOS Optimization Guide

Duplex Select

Common Options : Full-Duplex, Half-Duplex

 

Quick Review

The Duplex Select BIOS feature allows you to determine the transmission mode of the IR (Infra-Red) communications port.

Selecting Full-Duplex permits simultaneous two-way transmission, like a conversation over the phone.

Selecting Half-Duplex, on the other hand, only permits transmission in one direction at any one time, which is more like a conversation over the walkie-talkie.

Naturally, the Full-Duplex mode is the faster and more desirable choice. You should use Full-Duplex if possible.

Consult your IR peripheral’s manual to determine if it supports Full-Duplex transmission. The IR peripheral must support Full-Duplex for this option to work.


 

Details

The Duplex Select BIOS feature is usually found under the Onboard Serial Port 2 BIOS feature. It is slaved to the second serial port so if you disable that serial port, this option will disappear from the screen or appear grayed out.

The Duplex Select BIOS feature allows you to determine the transmission mode of the IR (Infra-Red) communications port.

Selecting Full-Duplex permits simultaneous two-way transmission, like a conversation over the phone.

Selecting Half-Duplex, on the other hand, only permits transmission in one direction at any one time, which is more like a conversation over the walkie-talkie.

Naturally, the Full-Duplex mode is the faster and more desirable choice. You should use Full-Duplex if possible.

Consult your IR peripheral’s manual to determine if it supports Full-Duplex transmission. The IR peripheral must support Full-Duplex for this option to work.

 


32-bit Transfer Mode – The BIOS Optimization Guide

32-bit Transfer Mode

Common Options : On, Off

 

Quick Review

This BIOS feature allows you to command the IDE controller to combine two 16-bit hard disk reads into a single 32-bit data transfer to the processor. This greatly improves the performance of the IDE controller as well as the PCI bus.

Therefore, it is highly advisable to enable 32-bit Transfer Mode. If you disable it, data transfers from the IDE controller to the processor will only occur in 16-bit chunks.

 

Details


This BIOS feature is similar to the 32-bit Disk Access BIOS feature. The name 32-bit Transfer Mode is actually a misnomer because it doesn’t really allow 32-bit transfers on the IDE bus.

The IDE interface is always 16 bits wide, even when the IDE controller is on the 32-bit PCI bus. What this feature actually does is command the IDE controller to combine two 16-bit reads from the hard disk into a single 32-bit doubleword transfer to the processor. This allows the PCI bus to be used more efficiently, as the number of transactions required for a given amount of data is effectively halved!
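The combining operation itself is trivial, as this small illustration shows. One 32-bit PCI transaction now moves what previously took two 16-bit transfers :

```c
/* Two 16-bit words from the IDE interface become one 32-bit
   doubleword on the PCI bus. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

uint32_t combine_ide_words(uint16_t first, uint16_t second)
{
    /* The second 16-bit read occupies the upper half of the transfer. */
    return (uint32_t) first | ((uint32_t) second << 16);
}

int main(void)
{
    printf("0x%08" PRIX32 "\n", combine_ide_words(0x3412, 0x7856));
    return 0;
}
```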

However, according to a Microsoft article (Enhanced IDE operation under Windows NT 4.0), 32-bit disk access can cause data corruption under Windows NT in some cases. Therefore, Microsoft recommends that Windows NT 4.0 users disable 32-bit Disk Access.

Lord Mike asked ‘someone in the know’ about this matter and he was told that the data corruption issue was taken very seriously at Microsoft and that it had been corrected through the Windows NT 4.0 Service Pack 2. Although he couldn’t get an official statement from Microsoft, it’s probably safe enough to enable 32-bit Disk Access on a Windows NT 4.0 system, just as long as it has been upgraded with Service Pack 2.

Because it realizes the performance potential of the IDE controller’s 32-bit PCI interface and improves the efficiency of the PCI bus, it is highly advisable to enable 32-bit Transfer Mode.

If you disable it, data transfers from the IDE controller to the processor will only occur in 16-bit chunks. This degrades the performance of the IDE controller as well as the PCI bus.

As such, you should disable this feature only if you actually face the possibility of data corruption (with an unpatched version of Windows NT 4.0).

You can also find more information on the Windows NT issue in the details of the IDE HDD Block Mode feature!

 


Delay Prior To Thermal – The BIOS Optimization Guide

Delay Prior To Thermal

Common Options : 4 Minutes, 8 Minutes, 16 Minutes, 32 Minutes

 

Quick Review

The Delay Prior To Thermal BIOS feature is only valid for newer Intel processors (from the 0.13µ Intel Pentium 4 with 512 KB L2 cache “Northwood” onwards). These processors come with a Thermal Monitor, which consists of an on-die thermal sensor and a Thermal Control Circuit (TCC).

When the Thermal Monitor is in automatic mode and the thermal sensor detects that the processor has reached its maximum safe operating temperature, it will activate the TCC. The TCC will then modulate the clock cycles by inserting null cycles, typically at a rate of 50-70% of the total number of clock cycles. This results in the processor “resting” 50-70% of the time.

As the die temperature drops, the TCC will gradually reduce the number of null cycles until no more are required to keep the die temperature below the safe point. Then the thermal sensor turns the TCC off. This mechanism allows the processor to dynamically adjust its duty cycles to ensure its die temperature remains within safe limits.

The Delay Prior To Thermal BIOS feature controls the activation of the Thermal Monitor’s automatic mode. It allows you to determine when the Thermal Monitor should be activated in automatic mode after the system boots. For example, with the default value of 16 Minutes, the BIOS activates the Thermal Monitor in automatic mode 16 minutes after the system starts booting up.

Generally, the Thermal Monitor should not be activated immediately upon booting, as the processor comes under heavy load during the boot process. This causes a sharp rise in die temperature from its cold state. Because it takes time for the heat to spread from the die to the heat sink, the thermal sensor will register the sudden spike in die temperature and prematurely activate the TCC. This unnecessarily reduces the processor’s performance during the boot process.

Therefore, to ensure optimal booting performance, the activation of the Thermal Monitor must be delayed for a set period of time.

It is recommended that you set this BIOS feature to the lowest value (in minutes) that exceeds the time it takes to fully boot up your computer. For example, if it takes 5 minutes to fully boot up your system, you should select 8 Minutes.

You should not select a delay value that is unnecessarily long. Without the Thermal Monitor, your processor may heat up to a critical temperature (approximately 135 °C), at which point the thermal sensor shuts down your processor by removing the core voltage within 0.5 seconds.

 

Details

The Delay Prior To Thermal BIOS feature is only valid for newer Intel processors (from the 0.13µ Intel Pentium 4 with 512 KB L2 cache “Northwood” onwards). These processors come with a Thermal Monitor, which consists of an on-die thermal sensor and a Thermal Control Circuit (TCC). Because the thermal sensor is on-die and placed at the hottest part of the die, near the integer ALU units, it is able to closely monitor the processor’s die temperature.

When the Thermal Monitor is in automatic mode and the thermal sensor detects that the processor has reached its maximum safe operating temperature, it will send a PROCHOT# (Processor Hot) signal which activates the TCC. The TCC will then modulate the clock cycles by inserting null cycles, typically at a rate of 50-70% of the total number of clock cycles. Note that the operating frequency of the processor remains unchanged. The TCC only inserts null cycles which results in the processor “resting” 50-70% of the time.

As the die temperature drops, the TCC will gradually reduce the number of null cycles until no more are required to keep the die temperature below the safe point. Then the thermal sensor stops sending the PROCHOT# signal, thereby turning the TCC off. This mechanism allows the processor to dynamically adjust its duty cycles to ensure its die temperature remains within safe limits.
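The throughput cost of this modulation is a simple proportion, as the sketch below shows. The 3.0 GHz clock speed is an assumed figure, used only for illustration :

```c
/* The TCC never lowers the clock speed; it only inserts null cycles.
   Inserting null cycles 50-70% of the time therefore leaves 30-50%
   of the processor's normal throughput. */
#include <stdio.h>

int main(void)
{
    double base_ghz = 3.0;                 /* assumed clock speed */
    double null_fractions[] = { 0.50, 0.70 };

    for (int i = 0; i < 2; i++) {
        double f = null_fractions[i];
        printf("%.0f%% null cycles : clock stays at %.1f GHz, "
               "effective throughput ~%.1f GHz\n",
               f * 100.0, base_ghz, base_ghz * (1.0 - f));
    }
    return 0;
}
```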


The Delay Prior To Thermal BIOS feature controls the activation of the Thermal Monitor’s automatic mode. It allows you to determine when the Thermal Monitor should be activated in automatic mode after the system boots. For example, with the default value of 16 Minutes, the BIOS activates the Thermal Monitor in automatic mode 16 minutes after the system starts booting up.

It also allows the watchdog timer to generate a System Management Interrupt (SMI), thereby presenting the BIOS with an opportunity to enable the Thermal Monitor when running non-ACPI-compliant operating systems.

Generally, the Thermal Monitor should not be activated immediately upon booting, as the processor comes under heavy load during the boot process. This causes a sharp rise in die temperature from its cold state. Because it takes time for the heat to spread from the die to the heat sink, the thermal sensor will register the sudden spike in die temperature and prematurely activate the TCC. This unnecessarily reduces the processor’s performance during the boot process.

Therefore, to ensure optimal booting performance, the activation of the Thermal Monitor must be delayed for a set period of time. This allows the processor to operate at maximum performance without interference from the Thermal Monitor. It also prevents the unnecessary activation of the TCC and the subsequent modulation of processor cycles by allowing the die to stabilize to its true temperature before Thermal Monitor is activated.

It is recommended that you set this BIOS feature to the lowest value (in minutes) that exceeds the time it takes to fully boot up your computer. For example, if it takes 5 minutes to fully boot up your system, you should select 8 Minutes.

You should not select a delay value that is unnecessarily long. Without the Thermal Monitor, your processor may heat up to a critical temperature (approximately 135 °C), at which point the THERMTRIP# signal will be asserted. This shuts down your processor by removing the core voltage within 0.5 seconds. While this measure will most likely save the processor from permanent damage, you will be forced to reset the system before the processor will start working again.

 


Memory Hole At 15M-16M – The BIOS Optimization Guide

Memory Hole At 15M-16M

Common Options : Enabled, Disabled

 

Quick Review

Certain ISA cards require exclusive access to the 1 MB block of memory, from the 15th to the 16th megabyte, to work properly. The Memory Hole At 15M-16M BIOS feature allows you to reserve that 1 MB block of memory for such cards to use.

If you enable this feature, 1 MB of memory (the 15th MB) will be reserved exclusively for the ISA card’s use. This effectively reduces the total amount of memory available to the operating system by 1 MB.

Please note that in certain motherboards, enabling this feature may actually render all memory above the 15th MB unavailable to the operating system!

If you disable this feature, the 15th MB of RAM will not be reserved for the ISA card’s use. The full range of memory is therefore available for the operating system to use. However, if your ISA card requires the use of that memory area, it may then fail to work.

Since ISA cards are a thing of the past, it is highly recommended that you disable this feature. Even if you have an ISA card that you absolutely have to use, you may not actually need to enable this feature.

Most ISA cards do not need exclusive access to this memory area. Make sure that your ISA card requires this memory area before enabling this feature. You should use this BIOS feature only in a last-ditch attempt to get a stubborn ISA card to work.

 

Details

Certain ISA cards require exclusive access to the 1 MB block of memory, from the 15th to the 16th megabyte, to work properly. The Memory Hole At 15M-16M BIOS feature allows you to reserve that 1 MB block of memory for such cards to use.

If you enable this feature, 1 MB of memory (the 15th MB) will be reserved exclusively for the ISA card’s use. This effectively reduces the total amount of memory available to the operating system by 1 MB. Therefore, if you have 256 MB of memory, the usable amount of memory will be reduced to 255 MB.

Please note that in certain motherboards, enabling this feature may actually render all memory above the 15th MB unavailable to the operating system! In such cases, you will end up with only 14 MB of usable memory, irrespective of how much memory your system actually has.


If you disable this feature, the 15th MB of RAM will not be reserved for the ISA card’s use. The full range of memory is therefore available for the operating system to use. However, if your ISA card requires the use of that memory area, it may then fail to work.

Since ISA cards are a thing of the past, it is highly recommended that you disable this feature. Even if you have an ISA card that you absolutely have to use, you may not actually need to enable this feature.

Most ISA cards do not need exclusive access to this memory area. Make sure that your ISA card requires this memory area before enabling this feature. You should use this BIOS feature only in a last-ditch attempt to get a stubborn ISA card to work.

 


IDE Detect Time Out – The BIOS Optimization Guide

IDE Detect Time Out

Common Options : 0 to 15 or 0 to 30, in 1 second steps

 

Quick Review

Motherboards are capable of booting up much faster these days, with the initialization of IDE devices now taking place much earlier. Unfortunately, this also means that some older IDE drives will not be able to spin up in time to be initialized! When this happens, the BIOS will not be able to detect those IDE drives and make them available to the operating system, even though there is nothing wrong with them.

This is where the IDE Detect Time Out BIOS feature comes in. It allows you to force the BIOS to delay the initialization of IDE devices for up to 30 seconds (although some BIOSes allow for even longer delays). The delay gives your IDE devices more time to spin up before the BIOS initializes them.

If you do not use old IDE drives and the BIOS has no problem initializing your IDE devices, it is recommended that you leave the delay at the default value of 0 for the shortest possible boot time. IDE devices manufactured in the last few years will have no problem spinning up in time for initialization. Only older IDE devices may have slower spin-up times.

However, if one or more of your IDE devices fail to initialize during the boot up process, start with a delay of 1 second. If that doesn’t help, gradually increase the delay until all your IDE devices initialize properly during the boot up process.

 

Details

Regardless of its shortcomings, the IDE standard is remarkably backward compatible. Every upgrade of the standard was designed to be fully compatible with older IDE devices, so you can actually use the old 40 MB hard disk drive that came with your ancient 386 system in your spanking new Intel Core i7 system! However, even backward compatibility cannot make up for the slower spindle motors used in older drives.

Motherboards are capable of booting up much faster these days, with the initialization of IDE devices now taking place much earlier. Unfortunately, this also means that some older IDE drives will not be able to spin up in time to be initialized! When this happens, the BIOS will not be able to detect those IDE drives and make them available to the operating system, even though there is nothing wrong with them.


This is where the IDE Detect Time Out BIOS feature comes in. It allows you to force the BIOS to delay the initialization of IDE devices for up to 30 seconds (although some BIOSes allow for even longer delays). The delay gives your IDE devices more time to spin up before the BIOS initializes them.

If you do not use old IDE drives and the BIOS has no problem initializing your IDE devices, it is recommended that you leave the delay at the default value of 0 for the shortest possible boot time. IDE devices manufactured in the last few years will have no problem spinning up in time for initialization. Only older IDE devices may have slower spin-up times.

However, if one or more of your IDE devices fail to initialize during the boot up process, start with a delay of 1 second. If that doesn’t help, gradually increase the delay until all your IDE devices initialize properly during the boot up process.

 


Synchronous Mode Select – The BIOS Optimization Guide

Synchronous Mode Select

Common Options : Synchronous, Asynchronous

 

Quick Review

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.

 

Details

The Synchronous Mode Select BIOS feature controls the signal synchronization of the DRAM-CPU interface.

When set to Synchronous, the chipset synchronizes the signals from the DRAM controller with signals from the CPU bus (front side bus or QuickPath Interconnect). Please note that for the signals to be synchronous, the DRAM controller and the CPU bus must run at the same clock speed.

When set to Asynchronous, the chipset will decouple the DRAM controller from the CPU bus. This allows the DRAM controller and the CPU bus to run at different clock speeds.

Generally, it is advisable to use the Synchronous setting as a synchronized interface allows data transfers to occur without delay. This results in a much higher throughput between the CPU bus and the DRAM controller.


However, the Asynchronous mode does have its uses. Users of multiplier-locked processors and slow memory modules may find that using the Asynchronous mode allows them to overclock the processor much higher without the need to buy faster memory modules.

The Asynchronous mode is also useful for those who have very fast memory modules and multiplier-locked processors with low bus speeds. Running the fast memory modules synchronously with the low CPU bus speed would force them to run at the same slow speed. Running asynchronously will therefore allow the memory modules to run at a much higher speed than the CPU bus.

But please note that the performance gains of running synchronously should not be underestimated. Synchronous operations are generally much faster than asynchronous operations running at a higher clock speed. It is advisable that you compare benchmark scores of your computer running asynchronously (at a higher clock speed) and synchronously to determine the best option for your system.
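As a back-of-the-envelope illustration of why a slower synchronous setting can still win, consider the sketch below. All figures, especially the resynchronization penalty for crossing clock domains, are assumed values used only for illustration :

```c
/* Comparing synchronous and asynchronous access times under assumed
   figures. The resynchronization penalty models the extra wait when
   data crosses between two unrelated clock domains. */
#include <stdio.h>

int main(void)
{
    double access_cycles = 2.5;    /* assumed access latency in DRAM cycles */
    double resync_ns     = 7.5;    /* assumed clock-domain crossing penalty */

    double sync_mhz  = 133.0;      /* DRAM clock matched to the CPU bus */
    double async_mhz = 166.0;      /* faster DRAM clock, but decoupled  */

    double sync_ns  = access_cycles * 1000.0 / sync_mhz;
    double async_ns = access_cycles * 1000.0 / async_mhz + resync_ns;

    printf("Synchronous  @ %.0f MHz : %.1f ns per access\n", sync_mhz, sync_ns);
    printf("Asynchronous @ %.0f MHz : %.1f ns per access\n", async_mhz, async_ns);
    return 0;
}
```

Under these assumptions, the asynchronous setting runs the memory 25% faster yet still delivers slower accesses, which is why benchmarking both configurations is the only reliable way to decide.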

 


Errata 123 Enhancement – The BIOS Optimization Guide

Errata 123 Enhancement

Common Options : Auto, Enabled, Disabled

 

Quick Review

Errata 123 refers to the 123rd bug identified in AMD Athlon 64 and Opteron processors. This bug affects the cache bypass feature in those processors.

These processors have an internal data path that allows the processor to bypass the L2 cache and initiate an early DRAM read for certain cache line fill requests, even before receiving the hit/miss status from the L2 cache.

However, at low core frequencies, the DRAM data read may reach the processor core before it is ready. This causes data corruption and/or the processor to hang.

This bug affects the following processor families :

  • Dual Core AMD Opteron (Socket 940, 939)
  • AMD Athlon 64 X2 (Socket 939)

The Errata 123 BIOS feature is a workaround for the bug. It allows you to disable the cache bypass feature and prevent the bug from manifesting.

When enabled, the processor will not bypass the L2 cache to prefetch data from the system memory.

When disabled, the processor will continue to bypass the L2 cache for certain cache line fill requests. This improves its performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or corrupting data. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.

 

Details

As processors get more and more complex, every new processor design inevitably comes with a plethora of bugs. Those that are identified are given errata numbers.

Errata 123 refers to the 123rd bug identified in AMD Athlon 64 and Opteron processors. This bug affects the cache bypass feature in those processors.

These processors have an internal data path that allows the processor to bypass the L2 cache and initiate an early DRAM read for certain cache line fill requests, even before receiving the hit/miss status from the L2 cache.

However, at low core frequencies, the DRAM data read may reach the processor core before it is ready. This causes data corruption and/or the processor to hang.

This bug is present in AMD’s processor revisions of JH-E1, BH-E4 and JH-E6. These revisions affect the following processor families :

  • Dual Core AMD Opteron (Socket 940, 939)
  • AMD Athlon 64 X2 (Socket 939)

The processor families that are not affected are :

  • AMD Opteron (Socket 940)
  • AMD Athlon 64 (Socket 754, 939)
  • AMD Athlon 64 FX (Socket 940, 939)
  • Mobile AMD Athlon 64 (Socket 754)
  • AMD Sempron (Socket 754, 939)
  • Mobile AMD Sempron (Socket 754)
  • Mobile AMD Athlon XP-M (Socket 754)
  • AMD Turion Mobile Technology

The Errata 123 BIOS feature is a workaround for the bug. It allows you to disable the cache bypass feature and prevent the bug from manifesting.

When enabled, the processor will not bypass the L2 cache to prefetch data from the system memory.

When disabled, the processor will continue to bypass the L2 cache for certain cache line fill requests. This improves its performance.

When set to Auto, the BIOS will query the processor to see if it is affected by the bug. If the processor is affected, the BIOS will enable this BIOS feature. Otherwise, it will leave it disabled.

If your processor is affected by this bug, you should enable this BIOS feature to prevent the processor from hanging or corrupting data. But if your processor is not affected by this bug, disable this BIOS feature for maximum performance.

 
