Tag Archives: PCI

Master Priority Rotation from The Tech ARP BIOS Guide!

Master Priority Rotation

Common Options : 1 PCI, 2 PCI, 3 PCI

 

Quick Review of Master Priority Rotation

The Master Priority Rotation BIOS feature controls the priority of the processor’s accesses to the PCI bus.

If you choose 1 PCI, the processor will always be granted access right after the current PCI bus master completes its transaction, irrespective of how many other PCI bus masters are on the queue.

If you choose 2 PCI, the processor will always be granted access right after the second PCI bus master on the queue completes its transaction.

If you choose 3 PCI, the processor will always be granted access right after the third PCI bus master on the queue completes its transaction.

But no matter what you choose, the processor is guaranteed access to the PCI bus after a certain number of PCI bus master grants.

It doesn’t matter if there are numerous PCI bus masters on the queue or when the processor requests access to the PCI bus. The processor will always be granted access after one PCI bus master transaction (1 PCI), two transactions (2 PCI) or three transactions (3 PCI).

For better overall performance, it is recommended that you select the 1 PCI option as this allows the processor to access the PCI bus with minimal delay.

However, if you wish to improve the performance of your PCI devices, you can try the 2 PCI or 3 PCI options. They ensure that your PCI cards will receive greater PCI bus priority.

Details of Master Priority Rotation

The Master Priority Rotation BIOS feature controls the priority of the processor’s accesses to the PCI bus.

If you choose 1 PCI, the processor will always be granted access right after the current PCI bus master completes its transaction, irrespective of how many other PCI bus masters are on the queue. This improves processor-to-PCI performance, at the expense of other PCI transactions.

If you choose 2 PCI, the processor will always be granted access right after the second PCI bus master on the queue completes its transaction. This means the processor has to wait for just two PCI bus masters to complete their transactions on the PCI bus before it can gain access to the PCI bus itself. This means slightly poorer processor-to-PCI performance but PCI bus masters will enjoy slightly better performance.

If you choose 3 PCI, the processor will always be granted access right after the third PCI bus master on the queue completes its transaction. This means the processor has to wait for three PCI bus masters to complete their transactions on the PCI bus before it can gain access to the PCI bus itself. This means poorer processor-to-PCI performance but PCI bus masters will enjoy better performance.

But no matter what you choose, the processor is guaranteed access to the PCI bus after a certain number of PCI bus master grants.

It doesn’t matter if there are numerous PCI bus masters on the queue or when the processor requests access to the PCI bus. The processor will always be granted access after one PCI bus master transaction (1 PCI), two transactions (2 PCI) or three transactions (3 PCI).
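The rotation described above can be sketched as a simple grant-ordering function. This is a hypothetical illustration of the arbitration behaviour, not actual chipset logic; the device names in the queue are made up.

```python
# Illustrative sketch of Master Priority Rotation arbitration (assumed model,
# not real chipset logic). The setting (1, 2 or 3) is the number of PCI bus
# master transactions the processor must wait behind before it is granted access.
def grants_before_cpu(setting, pci_queue):
    """Return the bus-grant order: the CPU is slotted in after `setting`
    PCI bus-master transactions, regardless of how long the queue is."""
    return pci_queue[:setting] + ["CPU"] + pci_queue[setting:]

queue = ["NIC", "SCSI", "Sound", "Capture"]
print(grants_before_cpu(1, queue))  # CPU waits behind only one PCI master
print(grants_before_cpu(3, queue))  # CPU waits behind three PCI masters
```

Note how with 1 PCI the processor jumps ahead of the rest of the queue, which is why that setting gives the best processor-to-PCI performance.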

For better overall performance, it is recommended that you select the 1 PCI option as this allows the processor to access the PCI bus with minimal delay.

However, if you wish to improve the performance of your PCI devices, you can try the 2 PCI or 3 PCI options. They ensure that your PCI cards will receive greater PCI bus priority.

 


Support Tech ARP!

If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


PCI Clock Synchronization Mode – The Tech ARP BIOS Guide

PCI Clock Synchronization Mode

Common Options : To CPU, 33.33 MHz, Auto

 

Quick Review of PCI Clock Synchronization Mode

The PCI Clock Synchronization Mode BIOS feature allows you to force the PCI bus to either synchronize itself with the processor FSB (Front Side Bus) speed, or run at the standard clock speed of 33.33 MHz.

When set to To CPU, the PCI bus speed is slaved to the processor’s FSB speed. Any change in FSB speed will result in a similar change in the PCI bus speed. For example, if you increase the processor’s FSB speed by 10%, the PCI bus speed will increase by 10% as well.

When set to 33.33 MHz, the PCI bus speed will be locked into its standard clock speed of 33.33 MHz. No matter what the processor’s FSB speed is, the PCI bus will always run at 33.33 MHz.

The Auto option is ambiguous. Its effect cannot be ascertained without testing, since it is up to the manufacturer to decide what the motherboard implements by default. Logically though, the Auto setting should force the PCI bus to run at its standard speed of 33.33 MHz for maximum compatibility.

It is recommended that you set the PCI Clock Synchronization Mode BIOS feature to To CPU if you are overclocking the processor FSB up to 12.5%. If you wish to overclock the processor FSB beyond 12.5%, then you should set this BIOS feature to 33.33 MHz.

However, if you do not intend to overclock, this BIOS feature will not have any effect. The PCI bus will remain at 33.33 MHz, no matter what you select.

 

Details of PCI Clock Synchronization Mode

The PCI Clock Synchronization Mode BIOS feature allows you to force the PCI bus to either synchronize itself with the processor FSB (Front Side Bus) speed, or run at the standard clock speed of 33.33 MHz.

When set to To CPU, the PCI bus speed is slaved to the processor’s FSB speed. Any change in FSB speed will result in a similar change in the PCI bus speed. For example, if you increase the processor’s FSB speed by 10%, the PCI bus speed will increase by 10% as well.

When set to 33.33 MHz, the PCI bus speed will be locked to its standard clock speed of 33.33 MHz. No matter what the processor’s FSB speed is, the PCI bus will always run at 33.33 MHz.

The Auto option is ambiguous. Its effect cannot be ascertained without testing, since it is up to the manufacturer to decide what the motherboard implements by default. Logically though, the Auto setting should force the PCI bus to run at its standard speed of 33.33 MHz for maximum compatibility.

Synchronizing the PCI bus with the processor FSB allows for greater performance when you are overclocking. Because the PCI bus will be overclocked as you overclock the processor FSB, you will experience better performance from your PCI devices. However, if your PCI device cannot tolerate the overclocked PCI bus, you may experience issues like system crashes or data corruption.

The recommended safe limit for an overclocked PCI bus is 37.5 MHz. This is the speed at which practically all newer PCI cards can run without breaking a sweat. Still, you should test the system thoroughly for stability before committing to an overclocked PCI bus speed.

Please note that if you wish to synchronize the PCI bus with the processor FSB and remain within this relatively safe limit, you can only overclock the processor FSB by up to 12.5%. Any higher, and your PCI bus will be overclocked beyond 37.5 MHz.

If you wish to overclock the processor FSB further without worrying about your PCI devices, then you should set this BIOS feature to 33.33 MHz. This forces the PCI bus to run at the standard speed of 33.33 MHz, irrespective of the processor’s FSB speed.
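The 12.5% figure above falls straight out of the arithmetic: a synchronized PCI bus scales with the FSB, so a 12.5% FSB overclock puts the PCI bus right at the 37.5 MHz safe limit. A quick sketch of the numbers:

```python
# Simple arithmetic sketch of how FSB overclocking affects a synchronized PCI bus.
BASE_PCI_MHZ = 33.33        # standard PCI bus speed
SAFE_PCI_LIMIT_MHZ = 37.5   # recommended safe limit from the text

def synced_pci_speed(fsb_overclock_percent):
    """PCI bus speed when slaved to an overclocked FSB (To CPU setting)."""
    return BASE_PCI_MHZ * (1 + fsb_overclock_percent / 100)

print(round(synced_pci_speed(10), 2))    # within the safe limit
print(round(synced_pci_speed(12.5), 2))  # right at the 37.5 MHz limit
print(synced_pci_speed(15) > SAFE_PCI_LIMIT_MHZ)  # beyond the safe limit
```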

It is recommended that you set the PCI Clock Synchronization Mode BIOS feature to To CPU if you are overclocking the processor FSB up to 12.5%. If you wish to overclock the processor FSB beyond 12.5%, then you should set this BIOS feature to 33.33 MHz.

However, if you do not intend to overclock, this BIOS feature will not have any effect. The PCI bus will remain at 33.33 MHz, no matter what you select.

 



PCI Chaining from The Tech ARP BIOS Guide

PCI Chaining

Common Options : Enabled, Disabled

 

Quick Review of PCI Chaining

The PCI Chaining BIOS feature is designed to speed up writes from the processor to the PCI bus by allowing write combining to occur at the PCI interface.

When PCI chaining is enabled, up to four quadwords of processor writes to contiguous PCI addresses will be chained together and written to the PCI bus as a single PCI burst write.

When PCI chaining is disabled, each processor write to the PCI bus will be handled as a separate non-burst 32-bit write.

Needless to say, writing four quadwords of data in a single PCI write is much faster than doing so in four separate non-burstable writes. A single PCI burst write will also reduce the amount of time the processor has to wait while writing to the PCI bus.

Therefore, it is recommended that you enable this BIOS feature for better CPU to PCI write performance.

 

Details of PCI Chaining

The PCI Chaining BIOS feature is designed to speed up writes from the processor to the PCI bus by allowing write combining to occur at the PCI interface.

When PCI chaining is enabled, up to four quadwords of processor writes to contiguous PCI addresses will be chained together and written to the PCI bus as a single PCI burst write.

When PCI chaining is disabled, each processor write to the PCI bus will be handled as a separate non-burst 32-bit write.

Needless to say, writing four quadwords of data in a single PCI write is much faster than doing so in four separate non-burstable writes. A single PCI burst write will also reduce the amount of time the processor has to wait while writing to the PCI bus.

Therefore, it is recommended that you enable this BIOS feature for better CPU to PCI write performance.

 

What Is A Quadword?

In computing, a quadword is a term that means four words, equivalent to 8 bytes or 64 bits.

So a PCI burst write of four quadwords would be 32 bytes, or 256 bits, in size. That is eight times the data of a single non-burst write of 4 bytes, or 32 bits.
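The arithmetic above can be verified in a few lines (this assumes the conventional 16-bit word used in the definition above):

```python
# Arithmetic check of the quadword figures above.
WORD_BITS = 16                     # one word = 16 bits
quadword_bits = 4 * WORD_BITS      # a quadword is four words
burst_bits = 4 * quadword_bits     # a PCI burst write chains four quadwords

print(quadword_bits)               # bits per quadword
print(burst_bits)                  # bits per chained burst write
print(burst_bits // 32)            # multiple of a 32-bit non-burst write
```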


PIRQ x Use IRQ No. from The Tech ARP BIOS Guide

PIRQ x Use IRQ No.

Common Options : Auto, 3, 4, 5, 7, 9, 10, 11, 12, 14, 15

 

Quick Review of PIRQ x Use IRQ No.

The PIRQ x Use IRQ No. BIOS feature allows you to manually set the IRQ for a particular device installed on the AGP and PCI buses.

It is especially useful when you are transferring a hard disk from one computer to another, and you do not want to reinstall your operating system to redetect the IRQ settings. By setting the IRQs to match the original settings, you can circumvent a lot of configuration problems after installing the hard disk in the new system. However, this is only true for non-ACPI systems.

Below is a table showing the relationship between PIRQ and INT in the reference motherboard :-

Signals | AGP Slot / PCI Slot 1 | PCI Slot 2 | PCI Slot 3 | PCI Slot 4 / PCI Slot 5
PIRQ_0  | INT A                 | INT D      | INT C      | INT B
PIRQ_1  | INT B                 | INT A      | INT D      | INT C
PIRQ_2  | INT C                 | INT B      | INT A      | INT D
PIRQ_3  | INT D                 | INT C      | INT B      | INT A

You will notice that the interrupts are staggered so that conflicts do not happen easily.

Even then, you should try not to use paired slots that share the same set of IRQs. In such cases, it is recommended that you use only one of the two slots.

In most cases, you should just leave the setting as Auto. This allows the motherboard to assign the IRQs automatically. But if you need to assign a particular IRQ to a device on the AGP or PCI bus, here is how you can make use of this BIOS feature.

  1. Determine the slot that the device is located in.
  2. Check your motherboard’s PIRQ table (in the manual) to determine the slot’s primary PIRQ.
  3. You can then select the IRQ you want by assigning the IRQ to the appropriate PIRQ.

Just remember that the BIOS will always try to allocate the PIRQ linked to INT A for each slot. It is just a matter of linking the IRQ you want to the correct PIRQ for that slot.

Please note the table, notes and INT details are only examples provided by the reference motherboard. They may vary from motherboard to motherboard.

 

Details of PIRQ x Use IRQ No.

The PIRQ x Use IRQ No. BIOS feature allows you to manually set the IRQ for a particular device installed on the AGP and PCI buses.

It is especially useful when you are transferring a hard disk from one computer to another, and you do not want to reinstall your operating system to redetect the IRQ settings. By setting the IRQs to match the original settings, you can circumvent a lot of configuration problems after installing the hard disk in the new system. However, this is only true for non-ACPI systems.

Here are some important notes from the reference motherboard (may vary from motherboard to motherboard) :

  • If you specify a particular IRQ here, you can’t specify the same IRQ for the ISA bus. If you do, you will cause a hardware conflict.
  • Each PCI slot is capable of activating up to 4 interrupts – INT A, INT B, INT C and INT D.
  • The AGP slot is capable of activating up to 2 interrupts – INT A and INT B.
  • Normally, each slot is allocated INT A. The other interrupts are reserved, and are used only when the PCI/AGP device requires more than one IRQ, or if the requested IRQ has been used up.
  • The AGP slot and PCI slot #1 share the same IRQ.
  • PCI slot #4 and #5 share the same IRQs.
  • USB uses PIRQ_4.

Below is a table showing the relationship between PIRQ and INT in the reference motherboard :-

Signals | AGP Slot / PCI Slot 1 | PCI Slot 2 | PCI Slot 3 | PCI Slot 4 / PCI Slot 5
PIRQ_0  | INT A                 | INT D      | INT C      | INT B
PIRQ_1  | INT B                 | INT A      | INT D      | INT C
PIRQ_2  | INT C                 | INT B      | INT A      | INT D
PIRQ_3  | INT D                 | INT C      | INT B      | INT A

You will notice that the INT A assignments are staggered across the PIRQ lines, so conflicts do not happen easily.

Even then, you should try not to use paired slots that share the same set of IRQs. In this reference motherboard, such paired slots would be the AGP slot and PCI slot 1, or PCI slots 4 and 5. In such cases, it is recommended that you use only one of the two slots.

In most cases, you should just leave the setting as Auto. This allows the motherboard to assign the IRQs automatically. But if you need to assign a particular IRQ to a device on the AGP or PCI bus, here is how you can make use of this BIOS feature.

  1. Determine the slot that the device is located in.
  2. Check your motherboard’s PIRQ table (in the manual) to determine the slot’s primary PIRQ. For example, if you have a PCI network card in PCI slot 3, the table above shows that the slot’s primary PIRQ is PIRQ_2. Remember, all slots are first allocated INT A if it is available.
  3. You can then select the IRQ you want by assigning the IRQ to the appropriate PIRQ. In our network card example, if the card requires IRQ 7, set PIRQ_2 to use IRQ 7. The BIOS will then allocate IRQ 7 to PCI slot 3. It is that easy! 🙂

Just remember that the BIOS will always try to allocate the PIRQ linked to INT A for each slot. So, in our reference motherboard, the primary PIRQ for the AGP slot and PCI slot 1 is PIRQ_0 while the primary PIRQ for PCI slot 2 is PIRQ_1 and so on. It is just a matter of linking the IRQ you want to the correct PIRQ for that slot.
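The lookup procedure described in the steps above can be sketched as a small table-driven function. The routing values are the example values from the reference motherboard's table; the slot names and function are hypothetical, purely for illustration.

```python
# Sketch of the reference motherboard's PIRQ-to-INT routing (example values
# from the table above; real motherboards will differ).
# AGP and PCI slot 1 share one routing column, as do PCI slots 4 and 5.
ROUTING = {
    "PIRQ_0": ["INT A", "INT D", "INT C", "INT B"],
    "PIRQ_1": ["INT B", "INT A", "INT D", "INT C"],
    "PIRQ_2": ["INT C", "INT B", "INT A", "INT D"],
    "PIRQ_3": ["INT D", "INT C", "INT B", "INT A"],
}
SLOT_COLUMN = {"AGP": 0, "PCI1": 0, "PCI2": 1, "PCI3": 2, "PCI4": 3, "PCI5": 3}

def primary_pirq(slot):
    """The primary PIRQ for a slot is the one wired to its INT A."""
    col = SLOT_COLUMN[slot]
    for pirq, ints in ROUTING.items():
        if ints[col] == "INT A":
            return pirq

print(primary_pirq("PCI3"))  # matches the network-card example above
```

So to give the card in PCI slot 3 a specific IRQ, you would assign that IRQ to the PIRQ this lookup returns.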

Please note the table, notes and INT details are only examples provided by the reference motherboard. They may vary from motherboard to motherboard. For example, Intel i8xx chipsets have 8 interrupt lines (INT A to INT H). In i8xx motherboards, the AGP slot will always have its own IRQ. Thanks to alex-the-cat for that info!



Byte Merge from The Tech ARP BIOS Guide

Byte Merge

Common Options : Enabled, Disabled

 

Quick Review of Byte Merge

The Byte Merge BIOS feature is similar to the PCI Dynamic Bursting feature.

When enabled, the PCI write buffer accumulates and merges 8-bit and 16-bit writes into 32-bit writes. This increases the efficiency of the PCI bus and improves its bandwidth.

When disabled, the PCI write buffer will not accumulate or merge 8-bit or 16-bit writes. It will just write them to the PCI bus as soon as the bus is free. As such, there may be a loss of PCI bus efficiency when 8-bit or 16-bit data is written to the PCI bus.

Therefore, it is recommended that you enable Byte Merge for better performance.

However, please note that Byte Merge may be incompatible with certain PCI network interface cards (also known as NICs). So, if your NIC won’t work properly, try disabling this feature.

 

Details of Byte Merge

The Byte Merge BIOS feature is similar to the PCI Dynamic Bursting feature.

If you have already read about the CPU to PCI Write Buffer feature, you should know that the chipset has an integrated PCI write buffer which allows the CPU to immediately write up to four words (or 64-bits) of PCI writes to it. This frees up the CPU to work on other tasks while the PCI write buffer writes them to the PCI bus.

Now, the CPU doesn’t always write 32-bit data to the PCI bus. 8-bit and 16-bit writes can also take place. But while the CPU may only write 8-bits of data to the PCI bus, it is still considered as a single PCI transaction. This makes it equivalent to a 16-bit or 32-bit write in terms of PCI bandwidth! This reduces the effective PCI bandwidth, especially if there are many 8-bit or 16-bit CPU-to-PCI writes.

To solve this problem, the write buffer can be programmed to accumulate and merge 8-bit and 16-bit writes into 32-bit writes. The buffer then writes the merged data to the PCI bus. As you can see, merging the smaller 8-bit or 16-bit writes into a few large 32-bit writes reduces the number of PCI transactions required. This increases the efficiency of the PCI bus and improves its bandwidth.
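The accumulate-and-merge behaviour can be modelled in a few lines. This is a hypothetical toy model of the write buffer, not the actual chipset implementation:

```python
# Toy model of byte merging (assumed behaviour, for illustration only):
# small writes accumulate in the buffer until a full 32-bit PCI write is ready.
class MergingWriteBuffer:
    def __init__(self):
        self.pending_bits = 0       # bits accumulated but not yet sent
        self.pci_transactions = 0   # 32-bit writes issued to the PCI bus

    def write(self, bits):
        self.pending_bits += bits
        while self.pending_bits >= 32:   # merge into full 32-bit PCI writes
            self.pending_bits -= 32
            self.pci_transactions += 1

buf = MergingWriteBuffer()
for _ in range(4):
    buf.write(8)    # four 8-bit CPU writes to contiguous addresses
print(buf.pci_transactions)  # one merged PCI transaction instead of four
```

Without merging, the same four 8-bit writes would each consume a full PCI transaction, which is exactly the bandwidth waste described above.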

This is where the Byte Merge BIOS feature comes in. It controls the byte merging capability of the PCI write buffer.

If it is enabled, every write transaction will go straight to the write buffer. They are accumulated until there is enough to be written to the PCI bus in a single burst. This improves the PCI bus’ performance.

If you disable byte merging, all writes will still go to the PCI write buffer (if the CPU to PCI Write Buffer feature has been enabled). But the buffer won’t accumulate and merge the data. The data is written to the PCI bus as soon as the bus becomes free. This reduces PCI bus efficiency, particularly when 8-bit or 16-bit data is written to the PCI bus.

Therefore, it is recommended that you enable Byte Merge for better performance.

However, please note that Byte Merge may be incompatible with certain PCI network interface cards (also known as NICs). Boar-Ral explains :-

I noticed that some PCI cards really despise Byte Merge, in particular the 3Com 3C905 series of NICs. While this may only apply to certain motherboards, in my case, the P3V4X; I feel that this is probably not the case and that it is a rather widespread problem.

Issues I have encountered with Byte Merge enabled, range from Windows 98 SE freezing at the boot screen to my NIC not functioning at all. This issue has been confirmed with others using the same NIC and is what alerted me to the issue in the first place.

Prozactive concurs :-

I wanted to confirm the observation posted by Boar-Ral concerning the “Byte Merge” BIOS setting. After enabling “Byte Merge” and making other recommended BIOS setting changes, I suddenly lost all network I/O from my system. And yes, I happen to be using a 3Com 3C905B-TX NIC (with an Asus A7V motherboard). After a great deal of trial and error troubleshooting, I found that disabling “Byte Merge” lets everything work again.

On the other hand, Cprall discovered that he was able to use the NIC in Windows 98 SE but not in Windows 2000. Check out what he has to say :-

I’ll even third this to say I was recently bitten by the same (A7V motherboard at BIOS 1009 and 3C905B-TX network card). I do have one slight addition to what was seen here. With Byte Merge enabled, I was able to access the network under Windows 98 SE, but not Windows 2000. With Byte Merge disabled, the network card works under both.

So, if your NIC (Network Interface Card) won’t work properly, try disabling Byte Merge. Otherwise, you should enable Byte Merge for better performance.



PCI Pipelining – The Tech ARP BIOS Guide

PCI Pipelining

Common Options : Enabled, Disabled

 

Quick Review of PCI Pipelining

The PCI Pipelining BIOS feature determines if PCI transactions to the memory subsystem will be pipelined.

If the PCI pipeline feature is enabled, the memory controller allows PCI transactions to be pipelined. This masks the latency of each PCI transaction and improves the efficiency of the PCI bus.

If the PCI pipeline feature is disabled, the memory controller is forced to check for outstanding transactions from other devices to the same block address that each PCI transaction is targeting.

For better PCI performance, the PCI pipeline should be enabled. This allows the latency of the bus to be masked for consecutive transactions.

However, if your system constantly locks up for no apparent reason, try disabling this feature. Disabling PCI Pipelining reduces performance but ensures that data coherency is strictly maintained for maximum reliability.

 

Details of PCI Pipelining

The PCI Pipelining BIOS feature determines if PCI transactions to the memory subsystem will be pipelined.

The pipelining of PCI transactions allows their latencies to be masked (hidden). This greatly improves the efficiency of the PCI bus. However, this is only true for multiple transactions in the same direction. Pipelining won’t help with PCI devices that switch between reads and writes often.

This feature is different from a burst transfer where multiple data transactions are executed consecutively with a single command. In PCI pipelining, different transactions are progressively processed in the pipeline without waiting for the current transaction to finish. Normally, outstanding transactions have to wait for the current one to complete before they are initiated.

If the PCI pipeline feature is enabled, the memory controller allows PCI transactions to be pipelined. This masks the latency of each PCI transaction and improves the efficiency of the PCI bus.

Please note that once the transactions are pipelined, they are flagged as performed, even though they have not actually been completed. As such, data coherency problems may occur when other devices write to the same memory block. This may cause valid data to be overwritten by outdated or expired data, causing problems like data corruption or system lock-ups.

If the PCI pipeline feature is disabled, the memory controller is forced to check for outstanding transactions from other devices to the same block address that each PCI transaction is targeting.

If there is a match, the PCI transaction is stalled until the outstanding transaction to the same memory block is complete. This essentially forces the memory controller to hold the PCI bus until the PCI transaction is cleared to proceed. It also prevents other PCI transactions from being pipelined. Both factors greatly reduce performance.
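The latency-masking effect can be illustrated with a toy timing model. The clock counts here are invented for illustration; real PCI latencies depend on the devices and chipset involved.

```python
# Toy latency model (illustrative numbers only, not real PCI timings):
# pipelining overlaps each transaction's setup latency with the previous
# transaction's data phase, hiding it for all but the first transaction.
LATENCY = 8   # assumed clocks of setup latency per transaction
DATA = 4      # assumed clocks of data transfer per transaction

def serialized_clocks(n):
    """Each transaction waits for the previous one to finish completely."""
    return n * (LATENCY + DATA)

def pipelined_clocks(n):
    """Only the first transaction's latency is visible."""
    return LATENCY + n * DATA

print(serialized_clocks(8))  # total clocks without pipelining
print(pipelined_clocks(8))   # total clocks with pipelining
```

Note this only pays off for streams of transactions in the same direction, as mentioned above; frequent read/write turnarounds reintroduce the latency.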

For better PCI performance, the PCI pipeline should be enabled. This allows the latency of the bus to be masked for consecutive transactions.

However, if your system constantly locks up for no apparent reason, try disabling this feature. Disabling PCI Pipelining reduces performance but ensures that data coherency is strictly maintained for maximum reliability.



Delayed Transaction – The BIOS Optimization Guide

Delayed Transaction

Common Options : Enabled, Disabled

 

Quick Review of Delayed Transaction

To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed. According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles for the initial read and 8 PCI clock cycles for each subsequent read.

If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Delayed Transaction feature.

With PCI Delayed Transaction enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the master device will rearbitrate for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, the target device just sends the completion status to the master device to complete the transaction.

One advantage of using PCI Delayed Transaction is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Delayed Transaction also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. Without PCI Delayed Transaction, all write-posted data will have to be flushed before another PCI transaction can occur.

It is highly recommended that you enable Delayed Transaction for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

Please note that while many manuals and even earlier versions of the BIOS Optimization Guide have stated that this is an ISA bus-specific BIOS feature which enables a 32-bit write-posted buffer for faster PCI-to-ISA writes, they are incorrect! This BIOS feature is not ISA bus-specific and it does not control any write-posted buffers. It merely allows write-posting to continue while a non-postable PCI transaction is underway.

 

Details of Delayed Transaction

On the PCI bus, there are many devices that may not meet the PCI target latency rule. Such devices include I/O controllers and bridges (i.e. PCI-to-PCI and PCI-to-ISA bridges). To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed.

According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles (32 clock cycles for a host bus bridge) for the initial read and 8 PCI clock cycles for each subsequent read. If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Delayed Transaction feature.

When a master device reads from a target device on the PCI bus but fails to meet the latency requirements, the transaction will be terminated with a Retry command. The master device will then have to rearbitrate for bus access. But if PCI Delayed Transaction had been enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the target device latches on the data and terminates the transaction if it cannot be completed within the target latency period. The master device then rearbitrates for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, instead of returning data (in the case of a read transaction), the target device sends the completion status to the master device to complete the transaction.
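The read-side flow described above can be sketched as a sequence of protocol events. This is a simplified walkthrough of the delayed transaction mechanism, not real hardware behaviour; the event strings and function are hypothetical.

```python
# Simplified walkthrough of a delayed read transaction (protocol sketch only).
# The 16-clock figure is the PCI 2.1 maximum target latency for an initial read.
def delayed_read(target_ready_after, max_latency=16):
    """Return the sequence of events for a read, given how many clocks
    the target needs before it can supply the data."""
    events = []
    if target_ready_after > max_latency:
        events.append("Retry")  # target cannot meet the latency rule
        events.append("target continues the read independently")
        events.append("master rearbitrates and reissues the read")
        events.append("data delivered immediately on retry")
    else:
        events.append("data delivered within the latency window")
    return events

print(delayed_read(target_ready_after=40))  # slow target: delayed transaction
print(delayed_read(target_ready_after=10))  # fast target: normal completion
```

The key point the model shows: between the Retry and the reissued read, the bus is free for other PCI masters instead of idling.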

One advantage of using PCI Delayed Transaction is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Delayed Transaction also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. The write-posted data will be written to memory while the target device is working on the non-postable transaction and flushed before the transaction is completed on the master device. Without PCI Delayed Transaction, all write-posted data will have to be flushed before another PCI transaction can occur.

As you can see, the PCI Delayed Transaction feature allows for more efficient use of the PCI bus as well as better PCI performance by allowing write-posting to occur concurrently with non-postable transactions. In this BIOS, the Delayed Transaction option allows you to enable or disable the PCI Delayed Transaction feature.

It is highly recommended that you enable Delayed Transaction for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

Please note that while many manuals and even earlier versions of the BIOS Optimization Guide have stated that this is an ISA bus-specific BIOS feature which enables a 32-bit write-posted buffer for faster PCI-to-ISA writes, they are incorrect! This BIOS feature is not ISA bus-specific and it does not control any write-posted buffers. It merely allows write-posting to continue while a non-postable PCI transaction is underway.


Auto Detect DIMM/PCI Clk – The BIOS Optimization Guide

Auto Detect DIMM/PCI Clk

Common Options : Enabled, Disabled

 

Quick Review

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should reduce EMI (Electromagnetic Interference) and power consumption by turning off clock signals to unoccupied or inactive PCI and memory slots.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

It is recommended that you enable this feature to save power and reduce EMI.


 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted. To reduce this problem, the motherboard can either modulate the pulses (see Spread Spectrum) or turn off unused AGP, PCI or memory clock signals.

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI and reduce power consumption by turning off unoccupied or inactive PCI and memory slots. It is similar to the Smart Clock option of the Spread Spectrum BIOS feature.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

This method allows you to reduce the motherboard’s EMI levels without compromising system stability. It also allows the motherboard to reduce power consumption because the clock signals will only be generated for PCI and memory slots that are occupied and active.
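The gating behaviour described above can be sketched as a toy model in Python. The slot names and the occupied/active states are purely illustrative, not chipset internals:

```python
def active_clocks(slots, auto_detect=True):
    """Toy model of Auto Detect DIMM/PCI Clk: with the feature enabled,
    clock signals are driven only to slots that are occupied AND active."""
    if not auto_detect:
        return set(slots)   # disabled: every slot gets a clock signal
    return {name for name, (occupied, active) in slots.items()
            if occupied and active}

# hypothetical slot states: (occupied, active)
slots = {
    "PCI1": (True, True),    # occupied and busy
    "PCI2": (True, False),   # occupied but idle
    "PCI3": (False, False),  # empty
    "DIMM1": (True, True),
}
print(sorted(active_clocks(slots)))           # ['DIMM1', 'PCI1']
print(len(active_clocks(slots, False)))       # 4 - all clocks driven
```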

The choice of whether to enable or disable this feature is really up to your personal preference. But since this feature reduces EMI and power consumption without compromising system stability, it is recommended that you enable it.

 


Init Display First – The BIOS Optimization Guide

Init Display First

Common Options : AGP or PCIe, PCI

 

Quick Review

The Init Display First BIOS feature allows you to select whether to boot the system using the PCIe / AGP graphics card or the PCI graphics card. This is important if you have both PCIe / AGP and PCI graphics cards.

If you are only using a single graphics card, the BIOS will ignore this BIOS setting and boot the computer using that graphics card. However, there may be a slight reduction in the time taken to detect and initialize the card if you select the proper setting. For example, if you only use a PCIe / AGP graphics card, then setting Init Display First to PCIe or AGP may speed up your system’s booting-up process.

If you are only using a single graphics card, it is recommended that you set the Init Display First feature to the proper setting for your system :

  • PCIe for a single PCIe card,
  • AGP for a single AGP card, and
  • PCI for a single PCI card.

But if you are using multiple graphics cards, it is up to you which card you want to use as your primary display card. It is recommended that you select the fastest graphics card as the primary display card.

 

Details

Although the PCI Express and AGP buses were designed exclusively for the graphics subsystem, some users still have to use PCI graphics cards for multi-monitor support. This was more common with AGP motherboards because there can be only one AGP port, while PCI Express motherboards can have multiple PCIe slots.

If you want to use multiple monitors on AGP motherboards, you must either get an AGP graphics card with multi-monitor support, or use PCI graphics cards. PCI Express motherboards usually have multiple PCIe slots, but there may still not be enough PCIe slots, and you may need to install PCI graphics cards.

For those who upgraded from a PCI graphics card to an AGP graphics card, it is certainly enticing to use the old PCI graphics card to support a second monitor. The PCI card would do the job just fine as it merely sends display data to the second monitor. You don’t need a powerful graphics card to run the second monitor, if it’s merely for display purposes.

When it comes to a case of a PCI Express or an AGP graphics card working in tandem with a PCI graphics card, the BIOS has to determine which graphics card is the primary graphics card. Naturally, the default would be the PCIe or AGP graphics card since it would naturally be the faster graphics card.

However, there are situations in which you may want to manually select the PCI graphics card instead. For example – you have a PCIe / AGP graphics card as well as a PCI graphics card, but only one monitor. This is where the Init Display First BIOS feature comes in. It allows you to select whether to boot the system using the PCIe / AGP graphics card or the PCI graphics card.


If you are only using a single graphics card, the BIOS will ignore this BIOS setting and boot the computer using that graphics card. However, there may be a slight reduction in the time taken to detect and initialize the card if you select the proper setting. For example, if you only use a PCIe / AGP graphics card, then setting Init Display First to PCIe or AGP may speed up your system’s booting-up process.

If you are only using a single graphics card, it is recommended that you set the Init Display First feature to the proper setting for your system :

  • PCIe for a single PCIe card,
  • AGP for a single AGP card, and
  • PCI for a single PCI card.

But if you are using multiple graphics cards, it is up to you which card you want to use as your primary display card. It is recommended that you select the fastest graphics card as the primary display card.

 


PCI Prefetch – The BIOS Optimization Guide

PCI Prefetch

Common Options : Enabled, Disabled

 

Quick Review

The PCI Prefetch feature controls the PCI controller’s prefetch capability.

When enabled, the PCI controller will prefetch data whenever the PCI device reads from the system memory. This speeds up PCI reads as it allows contiguous memory reads by the PCI device to proceed with minimal delay.

Therefore, it is recommended that you enable this feature for better PCI read performance.

 

Details

The PCI Prefetch feature controls the PCI controller’s prefetch capability.

When enabled, the system controller will prefetch eight quadwords (one cache line) of data whenever a PCI device reads from the system memory.

Therefore, it is recommended that you enable this feature for better PCI read performance. Please note that PCI writes to the system memory do not benefit from this feature.


Here’s how it works.

Whenever the PCI controller reads PCI-requested data from the system memory, it also reads the subsequent cache line of data. This is done on the assumption that the PCI device will request the subsequent cache line next.

When the PCI device actually initiates a read command for that cache line, the system controller can immediately send it to the PCI device.

This speeds up PCI reads as the PCI device won’t need to wait for the system controller to read from the system memory. As such, PCI Prefetch allows contiguous memory reads by the PCI device to proceed with minimal delay.
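Here is a minimal Python sketch of the prefetch behaviour described above. The controller class, the prefetch buffer structure and the hit counter are illustrative assumptions, not actual chipset internals:

```python
CACHE_LINE = 64  # eight quadwords x 8 bytes, as described above

class SystemController:
    """Toy model of PCI Prefetch: every PCI read also fetches the
    subsequent cache line into a prefetch buffer, so a contiguous
    follow-up read is served without waiting on system memory."""

    def __init__(self, memory):
        self.memory = memory      # system memory as a bytearray
        self.prefetch = {}        # cache-line address -> prefetched bytes
        self.prefetch_hits = 0

    def _fetch_line(self, line_addr):
        return bytes(self.memory[line_addr:line_addr + CACHE_LINE])

    def pci_read(self, addr):
        line = addr - (addr % CACHE_LINE)
        if line in self.prefetch:
            data = self.prefetch.pop(line)   # already fetched: no memory wait
            self.prefetch_hits += 1
        else:
            data = self._fetch_line(line)
        # speculatively read the next cache line for the PCI device
        self.prefetch[line + CACHE_LINE] = self._fetch_line(line + CACHE_LINE)
        return data

ctrl = SystemController(bytearray(range(256)) * 4)   # 1 KB of pattern data
first = ctrl.pci_read(0)     # fetched from memory; line 64 is prefetched
second = ctrl.pci_read(64)   # served straight from the prefetch buffer
print(ctrl.prefetch_hits)    # 1
```

In this sketch, the second of two contiguous reads is a prefetch hit, which is exactly why contiguous PCI reads proceed with minimal delay.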

 


CPU to PCI Write Buffer – The BIOS Optimization Guide

CPU to PCI Write Buffer

Common Options : Enabled, Disabled

 

Quick Review

The CPU to PCI Write Buffer BIOS feature controls the chipset’s CPU-to-PCI write buffer. It is used to store PCI writes from the processor before they are written to the PCI bus.

When enabled, all PCI writes from the processor will go directly to the write buffer. This allows the processor to work on something else while the write buffer writes the data to the PCI bus on the next available PCI cycle.

When disabled, the processor bypasses the buffer and writes directly to the PCI bus. This ties up the processor for the entire length of the transaction.

It is recommended that you enable this BIOS feature for better performance.

 

Details

The CPU to PCI Write Buffer BIOS feature controls the chipset’s CPU-to-PCI write buffer. It is used to store PCI writes from the processor before they are written to the PCI bus.

If this buffer is disabled, the processor bypasses the buffer and writes directly to the PCI bus. Although this may seem like the faster and better method, it really isn’t so.

When the processor wants to write to the PCI bus, it has to arbitrate for control of the PCI bus. This takes time, especially when there are other devices requesting access to the PCI bus as well. During this time, the processor cannot do anything else but wait for its turn.

Even when it gets control of the PCI bus, the processor still has to wait until the PCI bus is free. Because the processor bus (which can be as fast as 533 MHz) is many times faster than the PCI bus (at only 33 MHz), the processor wastes many clock cycles just waiting for the PCI bus. And it hasn’t even begun writing to the PCI bus yet! The entire transaction, therefore, puts the processor out of commission for many clock cycles.


This is where the CPU-to-PCI write buffer comes in. It is a small memory buffer built into the chipset. The actual size of the buffer varies from chipset to chipset. But in most cases, it is big enough for four words or 64-bits worth of data.

When this write buffer is enabled, all PCI writes from the processor will go straight into it, instead of the PCI bus. This is virtually instantaneous since the processor does not have to arbitrate or wait for the PCI bus. That task is now left to the chipset and its write buffer. The processor is thus free to work on something else.

It is important to note that the write buffer won’t be able to write the data to the PCI bus any faster than the processor can. This is because the write buffer still has to arbitrate and wait for control of the PCI bus! But the difference here is that the entire transaction can now be carried out without tying up the processor.
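The posted-write mechanism above can be sketched as a toy model. The four-word (64-bit) capacity follows the text; the class and method names are purely illustrative:

```python
from collections import deque

class CPUToPCIWriteBuffer:
    """Toy model of the posted-write buffer: the CPU deposits writes
    instantly and moves on; the chipset drains them to the PCI bus
    later, when the bus is free. The four-word (64-bit) capacity is
    taken from the text above."""

    CAPACITY = 4  # words

    def __init__(self):
        self.buffer = deque()
        self.pci_bus = []           # writes that have reached the PCI bus

    def cpu_write(self, word):
        if len(self.buffer) >= self.CAPACITY:
            self.drain()            # the CPU only stalls when the buffer is full
        self.buffer.append(word)    # otherwise the write is posted instantly

    def drain(self):
        # the chipset arbitrates for the PCI bus and flushes the buffer
        while self.buffer:
            self.pci_bus.append(self.buffer.popleft())

buf = CPUToPCIWriteBuffer()
for word in [0x1111, 0x2222, 0x3333, 0x4444, 0x5555]:
    buf.cpu_write(word)   # the fifth write forces a drain of the first four
buf.drain()
print(buf.pci_bus)        # all five writes, in order
```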

To sum it all up, enabling the CPU to PCI write buffer frees up CPU cycles that would normally be wasted waiting for the PCI bus. Therefore, it is recommended that you enable this feature for better performance.

 


Graphics Aperture Size – The BIOS Optimization Guide

Graphics Aperture Size

Common Options : 4, 8, 16, 32, 64, 128, 256 (in MB)

 

Quick Review

The Graphics Aperture Size BIOS feature does two things. It selects the size of the AGP aperture and it determines the size of the GART (Graphics Address Relocation Table).

The aperture is a portion of the PCI memory address range that is dedicated for use as AGP memory address space while the GART is a translation table that translates AGP memory addresses into actual memory addresses which are often fragmented. The GART allows the graphics card to see the memory region available to it as a contiguous piece of memory range.

Host cycles that hit the aperture range are forwarded to the AGP bus without need for translation. The aperture size also determines the maximum amount of system memory that can be allocated to the AGP graphics card for texture storage.

Please note that the AGP aperture is merely address space, not actual physical memory in use. Although it is very common to hear people recommending that the AGP aperture size should be half the size of system memory, that is wrong!

The requirement for AGP memory space shrinks as the graphics card’s local memory increases in size. This is because the graphics card will have more local memory to dedicate to texture storage. So, if you upgrade to a graphics card with more memory, you shouldn’t be “deceived” into thinking that you will need even more AGP memory! On the contrary, a smaller AGP memory space will be required.

It is recommended that you keep the Graphics Aperture Size around 64 MB to 128 MB in size, even if your graphics card has a lot of onboard memory. This allows flexibility in the event that you actually need extra memory for texture storage. It will also keep the GART (Graphics Address Relocation Table) within a reasonable size.

 

Details

The Graphics Aperture Size BIOS feature does two things. It selects the size of the AGP aperture and it determines the size of the GART (Graphics Address Relocation Table).

The aperture is a portion of the PCI memory address range that is dedicated for use as AGP memory address space while the GART is a translation table that translates AGP memory addresses into actual memory addresses which are often fragmented. The GART allows the graphics card to see the memory region available to it as a contiguous piece of memory range.

Host cycles that hit the aperture address range are forwarded to the AGP bus without need for translation. The aperture size also determines the maximum amount of system memory that can be allocated to the AGP graphics card for texture storage.

The graphics aperture size is calculated using this formula :

AGP Aperture Size = (Maximum usable AGP memory size x 2) + 12 MB

As you can see, the actual available AGP memory space is less than half the AGP aperture size set in the BIOS. This is because the AGP controller needs a write-combined memory area equal in size to the actual AGP memory area (uncached), plus an additional 12 MB for virtual addressing.

Therefore, it isn’t simply a matter of determining how much AGP memory space you need. You also need to calculate the final aperture size by doubling the amount of AGP memory space desired and adding 12 MB to the total.
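The formula can be expressed directly in code; the helper names are illustrative:

```python
def aperture_size(usable_agp_mb):
    """AGP Aperture Size = (maximum usable AGP memory x 2) + 12 MB,
    per the formula above. All values are in MB."""
    return usable_agp_mb * 2 + 12

def usable_agp_memory(aperture_mb):
    """The inverse: how much AGP memory a given aperture actually yields."""
    return (aperture_mb - 12) // 2

print(aperture_size(15))       # 42 - a 42 MB aperture for 15 MB of AGP memory
print(usable_agp_memory(64))   # 26 - a 64 MB aperture yields only 26 MB
```

Note how the 64 MB aperture recommended later in this article actually provides only 26 MB of usable AGP memory space.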

Please note that the AGP aperture is merely address space, not actual physical memory in use. It doesn’t lock up any of your system memory. The physical memory is allocated and released as needed whenever Direct3D makes a “create non-local surface” call.

Windows 95 (with VGARTD.VXD) and later versions of Microsoft Windows use a waterfall method of memory allocation. Surfaces are first created in the graphics card’s local memory. When that memory is full, surface creation spills over into AGP memory and then system memory. So, memory usage is automatically optimized for each application. AGP and system memory are not used unless absolutely necessary.

Unfortunately, it is very common to hear people recommending that the AGP aperture size should be half the size of system memory. However, this is wrong for the same reason why swapfile size should not be fixed at 1/4 of system memory. Like the swapfile, the requirement for AGP memory space shrinks as the graphics card’s local memory increases in size. This is because the graphics card will have more local memory to use for texture storage!

This reduces the need for AGP memory. Therefore, when you upgrade to a graphics card with more memory, you shouldn’t be “deceived” into thinking that you will need even more AGP memory! On the contrary, a smaller AGP memory space will be required.


If your graphics card has very little graphics memory (4 MB to 16 MB), you may need to create a large AGP aperture, up to half the size of the system memory. The graphics card’s local memory and the AGP aperture size combined should be roughly around 64 MB. Please note that the size of the aperture does not correspond to performance! Increasing it to gargantuan proportions will not improve performance.

Still, it is recommended that you keep the Graphics Aperture Size around 64 MB to 128 MB. Now, why should we use such a large aperture size when most graphics cards come with large amounts of local memory? Shouldn’t we set it to the absolute minimum to save system memory?

  1. First of all, setting it to a lower value won’t save you any memory! Don’t forget that all the AGP aperture size does is limit the amount of system memory the AGP bus can appropriate whenever it needs more memory. It is not used unless absolutely necessary. So, setting the AGP aperture size to 64 MB doesn’t mean that 64 MB of your system memory will be appropriated and reserved for the AGP bus’ use. What it does is limit the AGP bus to a maximum of 64 MB of system memory when the need arises.
  2. Next, most graphics cards require an AGP aperture of at least 16 MB in size to work properly. Many new graphics cards require even more. This is probably because the virtual addressing space is already 12 MB in size! So, setting the AGP Aperture Size to 4 MB or 8 MB is a big no-no.
  3. We should also remember that many applications have AGP aperture size and texture storage requirements that are mostly unspecified. Some applications will not work with AGP apertures that are too small. And some games use so many textures that a large AGP aperture is needed even with graphics cards with large memory buffers.
  4. Finally, you should remember that the actual available AGP memory space is less than half the size of the AGP aperture size you set. If you want just 15 MB of AGP memory for texture storage, the AGP aperture has to be at least 42 MB in size! Therefore, it makes sense to set a large AGP aperture size in order to cater for all eventualities.

Now, while increasing the AGP aperture size beyond 128 MB won’t take up system memory, it would still be best to keep the aperture size in the 64 MB – 128 MB range so that the GART (Graphics Address Relocation Table) won’t become too big. The larger the GART gets, the longer it takes to scan through the GART and find the translated address for each AGP memory address request.

With local memory on graphics cards increasing to incredible sizes and texture compression commonplace, there’s really not much need for the AGP aperture size to grow beyond 64 MB. Therefore, it is recommended that you set the Graphics Aperture Size to 64 MB or at most, 128 MB.

 


AGP ISA Aliasing – The BIOS Optimization Guide

AGP ISA Aliasing

Common Options : Enabled, Disabled

 

Quick Review

The AGP ISA Aliasing BIOS feature allows you to determine if the system controller will perform ISA aliasing to prevent conflicts between ISA devices.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10]. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 

Details

The origin of the AGP ISA Aliasing feature can be traced back all the way to the original IBM PC. When the IBM PC was designed, it only had ten address lines (10-bits) for I/O space allocation. Therefore, the I/O space back in those days was only 1KB or 1024 bytes in size. Out of those 1024 available addresses, the first 256 addresses were reserved exclusively for the motherboard’s use, leaving the last 768 addresses for use by add-in devices. This would become a critical factor later on.

Later, motherboards began to utilize 16 address lines for I/O space allocation. This was supposed to create a contiguous I/O space of 64KB in size. Unfortunately, many ISA devices by then were only capable of doing 10-bit decodes. This was because they were designed for computers based on the original IBM design which only supported 10 address lines.

To circumvent this problem, they fragmented the 64KB I/O space into 1KB chunks. Unfortunately, because the first 256 addresses must be reserved exclusively for the motherboard, this means that only the first (or lower) 256 bytes of each 1KB chunk would be decoded in full 16-bits. All 10-bits-decoding ISA devices are, therefore, restricted to the last (or top) 768 bytes of the 1KB chunk of I/O space.

As a result, such ISA devices only have 768 I/O locations to use. Because there were so many ISA devices back then, this limitation created a lot of compatibility problems because the chances of two ISA cards using the same I/O space were high. When that happened, one or both of the cards would not work. Although they tried to reduce the chance of such conflicts by standardizing the I/O locations used by different classes of ISA devices, it was still not good enough.

Eventually, they came up with a workaround. Instead of giving each ISA device all the I/O space it wants in the 10-bit range, they gave each ISA device a much smaller number of I/O locations and made up for the difference by “borrowing” locations from the 16-bit I/O space! Here’s how they did it.

The ISA device would first take up a small number of I/O locations in the 10-bit range. It then extends its I/O space by using 16-bit aliases of the few 10-bit I/O locations taken up earlier. Because each I/O location in the 10-bit decode area has sixty-three 16-bit aliases, the total number of I/O locations expands from just 768 locations to a maximum of 49,152 locations!

More importantly, each ISA card will now require very few I/O locations in the 10-bit range. This drastically reduced the chances of two ISA cards conflicting with each other in the limited 10-bit I/O space. This workaround naturally became known as ISA Aliasing.
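The aliasing arithmetic above can be verified with a short Python sketch (the sample I/O location 0x300 is arbitrary):

```python
IO_SPACE = 1 << 16       # 64 KB of I/O addresses (16 address lines)
TEN_BIT_MASK = 0x3FF     # a 10-bit decoder only sees address bits [9:0]

def ten_bit_decode(addr):
    """What a 10-bit-decoding ISA card 'sees' for any 16-bit I/O address."""
    return addr & TEN_BIT_MASK

# Every 16-bit address whose low 10 bits match is an alias of the same
# 10-bit location: one per 1 KB chunk, 64 in total (the original + 63 aliases).
aliases = [a for a in range(IO_SPACE) if ten_bit_decode(a) == 0x300]
print(len(aliases))      # 64

# Only the top 768 addresses of each 1 KB chunk are usable by ISA cards
# (the first 256 are reserved for the motherboard), so the aliased total is:
print(768 * len(aliases))   # 49152 usable I/O locations
```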

Now, that’s all well and good for ISA devices. Unfortunately, the 10-bit limitation of ISA devices becomes a liability to devices that require 16-bit addressing. AGP and PCI devices come to mind. As noted earlier, only the first 256 addresses of the 1KB chunks support 16-bit addressing. What that really means is all 16-bit addressing devices are thus limited to only 256 bytes of contiguous I/O space!

When a 16-bit addressing device requires a larger contiguous I/O space, it will have to encroach on the 10-bit ISA I/O space. For example, if an AGP card requires 8KB of contiguous I/O space, it will take up eight of the 1KB I/O chunks (comprising eight 16-bit areas and eight 10-bit areas!). Because ISA devices are using ISA Aliasing to extend their I/O space, there’s now a high chance of I/O space conflicts between the ISA devices and the AGP card. When that happens, the affected cards will most probably fail to work.


There are two ways out of this mess. Obviously, you can limit the AGP card to a maximum of 256 bytes of contiguous I/O space. Of course, this is not an acceptable solution.

The second, and the preferred method, would be to throw away the restriction and provide the AGP card with all the contiguous I/O space it wants.

Here’s where the AGP ISA Aliasing BIOS feature comes in.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10] – the upper six bits. Only the first 10 bits (address bits 0 to 9) are used for decoding. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 


PCI Dynamic Bursting – The BIOS Optimization Guide

PCI Dynamic Bursting

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is similar to the Byte Merge feature.

When enabled, the PCI write buffer accumulates and merges 8-bit and 16-bit writes into 32-bit writes. This increases the efficiency of the PCI bus and improves its bandwidth.

When disabled, the PCI write buffer will not accumulate or merge 8-bit or 16-bit writes. It will just write them to the PCI bus as soon as the bus is free. As such, there may be a loss of PCI bus efficiency when 8-bit or 16-bit data is written to the PCI bus.

Therefore, it is recommended that you enable PCI Dynamic Bursting for better performance.

However, please note that PCI Dynamic Bursting may be incompatible with certain PCI network interface cards (also known as NICs). So, if your NIC won’t work properly, try disabling this feature.

 

Details

This BIOS feature is similar to the Byte Merge feature.

If you have already read about the CPU to PCI Write Buffer feature, you should know that the chipset has an integrated PCI write buffer which allows the CPU to immediately write up to four words (or 64-bits) of PCI writes to it. This frees up the CPU to work on other tasks while the PCI write buffer writes them to the PCI bus.

Now, the CPU doesn’t always write 32-bit data to the PCI bus. 8-bit and 16-bit writes can also take place. But while the CPU may only write 8-bits of data to the PCI bus, it is still considered as a single PCI transaction. This makes it equivalent to a 16-bit or 32-bit write in terms of PCI bandwidth! This reduces the effective PCI bandwidth, especially if there are many 8-bit or 16-bit CPU-to-PCI writes.

To solve this problem, the write buffer can be programmed to accumulate and merge 8-bit and 16-bit writes into 32-bit writes. The buffer then writes the merged data to the PCI bus. As you can see, merging the smaller 8-bit or 16-bit writes into a few large 32-bit writes reduces the number of PCI transactions required. This increases the efficiency of the PCI bus and improves its bandwidth.

This is where the PCI Dynamic Bursting BIOS feature comes in. It controls the byte merging capability of the PCI write buffer.

If it is enabled, every write transaction goes straight to the write buffer, where writes are accumulated until there is enough data to be written to the PCI bus in a single burst. This improves the PCI bus’ performance.

If you disable byte merging, all writes will still go to the PCI write buffer (if the CPU to PCI Write Buffer feature has been enabled). But the buffer won’t accumulate and merge the data. The data is written to the PCI bus as soon as the bus becomes free. This reduces PCI bus efficiency, particularly when 8-bit or 16-bit data is written to the PCI bus.
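The byte-merging behaviour described above can be sketched as a toy model; the dword-aligned buffer and the transaction counter are illustrative simplifications, not actual chipset logic:

```python
class MergingWriteBuffer:
    """Toy sketch of byte merging: 8-bit writes that fall within the
    same aligned 32-bit location are merged and sent to the PCI bus
    as one 32-bit transaction instead of four 8-bit ones."""

    def __init__(self):
        self.pending = {}       # aligned dword address -> bytearray(4)
        self.transactions = 0   # PCI transactions issued so far

    def write8(self, addr, value):
        dword = addr & ~0x3                # align down to a 32-bit boundary
        buf = self.pending.setdefault(dword, bytearray(4))
        buf[addr & 0x3] = value & 0xFF     # merge the byte into the dword

    def flush(self):
        # each merged dword goes out as a single PCI transaction
        self.transactions += len(self.pending)
        out = dict(self.pending)
        self.pending.clear()
        return out

buf = MergingWriteBuffer()
for offset, val in enumerate([0xDE, 0xAD, 0xBE, 0xEF]):
    buf.write8(0x1000 + offset, val)   # four separate 8-bit writes...
merged = buf.flush()                   # ...leave as ONE 32-bit transaction
print(buf.transactions)                # 1 instead of 4
```

Without merging, the same four byte writes would have cost four PCI transactions, which is exactly the bandwidth loss the text describes.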

Therefore, it is recommended that you enable PCI Dynamic Bursting for better performance.

Please note that like Byte Merge, this feature may not be compatible with certain PCI network interface cards. For more details, please check out the Byte Merge feature.


 


PCI Timeout – The BIOS Optimization Guide

PCI Timeout

Common Options : Enabled, Disabled

 

Quick Review

To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed. According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles for the initial read and 8 PCI clock cycles for each subsequent read.

If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Timeout feature.

With PCI Timeout enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the master device will rearbitrate for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, the target device just sends the completion status to the master device to complete the transaction.

One advantage of using PCI Timeout is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Timeout also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. Without PCI Timeout, all write-posted data will have to be flushed before another PCI transaction can occur.

It is highly recommended that you enable PCI Timeout for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

 

Details

This is the same as the Delayed Transaction BIOS feature because it refers to the PCI Delayed Transaction feature which is part of the PCI Revision 2.1 specifications.

On the PCI bus, there are many devices that may not meet the PCI target latency rule. Such devices include I/O controllers and bridges (i.e. PCI-to-PCI and PCI-to-ISA bridges). To meet PCI 2.1 compliance, the PCI maximum target latency rule must be observed.

According to this rule, a PCI 2.1-compliant device must service a read request within 16 PCI clock cycles (32 clock cycles for a host bus bridge) for the initial read and 8 PCI clock cycles for each subsequent read. If it cannot do so, the PCI bus will terminate the transaction so that other PCI devices can access the bus. But instead of rearbitrating for access (and failing to meet the minimum latency requirement again), the PCI 2.1-compliant device can make use of the PCI Timeout feature.

When a master device reads from a target device on the PCI bus but fails to meet the latency requirements, the transaction will be terminated with a Retry command. The master device will then have to rearbitrate for bus access. But if PCI Timeout had been enabled, the target device can independently continue the read transaction. So, when the master device successfully gains control of the bus and reissues the read command, the target device will have the data ready for immediate delivery. This ensures that the retried read transaction can be completed within the stipulated latency period.

If the delayed transaction is a write, the target device latches on the data and terminates the transaction if it cannot be completed within the target latency period. The master device then rearbitrates for bus access while the target device completes writing the data. When the master device regains control of the bus, it reissues the same write request. This time, instead of returning data (in the case of a read transaction), the target device sends the completion status to the master device to complete the transaction.
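The delayed-read flow above can be sketched as a toy state machine. The 16-clock limit comes from the text; the class and the returned status tuples are purely illustrative:

```python
class DelayedTransactionTarget:
    """Toy model of a PCI 2.1 delayed read: if the data cannot be
    delivered within the target latency, the target issues Retry but
    keeps working on the read in the background, so the retried
    request completes immediately."""

    def __init__(self, fetch_latency):
        self.fetch_latency = fetch_latency   # clocks needed to get the data
        self.pending = None                  # (addr, data) once completed

    def read(self, addr, max_latency=16):    # 16-clock initial-read rule
        if self.pending and self.pending[0] == addr:
            addr, data = self.pending        # retried request: data is ready
            self.pending = None
            return ("DATA", data)
        if self.fetch_latency > max_latency:
            # cannot meet the latency rule: Retry now, finish in background
            self.pending = (addr, f"mem[{addr:#x}]")
            return ("RETRY", None)
        return ("DATA", f"mem[{addr:#x}]")

slow = DelayedTransactionTarget(fetch_latency=40)
print(slow.read(0x8000))   # ('RETRY', None) - the bus is freed for others
print(slow.read(0x8000))   # ('DATA', 'mem[0x8000]') on the reissued read
```

Between the Retry and the reissued read, other PCI masters are free to use the bus, which is the efficiency gain described above.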


One advantage of using PCI Timeout is that it allows other PCI masters to use the bus while the transaction is being carried out on the target device. Otherwise, the bus will be left idling while the target device completes the transaction.

PCI Timeout also allows write-posted data to remain in the buffer while the PCI bus initiates a non-postable transaction and yet still adhere to the PCI ordering rules. The write-posted data will be written to memory while the target device is working on the non-postable transaction and flushed before the transaction is completed on the master device. Without PCI Timeout, all write-posted data will have to be flushed before another PCI transaction can occur.

As you can see, the PCI Timeout feature allows for more efficient use of the PCI bus as well as better PCI performance by allowing write-posting to occur concurrently with non-postable transactions. In this BIOS, the PCI 2.1 Compliance option allows you to enable or disable the PCI Timeout feature.

It is highly recommended that you enable PCI Timeout for better PCI performance and to meet PCI 2.1 specifications. Disable it only if your PCI cards cannot work properly with this feature enabled or if you are using PCI cards that are not PCI 2.1 compliant.

 
