Category Archives: The Famous Tech ARP BIOS Guide!

This is the new home of the famous Tech ARP BIOS Guide.

Created more than twenty years ago, this is the most extensive guide to motherboard BIOS settings, and the source of the published book – Breaking The BIOS Barrier : The Definitive BIOS Optimization Guide!

It currently covers over 400 BIOS settings, with more being added on a weekly basis. So make sure you check back often!

Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow the DIMM to transfer 64 bits of data every clock cycle.

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.
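The DQS line counts above follow directly from the 64 DQ lines being divided among the DRAM chips. A minimal sketch of that arithmetic, assuming a standard 64-bit (non-ECC) DIMM:

```python
# Sketch: DQS strobe count as a function of DRAM chip width (x4, x8, x16),
# assuming a standard 64-bit DIMM as described above.

def dqs_lines(chip_width: int, dimm_width: int = 64) -> int:
    """Each group of `chip_width` DQ lines shares one DQS strobe."""
    if dimm_width % chip_width != 0:
        raise ValueError("chip width must evenly divide the DIMM data width")
    return dimm_width // chip_width

print(dqs_lines(4))   # x4 chips  -> 16 DQS lines
print(dqs_lines(8))   # x8 chips  -> 8 DQS lines
print(dqs_lines(16))  # x16 chips -> 4 DQS lines
```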

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level, with a momentary rise in the relative ground voltage.

This reduces the quality of the signal, which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins gives the signal a higher voltage swing, improving signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All of this increases signal noise, which ironically negates the benefit of the higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people’s list of priorities. In certain cases, it may actually allow you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.
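The four options above reduce the drive strength by a fixed fraction of the maximum. A small illustrative mapping (the actual drive currents are vendor-specific and not disclosed):

```python
# Illustrative only: effective DQ drive strength relative to full strength
# for each BIOS option listed above. Real currents are vendor-specific.

REDUCTION = {
    "Not Reduced": 0.00,
    "Reduced 15%": 0.15,
    "Reduced 30%": 0.30,
    "Reduced 50%": 0.50,
}

def effective_strength(option: str) -> float:
    """Fraction of the maximum drive strength that remains."""
    return 1.0 - REDUCTION[option]

print(effective_strength("Reduced 30%"))  # 0.7
```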

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


PEG Port VC1/Map from The Tech ARP BIOS Guide!

PEG Port VC1/Map

Common Options : Disabled, TC1, TC2, TC3, TC4, TC5, TC6, TC7

 

PEG Port VC1/Map : A Quick Review

Unlike the sideband signals used to prioritize traffic on the AGP or PCI bus, PCI Express uses virtual channels and traffic classes (also called transaction classes) to decide which traffic gets priority access to the bus’ bandwidth at any particular time.

The PEG Port VC1/Map BIOS feature allows you to manually map a specific traffic class to the second (VC1) virtual channel of the PCI Express graphics port.

This is the higher-priority virtual channel, so mapping a specific traffic class to it will increase bandwidth allocation priority for that traffic class. However, this is not a requirement.

When set to Disabled, no traffic class will be manually mapped to the VC1 virtual channel.

When set to TC1, the TC1 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC0 to be given access to a higher priority virtual channel.

When set to TC2, the TC2 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC1 to be given access to a higher priority virtual channel.

When set to TC3, the TC3 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC2 to be given access to a higher priority virtual channel.

When set to TC4, the TC4 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC3 to be given access to a higher priority virtual channel.

When set to TC5, the TC5 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC4 to be given access to a higher priority virtual channel.

When set to TC6, the TC6 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC5 to be given access to a higher priority virtual channel.

When set to TC7, the TC7 traffic class will be manually mapped to the VC1 virtual channel. This allows only traffic with the highest priority to be given access to a higher priority virtual channel.

Generally, it is recommended that you leave this BIOS feature at the default setting of TC7. This allows only the highest priority traffic to be given access to the higher priority VC1 channel.

 

PEG Port VC1/Map : The Full Details

Unlike the sideband signals used to prioritize traffic on the AGP or PCI bus, PCI Express uses virtual channels and traffic classes (also called transaction classes) to decide which traffic gets priority access to the bus’ bandwidth at any particular time.

PCI Express requires each port to support at least one, and up to eight Virtual Channels (VC0 to VC7). Each port is also required to support at least one, and up to eight Traffic Classes (TC0 to TC7).

In short, each port must support at least VC0 and TC0. It can support additional virtual channels or traffic classes up to VC7 and TC7, but that is optional.

Virtual channels are used to allow easy division of bandwidth according to demand and availability. Each virtual channel has its own set of queues, buffers and control logic, which allow independent flow control between multiple virtual channels.

If more than one virtual channel is supported, each subsequent virtual channel has a higher priority than the default VC0 channel. In other words, VC1 has a higher priority than VC0, but a lower priority than VC2. The last virtual channel, VC7, has the highest priority.

Traffic classes, on the other hand, are used to separate system traffic into different priority levels. If more than one traffic class is supported, each subsequent traffic class is higher in priority than the default TC0 class.

In other words, TC1 traffic is higher in priority than TC0, but lower in priority than TC2. The last traffic class, TC7, is the highest in priority.


The PCI Express specifications require TC0 to be mapped to VC0 at the very least. This is essentially hardwired. The other virtual channels and traffic classes can be assigned to each other as required. There are just some considerations to note :

  • A single virtual channel can be shared by multiple traffic classes (e.g. VC0 can be shared by TC0, TC1 and TC2).
  • Each traffic class must be assigned to a virtual channel. There can be no unassigned traffic class.
  • Each traffic class can be assigned to only one virtual channel. It cannot be shared by multiple virtual channels (e.g. TC1 cannot be assigned to both VC0 and VC1).
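The assignment rules above can be sketched as a small validation routine. This is purely illustrative; the names and structure are not from any real API:

```python
# Sketch of the PCI Express TC->VC assignment rules listed above:
# every TC maps to exactly one VC (a dict enforces this by construction),
# TC0 is hardwired to VC0, and one VC may carry several TCs.

def validate_tc_vc_map(tc_to_vc, num_tcs=8):
    # Each traffic class must be assigned to a virtual channel.
    for tc in range(num_tcs):
        if tc not in tc_to_vc:
            raise ValueError(f"TC{tc} is unassigned")
    # TC0 must be mapped to VC0 (hardwired by the specification).
    if tc_to_vc[0] != 0:
        raise ValueError("TC0 must map to VC0")

# VC0 shared by TC0-TC6, while TC7 gets the higher-priority VC1 channel
# (the recommended default for this BIOS feature).
mapping = {tc: 0 for tc in range(7)}
mapping[7] = 1
validate_tc_vc_map(mapping)  # passes without raising
```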

The PEG Port VC1/Map BIOS feature allows you to manually map a specific traffic class to the second (VC1) virtual channel of the PCI Express graphics port. This is the higher-priority virtual channel, so mapping a specific traffic class to it will increase bandwidth allocation priority for that traffic class. However, this is not a requirement.

When set to Disabled, no traffic class will be manually mapped to the VC1 virtual channel.

When set to TC1, the TC1 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC0 to be given access to a higher priority virtual channel.

When set to TC2, the TC2 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC1 to be given access to a higher priority virtual channel.

When set to TC3, the TC3 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC2 to be given access to a higher priority virtual channel.

When set to TC4, the TC4 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC3 to be given access to a higher priority virtual channel.

When set to TC5, the TC5 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC4 to be given access to a higher priority virtual channel.

When set to TC6, the TC6 traffic class will be manually mapped to the VC1 virtual channel. This allows traffic with a higher priority than TC5 to be given access to a higher priority virtual channel.

When set to TC7, the TC7 traffic class will be manually mapped to the VC1 virtual channel. This allows only traffic with the highest priority to be given access to a higher priority virtual channel.

Generally, it is recommended that you leave this BIOS feature at the default setting of TC7. This allows only the highest priority traffic to be given access to the higher priority VC1 channel.

 



PCIE Spread Spectrum from The Tech ARP BIOS Guide!

PCIE Spread Spectrum

Common Options : Down Spread, Disabled

 

PCIE Spread Spectrum : A Quick Review

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The PCIE Spread Spectrum BIOS feature controls spread spectrum clocking of the PCI Express interconnect.

When set to Down Spread, the motherboard modulates the PCI Express interconnect’s clock signal downwards by a small amount. Because the clock signal is modulated downwards, there is a slight reduction in performance.

The amount of modulation is not revealed and depends on what the manufacturer has qualified for the motherboard. However, the greater the modulation, the greater the reduction in both EMI and performance.

When set to Disabled, the motherboard disables any modulation of the PCI Express interconnect’s clock signal.

Generally, frequency modulation via this feature should not cause any problems. Since the motherboard only modulates the signal downwards, system stability is not compromised.

However, spread spectrum clocking can interfere with the operation of timing-critical devices like clock-sensitive SCSI devices. If you are using such devices on the PCI Express interconnect, you must disable PCIE Spread Spectrum.

System stability may also be compromised if you are overclocking the PCI Express interconnect. Therefore, it is recommended that you disable this feature if you are overclocking the PCI Express interconnect.

Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the PCI Express interconnect frequency a little to provide a margin of safety.

If you are not overclocking the PCI Express interconnect, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature.

Otherwise, disable it to remove even the slightest possibility of stability issues.


 

PCIE Spread Spectrum : The Full Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted.

To prevent EMI from causing problems to other electronics, the FCC enacted Part 15 of the FCC regulations in 1975. It regulates the power output of such clock generators by limiting the amount of EMI they can generate. As a result, engineers use spread spectrum clocking to ensure that their motherboards comply with the FCC regulation on EMI levels.

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. Instead of generating a typical waveform, the clock signal continuously varies around the target frequency within a tight range. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

[Figure: Clock signal (courtesy of National Instruments)]

[Figure: The same clock signal, with spread spectrum clocking]

The PCIE Spread Spectrum BIOS feature controls spread spectrum clocking of the PCI Express interconnect.

When set to Down Spread, the motherboard modulates the PCI Express interconnect’s clock signal downwards by a small amount. Because the clock signal is modulated downwards, there is a slight reduction in performance.

The amount of modulation is not revealed and depends on what the manufacturer has qualified for the motherboard. However, the greater the modulation, the greater the reduction in both EMI and performance.
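Since down-spreading only modulates the clock below its nominal frequency, its effect is easy to sketch. The 0.5% spread amount below is purely an assumption for illustration; as noted above, real values are vendor-qualified and undisclosed:

```python
# Illustrative sketch of down-spread modulation: the clock varies only
# downwards from its nominal frequency. The 0.5% spread is an assumption.

def down_spread_range(nominal_mhz, spread_pct):
    """Return the (min, max) clock frequency under down-spreading."""
    return nominal_mhz * (1.0 - spread_pct / 100.0), nominal_mhz

low, high = down_spread_range(100.0, 0.5)  # 100 MHz PCIe reference clock
print(low, high)  # 99.5 100.0
```

This also shows why down-spreading costs a little performance: the average clock frequency is always slightly below nominal.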

When set to Disabled, the motherboard disables any modulation of the PCI Express interconnect’s clock signal.

Generally, frequency modulation via this feature should not cause any problems. Since the motherboard only modulates the signal downwards, system stability is not compromised.

However, spread spectrum clocking can interfere with the operation of timing-critical devices like clock-sensitive SCSI devices. If you are using such devices on the PCI Express interconnect, you must disable PCIE Spread Spectrum.

System stability may also be compromised if you are overclocking the PCI Express interconnect. Of course, this depends on the amount of modulation, the extent of overclocking and other factors like temperature, voltage levels, etc. As such, the problem may not readily manifest itself immediately.

Therefore, it is recommended that you disable this feature if you are overclocking the PCI Express interconnect. You will be able to achieve better overclockability, at the expense of higher EMI.

Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the PCI Express interconnect frequency a little to provide a margin of safety.

If you are not overclocking the PCI Express interconnect, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature.

Otherwise, disable it to remove even the slightest possibility of stability issues.

 



IOQD from The Tech ARP BIOS Guide!

IOQD

Common Options : 1, 4, 8, 12

 

IOQD : A Quick Review

The IOQD BIOS feature controls the use of the processor bus’ command queue.

Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12).

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one.

Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled.

This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.

Interestingly, this IOQD feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

 

IOQD : The Details

For greater performance at high clock speeds, motherboard chipsets now feature a pipelined processor bus. The multiple stages in this pipeline can also be used to queue up multiple commands to the processor.

This command queuing greatly improves performance because it effectively masks the latency of the processor bus. In optimal situations, the amount of latency between each succeeding command can be reduced to only a single clock cycle!

The IOQD BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available.

Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12). This is because the IOQD BIOS feature does not actually allow you to select the number of commands that can be queued.

It merely allows you to disable or enable the command queuing capability of the processor bus pipeline. The number of commands that can be queued depends entirely on the number of stages in the pipeline.

As such, you can expect to see IOQD associated with options like Enabled and Disabled on some motherboards.

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands.

If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

As you can see, this reduces performance as the processor has to wait for each command to filter down the pipeline. The severity of the effect depends greatly on the depth of the pipeline. The deeper the pipeline, the greater the effect.

If the second queue depth option is 4, this means that the processor bus pipeline has 4 stages in it. Selecting this option allows the queuing of up to 4 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

If the second queue depth option is 8, this means that the processor bus pipeline has 8 stages in it. Selecting this option allows the queuing of up to 8 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

If the second queue depth option is 12, this means that the processor bus pipeline has 12 stages in it. Selecting this option allows the queuing of up to 12 commands in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

Please note that a latency of only 1 clock cycle is possible only if the pipeline is completely filled. If the pipeline is only partially filled, the latency affecting one or more of the commands will be more than 1 clock cycle. Still, the average latency for each command will be much lower than it would be with command queuing disabled.
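The latency masking described above can be put into rough numbers. This is a back-of-the-envelope model, not a cycle-accurate one:

```python
# Rough model of command latency on a pipelined processor bus, assuming
# the pipeline stays filled once queuing begins. Purely illustrative.

def avg_latency(pipeline_depth, num_commands, queuing):
    """Average clock cycles per command."""
    if not queuing:
        # Every command waits for the previous one to drain the pipeline.
        return float(pipeline_depth)
    # First command fills the pipeline; the rest complete 1 cycle apart.
    return (pipeline_depth + (num_commands - 1)) / num_commands

print(avg_latency(12, 1000, queuing=False))  # 12.0 cycles per command
print(avg_latency(12, 1000, queuing=True))   # 1.011 cycles per command
```

This also illustrates why the penalty for disabling queuing grows with pipeline depth: a 12-stage pipeline pays 12 cycles per command instead of roughly 1.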

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.

Interestingly, this IOQD feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds.

To overclock beyond what’s normally possible, you can try disabling command queuing. This may reduce performance but it will make the processor more stable and may allow it to be further overclocked.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

 



SDRAM Trrd Timing Value from The Tech ARP BIOS Guide!

SDRAM Trrd Timing Value

Common Options : 2 cycles, 3 cycles

 

SDRAM Trrd Timing Value : A Quick Review

The SDRAM Trrd Timing Value BIOS feature specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device.

The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges.

For desktop PCs, a delay of 2 cycles is recommended as current surges aren’t really important. The performance benefit of using the shorter 2 cycles delay is of far greater interest.

The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device’s read and write performance.

Switch to 3 cycles only when there are stability problems with the 2 cycles setting.

 

SDRAM Trrd Timing Value : The Details

The Bank-to-Bank Delay or tRRD is a DDR timing parameter which specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device, even to different internal banks.

The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges.

Because this timing parameter is DDR device-specific, it may differ from one DDR device to another. DDR DRAM manufacturers typically specify the tRRD parameter based on the row ACTIVATE activity to limit current surges within the device.

If you let the BIOS automatically configure your DRAM parameters, it will retrieve the manufacturer-set tRRD value from the SPD (Serial Presence Detect) chip. However, you may want to manually set the tRRD parameter to suit your requirements.

For desktop PCs, a delay of 2 cycles is recommended as current surges aren’t really important.

This is because the desktop PC essentially has an unlimited power supply, and even the most basic desktop cooling solution is sufficient to dissipate any extra thermal load that the current surges may impose.

The performance benefit of using the shorter 2 cycles delay is of far greater interest. The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device’s read and write performance.

Note that the shorter delay of 2 cycles works with most DDR DIMMs, even at 133 MHz (266 MHz DDR). However, DDR DIMMs running beyond 133 MHz (266 MHz DDR) may need to introduce a delay of 3 cycles between each successive bank activation.
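The difference between the 2 cycles and 3 cycles settings can be expressed in absolute time, at the 133 MHz (266 MHz DDR) memory clock mentioned above:

```python
# Rough conversion of tRRD cycle counts into nanoseconds at a given
# memory clock. 133 MHz is the example clock used in the text above.

def trrd_ns(cycles, clock_mhz):
    """Minimum ACTIVATE-to-ACTIVATE delay in nanoseconds."""
    return cycles * 1000.0 / clock_mhz

print(round(trrd_ns(2, 133.0), 2))  # 2 cycles at 133 MHz -> ~15.04 ns
print(round(trrd_ns(3, 133.0), 2))  # 3 cycles at 133 MHz -> ~22.56 ns
```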

Select 2 cycles whenever possible for optimal DDR DRAM performance.

Switch to 3 cycles only when there are stability problems with the 2 cycles setting.

In mobile devices like laptops, however, it is advisable to use the longer delay of 3 cycles.

Doing so limits the current surges that accompany row activations. This reduces the DDR device’s power consumption and thermal output, both of which should be of great interest to the road warrior.

 



VGA Share Memory Size from The Tech ARP BIOS Guide!

VGA Share Memory Size

Common Options for UMA : 1MB, 4MB, 8MB, 16MB, 32MB, 64MB, 128MB

Common Options for DVMT : 1MB, 8MB

 

VGA Share Memory Size : A Quick Review

The VGA Share Memory Size BIOS feature controls the amount of system memory that is allocated to the integrated graphics processor when the system boots up.

However, its effect depends on whether your motherboard supports the older Unified Memory Architecture (UMA) or the newer Dynamic Video Memory Technology (DVMT).

If you have a motherboard that supports UMA, the memory size you select determines the maximum amount of system memory that is allocated to the graphics processor. Once allocated, it can only be used as graphics memory. It is no longer accessible to the operating system or applications.

Therefore, it is recommended that you select the absolute minimum amount of system memory that the graphics processor requires for your monitor. You can calculate it by multiplying the resolution and colour depth that you are using. Of course, if you intend to play 3D games, you will need to allocate more memory.

If you have a motherboard that supports DVMT, the memory size you select determines the maximum amount of system memory that is pre-allocated to the graphics processor. Once allocated, it can only be used as graphics memory. It is no longer accessible to the operating system or applications.

However, unlike in a UMA system, this memory is only allocated for use during the boot process or with MS-DOS or legacy operating systems. Additional system memory is allocated only after the graphics driver is loaded. It is recommended that you set it to 8MB as this allows for high-resolution splash screens as well as higher resolutions in MS-DOS applications and games.

 

VGA Share Memory Size : The Full Details

Some motherboard chipsets come with an integrated graphics processor. To reduce costs, it usually makes use of UMA (Unified Memory Architecture) or DVMT (Dynamic Video Memory Technology) for its memory requirements.

Both technologies allow the integrated graphics processor to requisition some system memory for use as graphics memory. This reduces cost by obviating the need for dedicated graphics memory. Of course, it has some disadvantages :

  • Allocating system memory to the graphics processor reduces the amount of system memory available for the operating system and programs to use.
  • Sharing system memory with the graphics processor saturates the memory bus and reduces the amount of memory bandwidth for both the processor and the graphics processor.

Therefore, integrated graphics processors are usually unsuitable for high-demand 3D applications and games. They are best used for basic 2D graphics and video functions.

The VGA Share Memory Size BIOS feature controls the amount of system memory that is allocated to the integrated graphics processor when the system boots up.

However, its effect depends on whether your motherboard supports the older Unified Memory Architecture (UMA) or the newer Dynamic Video Memory Technology (DVMT).

If you have a motherboard that supports UMA, the memory size you select determines the maximum amount of system memory that is allocated to the graphics processor. Once allocated, it can only be used as graphics memory. It is no longer accessible to the operating system or applications.

Therefore, it is recommended that you select the absolute minimum amount of system memory that the graphics processor requires for your monitor. You can calculate it by multiplying the resolution and colour depth that you are using.

For example, if you use a resolution of 1600 x 1200 and a colour depth of 32-bits, the amount of graphics memory you require will be 1600 x 1200 x 32-bits = 61,440,000 bits or 7.68 MB.

After doubling that to allow for double buffering, the minimum amount of graphics memory you need would be 15.36 MB. You should set this BIOS feature to 16MB in this example.
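As a rough sketch, the sizing rule above can be expressed in a few lines of Python. The list of selectable sizes is an assumption for illustration only; your BIOS may offer different options.

```python
# Sketch of the framebuffer sizing rule described above.
# The tuple of selectable sizes is hypothetical - check your own BIOS.

def min_vga_share_mb(width, height, bpp, double_buffered=True):
    """Return the minimum graphics memory (in decimal MB) for a given mode."""
    bits = width * height * bpp          # one full frame, in bits
    mb = bits / 8 / 1_000_000            # bits -> bytes -> megabytes
    if double_buffered:
        mb *= 2                          # front buffer + back buffer
    return mb

def pick_bios_option(required_mb, options=(1, 4, 8, 16, 32, 64)):
    """Pick the smallest BIOS option that covers the requirement."""
    return next(o for o in options if o >= required_mb)

needed = min_vga_share_mb(1600, 1200, 32)   # 15.36 MB, as in the text
print(needed, pick_bios_option(needed))     # 15.36 -> 16 MB setting
```

Running it reproduces the worked example: 15.36 MB required, so the 16MB setting is the smallest safe choice.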

Of course, if you intend to play 3D games, you will need to allocate more memory. But please remember that once allocated as graphics memory, it is no longer available to the operating system or applications. You need to balance the performance of your 3D games with that of your operating system and applications.

If you have a motherboard that supports DVMT, the memory size you select determines the maximum amount of system memory that is pre-allocated to the graphics processor. Once allocated, it can only be used as graphics memory. It is no longer accessible to the operating system or applications.

However, unlike in a UMA system, this memory is only allocated for use during the boot process or with MS-DOS or legacy operating systems. Additional system memory is allocated only after the graphics driver is loaded. Therefore, the amount of system memory that can be selected is small – only a choice of 1MB or 8MB.

It is recommended that you set it to 8MB as this allows for high-resolution splash screens as well as higher resolutions in MS-DOS applications and games.

 

Recommended Reading

Go Back To > Tech ARP BIOS Guide | Computer | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Hard Disk Pre-Delay from The Tech ARP BIOS Guide!

Hard Disk Pre-Delay

Common Options : Disabled, 3 Seconds, 6 Seconds, 9 Seconds, 12 Seconds, 15 Seconds, 21 Seconds, 30 Seconds

 

Hard Disk Pre-Delay : A Quick Review

The Hard Disk Pre-Delay BIOS feature allows you to force the BIOS to delay the initialisation of your hard disk drives for up to 30 seconds. The delay allows your IDE devices more time to spin up before the BIOS initializes them.

If you do not use old IDE drives and the BIOS has no problem initializing your IDE devices, it is recommended that you disable this BIOS feature for the shortest possible booting time. Most IDE devices will have no problem spinning up in time for initialisation.

But if one or more of your IDE devices fail to initialize during the boot up process, start with a delay of 3 Seconds. If that doesn’t help, gradually increase the delay until all your IDE devices initialize properly during the boot up process.

 

Hard Disk Pre-Delay : The Full Details

Regardless of its shortcomings, the IDE standard is remarkably backward-compatible. Every upgrade of the standard was designed to be fully compatible with older IDE devices. So, you can actually use the old 40 MB hard disk drive that came with your ancient 386 system in your much newer Athlon XP system!

However, even backward compatibility cannot account for the slower motors used in the older IDE drives. Crucially, motherboards are capable of booting up much faster these days, initialising IDE devices much earlier.

Unfortunately, this also means that some older IDE drives will not be able to spin up in time to be initialized! When this happens, the BIOS will not be able to detect that IDE drive and the drive will not be accessible even though it is actually running just fine.

This is where the Hard Disk Pre-Delay BIOS feature comes in. It allows you to force the BIOS to delay the initialisation of your hard disk drives for up to 30 seconds. The delay allows your IDE devices more time to spin up before the BIOS initializes them.

If you do not use old IDE drives and the BIOS has no problem initializing your IDE devices, it is recommended that you disable this BIOS feature for the shortest possible booting time. Most IDE devices will have no problem spinning up in time for initialization.

But if one or more of your IDE devices fail to initialize during the boot up process, start with a delay of 3 Seconds. If that doesn’t help, gradually increase the delay until all your IDE devices initialize properly during the boot up process.
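The trial-and-error procedure above amounts to walking up the list of delay options until every drive is detected. In this sketch, `drive_spinup_ok` is a hypothetical stand-in for the real-world check of whether all your IDE devices initialized at a given delay.

```python
# Illustrative sketch of the tuning procedure described above:
# try each pre-delay option in order until the drives initialise.
# drive_spinup_ok() is a stand-in for "did every IDE device appear?".

OPTIONS = [0, 3, 6, 9, 12, 15, 21, 30]  # seconds; 0 = Disabled

def pick_pre_delay(drive_spinup_ok):
    """Return the shortest pre-delay at which all drives initialise."""
    for delay in OPTIONS:
        if drive_spinup_ok(delay):
            return delay
    raise RuntimeError("drives failed to initialise even at 30 s")

# Example: suppose an old drive needs about 8 seconds to spin up.
print(pick_pre_delay(lambda d: d >= 8))   # the 9-second option
```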

 



Write Data In to Read Delay from The Tech ARP BIOS Guide!

Write Data In to Read Delay

Common Options : 1 Cycle, 2 Cycles

 

Write Data In to Read Delay : A Quick Review

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 

Write Data In to Read Delay : The Full Details

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

Please note that this is only applicable for read commands that follow a write operation. Consecutive read operations or writes that follow reads are not affected.

If a 1 Cycle delay is selected, every read command that follows a write operation will be delayed one clock cycle before it is issued.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

If a 2 Cycles delay is selected, every read command that follows a write operation will be delayed two clock cycles before it is issued.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

By default, this BIOS feature is set to 2 Cycles. This meets JEDEC’s specification of 2 clock cycles for write-to-read command delay in DDR400 memory modules. DDR266 and DDR333 memory modules require a write-to-read command delay of only 1 clock cycle.
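A minimal model of the tWTR rule, assuming a simple cycle counter: only a read that follows a write to the same bank is held back, as noted above.

```python
# Toy model of the tWTR timing rule described above.

def issue_cycle(prev_op, prev_done_cycle, next_op, twtr):
    """Earliest cycle at which next_op may issue, honouring tWTR.

    Only a read that follows a write to the same internal bank is
    delayed; read->read and read->write sequences are unaffected.
    """
    if prev_op == "write" and next_op == "read":
        return prev_done_cycle + twtr
    return prev_done_cycle

# With the JEDEC DDR400 default of tWTR = 2, a write completing on
# cycle 10 holds back the next same-bank read until cycle 12.
print(issue_cycle("write", 10, "read", 2))   # 12
print(issue_cycle("read", 10, "read", 2))    # 10 - no extra delay
```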

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 



NX Technology from The Tech ARP BIOS Guide!

NX Technology

Common Options : Enabled, Disabled

 

NX Technology : A Quick Review

The NX Technology BIOS feature is actually a toggle for the processor’s No Execute feature.

In fact, the acronym NX is short for No Execute and is specific to AMD’s implementation. Intel’s implementation is called XD, short for Execute Disable.

When enabled, the processor prevents the execution of code in data-only memory pages. This provides some protection against buffer overflow attacks.

When disabled, the processor will not restrict code execution in any memory area. This makes the processor more vulnerable to buffer overflow attacks.

It is highly recommended that you enable the NX Technology BIOS feature for increased protection against buffer overflow attacks.

However, please note that the No Execute feature is a hardware feature present only in the AMD64 family of processors. Older AMD processors do not support the No Execute feature. With such processors, this BIOS feature has no effect.

In addition, you must use an operating system that supports the No Execute feature. Currently, that includes the following operating systems :

  • Microsoft Windows Server 2003 with Service Pack 1, or newer
  • Microsoft Windows XP with Service Pack 2, or newer
  • Microsoft Windows XP Tablet PC Edition 2005, or newer
  • SUSE Linux 9.2, or newer
  • Red Hat Enterprise Linux 3 Update 3, or newer

Incidentally, some applications and device drivers attempt to execute code from the kernel stack for improved performance. This will cause a page-fault error if No Execute is enabled. In such cases, you will need to disable this BIOS feature.

 

NX Technology : The Full Details

Buffer overflow attacks are a major threat to networked computers. For example, a worm may infect a computer and flood the processor with code, grinding the system to a halt. The worm will also propagate throughout the network, paralyzing every system it infects.

Due to the prevalence of such attacks, AMD added a feature called No Execute page protection, also known as Enhanced Virus Protection (EVP) to the AMD64 processors. This feature is designed to protect the computer against certain buffer overflow attacks.

Processors that come with this feature can restrict memory areas in which application code can be executed. When paired with an operating system that supports the No Execute feature, the processor adds a new attribute bit (the No Execute bit) in the paging structures used for address translation.

If the No Execute bit of a memory page is set to 1, that page can only be used to store data. It will not be used to store executable code. But if the No Execute bit of a memory page is set to 0, that page can be used to store data or executable code.

The processor will henceforth check the No Execute bit whenever it executes code. It will not execute code in a memory page with the No Execute bit set to 1. Any attempt to execute code in such a protected memory page will result in a page-fault exception.

So, if a worm or virus inserts code into the buffer, the processor prevents the code from being executed and the attack fails. This also prevents the worm or virus from propagating to other computers on the network.
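The page-level check described above can be modelled in a few lines. This is purely an illustration of the No Execute bit's semantics, not real paging code; the dict standing in for a paging-structure entry is an assumption for the sketch.

```python
# Toy model of No Execute page protection, matching the description
# above: each page-table entry carries an NX bit, and fetching code
# from a page whose NX bit is 1 raises a page fault.

class PageFault(Exception):
    pass

def execute_from(page):
    """Simulate an instruction fetch from the given page entry."""
    if page["nx"]:
        raise PageFault("attempt to execute code in a data-only page")
    return "executed"

data_page = {"nx": 1}   # data only - e.g. a network receive buffer
code_page = {"nx": 0}   # a normal executable page

print(execute_from(code_page))          # executed
try:
    execute_from(data_page)             # injected worm code
except PageFault as fault:
    print("blocked:", fault)            # the attack fails
```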

The NX technology BIOS feature is actually a toggle for the processor’s No Execute feature. In fact, the acronym NX is short for No Execute and is specific to AMD’s implementation. Intel’s implementation is called XD, short for Execute Disable.

When enabled, the processor prevents the execution of code in data-only memory pages. This provides some protection against buffer overflow attacks.

When disabled, the processor will not restrict code execution in any memory area. This makes the processor more vulnerable to buffer overflow attacks.

It is highly recommended that you enable the NX Technology BIOS feature for increased protection against buffer overflow attacks.

However, please note that the No Execute feature is a hardware feature present only in the AMD64 family of processors. Older AMD processors do not support the No Execute feature. With such processors, this BIOS feature has no effect.

In addition, you must use an operating system that supports the No Execute feature. Currently, that includes the following operating systems :

  • Microsoft Windows Server 2003 with Service Pack 1, or newer
  • Microsoft Windows XP with Service Pack 2, or newer
  • Microsoft Windows XP Tablet PC Edition 2005, or newer
  • SUSE Linux 9.2, or newer
  • Red Hat Enterprise Linux 3 Update 3, or newer

Incidentally, some applications and device drivers attempt to execute code from the kernel stack for improved performance. This will cause a page-fault error if No Execute is enabled. In such cases, you will need to disable this BIOS feature.

 



V-Link Data 2X Support From The Tech ARP BIOS Guide!

V-Link Data 2X Support

Common Options : Enabled, Disabled

 

V-Link Data 2X Support : A Quick Review

In VIA chipsets, the IDE / SATA controller (known as the VIA DriveStation) is linked to the south bridge chip using a dedicated V-Link bus that offers twice as much bandwidth as the PCI bus.

However, there are occasions where it may cause data corruption as well as boot failures with some storage disk drives. This is where V-Link Data 2X Support comes in.

The V-Link Data 2X Support BIOS feature controls the operation of the V-Link bus between the VIA DriveStation and the south bridge chip.

It is slaved to the Serial ATA Controller BIOS feature. If the Serial ATA Controller BIOS feature is disabled, this BIOS feature will be grayed out.

When enabled, the V-Link bus connecting the VIA DriveStation to the south bridge chip will run at full speed, delivering 266 MB/s of bandwidth.

When disabled, the V-Link bus connecting the VIA DriveStation to the south bridge chip will run at half speed, delivering 133 MB/s of bandwidth.

It is recommended that you enable this BIOS feature for maximum performance from storage devices attached to the VIA DriveStation. However, if you experience data corruption or boot problems, disable this BIOS feature.

 

V-Link Data 2X Support : The Full Details

In VIA chipsets, the IDE / SATA controller (known as the VIA DriveStation) no longer runs off the PCI bus. To ensure maximum performance, it is linked to the south bridge chip using a dedicated 8-bit, quad-pumped V-Link bus running at 66 MHz.

This V-Link bus offers twice as much bandwidth as the PCI bus, allowing data transfers of up to 266 MB/s.

In addition, the IDE/SATA controller does not need to share this bandwidth with any other device, as it would have to if it were on the PCI bus.
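The 266 MB/s and 133 MB/s figures fall straight out of the bus geometry described above. One way to express it, assuming the nominal 66.6 MHz clock and treating half speed as two transfers per clock instead of four:

```python
# Bandwidth arithmetic for the V-Link bus described above:
# an 8-bit (1-byte) path, clocked at 66.6 MHz, quad-pumped.

def bus_bandwidth_mb_s(width_bytes, clock_mhz, transfers_per_clock):
    """Peak bandwidth in MB/s for a pumped parallel bus."""
    return width_bytes * clock_mhz * transfers_per_clock

print(bus_bandwidth_mb_s(1, 66.6, 4))   # ~266 MB/s at full speed
print(bus_bandwidth_mb_s(1, 66.6, 2))   # ~133 MB/s at half speed
```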

However, all is not peachy with the use of the faster V-Link bus.

There are occasions where it may cause data corruption as well as boot failures with some storage drives.

This issue only affects storage drives connected to the VIA DriveStation. It does not affect drives connected to third-party IDE/SATA controllers because they use the PCI bus.

This is where V-Link Data 2X Support comes in. It controls the operation of the V-Link bus between the VIA DriveStation and the south bridge chip.

It is slaved to the Serial ATA Controller BIOS feature. If the Serial ATA Controller BIOS feature is disabled, this BIOS feature will be grayed out.

When enabled, the V-Link bus connecting the VIA DriveStation to the south bridge chip will run at full speed, delivering 266 MB/s of bandwidth.

When disabled, the V-Link bus connecting the VIA DriveStation to the south bridge chip will run at half speed, delivering 133 MB/s of bandwidth.

It is recommended that you enable this BIOS feature for maximum performance from storage devices attached to the VIA DriveStation. However, if you experience data corruption or boot problems, disable this BIOS feature.

 



AGPCLK / CPUCLK from The Tech ARP BIOS Guide!

AGPCLK / CPUCLK

Common Options : 1/1, 2/3, 1/2, 2/5

 

AGPCLK / CPUCLK Quick Review

The AGPCLK / CPUCLK BIOS feature allows you to set the ratio between the AGP clock speed and the CPU bus (also known as the front side bus, or FSB) clock speed.

This allows you to keep the AGP bus speed within specifications (66 MHz) while using a much faster CPU bus speed.

When the ratio is set to 1/1, the AGP bus will run at the same speed as the CPU bus. This is meant for processors that use the 66 MHz bus speed, like the older Intel Celeron processors.

The 2/3 divider is used when you use a processor running with a bus speed of 100 MHz. This divider will cut the AGP bus speed down to 66 MHz.

The 1/2 divider is used when you use a processor running with a bus speed of 133 MHz, while the 2/5 divider is used with a bus speed of 166 MHz. Both cut the AGP bus speed down to 66 MHz.

Generally, you should set this feature according to the CPU bus speed you are using.

This means using the 1/1 divider for 66 MHz bus speed CPUs, the 2/3 divider for 100 MHz bus speed CPUs, the 1/2 divider for 133 MHz CPUs and the 2/5 divider for 166 MHz CPUs.

 

AGPCLK / CPUCLK Details

The AGP bus clock speed is referenced from the CPU bus clock speed. However, the AGP bus was only designed to run at 66 MHz while the CPU bus runs anywhere from 66 MHz to 133 MHz.

Therefore, a suitable AGP bus to CPU bus clock speed ratio or divider must be selected to ensure that the AGP bus won’t run way beyond 66 MHz. This is where the AGPCLK / CPUCLK BIOS option comes in.

When the ratio is set to 1/1, the AGP bus will run at the same speed as the CPU bus. This is meant for processors that use the 66 MHz bus speed, like the older Intel Celeron processors.

The 2/3 divider is used when you use a processor running with a bus speed of 100 MHz. This divider will cut the AGP bus speed down to 66 MHz.

The 1/2 divider was introduced with motherboards that provide 133 MHz bus speed support. Such motherboards need the 1/2 divider to make the AGP bus run at the standard 66 MHz. Without this divider, the AGP bus would have to run at 89 MHz, which is more than what most AGP cards can withstand.

The 2/5 divider was introduced with motherboards that provide 166 MHz bus speed support. Such motherboards need the 2/5 divider to make the AGP bus run at the standard 66 MHz. Without this divider, the AGP bus would have to run at 83 MHz, which is more than what most AGP cards can withstand.

Generally, you should set this feature according to the CPU bus speed you are using. This means using the 1/1 divider for 66 MHz bus speed CPUs, the 2/3 divider for 100 MHz bus speed CPUs, the 1/2 divider for 133 MHz CPUs and the 2/5 divider for 166 MHz CPUs.
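The divider selection rule above can be sketched as picking the ratio that lands the AGP clock nearest its 66 MHz specification. This is an illustration of the rule, not actual BIOS logic.

```python
# Sketch of the divider rule above: choose the ratio that brings the
# AGP clock closest to the nominal 66.6 MHz specification.

from fractions import Fraction

DIVIDERS = [Fraction(1, 1), Fraction(2, 3), Fraction(1, 2), Fraction(2, 5)]

def agp_clock(fsb_mhz, divider):
    """Resulting AGP clock for a given FSB speed and divider."""
    return float(fsb_mhz * divider)

def pick_divider(fsb_mhz, target=66.6):
    """Pick the divider whose AGP clock is nearest the AGP spec."""
    return min(DIVIDERS, key=lambda d: abs(agp_clock(fsb_mhz, d) - target))

for fsb in (66, 100, 133, 166):
    d = pick_divider(fsb)
    print(f"{fsb} MHz FSB -> divider {d} -> AGP {agp_clock(fsb, d):.1f} MHz")
```

This reproduces the pairings in the text: 1/1 for 66 MHz, 2/3 for 100 MHz, 1/2 for 133 MHz, and 2/5 for 166 MHz.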

If you are overclocking the CPU bus, you are supposed to reduce the divider to ensure that the AGP bus speed remains within specifications. However, most AGP cards can run with the AGP bus overclocked to 75 MHz. Some would even happily run at 83 MHz! However, anything above 83 MHz would be a little iffy.

In most cases, you can still stick with the original AGP bus / CPU bus clock divider when you overclock the CPU. This means that the AGP bus will be overclocked as well. But as long as the AGP card can work at the higher clock speed, it shouldn’t be a problem. In fact, you can expect a linear increase in AGP bus performance.

Be warned though – overclocking the AGP bus can potentially damage your AGP card. So, be circumspect when you overclock the AGP bus. 75 MHz is normally the safe limit for most AGP cards.

 



CPU / DRAM CLK Synch CTL – The Tech ARP BIOS Guide!

CPU / DRAM CLK Synch CTL

Common Options : Synchronous, Asynchronous, Auto

 

Quick Review of CPU / DRAM CLK Synch CTL

The CPU / DRAM CLK Synch CTL BIOS feature offers a clear-cut way of controlling the memory controller’s operating mode.

When set to Synchronous, the memory controller will set the memory clock to the same speed as the processor bus.

When set to Asynchronous, the memory controller will allow the memory clock to run at any speed.

When set to Auto, the operating mode of the memory controller will depend on the memory clock you set.

It is recommended that you select the Synchronous operating mode. This generally provides the best performance, even if your memory modules are capable of higher clock speeds.

 

Details of CPU / DRAM CLK Synch CTL

The memory controller can operate either synchronously or asynchronously.

In the synchronous mode, the memory clock runs at the same speed as the processor bus speed.

In the asynchronous mode, the memory clock is allowed to run at a different speed than the processor bus.

While the asynchronous mode allows the memory controller to support memory modules of different clock speeds, it requires the use of FIFO (First In, First Out) buffers and resynchronizers. This increases the latency of the memory bus, and reduces performance.

Running the memory controller in synchronous mode allows the memory controller to bypass the FIFO buffers and deliver data directly to the processor bus. This reduces the latency of the memory bus and greatly improves performance.

Normally, the synchronicity of the memory controller is determined by the memory clock. If the memory clock is the same as the processor bus speed, then the memory controller is in the synchronous mode. Otherwise, it is in the asynchronous mode.

The CPU / DRAM CLK Synch CTL BIOS feature, however, offers a more clear-cut way of controlling the memory controller’s operating mode.

When set to Synchronous, the memory controller will set the memory clock to the same speed as the processor bus. Even if you set the memory clock to run at a higher speed than the front side bus, the memory controller automatically selects a lower speed that matches the processor bus speed.

When set to Asynchronous, the memory controller will allow the memory clock to run at any speed. Even if you set the memory clock to run at a higher speed than the front side bus, the memory controller will not force the memory clock to match the processor bus speed.

When set to Auto, the operating mode of the memory controller will depend on the memory clock you set. If you set the memory clock to run at the same speed as the processor bus, the memory controller will operate in the synchronous mode. If you set the memory clock to run at a different speed, then the memory controller will operate in the asynchronous mode.
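The three modes can be summarised in a small sketch (the clock values used in the example are illustrative):

```python
# Sketch of the three operating modes described above.

def effective_memory_clock(mode, fsb_mhz, requested_mem_mhz):
    """Memory clock that actually results from a given setting."""
    if mode == "Synchronous":
        return fsb_mhz                  # always locked to the FSB speed
    if mode in ("Asynchronous", "Auto"):
        return requested_mem_mhz        # runs at whatever you set
    raise ValueError(f"unknown mode: {mode}")

def is_synchronous(mode, fsb_mhz, requested_mem_mhz):
    """Auto is only synchronous if the requested clock matches the FSB."""
    return effective_memory_clock(mode, fsb_mhz, requested_mem_mhz) == fsb_mhz

print(is_synchronous("Synchronous", 200, 233))   # True - clamped to FSB
print(is_synchronous("Auto", 200, 233))          # False - runs async
print(is_synchronous("Auto", 200, 200))          # True - clocks match
```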

It is recommended that you select the Synchronous operating mode. This generally provides the best performance, even if your memory modules are capable of higher clock speeds.

 



IDE Bus Master Support from The Tech ARP BIOS Guide!

IDE Bus Master Support

Common Options : Enabled, Disabled

 

Quick Review of IDE Bus Master Support

The IDE Bus Master Support BIOS feature is a misnomer since it doesn’t actually control the bus mastering ability of the onboard IDE controller.

It is actually a toggle for the built-in driver that allows the onboard IDE controller to perform DMA (Direct Memory Access) transfers.

When this BIOS feature is enabled, the BIOS loads up the 16-bit busmastering driver for the onboard IDE controller. This allows the IDE controller to transfer data via DMA, resulting in greatly improved transfer rates and lower CPU utilization in real mode DOS and during the loading of other operating systems.

When this BIOS feature is disabled, the BIOS will not load up the 16-bit busmastering driver for the onboard IDE controller. The IDE controller will then transfer data via PIO.

Therefore, it is recommended that you enable IDE Bus Master Support. This greatly improves the IDE transfer rate and reduces the CPU utilization during the booting process or when you are using real mode DOS. Users of DOS-based disk utilities like Norton Ghost can expect to benefit a lot from this feature.


 

Details of IDE Bus Master Support

The IDE Bus Master Support BIOS feature is a misnomer since it doesn’t actually control the bus mastering ability of the onboard IDE controller.

It is actually a toggle for the built-in driver that allows the onboard IDE controller to perform DMA (Direct Memory Access) transfers.

DMA transfer modes allow IDE devices to transfer large amounts of data from the hard disk to the system memory and vice versa with minimal processor intervention.

It differs from the older and processor-intensive PIO transfer modes by offloading the task of data transfer from the processor to the chipset.

Previously, this capability was only available after an operating system that supports DMA transfers (via the appropriate device driver) was loaded.

But now, many BIOS come with a built-in 16-bit driver that allows DMA transfers. This allows the onboard IDE controller to perform DMA transfers even before the operating system is loaded up!

When this BIOS feature is enabled, the BIOS loads up the 16-bit busmastering driver for the onboard IDE controller. This allows the IDE controller to transfer data via DMA, resulting in greatly improved transfer rates and lower CPU utilization in real mode DOS and during the loading of other operating systems.

When this BIOS feature is disabled, the BIOS will not load up the 16-bit busmastering driver for the onboard IDE controller. The IDE controller will then transfer data via PIO.

Therefore, it is recommended that you enable IDE Bus Master Support. This greatly improves the IDE transfer rate and reduces the CPU utilization during the booting process or when you are using real mode DOS. Users of DOS-based disk utilities like Norton Ghost can expect to benefit a lot from this feature.

Please note that since current operating systems (e.g. Windows XP) load up their own 32-bit busmastering driver, this feature has no effect once such an operating system loads up. Still, it is recommended that you enable this feature to improve performance prior to the loading of the operating system’s own driver.

 



CPUID Maximum Value Limit from The Tech ARP BIOS Guide!

CPUID Maximum Value Limit

Common Options : Enabled, Disabled

 

Quick Review of CPUID Maximum Value Limit

When the computer is booted up, the operating system executes the CPUID instruction to identify the processor and its capabilities.

The first step is to query the processor to find out the highest input value CPUID recognises, by executing CPUID with the EAX register set to 0. This determines the kind of basic information CPUID can provide the operating system.

The maximum CPUID input value determines the values that the operating system can write to the CPUID’s EAX register to obtain information about the processor.

However, if you attempt to use a new processor with an old operating system, that operating system may not be able to handle the extra CPUID information provided by the processor.

This is where the CPUID Maximum Value Limit BIOS feature comes in. It allows you to circumvent problems with older operating systems that do not support newer processors with extended CPUID information.

When enabled, the processor will limit the maximum CPUID input value to 03h when queried, even if the processor supports a higher CPUID input value.

When disabled, the processor will return the actual maximum CPUID input value of the processor when queried.

It is recommended that you leave it at the default setting of Disabled. You should only enable it if you intend to use a newer processor with an operating system that does not support it.

 

Details of CPUID Maximum Value Limit

When the computer is booted up, the operating system executes the CPUID instruction to identify the processor and its capabilities.

The first step is to query the processor to find out the highest input value CPUID recognises, by executing CPUID with the EAX register set to 0. This determines the kind of basic information CPUID can provide the operating system.

Here’s a table of the maximum CPUID input values the operating system will obtain from Intel processors when CPUID is executed with the EAX register set to 0.

IA-32 Processor : Maximum CPUID Input Value

  • Earlier Intel486 Processors : CPUID not implemented
  • Later Intel486 Processors : 01h
  • Pentium Processors : 01h
  • Pentium Pro Processors : 02h
  • Pentium II Processors : 02h
  • Celeron Processors : 02h
  • Pentium III Processors : 03h
  • Pentium 4 Processors : 02h
  • Xeon Processors : 02h
  • Pentium M Processors : 02h
  • Pentium 4 Processors with Hyper-Threading Technology : 05h

Now that it knows the maximum CPUID input value, the operating system can now write the correct values to the CPUID’s EAX register to obtain information about the processor.

Maximum CPUID Input Value : EAX Input Values Supported

  • 01h : 00h, 01h
  • 02h : 00h, 01h, 02h
  • 03h : 00h, 01h, 02h, 03h
  • 05h : 00h, 01h, 02h, 03h, 04h, 05h

Using those EAX input values, the operating system queries the processor for the following basic information.

EAX Input Value : Possible Basic Information Provided by CPUID

00h
  • EAX : Maximum input value for basic CPUID information
  • EBX : “Genu”
  • ECX : “ntel”
  • EDX : “ineI”

01h
  • EAX : 32-bit processor signature, and the last 32 bits of the 96-bit processor serial number
  • EBX : Brand index, CLFLUSH line size, count of logical processors, and processor local APIC physical ID
  • ECX : Processor feature flags
  • EDX : Processor feature flags

02h
  • EAX : Cache and TLB descriptors
  • EBX : Cache and TLB descriptors
  • ECX : Cache and TLB descriptors
  • EDX : Cache and TLB descriptors

03h
  • EAX : Reserved
  • EBX : Reserved
  • ECX : First 32 bits of the 96-bit processor serial number
  • EDX : Second 32 bits of the 96-bit processor serial number

04h
  • EAX : Cache type, cache level, self-initializing cache level, presence of fully associative cache, number of threads sharing this cache, and number of processor cores on this die
  • EBX : System coherency line size, physical line partitions, and ways of associativity
  • ECX : Number of sets
  • EDX : Reserved

05h
  • EAX : MONITOR/MWAIT function
  • EBX : MONITOR/MWAIT function
  • ECX : Reserved
  • EDX : Reserved

 

Why Does CPUID Maximum Value Limit Matter?

However, if you attempt to use a new processor with an old operating system, that operating system may not be able to handle the extra CPUID information provided by the processor.

This is where the CPUID Maximum Value Limit BIOS feature comes in. It allows you to circumvent problems with older operating systems that do not support newer processors with extended CPUID information.

When enabled, the processor will limit the maximum CPUID input value to 03h when queried, even if the processor supports a higher CPUID input value. The operating system will only query the processor with EAX input values of up to 03h.

When disabled, the processor will return the actual maximum CPUID input value of the processor when queried.

By default, it is set to Disabled because all new operating systems are aware of current processors, and have no problem handling the additional CPUID information.

Irrespective of what you set this BIOS feature to, the operating system will first query the processor.

Only if the processor returns a maximum CPUID input value greater than 03h, will this BIOS feature be taken into account. If the processor returns a maximum CPUID input value of 03h or less, this BIOS feature will be ignored.
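The interaction between this BIOS feature and the operating system's query can be sketched in a few lines of Python. This is only a toy model of behaviour implemented inside the processor itself; the function name is ours, and the values follow the tables above:

```python
# Toy model of the CPUID Maximum Value Limit feature.
# The real clamping happens inside the processor; this sketch only
# mirrors the behaviour described in this guide.

def cpuid_max_input(limit_enabled: bool, actual_max: int) -> int:
    """Return the maximum CPUID input value reported to the OS."""
    if limit_enabled and actual_max > 0x03:
        return 0x03          # clamp for older operating systems
    return actual_max        # report the processor's true capability

print(hex(cpuid_max_input(True, 0x05)))   # Pentium 4 with HTT, limit enabled  -> 0x3
print(hex(cpuid_max_input(False, 0x05)))  # limit disabled                     -> 0x5
print(hex(cpuid_max_input(True, 0x02)))   # older processor, limit ignored     -> 0x2
```

Note how the last case shows the feature being ignored when the processor's true maximum is already 03h or less.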

It is recommended that you leave it at the default setting of Disabled. You should only enable it if you intend to use a newer processor with an operating system that does not support it.


 

A Little History Lesson

Historically, Intel processors from the Pentium Pro onwards have a maximum CPUID input value of only 02h or 03h. The only exception is the Intel Pentium 4 with Hyper-Threading Technology (HTT).

Older operating systems like Windows 95/98 and Windows Me were released before the Intel Pentium 4 with HTT, and are therefore not aware of such a processor.

This would not have been a problem if the Pentium 4 with HTT did not come with additional CPUID capabilities. Unfortunately, it has a maximum CPUID input value of 05h, as well as support for additional EAX input values of 04h and 05h.

When these operating systems booted up, they would receive a maximum CPUID input value of 05h from the processor – which they were not programmed to handle. Therefore, they were not able to initialize the processor properly.

 


Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Bank Swizzle Mode from The Tech ARP BIOS Guide

Bank Swizzle Mode

Common Options : Enabled, Disabled

 

Quick Review of Bank Swizzle Mode

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits.

It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizes page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 

Details of Bank Swizzle Mode

DRAM (and its various derivatives – SDRAM, DDR SDRAM, etc.) store data in cells that are organized in rows and columns.

Whenever a read command is issued to a memory bank, the appropriate row is first activated using the RAS (Row Address Strobe). Then, to read data from the target memory cell, the appropriate column is activated using the CAS (Column Address Strobe).

Multiple cells can be read from the same active row by applying the appropriate CAS signals. If data has to be read from a different row, the active row has to be deactivated before the appropriate row can be activated.

This takes time and reduces performance, so good memory controllers will try to schedule memory accesses to maximize the number of hits on active rows. One of the methods used to achieve that goal is the bank swizzle mode.

Bank Swizzle Mode is a DRAM bank address mode that remaps the DRAM bank address to appear as physical address bits. It does this by using the logical operation, XOR (exclusive or), to create the bank address from the physical address bits.

The XOR operation results in a value of true if only one of the two operands (inputs) is true. If both operands are simultaneously false or true, then it results in a value of false.
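As an illustration, the bank remapping can be sketched in a few lines of Python. The bit positions below are purely illustrative (the actual positions are chipset-specific), and the function name is ours:

```python
def swizzle_bank(physical_addr: int) -> int:
    """Derive a 2-bit bank number by XORing two higher-order physical
    address bits (20-21 here) into the raw bank-select bits (13-14 here).
    The exact bit positions are chipset-specific; these are illustrative."""
    raw_bank = (physical_addr >> 13) & 0b11
    high_bits = (physical_addr >> 20) & 0b11
    return raw_bank ^ high_bits

# Two addresses with identical raw bank-select bits would normally land
# in the same bank; after swizzling, they land in different banks:
print(swizzle_bank(0x000000))  # bank 0
print(swizzle_bank(0x100000))  # bank 1
```

This is how the XOR remapping spreads accesses across banks, reducing the chance that two streams of accesses keep conflicting in the same bank.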


This characteristic of XORing the physical address to create the bank address reduces page conflicts by remapping the memory bank addresses so only one of two banks can be active at any one time.

This effectively interleaves the memory banks and maximizes memory accesses on active rows in each memory bank.

It also reduces page conflicts between a cache line fill and a cache line evict in the processor’s L2 cache.

When set to Enable, the memory controller will remap the DRAM bank addresses to appear as physical address bits. This improves performance by maximizing memory accesses on active rows and minimizes page conflicts in the processor’s L2 cache.

When set to Disable, the memory controller will not remap the DRAM bank addresses.

It is highly recommended that you enable this BIOS feature to improve memory throughput. You should only disable it if you face stability issues after enabling this feature.

 



Master Priority Rotation from The Tech ARP BIOS Guide!

Master Priority Rotation

Common Options : 1 PCI, 2 PCI, 3 PCI

 

Quick Review of Master Priority Rotation

The Master Priority Rotation BIOS feature controls the priority of the processor’s accesses to the PCI bus.

If you choose 1 PCI, the processor will always be granted access right after the current PCI bus master completes its transaction, irrespective of how many other PCI bus masters are on the queue.

If you choose 2 PCI, the processor will always be granted access right after the second PCI bus master on the queue completes its transaction.

If you choose 3 PCI, the processor will always be granted access right after the third PCI bus master on the queue completes its transaction.

But no matter what you choose, the processor is guaranteed access to the PCI bus after a certain number of PCI bus master grants.

It does not matter how many PCI bus masters are on the queue, or when the processor requests access to the PCI bus. The processor will always be granted access after one PCI bus master transaction (1 PCI), two transactions (2 PCI) or three transactions (3 PCI).

For better overall performance, it is recommended that you select the 1 PCI option as this allows the processor to access the PCI bus with minimal delay.

However, if you wish to improve the performance of your PCI devices, you can try the 2 PCI or 3 PCI options. They ensure that your PCI cards will receive greater PCI bus priority.

Details of Master Priority Rotation

The Master Priority Rotation BIOS feature controls the priority of the processor’s accesses to the PCI bus.

If you choose 1 PCI, the processor will always be granted access right after the current PCI bus master completes its transaction, irrespective of how many other PCI bus masters are on the queue. This improves processor-to-PCI performance, at the expense of other PCI transactions.

If you choose 2 PCI, the processor will always be granted access right after the second PCI bus master on the queue completes its transaction. This means the processor has to wait for just two PCI bus masters to complete their transactions on the PCI bus before it can gain access to the PCI bus itself. This means slightly poorer processor-to-PCI performance but PCI bus masters will enjoy slightly better performance.

If you choose 3 PCI, the processor will always be granted access right after the third PCI bus master on the queue completes its transaction. This means the processor has to wait for three PCI bus masters to complete their transactions on the PCI bus before it can gain access to the PCI bus itself. This means poorer processor-to-PCI performance but PCI bus masters will enjoy better performance.

But no matter what you choose, the processor is guaranteed access to the PCI bus after a certain number of PCI bus master grants.

It does not matter how many PCI bus masters are on the queue, or when the processor requests access to the PCI bus. The processor will always be granted access after one PCI bus master transaction (1 PCI), two transactions (2 PCI) or three transactions (3 PCI).
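The guarantee described above can be modelled in a few lines of Python; the device names and function are purely illustrative:

```python
def grant_order(masters, setting):
    """The processor requests the PCI bus while `masters` are queued.
    With the 'N PCI' option (setting = 1, 2 or 3), the processor is
    granted access after exactly N PCI bus-master transactions."""
    queue = list(masters)
    return queue[:setting] + ["CPU"] + queue[setting:]

print(grant_order(["NIC", "SCSI", "Sound"], 1))
# ['NIC', 'CPU', 'SCSI', 'Sound'] -- minimal delay for the processor
print(grant_order(["NIC", "SCSI", "Sound"], 3))
# ['NIC', 'SCSI', 'Sound', 'CPU'] -- PCI devices get priority
```

The 1 PCI option slots the processor in after a single bus-master transaction, while 3 PCI lets three bus masters complete first.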

For better overall performance, it is recommended that you select the 1 PCI option as this allows the processor to access the PCI bus with minimal delay.

However, if you wish to improve the performance of your PCI devices, you can try the 2 PCI or 3 PCI options. They ensure that your PCI cards will receive greater PCI bus priority.

 



RW Queue Bypass from The Tech ARP BIOS Guide

RW Queue Bypass

Common Options : Auto, 2X, 4X, 8X, 16X

 

Quick Review of RW Queue Bypass

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximize the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.

It is generally recommended that you set the RW Queue Bypass BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step by step until the problem is resolved.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.

 

Details of RW Queue Bypass

The R/W Queue Bypass BIOS option is similar to the DCQ Bypass Maximum BIOS option – both determine how far an arbiter can go in intelligently rescheduling memory accesses to improve performance.

The difference between the two is that DCQ Bypass Maximum does this at the memory controller level, while R/W Queue Bypass does it at the Device Control Interface (DCI) level.

To improve performance, the arbiter can reschedule transactions in the DCI read / write queue.

By allowing some transactions to bypass other transactions in the queue, the arbiter can maximize the number of hits on open memory pages.

This improves the overall memory performance but at the expense of some memory accesses which have to be delayed.

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximize the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.
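A toy model of the bypass limit, assuming a simple open-page preference (the queue format, names and policy are ours, purely for illustration):

```python
def service_order(requests, open_page, bypass_limit):
    """Toy model of the DCI read/write queue arbiter. Requests that hit
    the open page may bypass the oldest request, but only `bypass_limit`
    times in a row; the oldest request is then forced through."""
    queue, order, bypasses = list(requests), [], 0
    while queue:
        hit = next((r for r in queue if r[1] == open_page), None)
        if hit is not None and hit != queue[0] and bypasses < bypass_limit:
            queue.remove(hit)            # bypass the oldest request
            order.append(hit)
            bypasses += 1
        else:
            order.append(queue.pop(0))   # service the oldest request
            bypasses = 0
    return order

reqs = [("A", "closed"), ("B", "open"), ("C", "open")]
print(service_order(reqs, "open", 16))  # B and C bypass A for page hits
print(service_order(reqs, "open", 1))   # after one bypass, A is forced through
```

With a high limit, the page-hitting requests B and C are serviced first; with a low limit, the oldest request A is forced through sooner, trading throughput for bounded latency.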

It is generally recommended that you set this BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step by step until the problem is resolved.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.



AGP Capability from The Tech ARP BIOS Guide

AGP Capability

Common Options : Auto, 1X Mode, 2X Mode, 4X Mode, 8X Mode

 

Quick Review of AGP Capability

The AGP Capability BIOS feature is only found in AGP 8X-capable motherboards. AGP 8X is backward-compatible with earlier AGP standards.

This BIOS feature allows you to set the motherboard’s maximum supported AGP transfer protocol.

It is recommended that you leave this BIOS feature at its default setting of Auto. This allows the motherboard to set the appropriate AGP transfer protocol based on the graphics card’s AGP support detected during the booting up process.

However, the other options are useful if your graphics card has problems using the detected AGP transfer protocol. You can then manually select a slower AGP transfer protocol to solve the problem.

 

Details of AGP Capability

The AGP Capability BIOS feature is only found in AGP 8X-capable motherboards. AGP 8X is backward-compatible with earlier AGP standards.

This BIOS feature allows you to set the motherboard’s maximum supported AGP transfer protocol.

When this BIOS feature is set to Auto, the motherboard will automatically select the appropriate AGP transfer protocol after detecting the capabilities of the AGP graphics card.

When this BIOS feature is set to 1X Mode, the motherboard will force the AGP bus to use the AGP 1X transfer protocol. AGP 1X allows a maximum transfer rate of 266MB/s.

When this BIOS feature is set to 2X Mode, the motherboard will force the AGP bus to use the AGP 2X transfer protocol. AGP 2X allows a maximum transfer rate of 533MB/s.

When this BIOS feature is set to 4X Mode, the motherboard will force the AGP bus to use the AGP 4X transfer protocol. AGP 4X allows a maximum transfer rate of 1GB/s.

When this BIOS feature is set to 8X Mode, the motherboard will force the AGP bus to use the AGP 8X transfer protocol. AGP 8X allows a maximum transfer rate of 2.1GB/s.
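These transfer rates follow directly from the 66 MHz AGP base clock and the 32-bit bus width. A quick sketch of the arithmetic (the function name is ours; small differences from the quoted figures are just rounding):

```python
AGP_BASE_CLOCK_MHZ = 66.67   # AGP base clock
BUS_WIDTH_BYTES = 4          # 32-bit AGP bus

def agp_peak_rate(multiplier: int) -> float:
    """Peak transfer rate in MB/s: base clock x transfers per cycle x bus width."""
    return AGP_BASE_CLOCK_MHZ * multiplier * BUS_WIDTH_BYTES

for m in (1, 2, 4, 8):
    print(f"AGP {m}X : ~{agp_peak_rate(m):.0f} MB/s")
# roughly 267, 533, 1067 (1 GB/s) and 2133 MB/s (2.1 GB/s) respectively
```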

It is recommended that you leave this BIOS feature at its default setting of Auto. This allows the motherboard to set the appropriate AGP transfer protocol based on the graphics card’s AGP support detected during the booting up process.

However, the other options are useful if your graphics card has problems using the detected AGP transfer protocol. You can then manually select a slower AGP transfer protocol to solve the problem.

Please note that manually setting the AGP Capabilities BIOS feature to 8X Mode will not enable AGP 8X transfers if your graphics card supports only AGP 4X. The AGP bus will make use of the fastest AGP transfer protocol supported by both motherboard and graphics card.

 



PCI Clock Synchronization Mode – The Tech ARP BIOS Guide

PCI Clock Synchronization Mode

Common Options : To CPU, 33.33 MHz, Auto

 

Quick Review of PCI Clock Synchronization Mode

The PCI Clock Synchronization Mode BIOS feature allows you to force the PCI bus to either synchronize itself with the processor FSB (Front Side Bus) speed, or run at the standard clock speed of 33.33 MHz.

When set to To CPU, the PCI bus speed is slaved to the processor’s FSB speed. Any change in FSB speed will result in a similar change in the PCI bus speed. For example, if you increase the processor’s FSB speed by 10%, the PCI bus speed will increase by 10% as well.

When set to 33.33 MHz, the PCI bus speed will be locked into its standard clock speed of 33.33 MHz. No matter what the processor’s FSB speed is, the PCI bus will always run at 33.33 MHz.

The Auto option is ambiguous. Without testing, its effect cannot be ascertained since it’s up to the manufacturer what it wishes to implement by default for the motherboard. But logically, the Auto setting should force the PCI bus to run at its standard speed of 33.33 MHz for maximum compatibility.

It is recommended that you set the PCI Clock Synchronization Mode BIOS feature to To CPU if you are overclocking the processor FSB up to 12.5%. If you wish to overclock the processor FSB beyond 12.5%, then you should set this BIOS feature to 33.33 MHz.

However, if you do not intend to overclock, this BIOS feature will not have any effect. The PCI bus will remain at 33.33 MHz, no matter what you select.

 

Details of PCI Clock Synchronization Mode

The PCI Clock Synchronization Mode BIOS feature allows you to force the PCI bus to either synchronize itself with the processor FSB (Front Side Bus) speed, or run at the standard clock speed of 33.33 MHz.

When set to To CPU, the PCI bus speed is slaved to the processor’s FSB speed. Any change in FSB speed will result in a similar change in the PCI bus speed. For example, if you increase the processor’s FSB speed by 10%, the PCI bus speed will increase by 10% as well.

When set to 33.33 MHz, the PCI bus speed will be locked into its standard clock speed of 33.33 MHz. No matter what the processor’s FSB speed is, the PCI bus will always run at 33.33 MHz.

The Auto option is ambiguous. Without testing, its effect cannot be ascertained since it’s up to the manufacturer what it wishes to implement by default for the motherboard. But logically, the Auto setting should force the PCI bus to run at its standard speed of 33.33 MHz for maximum compatibility.

Synchronizing the PCI bus with the processor FSB allows for greater performance when you are overclocking. Because the PCI bus will be overclocked as you overclock the processor FSB, you will experience better performance from your PCI devices. However, if your PCI device cannot tolerate the overclocked PCI bus, you may experience issues like system crashes or data corruption.

The recommended safe limit for an overclocked PCI bus is 37.5 MHz. This is the speed at which practically all new PCI cards can run at without breaking a sweat. Still, you should test the system thoroughly for stability issues before committing to an overclocked PCI bus speed.

Please note that if you wish to synchronize the PCI bus with the processor FSB and remain within this relatively safe limit, you can only overclock the processor FSB by up to 12.5%. Any higher, your PCI bus will be overclocked beyond 37.5 MHz.
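The 12.5% figure comes straight from the arithmetic; a quick check in Python (the function name is ours):

```python
STANDARD_PCI_MHZ = 33.33
SAFE_PCI_LIMIT_MHZ = 37.5

def synced_pci_speed(fsb_overclock_percent: float) -> float:
    """PCI bus speed when slaved ('To CPU') to an overclocked FSB."""
    return STANDARD_PCI_MHZ * (1 + fsb_overclock_percent / 100)

print(synced_pci_speed(12.5))  # ~37.5 MHz, right at the safe limit
print(synced_pci_speed(20.0))  # ~40.0 MHz, beyond the safe limit
```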

If you wish to overclock the processor FSB further without worrying about your PCI devices, then you should set this BIOS feature to 33.33 MHz. This forces the PCI bus to run at the standard speed of 33.33 MHz, irrespective of the processor’s FSB speed.

It is recommended that you set the PCI Clock Synchronization Mode BIOS feature to To CPU if you are overclocking the processor FSB up to 12.5%. If you wish to overclock the processor FSB beyond 12.5%, then you should set this BIOS feature to 33.33 MHz.

However, if you do not intend to overclock, this BIOS feature will not have any effect. The PCI bus will remain at 33.33 MHz, no matter what you select.

 



CPU Drive Strength from The Tech ARP BIOS Guide

CPU Drive Strength

Common Options : 0, 1, 2, 3

 

Quick Review of CPU Drive Strength

The CPU Drive Strength BIOS feature allows you to manually set the drive strength of the CPU bus. The higher the value, the stronger the drive strength.

If you are facing stability problems with your processor, you might want to try boosting the CPU drive strength to a higher value. It will help to correct any possible increase in impedance from the motherboard.

It can also be used to improve the CPU’s overclockability. By raising the processor drive strength, it is possible to improve its stability at overclocked speeds.

However, this is not a surefire way of overclocking the CPU. Increasing it to the highest value will not necessarily mean that you can overclock the CPU more than you already can.

In addition, it is important to note that increasing the processor drive strength will not improve its performance. Contrary to popular opinion, it is not a performance-enhancing feature.

 

Details of CPU Drive Strength

The system controller has auto-compensation circuitry to automatically compensate for impedance variations in motherboard designs.

Now, the impedance is more or less fixed for each motherboard design. So some manufacturers may choose to pre-calculate and use a fixed, optimal CPU drive strength for a particular design.

However, due to variations in ambient conditions and manufacturing variances, there may be situations where the impedance compensation may not be sufficient.

This is where the CPU Drive Strength BIOS option comes in – it allows you to manually set the processor bus drive strength. The higher the value, the stronger the drive strength.

If you are facing stability problems with your processor, you might want to try boosting the processor drive strength to a higher value. It will help to correct any possible increase in impedance from the motherboard.

It can also be used to improve the CPU’s overclockability. By raising the processor drive strength, it is possible to improve its stability at overclocked speeds. Try the higher values of 2 or 3 if your CPU just won’t go the extra mile.

However, this is not a surefire way of overclocking the CPU. Increasing it to the highest value will not necessarily mean that you can overclock the CPU more than you already can.

In addition, it is important to note that increasing the processor drive strength will not improve its performance. Contrary to popular opinion, it is not a performance-enhancing feature.

Although little else is known about this feature, the downsides to a high CPU drive strength would probably be increased EMI (Electromagnetic Interference), power consumption and thermal output.

Therefore, unless you need to boost the processor bus drive strength (for troubleshooting or overclocking purposes), it is recommended that you leave it at the default setting.

 



Multi-Sector Transfers from The Tech ARP BIOS Guide

Multi-Sector Transfers

Common Options : Disabled, 2 Sectors, 4 Sectors, 8 Sectors, 16 Sectors, 32 Sectors, Maximum

 

Quick Review of Multi-Sector Transfers

The Multi-Sector Transfers BIOS feature speeds up hard disk drive access by transferring multiple sectors of data per interrupt instead of using the usual single-sector transfer mode. This mode of transferring data is known as block transfers.

The available options range from Disabled, through several multi-sector settings, to Maximum.

The Disabled option forces your IDE controller to transfer only a single sector (512 bytes) per interrupt. Needless to say, this will significantly degrade performance.

The selection of 2 Sectors to 32 Sectors allows you to manually select the number of sectors that the IDE controller is allowed to transfer per interrupt.

The Maximum option allows your IDE controller to transfer as many sectors per interrupt as the hard disk is able to support.

Since all current hard disk drives support block transfers, there is usually no reason why this feature (also known as IDE HDD Block Mode) should be disabled.

Therefore, you should disable IDE HDD Block Mode only if you actually face the possibility of data corruption (with an unpatched version of Windows NT 4.0). Otherwise, it is highly recommended that you select the Maximum option for significantly better hard disk performance!

The manual selection of 2 to 32 sectors is useful if you notice data corruption with the Maximum option. It allows you to scale back the multi-sector transfer feature to correct the problem without losing too much performance.

 

Details of Multi-Sector Transfers

The Multi-Sector Transfers BIOS feature speeds up hard disk drive access by transferring multiple sectors of data per interrupt instead of using the usual single-sector transfer mode. This mode of transferring data is known as block transfers.

The available options range from Disabled, through several multi-sector settings, to Maximum.

The Disabled option forces your IDE controller to transfer only a single sector (512 bytes) per interrupt. Needless to say, this will significantly degrade performance.

The selection of 2 Sectors to 32 Sectors allows you to manually select the number of sectors that the IDE controller is allowed to transfer per interrupt.

The Maximum option allows your IDE controller to transfer as many sectors per interrupt as the hard disk is able to support.
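The effect on interrupt load is easy to quantify. A sketch of the arithmetic (the function name and transfer size are ours, for illustration):

```python
import math

SECTOR_BYTES = 512

def interrupts_needed(transfer_bytes: int, sectors_per_interrupt: int) -> int:
    """Interrupts the IDE controller raises for one transfer."""
    sectors = math.ceil(transfer_bytes / SECTOR_BYTES)
    return math.ceil(sectors / sectors_per_interrupt)

# Reading 64 KiB (128 sectors):
print(interrupts_needed(64 * 1024, 1))   # 128 interrupts (Disabled)
print(interrupts_needed(64 * 1024, 16))  # 8 interrupts (16 Sectors)
```

Cutting the interrupt count from 128 to 8 for the same transfer is where the performance gain of block transfers comes from.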

Since all current hard disk drives support block transfers, there is usually no reason why IDE HDD Block Mode should be disabled.

However, if you are running on Windows NT 4.0, you might need to disable this BIOS feature because Windows NT 4.0 has a problem with block transfers. According to Chris Bope, Windows NT does not support IDE HDD Block Mode and enabling this feature can cause data to be corrupted.

Ryu Connor confirmed this by sending me a link to a Microsoft article (Enhanced IDE operation under Windows NT 4.0). According to this article, IDE HDD Block Mode and 32-bit Disk Access have been found to cause data corruption in some cases. Therefore, Microsoft recommends that Windows NT 4.0 users disable IDE HDD Block Mode.

Lord Mike asked ‘someone in the know‘ about this matter and he was told that the data corruption issue was taken very seriously at Microsoft and that it had been corrected through the Windows NT 4.0 Service Pack 2. Although he could not get an official statement from Microsoft, it is probably safe enough to enable IDE HDD Block Mode on a Windows NT 4.0 system, just as long as it has been upgraded with Service Pack 2.

Therefore, you should disable IDE HDD Block Mode only if you actually face the possibility of data corruption (with an unpatched version of Windows NT 4.0). Otherwise, it is highly recommended that you select the Maximum option for significantly better hard disk performance!

The manual selection of 2 to 32 sectors is useful if you notice data corruption with the Maximum option. It allows you to scale back the multi-sector transfer feature to correct the problem without losing too much performance.

 



PCI Chaining from The Tech ARP BIOS Guide

PCI Chaining

Common Options : Enabled, Disabled

 

Quick Review of PCI Chaining

The PCI Chaining BIOS feature is designed to speed up writes from the processor to the PCI bus by allowing write combining to occur at the PCI interface.

When PCI chaining is enabled, up to four quadwords of processor writes to contiguous PCI addresses will be chained together and written to the PCI bus as a single PCI burst write.

When PCI chaining is disabled, each processor write to the PCI bus will be handled as a separate 32-bit non-burst write.

Needless to say, writing four quadwords of data in a single PCI write is much faster than doing so in four separate non-burstable writes. A single PCI burst write will also reduce the amount of time the processor has to wait while writing to the PCI bus.

Therefore, it is recommended that you enable this BIOS feature for better CPU to PCI write performance.


 

Details of PCI Chaining

The PCI Chaining BIOS feature is designed to speed up writes from the processor to the PCI bus by allowing write combining to occur at the PCI interface.

When PCI chaining is enabled, up to four quadwords of processor writes to contiguous PCI addresses will be chained together and written to the PCI bus as a single PCI burst write.

When PCI chaining is disabled, each processor write to the PCI bus will be handled as a separate 32-bit non-burst write.

Needless to say, writing four quadwords of data in a single PCI write is much faster than doing so in four separate non-burstable writes. A single PCI burst write will also reduce the amount of time the processor has to wait while writing to the PCI bus.

Therefore, it is recommended that you enable this BIOS feature for better CPU to PCI write performance.

 

What Is A Quadword?

In computing, a quadword is a term that means four words, equivalent to 8 bytes or 64-bits.

So a PCI burst write of four quadwords would be 32 bytes, or 256 bits in size. That would be 8X faster than a non-burst write of 4 bytes, or 32 bits in size.
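The arithmetic behind that comparison, sketched out:

```python
QUADWORD_BYTES = 8           # a quadword is 64 bits
BURST_QUADWORDS = 4          # quadwords chained per PCI burst write
NONBURST_WRITE_BYTES = 4     # a single 32-bit non-burst write

burst_bytes = QUADWORD_BYTES * BURST_QUADWORDS        # 32 bytes per burst
nonburst_writes = burst_bytes // NONBURST_WRITE_BYTES # 8 separate writes otherwise
print(burst_bytes, nonburst_writes)  # 32 8
```

One chained burst moves 32 bytes, data that would otherwise need eight separate 32-bit writes, each with its own bus overhead.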


PCI-E Max Read Request Size – The Tech ARP BIOS Guide

PCI-E Max Read Request Size

Common Options : Automatic, Manual – User Defined

 

Quick Review of PCI-E Max Read Request Size

This BIOS feature can be used to ensure a fairer allocation of PCI Express bandwidth. It determines the largest read request any PCI Express device can generate. Reducing the maximum read request size reduces the hogging effect of any device with large reads.

When set to Automatic, the BIOS will automatically select a maximum read request size for PCI Express devices. Usually, this would be a manufacturer-preset value that’s designed with maximum “fairness“, rather than performance in mind.

When set to Manual – User Defined, you will be allowed to enter a numeric value (in bytes). Although it appears as though you can enter any value, you must enter only one of the following values:

128 – This sets the maximum read request size to 128 bytes. All PCI Express devices will only be allowed to generate read requests of up to 128 bytes in size.

256 – This sets the maximum read request size to 256 bytes. All PCI Express devices will only be allowed to generate read requests of up to 256 bytes in size.

512 – This sets the maximum read request size to 512 bytes. All PCI Express devices will only be allowed to generate read requests of up to 512 bytes in size.

1024 – This sets the maximum read request size to 1024 bytes. All PCI Express devices will only be allowed to generate read requests of up to 1024 bytes in size.

2048 – This sets the maximum read request size to 2048 bytes. All PCI Express devices will only be allowed to generate read requests of up to 2048 bytes in size.

4096 – This sets the maximum read request size to 4096 bytes. This is the largest read request size currently supported by the PCI Express protocol. All PCI Express devices will be allowed to generate read requests of up to 4096 bytes in size.

It is recommended that you set this BIOS feature to 4096, as it maximizes performance by allowing all PCI Express devices to generate as large a read request as they require. However, this will be at the expense of devices that generate smaller read requests.

Even so, this is generally not a problem unless they require a certain degree of quality of service. For example, you may experience glitches with the audio output (e.g. stuttering) of a PCI Express sound card when its reads are delayed by a bandwidth-hogging graphics card.

If such problems arise, reduce the maximum read request size. This reduces the amount of bandwidth any PCI Express device can hog at the expense of the other devices.

 

Details of PCI-E Max Read Request Size

Arbitration for PCI Express bandwidth is based on the number of requests from each device. However, the size of each request is not taken into account. As such, if some devices request much larger data reads than others, the PCI Express bandwidth will be unevenly allocated between those devices.

This can cause problems for applications that have specific quality of service requirements. These applications may not have timely access to the requested data simply because another PCI Express device is hogging the bandwidth by requesting very large data reads.

This BIOS feature can be used to correct that and ensure a fairer allocation of PCI Express bandwidth. It determines the largest read request any PCI Express device can generate. Reducing the maximum read request size reduces the hogging effect of any device with large reads.

However, doing so reduces the performance of devices that generate large reads. Instead of generating large but fewer reads, they will have to generate smaller reads but in greater numbers. Because arbitration is done according to the number of requests, they will have to wait longer for the data requested.
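For reference, the PCI Express Device Control register stores the maximum read request size as a 3-bit field, where 0 = 128 bytes, 1 = 256 bytes, and so on up to 5 = 4096 bytes. A minimal sketch of that encoding (the function names are our own, not a real API):

```python
import math

# Valid PCI Express maximum read request sizes, in bytes.
VALID_SIZES = [128, 256, 512, 1024, 2048, 4096]

def encode_mrrs(size_bytes):
    """Return the 3-bit register field value for a given size."""
    if size_bytes not in VALID_SIZES:
        raise ValueError(f"{size_bytes} is not a valid read request size")
    return int(math.log2(size_bytes // 128))

def decode_mrrs(field):
    """Return the maximum read request size in bytes for a field value."""
    return 128 << field

print(encode_mrrs(4096))  # 5
print(decode_mrrs(2))     # 512
```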

When set to Automatic, the BIOS will automatically select a maximum read request size for PCI Express devices. Usually, this would be a manufacturer-preset value that is designed with maximum “fairness”, rather than performance, in mind.

When set to Manual – User Defined, you will be allowed to enter a numeric value (in bytes). Although it appears as though you can enter any value, you must only enter one of these values :

128 – This sets the maximum read request size to 128 bytes. All PCI Express devices will only be allowed to generate read requests of up to 128 bytes in size.

256 – This sets the maximum read request size to 256 bytes. All PCI Express devices will only be allowed to generate read requests of up to 256 bytes in size.

512 – This sets the maximum read request size to 512 bytes. All PCI Express devices will only be allowed to generate read requests of up to 512 bytes in size.

1024 – This sets the maximum read request size to 1024 bytes. All PCI Express devices will only be allowed to generate read requests of up to 1024 bytes in size.

2048 – This sets the maximum read request size to 2048 bytes. All PCI Express devices will only be allowed to generate read requests of up to 2048 bytes in size.

4096 – This sets the maximum read request size to 4096 bytes. This is the largest read request size currently supported by the PCI Express protocol. All PCI Express devices will be allowed to generate read requests of up to 4096 bytes in size.

It is recommended that you set this BIOS feature to 4096, as it maximizes performance by allowing all PCI Express devices to generate as large a read request as they require. However, this will be at the expense of devices that generate smaller read requests.

Even so, this is generally not a problem unless they require a certain degree of quality of service. For example, you may experience glitches with the audio output (e.g. stuttering) of a PCI Express sound card when its reads are delayed by a bandwidth-hogging graphics card.

If such problems arise, reduce the maximum read request size. This reduces the amount of bandwidth any PCI Express device can hog at the expense of the other devices.


PPM Mode from The Tech ARP BIOS Guide

PPM Mode

Common Options : Native, SMM

 

Quick Review of PPM Mode

The PPM Mode BIOS option allows you to change the operating mode of the Processor Power Management (PPM).

When set to Native, the operating system will use its native PPM support to directly control the processor’s performance states and power management.

When set to SMM, the operating system will revert to the ACPI System Management Mode (ACPI SMM), leaving power management to the processor.

If you are using an older operating system like Windows 98 or Windows 2000, you should set this BIOS option to SMM.

You should also select SMM if you are using Windows XP or Windows 2003 in a multi-processor or multi-core environment.

On the other hand, you should select Native if you are using a newer operating system that supports ACPI 3.0. This includes Windows Vista, Windows 7, Windows Server 2008, and Windows 10.

 

PPM Mode Details

Prior to Windows XP, Microsoft operating systems could not directly control the processor’s power management. They could only put the processor into its SMM (System Management Mode), whereby the processor would then perform its own power management routines.

Support for ACPI 2.0 processor performance states was first introduced in the Microsoft Windows XP operating system. It finally allowed the operating system to directly control the processor’s power and performance through the Advanced Configuration and Power Interface (ACPI).

However, Processor Power Management in Windows XP is limited to a single processor with a single core and running a single thread. It does not support multi-processor systems, or multi-core processors, or even multi-threading.

Support for multi-processor systems and multi-core processors was eventually added to the ACPI 3.0 specifications. Microsoft Windows Vista was the first Microsoft operating system to offer native support for PPM of multi-processor systems or multi-core processors. This includes systems using processors with multiple logical threads, multiple cores or multiple physical sockets.

This is where the PPM Mode BIOS option comes in. It allows you to change the operating mode of the Processor Power Management (PPM).

When set to Native, the operating system will use its native PPM support to directly control the processor’s performance states and power management.

When set to SMM, the operating system will revert to the ACPI System Management Mode (ACPI SMM), leaving power management to the processor.

If you are using an older operating system like Windows 98 or Windows 2000, you should set this BIOS option to SMM.

You should also select SMM if you are using Windows XP or Windows 2003 in a multi-processor or multi-core environment.

On the other hand, you should select Native if you are using a newer operating system that supports ACPI 3.0. This includes Windows Vista, Windows 7, Windows Server 2008, and Windows 10.


SDRAM PH Limit from the Tech ARP BIOS Guide

SDRAM PH Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM PH Limit

SDRAM PH Limit is short for SDRAM Page Hit Limit. The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM PH Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.
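The arbitration policy this feature controls can be illustrated with a toy simulation (this is our own sketch, not real memory controller firmware): serve at most a set number of consecutive page-hit requests before forcing service of the oldest pending non-page-hit request.

```python
from collections import deque

def service_order(requests, ph_limit=8):
    """Toy arbiter: cap consecutive page hits at ph_limit, then
    service the oldest pending non-page-hit request."""
    hits = deque(r for r in requests if r.startswith("hit"))
    misses = deque(r for r in requests if not r.startswith("hit"))
    order, streak = [], 0
    while hits or misses:
        if hits and (streak < ph_limit or not misses):
            order.append(hits.popleft())
            streak += 1
        else:
            order.append(misses.popleft())
            streak = 0  # non-page-hit serviced, reset the streak
    return order

reqs = [f"hit{i}" for i in range(10)] + ["miss0"]
print(service_order(reqs, ph_limit=4))
# with a limit of 4, 'miss0' is serviced after only 4 page hits
```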

The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.


Graphic Win Size from The Tech ARP BIOS Guide

Graphic Win Size

Common Options : 4, 8, 16, 32, 64, 128, 256

 

Quick Review of Graphic Win Size

The Graphic Win Size BIOS feature does two things. It selects the size of the AGP aperture (hence the name, which is short for Graphics Window Size), and it determines the size of the GART (Graphics Address Relocation Table).

The aperture is a portion of the PCI memory address range that is dedicated for use as AGP memory address space, while the GART is a translation table that translates AGP memory addresses into actual physical memory addresses, which are often fragmented. The GART allows the graphics card to see the memory region available to it as one contiguous range of memory.

Host cycles that hit the aperture range are forwarded to the AGP bus without need for translation. The aperture size also determines the maximum amount of system memory that can be allocated to the AGP graphics card for texture storage.

Please note that the AGP aperture is merely address space, not actual physical memory in use. Although it is very common to hear people recommending that the AGP aperture size should be half the size of system memory, that is wrong!

The requirement for AGP memory space shrinks as the graphics card’s local memory increases in size. This is because the graphics card will have more local memory to dedicate to texture storage. So, if you upgrade to a graphics card with more memory, you shouldn’t be “deceived” into thinking that you will need even more AGP memory! On the contrary, a smaller AGP memory space will be required.

It is recommended that you keep the Graphic Win Size (or AGP aperture) around 64 MB to 128 MB in size, even if your graphics card has a lot of onboard memory. This allows flexibility in the event that you actually need extra memory for texture storage. It will also keep the GART (Graphics Address Relocation Table) within a reasonable size.

 

Details of Graphic Win Size

The Graphic Win Size BIOS feature does two things. It selects the size of the AGP aperture (hence the name, which is short for Graphics Window Size), and it determines the size of the GART (Graphics Address Relocation Table).

The aperture is a portion of the PCI memory address range that is dedicated for use as AGP memory address space, while the GART is a translation table that translates AGP memory addresses into actual physical memory addresses, which are often fragmented. The GART allows the graphics card to see the memory region available to it as one contiguous range of memory.

Host cycles that hit the aperture address range are forwarded to the AGP bus without need for translation. The aperture size also determines the maximum amount of system memory that can be allocated to the AGP graphics card for texture storage.

The Graphic Win Size or AGP aperture size is calculated using this formula :

AGP Aperture Size = (Maximum usable AGP memory size x 2) + 12 MB

As you can see, the actual available AGP memory space is less than half the AGP aperture size set in the BIOS. This is because the AGP controller needs a write combined memory area equal in size to the actual AGP memory area (uncached) plus an additional 12 MB for virtual addressing.

Therefore, it isn’t simply a matter of determining how much AGP memory space you need. You also need to calculate the final aperture size by doubling the amount of AGP memory space desired and adding 12 MB to the total.
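The formula above is easy to sketch (the function name is our own, for illustration only):

```python
# AGP Aperture Size = (maximum usable AGP memory x 2) + 12 MB
# The doubling covers the write-combined shadow of the uncached AGP
# memory area, and the extra 12 MB covers virtual addressing.
def agp_aperture_mb(usable_agp_memory_mb):
    return usable_agp_memory_mb * 2 + 12

print(agp_aperture_mb(15))   # 42 : a 42 MB aperture yields 15 MB usable
print(agp_aperture_mb(26))   # 64 : a 64 MB aperture yields 26 MB usable
```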

Please note that the AGP aperture is merely address space, not actual physical memory in use. It doesn’t lock up any of your system memory. The physical memory is allocated and released as needed whenever Direct3D makes a “create non-local surface” call.

Microsoft Windows 95 (with VGARTD.VXD) and later versions of Microsoft Windows use a waterfall method of memory allocation. Surfaces are first created in the graphics card’s local memory. When that memory is full, surface creation spills over into AGP memory and then system memory. So, memory usage is automatically optimized for each application. AGP and system memory are not used unless absolutely necessary.

Unfortunately, it is very common to hear people recommending that the AGP aperture size should be half the size of system memory. However, this is wrong for the same reason why swapfile size should not be fixed at 1/4 of system memory. Like the swapfile, the requirement for AGP memory space shrinks as the graphics card’s local memory increases in size. This is because the graphics card will have more local memory to use for texture storage!

This reduces the need for AGP memory. Therefore, when you upgrade to a graphics card with more memory, you shouldn’t be “deceived” into thinking that you will need even more AGP memory! On the contrary, a smaller AGP memory space will be required.

If your graphics card has very little graphics memory (4 MB – 16 MB), you may need to create a large AGP aperture, up to half the size of the system memory. The graphics card’s local memory and the AGP aperture size combined should be roughly around 64 MB. Please note that the size of the aperture does not correspond to performance! Increasing it to gargantuan proportions will not improve performance.

Still, it is recommended that you keep the AGP aperture size around 64 MB to 128 MB. Now, why should we use such a large aperture size when most graphics cards come with large amounts of local memory? Shouldn’t we set it to the absolute minimum to save system memory?

  1. First of all, setting it to a lower value won’t save you memory! Don’t forget that all the AGP aperture size does is limit the amount of system memory the AGP bus can appropriate whenever it needs more memory. It is not used unless absolutely necessary. So, setting the AGP aperture size to 64 MB doesn’t mean that 64 MB of your system memory will be appropriated and reserved for the AGP bus’ use. What it does is limit the AGP bus to a maximum of 64 MB of system memory when the need arises.
  2. Next, most graphics cards require an AGP aperture of at least 16 MB in size to work properly. Many new graphics cards require even more. This is probably because the virtual addressing space is already 12 MB in size! So, setting the AGP Aperture Size to 4 MB or 8 MB is a big no-no.
  3. We should also remember that many applications have AGP aperture size and texture storage requirements that are mostly unspecified. Some applications will not work with AGP apertures that are too small. And some games use so many textures that a large AGP aperture is needed even with graphics cards that have large memory buffers.
  4. Finally, you should remember that the actual available AGP memory space is less than half the size of the AGP aperture size you set. If you want just 15 MB of AGP memory for texture storage, the AGP aperture has to be at least 42 MB in size! Therefore, it makes sense to set a large AGP aperture size in order to cater for all eventualities.

Now, while increasing the AGP aperture size beyond 128 MB won’t take up system memory, it would still be best to keep the aperture size in the 64 MB-128 MB range so that the GART (Graphics Address Relocation Table) won’t become too big. The larger the GART gets, the longer it takes to scan through the GART and find the translated address for each AGP memory address request.

With local memory on graphics cards increasing to incredible sizes and texture compression commonplace, there’s really not much need for the AGP aperture size to grow beyond 64 MB. Therefore, it is recommended that you set the AGP Aperture Size to 64 MB or at most, 128 MB.



MP Capable Bit Identify from The Tech ARP BIOS Guide

MP Capable Bit Identify

Common Options : Enabled, Disabled

 

Quick Review of MP Capable Bit Identify

This BIOS feature determines if the BIOS should query the MP Capable bit to correctly identify an AMD Athlon MP processor.

When set to Enabled, the BIOS will query the MP Capable bit at boot-up. If it detects an MP Capable bit setting of 1, it writes the Athlon MP processor string name into the appropriate registers.

When set to Disabled, the BIOS will not query the MP Capable bit at boot-up. The Athlon MP processor will be indistinguishable from the Athlon XP processor, as far as the processor identification is concerned.

If you are using an AMD Athlon MP processor, it is recommended that you enable this BIOS feature to allow proper identification of the processor.

If you are using other Athlon processors, you should disable this BIOS feature as there is no need to identify multi-processing capability.

 

Details of MP Capable Bit Identify

There are a few flavours of the AMD Athlon processor, namely the Duron, Athlon XP, Athlon MP and the mobile Athlon XP (Athlon XP-M). However, they all have the same CPUID. So, processor identification has to be done on the basis of clock speed and L2 cache size variations. This is not a problem for the Duron and Athlon XP-M processors.

Unfortunately, there is nothing to distinguish the Athlon MP from the Athlon XP. Neither clock speed nor L2 cache size can be used to differentiate the two processors.

In addition, AMD did not hardcode the processor name string into the AMD Athlon processors. The BIOS actually detects the processor model during the boot-up process and writes the appropriate name string into the processor. So, the Athlon MP cannot be detected by querying the processor name string.

The only thing that truly distinguishes the Athlon MP processor from the Athlon XP processor is its multi-processing capability.

To solve this problem, AMD used bit 19 of Athlon’s Extended Feature Flags to denote multi-processing capability. That’s why it is also known as the MP Capable bit, MP being short for multi-processing.

This bit is set to 0 in the Athlon XP processors and set to 1 in the Athlon MP processors. Below is a table of the MP Capable bit settings for the different AMD Athlon processors :

Processor : MP Capable Bit (Bit 19)
AMD Athlon XP : 0
AMD Athlon MP : 1

Therefore, if the BIOS detects a processor with the MP Capable bit set to 1, it writes the processor name string of AMD Athlon ™ MP into the processor.
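The check itself is a simple bit test. A minimal sketch, assuming only what the text states (bit 19 of the Extended Feature Flags marks multi-processing capability; the flag values below are made up for illustration, as a real BIOS would read them via CPUID):

```python
MP_CAPABLE_BIT = 19  # bit 19 of the Athlon's Extended Feature Flags

def is_mp_capable(extended_feature_flags):
    """Return True if the MP Capable bit is set in the flags value."""
    return bool(extended_feature_flags & (1 << MP_CAPABLE_BIT))

athlon_xp_flags = 0                      # bit 19 clear
athlon_mp_flags = 1 << MP_CAPABLE_BIT    # bit 19 set

print(is_mp_capable(athlon_xp_flags))    # False
print(is_mp_capable(athlon_mp_flags))    # True
```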

This BIOS feature determines if the BIOS should query the MP Capable bit to correctly identify an AMD Athlon MP processor.

When set to Enabled, the BIOS will query the MP Capable bit at boot-up. If it detects an MP Capable bit setting of 1, it writes the Athlon MP processor string name into the appropriate registers.

When set to Disabled, the BIOS will not query the MP Capable bit at boot-up. The Athlon MP processor will be indistinguishable from the Athlon XP processor, as far as the processor identification is concerned.

If you are using an AMD Athlon MP processor, it is recommended that you enable this BIOS feature to allow proper identification of the processor.

If you are using other Athlon processors, you should disable this BIOS feature as there is no need to identify multi-processing capability.



OS Select For DRAM > 64MB from The Tech ARP BIOS Guide

OS Select For DRAM > 64MB

Common Options : OS/2, Non-OS/2

 

Quick Review of OS Select For DRAM > 64MB

The OS Select For DRAM > 64MB BIOS feature is designed to correct the memory size detection problem for OS/2 systems that have more than 64 MB of system memory.

If you are using an older version of the IBM OS/2 operating system, you should select OS/2.

If you are using the IBM OS/2 Warp v4.0111 or higher operating system, you should select Non-OS/2.

If you are using an older version of the IBM OS/2 operating system but have already installed all the relevant IBM FixPaks, you should select Non-OS/2.

Users of non-OS/2 operating systems (like Microsoft Windows XP) should select the Non-OS/2 option.

 

Details of OS Select For DRAM > 64MB

Older versions of IBM’s OS/2 operating system use the BIOS function Int15 [AX=E801] to detect the size of installed system memory. Microsoft Windows, on the other hand, uses the BIOS function Int15 [EAX=0000E820].

However, the Int15 [AX=E801] function was later scrapped as not ACPI-compliant. As a result, OS/2 cannot detect the correct size of system memory if more than 64 MB of memory is installed. Microsoft Windows isn’t affected because the BIOS function it uses is ACPI-compliant.

The OS Select For DRAM > 64MB BIOS feature is designed to correct the memory size detection problem for OS/2 systems that have more than 64 MB of system memory.

If you are running an old, unpatched version of OS/2, you must select the OS/2 option. But please note that this is only true for older versions of OS/2 that haven’t been upgraded using IBM’s FixPaks.

Starting with OS/2 Warp v4.0111, IBM changed the OS/2 kernel to use Int15 [EAX=0000E820] to detect the size of installed system memory, switching to the more conventional method used by other operating systems. IBM also issued FixPaks to address this issue with older versions of OS/2.

Therefore, if you are using OS/2 Warp v4.0111 or higher, you should select Non-OS/2 instead. You should also select Non-OS/2 if you have upgraded an older version of OS/2 with the FixPaks that IBM have been releasing over the years.

If you select the OS/2 option with a newer (v4.0111 or higher) or updated version of OS/2, it will cause erroneous memory detection. For example, if you have 64 MB of memory, it may only register as 16 MB. Or if you have more than 64 MB of memory, it may register as only 64 MB of memory.

Users of non-OS/2 operating systems (like Microsoft Windows or Linux) should select the Non-OS/2 option. Doing otherwise will cause memory errors if you have more than 64 MB of memory in your system.

In conclusion :-

  • If you are using an older version of the IBM OS/2 operating system, you should select OS/2.
  • If you are using the IBM OS/2 Warp v4.0111 or higher operating system, you should select Non-OS/2.
  • If you are using an older version of the IBM OS/2 operating system but have already installed all the relevant IBM FixPaks, you should select Non-OS/2.
  • Users of non-OS/2 operating systems (like Microsoft Windows XP) should select the Non-OS/2 option.



AGP Secondary Lat Timer from The Tech ARP BIOS Guide

AGP Secondary Lat Timer

Common Options : 00h, 20h, 40h, 60h, 80h, C0h, FFh

 

Quick Review of AGP Secondary Lat Timer

The AGP Secondary Lat Timer BIOS feature controls how long the AGP bus can hold the PCI bus (via the PCI-to-PCI bridge) before another PCI device takes over. The longer the latency, the longer the AGP bus can retain control of the PCI bus before handing it over to another PCI device.

Normally, the AGP Secondary Latency Timer is set to 20h (32 clock cycles). This means the AGP bus’ PCI-to-PCI bridge has to complete its transactions within 32 clock cycles or hand it over to the next PCI device.

For better AGP performance, a longer latency should be used. Try increasing it to 40h (64 cycles) or even 80h (128 cycles). The optimal value for every system is different. You should benchmark your AGP card’s performance after each change to determine the optimal latency for your system.

If you set the AGP Secondary Latency Timer to a very large value like 80h (128 cycles) or C0h (192 cycles), it is recommended that you set the PCI Latency Time to 32 cycles. This provides better access for your PCI devices that might be unnecessarily stalled if both the AGP and PCI buses have very long latencies.

In addition, some time-critical PCI devices may not agree with a long AGP latency. Such devices require priority access to the PCI bus which may not be possible if the PCI bus is held up by the AGP bus for a long period. In such cases, it is recommended that you keep to the default latency of 20h (32 clock cycles).

 

Details of AGP Secondary Lat Timer

A bridge is a device that connects a primary bus with one or more logical secondary buses. The AGP bus is, therefore, a secondary bus connected to the PCI bus via a PCI-to-PCI bridge.

This BIOS feature is similar to the PCI Latency Timer BIOS feature. The only difference is that this latency timer applies to the AGP bus, which is a secondary bus connected to the PCI bus via a PCI-to-PCI bridge. However, it is unknown why it was named AGP Secondary Lat Timer, instead of the more appropriate AGP Latency Timer or even PCI Secondary Latency Timer. The name is both misleading and inaccurate, since the AGP bus does not have a secondary latency timer!

The AGP Secondary Lat Timer BIOS feature controls how long the AGP bus can hold the PCI bus (via the PCI-to-PCI bridge) before another PCI device takes over. The longer the latency, the longer the AGP bus can retain control of the PCI bus before handing it over to another PCI device.

Because a bridge device introduces additional delay to every transaction, a short latency would further reduce the amount of time the AGP bus has access to the PCI bus. A longer latency will allow the AGP bus more time to transact on the PCI bus. This speeds up AGP-to-PCI transactions.

The available options are usually stated as hexadecimal numbers. Here is a translation of those numbers into actual latencies (in clock cycles) :

Option (Hex) : Actual Latency
00h : 0
20h : 32
40h : 64
60h : 96
80h : 128
C0h : 192
FFh : 255
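The hexadecimal options are simply clock-cycle counts, so the conversion is direct (a small sketch, with names of our own choosing; FFh is the 8-bit maximum of 255 cycles):

```python
# Convert an AGP Secondary Latency Timer option like "20h" into its
# decimal clock-cycle count.
def latency_cycles(option):
    return int(option.rstrip("h"), 16)

for opt in ["00h", "20h", "40h", "60h", "80h", "C0h", "FFh"]:
    print(opt, "=", latency_cycles(opt), "cycles")
# e.g. 20h = 32 cycles, 80h = 128 cycles
```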

Normally, the AGP Secondary Latency Timer is set to 20h (32 clock cycles). This means the AGP bus’ PCI-to-PCI bridge has to complete its transactions within 32 clock cycles or hand it over to the next PCI device.

For better AGP performance, a longer latency should be used. Try increasing it to 40h (64 cycles) or even 80h (128 cycles). The optimal value for every system is different. You should benchmark your AGP card’s performance after each change to determine the optimal latency for your system.

Please note that a longer latency isn’t necessarily better. A long latency can reduce performance as the other PCI devices queuing up may be stalled for too long. This is especially true with systems with many PCI devices or PCI devices that continuously write short bursts of data to the PCI bus. Such systems would work better with shorter latencies as they allow quicker access to the PCI bus.

Therefore, if you set the AGP Secondary Latency Timer to a very large value like 80h (128 cycles) or C0h (192 cycles), it is recommended that you set the PCI Latency Time to 32 cycles. This provides better access for your PCI devices that might be unnecessarily stalled if both the AGP and PCI buses have very long latencies.

In addition, some time-critical PCI devices may not agree with a long AGP latency. Such devices require priority access to the PCI bus which may not be possible if the PCI bus is held up by the AGP bus for a long period. In such cases, it is recommended that you keep to the default latency of 20h (32 clock cycles).



PIO Mode from The Tech ARP BIOS Guide

PIO Mode

Common Options : Auto, 0, 1, 2, 3, 4

 

Quick Review

This BIOS feature allows you to set the PIO (Programmed Input / Output) mode for the IDE drive.

Setting this BIOS feature to Auto lets the BIOS auto-detect the IDE drive’s maximum supported PIO transfer mode at boot-up.

Setting this BIOS feature to 0 forces the BIOS to use PIO Mode 0 for the IDE drive.

Setting this BIOS feature to 1 forces the BIOS to use PIO Mode 1 for the IDE drive.

Setting this BIOS feature to 2 forces the BIOS to use PIO Mode 2 for the IDE drive.

Setting this BIOS feature to 3 forces the BIOS to use PIO Mode 3 for the IDE drive.

Setting this BIOS feature to 4 forces the BIOS to use PIO Mode 4 for the IDE drive.

Normally, you should leave this BIOS feature at Auto and let the BIOS auto-detect the IDE drive’s PIO transfer mode. You should only set it manually for the following reasons:

  • if the BIOS cannot detect the correct PIO transfer mode.
  • if you want to try forcing the IDE device to use a faster PIO transfer mode than it was designed for.
  • if you want to force the IDE device to use a slower PIO transfer mode because it cannot work properly with the current PIO mode (e.g. when the PCI bus is overclocked).

Please note that forcing an IDE device to use a PIO transfer rate that is faster than what it is rated for can potentially cause data corruption.

 

Details

This BIOS feature allows you to set the PIO (Programmed Input / Output) mode for the IDE drive. Here is a table of the different PIO transfer rates and their corresponding maximum throughputs.

PIO Data Transfer Mode    Maximum Throughput

PIO Mode 0                3.3 MB/s
PIO Mode 1                5.2 MB/s
PIO Mode 2                8.3 MB/s
PIO Mode 3                11.1 MB/s
PIO Mode 4                16.6 MB/s
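To illustrate what these maximum throughputs mean in practice, this short sketch (using only the figures from the table above; real-world drives will be slower than these peak rates) estimates the best-case time to transfer a given amount of data in each PIO mode:

```python
# Best-case transfer time for each PIO mode, using the maximum
# throughput figures from the table above (actual drives are slower).

PIO_THROUGHPUT_MBS = {0: 3.3, 1: 5.2, 2: 8.3, 3: 11.1, 4: 16.6}

def transfer_time_s(megabytes: float, pio_mode: int) -> float:
    """Seconds to move `megabytes` of data at the mode's peak rate."""
    return megabytes / PIO_THROUGHPUT_MBS[pio_mode]

for mode in sorted(PIO_THROUGHPUT_MBS):
    print(f"PIO Mode {mode}: 100 MB in {transfer_time_s(100, mode):.1f} s")
```

Moving 100 MB takes about 30 seconds in PIO Mode 0 but only about 6 seconds in PIO Mode 4, which is why you normally want the fastest mode the drive actually supports.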

Setting this BIOS feature to Auto lets the BIOS auto-detect the IDE drive’s maximum supported PIO transfer mode at boot-up.

Setting this BIOS feature to 0 forces the BIOS to use PIO Mode 0 for the IDE drive.

Setting this BIOS feature to 1 forces the BIOS to use PIO Mode 1 for the IDE drive.

Setting this BIOS feature to 2 forces the BIOS to use PIO Mode 2 for the IDE drive.

Setting this BIOS feature to 3 forces the BIOS to use PIO Mode 3 for the IDE drive.

Setting this BIOS feature to 4 forces the BIOS to use PIO Mode 4 for the IDE drive.


Normally, you should leave this BIOS feature at Auto and let the BIOS auto-detect the IDE drive’s PIO transfer mode. You should only set it manually for the following reasons:

  • if the BIOS cannot detect the correct PIO transfer mode.
  • if you want to try forcing the IDE device to use a faster PIO transfer mode than it was designed for.
  • if you want to force the IDE device to use a slower PIO transfer mode because it cannot work properly with the current PIO mode (e.g. when the PCI bus is overclocked).

Please note that forcing an IDE device to use a PIO transfer rate that is faster than what it is rated for can potentially cause data corruption.
