Tag Archives: Memory

vivo Virtual RAM For V21, V21e, X60 : How Does It Work?

The vivo V21 series and X60 smartphones now support Virtual RAM / Extended RAM technology!

But what exactly is vivo Virtual RAM, and how does it work?

 

vivo Virtual RAM : How Does It Work?

First introduced in the vivo V21 series, vivo Virtual RAM is essentially the virtual memory technique that computers have been using for decades.

When our computers begin to run out of memory, they can use some of our HDD / SSD storage as virtual memory.

While storage speeds are many times slower than RAM speeds, this is better than running out of memory, which would cause apps to stop working or, worse, crash the operating system.

vivo Virtual RAM uses the same concept – it utilises 3 GB of the smartphone’s internal storage as virtual memory.

This effectively gives the operating system and apps more memory to use. If your vivo smartphone has 8 GB of RAM, then Virtual RAM expands that to 11 GB.

To improve performance, vivo also borrowed the PC virtual memory technique of swapping out inactive apps into the extended RAM space, so active apps have access to the much faster RAM.
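The swapping idea can be sketched in a few lines of Python. This is a toy model under loose assumptions – the app names and sizes are invented, and real systems swap individual memory pages, not whole apps:

```python
from collections import OrderedDict

RAM_GB = 8          # physical RAM
EXTENDED_GB = 3     # internal storage borrowed as virtual RAM

class VirtualRam:
    """Toy model: keep recently used apps in fast RAM, swap the
    least recently used ones out into the extended (storage) area."""
    def __init__(self, ram_gb, extended_gb):
        self.ram_gb = ram_gb
        self.extended_gb = extended_gb
        self.ram = OrderedDict()   # app -> size in GB, in LRU order
        self.extended = {}         # swapped-out apps

    def launch(self, app, size_gb):
        # Bring the app back from extended RAM if it was swapped out
        self.extended.pop(app, None)
        self.ram[app] = size_gb
        self.ram.move_to_end(app)  # mark as most recently used
        # Swap out the least recently used apps until RAM fits
        while sum(self.ram.values()) > self.ram_gb:
            victim, vsize = self.ram.popitem(last=False)
            if sum(self.extended.values()) + vsize > self.extended_gb:
                raise MemoryError(f"no room to swap out {victim}")
            self.extended[victim] = vsize

vr = VirtualRam(RAM_GB, EXTENDED_GB)
for app, size in [("music", 1), ("game", 4), ("browser", 2.5), ("camera", 1.5)]:
    vr.launch(app, size)
print(sorted(vr.ram))        # active apps held in fast RAM
print(sorted(vr.extended))   # inactive apps parked in storage
```

Launching the camera pushes total usage past 8 GB, so the oldest app (the music player) is parked in the extended area while the active apps stay in RAM.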

The Virtual RAM technology was introduced in the V21 series, and will soon be added to their flagship X60 smartphone.

 

vivo Virtual RAM : How Useful Is It?

That really depends on how you use your smartphone.

If you only use a few apps at the same time, you will probably never run out of memory.

vivo smartphones now come with plenty of memory – 8 GB – and even demanding games like PUBG Mobile and Asphalt 9 use less than 1.2 GB of RAM!

But if you use many apps simultaneously, or switch between them often, this could help prevent slow loading.

According to vivo, turning on Virtual RAM on an 8 GB smartphone will allow you to cache up to 20 background apps simultaneously.

 


 

Support Tech ARP!

If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Memory DQ Drive Strength from The Tech ARP BIOS Guide!

Memory DQ Drive Strength

Common Options : Not Reduced, Reduced 15%, Reduced 30%, Reduced 50%

 

Memory DQ Drive Strength : A Quick Review

The Memory DQ Drive Strength BIOS feature allows you to reduce the drive strength for the memory DQ (data) pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 

Memory DQ Drive Strength : The Full Details

Every Dual Inline Memory Module (DIMM) has 64 data (DQ) lines. These lines transfer data from the DRAM chips to the memory controller and vice versa.

No matter what kind of DRAM chips are used (whether it’s regular SDRAM, DDR SDRAM or DDR2 SDRAM), the 64 data lines allow it to transfer 64-bits of data every clock cycle.

Each DIMM also has a number of data strobe (DQS) lines. These serve to time the data transfers on the DQ lines. The number of DQS lines depends on the type of memory chip used.

DIMMs based on x4 DRAM chips have 16 DQS lines, while DIMMs using x8 DRAM chips have 8 DQS lines and DIMMs with x16 DRAM chips have only 4 DQS lines.
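Those DQS counts follow directly from the 64 DQ lines – one strobe per group of DQ lines each chip drives. A quick sketch:

```python
def dqs_lines(chip_width, dq_lines=64):
    """One DQS strobe times each group of chip_width DQ lines,
    so a 64-bit DIMM carries 64 / chip_width strobes."""
    return dq_lines // chip_width

for width in (4, 8, 16):
    print(f"x{width} DRAM chips -> {dqs_lines(width)} DQS lines")
```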

Memory data transfers begin with the memory controller sending its commands to the DIMM. If data is to be read from the DIMM, then DRAM chips on the DIMM will drive their DQ and DQS (data strobe) lines.

On the other hand, if data is to be written to the DIMM, the memory controller will drive its DQ and DQS lines instead.

If many output buffers (on either the DIMMs or the memory controller) drive their DQ lines simultaneously, they can cause a drop in the signal level with a momentary rise in the relative ground voltage.

This reduces the quality of the signal which can be problematic at high clock speeds. Increasing the drive strength of the DQ pins can help give it a higher voltage swing, improving the signal quality.

However, it is important to increase the DQ drive strength according to the DRAM load. Unnecessarily increasing the DQ drive strength can cause the signal to overshoot its rising and falling edges, as well as create more signal reflection.

All this increases signal noise, which ironically negates the increased signal strength provided by a higher drive strength. Therefore, it is sometimes useful to reduce the DQ drive strength.

With light DRAM loads, you can reduce the DQ drive strength to lower signal noise and improve the signal-to-noise ratio. Doing so will also reduce power consumption, although that is probably low on most people’s list of priorities. In certain cases, it actually allows you to achieve a higher memory clock speed.

This is where the Memory DQ Drive Strength BIOS feature comes in. It allows you to reduce the drive strength for the memory data pins.

But it does not allow you to increase the drive strength because it has already been set to use the maximum drive strength by default.

When set to Not Reduced, the DQ drive strength will remain at full strength.

When set to Reduced 15%, the DQ drive strength will be reduced by approximately 15%.

When set to Reduced 30%, the DQ drive strength will be reduced by approximately 30%.

When set to Reduced 50%, the DQ drive strength will be reduced by approximately 50%.
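The four options amount to simple multipliers on the full drive strength. A minimal sketch – the option names are this feature's, while treating full strength as 1.0 is an illustrative normalisation:

```python
# BIOS option -> fraction by which the DQ drive strength is reduced
REDUCTION = {
    "Not Reduced": 0.00,
    "Reduced 15%": 0.15,
    "Reduced 30%": 0.30,
    "Reduced 50%": 0.50,
}

def effective_strength(option, full_strength=1.0):
    """Drive strength remaining after the selected reduction."""
    return full_strength * (1.0 - REDUCTION[option])

print(effective_strength("Reduced 30%"))  # 0.7 of full strength
```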

Generally, you should keep the memory data pins at full strength if you have multiple memory modules. The greater the DRAM load, the more memory drive strength you need.

But no matter how many modules you use, AMD recommends that you set this BIOS feature to Not Reduced if you are using a CG or D revision Athlon 64 or Opteron processor.

However, if you are only using a single memory module, you can reduce the DQ drive strength to improve signal quality and possibly achieve higher memory clock speeds.

If you hit a snag in overclocking your memory modules, you can also try reducing the DQ drive strength to achieve higher clock speeds, even if you are using multiple memory modules.

AMD recommends that you reduce the DQ drive strength for Revision E Athlon 64 and Opteron processors. For example, the DQ drive strength should be reduced by 50% if you are using a Revision E Athlon 64 or Opteron processor with memory modules based on the Samsung 512 Mbit TCCD SDRAM chip.

 


 



SDRAM Trrd Timing Value from The Tech ARP BIOS Guide!

SDRAM Trrd Timing Value

Common Options : 2 cycles, 3 cycles

 

SDRAM Trrd Timing Value : A Quick Review

The SDRAM Trrd Timing Value BIOS feature specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device.

The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges.

For desktop PCs, a delay of 2 cycles is recommended as current surges aren’t really important. The performance benefit of using the shorter 2 cycles delay is of far greater interest.

The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device’s read and write performance.

Switch to 3 cycles only when there are stability problems with the 2 cycles setting.

 

SDRAM Trrd Timing Value : The Details

The Bank-to-Bank Delay or tRRD is a DDR timing parameter which specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device, even to different internal banks.

The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges.

Because this timing parameter is DDR device-specific, it may differ from one DDR device to another. DDR DRAM manufacturers typically specify the tRRD parameter based on the row ACTIVATE activity to limit current surges within the device.

If you let the BIOS automatically configure your DRAM parameters, it will retrieve the manufacturer-set tRRD value from the SPD (Serial Presence Detect) chip. However, you may want to manually set the tRRD parameter to suit your requirements.

For desktop PCs, a delay of 2 cycles is recommended as current surges aren’t really important.

This is because a desktop PC essentially has an unlimited power supply, and even the most basic desktop cooling solution is sufficient to dissipate any extra thermal load that the current surges may impose.

The performance benefit of using the shorter 2 cycles delay is of far greater interest. The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device’s read and write performance.

Note that the shorter delay of 2 cycles works with most DDR DIMMs, even at 133 MHz (266 MHz DDR). However, DDR DIMMs running beyond 133 MHz (266 MHz DDR) may need to introduce a delay of 3 cycles between each successive bank activation.
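As a rough sketch, whether 2 cycles is enough depends on the chip's tRRD specification (in nanoseconds) and the memory clock – the minimum cycle count is just the spec divided by the clock period, rounded up. The 12 ns figure below is purely illustrative, not a value from this article:

```python
import math

def trrd_cycles(trrd_ns, clock_mhz):
    """Minimum whole clock cycles that cover the chip's tRRD spec."""
    cycle_ns = 1000.0 / clock_mhz
    return math.ceil(trrd_ns / cycle_ns)

# e.g. a chip specced at a hypothetical 12 ns tRRD:
print(trrd_cycles(12, 133))  # 2 cycles at 133 MHz (266 MHz DDR)
print(trrd_cycles(12, 200))  # 3 cycles at 200 MHz (400 MHz DDR)
```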

Select 2 cycles whenever possible for optimal DDR DRAM performance.

Switch to 3 cycles only when there are stability problems with the 2 cycles setting.

In mobile devices like laptops, however, it is advisable to use the longer delay of 3 cycles.

Doing so limits the current surges that accompany row activations. This reduces the DDR device’s power consumption and thermal output, both of which should be of great interest to the road warrior.

 


 



Write Data In to Read Delay from The Tech ARP BIOS Guide!

Write Data In to Read Delay

Common Options : 1 Cycle, 2 Cycles

 

Write Data In to Read Delay : A Quick Review

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.

 

Write Data In to Read Delay : The Full Details

The Write Data In to Read Delay BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing.

This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device.

Please note that this is only applicable for read commands that follow a write operation. Consecutive read operations or writes that follow reads are not affected.

If a 1 Cycle delay is selected, every read command that follows a write operation will be delayed one clock cycle before it is issued.

The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance.

If a 2 Cycles delay is selected, every read command that follows a write operation will be delayed two clock cycles before it is issued.

The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible.

By default, this BIOS feature is set to 2 Cycles. This meets JEDEC’s specification of 2 clock cycles for write-to-read command delay in DDR400 memory modules. DDR266 and DDR333 memory modules require a write-to-read command delay of only 1 clock cycle.

It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.
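The recommendation reduces to a tiny lookup of the JEDEC tWTR values quoted above (module names as used in this article):

```python
# JEDEC write-to-read (tWTR) delay in clock cycles, per the article
JEDEC_TWTR = {"DDR266": 1, "DDR333": 1, "DDR400": 2}

def recommended_setting(module):
    """BIOS option matching the module's JEDEC tWTR requirement."""
    cycles = JEDEC_TWTR[module]
    return f"{cycles} Cycle" + ("s" if cycles > 1 else "")

print(recommended_setting("DDR333"))  # 1 Cycle
print(recommended_setting("DDR400"))  # 2 Cycles
```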

 


 



10GB + 12GB Samsung LPDDR4X uMCP Memory Details!

Samsung just announced that they have begun mass-producing LPDDR4X uMCP memory in 10GB and 12GB capacities!

Here is EVERYTHING you need to know about the new 10GB and 12GB Samsung LPDDR4X uMCP memory!

 

10GB + 12GB Samsung LPDDR4X uMCP Memory

The Samsung LPDDR4X uMCP memory is a UFS-based package that uses multiple LPDDR4X (low-power double data rate 4X) memory chips.

Samsung actually introduced a 12GB LPDDR4X uMCP memory about seven months ago. That package used six 16Gb LPDDR4X DRAM chips to achieve the 12 GB capacity.

Now, Samsung has a new higher-capacity 24Gb LPDDR4X DRAM chip – that’s 3 GB of memory on a single chip. This new chip uses the 1y-nanometer process technology, which can be either 14 nm or 16 nm.

The new 12GB Samsung LPDDR4X uMCP memory (KM8F8001MM) combines just four of the new 24Gb LPDDR4X DRAM chips, and ultra-fast eUFS 3.0 NAND storage into a single package.

With a data transfer speed of 4,266 Mbps, or 533 MB/s, this new Samsung LPDDR4X uMCP is designed to support 4K video recording, AI and machine learning capabilities in mid-range smartphones.

Samsung also created a 10GB LPDDR4X uMCP memory package using two 24Gb DRAM chips and two 16Gb DRAM chips.

 

10GB + 12GB Samsung LPDDR4X uMCP Specifications

12 GB Samsung LPDDR4X uMCP

  • Process Technology : 1y (14-16 nm)
  • LPDDR4X Capacity : 12 GB
  • Configuration : 4 x 24 Gb (3 GB) LPDDR4X + eUFS 3.0 NAND storage
  • Max. Transfer Rate : 4266 Mbps / 533 MB/s

10 GB Samsung LPDDR4X uMCP

  • Process Technology : 1y (14-16 nm)
  • LPDDR4X Capacity : 10 GB
  • Configuration : 2 x 24 Gb (3 GB) LPDDR4X + 2 x 16 Gb (2 GB) LPDDR4X + eUFS 3.0 NAND storage
  • Max. Transfer Rate : NA
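The package capacities follow from simple arithmetic on the chip densities (8 Gbit = 1 GB). A quick sketch using the configurations above:

```python
def umcp_capacity_gb(chips_gbit):
    """Total LPDDR4X capacity from a list of per-chip densities in Gbit."""
    return sum(chips_gbit) // 8  # 8 Gbit = 1 GB

print(umcp_capacity_gb([24, 24, 24, 24]))  # 12 GB package: four 24 Gb chips
print(umcp_capacity_gb([24, 24, 16, 16]))  # 10 GB package: two 24 Gb + two 16 Gb
```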

 


 



CPU / DRAM CLK Synch CTL – The Tech ARP BIOS Guide!

CPU / DRAM CLK Synch CTL

Common Options : Synchronous, Asynchronous, Auto

 

Quick Review of CPU / DRAM CLK Synch CTL

The CPU / DRAM CLK Synch CTL BIOS feature offers a clear-cut way of controlling the memory controller’s operating mode.

When set to Synchronous, the memory controller will set the memory clock to the same speed as the processor bus.

When set to Asynchronous, the memory controller will allow the memory clock to run at any speed.

When set to Auto, the operating mode of the memory controller will depend on the memory clock you set.

It is recommended that you select the Synchronous operating mode. This generally provides the best performance, even if your memory modules are capable of higher clock speeds.

 

Details of CPU / DRAM CLK Synch CTL

The memory controller can operate either synchronously or asynchronously.

In the synchronous mode, the memory clock runs at the same speed as the processor bus speed.

In the asynchronous mode, the memory clock is allowed to run at a different speed than the processor bus.

While the asynchronous mode allows the memory controller to support memory modules of different clock speeds, it requires the use of FIFO (First In, First Out) buffers and resynchronizers. This increases the latency of the memory bus, and reduces performance.

Running the memory controller in synchronous mode allows the memory controller to bypass the FIFO buffers and deliver data directly to the processor bus. This reduces the latency of the memory bus and greatly improves performance.

Normally, the synchronicity of the memory controller is determined by the memory clock. If the memory clock is the same as the processor bus speed, then the memory controller is in the synchronous mode. Otherwise, it is in the asynchronous mode.

The CPU / DRAM CLK Synch CTL BIOS feature, however, offers a more clear-cut way of controlling the memory controller’s operating mode.

When set to Synchronous, the memory controller will set the memory clock to the same speed as the processor bus. Even if you set the memory clock to run at a higher speed than the front side bus, the memory controller automatically selects a lower speed that matches the processor bus speed.

When set to Asynchronous, the memory controller will allow the memory clock to run at any speed. Even if you set the memory clock to run at a higher speed than the front side bus, the memory controller will not force the memory clock to match the processor bus speed.

When set to Auto, the operating mode of the memory controller will depend on the memory clock you set. If you set the memory clock to run at the same speed as the processor bus, the memory controller will operate in the synchronous mode. If you set the memory clock to run at a different speed, then the memory controller will operate in the asynchronous mode.
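A toy Python model of the three settings (the clock figures are illustrative, and a real controller deals in clock ratios, not bare numbers):

```python
def memory_controller_mode(setting, fsb_mhz, mem_mhz):
    """Resulting (mode, memory clock) under each setting, per the rules above."""
    if setting == "Synchronous":
        return ("synchronous", fsb_mhz)  # memory clock forced to match the FSB
    # Asynchronous and Auto leave the memory clock as you set it;
    # the controller is only synchronous if the clocks happen to match.
    mode = "synchronous" if mem_mhz == fsb_mhz else "asynchronous"
    return (mode, mem_mhz)

print(memory_controller_mode("Synchronous", 200, 233))  # ('synchronous', 200)
print(memory_controller_mode("Auto", 200, 233))         # ('asynchronous', 233)
```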

It is recommended that you select the Synchronous operating mode. This generally provides the best performance, even if your memory modules are capable of higher clock speeds.

 


 



RW Queue Bypass from The Tech ARP BIOS Guide

RW Queue Bypass

Common Options : Auto, 2X, 4X, 8X, 16X

 

Quick Review of RW Queue Bypass

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximise the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.

It is generally recommended that you set the RW Queue Bypass BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step-by-step until the problem resolves.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.

 

Details of RW Queue Bypass

The R/W Queue Bypass BIOS option is similar to the DCQ Bypass Maximum BIOS option – they both decide the limits on which an arbiter can intelligently reschedule memory accesses to improve performance.

The difference between the two is that DCQ Bypass Maximum does this at the memory controller level, while R/W Queue Bypass does it at the Device Control Interface (DCI) level.

To improve performance, the arbiter can reschedule transactions in the DCI read / write queue.

By allowing some transactions to bypass other transactions in the queue, the arbiter can maximize the number of hits on open memory pages.

This improves the overall memory performance but at the expense of some memory accesses which have to be delayed.

The RW Queue Bypass BIOS setting determines how many times the arbiter is allowed to bypass the oldest memory access request in the DCI’s read/write queue.

Once this limit is reached, the arbiter is overridden and the oldest memory access request is serviced instead.

As this feature greatly improves memory performance, most BIOSes will not include a Disabled setting.

Instead, you are allowed to adjust the number of times the arbiter is allowed to bypass the oldest memory access request in the queue.

A high bypass limit will give the arbiter more flexibility in scheduling memory accesses so that it can maximise the number of hits on open memory pages.

This improves the performance of the memory subsystem. However, this comes at the expense of memory access requests that get delayed. Such delays can be a problem for time-sensitive applications.

It is generally recommended that you set this BIOS feature to the maximum value of 16X, which would give the memory controller’s read-write queue arbiter maximum flexibility in scheduling memory access requests.

However, if you face stability issues, especially with time-sensitive applications, reduce the value step-by-step until the problem resolves.

The Auto option, if available, usually sets the bypass limit to the maximum – 16X.
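The bypass counter can be modelled with a toy arbiter in Python – the request IDs and page-hit flags below are invented, and a real arbiter works on hardware queues, not lists:

```python
from collections import deque

def schedule(requests, bypass_limit):
    """Toy DCI arbiter: let page-hit requests jump the queue, but once
    the oldest request has been bypassed bypass_limit times, service
    it anyway (preventing starvation)."""
    queue = deque(requests)  # each request: (id, is_page_hit)
    serviced, bypasses = [], 0
    while queue:
        hit = next((r for r in queue if r[1]), None)
        if hit is not None and hit is not queue[0] and bypasses < bypass_limit:
            queue.remove(hit)              # page hit bypasses the oldest request
            serviced.append(hit)
            bypasses += 1
        else:
            serviced.append(queue.popleft())  # oldest request forced through
            bypasses = 0
    return [r[0] for r in serviced]

reqs = [("A", False), ("B", True), ("C", True), ("D", True)]
print(schedule(reqs, bypass_limit=2))  # ['B', 'C', 'A', 'D']
```

With a limit of 2, the two page hits B and C bypass the older miss A, but A is then forced through before D.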


 



SDRAM PH Limit from the Tech ARP BIOS Guide

SDRAM PH Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM PH Limit

SDRAM PH Limit is short for SDRAM Page Hit Limit. The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM PH Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.


The SDRAM PH Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature determines the maximum number of consecutive page hits, irrespective of whether the page hits come from the same memory bank or different memory banks. The default value is often 8 consecutive page-hit accesses (erroneously described as cycles).

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.
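A toy sketch of the limit's effect on scheduling – the request stream is invented, and a real controller interleaves requests far less rigidly:

```python
from collections import deque

def service(requests, ph_limit):
    """Toy controller: serve pending page hits first, but after
    ph_limit consecutive page hits, serve one waiting non-hit."""
    hits = deque(r for r in requests if r[1])      # (id, is_page_hit)
    misses = deque(r for r in requests if not r[1])
    order, streak = [], 0
    while hits or misses:
        if hits and (streak < ph_limit or not misses):
            order.append(hits.popleft()[0])
            streak += 1
        else:
            order.append(misses.popleft()[0])  # non-hit gets its turn
            streak = 0
    return order

reqs = [(f"hit{i}", True) for i in range(5)] + [("miss", False)]
print(service(reqs, ph_limit=3))
# ['hit0', 'hit1', 'hit2', 'miss', 'hit3', 'hit4']
```

After three consecutive page hits, the waiting non-page-hit request is serviced instead of being starved.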


 


DOS Flat Mode from The Tech ARP BIOS Guide

DOS Flat Mode

Common Options : Enabled, Disabled

 

Quick Review of DOS Flat Mode

The DOS Flat Mode BIOS feature controls the BIOS’ built-in extended memory manager.

When enabled, DOS programs can run in protected mode without the need of an extended memory manager.

When disabled, DOS programs require an extended memory manager to run in protected mode.

It is recommended that you enable DOS Flat Mode if you use the MS-DOS operating system and run protected mode DOS programs.

However, if you use a newer operating system that supports protected mode (for example, Windows XP), disable DOS Flat Mode.

 

Details of DOS Flat Mode

In real mode, MS-DOS (Microsoft Disk Operating System) cannot address more than 1 MB. It is also restricted to memory segments of up to 64 KB in size.

These limitations can be circumvented by making use of extended memory managers. Such software allows DOS programs to make use of memory beyond 1 MB by running them in protected mode. This mode also removes the segmentation of memory and creates a flat memory model for the program to use.
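The 1 MB and 64 KB limits fall straight out of real mode's segment:offset addressing, which a few lines of Python can illustrate:

```python
def real_mode_address(segment, offset):
    """20-bit real-mode linear address: segment * 16 + offset.
    Both values are 16-bit, so each segment spans at most 64 KB
    and the reachable range tops out just past 1 MB."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

print(hex(real_mode_address(0xFFFF, 0xFFFF)))  # 0x10ffef - just past 1 MB
print(0xFFFF + 1)                              # 65536 bytes = the 64 KB segment limit
```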

Protected mode refers to a memory addressing mode supported since the Intel 80286 was introduced. It provides the following benefits over real mode :

  • Each program can be given its own protected memory area, hence, the name protected mode. In this protected memory area, the program is protected from interference by other programs.
  • Programs can now access more than 1 MB of memory (up to 4 GB). This allows the creation of bigger and more complex programs.
  • There is virtually no longer any segmentation of the memory. Memory segments can be up to 4 GB in size.

As you can see, running DOS programs in protected mode allows safer memory addressing, access to more memory and eliminates memory segmentation. Of course, the DOS program must be programmed to make use of protected mode. An extended memory manager, either within the program or as a separate program, must also be present for the program to run in protected mode. Otherwise, it will run in real mode.

Please note that even with an extended memory manager, MS-DOS does not actually run in protected mode. Only specifically-programmed DOS programs will run in protected mode. The processor switches between real mode and protected mode to handle both MS-DOS in real mode and the DOS program in protected mode.


This is where Gate A20 becomes critical to the computer’s performance. For more information on the effect of Gate A20 on the memory mode switching performance of the processor, please consult the Gate A20 Option BIOS feature.

The DOS Flat Mode BIOS feature controls the BIOS’ built-in extended memory manager.

When enabled, DOS programs can run in protected mode without the need of an extended memory manager.

When disabled, DOS programs require an extended memory manager to run in protected mode.

It is recommended that you enable DOS Flat Mode if you use the MS-DOS operating system and run protected mode DOS programs.

However, if you use a newer operating system that supports protected mode (for example, Windows XP), disable DOS Flat Mode.


 



Samsung Aquabolt – World’s Fastest HBM2 Memory Revealed!

2018-01-11 – Samsung Electronics today announced that it has started mass production of the Samsung Aquabolt – its 2nd-generation 8 GB High Bandwidth Memory-2 (HBM2) with the fastest data transmission speed on the market today. This is the industry’s first HBM2 to deliver a 2.4 Gbps data transfer speed per pin.

 

Samsung Aquabolt – World’s Fastest HBM2 Memory

Samsung’s new 8GB HBM2 delivers the highest level of DRAM performance, featuring a 2.4Gbps pin speed at 1.2V. This translates into a performance upgrade of nearly 50% per package, compared to the 1st-generation 8GB HBM2 package with its 1.6Gbps pin speed at 1.2V and 2.0Gbps at 1.35V.

With these improvements, a single Samsung 8GB HBM2 package will offer a 307 GB/s data bandwidth – 9.6X faster than an 8 Gb GDDR5 chip, which provides a 32 GB/s data bandwidth. Using four of the new HBM2 packages in a system will enable a 1.2 TB/s bandwidth. This improves overall system performance by as much as 50%, compared to a system that uses the first-generation 1.6 Gbps HBM2 memory.
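The bandwidth figures are straightforward arithmetic – per-pin speed times interface width. An HBM2 package has a 1024-bit interface; for the GDDR5 comparison, an 8 Gbps pin speed on a 32-bit chip is assumed here, since that reproduces the 32 GB/s the article quotes:

```python
def bandwidth_gbs(pin_speed_gbps, bus_width_bits):
    """Peak bandwidth = per-pin speed x bus width / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

hbm2 = bandwidth_gbs(2.4, 1024)  # Aquabolt package: 1024-bit interface
gddr5 = bandwidth_gbs(8.0, 32)   # assumed GDDR5 chip: 8 Gbps, 32-bit
print(hbm2, gddr5, round(hbm2 / gddr5, 1))  # 307.2 32.0 9.6
```

Four such packages give 4 x 307.2 GB/s, or roughly the 1.2 TB/s quoted above.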


 

How Samsung Created Aquabolt

To achieve Aquabolt’s unprecedented performance, Samsung has applied new technologies related to TSV design and thermal control.

A single 8GB HBM2 package consists of eight 8Gb HBM2 dies, which are vertically interconnected using over 5,000 TSVs (Through-Silicon Vias) per die. While using so many TSVs can cause collateral clock skew, Samsung succeeded in minimizing the skew to a very modest level, significantly enhancing chip performance in the process.

In addition, Samsung increased the number of thermal bumps between the HBM2 dies, which enables stronger thermal control in each package. Also, the new HBM2 includes an additional protective layer at the bottom, which increases the package’s overall physical strength.


 



DCLK Feedback Delay – The Tech ARP BIOS Guide

DCLK Feedback Delay

Common Options : 0 ps, 150 ps, 300 ps, 450 ps, 600 ps, 750 ps, 900 ps, 1050 ps

 

Quick Review of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

By comparing the waveforms of DCLK and its feedback signal, it can be determined whether both clocks are in the same phase. If they are not, this may cause loss of data and system instability.

The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There’s no performance advantage in increasing or reducing the amount of feedback delay.

 

Details of DCLK Feedback Delay

DCLK is the clock signal sent by the SDRAM controller to the clock buffer of the SDRAM module. The SDRAM module will send back a feedback signal via DCLKFB or DCLKWR.

This feedback signal is used by the SDRAM controller to determine when it can write data to the SDRAM module. The main idea of this system is to ensure that both clock phases are properly aligned for the proper delivery of data.

By comparing the waveforms of DCLK and its feedback signal, it can be determined whether both clocks are in the same phase. If they are not, this may cause loss of data and system instability.


The DCLK Feedback Delay BIOS feature allows you to fine-tune the phase alignment between DCLK and its feedback signal.

By default, it’s set to 0 ps or no delay.

If the clocks are not in phase, you can add appropriate amounts of delay (in picoseconds) to the DCLK feedback signal until both signals are in the same phase. Just increase the amount of delay until the system is stable.

However, if you are not experiencing any stability issues, it’s highly recommended that you leave the delay at 0 ps. There’s no performance advantage in increasing or reducing the amount of feedback delay.
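Conceptually, tuning this setting means picking the delay step that minimises the residual phase misalignment between DCLK and its feedback. A hypothetical sketch of that selection, where the measured phase offset and clock period are assumed example values (the BIOS does not actually expose such measurements):

```python
# Pick the smallest DCLK feedback delay option that minimises the residual
# phase misalignment. Offsets and periods are assumed example values.
DELAY_OPTIONS_PS = [0, 150, 300, 450, 600, 750, 900, 1050]

def pick_feedback_delay(phase_offset_ps: float, clock_period_ps: float) -> int:
    """Return the delay option leaving the smallest residual misalignment."""
    def residual(delay: int) -> float:
        # Total shift modulo one clock period, folded to the nearest edge.
        shift = (phase_offset_ps + delay) % clock_period_ps
        return min(shift, clock_period_ps - shift)
    return min(DELAY_OPTIONS_PS, key=residual)

# Example: a 7500 ps clock period with an assumed 600 ps early feedback signal.
print(pick_feedback_delay(-600, 7500))  # 600
```

In practice you do this by trial and error, as described above, but the model shows why the BIOS offers delay values in 150 ps steps across a fraction of the clock period.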

Go Back To > The Tech ARP BIOS Guide | Home

 



Asynclat (Asynchronous Latency) – The Tech ARP BIOS Guide

Asynclat

Common Options : 0 to 15 ns

 

Quick Review of Asynclat

Asynclat is an AMD processor-specific BIOS feature. It controls the amount of asynchronous latency, which depends on the time it takes for data to travel from the processor to the furthest DIMM on the motherboard and back. For your reference, AMD has a few conservative suggestions on setting the Asynclat BIOS feature.

Memory Type | Number Of DIMM Slots | 200 MHz | 166 MHz | 133 MHz | 100 MHz
------------|----------------------|---------|---------|---------|--------
Registered  | 8                    | 8 ns    | 8 ns    | 9 ns    | 9 ns
Unbuffered  | 4                    | 8 ns    | 8 ns    | 8 ns    | 8 ns
Unbuffered  | 3 or 4               | 7 ns    | 7 ns    | 7 ns    | 7 ns
Unbuffered  | 1 or 2               | 6 ns    | 6 ns    | 6 ns    | 6 ns

Do note that in this case, the distance of the furthest DIMM slot is considered analogous to the number of DIMM slots. The greater the number of DIMM slots available on the motherboard, the further the final slot is from the memory controller.

Also, these values are rough and conservative recommendations that assume that the furthest DIMM slot is occupied by a module. If your motherboard has four slots and you choose to populate only the first two slots, you could use a shorter asynchronous latency.

Generally, it is recommended that you stick with the asynchronous latency recommended by AMD (see table above) or your memory module’s manufacturer. You can, of course, adjust the amount of asynchronous latency according to the situation. For example, if you are overclocking the memory modules, or if you populate the first two slots of the four available DIMM slots; you can get away with a lower asynchronous latency.
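AMD’s suggestions above boil down to a simple lookup by memory type, slot count and memory clock. A minimal sketch of that lookup (the table values are from the article; the function name is illustrative):

```python
# AMD's conservative Asynclat suggestions from the table above, as a lookup.
# Keys: (memory type, number of DIMM slots); values: ns per memory clock (MHz).
ASYNCLAT_NS = {
    ("Registered", "8"):      {200: 8, 166: 8, 133: 9, 100: 9},
    ("Unbuffered", "4"):      {200: 8, 166: 8, 133: 8, 100: 8},
    ("Unbuffered", "3 or 4"): {200: 7, 166: 7, 133: 7, 100: 7},
    ("Unbuffered", "1 or 2"): {200: 6, 166: 6, 133: 6, 100: 6},
}

def suggested_asynclat(mem_type: str, slots: str, clock_mhz: int) -> int:
    """Return AMD's suggested asynchronous latency in nanoseconds."""
    return ASYNCLAT_NS[(mem_type, slots)][clock_mhz]

print(suggested_asynclat("Unbuffered", "1 or 2", 200))  # 6
print(suggested_asynclat("Registered", "8", 133))       # 9
```

Note how the value depends mainly on slot count (trace length), not clock speed, which matches the point that asynchronous latency accounts for round-trip signal travel time.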

 

Details of Asynclat

Asynclat is an AMD processor-specific BIOS feature. It controls the amount of asynchronous latency, which depends on the time it takes for data to travel from the processor to the furthest DIMM on the motherboard and back.

The asynchronous latency is designed to account for variances in the trace length to the furthest DIMM on the motherboard, as well as the type of DIMM, number of chips in that DIMM and the memory bus frequency. For your reference, AMD has a few conservative suggestions on setting the Asynclat BIOS feature.

Memory Type | Number Of DIMM Slots | 200 MHz | 166 MHz | 133 MHz | 100 MHz
------------|----------------------|---------|---------|---------|--------
Registered  | 8                    | 8 ns    | 8 ns    | 9 ns    | 9 ns
Unbuffered  | 4                    | 8 ns    | 8 ns    | 8 ns    | 8 ns
Unbuffered  | 3 or 4               | 7 ns    | 7 ns    | 7 ns    | 7 ns
Unbuffered  | 1 or 2               | 6 ns    | 6 ns    | 6 ns    | 6 ns

Do note that in this case, the distance of the furthest DIMM slot is considered analogous to the number of DIMM slots. The greater the number of DIMM slots available on the motherboard, the further the final slot is from the memory controller.

Also, these values are rough and conservative recommendations that assume that the furthest DIMM slot is occupied by a module. If your motherboard has four slots and you choose to populate only the first two slots, you could use a shorter asynchronous latency.


Naturally, the shorter the latency, the better the performance. However, if the latency is too short, it will not allow enough time for data to be returned from the furthest DIMM on the motherboard. This results in data corruption and system instability.

The optimal asynchronous latency varies from system to system. It depends on the motherboard design, where you install your DIMMs, the type of DIMM used and the memory bus speed selected. The only way to find the optimal asynchronous latency is trial and error, by starting with a high value and working your way down.

Generally, it is recommended that you stick with the asynchronous latency recommended by AMD (see table above) or your memory module’s manufacturer. You can, of course, adjust the amount of asynchronous latency according to the situation. For example, if you are overclocking the memory modules, or if you populate the first two slots of the four available DIMM slots; you can get away with a lower asynchronous latency.

Go Back To > The Tech ARP BIOS Guide | Home

 


SDRAM Page Hit Limit – The Tech ARP BIOS Guide

SDRAM Page Hit Limit

Common Options : 1 Cycle, 4 Cycles, 8 Cycles, 16 Cycles, 32 Cycles

 

Quick Review of SDRAM Page Hit Limit

The SDRAM Page Hit Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.

 

Details of SDRAM Page Hit Limit

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This is known as a page hit.

Normally, consecutive page hits offer the best memory performance for the requesting device. However, a flood of consecutive page hit requests can cause non-page hit requests to be delayed for an extended period of time. This does not allow fair system memory access to all devices and may cause problems for devices that generate non-page hit requests.


The SDRAM Page Hit Limit BIOS feature is designed to reduce the data starvation that occurs when pending non-page hit requests are unduly delayed. It does so by limiting the number of consecutive page hit requests that are processed by the memory controller before attending to a non-page hit request.

Please note that whatever you set for this BIOS feature will determine the maximum number of consecutive page hits, irrespective of whether the page hits are from the same memory bank or different memory banks. The default value is often 8 consecutive page hit accesses (described erroneously as cycles).

Generally, the default value of 8 Cycles should provide a balance between performance and fair memory access to all devices. However, you can try using a higher value (16 Cycles) for better memory performance by giving priority to a larger number of consecutive page hit requests. A lower value is not advisable as this will normally result in a higher number of page interruptions.
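The arbitration described above can be sketched as a simple scheduler: page-hit requests are serviced back-to-back until the limit is reached, at which point a pending non-page-hit request gets a turn. This is an illustrative model, not the chipset’s actual logic:

```python
from collections import deque

# Simplified sketch of page-hit limiting: the controller services consecutive
# page-hit requests only up to the configured limit, then must service a
# pending non-page-hit request before resuming.
def service_order(requests, page_hit_limit=8):
    """requests: list of 'hit' / 'miss' strings in arrival order."""
    hits = deque(r for r in requests if r == "hit")
    misses = deque(r for r in requests if r == "miss")
    order, consecutive = [], 0
    while hits or misses:
        if hits and (consecutive < page_hit_limit or not misses):
            order.append(hits.popleft())
            consecutive += 1
        else:
            order.append(misses.popleft())
            consecutive = 0
    return order

# With a limit of 2, a pending miss is serviced after every 2 hits.
print(service_order(["hit"] * 5 + ["miss"] * 2, page_hit_limit=2))
```

A higher limit favours throughput for the page-hitting device; a lower limit bounds how long non-page-hit requests can be starved, which is the trade-off the BIOS feature exposes.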

Go Back To > The Tech ARP BIOS Guide | Home

 


ISA Shared Memory – The Tech ARP BIOS Guide

ISA Shared Memory

Common Options : 64KB, 32KB, 16KB, Disabled

 

Quick Review

This is an ISA-specific BIOS option. In motherboards that have ISA slots, the BIOS can be set to reserve part of the upper memory area (UMA), which resides between 640 KB and 1 MB, for use by ISA expansion cards. This is particularly important for older operating systems like MS-DOS because it frees up conventional memory (the first 640 KB) for use by the operating system and applications.

If your ISA expansion card requires an upper memory area of 64 KB in size, select the 64KB option. Similarly, if it requires just 32 KB or 16 KB of upper memory, select the 32KB or 16KB options respectively.

If you are not sure how much memory your ISA expansion card requires, select 64KB. It will work with cards that only require 32 KB or 16 KB of upper memory. The rest of the reserved upper memory area will be left unused.

If you do not have any ISA expansion cards installed, leave it at its default setting of Disabled. This frees up the UMA for use by the operating system (or third-party memory managers) to store TSR (Terminate and Stay Resident) programs.

 

Details

This is an ISA-specific BIOS option. In motherboards that have ISA slots, the BIOS can be set to reserve part of the upper memory area (UMA), which resides between 640 KB and 1 MB, for use by ISA expansion cards. This is particularly important for older operating systems like MS-DOS because it frees up conventional memory (the first 640 KB) for use by the operating system and applications.

In truly old motherboards with multiple ISA slots, you will have the option to set both the memory address range and the reserved memory size. You may have to set jumpers on your ISA expansion cards to prevent two or more cards using the same memory address range, but this allows you to reserve segments of the upper memory area for your ISA expansion cards.

PCI-based motherboards have no need for so many ISA slots. Most, if not all, only have a single ISA slot for the rare ISA expansion card that had not yet been “ported” to the PCI bus. Therefore, their BIOS only has a single BIOS option, which allows you to set the size of the UMA to be reserved for the single ISA card. There’s no need to set the memory address range because there is only one ISA slot, so you need not worry about conflicts between multiple ISA cards. All you need to do is set the reserved memory size.


To do so, you will have to find out how much memory your ISA expansion card requires. Check the manual that came with the card. It ranges from 16 KB to 64 KB. Once you know how much upper memory your ISA expansion card requires, use this BIOS setting to reserve the memory segment for your card.
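The memory-map arithmetic behind this is straightforward. A small sketch, where the D0000h base address for the reserved window is a hypothetical example (the actual segment depends on the chipset and card):

```python
# Memory map arithmetic for the upper memory area (UMA).
CONVENTIONAL_END = 640 * 1024  # 0xA0000, top of conventional memory
UMA_END = 1024 * 1024          # 0x100000, the 1 MB boundary

reserved_size = 64 * 1024      # the 64KB BIOS option
base = 0xD0000                 # assumed window for the ISA card

# The reserved window must fit entirely within the UMA.
assert CONVENTIONAL_END <= base and base + reserved_size <= UMA_END

print(f"UMA size      : {(UMA_END - CONVENTIONAL_END) // 1024} KB")     # 384 KB
print(f"Reserved range: {base:05X}h-{base + reserved_size - 1:05X}h")   # D0000h-DFFFFh
```

This also shows why reserving 64 KB is a meaningful chunk: the whole UMA is only 384 KB, shared with video memory, the system BIOS and any option ROMs.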

If your ISA expansion card requires an upper memory area of 64 KB in size, select the 64KB option. Similarly, if it requires just 32 KB or 16 KB of upper memory, select the 32KB or 16KB options respectively.

If you are not sure how much memory your ISA expansion card requires, select 64KB. It will work with cards that only require 32 KB or 16 KB of upper memory. The rest of the reserved upper memory area will be left unused.

If you do not have any ISA expansion cards installed, leave it at its default setting of Disabled. This frees up the UMA for use by the operating system (or third-party memory managers) to store TSR (Terminate and Stay Resident) programs.

Go Back To > The Tech ARP BIOS Guide | Home

 


In-Order Queue Depth – The BIOS Optimization Guide

In-Order Queue Depth

Common Options : 1, 4, 8, 12

 

Quick Review of In-Order Queue Depth

The In-Order Queue Depth BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12).

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.

Interestingly, this feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

 

Details of In-Order Queue Depth

For greater performance at high clock speeds, motherboard chipsets now feature a pipelined processor bus. The multiple stages in this pipeline can also be used to queue up multiple commands to the processor. This command queuing greatly improves performance because it effectively masks the latency of the processor bus. In optimal situations, the amount of latency between each succeeding command can be reduced to only a single clock cycle!

The In-Order Queue Depth BIOS feature controls the use of the processor bus’ command queue. Normally, there are only two options available. Depending on the motherboard chipset, the options could be (1 and 4), (1 and 8) or (1 and 12). This is because this BIOS feature does not actually allow you to select the number of commands that can be queued.

It merely allows you to disable or enable the command queuing capability of the processor bus pipeline. This is because the number of commands that can be queued depends entirely on the number of stages in the pipeline. As such, you can expect to see this feature associated with options like Enabled and Disabled in some motherboards.

The first queue depth option is always 1, which prevents the processor bus pipeline from queuing any outstanding commands. If selected, each command will only be issued after the processor has finished with the previous one. Therefore, every command will incur the maximum amount of latency. This varies from 4 clock cycles for a 4-stage pipeline to 12 clock cycles for pipelines with 12 stages.

As you can see, this reduces performance as the processor has to wait for each command to filter down the pipeline. The severity of the effect depends greatly on the depth of the pipeline. The deeper the pipeline, the greater the effect.

If the second queue depth option is 4, 8 or 12, this means the processor bus pipeline has that number of stages in it. Selecting this option allows the queuing of up to 4, 8 or 12 commands respectively in the pipeline. Each command can then be processed successively with a latency of only 1 clock cycle.

Please note that the latency of only 1 clock cycle is only possible if the pipeline is completely filled up. If the pipeline is only partially filled up, then the latency affecting one or more of the commands will be more than 1 clock cycle. Still, the average latency for each command will be much lower than it would be with command queuing disabled.
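The effect of command queuing on throughput can be quantified with a simple pipeline model. This is a back-of-the-envelope sketch under the idealised assumption that the pipeline stays full:

```python
# Cycles to complete a stream of commands through the processor bus pipeline.
def total_cycles(commands: int, pipeline_depth: int, queuing: bool) -> int:
    if queuing:
        # Fill the pipeline once, then one command completes per cycle.
        return pipeline_depth + (commands - 1)
    # Without queuing, every command pays the full pipeline latency.
    return commands * pipeline_depth

# 100 commands through a 12-stage pipeline:
print(total_cycles(100, 12, queuing=False))  # 1200
print(total_cycles(100, 12, queuing=True))   # 111
```

This illustrates the point about deep pipelines: with a 12-stage pipeline, disabling queuing costs roughly 11x the cycles, which is unlikely to be recovered by any extra overclocking headroom.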

In most cases, it is highly recommended that you enable command queuing by selecting the option of 4 / 8 / 12 or in some cases, Enabled. This allows the processor bus pipeline to mask its latency by queuing outstanding commands. You can expect a significant boost in performance with this feature enabled.


Interestingly, this feature can also be used as an aid in overclocking the processor. Although the queuing of commands brings with it a big boost in performance, it may also make the processor unstable at overclocked speeds. To overclock beyond what’s normally possible, you can try disabling command queuing. This may reduce performance but it will make the processor more stable and may allow it to be further overclocked.

But please note that the performance deficit associated with deeper pipelines (8 or 12 stages) may not be worth the increase in processor overclockability. This is because the deep processor bus pipelines have very long latencies.

If they are not masked by command queuing, the processor may be stalled so badly that you may end up with poorer performance even if you are able to further overclock the processor. So, it is recommended that you enable command queuing for deep pipelines, even if it means reduced overclockability.

Go Back To > The BIOS Optimization Guide | Home

 


Idle Cycle Limit – The BIOS Optimization Guide

Idle Cycle Limit

Common Options : 0T, 16T, 32T, 64T, 96T, Infinite, Auto

 

Quick Review of Idle Cycle Limit

The Idle Cycle Limit BIOS feature sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

 

Details of Idle Cycle Limit

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.
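The relative cost of hits, misses and conflicts can be made concrete with generic SDRAM timing parameters. The values below are assumed examples (in memory clock cycles), not figures from any particular module:

```python
# Illustrative access latencies using generic, assumed SDRAM timings.
tCL, tRCD, tRP = 3, 3, 3  # CAS latency, RAS-to-CAS delay, row precharge

page_hit      = tCL               # row already open: just the column access
page_miss     = tRCD + tCL        # no open row: activate row, then column access
page_conflict = tRP + tRCD + tCL  # wrong row open: precharge, activate, access

print(page_hit, page_miss, page_conflict)  # 3 6 9
```

With these numbers a page conflict costs three times as long as a page hit, which is why the balance between keeping pages open (more hits) and closing them early (fewer conflicts) matters.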

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles.


This is where the Idle Cycle Limit BIOS feature comes in. It sets the number of idle cycles that is allowed before the memory controller forces open pages to close and precharge. It is based on the concept of temporal locality.

According to this concept, the longer the open page is left idle, the less likely it will be accessed again before it needs to be closed and the bank precharged. Therefore, it would be better to prematurely close the page and precharge the bank so that the next page can be opened quickly when a data request comes along.

The Idle Cycle Limit BIOS option can be set to a variety of clock cycles from 0T to 96T. This determines the number of clock cycles open pages are allowed to idle for before they are closed and the bank precharged. The default value is 16T which forces the memory controller to close the open pages once sixteen idle cycles have passed.

Increasing this BIOS feature to more than the default of 16T forces the memory controller to keep the activated pages opened longer during times of no activity. This allows for quicker data access if the next data request can be satisfied by the open pages.

However, this is limited by the refresh cycle already set by the BIOS. This means the open pages will automatically close when the memory bank needs to be refreshed, even if the number of idle cycles has not reached the Idle Cycle Limit. So, this BIOS option can only be used to force the precharging of the memory bank before the set refresh cycle but not to actually delay the refresh cycle.

Reducing the number of cycles from the default of 16T to 0T forces the memory controller to close all open pages once there are no data requests. In short, the open pages are closed and the banks precharged as soon as there are no further data requests. This may increase the efficiency of the memory subsystem by masking the bank precharge during idle cycles. However, prematurely closing the open pages may convert what could have been a page hit (and satisfied immediately) into a page miss which will have to wait for the bank to precharge and the same page reopened.

Because refreshes do not occur that often (usually only about once every 64 msec), the impact of refreshes on memory performance is really quite minimal. The apparent benefits of masking the refreshes during idle cycles will not be noticeable, especially since memory systems these days already use bank interleaving to mask refreshes.
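A rough calculation shows just how small the refresh overhead is. The figures below are typical JEDEC-style values, assumed for illustration:

```python
# Rough refresh overhead, showing why masking refreshes gains little.
rows_per_bank = 8192   # rows to refresh every retention period (assumed)
retention_ms  = 64     # all rows refreshed once per 64 ms
refresh_ns    = 75     # assumed time one refresh cycle blocks the bank

busy_ns  = rows_per_bank * refresh_ns
total_ns = retention_ms * 1_000_000
print(f"Refresh overhead: {100 * busy_ns / total_ns:.2f}%")  # 0.96%
```

Under one percent of memory time is spent refreshing, so hiding refreshes behind idle cycles offers little measurable benefit, especially with bank interleaving already masking them.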

With a 0T setting, data requests are also likely to get stalled because even a single idle cycle will cause the memory controller to close all open pages! In desktop applications, most memory reads follow the spatial locality concept where if one data bit is read, chances are high that the next data bit will also need to be read. That’s why closing open pages prematurely with a low Idle Cycle Limit will most likely cause reduced performance in desktop applications.

On the other hand, using a low Idle Cycle Limit of 0T or 16T will ensure that the memory cells are refreshed more often, thereby preventing the loss of data due to insufficiently refreshed memory cells. Forcing the memory controller to close open pages more often will also ensure that in the event of a very long read, the pages can be opened long enough to fulfil the data request.

If you select Infinite, the memory controller will never precharge the open pages prematurely. The open pages will be left activated until they need to be closed for a bank precharge.

If you select Auto, the memory controller will use the manufacturer’s preset default setting. Most manufacturers use a default value of 16T, which forces the memory controller to close the open pages once sixteen idle cycles have passed.

For general desktop use, it is recommended that you set this feature to 16T. It is important to keep the pages open for some time, to improve the chance of page hits. Yet, they should not be kept open too long as temporal locality dictates that the longer a page is kept idle, the less likely the next data request will require data from it.

Alternatively, you can greatly increase the value of the Refresh Interval or Refresh Mode Select feature to boost bandwidth and use this BIOS feature to maintain the data integrity of the memory cells. As ultra-long refresh intervals (e.g. 64 or 128 µsec) can cause memory cells to lose their contents, setting a low Idle Cycle Limit like 0T or 16T allows the memory cells to be refreshed more often, with a high chance of those refreshes being done during idle cycles.


This appears to combine the best of both worlds – a long bank active period when the memory controller is being stressed and more refreshes when the memory controller is idle. However, this is not a reliable way of ensuring sufficient refresh cycles since it depends on the vagaries of memory usage to provide sufficient idle cycles to trigger the refreshes.

If your memory subsystem is under extended load, there may not be any idle cycle to trigger an early refresh. This may cause the memory cells to lose their contents. Therefore, it is still recommended that you maintain a proper refresh interval and set this feature to 16T for desktops.

For applications (e.g. servers) that perform a lot of random accesses, it is advisable that you select 0T as subsequent data requests would most likely be fulfilled by pages other than the ones currently open. Closing those open pages will force the bank to precharge earlier, allowing faster accesses to the other pages for the next data request. There’s also the added benefit of increased data integrity due to more frequent refreshes.

Go Back To > The BIOS Optimization Guide | Home

 


DRAM Termination – The BIOS Optimization Guide

DRAM Termination

Common Options : 50 Ohms, 75 Ohms, 150 Ohms (DDR2) / 40 Ohms, 60 Ohms, 120 Ohms (DDR3)

 

Quick Review of DRAM Termination

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improves signal quality. However, this comes at the expense of a smaller voltage swing for the signal and higher power consumption.

The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for the particular set of memory modules. If you are unable to obtain that information, you can also follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory :

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.
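The guidelines above reduce to a small decision rule. A sketch of that rule, where the DDR3 fallback of 40 ohms reflects the try-low-first advice rather than any Samsung figure, and the function name is illustrative:

```python
# Samsung's DDR2 ODT guidelines (and the DDR3 start-low advice) as a lookup.
def suggested_odt_ohms(ddr_gen: int, modules_per_channel: int, speed: str = "") -> int:
    if ddr_gen == 2:
        if modules_per_channel == 1:
            return 150                                  # single module per channel
        return 75 if speed in ("DDR2-400", "DDR2-533") else 50  # two modules
    return 40  # DDR3: start at the lowest impedance, raise if unstable

print(suggested_odt_ohms(2, 1))              # 150
print(suggested_odt_ohms(2, 2, "DDR2-533"))  # 75
print(suggested_odt_ohms(2, 2, "DDR2-667"))  # 50
```

Note the pattern: more modules and higher speeds call for lower impedance, because stronger termination is needed to absorb the additional reflections.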

 

Details of DRAM Termination

Like a ball thrown against a wall, electrical signals reflect (bounce) back when they reach the end of a transmission path. They also reflect at points where there is a change in impedance, e.g. at connections to DRAM devices or a bus. These reflected signals are undesirable because they distort the actual signal, impairing the signal quality and the data being transmitted.

Prior to the introduction of DDR2 memory, motherboard designers used line termination resistors at the end of the DRAM signal lines to reduce signal reflections. However, these resistors are only partially effective because they cannot reduce reflections generated by the stub lines that lead to the individual DRAM chips on the memory module (see illustration below). Even so, this method worked well enough with the lower operating frequency and higher signal voltages of SDRAM and DDR SDRAM modules.

Line termination resistors on the motherboard (Courtesy of Rambus)

However, the higher speeds and lower signal voltages of DDR2 and DDR3 memory require much better signal quality, and these high-speed modules have much lower tolerances for noise. The problem is compounded by the higher number of memory modules used. Line termination resistors are no longer good enough to tackle the problem of signal reflections. This is where On-Die Termination (ODT) comes in.

On-Die Termination shifts the termination resistors from the motherboard to the DRAM die itself. These resistors can better suppress signal reflections, providing a much better signal-to-noise ratio in DDR2 and DDR3 memory. This allows for much higher clock speeds at much lower voltages.

It also reduces the cost of motherboard designs. In addition, the impedance value of the termination resistors can be adjusted, or even turned off via the memory module’s Extended Mode Register Set (EMRS).

On-die termination (Courtesy of Rambus)

Unlike the termination resistors on the motherboard, the on-die termination resistors can be turned on and off as required. For example, when a DIMM is inactive, its on-die termination resistors turn on to prevent signals from the memory controller from reflecting to the active DIMMs. The impedance value of the resistors is usually programmed by the BIOS at boot-time, so the memory controller only turns them on or off (unless the system includes a self-calibration circuit).
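As an illustration of how a BIOS might program the termination value, here is a Python sketch of the DDR2 EMRS(1) encoding, in which the Rtt value is carried in address bits A2 and A6. The function and bit manipulation are our own illustration; verify the exact bit mapping against the JEDEC DDR2 specification or the DRAM datasheet:

```python
# Sketch of encoding the Rtt (ODT) value into the DDR2 Extended Mode
# Register (EMRS1). Per the JEDEC DDR2 definition, the termination
# value is carried in address bits A2 and A6; treat the mapping below
# as illustrative and check the datasheet for your DRAM.

RTT_BITS = {          # ohms -> (A6, A2)
    None: (0, 0),     # ODT disabled
    75:   (0, 1),
    150:  (1, 0),
    50:   (1, 1),
}

def encode_emrs1_rtt(base_emrs1: int, rtt_ohms) -> int:
    """Clear A2/A6 in an existing EMRS1 value, then set the new Rtt."""
    a6, a2 = RTT_BITS[rtt_ohms]
    value = base_emrs1 & ~((1 << 6) | (1 << 2))
    return value | (a6 << 6) | (a2 << 2)

print(hex(encode_emrs1_rtt(0x0, 50)))   # 0x44 (A6 and A2 set)
```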

The DRAM Termination BIOS option controls the impedance value of the DRAM on-die termination resistors. DDR2 modules support impedance values of 50 ohms, 75 ohms and 150 ohms, while DDR3 modules support lower impedance values of 40 ohms, 60 ohms and 120 ohms.

A lower impedance value improves the resistor’s ability to absorb signal reflections and thus improve signal quality. However, this comes at the expense of a smaller voltage swing for the signal, and higher power consumption.


The proper amount of impedance depends on the memory type and the number of DIMMs used. Therefore, it is best to contact the memory manufacturer to find out the optimal amount of impedance for your particular set of memory modules. If you are unable to obtain that information, you can follow these guidelines from a Samsung case study on the On-Die Termination of DDR2 memory :

  • Single memory module / channel : 150 ohms
  • Two memory modules / channels
    • DDR2-400 / 533 memory : 75 ohms
    • DDR2-667 / 800 memory : 50 ohms

Unfortunately, they did not perform any case study on the On-Die Termination of DDR3 memory. As such, the best thing to do if you are using DDR3 memory is to try using a low impedance of 40 ohms and adjust upwards if you face any stability issues.

Go Back To > The BIOS Optimization Guide | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

SDRAM Precharge Control – The BIOS Optimization Guide

SDRAM Precharge Control

Common Options : Enabled, Disabled

 

Quick Review of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank. This is useful in cases where subsequent data requests will also result in page misses.

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity although it is only useful if you have chosen a SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.

 

Details of SDRAM Precharge Control

This BIOS feature is similar to SDRAM Page Closing Policy.

The memory controller allows up to four pages to be opened at any one time. These pages have to be in separate memory banks and only one page may be open in each memory bank. If a read request to the SDRAM falls within those open pages, it can be satisfied without delay. This naturally improves performance.

But if a read request cannot be satisfied by any of the four open pages, there are two possibilities: either one page is closed and the correct page opened, or all open pages are closed and new pages opened. Either way, the read request suffers the full latency penalty.


The SDRAM Precharge Control BIOS feature determines if the chipset should try to leave the pages open (by closing just one open page) or try to keep them closed (by closing all open pages) whenever there is a page miss.

When enabled, the memory controller will only close one page whenever a page miss occurs. This allows the other open pages to be accessed at the cost of only one clock cycle.

However, when a page miss occurs, there is a chance that subsequent data requests will result in page misses as well. In long memory reads that cannot be satisfied by any of the open pages, this may cause up to four full latency reads to occur. Naturally, this greatly impacts memory performance.

Fortunately, after the four full latency reads, the memory controller can often predict which pages will be needed next. It can then open them for minimum latency reads. This somewhat reduces the negative effect of consecutive page misses.

When disabled, the memory controller will send an All Banks Precharge Command to the SDRAM interface whenever there is a page miss. This causes all the open pages to close (precharge). Therefore, subsequent reads only need to activate the necessary memory bank.

This is useful in cases where subsequent data requests will also result in page misses. This is because the memory banks will already be precharged and ready to be activated. There is no need to wait for the memory banks to precharge before they can be activated. However, it also means that you won’t be able to benefit from data accesses that could have been satisfied by the previously opened pages.
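The trade-off between the two settings can be illustrated with a toy model. The costs below are arbitrary relative figures of our own, not real chipset timings; the point is that an access stream which keeps returning to a previously opened page rewards the close-one-page policy:

```python
# Toy model of the two precharge policies on a page miss. Each of the
# four banks can hold one open page; the cost figures are arbitrary
# relative values for illustration, not real chipset timings.

PAGE_HIT, FULL_LATENCY = 1, 7   # assumed relative access costs

def access_cost(open_pages, bank, page, close_all):
    """Return a relative cost and update the open-page table."""
    if open_pages.get(bank) == page:
        return PAGE_HIT                       # page hit: fastest case
    if close_all:
        open_pages.clear()                    # All Banks Precharge
    else:
        open_pages.pop(bank, None)            # close just the one page
    open_pages[bank] = page
    return FULL_LATENCY                       # page miss: full penalty

# A stream that keeps returning to bank 0, page 5 rewards the
# "close one page" policy, because the other open pages survive:
stream = [(0, 5), (1, 3), (0, 5), (2, 7), (0, 5)]
for close_all in (False, True):
    pages = {}
    total = sum(access_cost(pages, b, p, close_all) for b, p in stream)
    print("close all:" if close_all else "close one:", total)
```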

As you can see, both settings have their advantages and disadvantages. But you should see better performance with this feature enabled as the open pages allow very fast accesses. Disabling this feature, however, has the advantage of keeping the memory contents refreshed more often. This improves data integrity although it is only useful if you have chosen a SDRAM refresh interval that is longer than the standard 64 msec.

Therefore, it is recommended that you enable this feature for better memory performance. Disabling this feature can improve data integrity but if you are keeping the SDRAM refresh interval within specification, then it is of little use.


 


LD-Off Dram RD/WR Cycles – The BIOS Optimization Guide

LD-Off Dram RD/WR Cycles

Common Options : Delay 1T, Normal

 

Quick Review

The LD-Off Dram RD/WR Cycles BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.

 

Details

At the beginning of a memory transaction (read or write), the memory controller normally sends the address and command signals simultaneously to the memory bank. This allows for the quickest activation of the memory bank.

However, this may cause problems with certain memory modules. In these memory modules, the target row may not be activated quickly enough to allow the memory controller to read from or write to it. This is where the LD-Off Dram RD/WR Cycles BIOS feature comes in.


This BIOS feature controls the lead-off time for the memory read and write cycles.

When set to Delay 1T, the memory controller issues the memory address first. The read or write command is only issued after a delay of one clock cycle. This ensures there is enough time for the memory bank to be activated before the read or write command arrives.

When set to Normal, the memory controller issues both memory address and read/write command simultaneously.

It is recommended that you select the Normal option for better performance. Select the Delay 1T option only if you have stability issues with your memory modules.


 


Auto Detect DIMM/PCI Clk – The BIOS Optimization Guide

Auto Detect DIMM/PCI Clk

Common Options : Enabled, Disabled

 

Quick Review

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI (Electromagnetic Interference) and reduce power consumption by turning off unoccupied or inactive PCI and memory slots.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

It is recommended that you enable this feature to save power and reduce EMI.


 

Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted. To reduce this problem, the motherboard can either modulate the pulses (see Spread Spectrum) or turn off unused AGP, PCI or memory clock signals.

The Auto Detect DIMM/PCI Clk BIOS feature determines whether the motherboard should actively reduce EMI and reduce power consumption by turning off unoccupied or inactive PCI and memory slots. It is similar to the Smart Clock option of the Spread Spectrum BIOS feature.

When enabled, the motherboard will query the PCI and memory (DIMM) slots when it boots up, and automatically turn off clock signals to unoccupied slots. It will also turn off clock signals to occupied PCI and memory slots, but only when there is no activity.

When disabled, the motherboard will not turn off clock signals to any PCI or memory (DIMM) slots, even if they are unoccupied or inactive.

This method allows you to reduce the motherboard’s EMI levels without compromising system stability. It also allows the motherboard to reduce power consumption because the clock signals will only be generated for PCI and memory slots that are occupied and active.

The choice of whether to enable or disable this feature is really up to your personal preference. But since this feature reduces EMI and power consumption without compromising system stability, it is recommended that you enable it.

 


CPU Hardware Prefetch – The BIOS Optimization Guide

CPU Hardware Prefetch

Common Options : Enabled, Disabled

 

Quick Review

The processor has a hardware prefetcher that automatically analyzes its requirements and prefetches data and instructions from the memory into the Level 2 cache that are likely to be required in the near future. This reduces the latency associated with memory reads.

When enabled, the processor’s hardware prefetcher will be enabled and allowed to automatically prefetch data and code for the processor.

When disabled, the processor’s hardware prefetcher will be disabled.

If you are using a C1 (or newer) stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, it is recommended that you enable this BIOS feature so that the hardware prefetcher is enabled for maximum performance.

But if you are using an older stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, then you should disable the CPU Hardware Prefetch BIOS feature to circumvent the O37 bug, which causes data corruption when the hardware prefetcher is operational.

 

Details

CPU Hardware Prefetch is a BIOS feature specific to processors based on the Intel NetBurst microarchitecture (e.g. Intel Pentium 4 and Intel Pentium 4 Xeon).

These processors have a hardware prefetcher that automatically analyzes the processor’s requirements and prefetches data and instructions from the memory into the Level 2 cache that are likely to be required in the near future. This reduces the latency associated with memory reads.

When it works, the hardware prefetcher does a great job of keeping the processor loaded with code and data. However, it doesn’t always work right.

Prior to the C1 stepping of the Intel Pentium 4 and Intel Pentium 4 Xeon, these processors shipped with a bug that caused data corruption when the hardware prefetcher was enabled. According to Intel, Errata O37 causes the processor to “use stale data from the cache while the Hardware Prefetcher is enabled”.
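If you are unsure which stepping your processor reports, you can check it in software. This is a hypothetical Python sketch that parses the stepping field from /proc/cpuinfo-style text on Linux; mapping the reported stepping number to a letter stepping like C1 is model-specific, so consult Intel's specification update for your processor:

```python
# Sketch of reading the CPU stepping from /proc/cpuinfo-style text.
# The parsing is illustrative; mapping the numeric stepping to a
# letter stepping such as "C1" depends on the exact processor model.

def cpu_stepping(cpuinfo_text: str) -> int:
    """Return the first 'stepping' value found in the given text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("stepping"):
            return int(line.split(":")[1])
    raise ValueError("no stepping field found")

sample = "model name\t: Intel(R) Pentium(R) 4 CPU\nstepping\t: 9\n"
print(cpu_stepping(sample))   # 9
```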

Unfortunately, the only solution for the affected processors is to disable the hardware prefetcher. This is where the CPU Hardware Prefetch BIOS feature comes in.


When enabled, the processor’s hardware prefetcher will be enabled and allowed to automatically prefetch data and code for the processor.

When disabled, the processor’s hardware prefetcher will be disabled.

If you are using a C1 (or newer) stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, it is recommended that you enable this BIOS feature so that the hardware prefetcher is enabled for maximum performance.

But if you are using an older stepping of the Intel Pentium 4 or Intel Pentium 4 Xeon processor, then you should disable this BIOS feature to circumvent the O37 bug, which causes data corruption when the hardware prefetcher is operational.

 


Rank Interleave – The BIOS Optimization Guide

Rank Interleave

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.

Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 

Details

Rank is a new term used to differentiate physical banks on a particular memory module from internal banks within the memory chip. Single-sided memory modules have a single rank while double-sided memory modules have two ranks.

This BIOS feature is similar to SDRAM Bank Interleave. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. The only difference is that Rank Interleave works between different physical banks or, as they are called now, ranks.


Since a minimum of two ranks are required for interleaving to be supported, double-sided memory modules are a must if you wish to enable this BIOS feature. Enabling Rank Interleave with single-sided memory modules will not result in any performance boost.

Please note that Rank Interleave currently works only if you are using double-sided memory modules. Rank Interleave will not work with two or more single-sided memory modules. The interleaving ranks must be on the same memory module.
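To illustrate the idea, here is a Python sketch of how consecutive cache lines might map to ranks with and without interleaving. The cache-line size and the bit position used for the rank are arbitrary assumptions of our own; real chipsets choose the interleave boundary differently:

```python
# Illustrative mapping of consecutive memory accesses across two ranks.
# With rank interleave, alternating cache lines land on alternating
# ranks, so one rank can refresh or precharge while the other is being
# accessed. The cache-line size and bit position here are assumptions.

CACHE_LINE = 64  # bytes (assumed)

def rank_of(address: int, interleaved: bool) -> int:
    if not interleaved:
        # Linear mapping: the lower half of memory is rank 0.
        return 0 if address < (1 << 27) else 1
    # Interleaved mapping: the rank alternates every cache line.
    return (address // CACHE_LINE) & 1

addresses = [n * CACHE_LINE for n in range(4)]
print([rank_of(a, interleaved=True) for a in addresses])   # [0, 1, 0, 1]
print([rank_of(a, interleaved=False) for a in addresses])  # [0, 0, 0, 0]
```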

It is highly recommended that you enable Rank Interleave for better memory performance. You can also enable this BIOS feature if you are using a mixture of single- and double-sided memory modules. But if you are using only single-sided memory modules, it’s advisable to disable Rank Interleave.

 


Digital Locked Loop (DLL) – The BIOS Optimization Guide

Digital Locked Loop (DLL)

Common Options : Enabled, Disabled

 

Quick Review

The Digital Locked Loop (DLL) BIOS option is a misnomer for the Delay-Locked Loop (DLL). It is a digital circuit that aligns the data strobe signal (DQS) with the data signal (DQ) to ensure proper data transfer in DDR, DDR2, DDR3 and DDR4 memory. However, it can be disabled to allow the memory chips to run beyond a fixed frequency range.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Details

DDR, DDR2, DDR3 and DDR4 SDRAM deliver data on both rising and falling edges of the signal. This requires much tighter timings, necessitating the use of a data strobe signal (DQS) generated by differential clocks. This data strobe is then aligned to the data signal (DQ) using a delay-locked loop (DLL) circuit.

The DQS and DQ signals must be aligned with minimal skew to ensure proper data transfer. Otherwise, data transferred on the DQ signal will be read incorrectly, causing the memory contents to be corrupted and the system to malfunction.
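A rough calculation shows why this alignment matters more as speeds rise. In this illustrative Python sketch (our own simplification), the data-valid window on a double data-rate bus is approximated as half a clock period, so the tolerable DQS-to-DQ skew shrinks as the data rate climbs:

```python
# Why timing requirements tighten with clock speed: a double data-rate
# bus transfers on both clock edges, so each data eye is roughly half
# a clock period wide. This is a simplified model for illustration.

def data_window_ns(data_rate_mt_s: float) -> float:
    clock_mhz = data_rate_mt_s / 2          # DDR transfers twice per clock
    period_ns = 1000.0 / clock_mhz
    return period_ns / 2                    # one data eye per half period

for rate in (400, 800, 1600):
    print(f"DDR-{rate}: ~{data_window_ns(rate):.3f} ns per data eye")
```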

However, the delay-locked loop circuit of every DDR, DDR2, DDR3 or DDR4 chip is tuned for a certain fixed frequency range. If you run the chip beyond that frequency range, the DLL circuit may not work correctly. That is why DDR, DDR2, DDR3 and DDR4 SDRAM chips can have problems running at clock speeds slower than what they are rated for.


If you encounter such a problem, it is possible to disable the DLL. Disabling the DLL will allow the chip to run beyond the frequency range for which the DLL is tuned. This is where the Digital Locked Loop (DLL) BIOS feature comes in.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

Note : The Digital Locked Loop (DLL) BIOS option is a misnomer of the Delay-Locked Loop (DLL).

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 


AGP ISA Aliasing – The BIOS Optimization Guide

AGP ISA Aliasing

Common Options : Enabled, Disabled

 

Quick Review

The AGP ISA Aliasing BIOS feature allows you to determine if the system controller will perform ISA aliasing to prevent conflicts between ISA devices.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10]. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 

Details

The origin of the AGP ISA Aliasing feature can be traced back all the way to the original IBM PC. When the IBM PC was designed, it only had ten address lines (10-bits) for I/O space allocation. Therefore, the I/O space back in those days was only 1KB or 1024 bytes in size. Out of those 1024 available addresses, the first 256 addresses were reserved exclusively for the motherboard’s use, leaving the last 768 addresses for use by add-in devices. This would become a critical factor later on.

Later, motherboards began to utilize 16 address lines for I/O space allocation. This was supposed to create a contiguous I/O space of 64KB in size. Unfortunately, many ISA devices by then were only capable of doing 10-bit decodes. This was because they were designed for computers based on the original IBM design which only supported 10 address lines.

To circumvent this problem, they fragmented the 64KB I/O space into 1KB chunks. Unfortunately, because the first 256 addresses must be reserved exclusively for the motherboard, this means that only the first (or lower) 256 bytes of each 1KB chunk would be decoded in full 16-bits. All 10-bits-decoding ISA devices are, therefore, restricted to the last (or top) 768 bytes of the 1KB chunk of I/O space.

As a result, such ISA devices only have 768 I/O locations to use. Because there were so many ISA devices back then, this limitation created a lot of compatibility problems because the chances of two ISA cards using the same I/O space were high. When that happened, one or both of the cards would not work. Although they tried to reduce the chance of such conflicts by standardizing the I/O locations used by different classes of ISA devices, it was still not good enough.

Eventually, they came up with a workaround. Instead of giving each ISA device all the I/O space it wants in the 10-bit range, they gave each ISA device a much smaller number of I/O locations and made up for the difference by “borrowing” them from the 16-bit I/O space! Here’s how they did it.

The ISA device would first take up a small number of I/O locations in the 10-bit range. It then extends its I/O space by using 16-bit aliases of the few 10-bit I/O locations taken up earlier. Because each I/O location in the 10-bit decode area has sixty-three 16-bit aliases, the total number of I/O locations expands from just 768 locations to a maximum of 49,152 locations!

More importantly, each ISA card now requires very few I/O locations in the 10-bit range. This drastically reduces the chances of two ISA cards conflicting with each other in the limited 10-bit I/O space. This workaround naturally became known as ISA Aliasing.
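The arithmetic behind ISA Aliasing is easy to verify. This illustrative Python sketch (our own, using the classic COM1 base address 0x3F8 as an example port) shows that a 10-bit decoder gives every I/O location sixty-three 16-bit aliases, expanding 768 locations to 49,152:

```python
# The arithmetic behind ISA Aliasing. A 10-bit decoder only looks at
# address bits [9:0], so every 16-bit I/O address that shares those
# bits is an alias of the same 10-bit location.

IO_SPACE = 1 << 16        # 64KB of 16-bit I/O space
CHUNK = 1 << 10           # 1KB chunks as seen by a 10-bit decoder

def aliases_of(port_10bit: int):
    """All 16-bit aliases of a 10-bit I/O location (excluding itself)."""
    return [port_10bit | (chunk << 10) for chunk in range(1, IO_SPACE // CHUNK)]

# Example port: 0x3F8 (classic COM1 base address, used for illustration)
print(len(aliases_of(0x3F8)))               # 63 aliases per location
print(768 * (1 + len(aliases_of(0x3F8))))   # 768 locations -> 49152
```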

Now, that’s all well and good for ISA devices. Unfortunately, the 10-bit limitation of ISA devices becomes a liability to devices that require 16-bit addressing. AGP and PCI devices come to mind. As noted earlier, only the first 256 addresses of the 1KB chunks support 16-bit addressing. What that really means is all 16-bit addressing devices are thus limited to only 256 bytes of contiguous I/O space!

When a 16-bit addressing device requires a larger contiguous I/O space, it will have to encroach on the 10-bit ISA I/O space. For example, if an AGP card requires 8KB of contiguous I/O space, it will take up eight of the 1KB I/O chunks (which will comprise eight 16-bit areas and eight 10-bit areas!). Because ISA devices are using ISA Aliasing to extend their I/O space, there is now a high chance of I/O space conflicts between ISA devices and the AGP card. When that happens, the affected cards will most probably fail to work.


There are two ways out of this mess. Obviously, you can limit the AGP card to a maximum of 256 bytes of contiguous I/O space. Of course, this is not an acceptable solution.

The second, and the preferred method, would be to throw away the restriction and provide the AGP card with all the contiguous I/O space it wants.

Here’s where the AGP ISA Aliasing BIOS feature comes in.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10] – the upper six bits. Only the lower ten bits (address bits 0 to 9) are used for decoding. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 
