Tag Archives: Computer memory

Digital Locked Loop (DLL) – The BIOS Optimization Guide

Digital Locked Loop (DLL)

Common Options : Enabled, Disabled

 

Quick Review

The Digital Locked Loop (DLL) BIOS option is a misnomer for the Delay-Locked Loop (DLL), a digital circuit that aligns the data strobe signal (DQS) with the data signal (DQ) to ensure proper data transfer in DDR, DDR2, DDR3 and DDR4 memory. However, it can be disabled to allow the memory chips to run outside a fixed frequency range.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Details

DDR, DDR2, DDR3 and DDR4 SDRAM deliver data on both the rising and falling edges of the clock signal. This requires much tighter timings, necessitating the use of a data strobe signal (DQS) generated by differential clocks. This data strobe is then aligned to the data signal (DQ) using a delay-locked loop (DLL) circuit.

The DQS and DQ signals must be aligned with minimal skew to ensure proper data transfer. Otherwise, data transferred on the DQ signal will be read incorrectly, causing the memory contents to be corrupted and the system to malfunction.

However, the delay-locked loop circuit of every DDR, DDR2, DDR3 or DDR4 chip is tuned for a certain fixed frequency range. If you run the chip outside that frequency range, the DLL circuit may not work correctly. That’s why DDR, DDR2, DDR3 and DDR4 SDRAM chips can have problems running at clock speeds slower than what they are rated for.
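To put rough numbers on that sensitivity, here is a minimal Python sketch (our own illustration, not from any datasheet) of how the per-transfer data-valid window shrinks as the memory clock rises. Because DDR memory transfers data on both clock edges, each transfer gets roughly half a clock period, before skew and jitter eat into it:

```python
# Illustrative only: per-transfer data-valid window for double data-rate
# signalling. Data is driven on both clock edges, so each transfer gets
# roughly half a clock period (before skew and jitter are subtracted).

def ddr_data_window_ns(memory_clock_mhz: float) -> float:
    """Approximate data-valid window per transfer, in nanoseconds."""
    clock_period_ns = 1000.0 / memory_clock_mhz
    return clock_period_ns / 2.0  # two transfers per clock cycle

for clock in (100, 200, 400, 800):  # DDR-200 up to DDR-1600 memory clocks
    print(f"{clock} MHz clock -> {ddr_data_window_ns(clock):.2f} ns window")
```

At a 200MHz memory clock (DDR-400), the window is only 2.5ns; at 400MHz (DDR-800), it halves to 1.25ns. The DLL has to keep DQS aligned with DQ within this shrinking window, which is why it is tuned for a fixed frequency range.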


If you encounter such a problem, it is possible to disable the DLL. Disabling the DLL will allow the chip to run outside the frequency range for which the DLL is tuned. This is where the Digital Locked Loop (DLL) BIOS feature comes in.

When enabled, the delay-locked loop (DLL) circuit will operate normally, aligning the DQS signal with the DQ signal to ensure proper data transfer. However, the memory chips should operate within the fixed frequency range supported by the DLL.

When disabled, the delay-locked loop (DLL) circuit will not align the DQS signal with the DQ signal. However, this allows you to run the memory chips beyond the fixed frequency range supported by the DLL.

Note : The Digital Locked Loop (DLL) BIOS option is a misnomer for the Delay-Locked Loop (DLL).

It is recommended that you keep this BIOS feature enabled at all times. The delay-locked loop circuit plays a key role in keeping the signals in sync to meet the tight timings required for double data-rate operations.

It should only be disabled if you absolutely must run the memory modules at clock speeds way below what they are rated for, and then only if you are unable to run the modules stably with this BIOS feature enabled. Although it is not a recommended step to take, running without an operational DLL is possible at low clock speeds due to the looser timing requirements.

It should never be disabled if you are having trouble running the memory modules at higher clock speeds. Timing requirements become stricter as the clock speed goes up. Disabling the DLL will almost certainly result in the improper operation of the memory chips.

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Chipkill – The BIOS Optimization Guide

Chipkill

Common Options : Enabled, Disabled

 

Quick Review

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you have already spent so much money on ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should only disable this BIOS feature if your system only uses a single ECC module.

 

Details

Chipkill is an enhanced ECC (Error Checking and Correcting) technology developed by IBM. Unlike standard ECC, it can only be enabled if your system has two active ECC memory channels.

Standard ECC technology makes use of the Hamming code, with eight ECC bits for every 64 bits of data. This allows it to detect all single-bit and double-bit errors, but correct only single-bit errors.

IBM’s Chipkill technology makes use of the BCH (Bose, Ray-Chaudhuri, Hocquenghem) code, with sixteen ECC bits for every 128 bits of data. It can detect all single-symbol and double-symbol errors, but correct only single-symbol errors.

A symbol, by the way, is a group of four bits. A single-symbol error is any error combination within that symbol. That means a single-symbol error can consist of anything from one to four corrupted bits. Chipkill is therefore capable of detecting and correcting more errors than standard ECC.
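As a rough illustration of the symbol concept (this is only a classification sketch of our own; it does not implement the actual BCH code), the following Python snippet groups a 128-bit word into 4-bit symbols and reports which symbols a given error pattern touches:

```python
# Classification sketch only -- this does not implement the BCH code.
# A symbol is a group of 4 bits; a single-symbol error is any error
# pattern confined to one symbol.

def corrupted_symbols(sent: int, received: int, width_bits: int = 128) -> list[int]:
    """Return the indices of the 4-bit symbols that differ between two words."""
    error_pattern = sent ^ received
    return [i for i in range(width_bits // 4)
            if (error_pattern >> (4 * i)) & 0xF]

word = 0x0123456789ABCDEF0123456789ABCDEF
one_symbol = word ^ (0b1011 << 8)            # 3 bits flipped, all in symbol 2
two_symbols = word ^ (1 << 0) ^ (1 << 127)   # 1 bit each in symbols 0 and 31

print(corrupted_symbols(word, one_symbol))   # [2] -> correctable by Chipkill
print(corrupted_symbols(word, two_symbols))  # [0, 31] -> detectable only
```

Note that the first case corrupts three bits, yet it is still a single-symbol error that Chipkill can correct, while standard ECC could only detect (not correct) it.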

Unlike standard ECC, Chipkill can only be used in systems with two channels of ECC memory (a 128-bit data width configuration). This is because it requires sixteen ECC bits, which can only be obtained using two ECC memory modules. However, it won’t work if you place both ECC modules in the same memory channel. Both memory channels must be active for Chipkill to work.


This BIOS feature controls the memory controller’s Chipkill functionality.

When enabled, the memory controller will use Chipkill to detect single-symbol and double-symbol errors, and correct single-symbol errors.

When disabled, the memory controller will not use Chipkill. Instead, it will perform standard ECC to detect single-bit and double-bit errors, and correct single-bit errors.

If you have already spent so much money on ECC memory and a motherboard that supports Chipkill, you should definitely enable this BIOS feature, because it offers a much greater level of data integrity than standard ECC.

You should only disable this BIOS feature if your system only uses a single ECC module.

 


AGP ISA Aliasing – The BIOS Optimization Guide

AGP ISA Aliasing

Common Options : Enabled, Disabled

 

Quick Review

The AGP ISA Aliasing BIOS feature allows you to determine if the system controller will perform ISA aliasing to prevent conflicts between ISA devices.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10]. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 

Details

The origin of the AGP ISA Aliasing feature can be traced back all the way to the original IBM PC. When the IBM PC was designed, it only had ten address lines (10-bits) for I/O space allocation. Therefore, the I/O space back in those days was only 1KB or 1024 bytes in size. Out of those 1024 available addresses, the first 256 addresses were reserved exclusively for the motherboard’s use, leaving the last 768 addresses for use by add-in devices. This would become a critical factor later on.

Later, motherboards began to utilize 16 address lines for I/O space allocation. This was supposed to create a contiguous I/O space of 64KB in size. Unfortunately, many ISA devices by then were only capable of doing 10-bit decodes. This was because they were designed for computers based on the original IBM design which only supported 10 address lines.

To circumvent this problem, designers fragmented the 64KB I/O space into 1KB chunks. Unfortunately, because the first 256 addresses must be reserved exclusively for the motherboard, only the first (or lower) 256 bytes of each 1KB chunk would be decoded in full 16 bits. All 10-bit-decoding ISA devices are, therefore, restricted to the last (or top) 768 bytes of each 1KB chunk of I/O space.

As a result, such ISA devices only have 768 I/O locations to use. Because there were so many ISA devices back then, this limitation created a lot of compatibility problems because the chances of two ISA cards using the same I/O space were high. When that happened, one or both of the cards would not work. Although they tried to reduce the chance of such conflicts by standardizing the I/O locations used by different classes of ISA devices, it was still not good enough.

Eventually, they came up with a workaround. Instead of giving each ISA device all the I/O space it wants in the 10-bit range, they gave each ISA device a much smaller number of I/O locations and made up for the difference by “borrowing” them from the 16-bit I/O space! Here’s how they did it.

The ISA device would first take up a small number of I/O locations in the 10-bit range. It then extends its I/O space by using 16-bit aliases of the few 10-bit I/O locations taken up earlier. Because each I/O location in the 10-bit decode area has sixty-three 16-bit aliases, the total number of I/O locations expands from just 768 locations to a maximum of 49,152 locations!

More importantly, each ISA card would now require only a few I/O locations in the 10-bit range. This drastically reduced the chances of two ISA cards conflicting with each other in the limited 10-bit I/O space. This workaround naturally became known as ISA Aliasing.
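The arithmetic behind that expansion can be sketched in Python (our own illustration; 0x3F8, the classic COM1 base address, is just a familiar example):

```python
# Illustration of ISA aliasing: a 10-bit decoder compares only address
# bits [9:0], so every 16-bit address that shares those low 10 bits
# (differing only in bits [15:10]) selects the same ISA location.

def aliases_16bit(io_addr: int) -> list[int]:
    """All 16-bit I/O addresses that a 10-bit decoder treats as io_addr."""
    low_bits = io_addr & 0x3FF      # bits [9:0], the only part decoded
    return [low_bits | (block << 10) for block in range(64)]

addrs = aliases_16bit(0x3F8)        # e.g. the classic COM1 base address
print(len(addrs))                   # 64: the original address plus 63 aliases
print(768 * 64)                     # 49152: every usable 10-bit location, times 64
```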

Now, that’s all well and good for ISA devices. Unfortunately, the 10-bit limitation of ISA devices becomes a liability to devices that require 16-bit addressing. AGP and PCI devices come to mind. As noted earlier, only the first 256 addresses of the 1KB chunks support 16-bit addressing. What that really means is all 16-bit addressing devices are thus limited to only 256 bytes of contiguous I/O space!

When a 16-bit addressing device requires a larger contiguous I/O space, it will have to encroach on the 10-bit ISA I/O space. For example, if an AGP card requires 8KB of contiguous I/O space, it will take up eight of the 1KB I/O chunks (comprising eight 16-bit areas and eight 10-bit areas!). Because ISA devices use ISA Aliasing to extend their I/O space, there is now a high chance of I/O space conflicts between the ISA devices and the AGP card. When that happens, the affected cards will most probably fail to work.


There are two ways out of this mess. Obviously, you can limit the AGP card to a maximum of 256 bytes of contiguous I/O space. Of course, this is not an acceptable solution.

The second, and the preferred method, would be to throw away the restriction and provide the AGP card with all the contiguous I/O space it wants.

Here’s where the AGP ISA Aliasing BIOS feature comes in.

The default setting of Enabled forces the system controller to alias ISA addresses using address bits [15:10] – the upper six bits. Only the first 10 bits (address bits 0 to 9) are used for decoding. This restricts all 16-bit addressing devices to a maximum contiguous I/O space of 256 bytes.

When disabled, the system controller will not perform any ISA aliasing and all 16 address lines can be used for I/O address space decoding. This gives 16-bit addressing devices access to the full 64KB I/O space.

It is recommended that you disable AGP ISA Aliasing for optimal AGP (and PCI) performance. It will also prevent your AGP or PCI cards from conflicting with your ISA cards. Enable it only if you have ISA devices that are conflicting with each other.

 


Dynamic Idle Cycle Counter – The BIOS Optimization Guide

Dynamic Idle Cycle Counter

Common Options : Enabled, Disabled

 

Quick Review

The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.

 

Details

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.
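The three outcomes described above can be modeled with a toy open-page tracker in Python (the bank/row bookkeeping is our own simplification, not any real controller's logic):

```python
# Toy open-page tracker. Each bank can hold one open page (row) in its
# buffer; an access is classified by comparing the requested row to it.

open_rows: dict[int, int] = {}   # bank -> currently open row (the open page)

def access(bank: int, row: int) -> str:
    if bank not in open_rows:
        open_rows[bank] = row    # no open page: activate the row directly
        return "page miss"
    if open_rows[bank] == row:
        return "page hit"        # requested row is already in the buffer
    open_rows[bank] = row        # open page must be closed first: slowest case
    return "page conflict"

print(access(0, 5))   # page miss     (bank 0 had no open page)
print(access(0, 5))   # page hit      (row 5 is still open)
print(access(0, 9))   # page conflict (row 5 must close before row 9 opens)
```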

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind DRAM Idle Timer.

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Idle Cycle Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by DRAM Idle Timer and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the page requested is likely to be the one opened earlier. Keeping that page opened longer could have converted the page miss into a page hit. Therefore, it will increase the idle cycle limit to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.
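The adjustment rule can be sketched as follows. Note that the step size and bounds here are our own assumptions; the actual values used by the memory controller are not documented:

```python
# Minimal sketch of dynamic page conflict prediction: raise the idle
# cycle limit on a page miss, lower it on a page conflict. Step size
# and bounds are illustrative assumptions.

def adjust_idle_limit(limit: int, event: str, step: int = 1,
                      lo: int = 0, hi: int = 64) -> int:
    if event == "page miss":       # page kept open longer might have hit
        return min(hi, limit + step)
    if event == "page conflict":   # page closed sooner would only have missed
        return max(lo, limit - step)
    return limit                   # page hits leave the limit unchanged

limit = 16                         # starting point set by DRAM Idle Timer
for event in ("page miss", "page miss", "page conflict", "page hit"):
    limit = adjust_idle_limit(limit, event)
print(limit)                       # 17
```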

When disabled, the memory controller will just use the idle cycle limit set by DRAM Idle Timer. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike DRAM Idle Timer, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable Dynamic Idle Cycle Counter for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Idle Cycle Counter and set DRAM Idle Timer to 0T.


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Fast R-W Turn Around – BIOS Optimization Guide

Fast R-W Turn Around

Common Options : Enabled, Disabled

 

Quick Review

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

As its name suggests, this BIOS feature allows you to skip that delay. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.

 

Details

When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated.

Please note that this extra delay is only introduced when there is a switch from reads to writes. Switching from writes to reads will not suffer from such a delay.

As its name suggests, this BIOS feature allows you to skip that delay so that the memory controller can switch or “turn around” from reads to writes faster than normal. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds.
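As a rough model of what this feature does (the cycle counts are illustrative assumptions, not figures from any datasheet), the Python sketch below charges an extra penalty only when a write directly follows a read:

```python
# Rough model of the read-to-write turn-around penalty: an extra delay
# is charged only on R->W transitions, never on W->R transitions.

def total_cycles(commands: list[str], cmd_cycles: int = 4,
                 rw_turnaround: int = 2) -> int:
    """Cycles to issue a command sequence, charging a penalty on R->W turns."""
    cycles, prev = 0, None
    for cmd in commands:
        if prev == "R" and cmd == "W":
            cycles += rw_turnaround   # the delay this BIOS feature skips
        cycles += cmd_cycles
        prev = cmd                    # note: W->R incurs no penalty
    return cycles

seq = ["R", "W", "W", "R"]
print(total_cycles(seq))                    # 18 with the turn-around delay
print(total_cycles(seq, rw_turnaround=0))   # 16 with Fast R-W Turn Around
```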

However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable this feature to correct the problem.


 


DRAM Read Latch Delay – BIOS Optimization Guide

DRAM Read Latch Delay

Common Options : Enabled, Disabled

Quick Review

This BIOS feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. Start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. If your system becomes unstable after using the No Delay option, simply revert to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 

Details

This feature is similar to the Delay DRAM Read Latch BIOS feature. It fine-tunes the DRAM timing parameters to adjust for different DRAM loadings.

The DRAM load changes with the number as well as the type of memory modules installed. DRAM loading increases as the number of memory modules increases. It also increases if you use double-sided modules instead of single-sided ones. In short, the more DRAM devices you use, the greater the DRAM loading. As such, a lone single-sided memory module provides the lowest DRAM load possible.

With heavier DRAM loads, you may need to delay the moment when the memory controller latches onto the DRAM device during reads. Otherwise, the memory controller may fail to latch properly onto the desired DRAM device and read from it.

The Auto option allows the BIOS to select the optimal amount of delay from values preset by the manufacturer.

The No Delay option forces the memory controller to latch onto the DRAM device without delay, even if the BIOS presets indicate that a delay is required.

The three timing options (0.5ns, 1.0ns and 1.5ns) give you manual control of the read latch delay.

Normally, you should let the BIOS select the optimal amount of delay from values preset by the manufacturer (using the Auto option). But if you notice that your system has become unstable upon installation of additional memory modules, you should try setting the DRAM read latch delay yourself.

The longer the delay, the poorer the read performance of your memory modules. However, the stability of your memory modules won’t increase together with the length of the delay. Remember, the purpose of the feature is only to ensure that the memory controller will be able to latch onto the DRAM device with all sorts of DRAM loadings.


The amount of delay should just be enough to allow the memory controller to latch onto the DRAM device in your particular situation. Don’t unnecessarily increase the delay. It isn’t going to increase stability. In fact, it may just make things worse! So, start with 0.5ns and work your way up until your system stabilizes.

If you have a light DRAM load, you can ensure optimal performance by manually using the No Delay option. This forces the memory controller to latch onto the DRAM devices without delay, even if the BIOS presets indicate that a delay is required. Naturally, this can potentially cause stability problems if you actually have a heavy DRAM load. Therefore, if your system becomes unstable after using the No Delay option, simply revert to the default value of Auto so that the BIOS can adjust the read latch delay to suit the DRAM load.

 


Apacer BLADE FIRE DDR4 Launched

Apacer is pleased to announce its latest achievement in DDR memory, the BLADE FIRE DDR4 with a heartbeat LED, designed for gaming, overclocking and modding enthusiasts. It runs at speeds of up to 3200MHz, with its LED pulsing at 44 beats per minute, while consuming a low 1.35V.

It is compatible with Intel Z170 platforms, providing unprecedented performance and energy-saving efficiency. Backed by Apacer’s world-class expertise in industrial memory modules and storage, the BLADE FIRE DDR4 sets its users on fire in the gaming and overclocking arena.

 

Apacer BLADE FIRE DDR4

Respectable capacity of 32GB, frequency up to 3200MHz – High stability & compatibility

The world-class BLADE FIRE DDR4 is the next generation of the Blade DDR4, which was launched in February 2015. It features a sensational armory design on its heat spreader, as well as a heartbeat LED lighting effect.

Its meticulously screened ICs ensure optimal stability and compatibility, even under heavy gaming workloads. BLADE FIRE is available in 4GB, 8GB and 16GB capacities in dual-module packages, up to a total capacity of 32GB (16GB x 2).

The fastest kit is clocked at 3200MHz, with BLADE FIRE kits available in four clock speeds, ranging from 2400MHz at 1.2V to 3200MHz at 1.35V with 16-16-18-38 timings. XMP 2.0 support allows for a simple overclocking setup, delivering instant top-level performance on motherboards with Intel 100 Series chipsets.

It does not just offer low latency for outstanding DDR4 performance; its lower power consumption also means less heat and higher reliability, providing users with the fastest speeds and the highest stability for gaming and overclocking.

LED heartbeat, Light saber on fire – Aggressive Look with stylish, asymmetrical armory design

The heartbeat LED on the top edge of the module undoubtedly brings out the spirit of BLADE FIRE, with every beat embodying rising HP (health points), to win and shine powerfully on the battlefield. Four modules together on a motherboard display varied LED light patterns, just like battling sabers on fire in the battleground. The design truly adds some serious bling for users to show off, while meeting their needs for exceptional functionality.

As for the design of BLADE FIRE, the black heat spreader is made from quality aluminum with a matte finish. The metallic silver saber in the middle gives it the aggressive look of a gaming memory module, while the serrated part of the saber on the top edge and the shank at the rear create an asymmetrical design of extreme aesthetics.


With distinguished speed and stability as well as a stylish design of LED light saber on motherboard, BLADE FIRE allows users to experience the most enjoyable and exciting game play ever. If you’re looking for the fastest and coolest memory module available, BLADE FIRE is the one and only you have to look at. Feel the supremacy and prominence of Apacer BLADE FIRE!

 


SDRAM Burst Len – BIOS Optimization Guide

SDRAM Burst Len

Common Options : 4, 8

 

Quick Review

This BIOS feature allows you to control the length of a burst transaction.

When this feature is set to 4, a burst transaction can comprise up to four reads or four writes.

When this feature is set to 8, a burst transaction can comprise up to eight reads or eight writes.

As the initial CAS latency is fixed for each burst transaction, a longer burst transaction will allow more data to be read or written with less delay than a shorter burst transaction. Therefore, a burst length of 8 will be faster than a burst length of 4.

Therefore, it is recommended that you select the longer burst length of 8 for better performance.

 

Details

This is the same as the SDRAM Burst Length BIOS feature, only with a weirdly truncated name. Surprisingly, many manufacturers are using it. Why? Only they know. 🙂

Burst transactions improve SDRAM performance by allowing the reading or writing of whole ‘blocks’ of contiguous data with only one column address.

In a burst sequence, only the first read or write transfer incurs the initial latency of activating the column. The subsequent reads or writes in that burst sequence can then follow behind without any further delay. This allows blocks of data to be read or written with far less delay than non-burst transactions.

For example, a burst transaction of four writes can incur the following latencies : 4-1-1-1. In this example, the total time it takes to transact the four writes is merely 7 clock cycles.

In contrast, if the four writes are not written by burst transaction, they will incur the following latencies : 4-4-4-4. The time it takes to transact the four writes becomes 16 clock cycles, which is 9 clock cycles longer or more than twice as slow as a burst transaction.

This is where the SDRAM Burst Len BIOS feature comes in. It is a BIOS feature that allows you to control the length of a burst transaction.

When this feature is set to 4, a burst transaction can comprise up to four reads or four writes.

When this feature is set to 8, a burst transaction can only comprise of up to eight reads or eight writes.

As the initial CAS latency is fixed for each burst transaction, a longer burst transaction will allow more data to be read or written for less delay than a shorter burst transaction. Therefore, a burst length of 8 will be faster than a burst length of 4.


For example, if the memory controller wants to write a block of contiguous data eight units long to memory, it can do it as a single burst transaction 8 units long or two burst transactions, each 4 units in length. The hypothetical latencies incurred by the single 8-unit long transaction would be 4-1-1-1-1-1-1-1 with a total time of 11 clock cycles for the entire transaction.

But if the eight writes are written to memory as two burst transactions of 4 units in length, the hypothetical latencies incurred would be 4-1-1-1-4-1-1-1. The time taken for the two transactions to complete would be 14 clock cycles. As you can see, this is slower than a single transaction, 8 units long.
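A small Python sketch makes this comparison easy to verify. The initial latency of 4 cycles is the same hypothetical figure used in the example, not a real DRAM timing:

```python
def burst_cycles(initial_latency, burst_length):
    """Cycles for one burst transaction: the first transfer pays the
    initial latency, and each subsequent transfer takes one cycle."""
    return initial_latency + (burst_length - 1)

# Writing 8 units of contiguous data, assuming an initial latency of 4 cycles:
one_burst_of_8 = burst_cycles(4, 8)       # latencies 4-1-1-1-1-1-1-1
two_bursts_of_4 = 2 * burst_cycles(4, 4)  # latencies 4-1-1-1-4-1-1-1
print(one_burst_of_8, two_bursts_of_4)    # 11 vs 14 clock cycles
```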

Therefore, it is recommended that you select the longer burst length of 8 for better performance.

 

Support Tech ARP!

If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Team Group Delta Luminous Memory Series Launched

Team Group Inc. today announced the launch of Delta, its all-new generation of luminous gaming memory. Stability and performance are already basic requirements for high-speed memory. With the ever-growing community of gamers and modders, the appearance and visual features of a product have become the new focus of gamers’ attention.

The all-new Delta series from Team Group inherits the excellence of Team Group’s overclocking and mainstream memory lines. In addition, through a recent collaboration with AVEXIR, we are able to combine AVEXIR’s patented LED lighting technology with our forged aluminum high-efficiency heat spreader to create Delta, a bionic memory with a pulse-like LED light effect.

Delta is Team Group’s first luminous memory series. It uses an exclusive luminous LED cooling system to provide a steady pulse rhythm and soothe the gamer’s tension during an intense game, so the gamer can play in optimum form.

To satisfy gamers’ demand for high-performance memory, the Delta series launches in two high-specification models: DDR4-2400 CL15-15-15-35 and DDR4-3000 CL16-16-16-36. It is also offered in two kit options, 4GB x 2 and 8GB x 2, for gamers to choose from. With the computer modding trend on the rise around the globe, Delta is not only the gamer’s best companion; its red and black heat spreader design, with a breathing LED light in three colors of red, white and blue, also gives modders more options to build various styles of PC.


Team Group’s memory module products have persistently focused on offering the best performance experience. We design the Xtreem, Delta and Elite/Elite Plus series specifically for overclocking, gaming/modding and mainstream users respectively. Moreover, in the future we will create products targeting different groups of users, with different styles, performance levels and features to meet the needs of more users.

As a leading provider of memory storage products and mobile applications to the consumer market, Team Group is committed to providing the best storage, multimedia and data sharing solutions. All Team memory module products come with a lifetime warranty, repair and replacement services.

 


Dynamic Counter – BIOS Optimization Guide

Dynamic Counter

Common Options : Enabled, Disabled

 

Quick Review of Dynamic Counter

The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism. This mechanism dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio, improving memory performance.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss, to increase the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict, to reduce the probability of a future page conflict.

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 

Details of Dynamic Counter

DRAM chips are internally divided into memory banks, with each bank made up of an array of memory bits arranged in rows and columns. You can think of the array as an Excel page, with many cells arranged in rows and columns, each capable of storing a single bit of data.

When the memory controller wants to access data within the DRAM chip, it first activates the relevant bank and row. All memory bits within the activated row, also known as a page, are loaded into a buffer. The page that is loaded into the buffer is known as an open page. Data can then be read from the open page by activating the relevant columns.

The open page can be kept in the buffer for a certain amount of time before it has to be closed for the bank to be precharged. While it is opened, any subsequent data requests to the open page can be performed without delay. Such data accesses are known as page hits. Needless to say, page hits are desirable because they allow data to be accessed quickly.

However, keeping the page open is a double-edged sword. A page conflict can occur if there is a request for data on an inactive row. As there is already an open page, that page must first be closed and only then can the correct page be opened. This is worse than a page miss, which occurs when there is a request for data on an inactive row and the bank does not have any open page. The correct row can immediately be activated because there is no open page to close.

Therefore, the key to maximizing performance lies in achieving as many page hits as possible with the least number of page conflicts and page misses. One way of doing so is by implementing a counter to keep track of the number of idle cycles and closing open pages after a predetermined number of idle cycles. This is the basis behind Idle Cycle Limit.
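To make the three cases concrete, here is a minimal Python sketch of how an access can be classified against a bank's currently open page. The function and its string labels are purely illustrative, not how any real memory controller is implemented:

```python
def classify_access(open_page, requested_page):
    """Classify a memory access against the bank's currently open page.
    open_page is None when the bank has no open page."""
    if open_page is None:
        return "page miss"      # no page to close; activate the row immediately
    if open_page == requested_page:
        return "page hit"       # served straight from the open page, no delay
    return "page conflict"      # must close the open page, then open the right one

print(classify_access(None, 5))  # page miss
print(classify_access(5, 5))     # page hit
print(classify_access(5, 9))     # page conflict
```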

To further improve the page hit-miss ratio, AMD developed dynamic page conflict prediction. Instead of closing open pages after a predetermined number of idle cycles, the memory controller can keep track of the number of page misses and page conflicts. It then dynamically adjusts the idle cycle limit to achieve a better page hit-miss ratio.


The Dynamic Counter BIOS feature controls the memory controller’s dynamic page conflict prediction mechanism.

When enabled, the memory controller will begin with the idle cycle limit set by Idle Cycle Limit and use its dynamic page conflict prediction mechanism to adjust the limit upwards or downwards according to the number of page misses and page conflicts.

It will increase the idle cycle limit when there is a page miss. This is based on the presumption that the requested page is likely to be the one that was opened earlier. Keeping that page open longer could have converted the page miss into a page hit. Therefore, the idle cycle limit will be increased to raise the probability of a future page hit.

It will decrease the idle cycle limit when there is a page conflict. Closing that page earlier would have converted the page conflict into a page miss. Therefore, the idle cycle limit will be decreased to reduce the probability of a future page conflict.
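The adjustment rule described above can be sketched as follows. This is only an illustration of the feedback direction; the step size and the bounds are made-up values, and the real mechanism inside AMD's memory controller is not publicly documented at this level:

```python
def adjust_idle_cycle_limit(limit, event, step=1, lo=0, hi=256):
    """Illustrative sketch: nudge the idle cycle limit the way the text
    describes. step, lo and hi are assumed values, not hardware constants."""
    if event == "page miss":
        limit += step  # keep pages open longer -> better chance of a page hit
    elif event == "page conflict":
        limit -= step  # close pages sooner -> lower chance of a page conflict
    return max(lo, min(hi, limit))

print(adjust_idle_cycle_limit(16, "page miss"))      # 17
print(adjust_idle_cycle_limit(16, "page conflict"))  # 15
```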

When disabled, the memory controller will just use the idle cycle limit set by Idle Cycle Limit. It will not use its dynamic page conflict prediction mechanism to adjust the limit.

Unlike Idle Cycle Limit, the dynamic page conflict mechanism takes the guesswork out of the equation. So, it is recommended that you enable this BIOS feature for better memory performance, irrespective of whether you are configuring a desktop or server.

However, there might be some server users who prefer to force the memory controller to close all open pages whenever there is an idle cycle, to ensure sufficient refreshing of the memory cells. Although it might seem unnecessary, even extreme, to some, server administrators might prefer to err on the side of caution. If so, you should disable Dynamic Counter and set Idle Cycle Limit to 0T.

 


32 Byte Granularity – BIOS Optimization Guide

32 Byte Granularity

Common Options : Auto, Enabled, Disabled

 

Quick Review

The 32 Byte Granularity BIOS option determines the burst length of the DRAM controller.

When set to Enabled, the DRAM controller will read or write in bursts of 32 bytes in length.

When set to Disabled, the DRAM controller will read or write in bursts of 64 bytes in length.

When set to Auto, the DRAM controller will use a burst length of 64 bytes if the DRAM interface is 128 bits wide (dual-channel), and a burst length of 32 bytes if the DRAM interface is 64 bits wide (single-channel).

If you are using a discrete graphics card with dedicated graphics memory, you should disable this BIOS option for optimal performance. This is true whether your system is running on dual-channel memory, or single-channel memory.

It is not recommended that you leave the BIOS option at the default setting of Auto when your system is running on a single memory channel. Doing so will cause the DRAM controller to default to a burst length of 32 bytes when a burst length of 64 bytes would be faster.

If you are using your motherboard’s onboard graphics chip which shares system memory, you should enable this BIOS option, but only if your system is running on a single memory channel. If it’s running on dual-channel memory, then you must disable this BIOS option.

 

Details

The 32 Byte Granularity BIOS option determines the burst length of the DRAM controller.

When set to Enabled, the DRAM controller will read or write in bursts of 32 bytes in length.

When set to Disabled, the DRAM controller will read or write in bursts of 64 bytes in length.

When set to Auto, the DRAM controller will use a burst length of 64 bytes if the DRAM interface is 128 bits wide (dual-channel), and a burst length of 32 bytes if the DRAM interface is 64 bits wide (single-channel).

Generally, the larger burst length of 64 bytes is faster. However, a 32-byte burst length is better if the system uses an onboard graphics chip that uses system memory as framebuffer and texture memory. This is because the graphics chip generates many 32-byte accesses to system memory.

Keeping that in mind, the 32-byte burst length is only supported if the DRAM interface is 64 bits wide (single-channel). If the DRAM interface is 128 bits wide, the burst length must be 64 bytes.
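The Auto rule above reduces to a one-line decision, sketched here in Python (the function name is only for illustration):

```python
def auto_burst_length(dram_bus_width_bits):
    """Mirror the Auto rule described above: a 128-bit (dual-channel) DRAM
    interface must use 64-byte bursts, while a 64-bit (single-channel)
    interface defaults to 32-byte bursts."""
    return 64 if dram_bus_width_bits == 128 else 32

print(auto_burst_length(128))  # 64
print(auto_burst_length(64))   # 32
```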


If you are using a discrete graphics card with dedicated graphics memory, you should disable this BIOS option for optimal performance. This is true whether your system is running on dual-channel memory, or single-channel memory.

It is not recommended that you leave the BIOS option at the default setting of Auto when your system is running on a single memory channel. Doing so will cause the DRAM controller to default to a burst length of 32 bytes when a burst length of 64 bytes would be faster.

If you are using your motherboard’s onboard graphics chip which shares system memory, you should enable this BIOS option, but only if your system is running on a single memory channel. If it’s running on dual-channel memory, then you must disable this BIOS option.


 
