Tag Archives: PCIe

AMD Smart Access Memory (Resizable BAR) Guide

Find out what AMD Smart Access Memory is all about, and how to turn it on for a FREE BOOST in performance!

 

Smart Access Memory : PCIe Resizable BAR for AMD!

Smart Access Memory is AMD’s marketing term for their implementation of the PCI Express Resizable BAR (Base Address Registers) capability.

What does that mean exactly?

CPUs are traditionally limited to a 256 MB I/O memory address region for the GPU frame buffer. Think of it as a “data dump” for stuff like textures, shaders and geometry.

Since this “data dump” is limited to 256 MB, the CPU can only send texture, shader and geometry data as and when the GPU requires them.

This introduces some latency – a delay between when the GPU requires the data, and when the CPU sends it.

Turning on Resizable BAR or Smart Access Memory greatly expands the size of that data dump, letting the CPU directly access the GPU’s entire frame buffer memory.

Instead of transferring data when requested by the GPU, the CPU processes and stores the data directly in the graphics memory.

Graphics assets can be transferred to graphics memory in full, instead of in pieces. In addition, multiple transfers can occur simultaneously, instead of being queued up.

While this AMD graphic above suggests that Smart Access Memory will widen the memory path (and thus memory bandwidth) between the CPU and GPU, that is not true.

Smart Access Memory / Resizable BAR will not increase memory bandwidth.

What it does is let the CPU directly access the entire GPU frame buffer memory, instead of using the usual 256 MB “dump”. That reduces latency because the graphics assets are now accessible by the GPU at all times.
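
On Linux, you can see this for yourself in “lspci -vv” output: the GPU’s prefetchable memory region shows “[size=256M]” by default, and the full frame buffer size once Resizable BAR is active. Here is a minimal sketch that parses such a line – the sample excerpts, addresses and sizes below are hypothetical examples, not output from any specific card :

```python
import re

# Illustrative lspci -vv excerpts (hypothetical values) for a GPU BAR,
# before and after enabling Resizable BAR / Smart Access Memory.
BEFORE = "Region 0: Memory at e0000000 (64-bit, prefetchable) [size=256M]"
AFTER = "Region 0: Memory at 7c00000000 (64-bit, prefetchable) [size=16G]"

def bar_size_mb(lspci_line):
    """Extract a BAR size in MB from an lspci region line, or None."""
    match = re.search(r"\[size=(\d+)([MG])\]", lspci_line)
    if not match:
        return None
    size, unit = int(match.group(1)), match.group(2)
    return size * 1024 if unit == "G" else size

print(bar_size_mb(BEFORE))  # 256
print(bar_size_mb(AFTER))   # 16384
```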

 

AMD Smart Access Memory : Performance Gains

According to AMD, enabling Smart Access Memory will give you a small but free boost of 5% to 11% in gaming performance.

Here is a summary of the test results from our article, RX 6800 XT Smart Access Memory Performance Comparison!

You can expect up to 16% better performance in some games, but no effect in others. Overall, though, you get a free boost in performance. There is simply no reason not to enable Smart Access Memory.

1080p Resolution (1920 x 1080)

1440p Resolution (2560 x 1440)

2160p Resolution (3840 x 2160)

 

AMD Smart Access Memory : Requirements

Since Smart Access Memory is just an AMD implementation of PCI Express Resizable BAR, it can technically be implemented on all PCI Express 3.0 and PCI Express 4.0 graphics cards and motherboards.

However, AMD is currently limiting it to a small subset of components, having validated it only for their new Ryzen 5000 series CPUs, select Ryzen 3000 Series Processors and Radeon RX 6000 series graphics cards.

So this is what you currently require to enable AMD Smart Access Memory :

Hardware

Software

  • AMD Radeon Software Driver 20.11.2 or newer
  • Latest Motherboard BIOS (AMD AGESA 1.1.0.0 or newer)

AMD currently recommends these X570 motherboards, because they have updated BIOS available :

 

AMD Smart Access Memory : How To Enable It?

If you have all of the supported components above, and have updated your motherboard BIOS, you will still need to manually enable Smart Access Memory.

Now, the method will vary from motherboard to motherboard, and it probably won’t even be called Smart Access Memory.

Instead, look for variations of Above 4G Decoding, or Resizing BAR, or Resizable BAR, or Re-Size BAR Support.

AMD Generic Method

AMD has provided these generic steps to enable Smart Access Memory :

  1. Enter the System BIOS by pressing <DEL> or <F12> during system startup.
  2. Navigate to the Advanced Settings or Advanced menu.
  3. Enable “Above 4G Decoding” and “Re-Size BAR Support“.
  4. Save the changes and restart the computer.

Step-by-Step Method For ASUS Crosshair VIII Hero

In our guide, we are using the ASUS CROSSHAIR VIII Hero (AMD X570) motherboard, as an example :

  1. First, you will need to turn off CSM (Compatibility Support Module), or make sure it is disabled. Go to the Boot menu and look for a CSM / Compatibility Support Module option.

  2. Set CSM (Compatibility Support Module) to Disabled.

  3. Go to the Advanced menu and look for the PCI Subsystem. On other motherboards, look for PCIe / PCI Express configuration options.

  4. Enable Above 4G Decoding.

  5. This will give you access to the Re-Size BAR Support option. Set it to Auto.

  6. Now go to the Exit menu, and select Save Changes & Reset.

  7. It will ask you to confirm the changes. Just verify both settings, and click OK.

After the motherboard reboots, AMD Smart Access Memory (PCIe Resizable BAR) will be enabled for your Ryzen 5000 series CPU and Radeon RX 6000 series graphics card!
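
If you want to verify the change on a Linux system, each PCI device exposes a “resource” file in sysfs that lists the start address, end address and flags of every BAR in hexadecimal. This sketch computes a BAR’s size from one such line – the sample values are hypothetical :

```python
# Each line of a PCI device's sysfs "resource" file holds the start
# address, end address and flags as hex values; the BAR size is
# simply (end - start + 1). The sample line below is hypothetical.
SAMPLE_LINE = "0x0000007c00000000 0x0000007fffffffff 0x000000000014220c"

def bar_size_bytes(resource_line):
    """Compute a BAR's size in bytes from one sysfs resource line."""
    start, end, _flags = (int(field, 16) for field in resource_line.split())
    return end - start + 1

print(bar_size_bytes(SAMPLE_LINE) // (1024 ** 3))  # 16 (GB)
```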

 

CSM Warning For GIGABYTE AORUS X570 Master

CSM is disabled by default for the ASUS, ASRock and MSI motherboards. However, it is enabled by default on the GIGABYTE AORUS X570 Master.

If you installed Windows without first turning CSM off, it will have been installed in legacy (non-UEFI) mode. It will NOT boot if you enable Resizable BAR Support (Smart Access Memory).

You will need to reinstall Windows with CSM support disabled.

 


 

Support Tech ARP!

If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Smart Access Memory Now Enabled For Ryzen 3000 CPUs!

AMD just enabled Smart Access Memory for select Ryzen 3000 desktop processors!

Find out what Smart Access Memory does, and how to enable it for a FREE boost in performance!

 

Smart Access Memory Now Enabled For Ryzen 3000 CPUs!

When AMD launched the Radeon RX 6700 XT graphics card, they also mentioned that Smart Access Memory is now enabled for Ryzen 3000 desktop processors, except :

  • AMD Ryzen 5 3400G
  • AMD Ryzen 3 3200G

This would give those older processors a small but FREE boost in performance, when paired with Radeon RX 6000 series graphics cards and AMD 500 series motherboards.

To enable Smart Access Memory for your Ryzen 3000 / 5000 series PC, please follow the steps in our guide!

Unfortunately, AMD has not enabled Smart Access Memory for Radeon RX 5000 series graphics cards, or AMD 400 series motherboards yet.

Recommended : AMD Smart Access Memory (Resizable BAR) Guide

 

Smart Access Memory : How Does It Boost Ryzen 3000 Performance?

Smart Access Memory is AMD’s marketing term for their implementation of the PCI Express Resizable BAR (Base Address Registers) capability.

What does that mean exactly?

CPUs are traditionally limited to a 256 MB I/O memory address region for the GPU frame buffer. Think of it as a “data dump” for stuff like textures, shaders and geometry.

Since this “data dump” is limited to 256 MB, the CPU can only send texture, shader and geometry data as and when the GPU requires them.

This introduces some latency – a delay between when the GPU requires the data, and when the CPU sends it.

Turning on Resizable BAR or Smart Access Memory greatly expands the size of that data dump, letting the CPU directly access the GPU’s entire frame buffer memory.

Instead of transferring data when requested by the GPU, the CPU processes and stores the data directly in the graphics memory.

Graphics assets can be transferred to graphics memory in full, instead of in pieces. In addition, multiple transfers can occur simultaneously, instead of being queued up.

While this AMD graphic above suggests that Smart Access Memory will widen the memory path (and thus memory bandwidth) between the CPU and GPU, that is not true.

Smart Access Memory / Resizable BAR will not increase memory bandwidth.

What it does is let the CPU directly access the entire GPU frame buffer memory, instead of using the usual 256 MB “dump”. That reduces latency because the graphics assets are now accessible by the GPU at all times.

 

Smart Access Memory For Ryzen 3000 : Requirements

This is what you currently require to enable AMD Smart Access Memory for Ryzen 3000 desktop processors :

Hardware

Software

  • AMD Radeon Software Driver 20.11.2 or newer
  • Latest Motherboard BIOS (AMD AGESA 1.1.0.0 or newer)

AMD currently recommends these X570 motherboards, because they have updated BIOS available :

 


 



RX 6800 XT Smart Access Memory Performance Comparison!

Find out what AMD Smart Access Memory is all about, and how much of a performance effect it really has on the Radeon RX 6800 XT graphics card!

 

RX 6800 XT Smart Access Memory : How Does It Improve Performance?

Smart Access Memory is really a marketing term for AMD’s implementation of the PCI Express Resizable BAR (Base Address Registers) capability.

CPUs are traditionally limited to a 256 MB I/O memory address “window” for the GPU frame buffer.

Turning on Resizable BAR or Smart Access Memory removes that small access window, letting the CPU directly access the Radeon RX 6800 XT‘s graphics memory.

While the AMD graphics above suggest that Smart Access Memory will widen the memory path, and thus memory bandwidth, between the CPU and GPU, that’s not true.

It does not increase memory bandwidth. Instead, it speeds up CPU-to-GPU communications, by letting the CPU directly access more of the GPU’s memory, instead of using the usual 256 MB “window”.

Recommended : AMD Smart Access Memory – How To Enable It?

 

RX 6800 XT Smart Access Memory : 3DMark

The 3DMark benchmark results don’t show any significant performance difference, with Smart Access Memory enabled.

 

RX 6800 XT Smart Access Memory : Game Performance Summary

But let’s look at its effect on the real world gaming performance…

Let’s start with a bird’s eye look at the performance effect of Smart Access Memory on the Radeon RX 6800 XT‘s performance.

For a more detailed look at Smart Access Memory’s effect on each game, please click over to the next page.

1080p Resolution (1920 x 1080)

At 1080p, Smart Access Memory improved frame rates by about 4.33% on average, but it did not always give the Radeon RX 6800 XT a performance boost.

It had virtually no performance effect in World War Z, The Division 2 and Star Control : Origins.

On the other hand, it delivered up to 16% better frame rates in Total War : Troy.

1440p Resolution (2560 x 1440)

Smart Access Memory had a bigger (5.22% average) effect on the Radeon RX 6800 XT at 1440p.

It had no effect in four games – Metro Exodus, World War Z, The Division 2 and Star Control : Origins.

But it delivered a large 10% to 11% performance boost in F1 2019, Total War : Troy, Dirt 5 and Gears Tactics.

2160p Resolution (3840 x 2160)

At the 4K resolution though, the average performance boost from Smart Access Memory dropped to just 3.11%.

Most of the games had insignificant boosts in frame rates of 2-3%. Oddly enough, World War Z – which saw virtually no benefit at lower resolutions – received a larger 4% boost in frame rate at 4K.

F1 2019 received the biggest boost from Smart Access Memory – a large 14% boost in frame rate!

Next Page > RX 6800 XT Smart Access Memory Game Performance

 



RX 6800 XT Smart Access Memory : Gaming Performance

F1 2019

F1 2019 really benefited from Smart Access Memory, with significant boosts in frame rates :

  • 1080p : +6.0%
  • 1440p : +10.8%
  • 2160p : +14.0%

Metro Exodus

On the other hand, Smart Access Memory had no effect on Metro Exodus.

World War Z

World War Z had uneven results with Smart Access Memory, with the greatest effect at 4K :

  • 1080p : -1.3%
  • 1440p : +0.5%
  • 2160p : +4.0%

Total War : Troy

Total War : Troy benefited greatly from Smart Access Memory, especially at the 1080p and 1440p resolutions.

  • 1080p : +15.7%
  • 1440p : +10.0%
  • 2160p : +4.0%

The Division 2

The Division 2 actually performed slightly worse with Smart Access Memory enabled :

  • 1080p : -0.6%
  • 1440p : No difference
  • 2160p : -1.5%

Dirt 5

Dirt 5 benefited the most at the 1440p resolution :

  • 1080p : +4.0%
  • 1440p : +10.3%
  • 2160p : +2.5%

Shadow of the Tomb Raider

Shadow of the Tomb Raider benefited the most at the 1080p and 1440p resolutions :

  • 1080p : +7.9%
  • 1440p : +5.5%
  • 2160p : +4.0%

Gears Tactics

Gears Tactics benefited the most at the 1080p and 1440p resolutions :

  • 1080p : +5.7%
  • 1440p : +9.6%
  • 2160p : +3.3%

Star Control: Origins

Smart Access Memory had no effect on Star Control: Origins.

 


 



PCIE Spread Spectrum from The Tech ARP BIOS Guide!

PCIE Spread Spectrum

Common Options : Down Spread, Disabled

 

PCIE Spread Spectrum : A Quick Review

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.

The PCIE Spread Spectrum BIOS feature controls spread spectrum clocking of the PCI Express interconnect.

When set to Down Spread, the motherboard modulates the PCI Express interconnect’s clock signal downwards by a small amount. Because the clock signal is modulated downwards, there is a slight reduction in performance.

The amount of modulation is not revealed and depends on what the manufacturer has qualified for the motherboard. However, the greater the modulation, the greater the reduction of EMI and performance.

When set to Disabled, the motherboard disables any modulation of the PCI Express interconnect’s clock signal.

Generally, frequency modulation via this feature should not cause any problems. Since the motherboard only modulates the signal downwards, system stability is not compromised.

However, spread spectrum clocking can interfere with the operation of timing-critical devices like clock-sensitive SCSI devices. If you are using such devices on the PCI Express interconnect, you must disable PCIE Spread Spectrum.

System stability may also be compromised if you are overclocking the PCI Express interconnect. Therefore, it is recommended that you disable this feature if you are overclocking the PCI Express interconnect.

Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the PCI Express interconnect frequency a little to provide a margin of safety.

If you are not overclocking the PCI Express interconnect, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature.

Otherwise, disable it to remove even the slightest possibility of stability issues.


 

PCIE Spread Spectrum : The Full Details

All clock signals have extreme values (spikes) in their waveform that create EMI (Electromagnetic Interference). This EMI interferes with other electronics in the area. There are also claims that it allows electronic eavesdropping of the data being transmitted.

To prevent EMI from causing problems to other electronics, the FCC enacted Part 15 of the FCC regulations in 1975. It regulates the power output of such clock generators by limiting the amount of EMI they can generate. As a result, engineers use spread spectrum clocking to ensure that their motherboards comply with the FCC regulation on EMI levels.

Spread spectrum clocking works by continuously modulating the clock signal around a particular frequency. Instead of generating a typical waveform, the clock signal continuously varies around the target frequency within a tight range. This “spreads out” the power output and “flattens” the spikes of signal waveform, keeping them below the FCC limit.
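
This flattening effect can be demonstrated numerically. The sketch below (pure Python, with illustrative modulation parameters – real motherboards use far slower and smaller modulation than this) compares the spectral peak of a fixed-frequency clock against a triangularly down-spread version of the same clock :

```python
import cmath
import math

N = 20000           # number of samples
F0 = 0.1            # carrier frequency (cycles per sample)
SPREAD = 0.01       # 1% down-spread (illustrative value)
MOD_PERIOD = 2000   # modulation period in samples (illustrative)

def fixed_clock():
    # A plain, unmodulated clock tone at F0
    return [math.cos(2 * math.pi * F0 * n) for n in range(N)]

def down_spread_clock():
    # Frequency sweeps between F0 and F0 * (1 - SPREAD): down-spread only,
    # following a triangular modulation profile
    samples, phase = [], 0.0
    for n in range(N):
        tri = abs((n % MOD_PERIOD) / MOD_PERIOD - 0.5) * 2  # triangle 0..1
        phase += 2 * math.pi * F0 * (1 - SPREAD * tri)
        samples.append(math.cos(phase))
    return samples

def power_at(samples, freq):
    # Magnitude of the DFT evaluated at a single frequency
    return abs(sum(s * cmath.exp(-2j * math.pi * freq * n)
                   for n, s in enumerate(samples)))

peak_fixed = power_at(fixed_clock(), F0)
peak_spread = power_at(down_spread_clock(), F0)
print(f"peak reduced to {peak_spread / peak_fixed:.0%} of the fixed-clock peak")
```

The modulated clock spreads the same signal energy over a band of frequencies, so the magnitude at the carrier frequency – the “spike” the FCC limit applies to – drops substantially.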

Clock signal (courtesy of National Instruments)

The same clock signal, with spread spectrum clocking

The PCIE Spread Spectrum BIOS feature controls spread spectrum clocking of the PCI Express interconnect.

When set to Down Spread, the motherboard modulates the PCI Express interconnect’s clock signal downwards by a small amount. Because the clock signal is modulated downwards, there is a slight reduction in performance.

The amount of modulation is not revealed and depends on what the manufacturer has qualified for the motherboard. However, the greater the modulation, the greater the reduction of EMI and performance.

When set to Disabled, the motherboard disables any modulation of the PCI Express interconnect’s clock signal.

Generally, frequency modulation via this feature should not cause any problems. Since the motherboard only modulates the signal downwards, system stability is not compromised.

However, spread spectrum clocking can interfere with the operation of timing-critical devices like clock-sensitive SCSI devices. If you are using such devices on the PCI Express interconnect, you must disable PCIE Spread Spectrum.

System stability may also be compromised if you are overclocking the PCI Express interconnect. Of course, this depends on the amount of modulation, the extent of overclocking and other factors like temperature, voltage levels, etc. As such, the problem may not readily manifest itself immediately.

Therefore, it is recommended that you disable this feature if you are overclocking the PCI Express interconnect. You will be able to achieve better overclockability, at the expense of higher EMI.

Of course, if EMI reduction is still important to you, enable this feature by all means, but you may have to reduce the PCI Express interconnect frequency a little to provide a margin of safety.

If you are not overclocking the PCI Express interconnect, the decision to enable or disable this feature is really up to you. If you have electronic devices nearby that are affected by the EMI generated by your motherboard, or have sensitive data that must be safeguarded from electronic eavesdropping, enable this feature.

Otherwise, disable it to remove even the slightest possibility of stability issues.

 


 



Intel Nervana NNP-T1000 PCIe + Mezzanine Cards Revealed!

The new Intel Nervana NNP-T1000 neural network processor comes in PCIe and Mezzanine card options designed for AI training acceleration.

Here is EVERYTHING you need to know about the Intel Nervana NNP-T1000 PCIe and Mezzanine card options!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

Recommended : Intel Nervana AI Accelerators : Everything You Need To Know!

 

Intel Nervana NNP-T1000

The Intel Nervana NNP-T1000 is not only capable of training even the most complex deep learning models, but is also highly scalable – offering near-linear scaling and efficiency.

By combining compute, memory and networking capabilities in a single ASIC, it allows for maximum efficiency with flexible and simple scaling.

Each Nervana NNP-T1000 is powered by up to 24 Tensor Processing Clusters (TPCs), and comes with 16 bi-directional Inter-Chip Links (ICL).

Its TPCs support the 32-bit floating point (FP32) and brain floating point (bfloat16) formats, allowing for multiple deep learning primitives with maximum processing efficiency.

Its high-speed ICL communication fabric allows for near-linear scaling, directly connecting multiple NNP-T cards within servers, between servers and even inside and across racks.

  • High compute utilisation using Tensor Processing Clusters (TPC) with bfloat16 numeric format
  • Both on-die SRAM and on-package High-Bandwidth Memory (HBM) keep data local, reducing movement
  • Its Inter-Chip Links (ICL) glueless fabric architecture and fully-programmable router achieve near-linear scaling across multiple cards, systems and PODs
  • Available in PCIe and OCP Open Accelerator Module (OAM) form factors
  • Offers a programmable Tensor-based instruction set architecture (ISA)
  • Supports common open-source deep learning frameworks like TensorFlow, PaddlePaddle and PyTorch
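
As a side note on the bfloat16 format mentioned above : it is simply the top 16 bits of a standard 32-bit float (1 sign bit, 8 exponent bits, 7 mantissa bits), which is why it preserves FP32’s dynamic range while halving storage. A minimal sketch of the conversion – note that real hardware typically rounds to nearest-even, whereas this version just truncates :

```python
import struct

def to_bfloat16(value):
    # Reinterpret the float32 bit pattern and keep only the top 16 bits:
    # 1 sign bit, 8 exponent bits and 7 mantissa bits remain.
    bits = struct.unpack(">I", struct.pack(">f", value))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # 3.140625 - only ~3 significant digits survive
```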

 

Intel Nervana NNP-T1000 Models

The Intel Nervana NNP-T1000 is currently available in two form factors – a dual-slot PCI Express card, and an OAM Mezzanine Card, with these specifications :

Specification | Intel Nervana NNP-T1300 | Intel Nervana NNP-T1400
Form Factor | Dual-slot PCIe Card | OAM Mezzanine Card
Compliance | PCIe CEM | OAM 1.0
Compute Cores | 22 TPCs | 24 TPCs
Frequency | 950 MHz | 1100 MHz
SRAM | 55 MB on-chip, with ECC | 60 MB on-chip, with ECC
Memory | 32 GB HBM2, with ECC | 32 GB HBM2, with ECC
Memory Bandwidth | 2.4 Gbps (300 MB/s) | 2.4 Gbps (300 MB/s)
Inter-Chip Links (ICL) | 16 x 112 Gbps (448 GB/s) | 16 x 112 Gbps (448 GB/s)
ICL Topology | Ring | Ring, Hybrid Cube Mesh, Fully Connected
Multi-Chassis Scaling | Yes | Yes
Multi-Rack Scaling | Yes | Yes
I/O to Host CPU | PCIe Gen3 / Gen4 x16 | PCIe Gen3 / Gen4 x16
Thermal Solution | Passive | Integrated Passive Cooling
TDP | 300 W | 375 W
Dimensions | 265.32 mm x 111.15 mm | 165 mm x 102 mm

 

Intel Nervana NNP-T1000 PCIe Card

This is what the Intel Nervana NNP-T1000 (also known as the NNP-T1300) PCIe card looks like :

 

Intel Nervana NNP-T1000 OAM Mezzanine Card

This is what the Intel Nervana NNP-T1000 (also known as NNP-T1400) Mezzanine card looks like :

 


 



PCI-E Max Read Request Size – The Tech ARP BIOS Guide

PCI-E Max Read Request Size

Common Options : Automatic, Manual – User Defined

 

Quick Review of PCI-E Max Read Request Size

This BIOS feature can be used to ensure a fairer allocation of PCI Express bandwidth. It determines the largest read request any PCI Express device can generate. Reducing the maximum read request size reduces the hogging effect of any device with large reads.

When set to Automatic, the BIOS will automatically select a maximum read request size for PCI Express devices. Usually, this would be a manufacturer-preset value that’s designed with maximum “fairness“, rather than performance in mind.

When set to Manual – User Defined, you will be allowed to enter a numeric value (in bytes). Although it appears as though you can enter any value, you must only enter one of these values :

128 – This sets the maximum read request size to 128 bytes. All PCI Express devices will only be allowed to generate read requests of up to 128 bytes in size.

256 – This sets the maximum read request size to 256 bytes. All PCI Express devices will only be allowed to generate read requests of up to 256 bytes in size.

512 – This sets the maximum read request size to 512 bytes. All PCI Express devices will only be allowed to generate read requests of up to 512 bytes in size.

1024 – This sets the maximum read request size to 1024 bytes. All PCI Express devices will only be allowed to generate read requests of up to 1024 bytes in size.

2048 – This sets the maximum read request size to 2048 bytes. All PCI Express devices will only be allowed to generate read requests of up to 2048 bytes in size.

4096 – This sets the maximum read request size to 4096 bytes. This is the largest read request size currently supported by the PCI Express protocol. All PCI Express devices will be allowed to generate read requests of up to 4096 bytes in size.

It is recommended that you set this BIOS feature to 4096, as it maximizes performance by allowing all PCI Express devices to generate as large a read request as they require. However, this will be at the expense of devices that generate smaller read requests.

Even so, this is generally not a problem unless they require a certain degree of quality of service. For example, you may experience glitches with the audio output (e.g. stuttering) of a PCI Express sound card when its reads are delayed by a bandwidth-hogging graphics card.

If such problems arise, reduce the maximum read request size. This reduces the amount of bandwidth any PCI Express device can hog at the expense of the other devices.

 

Details of PCI-E Max Read Request Size

Arbitration for PCI Express bandwidth is based on the number of requests from each device. However, the size of each request is not taken into account. As such, if some devices request much larger data reads than others, the PCI Express bandwidth will be unevenly allocated between those devices.

This can cause problems for applications that have specific quality of service requirements. These applications may not have timely access to the requested data simply because another PCI Express device is hogging the bandwidth by requesting very large data reads.

This BIOS feature can be used to correct that and ensure a fairer allocation of PCI Express bandwidth. It determines the largest read request any PCI Express device can generate. Reducing the maximum read request size reduces the hogging effect of any device with large reads.

However, doing so reduces the performance of devices that generate large reads. Instead of generating large but fewer reads, they will have to generate smaller reads but in greater numbers. Because arbitration is done according to the number of requests, they will have to wait longer for the data requested.
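
The trade-off is easy to see with some quick arithmetic. Splitting a single large transfer (a hypothetical 64 KB read, in this example) into maximum-sized read requests :

```python
import math

def read_requests_needed(transfer_bytes, max_read_request):
    # Each read request can ask for at most max_read_request bytes,
    # so a large transfer must be split into this many requests.
    return math.ceil(transfer_bytes / max_read_request)

transfer = 64 * 1024  # a hypothetical 64 KB texture read
for size in (128, 256, 512, 1024, 2048, 4096):
    print(f"{size:>4} bytes -> {read_requests_needed(transfer, size)} requests")
```

At 4096 bytes, the transfer takes just 16 requests; at 128 bytes, it takes 512 requests – and since arbitration counts requests, the device must now compete 32 times as often for the same data.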


When set to Automatic, the BIOS will automatically select a maximum read request size for PCI Express devices. Usually, this would be a manufacturer-preset value that’s designed with maximum “fairness“, rather than performance in mind.

When set to Manual – User Defined, you will be allowed to enter a numeric value (in bytes). Although it appears as though you can enter any value, you must only enter one of these values :

128 – This sets the maximum read request size to 128 bytes. All PCI Express devices will only be allowed to generate read requests of up to 128 bytes in size.

256 – This sets the maximum read request size to 256 bytes. All PCI Express devices will only be allowed to generate read requests of up to 256 bytes in size.

512 – This sets the maximum read request size to 512 bytes. All PCI Express devices will only be allowed to generate read requests of up to 512 bytes in size.

1024 – This sets the maximum read request size to 1024 bytes. All PCI Express devices will only be allowed to generate read requests of up to 1024 bytes in size.

2048 – This sets the maximum read request size to 2048 bytes. All PCI Express devices will only be allowed to generate read requests of up to 2048 bytes in size.

4096 – This sets the maximum read request size to 4096 bytes. This is the largest read request size currently supported by the PCI Express protocol. All PCI Express devices will be allowed to generate read requests of up to 4096 bytes in size.

It is recommended that you set this BIOS feature to 4096, as it maximizes performance by allowing all PCI Express devices to generate as large a read request as they require. However, this will be at the expense of devices that generate smaller read requests.

Even so, this is generally not a problem unless they require a certain degree of quality of service. For example, you may experience glitches with the audio output (e.g. stuttering) of a PCI Express sound card when its reads are delayed by a bandwidth-hogging graphics card.

If such problems arise, reduce the maximum read request size. This reduces the amount of bandwidth any PCI Express device can hog at the expense of the other devices.


 


PCI-E Reference Clock from The Tech ARP BIOS Guide

PCI-E Reference Clock

Common Options : 100 MHz, adjustable in 1 MHz steps

 

Quick Review of PCI-E Reference Clock

All PCI Express slots use a 100 MHz reference clock to generate their clocking signals. This is where the PCI-E Reference Clock BIOS option comes in. It controls the frequency of the PCI Express reference clock.

By default, the PCI-E Reference Clock is set to 100 MHz. This is the official reference clock speed for the PCI Express interface. Some BIOSes allow you to adjust this reference clock, usually in steps of 1 MHz.

Adjusting the PCI Express reference clock changes its signalling rate and bandwidth. However, because the PCI Express x16 interface already has such high bandwidth, overclocking it would only have a small effect on real world performance.

In motherboards that suffer from the PCI Express x1 bug, adjusting the reference clock speed up or down can potentially “trick” the motherboard to restore the PCI Express slot to its full x16 mode. However, raising the PCI Express reference clock to 120 MHz can cause timing-sensitive PCI Express devices like SATA controllers to fail. Therefore, it is recommended that you do not exceed 115 MHz, should you choose to overclock the PCI Express reference clock.

 

Details of PCI-E Reference Clock

The PCI Express interface is made up of a series of unidirectional, serial point-to-point links. Each PCI Express lane consists of a pair of those links, making it bidirectional. In its slowest form (PCI Express 1.x), each PCI Express lane has a data transfer rate of 250 MB/s in each direction. The newer PCI Express 2.0 doubles the data transfer rate to 500 MB/s per lane.

For high-bandwidth applications, multiple PCI Express lanes are used to greatly increase the data transfer rate. Each PCI Express slot can support a variety of lanes, from just one lane (x1) up to 32 lanes (x32). At the moment though, the “widest” slot available is the PCI Express x16.

In motherboards that support the PCI Express 1.x standard, the x16 slot delivers a maximum bandwidth of 4 GB/s with a signalling rate of 2.5 gigatransfers per second. The new PCI Express 2.0 standard doubles the signalling rate and the x16 slot’s bandwidth to 8 GB/s.

Whether your motherboard supports the PCI Express 1.x standard or the newer PCI Express 2.0 standard, all PCI Express slots use a 100 MHz reference clock to generate their clocking signals. This is where the PCI-E Reference Clock BIOS option comes in. It controls the frequency of the PCI Express reference clock.


By default, the PCI-E Reference Clock is set to 100 MHz. This is the official reference clock speed for the PCI Express interface. Some BIOSes allow you to adjust this reference clock, usually in steps of 1 MHz.

Adjusting the PCI Express reference clock changes its signalling rate and bandwidth. For example, increasing the reference clock frequency to 110 MHz would raise the PCI Express signalling rate by 10% to 2.75 gigatransfers/s (PCI Express 1.x) or 5.5 gigatransfers/s (PCI Express 2.0). However, because the PCI Express x16 interface already has such high bandwidth, overclocking it would only have a small effect on real world performance.
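
Since the signalling rate scales linearly with the reference clock, the effect of any adjustment is simple to work out :

```python
def pcie_transfer_rate(ref_clock_mhz, generation=1):
    # PCIe 1.x runs at 2.5 GT/s and PCIe 2.0 at 5.0 GT/s with the
    # standard 100 MHz reference clock; the rate scales linearly with it.
    base_gt_per_s = {1: 2.5, 2: 5.0}[generation]
    return base_gt_per_s * ref_clock_mhz / 100.0

print(pcie_transfer_rate(110, 1))  # 2.75 (GT/s)
print(pcie_transfer_rate(110, 2))  # 5.5 (GT/s)
```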

In motherboards that suffer from the PCI Express x1 bug, adjusting the reference clock speed up or down can potentially “trick” the motherboard into restoring the PCI Express slot to its full x16 mode. However, raising the PCI Express reference clock to 120 MHz can cause timing-sensitive PCI Express devices like SATA controllers to fail. Therefore, should you choose to overclock the PCI Express reference clock, it is recommended that you do not exceed 115 MHz.


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


PCI-E Maximum Payload Size – The BIOS Optimization Guide

PCI-E Maximum Payload Size

Common Options : 128, 256, 512, 1024, 2048, 4096

 

Quick Review

The PCI-E Maximum Payload Size BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size used by the PCI Express controller. The TLP payload size determines the amount of data transmitted within each data packet.

When set to 128, the PCI Express controller will only use a maximum data payload of 128 bytes within each TLP.

When set to 256, the PCI Express controller will only use a maximum data payload of 256 bytes within each TLP.

When set to 512, the PCI Express controller will only use a maximum data payload of 512 bytes within each TLP.

When set to 1024, the PCI Express controller will only use a maximum data payload of 1024 bytes within each TLP.

When set to 2048, the PCI Express controller will only use a maximum data payload of 2048 bytes within each TLP.

When set to 4096, the PCI Express controller uses the maximum data payload of 4096 bytes within each TLP. This is the maximum payload size currently supported by the PCI Express protocol.

It is recommended that you set PCI-E Maximum Payload Size to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the motherboard chipset will communicate with it with a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set this BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.

 

Details of PCI-E Maximum Payload Size

The PCI Express protocol transmits data as well as control messages on the same links. This differentiates the PCI Express interconnect from the PCI bus and the AGP port, which make use of separate sideband signalling for control messages.

Control messages are delivered as Data Link Layer Packets or DLLPs, while data packets are sent out as Transaction Layer Packets or TLPs. However, TLPs are not pure data packets. They have a header which carries information like packet size, message type, traffic class, etc.

In addition, the actual data (known as the “payload”) is encoded with the 8B/10B encoding scheme. This replaces 8 uncoded bits with 10 encoded bits. This itself results in a 20% “loss” of bandwidth. The TLP overhead is further exacerbated by a 32-bit LCRC error-checking code.

Therefore, the size of the data payload is an important factor in determining the efficiency of the PCI Express interconnect. As the data payload gets smaller, the TLP becomes less efficient, because the overhead will then take up a more significant amount of bandwidth. To achieve maximum efficiency, the TLP should be as large as possible.

The PCI Express specifications define the following TLP payload sizes:

  • 128 bytes
  • 256 bytes
  • 512 bytes
  • 1024 bytes
  • 2048 bytes
  • 4096 bytes

However, it is up to the manufacturer to set the maximum TLP payload size supported by the PCI Express device. It determines the maximum TLP payload size the device can send or receive. When two PCI Express devices communicate with each other, the largest TLP payload size supported by both devices will be used.
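In other words, the effective maximum payload is the smallest of the maxima involved: the device's limit, the chipset's limit and this BIOS setting. A minimal sketch (helper name is illustrative):

```python
# The link uses the largest payload size that every party supports,
# which works out to the minimum of the individual maxima.
def negotiated_payload(device_max, chipset_max, bios_setting):
    return min(device_max, chipset_max, bios_setting)

print(negotiated_payload(512, 4096, 4096))  # 512: the device caps the link
print(negotiated_payload(2048, 4096, 256))  # 256: a low BIOS setting caps capable devices
```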


The PCI-E Maximum Payload Size BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size used by the PCI Express controller. The TLP payload size, as mentioned earlier, determines the amount of data transmitted within each data packet.

When set to 128, the PCI Express controller will only use a maximum data payload of 128 bytes within each TLP.

When set to 256, the PCI Express controller will only use a maximum data payload of 256 bytes within each TLP.

When set to 512, the PCI Express controller will only use a maximum data payload of 512 bytes within each TLP.

When set to 1024, the PCI Express controller will only use a maximum data payload of 1024 bytes within each TLP.

When set to 2048, the PCI Express controller will only use a maximum data payload of 2048 bytes within each TLP.

When set to 4096, the PCI Express controller uses the maximum data payload of 4096 bytes within each TLP. This is the maximum payload size currently supported by the PCI Express protocol.

It is recommended that you set PCI-E Maximum Payload Size to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the motherboard chipset will communicate with it with a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set this BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.


 


PCI Express Burn-in Mode – The BIOS Optimization Guide

PCI Express Burn-in Mode

Common Options : Default, 101.32MHz, 102.64MHz, 103.96MHz, 105.28MHz, 106.6MHz, 107.92MHz, 109.24MHz

 

Quick Review

The PCI Express Burn-in Mode BIOS feature allows you to overclock the PCI Express bus, even if Intel stamps its foot petulantly and insists that it is not meant for this purpose. While it does not give you direct control of the bus clocks, it allows some overclocking of the PCI Express bus.

When this BIOS feature is set to Default, the PCI Express bus runs at its normal reference clock speed of 100 MHz.

When this BIOS feature is set to 101.32MHz, the PCI Express bus runs at a higher speed of 101.32MHz.

When this BIOS feature is set to 102.64MHz, the PCI Express bus runs at a higher speed of 102.64MHz.

When this BIOS feature is set to 103.96MHz, the PCI Express bus runs at a higher speed of 103.96MHz.

When this BIOS feature is set to 105.28MHz, the PCI Express bus runs at a higher speed of 105.28MHz.

When this BIOS feature is set to 106.6MHz, the PCI Express bus runs at a higher speed of 106.6MHz.

When this BIOS feature is set to 107.92MHz, the PCI Express bus runs at a higher speed of 107.92MHz.

When this BIOS feature is set to 109.24MHz, the PCI Express bus runs at a higher speed of 109.24MHz.

For better performance, it is recommended that you set this BIOS feature to 109.24MHz. This overclocks the PCI Express bus by about 9%, which should not cause any stability problems with most PCI Express devices. But if you encounter any stability issues, use a lower setting.


 

Details of PCI Express Burn-in Mode

While many motherboard manufacturers allow you to overclock various system clocks, Intel officially does not condone or support overclocking. Therefore, motherboards sold by Intel lack BIOS features that allow you to directly modify bus clocks.

However, some Intel motherboards come with a PCI Express Burn-in Mode BIOS feature. This ostensibly allows you to “burn-in” PCI Express devices with a slightly higher bus speed before settling back to the normal bus speed.

Of course, you can use this BIOS feature to overclock the PCI Express bus, even if Intel stamps its foot petulantly and insists that it is not meant for this purpose. While it does not give you direct control of the bus clocks, it allows some overclocking of the PCI Express bus.

When this BIOS feature is set to Default, the PCI Express bus runs at its normal reference clock speed of 100 MHz.

When this BIOS feature is set to 101.32MHz, the PCI Express bus runs at a higher speed of 101.32MHz.

When this BIOS feature is set to 102.64MHz, the PCI Express bus runs at a higher speed of 102.64MHz.

When this BIOS feature is set to 103.96MHz, the PCI Express bus runs at a higher speed of 103.96MHz.

When this BIOS feature is set to 105.28MHz, the PCI Express bus runs at a higher speed of 105.28MHz.

When this BIOS feature is set to 106.6MHz, the PCI Express bus runs at a higher speed of 106.6MHz.

When this BIOS feature is set to 107.92MHz, the PCI Express bus runs at a higher speed of 107.92MHz.

When this BIOS feature is set to 109.24MHz, the PCI Express bus runs at a higher speed of 109.24MHz.

As you can see, this BIOS feature doesn’t allow much play with the clock speed. You can only adjust the clock speed upwards, by about 9% at most.
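Worked out from the listed options, the seven settings step the 100 MHz reference clock up in 1.32 MHz increments, for overclocks of roughly 1% to 9%. A quick Python sketch (names illustrative):

```python
# The seven burn-in options step the 100 MHz reference clock up by 1.32 MHz each.
settings = [round(100 + 1.32 * n, 2) for n in range(1, 8)]
overclock_pct = [round(s - 100, 2) for s in settings]  # percent over the 100 MHz default

print(settings)       # [101.32, 102.64, 103.96, 105.28, 106.6, 107.92, 109.24]
print(overclock_pct)  # [1.32, 2.64, 3.96, 5.28, 6.6, 7.92, 9.24]
```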

For better performance, it is recommended that you set this BIOS feature to 109.24MHz. This overclocks the PCI Express bus by about 9%, which should not cause any stability problems with most PCI Express devices. But if you encounter any stability issues, use a lower setting.


 


Maximum TLP Payload – The BIOS Optimization Guide

Maximum TLP Payload

Common Options : 128, 256, 512, 1024, 2048, 4096

 

Quick Review of Maximum TLP Payload

The Maximum TLP Payload BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size that the motherboard’s PCI Express controller should use. The TLP payload size determines the amount of data transmitted within each data packet.

When set to 128, the motherboard’s PCI Express controller will only support a maximum data payload of 128 bytes within each TLP.

When set to 256, the motherboard’s PCI Express controller will only support a maximum data payload of 256 bytes within each TLP.

When set to 512, the motherboard’s PCI Express controller will only support a maximum data payload of 512 bytes within each TLP.

When set to 1024, the motherboard’s PCI Express controller will only support a maximum data payload of 1024 bytes within each TLP.

When set to 2048, the motherboard’s PCI Express controller will only support a maximum data payload of 2048 bytes within each TLP.

When set to 4096, the motherboard’s PCI Express controller supports the maximum data payload of 4096 bytes within each TLP. This is the maximum payload size currently supported by the PCI Express protocol.

It is recommended that you set the Maximum TLP Payload BIOS feature to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the PCI Express controller will communicate with it with a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set the Maximum TLP Payload BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.

 

Details of Maximum TLP Payload

The PCI Express protocol transmits data as well as control messages on the same links. This differentiates the PCI Express interconnect from the PCI bus and the AGP port, which make use of separate sideband signalling for control messages.

Control messages are delivered as Data Link Layer Packets or DLLPs, while data packets are sent out as Transaction Layer Packets or TLPs. However, TLPs are not pure data packets. They have a header which carries information like packet size, message type, traffic class, etc.

In addition, the actual data (known as the “payload”) is encoded with the 8B/10B encoding scheme. This replaces 8 uncoded bits with 10 encoded bits. This itself results in a 20% “loss” of bandwidth. The TLP overhead is further exacerbated by a 32-bit LCRC error-checking code.

Therefore, the size of the data payload is an important factor in determining the efficiency of the PCI Express interconnect. As the data payload gets smaller, the TLP becomes less efficient, because the overhead will then take up a more significant amount of bandwidth. To achieve maximum efficiency, the TLP should be as large as possible.
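To see how payload size affects efficiency, one can count the fixed per-packet overhead. Assuming a 12-byte TLP header and the 4-byte (32-bit) LCRC mentioned above (actual header sizes vary between 12 and 16 bytes, and framing adds a little more), a quick sketch:

```python
# Rough payload efficiency: payload / (payload + fixed per-TLP overhead).
HEADER_BYTES = 12  # assumed 3DW header; 4DW headers are 16 bytes
LCRC_BYTES = 4     # the 32-bit LCRC mentioned above

def tlp_efficiency(payload_bytes):
    return payload_bytes / (payload_bytes + HEADER_BYTES + LCRC_BYTES)

for size in (128, 256, 512, 1024, 2048, 4096):
    print(size, round(tlp_efficiency(size) * 100, 1), "%")
```

Even with this simplified overhead model, a 128-byte payload wastes roughly 11% of the packet on overhead, while a 4096-byte payload wastes well under 1%.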

The PCI Express specifications define the following TLP payload sizes:

  • 128 bytes
  • 256 bytes
  • 512 bytes
  • 1024 bytes
  • 2048 bytes
  • 4096 bytes

However, it is up to the manufacturer to set the maximum TLP payload size supported by the PCI Express device. It determines the maximum TLP payload size the device can send or receive. When two PCI Express devices communicate with each other, the largest TLP payload size supported by both devices will be used.


The Maximum TLP Payload BIOS feature determines the maximum TLP (Transaction Layer Packet) payload size that the motherboard’s PCI Express controller should use. The TLP payload size, as mentioned earlier, determines the amount of data transmitted within each data packet.

When set to 128, the motherboard’s PCI Express controller will only support a maximum data payload of 128 bytes within each TLP.

When set to 256, the motherboard’s PCI Express controller will only support a maximum data payload of 256 bytes within each TLP.

When set to 512, the motherboard’s PCI Express controller will only support a maximum data payload of 512 bytes within each TLP.

When set to 1024, the motherboard’s PCI Express controller will only support a maximum data payload of 1024 bytes within each TLP.

When set to 2048, the motherboard’s PCI Express controller will only support a maximum data payload of 2048 bytes within each TLP.

When set to 4096, the motherboard’s PCI Express controller supports the maximum data payload of 4096 bytes within each TLP. This is the maximum payload size currently supported by the PCI Express protocol.

It is recommended that you set the Maximum TLP Payload BIOS feature to 4096, as this allows all PCI Express devices connected to send up to 4096 bytes of data in each TLP. This gives you maximum efficiency per transfer.

However, this is subject to the PCI Express device connected to it. If that device only supports a maximum TLP payload size of 512 bytes, the PCI Express controller will communicate with it with a maximum TLP payload size of 512 bytes, even if you set this BIOS feature to 4096.

On the other hand, if you set the Maximum TLP Payload BIOS feature to a low value like 256, it will force all connected devices to use a maximum payload size of 256 bytes, even if they support a much larger TLP payload size.


 


The Kingston KC1000 NVMe PCIe SSD Announced!

May 25, 2017 – Kingston today announced the Kingston KC1000 NVMe PCIe SSD. Shipping in mid-June, the M.2 NVMe PCIe SSD is over 2x faster than SATA-based SSDs and over 40x faster than a 7200RPM hard disk drive.

 

The Kingston KC1000 NVMe PCIe SSD

The Kingston KC1000 is built for the power user, providing the ultimate low-latency performance boost for resource-demanding applications, including high-resolution video editing, data visualization, gaming and other data-intensive workloads where traditional storage solutions are unable to keep pace with data demand.

The demands of today’s performance power users are constantly being put to the test as new data-intensive applications push the boundaries of what can be achieved with even the market’s high performance professional workstations and most powerful gaming rigs.

The KC1000 is the perfect solution to meet the needs of media and design professionals, gaming enthusiasts and anyone who needs ultra-low latency storage performance to end data bottlenecks.

This native NVMe device offers one of the industry’s most powerful storage solutions for high-resolution content delivery, virtual reality applications, accelerated game play or a competitive edge for the creative professional on tight deadlines.

The Kingston KC1000 delivers up to 290,000 IOPS and will ship in mid-June in 240GB, 480GB and 960GB capacities. The high-performance SSD supports the PCIe Gen3 x4 interface and the latest NVMe protocol.


The Kingston KC1000 provides accelerated boot and load speeds and increases sequential read/write performance, as well as offering improved endurance and energy efficiency. It is perfect for users seeking instant, breakthrough performance improvements for:

  • High-resolution video editing
  • Virtual and augmented reality applications
  • CAD software applications
  • Streaming media
  • Graphically intensive video games
  • Data visualization
  • Real-time analytics

The KC1000 is backed by a limited five-year warranty and legendary Kingston support.

 

Kingston KC1000 NVMe PCIe SSD Features and Specifications

  • Form Factor: M.2 2280
  • Interface: NVMe PCIe Gen 3.0 x4 Lanes
  • Capacities: 240GB, 480GB, 960GB
  • Controller: Phison PS5007-E7
  • NAND: MLC
  • Sequential Read/Write:
    • 240GB: up to 2700/900MB/s
    • 480GB, 960GB: up to 2700/1600MB/s
  • Maximum 4K Read/Write:
    • 240GB: up to 225,000/190,000 IOPS
    • 480GB, 960GB: up to 290,000/190,000 IOPS
  • Random 4K Read/ Write:
    • 240GB, 480GB: up to 190,000/160,000 IOPS
    • 960GB: up to 190,000/165,000 IOPS
  • PCMARK Vantage HDD Suite Score: 150,000
  • Total Bytes Written (TBW):
    • 240GB: 300TB and .70 DWPD
    • 480GB: 550TB and .64 DWPD
    • 960GB: 1PB and .58 DWPD
  • Power Consumption: .11W Idle / .99W Avg / 4.95W (MAX) Read / 7.40W (MAX) write
  • Storage Temperature: -40°C to 85°C
  • Operating Temperature: 0°C to 70°C
  • Dimensions:
    • 80mm x 22mm x 3.5mm (M.2)
    • 180.98mm x 120.96mm x 21.59mm (with HHHL AIC – standard bracket)
    • 181.29mm x 80.14mm x 23.40mm (with HHHL AIC – low-profile bracket)
  • Weight:
    • 10g (M.2)
    • 76g (with HHHL AIC – standard bracket)
    • 69g (with HHHL AIC – low-profile bracket)
  • Vibration operating: 2.17G Peak (7-800Hz)
  • Vibration non-operating: 20G Peak (20-1000Hz)
  • MTBF: 2,000,000 hours
  • Warranty/support: Limited 5-year warranty with free technical support
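As a sanity check, the DWPD (drive writes per day) figures follow from dividing TBW by capacity over the five-year warranty. A quick Python sketch, assuming decimal units and a 365-day year (the small differences from Kingston's quoted figures are likely down to rounding in their spec sheet):

```python
# DWPD = drive writes per day = (TBW in GB) / (capacity in GB x warranty days).
def dwpd(tbw_tb, capacity_gb, warranty_years=5):
    return tbw_tb * 1000 / (capacity_gb * warranty_years * 365)

print(round(dwpd(300, 240), 2))    # 0.68 for the 240GB (Kingston quotes .70)
print(round(dwpd(550, 480), 2))    # 0.63 for the 480GB (Kingston quotes .64)
print(round(dwpd(1000, 960), 2))   # 0.57 for the 960GB (Kingston quotes .58)
```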

 


Init Display First – The BIOS Optimization Guide

Init Display First

Common Options : AGP or PCIe, PCI

 

Quick Review

The Init Display First BIOS feature allows you to select whether to boot the system using the PCIe / AGP graphics card or the PCI graphics card. This is important if you have both PCIe / AGP and PCI graphics cards.

If you are only using a single graphics card, the BIOS will ignore this setting and boot the computer using that graphics card. However, there may be a slight reduction in the time taken to detect and initialize the card if you select the proper setting. For example, if you only use a PCIe / AGP graphics card, then setting Init Display First to PCIe or AGP may speed up your system’s boot-up process.

If you are only using a single graphics card, it is recommended that you set the Init Display First feature to the proper setting for your system :

  • PCIe for a single PCIe card,
  • AGP for a single AGP card, and
  • PCI for a single PCI card.

But if you are using multiple graphics cards, it is up to you which card you want to use as your primary display card. It is recommended that you select the fastest graphics card as the primary display card.

 

Details

Although the AGP port and the PCI Express x16 slot were designed for the graphics subsystem, some users still have to use PCI graphics cards for multi-monitor support. This was more common with AGP motherboards, because there can be only one AGP port, while PCI Express motherboards can have multiple PCIe slots.

If you want to use multiple monitors on AGP motherboards, you must either get an AGP graphics card with multi-monitor support, or use PCI graphics cards. PCI Express motherboards usually have multiple PCIe slots, but there may still not be enough PCIe slots, and you may need to install PCI graphics cards.

For those who upgraded from a PCI graphics card to an AGP graphics card, it is certainly enticing to use the old PCI graphics card to support a second monitor. The PCI card would do the job just fine as it merely sends display data to the second monitor. You don’t need a powerful graphics card to run the second monitor, if it’s merely for display purposes.

When a PCI Express or an AGP graphics card works in tandem with a PCI graphics card, the BIOS has to determine which is the primary graphics card. Naturally, the default would be the PCIe or AGP graphics card, since it is almost certainly the faster card.

However, there are situations in which you may want to manually select the PCI graphics card instead. For example – you have a PCIe / AGP graphics card as well as a PCI graphics card, but only one monitor. This is where the Init Display First BIOS feature comes in. It allows you to select whether to boot the system using the PCIe / AGP graphics card or the PCI graphics card.


If you are only using a single graphics card, the BIOS will ignore this setting and boot the computer using that graphics card. However, there may be a slight reduction in the time taken to detect and initialize the card if you select the proper setting. For example, if you only use a PCIe / AGP graphics card, then setting Init Display First to PCIe or AGP may speed up your system’s boot-up process.

If you are only using a single graphics card, it is recommended that you set the Init Display First feature to the proper setting for your system :

  • PCIe for a single PCIe card,
  • AGP for a single AGP card, and
  • PCI for a single PCI card.

But if you are using multiple graphics cards, it is up to you which card you want to use as your primary display card. It is recommended that you select the fastest graphics card as the primary display card.

 


Apacer Z280 M.2 PCIe Gen 3 x4 SSD Introduced

The Apacer Z280 is Apacer’s latest answer to advanced SSDs. It supports PCIe Gen 3 x4 and is compliant with NVMe 1.2 in an M.2 form factor. Its blazing speed will boost your gaming performance without getting too costly.

With sustained read/write performance of 2500 MB/s and 1350 MB/s, the compact M.2-2280 drive keeps everything efficient at a massive capacity of 480GB. Want to be one step ahead of others? Look no further than the Z280, the high-performing SSD that is compatible with mini PCs and laptops.

Apacer Z280 M.2 PCIe SSD

Fast & Large Storage

The cutting-edge Apacer Z280 is compliant with the NVMe 1.2 standard and features the latest PCIe Gen 3 x4 interface to provide up to 4 times the bandwidth – up to 2500 MB/s reads and 1350 MB/s writes. Offering a massive capacity of 480GB, the Apacer Z280 delivers a random write speed of 175,000 IOPS to ensure that all actions in each gaming scene are smoothly processed, fluid and sharp, efficiently boosting a gamer’s status.

At only 80mm in length, the Apacer Z280 adopts the ultra-slim M.2-2280 form factor and fits most laptops and mini PCs, easily turning any device into a highly portable gaming tool. With its utmost performance and superb quality, the Apacer Z280 is undoubtedly the most powerful booster you can find for gameplay enhancement.


Stable & Safe Storage

The Apacer Z280 is equipped with several advanced features to ensure that PCIe Gen 3 x4 performs at its optimum, including ECC with 120-bit/2KB BCH, End-to-End Data Protection, Smart ECC and Global Wear Leveling for a fortified SSD lifespan, SmartRefresh for maintaining the accuracy and safety of data access, and a S.M.A.R.T. monitoring system to keep performance at its smoothest.

Meanwhile, Apacer provides its specially designed SSDWidget software, which allows users to examine the SSD status and perform instant firmware updates. In addition, the Apacer Z280 is an efficient power saver, capable of performing at low power consumption, enabling laptop batteries to last longer and PCs to be eco-friendlier.

 
