Tag Archives: DirectX 12

AMD Radeon RX 6900 XT : Ultimate Big Navi @ Just $999!

The AMD Radeon RX 6900 XT is here, and it only costs $999… if you can find one!

Here is what you need to know about the ultimate Big Navi graphics card!

 

AMD Radeon RX 6900 XT : Official Tech Briefing + Q&A Session

AMD claims that the Radeon RX 6900 XT is the ultimate 4K graphics card, thanks to its new RDNA 2 architecture with 80 enhanced Compute Units and 16 GB GDDR6 memory.

Let’s start with the official tech briefing and Q&A session with Nish Neelalojanan, Senior Manager of Gaming Product Management at the AMD Radeon Technologies Group.

 

AMD Radeon RX 6900 XT : Key Features

The AMD Radeon RX 6000 series graphics cards are built on the new RDNA 2 architecture, which features an enhanced Compute Unit, a new visual pipeline with Ray Accelerators, and the new AMD Infinity Cache.

Recommended : AMD RDNA 2 Architecture : Tech Highlights!

AMD Smart Access Memory

This is an exclusive feature that is enabled when a Radeon RX 6000 series graphics card is paired with a Ryzen 5000 series processor on an X570 motherboard.

It gives the Ryzen 5000 processor greater access to the graphics card's GDDR6 memory, delivering up to 13% better performance when combined with the new Rage Mode one-click overclocking setting.

Recommended : AMD Smart Access Memory (PCIe Resizable BAR) Guide

AMD Infinity Cache

The AMD Infinity Cache is a new and very large 128 MB data cache. Think of it as an L3 cache for the GPU.

AMD added it to dramatically increase memory bandwidth, while reducing memory latency and power consumption.

They claim it delivers up to 3.25X the bandwidth of the 256-bit GDDR6 memory, and up to 2.4X more effective bandwidth per watt. Based on the 512 GB/s that the cards' 256-bit GDDR6 interface delivers, that works out to roughly 1,664 GB/s of effective bandwidth.

Recommended : AMD Infinity Cache Explained : L3 Cache Comes To The GPU!

DirectX 12 Ultimate Support

The Radeon RX 6000 series supports next-generation games with greater realism through DirectX Raytracing (DXR) and Variable Rate Shading.
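For game developers, these are queryable capabilities rather than marketing labels. Here is a minimal sketch (ours, not AMD's) of how a Direct3D 12 application might confirm that the DXR and Variable Rate Shading tiers are present, assuming an ID3D12Device has already been created :

```cpp
#include <d3d12.h>

// Minimal sketch : check for the two DirectX 12 Ultimate features
// discussed above. Assumes `device` has already been created.
bool SupportsDXRAndVRS(ID3D12Device *device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));

    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1                    // DXR 1.1
        && opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2; // per-region VRS
}
```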

New Ray Accelerator

Every RDNA 2 compute unit has a new fixed-function Ray Accelerator engine to deliver real-time lighting, shadow and reflection realism through DirectX Raytracing (DXR).

It can be paired with AMD FidelityFX to enable hybrid rendering, offering a combination of rasterised and ray-traced effects for a blend of better image quality and higher performance.

Variable Rate Shading (VRS)

Variable Rate Shading dynamically reduces the shading rate for areas of the frame that do not require a high level of visual quality, improving performance with little to no perceptible loss in image quality.
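To illustrate, here is a hedged Direct3D 12 sketch of per-draw (Tier 1) VRS. The DrawDistantGeometry() and DrawHeroCharacters() helpers are hypothetical stand-ins for a game's own draw calls, and cmdList is assumed to be an ID3D12GraphicsCommandList5 on a VRS-capable device :

```cpp
// Shade once per 2x2 pixel block for geometry that won't show the difference...
cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
DrawDistantGeometry(cmdList);   // hypothetical helper

// ...then return to full-rate shading where quality matters.
cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
DrawHeroCharacters(cmdList);    // hypothetical helper
```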

Microsoft DirectStorage Support

The cards will also support the upcoming Microsoft DirectStorage API for faster load times and high-quality textures.

 

AMD Radeon RX 6900 XT : Specifications

The AMD Radeon RX 6900 XT headlines the Radeon RX 6000 series :

  • Radeon RX 6900 XT : 80 CUs, 80 RAs, 320 TMUs, 128 ROPs, 16 GB GDDR6
  • Radeon RX 6800 XT : 72 CUs, 72 RAs, 288 TMUs, 128 ROPs, 16 GB GDDR6
  • Radeon RX 6800 : 60 CUs, 60 RAs, 240 TMUs, 96 ROPs, 16 GB GDDR6

Here is a table comparing their key specifications :

| Specifications | RX 6900 XT | RX 6800 XT | RX 6800 |
|---|---|---|---|
| Transistors | 26.8 billion | 26.8 billion | 26.8 billion |
| Fab Process | 7 nm | 7 nm | 7 nm |
| Total Graphics Power | 300 W | 300 W | 250 W |
| Compute Units | 80 | 72 | 60 |
| Ray Accelerators | 80 | 72 | 60 |
| Stream Processors | 5120 | 4608 | 3840 |
| Game Clock | 2015 MHz | 2015 MHz | 1815 MHz |
| Boost Clock | 2250 MHz | 2250 MHz | 2105 MHz |
| TFLOPS | 23.04 | 20.74 | 16.17 |
| TMUs | 320 | 288 | 240 |
| Max. Texture Rate | 720 GT/s | 648 GT/s | 505 GT/s |
| ROPs | 128 | 128 | 96 |
| Max. Pixel Rate | 288 GP/s | 288 GP/s | 202 GP/s |
| Infinity Cache | 128 MB | 128 MB | 128 MB |
| Graphics Memory | 16 GB GDDR6 (16 Gbps) | 16 GB GDDR6 (16 Gbps) | 16 GB GDDR6 (16 Gbps) |
| Bus Width | 256-bit | 256-bit | 256-bit |
| Bandwidth | 512 GB/s | 512 GB/s | 512 GB/s |
| PCIe Interface | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 |
| PCIe Power | 2 x 8-pin | 2 x 8-pin | 2 x 8-pin |
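The peak figures in this table follow directly from the clocks and unit counts. Here is a quick cross-check of the Radeon RX 6900 XT column (our own arithmetic, using the boost clock as AMD does for peak rates) :

```cpp
// Our own arithmetic, cross-checking the RX 6900 XT column above.
constexpr double boostGHz  = 2.250;                      // 2250 MHz boost clock
constexpr double tflops    = 5120 * 2 * boostGHz / 1000; // 2 FLOPs/clock per SP = 23.04 TFLOPS
constexpr double texRate   = 320 * boostGHz;             // TMUs x clock = 720 GT/s
constexpr double pixRate   = 128 * boostGHz;             // ROPs x clock = 288 GP/s
constexpr double bandwidth = 256 / 8 * 16.0;             // bus bytes x 16 Gbps = 512 GB/s
```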

 

AMD Radeon RX 6900 XT : Price + Availability

At launch, the AMD Radeon RX 6900 XT has a SEP (Suggested E-tail Price) of US$999, which is approximately £749 / A$1,349 / S$1,339 / RM4,069.

As far as we can tell, it will only be available in the AMD reference design, although custom designs may be available at a later date.

 


 



NVIDIA Image Sharpening Guide for DirectX, Vulkan, OpenGL!

NVIDIA Game Ready drivers now support Image Sharpening for Vulkan and OpenGL games, as well as DirectX games. However, you will need to manually turn it on and set it up.

In this guide, we will share with you the benefits of turning on NVIDIA Image Sharpening, and how to turn it on for your DirectX, Vulkan and OpenGL games!

 

NVIDIA Image Sharpening For DirectX, Vulkan + OpenGL

NVIDIA first introduced Image Sharpening as an NVIDIA Freestyle filter. They then built it into the NVIDIA Control Panel, enabling it for all DirectX 9, 10, 11 and 12 games from the Game Ready 441.08 driver onwards.

Starting with Game Ready 441.41, they officially added NVIDIA Image Sharpening support for Vulkan and OpenGL games too.

Image sharpness can be adjusted on a per-game basis, or applied globally for all supported titles, with per-game settings overriding global settings.

In addition, you can use your NVIDIA GPU to render at a lower resolution for improved performance, and scale it up to the monitor’s native resolution, using Image Sharpening to improve the clarity of the upscaled images.

However, this is a global setting, and cannot be disabled or enabled on a per-game basis.

Note that if you are using a GeForce RTX or GeForce GTX 16-series graphics card, it will leverage the Turing GPU’s 5-tap scaling technology for better image quality.

Recommended : Learn How To Add ReShade Filters To GeForce Experience!

 

NVIDIA Image Sharpening : How To Enable It Globally

  1. Download and install GeForce Game Ready 441.41 driver or newer.
  2. Open the NVIDIA Control Panel, and click on Manage 3D settings.
  3. Scroll down the Global Settings tab to Image Sharpening.
  4. Select the On option, and you will have three further options.
    GPU Scaling : All resolutions below the monitor native resolution will be upscaled by the GPU
    Sharpen (0 to 1.0) : This controls the amount of image sharpening
    Ignore film grain (0 to 1.0) : This reduces any film grain that is generated by image sharpening
  5. Click OK and you are done!

Note : GPU Scaling is only available as a Global setting.

Recommended : How To Enable NVIDIA NULL For G-SYNC Monitors Correctly!

 

NVIDIA Image Sharpening : How To Enable It On A Per-App Basis

  1. Download and install GeForce Game Ready 441.41 driver or newer.
  2. Open the NVIDIA Control Panel, and click on Manage 3D settings.
  3. Click on the Program Settings tab and select the game you want to apply image sharpening.
    If you cannot find the game, click Add, choose the desired game and click Add Selected Program
  4. Scroll down to Image Sharpening.
  5. Select the On option, and you will have two further options.
    Sharpen (0 to 1.0) : This controls the amount of image sharpening
    Ignore film grain (0 to 1.0) : This reduces any film grain that is generated by image sharpening
  6. Click OK and you are done!

Note that the per-app setting will override the global settings.

 

NVIDIA Image Sharpening : Current Limitations

As of 26 November 2019, NVIDIA Image Sharpening has the following limitations :

  • Scaling is not supported on MSHybrid systems.
  • HDR displays driven by pre-Turing GPUs will not support scaling.
  • Scaling will not work with VR.
  • Scaling will not work with displays using the YUV420 format.
  • Scaling uses aspect ratio scaling, and will not use integer scaling.
  • Sharpening will not work with HDR displays.
  • GPU scaling engages only when games are played in full-screen mode, not in windowed or borderless windowed mode.
  • Some G-SYNC displays have a 6-tap / 64-phase scaler which scales better than Turing’s 5-tap / 32-phase scaler.
  • To avoid accidentally triggering scaling by applications or DWM, first change to the desired (non-native) resolution from the NVIDIA Control Panel, and then launch the application.
  • Turing’s 5-tap upscaler may not engage on certain monitors, depending on the monitor’s vblank timing.
  • Turing’s 5-tap upscaler may not engage if the input resolution is greater than 2560 pixels in either the x or y dimension.
  • Scaling is turned off automatically when switching display devices.
  • The “Restore Defaults” option in the control panel currently does not revert the upscaling resolution.

 

Death Stranding : Get It FREE With GeForce RTX!

From now until 29 July 2020, you will receive a Steam code for the PC digital download edition of Death Stranding with the purchase of selected GeForce RTX graphics cards, laptops or desktops!

 


 



NVIDIA RTX Real-Time Ray Tracing Technology Explained!

NVIDIA just announced the NVIDIA RTX real-time ray tracing technology at GDC 2018. It promises to bring real-time, cinematic-quality rendering to content creators and game developers. Find out what NVIDIA RTX is all about, and what it means to all of us!

 

The Holy Grail For Photorealism

Ray tracing is the gold standard for creating realistic, lifelike lighting, reflections and shadows. It adds a level of realism far beyond what is possible using traditional rendering techniques.

Real-time ray tracing replaces a majority of the rendering techniques used today with realistic optical calculations that replicate the way light behaves in the real world. Until now, however, it has been too computationally demanding to be practical for real-time, interactive gaming.

 

NVIDIA RTX Real-Time Ray Tracing Technology

NVIDIA RTX is a ray tracing technology that promises to deliver real-time ray tracing with high frame rates and low latency.

It currently runs exclusively on NVIDIA Volta GPUs. Applications that run on the newly-announced Microsoft DirectX Raytracing (DXR) API will support NVIDIA RTX when used with an NVIDIA Volta graphics card.

 

GameWorks For Ray Tracing


NVIDIA also announced that the NVIDIA GameWorks SDK will include a ray tracing denoiser module. This suite of tools and resources will help developers increase realism and shorten product cycles for titles developed using the new Microsoft DXR API and NVIDIA RTX.

The upcoming GameWorks SDK — which will support Volta and future generation GPU architectures — enables ray-traced area shadows, ray-traced glossy reflections and ray-traced ambient occlusion.

With these capabilities, developers can create realistic, high-quality reflections that capture the surrounding scene, and achieve physically accurate lighting and shadows.

 

Broad Industry Support

Industry leaders such as 4A Games, Epic, Remedy Entertainment and Unity featured NVIDIA RTX in their technology demonstrations at the Game Developers Conference 2018. They showed how real-time ray tracing can provide amazing, lifelike graphics in future games.


 


The NVIDIA GeForce GTX 1080 Ti Founders Edition Review

As the drumbeats of the AMD Vega graphics cards got louder and louder, NVIDIA introduced their ultimate Pascal-based gaming graphics card – the GeForce GTX 1080 Ti – to take them on. What’s astounding is that the NVIDIA GeForce GTX 1080 Ti is really a faster variant of the NVIDIA TITAN X at a massive discount!

Read our review of the NVIDIA GeForce GTX 1080 Ti Founders Edition graphics card, and find out why we think it deserves our coveted Editor’s Choice Award!

Updated @ 2017-11-01 : Revamped the entire review, and added new benchmark results comparing the GeForce GTX 1080 Ti against the new AMD Radeon RX Vega 64 and Vega 56 graphics cards, as well as the Radeon RX 580.

Originally posted @ 2017-05-17

 

The NVIDIA GeForce GTX 1080 Ti Specifications Comparison

This table compares the specifications of the NVIDIA GeForce GTX 1080 Ti against the TITAN X (Pascal) and the previous-generation GeForce GTX 980 Ti.

| Specifications | NVIDIA GeForce GTX TITAN X | NVIDIA GeForce GTX 1080 Ti | NVIDIA GeForce GTX 980 Ti |
|---|---|---|---|
| GPU | NVIDIA GP102 | NVIDIA GP102 | NVIDIA GM200 |
| CUDA Cores | 3584 | 3584 | 2816 |
| Textures Per Clock | 224 | 224 | 176 |
| Pixels Per Clock | 96 | 88 | 96 |
| Base Clock Speed | 1417 MHz | 1480 MHz | 1000 MHz |
| Boost Clock Speed | 1531 MHz | 1582 MHz | 1075 MHz |
| Texture Fillrate | 317.4~342.9 GT/s | 331.5~354.4 GT/s | 176.0~189.2 GT/s |
| Pixel Fillrate | 136.0~147.0 GP/s | 130.2~139.2 GP/s | 96.0~104.5 GP/s |
| Graphics Memory | 12 GB GDDR5X | 11 GB GDDR5X | 6 GB GDDR5 |
| Graphics Memory Bus Width | 384-bit | 352-bit | 384-bit |
| Graphics Memory Speed | 1250 MHz | 1375 MHz | 1753 MHz |
| Graphics Memory Bandwidth | 480 GB/s | 484 GB/s | 337 GB/s |
| TDP | 250 W | 250 W | 250 W |
| Launch Price | $1,200 | $699 (Founders Edition) | $649 |

For more specifications, please take a look at our Desktop Graphics Card Comparison Guide.


 

Unboxing The NVIDIA GeForce GTX 1080 Ti

This is our video showing the unboxing of the NVIDIA GeForce GTX 1080 Ti Founders Edition graphics card. This is exactly what you can expect if you purchase the Founders Edition card from NVIDIA.

The NVIDIA GeForce GTX 1080 Ti Founders Edition comes in a really nice cardboard box that doubles as a display stand. Not that anyone would actually leave the card there just for display!

Inside the box, you will find the following items :

  • NVIDIA GeForce GTX 1080 Ti Founders Edition graphics card
  • DisplayPort to DVI dongle / adaptor
  • NVIDIA GeForce GTX 1080 Ti Quick Start Guide
  • NVIDIA GeForce GTX 1080 Ti Support Guide
  • NVIDIA GeForce GTX 1080 Ti case badge


 


The NVIDIA GeForce GTX 1080 Ti Up Close

In this video, we are going to take a quick look at the NVIDIA GeForce GTX 1080 Ti Founders Edition graphics card, and its faceted die-cast aluminium cooler.

Here are close-up pictures of the various aspects of the NVIDIA GeForce GTX 1080 Ti Founders Edition graphics card.

Unlike the GeForce GTX 1070 or GeForce GTX 1060, the GeForce GTX 1080 Ti does not come with a dual-link DVI port. It only has three DisplayPort 1.4 ports and one HDMI 2.0b port. That’s where the DisplayPort to DVI adaptor comes in.

The NVIDIA GeForce GTX 1080 Ti has a TDP of 250 W, and requires both an 8-pin and a 6-pin PCI Express power cable. It also supports the SLI HB (High Bandwidth) bridge for two-way SLI pairing.

 

Founders Edition Advantage


The NVIDIA GeForce GTX 1080 Ti Founders Edition was designed to be the ultimate expression of NVIDIA’s gaming vision. Hence, they crafted it with premium materials and components, including a faceted die-cast aluminium-framed shroud for strength, rigidity and looks.

Aesthetics aside, the GeForce GTX 1080 Ti Founders Edition comes with an improved cooler built around a radial fan and an improved aluminium heatsink. The new heatsink features vapour chamber cooling and has 2x the surface area.

It also boasts a 7-phase power design with 14 high-efficiency dual FETs for both GPU and memory power supplies. Coupled with a low-impedance power delivery network and custom voltage regulators, they deliver better power efficiency and overclocking headroom.

 

The Thermal Output

The NVIDIA GeForce GTX 1080 Ti uses the NVIDIA GP102 GPU, which is fabricated on the 16 nm FinFET process. Thanks to the more efficient FinFET process, and the new NVIDIA Pascal architecture which is designed for power efficiency, the GeForce GTX 1080 Ti has a TDP (Thermal Design Power) of just 250 W.

We recorded the peak exhaust temperature of the GeForce GTX 1080 Ti Founders Edition, and compared it to the Radeon RX Vega 64 and Vega 56 graphics cards, as well as the older GeForce GTX 1070, GeForce GTX 1060 and Radeon RX 580 graphics cards.

Note that these are not the recorded temperatures, but how much hotter the exhaust air is above ambient temperature.

Thanks in part to a TDP that is 15% lower than that of the AMD Radeon RX Vega 64, the GeForce GTX 1080 Ti is a cooler-running card, producing exhaust air that was 2.8°C cooler than that of the previous-generation GeForce GTX 980 Ti. In fact, its peak exhaust temperature was just 2.6°C hotter than that of the Vega 56.

 

The Noise Level

Of course, the lower exhaust temperature might be due to a more powerful, and therefore, noisier, fan. Let’s see how noisy the GeForce GTX 1080 Ti Founders Edition fan really is…

In the video above, the GeForce GTX 1080 Ti Founders Edition was recorded while it was running the 3DMark Time Spy benchmark. As you can hear, the fan spools up quite a bit at times, but it is still quieter than the Vega 64.


 


Benchmarking Notes

Our graphics benchmarking test bed has the following specifications :

  • Operating System : Microsoft Windows 10 64-bit
  • Processor : AMD Ryzen 7 1800X processor running at 3.6 GHz
  • Motherboard : AORUS AX370-Gaming 5
  • Memory : 16 GB Corsair Vengeance LPX DDR4-3000 memory (dual-channel)
  • Storage : 240 GB HyperX Savage SSD
  • Monitor : Dell P2415Q Ultra HD Monitor

We used the GeForce driver 385.41 for the NVIDIA graphics cards, and Radeon Software 17.9.1 for the AMD graphics cards.

 

3DMark DirectX 12 Benchmark (2560 x 1440)

3DMark Time Spy is the DirectX 12 benchmark in 3DMark. It supports new API features like asynchronous compute, explicit multi-adapter, and multi-threading.

The NVIDIA GeForce GTX 1080 Ti Founders Edition did very well in this DirectX 12 benchmark. It was 35% faster than the Radeon RX Vega 64, 54% faster than the Vega 56, and 64% faster than the GeForce GTX 1070!


 

3DMark (1920 x 1080)

For DirectX 11 performance, we started testing the graphics cards using 3DMark at the entry-level gaming resolution – 1920 x 1080.

Due to the relatively low resolution, this is a CPU-limited test for many high-end graphics cards. Even so, the GeForce GTX 1080 Ti did well, delivering scores that were 26% faster than the Radeon RX Vega 64, 43% faster than the GeForce GTX 1070, and 58% faster than the Vega 56.

 

3DMark (2560 x 1440)

We then took 3DMark up a notch to the resolution of 2560 x 1440. Let’s take a look at the results!

The GeForce GTX 1080 Ti pulled away at the higher resolution. At 1440p, it was 30% faster than the Radeon RX Vega 64, 48% faster than the Radeon RX Vega 56, and 64% faster than the GeForce GTX 1070.

 

3DMark (3840 x 2160)

This is torture, even for the new AMD Vega 64 and Vega 56 graphics cards, but this is definitely the GeForce GTX 1080 Ti’s domain!

At this resolution, the NVIDIA GeForce GTX 1080 Ti was 28% faster than the Radeon RX Vega 64, 45% faster than the Radeon RX Vega 56, and 63% faster than the GeForce GTX 1070.


 


Ashes of the Singularity (1920 x 1080)

We tested Ashes of the Singularity in the DirectX 12 mode, which supports the Asynchronous Compute feature. We started with the full HD resolution.

At this resolution, the GeForce GTX 1080 Ti was 3.3% slower than the Radeon RX Vega 56, and 4.5% slower than the Vega 64. The two AMD Vega cards have a big advantage in AOTS, thanks to their support for Asynchronous Compute.

 

Ashes of the Singularity (2560 x 1440)

We then took Ashes of the Singularity up a notch to the resolution of 2560 x 1440. Let’s see how the cards fare now…

At 1440p, the GeForce GTX 1080 Ti was virtually equal to the AMD Radeon RX Vega 64 in performance, and just 2.5% faster than the Vega 56.

 

Ashes of the Singularity (3840 x 2160)

Finally, let’s see how the cards perform with Ashes of the Singularity running at the Ultra HD resolution of 3840 x 2160.

Only at the 4K resolution did the NVIDIA GeForce GTX 1080 Ti pull away from the two AMD Vega cards. Even so, it was just 4% faster than the Radeon RX Vega 64, and 16% faster than the Radeon RX Vega 56. It completely outclassed the GeForce GTX 1060 and the Radeon RX 580, beating both by 75%!


 

Warhammer (1920 x 1080)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, recorded by Total War : Warhammer‘s internal DirectX 12 benchmark.

All six graphics cards were so fast, they were CPU-limited at this resolution. But we can already see that support for Asynchronous Compute gave the new AMD Vega cards a major performance advantage. They actually beat the GeForce GTX 1080 Ti by 6-8%!

 

Warhammer (2560 x 1440)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, recorded by Total War : Warhammer‘s internal DirectX 12 benchmark.

At the 1440p resolution, the GeForce GTX 1080 Ti pulled just ahead of the Radeon RX Vega 56, beating it by 3.3%. The AMD Radeon RX Vega 64 was faster, but the performance gap dropped to just 3%.

 

Warhammer (3840 x 2160)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, recorded by Total War : Warhammer‘s internal DirectX 12 benchmark.

Only at this 4K resolution did the NVIDIA GeForce GTX 1080 Ti show its true mettle. Suddenly, it was 41% faster than the Radeon RX Vega 64, 60% faster than the Radeon RX Vega 56, and 65% faster than the GeForce GTX 1070.

It was also the only graphics card to deliver an average frame rate at or above 60 fps. If you want to play Warhammer at 4K in Ultra quality, this is definitely the card to use!


 


The Witcher 3 (1920 x 1080)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, that FRAPS recorded in The Witcher 3.

The NVIDIA GeForce GTX 1080 Ti was really fast, delivering average frame rates in excess of 150 fps! This made it 37% faster than the Radeon RX Vega 64, 55% faster than the Radeon RX Vega 56, and 61% faster than the GeForce GTX 1070.

 

The Witcher 3 (2560 x 1440)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, that FRAPS recorded in The Witcher 3.

All the cards took a massive hit in frame rate with the resolution boost to 1440p. But the NVIDIA GeForce GTX 1080 Ti was the only card capable of delivering an average frame rate in excess of 100 fps. In fact, its minimum frame rate was higher than the average frame rate of all the other cards in the comparison!

At this resolution, the GeForce GTX 1080 Ti was 48% faster than the Radeon RX Vega 64, 65% faster than the Radeon RX Vega 56, and 71% faster than the GeForce GTX 1070.

 

The Witcher 3 (3840 x 2160)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, that FRAPS recorded in The Witcher 3.

The 4K resolution in The Witcher 3 is really tough on graphics cards, virtually halving their frame rates. The NVIDIA GeForce GTX 1080 Ti remained strong though. It was the only card to deliver an average frame rate in excess of 60 fps at this resolution.

Astoundingly, the GeForce GTX 1080 Ti was 53% faster than the Radeon RX Vega 64, 72% faster than the Radeon RX Vega 56, and 76% faster than the GeForce GTX 1070 at this resolution.


 

For Honor (1920 x 1080)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, recorded by For Honor‘s internal DirectX 12 benchmark.

Even at 1080p, the NVIDIA GeForce GTX 1080 Ti delivered eye-popping results, with an average frame rate of 170 fps. Its minimum frame rate was actually higher than the average frame rate of the other graphics cards in this test!

At this resolution, the GeForce GTX 1080 Ti was 39% faster than the Radeon RX Vega 64, and 58% faster than both the Radeon RX Vega 56 and the GeForce GTX 1070.

 

For Honor (2560 x 1440)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, recorded by For Honor‘s internal DirectX 12 benchmark.

At this resolution, the NVIDIA GeForce GTX 1080 Ti was the only card to achieve an average frame rate in excess of 100 fps. It was 43% faster than the Radeon RX Vega 64, 63% faster than the Radeon RX Vega 56, and 68% faster than the GeForce GTX 1070.

 

For Honor (3840 x 2160)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, recorded by For Honor‘s internal DirectX 12 benchmark.

The 4K resolution in For Honor is a real frame rate killer. Even the GeForce GTX 1080 Ti was not able to deliver an average frame rate of 60 fps, although it came close. In fact, it is the only graphics card you can consider if you want to play For Honor at the 4K resolution.

At this resolution, the GeForce GTX 1080 Ti was 48% faster than the Radeon RX Vega 64, 66% faster than the Radeon RX Vega 56, and 74% faster than the GeForce GTX 1070.


 


Mass Effect: Andromeda (1920 x 1080)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, that FRAPS recorded in Mass Effect: Andromeda.

At this entry-level gaming resolution, all six cards did well, delivering average frame rates far in excess of 60 fps. Notably, the NVIDIA GeForce GTX 1080 Ti, GeForce GTX 1070, Radeon RX Vega 64, and Radeon RX Vega 56 are so fast, their frame rates never dropped below 60 fps.

Thanks to Asynchronous Compute and the CPU limit at this resolution, the AMD Radeon RX Vega 64 was 7% faster than the GeForce GTX 1080 Ti. Even the much cheaper Radeon RX Vega 56 was just 4% slower than the GTX 1080 Ti.

 

Mass Effect: Andromeda (2560 x 1440)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, that FRAPS recorded in Mass Effect: Andromeda.

With the jump in resolution, every graphics card took a massive hit in frame rates. But the NVIDIA GeForce GTX 1080 Ti registered only a slight dip in frame rate, which allowed it to overtake the Radeon RX Vega 64.

At this resolution, it was 26% faster than the Radeon RX Vega 64, 35% faster than the Radeon RX Vega 56, and 44% faster than the GeForce GTX 1070.

 

Mass Effect: Andromeda (3840 x 2160)

This chart shows you the minimum and maximum frame rates, as well as the average frame rate, that FRAPS recorded in Mass Effect: Andromeda.

When the resolution was increased to 4K, even the NVIDIA GeForce GTX 1080 Ti took a large hit in its frame rate. Even so, it managed to deliver an average frame rate just shy of 60 fps.

At this extremely high resolution, the GeForce GTX 1080 Ti was 48% faster than the Radeon RX Vega 64, 63% faster than the Radeon RX Vega 56, and 72% faster than the GeForce GTX 1070.


 

Our Verdict & Award

The NVIDIA GeForce GTX 1080 Ti is the ultimate desktop gaming graphics card you can buy today.

Built on the NVIDIA GP102 GPU, it is actually a faster variant of the NVIDIA TITAN X. Both the TITAN X and the newer TITAN Xp are US$ 1,200 cards designed for deep learning and machine learning. The GeForce GTX 1080 Ti, on the other hand, is meant for gaming, and it has no real competition.

As our benchmark results show, the NVIDIA GeForce GTX 1080 Ti is in a class of its own when it comes to 4K gaming. At that resolution, it was, on average, 38% faster than the Radeon RX Vega 64, 55% faster than the Radeon RX Vega 56, and 63% faster than the GeForce GTX 1070.

In most games, it will deliver average frame rates in excess of 60 fps, even at the 4K resolution of 3840 x 2160. It does so well at 4K gaming that it would be a real shame if you don’t pair it with a 4K Ultra HD monitor.

Thanks to the NVIDIA Pascal architecture and the 16 nm FinFET fabrication technology, the GeForce GTX 1080 Ti was not just much faster than its predecessor (the GeForce GTX 980 Ti), it actually ran cooler and quieter. Surprisingly, it also used less power and produced less heat than the AMD Radeon RX Vega 64, which was fabricated on the smaller 14 nm FinFET process.

What is probably most amazing though is the value proposition. The NVIDIA GeForce GTX 1080 Ti is actually faster than the NVIDIA TITAN X, but costs just over half as much. Of course, this is part of NVIDIA’s price rationalisation, designed to make their cards more competitive against the AMD Vega onslaught.

For its unchallenged performance lead and greatly improved value proposition, we think the NVIDIA GeForce GTX 1080 Ti deserves our Editor’s Choice Award. If 4K gaming is what you are aiming for, you can’t go wrong with this card! Great job, NVIDIA!


 


AMD Retires CrossFire & Limits mGPU Capability

When AMD announced the ability to run two Radeon RX Vega cards simultaneously, they conspicuously called it mGPU (short for multiple GPU) instead of the far more familiar CrossFire. That’s because they are retiring the CrossFire brand in favour of the generic mGPU moniker. They also limited the mGPU capability. Find out why!

 

End of the road for AMD CrossFire

The first AMD Polaris-based graphics card, the AMD Radeon RX 480, was showcased at Computex 2016, with Raja Koduri showing off its CrossFire performance in Ashes of the Singularity. But when AMD released the Radeon RX Vega family, they did not mention any CrossFire support.

In fact, the AMD Radeon RX Vega graphics cards were only capable of running as single cards until the release of Radeon Software 17.9.2. That release also marked the end of the road for AMD CrossFire, with AMD officially abandoning it for mGPU.

Why? Here is AMD’s response when they were asked that very question by Brad Chacos of PCWorld :

CrossFire isn’t mentioned because it technically refers to DX11 applications.

In DirectX 12, we reference multi-GPU as applications must support mGPU, whereas AMD has to create the profiles for DX11.

We’ve accordingly moved away from using the CrossFire tag for multi-GPU gaming.

This is a surprising turn of events, because the CrossFire brand goes all the way back to 2005. Almost 12 years to the day, as a matter of fact. That’s a lot of marketing history for AMD to throw away. But throw it all away, they did.

Nothing has changed though. They just decided to call the ability to use multiple graphics cards mGPU, instead of CrossFire. In other words, this is a branding decision.

AMD will continue to use CrossFire for current and future DirectX 11 profiles, but refer to mGPU for DirectX 12 titles.
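The practical difference shows up in code. Under DirectX 12’s explicit multi-adapter model, the game itself enumerates every GPU and decides how to use them, instead of relying on a driver CrossFire profile. Here is a minimal sketch of that first step, using the standard DXGI enumeration calls :

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Minimal sketch : enumerate every adapter the application could drive.
// Under explicit multi-adapter, what to do with a second GPU is entirely
// up to the engine, not to a driver profile.
std::vector<ComPtr<IDXGIAdapter1>> EnumerateAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<IDXGIAdapter1>> adapters;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (!(desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)) // skip WARP / software adapters
            adapters.push_back(adapter);
    }
    return adapters;
}
```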


 

Limited mGPU Capability

AMD is also limiting mGPU support to just two graphics cards. The 4-way mGPU capability that top-of-the-line Radeon cards used to support has been dropped. The AMD Radeon RX Vega family is therefore limited to two cards in mGPU mode :

Gamers can pair two Radeon RX Vega 56 GPUs or two Radeon RX Vega 64 GPUs

This move was not surprising. Even NVIDIA abandoned three- and four-card configurations with the GeForce GTX 10 series last year. With fewer games supporting multiple GPUs, and interest in power efficiency growing, the days of 3-way and 4-way multi-GPU setups are over.


 


NVIDIA Announces GameWorks DX12 For Game Developers

March 1, 2017 — NVIDIA today announced GameWorks DX12, a collection of resources for game developers that will increase realism and shorten product cycles in titles designed using DirectX 12, Microsoft’s API that unifies graphics and simulation.

These resources include updates to the NVIDIA GameWorks SDK for creating interactive cinematic experiences on PC games; updates to the NVIDIA VRWorks SDK for creating immersive virtual reality experiences; new developer tools; and a new Game Ready Driver.

Together, they provide developers with substantial performance gains, multiple new rendering and simulation effects, and other capabilities to help create games optimised for DirectX 12.

“We have invested over 500 engineering-years of work to deliver the most comprehensive platform for developing DirectX 12 games, including the world’s most advanced physics simulation engine,” said Tony Tamasi, senior vice president of content and technology at NVIDIA. “These resources will ensure that GeForce gamers can enjoy the very best game experience on DirectX 12 titles, just as they have on DirectX 11 games.”

“NVIDIA’s commitment to DirectX 12 is clear,” said Cam McRae, technical director at the Coalition, developers of Gears of War 4. “Having them onsite during the development of Gears of War 4 was immensely beneficial, and helped us to deliver a game that is fast, beautiful and stable.”

“NVIDIA creates stunning special effects that run in real time on a PC and provides them to game developers,” said Hajime Tabata, division executive of Square Enix. “A lot of the visual magic you see in video games today is a direct result of NVIDIA’s work behind the scenes. They are providing an invaluable combination of source code, tools, technology and the engineering effort it takes to help developers implement them. The advancement that we are trying to create through this collaboration is not simply about an evolution in visual appearance, but also to use new technology to create new user experiences.”

 

GameWorks Physics Simulation Comes to DX12

The latest version of GameWorks builds on the over 2 million lines of documented code that are available to developers, providing them with a huge range of rendering and simulation effects. GameWorks technologies are currently used in more than 1,000 titles.

DirectX 12 introduced asynchronous compute, which unified graphics and simulation by allowing GPUs to run non-graphics workloads for effects such as post-processing, lighting and physics. But these effects are currently limited because most games can only allocate a few milliseconds to run these types of non-graphical simulations while still delivering smooth gameplay.
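In Direct3D 12 terms, asynchronous compute starts with a second command queue : graphics work goes to a direct queue, while compute work goes to a compute queue the GPU can schedule alongside it. A minimal sketch, assuming `device` is an existing ID3D12Device :

```cpp
// Minimal sketch : one graphics (direct) queue and one async compute queue.
// Work submitted to `computeQueue` may run concurrently with rendering.
D3D12_COMMAND_QUEUE_DESC directDesc = {};
directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

ID3D12CommandQueue *graphicsQueue = nullptr;
ID3D12CommandQueue *computeQueue  = nullptr;
device->CreateCommandQueue(&directDesc,  IID_PPV_ARGS(&graphicsQueue));
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
```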

To maximize the efficiency of asynchronous compute for gaming effects, NVIDIA introduced the world’s most advanced real-time physics simulation engine to DX12, with two technologies that take advantage of asynchronous compute:

  •  NVIDIA Flow 1.0 – a visual effects library that provides simulation and volume rendering of dynamic, combustible fluid, fire and smoke. Supports both DirectX 12 and 11.
  •  NVIDIA FleX 1.1 – a unified particle-based simulation technique for real-time visual effects. Supports DirectX 12 compute.

FleX and Flow are available immediately for free to registered developers.

GameWorks updates also include NVIDIA HairWorks 1.3, a library that enables developers to simulate and render realistic fur and hair for their games. Version 1.3 supports DirectX 12 and is also available immediately.

 

VRWorks Comes to DirectX 12

VRWorks includes APIs, libraries and features that enable headset and application developers to achieve a new level of immersion in VR. It has been updated to support DirectX 12 with better performance, lower latency and plug-and-play compatibility. It will be supported in the Unity 2017.1 beta, which ships this spring, and the Unreal Engine 4 game engines — thus covering a majority of game development platforms.

 

World’s Most Advanced DirectX 12 Developer Tools


NVIDIA also introduced several developer resources created to improve DirectX 12 game development, including:

  •  NVIDIA Aftermath 1.0 – a diagnostic utility that developers can use for analyzing DirectX 12 error reports.
  •  Nsight Visual Studio Edition 5.3 – a tool that lets developers debug and profile VR and DirectX 12 applications in real time. Includes support for the Oculus, OpenVR (HTC Vive) and DirectX 12 APIs.
  •  PIX Plug-in – PIX is a DirectX 12 debugging tool developed by Microsoft. NVIDIA collaborated with the Microsoft PIX team to expose NVIDIA GPU Performance Counters to PIX for Windows via a PIX Plug-in.

 

Game Ready Driver Optimised for DX12

NVIDIA also revealed an upcoming Game Ready Driver optimised for DirectX 12 games. The company refined code in the driver and worked side by side with game developers to deliver performance increases of up to 16 percent on average across a variety of DirectX 12 games, such as Ashes of the Singularity, Gears of War 4, Hitman, Rise of the Tomb Raider and Tom Clancy’s The Division.

Since the first launch of its Pascal architecture — the world’s most advanced DX12 GPU family, including the performance-leading GeForce GTX 1080 Ti and GTX 1080 GPUs — NVIDIA has continuously improved DX12 game performance through releases of Game Ready drivers. The drivers are timed with the release of the latest games by leading partners.

 


Deus Ex Mankind Divided DirectX 12 Patch Released

Eidos-Montreal and Square Enix have announced their official Deus Ex Mankind Divided DirectX 12 patch, scaling performance to a whole new level with DirectX 12 multi-GPU and frame pacing support on Radeon graphics.

This Deus Ex Mankind Divided DirectX 12 patch enables advanced DirectX 12 features and enhanced performance with support for :


Asynchronous shaders

  • Asynchronous Compute Engines in AMD’s GCN and Polaris architectures can submit commands without waiting for other tasks to complete.
  • Delivers vastly improved GPU efficiency, boosting graphics processing performance, reducing latency and enabling consistent frame rates.

Enhanced multi-GPU experience

  • AMD’s GCN and Polaris architectures lead with greater performance and scalability in multi-GPU DirectX 12.
  • Gamers will benefit from up to 100 percent faster performance using Radeon Software Crimson Edition driver 16.10.2 running Deus Ex: Mankind Divided on an 8GB Radeon RX 480 multi-GPU setup in DirectX 12.
  • The 4GB Radeon RX 470 also offers up to 100 percent faster multi-GPU performance in DirectX 12 in Deus Ex than a single 4GB Radeon RX 470.

DirectX 12 Frame Pacing

  • AMD’s support of multi-GPU DirectX 12 Frame Pacing enables smooth gameplay on Radeon Graphics by evenly distributing frame times and providing them at a more consistent rate, delivering an exceptional gaming experience.

 

Deus Ex Mankind Divided

Deus Ex Mankind Divided is the sequel to the critically acclaimed Deus Ex Human Revolution and builds on the franchise’s trademark choice and consequence, plus action-RPG based gameplay, to create both a memorable and highly immersive experience.

Players will once again take on the role of Adam Jensen, now an experienced covert agent, and will gain access to his new arsenal of customizable state-of-the-art weapons and augmentations. With time working against him, Adam must choose the right approach, along with who to trust, in order to unravel a vast worldwide conspiracy.

 


AMD Doubles Down On mGPU Frame Pacing

Adding to Radeon Software Crimson Edition’s frame pacing enhancements for DirectX 9, DirectX 10 and DirectX 11, Radeon Software 16.9.1 enables multi-GPU frame pacing support for DirectX 12 on all GCN-based GPUs, and on AMD A8 and higher APUs with GCN graphics.

Frame pacing delivers consistency by increasing smoothness in gameplay. In multi-GPU (mGPU) configurations, the GPUs render alternating frames and push each frame to your screen. Each frame can take a different amount of time to render, causing variations in frame time. With frame pacing enabled, frames are distributed evenly, i.e. with less variance between frames, creating liquid smooth gameplay. For more details, please watch the following video:

 

Radeon Tech Talk: DirectX 12 mGPU Frame Pacing

A number of games currently take advantage of frame pacing in DirectX 12, with Total War : Warhammer, Rise of the Tomb Raider™ and the 3DMark Time Spy benchmark all showing smoother run-throughs.
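Frame pacing is also easy to verify for yourself : capture per-frame times and compare their variance, not just the average frame rate. A rough sketch of that measurement (our own, engine-agnostic), called once per presented frame :

```cpp
#include <chrono>
#include <cmath>
#include <vector>

// Rough sketch : collect frame times around Present(), then compare the
// standard deviation with pacing on vs off. Lower deviation = smoother.
std::vector<double> frameMs;

void OnFramePresented()
{
    using clock = std::chrono::steady_clock;
    static clock::time_point last = clock::now();
    const clock::time_point now = clock::now();
    frameMs.push_back(std::chrono::duration<double, std::milli>(now - last).count());
    last = now;
}

double FrameTimeStdDev()
{
    if (frameMs.empty()) return 0.0;
    double mean = 0.0;
    for (double ms : frameMs) mean += ms;
    mean /= frameMs.size();
    double var = 0.0;
    for (double ms : frameMs) var += (ms - mean) * (ms - mean);
    return std::sqrt(var / frameMs.size());
}
```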


Let’s look at some real-life scenarios:

 


More DirectX 12 Games Tuned For Radeon Graphics Cards

SUNNYVALE, California — March 23, 2016 — AMD today once again took the pole position in the DirectX 12 era with an impressive roster of state-of-the-art DirectX 12 games and engines, each extensively tuned for Radeon graphics cards powered by the Graphics Core Next architecture.

“DirectX 12 is poised to transform the world of PC gaming, and Radeon GPUs are central to the experience of developing and enjoying great content,” said Roy Taylor, Corporate Vice President, Content and Alliances, AMD. “With a definitive range of industry partnerships for exhilarating content, plus an indisputable record of winning framerates, Radeon GPUs are an end-to-end solution for consumers who deserve the latest and greatest in DirectX 12 gaming.”

“DirectX 12 is a game-changing low overhead API for both developers and gamers,” said Bryan Langley, Principal Program Manager, Microsoft. “AMD is a key partner for Microsoft in driving adoption of DirectX 12 throughout the industry, and has established the GCN Architecture as a powerful force for gamers who want to get the most out of DirectX 12.”

 

Tuned For Radeon Graphics

  • Ashes of the Singularity by Stardock and Oxide Games
  • Total War: WARHAMMER by Creative Assembly
  • Battlezone VR by Rebellion
  • Deus Ex: Mankind Divided by Eidos-Montréal
  • Nitrous Engine by Oxide Games

Total War: WARHAMMER
A fantasy strategy game of legendary proportions, Total War: WARHAMMER combines an addictive turn-based campaign of epic empire-building with explosive, colossal, real-time battles, all set in the vivid and incredible world of Warhammer Fantasy Battles.

Sprawling battles with high unit counts are a perfect use case for the uniquely powerful GPU multi-threading capabilities offered by Radeon graphics and DirectX 12. Additional support for DirectX 12 asynchronous compute will also encourage lightning-fast AI decision making and low-latency panning of the battle map.

Battlezone VR
Designed for the next wave of virtual reality devices, Battlezone VR gives you unrivalled battlefield awareness, a monumental sense of scale and breathless combat intensity. Your instincts and senses respond to every threat on the battlefield as enemy swarms loom over you and super-heated projectiles whistle past your ears.

Rolling into battle, AMD and Rebellion are collaborating to ensure Radeon GPU owners will be particularly advantaged by low-latency DirectX 12 rendering that’s crucial to a deeply gratifying VR experience.

Ashes of the Singularity
AMD is once again collaborating with Stardock, in association with Oxide, to bring gamers Ashes of the Singularity. This real-time strategy game, set in the far future, redefines the possibilities of RTS gaming with the unbelievable scale provided by Oxide Games’ groundbreaking Nitrous engine. The fruits of this collaboration have resulted in Ashes of the Singularity being the first game to release with DirectX 12 benchmarking capabilities.

Deus Ex: Mankind Divided
Deus Ex: Mankind Divided, the sequel to the critically acclaimed Deus Ex: Human Revolution, builds on the franchise’s trademark choice-and-consequence, action-RPG based gameplay, to create both a memorable and highly immersive experience. AMD and Eidos-Montréal have engaged in a long-term technical collaboration to build and optimise DirectX 12 in their engine, including special support for GPUOpen features like PureHair based on TressFX Hair, and Radeon-exclusive features like asynchronous compute.


 

Nitrous Engine

Radeon graphics customers the world over have benefitted from unmatched DirectX 12 performance and rendering technologies delivered in Ashes of the Singularity via the natively DirectX 12 Nitrous Engine. Most recently, Benchmark 2.0 was released with comprehensive support for DirectX 12 asynchronous compute, demonstrating unquestionably dominant performance from Radeon graphics.

With massive interplanetary warfare at our backs, Stardock, Oxide and AMD announced that the Nitrous Engine will continue to serve a roster of franchises in the years ahead. Starting with Star Control and a second unannounced space strategy title, Stardock, Oxide and AMD will continue to explore the outer limits of what can be done with highly-programmable GPUs.

 

Premiere Rendering Efficiency with DirectX 12 Asynchronous Compute

Important PC gaming effects like shadowing, lighting, artificial intelligence, physics and lens effects often require multiple stages of computation before determining what is rendered onto the screen by a GPU’s graphics hardware.

In the past, these steps had to happen sequentially. Step by step, the graphics card would follow the API’s process of rendering something from start to finish, and any delay in an early stage would send a ripple of delays through future stages. These delays in the pipeline are called “bubbles,” and they represent a brief moment in time when some hardware in the GPU is paused to wait for instructions.

What sets Radeon GPUs apart from their competitors, however, is the Graphics Core Next architecture’s ability to pull in useful compute work from the game engine to fill these bubbles. For example: if there’s a rendering bubble while rendering complex lighting, Radeon GPUs can fill in the blank by computing the behavior of AI instead. Radeon graphics cards don’t need to follow the step-by-step process of the past or of their competitors, and can do this work concurrently to keep things moving.

Filling these bubbles improves GPU utilization, efficiency and performance, and reduces input latency, by minimizing or eliminating the ripple of delays that could stall other graphics cards. Only Radeon graphics currently support this crucial capability in DirectX 12 and VR.
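Concretely, “filling a bubble” boils down to queue synchronisation : kick the compute work off early, keep the graphics queue busy with independent rendering, and only make it wait at the point where the compute results are consumed. A hedged Direct3D 12 sketch; the device, the two queues and the command lists are assumed to already exist :

```cpp
// Assumed to exist : device, graphicsQueue, computeQueue,
// computeList (e.g. AI/physics work) and graphicsList (independent rendering).
ID3D12Fence *fence = nullptr;
device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

// 1. Kick off the compute work asynchronously on the compute queue.
ID3D12CommandList *compute[] = { computeList };
computeQueue->ExecuteCommandLists(1, compute);
computeQueue->Signal(fence, 1);

// 2. The graphics queue stays busy with work that does not depend on
//    the compute results - this is the "bubble" being filled.
ID3D12CommandList *graphics[] = { graphicsList };
graphicsQueue->ExecuteCommandLists(1, graphics);

// 3. Only now does the graphics queue wait, right before consuming
//    the compute results in a later pass.
graphicsQueue->Wait(fence, 1);
```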

 

An Undeniable Trend

With five new DirectX 12 game and engine partnerships, unmatched DirectX 12 performance in every test thus far, plus exclusive support for the radically powerful DirectX 12 asynchronous compute functionality, Radeon graphics and the GCN architecture have rapidly ascended to their position as the definitive DirectX 12 content creation and consumption platform.

This unquestionable leadership in the era of low-overhead APIs emerges from a calculated and virtuous cycle of distributing the GCN architecture throughout the development industry, then partnering with top game developers to design, deploy and master Mantle’s programming model. Through the years that followed, open and transparent contribution of source code, documentation and API specifications ensured that AMD philosophies remained influential in landmark projects like DirectX 12.

 
