Tag Archives: Graphics card

COLORFUL GeForce GTX 1080 Released


13 May 2016 Taipei, Taiwan – Colorful Technology Company Limited proudly debuts the world’s first GeForce GTX 1080 graphics card. Announced May 6th, the new COLORFUL GeForce GTX 1080 will feature NVIDIA’s latest GPU architecture codenamed Pascal boasting a 16nm FinFET fabrication process.

The COLORFUL GeForce GTX 1080 is based on the GP104 GPU armed with 2560 CUDA cores, and will have a base clock of 1607 MHz with a boost clock of up to 1733 MHz. Complementing it will be 8 GB of GDDR5X video memory running at an effective clock rate of 10 GHz on a 256-bit bus.
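As a quick sanity check, the quoted memory specifications imply the card's peak memory bandwidth. The arithmetic below is our own, not from the press release:

```python
# Peak memory bandwidth implied by the GTX 1080's quoted specifications.
effective_rate_gbps = 10   # GDDR5X effective data rate, in Gb/s per pin
bus_width_bits = 256       # memory interface width

# Each of the 256 bus lines transfers 10 Gb/s; divide by 8 to get bytes.
bandwidth_gbs = effective_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs)  # → 320.0 (GB/s)
```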


The COLORFUL GeForce GTX 1080 is designed to deliver up to 3x the performance of previous-generation graphics cards, and its breakthrough gaming innovations open up new possibilities for gamers, particularly in VR experiences.

Colorful looks forward to bringing the COLORFUL GeForce GTX 1080 to market on May 27th, its first day of availability.

Custom-designed COLORFUL GTX 1080 cards will be presented at COMPUTEX 2016 this June.


 

COLORFUL GTX 1080 Specifications

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

AGP Always Compensate – BIOS Optimization Guide

AGP Always Compensate

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature determines if the AGP controller should be allowed to dynamically adjust the AGP driving strength or use preset drive strength values.

By default, the auto-compensation circuitry adjusts the AGP drive strength automatically, either once at boot-up or at regular intervals. The circuitry can also be disabled or bypassed in favour of a user-defined setting. However, this BIOS feature does not allow manual configuration; it only selects between the two automatic modes.

When you enable AGP Always Compensate, the auto-compensation circuitry will automatically adjust the AGP drive strength at regular intervals.

If you disable it, the circuitry will only adjust the drive strength once at boot-up. The drive strength values derived at boot-up will remain until the system is rebooted.

It is recommended that you enable AGP Always Compensate so that the AGP controller can dynamically adjust the AGP driving strength at regular intervals.

 

Details

This feature is somewhat similar to the AGP Drive Strength feature. It determines if the AGP controller should be allowed to dynamically adjust the AGP driving strength or use preset drive strength values.

Due to the tighter tolerances of the AGP 8X and AGP 4X bus, the AGP controller features auto-compensation circuitry that compensates for the motherboard’s impedance on the AGP bus. It does this by dynamically adjusting the drive strength of the I/O pads over a range of temperatures and voltages.

The auto-compensation circuitry has two operating modes. It can automatically compensate for the impedance either once at boot-up or at regular intervals, by dynamically adjusting the AGP drive strength. Alternatively, the circuitry can be disabled or bypassed, in which case it is up to the user (through the BIOS) to write the desired drive strength value to the AGP I/O pads.


This is where AGP Always Compensate differs from the AGP Drive Strength feature. While AGP Drive Strength allows you to switch to manual configuration by the user, AGP Always Compensate does not. It only allows you to change the auto-compensation mode.

When you enable AGP Always Compensate, the auto-compensation circuitry will dynamically compensate for changes in the impedance at regular intervals.

If you disable it, the circuitry will only compensate for the impedance once at boot-up. The drive strength values derived at boot-up will remain until the system is rebooted.

It is recommended that you enable AGP Always Compensate so that the AGP controller can initiate dynamic compensation at regular intervals. This will allow it to compensate for any changes in the impedance.
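The difference between the two modes can be sketched in a few lines. This is purely an illustrative software model, not how the BIOS or controller is implemented; the real calibration happens in the AGP controller's hardware, and `measure_drive_strength` is a hypothetical stand-in for its sensing circuitry:

```python
# Toy model of the two auto-compensation modes described above.

def compensate(measure_drive_strength, ticks, always_compensate):
    """Return the drive strength used at each interval."""
    strength = measure_drive_strength(0)   # always calibrated once at boot-up
    history = [strength]
    for t in range(1, ticks):
        if always_compensate:              # Enabled: recalibrate each interval
            strength = measure_drive_strength(t)
        history.append(strength)           # Disabled: boot-up value persists
    return history

# Pretend the board's electrical characteristics drift as it warms up.
drift = lambda t: 8 + t  # hypothetical measurement

print(compensate(drift, 4, always_compensate=True))   # → [8, 9, 10, 11]
print(compensate(drift, 4, always_compensate=False))  # → [8, 8, 8, 8]
```

With the feature enabled, the drive strength tracks the drifting conditions; disabled, the boot-up value is used until the next reboot.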


 


NVIDIA GeForce GTX 1080 Launched

AUSTIN, TX — May 6, 2016 — NVIDIA today announced the NVIDIA GeForce GTX 1080, the first gaming GPU based on the company’s new Pascal architecture — providing up to 2x more performance in virtual reality compared to the GeForce GTX TITAN X.

Pascal offers massive leaps in performance, memory bandwidth and power efficiency over its predecessor, the high-performance Maxwell™ architecture. And it introduces groundbreaking graphics features and technologies that redefine the PC as the ultimate platform for playing AAA games and enjoying virtual reality.

“The PC is the world’s favorite gaming platform, and our new Pascal GPU architecture will take it to new heights,” said Jeff Fisher, senior vice president of NVIDIA’s PC business. “Our first Pascal gaming GPU, the GeForce GTX 1080, enables incredible realism in gaming and deeply immersive VR experiences, with dramatically improved performance and efficiency. It’s the most powerful gaming GPU ever built, and some of our finest work.”

 

NVIDIA GeForce GTX 1080

Five Marvels of Pascal

NVIDIA engineered the Pascal architecture to handle the massive computing demands of technologies like VR. It incorporates five transformational technologies:

  • Next-Gen GPU Architecture – Pascal is optimized for performance per watt. The GTX 1080 is 3x more power efficient than the Maxwell architecture.
  • 16nm FinFET Process – The GTX 1080 is the first gaming GPU designed for the 16nm FinFET process, which uses smaller, faster transistors that can be packed together more densely. Its 7.2 billion transistors deliver a dramatic increase in performance and efficiency.
  • Advanced Memory – Pascal-based GPUs are the first to harness the power of 8 GB of Micron’s GDDR5X memory. The 256-bit memory interface runs at 10 Gb/sec, helping to deliver 1.7x higher effective memory bandwidth than that of regular GDDR5.
  • Superb Craftsmanship – Increases in bandwidth and power efficiency allow the GTX 1080 to run at clock speeds never before possible — over 1700 MHz — while consuming only 180 watts of power. New asynchronous compute advances improve efficiency and gaming performance, and new GPU Boost™ 3 technology supports advanced overclocking functionality.
  • Groundbreaking Gaming Technology – NVIDIA is changing the face of gaming, from development to play to sharing. New NVIDIA VRWorks software features let game developers bring unprecedented immersiveness to gaming environments. NVIDIA’s Ansel™ technology lets gamers share their gaming experiences and explore gaming worlds in new ways.

“We were blown away by the performance and features of the GTX 1080,” said Tim Sweeney, founder of Epic Games. “We took scenes from our Paragon game cinematics that were designed to be rendered offline, and rendered them in real time on GTX 1080. It’s mind-blowing and we can’t wait to see what developers create with UE4 and GTX 1080 in the world of games, automotive design, or architectural visualization — for both 2D screens and for VR.”


 

VRWorks: A New Level of Presence for VR

To fully immerse users in virtual worlds, the enhanced NVIDIA VRWorks software development kit offers a never-before-experienced level of “VR presence.” It combines what users see, hear and touch with the physical behavior of the environment to convince them that their virtual experience is real.

  • 2x VR Graphics Performance – VRWorks Graphics now includes a simultaneous multi-projection capability that renders natively to the unique dimensions of VR displays instead of traditional 2D monitors. It also renders geometry for the left and right eyes simultaneously in a single pass.
  • Enveloping Audio – VRWorks Audio uses the NVIDIA OptiX ray-tracing engine to trace the path of sounds across an environment in real time, fully reflecting the size, shape and material of the virtual world.
  • Interactive Touch and Physics – NVIDIA PhysX for VR detects when a hand controller interacts with a virtual object, and enables the game engine to provide a physically accurate visual and haptic response. It also models the physical behavior of the virtual world around the user so that all interactions — whether an explosion or a hand splashing through water — behave as if in the real world.

NVIDIA has integrated these technologies into a new VR experience called VR Funhouse.

“GeForce GTX 1080 promises to be the ultimate graphics card for experiencing EVE: Valkyrie,” said Hilmar Veigar Pétursson, CEO of CCP Games. “We are looking forward to bringing NVIDIA’s new VRWorks features to Valkyrie to take the game’s visuals and performance to another level.”

Ansel: Capturing the Artistry of Gaming
NVIDIA also announced Ansel, a powerful game capture tool that allows gamers to explore, capture and share the artistry of gaming in ways never before possible.

With Ansel, gamers can compose the gameplay shots they want, pointing the camera in any direction and from any vantage point within a gaming world. They can capture screenshots at up to 32x screen resolution, and then zoom in where they choose without losing fidelity. With photo-filters, they can add effects in real time before taking the perfect shot. And they can capture 360-degree stereo photospheres for viewing in a VR headset or Google Cardboard.


 

GeForce GTX 1080 Availability and Pricing

The NVIDIA GeForce GTX 1080 “Founders Edition” will be available on May 27 for $699. It will be available from ASUS, Colorful, EVGA, Gainward, Galaxy, Gigabyte, Innovision 3D, MSI, NVIDIA, Palit, PNY and Zotac. Custom boards from partners will vary by region, and pricing is expected to start at $599.

The GeForce GTX 1080 will also be sold in fully configured systems from leading U.S.-based system builders, including AVADirect, Cyberpower, Digital Storm, Falcon Northwest, Geekbox, IBUYPOWER, Maingear, Origin PC, Puget Systems, V3 Gaming and Velocity Micro, as well as system integrators outside North America.

The NVIDIA GeForce GTX 1070 “Founders Edition” will be available on June 10 for $449. Custom boards from partners are expected to start at $379.

Ansel will be available in upcoming releases and patches of games such as Tom Clancy’s The Division, The Witness, Lawbreakers, The Witcher 3, Paragon, No Man’s Sky, Obduction, Fortnite and Unreal Tournament.

 


More DirectX 12 Games Tuned For Radeon Graphics Cards

SUNNYVALE, California — March 23, 2016 — AMD today once again took the pole position in the DirectX 12 era with an impressive roster of state-of-the-art DirectX 12 games and engines, each extensively tuned for Radeon graphics cards powered by the Graphics Core Next architecture.

“DirectX 12 is poised to transform the world of PC gaming, and Radeon GPUs are central to the experience of developing and enjoying great content,” said Roy Taylor, Corporate Vice President, Content and Alliances, AMD. “With a definitive range of industry partnerships for exhilarating content, plus an indisputable record of winning framerates, Radeon GPUs are an end-to-end solution for consumers who deserve the latest and greatest in DirectX 12 gaming.”

“DirectX 12 is a game-changing low overhead API for both developers and gamers,” said Bryan Langley, Principal Program Manager, Microsoft. “AMD is a key partner for Microsoft in driving adoption of DirectX 12 throughout the industry, and has established the GCN Architecture as a powerful force for gamers who want to get the most out of DirectX 12.”

 

Tuned For Radeon Graphics

  • Ashes of the Singularity by Stardock and Oxide Games
  • Total War: WARHAMMER by Creative Assembly
  • Battlezone VR by Rebellion
  • Deus Ex: Mankind Divided by Eidos-Montréal
  • Nitrous Engine by Oxide Games

Total War: WARHAMMER
A fantasy strategy game of legendary proportions, Total War: WARHAMMER combines an addictive turn-based campaign of epic empire-building with explosive, colossal, real-time battles, all set in the vivid and incredible world of Warhammer Fantasy Battles.

Sprawling battles with high unit counts are a perfect use case for the uniquely powerful GPU multi-threading capabilities offered by Radeon graphics and DirectX 12. Additional support for DirectX 12 asynchronous compute will also encourage lightning-fast AI decision making and low-latency panning of the battle map.

Battlezone VR
Designed for the next wave of virtual reality devices, Battlezone VR gives you unrivalled battlefield awareness, a monumental sense of scale and breathless combat intensity. Your instincts and senses respond to every threat on the battlefield as enemy swarms loom over you and super-heated projectiles whistle past your ears.

Rolling into battle, AMD and Rebellion are collaborating to ensure Radeon GPU owners will be particularly advantaged by low-latency DirectX 12 rendering that’s crucial to a deeply gratifying VR experience.

Ashes of the Singularity
AMD is once again collaborating with Stardock, in association with Oxide, to bring gamers Ashes of the Singularity. This real-time strategy game, set in the far future, redefines the possibilities of RTS with the unbelievable scale provided by Oxide Games’ groundbreaking Nitrous engine. This collaboration has resulted in Ashes of the Singularity being the first game to release with DirectX 12 benchmarking capabilities.

Deus Ex: Mankind Divided
Deus Ex: Mankind Divided, the sequel to the critically acclaimed Deus Ex: Human Revolution, builds on the franchise’s trademark choice-and-consequence, action-RPG gameplay to create a memorable and highly immersive experience. AMD and Eidos-Montréal have engaged in a long-term technical collaboration to build and optimize DirectX 12 in their engine, including special support for GPUOpen features like PureHair (based on TressFX Hair) and Radeon-exclusive features like asynchronous compute.


 

Nitrous Engine

Radeon graphics customers the world over have benefitted from unmatched DirectX 12 performance and rendering technologies delivered in Ashes of the Singularity via the natively DirectX 12 Nitrous Engine. Most recently, Benchmark 2.0 was released with comprehensive support for DirectX 12 asynchronous compute, delivering unquestionably dominant performance from Radeon graphics.

With massive interplanetary warfare at our backs, Stardock, Oxide and AMD announced that the Nitrous Engine will continue to serve a roster of franchises in the years ahead. Starting with Star Control and a second unannounced space strategy title, Stardock, Oxide and AMD will continue to explore the outer limits of what can be done with highly-programmable GPUs.

 

Premiere Rendering Efficiency with DirectX 12 Asynchronous Compute

Important PC gaming effects like shadowing, lighting, artificial intelligence, physics and lens effects often require multiple stages of computation before determining what is rendered onto the screen by a GPU’s graphics hardware.

In the past, these steps had to happen sequentially. Step by step, the graphics card would follow the API’s process of rendering something from start to finish, and any delay in an early stage would send a ripple of delays through future stages. These delays in the pipeline are called “bubbles,” and they represent a brief moment in time when some hardware in the GPU is paused to wait for instructions.

What sets Radeon GPUs apart from their competitors, however, is the Graphics Core Next architecture’s ability to pull in useful compute work from the game engine to fill these bubbles. For example, if there is a rendering bubble while rendering complex lighting, Radeon GPUs can fill in the blank by computing the behavior of AI instead. Radeon graphics cards don’t need to follow the step-by-step process of the past or of their competitors, and can do graphics and compute work together — or concurrently — to keep things moving.

Filling these bubbles improves GPU utilization, efficiency and performance, and reduces input latency, by minimizing or eliminating the ripple of delays that could stall other graphics cards. Only Radeon graphics currently support this crucial capability in DirectX 12 and VR.
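The effect of filling pipeline bubbles can be illustrated with a back-of-the-envelope model. This toy calculation is ours, not AMD's, and the millisecond figures are made up:

```python
# Toy frame-time model: graphics work, pipeline bubbles (stalls), and
# compute work (AI, physics). Without async compute everything runs
# sequentially; with it, compute work is scheduled into the bubbles.

def frame_time_ms(graphics, bubbles, compute, async_compute):
    if not async_compute:
        return graphics + bubbles + compute        # strictly sequential
    # Bubbles are filled with compute work first; only the overflow
    # (if any) extends the frame.
    return graphics + bubbles + max(0.0, compute - bubbles)

# Hypothetical frame: 10 ms of graphics, 2 ms of bubbles, 3 ms of compute.
print(frame_time_ms(10.0, 2.0, 3.0, async_compute=False))  # → 15.0
print(frame_time_ms(10.0, 2.0, 3.0, async_compute=True))   # → 13.0
```

In this sketch, the 2 ms that the GPU would otherwise spend idle is reclaimed for compute work, shortening the frame.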

 

An Undeniable Trend

With five new DirectX 12 game and engine partnerships; unmatched DirectX 12 performance in every test thus far; plus, exclusive support for the radically powerful DirectX 12 asynchronous compute functionality, Radeon graphics and the GCN architecture have rapidly ascended to their position as the definitive DirectX 12 content creation and consumption platform.

This unquestionable leadership in the era of low-overhead APIs emerges from a calculated and virtuous cycle of distributing the GCN architecture throughout the development industry, then partnering with top game developers to design, deploy and master Mantle’s programming model. Through the years that followed, open and transparent contribution of source code, documentation and API specifications ensured that AMD philosophies remained influential in landmark projects like DirectX 12.

 


NVIDIA Quadro M6000 24GB Speeds Up Design Workflows

March 23, 2016 — Automotive designers, visual effects artists and geoscientists do amazing work while pushing the limits of technology. They also work on impossibly tight schedules. We’re about to make their lives a whole lot easier.

The massive size and sheer complexity of digital models for a car, Hollywood VFX shot or seismic volume can slow down design workflows. It’s difficult to interactively visualise huge digital models in high fidelity without resorting to offline rendering or scaling them way back. The models are just too big to fit in the average GPU’s memory, and the computational demands are too high to enable smooth interactivity. This creates bottlenecks — for example, multiple meetings are needed to review digital models, make changes and render them offline.

No longer. With the new NVIDIA Quadro M6000 24GB, designers, artists and scientists have double the graphics memory previously available. They can easily work with their largest, most complex datasets, speeding workflows and allowing for interactive collaborative reviews. Ultimately, this enables them to make better decisions faster.

 

Quadro M6000 24GB – More Memory, Faster Reviews

The huge memory boost of the Quadro M6000 24GB, combined with extreme processing power of our Maxwell architecture, lets designers tap into the power of interactive global illumination for their production environments.

Global illumination models how light bounces between surfaces. With this understanding of indirect lighting and shadows, engineers like those at Nissan Motor Company can spot potential design flaws — such as errant light reflections from mirrors or glare from side window glass — much earlier and then easily address them.

“Global illumination is the pinnacle of design,” said Dennis Malone, virtual prototype engineer at Nissan. “With enough graphics memory, we can make better decisions faster, streamlining everything we do and making our design process more cost-effective.”

 

Automakers Aren’t the Only Ones Who Stand to Benefit

The new Quadro M6000 24GB allows artists, animators and editors to work up to six times faster on their most complex simulations and interactive visual effects — even those with multiple layers and large numbers of 3D elements. It can all fit in the onboard graphics memory.

“At Sony Pictures Imageworks, we regularly push the limits of our ability to display and interact with very complex scenes. The Quadro M6000 24GB gives us a 10x performance boost with the throughput necessary to display these types of large scenes smoothly and interactively.” — Erik Strauss, executive director of software development at Sony Pictures Imageworks.

Geophysicists can accelerate their seismic exploration with the Quadro M6000 24GB by examining substantially larger datasets without cutting down the size of the data or reducing fidelity. “With the Quadro M6000 24GB GPU, our customers can visualise more data in real time and process seismic volumes and reservoir models with unprecedented speed.” — Robert Bond, product manager for interpretation at Paradigm

With the addition of the M6000 24GB, the Quadro Visual Computing Platform offers users the most advanced GPU-accelerated rendering capabilities, display technologies and software tools for the creation of the ultimate design workspace and immersive environments.

Features of the platform include:

  • GPU-powered rendering enables artists to visualise creations with photorealistic image quality and predict their designs more accurately with NVIDIA Iray using physically based lights and materials.
  • VR-ready support for head-mounted displays, enabling developers to more efficiently build and users to experience virtual reality creations.
  • Quadro Sync, Mosaic and Warp/Blend technologies, for image synchronisation and resolution scaling of a synchronised display surface with multiple projectors or displays.

 


MSI Back To School Promotion For Malaysian Fans

MSI Malaysia just announced an MSI Back to School promotion, where MSI fans in Malaysia get to go home with free gifts with purchase of selected MSI motherboards and graphics cards. Depending on the MSI motherboard or graphics card you purchase, you get to go home with either the MSI DS 300 gaming mouse (worth RM 299), an MSI ThunderStorm mouse pad (worth RM 158) or an MSI Gaming Mousepad (worth RM 69).

Bundle period: 2016/3/1 to 2016/4/30

HOW TO REDEEM: Please send a message to the MSI Malaysia Fanclub (text your name, contact number and address, attaching a photo of the product serial number and a photo of the invoice).

 

MSI Back To School Promotion Qualifying Products

 


DBI Output for AGP Trans. – BIOS Optimization Guide

DBI Output for AGP Trans.

Common Options : Enabled, Disabled

 

Quick Review

The full name for this BIOS feature is Dynamic Bus Inversion Output for AGP Transmitter. DBI Output for AGP Trans. is an AGP 3.0-specific BIOS feature which will only appear when you install an AGP 3.0-compliant graphics card.

When enabled, the AGP controller is allowed to use the Dynamic Bus Inversion scheme to reduce power consumption and signal noise.

When disabled, the AGP controller will not use the Dynamic Bus Inversion scheme to reduce power consumption and signal noise.

The AGP bus has 32 data lines divided into two sets. Sometimes, a large number of these data lines may switch together to the same polarity (either 1 or 0) and then switch back to the opposite polarity. This mass switching to the same polarity is called simultaneous switching outputs and it creates a lot of unwanted electrical noise at the AGP controller and GPU interfaces.

To avoid this, the AGP 3.0 specifications introduced a scheme called Dynamic Bus Inversion or DBI. It makes use of two new DBI lines – one for each 16-line set. These DBI lines are only supported by AGP 3.0-compliant graphics cards.

Dynamic Bus Inversion ensures that the data lines are limited to a maximum of 8 simultaneous switchings or transitions per 16-line set. It does so by switching the DBI line instead of the data lines when the number of simultaneous transitions would exceed 8, or 50% of the data lines. This ensures that electrical noise due to simultaneous switching outputs is minimized.

In short, DBI improves stability of the AGP interface by reducing signal noises that occur as a result of simultaneous switching outputs. It also reduces the AGP controller’s power consumption.

Therefore, it is recommended that you enable DBI Output for AGP Trans. to save power as well as reduce signal noise from simultaneous switching outputs.

 

Details

The full name for this BIOS feature is Dynamic Bus Inversion Output for AGP Transmitter. DBI Output for AGP Trans. is an AGP 3.0-specific BIOS feature which will only appear when you install an AGP 3.0-compliant graphics card.

The AGP bus has 32 data lines divided into two sets. In each set, there are 16 data lines which individually switch to either a high (1) or low (0) state as they send out data. Sometimes, a large number of these data lines may switch together to the same polarity (either 1 or 0) and then switch back to the opposite polarity.

This mass switching to the same polarity is called simultaneous switching outputs and it creates a lot of unwanted electrical noise at the AGP controller and GPU interfaces. This is only significant if the number of lines simultaneously switching to the same polarity exceeds 50% of the data lines.

To avoid this, the AGP 3.0 specifications introduced a scheme called Dynamic Bus Inversion or DBI. It makes use of two new DBI lines – one for each 16-line set. These DBI lines are only supported by AGP 3.0-compliant graphics cards.

When enabled, it will ensure that the data lines are limited to a maximum of 8 simultaneous switchings or transitions per 16-line set. When the number of simultaneous transitions would exceed 8 (50% of the data lines), the AGP controller switches the polarity of the DBI line instead. The data lines that were supposed to switch en masse to the opposite polarity remain at the same polarity.

When disabled, there will be no restrictions to the number of simultaneous switchings that the data lines can perform.

[adrotate banner=”4″]

At the receiving end, however, the data is reproduced exactly as intended. This is because the DBI line actually serves as a reference signal for the AGP data signals! Although the data signals may have been inverted at the transmitter end, the inverted DBI signal corrects them at the receiving end.

But because only one, instead of 9 or more, data lines switched to the opposite polarity, the amount of electrical noise generated is significantly reduced. In short, DBI improves stability of the AGP interface by reducing signal noises that occur as a result of simultaneous switching outputs. It also reduces the AGP controller’s power consumption.
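The inversion rule described above can be sketched in a few lines of code. This is a simplified software model of DBI for one 16-line set; the real scheme operates in the AGP controller's hardware, and this sketch glosses over details such as the electrical behaviour of the DBI line itself:

```python
WIDTH = 16                  # one 16-line set of the AGP data bus
MASK = (1 << WIDTH) - 1

def transitions(a, b):
    """Number of data lines that switch polarity between two bus states."""
    return bin(a ^ b).count("1")

def dbi_encode(words):
    """Encode 16-bit words so no more than 8 data lines switch at once."""
    prev, encoded = 0, []
    for w in words:
        if transitions(prev, w) > WIDTH // 2:
            phys, dbi = w ^ MASK, 1   # invert the data; flag it on the DBI line
        else:
            phys, dbi = w, 0          # send as-is
        encoded.append((phys, dbi))
        prev = phys
    return encoded

def dbi_decode(encoded):
    """The receiver un-inverts any word whose DBI flag is set."""
    return [phys ^ (MASK if dbi else 0) for phys, dbi in encoded]

words = [0x0000, 0xFFFF, 0x00FF, 0xAAAA]
assert dbi_decode(dbi_encode(words)) == words  # reproduced exactly
```

Because each word is compared against the previous bus state, inverting whenever more than half the lines would flip guarantees at most 8 simultaneous data-line transitions per set, at the cost of one extra (DBI) line.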

Therefore, it is recommended that you enable DBI Output for AGP Trans. to save power as well as reduce signal noise from simultaneous switching outputs.


 


AMD FirePro S7150 Hardware-Virtualised GPUs Launched

AMD today revealed the world’s first hardware virtualized GPU products – AMD FirePro S-Series GPUs with Multiuser GPU (MxGPU) technology. AMD’s ground-breaking hardware-virtualized GPU architecture delivers an innovative solution in response to emerging user experiences such as remote workstation, cloud gaming, cloud computing, and Virtual Desktop Infrastructure (VDI).

In the virtualization ecosystem, key components like the CPU, network controller and storage devices are being virtualized in hardware to deliver optimal user experiences, but prior to today the GPU was not hardware virtualized. AMD MxGPU technology, for the first time, brings the modern virtualization industry standard to the GPU hardware.

What does this mean? Consistent performance and enhanced security across virtual machines. MxGPU controls GPU scheduling, delivering predictable quality of service to the user.

AMD MxGPU technology, based on SR-IOV (Single Root I/O Virtualization), a PCI Express® standard:

  • Delivers hardware GPU scheduling logic with high-precision quality of service to the user.
  • Preserves the data integrity of Virtualized Machines (VM) and their application data through hardware-enforced memory isolation logic preventing one VM from being able to access another VM’s data.
  • Exposes all graphics functionality of the GPU to applications allowing for full virtualization support for not only graphics APIs like DirectX® and OpenGL but also GPU compute APIs like OpenCL.

 

AMD FirePro S7150 & FirePro S7150 x2

The new AMD FirePro S7150 and AMD FirePro S7150 x2 server graphics cards will combine with industry-leading OEM offerings to create high-performance virtual workstations and address IT needs of simple installation and operation, critical data security and outstanding performance-per-dollar. Typical VDI use cases include Computer-Aided Design (CAD), Media and Entertainment, and office applications powered by the industry’s first hardware-based virtualized GPU.

A single AMD FirePro S7150 card, which features 8 GB of GDDR5 memory, supports up to 16 simultaneous users, while up to twice as many simultaneous users (32 in total) can be supported by a single AMD FirePro S7150 x2 card, which includes a total of 16 GB of GDDR5 memory (8 GB per GPU). Both models feature a 256-bit memory interface.

Based on AMD’s Graphics Core Next (GCN) architecture to optimize utilization and maximize performance, the AMD FirePro S7150 and S7150 x2 server GPUs feature:

  • AMD Multiuser GPU (MxGPU) technology to enable consistent, predictable and secure performance from virtualized workstations with the world’s first hardware-based virtualized GPU products to enable users with workstation-class experiences matched with full ISV certifications.
  • GDDR5 GPU Memory to help accelerate applications and process computationally complex workflows with ease.
  • Error Correcting Code (ECC) Memory to ensure the accuracy of computations by correcting any single or double bit error as a result of naturally occurring background radiation.
  • OpenCL 2.0 support to help professionals tap into the parallel computing power of modern GPUs and multicore CPUs to accelerate compute-intensive tasks in leading CAD/CAM/CAE and Media & Entertainment applications that support OpenCL, allowing developers to take advantage of new GPU features.
  • AMD PowerTune, an intelligent power management system that monitors both GPU activity and power draw. AMD PowerTune optimizes the GPU to deliver low power draw when GPU workloads do not demand full activity, and delivers the optimal clock speed to ensure the highest possible performance within the GPU’s power budget for high-intensity workloads.

AMD FirePro S7150 has an MSRP of USD $2399 and AMD FirePro S7150 x2 has an MSRP of USD $3999.

AMD FirePro S7150 and S7150 x2 server GPUs are expected to be available from server technology providers in the first half of 2016.

 


Anti-Dot Crawl – The BIOS Optimization Guide

Anti-Dot Crawl

Common Options : Enabled, Disabled

 

Quick Review of Anti-Dot Crawl

Dot crawl is a visual artifact that plagues composite video signals (e.g. NTSC signals). Fortunately, it’s possible to greatly reduce dot crawl by using a comb filter.

The Anti-Dot Crawl BIOS feature controls the composite video decoder’s comb filter.

When enabled, the comb filter is enabled to suppress dot crawl artifacts.

When disabled, the comb filter is disabled and dot crawl artifacts are allowed to appear normally.

If you intend to play composite video signals on your system, it is highly recommended that you enable this BIOS feature to suppress dot crawl artifacts.

 

Details of Anti-Dot Crawl

Dot crawl is a visual artifact that plagues composite video signals (e.g. NTSC signals). The artifact appears as shimmering checkerboard or line patterns between contrasting colours. As the dots in the artifact appear to crawl between those colours, this gave rise to the term “dot crawl”.

This visual artifact is due to crosstalk between the luminance (brightness) and chrominance (colour) components of the composite video signal. As such, the only complete solution is to replace composite video signals with component video signals. Only by keeping the components on separate signals can such crosstalk be avoided entirely.

Fortunately, it’s possible to greatly reduce dot crawl by using a comb filter. A comb filter is a phase cancellation filter that works by adding a slightly delayed version of the signal to the signal itself. It can be implemented in the composite video hardware or in software.
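To make the idea concrete, here is a minimal sketch of a one-scanline-delay comb filter in Python. It assumes an NTSC-like signal whose chroma subcarrier inverts phase on successive lines, so averaging adjacent lines cancels chroma (leaving luma) and differencing cancels luma (leaving chroma); comb filters in real video decoders are considerably more elaborate.

```python
def comb_filter_separate(lines):
    """Separate luma and chroma from composite scanlines using a
    1-line-delay comb filter (toy model: chroma flips sign each line)."""
    luma, chroma = [], []
    prev = lines[0]  # first line pairs with itself (no previous line yet)
    for cur in lines:
        luma.append([(a + b) / 2 for a, b in zip(cur, prev)])    # chroma cancels
        chroma.append([(a - b) / 2 for a, b in zip(cur, prev)])  # luma cancels
        prev = cur
    return luma, chroma
```

Feeding it lines whose samples are a constant luma of 0.5 plus a chroma term that alternates between +0.3 and -0.3 per line recovers both components cleanly.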


The Anti-Dot Crawl BIOS feature controls the composite video decoder’s comb filter.

When enabled, the comb filter is enabled to suppress dot crawl artifacts.

When disabled, the comb filter is disabled and dot crawl artifacts are allowed to appear normally.

If you intend to play composite video signals on your system, it is highly recommended that you enable this BIOS feature to suppress dot crawl artifacts.

 


AGP 3.0 Calibration Cycle – BIOS Optimization Guide

AGP 3.0 Calibration Cycle

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature controls the AGP 3.0 calibration cycle feature of the motherboard chipset. It is only found in motherboards that support the AGP 3.0 standard.

When enabled, the motherboard chipset will periodically initiate a dynamic calibration cycle on the AGP bus. This allows the AGP bus to maintain its timings and signal integrity.

When disabled, the motherboard chipset will not initiate any dynamic calibration cycle on the AGP bus. The AGP bus timings and signal integrity may suffer from changes in voltage and temperature during operation.

As the dynamic calibration cycle maintains the AGP 3.0 bus’ timings and signal integrity, it is highly recommended that you leave it at the default setting of Enabled.

However, please note that this feature is only implemented if both motherboard chipset and AGP graphics card are operating in the AGP 3.0 mode. It is automatically disabled when the AGP 2.0 mode is used.

 

Details

The AGP 3.0 signaling scheme has very tight tolerances for its high-speed, source-synchronous signals, which include the AD (data) bus and DBI (Dynamic Bus Inversion) signals. Unfortunately, key parameters like termination impedance, signal swing and slew rate can change due to changes in voltage and temperature during operation. These variations in key parameters can affect timing and signal integrity.

Therefore, the AGP 3.0 standard includes support for a dynamic calibration cycle. This feature allows the AGP 3.0 bus to dynamically recalibrate its source-synchronous signals over time.

The dynamic calibration cycle is periodically initiated by the motherboard chipset. By default, the AGP bus undergoes a dynamic calibration cycle every 4 ms. But the period between calibrations may be extended up to 256 ms, depending on motherboard implementation.

When a dynamic calibration cycle occurs, the chipset takes control of the AGP bus and initiates the calibration, which takes three or more clock cycles to complete. Thereafter, the bus is released and a new AGP transaction may begin.
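The timing logic described above can be sketched as follows. The class name and the millisecond bookkeeping are purely illustrative, not part of any real chipset interface; the point is simply that a calibration cycle is interleaved ahead of a transaction whenever the configured interval has elapsed.

```python
class AgpBus:
    """Sketch of a chipset that interleaves periodic calibration cycles
    with normal AGP transactions (interval is illustrative)."""
    CAL_INTERVAL_MS = 4.0  # default per AGP 3.0; extendable up to 256 ms

    def __init__(self):
        self.last_cal_ms = 0.0
        self.calibrations = 0

    def begin_transaction(self, now_ms):
        # Before granting the bus, run a calibration cycle if one is due.
        if now_ms - self.last_cal_ms >= self.CAL_INTERVAL_MS:
            self.calibrations += 1     # chipset takes the bus, recalibrates
            self.last_cal_ms = now_ms  # bus released, transaction proceeds
        return "transaction"
```

Transactions arriving within the 4 ms window proceed immediately; the first one past the window pays the small calibration cost first.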

This BIOS feature controls the AGP 3.0 calibration cycle feature of the motherboard chipset. It is only found in motherboards that support the AGP 3.0 standard.

When enabled, the motherboard chipset will periodically initiate a dynamic calibration cycle on the AGP bus. This allows the AGP bus to maintain its timings and signal integrity.

When disabled, the motherboard chipset will not initiate any dynamic calibration cycle on the AGP bus. The AGP bus timings and signal integrity may suffer from changes in voltage and temperature during operation.

As the dynamic calibration cycle maintains the AGP 3.0 bus’ timings and signal integrity, it is highly recommended that you leave it at the default setting of Enabled.

However, please note that this feature is only implemented if both motherboard chipset and AGP graphics card are operating in the AGP 3.0 mode. It is automatically disabled when the AGP 2.0 mode is used.

 


AGP Read Synchronization – BIOS Optimization Guide

AGP Read Synchronization

Common Options : Enabled, Disabled

 

Quick Review

This BIOS feature ensures proper synchronization of data transferred on the AGP bus.

When enabled, the chipset will wait until all writes in the Global Write Buffer are completely written to the system memory before it initiates any writes to the AGP graphics card.

When disabled, the chipset will allow data from the system memory to be written to the AGP graphics card, even if the Global Write Buffer has not completed its data transfers to the system memory.

While it may seem that AGP Read Synchronization should be enabled, this is actually not true. To avoid data synchronization problems, the chipset actually allows the Global Write Buffer to be snooped. If the data is found in the Global Write Buffer, it is read directly from the buffer and then the buffer is flushed.

However, it is not possible for both methods to be enabled simultaneously. One of these two methods must be enabled for proper synchronization of data, but not both at the same time.

For performance reasons alone, AGP Read Synchronization should be disabled. In addition, many motherboard BIOS setup utilities do not allow you to disable the snooping of the Global Write Buffer. Therefore, if you enable AGP Read Synchronization, you will experience problems with your graphics card, especially when data is written to the AGP aperture.

 

Details

When the graphics processor writes data to the AGP aperture, it doesn’t directly write the data to the system memory. Doing so will tie up the graphics processor for a long time as the AGP bus (as well as system memory) is many, many times slower than the local memory buffer.

Instead, the graphics processor writes the data to a Global Write Buffer. This allows the graphics processor to be quickly released for other duties. The Global Write Buffer then writes the data to the system memory, while the graphics processor is working on something else.

Unfortunately, the use of the Global Write Buffer means that data synchronization may be a problem. If the graphics processor writes data to the AGP aperture and then requests the same data before the write buffer completes the write process, the graphics processor will receive outdated or incorrect data.

This is where the AGP Read Synchronization BIOS feature comes in. It ensures proper synchronization of data transferred on the AGP bus.

When enabled, the chipset will wait until all writes in the Global Write Buffer are completely written to the system memory before it initiates any writes to the AGP graphics card.

When disabled, the chipset will allow data from the system memory to be written to the AGP graphics card, even if the Global Write Buffer has not completed its data transfers to the system memory.

While it may seem that AGP Read Synchronization should be enabled, this is actually not true. To avoid data synchronization problems, the chipset actually allows the Global Write Buffer to be snooped. If the data is found in the Global Write Buffer, it is read directly from the buffer and then the buffer is flushed.

[adrotate banner=”4″]

However, it is not possible for both methods to be enabled simultaneously. If AGP Read Synchronization is enabled, no writes to the AGP graphics card can occur until the Global Write Buffer is emptied. But for the Global Write Buffer to be snooped, a write must first be initiated by the chipset. This is a Catch-22 situation which is logically not allowed by the chipset.

One of these two methods must be enabled for proper synchronization of data, but not both at the same time. Snooping the Global Write Buffer provides some performance advantage when data is found in the buffer, because the graphics processor can read directly from the buffer. On the other hand, waiting for the Global Write Buffer to complete writing its data, before the chipset can initiate writes to the AGP graphics card, reduces performance.
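The snooping method can be sketched like this. The `GlobalWriteBuffer` class and its dictionary-based memory are hypothetical illustrations; a real chipset buffer operates on bus transactions, not Python objects. The sketch shows why snooping avoids stale reads: a read that hits a pending write is served only after that write is made visible.

```python
class GlobalWriteBuffer:
    """Sketch of write buffering with read snooping: reads check the
    pending-write buffer first, so they never return stale memory."""
    def __init__(self):
        self.memory = {}
        self.pending = {}  # address -> data not yet written back

    def write(self, addr, data):
        self.pending[addr] = data  # buffered; the writer is released at once

    def read(self, addr):
        if addr in self.pending:  # snoop hit: flush, then serve fresh data
            self.flush()
        return self.memory.get(addr)

    def flush(self):
        self.memory.update(self.pending)
        self.pending.clear()
```

A read immediately after a buffered write returns the new data rather than whatever was previously in memory, and the hit also empties the buffer.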

For performance reasons alone, AGP Read Synchronization should be disabled. In addition, many motherboard BIOS setup utilities do not allow you to disable the snooping of the Global Write Buffer. Therefore, if you enable AGP Read Synchronization, you will experience problems with your graphics card, especially when data is written to the AGP aperture.


 


AGP to DRAM Prefetch – BIOS Optimization Guide

AGP to DRAM Prefetch

Common Options : Enabled, Disabled

 

Quick Review

This feature controls the system controller’s AGP prefetch capability.

When enabled, the system controller will prefetch data whenever the AGP graphics card reads from system memory. This speeds up AGP reads as it allows contiguous memory reads by the AGP graphics card to proceed with minimal delay.

It is highly recommended that you enable this feature for better AGP read performance.

 

Details

This feature controls the system controller’s AGP prefetch capability. When enabled, the system controller will prefetch data whenever the AGP graphics card reads from system memory. Here is how it works.

Whenever the system controller reads AGP-requested data from system memory, it also reads the subsequent chunk of data. This is done on the assumption that the AGP graphics card will request the subsequent chunk of data. When the AGP graphics card actually initiates a read command for that chunk of data, the system controller can immediately send it to the AGP graphics card.

This speeds up AGP memory reads as the AGP graphics card won’t need to wait for the system controller to read from system memory. In other words, AGP to DRAM Prefetch allows contiguous memory reads by the AGP graphics card to proceed with minimal delay.
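The read-ahead behaviour can be sketched as follows. The `PrefetchingReader` class, chunk size and counters are illustrative only; real chipsets prefetch at the level of memory bursts, not byte slices. The sketch shows the key effect: a contiguous follow-up read is served from the prefetch buffer without touching system memory again.

```python
class PrefetchingReader:
    """Sketch of read-ahead: each memory read also fetches the next chunk,
    so a contiguous follow-up read is served without a new memory access."""
    def __init__(self, memory, chunk=64):
        self.memory = memory
        self.chunk = chunk
        self.cache = {}        # prefetched chunks: start address -> data
        self.memory_reads = 0  # counts actual trips to system memory

    def read(self, addr):
        base = addr - addr % self.chunk
        if base in self.cache:
            return self.cache.pop(base)  # served from the prefetch buffer
        self.memory_reads += 1
        data = self.memory[base:base + self.chunk]
        # Prefetch the subsequent chunk, assuming it will be requested next.
        nxt = base + self.chunk
        self.cache[nxt] = self.memory[nxt:nxt + self.chunk]
        return data
```

Two contiguous chunk reads cost only one trip to memory, which is exactly the saving the paragraph above describes.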

It is highly recommended that you enable this feature for better AGP read performance. Please note that AGP writes to system memory do not benefit from this feature.

 


BIOSTAR GeForce GAMING GTX 750 Ti OC Launched

December 17th, 2015 Taipei, Taiwan – BIOSTAR is proud to bring gamers the best in performance and value with its GeForce GTX 750 Ti graphics card for mainstream gaming. Powered by the high-efficiency Maxwell architecture, the BIOSTAR GeForce GAMING GTX 750 Ti OC delivers stunning visuals with low power draw.

Complemented by 2 GB of GDDR5 memory on a 128-bit bus, it lets you play the latest games smoothly. In this special GAMING OC revision, designed to meet competitive gamers’ needs, the 640 CUDA cores are factory overclocked to 1127 MHz, with a Boost frequency of 1178 MHz for that extra kick.

The unique FPS dual-fan cooling design keeps the GTX 750 Ti OC cool and running in top condition, while looking the part as well, since it is styled to complement the BIOSTAR GAMING line of motherboards. The BIOSTAR GeForce GAMING GTX 750 Ti OC supports multiple displays via dual DVI and mini-HDMI outputs.

 

Factory Overclocked For Higher Performance

Enjoy a higher level of gaming with the higher clock speeds of BIOSTAR’s GAMING OC. Faster-than-reference clock speeds shift the GTX 750 Ti into overdrive, churning out higher frame rates for a smoother gaming experience.

 

Unique Dual-Fan Cooling Design

Keeping your card cool is important to get the most out of it. The new BIOSTAR GAMING FPS cooler cools the full-sized PCB and maintains ideal temperatures. Quiet operation also ensures no distractions, so you can game in full immersion without worrying about noisy fans.

 

BIOSTAR GeForce GAMING GTX 750 Ti OC Specifications

Engine Clock : 1059 - 1137 MHz
Memory Clock : 1350 MHz (5400 MHz QDR)
Memory Size : 2048 MB
Memory Type : GDDR5
Memory Bus : 128-bit
CUDA Cores : 640
Interface : PCI Express 3.0 x16
Max. Resolution : Digital: 4096 x 2160; VGA: 2048 x 1536
Outputs : Dual-DVI, HDMI
Accessories : 1 x DVI-VGA Adapter, 1 x Driver CD, 1 x Quick Guide, 1 x Mini-HDMI to HDMI Adapter