nVidia GRID vGPU


Back in August 2017 nVidia announced the latest version of their graphics virtualisation technology, GRID 5.0, together with the new datacentre-grade GRID graphics cards, the Tesla P6 and Tesla P40.

This blog post aims to give the reader an overview of nVidia GRID vGPU along with a brief history of the cards. For those readers who may be unfamiliar with graphics cards and the existing nVidia GRID cards, let’s have a quick recap before we look at the latest ‘Pascal’ series GRID vGPU cards.

Why do we need Graphics Cards?

Any gamers reading this will likely be familiar with the concept of having dedicated hardware, separate from the CPU and RAM of your computer, to process the multitude of simultaneous operations and rendering that games and other graphical workloads (3D CAD modelling, for example) require, and you may wish to skip this section!

However, for those unfamiliar with graphics cards and why we need them: modern computer systems increasingly need dedicated graphics resource to support the graphical elements of the Windows OS itself (Windows 10 is currently around 40% more graphically intensive than any previous version of Windows), and we also need it if we are running applications that demand high levels of graphics processing or rendering.

This is because the computations required for these workloads are very intensive, so a processor separate from the main CPU is often a good idea. GPUs (Graphics Processing Units) also have a different internal architecture to CPUs, one better suited to processing graphical workloads and many tasks in parallel.


Graphics cards also have their own RAM, often referred to as graphics or video RAM (vRAM). Again, this is separate from the RAM in the computer, has a slightly different architecture, and is used to offload the processing overhead from the main system RAM.

nVidia have an excellent article on the differences between CPUs and GPUs that’s worth a read if you want more information: https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/

Why do we need Graphics Cards in the Datacentre?

One of my main areas of responsibility at ComputerWorld is to architect end-user computing solutions, and desktop virtualisation in particular is a passion. I’m not going to extoll the virtues of virtualising your users’ desktops here, beyond saying that it’s something you should consider if you haven’t already!

There is a trend towards using GPU hardware to address certain high-performance compute requirements in the datacentre, and that may be covered in a later blog post, but for now I’ll focus on the VDI aspect.

Historically, a problem when looking at virtualising users’ desktops has been virtualising 3D engineering and/or design workloads, CAD applications for example, and this is where the nVidia GRID technology comes into the story.

nVidia GRID graphics cards are essentially the same as the cards you would insert into a CAD workstation or gaming PC: they are built using the same technology and are tested with the same procedures. There are other differences, but in essence they just have a lot more resources available for use. The power of an nVidia GPU is typically measured by the number of CUDA cores it has.

More information on CUDA cores can be found here: http://www.nvidia.co.uk/object/cuda-parallel-computing-uk.html

nVidia GRID vGPU

I mentioned graphics or video RAM (vRAM) earlier; let’s run through the physical GPU first and come back to the vRAM aspect in a moment.

Virtualising Physical GPUs the old way… VMware vSGA & vDGA

Prior to the release of the nVidia GRID vGPU technology, the options for assigning physical GPU graphics resource to virtual desktops were pretty much all or nothing: the available physical GPU resource could either be accessed by all users, i.e. shared (VMware vSGA), or accessed by one user, i.e. dedicated (VMware vDGA).

In the shared scenario all users had access to graphics resources, but if a single user started to consume a lot of resource the other users suffered. Conversely, in the dedicated scenario only a single user had access to the available graphics resource, giving that user a vast amount of graphics resource but leaving the others with none: good for specific user scenarios, but bad when multiple users require access to the GPU.

Virtualising Physical GPUs the new way… nVidia vGPU

With the advent of vGPU, physical GPU graphics resource could be shared equally between users, with each user getting the same time-sliced access to the physical GPU cores (much as physical CPU is shared on a virtualisation host). This meant the vSGA problem of a single user consuming all the available graphics resource was no longer an issue, nor was the cost inefficiency of dedicating a whole physical GPU to a single user with vDGA.

In summary, vGPU was far more efficient at allocating and utilising the available resources, making graphics virtualisation much more feasible from a cost perspective.
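
To make the contrast with vSGA a little more concrete, here is a toy Python sketch of round-robin time slicing. The VM names and slice count are invented for illustration, and this is not how nVidia’s scheduler actually works internally; it simply shows why an equal, time-sliced share stops one desktop from starving the rest.

```python
# Toy illustration of time-sliced GPU sharing (conceptual only; the real
# nVidia vGPU scheduler is far more sophisticated than this sketch).
from itertools import cycle

def share_gpu_time(vms, total_slices):
    """Hand out fixed time slices to powered-on VMs in round-robin order."""
    allocation = {vm: 0 for vm in vms}
    for vm, _ in zip(cycle(vms), range(total_slices)):
        allocation[vm] += 1  # each VM gets one slice per pass, never more
    return allocation

if __name__ == "__main__":
    vms = ["cad-vm-01", "cad-vm-02", "office-vm-03", "office-vm-04"]
    print(share_gpu_time(vms, total_slices=1000))
    # Every VM receives roughly 250 slices: no single 'greedy' desktop can
    # monopolise the GPU, which is the key difference from the vSGA model.
```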


Graphics/Video RAM – Frame Buffer & vGPU Profiles

Each physical GRID card has a finite amount of vRAM or, to use the correct term, ‘Frame Buffer’. The Frame Buffer allocated to each user is static and does not change: a portion of the total Frame Buffer present on the GRID card is allocated to each virtual desktop VM when the VM is powered on, and remains allocated until the VM is powered off, at which point it is released and can be re-allocated to a different VM.

The Frame Buffer of a GRID card is distributed evenly between its physical GPUs, meaning each GPU has a set amount of Frame Buffer that it can use. As mentioned above, the Frame Buffer allocation that each VM receives is static and does not change, and this is set and controlled via the use of vGPU ‘profiles’.

vGPU profiles allow differing amounts of vGPU Frame Buffer memory to be allocated to VMs and, in doing so, allow an appropriate amount of Frame Buffer resource to be allocated to each VM based upon its workload.

For example, a user running low to moderate graphical workloads may only require 1GB of vGPU Frame Buffer, whereas a CAD engineer may require 4GB. vGPU profiles allow us to do this on the same GRID card… provided there are two physical GPUs on the card, since each physical GPU can only host one type of vGPU profile. This is important to remember when capacity planning in mixed vGPU profile environments (see the sketch below).
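
To put some rough numbers on that constraint, here is a small Python sketch of the capacity maths. The card specification and profile sizes are illustrative assumptions only (a hypothetical two-GPU card with 8GB of Frame Buffer per GPU), not the specification of any particular GRID card.

```python
# Illustrative capacity planning for a mixed vGPU profile environment.
# Assumption: each physical GPU can only host ONE vGPU profile type, and the
# Frame Buffer on each GPU is carved into equal, static allocations.

CARD = {"physical_gpus": 2, "frame_buffer_per_gpu_gb": 8}  # illustrative only

def vms_per_gpu(profile_gb, frame_buffer_gb=CARD["frame_buffer_per_gpu_gb"]):
    """How many VMs of a single profile fit on one physical GPU."""
    return frame_buffer_gb // profile_gb

# One GPU dedicated to 1GB 'task worker' profiles, the other to 4GB 'CAD' profiles
task_worker_vms = vms_per_gpu(profile_gb=1)   # 8 VMs on GPU 0
cad_vms = vms_per_gpu(profile_gb=4)           # 2 VMs on GPU 1

print(f"Task worker VMs (1GB profile): {task_worker_vms}")
print(f"CAD VMs (4GB profile):        {cad_vms}")
# Note: you could NOT mix 1GB and 4GB profiles on a single-GPU card, because
# that one GPU would have to host two different profile types.
```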

More information on vGPU can be found here: http://www.nvidia.com/object/grid-technology.html

Since the focus of this blog post is nVidia GRID vGPU I’m not going to consider the cards that only support VMware vSGA and vDGA, but they are out there and still have their place for niche requirements. GRID vGPU is an nVidia technology and is only supported on nVidia GRID cards.

One other thing to note about vGPU, which isn’t discussed further here but may form a future post, is that vGPU resource can be assigned to virtual RDS servers to provide dedicated graphics resource to published applications in those environments.

nVidia GRID Licensing Requirement

A licensing requirement was introduced with GRID 2.0; GRID 1.0 did not require a licensing element. With the M-series cards, the GRID vGPU license required was dictated by the vGPU profiles in use, and there were three tiers of licensing available:

  • GRID Virtual Application (for providing vGPU resource to RDS-hosted applications)

  • GRID Virtual PC (Moderate graphical workloads)

  • GRID Virtual Datacentre Workstation (High-end graphical workloads)

The GRID license cost should always be considered when looking into the feasibility and affordability of a GRID vGPU solution. The license required is dictated by a number of different factors, and ComputerWorld can help with identifying which license is most appropriate for your needs; a listing of vGPU profiles and their required licenses is included in a later section of this blog post.
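
As a rule of thumb, the letter at the end of a vGPU profile name hints at the license tier it needs: ‘A’ profiles map to GRID Virtual Application, ‘B’ profiles to GRID Virtual PC, and ‘Q’ profiles to the Virtual Datacentre Workstation tier. The short Python sketch below encodes that rule; it is a simplification rather than nVidia’s official licensing matrix, and the profile names used are just examples.

```python
# Rule-of-thumb mapping from vGPU profile suffix to GRID license tier.
# Simplified illustration only; always check nVidia's current licensing guide.

LICENSE_BY_SUFFIX = {
    "A": "GRID Virtual Application",
    "B": "GRID Virtual PC",
    "Q": "GRID Virtual Datacentre Workstation",
}

def required_license(profile_name: str) -> str:
    """Return the license tier implied by a profile name such as 'P40-2Q'."""
    suffix = profile_name.strip()[-1].upper()
    return LICENSE_BY_SUFFIX.get(suffix, "Unknown - check the nVidia licensing guide")

for profile in ("P40-1A", "P6-1B", "P40-4Q"):
    print(f"{profile}: {required_license(profile)}")
```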

To aid in understanding the differences between the different license types the table below illustrates the different features that each license entitles:

[Table: feature comparison of the three GRID license tiers]

Hardware that supports nVidia GRID

One final thing to note: although the servers on the nVidia certified hardware list (http://www.nvidia.com/object/grid-certified-servers.html) may support the use of GRID cards, it’s often not as simple as installing the cards into your existing servers. There are certain requirements such as specific risers and uprated PSUs, and it’s far easier and cheaper to specify these components at the time of purchase, even if GRID cards are not being installed right away, than it is to retrofit them.
