Is it possible to provide server-side graphic acceleration via ThinLinc?

tl;dr: yes, provided that the following conditions are met:

  • the server has a GPU available
  • the application is OpenGL-based
  • the application is run via VirtualGL

For a more detailed description, see below.

Background

Many Linux applications - even entire desktop environments - require hardware-based graphic acceleration in order to function properly. This can pose a problem for headless installations (which remote desktop servers often are), since many servers don’t provide a GPU for this purpose.

Without one, these applications will either fail to start at all, or fall back to software-based rendering (e.g. via llvmpipe) which can result in poor performance. If you have remote applications which require hardware acceleration, you will need to make sure that your server has a GPU available.

The GPU must also be accessible by the applications which need to use it. In the case of a ThinLinc session, the main process (Xvnc) is non-privileged and has no direct access to the GPU. Applications run from within a ThinLinc session will therefore not have access to the GPU either.

VirtualGL is a software package which gives otherwise non-privileged applications access to the GPU. It interposes itself between the application and the display server, re-routing 3D operations to the GPU in a way which is transparent to the application, and occurs with minimal overhead. VirtualGL can be used within a ThinLinc session to provide server-side graphic acceleration.

Note: VirtualGL is not part of ThinLinc, nor developed or supported by Cendio AB. For information regarding professional services related to VirtualGL, ping @drc on this forum, or see the VirtualGL website.

Installing and Configuring VirtualGL

The VirtualGL project recommends AMD® or NVIDIA® GPUs, using the official proprietary drivers (where applicable). The first step in enabling server-side graphic acceleration is therefore to ensure you have a suitable GPU with the appropriate driver installed[1].

VirtualGL itself can be installed either from your Linux distribution’s repositories, or from a package downloaded from the VirtualGL website. If installing from repositories, check that the version is recent enough to support the features you need. For example, VirtualGL’s EGL backend is only available in versions 3.0 and up[2]. While the GLX backend requires an existing 3D X server (i.e. a “root console”) to be present, the EGL backend doesn’t, and is more secure when dealing with multiple users.
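As a quick sketch of that version check (package names vary by distribution; “virtualgl”/“VirtualGL” are typical, but verify against your repositories):

```shell
# Check which VirtualGL version your distribution packages before installing.
if command -v apt-cache >/dev/null 2>&1; then
    apt-cache policy virtualgl             # Debian/Ubuntu
elif command -v dnf >/dev/null 2>&1; then
    dnf info VirtualGL || true             # Fedora/RHEL (may not be packaged)
else
    echo "Check your package manager for a 'virtualgl' package."
fi
```

If the packaged version is older than you need (e.g. pre-3.0 without the EGL backend), fall back to the packages on the VirtualGL website.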

Once VirtualGL is installed, it needs to be configured. This is done using the vglserver_config tool included as part of VirtualGL. Run this command as root, following the configuration instructions in the VirtualGL documentation. Two points to note here:

  • If you are using the GLX backend, you will lose the ability to log in locally to a Wayland session via GDM. In general, it is discouraged to log in locally when using the GLX backend anyway, as this may disrupt applications using VirtualGL.

  • During setup, vglserver_config will create the group ‘vglusers’. Anyone wanting to make use of the GPU will need to be a member of this group. If using the GLX backend and a recent version of GDM, the gdm account must also be a member of vglusers.
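The steps above can be sketched as follows. This is a sketch only: the flags are the unattended form shown in the VirtualGL documentation (verify against your version), and “alice” is a hypothetical user.

```shell
VGLCONF=/opt/VirtualGL/bin/vglserver_config
if [ -x "$VGLCONF" ]; then
    # +s restricts 3D X server access to the vglusers group,
    # +f restricts framebuffer device access to vglusers,
    # -t leaves the XTEST extension enabled.
    # Running it with no flags gives the interactive prompts instead.
    sudo "$VGLCONF" -config +s +f -t

    # Hypothetical user "alice" needs vglusers membership to use the GPU:
    sudo usermod -aG vglusers alice

    # GLX backend with a recent GDM only: gdm must also be a member.
    sudo usermod -aG vglusers gdm
else
    echo "vglserver_config not found; install VirtualGL first"
fi
```

Note that group membership changes only take effect at the user’s next login.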

Once this step is complete, perform the sanity checks as described in the VirtualGL documentation to make sure everything works as expected.

Using VirtualGL with ThinLinc

If you haven’t done so already, download ThinLinc, and install it on the same server(s)[3] as VirtualGL by following the instructions in the ThinLinc documentation.

To test that graphic acceleration is working, log in to a ThinLinc session and run glxspheres64 both with and without VirtualGL. For example, at the terminal:

$ /opt/VirtualGL/bin/vglrun /opt/VirtualGL/bin/glxspheres64

and:

$ /opt/VirtualGL/bin/glxspheres64

The first command should give a better framerate (shown in the bottom left-hand corner of the visualisation), and the “OpenGL Renderer:” line printed at the terminal should show the name of your GPU/driver, rather than llvmpipe as reported by the second command.

For ease of use, applications requiring graphic acceleration can be defined using the ThinLinc Desktop Customizer (TLDC)[4] and added to the users’ desktop menu, to ensure that the application is always invoked via vglrun.
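Where TLDC isn’t suitable (see footnote 4), a menu entry can be created by hand. A minimal sketch, assuming a hypothetical application called “myapp”:

```shell
# Install a per-user menu entry that always launches the (hypothetical)
# application "myapp" through vglrun.
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/myapp-vgl.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=MyApp (GPU accelerated)
Exec=/opt/VirtualGL/bin/vglrun myapp
Terminal=false
Categories=Graphics;
EOF
```

Placing the file under ~/.local/share/applications makes it per-user; install it under /usr/share/applications instead to make it available to everyone.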

Launching a single application rather than an entire desktop can be done by creating a new profile in ThinLinc, and setting the command line parameter to:

tl-single-app /opt/VirtualGL/bin/vglrun <application>
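As a sketch, such a profile might look like this in /opt/thinlinc/etc/conf.d/profiles.hconf. The profile name “cadapp” and the application are hypothetical, and the key names follow the profiles.hconf examples in the ThinLinc Administrator’s Guide; verify them against the documentation for your ThinLinc version:

```
[/profiles/cadapp]
# Hypothetical single-application profile launched through VirtualGL
name=CAD Application (GPU)
cmdline=tl-single-app /opt/VirtualGL/bin/vglrun cadapp
```

Remember to make the new profile selectable (e.g. add it to the profile list under [/profiles]) as described in the ThinLinc documentation.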

In general, running entire desktop environments through VirtualGL is likely to be problematic. If you are experiencing poor performance with the desktop itself, try using a lightweight, non-compositing DE such as MATE, LXQt, or XFCE instead.


For more information on the VirtualGL project, see https://virtualgl.org.

For more information on choosing desktop environments in ThinLinc, see this post.


  1. Note that according to the VirtualGL documentation, the driver should be (re)installed after VirtualGL. This is to ensure that symlinks created by the driver remain intact. ↩︎

  2. Some applications may not work properly with the EGL backend. If you experience issues using this backend, try with the GLX backend instead. ↩︎

  3. VirtualGL only needs to be installed on the server(s) where the applications are actually run. Depending on your configuration, this may be all, some, or none of the servers running ThinLinc. ↩︎

  4. TLDC will not work with desktop environments which lack a traditional menu structure, such as GNOME 3. A .desktop file can be created and installed manually for such environments. ↩︎


It works fine for OpenGL games, and it boosts browser performance for people using WebGL websites too. I tested this at https://webglsamples.org by running a web browser in a ThinLinc session with and without VirtualGL.

Something confused me at first: I was trying to play a game, and the resulting FPS didn’t match the performance the game itself reported. I later found out this was due to the way frames are rendered on the client through ThinLinc: a frame has to be fully drawn on screen before the next one is sent, so depending on network latency and bandwidth, some frames are skipped.

VirtualGL works great over ThinLinc