Recommendation regarding ThinLinc on Bare Metal vs VM vs Container

Hi,

I was wondering about any recommendations, best practices, success (or horror) stories of running ThinLinc on a bare metal server vs as a virtual machine vs as a container.

I assume the best performance will be had on bare metal, then containers, then VMs?

But what about security aspects?

For example: I want to mount the users’ home folders over the network via NFS. In a container under Proxmox, I would have to make it a privileged container, mount the NFS shares on the Proxmox host, and then pass them through to the container. That doesn’t sound like the most secure solution. In a VM, on the other hand, I could mount the shares directly inside the VM with systemd automount and/or fstab, and on bare metal I could of course mount them the same way.
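
For the VM or bare-metal case, I’m thinking of nothing fancier than a plain NFS entry with systemd automount, something like this (server name and paths are just placeholders, not my real setup):

```
# /etc/fstab inside the VM (placeholder server and export)
nas.example.com:/export/home  /home  nfs4  noauto,x-systemd.automount,x-systemd.idle-timeout=600,_netdev  0  0
```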

Another question: what if a user crashes their session? Will it affect other users and/or the host if everything is running inside a container that shares the kernel with the host? Would a VM be able to crash the host, or would only the VM be affected?

I would love to hear any recommendations on this topic and any experiences with one or the other, good or bad!

Hello @sswirski

If we are talking specifically about the ThinLinc services (vsmserver, vsmagent, etc.), they would run perfectly fine either on a VM or on physical rust. Running the services in a container could probably work as well, but it’s not something we generally recommend, since it’s not part of our routines to validate and test. Also, ThinLinc has several requirements that containers typically lack (systemd, for one).

For inspiration, check out GitHub - oposs/tl-docker: ThinLinc Server in a Docker!

The heavy lifting in a typical ThinLinc environment is carried out by the host(s) housing the users’ graphical sessions; this is where vsmagent.service runs. The graphical session consists of the display server (Xvnc), the desktop environment (GNOME, KDE, etc.), and any applications that the user starts.
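
To give a rough picture of that split (hostnames below are placeholders, and the paths are from memory of a default install, so check them against the admin guide): the master lists its agent hosts in vsmserver.hconf, and each agent host just needs vsmagent running.

```
# On the master, /opt/thinlinc/etc/conf.d/vsmserver.hconf (placeholder hostnames):
[/vsmserver]
terminalservers=agent1.example.com agent2.example.com

# On each agent host, only the agent service needs to be running:
systemctl status vsmagent.service
```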

I’d say that you’d typically get the best performance by hosting the sessions on physical hardware, but the reality is that most(?) implementations we see nowadays are done in a virtualized environment. The performance is really good, it’s possible to pass through specific hardware for niche use cases (GPUs, for example), and scaling is easier.
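
To illustrate the pass-through point: on Proxmox, handing a whole GPU to a VM is roughly a one-liner, assuming IOMMU is enabled on the host and the VM uses the q35 machine type (the PCI address and VM ID below are made up):

```
# On the Proxmox host:
qm set 120 -hostpci0 0000:01:00.0,pcie=1
```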

I’d love to hear about some horror stories, though :slight_smile:

Regards,
Martin

I do actually run everything in LXC containers right now: Two agents and one master with Proxmox as the host OS on one physical server.

My reasoning was the performance gain over a VM, but since I want to open this setup to the internet soon, I’m a little concerned about the security of running in containers.

To make my NFS shares available inside the ThinLinc containers, I currently mount these on the Proxmox host and then pass them to the containers (which are also then running as “privileged”). In my home LAN this was fine, but I kinda wanna switch to VMs when exposing everything to the public internet and mount the shares directly in the VMs, not on Proxmox.
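
For reference, the pass-through setup I’m describing looks roughly like this on the Proxmox host (container ID and paths are made up for the example):

```
# Mount the NFS export on the Proxmox host (via fstab or a storage entry):
mount -t nfs nas.example.com:/export/home /mnt/nfs-home

# Bind-mount it into LXC container 101 as /home:
pct set 101 -mp0 /mnt/nfs-home,mp=/home
```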

I see.

I’m not that familiar with running containers in general. Personally, I’d rather go for VMs if I were introducing external users to the system; it feels very fragile from a security perspective to have the containers running in privileged mode with external access.

Regards,
Martin

I’ll definitely switch to VMs, then. A small performance hit is very tolerable to me compared to any potential security risks! :slight_smile:

Sounds great. I approve :slight_smile:

FYI: I have ThinLinc server running on Arch with KDE, all in an unprivileged LXC container on Proxmox.

So that’s an option if you can live without live migration.
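
In case it helps, the container config is roughly along these lines (container ID, sizes, and rootfs are illustrative, not copied from my setup; nesting is what lets systemd behave inside the unprivileged container):

```
# /etc/pve/lxc/103.conf (illustrative)
unprivileged: 1
features: nesting=1
ostype: archlinux
hostname: tl-agent
cores: 4
memory: 8192
rootfs: local-lvm:vm-103-disk-0,size=32G
net0: name=eth0,bridge=vmbr0,ip=dhcp
```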