When viewing text files over a Wi-Fi connection from Bangalore to San Antonio, an employee of my client, who has ThinLinc deployed, is reporting high latency when connecting to a GNOME desktop session and displaying text files.
The settings that seemed to minimize the problem while retaining full color depth were: AutoSelect, a custom compression level between 4 and 6, JPEG compression allowed, and SSH compression enabled:
Which results in this kind of connection info:
This is with VirtualGL and libjpeg-turbo installed on the server. There is not much I can do on the client side; I’m fairly certain they are running the Windows ThinLinc client. I’m assuming they won’t have any control over performance settings if they are using the web client.
Any suggestions or pointers on things I can alter in the configuration would be greatly appreciated.
It’s really performing well on a wired connection at this distance; the problem seems to be a symptom of wireless internet, and potentially of India’s internet infrastructure itself.
Thanks for reaching out, and sorry for keeping you waiting for a couple of days.
First, I would recommend going for a much more lightweight desktop environment, for example MATE or XFCE with compositing disabled; this could really help improve the user experience when latency is an issue.
Also, I’m not sure about having SSH compression enabled; I’ve read that it generally gives you very little reduction in bandwidth, but can use quite a bit of CPU.
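For what it’s worth, SSH’s compression is zlib-based, and a JPEG-encoded session stream is already compressed, so there is little left for it to squeeze. A rough local sketch of that effect, using gzip (also zlib-based) on incompressible random data as a stand-in for the session stream:

```shell
# Generate incompressible data, as a stand-in for an already
# JPEG-compressed session stream, then gzip it.
dd if=/dev/urandom of=sample.bin bs=1k count=256 status=none
gzip -kf sample.bin
orig=$(wc -c < sample.bin)
comp=$(wc -c < sample.bin.gz)
# On random input the "compressed" file usually comes out slightly
# LARGER than the original, while still costing CPU to produce.
echo "original: $orig bytes, gzipped: $comp bytes"
```

The same reasoning applies to the SSH tunnel: compressing data that is already compressed burns CPU for essentially no bandwidth savings.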
I would start by leaving the Optimization settings at their default values, and see how they perceive the latency with another window manager with compositing disabled.
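In case it helps, XFCE’s compositing can also be toggled from the command line inside the session; a sketch, assuming xfwm4 is the window manager (verify the property path on your version with `xfconf-query -c xfwm4 -l`):

```shell
# Disable xfwm4's compositor for this user's XFCE session.
xfconf-query -c xfwm4 -p /general/use_compositing -s false
# Confirm the new value (should print "false").
xfconf-query -c xfwm4 -p /general/use_compositing
```

This is a per-user setting change that requires a running XFCE session, so it is shown here only as a configuration fragment.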
No worries about the delay in responding; the response is appreciated. I think you’re spot on with the suggestion to disable compositing, specifically for their most-noticed problem, the “chuggy” scrolling of text. I’ll give that a try right away. Allow the client to optimize itself, and take the compositing load out of the picture, I guess?
Yes, that should be a good starting point.
Do you happen to know the latency ms between the client and the ThinLinc server?
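A plain ping gives a quick round-trip estimate; shown here against 127.0.0.1 only so the command is runnable anywhere, so substitute the actual San Antonio server address:

```shell
# Round-trip latency estimate; replace 127.0.0.1 with the address
# of the ThinLinc server for the real measurement.
ping -c 5 127.0.0.1 | tail -n 2
```

The summary line reports min/avg/max round-trip times, and the average is the number most relevant to perceived interactivity.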
From my geographic replication of the distance (Detroit → Lisbon → San Antonio): about 200 ms.
Going to get a specific measure from the user onsite shortly.
From my geographic replication of the distance, Detroit → Lisbon → San Antonio:
- 200 ms, and at the edge of being enjoyable.

From the customer directly, India → San Antonio:
- 240 ms on Wi-Fi to a 5G point of presence.
- 290 ms on a wired connection to a 4G point of presence.
When they use a wired connection to their router device they are, for some reason, limited to a 4G point of presence, and their latency goes up to 285-292 ms.
We’re going to try to address that with their connectivity… I’m not familiar with Indian home/office networking configurations however. Maybe someone on this community is?
For now I’m temporarily going to assume that 5G means up to 10 Gbps and 4G up to 1 Gbps.
We also followed your suggestion with XFCE: disabling compositing, turning the desktop background into a solid color, etc. I’m going to tune XFCE as much as possible so they can more ably survive their essential Python/Jupyter Notebook and VS Code JSON-editing tasks.
Okay. Just a quick follow-up question. Do you have any measurements on what bandwidth they actually get from the San Antonio ThinLinc system? A rough measurement would suffice.
Here is a measurement of the data transfer speeds:
Looks like a mean throughput ranging from 200-310 KB/s. I have a long video of the transfer of a 1 GB file and a 500 MB file; sampling it at random, this seemed to be the typical range.
Some of the peak throughput measurements were 441.8 KB/s, 777.1 KB/s, and 5.8 MB/s, but mostly it sat around 270 KB/s.
The 1 GB and 500 MB files were purely /dev/urandom-generated data.
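For reference, test files like these can be generated with dd; /dev/urandom output is incompressible, which makes it a good worst-case payload for throughput tests (size scaled down here; raise `count` for the 500 MB and 1 GB cases):

```shell
# Create an incompressible test file (10 MiB here; use count=512
# for ~500 MiB or count=1024 for ~1 GiB).
dd if=/dev/urandom of=testfile.bin bs=1M count=10 status=none
wc -c testfile.bin
```

Because the data cannot be compressed in transit, the measured transfer rate reflects raw link throughput rather than any compression in the pipeline.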