LTSP 5 is in minimal maintenance mode
The new LTSP is hosted at https://ltsp.github.io

IRC chat logs for #ltsp on irc.freenode.net (webchat)


Channel log from 20 September 2019   (all times are UTC)

00:10vagrantc has left IRC (vagrantc!~vagrant@unaffiliated/vagrantc, Quit: leaving)
01:39adrianor1 has joined IRC (adrianor1!~adrianorg@177.18.96.72)
01:42adrianorg has left IRC (adrianorg!~adrianorg@179.187.27.37.dynamic.adsl.gvt.net.br, Ping timeout: 240 seconds)
05:08kjackal has joined IRC (kjackal!~quassel@nat/canonical/x-oztzylisiydzwhrp)
05:20os_a has joined IRC (os_a!~Thunderbi@195.112.116.22)
05:25woernie has joined IRC (woernie!~werner@p5B296964.dip0.t-ipconnect.de)
05:30woernie has left IRC (woernie!~werner@p5B296964.dip0.t-ipconnect.de, Remote host closed the connection)
05:34ricotz has joined IRC (ricotz!~ricotz@ubuntu/member/ricotz)
05:50statler has joined IRC (statler!~Georg@p54897245.dip0.t-ipconnect.de)
06:25kjackal has left IRC (kjackal!~quassel@nat/canonical/x-oztzylisiydzwhrp, Ping timeout: 265 seconds)
07:05alkisg has left IRC (alkisg!~alkisg@ubuntu/member/alkisg, Quit: Leaving.)
07:10alkisg has joined IRC (alkisg!~alkisg@ubuntu/member/alkisg)
07:45woernie has joined IRC (woernie!~werner@p578bb7b6.dip0.t-ipconnect.de)
07:46woernie has left IRC (woernie!~werner@p578bb7b6.dip0.t-ipconnect.de, Remote host closed the connection)
07:46woernie has joined IRC (woernie!~werner@p578bb7b6.dip0.t-ipconnect.de)
08:37statler has left IRC (statler!~Georg@p54897245.dip0.t-ipconnect.de, Remote host closed the connection)
09:12statler has joined IRC (statler!~Georg@gwrz3.lohn24.de)
09:15gdi2k_ has joined IRC (gdi2k_!~gdi2k@37.230.130.32)
09:17gdi2k has left IRC (gdi2k!~gdi2k@host81-158-240-159.range81-158.btcentralplus.com, Ping timeout: 240 seconds)
09:58kjackal has joined IRC (kjackal!~quassel@nat/canonical/x-oriagojwvhfclvad)
10:24kjackal has left IRC (kjackal!~quassel@nat/canonical/x-oriagojwvhfclvad, Ping timeout: 246 seconds)
10:51gdi2k__ has joined IRC (gdi2k__!~gdi2k@host81-158-240-159.range81-158.btcentralplus.com)
10:54gdi2k_ has left IRC (gdi2k_!~gdi2k@37.230.130.32, Ping timeout: 250 seconds)
11:55Faith has joined IRC (Faith!~Paty_@unaffiliated/faith)
12:01mgariepy has joined IRC (mgariepy!~mgariepy@ubuntu/member/mgariepy)
12:02woernie has left IRC (woernie!~werner@p578bb7b6.dip0.t-ipconnect.de, Remote host closed the connection)
12:16section1 has joined IRC (section1!~section1@178.33.109.106)
13:59os_a has left IRC (os_a!~Thunderbi@195.112.116.22, Remote host closed the connection)
16:42alkisg has left IRC (alkisg!~alkisg@ubuntu/member/alkisg, Quit: Leaving.)
16:44alkisg has joined IRC (alkisg!~alkisg@ubuntu/member/alkisg)
16:49
<alkisg>
Heh, systemd-homed in the future will allow users to "bring their own home directory"
16:49
This will perfectly match ltsp :D
16:50
E.g. super fast usb3 sticks with many gbps transfer rates => no strain on the network anymore
16:50
<fiesh>
are you sure usb sticks outperform gbit lan?
16:51
not to mention the track record of consumer grade memory solutions as being close to write-only ;)
16:51
<alkisg>
Talking about the future
16:51
I imagine in a few years, the current usb sticks will seem like floppy drives to the future ones
16:52
<fiesh>
hmm maybe, but it would still destroy a lot of the advantages of a network-mounted home directory, most notably being available everywhere and properly backed up
16:52
besides, I think home directories are very rarely performance bottlenecks anyhow?
16:53
<alkisg>
Currently SSD disks give 500 MB/sec, or 5 gbps
16:53
Network can't really give 5 gbps per user *now*, which local disks can do
16:53
Fortunately programs etc still support old hdds, which have much lower transfer rates, bigger latencies etc
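A quick sanity check on the unit conversion above (a sketch using shell arithmetic only; 500 MB/s is strictly 4 Gbps, which the chat rounds up to 5):

```shell
# MB/s -> Mbit/s: multiply by 8 bits per byte
per_user=$((500 * 8))                 # 4000 Mbit/s = 4 Gbit/s per local SSD
echo "$per_user Mbit/s per user"
echo "$((20 * per_user)) Mbit/s aggregate for 20 users"   # 80 Gbit/s
```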
16:53
<fiesh>
true, but how much data do you read from your home directory as opposed to say /usr
16:53
<alkisg>
About the same
16:54
<section1>
usb32ssd :D
16:54
<alkisg>
So in e.g. 5-10 years I imagine the LAN won't be enough without some help from local disks/sticks/whatever
16:54
<fiesh>
I hope 10gbit is the standard in 5-10 years, but of course hard disks will have evolved by then as well
16:54
<alkisg>
E.g. 20 users with 5 gbps each is 100 gbps *now*
16:55
<fiesh>
but I think you're the exception if you read and write that much data to your home directory
16:55
<alkisg>
LAN can't hope to match the "many local SSDs" transfer rates
16:55
<fiesh>
typical users will run far bigger applications, like browsers, that generate very little data in their home directory
16:55
plus unless you do a lot of writing, I'd rather invest in RAM and do some cache warming in advance for data that will need to be read at some point
16:55
<alkisg>
/usr is pretty much read only and cached
16:56
/home is read write, so not too much cached, so it results in a lot of traffic
16:56
<fiesh>
how much do you write?
16:56
<alkisg>
I can't say "per session". E.g. KDE programs spam ~/.cache a whole lot when they run; others are more conservative
16:57
I haven't measured exactly
16:57
<fiesh>
compiling software is one of the things that writes a lot, but we run that locally on the server anyway because of other performance advantages, mainly the number of cores and how suboptimally distcc / its spin-offs scale sometimes
16:57
hmm ok, no idea about KDE
16:57
<alkisg>
VMs also cause extreme /home traffic, but they're not the norm
16:57
<fiesh>
seems to me most such caching should go to /tmp though
16:57
<alkisg>
No it's cache that survives logins/reboots
16:57
<fiesh>
that's true, cloning VMs in virtualbox does suck over network
16:57
oh I see
16:58
<alkisg>
Video/audio editing also causes very much traffic
16:58
And, thumbnail generation, navigating with nautilus in a media folder
16:58
It reads all the contents to then generate the thumbnails
16:58
<fiesh>
heh ok all things I'd never do anyway :-) ok maybe audio / video editing
16:58
<alkisg>
In schools, I see sshd (for /home) wasting a couple of cpu cores on the server
16:59
But I didn't do an extensive test, to give exact numbers
16:59
<fiesh>
hmm ok, we use NFS to avoid that
16:59
but I can see that being expensive
16:59
<alkisg>
The traffic is still there even if you save the cpu part
17:00
<fiesh>
that's true, I've never measured the latency over network, which usually tends to be the bigger problem for IO as compared to throughput anyway
17:00
so I'm not convinced 500 MB/s vs 100 MB/s matters that much
17:00
<alkisg>
fiesh: btw, I saw a recent issue with nfs, do you mind doing: grep nfs /proc/self/mountinfo, on any client,
17:01
and telling me the rsize/wsize there?
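For reference, rsize/wsize show up in the mount options field of /proc/self/mountinfo; a minimal sketch that pulls them out of a sample line (the line below is made up for illustration — on a real client, just run `grep nfs /proc/self/mountinfo`):

```shell
# Hypothetical NFSv4 mountinfo line, for illustration only
line='42 25 0:38 / /home rw,relatime shared:1 - nfs4 srv:/home rw,rsize=1048576,wsize=1048576,proto=tcp'
echo "$line" | grep -o 'rsize=[0-9]*'   # rsize=1048576
echo "$line" | grep -o 'wsize=[0-9]*'   # wsize=1048576
```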
17:01jgee has left IRC (jgee!~jgee@190.159.118.121, Ping timeout: 246 seconds)
17:01
<fiesh>
hmm I'll have to do that on Monday, don't have sshd enabled on the clients and am at home right now
17:01
sorry, but I'll paste it on Monday
17:01
<alkisg>
Sure
17:02
<fiesh>
I do have to say one thing that's improved my life is the new NUCs finally supporting more than just 16gb of ram
17:02
<alkisg>
I found out that on 100 mbps clients with a 1 gbps server, the default rsize/wsize causes extreme lag, increased bandwidth use, and loss of speed and responsiveness
17:02
<fiesh>
oh wow, what's the default, 8192?
17:02
<alkisg>
1M
17:02
<fiesh>
that's... gigantic?!
17:02
<alkisg>
Yeah
17:03
<fiesh>
I remember back in the NFS2 and 3 days, when I benchmarked, UDP with 8192 was the best
17:03
but that's of course an older protocol and ancient hardware now, UDP isn't even supported anymore
17:03jgee has joined IRC (jgee!~jgee@190.159.118.121)
17:03
<fiesh>
(which is funny because 8192 caused fragmentation, but that was still better than something below the MTU)
17:03
<alkisg>
Now it's tcp, and 4k is the fastest for remote booting and /home, while 32k is best overall, and 1M = a whole lot of lag (though much less lag on gigabit)
17:04
I'm still benchmarking all those
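The values being compared can be forced from the client side via mount options; a sketch, assuming a hypothetical NFSv4 export at `server:/home` (requires root and a real server, so shown for illustration only):

```shell
# Force a 32 KiB rsize/wsize instead of the 1 MiB default (NFSv4 over TCP)
mount -t nfs4 -o rsize=32768,wsize=32768 server:/home /home

# Or persistently, in /etc/fstab:
#   server:/home  /home  nfs4  rsize=32768,wsize=32768  0  0
```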
17:04
<fiesh>
yeah that may be very much worthwhile
17:04
<alkisg>
I think there's a bug in libc and readahead involved
17:04
<fiesh>
oh
17:04
<alkisg>
Hopefully if that's resolved, we'll see much better nfs results
17:05
<fiesh>
that's with an MTU of 1500 I gather?
17:05
<alkisg>
Yes
17:05
<fiesh>
funny how jumbo frames never really made it
17:05
like 95.264% of the hardware supports it, and the rest just keeps screwing it up
17:05
<alkisg>
I'm not sure they'd help. If booting requires reading a lot of small files, jumbo frames would just delay the responses
17:07
<fiesh>
hmm quite possible, yeah
17:07
<alkisg>
Btw, NBD and SSHFS have a lot of problems, but not that one. They avoid bandwidth issues without involving things like rsize/wsize
17:08
<fiesh>
the thing is though with booting, I feel that about 75% of the boot time is wasted on the initial start and the NUCs' PXE getting their initial dhcp lease
17:08
when that's done, iPXE kicks in and everything works quite fast
17:08
but the dhcp lease, always like 5 seconds or so...
17:09
<alkisg>
I don't mind much about boot time. As long as it's under a minute. But launching firefox in 10 secs vs 40 secs does make a lot of difference.
17:39
<fiesh>
hmm firefox takes less than 10 seconds to start I'd say
17:39
maybe like 3 to 5?
17:51
<alkisg>
The second time it's cached
17:51
Use: sync; echo 3 > /proc/sys/vm/drop_caches to measure the initial launch
17:52
(on the client)
17:54
I just tested here on my i5/1TB hdd, it needs 10 secs to show the window, before trying to load all my tabs etc which need another 10 secs
17:54
Of course on ssd it should be a lot faster
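The cold-start measurement above can be wrapped in a tiny helper (a sketch; dropping the page cache needs root, and `cold_start` is our own name here, not an LTSP tool):

```shell
# Drop the page cache so the next launch reads everything from disk/network,
# then time the command's cold start
cold_start() {
    sync
    echo 3 > /proc/sys/vm/drop_caches   # requires root
    time "$@"
}
# Usage (as root, on the client):  cold_start firefox
```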
17:58
<||cw>
jumbo frames don't do much for typical network traffic, but they do for things like iSCSI, where they're used a lot. usually not the full jumbo size, but about 4-6K depending on equipment and use case
17:58
might help with NBD too
17:59
if the protocol can use something other than 512-byte blocks, as is typical with block device emulation
17:59
<alkisg>
I'm guessing it saves 10% of tcp overhead; it would be nice but I'm more concerned about the 500% lag of nfs with wrong rsize...
18:00
My latest benchmarks: https://marc.info/?l=linux-nfs&m=156897388913959&w=2
18:00
84 secs to boot could be 31 secs to boot instead
18:01
And 1250 MB transferred to boot could be 320 MB instead
18:10shored has left IRC (shored!~shored@87-92-122-167.bb.dnainternet.fi, Read error: Connection reset by peer)
18:10shored has joined IRC (shored!~shored@87-92-122-167.bb.dnainternet.fi)
18:40
<alkisg>
!kvm
18:40
<ltsp>
kvm: Virtual thin client: kvm -m 256 -vga vmware -ctrl-grab -no-shutdown -net nic,model=virtio -net user,tftp=/var/lib/tftpboot,bootfile=/ltsp/i386/pxelinux.0
18:51spaced0ut has left IRC (spaced0ut!~spaced0ut@unaffiliated/spaced0ut, Remote host closed the connection)
18:55spaced0ut has joined IRC (spaced0ut!~spaced0ut@unaffiliated/spaced0ut)
19:23statler has left IRC (statler!~Georg@gwrz3.lohn24.de, Remote host closed the connection)
19:47pppingme has left IRC (pppingme!~pppingme@unaffiliated/pppingme, Ping timeout: 258 seconds)
19:50pppingme has joined IRC (pppingme!~pppingme@unaffiliated/pppingme)
19:58section1 has left IRC (section1!~section1@178.33.109.106, Quit: Leaving)
20:17kjackal has joined IRC (kjackal!~quassel@nat/canonical/x-iqofkgocpqxvlavd)
20:28Faith has left IRC (Faith!~Paty_@unaffiliated/faith, Quit: Leaving)
21:59ricotz has left IRC (ricotz!~ricotz@ubuntu/member/ricotz, Quit: Leaving)
22:05kjackal has left IRC (kjackal!~quassel@nat/canonical/x-iqofkgocpqxvlavd, Ping timeout: 268 seconds)