03:36 | GodFather has left IRC (GodFather!~rcc@wsip-66-210-242-210.ph.ph.cox.net, Ping timeout: 276 seconds) | |
03:50 | lucas_ has joined IRC (lucas_!~lucascast@177-185-133-170.dynamic.isotelco.net.br) | |
03:51 | lucas_ is now known as Guest49163 | |
03:59 | lucascastro has left IRC (lucascastro!~lucascast@177-185-133-170.dynamic.isotelco.net.br, *.net *.split) | |
03:59 | sutula has left IRC (sutula!~sutula@184.97.9.9, *.net *.split) | |
04:01 | sutula has joined IRC (sutula!~sutula@184.97.9.9) | |
04:04 | vagrantc has left IRC (vagrantc!~vagrant@unaffiliated/vagrantc, Quit: leaving) | |
06:01 | RaphGro has joined IRC (RaphGro!~raphgro@fedora/raphgro) | |
06:41 | ricotz has joined IRC (ricotz!~ricotz@ubuntu/member/ricotz) | |
07:44 | woernie has joined IRC (woernie!~werner@pd9e8b5cc.dip0.t-ipconnect.de) | |
07:55 | woernie has left IRC (woernie!~werner@pd9e8b5cc.dip0.t-ipconnect.de, Ping timeout: 264 seconds) | |
07:56 | woernie has joined IRC (woernie!~werner@pd9e8b5cc.dip0.t-ipconnect.de) | |
07:59 | sfxworks has joined IRC (sfxworks!46a07ce1@ip70-160-124-225.hr.hr.cox.net) | |
08:00 | <sfxworks> Hi. Following the guide: I get "I don't know how to mount /srv/ltsp/ubuntu.img"
| |
08:00 | <alkisg> Hi sfxworks, and what is that image?
| |
08:00 | <sfxworks> Could not parse output of: ip -o route get 192.168.67.1
| |
08:00 | LTSP command failed: blkid -po export /srv/ltsp/ubuntu.img
| |
08:00 | was the error
| |
08:00 | <alkisg> What's the output of `file /srv/ltsp/ubuntu.img`
| |
08:00 | <sfxworks> root@ryzen1:~/vms# file /srv/ltsp/ubuntu.img
| |
08:01 | /srv/ltsp/ubuntu.img: symbolic link to /root/vms/focal-server-cloudimg-amd64.img
| |
08:01 | It is the ubuntu 20.04 image
| |
08:01 | Though I did try to add my ssh key to it
| |
08:01 | I used virt-sysprep -a focal-server-cloudimg-amd64.img --ssh-inject root:file:/root/.ssh/id_rsa.pub
| |
08:02 | <alkisg> What's the output of `file /root/vms/focal-server-cloudimg-amd64.img`
| |
08:02 | I haven't seen that image
| |
08:02 | <sfxworks> `/root/vms/focal-server-cloudimg-amd64.img: QEMU QCOW2 Image (v2), 2361393152 bytes`
| |
08:02 | woernie has left IRC (woernie!~werner@pd9e8b5cc.dip0.t-ipconnect.de, Ping timeout: 264 seconds) | |
08:02 | <sfxworks> It is from https://cloud-images.ubuntu.com/focal/current/
| |
08:02 | <alkisg> sfxworks: ltsp doesn't support qcow2, it only supports raw images
| |
08:02 | You'll need to convert it to a raw image
| |
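[Editor's note] The conversion alkisg describes can be done with `qemu-img`; a sketch using the paths pasted above (filenames are the ones from this session, adjust as needed):

```shell
# Convert the qcow2 cloud image to raw, which ltsp can mount directly.
qemu-img convert -f qcow2 -O raw \
    /root/vms/focal-server-cloudimg-amd64.img \
    /root/vms/focal-server-cloudimg-amd64.raw

# Repoint the symlink that ltsp uses at the raw file.
ln -sf /root/vms/focal-server-cloudimg-amd64.raw /srv/ltsp/ubuntu.img
```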
08:02 | <sfxworks> Ah okay, thanks!
| |
08:02 | <alkisg> np
| |
08:04 | woernie has joined IRC (woernie!~werner@www.velometrik.eu) | |
08:09 | <alkisg> (initially, I started using libguestfs to support qcow2 as well, but then I wanted to be able to mount the VMs from the initramfs as well, to boot clients directly with them, so I stopped using libguestfs)
| |
08:15 | <sfxworks> I see... hmm, I think I possibly followed the wrong installation guide? https://ltsp.org/docs/installation/ As soon as I ran the ltsp image command, my server rebooted into an ubuntu desktop environment whose password I do not know...
| |
08:16 | <alkisg> sfxworks: eeeh, ltsp doesn't modify your server
| |
08:16 | Are you talking about your client?
| |
08:17 | That page is the one and only installation guide, it's fine
| |
08:17 | <sfxworks> No err the server. I ran it on a machine in a datacenter and it just suddenly rebooted into a desktop env.
| |
08:17 | <alkisg> Can you share the history of commands that you ran?
| |
08:17 | <sfxworks> I had to connect to it via its BMC
| |
08:17 | <alkisg> I.e. type history or sudo history
| |
08:17 | (put it to pastebin if so)
| |
08:17 | Or you could share your screen with me if you like, it'll certainly be faster...
| |
08:18 | <sfxworks> Here is what I have: https://gist.github.com/sfxworks/7e2d1a014242d67031a145f8e704a0d9
| |
08:19 | Ah I do not want to take your time with a screenshare! Sorry about this though I do appreciate the support.
| |
08:19 | <alkisg> No worries. I usually prefer screen sharing as it's 10 times faster than IRC :)
| |
08:19 | * alkisg does "remote support" as one of his 3 main jobs... | |
08:21 | <sfxworks> Erm, I am sorry I am a bit hesitant in doing that. Though if you are available for a call using discord or similar I could share my screen there. I would just not be ok with yielding control.
| |
08:22 | <alkisg> No worries
| |
08:22 | sfxworks: I don't use cloud images (why did you opt for cloud instead of a usual desktop image?), but I don't see anything that resembles the installation of a desktop environment there
| |
08:23 | <sfxworks> Yeah that's what I am wondering about...
| |
08:23 | <alkisg> So on your server, what's the output of: ls /usr/share/xsessions
| |
08:23 | <sfxworks> It definitely has taken me by surprise. Especially since this was an environment with containers and nothing is installed on this host.
| |
08:23 | <alkisg> And of : cat /var/log/apt/history.log
| |
08:25 | <sfxworks> https://gist.github.com/sfxworks/a6f0c7cf98fe6513bd907b4c48f66c63
| |
08:25 | I see `gnome-control-center-data:amd64`
| |
08:25 | from the apt install --install-recommends
| |
08:25 | ltsp etc
| |
08:26 | <alkisg> sfxworks: ah, you installed epoptes in a headless environment
| |
08:26 | epoptes is a graphical tool, you should have omitted that on headless servers
| |
08:26 | <sfxworks> Ah understood!
| |
08:26 | <alkisg> I'd purge all these packages that were installed by that last command, then I'd reinstall ltsp without epoptes
| |
08:29 | <sfxworks> Do you think I would be just as safe with a purge on just epoptes?
| |
08:29 | Running a purge on the packages from the install command yields:
| |
08:29 | kubelet : Depends: ethtool but it is not going to be installed
| |
08:29 | and similar
| |
08:29 | <alkisg> Try sudo apt purge --auto-remove epoptes epoptes-client, and see
| |
08:29 | If it removes gnome, ok, if not, no
| |
08:30 | Autoremove doesn't remove packages that were recommended by packages OTHER than epoptes
| |
08:30 | So it'll leave packages that weren't there yesterday
| |
08:33 | <sfxworks> Gotcha.. Understood. Giving that a shot. Thank you so much!
| |
08:33 | <alkisg> np
| |
08:33 | The best would be to get the list of the packages, and remove just them
| |
08:33 | A simple sed command should produce the list...
| |
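[Editor's note] The sed approach alkisg hints at could look like this (a sketch; it assumes the standard `Install: pkg:arch (version), ...` layout of apt's history.log):

```shell
# Pull the package names out of the most recent "Install:" line of apt's
# history log, producing a space-separated list suitable for `apt purge`.
grep '^Install:' /var/log/apt/history.log | tail -n 1 \
    | sed -e 's/^Install: //' \
          -e 's/ ([^)]*)//g' \
          -e 's/:[a-z0-9-]*//g' \
          -e 's/,//g'
```

The output can then be fed to something like `sudo apt purge --auto-remove $(...)`.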
08:41 | <sfxworks> Perfect! Situation resolved.
| |
08:41 | <alkisg> Great
| |
08:59 | adrianorg has left IRC (adrianorg!~adrianorg@177.156.230.68, Ping timeout: 276 seconds) | |
08:59 | adrianorg has joined IRC (adrianorg!~adrianorg@189.58.183.22.dynamic.adsl.gvt.net.br) | |
09:10 | <sfxworks> Should I be concerned about: Mar 09 09:07:42 ryzen1 dnsmasq-tftp[6954]: file /srv/tftp/ltsp/ltsp.img not found?
| |
09:10 | <alkisg> sfxworks: yes, it means you didn't run `ltsp initrd`
| |
09:10 | The clients won't boot without it
| |
09:10 | <sfxworks> Ah
| |
09:11 | woernie has left IRC (woernie!~werner@www.velometrik.eu, Ping timeout: 256 seconds) | |
09:11 | woernie has joined IRC (woernie!~werner@pd9e8b5cc.dip0.t-ipconnect.de) | |
09:13 | <sfxworks> Thanks! Sorry I thought that was optional. I'm guessing ltsp nfs is required too?
| |
09:13 | <alkisg> sfxworks: yes all the steps in the installation page are mandatory
| |
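[Editor's note] The sequence on that page is roughly the following (a sketch based on the installation docs; `ubuntu` is assumed as the image name per the paths above, and epoptes is omitted for headless servers):

```shell
ltsp dnsmasq        # configure dnsmasq for (proxy)DHCP and TFTP
ltsp image ubuntu   # (re)generate the client image from /srv/ltsp/ubuntu.img
ltsp ipxe           # generate the iPXE binaries and menu under /srv/tftp
ltsp nfs            # export the image directories over NFS
ltsp initrd         # build /srv/tftp/ltsp/ltsp.img; clients won't boot without it
```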
09:15 | <sfxworks> Understood, thanks!
| |
09:19 | No errors so far, but when I ran nfs, initrd and image I got `Could not parse output of: ip -o route get 192.168.67.1` even though I have that IP on the machine via netplan on eno2. Is this something to be concerned about?
| |
09:20 | It appears as `local 192.168.67.1 dev lo table local src 192.168.67.1 uid 0 \ cache <local>`
| |
09:20 | It also appears on ip a ` inet 192.168.67.1/24 brd 192.168.67.255 scope global eno2`
| |
09:21 | <alkisg> sfxworks: it shouldn't cause any issues, but can you upload the whole output of `ip -o route get 192.168.67.1` to pastebin?
| |
09:21 | <sfxworks> Just can't seem to ping the machine fetching the files yet: `PING 192.168.67.247 (192.168.67.247) 56(84) bytes of data.` `From 192.168.67.1 icmp_seq=1 Destination Host Unreachable`
| |
09:22 | The whole output is just `local 192.168.67.1 dev lo table local src 192.168.67.1 uid 0 \ cache <local>`
| |
09:22 | <alkisg> OK, I'll note it down and troubleshoot that later
| |
09:22 | It shouldn't cause any issues
| |
09:23 | <sfxworks> Understood.
| |
09:23 | The files were served... `sent /srv/tftp/ltsp/ubuntu/initrd.img to 192.168.67.247`. Hmm I wonder why it isn't up yet...
| |
09:23 | <alkisg> Next message there would be the nfs successful mount notification
| |
09:24 | Run `journalctl -f` on the server, and watch the client screen too
| |
09:29 | <sfxworks> Hmm I am not seeing this. Though I made sure I ran `ltsp nfs` and I have the nfs-kernel-server package `nfs-kernel-server is already the newest version (1:1.3.4-2.5ubuntu6).`
| |
09:29 | <alkisg> What messages do you see on the client screen?
| |
09:30 | <sfxworks> Unfortunately these clients do not have a bmc. I can try running this on a home lab.
| |
09:30 | <alkisg> What is a bmc?
| |
09:31 | A screen?
| |
09:31 | Can't you test with a VM client with a normal screen?
| |
09:31 | <sfxworks> baseboard management controller
| |
09:31 | Err yeah thats a good idea
| |
09:32 | I'd just have to remake some bridges...
| |
09:48 | ghaoil has joined IRC (ghaoil!~ghaoil@31.7.247.13) | |
09:53 | <sfxworks> Can't replicate using the VM. Interesting... I wonder if one of my machines is having issues in general.
| |
09:53 | Nice though, glad to see it in action!
| |
09:54 | <alkisg> Great
| |
09:59 | <sfxworks> This appears on the clients from the log... `error 8 User aborted the transfer received from 192.168.67.247`
| |
09:59 | I found this as a result: https://dnsmasq-discuss.thekelleys.org.narkive.com/dM1IXeKH/dnsmasq-tftp-failed-sending-file
| |
09:59 | <alkisg> sfxworks: and the client fails to boot, or continues normally?
| |
10:00 | <sfxworks> For the two baremetal machines (not the VM) the logs show that dnsmasq-tftp is sending more files for a bit. Though it isn't reachable.
| |
10:01 | I can ping the vm and ssh (though I need to adjust my keys) but can't do anything with the bare metal machines.
| |
10:02 | <alkisg> Eeeh, so did you see a login screen on the VM? Were you able to login with a user there?
| |
10:02 | <sfxworks> yeah I have a login screen on the VM
| |
10:03 | Though it didn't save the ssh public key I injected
| |
10:03 | Assuming it is working otherwise though.
| |
10:03 | <alkisg> !ssh
| |
10:03 | <ltspbot> I do not know about 'ssh', but I do know about these similar topics: 'sshd'
| |
10:03 | <alkisg> !sshd
| |
10:03 | <ltspbot> sshd: Exposing sshd host keys over NFS is unsafe, so it's disabled by default and !epoptes is recommended instead. If you insist on running sshd in LTSP clients, read https://github.com/ltsp/ltsp/discussions/310#discussioncomment-101549
| |
10:04 | <alkisg> About the bare metal client, we'd need to read its screen
| |
10:43 | <ghaoil> Is ltsp still affected by the sshfs / gnome-keyring bug described here: https://bugzilla.gnome.org/show_bug.cgi?id=730587
| |
10:43 | <alkisg> ghaoil: yes and no; i.e. it is, but a workaround disables gnome-keyring if it detects sshfs
| |
10:44 | So the end result is that ltsp users who use sshfs just can't use gnome-keyring, but they also avoid the "65000 temp files" side effect
| |
10:45 | <ghaoil> I tried mounting the '~/.local/share/keyrings' separately and it works around the issue while still allowing you to use gnome keyring.
| |
10:45 | So it's recursively mounted.
| |
10:45 | <alkisg> ghaoil: where is it mounted from, e.g. sshfs, or nfs, or tmpfs?
| |
10:46 | <ghaoil> It's mounted from sshfs too. It works around the issue for whatever reason.
| |
10:46 | <alkisg> How are you doing a second sshfs mount without requiring the user to enter his password again?
| |
10:46 | passwordless sshfs?
| |
10:46 | <ghaoil> https://pastebin.com/zxrZtdsF
| |
10:47 | <alkisg> pam_mount?
| |
10:47 | <ghaoil> I'm using pam_mount.
| |
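[Editor's note] The pastebin contents aren't quoted in the log; as a rough, hypothetical equivalent, the extra keyring mount amounts to a second sshfs mount on top of the home mount ('server' is a placeholder hostname; pam_mount would perform the same mount at login using the user's password):

```shell
# Hypothetical sketch: overlay only ~/.local/share/keyrings with its own
# sshfs mount, with hardlinks disabled on that mount.
mkdir -p "$HOME/.local/share/keyrings"
sshfs -o nodev,nosuid,disable_hardlink \
    "$USER@server:.local/share/keyrings" \
    "$HOME/.local/share/keyrings"
```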
10:47 | <alkisg> LTSP is disabling symlink forwarding for safety and in order for chrome to work,
| |
10:48 | one could enable them and have sshfs working (like you do), but at that ^ cost
| |
10:48 | <ghaoil> With the configuration in that paste, chrome works.
| |
10:48 | <alkisg> And .ICEauthority too?
| |
10:49 | ghaoil: ah ok you disable hardlinks, you don't enable symlink forwarding
| |
10:49 | <ghaoil> I could test it if that would be helpful. You'd have to tell me how though.
| |
10:49 | <alkisg> I don't know what that breaks
| |
10:50 | That means that the ln (without -s) command would fail, along with any program that uses it
| |
10:50 | I'd prefer to spend a couple of hours and fix this in gnome-keyring, rather than try to find which other programs would break if applying "disable_hardlink"...
| |
10:52 | <ghaoil> ln without -s works.
| |
10:52 | <alkisg> Although, if you're only applying that inside that dir, it shouldn't affect other programs...
| |
10:52 | Inside that dir?
| |
10:52 | Or outside it?
| |
10:52 | <ghaoil> Inside and out
| |
10:53 | <alkisg> man sshfs => -o disable_hardlink => link(2) will return with errno set to ENOSYS. Hard links don't currently work perfectly on sshfs, and this confuses some programs. If that happens try disabling hard links with this option.
| |
10:54 | I read that as "the link call will fail"; not sure which program will misbehave with that
| |
10:54 | <ghaoil> alkisg, fwiw I remembered this bug and we have been using this workaround for more than two years.
| |
10:55 | <alkisg> You could probably just put SSH_OPTIONS="-o disable_hardlink" in ltsp.conf instead of that
| |
10:55 | <ghaoil> This exact configuration. I'd love to see a bug fix in gnome-keyring but given that the bug is open since 05-2014, I won't hold my breath.
| |
10:55 | <alkisg> Yeah someone affected should spend some time to send a patch
| |
10:56 | I didn't want to do that myself too, as we don't even use sshfs here, we're using nfs
| |
10:56 | <ghaoil> I'm not using LTSP, I used to, we could not imagine not having gnome-keyring as we're using evolution extensively.
| |
10:57 | <alkisg> Eh, it's just an ltsp.conf option to make it work
| |
10:57 | SSH_OPTIONS="-o disable_hardlink"
| |
10:57 | And if it doesn't break anything else for you, np then
| |
10:57 | <ghaoil> I, just wanted to bring it to your attention.
| |
10:57 | <alkisg> Thank you for that, much appreciated
| |
10:57 | <ghaoil> So are you saying that I could do a single mount?
| |
10:58 | <alkisg> Maybe disabling hardlinks is a better default than disabling gnome-keyring
| |
10:58 | Yes
| |
10:58 | Well... not sure if any programs would break with that
| |
10:58 | You'd need to test; just put disable_hardlink in the main sshfs mount
| |
11:01 | <ghaoil> Okay, I will do some tests with that. I need to make sure that it wouldn't break anything else before I deploy it.
| |
11:01 | <alkisg> :thumbs:
| |
12:05 | bcg__ has left IRC (bcg__!~b@dg4ybwyyyyyyyyyyyyyyt-3.rev.dnainternet.fi, Quit: bcg__) | |
12:05 | bcg has joined IRC (bcg!~b@dg4ybwyyyyyyyyyyyyyyt-3.rev.dnainternet.fi) | |
12:13 | Douglas_br has joined IRC (Douglas_br!bd4cbe5a@189.76.190.90) | |
12:15 | <Douglas_br> Hello! How are you? I am reading the ltsp.conf doc, and I'm very interested in client multiseat. Does it work with HDMI and VGA devices?
| |
12:16 | and DVI. Nowadays there are motherboards with all 3 outputs
| |
12:36 | A question, please: when configuring a user with autologin, on the server side I add the user in GUI mode; do I set the password in base64 there too, or does the base64 password go only inside ltsp.conf?
| |
12:52 | for example: GUI mode: user01, password: user01. So in ltsp.conf do I change it to base64?
| |
13:30 | ghaoil has left IRC (ghaoil!~ghaoil@31.7.247.13, Read error: Connection reset by peer) | |
13:30 | ghaoil has joined IRC (ghaoil!~ghaoil@31.7.247.13) | |
13:54 | Douglas_br has left IRC (Douglas_br!bd4cbe5a@189.76.190.90, Ping timeout: 240 seconds) | |
14:02 | Douglas_br has joined IRC (Douglas_br!bd4cbe5a@189.76.190.90) | |
14:04 | <Douglas_br> hello
| |
14:04 | A question, please: when configuring a user with autologin, on the server side I add the user in GUI mode; do I set the password in base64 there too, or does the base64 password go only inside ltsp.conf?
| |
14:04 | for example: GUI mode: user01, password: user01. So in ltsp.conf do I change it to base64?
| |
14:13 | <sebd> Douglas_br: I haven't used the autologin feature yet, sorry. But please wait some more time, someone will know and answer you.
| |
14:13 | Usually questions get answered quite fast here.
| |
14:17 | <alkisg> Douglas_br: regarding multiseat, ltsp currently supports what systemd supports, which is clients with MORE than one graphics card
| |
14:18 | Xorg does support multiseat with graphics cards that have multiple outputs, but systemd doesn't support that; so it's possible to do that in ltsp, but it's not automated
| |
14:19 | Autologin only needs a password if you use sshfs. If you use nfs, there's no need to specify a password,
| |
14:19 | the password is set for the user normally on the server, and in ltsp.conf it goes in base64 form
| |
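[Editor's note] So for Douglas_br's example, only the encoding step is done by hand; a sketch (the exact ltsp.conf key syntax is in `man ltsp.conf`):

```shell
# The user and password are created normally on the server, e.g.:
#   sudo adduser user01
# ltsp.conf then only needs the base64 form of the plaintext password:
printf '%s' 'user01' | base64    # prints: dXNlcjAx
```

That printed value is what goes into ltsp.conf next to the autologin user; see the PASSWORDS_* examples in the ltsp.conf man page for the exact syntax.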
14:51 | sfxworks has left IRC (sfxworks!46a07ce1@ip70-160-124-225.hr.hr.cox.net, Ping timeout: 240 seconds) | |
14:52 | <alkisg> ghaoil: you're using iscsi to netboot the clients? Did you put a custom script in initramfs-tools for that?
| |
14:53 | <ghaoil> There is a parameter set in the iscsi configuration on the client
| |
14:53 | That is all that's needed
| |
14:54 | I have ipxe make the initial connection
| |
14:54 | <alkisg> ghaoil: can you share the output of `cat /proc/cmdline` of an ltsp client?
| |
14:59 | <ghaoil> alkisg: https://pastebin.com/Z5K8KBMi
| |
14:59 | We don't use LTSP
| |
14:59 | <alkisg> ghaoil: sure, but... where's the netboot information there? :D
| |
14:59 | ipxe can connect to iscsi, sure, but what about the initramfs, how can it find it?
| |
15:00 | Are you sure that's from a netbooted (sorry not ltsp) client, and not from the server itself?
| |
15:01 | <ghaoil> You create the file '/etc/iscsi/iscsi.initramfs'
| |
15:01 | And set its contents to 'ISCSI_AUTO=true'
| |
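[Editor's note] Put together, the open-iscsi initramfs setup ghaoil describes is (a sketch; Debian/Ubuntu paths, run as root):

```shell
# Tell open-iscsi's initramfs hook to log in to the iSCSI target at boot;
# the target details themselves arrive via iPXE/DHCP.
echo 'ISCSI_AUTO=true' > /etc/iscsi/iscsi.initramfs

# Rebuild the initramfs so the hook and setting are included.
update-initramfs -u
```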
15:01 | <alkisg> Something like this? https://github.com/intel/intelRSD/issues/26#issuecomment-311656769
| |
15:02 | If so, ok then, netbooting is handled by open-iscsi...
| |
15:02 | And I guess you send the information via dhcp
| |
15:03 | <ghaoil> Yes, we're using open-iscsi as described in that post.
| |
14:54 | <alkisg> ghaoil: how does the "cow" part work? Each client gets its own pool/snapshot of zfs/lvm/something?
| |
15:05 | <ghaoil> In ZFS we create a snapshot after we change something. Then the clone is made from the last snapshot.
| |
15:05 | An existing snapshot gets deleted unless we choose not to. So it allows for 'persistent' images.
| |
15:05 | <alkisg> But how can multiple clients work with a single snapshot? Or does each client get its own snapshot or image?
| |
15:06 | <ghaoil> In LVM the LV gets cloned right away.
| |
15:06 | Each client gets its own snapshot, using the mac address as identifier, which we retrieve through iPXE
| |
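[Editor's note] The per-client ZFS flow described above could be sketched like this (hypothetical pool and dataset names; the MAC would come from iPXE):

```shell
# Hypothetical sketch: tank/images/focal is the golden image dataset.
# 1) After changing the image, snapshot it:
zfs snapshot "tank/images/focal@$(date +%Y%m%d-%H%M)"

# 2) At client boot, clone the newest snapshot into a per-client dataset,
#    keyed by the MAC address reported by iPXE:
MAC="52-54-00-12-34-56"   # placeholder, colons already replaced
LAST=$(zfs list -H -r -t snapshot -o name -s creation "tank/images/focal" | tail -n 1)
zfs destroy -r "tank/clients/$MAC" 2>/dev/null || true   # skip this for 'persistent' clients
zfs clone "$LAST" "tank/clients/$MAC"
```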
15:07 | <alkisg> Is this secure? If a client fakes another mac, it can inject data to the other's disk?
| |
15:09 | <ghaoil> Yes, theoretically, knowing the mac of another computer you could connect to its image. I can imagine using CHAP authentication to prevent this; haven't implemented it.
| |
15:09 | Also, iSCSI is not secure.
| |
15:09 | <alkisg> Is there any benefit to "not using cow", other than saving a few MB RAM on the netbooted client?
| |
15:10 | Why not just use an overlay?
| |
15:10 | <ghaoil> We ran into issues where printing was a big problem because of the lack of storage space.
| |
15:11 | <alkisg> Also, theoretically, a misbehaving client may fill up your server's disk by writing junk in its own / ?
| |
15:12 | <ghaoil> No, it's limited to a maximum size, which we set to 32GB; clients will typically use less than that.
| |
15:12 | <alkisg> Very nice
| |
15:12 | <ghaoil> Also, we include a swap partition in our image.
| |
15:12 | <alkisg> Does it also support a /swapfile? Or does it hang over iscsi?
| |
15:12 | Got it
| |
15:13 | <ghaoil> And we compress the image using ZSTD on the filesystem level which has improved performance dramatically.
| |
15:13 | <alkisg> Although, the image isn't compressed in squashfs, right? So disk access should be around 5 times lower...
| |
15:13 | Aaaah
| |
15:14 | Hehe you answer right before I ask the question :D
| |
15:14 | OK then only the metadata is uncompressed, it should be around 2 times slower than squashfs...
| |
15:15 | (based on benchmarks I've done with squashfs over nfs, vs btrfs image with compression over nfs)
| |
15:16 | ghaoil: how stable is iscsi? E.g. nfs allows even server rebooting without client hanging; while nbd was so unstable, that a small disconnection would make clients hang
| |
15:16 | <ghaoil> Yes, squashfs is faster but we've found this to be more theoretical than practical, as we couldn't tell the difference in day to day use. And we're happy not to have to build images.
| |
15:18 | We've tried NBD too; although it's slightly faster (again purely theoretically), iSCSI is a lot more stable and supports reconnect.
| |
15:18 | The client does freeze eventually but continues when the server is available again.
| |
15:18 | <alkisg> Awesome
| |
15:20 | So essentially you gain a lot, and only lose the ability to netboot a chroot or an .iso
| |
15:20 | (the new ltsp supports netbooting .isos directly, with no ltsp image involved)
| |
15:21 | <ghaoil> You can export ISOs with LIO
| |
15:21 | But I've never done anything with it.
| |
15:23 | <alkisg> I definitely want to add iscsi booting support to ltsp, and I also played with zfs snapshots a bit and I was amazed by their possibilities
| |
15:23 | The ltsp web server will also be similar to what you have for ipxe, except it'll manage more stuff, like ltsp.img and settings
| |
15:24 | Another thing I want to add support for, is local mirroring / caching, as with the new disk technology, local disks are waaaaay faster than networks
| |
15:25 | <ghaoil> It's very nice to be able to 'do stuff' as the client connects to your server at boot time.
| |
15:25 | <alkisg> NVMe drives = 20 gbps each, times 10 clients => 200 gbps, can't have network that fast
| |
15:26 | <ghaoil> I've found that on a server with enough memory and ZFS, you cannot win on performance by using fast drives.
| |
15:26 | <alkisg> Yeah, unfortunately I had to rewrite ltsp from scratch, and I didn't have time to add a web service along the way
| |
15:26 | I've tested ltsp on up to 80 gbps network with zfs
| |
15:27 | It's excellent, but it won't reach the speed of local nvme drives
| |
15:27 | <ghaoil> As long as the image fits in the memory of the server, you don't need fast drives.
| |
15:27 | <alkisg> The problem isn't while reading or caching the image on the server
| |
15:27 | It's while sending it to the clients
| |
15:27 | If the server has a 10 gbps NIC, it can't send more than that
| |
15:28 | If it has 1 gbps nic... then 10 clients launching libreoffice will need 15 seconds each
| |
15:28 | <ghaoil> True, I've never reached nvme speeds on the clients
| |
15:28 | <alkisg> This is fine, but raising to 100 clients causes a bottleneck
| |
15:29 | While with bcache or other technologies, local caching could dramatically improve this
| |
15:29 | For local caching to work though, the same read only base should be synced between clients
| |
15:29 | And a writable overlay can be created over it using local storage, to have enough space for swap, printing etc
| |
15:31 | 10 years in the future, I imagine I'll have usb 4.0 sticks that will cost 5€ and be able to transfer at 10+ gbps each; while my local network will still be less than that for ALL clients... I'll definitely want to utilize local storage then
| |
15:32 | (a usb 3 ssd can do that currently with 20 €)
| |
15:33 | <ghaoil> It's possible to have local nvme and sync an image locally.
| |
15:34 | Especially with ZFS where you can just send the delta, this should be fast.
| |
15:37 | <alkisg> Indeed that's a very good option, we're also using it for remote /home backups
| |
15:38 | <ghaoil> Do you have anything online regarding the ltsp web interface
| |
15:38 | <alkisg> Only a blueprint, https://github.com/ltsp/ltsp/issues/147
| |
15:39 | Is your code uploaded publicly somewhere? Or is it internal for now?
| |
15:39 | <ghaoil> Are you writing it in python?
| |
15:39 | <alkisg> I haven't started yet. I'm between python, php and node
| |
15:40 | The "real" code will definitely be shell and python; just the web service may be python/php/node
| |
15:40 | <ghaoil> I wrote our web interface in node and have it connect to a python daemon using sockets.
| |
15:41 | It's very basic, but it does the trick.
| |
15:41 | It was my first python project.
| |
15:41 | <alkisg> I also want to implement long polling there, so that the server will be able to push notifications to the clients at any time
| |
15:42 | I described what I could in that url above; but I'll start implementing it next year, not now...
| |
15:43 | <ghaoil> I had a socket connection between the client and the server once. I didn't have a real need for it so I dropped it.
| |
15:43 | More recently I have been thinking about avahi
| |
15:44 | But that's more for client discovery, not sending commands or messages.
| |
15:44 | <alkisg> Yeah, and it also doesn't work over WAN
| |
15:45 | <ghaoil> true
| |
15:45 | <alkisg> Some schools here already have 250Mbps connections; I want to be able to tell them "netboot this school client from my remote support office, and I'll install the XXX OS for you"
| |
15:46 | <ghaoil> Well, I'm sure that our use case is a lot more narrow than the things you've come across.
| |
15:47 | I wouldn't mind to make some of the code available, so you can have a look.
| |
15:48 | <alkisg> ghaoil: thank you for that. I'd love to! But it'll have to wait until next year, as I'm finishing my ...very...late... phd this year :)
| |
15:49 | <ghaoil> Very encouraging, good luck with that.
| |
16:01 | ghaoil has left IRC (ghaoil!~ghaoil@31.7.247.13, Ping timeout: 276 seconds) | |
16:28 | Douglas_br has left IRC (Douglas_br!bd4cbe5a@189.76.190.90, Quit: Connection closed) | |
17:52 | ghaoil has joined IRC (ghaoil!~ghaoil@hs-user-138.wia.cz) | |
18:45 | GodFather has joined IRC (GodFather!~rcc@wsip-66-210-242-210.ph.ph.cox.net) | |
18:48 | ghaoil has left IRC (ghaoil!~ghaoil@hs-user-138.wia.cz, Ping timeout: 256 seconds) | |
18:56 | GodFather has left IRC (GodFather!~rcc@wsip-66-210-242-210.ph.ph.cox.net, Quit: Ex-Chat) | |
18:56 | GodFather has joined IRC (GodFather!~rcc@wsip-66-210-242-210.ph.ph.cox.net) | |
19:13 | ghaoil has joined IRC (ghaoil!~ghaoil@hs-user-138.wia.cz) | |
20:09 | woernie has left IRC (woernie!~werner@pd9e8b5cc.dip0.t-ipconnect.de, Remote host closed the connection) | |
20:40 | ghaoil has left IRC (ghaoil!~ghaoil@hs-user-138.wia.cz, Ping timeout: 246 seconds) | |
21:23 | RaphGro has left IRC (RaphGro!~raphgro@fedora/raphgro, Quit: Please remember your own message. It'll be read as soon as possible.) | |
22:04 | bcg has left IRC (bcg!~b@dg4ybwyyyyyyyyyyyyyyt-3.rev.dnainternet.fi, Ping timeout: 272 seconds) | |
22:06 | ricotz has left IRC (ricotz!~ricotz@ubuntu/member/ricotz, Quit: Leaving) | |
22:27 | bcg has joined IRC (bcg!~b@dg4ybwyyyyyyyyyyyyyyt-3.rev.dnainternet.fi) | |