00:26 | lucascastro has left IRC (lucascastro!~lucascast@177-185-130-132.dynamic.isotelco.net.br, Read error: Connection reset by peer) | |
00:28 | lucascastro has joined IRC (lucascastro!~lucascast@177-185-130-132.dynamic.isotelco.net.br) | |
01:30 | Vercas has left IRC (Vercas!~Vercas@gateway/tor-sasl/vercas, Remote host closed the connection) | |
01:30 | Vercas has joined IRC (Vercas!~Vercas@gateway/tor-sasl/vercas) | |
02:56 | ServerStatsDisco has left IRC (ServerStatsDisco!~serversta@2001:470:69fc:105::1a) | |
04:11 | lucascastro has left IRC (lucascastro!~lucascast@177-185-130-132.dynamic.isotelco.net.br, Read error: Connection reset by peer) | |
04:11 | lucascastro has joined IRC (lucascastro!~lucascast@177-185-130-132.dynamic.isotelco.net.br) | |
04:34 | woernie has left IRC (woernie!~werner@p5dded819.dip0.t-ipconnect.de, Ping timeout: 256 seconds) | |
04:35 | woernie has joined IRC (woernie!~werner@p5b296fbf.dip0.t-ipconnect.de) | |
06:43 | ricotz has joined IRC (ricotz!~ricotz@ubuntu/member/ricotz) | |
06:54 | shored1 has joined IRC (shored1!~shored@user/shored) | |
06:55 | shored has left IRC (shored!~shored@user/shored, Ping timeout: 264 seconds) | |
09:00 | DouglasGiovaniOe has left IRC (DouglasGiovaniOe!~doguibnum@2001:470:69fc:105::1:929, Quit: You have been kicked for being idle) | |
09:27 | <Hyperbyte> !splash
| |
09:27 | <ltspbot> splash: to disable the splash screen in Ubuntu, in order to see any boot error messages, run `sudo gedit /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default` and remove quiet splash .
| |
09:28 | <Hyperbyte> Old :-)
| |
09:28 | alkisg, any idea why I don't see a splash screen during boot on my (VM) client?
| |
09:28 | <alkisg> !forget splash
| |
09:28 | <ltspbot> The operation succeeded.
| |
09:29 | <alkisg> !learn splash as `To enable the splash screen, you may put KERNEL_PARAMETERS="quiet splash" under [clients] in ltsp.conf, and then run: ltsp ipxe`
| |
09:29 | <ltspbot> The operation succeeded.
| |
09:29 | <alkisg> !splash
| |
09:29 | <ltspbot> splash: To enable the splash screen, you may put KERNEL_PARAMETERS="quiet splash" under [clients] in ltsp.conf, and then run: ltsp ipxe
| |
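A minimal sketch of what that factoid describes, assuming the default config location /etc/ltsp/ltsp.conf (adjust if yours differs):

    [clients]
    KERNEL_PARAMETERS="quiet splash"

followed by regenerating the boot entries:

    ltsp ipxe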
09:30 | <alkisg> It's hidden by default because I prefer efficiency over windows-like boot messages
| |
09:30 | "Please wait, we are setting up some things for you"...
| |
09:30 | "BSOD 0x12345678: go figure!"
| |
09:33 | <Hyperbyte> I absolutely agree.
| |
09:34 | But for many users (especially older people), the systemd/kernel boot messages can be a little bit confusing.
| |
09:34 | Looks more pro too with a bootup logo :-)
| |
09:36 | <alkisg> As long as a sysadmin is around that can re-enable the boot messages when troubleshooting, sure, go for it
| |
09:36 | But if the elders are without a sysadmin, then I'd force-feed them the boot messages over plymouth :D
| |
09:44 | <Hyperbyte> Well I'm the sysadmin ;-)
| |
09:45 | <alkisg> 👍️
| |
09:46 | <Hyperbyte> Should I worry about "Dev loopN: unable to read RBD block 8" during boot?
| |
09:49 | <alkisg> RBD isn't installed by default, I don't know if you did any experiments with it. I'd uninstall it.
| |
09:51 | <Hyperbyte> Clean install of Ubuntu 20.04
| |
09:51 | Fresh VM... (server is a VM too)
| |
09:51 | So I guess somehow it's been installed.
| |
09:51 | I'll remove it :-)
| |
10:07 | vsuojanen has left IRC (vsuojanen!~vsuojanen@cable-hml-585682-65.dhcp.inet.fi, Ping timeout: 256 seconds) | |
10:09 | vsuojanen has joined IRC (vsuojanen!~vsuojanen@cable-hml-585682-65.dhcp.inet.fi) | |
10:10 | <Hyperbyte> Is there a ppa/repository somewhere that already has the remoteapps included, alkisg? Else I'll try to drop the applets in manually, the changes seem easy enough
| |
10:11 | <alkisg> !proposed
| |
10:11 | <ltspbot> Error: "proposed" is not a valid command.
| |
10:12 | <alkisg> https://ltsp.org/advanced/proposed-ppa/
| |
10:13 | <Hyperbyte> Nice :-)
| |
10:31 | shored has joined IRC (shored!~shored@user/shored) | |
10:32 | shored1 has left IRC (shored1!~shored@user/shored, Ping timeout: 264 seconds) | |
11:21 | <Hyperbyte> alkisg, rbd packages are not installed, but still get those errors. Weird.
| |
11:23 | <alkisg> Hyperbyte: do you also get them on the server itself, while booting it?
| |
11:24 | I've never seen that message in any of my setups. Did you start with the ubuntu server iso, or with the desktop iso?
| |
11:32 | Vercas has left IRC (Vercas!~Vercas@gateway/tor-sasl/vercas, Remote host closed the connection) | |
11:32 | Vercas4 has joined IRC (Vercas4!~Vercas@gateway/tor-sasl/vercas) | |
11:43 | Vercas4 has left IRC (Vercas4!~Vercas@gateway/tor-sasl/vercas, Remote host closed the connection) | |
11:44 | Vercas has joined IRC (Vercas!~Vercas@gateway/tor-sasl/vercas) | |
11:48 | <Hyperbyte> alkisg, desktop. I'll see what the server says.
| |
13:08 | woernie has left IRC (woernie!~werner@p5b296fbf.dip0.t-ipconnect.de, Ping timeout: 264 seconds) | |
13:08 | woernie has joined IRC (woernie!~werner@p5b296fbf.dip0.t-ipconnect.de) | |
13:56 | danboid has joined IRC (danboid!~dan@cpc127016-macc4-2-0-cust104.1-3.cable.virginm.net) | |
13:56 | <danboid> alkisg, What are the mount commands required to use apt under a chroot properly?
| |
14:01 | I'm not keen on the thought of using VM images for chroots. I can see why some might prefer that but I'd rather use a plain chroot dir
| |
14:09 | Is /proc enough?
| |
14:09 | mount -t proc proc /proc
| |
14:14 | Looks like I should configure schroot if I want to use regular chroots
| |
14:15 | https://wiki.tolabaki.gr/w/LTSP_Fat_Client_Setup#Setup_schroot_for_better_chroot_management
| |
14:15 | alkisg, Have you used schroot?
| |
14:16 | or anyone else here?
| |
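For reference, a minimal schroot definition along the lines of that wiki page might look like the sketch below; the chroot path, name and user are placeholders, so treat it as a starting point rather than a tested recipe:

    # /etc/schroot/chroot.d/ltsp-x86_64.conf
    [ltsp-x86_64]
    description=LTSP client chroot
    type=directory
    directory=/srv/ltsp/x86_64
    users=dan
    root-users=dan

With that in place, `schroot -c ltsp-x86_64 -u root -- apt-get update` runs apt inside the chroot, with the usual filesystems handled by schroot's setup scripts.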
14:18 | <MUHWALT> https://askubuntu.com/questions/633645/use-apt-get-in-chroot-directory no experience doing that, but this was the first result on google
| |
14:19 | you could also try #ubuntu, since your question isn't ltsp specific
| |
14:30 | <Hyperbyte> danboid, why exactly are you using chroots instead of just using the server OS?
| |
14:31 | If your answer is because server has a different OS, or you want to run additional software on the server, etc....
| |
14:32 | You might want to consider doing what I usually do: put the entire LTSP server in a VM. You can just maintain/test your image/chroot on the VM console, you can even add another net booting VM to test clients and then your host server isn't interfering with the LTSP at all.
| |
14:32 | <danboid> Chroots seem like they should be cleaner to me. There are a few apps I do want to have installed on the server but not in the client image and vice versa. Also, I like the idea of being able to host multiple LTSP images/chroots
| |
14:32 | <Hyperbyte> Right.
| |
14:32 | I'd put the entire thing in a VM.
| |
14:33 | Also makes upgrading Ubuntu releases or doing major updates easier. Just copy the VM, do your upgrade and if stuff breaks, boot the old VM.
| |
14:35 | <danboid> I'd have to work out how to host the home dirs from the host but I'm sure that wouldn't be too tricky so yes I probably could
| |
14:35 | <Hyperbyte> I'm currently updating one of the companies we manage from Ubuntu 18.04 to 20.04. I've actually configured a different next-server in my dhcpd.conf (which runs on the host) for my PXE client VM. So I'm booting my PXE client VM for development purposes into 20.04, while everyone is still using 18.04.
| |
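As a sketch of that dhcpd.conf trick (MAC address and IPs are made up), a single host block can send just the test VM to the new server while everyone else keeps the old next-server:

    host pxe-test-vm {
        hardware ethernet 52:54:00:12:34:56;   # the test VM's MAC
        next-server 192.168.1.20;              # the 20.04 development LTSP VM
    }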
14:36 | I used to do what you do, for your exact reasons. Until I found out how much simpler and more flexible it is to just virtualize the entire LTSP server. Good luck. :-)
| |
14:37 | <MUHWALT> I am also running chroots right now on ltsp5
| |
14:37 | and plan to not use chroots in 21
| |
14:37 | <danboid> MUHWALT, What do you plan to do instead? VM images?
| |
14:37 | <MUHWALT> ltsp image /
| |
14:38 | just image the host OS
| |
14:38 | <danboid> What do you not like about using chroots?
| |
14:38 | It's just to make things simpler then?
| |
14:38 | <MUHWALT> Just an extra step to forget
| |
14:38 | lol
| |
14:38 | <danboid> OK
| |
14:38 | * MUHWALT logs onto server, apt-get install somethingorother, ltsp-update-image | |
14:39 | * MUHWALT wonders why it's not on the client | |
14:39 | <alkisg> danboid: some installers just won't work in chroots. And you'd have to file bugs there to make them work
| |
14:39 | <MUHWALT> I don't have a complicated setup, though. We just use LTSP for office desktops... 99% of what we do is webbased
| |
14:39 | <danboid> alkisg, OK that'd be a showstopper so I'll have to use VM images then
| |
14:40 | <MUHWALT> I think I have one person that uses GIMP regularly... and a few libreoffice users for when Google Workspace can't open the docx
| |
14:40 | <alkisg> danboid: for the most part, you should be able to do it by mounting /proc, /sys, /dev, /dev/pts, and /run
| |
14:40 | E.g. the last error that you've shown me, complained about /dev/pts not being there
| |
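A rough sketch of those mounts before running apt in a plain chroot (the chroot path is just an example):

    CHROOT=/srv/chroot/focal
    mount -t proc  proc  "$CHROOT/proc"
    mount -t sysfs sysfs "$CHROOT/sys"
    mount --bind /dev     "$CHROOT/dev"
    mount --bind /dev/pts "$CHROOT/dev/pts"
    mount --bind /run     "$CHROOT/run"
    chroot "$CHROOT" apt-get update
    # ...and umount them in reverse order when done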
14:42 | There are a lot of utilities that help in maintaining chroot-like directories. You can netboot them with kvm, you can use schroot (which also has issues), lxc (which has other issues) etc
| |
14:43 | But they offer no real advantage over VMs. VMs are a whole lot simpler than chroots, that's why I didn't bother developing an `ltsp-chroot`-like tool in the new ltsp
| |
14:43 | There were so many "bug reports" for the old one, and we couldn't solve them, because they weren't in ltsp, but in the "other package postinsts, that didn't work properly in chroots"
| |
14:44 | <Hyperbyte> I still prefer VM-ing the entire server instead of just the client images :-)
| |
14:44 | <alkisg> You can do both
| |
14:45 | <Hyperbyte> Now you're just talking crazy!
| |
14:45 | Hehe
| |
14:46 | Hey alkisg, feature suggestion... will make it easier for people making scripts. It'd be great if you'd set some environment variable for LTSP clients, like export LTSP_CLIENT=true
| |
14:47 | <alkisg> test -d /run/ltsp/client
| |
14:47 | There's no reason to pollute the environment for that
| |
14:47 | <Hyperbyte> Currently I need to distinguish in scripts between server seat session, LTSP client session and server x2go session.
| |
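A sketch of such a check in a login script: the LTSP test is the one alkisg gives just above, while detecting x2go via the X2GO_SESSION variable is an assumption to verify on your own setup:

    if [ -d /run/ltsp/client ]; then
        session_type=ltsp-client
    elif [ -n "$X2GO_SESSION" ]; then    # assumed x2go marker; verify locally
        session_type=x2go
    else
        session_type=local-seat
    fi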
14:47 | Ah
| |
14:47 | <danboid> Hyperbyte, So do you use SSHFS or NFS on the VM host for the home dirs or is that in your VM too? I have to keep the home dirs on the main server
| |
14:47 | <Hyperbyte> Well I can do that. :-)
| |
14:47 | <alkisg> In my case, I have 20 VMs inside virtualbox, and I symlink the one that I want to test, and boot ltsp clients from it, without even running ltsp image
| |
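A sketch of that symlink approach, using the image directory mentioned a bit further down (whether a given VM image format can be booted directly without running `ltsp image` depends on your setup, so check the LTSP docs first):

    ln -sfn "/home/alkisg/VirtualBox VMs/x86_64-test/x86_64-test.img" /srv/ltsp/images/x86_64.img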
14:48 | <Hyperbyte> danboid, there are a few solutions. I actually have the home dirs on a partition on the host, which I mount inside my LTSP VM. And then just let LTSP use sshfs. But you could easily mount the home dirs over NFS and then serve via sshfs, or NFS directly on the clients... whatever you want, really.
| |
14:49 | <danboid> I'll be using a 6 SSD ZFS pool for home dirs
| |
14:49 | <alkisg> danboid: is your ltsp server a vm already? If so, with what host?
| |
14:50 | <Hyperbyte> danboid, whatever works for you. Any storage will do, as long as you can get it mounted in your VM.
| |
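For the NFS variant of what Hyperbyte describes, the host-side export and the mount inside the LTSP VM could look roughly like this (addresses are placeholders):

    # /etc/exports on the VM host
    /home  192.168.100.0/24(rw,sync,no_subtree_check)

    # inside the LTSP VM, e.g. in /etc/fstab or by hand
    mount -t nfs 192.168.100.1:/home /home

Remember to run `exportfs -ra` on the host after editing /etc/exports.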
14:50 | <danboid> Our current LTSP server is an ancient bare metal chrootless LTSP5 install of 16.04
| |
14:51 | <alkisg> danboid: I would do this: I would boot an ltsp client, and log in there as danboid. It would sshfs to my main ltsp server. I would run virtualbox, and launch my x86_64.vmdk VM, and maintain it graphically
| |
14:51 | Then I would close the VM and run: ssh main-server ltsp image x86_64
| |
14:52 | That way your server can be headless, and things can be veeeery easy to maintain
| |
14:52 | The VM will be in /home/danboid/virtualbox/x86_64, and then ltsp image will compress it and put it in /srv/ltsp/images/x86_64.img
| |
14:55 | <danboid> You lost me on the abstractions there alkisg! It looks like I'll be configuring my LTSP client image under virt-manager / qemu at first, then I should be able to use qemu from the cli to update the image
| |
14:56 | <alkisg> If it's a desktop image, why are you using cli to maintain it?
| |
14:56 | Even by just using `debootstrap; apt install mate-desktop`, you are going down paths that mate developers do not test
| |
14:57 | Sure it'll be good for us if you test everything and report bugs (i.e. outside ltsp) and have them fixed for everyone, but if you want the least amount of trouble, you should be using your software in the most common ways...
| |
14:58 | But anyway sure, qemu+cli is a whole lot better than chroot
| |
14:58 | <danboid> because I don't want to have to scp the image to my local machine to graphically boot into it. I'm not sure I'll be able to boot into it graphically remotely, and in most cases I won't want to anyway; I just want to change a file or add/remove a package
| |
14:58 | <alkisg> Why would you need to scp the image?
| |
14:58 | If you're on an ltsp client, you're already using sshfs to the server
| |
14:59 | <danboid> I don't work from an LTSP client
| |
14:59 | <alkisg> Then use any remote desktop method, there are many
| |
15:00 | Another example: accepting the realvnc license for all users requires that you run it under xorg. It can't be done via cli
| |
15:00 | And sure, that's the exception, but it shows the trend...
| |
15:02 | lucas_ has joined IRC (lucas_!~lucascast@192-140-51-192.static.oncabo.net.br) | |
15:02 | lucascastro has left IRC (lucascastro!~lucascast@177-185-130-132.dynamic.isotelco.net.br, Ping timeout: 245 seconds) | |
15:05 | <danboid> OK, I see what you're suggesting now. I've never tried sshfs but it would be very useful for that purpose
| |
15:06 | <alkisg> What are your clients using, NFS? You can still use NFS with the new ltsp, there's no reason to switch to sshfs for that
| |
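If memory serves, NFS home directories with the new LTSP are configured via FSTAB_ entries in ltsp.conf, roughly like the sketch below; double-check the ltsp.conf man page before relying on it (server name and options are placeholders):

    [clients]
    FSTAB_HOME="server:/home /home nfs defaults,nolock 0 0"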
15:07 | <danboid> We can use what we want for this new build, within reason. It defaults to sshfs, doesn't it?
| |
15:08 | <alkisg> Yes
| |
15:08 | As did the old one
| |
15:09 | Anyway, set up the ltsp server, install the ltsp packages there without a desktop environment, and you can easily switch the ltsp image management method later
| |
15:10 | <danboid> Hyperbyte has got me wondering about trying to partly containerize it, but there's more chance of me messing it up if I try to get too clever
| |
15:12 | I'd imagine in my case it might be advantageous to containerise it all except the home dirs which would be on the host machine
| |
15:12 | <alkisg> Why, what worries you about the home dirs?
| |
15:12 | <danboid> using LXD
| |
15:14 | We've got six 4 TB SATA SSDs that are going to be used for the home dirs. Gonna use RAIDZ2 for that
| |
15:15 | <alkisg> That doesn't bother ltsp, you can still put everything in your rootfs without any qemu/vm/chroot whatsoever if you like
| |
15:15 | <MUHWALT> raidz2 sounds slow and expensive :D
| |
15:16 | <danboid> Reliability and data integrity are more important than performance here
| |
15:16 | <alkisg> In raid, if the file system gets damaged, you lose everything
| |
15:16 | * alkisg prefers zfs + snapshots + rsync to another disk over raid; it's more fault tolerant | |
15:17 | <MUHWALT> raid10 seems fine... losing 2 SSDs in the same set sounds unlikely
| |
15:17 | not sure about 6/z2, but rebuilding parity takes FOR-EVER on raid5
| |
15:18 | I haven't dove into zfs/btrfs yet
| |
15:18 | <danboid> Shouldn't you be using zfs_autobackup or znapzend instead of rsync alkisg?
| |
15:18 | <MUHWALT> just solid backups :)
| |
15:18 | <danboid> If it's ZFS -> ZFS
| |
15:18 | Using zfs send manually is a bit tedious
| |
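For context, the manual round of zfs send that danboid calls tedious is roughly this (pool and snapshot names are made up, and the previous snapshot must already exist on the backup pool):

    zfs snapshot tank/home@today
    zfs send -i tank/home@yesterday tank/home@today | zfs receive backup/home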
15:19 | <alkisg> danboid: yeah that's in our todo list :D
| |
15:26 | <danboid> Has anyone done much testing of 21.10 clients?
| |
15:26 | Using LTSP with Ubuntu 21.10
| |
15:26 | Well, I suppose it's only been out a month or so at least
| |
15:26 | <alkisg> Not much. I booted an iso; it needs a workaround for rsync that I list in the github issues; that's all
| |
15:27 | * alkisg is on 20.04 currently | |
15:27 | <alkisg> *22.04
| |
15:28 | <danboid> I've never run a beta version of ubuntu, never even tried one
| |
15:29 | <alkisg> Eh, it's not much different than non-lts versions :P
| |
15:29 | <danboid> I've run debian sid
| |
15:29 | and... wait for it... Arch
| |
15:29 | Check me out
| |
15:29 | <alkisg> Haha, I have arch/manjaro and a few others in VMs, but I never got the time to run them on bare metal
| |
15:30 | * alkisg tests if 22.04 boots in ltsp client mode... | |
15:32 | <danboid> There's nothing exciting coming up in 22.04 other than it being the next LTS and everything being a bit newer, right? Sounds like ZFS won't be seeing any new features, and what I want to see is ZFS support in the Ubuntu server installer
| |
15:33 | Like Proxmox VE has
| |
15:33 | <alkisg> Yeah I didn't see anything exciting either. It'll be the first time that I'll advise my schools to update from 20.04, rather than clean-install
| |
15:33 | <danboid> Makes it super easy to install to RAIDZ pools
| |
15:34 | Why?
| |
15:34 | <alkisg> In previous versions there were many changes, e.g. from gnome to mate, or from old ltsp to new ltsp
| |
15:34 | Now there are no major changes
| |
15:35 | I don't see much value in raid for /. A backup now and then, even with dd, to a second disk, is enough for me. Raid for /home, sure
| |
15:35 | So I don't mind the lack of zfs support in installers
| |
15:39 | <danboid> Being able to revert to recent zfs snapshot states via the grub history menu (aka boot environments as they are known in FreeBSD and Solaris) is the best feature to have come to Linux in forever IMO
| |
15:40 | zfsd / zsys still has a few kinks from what I hear though, which is why it's still marked experimental
| |
15:41 | Yeah it's not a proper backup but it's better than restoring from backups when it is an option
| |
15:41 | It's 'instant'
| |
15:41 | <alkisg> Eh, when I have the need for rootfs snapshots, I put them in VMs and do it from Vbox. I very rarely need that though.
| |
15:42 | In more than 10 years, with more than 1000 schools, the worst case was a new kernel that stopped them from booting, and they had to call me and I'd tell them to select "previous kernel"
| |
15:43 | <danboid> you can't deny it's nice to have on bare metal even if you rarely use it. Esp. if you are doing dev/testing that requires bare metal boot for whatever reason
| |
15:43 | <alkisg> Of course I don't deny it's nice to have
| |
15:44 | When they do ship it by default, I'll use it. As long as it requires me to jump through hoops to get it, I ignore it :D
| |
15:44 | But, I'm a bit skeptical about the memory usage of ZFS, I'm not sure the average user will want it when they have e.g. 4 GB RAM
| |
15:48 | <danboid> It doesn't look like it will become the default any time soon, certainly not for 20.04
| |
15:49 | Has anyone else already set up LTSP with ZFS home dirs? I'll have to write a small pam script to create new datasets for LDAP users, if no-one's done this already
| |
15:50 | new datasets for users' home dirs I mean
| |
15:52 | So on successful LDAP login, pam_exec checks to see if their home dir exists, and if not a new dataset is created for that user
| |
15:52 | Should only be a few lines
| |
15:54 | Someone must've done this already, surely
| |
15:54 | I'll be sure to document it if not
| |
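In case it helps as a starting point, a sketch of such a hook: a line like `session optional pam_exec.so /usr/local/sbin/zfs-mkhome.sh` in the appropriate /etc/pam.d file, plus a script along these lines (pool layout and paths are assumptions, untested):

    #!/bin/sh
    # /usr/local/sbin/zfs-mkhome.sh - create a per-user ZFS dataset on first login
    [ "$PAM_TYPE" = "open_session" ] || exit 0    # pam_exec exports PAM_TYPE and PAM_USER
    home="/home/$PAM_USER"
    if [ ! -d "$home" ]; then
        zfs create "tank/home/$PAM_USER" || exit 1    # assumes tank/home is mounted at /home
        chown "$PAM_USER:" "$home"
        chmod 0700 "$home"
    fi
    exit 0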
16:01 | vsuojanen has left IRC (vsuojanen!~vsuojanen@cable-hml-585682-65.dhcp.inet.fi, Ping timeout: 256 seconds) | |
16:03 | vsuojanen has joined IRC (vsuojanen!~vsuojanen@cable-hml-585682-65.dhcp.inet.fi) | |
16:21 | <alkisg> 22.04 seems to work fine as an ltsp client
| |
16:21 | The new ltsp is much more stable regarding distribution and program updates than the old one... maybe it's also systemd that brings some of the stability
| |
16:21 | (api-wise)
| |
16:25 | ogra has left IRC (ogra!~ogra_@2a01:4f8:c0c:2271::1, Quit: Coyote finally caught me) | |
16:26 | ogra_ has joined IRC (ogra_!~ogra_@2a01:4f8:c0c:2271::1) | |
16:38 | <alkisg> !tag
| |
16:38 | <ltspbot> tag: tag: git tag -s v20.03 -m 'Version 20.03' && git push --tags
| |
17:04 | lucas_ has left IRC (lucas_!~lucascast@192-140-51-192.static.oncabo.net.br, Ping timeout: 250 seconds) | |
18:05 | lucascastro has joined IRC (lucascastro!~lucascast@45-167-143-6.netfacil.inf.br) | |
19:23 | danboid has left IRC (danboid!~dan@cpc127016-macc4-2-0-cust104.1-3.cable.virginm.net, Quit: Leaving) | |
19:29 | woernie has left IRC (woernie!~werner@p5b296fbf.dip0.t-ipconnect.de, Quit: No Ping reply in 180 seconds.) | |
19:30 | woernie has joined IRC (woernie!~werner@p5b296fbf.dip0.t-ipconnect.de) | |
19:37 | woernie has left IRC (woernie!~werner@p5b296fbf.dip0.t-ipconnect.de, Remote host closed the connection) | |
20:10 | lucascastro has left IRC (lucascastro!~lucascast@45-167-143-6.netfacil.inf.br, Ping timeout: 264 seconds) | |
21:59 | lucascastro has joined IRC (lucascastro!~lucascast@192-140-51-192.static.oncabo.net.br) | |
23:02 | ricotz has left IRC (ricotz!~ricotz@ubuntu/member/ricotz, Quit: Leaving) | |