IRC chat logs for #ltsp on irc.libera.chat (webchat)


Channel log from 22 January 2019   (all times are UTC)

01:18vagrantc has left IRC (vagrantc!~vagrant@unaffiliated/vagrantc, Quit: leaving)
03:45jgee has left IRC (jgee!~jgee@190.159.118.121, Quit: The Lounge - https://thelounge.github.io)
03:51jgee has joined IRC (jgee!~jgee@190.159.118.121)
06:45ricotz has joined IRC (ricotz!~ricotz@ubuntu/member/ricotz)
07:22
<fiesh>
afaik btrfs has writable snapshots
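For reference, a minimal sketch of what fiesh describes (paths are examples); btrfs snapshots are writable unless created with -r:

    btrfs subvolume snapshot /srv/image /srv/image-rw     # writable snapshot
    btrfs subvolume snapshot -r /srv/image /srv/image-ro  # read-only variant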
07:32kjackal has joined IRC (kjackal!~quassel@2a02:587:3101:f300:11c8:94ba:36fa:6264)
10:02nehemiah has joined IRC (nehemiah!~nehemiah@hs-user-138.wia.cz)
11:03Faith has joined IRC (Faith!~Paty_@unaffiliated/faith)
11:18woernie has joined IRC (woernie!~werner@p508675BE.dip0.t-ipconnect.de)
11:23
<woernie>
Hello, first, thanks for this project. I'm pretty new to LTSP. I've set up an LTSP server on Ubuntu 18.04 and I have 32-bit clients and 64-bit clients
11:23
With ltsp-build-client --purge-chroot I've set up a 64-bit client and a 32-bit client
11:24
The 64-bit clients connect fine, but not the 32-bit ones. Can I have both 32-bit and 64-bit clients?
11:24
<alkisg>
The easiest way is to have a single 32bit installation on the server for all clients with no chroots
11:24
Any reason not to do that?
11:25
<woernie>
I was thinking of having the full power of the server, but my thinking might be wrong
11:28
Also to have the full power of fat clients
11:32
<alkisg>
It doesn't make much difference
11:32
Unless your clients have more than 8 GB RAM
11:32
How much RAM do your server and your clients have?
11:39
<woernie>
clients: most 64-bit ones have 1024-4096, the 32-bit clients 512-1024. The server is a VM on Proxmox with 20GB RAM and could have up to 60GB RAM
11:41
there are around 10 64bit clients and 2 32bit clients
11:43
<alkisg>
woernie: are you going to have any thin clients?
11:43
Well anyway, even if you have 2 thin clients, it's still better to:
11:44
1) restrict the server ram to 8 gb
11:44
2) use just one single i386 chrootless installation anywhere
11:44
You lose 10% speed, you gain 20% ram for programs, no big deal
11:45
i386 installations only have issues if you use them with lots of ram, where a bug hits and the disk access gets 100 times slower!
11:45
So with 8 gb ram on the server (it doesn't need more), you'll be fine with just a 32bit installation
11:46
An alternative would be to have 64bit chrootless on the server, and a tiny, thin 32bit chroot just for the 2 clients
11:46
But the server still wouldn't use the extra ram
11:46
*need
11:48
So anyway, all that said, you can configure 32bit vs 64bit chroots either with mac reservations in dhcp/dnsmasq,
11:49
or via ifcpu64 at the pxelinux level,
11:49
or with cpuid at the ipxe level (we'll probably use that one in ltsp6)
11:49
So in dnsmasq.conf you'd configure that the 2 i386 clients would get the i386 chroot, instead of the amd64 chroot
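A minimal dnsmasq sketch of the MAC-reservation approach alkisg describes (the MAC addresses and TFTP paths are examples):

    # Tag the two i386 clients by their MAC addresses
    dhcp-host=00:11:22:33:44:55,set:i386
    dhcp-host=00:11:22:33:44:66,set:i386
    # i386-tagged clients boot the i386 chroot's loader; everyone else gets amd64
    dhcp-boot=tag:i386,/ltsp/i386/pxelinux.0
    dhcp-boot=/ltsp/amd64/pxelinux.0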
11:52
<woernie>
I've found that "ifcpu64" but I couldn't get it to work. Is there documentation?
11:53
<alkisg>
Not in ltsp. There is documentation on the syslinux site.
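For reference, a minimal pxelinux.cfg/default sketch using ifcpu64.c32, following the syslinux documentation (the label names, kernel paths and root device are examples):

    DEFAULT detect
    LABEL detect
        COM32 ifcpu64.c32
        APPEND boot64 -- boot32            # 64-bit label first, 32-bit after the --
    LABEL boot64
        KERNEL amd64/vmlinuz
        APPEND ro initrd=amd64/initrd.img root=/dev/nbd0
    LABEL boot32
        KERNEL i386/vmlinuz
        APPEND ro initrd=i386/initrd.img root=/dev/nbd0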
11:53
<woernie>
Sorry, I have to go to a meeting right now; will be back later
11:53
<alkisg>
np I need to leave too
11:53
Bye
12:07
<JuJUBee>
Has anybody ever run LTSP Server in a VM?
12:13adrianorg has left IRC (adrianorg!~adrianorg@177.156.56.117, Ping timeout: 240 seconds)
12:18
<Hyperbyte>
JuJUBee, sure.
12:19
In fact, I'd recommend it. Assuming you use the right software and know what you're doing.
12:19
It's really easy to test system upgrades and/or large updates, it's easy to make backups, and you can have a complementary VM PXE client for remote testing of the actual environment
12:21
When I was migrating to Ubuntu 18, I had two VMs running. I configured (via dhcp) select clients to use the test environment for Ubuntu 18 and the rest the working Ubuntu 16.
12:21
I've run both fat and thin clients VM'd.
12:26
<JuJUBee>
Hyperbyte, thanks. How do you mount it in the VM and what virtualization software are you using? I have VirtualBox.
12:26
I created a folder on host and used vboxsf to mount it via fstab in vm.
12:26
I cannot create new users if I point their home dir to the mounted folder.
12:26
in the VM that is.
12:27
I should clarify I am talking about the users home dirs...
12:27
I already have the server running in vm fine
12:37
<alkisg>
I think Hyperbyte has /home inside the VM, not outside it...
12:38
You can also just mount the whole /home *partition* in the VM as a disk
12:46adrianorg has joined IRC (adrianorg!~adrianorg@177.156.56.117)
12:49
<JuJUBee>
alkisg, so I tried mounting a partition on host as /home on guest and I cannot create a user. Any thoughts on how to mount it? I used vboxsf to mount in fstab
12:50
When I asked in #vbox they say "You do *not* want shared folders for this. It's going to fail. Big time!"
12:50
Should I use nfs to mount as /home in vm?
12:56
<alkisg>
JuJUBee: create a vm disk, and mount it as /home
12:56
It's the same as mounting a partition, just easier
12:56
<JuJUBee>
OK, thanks. I will try that.
12:57
<alkisg>
Of course there's no failure when mounting partitions, I'm doing it all the time
12:57
Either you expressed the problem wrong, or they misunderstood, or you misunderstood, or someone that didn't know answered
12:59
<JuJUBee>
So how should I mount from the host to the guest for /home? NFS or vboxsf? If it's NFS, don't the user accounts also have to exist on the host?
12:59
<alkisg>
JuJUBee: ah you didn't understand
12:59
I proposed: create a vm disk, like /home/jujube/virtualbox vms/ltsp/home.vmdk
13:00
And attach it to the vm like a usual emulated disk
13:00
The vm will see it as a sata disk
13:01
This way it's usual ext4, no fancy file systems
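A sketch of those steps with the VirtualBox CLI; the VM name "ltsp", the controller name "SATA" and the sizes are assumptions:

    # On the host: create the disk and attach it to the VM
    VBoxManage createmedium disk --filename "$HOME/VirtualBox VMs/ltsp/home.vmdk" \
        --size 102400 --format VMDK       # size is in MB, so ~100 GB
    VBoxManage storageattach ltsp --storagectl SATA --port 1 --device 0 \
        --type hdd --medium "$HOME/VirtualBox VMs/ltsp/home.vmdk"
    # Inside the guest: format the new disk and mount it as /home
    mkfs.ext4 /dev/sdb
    echo '/dev/sdb /home ext4 defaults 0 2' >> /etc/fstab
    mount /home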
13:01
<JuJUBee>
So the user files will still be inside a virtual environment, not a physical one?
13:01
<alkisg>
Yes
13:01
But you can exclude it from snapshots
13:02
<JuJUBee>
Ah
13:02
<alkisg>
And you can loop-mount it on the host whenever you need it
13:02
If it's "raw vmdk", you can just mount -o loop home.vmdk /mnt
13:02
It's like a partition, just within a file
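A sketch of that loop mount, assuming the image is a fixed-size/raw vmdk formatted directly as ext4 (no partition table inside) and the VM is powered off:

    mount -o loop,ro home.vmdk /mnt   # read-only is safer while inspecting
    ls /mnt
    umount /mnt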
13:04
<JuJUBee>
Can I mount it inside 2 VMs at the same time?
13:05
<alkisg>
No, it's not a networked file system
13:05
<JuJUBee>
ok
13:05
<alkisg>
If you use network file systems, you no longer use normal file systems, and you end up with issues
13:05
Like "file locks not working", or "wrong posix attributes" etc
13:05
<JuJUBee>
ok
13:05
<alkisg>
Sure, they "usually" work, but be prepared for malfunctions
13:06
If you decide to use a network file system, go for nfs
13:06
Put anonymous nfs on the host, so that the clients can access it without user accounts there
13:06
Of course it's less safe than secured nfsv4, but it'll be easier/faster
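A sketch of such a plain sec=sys (no Kerberos) export in /etc/exports on the host, so clients are trusted by uid alone and no matching accounts are needed there; the network range is an example:

    /home  192.168.1.0/24(rw,async,no_subtree_check)

Then apply it with exportfs -ra.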
13:09
<JuJUBee>
The reason for multiple mounts is that I teach some web dev classes and want the students' websites to be separated from my gateway/classroom web server. Didn't want to have userdir running on the main web server if possible.
13:10
<alkisg>
So you want your apache server to show a remote dir?
13:11
<JuJUBee>
Just for ~user accounts
13:11
<alkisg>
So, "yes"
13:11
<JuJUBee>
then yes
13:11
<alkisg>
Then you'd need to configure the apache web server to have nfs access to your VM
13:11
or something like that
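A sketch of that wiring on a Debian/Ubuntu web server (the hostname ltsp-vm is an example); mod_userdir serves /home/<user>/public_html at http://server/~user/:

    mount -t nfs ltsp-vm:/home /home   # or a matching fstab/autofs entry
    a2enmod userdir                    # enables the /~user/ mapping
    systemctl reload apache2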
13:12
<JuJUBee>
That is what I was thinking.
13:13
So it doesn't worry you to have user files in a VM? Maybe I am being overcautious?
13:13
<alkisg>
Oh personally I'm not using VMs in installations
13:13
Too many things can go wrong, for no benefit to my users
13:13
This is a good idea only for experienced sysadmins that know how to handle them
13:15
What are the benefits of a VM for you?
13:15
<JuJUBee>
I wanted the recoverability of a VM (using a nightly backup)
13:15
for the server at least
13:16
<alkisg>
And you care about recovering the server instead of the user files?
13:16
I can reinstall a server in 30 mins, that doesn't worry me at all
13:16
If you said you wanted to snapshot user files, I'd respect that
13:16
<JuJUBee>
No, but nightly backup makes server restore a snap. I backup user files separately
13:17
<alkisg>
From 1000 installations, I think restore would help in 1 case
13:17
(maintaining 1000 schools here)
13:17
<JuJUBee>
WOW
13:17
<alkisg>
And in that case, it just took me 30 mins to reinstall
13:18
But if your sysadmin plays with the server each day, then sure, you'd need frequent backups
13:18
Otherwise remember that with VMs, now TWO file systems can break, either the host or the guest file system
13:18
And a ton of middleware between
13:20
<JuJUBee>
So currently I have 2 physical servers, a gateway/firewall/web/database server and an LTSP server. I wanted to keep this scenario but do it with only one physical box. I inherited an IBM server with 32 cores and 512GB RAM
13:21
<alkisg>
Sure, a monster server _should_ be utilized with VMs
13:21
My schools have regular i5 machines as servers, with e.g. 8 GB RAM
13:21
<JuJUBee>
My current server is an HP quad core with 8G ram working nicely
13:22
My gateway/firewall... is a 13-year-old dual core with 8G RAM and starting to get flaky.
13:22
Figured it was time to change
13:25
<alkisg>
How many disks does your server have?
13:25
<JuJUBee>
4 at the moment, 1TB RAID 5
13:26
<alkisg>
Anyway, I think in your case I'd go with regular LTSP in a VM, with "regular" nfs home, and just export the nfs read only to the web server
13:26
!nfs
13:26
<ltsp>
nfs: to enable NFS home directories for localapps and fat clients, install nfs-kernel-server on your server, nfs-common on your client (don't forget ltsp-update-image), and put this in lts.conf: FSTAB_1="server:/home /home nfs defaults,nolock 0 0"
13:26
<alkisg>
The user data would be inside the VM, snapshotted along with everything
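A sketch of that read-only export, in /etc/exports inside the LTSP VM (the web server's hostname is an example):

    /home  webserver.lan(ro,all_squash,no_subtree_check)

Again followed by exportfs -ra inside the VM.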
13:27
<JuJUBee>
I also like the vm disk approach. Separate vmdk for /home if I am going to keep everything in vm.
13:28adrianor1 has joined IRC (adrianor1!~adrianorg@179.177.208.103.dynamic.adsl.gvt.net.br)
13:28
<JuJUBee>
My host has /home as 2.2TB so I can just place the vmdk there
13:29
That way I could have separate snapshots of the home dirs and the rest of the server.
13:31adrianorg has left IRC (adrianorg!~adrianorg@177.156.56.117, Ping timeout: 250 seconds)
16:54woernie has left IRC (woernie!~werner@p508675BE.dip0.t-ipconnect.de, Remote host closed the connection)
18:49vagrantc has joined IRC (vagrantc!~vagrant@unaffiliated/vagrantc)
19:47josefig has left IRC (josefig!~jose@unaffiliated/josefig, Ping timeout: 245 seconds)
19:49josefig has joined IRC (josefig!~jose@unaffiliated/josefig)
19:59Faith has left IRC (Faith!~Paty_@unaffiliated/faith, Quit: Leaving)
20:36
<||cw>
JuJUBee: I've been using this for my vmhosts lately, on zfs. I like it better than proxmox: http://www.ubuntuboss.com/ubuntu-server-18-04-as-a-hypervisor-using-kvm-and-kimchi-for-vm-management/
20:37
I also do 2 small mirrored SSDs for the host OS and the disks in a raidz
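A sketch of the data-pool part of that layout (the pool and device names are examples; the mirrored host-OS SSDs would be set up at install time):

    zpool create -o ashift=12 tank raidz /dev/sdc /dev/sdd /dev/sde /dev/sdf
    zfs create tank/vms               # dataset to hold the VM storage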
21:41* Hyperbyte is using the good old libvirt with qemu-kvm
21:42
<Hyperbyte>
I create LVM partitions on the host which I assign directly to the VMs as disks. That way there's no filesystem overhead from the host and no filesystem that can break on the host.
21:43
The only thing that can complicate things is that you have a partition table within a partition table, but believe it or not, Linux can actually mount specific partitions from an entire disk written on an LVM partition.
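A sketch of reaching such a nested partition from the host (the LV path and mapper names are examples; shut the VM down first). kpartx maps each partition inside the LV to its own device node:

    kpartx -av /dev/vg0/ltsp-disk     # creates e.g. /dev/mapper/vg0-ltsp--disk1
    mount -o ro /dev/mapper/vg0-ltsp--disk1 /mnt
    umount /mnt
    kpartx -dv /dev/vg0/ltsp-disk     # remove the mappings again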
21:44
<vagrantc>
or a partition table within an lvm which has an lvm partition on it ... once that caused me some weirdness... long enough ago that i forget the details
21:44
but yeah, i've used libvirt for quite some years now
21:45
with lvm backed devices almost exclusively
21:48spaced0ut has left IRC (spaced0ut!~spaced0ut@unaffiliated/spaced0ut, Quit: Leaving)
22:15kjackal has left IRC (kjackal!~quassel@2a02:587:3101:f300:11c8:94ba:36fa:6264, Ping timeout: 252 seconds)
22:37ricotz has left IRC (ricotz!~ricotz@ubuntu/member/ricotz, Remote host closed the connection)
22:52
<||cw>
Hyperbyte: I can still connect to VMs using ssh+virt-manager. The bonus is the server is headless but VM local consoles are still just a couple of clicks away
22:55
you can make a zvol in zfs and give that to the VM and not have to deal with lvm or partitions in partitions. Far more dynamic and flexible
22:56
then you can still snap the zvol, clone it, whatever
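A sketch of that zvol workflow (the pool, dataset names and size are examples):

    zfs create -V 20G tank/ltsp-disk                      # block device appears at /dev/zvol/tank/ltsp-disk
    zfs snapshot tank/ltsp-disk@pre-upgrade               # cheap point-in-time snapshot
    zfs clone tank/ltsp-disk@pre-upgrade tank/ltsp-test   # writable clone for testing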