|01:18||vagrantc has left IRC (vagrantc!~vagrant@unaffiliated/vagrantc, Quit: leaving)|
|03:45||jgee has left IRC (email@example.com, Quit: The Lounge - https://thelounge.github.io)|
|03:51||jgee has joined IRC (firstname.lastname@example.org)|
|06:45||ricotz has joined IRC (ricotz!~ricotz@ubuntu/member/ricotz)|
afaik btrfs has writable snapshots
|07:32||kjackal has joined IRC (kjackal!~quassel@2a02:587:3101:f300:11c8:94ba:36fa:6264)|
|10:02||nehemiah has joined IRC (email@example.com)|
|11:03||Faith has joined IRC (Faith!~Paty_@unaffiliated/faith)|
|11:18||woernie has joined IRC (woernie!~werner@p508675BE.dip0.t-ipconnect.de)|
Hello, first, thanks for this project. I'm pretty new to LTSP. I've set up LTSP on Ubuntu 18.04 and I have both 32-bit and 64-bit clients
With ltsp-build-client --purge-chroot I've set up a 64-bit chroot and a 32-bit chroot
The 64-bit clients connect fine but the 32-bit ones don't. How can I have both 32-bit and 64-bit clients?
The easiest way is to have a single 32-bit installation on the server for all clients, with no chroots
Any reason not to do that?
I was thinking of using the full power of the server, but my thinking might be wrong
Also to have the full power of fat clients
It doesn't make much difference
Unless your clients have more than 8 GB RAM
How much RAM does your server and your clients have?
Clients: most of the 64-bit ones have 1024-4096 MB, the 32-bit clients 512-1024 MB; the server is a VM on Proxmox with 20 GB RAM and could have up to 60 GB
There are around 10 64-bit clients and 2 32-bit clients
woernie: are you going to have any thin clients?
Well anyway, even if you have 2 thin clients, it's still better to:
1) restrict the server RAM to 8 GB
2) use just a single i386 chrootless installation everywhere
You lose 10% speed, you gain 20% ram for programs, no big deal
i386 installations only have issues if you use them with lots of RAM, where a bug hits and disk access gets 100 times slower!
So with 8 GB RAM on the server (it doesn't need more), you'll be fine with just a 32-bit installation
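Since the server here is a Proxmox VM, capping its RAM is a one-liner on the Proxmox host; a minimal sketch, assuming the VM's ID is 100 (illustrative):

    qm set 100 --memory 8192    # cap the LTSP server VM at 8 GB RAM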
An alternative would be to have 64-bit chrootless on the server, and a tiny, thin 32-bit chroot just for the 2 clients
But the server still wouldn't use the extra ram
So anyway, all that said, you can configure 32-bit vs 64-bit chroots either with MAC reservations in dhcp/dnsmasq,
or via ifcpu64 at the pxelinux level,
or with cpuid at the ipxe level (we'll probably use that one in ltsp6)
So in dnsmasq.conf you'd configure that the 2 i386 clients would get the i386 chroot, instead of the amd64 chroot
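A minimal dnsmasq sketch of that per-MAC split; the MAC addresses and boot filenames below are illustrative, not from this channel:

    # /etc/dnsmasq.d/ltsp-arch.conf (illustrative values)
    # tag the two i386 clients by their MAC addresses:
    dhcp-host=00:11:22:33:44:55,set:i386
    dhcp-host=00:11:22:33:44:66,set:i386
    # tagged clients boot from the i386 chroot:
    dhcp-boot=tag:i386,ltsp/i386/pxelinux.0
    # everyone else gets the amd64 chroot:
    dhcp-boot=ltsp/amd64/pxelinux.0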
I found "ifcpu64" but I couldn't get it to work. Is there any documentation?
Not in LTSP. There is documentation on the syslinux site.
Sorry, I have to go to a meeting right now; I'll be back later
np I need to leave too
Has anybody ever run LTSP Server in a VM?
|12:13||adrianorg has left IRC (firstname.lastname@example.org, Ping timeout: 240 seconds)|
In fact, I'd recommend it. Assuming you use the right software and know what you're doing.
It's really easy to test system upgrades and/or large updates, it's easy to make backups, and you can have a complementary VM PXE client for remote testing of the actual environment
When I was migrating to Ubuntu 18, I had two VMs running. I configured (via DHCP) select clients to use the test Ubuntu 18 environment and the rest the working Ubuntu 16 one.
I've run both fat and thin clients in VMs.
Hyperbyte, thanks. How do you mount it in the VM and what virtualization software are you using? I have Virtualbox.
I created a folder on the host and used vboxsf to mount it via fstab in the VM.
I cannot create new users if I point their home dir to the mounted folder.
in the VM that is.
I should clarify I am talking about the users' home dirs...
I already have the server running in a VM fine
I think Hyperbyte has /home inside the VM, not outside it...
You can also just mount the whole /home *partition* in the VM as a disk
|12:46||adrianorg has joined IRC (email@example.com)|
alkisg, so I tried mounting a partition on the host as /home on the guest and I cannot create a user. Any thoughts on how to mount it? I used vboxsf in fstab
When I asked in #vbox they say "You do *not* want shared folders for this. It's going to fail. Big time!"
Should I use nfs to mount as /home in vm?
JuJUBee: create a vm disk, and mount it as /home
It's the same as mounting a partition, just easier
OK, thanks. I will try that.
Of course there's no failure when mounting partitions, I'm doing it all the time
Either you expressed the problem wrong, or they misunderstood, or you misunderstood, or someone that didn't know answered
So how should I mount from the host to the guest for /home? NFS or vboxsf? If it's NFS, don't the user accounts also have to exist on the host?
JuJUBee: ah you didn't understand
I proposed: create a vm disk, like /home/jujube/virtualbox vms/ltsp/home.vmdk
And attach it to the vm like a usual emulated disk
The vm will see it as a sata disk
This way it's usual ext4, no fancy file systems
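With VirtualBox's CLI that could look like this; the VM name "ltsp" and controller name "SATA" are assumptions, check yours with "VBoxManage showvminfo":

    # on the host: create a 50 GB disk and attach it as a second SATA disk
    VBoxManage createmedium disk --filename ~/home.vmdk --size 51200 --format VMDK --variant Fixed
    VBoxManage storageattach ltsp --storagectl SATA --port 1 --device 0 --type hdd --medium ~/home.vmdk
    # in the guest: mkfs.ext4 /dev/sdb once, then in /etc/fstab:
    # /dev/sdb  /home  ext4  defaults  0  2

The Fixed variant gives a flat extent, which is what makes the "raw vmdk" loop-mount trick mentioned below possible.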
So the user files will still be inside a virtual environment, not a physical one?
But you can exclude it from snapshots
And you can loop-mount it on the host whenever you need it
If it's "raw vmdk", you can just mount -o loop home.vmdk /mnt
It's like a partition, just within a file
Can I mount it inside 2 VMs at the same time?
No, it's not a networked file system
If you use network file systems, you no longer use normal file systems, and you end up with issues
Like "file locks not working", or "wrong posix attributes" etc
Sure, they "usually" work, but be prepared for malfunctions
If you decide to use a network file system, go for nfs
Put anonymous nfs on the host, so that the clients can access it without user accounts there
Of course it's less safe than secured nfsv4, but it'll be easier/faster
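A sketch of that export on the host; the subnet is illustrative. With plain NFSv3/sec=sys the server just trusts the numeric UIDs the clients send, so no matching accounts are needed on the host:

    # /etc/exports on the host (illustrative subnet)
    /home  192.168.67.0/24(rw,sync,no_subtree_check,no_root_squash)
    # reload the export table:
    exportfs -ra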
The reason for multiple mounts is that I teach some web dev classes and want the students' websites separated from my gateway/classroom website server. I didn't want userdir running on the main web server if possible.
So you want your apache server to show a remote dir?
Just for ~user accounts
Then you'd need to configure the apache web server to have nfs access to your VM
or something like that
That is what I was thinking.
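A sketch of the Apache side with mod_userdir, assuming the NFS homes end up mounted at /srv/ltsp-home on the web server (path is illustrative):

    # after "a2enmod userdir", e.g. in /etc/apache2/mods-enabled/userdir.conf:
    UserDir /srv/ltsp-home/*/public_html
    <Directory /srv/ltsp-home/*/public_html>
        AllowOverride None
        Require all granted
    </Directory>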
So it doesn't worry you to have user files in a VM? Maybe I am being overcautious?
Oh personally I'm not using VMs in installations
Too many things can go wrong, for no benefit to my users
This is a good idea only for experienced sysadmins that know how to handle them
What are the benefits of a VM for you?
I wanted the recoverability of a VM (using a nightly backup)
for the server at least
And you care about recovering the server instead of the user files?
I can reinstall a server in 30 mins, that doesn't worry me at all
If you said you'd wanted to snapshot user files, I'd respect that
No, but a nightly backup makes server restore a snap. I back up user files separately
From 1000 installations, I think restore would help in 1 case
(maintaining 1000 schools here)
And in that case, it just took me 30 mins to reinstall
But if your sysadmin plays with the server each day, then sure, you'd need frequent backups
Otherwise remember that with VMs, now TWO file systems can break, either the host or the guest file system
And a ton of middleware between
So currently I have 2 physical servers: a gateway/firewall/web/database server and an LTSP server. I wanted to keep this scenario but do it with only one physical box. I inherited an IBM server with 32 cores and 512 GB RAM
Sure, a monster server _should_ be utilized with VMs
My schools have regular i5 machines as servers, with e.g. 8 GB RAM
My current server is an HP quad core with 8 GB RAM, working nicely
My gateway/firewall is a 13-year-old dual core with 8 GB RAM and is starting to get flaky.
Figured it was time to change
How many disks does your server have?
4 at the moment, 1 TB RAID 5
Anyway, I think in your case I'd go with regular LTSP in a VM, with "regular" nfs home, and just export the nfs read only to the web server
nfs: to enable NFS home directories for localapps and fat clients, install nfs-kernel-server on your server, nfs-common on your client (don't forget ltsp-update-image), and put this in lts.conf: FSTAB_1="server:/home /home nfs defaults,nolock 0 0"
The user data would be inside the VM, snapshotted along with everything
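The read-only export could look like this; the hostnames and mount point are illustrative:

    # /etc/exports inside the LTSP VM:
    /home  webserver.lan(ro,sync,no_subtree_check)
    # and on the web server:
    mount -o ro ltspvm.lan:/home /srv/ltsp-home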
I also like the VM disk approach. A separate vmdk for /home if I am going to keep everything in the VM.
|13:28||adrianor1 has joined IRC (firstname.lastname@example.org)|
My host has /home as 2.2TB so I can just place the vmdk there
That way I could have separate snapshots of the home dirs and the rest of the server.
|13:31||adrianorg has left IRC (email@example.com, Ping timeout: 250 seconds)|
|16:54||woernie has left IRC (woernie!~werner@p508675BE.dip0.t-ipconnect.de, Remote host closed the connection)|
|18:49||vagrantc has joined IRC (vagrantc!~vagrant@unaffiliated/vagrantc)|
|19:47||josefig has left IRC (josefig!~jose@unaffiliated/josefig, Ping timeout: 245 seconds)|
|19:49||josefig has joined IRC (josefig!~jose@unaffiliated/josefig)|
|19:59||Faith has left IRC (Faith!~Paty_@unaffiliated/faith, Quit: Leaving)|
JuJUBee: I've been using this for my vmhosts lately, on zfs. I like it better than proxmox http://www.ubuntuboss.com/ubuntu-server-18-04-as-a-hypervisor-using-kvm-and-kimchi-for-vm-management/
I also do 2 small mirrored SSDs for the host OS and the disks in a raidz
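That pool layout might look like this; device names are illustrative:

    zpool create rpool mirror /dev/sda /dev/sdb                    # host OS on the mirrored SSDs
    zpool create tank raidz /dev/sdc /dev/sdd /dev/sde /dev/sdf    # VM disks on the raidz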
|21:41||* Hyperbyte is using the good old libvirt with qemu-kvm|
I create LVM partitions on the host which I assign directly to the VMs as disks. That way there's no filesystem overhead from the host and no host filesystem that can break.
The only thing that can complicate things is that you have a partition table within a partition table, but believe it or not, Linux can actually mount specific partitions from an entire disk written on an LVM partition.
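For reference, mapping the partitions inside an LV that holds a VM's whole disk can be done with kpartx (from multipath-tools); the VG/LV names are illustrative:

    kpartx -av /dev/vg0/vmdisk            # creates /dev/mapper/vg0-vmdisk1, vg0-vmdisk2, ...
    mount /dev/mapper/vg0-vmdisk1 /mnt
    # clean up when done:
    umount /mnt
    kpartx -d /dev/vg0/vmdisk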
Or a partition table within an LVM volume which has an LVM partition on it... once that caused me some weirdness, long enough ago that I forget the details
But yeah, I've used libvirt for quite some years now
with LVM-backed devices almost exclusively
|21:48||spaced0ut has left IRC (spaced0ut!~spaced0ut@unaffiliated/spaced0ut, Quit: Leaving)|
|22:15||kjackal has left IRC (kjackal!~quassel@2a02:587:3101:f300:11c8:94ba:36fa:6264, Ping timeout: 252 seconds)|
|22:37||ricotz has left IRC (ricotz!~ricotz@ubuntu/member/ricotz, Remote host closed the connection)|
Hyperbyte: I can still connect to VMs using ssh+virt-manager. The bonus is that the server is headless but VM local consoles are still just a couple of clicks away
You can make a zvol in ZFS and give that to the VM and not have to deal with LVM or partitions within partitions. Far more dynamic and flexible
then you can still snapshot the zvol, clone it, whatever
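A zvol sketch of that workflow; the pool/dataset names are illustrative:

    zfs create -V 50G tank/vms/ltsp-disk                 # shows up as /dev/zvol/tank/vms/ltsp-disk
    # hand /dev/zvol/tank/vms/ltsp-disk to the VM as a raw block device, then:
    zfs snapshot tank/vms/ltsp-disk@pre-upgrade
    zfs clone tank/vms/ltsp-disk@pre-upgrade tank/vms/ltsp-test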