All my builds are in VirtualBox. I have one builder with multiple chroots. I also have an armhf builder (that I need to resurrect) as I need to get the RPi3+ booting.
I've been using the packages you build this way for a long time, so I know it works well. I have no problem with VirtualBox at all, although I probably do have some bias towards qemu. packer doesn't care which builder you use; it makes it trivial to add some json (roughly like the snippet below) and use another builder with the same scripts to provision the box, which has made it very easy to compare build times across the three I have set up so far. In fact I started with VirtualBox, which is the default provider, then Docker, then Qemu. Qemu seemed to compile a couple of hours faster, so I went with that. I do know a couple of things though: compiling in a VM is painful, and you need patience.
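For the curious, the builder section of my packer template looks roughly like this. It's a trimmed sketch: the iso_url, image, usernames and script names are placeholders rather than my real values.

{
  "builders": [
    { "type": "virtualbox-iso", "iso_url": "http://example/ubuntu.iso", "ssh_username": "builder" },
    { "type": "qemu",           "iso_url": "http://example/ubuntu.iso", "ssh_username": "builder" },
    { "type": "docker",         "image": "ubuntu:16.04", "commit": true }
  ],
  "provisioners": [
    { "type": "shell", "scripts": ["prepare-builder.sh", "run-build.sh"] }
  ]
}

'packer build -only=qemu template.json' then runs just one of the builders, which is how I timed them against each other.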
I also have some armhf boxes and had the idea of doing this build in docker on them, but would that just run back into the pthsem error?
I'm very interested in exactly how you've got this set up and launching.
packer provides all the setup for this. docker is lightweight, more like a chroot than a vm, so when the build is finished the image is tagged and stored in the local docker image repository. packer builds locally, so it can't be done on a remote docker instance. packer can export docker images, but I wanted them tagged locally for now.
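The local tagging is just packer's docker-tag post-processor running after the docker builder commits the container; the repository and tag names below are only examples, not what I actually use.

"post-processors": [
  { "type": "docker-tag", "repository": "lmce-build", "tag": "1604-amd64" }
]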
it's just another 'builder' under packer, using the same provisioning scripts wrapping the build scripts, with some if statements to do a few things differently in a container rather than a vm.
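The if statements are nothing clever. Checking for /.dockerenv is one common way to detect docker, and the variable name here is just mine; a minimal sketch:

# in the shared provisioning script
if [ -f /.dockerenv ]; then
    IN_CONTAINER=1    # docker leaves this file at the root of the filesystem
else
    IN_CONTAINER=0
fi

# example: only touch the bootloader on a real vm
if [ "$IN_CONTAINER" -eq 0 ]; then
    update-grub
fi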
I will clean up the scripts and upload them to git, because I do think docker would be a good build platform if it weren't for pthsem failing; snapshotting and cloning an image is easy. I could not reproduce the pthsem error under VirtualBox or Qemu.
QEmu == Ewww, I can agree with that. Not sure if you're referring to automated 'modeling'/testing of a distributed lmce network or to something more specific. All my 'live' testing has so far been done using VMware or VirtualBox, with snapshots to install/test/revert/repeat.
Yes, that is what I meant: using vagrant to automatically fire up a LinuxMCE network with vagrant up. Bringing up a core or hybrid, and even modelling netboot with 20 machines, seems very useful. This can also be done in VirtualBox, which is the default provider for vagrant, but at the moment I am using the libvirt vagrant plugin (qemu); a rough sketch is below.
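Something like this Vagrantfile is what I have in mind. The box name and sizes are just examples, and a real netboot model would also need the MDs configured to PXE boot against the core rather than from a box image.

Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1604"

  config.vm.define "core" do |core|
    core.vm.hostname = "dcerouter"
    core.vm.provider :libvirt do |lv|
      lv.memory = 4096
      lv.cpus = 2
    end
  end

  # model a room full of media directors
  (1..20).each do |i|
    config.vm.define "md#{i}" do |md|
      md.vm.hostname = "moon#{i}"
    end
  end
end

vagrant up brings up the whole network; vagrant up core brings up just the core.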
Add 'NUM_JOBS=XX' to /etc/lmce-build/builder.custom.conf to set the number of cores used during the build.
This is already exported through the environment when I execute, either automatically via the nproc command or set manually (see the snippet after this exchange), but some builds still only use one core. I know this is a tricky problem to solve because some builds fail when run multi-core.
Using the above will avoid building anything that fails multi-core builds.
^^
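For reference, this is all I mean by setting it, assuming builder.custom.conf is sourced as a shell fragment so command substitution works; otherwise a plain number does the same job.

# /etc/lmce-build/builder.custom.conf
NUM_JOBS=$(nproc)    # or hard-code it, e.g. NUM_JOBS=8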
This is something that needs to be looked into going forward. Our current build system is outdated, but fixing that is not as simple as in many projects because of the database system we rely on.
The build system may be outdated but I cannot criticise it; I have built a distribution from source and know how difficult it is. So much changes between Ubuntu versions that it's hard to keep up with that alone, and the number of packages LinuxMCE glues together is staggering. Yes, the database is a hurdle for me at the moment, and clearly something I have to learn before I can make any progress. I have been reading about sqlCVS but am still very unfamiliar with packages and what needs to be done to make one.
Almost everything is sacred, but above all the knx/eib stuff has to work. Everything we do here is supported and enabled by someone who relies on knx, so that must be maintained. Oh... and VDR too.
I can probably help test VDR, but since MythTV is what I have used forever, that is what I have running at the moment; I couldn't get VDR to work. I guess that is just a matter of being familiar with MythTV and knowing nothing about VDR. And being lazy!
Right now database changes have to be made by one of a few devs, some of whom haven't been seen for a while. Changes in mysql forced updates in sqlCVS that seem to have broken anonymous commits/approvals. I can work with you to get things input if necessary.
I have been reading about how this works on the wiki, but to be honest I haven't made many changes to the database yet, simply through not knowing enough. I have experience making rpm, ebuild and some deb packages, but how packaging relates to the database is something I am still learning. I appreciate your continued help in all of this, so thank you very much; it has really helped me understand so much more about the system.
Official builds for i386 and x86_64 are all produced on one machine, and all official armhf builds have been made on my armhf builder. Essentially all our build scripts cater to this primary builder. I've added lots of speed-ups and I skip many steps of the build process in my chroot environments, but that depends on not destroying the environments and on knowing how to make those steps run again; none of this is documented anywhere.
I can understand that. The contrast with packer is a faithful build, from provisioning a base OS image for each branch and arch all the way to the debs, so those steps you take and all that knowledge are missed. The little scripts and hacks I have had to write may be useful to you, and yours to me. One thing I notice about building this way is that it is all documented in code.
The firewall is severely broken and your best option is to disable it entirely.
Ouch. I have had to do that in Vagrant, but my running network is behind another firewall so I don't notice this.
Things are pretty quiet, but you might try joining #linuxmce-devel on freenode irc. I try to get on daily, and if I'm around it can be easier to converse and 'brain dump'.
I have tried to get on IRC but had trouble using the service recently, which is one of the reasons I hit the forum and bug tracker. I will try again; the forums are great, but it would be good to chat about this with less latency.
Keep having fun!
Cheers