Development and maintenance, backwards compatibility, git branches, etc.

Gavlee:
Hello

After taking the plunge and building LinuxMCE myself, after being a consumer for too long, one of the things I notice is the problem of keeping backward compatibility while working on newer Ubuntu versions.

Would it be feasible to add some branches in git like ubuntu-14.04, ubuntu-16.04, ubuntu-18.04 for example, to allow maintenance on these older Ubuntu versions without holding up current developments?
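
A rough sketch of what that could look like on the git side, just to make the idea concrete (plain git commands, nothing LinuxMCE-specific; the commit references are placeholders):

# cut a maintenance branch from the last commit known to work on 16.04
git branch ubuntu-16.04 <last-good-xenial-commit>
git push origin ubuntu-16.04

# fixes for the older release then land on that branch, e.g. backported from master
git checkout ubuntu-16.04
git cherry-pick <fix-commit-from-master>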

I know some things would have to be done with conditionals in the code for each branch, and there is the database to consider, but some things in the code repo can only be handled with another branch, from what I can tell.

Anyone have thoughts on this?

Cheers.

phenigma:
Do you have any specific examples?  I'm not against this, but we do try to maintain compatibility between versions as much as possible.  I work in private branches that get pushed to master once tested across the various versions we have.  There are times when things change enough that code diverges greatly (the CEC Adaptor is one I can specifically identify as unable to remain backwards compatible, as the libraries have updated).

J.

Gavlee:
First off, sorry for rambling on; this is almost a brain dump of stuff I've been thinking about.

I can give you an example, though thinking about it more, it could be achieved with two branches. I know more branches add complexity and maintenance, with having to backport between them, so having one branch, say 'pre-bionic', and just using master for bionic onwards, would maybe suffice in this particular case.

A little background on my current LinuxMCE build and how this came about.

The way I have built LinuxMCE so far is through packer and vagrant. I am used to building in a chroot and have no problem with that at all, but I thought to extend this further by utilising newer container systems so I can move between OSes more easily. This led me back to packer and vagrant.
So far I have been able to build LinuxMCE in VirtualBox, Docker and qemu, all through single provisioning scripts wrapping the LinuxMCE build scripts in packer; in theory this could be used to build LinuxMCE in the 'cloud' on whatever provider packer supports.
Granted, I have had to take a lot of shortcuts and do a lot of horrible hacks at the moment, but I have had a few successful builds so far.
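
For the curious, the provisioning scripts are shared and only the builder differs, so switching targets is roughly just the following (the template name is illustrative, not the actual file in my tree):

# same template, same provisioning scripts; pick the builder at build time
packer build -only=virtualbox-iso lmce-builder.json
packer build -only=docker lmce-builder.json
packer build -only=qemu lmce-builder.json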

Docker is probably the fastest method I have tried due to the least overhead; however, compiling pthsem fails in the container. VirtualBox, while working, is in my opinion unusable for the size of LinuxMCE. Qemu does work, but takes a fair few hours to do a complete build on a quad core under KVM; it does work great for compiling and for providing a functional test environment in vagrant, to model a LinuxMCE network system, even OpenGL with virgl, networking between systems, netboot, diskless MDs, etc. I have found this an invaluable tool to test LinuxMCE more. I have used LXC in the past but have not tested this path yet; I will do so, as these are lightweight like a chroot. VMware and several others are possible.

So after that rambling explanation: it's taking hours to build LinuxMCE under qemu, and before I can look at fixing why pthsem fails in docker, I started looking for ways to reduce build time.
Bionic and greater include debhelper 11. By changing the debhelper requirement in the deb package control file and bumping the compat file to 10, almost any build that uses the debhelper tooling should utilise parallel building. I notice when compiling LinuxMCE that on a lot of the builds only a single core is being utilised, so bumping the debhelper requirements on bionic and onwards should speed up build times for a lot of packages.
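
For illustration, the per-package change is roughly the following; the sed pattern is only an assumption, since the debhelper version each control file currently asks for varies:

# from compat 10 onwards, dh runs the build targets in parallel by default
echo 10 > debian/compat
# raise the build-dependency to match (assumes the package currently asks for >= 9)
sed -i 's/debhelper (>= 9)/debhelper (>= 10)/' debian/control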

There are obviously some packages that will need fixing due to this change too: some packages will fail when make is invoked in parallel, and some linking behaviour is different (the --as-needed flag, I think), so some libs will need juggling around with the link ordering. This is where I stopped; I reverted it because I knew it would break trusty, and time is better spent elsewhere.
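
For anyone unfamiliar with the link ordering issue: with --as-needed a library has to come after the objects that reference it, otherwise the linker drops it. A tiny generic example (libfoo is made up):

# works: main.o is scanned first, so the linker keeps libfoo for its symbols
g++ -Wl,--as-needed main.o -lfoo -o demo
# fails with undefined references: libfoo is seen before anything needs it and gets discarded
g++ -Wl,--as-needed -lfoo main.o -o demo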

So anyway, back to the original question: a change like this would break pre-bionic builds, and the only way around that I see is another branch. Knowing how LinuxMCE deals with this would be useful to move forward.

I can understand the development model, and trying not to deviate too much from master across all the branches while having to test across them all.

In the packer/vagrant scripts it would be trivial to select which OS to build on and then specify which target git branch to build. At the moment I am just feeding in my own tree, and have experimented with ways of getting the sources into the container, evaluating which methods are faster or more portable across providers.
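
Concretely it comes down to a couple of variables fed into packer at build time; the variable and template names here are just my own, nothing standard:

# pick the Ubuntu release to provision and the LinuxMCE git branch to build from
packer build -var "ubuntu_release=bionic" -var "git_branch=master" lmce-builder.json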

This has been a long journey over the last few months, but I would like to work on this path more; it has really helped me to be able to test LinuxMCE across the different versions without having a dedicated box, and without touching my working system on 14.04. This came about by looking at ways to make the build faster, because it's a little painful inside a VM right now. I would rather take longer and have the build self-contained like this though, because there are many benefits to being able to copy the dev environment around, work on a foreign OS, and bring up the image with only a few commands.

Another option was to look at ways to remove pthsem by replacing the software that depends on it with more updated alternatives, but it looks like it's used by some core things I cannot test due to lack of KNX/EIB hardware, so that isn't an option for me currently.

Just thinking of ways to improve LinuxMCE. I know extra branches equal more effort and time is the enemy, but I don't want to break backwards compatibility either; I think you guys and girls have done a stellar job and the current system works great. I hope you don't think I'm poking holes at LinuxMCE; it's just that packer/vagrant does what one would do setting up the machines manually, so by wrapping this, eventually those "building LinuxMCE" wiki pages could become redundant. It could be codified: at the end of a couple of packer commands, out pops a (HUGE) machine image with a dev environment and built deb packages, and from there even various machine images, like core, hybrid, etc. can be made. At the moment I am looking at ways to get my database changes in using this method, but I am not going to lie, I am struggling with LinuxMCE as you can well imagine.

I would like to know more about how you work and how this all fits together, because I do not want to interfere with the current build methods; I am trying to make this an extension if you like, a wrapper around the build scripts. I would welcome any help with this, and I don't want to waste your time; the only thing stopping me uploading at the moment is it being so rough and cutting corners, but I hope to work those out.

If you can make sense of these rambling posts and weird bug reports, there is some method to my madness, I hope :)

Cheers.

P.S. While I think of it, I have just managed to install a hybrid in vagrant with the debs from deb.linuxmce.org. I got access to the VM by disabling the firewall in /etc/pluto.conf with a hack in the Vagrantfile, but this is hardly optimal, so I wondered if there is a way to enable ssh access from the CLI after apt install lmce-hybrid. The firewall comes up and blocks access, so I had to disable it, because I don't know how to programmatically make the outside ssh access setting in LinuxMCE stick. Phew. Thanks!
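
For completeness, the hack boils down to a shell step run inside the guest from the Vagrantfile. I am assuming a DisableFirewall flag in /etc/pluto.conf here, so treat the key name as a guess rather than gospel:

# assumed key name - check your own /etc/pluto.conf before relying on this
grep -q '^DisableFirewall' /etc/pluto.conf || echo 'DisableFirewall = 1' | sudo tee -a /etc/pluto.conf
sudo sed -i 's/^DisableFirewall.*/DisableFirewall = 1/' /etc/pluto.conf
# flush the current rules so vagrant ssh keeps working until the next reload
sudo iptables -F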

phenigma:

--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---The way I have build LinuxMCE so far is through packer and vagrant

--- End quote ---

All my builds are in Virtual Box.  I have one builder with multiple chroots.  I also have an armhf builder (that I need to resurrect) as I need to get the RPi3+ booting.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---Docker is probably the fastest method I have tried due to the least overhead

--- End quote ---

I'm very interested in exactly how you've got this setup and launching.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---however compiling pthsem fails in the container.

--- End quote ---

Mhm.  Yup.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---Docker is probably the fastest method I have tried due to the least overhead; however, compiling pthsem fails in the container. VirtualBox, while working, is in my opinion unusable for the size of LinuxMCE. Qemu does work, but takes a fair few hours to do a complete build on a quad core under KVM; it does work great for compiling and for providing a functional test environment in vagrant, to model a LinuxMCE network system, even OpenGL with virgl, networking between systems, netboot, diskless MDs, etc. I have found this an invaluable tool to test LinuxMCE more. I have used LXC in the past but have not tested this path yet; I will do so, as these are lightweight like a chroot. VMware and several others are possible.

--- End quote ---

QEmu == Ewww, I can agree with that.  Not sure if you're referring to automated 'modeling'/testing of distributed lmce network or the exact specifics.  All my 'live' testing has so far been done using VMWare or VirtualBox and snapshot systems to install/test/revert/repeat.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---parallel building

--- End quote ---

Add 'NUM_JOBS=XX' to /etc/lmce-build/builder.custom.conf to set the number of cores used during the build.
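
For example, to use every core the builder sees (nproc is just a convenient way to fill in the number):

echo "NUM_JOBS=$(nproc)" >> /etc/lmce-build/builder.custom.conf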


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---There are obviously some packages that will need fixing due to this change too: some packages will fail when make is invoked in parallel, and some linking behaviour is different (the --as-needed flag, I think), so some libs will need juggling around with the link ordering. This is where I stopped; I reverted it because I knew it would break trusty, and time is better spent elsewhere.

--- End quote ---

Using the above will avoid building anything that fails multi-core builds.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---So anyway, back to the original question: a change like this would break pre-bionic builds, and the only way around that I see is another branch. Knowing how LinuxMCE deals with this would be useful to move forward.

--- End quote ---

This is something that needs to be looked into going forward; our current build system is outdated, but it's not as simple as in many projects due to the database system we rely on.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---Another option was to look at ways to remove pthsem by replacing the software that depends on it with more updated alternatives, but it looks like it's used by some core things I cannot test due to lack of KNX/EIB hardware, so that isn't an option for me currently.

--- End quote ---

Most everything is sacred but the knx/eib stuff has to work.  Everything we do here is supported and enabled by someone that relies on knx and that must be maintained.  Oh... and VDR too ;)


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---can be made. At the moment I am looking at ways to get my database changes in using this method but I am not going to lie I am struggling with LinuxMCE as you can well imagine.

--- End quote ---

Right now database changes have to be made by one of a few devs, some that haven't been seen for a while.  Changes in mysql forced updates in sqlCVS that seem to have broken anonymous commits/approvals.  I can work with you to get things input if necessary.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---I would like to know more about how you work and how this all fits together, because I do not want to interfere with the current build methods; I am trying to make this an extension if you like, a wrapper around the build scripts. I would welcome any help with this, and I don't want to waste your time; the only thing stopping me uploading at the moment is it being so rough and cutting corners, but I hope to work those out.

--- End quote ---

Official builds are all produced on one machine for all i386 and X86_64 builds and all official armhf builds have been made on my armhf builder.  Essentially all our build scripts cater to this primary builder.  I've added lots of speed-ups and I skip many steps in the build process in my chroot environments but that depends on not destroying the environments and knowing how to reset those steps to occur, none of these things are documented anywhere.


--- Quote from: Gavlee on November 27, 2018, 12:13:41 pm ---P.S. While I think of it, I have just managed to install a hybrid in vagrant with the debs from deb.linuxmce.org. I got access to the VM by disabling the firewall in /etc/pluto.conf with a hack in the Vagrantfile, but this is hardly optimal, so I wondered if there is a way to enable ssh access from the CLI after apt install lmce-hybrid. The firewall comes up and blocks access, so I had to disable it, because I don't know how to programmatically make the outside ssh access setting in LinuxMCE stick. Phew. Thanks!

--- End quote ---

The firewall is severely broken and your best option is to disable it entirely.

Things are pretty quiet but you might try to join #linuxmce-devel on freenode irc.  I try to get on daily and if I'm around then it can be easier to converse and 'brain dump' ;)

Keep having fun!

J.

Gavlee:

--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---All my builds are in Virtual Box.  I have one builder with multiple chroots.  I also have an armhf builder (that I need to resurrect) as I need to get the RPi3+ booting.

--- End quote ---

I've been using the packages built this way by you for a long time, so I know it works well. I have no problem with VirtualBox at all, although I probably do have some bias towards qemu. Packer doesn't discriminate over which builder one wants to use; it makes it trivial to add some json and use another builder with the same scripts to provision the box, which has made it very easy to compare build times across the three I have set up so far. In fact I started with VirtualBox, which is the default provider, then Docker, then Qemu. Qemu seemed to compile a couple of hours faster, so I went with that due to the time. I know a couple of things though: compiling in a VM is painful, and you need patience.

I also have some armhf boxes and had the idea of doing this build in docker on them, but it's back to the pthsem error?


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---I'm very interested in exactly how you've got this setup and launching.

--- End quote ---

Packer provides all the setup for this. Docker is lightweight, more like a chroot than a VM, so when the build is finished it's tagged and stored locally in the docker image repository. Packer builds locally, so it can't be done on a remote docker instance. Packer can export docker images, but I wanted them tagged locally for now.

It's just another 'builder' under packer, using the same provisioning scripts wrapping the build scripts, with some if statements to do a few things differently in a container rather than a VM.
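
To answer the 'how is it set up and launched' question a bit more concretely, the docker side is shaped roughly like this; the image, script names and tag below are illustrative only, not what's actually in my tree yet:

# skeleton of the packer template, written out inline for illustration
cat > lmce-docker.json <<'EOF'
{
  "builders": [
    { "type": "docker", "image": "ubuntu:18.04", "commit": true }
  ],
  "provisioners": [
    { "type": "shell", "scripts": ["prepare-builder.sh", "wrap-lmce-buildscripts.sh"] }
  ],
  "post-processors": [
    { "type": "docker-tag", "repository": "lmce-builder", "tag": "bionic" }
  ]
}
EOF
packer build lmce-docker.json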

I will clean up the scripts and upload them to git, because I do think docker as a build platform would be good if it wasn't for pthsem failing; snapshotting and cloning of images is easy. I could not reproduce the pthsem error under VirtualBox or Qemu.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---QEmu == Ewww, I can agree with that.  Not sure if you're referring to automated 'modeling'/testing of distributed lmce network or the exact specifics.  All my 'live' testing has so far been done using VMWare or VirtualBox and snapshot systems to install/test/revert/repeat.

--- End quote ---

Yes, that is what I meant: in vagrant, automatically firing up a LinuxMCE network with vagrant up, bringing up a core or hybrid and even modelling netboot with 20 machines, seems very useful. This can also be done in VirtualBox, which is the default provider for vagrant, but at the moment I am using the vagrant-libvirt plugin (qemu).
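
As a sketch of the 'model a network' idea (the box, machine names and 192.168.80.x addressing are assumptions on my part; pick whatever matches your setup):

# minimal two-machine Vagrantfile: a core plus one MD-like client on a private network
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.define "core" do |core|
    core.vm.network "private_network", ip: "192.168.80.1"
  end
  config.vm.define "md1" do |md|
    md.vm.network "private_network", ip: "192.168.80.2"
  end
end
EOF
vagrant up --provider=libvirt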


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Add 'NUM_JOBS=XX' to /etc/lmce-build/builder.custom.conf to set the number of cores used during the build.

--- End quote ---

This is exported through the environment when I execute the build, either automatically via the nproc command or set manually, but some builds still only use one core. I know this is a tricky problem to solve because some builds fail when parallelised.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Using the above will avoid building anything that fails multi-core builds.

--- End quote ---

^^


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---This is something that needs to be looked into going forward; our current build system is outdated, but it's not as simple as in many projects due to the database system we rely on.

--- End quote ---

The build system may be outdated but I cannot criticise it; I have built a distribution from source and know how difficult it is. So much changes between Ubuntu versions that it's hard to keep up with that alone, and the number of packages LinuxMCE glues together is perplexing. Yes, the database is a hurdle for me at the moment and something I need to learn to make any progress, as far as I can see. I have been reading about sqlCVS but am still very unfamiliar with the packages and what needs to be done to make one.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Most everything is sacred but the knx/eib stuff has to work.  Everything we do here is supported and enabled by someone that relies on knx and that must be maintained.  Oh... and VDR too ;)

--- End quote ---

I can probably help test VDR, but since MythTV is what I have used forever, that is what I have running at the moment, because I couldn't get VDR to work. I guess that is just a matter of being familiar with MythTV and knowing nothing about VDR. And being lazy!


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Right now database changes have to be made by one of a few devs, some that haven't been seen for a while.  Changes in mysql forced updates in sqlCVS that seem to have broken anonymous commits/approvals.  I can work with you to get things input if necessary.

--- End quote ---

I have been reading about how this works on the wiki, but to be honest I haven't made many changes to the database yet through not knowing enough. I have experience with making rpm, ebuild and some deb packaging, but how this relates to the database is something I am still learning. I appreciate your continued help in all of this, so thank you very much; it has really helped me understand so much more about the system.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Official builds are all produced on one machine for all i386 and X86_64 builds and all official armhf builds have been made on my armhf builder.  Essentially all our build scripts cater to this primary builder.  I've added lots of speed-ups and I skip many steps in the build process in my chroot environments but that depends on not destroying the environments and knowing how to reset those steps to occur, none of these things are documented anywhere.

--- End quote ---

I can understand that. The contrast with packer is a faithful build, from provisioning a base OS image for each branch and arch all the way to the debs, so those steps you take and all that knowledge are missed. The little scripts and hacks I have had to make may be useful to you, and it's the same for me. I notice that by doing the build this way, it is documented in code.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---The firewall is severely broken and your best option is to disable it entirely.

--- End quote ---

Ouch. I have had to in Vagrant, but my running network is behind another firewall so I do not notice this.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Things are pretty quiet but you might try to join #linuxmce-devel on freenode irc.  I try to get on daily and if I'm around then it can be easier to converse and 'brain dump' ;)

--- End quote ---

I have tried to get on IRC but had trouble using the service recently, which is one of the reasons I hit the forum and bug tracker. I will try again; the forums are great, but it would be good to chat about the subject with less latency.


--- Quote from: phenigma on November 28, 2018, 07:05:27 am ---Keep having fun!

--- End quote ---

Cheers :)
