Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Gavlee

Pages: [1] 2
Very cool!!!  Some of these issues may be fixed from recent additions.  Be sure to grab any updates!  The majority of the system is building on ubuntu bionic and debian buster now too but ruby/gsd is dead and I'm not sure it can be resurrected.  Great to see xenial installed!  I don't remember when the last time I tried that was.  Awesome stuff!

Kind words, thank you!

When you go to push anything to git we should do it in a new branch so we can test the changes before dropping them in master.

What I have been doing is making bug or feature branches for every error or feature, then making a testing branch and cherry picking all the commits from those branches into it and compiling from that branch. Will be trying to clean up the rough edges and upload the fixes but having trouble keeping track of everything at the moment!

There are lots of web clients for IRC that you can also use.  Personally I use Quassel because I can run a single instance on my dcerouter and then use clients on any computer or android phone I have.

That's a good idea. I have no problem with IRC itself really; I used to use it a lot on freenode with screen and irssi. The issue is that some time ago I asked for a cloak and all was well, but then I didn't log in for a few months, my nick was taken, and now I cannot log in with it. Not being able to hide my IP or use my old login is a bit of a pain, and every time I connect I get booted for using a taken username! I have been using xmpp for a while now, so I wondered whether that could be a solution if any of you have an account.

ok, I've pushed everything I've had from the last year into the repo and brought bionic up to about the same point as xenial (build side anyways).  Check as I've touched a ton of tickets and commented on some things you've been commenting on as well. 

Thanks so much for the hard work and advice you have given, I will be pulling all the updates and doing rebuilds on top of those changes.

The master branch of build-scripts and the master branch of linuxmce are building on ubuntu 1204/1404/1604/1804 and raspbian wheezy/jessie/buster.  There are still some remaining database incompatibilities for 1804 and buster, but both are building.  GSD is broken post xenial/jessie due to ruby upgrades.  I can't get ruby 1.8 to build on the newer releases, and the GSD implementation is completely broken with newer versions of ruby.

Yes, I noticed the ruby problem. I gather all the ruby code is stored in the database?
I found the github repo for ruby 1.8 but haven't got far enough to try anything yet.

The pluto-database-settings package that sets up the database users and passwords is the number one priority for installs from xenial onwards.  I haven't been able to set up any test systems yet, just builders, so that package really needs testing.  Clearly it isn't working properly, judging from the issues identified on gitlab.  If we can get that working then testing installs will be much easier!

From my test install, the packages installed with no errors from the post-install scripts; only the lmce-admin login is not working, because of the password issue. I have mentioned this on the gitlab but will need to test again and have a look at the pluto-database-settings package.

We should try to connect and chat at some point.

I agree.
Been trying to get on freenode through an IRC gateway but still don't have a nick and get booted off all the ones I try.

PS. I've been doing the package builds in docker recently. I'm not sure if there was a bug in docker causing the pthsem package to fail, but I haven't seen that error since upgrading to bionic on my build host. Compiling in docker is much faster than in a VM, which saves a lot of time. I am finding docker great for building LinuxMCE and will upload the scripts once I figure out how best to turn the ugly hacks into git patches.

another shot of the desktop

Hello, back again.

Some screenshots of my building and testing on xenial.

Built under packer and tested with Qemu + virgl (having trouble testing in vagrant, so I used qemu directly, but almost there).

Was surprised to see the setup tutorial videos came up and played great, but for some reason those screen captures came out blank from the qemu screendump, so I used spectacle instead.

Am I brave enough to install it on my home server...?  :-\

Some notes:

chan-sccp: I have fixed some issues in the buildscripts and linuxmce, but the /etc/asterisk/sccp.conf file still collides with lmce-asterisk on install
my vdr patches compile but probably break things; not tested yet
tried to fix most of the compile errors in autotagger, advanced_ip_camera/onvif and the buildscripts so the build completes all the way through
Switching between the orbiter and kde works great! :)

will be uploading it to git when I can


PS. Having trouble getting on IRC. Does anybody use XMPP by any chance, please? It's lonely!

Users / Re: Possible Q Orbitor Issues
« on: January 11, 2019, 04:22:50 am »
One more thing.

Was the QOrbiter added on the core first, before running the QOrbiter on the android device?

Maybe you could try, on the android device, deleting any leftover settings to remove any chance of conflict, then delete the QOrbiters on the core and do a quick reload of the router. Once all the settings are nuked, try loading the QOrbiter again without doing anything on the core.

I could be wrong, but adding the QOrbiter first on the core may confuse things; shouldn't the QOrbiter be automatically detected by the core when it first runs? I think that's how it should work, but I'm not 100% sure about that.

Users / Re: Possible Q Orbitor Issues
« on: January 08, 2019, 10:37:48 am »
From the information provided, my guess is it's either a network configuration problem or something to do with the Android versions. I haven't tried QOrbiter on Android 8 yet, though I don't see why there would be a network problem like this if the QOrbiter otherwise works; it has worked for me from android 4.x-ish.

I have had a similar problem with the QOrbiter running and showing the main screen but not any devices like lights etc. I do not remember what the problem was, but my network config was different then. Most things seem to work in the QOrbiter for me at the moment, but I have the bridge connected to the internal LMCE network, not off my router box like yours.

If I were you I would run a packet capture with wireshark or some other sniffer on the device and the core if you can, and also check the pfsense/firewall logs while the QOrbiter is connecting. That may show whether traffic is being sent without receiving a reply, and may help diagnose the problem.

Only other thing I can think of is removing the settings files QOrbiter creates (on the android device) and starting over.

Can't think of much else at the moment.

Users / Re: Possible Q Orbitor Issues
« on: January 04, 2019, 03:47:47 pm »
I do not have a /var/www/lmce-admin/skins at all.

Getting puzzled how you have this set up :)
Can I ask more about the network config please? By the sound of it you had this working before; what changed?

The QOrbiter is hanging off a wifi access point on the pfsense box and also on the internal network? Please clarify, because this seems a bit contradictory if the QOrbiter is trying to reach the core through the 'external' IP address, say 192.168.1.x.
What is providing the DHCP to the QOrbiter, the pfsense box or the lmce core?

There are probably a lot of different ways to set this up.
The ways I have seen to do this are as follows:

1. wifi off your pfsense box
you will need rules and NAT set up properly, maybe even port forwarding. If you get this to work please post how; I tried this method and think I hit the same issues as you did, but that was a while ago.

2. wifi access point/bridge in the internal network
how I currently have it; there are some docs on the lmce wiki and elsewhere on how to set up a wifi bridge

3. VPN
I have had this working on and off, but I think the default config needs some small tweaks, which are scattered all over the forums and web.
This requires setting up a VPN in android, or connecting the device to an interface that has UDP port 500 forwarded to the lmce external interface. The android device will appear on the internal network (192.168.80.x) once connected through the encrypted tunnel. As a bonus, you can roam on 3g/4g and access your lmce network from outside via QOrbiter / SIP / lmce-admin / orbiter etc.

There are probably more ways, but getting crystal clear about what you plan to do may allow more help. The VPN route is the best and most secure IMHO, but it needs some love in the default config, especially for android.

HTH some

Users / Re: Possible Q Orbitor Issues
« on: January 02, 2019, 02:05:32 pm »

I use the QOrbiter on android on a phone (LineageOS 14.1 - Android 7.1) and a tablet, also android 7. All I did was install the apk from the download link in lmce-admin. I haven't needed to mess with skins or anything like that; I think they are provided over the network. There is an option in the QOrbiter settings screen for something to do with skins, but I have never altered it.

Could you please provide more info about your network: for example, which subnet the QOrbiter devices are on, and whether they are on the internal network (192.168.80.x) or the external one. From what I remember of having a similar problem before, it was a communication issue between the core and the QOrbiter, maybe a firewall or routing issue? What android version are these devices?

As it stands for me right now I can control lights, execute scenarios and view cameras etc which is what I mainly use it for.

Edit: forgot to mention this is on 14.04

All my builds are in Virtual Box.  I have one builder with multiple chroots.  I also have an armhf builder (that I need to resurrect) as I need to get the RPi3+ booting.

I've been using the packages you build this way for a long time, so I know it works well. I have no problem with VirtualBox at all, although I probably have some bias towards qemu. packer doesn't care which builder you use; it makes it trivial to add some json and use another builder with the same provisioning scripts, and it has made it very easy to compare build times across the three I have set up so far. In fact I started with VirtualBox, which is the default provider, then Docker, then Qemu. Qemu seemed to compile a couple of hours faster, so I went with that. I do know one thing though: compiling in a VM is painful and you need patience.

I also have some armhf boxes and had the idea of doing this build in docker on them, but would that run into the pthsem error again?

I'm very interested in exactly how you've got this setup and launching.

packer provides all the setup for this. docker is more lightweight, like a chroot rather than a vm, so when the build is finished the image is tagged and stored locally in the docker image repository. packer builds locally, so it can't be done on a remote docker instance; packer can export docker images, but I wanted them tagged locally for now.

it's just another 'builder' under packer, using the same provisioning scripts wrapping the buildscripts, with some if statements to do a few things differently in a container rather than a vm.

I will clean up the scripts and upload them to git, because I do think docker would be a good build platform if it weren't for pthsem failing; snapshotting and cloning an image is easy. I could not reproduce the pthsem error under VirtualBox or Qemu.

QEmu == Ewww, I can agree with that.  Not sure if you're referring to automated 'modeling'/testing of distributed lmce network or the exact specifics.  All my 'live' testing has so far been done using VMWare or VirtualBox and snapshot systems to install/test/revert/repeat.

Yes, that is what I meant: being able to automatically fire up a LinuxMCE network with vagrant up, bringing up a core or hybrid and even modelling netboot with 20 machines, seems very useful. This can also be done in VirtualBox, which is the default provider for vagrant, but at the moment I am using the libvirt vagrant plugin (qemu).

Add 'NUM_JOBS=XX' to /etc/lmce-build/builder.custom.conf to set the number of cores used during the build.

This is exported through the environment when I run the build, either set automatically via the nproc command or set manually, but some builds still only use one core. I know this is a tricky problem to solve because some builds fail when run multi-core.
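For example, something like this is what I mean (a sketch only; a temp file stands in for the real config so it is safe to run anywhere, and using nproc to pick the value is just my habit):

```shell
# Illustrative sketch: append a NUM_JOBS setting matching the core count.
# A temp file stands in for /etc/lmce-build/builder.custom.conf here.
CONF=$(mktemp)
echo "NUM_JOBS=$(nproc)" >> "$CONF"
cat "$CONF"
```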

Using the above will avoid building anything that fails multi-core builds.


This is something that needs to be looked into going forward, our current build system is outdated, but it's not as simple as in many projects due to the database system we rely on.

The build system may be outdated but I cannot criticise it; I have built a distribution from source and know how difficult it is. So much changes between Ubuntu versions that it's hard to keep up with that alone, and the number of packages LinuxMCE glues together is perplexing. Yes, the database is a hurdle for me at the moment and something I will need to learn to make any progress, as far as I can see. I have been reading about sqlCVS but am still very unfamiliar with packages and what needs to be done to make one.

Most everything is sacred but the knx/eib stuff has to work.  Everything we do here is supported and enabled by someone that relies on knx and that must be maintained.  Oh... and VDR too ;)

I can probably help test VDR, but since MythTV is what I have used forever, that is what I have running at the moment; I couldn't get VDR to work. I guess that is just a matter of being familiar with MythTV and not knowing anything about VDR. And being lazy!

Right now database changes have to be made by one of a few devs, some that haven't been seen for a while.  Changes in mysql forced updates in sqlCVS that seem to have broken anonymous commits/approvals.  I can work with you to get things input if necessary.

I have been reading about how this works on the wiki, but to be honest I haven't made many changes to the database yet, through not knowing enough. I have experience with making rpm, ebuild and some deb packaging, but how this relates to the database is something I am still learning. I appreciate your continued help in all of this, so thank you very much; it has really helped me understand so much more about the system.

Official builds are all produced on one machine for all i386 and X86_64 builds and all official armhf builds have been made on my armhf builder.  Essentially all our build scripts cater to this primary builder.  I've added lots of speed-ups and I skip many steps in the build process in my chroot environments but that depends on not destroying the environments and knowing how to reset those steps to occur, none of these things are documented anywhere.

I can understand that. The contrast with packer is a faithful build, from provisioning a base OS image for each branch and arch all the way to debs, so the steps you skip and all that builder knowledge are missed. The little scripts and hacks I have had to make may be useful to you, and the same the other way around. I notice that by doing builds this way, the process is documented in code.

The firewall is severely broken and your best option is to disable it entirely.

Ouch. I have had to do that in Vagrant, but my running network is behind another firewall so I do not notice this.

Things are pretty quiet but you might try to join #linuxmce-devel on freenode irc.  I try to get on daily and if I'm around then it can be easier to converse and 'brain dump' ;)

I have tried to get on IRC but have had trouble using the service recently, which is one of the reasons I hit the forum and bug tracker. Will try again; the forums are great, but it would be good to chat about this with less latency.

Keep having fun!

Cheers :)

First off, sorry for rambling on; this is almost a brain dump of stuff I've been thinking about.

I can give you an example, though thinking about it more, it could be achieved with two branches. I know more branches add complexity and maintenance by having to backport between them, so having one branch, say 'pre-bionic', and just using master for bionic onwards might suffice in this particular case.

A little background on my current LinuxMCE build and to how this came about.

The way I have built LinuxMCE so far is through packer and vagrant. I am used to building in a chroot and have no problem with that at all, but I thought to extend this further by utilising newer container systems so I can move between OSes more easily. This led me back to packer and vagrant.
So far I have been able to build LinuxMCE in VirtualBox, Docker and qemu, all through a single set of provisioning scripts wrapping the LinuxMCE build scripts in packer. In theory this could be used to build LinuxMCE in the 'cloud' with whatever provider packer supports.
Granted, I have had to take a lot of shortcuts and do a lot of horrible hacks so far, but I have had a few successful builds.

Docker is probably the fastest method I have tried due to having the least overhead; however, compiling pthsem fails in the container. VirtualBox works, but is unusable in my opinion for something the size of LinuxMCE. Qemu takes a fair few hours to do a complete build on a quad core under kvm, but works great for compiling and for providing a functional test environment in vagrant: modelling a LinuxMCE network system, even OpenGL with virgl, and networking between systems, netboot, diskless MDs etc. I have found it an invaluable tool for testing LinuxMCE more. I have used lxc in the past but have not tested that path yet; I will, as containers are lightweight like a chroot. VMWare and several others are also possible.
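As a rough sketch of what one packer template with multiple builders looks like (all the option values and the script name here are placeholders, not my real template):

```shell
# Sketch only: one packer template, several builders, one shared shell
# provisioner. All option values are illustrative placeholders.
cat > packer-example.json <<'EOF'
{
  "builders": [
    {"type": "docker", "image": "ubuntu:bionic", "commit": true},
    {"type": "virtualbox-iso",
     "iso_url": "http://example.invalid/bionic.iso",
     "iso_checksum": "none",
     "ssh_username": "builder"}
  ],
  "provisioners": [
    {"type": "shell", "script": "provision-lmce-build.sh"}
  ]
}
EOF
```

Running packer build against a template like that drives every builder through the same provisioning script, which is what makes comparing them so easy.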

So after that rambling explanation: with LinuxMCE taking hours to build under qemu, and before looking at why pthsem fails in docker, I started looking for ways to reduce build time.
Bionic and later ship debhelper 11. By changing the debhelper requirement in each deb package's control file and bumping the compat file to 10, most builds that use the debhelpers should pick up parallel building. I notice when compiling LinuxMCE that a lot of the builds only use a single core, so bumping the debhelper requirements on bionic and onwards should speed up build times for a lot of packages.

There are obviously some packages that will need fixing after this change too: some packages fail when make is invoked in parallel, and some linking behaviour is different (the --as-needed flag, I think), so some libs will need their link ordering juggled. This is where I stopped; I reverted it because I knew it would break trusty, and my time was better spent elsewhere.
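To make the change concrete, the per-package bump is roughly this (a minimal sketch using a made-up package directory, not one of our real packages; debhelper compat 10 and later make dh build in parallel by default):

```shell
# Minimal sketch: raise a package's debhelper requirement and compat level.
# 'example-pkg' is a made-up package for illustration only.
mkdir -p example-pkg/debian
printf 'Build-Depends: debhelper (>= 10)\n' > example-pkg/debian/control
echo 10 > example-pkg/debian/compat
# From compat 10 onwards, dh_auto_build runs make in parallel by default.
# A package whose makefile is not parallel-safe can opt out in debian/rules:
#   override_dh_auto_build:
#   	dh_auto_build --no-parallel
```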

So anyway, back to the original answer: a change like this would break pre-bionic builds, and the only way around that I can see is another branch. Knowing how LinuxMCE wants to deal with this would be useful for moving forward.

I can understand the development model, and trying not to deviate too much from master across all the branches while having to test across them all.

In the packer/vagrant scripts it would be trivial to select which OS to build on and then specify which target git branch to build. At the moment I am just feeding in my own tree, and have experimented with ways of getting the sources into the container, evaluating which methods are faster or portable across providers.

This has been a long journey over the last few months, but I would like to keep working down this path; it has really helped me test LinuxMCE across the different versions without having a dedicated box and without touching my working system on 14.04. It came about from looking at ways to make the build faster, because it's a little painful inside a vm right now. I would rather the build take longer and be self-contained like this though, because there are many benefits to being able to copy the dev environment around, work on a foreign OS, and bring up the image with only a few commands.

Another option was to look at ways to replace pthsem with more up-to-date software, but it looks like it's used by some core things I cannot test due to lack of knx/eib hardware, so that isn't an option for me currently.

Just thinking of ways to improve LinuxMCE. I know extra branches mean more effort, and time is the enemy, but I don't want to break backwards compatibility either; I think you have all done a stellar job and the current system works great. I hope you don't think I'm poking holes at LinuxMCE; it's just that packer/vagrant captures how one would set up machines manually, so by wrapping this, eventually those "building LinuxMCE" wiki pages could become redundant. It could all be codified so that at the end of a couple of packer commands out pops a (HUGE) machine image with a dev environment and built deb packages, and from there even various machine images (core, hybrid etc.) could be made. At the moment I am looking at ways to get my database changes in using this method, but I am not going to lie, I am struggling with LinuxMCE as you can well imagine.

I would like to know more about how you work and how this all fits together, because I do not want to interfere with the current build methods; I am trying to make this an extension, if you like, a wrapper around the buildscripts. I would welcome any help with this. I don't want to waste your time; the only thing stopping me uploading at the moment is that it is so rough and cuts corners, but I hope to work that out.

If you can make sense of these rambling posts and weird bug reports, there is some method to my madness, I hope :)


P.S. While I think of it, I have just managed to install a hybrid in vagrant with the debs. I got access to the vm by disabling the firewall in /etc/pluto.conf with a hack in the Vagrantfile, but this is hardly optimal, so I wondered if there is a way to enable ssh access from the cli after apt install lmce-hybrid. The firewall comes up and blocks access, and I had to disable it because I don't know how to programmatically make the 'outside ssh access' setting in LinuxMCE stick. Phew. Thanks!
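For what it's worth, the hack itself boils down to something like this (run against a temp file here so it is safe to try; the DisableFirewall key name is an assumption taken from the wiki, so please verify it before relying on it):

```shell
# Sketch of the Vagrantfile inline-shell hack: force the firewall off in
# pluto.conf. A temp file stands in for /etc/pluto.conf here, and the
# 'DisableFirewall' key name is assumed from the wiki, not verified.
CONF=$(mktemp)
grep -q '^DisableFirewall' "$CONF" || echo 'DisableFirewall = 1' >> "$CONF"
cat "$CONF"
```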


After being a consumer for too long, I have taken the plunge and tried to build LinuxMCE. One of the things I notice is the problem of keeping backward compatibility while working on newer Ubuntu releases.

Would it be feasible to add some branches in git, like ubuntu-14.04, ubuntu-16.04 and ubuntu-18.04 for example, to allow maintenance on these older Ubuntu versions without holding up current development?

I know some things have to be done with conditionals in the code for each branch, and there is the database to consider, but some things in the code repo can only be done with another branch, from what I can tell.

Anyone have thoughts on this?


Users / Re: Adding zwave lights
« on: November 13, 2018, 03:22:46 am »
Good you got it working.

Would like to add to what Garbui mentioned: the first time I added a zwave plus device a while back (a hardwired plug), I was amazed to see how it fixed the connection issues with other devices, because it acts as a repeater. I've added a few more devices since then and the network seems fine, so I wholeheartedly agree that more devices are better, even one, if it can act as a repeater in the mesh.

USB cables are a lot cheaper though. It's good to see how issues like this can be tackled in multiple ways; I didn't know having a second controller was even possible.


Users / Re: Adding zwave lights
« on: October 30, 2018, 01:24:59 pm »
Well, it looks like the controller is working and the devices are there, but the red ones don't look good.

You shouldn't need to reboot if you killall ZWave. I think I tried something like that before, but ZWave died so I rebooted to make sure; maybe I sent it the wrong signal. Thanks for the tip, pointman87.

Users / Re: disk md
« on: October 27, 2018, 03:35:30 pm »
Yes, a few things have changed underneath. I should be careful, because saying 'easy fix' nearly always ends up meaning a lot of workarounds.

The network settings not being in /etc/network/interfaces is a quick fix on the MD really, just 2 lines in the file, though doing this for a lot of MDs is impractical IMHO. Having looked at the problem for a little while, I think it could be patched in the postinst, because there should be a network up when the package is installed, so inserting the interface for the current default route is one way to work around it.

I haven't had a chance to look at the firstboot scripts at length yet; I assume they run on the machine's next boot and do the setup for the MD. By the next boot the network config has gone, so I assume postinst is the only place to do it, even though it feels like a bit of a hack.
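The workaround I have in mind is along these lines (a sketch that writes to a temp file instead of /etc/network/interfaces, with eth0 only as a fallback for when no default route is visible):

```shell
# Sketch of the postinst idea: add a DHCP stanza for whichever interface
# currently carries the default route. Output goes to a temp file here;
# on a real MD it would be appended to /etc/network/interfaces.
IFACE=$(ip route show default 2>/dev/null |
  awk '{for (i = 1; i < NF; i++) if ($i == "dev") { print $(i + 1); exit }}')
OUT=$(mktemp)
{
  echo "auto ${IFACE:-eth0}"
  echo "iface ${IFACE:-eth0} inet dhcp"
} >> "$OUT"
cat "$OUT"
```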

I have a patch to do this in the postinst anyway, but haven't tested it yet. Going off topic here, but my time has been spent working on some extra build scripts for LinuxMCE to provision a bare machine into a LinuxMCE development environment and produce various container/vm images. I did this because it allows me to test the patches I have submitted and to start developing some features I would like to have; ok, dreaming a lot here :)

I nearly didn't see the post on the next page about a new wiki page, but I had started on one locally anyway. The network workarounds should maybe go in a new ticket on gitlab rather than the wiki if this can be fixed in the code; I don't mind doing both.

Thanks for the support.

Users / Re: Adding zwave lights
« on: October 23, 2018, 02:41:39 pm »
That's it!

I have the same, the lights are controlled by the parent ZWave device here.

Here is a paste of one of my lights in Wizard > Devices > Lights

(some dimmer)
Device #: 170
Device Template #: 38
Controlled by: ZWave

In the LMCE web admin and the top menu Automation > Advanced ZWave will show more info.

Sometimes in the past I have had to delete all those devices you pasted (after messing around) to force re-reading, then remove the usb stick and reboot. You will have to add everything again, but if things go crazy that is a sure-fire way to start over after resetting all the devices; only do that as a last resort. Things work for me here now, so I haven't done that for a while. Sometimes it takes a while for zwave settings to stick; I think I just had to keep trying until I got it working.

Edit: if you want lots of info you can also do this

tail -f /var/log/pluto/${your_parent_ZWave_device_number}_ZWave.log

Or connect to the screen session that is running for the device (screen -ls will list them);
that can be of great help in finding problems.

Developers / Trouble pushing to Gitlab in my userspace repo
« on: October 22, 2018, 04:33:36 am »

Sorry to be a pain.

I have made a test repo in my userspace on the Gitlab. I tried pushing a few test commits to it to see about getting started, but it looks like they are not showing up. The first two commits seemed to push fine from the command line, but nothing shows up when I look at the repo through Gitlab. I even tried making a commit through the Gitlab web UI, but the commit creating a new file did not complete, saying the branch had diverged. Do I need to commit on a branch other than master or something, or do I need elevated privileges? I am currently a Reporter.

Not sure what to do now; any help greatly appreciated.



Edit: The repo here should have two commits
The reason I did a new repo and not a fork: I am in the middle of writing a wrapper around the buildscripts to try to make building LinuxMCE on a foreign operating system easier. A new repo looked like the easiest and least invasive way to do it, that is all.

Edit 2: The commits I made in the test repo are now visible, so thank you very much for the privilege and sorry for the bother.
