Recent Posts

91
Users / Re: hdmi cec control player
« Last post by CentralMedia on December 01, 2018, 01:19:22 am »
I have a Raspberry Pi as an LMCE MD, which is supposed to be doing the control of a Sony TV.

I also have Kodi on a Raspberry Pi (an OpenELEC build), also connected to that same TV, which I was planning to control for now via the CEC Player device on the LMCE MD. I'm going to revisit the work I was doing controlling Kodi via JSON. I had it working to an extent, but then I lost a lot of hardware, so it's starting from scratch.
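For reference, this is roughly what the JSON control looks like over Kodi's JSON-RPC HTTP interface (a minimal sketch only; it assumes Kodi's web server/remote control is enabled on its default port 8080, and the hostname is a placeholder):

    curl -s -X POST http://kodi-pi.local:8080/jsonrpc \
         -H 'Content-Type: application/json' \
         -d '{"jsonrpc":"2.0","method":"Player.PlayPause","params":{"playerid":1},"id":1}'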

Next, I have a Fire Stick on that TV, which is the next device to control via the CEC Player.

Could having both the LMCE MD and Kodi on the same TV, each on a Pi, cause a conflict?
92
Users / Re: hdmi cec control player
« Last post by phenigma on December 01, 2018, 01:09:27 am »
I noticed you're using Kodi. If you're using HDMI CEC control in Kodi, and it's the same PC that you're trying to do CEC control with, then the CEC library software may conflict.  Can you be more specific about which system is doing what?  I'm a little confused about what you have and where.


Sent from my Pixel using Tapatalk

93
Users / Re: hdmi cec control player
« Last post by CentralMedia on December 01, 2018, 12:24:27 am »
I have set the pipes as normal: the MD connection to the TV is on HDMI 3, and the AV player connection is on HDMI 2.

Do I also have to set the HDMI-CEC TV (#2304, "control your CEC enabled TV") to the HDMI 3 input? I did that, and it made no difference.

For the "Controlled by" setting on the TV, I am only seeing the living room TV, my closet system, the closet hybrid and the Raspberry Pi MD. Which one should it be? It's currently set to the Raspberry Pi MD.

When I do a reload router, the TV does switch to the input it is on.

    11/30/18 19:20:45.750             Parameter 98(PK_Device_Pipes):  <0xb60c1b40>
08      11/30/18 19:20:45.750           Received Message from 10 (Media Plug-in / Closet/Storage Space) to 106 (KDL 40W600B / Living Room/Family Room), type 1 id 91 Command:Input Select, retry none, parameters: <0xb60c1b40>
08      11/30/18 19:20:45.750             Parameter 71(PK_Command_Input(HDMI 3)): 930 <0xb60c1b40>
08      11/30/18 19:20:45.714           Received Message from 73 (OnScreen Orbiter / Living Room/Family Room) to 10 (Media Plug-
94
Users / Re: Adding zwave lights
« Last post by phenigma on November 30, 2018, 03:40:53 am »
I have a lock but haven't successfully paired it with the open-zwave library we have.  I'm going to update that but it's a matter of time available.  The 6-in-1 causes me problems where it floods the Z-Wave network after LMCE requests confirmations.  Temp is reported alternately in F and C, causing it to report different values each time and fail the confirmation.
J.

Sent from my Pixel using Tapatalk

95
Users / Re: hdmi cec control player
« Last post by phenigma on November 30, 2018, 03:35:37 am »
The AV pipes are not automatically created for these devices.  For input switching to work you will also have to connect the AV pipes in the connection wizard.
J.

Sent from my Pixel using Tapatalk

96
Users / Re: Adding zwave lights
« Last post by CentralMedia on November 30, 2018, 03:34:22 am »
Does anyone have experience with Z-Wave locks?

Also

Aeotec MultiSensor 6, Z-Wave Plus 6-in-1 motion, temperature, humidity, light, UV and vibration sensor
97
Users / Re: hdmi cec control player
« Last post by CentralMedia on November 30, 2018, 02:46:32 am »
When creating the device, I set it to be controlled via HDMI CEC, but in the "Controlled by" section I am only seeing the closet and my Raspberry Pi MD. What are the correct options for controlling a TV and switching its inputs?

I've got the player (a Kodi player) being controlled by HDMI CEC, but it's not switching the input when I click the scenario.
98
All my builds are in Virtual Box.  I have one builder with multiple chroots.  I also have an armhf builder (that I need to resurrect) as I need to get the RPi3+ booting.

I've been using the packages you've built this way for a long time, so I know it works well. I have no problem with VirtualBox at all, although I probably have some bias towards qemu. packer doesn't discriminate about which builder one wants to use; it makes it trivial to add some JSON and use another builder with the same scripts to provision the box, and it has made it very easy to compare build times across the three I have set up so far. In fact I started with VirtualBox, which is the default provider, then Docker, then Qemu. Qemu seemed to compile a couple of hours faster, so I went with that to save time. I do know a couple of things though: compiling in a VM is painful and you need patience.

I also have some armhf boxes and had the idea of doing this build in Docker on them, but that would be back to the pthsem error, I suppose?

I'm very interested in exactly how you've got this setup and launching.

packer provides all the setup for this; Docker is more lightweight, like a chroot rather than a VM, so when the build is finished it is tagged and stored locally in the Docker image repository. packer builds locally, so it can't be done on a remote Docker instance. packer can export Docker images, but I wanted them tagged locally for now.

It's just another 'builder' under packer, using the same provisioning scripts wrapping the buildscripts, with some if statements to do a few things differently in a container rather than a VM.

I will clean up the scripts and upload them to git, because I do think Docker as a build platform would be good if it weren't for pthsem failing; snapshotting and cloning of the image is easy. I could not reproduce the pthsem error under VirtualBox or Qemu.
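To give a rough idea of the shape of it (a cut-down sketch only; the image, repository and script names here are placeholders, not the actual scripts I'll be uploading), the Docker builder sits in one packer template next to the VM builders, the same shell provisioners run the buildscripts, and a post-processor tags the result locally:

    {
      "builders": [
        { "type": "docker", "image": "ubuntu:18.04", "commit": true }
      ],
      "provisioners": [
        { "type": "shell", "scripts": [ "prepare-build-env.sh", "run-lmce-buildscripts.sh" ] }
      ],
      "post-processors": [
        [ { "type": "docker-tag", "repository": "lmce-build", "tag": "bionic" } ]
      ]
    }

A VirtualBox or qemu builder is just another entry in the "builders" list with its own ISO and SSH settings.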

QEmu == Ewww, I can agree with that.  Not sure if you're referring to automated 'modeling'/testing of distributed lmce network or the exact specifics.  All my 'live' testing has so far been done using VMWare or VirtualBox and snapshot systems to install/test/revert/repeat.

Yes, that is what I meant: automatically firing up a LinuxMCE network with 'vagrant up', bringing up a core or hybrid, and even modelling netboot with 20 machines, seems very useful. This can also be done in VirtualBox, which is the default provider for vagrant, but at the moment I am using the libvirt vagrant plugin (qemu).
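Roughly the kind of Vagrantfile I mean (a sketch only; the box name, hostnames and resource sizes are placeholders, and the LinuxMCE provisioning itself is left out), defining a core plus a couple of extra machines on the libvirt provider:

    Vagrant.configure("2") do |config|
      config.vm.box = "generic/ubuntu1804"   # placeholder base box

      config.vm.define "core" do |core|
        core.vm.hostname = "dcerouter"
        core.vm.provider :libvirt do |lv|
          lv.cpus   = 4
          lv.memory = 4096
        end
      end

      # extra machines on the same network to stand in for MDs
      (1..2).each do |i|
        config.vm.define "md#{i}" do |md|
          md.vm.hostname = "md#{i}"
          md.vm.provider :libvirt do |lv|
            lv.cpus   = 2
            lv.memory = 2048
          end
        end
      end
    end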

Add 'NUM_JOBS=XX' to /etc/lmce-build/builder.custom.conf to set the number of cores used during the build.

This is exported through the environment when I execute, either automatically via the nproc command or set manually, but some builds still only use one core. I know this is a tricky problem to solve because some of the builds fail when run in parallel.
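For what it's worth, this is roughly the line I drop in (a guess at the exact form; I'm not certain builder.custom.conf is sourced as shell, so a hard-coded number is the safe fallback):

    # /etc/lmce-build/builder.custom.conf
    NUM_JOBS=$(nproc)   # use every core, if the file is sourced as shell
    # NUM_JOBS=4        # otherwise set a fixed number of jobs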

Using the above will avoid building anything that fails multi-core builds.

^^

This is something that needs to be looked into going forward, our current build system is outdated, but it's not as simple as in many projects due to the database system we rely on.

The build system may be outdated but I cannot criticise it; I have built a distribution from source and know how difficult it is. So much changes between Ubuntu versions that it's hard to keep up with that alone, and the number of packages LinuxMCE glues together is perplexing. Yes, the database is a hurdle for me at the moment and something I can see I'll need to learn to make any progress; I have been reading about sqlCVS but am still very unfamiliar with packages and what needs to be done to make one.

Most everything is sacred but the knx/eib stuff has to work.  Everything we do here is supported and enabled by someone that relies on knx and that must be maintained.  Oh... and VDR too ;)

I can probably help test VDR, but since MythTV is what I have used forever, that is what I have running at the moment, because I couldn't get VDR to work. I guess that is just a matter of being familiar with MythTV and not knowing anything about VDR. And being lazy!

Right now database changes have to be made by one of a few devs, some that haven't been seen for a while.  Changes in mysql forced updates in sqlCVS that seem to have broken anonymous commits/approvals.  I can work with you to get things input if necessary.

I have been reading about how this works on the wiki but, to be honest, I haven't made many changes to the database yet, through not knowing enough. I have experience with making rpm, ebuild and some deb packaging, but how this relates to the database is something I am still learning. I appreciate your continued help in all of this, so thank you very much; it has really helped me understand so much more about the system.

Official builds are all produced on one machine for all i386 and X86_64 builds and all official armhf builds have been made on my armhf builder.  Essentially all our build scripts cater to this primary builder.  I've added lots of speed-ups and I skip many steps in the build process in my chroot environments but that depends on not destroying the environments and knowing how to reset those steps to occur, none of these things are documented anywhere.

I can understand that; the contrast with packer is a faithful build, from provisioning a base OS image for each branch and arch all the way to the debs, so those steps you take and all that knowledge are missed. The little scripts and hacks I have had to make may be useful to you, and it's the same for me. I also notice that by doing the build this way, it is documented in code.

The firewall is severely broken and your best option is to disable it entirely.

Ouch. I have had to do that in Vagrant, but my running network is behind another firewall, so I do not notice this.

Things are pretty quiet but you might try to join #linuxmce-devel on freenode irc.  I try to get on daily and if I'm around then it can be easier to converse and 'brain dump' ;)

I have tried to get on IRC but have had trouble using the service recently, which is one of the reasons I hit the forum and bug tracker. I will try again; the forums are great, but it would be good to chat about the subject with less latency.

Keep having fun!

Cheers :)
99
The way I have built LinuxMCE so far is through packer and vagrant

All my builds are in Virtual Box.  I have one builder with multiple chroots.  I also have an armhf builder (that I need to resurrect) as I need to get the RPi3+ booting.

Docker is probably the fastest method I have tried due to the least overhead

I'm very interested in exactly how you've got this setup and launching.

however compiling pthsem fails in the container.

Mhm.  Yup.

VirtualBox, while it works, is unusable in my opinion for the size of LinuxMCE. Qemu does work, and although it takes a fair few hours to do a complete build on a quad core under KVM, it works great for compiling and for providing a functional test environment in vagrant. To model a LinuxMCE network system, even with OpenGL via virgl and networking between systems, netboot, disked MDs etc., I have found it an invaluable tool for testing LinuxMCE more. I have used lxc in the past but have not tested this path yet; I will do so, as containers are lightweight like a chroot. VMWare and several others are possible.

QEmu == Ewww, I can agree with that.  Not sure if you're referring to automated 'modeling'/testing of distributed lmce network or the exact specifics.  All my 'live' testing has so far been done using VMWare or VirtualBox and snapshot systems to install/test/revert/repeat.

parallel building

Add 'NUM_JOBS=XX' to /etc/lmce-build/builder.custom.conf to set the number of cores used during the build.

There are obviously some packages that will need fixing due to this change too; some packages will fail when make is invoked in parallel. Some linking methods are different (the --as-needed flag, I think), so some libs will need juggling around with the link ordering. This is where I stopped; I reverted it because I knew it would break trusty, and the time was better spent elsewhere.

Using the above will avoid building anything that fails multi-core builds.

So anyway, back to the original answer: a change like this would break pre-bionic builds, and the only way around it I can see is another branch. Knowing how LinuxMCE deals with this would be useful to move forward.

This is something that needs to be looked into going forward, our current build system is outdated, but it's not as simple as in many projects due to the database system we rely on.

Another option was to look at ways to remove pthsem by moving the software that depends on it to more up-to-date replacements, but it looks like it's used by some core things I cannot test due to lack of KNX/EIB hardware, so that isn't an option for me currently.

Most everything is sacred but the knx/eib stuff has to work.  Everything we do here is supported and enabled by someone that relies on knx and that must be maintained.  Oh... and VDR too ;)

can be made. At the moment I am looking at ways to get my database changes in using this method but, I am not going to lie, I am struggling with LinuxMCE, as you can well imagine.

Right now database changes have to be made by one of a few devs, some that haven't been seen for a while.  Changes in mysql forced updates in sqlCVS that seem to have broken anonymous commits/approvals.  I can work with you to get things input if necessary.

I would like to know more about how you work and how this all fits, because I do not want to interfere with the current build methods; I am trying to make this an extension, if you like, a wrapper around the buildscripts. I would welcome any help with this. I don't want to waste your time; the only thing stopping me uploading at the moment is that it is so rough and cuts corners, but I hope to work that out.

Official builds are all produced on one machine for all i386 and X86_64 builds and all official armhf builds have been made on my armhf builder.  Essentially all our build scripts cater to this primary builder.  I've added lots of speed-ups and I skip many steps in the build process in my chroot environments but that depends on not destroying the environments and knowing how to reset those steps to occur, none of these things are documented anywhere.

P.S. While I think of it, I have just managed to install a hybrid in vagrant with the debs from deb.linuxmce.org. I got access to the VM by disabling the firewall in /etc/pluto.conf with a hack in the Vagrantfile, but this is hardly optimal, so I wondered if there is a way to enable ssh access from the CLI after 'apt install lmce-hybrid'. The firewall comes up and blocks access, so I had to disable it, because I don't know how to programmatically make the outside ssh access setting in LinuxMCE stick. Phew. Thanks!

The firewall is severely broken and your best option is to disable it entirely.

Things are pretty quiet but you might try to join #linuxmce-devel on freenode irc.  I try to get on daily and if I'm around then it can be easier to converse and 'brain dump' ;)

Keep having fun!

J.
100
First off, sorry for rambling on; this is almost a brain dump of stuff I've been thinking about...

I can give you an example, though thinking about it more, it could be achieved with two branches. I know more branches add complexity and maintenance, having to backport between them, so having one branch, say 'pre-bionic', and just using master for bionic onwards, would maybe suffice in this particular case.

A little background on my current LinuxMCE build and how this came about.

The way I have built LinuxMCE so far is through packer and vagrant. I am used to building in a chroot and have no problem with that at all, but I thought to extend this further by utilising newer container systems so I can move OS more easily. This led me back to packer and vagrant.
So far I have been able to build LinuxMCE in VirtualBox, Docker and qemu, all through single provisioning scripts wrapping the LinuxMCE build scripts in packer; in theory this could be used to build LinuxMCE in the 'cloud' on whatever provider packer supports.
Granted, I have had to take a lot of shortcuts and do a lot of horrible hacks at the moment, but I have had a few successful builds so far.

Docker is probably the fastest method I have tried, due to the least overhead; however, compiling pthsem fails in the container. VirtualBox, while it works, is unusable in my opinion for the size of LinuxMCE. Qemu does work, and although it takes a fair few hours to do a complete build on a quad core under KVM, it works great for compiling and for providing a functional test environment in vagrant. To model a LinuxMCE network system, even with OpenGL via virgl and networking between systems, netboot, disked MDs etc., I have found it an invaluable tool for testing LinuxMCE more. I have used lxc in the past but have not tested this path yet; I will do so, as containers are lightweight like a chroot. VMWare and several others are possible.

So after that rambling explanation: it's taking hours to build LinuxMCE under qemu, and before I can look at fixing why pthsem fails in Docker, I started looking for ways to reduce build time.
Bionic and greater include debhelper 11. By changing the debhelper requirement in the deb package control file and bumping the compat file to 10, most any build that uses the debhelpers should utilise parallel building. I notice when compiling LinuxMCE that a lot of the builds only use a single core, so bumping the debhelper requirements on bionic and onwards should speed up build times for a lot of packages.
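Roughly what that change looks like per package (illustrative only; the exact version string and rules layout vary from package to package):

    # debian/control -- raise the build dependency
    Build-Depends: debhelper (>= 10)

    # debian/compat
    10

    # debian/rules -- at compat 10, dh passes -jN to make automatically,
    # taking N from DEB_BUILD_OPTIONS=parallel=N (e.g. dpkg-buildpackage -j4)
    #!/usr/bin/make -f
    %:
    	dh $@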

There are obviously some packages that will need fixing due to this change too; some packages will fail when make is invoked in parallel. Some linking methods are different (the --as-needed flag, I think), so some libs will need juggling around with the link ordering. This is where I stopped; I reverted it because I knew it would break trusty, and the time was better spent elsewhere.

So anyway, back to the original answer: a change like this would break pre-bionic builds, and the only way around it I can see is another branch. Knowing how LinuxMCE deals with this would be useful to move forward.

I can understand the development model, and trying not to deviate too much from master across all the branches and having to test across them all.

In the packer/vagrant scripts it would be trivial to select which build OS to use and then specify which target git branch to build. At the moment I am just feeding in my own tree, and I have experimented with different ways of getting the sources into the container, evaluating which methods are faster or more portable across providers.
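For example (hypothetical variable and script names), the relevant fragment of the template can take the branch as a user variable and hand it to the provisioner through the environment:

    {
      "variables": { "git_branch": "master" },
      "provisioners": [
        { "type": "shell",
          "environment_vars": [ "LMCE_BRANCH={{user `git_branch`}}" ],
          "script": "run-lmce-buildscripts.sh" }
      ]
    }

and then something like 'packer build -var git_branch=some-branch lmce.json' selects what gets built.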

This has been a long journey over the last few months, but I would like to work on this path more; it has really helped me being able to test LinuxMCE across the different versions without needing a dedicated box and without touching my working system on 14.04. It came about by looking at ways to make the build faster, because it's a little painful inside a VM right now. I would rather take longer and have the build self-contained like this, though, because it has many benefits: being able to copy the dev environment around, work on a foreign OS, and bring up the image with only a few commands.

Another option was to look at ways to remove pthsem by moving the software that depends on it to more up-to-date replacements, but it looks like it's used by some core things I cannot test due to lack of KNX/EIB hardware, so that isn't an option for me currently.

I'm just thinking of ways to improve LinuxMCE. I know extra branches equal more effort and time is the enemy, but I don't want to break backwards compatibility either; I think you guys and girls have done a stellar job and the current system works great. I hope you don't think I'm poking holes at LinuxMCE; it's just that packer/vagrant does what one would otherwise set up by hand, so by me wrapping this, eventually those "building LinuxMCE" wiki pages could become redundant. It could be codified, and at the end of a couple of packer commands out pops a (HUGE) machine image with the dev environment and built deb packages, and from there even various machine images, like core, hybrid etc., can be made. At the moment I am looking at ways to get my database changes in using this method but, I am not going to lie, I am struggling with LinuxMCE, as you can well imagine.

I would like to know more about how you work and how this all fits, because I do not want to interfere with the current build methods; I am trying to make this an extension, if you like, a wrapper around the buildscripts. I would welcome any help with this. I don't want to waste your time; the only thing stopping me uploading at the moment is that it is so rough and cuts corners, but I hope to work that out.

If you can make sense of these rambling posts and weird bug reports, there is some method to my madness, I hope :)

Cheers.

P.S. While I think of it, I have just managed to install a hybrid in vagrant with the debs from deb.linuxmce.org. I got access to the VM by disabling the firewall in /etc/pluto.conf with a hack in the Vagrantfile, but this is hardly optimal, so I wondered if there is a way to enable ssh access from the CLI after 'apt install lmce-hybrid'. The firewall comes up and blocks access, so I had to disable it, because I don't know how to programmatically make the outside ssh access setting in LinuxMCE stick. Phew. Thanks!