Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - mkbrown69

Pages: 1 ... 10 11 [12] 13 14
Users / Thom, Our Thoughts are with you...
« on: March 29, 2012, 02:56:09 am »
and I am in a critical care ward on death watch for my father.



I think I can safely say that the LMCE community's thoughts are with you and your loved ones.  May you cherish and celebrate the memories of the good times you've spent with your father, and I hope that those memories help bring comfort and peace to you both.  May your faith (or the Force ;-) grant you the strength, courage and peace of mind to face what comes.

Our thoughts and best wishes are with you.  You are not alone.


Installation issues / OT: Thom, our thoughts are with you...
« on: March 29, 2012, 02:51:38 am »
and I am in a critical care ward on death watch for my father.



I think I can safely say that the LMCE community's thoughts are with you and your loved ones.  May you cherish and celebrate the memories of the good times you've spent with your father, and I hope that those memories help bring comfort and peace to you both.  May your faith (or the Force ;-) grant you the strength, courage and peace of mind to face what comes.

Our thoughts and best wishes are with you.  You are not alone.


Users / Re: Myth, VDR, or no TV at all...
« on: March 23, 2012, 06:58:34 pm »
Kicking off the comments discussion: I'm using MythTV in my production set-up.  I'm still playing around with LMCE, so it's running in a separate test environment.  When I'm done testing, and I can sync up the various MythTV levels and do a database export from prod and import to LMCE, I'll flip over to LMCE as the prod environment.  I don't have a lot of free cycles right now with work being busy, family life, and a z/OS course on the side, so I'm not sure when the cut-over will happen.

My Master back-end is running MythTV 0.24.2, and has two PVR-1600 cards in it.  The analog inputs are connected to the CATV line (we still have analog on CATV in Canada), and the two digital inputs are hooked to a home-made 4 bay bow-tie antenna, picking up the local ATSC channels.

I have a disk-less slave back-end/front-end running MiniMyth, with two PVR-150s.  One is hooked to CATV, the other to the Rogers STB.  I have three other MiniMyth disk-less front-ends running.

I'm excited about the release of 0.25, as I see the services API as a stable means of remotely controlling Myth.  0.24 introduced 'events', which 0.25 extends; those could be great integration points into LMCE DCE events.  0.25 also has AirPlay/AirVideo support baked in, the new OSD message class (send caller-ID events discreetly while watching media), plus a whole lot of other bells and whistles.
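For anyone curious what "services API" means in practice: it's plain HTTP against the backend's status port.  A minimal sketch of building a request URL (the hostname is my assumption; 6544 is the usual backend status port, and Myth/GetHostName is one of the simplest 0.25 endpoints):

```python
# Sketch of hitting the MythTV 0.25 Services API over plain HTTP.
# The hostname below is a placeholder; adjust for your own backend.
from urllib.request import urlopen

def service_url(host, endpoint, port=6544):
    """Build a Services API URL, e.g. .../Myth/GetHostName."""
    return "http://%s:%d/%s" % (host, port, endpoint)

url = service_url("mythbackend.local", "Myth/GetHostName")
# Uncomment on a network with a live 0.25 backend:
# print(urlopen(url).read())
```

Anything that can issue an HTTP GET (a DCE device, a shell script, an Orbiter app) can use it, which is why it looks like a stable integration point.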

Have a good one!


Users / Myth, VDR, or no TV at all...
« on: March 23, 2012, 06:24:57 pm »
With the discussion about MythTV 0.25 and its new services API, I'm curious how much each PVR back-end is used.  So, if you're presently using one of the two PVR back-ends supported in LMCE (either in LMCE or separately), why don't you cast a vote, and comment if you want.  Since we don't have a 'popularity contest' capability in LMCE, it could be interesting to see how many people use the TV functionality, and what the levels of usage are for each PVR back-end.



Users / Re: climate question
« on: March 15, 2012, 08:10:30 pm »

Keep in mind, X10 is not the most robust PLC communications protocol out there, and it's not a closed-loop system (meaning that commands are merely tossed onto the wire and are not acknowledged).  So, if your heaters are on and the off command doesn't make it through, things could get quite toasty!  You'll want to make sure you embed retry logic into your events to resend the command periodically, or use a technology that supports ACKing commands and querying status (INSTEON, UPB, Z-Wave, etc.).

Most of the home automation lists I'm on recommend that you NOT control anything with X10 that can burn your house down if the off commands don't make it through; similarly, they don't recommend it for cases where a life could depend on it (your reptile cage, fish tank, etc).
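To make the retry idea concrete, here's a rough sketch.  send_off and read_status are hypothetical stand-ins for whatever your controller interface provides; with plain X10 you'd need a two-way module or a separate sensor to get status back at all, which is exactly why the ACKing technologies are preferred:

```python
import time

def turn_off_with_retries(send_off, read_status, retries=3, delay=2.0):
    """Re-send an OFF command until the device reports off, or give up.

    send_off:    callable that fires the (unacknowledged) X10 OFF command.
    read_status: callable returning True while the device is still on.
    """
    for _ in range(retries):
        send_off()
        time.sleep(delay)          # give the powerline traffic time to settle
        if not read_status():
            return True            # device confirmed off
    return False                   # never confirmed: raise an alert, cut power, etc.
```

The important part is the False branch: for a heater, "I sent the command" is not the same as "the load is off", so something should escalate when the retries run out.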



Users / Re: physical address extension
« on: March 02, 2012, 05:28:53 am »
It has been my experience that doing what you suggest is a bad idea, I tried it once and had very undesirable results.


It's been running smoothly as a 64-bit kernel on a Debian 32-bit userland for 3 years now.  I've been running KVM-based virtuals on the same system for almost 2 years.  I was suggesting the 64-bit kernel as it would help you make use of your extra memory, which is one of the reasons I upgraded to 64-bit myself.  The trick is to use the 64-bit packaged kernel from the 32-bit architecture stream, which Debian makes easy to do.

Now that I've dug a little deeper in apt on my core, I'm seeing that Ubuntu (10.04) only packages i386 and generic (which equates to i686), plus a generic-pae.  They also have their server kernel, but I'm not sure it's 64-bit (I would presume not), and you'd lose drivers for consumer hardware.  So, getting a 64-bit kernel on the 32-bit architecture stream would mean installing the 64-bit stream's packages on 32-bit, which would likely cause serious breakage.

So, the easiest approach on Ubuntu would be to install the generic-pae kernel if it exists in 8.10 (apt-cache search linux-image | grep pae).  Don't install the server kernel; it'll be missing a lot of drivers for commodity hardware.  Whether or not you lose hardware acceleration will depend on your graphics card and the quality of its drivers.  You'll take a minor performance hit for the memory paging, but it should be negligible.

One thing to consider is what the box is doing when the memory is being used.  'free' and 'top' will tell you a lot there.  If you have lots cached and you're not swapping, you're in a good place.  Modern kernels use free memory for the file-system cache, and release it to apps as needed.  Here's a funny one for ya: Windows can run faster virtualized on Linux than directly on the bare metal, because the Linux FS cache cranks up the I/O speeds, where Windows typically has some trouble.
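To illustrate the "cached memory is really available" point, here's a small sketch that reads the figures the way 'free' does, run against a canned /proc/meminfo excerpt (the numbers are made up; on a real box you'd read the actual file):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style lines into a dict of kB values."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])
    return fields

# Canned excerpt; on a real system use open("/proc/meminfo").read()
sample = """MemTotal:        4048576 kB
MemFree:          204800 kB
Buffers:          102400 kB
Cached:          1843200 kB
SwapTotal:       2097152 kB
SwapFree:        2097152 kB"""

m = parse_meminfo(sample)
# What the kernel can hand back to applications on demand:
available = m["MemFree"] + m["Buffers"] + m["Cached"]
swapping = (m["SwapTotal"] - m["SwapFree"]) > 0
```

In this (made-up) example the box looks nearly out of memory by MemFree alone, but with buffers and cache counted it has over 2 GB reclaimable and no swap in use, i.e. it's in a good place.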

My comments are given as someone who administers Linux systems on many different hardware platforms, architectures and hypervisors at work.  Hope it helps!


Users / Re: physical address extension
« on: March 01, 2012, 06:38:17 pm »
Or, you could install a 64-bit kernel.  My virtual host started life many moons ago as a Debian 3 32-bit system running on a Pentium 166.  It's received a few hardware and in-place OS upgrades along the way, and is running a 64-bit kernel with a 32-bit userland.  It runs fine, and I'm running a mixture of 64 and 32-bit virtuals.

So, another option for you to consider...



Users / Re: 'Blank' passwords
« on: February 23, 2012, 05:59:45 pm »

Look into the package dbconfig-common. It's the means for creating database users in a manageable way using package mechanisms.

From the apt description...

Description: common framework for packaging database applications
 This package presents a policy and implementation for managing various databases used by applications included in Debian packages.
 It can:
  - support MySQL, PostgreSQL, and sqlite based applications;
  - create or remove databases and database users;
  - access local or remote databases;
  - upgrade/modify databases when upstream changes database structure;
  - generate config files in many formats with the database info;
  - import configs from packages previously managing databases on their own;
  - prompt users with a set of normalized, pre-translated questions;
  - handle failures gracefully, with an option to retry;
  - do all the hard work automatically;
  - work for package maintainers with little effort on their part;
  - work for local admins with little effort on their part;
  - comply with an agreed upon set of standards for behavior;
  - do absolutely nothing if that is the whim of the local admin;
  - perform all operations from within the standard flow of package management (no additional skill is required of the local admin).

That's probably the best way forward.  It's what Debian and MythBuntu use for MythTV/MySQL database management.  I too would like to see the security on the DB users tightened up, but I'm busy with a z/OS course for work which is eating up my spare time...

Hope that helps!


Installation issues / Re: Core as a VM on a mac OSX server
« on: February 15, 2012, 06:20:02 pm »

While I'm not "WhateverFits", I would recommend NOT using VMware Player.  Because it sits on top of an existing OS, there's a lot of overhead involved in trying to run LMCE that way.  ESXi would be a better fit, as it's a bare-metal (Type 1) hypervisor, with much less overhead involved.

Personally, I'm running an existing Linux server as a KVM virtual host.  I have two NICs on the physical box, bridged to separate physical and virtualized networks: br_ext is my existing production environment, and br_int is the LMCE .80.0/24 network.  My LMCE test instance is a virtual with 2 vCPUs and 1.7G of RAM.  LMCE owns the internal network on its virtual eth1, bridged to br_int, and it is dual-homed (meaning that eth0 in LMCE is attached to br_ext, connected to my WAN router and the rest of the world).  I have a wireless router on the external net and a separate one on the internal net, letting me flip between the existing non-LMCE production network and my LMCE testing network.

As I complete some migration and proof-of-concept activities, I'm migrating services into the LMCE environment without affecting my existing prod environment.  When I'm fully ready, I'll be able to cut over to LMCE as the production environment with minimal downtime, as all the backend work will have been completed.  It's not at all obvious that I'm a SysAdmin by day, eh?   ;)
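For reference, the two-bridge layout described above looks roughly like this in Debian's /etc/network/interfaces (interface names and addresses here are my assumptions, not a drop-in config; bridge_ports needs the bridge-utils package):

```
# /etc/network/interfaces sketch: two bridges over two physical NICs.
# br_ext carries the existing production LAN; br_int is the LMCE
# .80.0/24 network.  KVM guests attach their virtual NICs to either.
auto br_ext
iface br_ext inet dhcp
    bridge_ports eth0
    bridge_stp off

auto br_int
iface br_int inet static
    address 192.168.80.2
    netmask 255.255.255.0
    bridge_ports eth1
    bridge_stp off
```

With that in place, the LMCE guest just gets one virtual NIC on each bridge and behaves exactly like a dual-homed physical core.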

Hopefully this gives you some food for thought.


Plus, the only system that I DO have with an nVidia card can't work with my Hauppauge cards (1600s), due to a (known) hardware conflict.

I have nVidia working with two HVR-1600s.  You just need to change the vmalloc parameter in the kernel boot line; Google "nvidia hvr-1600 boot vmalloc" for the details.

I'd set mine to 256M.
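For the record, the change amounts to appending a vmalloc= setting to the kernel line.  On a GRUB-legacy system of that era it would look something like this (the kernel version, root device and other options are placeholders; your line will differ):

```
# /boot/grub/menu.lst -- append vmalloc=256M to the kernel line:
kernel /boot/vmlinuz-2.6.27-generic root=/dev/sda1 ro quiet vmalloc=256M
```

Reboot afterwards, and check it took effect with `cat /proc/cmdline`.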



Users / Re: 1wire support
« on: February 07, 2012, 10:33:12 pm »

To check for it from command-line, on the core issue

dpkg --get-selections | grep libowfs

If you get nothing from that, issue

sudo apt-get install libowfs27



Users / Re: Insteon 2412U
« on: February 03, 2012, 06:36:16 pm »
From what I can tell, scenes are set up directly through LinuxMCE.  It takes the place of the Insteon software, so scenes will be defined within the web interface of LinuxMCE.


I'm more familiar with using Insteon via Misterhouse, as that's my present production environment.  With Insteon, you can have on-device scenes, where multiple devices are linked and the scene is executed with one command that is acted on by all devices simultaneously.  There were past comments on the LMCE forums that scenes for Insteon devices were simply LMCE sending individual commands to each device, which puts more traffic on the Insteon network in the case of complex scenes.  So, my inquiry was more along the lines of "is that still the case?"  I had seen comments about someone writing a C device driver, but nothing appeared in the forums or Trac to that effect.

Having scene support in LMCE where the scenes are downloaded to the devices and then executed by a single command would ensure faster response times and less traffic on the Insteon network, and would give a fail-safe capability in case the LMCE core was down, so that lighting works the same with or without automation (a major WAF sticking point).

Not sure if you're aware of this little Insteon gem or not...

Thanks for the reply, and hope that helps!


Users / Re: Insteon 2412U
« on: February 03, 2012, 04:09:21 am »

Since you're apparently familiar with the PLM code, could you tell me/us if it presently supports Insteon scenes?  If so, would it support creating a scene in LinuxMCE that gets pushed into the devices, or merely triggering a scene that already exists in the devices (e.g. manually linked scenes)?



Users / Re: 1wire support
« on: January 23, 2012, 09:27:41 pm »

You're not alone ;-)

I've been wondering too, mostly because I'm looking to implement 1-wire, but on a remote host.  OWFS supports that, but it looks like the 2161 template was done to configure owserver for a serial-port adapter like a 9097U.  I've been wondering whether I should create a new 1-wire template, or modify the 2161 to support the other methods.

Would someone more knowledgeable about the 1-wire template like to suggest a course of action for USB and remote 1-wire support?



Feature requests & roadmap / Re: Monitoring
« on: January 23, 2012, 02:07:40 am »

What I think Thom is getting at is that if you're going to champion Cacti integration, you'll have to extend Cacti a bit to send DCE events, so that DataLogger can record them and the rest of LMCE can act on them.  So, instead of Cacti sending an e-mail or SMS when a threshold is exceeded, it would use MessageSend to emit an LMCE event, which DataLogger would record and the system could act on if a scenario was defined.  Like so...

Power usage goes over 5 kW, Cacti generates a MessageSend to LMCE, DataLogger records the threshold-exceeded event, and an "excessive power use" scenario is triggered, which broadcasts a message to all Orbiters and plays a "cha-ching" sound over all audio devices.

So, it means some work extending Cacti, some work extending DataLogger, and some work figuring out events; plus doing all that in such a way that someone else can extend your work to graph other things (soil-moisture readings, humidity, Squid proxy results for domains, etc.).  It would also probably involve integrating Cacti's web GUI into the LMCE web GUI, using LMCE-defined users.

I agree with you that Cacti and RRDs are great for recording some types of data, like multiple temperature sensors.  Assuming OWFS as the sensing infrastructure, it samples each sensor every 10 seconds by default, which works out to 8640 samples per sensor per day.  Over time those samples can be averaged, because the further they are from the present, the less need there is for precision (which is exactly what RRDs do).  Cacti is also great at dealing with all the noise that syslog can generate, and the same goes for other types of repetitively sampled data, where the need for precision becomes less important over time.
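The consolidation idea is easy to see in miniature: keep raw samples for the recent window, and store only bucket averages further back.  This is a toy sketch of what an RRD's AVERAGE consolidation does, not the rrdtool format itself:

```python
def consolidate(samples, bucket):
    """Average consecutive groups of `bucket` raw samples into one point."""
    return [sum(samples[i:i + bucket]) / bucket
            for i in range(0, len(samples) - bucket + 1, bucket)]

# One sensor at a 10-second step: 86400 / 10 = 8640 samples per day.
step = 10
samples_per_day = 86400 // step            # 8640

# A day of 10 s readings collapses into 144 ten-minute averages:
day = [20.0] * samples_per_day             # constant 20.0 C, for illustration
ten_minute = consolidate(day, 60)          # 60 samples x 10 s = 10 minutes
```

So yesterday costs 8640 points at full precision, but a month ago can cost 144 points per day, which is why an RRD's storage stays fixed no matter how long it runs.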

Where I think DataLogger integration could really shine is in correlating events from multiple subsystems into a "timeline of events".  It's the concept of federated data: one overseer of other, more specialized reporting/monitoring systems.  It could be useful for debugging complex scenarios (this motion event triggered that lighting event when this other condition existed) as well as for security auditing (think alarm-system event log), telephony logging, MythTV events, etc.  It would be especially nice if the DataLogger events could be viewed from an Orbiter.  What would be really cool is heuristics to mine the DataLogger for patterns: think a "vacation mode" that operates lights based on the inhabitants' past behaviours.

Just my $0.02 CDN, as someone who's presently dealing with federating configuration and compliance management systems at work (in addition to other things).

