Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - mkbrown69

Installation issues / Re: Installation hang on second step. Please help))
« on: October 26, 2011, 07:33:59 pm »
I started the install again, monitoring the file /etc/apt/sources.list and LaunchManager.progress.log.
When it showed
        Downloading radvd - IPv6 Router Advertisement Daemon...
        Failed to get radvd - IPv6 Router Advertisement Daemon

the lucid entry had been added to sources.list.
I edited this file and removed the lucid line.
Now the installation is finished, and it shows Sara.

radvd is also affecting 10.04 installs; it may be an upstream issue.
,11988.msg84827.html#msg84827
,11988.msg84895.html#msg84895

Hope that helps!


Installation issues / Re: New 1004 Installer Testing
« on: October 24, 2011, 07:06:35 pm »
Oops, maybe I spoke too soon.
I ran apt-get update/upgrade today on the virtual core and I'm getting an error installing radvd: error parsing the config file.


I had the same issue.  I solved it by copying /usr/share/doc/radvd/examples/simple-radvd.conf to /etc/radvd.conf, and then running apt-get install to resume the upgrade.  I concur with Murdock that it's likely an upstream issue.  The radvd.conf file installed by apt had two blank lines in it, which is why apt threw an error when trying to restart the service.
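The recovery steps above can be sketched as shell commands (the paths are as recalled in the post, which later notes they are from memory, so verify them locally first):

```shell
# Replace the broken radvd.conf with the shipped example, then let
# apt finish configuring packages.  Paths are as recalled in the post;
# check they exist on your system before running this as root.
example=/usr/share/doc/radvd/examples/simple-radvd.conf
if [ -f "$example" ]; then
    cp "$example" /etc/radvd.conf
    apt-get -f install   # resume the interrupted upgrade (the post used plain apt-get install)
else
    echo "example config not found: $example" >&2
fi
```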


P.S.  I'm not typing this from home; The path is from memory, so it'll get you in the ballpark if it's not quite right! ;-)

P.P.S. I'm running LMCE in a KVM-based VM, installed by the new installer.  The core installed fine; I'm having issues with MD's.  Neither has a keyboard attached normally.  One is an Intel Atom 330 with Intel graphics (X fails to configure); the second is an M2NPV-VM using component out at 720p (which doesn't initialize).  I expect the M2NPV-VM is fine, and I just need to use the keyboard to pick the output and resolution.  I'll see if I can work it out with the Atom/Intel graphics, and submit a ticket/patches when I get some free time.  Not much of that when you have kids ;-)

Developers / Re: The Vision
« on: October 12, 2011, 05:51:32 am »
Just some thoughts to share with respect to golgoj4's post...

What is LinuxMCE now, and what is it meant to become?  What doesn't it do that it should, what does it do that it shouldn't, and what will it never do?  That's where Vision comes into play... Vision provides scope and direction, which are really important when developers (and developers' time) are limited.  It also affects decisions about architecture and infrastructure, frameworks and integration points...

For example, the requests forum is a real smorgasbord of people's wants and desires for LinuxMCE.  What makes it in, and what doesn't?  The usual answer is that those who write the code can put it in; but does it always make sense to put something in just because you can?  Many of the requests are for portal and app-server functionality, like the stuff Amahi does.  Is that a space LinuxMCE wants to, or should, be playing in?  If so, what framework(s) should be used or re-used to provide that functionality, to maximize the features delivered versus the time invested by developers?  Another example would be Dianemo using MythTV and saying that it provides everything that VDR could do.  Does that mean VDR support is a duplication of effort?  Should it be deprecated/removed for 10.04?  (Note: I'm not saying that, but with a common vision and direction, these kinds of things have to be examined and decided upon.)  Vision provides scope and direction, which influence quality, time and effort.

I bring this up only because one of my roles in my day job is to encourage the DBAs and app admins I support to consider the full life-cycle of the services they support, both for the here and now and for the end state, because the two affect one another.  The end state (the vision) affects the here and now, and attitudes towards the here and now (scope and direction, and consequently time, effort, and quality) affect the end state.  I say this as someone who inherits the "lost turds": the services that were rushed out to meet the immediate needs of the here and now, with no consideration for the long-term life-cycle needs of that vital service; specifically, how to keep it alive and well, and evolve it as needed.

LinuxMCE is different from most projects or applications, and not just because of its complexity.  When fully integrated into a home environment, it becomes a daily-use tool, one that people can literally grow up with; in some cases, children may become teenagers or adults not remembering a time before LinuxMCE was a part of their lives.  That's why Vision is so important... As a collection of open-source components, LinuxMCE's foundations have a life of their own, which will drive the evolution of LinuxMCE to a large extent.  As the LinuxMCE project itself grows and evolves, more people will come along to try to help out in some way.  With a shared collective vision and direction, it becomes easier to rally the troops and make strategic choices; without one, it may be like herding cats...

I'm going to duck back out now and continue to examine the plumbing in 10.04... still got a lot to figure out before I can hope to make a contribution, but I hope to make a dent in posde's points 1 & 2 eventually...


Users / Re: LMCE newb... what do I need?
« on: September 09, 2011, 06:44:03 pm »
Any good docs you could point us to?
Currently I am running KVM on a Proxmox VE server.
I do have an 80GB SSD I could add for MySQL use; any hints on that?


I think I'll write up a wiki page of tips and tricks for virtualization that are specific to LMCE, so it doesn't get lost in this thread.  Mostly, it's paying attention to a whole bunch of little details, where the aggregate causes a significant performance boost.

Proxmox VE is a Debian-based distro with KVM and its own Perl-based management GUI.  I haven't used it myself other than quickly trying it out, but I believe it offers LVM as an option for local storage.  If that's the case, then what you might want to do is carve off a 20G LVM slice of the SSD as your LMCE boot/root disk, and mount other spindle-based slices as /var/log, /tmp and /home.  That way, you're placing your I/O workload onto hardware that lends itself well to the types of I/O that will be hitting it.  I think you'll find Orbiter regens will just fly!  It'll take some work to do it that way; you'll either have to install from scratch to the SSD, doing a manual partitioning of the virtual disks, or restore a Clonezilla backup to the SSD, mount the spindle slices on temporary mount points, rsync the data over (then delete the original copy), and pivot the disks into their final mount points.  There are some SSD-specific optimizations that can be done in the guest OS, and the disk I/O schedulers need to be disabled for the virtual disks.
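As an illustration only, a guest /etc/fstab for that split might look something like this (device names and mount options are hypothetical, not from the post):

```
# /etc/fstab sketch: SSD-backed root, spindle-backed data mounts.
# vda = the SSD LVM slice; vdb-vdd = spindle-backed slices (names assumed).
/dev/vda1  /         ext4  noatime,discard  0  1
/dev/vdb1  /var/log  ext4  noatime          0  2
/dev/vdc1  /tmp      ext4  noatime          0  2
/dev/vdd1  /home     ext4  defaults         0  2
```

The idea is simply that the write-heavy, sequential paths (/var/log, /tmp, /home) land on spinning disks, while the random-I/O root and database workload sits on the SSD.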

Basically, the kernel disk drivers assume they're writing to real disks, so they queue up and re-order operations to take advantage of where the heads are over the disk platters.  The host OS is already doing this, so we don't need the guest OS doing it too, because it will simply be working at odds with what the host (which actually controls access to the disks) is doing.  So, we add "elevator=noop" to the kernel boot parameters of the guest.  Where to do that varies between grub and grub2, so I'll leave it as an exercise for the wiki.  You can change it on the fly with echo noop > /sys/block/[s,v]d[a-z]/queue/scheduler, and you can cat that file to see which scheduler (shown in brackets) is presently selected.  You can make the change on a per-disk basis in /etc/rc.local, by echoing the appropriate scheduler to the appropriate disk.  SSDs like the deadline scheduler better, as it round-robins all the processes seeking I/O time.
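The advice above can be sketched as a small shell helper; the mapping function and the example device name are illustrative, not from the post:

```shell
# recommend_scheduler: map the post's advice to an elevator name.
# Virtual disks in a guest get noop (the host already re-orders I/O),
# SSDs get deadline; anything else keeps cfq, the old kernel default.
recommend_scheduler() {
    case "$1" in
        guest) echo noop ;;
        ssd)   echo deadline ;;
        *)     echo cfq ;;
    esac
}

# At runtime (as root, inside the guest), e.g. for /dev/vda:
#   echo "$(recommend_scheduler guest)" > /sys/block/vda/queue/scheduler
# Check the active scheduler (the one shown in brackets):
#   cat /sys/block/vda/queue/scheduler
recommend_scheduler guest   # prints: noop
```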

You also want to configure the device on the virtualisation host as a "virtual disk" rather than an emulated IDE or SCSI device, as that will leverage the paravirtualized VirtIO drivers.  You'll want to configure the network adapter as a VirtIO device as well; the drivers for both network and disk are included in the 8.10 and 10.04 kernels.  VirtIO gives a huge performance boost (near native, 95-99% of the physical hardware), as the hypervisor doesn't have to emulate various hardware registers in software.  Networking between VM's using VirtIO is simply a memory-to-memory copy, which happens orders of magnitude faster in RAM than at wire speed.
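In libvirt terms, the domain XML for a VirtIO disk and NIC might contain fragments like these (the source path and bridge name are placeholders, and cache='none' is an assumption about the exact caching setting, not taken from the post):

```xml
<!-- Sketch of libvirt domain XML fragments: VirtIO disk and NIC. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>  <!-- avoid double caching -->
  <source dev='/dev/vg0/lmce-root'/>
  <target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
  <source bridge='br_int'/>
  <model type='virtio'/>
</interface>
```

The bus='virtio' and model type='virtio' attributes are what select the paravirtualized drivers instead of emulated IDE/e1000 hardware.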

There are some other parameters I've put in my libvirt config files to disable caching, mount point options in the guest OS's to optimize for the underlying slices, and some other application specific tweaks, so I'll go through my stuff at home and make a proper wiki page for a virtual LMCE core.  I've got some half baked ideas for  some infrastructure work on LMCE that will do some auto-detection of the underlying core and md hardware (physical and virtual), and will make optimizations based on what it finds.  It's something I'm already working on at the day job, so I'll need to work up a proof-of-concept at home for how I can abuse it for use in LMCE.  It's going to take a while to get there, as I have to poke around under the hood in LMCE to see how things are working presently, and how (and when) this new infrastructure could be integrated non-disruptively.

Hope that helps!


Users / Re: LMCE newb... what do I need?
« on: September 08, 2011, 09:46:25 pm »
I have found VM's work fine for testing both core and MD's, but as far as a production system goes, it don't cut the mustard.

Do you guys have stock in your electric companies ;)
You can run lmce on a 35 watt core

just my 2cents


In my day job, 95% of what I work on is virtualized, and I work with 4 different hypervisors on three different hardware platforms.  The stuff that doesn't get virtualized is the stuff that will keep a 32-core box running flat out all the time on its own.  VM's run fine when the OS is tweaked to use paravirtualized drivers and some of the default behaviours of its I/O are changed.  It's also helpful to have an end-to-end understanding of the hosting hardware platform, the hypervisor and its various schedulers, and the underlying infrastructure like SAN and network, plus where you need to tweak under the hood to optimize for the workload.  At work we regularly get 80-100 VM's onto a big honkin' server, and the clients don't know the OS instance is virtualized.

As for my home system, I'm running LMCE plus an average of 5 other VM's on a 45W dual-core CPU.  LMCE (8.10) actually places the highest load on the system, in part due to the age of the virtualized drivers and the kernel itself, plus inefficiencies of ext3 filesystems running in virtuals.  10.04 with ext4 file systems and VirtIO drivers plays a lot nicer in a virtualized environment.  I'm actually trying to avoid having to upgrade the CPU, but if I end up running Windows as a virtual I'll have no choice; Windows (even 7) takes up way more resources than an equivalent Linux install.  Energy-efficient CPUs in Socket AM3 are getting harder to find unless you special-order them from NewEgg or something like that...

LMCE Core (1vCPU, 1.7G RAM)
Zarafa Mail Server (1vCPU 1.5G RAM)
Misterhouse Home Automation (1vCPU 512M RAM)
Astaro VPN endpoint (1vCPU 512M RAM)
Ubuntu Virtual Hosted Desktop (1vCPU 784M RAM)
LMCE MD and other test VM's (1 vCPU and various RAM sizes, keeping under 5G total to leave a gig for the host OS)

Plus MythTV and other external network services that are running on the host OS.

At this point in time, I'm more I/O bound than CPU bound, but less so since I added a 40G SSD, which I've carved up through LVM and presented as separate disks to the instances running MySQL.  It gets mounted inside the guest at /var/lib/mysql, and the db files sit on it.  I'm seeing ~1400 IOPS using the VirtIO drivers in the guest to the LVM'd SSD, compared to ~100 IOPS on my RAID-1 set on the host, using the Oracle Orion test tool.  Disks are better for sequential I/O (like media files and logging), and SSDs are better for random I/O (like OS drives and databases).  Using Orion on the raw SSD block device from the host nets me ~25K IOPS (avoiding the FS and the FS cache).

One thing that tends to foul up most people using VM's is throwing more vCPU's at an instance to improve performance.  More often than not, more vCPU's will hobble you: the hypervisor has to find as many free cores as the guest has vCPU's configured, all at the same time, before it will schedule workload on the cores.  So, on a dual-core host, a VM with 2 vCPU's needs both cores free _at the same time_ before the hypervisor's scheduler will dispatch the guest onto them.  Host processes will be in competition with the guest for CPU time, as those usually get dispatched individually onto cores.  If you have more than one 2-vCPU guest, they start to contend for CPU time, with each other and with the other guests.  If they have heavy I/O, then the host is competing with the guests for CPU time in order to perform the I/O, and all are starved out as a result.  Then people complain that virtualization sucks...  ;)

Food for thought...


Users / Re: LMCE newb... what do I need?
« on: September 08, 2011, 07:04:16 pm »
I have a 20-bay chassis with (10) 1TB drives in 2 RAID sets with a global hotspare for a total of 7TB of usable storage running FreeNAS.

I run a 2-node vSphere cluster with twin IBM xSeries 3500's connected via iSCSI from FreeNAS, each with 8GB of RAM.  I've been thinking about cutting my cluster down to a single node and either using the other one for a core server or getting rid of it altogether (it's very loud and consumes a lot of power).

I also run a decent Dell PC as an Untangled router...

All of this is connected to a 48-port managed gig switch capable of VLAN tagging (which currently separates my storage network from my internet accessible network, as well as my ESXi management networks... routed via router on a stick with a Cisco 2651XM router)

If you're looking to shrink down your infrastructure, you might want to check out, and specifically the PDF.  Basically, it's an ESXi host running a Solaris-based NAS/SAN appliance.  You can use RDM, or, if your hardware supports VT-d or IOMMU, you can pass the HBA through into the guest.  Assuming you're running ZFS, you may even be able to export your ZFS pools from FreeNAS and import them to the Solaris-based products.  You definitely cannot do vMotion if you go the all-in-one approach, or if you do any PCI pass-through of tuner cards or other host-based devices.

LMCE runs fine as a VM; I'm running it as a KVM VM on my Linux virtualisation host.  I'm going to play with a virtual MD for some testing and infrastructure work I'm looking at doing.  I've got an Athlon X2 5050e 2.6GHz dual core running 6 VM's on 6GB of memory, and my load averages are 0.5 to 1.5 the majority of the time (all Linux VM's).  I'll have to bump it up CPU-wise if I implement the Win7 Virtual Hosted Desktop I'm thinking of doing... (I like having my own private cloud.)

I have two physical NIC's, as recommended by the LMCE configuration architecture.  eth0 is the existing home "production" network, and eth1 is the LMCE managed network.  I've bridged them internally, eth0 to br_ext, and eth1 to br_int, and I have VM's configured to attach to the appropriate network (think vSwitched internal networks bridged to external physical networks).  My mail server VM, Misterhouse VM, and host-based DNS/DHCP/NFS/tftp/MythTV services are connected to the br_ext network, for my existing prod environment, and the LMCE dcerouter/core is dual-homed for external access, but it owns the br_int network.  You could implement something similar using vSwitches and partition your switch or use VLAN tagging to separate the network environments.  It's best to let LMCE manage DNS and DHCP for the LMCE network (the net), so that the PnP and auto-configuration stuff works automagically.  If you want to get really fancy, your WAN connection could connect to the Untangle box running as a VM by a physical NIC, have a second virtual interface connected to a vSwitch, which your internal environment gets fed from.  I'll have to start a user page with a picture to show my network architecture... I have a few friends looking to do the same...
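On a Debian-style host, the bridge setup described above might look roughly like this in /etc/network/interfaces (bridge and interface names are from the post; the addressing method is a placeholder, not from the post):

```
# /etc/network/interfaces sketch (requires the bridge-utils package).
# br_ext bridges eth0, the existing "production" LAN;
# br_int bridges eth1, the LMCE-managed network.
auto br_ext
iface br_ext inet dhcp
    bridge_ports eth0

auto br_int
iface br_int inet manual
    bridge_ports eth1
```

VM's then attach to br_ext or br_int in their libvirt configs, which is what gives the vSwitch-like behaviour of internal networks bridged to external physical networks.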

I've settled on Insteon for my home automation protocol but don't know what all I need to get started using insteon with LMCE.  Do I need to get an Insteon starter pack with access points (bridges) or a controller to connect to LMCE?  Or can I simply purchase insteon compatible switches/dimmers and connect the PLC to LMCE via USB?

I'm running Insteon at home myself, under Misterhouse presently.  There are some GSD drivers in LMCE, but I haven't been able to get them to work reliably with my setup.  Someone is working on proper C++ drivers with support for Insteon groups, scenes, and link management, so I'll try again when those drivers are ready.  You'll need a PLM (USB or serial) to attach to the computer, and you can use dual-band devices on different legs or phases to bridge the Insteon signals between your A and B legs (rather than Access Points).  The USB PLM merely has an FTDI serial-to-USB chip built in, so it'll show up as a serial port at /dev/ttyUSBx.  It's important to get a good-quality Insteon network going first; there are lots of "signal suckers" for Insteon/X10 PLC signals (like UPS's, PC power supplies, phone chargers, etc.).  You'll likely need to put those on signal filters to improve the reliability of the Insteon network.  Dual-band devices are also useful for network integrity; at minimum you'll need two, but they can be helpful on more problematic circuits, so more than two won't be a waste.  Four seems to be a common number.  If you have arc-fault circuit breakers, Insteon PLC signals won't pass through the breaker, so dual-band devices are useful for bridging the comms from the rest of the house onto the arc-fault protected circuit.
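To confirm where the PLM landed once it's plugged in, a quick check like this lists any ttyUSB serial ports (a generic sketch, not from the post; machines without one get a notice instead):

```shell
# find_plm_ports: list candidate serial ports for a USB PLM.
# The FTDI chip registers as /dev/ttyUSBx; prints a notice if none exist.
find_plm_ports() {
    ls /dev/ttyUSB* 2>/dev/null || echo "no ttyUSB devices present"
}
find_plm_ports
```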

Hope that helps!


Developers / Re: So, what comes next after 8.10 GA?
« on: September 07, 2011, 04:23:26 am »
You want to know if there is an upgrade path?

I think it will be a reinstall, as there is no manpower to focus on an upgrade.
It is possible to do it, but it is a bit of a hassle and you are a bit on your own.


My question wasn't so much about _if_ there is an upgrade path (because I know there isn't right now).  It was more about whether there is a desire or intention to offer an upgrade path from 8.10, or whether 8.10 will be a frozen point-in-time implementation, with all hands on deck working on a _clean_ 10.04-based install.

I'm just interested in knowing what the intentions of the core devs are, so I can work on some stuff that could be used in 8.10, but will more likely come to fruition in 10.04.  There's no point in working on 8.10 if the core devs want a clean break from the legacy OS to head off application-stack issues, for example.  I'm running my LMCE in a hybrid virtualized/physical test environment, so it's no skin off my nose to blow it away and rebuild on 10.04 if there's no intention of offering an in-place upgrade path.  I also don't want to waste the devs' time by filing bug reports/patches that they may have no intention of addressing.

Thanks for your time!


Developers / So, what comes next after 8.10 GA?
« on: September 06, 2011, 04:27:23 am »
This is a followup to my post on the user's forum...,11889.msg83738.html#msg83738

I'm wondering if there's a roadmap/game plan as to what happens next once 8.10 goes out the door...

In the immediate near future, is the plan for an in-place upgrade from 8.10 to 10.04, or will it be "a blow it away and reinstall" like the transition from 7.10 to 8.10?  Given that I have young kids and a family with a busy extracurricular schedule, I'm looking for some guidance as to where I can best direct my limited time and efforts to provide some useful results.



Feature requests & roadmap / Re: Monitoring
« on: September 06, 2011, 04:04:35 am »
Not to throw gas on a fire, but here's an example of Cacti power and temperature graphs feeding into Misterhouse home automation.  Marc's been around MH and Linux for a while...

As Hari suggests, you might want to check out the data-logger functionality that's presently built into LMCE, and compare and contrast it against MRTG/Cacti.  It might be beneficial to do a requirements and capability analysis of the three, since logging and graphing can form a crucial part of the system, and will likely serve a data-warehousing function for the many people who want historical data for a reason.  I'm speaking as someone who's been using Misterhouse since 2004 to record internal house temperatures, and I have the RRDs to prove it!

I've used SAR, CollectD, nmon/topaz/ganglia, RRDtool with Perl, Tivoli, and so on...  There are lots of great tools out there; in this case, I think what's most important is that the logging, data warehousing, and graphing are all like Lego blocks: parts that can be re-used as common infrastructure consistently throughout the LMCE architecture.

My $0.02 CDN before HST...


Users / Re: OMG, why do you make this so difficult?
« on: September 06, 2011, 03:36:22 am »
I'm going to contribute some thoughts for discussion, from the viewpoint of someone looking in from the periphery of the project.  I think one factor in the recurrence of people coming forth to offer their services as a project-manager type may stem from the appearance that the project doesn't have a sense of direction.  Some end users might see some of the threads in the forums, and the resultant activities in Trac and SVN, as a sign that the project is simply a bunch of people throwing code they like into a domotics system, and that the project is a barely contained anarchy, ruled by a Darwinistic approach of "he who writes the code wins".

Now, some may find that offensive, but it's not intended to be; it's intended to portray the possible viewpoint of someone who's seen the various YouTube videos, come by the community, and started looking around.  I've been following the project for a couple of years now, so I've seen more of the evolution of the project and the dynamics involved, and I know things are more complex than that, but I'm trying to convey the views of newbies coming across the project.  If someone is going to commit their time to rolling out LMCE to control their home, their safe and comfortable castle, they may feel more comfortable knowing that it's going somewhere and is going to be around for the foreseeable future.  So, if they're excited enough about the system and the project, they volunteer their services so that they can guarantee that both will be around.

Because of this perception that the project lacks direction, it appears there may be a deficiency in the project's communication plans, one that can be easily remedied by the same refresh activities going on with the wiki and the website.  Maybe an "officially documented" steering committee and processes can be put into the wiki, detailing who's governing the project and what the goals are for the next release.  This may exist informally already, in IRC discussions for example, but if it were put into the wiki with some kind of roadmap, that may also help those who wish to contribute: something along the lines of pain points, features intended for the next release, platform and infrastructure changes or directions, that kind of thing.  That way, those who wish to contribute can work in the same direction as the core devs, rather than working on code which may be at odds with the planned directions of the project.

My other concern is a process concern, which may help with the perception that the project is a barely contained anarchy ruled by a Darwinistic approach of "he who writes the code wins".  For example, right now there's a somewhat animated thread in the feature-request forum on the topic of monitoring.  This has played out before in other threads: animated and heated discussions about what technology/projects/code should be implemented, and he who writes the code wins.  While it's obviously necessary to have code written to get features implemented, my concern is less about the "what" and more about the "how" of things making it into the project.  I, like many others, have my favorite projects and tools, which I will defend and debate vigorously.  My concern, and recommendation, is that there be a process for getting code or functionality into LMCE, one that takes into account a requirements analysis for the immediate need, consideration of how it can be extended or repurposed for other needs, and a holistic approach to how it fits into the overall LMCE architecture.  Thom's concerns about duct-taping on code and hacks are valid, and should extend to other modules/frameworks that go into the architecture.

Thom's made comments in the past about the huge size of the LMCE codebase, and all the challenges that have been faced since LMCE came about from PlutoHome.  I've had similar issues at my workplace, re-platforming other groups' "lost turds", where some project had engineered a solution carte blanche, without giving due consideration to existing infrastructure, supportability or lifecycle management.  It's not a fun place to be in, and I expect the devs don't want a repeat of the PlutoHome re-platforming experience some time down the road.  That's why I am suggesting a steering committee and a process for vetting new and current code/functionality, plus an organizational approach that seems to work well...

Maybe some or all of the core devs can take an approach similar to that of the kernel dev team (given that LMCE is in some ways as complex as the kernel!).  I'm suggesting subsystem maintainers... Those devs, deeply knowledgeable about the subsystems under their purview, can shepherd those of us who wish to contribute code or functionality to the project, as they would know all the aspects of those subsystems and can recommend ways to implement new code and functionality without duplicating effort or technology.  This would also give new contributors a contact from whom they could learn (think mentoring), and eventually spread the development load across a broader base of developers, all working towards a common set of goals.  It may also take some of the pressure off Thom and some of the others who have carried the weight of LMCE on their shoulders, freeing them up to look at the bigger architecture and feature sets.

None of this is meant to ruffle feathers or be construed as criticism.  Rather, take it as an indicator of your success, of a job well done in showing people what a domotic system can be capable of.  People want to contribute; the challenge is communicating where the help is most needed, and harnessing people's enthusiasm and skills.

Speaking for myself, I'm not a programmer; I couldn't C++ my way out of a wet paper bag, so I won't be coding any wonderful new features anytime soon.  That doesn't mean I don't plan to contribute to the project.  My day job has me sys-admining Linux systems running on everything from pizza boxes to mainframes, and a variety of hypervisors to boot.  So, I'm going to bring the mentality of someone charged with maintaining a highly available, secure, stable, and supportable hosting environment to any contributions I make to LMCE.  If there are pain points that are taxing the core devs, I'd need to know what they are before I could even hope to help.  I'll post to that effect in a separate thread on the dev forum.

Hopefully this provides some food for thought as LMCE goes through its growing pains approaching 8.10 gold...


Users / Re: Help with Insteon, please! (merkur2k and/or Aviator?)
« on: June 27, 2011, 06:10:48 am »
Thanks for the quick response!

I'm having enough issues with 10.04 that I think I'll blow it away and re-install with 8.10, and then try again with template 1932.  I've tried it with 10.04, and all I get is mangled data.  I'm also having trouble getting 1-wire working, so I'm hoping both will work better on 8.10.

Thanks for your help on this!


Users / Help with Insteon, please! (merkur2k and/or Aviator?)
« on: June 26, 2011, 04:45:52 am »
Hi Folks!

I'm hoping merkur2k and Aviator are monitoring... I've got a 10.04 install running, and I'm trying to migrate my 2412S PLM over from Misterhouse to LMCE for control.  I've looked through Trac tickets 1115 (Insteon PLM work) and 1112 (Insteon PnP detection), and tried implementing both.

I've set up an interface using Template 2103, and I put the PLM detection script from Trac#1112 in /usr/pluto/pnp (which I also symlinked from to, to correspond with the Wiki instructions for Insteon).  So far, I'm not having any joy...

From the logs, I get this...

Code: [Select]
1 06/25/11 21:57:09 /usr/pluto/bin/ 54 (spawning-device) 17376 Dev: 54; Already Running list: 15,16,18,19,29,30,37,21,22,23,26,32,27,51,49,53,
1 06/25/11 21:57:09 /usr/pluto/bin/ 54 (spawning-device) device: 54 ip: localhost cmd_line: Insteon_PLM_DCE
0 06/25/11 21:57:09 54 (spawning-device) Entering 54
========== NEW LOG SECTION ==========
1 06/25/11 21:57:09 54 (spawning-device) Starting... 1
1 06/25/11 21:57:09 54 (spawning-device) Found /usr/pluto/bin/Insteon_PLM_DCE
05 06/25/11 21:57:09.867 Connection for client socket reported NEED RELOAD IP=, device 54 last error 2 <0xb77db6d0>
05 06/25/11 21:57:09.867 The router must be reloaded before this device is fully functional <0xb77db6d0>
05 06/25/11 21:57:09.870 void ClientSocket::Disconnect() on this socket: 0x807e0a0 (m_Socket: 5) <0xb77db6d0>
05 06/25/11 21:57:09.877 Connection for client socket reported NEED RELOAD IP=, device 54 last error 2 <0xb77db6d0>
05 06/25/11 22:03:54.274 Got a reload command from 0  <0xb63d8b70>
05 06/25/11 22:03:54.535 void ClientSocket::Disconnect() on this socket: 0x807e310 (m_Socket: 7) <0xb77db6d0>
Return code: 2
2 06/25/11 22:03:55 54 (spawning-device) Device requests restart... count=1/50 dev=54
Sat Jun 25 22:03:55 EDT 2011 Restart
========== NEW LOG SECTION ==========
1 06/25/11 22:04:05 54 (spawning-device) Starting... 1
1 06/25/11 22:04:07 54 (spawning-device) Found /usr/pluto/bin/Insteon_PLM_DCE
05 06/25/11 22:04:09.027 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
05 06/25/11 22:04:10.028 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
05 06/25/11 22:04:11.029 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
05 06/25/11 22:04:12.031 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
05 06/25/11 22:04:13.033 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
05 06/25/11 22:04:14.034 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
05 06/25/11 22:04:15.039 Connect() failed, Error Code 111 (Connection refused)) <0xb77f36d0>
1 06/25/11 22:34:31 /usr/pluto/bin/ 54 (spawning-device) 14314 Dev: 54; Already Running list: 15,16,18,19,29,30,49,
1 06/25/11 22:34:31 /usr/pluto/bin/ 54 (spawning-device) device: 54 ip: localhost cmd_line: Insteon_PLM_DCE
0 06/25/11 22:34:31 54 (spawning-device) Entering 54
========== NEW LOG SECTION ==========
1 06/25/11 22:34:31 54 (spawning-device) Starting... 1
1 06/25/11 22:34:31 54 (spawning-device) Found /usr/pluto/bin/Insteon_PLM_DCE
05 06/25/11 22:37:58.951 Got a reload command from 0  <0xb63c9b70>
05 06/25/11 22:37:59.632 void ClientSocket::Disconnect() on this socket: 0x99f5310 (m_Socket: 7) <0xb77cc6d0>
Return code: 2
2 06/25/11 22:38:00 54 (spawning-device) Device requests restart... count=1/50 dev=54
Sat Jun 25 22:38:00 EDT 2011 Restart
========== NEW LOG SECTION ==========
1 06/25/11 22:38:09 54 (spawning-device) Starting... 1
1 06/25/11 22:38:09 54 (spawning-device) Found /usr/pluto/bin/Insteon_PLM_DCE
05 06/25/11 22:38:11.036 Connect() failed, Error Code 111 (Connection refused)) <0xb77396d0>

I'm presuming merkur2k has only recently started working on the DCE device for the PLM, given the recent date on the trac ticket.  Could you help me out with this?  I can provide testing info for you...



Users / Upgrade path from 8.10?
« on: March 11, 2011, 03:46:06 am »
Good day folks!

Are there plans for a LinuxMCE upgrade path from 8.10, or will it be a "blow it away and re-install" type of upgrade?

Just curious, as I'd like to get an idea of how much effort I should put into the 8.10 environment I'm presently kicking the tires on, versus the 10.04 environment I plan on setting up.

Thanks for your time!


Feature requests & roadmap / Re: Grocery List
« on: May 10, 2010, 04:27:31 am »
You might want to tie into the whole "Landing Page" thread.  There's some discussion there about using CMS software to manage additional "household" type content.  There's a Recipe module for Drupal here: and there's some discussion on that site as to how to implement a shopping list.  That might give you a starting point...

Hope that helps!


Users / Sambahelper, unix and pluto users, and home dirs.
« on: January 03, 2010, 05:49:00 am »
Hi Folks!

First of all, Happy New Year to all!  All the best to ya for 2010!

I'm plodding along with my setup, testing LinuxMCE in KVM, and trying to understand some things so I can properly merge my existing environment with my future LinuxMCE environment.  One of the things I want to leverage is NIS (Network Information Service) for centralized user management, as I want Unix user logins/homes available.  Here's how NIS and the various user types appear to be configured in LinuxMCE, based on poking around my systems:

UIDs 0-999: root and normal Linux service accounts; generally no shell/login for service accounts.
UIDs 1000-9999: Unix user accounts; shell and login; merged into the NIS passwd maps.
UIDs 10000+: Pluto user accounts; no shell or login; merged into the NIS passwd maps.

So, generally it all makes sense, and it's a normal Linux environment.  The pluto users don't get login capability, and their homes will never fill up with Unix stuff like dotfiles.  Now, here's the problem, or at least what I don't understand.  A user called "sambahelper" gets created on the core with the next available UID in the Unix users range, and it gets pushed into the NIS passwd map.  That same "sambahelper" also gets created on MDs using the first available UID there, which may not be the same as the one on the core or in NIS, as it appears the user is created with the useradd command without the -u switch.

Here's what my install looks like right now...

CORE: /etc/passwd
mkbrown:x:1000:1000:Michael Brown,,,:/home/mkbrown:/bin/bash
sambahelper:x:1001:1001:Pluto Samba Share Helper:/tmp:/bin/false

MD: /etc/passwd
sambahelper:x:1000:1000:Pluto Samba Share Helper:/tmp:/bin/false

NIS: (ypcat passwd)
mkbrown:x:1000:1000:Michael Brown,,,:/home/mkbrown:/bin/bash
sambahelper:x:1001:1001:Pluto Samba Share Helper:/tmp:/bin/false

So, when I ssh into an MD as mkbrown, I end up identified as sambahelper (note the username in the prompt and in the uid):
sambahelper@moon31:~$ id
uid=1000(sambahelper) gid=1000(mkbrown) groups=4(adm),20(dialout),24(cdrom),46(plugdev),112(lpadmin),119(admin),120(sambashare),1000(mkbrown)

Would it not be better to have this "sambahelper" user set to a lower UID, like 999, so it's not pushed into NIS?  The UID collision shown above could cause a lot of problems with file permissions, but I'm not familiar with the role of the "sambahelper" user.  Should "sambahelper" even be in NIS?  It seems more like a service account to me.  Should I file a bug report to set it to something like UID 999?

I also noticed that the mythtv user does not get the same UID on the MD as on the core, yet shares the same home directory.  Is that a problem?  Should I file a bug report?

Now, a question about home directories.  The pluto user homes (user_#) are obviously for Samba shares for Windows users, and are primarily for media storage (especially for personal media, when you enter your PIN into an Orbiter/MD).  If I create Unix users and want to avoid duplication, could I add my Unix user to the pluto user's group (or vice versa) and symlink the various media folders to the other home directory?  Would this break anything?  I know Thom doesn't want us fighting the system, so I'd like to figure out how to work with it without duplicating effort or damaging anything!

Thanks for your time!

