Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - tschak909

Pages: [1] 2 3 ... 370
Installation issues / Re: 1604 install
« on: July 30, 2017, 08:35:07 pm »
It's not. There are still some definite issues, especially in the area of diskless media director bootstrapping. I haven't been able to work on it for the last two months, as I've had to concentrate on other matters, but I will be back soon.

We really need people to jump in and help fix issues found with 1604 and to get it firmed up.


Installation issues / Re: 1604 install
« on: July 30, 2017, 06:27:25 am »
We definitely need help to try and resolve this issue. I have had to concentrate on other things, so I haven't been able to work on fixing 16.04 at present.


Installation issues / Re: Upgrading from 10.04 to 16.04
« on: July 30, 2017, 06:26:30 am »
You could try upgrading to 14.04 by changing the package repositories to trusty in /etc/apt/sources.list, then running apt-get update and apt-get dist-upgrade.

I would suggest backing up the system disk before doing this.

16.04 is still being debugged, so it shouldn't be used at present.
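As a sketch, the repository switch amounts to the following; note that the 'lucid' to 'trusty' substitution assumes a stock 10.04 sources.list using Ubuntu codenames, so check your file first:

```shell
# Sketch only -- back up before editing, as suggested above.
cp /etc/apt/sources.list /etc/apt/sources.list.bak
# Point every 10.04 (lucid) repository line at 14.04 (trusty).
sed -i 's/lucid/trusty/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade
```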


Users / Added: Support for creating ZFS storage pools/filesystems.
« on: May 21, 2017, 04:47:38 am »
Hi all,

I've recently added support for Oracle's ZFS filesystem to LinuxMCE. The support is integrated in the same place as the RAID setup in the web admin, and is selected as another type of RAID template. ZFS offers a wide range of benefits over the traditional md software RAID that has been present with LinuxMCE since the beginning, namely:

* The underlying block storage system and filesystem know about each other, and keep track of what's actually stored. This means, no lengthy array reconstruction times!
* The filesystem natively supports compression to increase disk space.
* The filesystem supports snapshotting for backing up, restoring, or transferring pool contents.
* For RAID-Z levels, the underlying parity information is kept with each block, rather than with each stripe, avoiding the RAID parity hole problem.
* In addition to modes that correspond to RAID-0 (pooling), RAID-1 (mirroring), and RAID-5 (RAID-Z1), there are modes that provide double parity RAID (akin to RAID-6) and triple parity RAID (in essence, RAID-7).
* The ability to mix and match all of these modes, together, in various configurations to create storage pools that best suit performance and reliability requirements.

It's not all roses; there are some drawbacks, most notably fragmentation due to the use of copy-on-write for everything. ZFS mitigates this using very clever algorithms, so in practice with media storage this should be of minor consequence. I will provide a way to defragment the resulting pools via the web admin when I can.

As before with traditional RAID, you select your disks and select Create Array to create the resulting storage pools. Pluto Storage Devices will immediately appropriate the storage and mark it for use (if "use automatically" is selected).
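For the curious, here is roughly what creating a RAID-Z1 pool with a hot spare amounts to under the hood; the pool name and device paths are assumptions for illustration, since the web admin handles the real naming:

```shell
# Create a single-parity (RAID-5 like) pool from three disks.
zpool create LMCEStorage raidz1 /dev/sdb /dev/sdc /dev/sdd
# Optionally attach a hot spare.
zpool add LMCEStorage spare /dev/sde
# Turn on native compression for extra disk space.
zfs set compression=lz4 LMCEStorage
# Verify the pool came up healthy.
zpool status LMCEStorage
```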

What versions?
Currently, Trusty (14.04) and Xenial (16.04) and later have this functionality. I had done the necessary work for Precise (12.04), but for some reason it's not installing correctly (due to a sysv-rc version mismatch, which is VERY strange!), so the linked dependency on the zfs tools from the lmce-core package was removed until I can determine what the problem is.

The following ZFS Pool types are available:

* ZFS Pool - Akin to RAID-0, but with blocks dynamically striped across the device for better overall performance.
* ZFS Mirror - RAID-1 like functionality, again with dynamic striping
* ZFS Raid Z1 - Single Parity RAID (RAID-5 like)
* ZFS Raid Z2 - Double Parity RAID (RAID-6 like)
* ZFS Raid Z3 - Triple Parity RAID (RAID-7 like)

What's done:

* Array creation (with or without hot spares)
* Deletion
* Integration into Pluto Storage Devices

What's to be done:

* Array growing (there will be no deletion of members from pools, as this isn't possible)
* Hybrid pools (mixtures of the above storage pools, needs a lot of new UI in the web admin)
* Snapshotting (need to think of a proper way to present this to the user)
* Better status reporting (hooking further into zed)
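For those who want to experiment before the UI lands, the underlying snapshot plumbing a future UI would wrap looks roughly like this; the pool, snapshot, and remote names are hypothetical:

```shell
# Take a point-in-time snapshot of the pool.
zfs snapshot LMCEStorage@2017-05-21
# List existing snapshots.
zfs list -t snapshot
# Roll the pool back to that point.
zfs rollback LMCEStorage@2017-05-21
# Transfer a snapshot's contents to another machine.
zfs send LMCEStorage@2017-05-21 | ssh backuphost zfs recv backup/LMCEStorage
```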

Getting it:

New installs of 14.04 and later include the necessary ZFS support. If you are already running 14.04, then an apt-get upgrade should pull in the new ZFS packages, as well as the new device templates for the ZFS storage pools. After this point, you should be able to create new storage pools.

A note:

You may want to wait until posde pushes my latest updates (as of 2017-05-21) before trying, as I did a LOT of bugfixing over the last day. I will post in this thread when those changes have been pushed to the repo.


Installation issues / Re: Installing Diskless MDs
« on: April 02, 2017, 05:54:43 pm »
Looks like you had a kernel mismatch happen. The easiest ways to resolve this are:

* Delete the MD in the web admin, and let it reinstall.
* symlink the default initrd and vmlinuz image to your media director's image.
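The second option can be sketched like this; the /tftpboot layout, the "default" image directory, and the MD directory number here are assumptions and will differ per install:

```shell
# Hypothetical paths: 'default' holds the stock kernel/initrd images,
# '100' is the media director's boot directory.
cd /tftpboot/100
ln -sf ../default/vmlinuz vmlinuz
ln -sf ../default/initrd.img initrd.img
# Confirm the links point where expected.
ls -l vmlinuz initrd.img
```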


Developers / Re: VLC Player status - multi-room sync checked in
« on: March 27, 2017, 08:29:21 pm »
I have checked in the code and prepared a merge request to master, but polly needs to finish working on GitLab before it can go through. As soon as it goes through, and we get a package, I'll make an announcement that it can be selected.


Developers / VLC Player status - multi-room sync checked in
« on: March 27, 2017, 09:42:01 am »
It's almost there. While the video looks to be very well in sync, I did not record the audio, because there is approximately a 40 to 80 millisecond difference between media directors, which must still be addressed, possibly through a processing delay. But as you can see, it works, even with DVD media (and menus!)

With this feature in place, VLC Player is ready for beta testing.

Installation issues / Re: update problems/mistakes made(solved)
« on: March 13, 2017, 11:50:19 pm »
We are trying as hard as we can to keep LMCE going; there are only a few of us who can work on it in our free time. If you know other interested hackers/programmers (changed by posde), it sure would help. :)


Users / Re: biosdevname causes issues during diskless boot.
« on: February 27, 2017, 07:23:07 pm »
Yup. That seems to line up...


Installation issues / Re: PnP Install Broken for Phoenix USB Solo Mic
« on: February 27, 2017, 04:08:49 am »
Okay, I'll get one of my microphones back out and diagnose/debug.

Thanks for what you've done, so far. :)


Users / Re: biosdevname causes issues during diskless boot.
« on: February 27, 2017, 12:56:30 am »
This is especially relevant for installs done via Ubuntu Server (in my case, trusty i386).

If you read the page I linked, it shows how the ethernet device names are changed, depending on how they are located in the box.

LOM (Lan-on-Motherboard, devices built onto the motherboard) devices get a device name like emX.
PCI devices get a device name like ethX. (other distributions actually go much further than this, and mix in manufacturer specific names a la BSD).

If you have a LOM device, as I do on a few machines, then the ethernet device is em1 (not em0), and somewhere in the initrd a kernel panic happens. (You see this after the IP-Config messages.)

Removing the biosdevname package and rebuilding the initrd solves the problem.
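In shell terms, that fix amounts to the following (run as root); this is a sketch of the workaround described above:

```shell
# Drop the BIOS-based renaming scripts.
apt-get remove -y biosdevname
# Rebuild the initrd without them.
update-initramfs -u
```

Alternatively, booting with biosdevname=0 on the kernel command line disables the renaming without removing the package.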


Users / biosdevname causes issues during diskless boot.
« on: February 26, 2017, 01:26:30 am »
A ticket for this is here:

When LinuxMCE is installed via trusty-i386 server, biosdevname is installed. This is a set of initrd scripts that attempt to name various devices, including ethernet devices, by their relative hardware position as reported by the BIOS.
The facility is described here:
In our case, any embedded (LOM) ethernet devices get called emX instead of ethX, which blows up our initrd, and causes a kernel panic shortly after IP-Config successfully completes.

What I suggest is a two-pronged approach to fixing our initrd:

(1) Remove biosdevname during the LMCE install, with a corresponding initrd update, for the short term.
(2) Do more testing and auditing of biosdevname to see how consistent the naming is, and adapt our scripts accordingly, for the long term.

Users / Re: Diskless Workstation installation boot and ACPI=on or off
« on: February 25, 2017, 07:47:15 pm »
Does armhf use the default diskless PXE kernel?


Users / Diskless Workstation installation boot and ACPI=on or off
« on: February 25, 2017, 10:06:36 am »
Hello everyone,

acpi=off was added to the Diskless default boot, during the early days of Pluto 2.0, because of hardware that contained faulty DSDT tables, which did not properly initialize the I/O controller hardware. Since the BIOS in these early machines (circa 2004-2007) initialized the I/O controllers and embedded hardware to a point where the linux kernel could work around the bugs, it was a way to get the non-working machines to boot properly.

With the decreased use of legacy BIOS in the x86 world, and the emphasis on decreasing POST and boot times, firmware engineers moved the initialization of critical I/O subsystems to the operating system, utilizing ACPI to discover, enumerate, and provide the needed data to bring up the I/O controllers and other embedded devices. If ACPI is not turned on, these devices are typically left in a non-working state, with consequences ranging from devices not being discovered to kernel panics while the kernel brings itself up.

It would be beneficial to hear from everyone about the effects of turning this attribute on or off while booting the diskless workstation: does it stop kernel panics? Do the NICs initialize properly?
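For reference, the flag lives on the kernel command line of the PXE boot entry; a hypothetical pxelinux.cfg entry looks something like the following, where the nfsroot path, server IP, and MD number are assumptions for illustration, and the acpi=off at the end is the attribute in question:

```
# Hypothetical /tftpboot/pxelinux.cfg entry for a media director.
DEFAULT linuxmce
LABEL linuxmce
  KERNEL 100/vmlinuz
  APPEND initrd=100/initrd.img root=/dev/nfs nfsroot=192.168.80.1:/usr/pluto/diskless/100 ip=dhcp acpi=off
```

Deleting acpi=off from the APPEND line (and re-booting the MD) is how you would test with ACPI enabled.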

There is a ticket open for this issue:

Please let us know,

Installation issues / Re: PnP Install Broken for Phoenix USB Solo Mic
« on: February 25, 2017, 09:36:07 am »
try it.

