If audio is your primary concern, I'd suggest the Squeezebox. From what I've seen, they work excellently with LinuxMCE and on their own.
http://wiki.linuxmce.org/index.php/Use_network_audio_players_for_a_whole-house_music_solution
They also handle syncing automatically on their own. However, if they're all just used as output devices with LinuxMCE, they might just be speakers for the Media_Player and suffer from the same sync issue. I do know that, using the SlimServer software, they work like a charm:
http://wiki.linuxmce.org/index.php/Upgrading_SlimServer
jhammond
In bigger installations we use Squeezeboxes and CAT5 matrix switches to achieve this; 4x8 or 8x16 is our most-used combination. This lets us route any or all of 4 or 8 inputs to any of 8 or 16 outputs. Each output then drives a zone of a multi-zone amp, or localised in-ceiling amplifiers in each room. We use either RS-232 or IP-controllable matrixes. This allows us to route one source to all outputs/zones, and it gives us perfect sync too. Conversely, we can control the matrix so that, say, several zones on the ground floor receive the same source while each zone upstairs receives a separate source.
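For the IP-controllable units, routing typically comes down to sending short ASCII commands over a TCP socket. Here's a minimal sketch in Python; note that the "SET OUT x IN y" syntax, the host address and the port are all made-up examples, since every matrix vendor defines its own RS-232/IP command set:

```python
import socket

def build_route_commands(output_to_input):
    """Build one routing command per output zone.

    The "SET OUT <out> IN <in>" syntax is purely illustrative; check
    your matrix switch's manual for its real command protocol.
    """
    return [f"SET OUT {out} IN {inp}\r\n"
            for out, inp in sorted(output_to_input.items())]

def send_route_commands(host, output_to_input, port=23):
    """Push the commands to the switch over a plain TCP connection."""
    with socket.create_connection((host, port), timeout=5) as sock:
        for cmd in build_route_commands(output_to_input):
            sock.sendall(cmd.encode("ascii"))

# e.g. route source 1 to all four ground-floor zones
# (hypothetical address for the matrix switch):
# send_route_commands("192.168.80.50", {1: 1, 2: 1, 3: 1, 4: 1})
```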
Matrixes are expensive, but they provide an enormous amount of flexibility and keep video/audio perfectly in sync. We'd rather do all of this internally, and digitally, inside the system... but for now that seems a little out of reach.
In some installs we also use matrixes for switching video sources... but that's another story.
All the best
Andrew
Have you guys thought about the suggestion I made of revamping the xine/DCE into two separate xine DCE devices - client and server? (See http://forum.linuxmce.org/index.php?topic=7657.0.) This would increase the flexibility of the system and make it more modular, but most importantly it would ensure that all video and audio stayed perfectly in sync, without needing to go through that process of starting from the beginning and jumping to the right spot each time you split/moved the stream....
We're always thinking about streaming media and keeping it in sync ;-)
...and yes, I have thought about that discussion. But I am still of the belief that, and this is particularly true of video, without special hardware it will be near impossible to achieve full quality and sync that scales from low-res up to full 1080p (Blu-ray quality) playback. Audio is definitely more achievable overall, as even the highest-quality streams are not stressful to any of the software we already have access to... so I guess it would be audio where this might be worth some effort.
However, for now we can get 'off the shelf' audio/video switching, with full control from inside the system, that delivers full quality for both audio and video... and the cost scales nicely both for forum members who want to build down to a tight budget and for those with a more 'money is less of an object' approach.
All the best
Andrew
Andrew - I understand your concern about scalability. At the same time, a few points...
Whatever else is true, the method I propose would keep all media (audio/video/HD video) vastly closer to in-sync than the current method, which relies on two or more streams happening to coincide because both were started from the one-second timestamp markers - especially with the varying buffer sizes used in the playback hardware/software. 'Coincidental' is somewhat the key word here.
The other really critical point is that I am talking about a fundamental change in the concept of how you deliver media. We currently "stream" using a reliable TCP session. The alternative approach is "real time" communication, and that is often misunderstood. Generally, people use a "reliable" TCP session and deliver media into a remote buffer, consuming the data non-real-time, for a specific purpose: to combat packet loss and variations in available bandwidth over a potentially very long, routed network path where these parameters are not guaranteed... e.g. the Internet. The buffer deals with bandwidth variation, and TCP retransmission deals with packet loss.
Real-time communication usually uses UDP and zero buffering, for a simple reason: the traffic is delivered "just in time" to be consumed, i.e. in real time. There is no point suffering the extra overhead of "reliable" communication through a TCP connection, because if a packet is lost it becomes useless anyway - its time has been and gone - so these packets simply get dropped. There is no point using a buffer, because the data is consumed immediately, so the buffer would be permanently starved; worse, the buffer introduces unnecessary latency in the playback.
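The "just in time" idea can be sketched in a few lines of Python: pace each UDP datagram out at exactly the rate the audio is consumed, with no send buffer and no retransmission. This is only an illustrative sketch, not anything LinuxMCE does today; the function name and defaults are mine.

```python
import socket
import time

def stream_realtime(chunks, addr, bytes_per_second=176_400):
    """Send each chunk of PCM 'just in time' over UDP.

    176,400 bytes/s is uncompressed CD audio (44.1 kHz * 16-bit * stereo).
    A lost datagram is simply gone: its play-out moment has passed, so
    retransmitting it would be pointless - exactly the real-time model.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for chunk in chunks:
            sock.sendto(chunk, addr)
            # sleep for exactly the audio duration this chunk represents
            time.sleep(len(chunk) / bytes_per_second)
    finally:
        sock.close()
```

The `addr` could equally be a multicast group (the 239.x.x.x range is reserved for local-scope multicast), so one transmitted stream reaches every media director on the LAN simultaneously.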
The concepts involved in real-time communication are critical to forms of real-time traffic such as digital voice and video circuits: when these are interactive (telephone calls, video conferences), any latency beyond the simple propagation time through the length of the circuit is intolerable. Consequently, the technologies used are QoS/CoS/ToS marking and enforcement, prioritisation, low-latency queuing, strict priority queues, allocated queue bandwidth, UDP and other real-time protocols, etc. And obviously all of this works very well indeed. In fact, an analogy in video delivery would be terrestrial TV broadcasts... we don't worry about transmission time or retransmits on the RF broadcast, do we?
It is "real time" in the truest sense!
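For what it's worth, the ToS marking piece of that list is a one-liner at the socket level. A sketch, assuming a Linux-style sockets API, that tags a UDP socket with the DSCP "Expedited Forwarding" class (the class conventionally used for voice):

```python
import socket

# DSCP "Expedited Forwarding" is codepoint 46; it occupies the top six
# bits of the IP TOS byte, so the byte written to the socket is 46 << 2.
DSCP_EF = 46 << 2   # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

# Any QoS-aware switch/router on the path can now place this socket's
# datagrams into its low-latency / strict-priority queue.
```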
But is all that necessary here? No! Remember, those technologies are trying to make real-time audio/video streams reliable over very long distances, through very many network segments, routers, etc... sometimes even over the vagaries of the Internet, and they still achieve this reasonably well. We are talking about a single, local, layer-2, switched network: directly connected, with a single subnet segment.
We don't need QoS, queuing, prioritisation, etc. We certainly don't need buffers when sending, say, a broadcast/multicast UDP real-time stream. None of this is even an issue until the tx rings of the NIC start to experience network congestion, which with real-time streams, even on 100M Ethernet, will take some doing (many simultaneous, different streams). And we need to remember that under the same network conditions the current approach wouldn't work either, no matter how big the buffer... indeed, with the additional TCP overhead and retransmits, you get even more congestion.
What is the upshot? Well, transmitting a real-time stream on the internal network, to be played in real time without buffers, means that 1) there is sub-millisecond propagation latency, as it is a local, switched network; 2) the serialisation latency is identical for all recipients; 3) there is no additional latency deliberately introduced by buffering, yet delivery is still completely reliable, because it is a local, switched network, right up until the network is saturated/congested, at which point our current approach would also fail! Result: even high-bandwidth video would always be in sync across multiple MDs to within a handful of milliseconds, compared with in sync to within a second or so!!
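Those latency claims are easy to sanity-check. The worst-case serialisation delay of one maximum-size Ethernet frame at 100 Mbit/s:

```python
frame_bits = 1500 * 8        # a maximum-size Ethernet payload, in bits
link_bps = 100_000_000       # 100M Ethernet

serialisation_us = frame_bits / link_bps * 1_000_000
print(serialisation_us)      # ~120 microseconds per frame, per hop
```

Even across a few switch hops, total delivery skew stays well under a millisecond, versus the roughly one-second granularity of the timestamp-based start method.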
The option is definitely there, and it is far from unconventional in the wider technology landscape... it just requires us not to be scared of real-time, unbuffered media streams on a local LAN. BTW, I'm not saying the communication necessarily needs to be over UDP. In a local LAN environment like ours, TCP effectively behaves almost like UDP for real-time traffic anyway... we just aren't using the reliability/retransmit features of the protocol. Only the ACKs remain as a difference, and there are plenty of real-time technologies out there that still use TCP without being overly concerned about ACK latency, particularly on a LAN.
Is this not something we should seriously be discussing, if the xine libraries have this ability? Or are we only put off by the unfamiliar real-time territory?