Author Topic: Video from IP Cameras (Read 2056 times)

fido

  • Regular Poster
  • **
  • Posts: 27
    • View Profile
Video from IP Cameras
« on: March 30, 2009, 10:33:51 pm »
Hi, one of the tasks I am interested in looking at is updating the IP camera support from static refreshed JPEGs to full streaming video (ideally up to 30 fps, if supported by the camera).

I am a complete newbie to the codebase, but I am a software engineer by trade, although it's been a long while since I did anything in C++.

I have no idea what's involved in achieving this functionality, and no understanding of why it hasn't been implemented to date... I guess there are very good reasons for it, though.

That said, I do think it is the type of task that would help me get my teeth into the code.

So I guess I am trying to start a conversation on this task and ask for any pointers as to the areas of the codebase I should be looking at. Have there been any attempts at this in the past? Is there a deal breaker that simply makes this unachievable?

Any thoughts would be appreciated.

Thanks


tschak909

  • LinuxMCE God
  • ****
  • Posts: 5501
  • DOES work for LinuxMCE.
    • View Profile
Re: Video from IP Cameras
« Reply #1 on: March 30, 2009, 11:25:44 pm »
This becomes an interesting task because of the design of the system.

Please understand that the architecture of this system is _MASSIVE_ and you'll need to spend quite a bit of lead time to truly understand what is going on inside.

In the case of "video", we implement a DCE command called Get Video Frame. This command is fired toward the target device whenever a DesignObjType of Broadcast Video is present on an active screen. Get Video Frame sends back a JPEG image, along with its length, and the Orbiter processes this into a complete frame.
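A minimal sketch of that request/response shape; FrameResponse and GetVideoFrame here are invented names for illustration, not the real DCE classes or command implementation:

```cpp
// Illustrative sketch only: FrameResponse and GetVideoFrame are invented
// names, not the real DCE command implementation.
#include <cstddef>
#include <cstdint>
#include <vector>

struct FrameResponse {
    std::vector<std::uint8_t> jpeg;  // raw JPEG bytes for one frame
    std::size_t length;              // explicit length, sent alongside the image
};

// A camera device would answer Get Video Frame with its latest snapshot
// plus the byte count, which Orbiter then renders as a complete frame.
FrameResponse GetVideoFrame(const std::vector<std::uint8_t>& latestSnapshot) {
    return FrameResponse{latestSnapshot, latestSnapshot.size()};
}
```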

You must understand that this system is highly retargetable across devices. There is more than just the on-screen Orbiter displayed on your TV: there are cell phones, tablets, PDAs, IP phone displays, and even web pages rendering a proxy-generated Orbiter, so a lowest-common-denominator approach was taken.

Any approach will have to be wired into Orbiter as a starting point, and will possibly require a new DCE message type that can encapsulate a stream, to be decoded by Orbiter.

In short, you've got a lot of digging to do. :) Spend some time studying the code base, as well as our Developer's Guide, and please understand that Pluto thought of just about everything, so chances are you will not need to reinvent the wheel.

-Thom

colinjones

  • Alumni
  • LinuxMCE God
  • *
  • Posts: 3003
    • View Profile
Re: Video from IP Cameras
« Reply #2 on: March 30, 2009, 11:48:19 pm »
Like Thom said, big system! I would suggest getting your head around the DCE concept to begin with. Thom mentioned it, but it isn't necessarily obvious what this is at first glance. There is a basic diagram on the wiki somewhere. But essentially, LMCE all hangs off a messaging architecture.

The DCERouter is the "hub" if you like, that all messages (commands, events, etc) are switched through. Practically all functionality is implemented as "DCE Devices" (that you can see in the web admin). Each "device" is a software "module" that can open TCP connections with the DCERouter, and use them to send/receive DCE commands and events with other DCE Devices. A device isn't necessarily hardware, for instance the functionality that actually plays audio and video media is the Xine Player device. This is a piece of software that can receive DCE commands from the rest of LMCE and uses the xine libraries to play the media.
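The hub-and-spoke routing described above could be sketched like this. MiniRouter and its handler signature are invented for illustration; the real DCERouter speaks a much richer protocol over TCP connections:

```cpp
// Conceptual sketch of DCE-style message switching. MiniRouter is
// hypothetical; it only illustrates the "everything goes through the hub"
// idea, not the real DCERouter API.
#include <functional>
#include <map>
#include <string>
#include <utility>

class MiniRouter {
    std::map<std::string, std::function<std::string(const std::string&)>> devices_;
public:
    // Each "device" (Xine Player, Orbiter, ...) registers a command handler.
    void RegisterDevice(const std::string& name,
                        std::function<std::string(const std::string&)> handler) {
        devices_[name] = std::move(handler);
    }

    // Any device can address any other by name; the router switches the message.
    std::string Send(const std::string& target, const std::string& command) {
        auto it = devices_.find(target);
        return it == devices_.end() ? "UNKNOWN DEVICE" : it->second(command);
    }
};
```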

The OnScreen Orbiter is also a DCE device, and can communicate with the others as well. So it uses the Media Plugin (a house-wide embedded DCE device within the DCERouter) to create a house-wide media stream object, that can then be used when sending a command to a Xine Player device, so that Xine Player plays that stream. As this is a house-wide object, you can then instruct the media stream to be moved to another Xine Player on another Media Director, and playback will move and resume in the new entertainment area. This is all done with the DCE system.

Many devices are simply conventional GPL applications/packages that have been wrapped with a DCE communication layer. In this way, the LMCE system can implement almost "virtual" commands like Power ON/OFF and send them to any area, or to the whole house. LMCE doesn't actually need to know how to turn the power on or off on a specific amplifier, TV, or lamp; when the DCE Power ON command arrives at the DCE device responsible for controlling that piece of equipment, the device receives and interprets the command and takes the appropriate action to execute it. Thus LMCE is a "framework" system as well. So for your application, you would want to learn how to build a DCE device around the functionality you want to implement, and determine which DCE commands are appropriate for you to implement.
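The wrapper idea above can be sketched as follows; LampWrapper and the command strings are hypothetical, standing in for whatever device-specific protocol a real wrapper would drive:

```cpp
// Sketch of the "wrapper" idea: a generic DCE Power ON/OFF command arrives,
// and the device-specific wrapper translates it into whatever its hardware
// needs. LampWrapper and the command strings are invented for illustration.
#include <string>

class LampWrapper {
    bool on_ = false;
public:
    // Interpret the generic command and take the device-specific action
    // (here just a flag; a real wrapper might emit an X10 or Z-Wave packet).
    void HandleCommand(const std::string& cmd) {
        if (cmd == "POWER ON")
            on_ = true;
        else if (cmd == "POWER OFF")
            on_ = false;
    }
    bool IsOn() const { return on_; }
};
```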

hth!

digilifellc

  • Regular Poster
  • **
  • Posts: 31
  • Technology. Simple. Hidden.
    • View Profile
    • Digital Lifestyles, LLC
Re: Video from IP Cameras
« Reply #3 on: March 31, 2009, 03:41:20 am »
Realizing that I am amongst coding legends and gods, I bow humbly to ask a couple of simple questions about this:

Does streaming TV shows and movies use the same methods colinjones described as IP camera streaming would? Could something that mimics TV/movie streaming be implemented for the IP cameras, to capture more than a single frame at a time? Possibly concurrently, so as not to exclude cell phones?

If it's not possible or practical, I digress.

Regards
What stays the same for every project no matter who is the client?

                                      The Home!

tschak909

  • LinuxMCE God
  • ****
  • Posts: 5501
  • DOES work for LinuxMCE.
    • View Profile
Re: Video from IP Cameras
« Reply #4 on: March 31, 2009, 03:45:28 am »
No. Remember, this system needs to be able to send to non-on-screen Orbiters.

Orbiter has to be extended to handle a new message type, and the Broadcast Video DesignObj type extended to introspect for a new streaming command. If that command is not available, single Get Video Frame commands should be issued as before.
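The graceful fallback described here might look like this in outline; the function and the streaming command name are illustrative only, since no such command exists yet:

```cpp
// Sketch of the introspect-then-fall-back check: use the streaming command
// if the target device advertises it, otherwise poll single frames as
// before. ChooseVideoCommand and "Get Video Stream" are hypothetical names.
#include <set>
#include <string>

std::string ChooseVideoCommand(const std::set<std::string>& deviceCommands) {
    if (deviceCommands.count("Get Video Stream"))
        return "Get Video Stream";  // new streaming path, if implemented
    return "Get Video Frame";       // existing single-frame fallback
}
```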

-Thom

jondecker76

  • Alumni
  • wants to work for LinuxMCE
  • *
  • Posts: 763
    • View Profile
Re: Video from IP Cameras
« Reply #5 on: March 31, 2009, 07:59:31 am »
I have also been looking into this (among other Motion wrapper and IP camera improvements). I think sticking with MJPEG would be the way to go initially, as it is very easy to decode in software and it's supported by many cameras. I think we can add another command, Get_Video_Stream (a complement to Get_Video_Frame). The implementation of Get_Video_Stream at the device level would intercept the MJPEG stream from the camera, strip its boundary markers (as there really isn't a tight standard, and it is implemented differently from manufacturer to manufacturer), and resend the frames with a standardized boundary marker. Then, as Thom suggested, in Orbiter we can either extend the Broadcast Video designobj or even make a separate MJPEG designobj. Then, to make it work across the various platforms (pseudocode):

if (implements_get_video_stream && is_onscreen_orbiter) { use_get_video_stream } else { use_get_video_frame }

(This would mean that any device that implements Get_Video_Stream would also have to implement Get_Video_Frame to provide this compatibility.)
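The boundary-normalization step could be sketched like this; the "--lmce-frame" marker and the function name are invented for illustration, since no standard marker has been agreed on:

```cpp
// Hedged sketch of boundary normalization: rewrite a camera's
// vendor-specific multipart boundary to one standard marker, so Orbiter
// only ever has to parse a single format. "--lmce-frame" is hypothetical.
#include <string>

std::string NormalizeBoundaries(const std::string& stream,
                                const std::string& vendorBoundary,
                                const std::string& standardBoundary = "--lmce-frame") {
    std::string out = stream;
    std::string::size_type pos = 0;
    while ((pos = out.find(vendorBoundary, pos)) != std::string::npos) {
        out.replace(pos, vendorBoundary.size(), standardBoundary);
        pos += standardBoundary.size();  // skip past the marker we just wrote
    }
    return out;
}
```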

I already have a working MJPEG decoder in Ruby for one of my IP cams, as my camera didn't have a "snapshot to static JPG" function; I parse the MJPEG stream in real time to construct a JPG image, which is returned via Get_Video_Frame. It works quite well, too.
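The frame-extraction approach described above (the original is in Ruby) can be sketched in C++: scan the MJPEG byte stream for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers and cut out one complete frame. This is a simplification; a production parser should also honor the multipart boundaries and any Content-Length headers:

```cpp
// Sketch only: pull the first complete JPEG frame out of an MJPEG buffer
// by its SOI (FF D8) and EOI (FF D9) markers.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> ExtractFirstJpeg(const std::vector<std::uint8_t>& stream) {
    std::size_t start = 0;
    bool haveStart = false;
    for (std::size_t i = 0; i + 1 < stream.size(); ++i) {
        if (!haveStart && stream[i] == 0xFF && stream[i + 1] == 0xD8) {
            start = i;  // found start-of-image
            haveStart = true;
        } else if (haveStart && stream[i] == 0xFF && stream[i + 1] == 0xD9) {
            // found end-of-image; return the complete frame
            return std::vector<std::uint8_t>(stream.begin() + start,
                                             stream.begin() + i + 2);
        }
    }
    return {};  // no complete frame in this buffer
}
```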

On a side note, if we provide such an implementation, the Motion wrapper could be used directly to serve the MJPEG stream.

tschak909

  • LinuxMCE God
  • ****
  • Posts: 5501
  • DOES work for LinuxMCE.
    • View Profile
Re: Video from IP Cameras
« Reply #6 on: March 31, 2009, 08:19:40 am »
Do not make another DesignObjType; there is no reason to. The issue here is doing on-the-fly transcoding to the appropriate format. There are a LOT of potential stream formats, and I see a clusterfuck in the making here.

-Thom

jondecker76

  • Alumni
  • wants to work for LinuxMCE
  • *
  • Posts: 763
    • View Profile
Re: Video from IP Cameras
« Reply #7 on: March 31, 2009, 09:30:35 am »
On the same note, it would be a terrible waste (in terms of processor and memory usage) to transcode from MJPEG or any other format on the fly to something the Broadcast Video designobj can use. Think about the overhead for someone who has 8 cameras: transcode them all on the fly, on top of everything else the core has to do? I think not. Not to mention that people with the in-depth knowledge needed for video transcoding are not that easy to find. Another downside is that latency would be introduced as a side effect of the transcoding. This is where a new designobj type to handle MJPEG streams makes a lot of sense: the stream is already there, and there is almost no overhead to decode it.
« Last Edit: March 31, 2009, 09:32:40 am by jondecker76 »

tschak909

  • LinuxMCE God
  • ****
  • Posts: 5501
  • DOES work for LinuxMCE.
    • View Profile
Re: Video from IP Cameras
« Reply #8 on: March 31, 2009, 09:46:03 am »
No, I'm not talking about transcoding to something the Broadcast Video designobj type can use...

I'm talking about extending the Broadcast Video designobj type to look for the existence of a given command on the target device to start a stream; if it's there, use it, otherwise send Get Video Frame commands.

But this is all contingent on adding support for different cameras dealing in different stream formats, and we would need to augment Orbiter to handle them. Do we really want to go down this road?

-Thom

bulek

  • Administrator
  • wants to work for LinuxMCE
  • *****
  • Posts: 890
  • Living with LMCE
    • View Profile
Re: Video from IP Cameras
« Reply #9 on: March 31, 2009, 09:57:43 am »
Hi,

A long time ago I talked about this with Chris (a Pluto developer). The idea was to extend Orbiter to be able to show MJPEG streams as well. Motion by itself can generate an MJPEG stream (on a certain URL), so the main changes are needed on the Orbiter side, to show an MJPEG stream that could come either from the Motion wrapper, from an IP camera directly, or even from public traffic cams.

But for a start, the Motion wrapper is currently really inefficient (it uses the deprecated Motion API): it sends a signal to Motion, Motion then saves snapshots to disk (for all cameras, if I'm not mistaken), and the wrapper then reads the snapshot from disk and returns the content as a video frame. So porting the Motion wrapper to the newer API could also enhance the current experience.

Regards,

Bulek.

fido

  • Regular Poster
  • **
  • Posts: 27
    • View Profile
Re: Video from IP Cameras
« Reply #10 on: March 31, 2009, 02:42:04 pm »
Thanks, everyone, for all your comments. I am in the process of reviewing in the code all the various objects you are referring to. As soon as I have a bit more understanding of these, I will post back.

In general, though, I think you have all outlined a couple of clear directions to proceed. I now need to get some grounding!

colinjones

  • Alumni
  • LinuxMCE God
  • *
  • Posts: 3003
    • View Profile
Re: Video from IP Cameras
« Reply #11 on: March 31, 2009, 03:07:18 pm »

Good hunting! And welcome!

syphr42

  • Making baby steps
  • Posts: 3
    • View Profile
Re: Video from IP Cameras
« Reply #12 on: April 04, 2009, 04:39:47 pm »
I know you are talking about IP cameras, but you may want to look at what was done here: http://forum.linuxmce.org/index.php?topic=7799.0. It looks like some work was done to either improve the Motion wrapper or create something new for V4L analog capture to get 25 FPS. Not sure if this helps, but it can't hurt.