1. There needs to be a kernel module that can expose all 16 video ports to the system as /dev/videoX. If this can't be done, game over. Stop. Do Not Pass Go. Do not collect $200.
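A quick sanity check for this first requirement is simply counting the /dev/videoX nodes the kernel module exposes. A minimal sketch (the helper name is mine, not anything from LMCE; the real check is just a glob over /dev):

```python
import glob
import os

def list_video_nodes(pattern="/dev/video*"):
    """Return the video device nodes currently exposed by the kernel.

    Hypothetical helper: a working driver for the 16-port card should
    make exactly 16 of these appear (e.g. /dev/video0 .. /dev/video15).
    """
    return sorted(n for n in glob.glob(pattern) if os.path.exists(n))

nodes = list_video_nodes()
print(f"{len(nodes)} video node(s) found: {nodes}")
```

If this doesn't report 16 nodes, the rest of the plan below is moot.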
2. Either the devices need to be funneled through the Motion wrapper, or a new driver needs to be written that can grab frames from these video devices. I do not like Motion, and I would much prefer to use separate PIR motion sensors, which are far more accurate.
If there is to be a separate driver, it needs to respond to the Get Video Frame (DCE) command. This command is sent to any video camera device, and the sender expects a single JPEG image in reply, which Orbiter will display at a given spot on the screen. There are lots of examples of how this happens, and LMCE doesn't care where the image comes from, so long as it gets funneled back when asked.
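The actual driver would be a C++ DCE device, but the contract is easy to illustrate: whatever grabs the frame, the reply must carry one well-formed JPEG. A Python sketch of that check, where grab_frame is a hypothetical stand-in for the real capture call:

```python
def reply_to_get_video_frame(grab_frame):
    """Call the supplied frame grabber and verify it produced a single
    JPEG image, which is what Get Video Frame expects in the reply.

    grab_frame: hypothetical callable returning raw bytes from a port.
    """
    data = grab_frame()
    # A JPEG starts with the SOI marker FF D8 and ends with EOI FF D9.
    if not (data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"):
        raise ValueError("frame grabber did not return a JPEG image")
    return data  # this is what goes back to Orbiter in the reply

# Usage with a fake grabber (a minimal, valid-looking JPEG envelope):
fake = lambda: b"\xff\xd8" + b"\x00" * 8 + b"\xff\xd9"
jpeg = reply_to_get_video_frame(fake)
print(len(jpeg), "bytes")
```

Anything else in the reply (raw YUV, MJPEG stream chunks, etc.) would have to be converted to a single JPEG before being sent back.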
There would be essentially two device templates: an Interface, and a Video Camera (one instance per port). The camera templates would be little more than placeholders for port/channel information, which the Interface would read and use to map the hardware ports.
The Interface's ReceivedCommandForChild() method would catch the Get Video Frame command, inspect the message to see which camera device it was addressed to, look up that camera's Device Data (which tells it the port, channel, etc.), grab the frame from that port, and send it back in the reply.
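The routing described above can be sketched as follows. This is illustrative Python, not the real C++ DCE device; the class, method, and hook names are mine, and grab_frame_from_port stands in for whatever actually captures from the hardware:

```python
class VideoInterface:
    """Sketch of the Interface device's child-command routing; the real
    implementation is a C++ DCE device whose ReceivedCommandForChild()
    does the equivalent."""

    def __init__(self, device_data, grab_frame_from_port):
        # device_data: child device id -> (port, channel), as read from
        # each camera template's Device Data.
        self.device_data = device_data
        self.grab_frame_from_port = grab_frame_from_port  # hypothetical capture hook

    def received_command_for_child(self, child_id, command):
        if command != "Get Video Frame":
            raise NotImplementedError(command)
        port, channel = self.device_data[child_id]       # which hardware port?
        return self.grab_frame_from_port(port, channel)  # JPEG bytes for the reply

# Usage with a fake capture hook:
fake_grab = lambda port, channel: f"jpeg-from-port-{port}.{channel}".encode()
iface = VideoInterface({42: (3, 0)}, fake_grab)
print(iface.received_command_for_child(42, "Get Video Frame"))
```

The point of the shape: the camera children hold no logic at all, they exist only so each port shows up as an addressable device, and the Interface owns all the hardware access.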