Image Composite Editor - equivalent program for live video?

  • Question

  • A thousand pardons if this has already been addressed, but is there a program similar to ICE that will stitch multiple video streams into one panoramic video stream, much like our brains do with the input from our eyes in real time?
    Friday, March 25, 2016 6:55 PM

All replies

  • I don't know about doing it in real time, but I suppose you could use ICE to find the camera parameters from still shots ahead of time, then use the projection settings from the resulting .spj file to reproject and blend the two video streams, as long as the cameras haven't moved since the initial calibration.
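To make the idea concrete, here is a minimal sketch of the per-frame step that would follow such a calibration. It assumes the calibration has already reduced the alignment to a fixed horizontal overlap between two rectified frames (a big simplification of the full .spj reprojection), and it feather-blends the overlap with a linear ramp. The function name `stitch_pair` and the fixed-offset model are hypothetical, not anything ICE itself exposes:

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Blend two equal-height grayscale frames that overlap by `overlap`
    columns, using a linear feather ramp across the overlap region."""
    h, wl = left.shape[:2]
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap), dtype=np.float64)
    # Copy the non-overlapping parts of each frame directly.
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # Feather: weight ramps from 1.0 (all left) down to 0.0 (all right).
    alpha = np.linspace(1.0, 0.0, overlap)
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:] +
                               (1 - alpha) * right[:, :overlap])
    return out
```

In a real pipeline you would apply this (or rather, the full spherical reprojection plus blend) to every pair of simultaneous frames, so real-time performance depends entirely on how fast that warp-and-blend loop runs.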

    Seven years ago at Microsoft Research TechFest, there was a research project where multiple mobile users' Qik video streams were stitched into a larger video, which is sort of like what you're describing.

    In general, though, most people don't shoot multiple videos close enough to each other in space to stitch panoramically.

    In fact, a perfect panorama is captured with the focal point of the camera's lens at the exact same point in space for all input shots, which is physically impossible for two separate video cameras.

    If you have enough photos of an environment to build a 3D model from them, you can register multiple videos taken in that environment to the model, even if the videos weren't close enough to overlap each other; this is really the sweet spot for me.

    If you have many cameras whose fields of view overlap enough to do stereo vision from each pair of lenses (stereo matching gives you a depth map onto which each lens's pixels can be projected, and the resulting 3D model can then be viewed from a virtual viewpoint central to the lens positions), then you can build things like GoPro's Odyssey mount used in Google's Jump, but I don't know that even their processing tool works in real time.
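The core of that stereo step can be sketched in a few lines: once you have a disparity map from a rectified pair, depth follows from Z = f·B/d, and each pixel can then be back-projected to a 3D point with the pinhole model. The function names and the parameter values below are illustrative assumptions, not any rig's actual API:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (in pixels) from a rectified stereo pair
    into metric depth via Z = f * B / d; zero disparity -> infinite depth."""
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0,
                        focal_px * baseline_m / disparity, np.inf)

def backproject(u, v, depth, focal_px, cx, cy):
    """Back-project pixel (u, v) with known depth to a 3D camera-space
    point using the pinhole camera model."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return np.array([x, y, depth])
```

With a dense depth map per lens, rendering the panorama from the virtual central viewpoint is "just" reprojecting every pixel's 3D point into the virtual camera, which is exactly the part that's expensive to do in real time.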

    Lytro's Immerge sphere operates on similar principles, but with many more lenses.

    I'd be interested in knowing what video you have recorded where the cameras are close enough to each other to stitch panoramically.

    I would also point out that even our eyes and brains are unable to line up our two distinct viewpoints on all objects simultaneously. 
    We can only converge our eyes on a single distance at a time: if I hold my hand at arm's length and focus on it, I see a double image of the world beyond my hand, and if I focus on the world beyond (or nearer than) my hand, I see a double image of my hand.

    Tuesday, March 29, 2016 9:23 AM