Two panos with depth maps?

  • Is it, or could it be, possible to generate stereo views from two images, each with its own depth map, in such a way that each view gets pixels from both images, according to what is visible from the current viewpoint? This is a simple case of compositing using a z-buffer. It would largely alleviate the problem of "missing pixels" that results from rendering stereo pairs for a single image + depth map.
    This process could be extended to more than two source images and made more efficient by tiling. But for the moment I would just like to play stereo pairs that have individual depth maps.
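    A minimal sketch (Python/numpy, not tied to any particular viewer) of the z-buffer compositing meant here, assuming both source images have already been re-projected to the target viewpoint and each carries a per-pixel depth; all function and variable names are placeholders:

    ```python
    import numpy as np

    def zbuffer_composite(img_a, depth_a, img_b, depth_b):
        """Merge two re-projected views (H x W x 3 images, H x W depth maps):
        at each pixel keep the source that is closer to the new viewpoint
        (smaller depth wins).  Depth values that are zero or non-finite are
        treated as 'no data' (holes) in that source."""
        da = np.where(np.isfinite(depth_a) & (depth_a > 0), depth_a, np.inf)
        db = np.where(np.isfinite(depth_b) & (depth_b > 0), depth_b, np.inf)
        use_a = da <= db                          # per-pixel visibility test
        out = np.where(use_a[..., None], img_a, img_b)
        out_depth = np.where(use_a, da, db)
        holes = ~np.isfinite(out_depth)           # missing in both sources
        return out, out_depth, holes
    ```

    Whatever pixels remain flagged as holes after compositing are exactly the "missing pixels" that the second source image is meant to fill.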

  • Hi,

    I thought about exactly the same thing already - BUT - do you have any example data? (stereo pano image + stereo depthmaps).

    And note that stereo-panos are technically 'fake' - they just trick our brains. I have no idea if they would still work fine or provide an advantage with additional depth separation on top...

    Best regards,
    Klaus

  • Hi Klaus
    I'm sorry to say I do not currently have any stereo panoramas with separate depth maps.
    But I am working very hard on software to create them.
    The main target is large multi-camera "bubble" arrays, but it should also work with normal two-camera capture.
    In the meantime, the Blender guys should have no trouble making synthetic test panos of this type; I will ask Andrew Hazelden and der Mische.

    In my experience the sense of presence is much stronger in VR when the image is stereoscopic. But there is something missing, namely the parallax shifts that should be seen when you move your head. The brain expects to see those so strongly that it creates phantom shifts; and I have found that the strength of the phantom parallax is a good indication of the quality of the stereo pair.
    The proposed rendering method should make those small parallax shifts real, and make stereo views even more compelling.
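    For a rough sense of the size of those head-motion parallax shifts, a small back-of-the-envelope calculation (the head shift and distances below are assumed example numbers, not measurements):

    ```python
    import math

    head_shift_m = 0.05                 # 5 cm sideways head movement (assumed)
    for distance_m in (0.5, 2.0, 10.0):
        shift_deg = math.degrees(math.atan2(head_shift_m, distance_m))
        print(f"object at {distance_m:4.1f} m -> ~{shift_deg:.2f} deg apparent shift")
    ```

    So nearby objects should shift by a degree or more for even a small head movement, which is well within what the eye notices.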

    Best, Tom

  • Hi,

    just let me know when you have images for testing. *thumbup*

    But I have to say that at the moment I'm a bit skeptical... the stereo separation is already 'baked into' stereo-pano images (and changes between different horizontal sections of the image), so the depth information can't simply be the direct distance from the 'camera center' to the pixel... the original stereo-separation offset used when building the image would, in theory, also need to be taken into account to find the original 3D position - but that information isn't available in this case...
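    A tiny numeric illustration of that point (all numbers assumed, and the tangential offset direction is a simplification, just to show the size of the error): back-projecting the same pixel and depth sample from the pano center versus from an eye point offset on the viewing circle gives 3D positions that differ by roughly the circle radius, which matters for nearby content.

    ```python
    import numpy as np

    circle_radius = 0.065 / 2            # assumed viewing-circle radius in metres
    depth = 1.0                          # depth sample for one pixel, in metres
    yaw = np.radians(30.0)               # direction of that pano column

    ray = np.array([np.sin(yaw), np.cos(yaw), 0.0])                 # y = forward

    p_center = depth * ray                                          # naive: from pano center
    eye = circle_radius * np.array([np.cos(yaw), -np.sin(yaw), 0.0])  # offset eye point
    p_eye = eye + depth * ray                                       # from the actual viewpoint

    print("position error:", np.linalg.norm(p_eye - p_center), "m")   # ~3.3 cm here
    ```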

    Anyway - maybe it will work well enough, trying and testing will tell. *wink*

    Best regards,
    Klaus

  • Hi!

    I have rendered a spherical stereo-panorama and the corresponding stereo-depthmap image.
    I'm not sure I got both right - there are a lot of options for it..

    ..and I know next to nothing about stereoscopic photography.
    I have never tested it and never looked through VR glasses.. *confused*

    So I hope the files are useful!

    8K download here:
    SphericalStereo_01.jpg
    SphericalStereoDepth_01.png

  • Thank you Adam. This is a big help.
    As Klaus says, the geometry of the viewpoints is also required, to render shifted views properly.
    In an omnistereo pano like this, the viewpoints lie on a horizontal circle around panocenter. Both the diameter of the circle and the angular separation of the viewpoints must be known. For example, in a typical photographic rig of mine, the circle has a diameter of about 80 mm and the moving eye points are about 165 degrees apart; that is, the lens pupils sit slightly forward of panocenter.
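    A small sketch of that geometry with the numbers quoted above (80 mm circle, 165 degrees between eye points); the coordinate convention (y = forward) and the function name are mine, not from any particular tool:

    ```python
    import numpy as np

    def eye_positions(yaw_deg, circle_diameter_m=0.080, eye_separation_deg=165.0):
        """Left/right eye points on the horizontal viewing circle for the pano
        column looking in direction yaw_deg (0 = forward along +y).  With a
        separation of less than 180 degrees the eye points sit slightly forward
        of panocenter, as described above."""
        r = circle_diameter_m / 2.0
        half = np.radians(eye_separation_deg) / 2.0
        yaw = np.radians(yaw_deg)
        for sign in (-1.0, +1.0):                  # -1 = left eye, +1 = right eye
            a = yaw + sign * half
            yield r * np.array([np.sin(a), np.cos(a), 0.0])

    left, right = eye_positions(0.0)
    print(left, right)   # ~ [-0.0397 0.0052 0] and [0.0397 0.0052 0] metres
    ```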

  • ...And of course the depth map must be calibrated so it can be related to the view geometry. The most useful depth maps give distance in absolute physical units, for example as inverse depth in reciprocal meters. Stereo depth maps are often scaled relative to the separation of the pupils, but that is not feasible when there are more than two views.
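    As an example of the kind of calibration meant here, a 16-bit depth map stored as inverse distance could be decoded to metres roughly like this; the scale constant below is an assumption for illustration, and any real file needs its own documented encoding:

    ```python
    import numpy as np

    def decode_inverse_depth(raw_u16, inv_depth_at_max=1.0 / 0.3):
        """Decode a 16-bit depth map assumed to store inverse distance in
        reciprocal metres, scaled so that 65535 means 1/0.3 (an object 30 cm
        away) and 0 means infinitely far.  These constants are only an example."""
        inv_depth = raw_u16.astype(np.float64) / 65535.0 * inv_depth_at_max
        with np.errstate(divide="ignore"):
            depth_m = np.where(inv_depth > 0, 1.0 / inv_depth, np.inf)
        return depth_m
    ```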

  • Klaus, I have also been in touch with Andrew Hazelden, who is ready to help by rendering CG views with depth maps. He says he can easily make both omnistereo panoramas and simulated fish-eye photos with depth maps. I have sent you his email, and also asked him to join this forum.

  • That is definitely a stitch error.
    We should try to make or find some CGI panos with depth maps; those will be geometrically perfect, though artificially simple.
    I am working with Andrew Hazelden on depth-mapping some photographic stereo panos, using available tools that make almost-acceptable maps. I will upload them when I have them.

  • Ah, I understand the distinction. The pano is OK but the depth map/3D model has discontinuities.
    I am just now studying how it may be possible to improve depth maps by forcing their edges into alignment with edges detected in the images. I hope to show some results soon.
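    One common way to do that kind of edge alignment is to filter the depth map with the colour image as a guide, for example a joint bilateral filter, so depth values stop being averaged across image edges. Below is a minimal (deliberately slow, loop-based) sketch of the idea, not the actual method being developed; it assumes the guide image is a greyscale array scaled to 0..1:

    ```python
    import numpy as np

    def joint_bilateral_depth(depth, gray, radius=4, sigma_space=2.0, sigma_color=0.1):
        """Smooth a depth map while keeping its edges aligned with edges in the
        guide image: each output pixel is a weighted average of nearby depth
        values, and the weights fall off both with spatial distance and with
        the difference in guide-image intensity, so averaging stops at edges."""
        h, w = depth.shape
        out = np.zeros((h, w), dtype=np.float64)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_space**2))
        pad_d = np.pad(depth.astype(np.float64), radius, mode="edge")
        pad_g = np.pad(gray.astype(np.float64), radius, mode="edge")
        for y in range(h):
            for x in range(w):
                win_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                win_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                colour = np.exp(-((win_g - gray[y, x]) ** 2) / (2.0 * sigma_color**2))
                weights = spatial * colour
                out[y, x] = np.sum(weights * win_d) / np.sum(weights)
        return out
    ```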

  • Hi. I wanted to mention that I've been following this thread with great interest. I bought a KRPano license a few days ago so I can better explore what is possible for depthmap playback workflows inside this ecosystem. *smile*
