Posts by tksharpless

    With the 1.21.2 tools, when I build a tour using vtour-vr.config, the resulting tour does not work in VR in either Android phone or Meta Quest browsers.

    I think this is a longstanding problem, but I have forgotten what to do about it.

    It would be better if the defaults worked.

    The current 'fake VR' mode is nearly useless on desktop platforms. But it could be quite useful for stereo panoramas, if it would automatically display them in anaglyph.

    I have tried to get this effect with the XML, but without any luck. The static side-by-side display always appears.

    As a further feature request, it would be good to have the option of grayscale anaglyph display.
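
    To make the idea concrete, here is a rough per-pixel sketch of what color and grayscale anaglyph compositing could look like. This is not krpano code, just the plain math; the function name and buffer layout are illustrative assumptions.

```typescript
// Hypothetical per-pixel anaglyph compositing: the left eye drives the red
// channel, the right eye drives green and blue. Inputs are RGBA pixel
// buffers of the two rendered views (same size); names are illustrative.
function composeAnaglyph(
  left: Uint8ClampedArray,
  right: Uint8ClampedArray,
  grayscale = false
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(left.length);
  for (let i = 0; i < left.length; i += 4) {
    if (grayscale) {
      // Convert each eye to luminance first (Rec.601 weights),
      // then split the luminances between the two eyes' channels.
      const lumL = 0.299 * left[i] + 0.587 * left[i + 1] + 0.114 * left[i + 2];
      const lumR = 0.299 * right[i] + 0.587 * right[i + 1] + 0.114 * right[i + 2];
      out[i] = lumL;             // red   <- left eye
      out[i + 1] = lumR;         // green <- right eye
      out[i + 2] = lumR;         // blue  <- right eye
    } else {
      out[i] = left[i];          // red   <- left eye
      out[i + 1] = right[i + 1]; // green <- right eye
      out[i + 2] = right[i + 2]; // blue  <- right eye
    }
    out[i + 3] = 255;            // opaque
  }
  return out;
}
```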

    I don't know of any Quest Pro applications that demonstrate use of eye tracking. Meta/Oculus forums indicate that Quest Pro eye tracking currently works (badly) only in game engines and not at all in WebXR. Meta does intend to support it in the Quest browser eventually.

    The currently available VR headsets with eye tracking include the Meta Quest Pro, the HTC Vive XR Elite and Vive Pro Eye, the Pico Neo 3 Eye and the Varjo XR4. They range in price from $1000 (Quest Pro) to nearly $5000 (XR4) and are targeted at the corporate/professional market. That is a market krpano should target too, as there is much need for custom app development.

    Do you have any plans to support the eye tracking now available on several VR headsets?

    Eye tracking offers several possibilities for big improvements in viewing comfort as well as interactivity.

    Dynamic adjustment of the convergence distance, by angular shift between the left and right images, is perhaps the most important application. This is a basic element of cinematic 3D, and should be a feature of all future VR HMDs.
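
    For concreteness, here is a minimal sketch of the geometry only (not a krpano or WebXR API): the gaze-derived fixation distance gives a half-vergence angle, and each eye's image is rotated inward by that angle. The function name, IPD value and units are assumptions made for the example.

```typescript
// Half-vergence angle for a given interpupillary distance and fixation
// distance, both in meters: atan((IPD / 2) / d), returned in degrees.
function halfVergenceDeg(ipdMeters: number, fixationDistMeters: number): number {
  return (Math.atan(ipdMeters / 2 / fixationDistMeters) * 180) / Math.PI;
}

// Example: IPD 64 mm, fixation at 1 m -> each image shifts ~1.83 degrees
// inward; at 10 m the shift is only ~0.18 degrees, so the adjustment
// matters mostly for nearby objects.
const shiftNear = halfVergenceDeg(0.064, 1.0);  // ≈ 1.83
const shiftFar = halfVergenceDeg(0.064, 10.0);  // ≈ 0.18
```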

    I just made a multiresolution tour with krpano tools 1.20.9 using vtour-multires.config. In tour.xml I include vtourskin_design_ultra_light.xml.
    In other words, everything is stock krpano latest release.

    Using the krpano testing server, everything looks fine in PC browsers, including 'fake VR' mode in response to the VR button. This is true both on the machine hosting the testing server and on other PCs or Macs on the same LAN; but when I load the tour on a phone or an Oculus Go via the LAN, there are no zoom, pan, or VR buttons on the control bar. Buttons for fullscreen, thumbs, hide/show bar and next/prev pano are present and work as expected.

    What must I do to get those missing buttons on mobile and headset?

    I fear that this might have something to do with the lack of HTTPS support in the testing server, but I hope that is not the case, as I intend to host these tours on AWS S3, which also does not support HTTPS.

    Ah, I understand the distinction. The pano is OK but the depth map/3D model has discontinuities.
    I am just now studying how it may be possible to improve depth maps by forcing their edges into alignment with edges detected in the images. I hope to show some results soon.

    That is definitely a stitch error.
    We should try to make or find some CGI panos with depth maps, those will be geometrically perfect though artificially simple.
    I am working with Andrew Hazelden on depth mapping some photographic stereo panos, using available tools that make almost-acceptable maps. Will upload when I have them.

    Klaus, I have also been in touch with Andrew Hazelden, who is ready to help by rendering CG views with depth maps. He says he can easily make both omnistereo panoramas and simulated fish-eye photos with depth maps. I have sent you his email, and also asked him to join this forum.

    ...And of course the depth map must be calibrated so it can be related to the view geometry. The most useful depth maps encode absolute distance, for example as inverse distance in reciprocal meters. Stereo depth maps are often scaled relative to the separation of the pupils, but this is not feasible when there are more than two views.
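
    As an illustration of what such a calibration could look like in practice, here is a small sketch assuming inverse distance stored linearly in an 8- or 16-bit channel; the encoding, the near/far limits and the function itself are assumptions for the sake of example, not a description of any existing format.

```typescript
// Convert a raw depth-map sample to metric distance, assuming the channel
// stores inverse distance (1/m) scaled linearly between known limits.
function valueToMeters(
  value: number,      // raw sample, 0..maxValue
  maxValue: number,   // 255 for 8-bit, 65535 for 16-bit
  invNear: number,    // 1/m at value == maxValue (e.g. 1 / 0.5 m)
  invFar: number      // 1/m at value == 0        (e.g. 1 / 1000 m)
): number {
  const inv = invFar + (value / maxValue) * (invNear - invFar);
  return 1 / inv;
}
```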

    Thank you Adam. This is a big help.
    As Klaus says, the geometry of the viewpoints is also required, to render shifted views properly.
    In an omnistereo pano like this, the viewpoints lie on a horizontal circle around panocenter. Both the diameter of the circle and the angular separation of the viewpoints must be known. For example, in a typical photographic rig of mine, the circle has a diameter of about 80 mm and the moving eye points are about 165 degrees apart; that is, the lens pupils are slightly forward of panocenter.
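
    To make that geometry concrete, here is a minimal sketch of how the two eye points could be computed for a given view direction. The coordinate convention, names and default numbers are illustrative assumptions only.

```typescript
// Two eye points on a horizontal circle around panocenter, separated by a
// fixed arc angle and symmetric about the current view direction.
// Forward = +z, +x = viewer's left (an assumption; swap signs for the
// opposite handedness). Yaw is in degrees, positions in meters.
interface Vec3 { x: number; y: number; z: number; }

function eyePositions(
  viewYawDeg: number,
  circleDiameterM = 0.080,   // ~80 mm viewing circle
  eyeSeparationDeg = 165     // arc between the two eye points
): { left: Vec3; right: Vec3 } {
  const r = circleDiameterM / 2;
  const toRad = Math.PI / 180;
  const half = eyeSeparationDeg / 2; // 82.5° -> eyes slightly ahead of panocenter
  const at = (azDeg: number): Vec3 => ({
    x: r * Math.sin(azDeg * toRad),
    y: 0,
    z: r * Math.cos(azDeg * toRad),
  });
  return {
    left: at(viewYawDeg + half),
    right: at(viewYawDeg - half),
  };
}
```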

    Hi Klaus
    I'm sorry to say I do not now have any stereo panoramas with separate depth maps.
    But I am working very hard on software to create them.
    The main target is large multi-camera "bubble" arrays, but it should also work with normal two-camera capture.
    In the meantime, the Blender guys should have no trouble making synthetic test panos of this type; I will ask Andrew Hazelden and der Mische.

    In my experience the sense of presence is much stronger in VR when the image is stereoscopic. But there is something missing, namely the parallax shifts that should be seen when you move your head. The brain expects to see them so strongly that it creates phantom shifts; and I have found that the strength of the phantom parallax is a good indication of the quality of the stereo pair.
    The proposed rendering method should make those small parallax shifts real, and make stereo views even more compelling.

    Best, Tom

    Is it, or could it be, possible to generate stereo views from two images, each with its own depth map, in such a way that each view gets pixels from both images, according to what is visible from the current viewpoint? This is a simple case of compositing using a z-buffer. It would largely alleviate the problem of "missing pixels" that results from rendering stereo pairs from a single image + depth map.
    This process could be extended to more than two source images and made more efficient by tiling. But for the moment I would just like to play stereo pairs that have individual depth maps.
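
    To sketch what I mean by z-buffer compositing: each source pixel is forward-warped into the target view and the nearest sample wins. The projection function, data layout and names below are placeholders for the example, not anything krpano exposes.

```typescript
interface SourceView {
  color: Uint8ClampedArray;   // RGBA, width * height * 4
  depth: Float32Array;        // distance per pixel, same resolution
  width: number;
  height: number;
  // Maps a source pixel + its depth to target pixel coordinates and
  // target-space depth, or null if it falls outside the target view.
  project: (x: number, y: number, depth: number) =>
    { tx: number; ty: number; tDepth: number } | null;
}

function compositeZBuffer(sources: SourceView[], outW: number, outH: number) {
  const color = new Uint8ClampedArray(outW * outH * 4);
  const zbuf = new Float32Array(outW * outH).fill(Infinity);
  for (const src of sources) {
    for (let y = 0; y < src.height; y++) {
      for (let x = 0; x < src.width; x++) {
        const si = y * src.width + x;
        const p = src.project(x, y, src.depth[si]);
        if (!p) continue;
        const tx = Math.round(p.tx), ty = Math.round(p.ty);
        if (tx < 0 || ty < 0 || tx >= outW || ty >= outH) continue;
        const ti = ty * outW + tx;
        if (p.tDepth >= zbuf[ti]) continue;  // something nearer is already there
        zbuf[ti] = p.tDepth;
        color.set(src.color.subarray(si * 4, si * 4 + 4), ti * 4);
      }
    }
  }
  return { color, zbuf };
}
```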

    Quoted from "Klaus"

    what are layered depth maps?

    A layered depth map represents the appearance of a 3D space from a range of viewpoints, rather than just one; but a limited range, for example a sphere 1 m in diameter. It is a specialized 3D model, good for rendering high-resolution stereoscopic views with parallax shift and occlusion, as might be seen by a stationary observer turning the head and tilting the body. Perfect for VR stills and video frames. In the context of CG rendering, the "height map" is a similar idea.

    This representation consists of several overlapping partial models, each with an associated texture. Each part covers a restricted range of depths. In VR the layers will be spherical shells. The textures have alpha channels, so a view from any point in the feasible range can be rendered simply by shifting the models and alpha-blending the projected textures. In principle, this is a simple generalization of the single-depth-map, single-texture process you have already implemented.
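
    Just to show how simple the final step is, here is a sketch of the blending only. It assumes each shell has already been projected into the target view as an RGBA image, ordered back to front; everything else about it is illustrative.

```typescript
// Back-to-front "over" compositing of the projected layer images.
function blendLayers(layers: Uint8ClampedArray[], pixelCount: number): Uint8ClampedArray {
  const out = new Uint8ClampedArray(pixelCount * 4);
  for (let i = 0; i < pixelCount * 4; i += 4) {
    let r = 0, g = 0, b = 0, a = 0;
    for (const layer of layers) {            // layers ordered back to front
      const la = layer[i + 3] / 255;
      r = layer[i] * la + r * (1 - la);
      g = layer[i + 1] * la + g * (1 - la);
      b = layer[i + 2] * la + b * (1 - la);
      a = la + a * (1 - la);
    }
    out[i] = r; out[i + 1] = g; out[i + 2] = b; out[i + 3] = a * 255;
  }
  return out;
}
```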

    This idea was developed 20 years ago by the Szeliski group (https://www.colinzheng.com/wp-content/dat…/ldp_cvpr07.pdf) but not much used outside CG (http://frederikaalund.com/wp-content/upl…-Depth-Maps.pdf) until recent progress made it feasible to create layered representations from photos. Now this is quickly becoming the preferred format for displaying 'lightfield' and '6-DoF' images (https://augmentedperception.github.io/deepviewvideo/) (http://visual.cs.brown.edu/projects/matryodshkawebpage/). Several other groups have published work targeting layered depth maps. At the moment each group has a different way of packaging layered views, but clearly this can be done simply by combining existing formats and software.

    It turns out that 5 or 6 layers is enough, because the layers mainly encode visibility, the actual depths being attached to the model vertices. Where there is continuous depth, alpha blending gives good visual continuity, because in those places the depth difference between adjacent layers is small. The total data size can be considerably less than 5 or 6 times that of a single-layer representation, because only the occluded areas need to appear in multiple layers. I can imagine that smart tiling and indexing might yield LDM files only about twice the size of a single-layer pano with depth map.

    I look forward to being able to package 6-dof VR experiences with krpano.

    Kind regards,
    Thomas