I don't know the formula, but what I would do is model the bathroom in Blender and project the pano onto the faces. You can then extract the texture from each face.
In my experience, naming variables inside <action> that are also used by krpano itself can lead to some unexpected behaviour.
My suggestion would be to not use ox/oy as variable names, and name them (for example) "myox/myoy" instead. Maybe that could solve your issue?
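As an illustration, a sketch (untested; the action name and values are made up) of the renamed variables:

```xml
<!-- Sketch: use prefixed names instead of ox/oy, which krpano also
     uses itself (e.g. as hotspot offset attributes). -->
<action name="my_drag_action">
  set(myox, get(mouse.x));
  set(myoy, get(mouse.y));
  trace('offset: ', get(myox), ',', get(myoy));
</action>
```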
To create offline tours:
1) add a service worker to your project (see "Integrating the Cache API with Service Workers" on blog.openreplay.com).
After this step, the tour can already be viewed offline.
2) convert your tour into a PWA (see "How To Turn Website Into A Progressive Web App (PWA) In 2023" on magenest.com).
After these steps, the tour can be installed as an application on Windows and Android. For Quest 2, see the next step.
3) convert your PWA into an APK (see "PWAs on Oculus Quest 2" on web.dev).
The APK can then be installed on the Quest using SideQuest.
Feel free to reach out for more details, but this should already give you an idea of the steps involved.
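Step 1 could look roughly like this. A minimal cache-first service worker sketch (browser-only, so untested here; the cache name and file list are placeholders for your own tour files), registered from the tour page with navigator.serviceWorker.register("/sw.js"):

```javascript
// sw.js - minimal cache-first service worker sketch.
const CACHE = "tour-cache-v1";
const FILES = ["/", "/tour.html", "/tour.xml", "/tour.js", "/pano.jpg"];

self.addEventListener("install", (event) => {
  // Pre-cache everything the tour needs for offline viewing.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(FILES)));
});

self.addEventListener("fetch", (event) => {
  // Serve from the cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```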
The reason is that in the original example, the "onclick" is linked to the hotspot. Later, in the routines that calculate where to go, the "caller" object is used:
set(x2, get(caller.tx));
More info on caller:
The doubleclick is not bound to an object/hotspot but is a global event, which does not hold the "caller" object. Hence, the routine does not have enough information to calculate the positions, and caller.tx will always return 0 or NaN.
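One possible workaround (a sketch; the names "lastspot" and "goto_last_spot" are made up) is to store the needed values yourself at click time, so a global routine does not depend on "caller":

```xml
<!-- Sketch: remember which hotspot was clicked last, so global
     routines can look it up instead of relying on "caller". -->
<hotspot name="spot1" onclick="set(lastspot, get(name));" />

<action name="goto_last_spot">
  <!-- read tx from the remembered hotspot by name -->
  set(x2, get(hotspot[get(lastspot)].tx));
</action>
```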
I was wondering if this is still an issue, or has the situation improved in the meantime?
If you can explain how to test this, I can have a go myself too...?
What is the accuracy you aim for in your projects?
I did some research and even though it may seem obvious, this is a very technical subject. Was not expecting that!
Here is some relevant information I found on the subject. Seems "Lidar Fusion" is the technical term. I hope you find a good flow for your project(s).
Matching Panoramic image to LIDAR scan geometry (google.com)
Sensors | Free Full-Text | Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping (mdpi.com)
PanoVLM: Low-Cost and accurate panoramic vision and LiDAR fused mapping - ScienceDirect
LiDAR-Camera Fusion: A Beginner’s Guide | by Shashank Agarwal | Medium
krpano = document.getElementById("krpanoSWFObject");
krpano = krpano.get("global");
Note: this is probably not needed inside an <action> as the krpano object will already be defined by krpano.
Then you can use krpano.call(), in your case
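A generic sketch of the pattern (browser context; the action name and values are placeholders, not your actual code):

```javascript
// Browser-context sketch: call a krpano action from JavaScript.
// "myaction" and its argument are hypothetical placeholders:
krpano.call("myaction(123);");
// the interface object can also read/write krpano variables:
krpano.set("view.fov", 90);
var h = krpano.get("view.hlookat");
```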
Seems very similar to this case (quoting vragec, October 20, 2021):
"Hi, I have a tour with 5 panos and 17 video hotspots (small 2D .mp4 videos, 3-5 MB each). All hotspots are loaded onstart, and I am tracing that all videos are successfully loaded (for each hotspot the trace shows that loadedbytes EQ totalbytes).
The problem occurs a few seconds after the tour and all videos are loaded: ERROR: path_to_video_file - video decoding failed (corrupted data or unsupported codec)!
Strange thing is that each time I reload the tour, the ERROR occurs for a different video file, and…"
Nice catch and thanks for reporting back here, might be helpful for others too!
(Buy a Matterport?)
I may have misread your post a bit. The process described in the video is photogrammetry and uses "normal" pictures + Lidar to reconstruct the environment in 3D.
I now see you want to capture panoramas and then enhance them with the Lidar scans... Need to think about that :)
Reality capture will calculate all the camera positions for you.
To have more accurate results, especially for measuring, you can also use "Ground Control Points" (GCPs). These are special markers whose positions (and relative distances) you know. You can then use the GCPs inside Reality Capture to better align all camera positions and get more accurate measurements.
I have no hands-on experience, but if I were to do such a project, my choice would be Reality Capture.
This video gives a good understanding of the workflow within that software:
I am still confused, because the video in this 360 photo [virtual tour] is not a green-screen [edited] video. How can it be possible to match the video to the same location in the image?
The way I see it, the video was recorded using a 360 camera. From this 360 video, the first frame was extracted and used as the panorama shown when the tour is loaded.
From the 360 video, a cutout was made (probably to save loading time and/or bandwidth) at a smaller resolution (960x720) and then projected into the panorama.
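As a sketch, those two preparation steps (frame extraction and cutout) could be done with ffmpeg; the filenames, crop offsets, and sizes below are assumptions:

```shell
# Extract the first frame of the 360 video to use as the panorama image:
ffmpeg -i tour360.mp4 -frames:v 1 pano.jpg

# Cut out a 960x720 region (offsets are placeholders) and re-encode
# at a smaller size to save bandwidth:
ffmpeg -i tour360.mp4 -vf "crop=960:720:640:360" -c:v libx264 cutout.mp4
```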
This can surely be done with krpano using the videoplayer plugin linked by Klaus.
Do you want to learn to make it?
Or do you require someone to make it for you?
Do you maybe have a picture that shows your issue?
Probably the normals of your model are incorrect. You can try flipping them (in Blender: select the model, enter Edit Mode, then Mesh > Normals > Flip).
In the video tutorial of PCA 1.7, there is a property for the style called "linkedscene".
I am using PCA 2.0 now but I'm unable to get the "linkedscene" value in the console output.
Did something change to the output? Or how can I get the linkedscene value in the output?