Hi,
Just like in the previous post, I’m sharing a link to a project I worked on:
Link: https://online.jeju43peace.or.kr/pTour.jsp
(Please note: the server is quite slow, so patience is needed.)
This project was completed a few years ago, so my memory may not be perfect, but I’ve tried to describe everything as accurately as possible.
The workflow I used is as follows:
- Scanning with Leica BLK G1:
I used the Leica BLK G1 to scan the site and generate a point cloud.
- Panorama Image Creation:
To create the panorama images, I used a Nikon D750 with a 20mm f/1.8 prime lens mounted on a VR rig.
Photos were taken at roughly 45-degree intervals horizontally (if I remember correctly), and in three rows vertically.
I used PTGui for stitching the panorama images.
(Accurate stitching is critical for this process.)
Using a camera level to ensure perfect horizontal alignment during shooting is also very important. Later, when matching the camera's position and rotation in Blender, it was extremely difficult to align everything correctly if the image wasn't perfectly level.
After extensive testing, I settled on this combination of camera, lens, 360 rig, and stitching software, but I’m sure there are even better methods available today.
Ultimately, it is crucial to produce a panorama image with minimal distortion, as close as possible to a render from a 3D engine.
(This is important for accurately matching the camera position later in Blender.)
- Generating Mesh and Textures:
I used AWS Thinkbox Sequoia to convert the point cloud data into a mesh with textures.
(Note: AWS Thinkbox Sequoia is no longer available for download.)
I chose this software at the time because it was the best I could find for extracting high-resolution textures.
If I were to do this again today, I would likely try Cyclone 3DR or RealityCapture instead.
(Higher texture resolution makes it easier to match the camera position in Blender when using the panorama image.)
- Import to Blender & Camera Matching:
I imported the mesh and textures into Blender, then precisely matched the panorama image’s camera position.
(Attached is an example material I used to align the panorama image with the 3D data.)
- Final Steps:
Once this is done, the scanning-based workflow is almost complete.
The remaining task is to model low-polygon geometry suitable for Krpano, since the raw data from the point cloud is too heavy.
From there, the rest of the virtual tour can be built within Krpano.
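As a side note, the shooting pattern described above (20mm lens on a full-frame body, 45-degree steps, three rows) can be sanity-checked with a bit of geometry. This is just my own back-of-the-envelope sketch, not part of the original workflow:

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angle of view of a rectilinear lens along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Full-frame sensor (36 x 24 mm) with a 20 mm lens, landscape orientation.
h_fov = fov_deg(36, 20)   # horizontal angle of view, ~84 degrees
v_fov = fov_deg(24, 20)   # vertical angle of view, ~62 degrees

yaw_step = 45                          # degrees between shots in one row
shots_per_row = 360 // yaw_step        # 8 shots per row
overlap = (h_fov - yaw_step) / h_fov   # fraction of frame shared with neighbor

print(f"horizontal FOV: {h_fov:.1f} deg, vertical FOV: {v_fov:.1f} deg")
print(f"{shots_per_row} shots per row, ~{overlap:.0%} horizontal overlap")
```

So 45-degree steps give around 45% horizontal overlap between neighboring frames, which is comfortably within what stitchers like PTGui need to find control points.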
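It may also help to see why a perfectly levelled panorama makes the Blender camera matching so much easier. In an equirectangular image, each pixel row corresponds to a constant pitch angle, so if the horizon sits exactly on the middle row, only the camera's position and yaw need to be solved. The small sketch below (my own notation, not code from the project) maps normalized panorama coordinates to a view direction:

```python
import math

def equirect_to_dir(u: float, v: float):
    """Map normalized equirectangular coordinates (u, v in [0, 1]) to a
    unit view direction. u = 0.5, v = 0.5 is the forward axis; each image
    row has constant pitch, which is why a levelled camera matters."""
    yaw = (u - 0.5) * 2 * math.pi   # -pi .. +pi around the vertical axis
    pitch = (0.5 - v) * math.pi     # +pi/2 (up) .. -pi/2 (down)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# The center of the panorama looks straight ahead along +Z:
print(equirect_to_dir(0.5, 0.5))  # -> (0.0, 0.0, 1.0)
```

If the rig was tilted during shooting, the horizon becomes a curve instead of a straight row, and this simple yaw/pitch relationship no longer holds, which is exactly the alignment pain described above.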
This was a personal project I worked on 2–3 years ago, done mostly through trial and error.
It might not be the best or most optimal method, and I’m sure there are better tools and workflows available today.
I hope this information is helpful to you.
<Translated by ChatGPT>