First photogrammetry attempt

  • My first try with photogrammetry. It is far from the quality shown in other models here, but nevertheless, here it is.
    This little chapel was digitized using a combination of ground-level and drone pictures (200 pictures - Nikon D3 / 85 pictures - Mavic 2 Pro), which were merged using RealityCapture.

    I hope to make some (or a lot) more scans in the future and maybe one day, I'll get good at it *cool*

  • Update - I added a button inside the krpano project to show/hide all the cameras. You can now have a look at all the camera positions and pictures inside the project. It's also really fun to try to match the pictures to the final 3D model.

    And can you find me flying my drone? *g*

  • It's a nice scan, well done!
    How many hours of work are needed to create such a model/texture
    (after the shooting)?

    Thank you for the compliment!

    The processing time will of course depend on a few factors, mainly the number of images and secondly your computer setup.
    For reference, my PC is an Intel i9, 64GB RAM, RTX 3080.

    For this model and number of images, on my PC, if you just press the "Create Virtual Reality" button, the automated alignment/meshing/texturing takes about 20 minutes.

    More details:
    As it was my first scan and first time with the software (Reality Capture), I took some time to learn the program.

    First, I imported all images as-is and started processing. I got a model within the next 20 minutes on the "normal" mesh settings. The mesh was around 30 million triangles, and the total PPI (pay-per-input) cost was $2.53.
    The model showed some artifacts though, visible mostly in the roof area, so I was not happy with the result at first.

    I then experimented with different image resolutions. For example, scaling the images to 50% brought the cost down to around $0.50, but the texture quality suffered and the artifacts remained. Processing was faster though, so for a quick first look at the results this might be something I do more often.

    Next, I started filtering out some images from the meshing step. It seemed that it was mostly the ground vs. top-down images that were causing the misalignment, so I left the top-down images out of mesh creation, and that resolved the issue.

    As an alternative, I also experimented with setting up control points, manually aligning some images in the area that had artifacts. That took some time, but the results were very satisfying, and the same process can be used to merge different sets of images together, so I think this will become my main workflow.

    All in all, I think processing a model like this would be less than one hour once you get the hang of it...
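    A quick sanity check on the cost drop mentioned above: assuming (my assumption, not an official pricing formula) that the PPI cost scales roughly with the total input megapixels, halving each image dimension should cut the cost to about a quarter. A minimal sketch, with a hypothetical helper name:

```python
# Rough sketch of why downscaling cuts the PPI cost.
# Assumption (mine): cost is roughly proportional to total input megapixels,
# so scaling each image side by `scale` scales the cost by scale**2.

def estimated_ppi_cost(full_res_cost: float, scale: float) -> float:
    """Estimate the PPI cost after scaling each image dimension by `scale`."""
    return full_res_cost * scale ** 2

full_cost = 2.53  # the $2.53 quoted above for the full-resolution set
print(f"50% scale -> ~${estimated_ppi_cost(full_cost, 0.5):.2f}")  # ~$0.63
```

    That lands close to the ~$0.50 actually observed at 50% scale, so the rule of thumb seems reasonable for budgeting quick test runs.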


  • maybe because the shots were taken with different cameras (?)

    Yes, that was most likely the reason. But as this will be the case in most scans I create, I needed a solid way to deal with these images.
    It might have been better to import both sets separately, but I'll experiment with that later.

    This video shows similar problems and then goes into detail on how to solve them. They did it the same way, so I'm now a bit more confident that "this is the way" *g*
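    For importing the two sets separately, a small script to split a mixed folder into the ground and drone sets first can help. This is just a sketch under my own assumptions: it guesses that the DJI drone files start with "DJI_" and everything else is from the ground camera; adjust the prefix and extension to your own file naming.

```python
# Split a mixed image folder into ground-camera and drone sets, so each set
# can be imported (and aligned) separately in RealityCapture.
# Assumption: Mavic 2 Pro files are named DJI_*.jpg; all others are ground shots.

from pathlib import Path

def split_by_camera(folder: str) -> tuple[list[Path], list[Path]]:
    """Return (ground_images, drone_images) based on filename prefix."""
    ground, drone = [], []
    for img in sorted(Path(folder).glob("*.jpg")):
        if img.name.startswith("DJI_"):
            drone.append(img)
        else:
            ground.append(img)
    return ground, drone
```

    The two lists can then be copied into separate folders and added as separate image groups, which also makes it easy to exclude one set from meshing as described above.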
